["itemContainer",{"xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance","xsi:schemaLocation":"http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd","uri":"https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-json&sort_field=Dublin+Core%2CCreator","accessDate":"2026-05-02T22:02:10+00:00"},["miscellaneousContainer",["pagination",["pageNumber","1"],["perPage","10"],["totalResults","148"]]],["item",{"itemId":"193","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"215"},["src","https://www.johnntowse.com/LUSTRE/files/original/2d733bde1c35f66edba319392e339771.pdf"],["authentication","bcd96b51fb4c89cefd082eb9845b288a"]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3852"},["text","Investigating infant expectation on object search tasks"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3853"},["text","Leah Murphy"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3854"},["text","2023"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3855"},["text","The current study aims to distinguish between Piaget’s (1954) theory of object understanding, highlighting the role of object permanence on A-not-B task performance, and Diamond’s (1985) theory, highlighting the role of motor demands and lack of ability to inhibit habitual behaviours during the task. These two theories differ in their predictions for the expectations of the infants taking part, with Piaget (1954) predicting that infants’ lack of object permanence causes poor performance on the task and Diamond (1985) predicting that infants understand the movement of objects and that a lack of inhibition of habitual behaviours causes errors in performance. We tested 15 nine-month-old infants on a looking version of the A-not-B task. The use of impossible and possible outcomes was also incorporated on B trials, with the object being revealed from either the correct or incorrect location (e.g., see Ahmed & Ruffman, 1998). 
Infant first look direction, accumulated looking time during trials, and the number of social looks initiated post-outcome were used as measures. We found significant evidence of the ‘A-not-B’ error during trials, with a significantly increased number of incorrect first looks on B trials. There was also a descriptive pattern showing surprise at object location reveals, with an increased number of social looks during B compared to A trials, though this was not significant. Accumulated looking analysis showed that infants looked longer on A than B trials, suggesting that infants expected the object to be in location B on B trials, demonstrating infants’ ability to understand objects and supporting Diamond’s (1985) theory. However, the implications of the small sample size and of individual differences for the interpretation of looking-time data are discussed. Implications for theory and future research are suggested, and overall the results provide support for the application of Piaget’s (1954) theory and suggest that infants have limited object understanding based on their displayed expectations during testing."]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3856"},["text","3.1. Participants\r\nIn this study, 15 participants took part, aged 8 months and 12 days to 9 months and 27 days old (M = 9 months and 3 days, SD = 11.3 days). Six further infants were excluded from data analysis as they became too fussy to complete the study. Participants were recruited from the Lancaster Babylab database and the Lancaster Babylab Facebook page, and were also recruited via word of mouth from guardians taking part in the study. \r\n3.2. 
Materials\r\nThe video stimuli were created using Canva software (Canva.com, 2023) and were uploaded onto ‘Habit 2’ software (see Oakes et al., 2019) to display the stimuli during testing and to measure the accumulated looking time of the infant participants. The stimuli involved a novel object obtained from the NOUN database (Horst & Hout, 2016). A camera was used to record the social looks exchanged between the infant and guardian, as well as the direction of the infants’ first looks during testing. \r\n3.3. Design\r\nThis study had a within-subjects design, with all participants being exposed to the same experimental conditions and the same stimuli. To counterbalance for location effects, half of the participants witnessed the object being hidden in the box on the left during A trials, whilst the other half witnessed it being hidden in the box on the right. The presentation of the accurate and inaccurate B trials was further counterbalanced across participants, as half of the participants viewed the inaccurate B trials first, and the other half viewed the accurate B trials first.\r\n3.4. Ethical approval\r\nEthical approval for this study was granted by the departmental ethics committee (DEC) at Lancaster University. Guardians were recruited via their preferred contact method and were sent the participant information sheet to read before agreeing to take part in the study. A date and time for testing at the Babylab building at Lancaster University were arranged via telephone or email. Upon arrival, guardians were presented with the consent form to sign and initial all points before being allowed to take part. They were also given the opportunity to ask any questions about the study and were informed that they could withdraw at any time. \r\nAfter the study, the guardian received a five-pound contribution to travel costs, along with a free children’s book for the infant, as a reward for taking part in the study. 
The guardian also received a debrief sheet to read and to take home, providing them with the lead researcher’s contact information in case they wished to ask any questions or to withdraw from the study. \r\n3.5. Procedure\r\nThe testing took place in a private room within the Whewell building at Lancaster University. The infant and guardian were seated in front of a computer screen, with the infant in a highchair positioned directly in front of the screen and the guardian in a chair to the side, slightly behind the infant (to allow researchers to see clearly when the infant initiated a social look). The experimenter sat behind a divider at a computer, out of sight of the infant and guardian. A social engagement video of the experimenter saying, “Let’s hide the blap, can you find the blap?” was presented to the infants at the start of the experiment and between each trial, to introduce social communication and guide the attention of the infant to the screen before the stimuli were presented. The infant then watched a series of video stimuli in which a novel object appeared on the screen and moved into one of two boxes; both boxes were then covered (the object was hidden), and there was a delay period of five seconds (see Figure 1). After the delay period, both boxes were revealed, and the location of the toy was visible to the infant. Any movement of the object was accompanied by a sound to guide the attention of the infant to the object, but this sound was not present when the object was revealed to avoid any leading factors when measuring infant expectation. Instead, the occluders made a simple “whoosh” sound when they were removed, to ensure the infant was paying attention. After five identical A trials, the object was hidden in the second location and the process was repeated for six B trials. 
During the B trials, however, the object was revealed to be in either the correct (accurate) or incorrect (inaccurate) location (see Figure 2). This variation in outcome was presented alternately to the infant, with the object being revealed from the incorrect location for three out of the six B trials. The study lasted for approximately 10 minutes per participant.\r\n3.6. Behavioural coding\r\nInfant looking time was coded online as trial lengths were infant-controlled. Each trial ended when the infant looked away for four seconds. As this controlled the trial length, it was not double-coded, since this would inherently lead to a high level of agreement. For the coding of infant first look and number of social looks, the videos recorded of the participants were saved and uploaded onto Microsoft OneDrive to be coded offline. First look was defined as the direction that the infant first looked towards once the occluder was removed and the object was revealed. On trials where the infant was not looking as the occluder was removed, the first look was defined as the direction in which they looked once their gaze returned to the screen. The first look direction was coded as correct or incorrect. The number of social looks initiated by the infant per trial was also measured during coding, defined by the infant turning towards the guardian during each trial after an outcome was revealed. 
Twenty percent of the videos were dual-coded, and there were no discrepancies between researchers during the dual coding process for first looks (r = 1, p < 0.01) or social looking (r = 1, p < 0.01)."]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3857"},["text",".xlsx"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3858"},["text","Shiyu Pang\r\nYuewen Qin"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3859"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3863"},["text","dataset"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3864"},["text","Chi-squared, Correlation, Factor analysis, Linear mixed effects modelling"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3866"},["text","Murphy(2023)"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3867"},["text","open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"3868"},["text","none"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project 
information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3860"},["text","Kirsty Dunn"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3861"},["text","In this study, 15 participants took part, aged 8 months and 12 days to 9 months and 27 days old (M = 9 months and 3 days, SD = 11.3 days). Six further infants were excluded from data analysis as they became too fussy to complete the study."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3862"},["text","Chi-squared, Correlation, Factor analysis, Linear mixed effects modelling"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3870"},["text","Cognitive - developmental, Developmental"]]]]]]]],["item",{"itemId":"95","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"55"},["src","https://www.johnntowse.com/LUSTRE/files/original/b4b8d43c207e3cf7ec573afec543d6c8.doc"],["authentication","3f4ccd4b3e23dc0e56bf5bdd11a14b53"]]],["collection",{"collectionId":"9"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"499"},["text","Behavioural observations"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"500"},["text","Project focusing on observation of behaviours.\r\nIncludes infant habituation studies"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2176"},["text","An Investigation into the Effects of Temporary Visual Deprivation on Cortical Hyperexcitability, and Links with Multisensory Integration"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2177"},["text","Abbie Cochrane"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2178"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2179"},["text","Cortical hyperexcitability is a state of highly increased neuronal activity in the brain. 
The current research is a novel investigation into the effects of short-term temporary visual deprivation on cortical hyperexcitability and resultant aberrant visual experiences in non-migraineur, migraine-with-aura, and migraine-only participants. This research also assesses the link between cortical hyperexcitability and its effects on aberrant experiences across all senses: vision, audition, gustation, olfaction, and bodily sensations. Forty-three participants, including three migraine-with-aura sufferers and three migraine-only sufferers, completed the pattern glare test to induce and measure state-based cortical hyperexcitability under normal and temporary visual deprivation conditions, along with two questionnaire measures: the Cortical Hyperexcitability Index (version II; CHi-II), measuring trait-based cortical hyperexcitability; and the Multi-Modality Unusual Sensory Experiences Questionnaire, assessing aberrant experiences across senses. Results indicated no effect of temporary visual deprivation on cortical hyperexcitability, although migraine-with-aura participants reported higher cortical hyperexcitability levels overall compared to migraine-only participants and non-migraineurs. State-based pattern glare was not associated with unusual experiences in senses aside from olfaction; however, the trait-based CHi-II was strongly correlated with unusual auditory, gustatory, and bodily sensations. 
Potential methodological and theoretical reasons for these results are discussed, alongside improvements and new directions for future research."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2180"},["text","Cortical hyperexcitability, pattern glare, sensory hallucinations, temporary visual deprivation, migraine with aura"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2181"},["text","Participants\r\nForty-three participants took part in this study, consisting of 28 females and 15 males. All participants were students at Lancaster University with a mean age of 22.5 years, ranging from 19 to 36 years (SD = 2.92, SE = 0.45). Twenty-two participants were native English speakers, and 21 spoke English as their second language. Of these participants, three self-reported suffering with migraine only (MO) and three with migraine with aura (MA). Participants were recruited using opportunity sampling, and all gave fully informed consent before completing the experiment. \r\nPrior to participation, all participants were screened to ensure that they did not suffer from any form of epilepsy or seizures of unknown origin, and that they had not recently undergone brain or eye surgery. As no subjects reported these experiences, no participants were excluded on this basis. One participant reported suffering with micropsia, a visual impairment causing distortion of object size, and so was removed from further analyses. All remaining participants reported normal or corrected-to-normal vision (i.e., through the aid of glasses or contact lenses). Two participants were later removed from analysis for unusual scores on the baseline pattern glare task measure, explained in the results section. 
As such, the final sample size was 40 (age: M = 22.53, SD = 3.02, SE = 0.48).\r\n\r\nMaterials and Procedure\r\n\tPattern glare task. Participants completed the pattern glare task under two conditions: blindfold and non-blindfold, creating a within-subjects design. Half completed the blindfold condition first, followed by the non-blindfold condition, with the other half completing the non-blindfold condition followed by the blindfold condition.\r\n\tThe pattern glare task utilised three black and white striped grating patterns. The low-frequency grating, calculated to have a spatial frequency of 0.5 cycles per degree (cpd; Figure 1), and the high-frequency grating of 5.8 cpd (Figure 2) acted as baseline measures. The medium-frequency grating, at 2.5 cpd (Figure 3), was the critical triggering stimulus. Stimuli measured 17.5 cm by 13.5 cm each and were presented on paper. They were placed on the wall at eye level 50 cm from the participant, resulting in a visual angle of 15.4°.\r\nParticipants completed two trials: blindfold and non-blindfold. In the non-blindfold trial, participants were presented with the three striped gratings, one at a time. Participants were asked to look at the grating for fifteen seconds, focusing on a central fixation point. If they found the stimuli too aversive to view for the full time, they could inform the researcher, who would promptly remove the stimuli. There were 10-second intervals between presentations of gratings to allow the researcher to prepare the next stimulus. All stimuli were presented in a randomised order, to avoid order and carryover effects confounding results. After viewing each grating, participants completed a questionnaire consisting of seventeen items (Appendix A) asking about any visual distortions and discomforts experienced whilst viewing the stimuli, such as “shadowy shapes”, “colour distortions”, and “illusory stripes”. These are termed Associated Visual Distortions (AVDs). 
Each question was answered using a 7-point Likert scale assessing the intensity of each AVD experienced (0 = “not at all”, 6 = “extremely”). Responses were used to calculate a pattern glare score: a measure of state-based cortical hyperexcitability triggered by the stimuli. The blindfold condition followed a similar procedure, the only difference being that participants were required to wear a blackout blindfold for five minutes at the start of the trial before viewing only the medium- and high-frequency stimuli and answering the questionnaire as in the non-blindfold condition. \r\nWhilst conducting the experiment, laboratory light conditions were controlled with blackout blinds covering all windows and by relying on internal lighting controlled by the researcher. This prevented differences in light intensity from affecting how participants responded to the stimuli, particularly after removing the blindfold. Each pattern glare trial took approximately 10-15 minutes to complete. Additional questionnaire measures were carried out between the two pattern glare task trials, allowing a washout period for participants’ eyes to recover between viewings of uncomfortable stimuli and for excitability levels to return to normal. The full experiment took approximately 40 minutes to complete.\r\n\r\n\r\nFigure 1. Stimuli with low frequency grating (0.5 cycles per degree) for pattern glare task.\r\n\r\n\r\nFigure 2. Stimuli with high frequency grating (5.8 cycles per degree) for pattern glare task.\r\n\r\n\r\nFigure 3. Stimuli with medium frequency grating (2.5 cycles per degree) for pattern glare task.\r\n\r\nQuestionnaire measures. Participants were asked to complete two different questionnaire measures: the Cortical Hyperexcitability Index version II (CHi-II; Fong et al., under review), and the Multi-Modality Unusual Sensory Experiences Questionnaire (MUSEQ; Mitchell et al., 2017).\r\nCortical Hyperexcitability Index version II (CHi-II). 
The CHi-II (Appendix B) is a trait-based proxy measure for assessing experiences thought to reflect cortical hyperexcitability. Measurements from the original CHi questionnaire (Braithwaite, Marchant, Takahashi, Dewe, & Watson, 2015) correlate with neurological measures of cortical hyperexcitability (Braithwaite, Mevorach, & Takahashi, 2015), suggesting CHi accurately and reliably measures cortical hyperexcitability. \r\nThe updated version (CHi-II) consists of 30 questions. Each item used a seven-point Likert response scale to rate participants’ unusual visual experiences in terms of frequency (0 = “never”, 6 = “all the time”) and intensity (0 = “not at all”, 6 = “extremely intense”). Experiences examined fall under three factors: Heightened Visual Sensitivity and Discomfort (HVSD), for example “irritation from indoor lights”; Aura-Like Hallucinatory Experiences (AHE), such as “flashes of moving shapes”; and Distorted Visual Perception, including “everyday objects look different”. Frequency and intensity scores for each question were summed, giving a maximum of twelve per item. The totals for each of the 30 items were summed to give a score of cortical hyperexcitability for each participant, with a maximum score of 360.\r\nMulti-Modality Unusual Sensory Experiences Questionnaire (MUSEQ). The recently devised MUSEQ (Appendix C) measures unusual sensory experiences across six human senses: auditory, visual, olfactory, gustatory, bodily sensations, and sensed presence of others. Within each factor, questions range from broad sensory tricks (e.g., “my eyes have played tricks on me”) to hallucinatory experiences (e.g., “I have heard a person’s voice and found that no-one was there”), encompassing a range of more common to more unusual perceptual experiences. Questions used five-point Likert scales (0 = “never; never happened”, 4 = “frequently; at least monthly”). 
\r\nAs one item in the original MUSEQ was highly similar to an item in CHi-II, this was removed from the present version of MUSEQ used in the current study, in order to avoid conflation of results when comparing the two questionnaires.\r\n\r\nEthics statement\r\nThis research was ethically approved by the Departmental Ethics Committee at Lancaster University’s Department of Psychology on 11/05/2018.\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2182"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2183"},["text","Data/Excel.xlsx"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2184"},["text","Cochrane2018"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2185"},["text","Rebecca James"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2186"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2187"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2188"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2189"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2190"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2191"},["text","Jason Braithwaite"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2192"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2193"},["text","Neuropsychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2194"},["text","43 participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2195"},["text","Correlations, t-tests, ANOVA, Bayesian Analysis"]]]]]]]],["item",{"itemId":"83","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"41"},["src","https://www.johnntowse.com/LUSTRE/files/original/70e8b6f0e20b7e3f46e642c7284bd8a8.doc"],["authentication","6d2e0f9e5936d11253c9ab16b9bc1842"]]],["collection",{"collectionId":"2"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element 
set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"179"},["text","Eye tracking"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"180"},["text","Understanding psychological processes through eye tracking"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1913"},["text","Experiencing social acceptance and rejection through ‘likes’ and ‘dislikes’: Does sleep quality affect the processing of social rewards?"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1914"},["text","Abigail Taylor-Spencer"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1915"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1916"},["text","In adolescence, high importance is placed on peer evaluations, and social rewards have increased salience during this developmental period. 
Sleep patterns also change in adolescence, as teenagers typically experience insufficient sleep. This research measured the pupil dilation of forty-four adolescents aged 16 to 18 using two tasks (audio and visual) to investigate whether sleep duration influenced the way social acceptance and rejection were processed. Sleep duration scores were obtained using the measure of sleep debt; this was calculated by subtracting sleep duration during the week from sleep duration at the weekend, plus weekday bedtime. It was expected that higher sleep debt would be linked to increased pupil reactivity towards social feedback and that there would be a greater pupil dilation in response to social rejection compared to social acceptance. In the visual task, it was found that sleep debt affected males and females differently when processing social rewards, as females with high sleep debt showed increased pupil dilation towards positive feedback compared to negative feedback, whereas males with low sleep debt showed a larger dilation towards positive feedback than females. It was also found that females with lower sleep debt gave more likes than dislikes when rating photos. This implies that sleep duration affects the social feedback adolescents provide. When a male voice was used in the audio task, more pupillary reactivity towards social acceptance was observed, however when a female voice was used, pupils dilated more in response to social rejection. 
Future research should further investigate these gender differences."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1917"},["text","Adolescence\r\nPupil dilation\r\nSocial feedback\r\nReward\r\nRejection\r\nSleep debt"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1918"},["text","Participants\r\n\tForty-four participants (N = 44) were recruited from Haslingden High School and Sixth Form to participate in this research. The participants (35 female, 9 male) were all between the ages of 16 and 18 (Mage = 16.98, SDage = .63). Students in Psychology, Sociology and English classes were given the opportunity to participate in the research and contacted the researcher via email if they wished to participate. Each participant provided their informed consent before beginning the study.\r\nMaterials\r\n\tPhoto ratings. Firstly, the participants were shown a PowerPoint containing 40 photos, which had been previously collected by the researcher, and featured adolescents whom the participants did not know. Each photo was displayed individually for four seconds, meaning that the presentation lasted two minutes and forty seconds in total. Participants were provided with a sheet of paper on which they had an option to tick either ‘like’ or ‘dislike’ for each photo on the PowerPoint (see Appendix A). The total number of likes was calculated for each participant.\r\n\tEye tracker. An Eye Tribe desktop eye tracker with a 30 Hz sampling rate was used to measure the pupil dilation of the participants in response to stimuli on two tasks: a visual task and an audio task. A chin rest was used to ensure the participants kept their heads still.\r\n\tVisual task. 
The visual task involved showing the participants the same 40 photos which they had previously been shown in the photo rating task; however, each photo now had either a ‘like’ symbol or a ‘dislike’ symbol (see Figure 1) in the bottom right-hand corner. Participants were informed prior to beginning the task that if a photo contained the ‘like’ symbol, it meant that the individual in the photo had liked the participant’s picture, whereas the ‘dislike’ symbol meant that the individual in the photo had disliked the participant’s picture. The presentation of photos was randomised across participants.\r\n\r\nAudio task. The audio task involved the participants listening to forty voice recordings, each of which lasted between six and seven seconds. Twenty of these recordings were nice comments and twenty were nasty comments, which were found on online social media platforms. An example of a nice comment is: ‘You look unreal and your outfit is amazing. You are a true inspiration to everyone’, and an example of a nasty comment is: ‘You are so fake, and you are such a liar. Every single thing you say is a lie’ (see Appendix B for the complete list of comments). A male voice read out half of the nice and half of the nasty comments, and a female voice featured in the other half of the recordings. The nice comments were characterised as positive social feedback, and the nasty as negative social feedback. The presentation of nice and nasty comments was randomised across participants. The audio material was rated for emotional valence and arousal; the former being how positive or negative the recordings were, and the latter being the intensity of this positivity or negativity (Citron, Gray, Critchley, Weekes, & Ferstl, 2014). See Appendix C for the emotional valence and arousal scores, which were rated by six individuals using Qualtrics.\r\n\tQuestionnaires. 
Participants were asked to complete two questionnaires: one an adaptation of the MCTQ (Munich ChronoType Questionnaire; Roenneberg, Wirz-Justice & Merrow, 2003), used to identify the sleeping patterns of the participants (see Appendix D), and a questionnaire about their social media use (see Appendix E), which was used to maintain the ruse that the study was interested in the participants’ social media use.\r\n\tThis study received ethical approval from Lancaster University on 05/04/2018.\r\nDesign\r\n\tVariables. The dependent variable in this study was pupil size, which was measured in arbitrary units using an Eye Tribe eye tracker. An average pupil diameter was calculated for each trial; each participant had 40 average pupil size measurements in the visual task and 40 in the audio task. Median and area under the curve were used as the dependent variables. The independent variables in the study were: feedback valence, sleep debt, voice gender and participant gender.\r\n\tFeedback valence. Feedback valence was a within-subjects factor, as all forty-four participants experienced both positive and negative feedback in both tasks. In the visual task, all participants saw twenty people who had supposedly ‘liked’ their photo, and twenty people who had supposedly ‘disliked’ their photo. In the auditory task, all participants heard twenty positive comments and twenty negative comments. This was analysed to assess whether varying pupillary responses were elicited towards positive and negative social feedback.\r\n\tSleep debt. Sleep debt was determined by the MCTQ (Roenneberg et al., 2003); a value of sleep debt was calculated by subtracting sleep duration during the week from sleep duration at the weekend, plus weekday bedtime. Participants were split into two groups: high sleep debt and low sleep debt. Those with a high sleep debt had less weekday sleep and greater weekend sleep, which is a marker of poor sleep quality. 
This was a between-subjects factor, as half of the participants were in the high sleep debt group, and half in the low sleep debt group.\r\n\tVoice gender. In the audio task, half of the audio clips featured a male voice, and half featured a female voice; as all participants heard both, this was a within-subjects factor. This was analysed to investigate whether the gender of the voice or of the pictured individual had an effect on the pupillary responses.\r\n\tGender. In the visual task, the gender of the participants was investigated as a between-subjects factor, as nine of the participants were male, and thirty-five were female.\r\n\tAudio task. The audio task used a factorial design with a between-subjects factor of sleep debt (two levels: low and high), a within-subjects factor of social feedback valence (two levels: positive and negative) and a second within-subjects factor of voice gender (two levels: male and female).\r\n\tVisual task. The visual task used a factorial design with a between-subjects factor of sleep debt (two levels: low and high), a within-subjects factor of social feedback valence (two levels: positive and negative) and a between-subjects factor of participant gender (two levels: male and female).\r\nProcedure\r\n\tApproximately two weeks prior to the beginning of data collection, students in Psychology, Sociology and English classes at Haslingden Sixth Form were contacted and given the opportunity to participate in this research. Those who were interested in participating, and would provide consent, were asked to send a picture containing only themselves (e.g. a Facebook profile picture) to the researcher via email for use in the study. The participants were informed that the photo they sent would be liked or disliked by students at another school, and that during the study there would be an opportunity to like or dislike photos of the individuals who rated their picture. No other information about the other ‘students’ was provided. 
The participants were led to believe that the study was investigating whether social media use affects responses to being judged online, and whether the use of social media affects sleep patterns in adolescence.\r\n\tAll participants were tested in the same office in Haslingden High School and Sixth Form. Participants were invited into the office and asked to sit down at a desk which featured an Eye Tribe eye tracker, a 24-inch iMac monitor and keyboard, and a chin rest placed 50 cm away from the eye tracker. The computer had MATLAB 2015 installed. Each participant was provided with an information sheet (see Appendix F) and was given the opportunity to ask any questions before signing an informed consent form (see Appendix G) if they still wished to participate.\r\n\tOnce the consent form had been signed, the photo rating task was explained. This task involved presenting forty photos to the participants using Microsoft PowerPoint. The photos were shown individually; each photo was on an individual slide, and each one was presented for four seconds. The participants were asked to mark whether they ‘liked’ or ‘disliked’ each photo on a sheet of paper (see Appendix A). The presentation ran on an automatic timer; however, the participants were informed that if a slide moved on too quickly, the left arrow key would take them back to the previous slide, and the timed presentation would continue on pressing the right arrow key. The participants were led to believe that the photographs they were rating were of the individuals who had rated their photos. The eye tracker was not used during this task.\r\n\tNext, the participants were asked to place their head on the chin rest, and the eye tracker was calibrated. Participants were asked to keep their heads as still as possible, and to move their eyes towards the dots as they appeared on the screen. 
The calibration was accepted when a rating of three stars or above was achieved, and the eye tracker was used for both the visual and auditory tasks. The order in which the tasks were completed was counterbalanced; therefore, half of the participants completed the visual task first, and half completed the auditory task first. The participants were informed what would happen during each task and were given the opportunity to ask any questions before the tasks began.\r\n\tThe participants were told that, in the auditory task, they would hear forty voice clips: twenty nasty and twenty nice. They were asked to look at a black cross located in the centre of the screen whilst the voice clips were playing. Ten of the ‘nice’ clips and ten of the ‘nasty’ clips were read aloud by a female voice, and the remainder were read by a male voice. The nice and nasty comments which featured in the voice clips were found on online social media platforms (see Appendix B for the full list of comments used); however, the participants were asked to imagine that the comments had been directed towards themselves. Participants were told that, in the visual task, they would view the photographs which they had previously ‘liked’ or ‘disliked’ in the photo rating task. However, this time, the photos would either have a ‘like’ thumb or a ‘dislike’ thumb in the bottom right-hand corner (see Figure 2 and Figure 3 for examples). If a photo had a ‘like’ thumb, it meant that person had supposedly liked the participant’s photo, whereas a ‘dislike’ thumb meant the individual in the photo had disliked the participant’s photo. 
Half of the participants completed the visual task first, and half completed the audio task first; the tasks were counterbalanced to determine whether the order in which they were presented influenced the outcome.\r\nAfter finishing both the visual and auditory tasks, participants were asked to complete two questionnaires: the MCTQ (Roenneberg et al., 2003), to determine a sleep debt score, and a questionnaire on social media use. After completing the questionnaires, participants were informed that their photo had not actually been seen or rated by pupils at another school, and that the ratings which they gave in the photo rating task would not be seen by the individuals in the photos. Participants were then provided with a debrief sheet (see Appendix H) and given the opportunity to ask any questions they may have had.\r\nAnalysis\r\nPreliminary data analysis. In order to measure the magnitude of change in pupil dilation and compare across conditions, pupil size on each trial was baseline-adjusted by subtracting the mean pupil size in the 300 ms prior to stimulus onset from each sampled value during the subsequent 4 seconds of stimulus presentation. The area under the curve and the median were then calculated from the trial-level baseline-adjusted data to provide the dependent variables in the analysis, capturing the duration and magnitude of the effects respectively. The median was used rather than the mean because it is less likely to be skewed by outliers.\r\nTwo multilevel general linear mixed models (GLMMs) were used to analyse the data for the two tasks, with participant included as a random effect with intercept. An AR(1) first-order autoregressive covariance structure with homogeneous variances was selected because it was expected that the error variance would become less correlated as the trials became further apart. 
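The trial-level baseline adjustment described above can be sketched as follows. This is a minimal illustration in Python, not the original pipeline (stimuli were presented in MATLAB and the models were fitted in statistical software); the 9-sample baseline window follows from the stated 30 Hz sampling rate and 300 ms window, and the pupil values shown are hypothetical.

```python
# Minimal sketch of the trial-level baseline adjustment described above.
# Assumptions (not from the original analysis): a 30 Hz sampling rate,
# so ~9 samples cover the 300 ms pre-stimulus baseline and the rest of
# the trial covers the 4 s stimulus window.
from statistics import mean, median

def baseline_adjust(trial_samples, n_baseline=9):
    """Subtract the mean of the pre-stimulus samples from every
    sample recorded during stimulus presentation."""
    baseline = mean(trial_samples[:n_baseline])
    return [s - baseline for s in trial_samples[n_baseline:]]

def trial_dvs(adjusted, sample_interval=1 / 30):
    """Return the two dependent variables for one trial: area under
    the curve (a simple rectangle-rule approximation) and the median."""
    auc = sum(adjusted) * sample_interval
    return auc, median(adjusted)

# Hypothetical pupil-size samples for one trial (arbitrary units):
# nine baseline samples followed by stimulus-window samples.
trial = [3.0] * 9 + [3.2, 3.4, 3.5, 3.3]
adjusted = baseline_adjust(trial)
auc, med = trial_dvs(adjusted)
```

The per-trial AUC and median values computed this way would then serve as the dependent variables entering the mixed models.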
The total number of likes each participant gave on the photo rating task was calculated, and a 2 (gender: male vs. female) x 2 (sleep debt: low vs. high) between-subjects analysis of variance (ANOVA) was carried out.\r\n\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1919"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1920"},["text","data/SPSS.sav"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1921"},["text","Taylor-Spencer2018"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1922"},["text","Ellie Ball"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1923"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"1924"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1925"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1926"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial 
applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1927"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1928"},["text","Judith Lunn"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1929"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1930"},["text","Cognitive Psychology\r\nDevelopmental Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1931"},["text","44 Participants (9 male and 35 female)"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1932"},["text","ANOVA\r\nLinear Mixed Effects 
Modelling"]]]]]]]],["item",{"itemId":"26","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"80"},["src","https://www.johnntowse.com/LUSTRE/files/original/4040157fa601c7af5352c3bdde6e94e9.doc"],["authentication","f8c5c5955bd9c9ec49d08b681086a724"]],["file",{"fileId":"81"},["src","https://www.johnntowse.com/LUSTRE/files/original/a58af13d4ccfff0e7bcf680470b5108b.csv"],["authentication","c99643a599dadcb3e6de66e9465d6cb3"]],["file",{"fileId":"82"},["src","https://www.johnntowse.com/LUSTRE/files/original/9307c412766661fd91fec82c6be1d3cb.csv"],["authentication","9ba030772c51bdb154fd6ada79c3ceb2"]],["file",{"fileId":"83"},["src","https://www.johnntowse.com/LUSTRE/files/original/8e6e2c7b34c8e947bac97d32fe25b27d.csv"],["authentication","8f6ab6798a300736d6d14acbd62d243f"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaires(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"931"},["text","Gender identity, attitudes, and bystander intervention "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"932"},["text","Adriana Vivas Zurita"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"933"},["text","2017"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"934"},["text","Identifying as a feminist and demonstrating a commitment to feminist activism have been linked to an increased likelihood of engaging in bystander intervention in sexist situations among women university students (Brinkman et al., 2015), and awareness of gender prejudices as a result of undertaking women's studies and/or diversity courses seems to relate to increased involvement in feminist activity (Stake & Hoffmann, 2001). Together with this, confrontational responses to prejudicial attitudes can be perceived as a means of decreasing stereotypic responding (Mallett, Ford & Woodzicka, 2016; Czopp & Monteith, 2003). For this research, levels of exposure to feminist research and self-identification as feminist were examined to determine their effects on sexism levels and on the ability to identify sexism in given hostile and benevolent sexist scenarios. 
Likewise, the responses participants had given in the past when witnessing sexism were also recorded, and then analysed to determine correlations between a confrontational response, exposure to feminism, and the strength of the feminist identity participants self-identified with. Gender differences were also analysed. Results revealed that participants with high levels of exposure to feminist theory had significantly lower levels of benevolent sexism only. Further analysis also suggests that those with exposure to feminist theory are significantly more likely to identify sexism in hostile sexist scenarios than are those with no exposure. Exposure to feminist theory also increased the likelihood of having a stronger feminist identity. Significant gender differences were also found. Applications of these findings and recommendations for future research are further discussed."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"935"},["text","Gender prejudice\r\nFeminist identity\r\nFeminist theory\r\nTripartite model of violence\r\n"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"936"},["text","Measurements\r\nVignettes Exercise. The vignettes exercise presented participants with 15 scenarios, of which 5 were hostile sexism scenarios, 5 were benevolent sexism scenarios, and 5 were neutral scenarios. The participants were asked 3 questions after reading each vignette. First, they were asked if the scenario presented involved sexism, which was evaluated with a 5 point Likert scale from “strongly disagree” to “strongly agree”. Secondly, the participants were asked to rate the seriousness of the event on a 6 point Likert scale, which ranged from “not applicable” and “not at all serious” to “very serious”. 
The third question asked participants to pick the type of phenomena that best described the scenario from 8 different choices, which included “hostile or negative comments about women”, “reproduction of the idea that women are not complete without a significant other”, and “the scenario does not describe a situation that involves sexism”, among others derived from Glick and Fiske’s (1996) definitions of sexism. Examples for the vignettes (see Appendix A) were taken from Mallett, Ford, and Woodzicka (2016), McCarty and Kelly (2015), Durán, Moya, and Megías (2011), Kato et al. (2011), Expósito, Herrera, Moya, and Glick (2010), and Sibley and Wilson (2004). \r\n\r\nExperiences of Gender Prejudices Instrument. Past experiences of gender prejudice were measured using Brinkman et al.’s (2015) Experience of Gender Prejudices Instrument. Participants were asked to identify the last time they were in a situation in which they witnessed a woman being the target of sexism (see Appendix B). They were asked to pick which scenario best described the type of sexism witnessed from 7 options that included “hostile or negative comments about women” and “reproduction of the idea that women are not complete without a significant other”. They were then asked how they reacted to the situation, and if they intervened what their motivation had been. The participants’ reactions to the sexism situation were coded as either ‘confrontational’ or ‘non-confrontational’, and as ‘not applicable’ on two occasions. Responses “tried to help the victim”, “responded indirectly, but in a way I hoped would end the situation”, “used a nonverbal gesture to express that I was offended (ex. rolled my eyes, gave them a dirty look, etc.)”, “said something to the instigator(s) to express my thoughts/feelings”, and “used a physical response to express my thoughts/feelings (ex. slap the instigator)” were classified as confrontational. 
Responses “ignored the person/people”, “left the situation”, and “nothing” were coded as non-confrontational. Where participants reported a confrontational response, their motivations to intervene were again sought. Participants were presented with a list of 8 options which included “wanted to do my duty as a man by being chivalrous / wanted to do my duty as a woman by being nice”, “wanted to help a person in distress”, “wanted to stop the sexist behaviour because is wrong”, and “other”. Their motivations were then coded as “feminist goal”, “non-feminist goal”, “neutral” and “other”.\r\n\r\n\r\nThe Ambivalent Sexism Inventory (ASI). The Ambivalent Sexism Inventory (ASI; Glick & Fiske, 1996) is a measure of modern sexism. It comprises 22 statements, such as “men are incomplete without women” and “women exaggerate problems they have at work”, which participants evaluate on a 5 point Likert scale from “disagree strongly” to “agree strongly” (see Appendix C). The mean of all 22 items was obtained, with means closer to 5 indicating higher levels of sexism. The ASI also measures two sub-scales: the mean of 11 items was used to generate a hostile sexism score, and the mean of the other 11 items generated a benevolent sexism score. \r\n\r\nDemographic Information. Demographic information was collected relating to each participant’s gender, age, and year in University (see Appendix D). Participants were also asked to quantify the hours of exposure to teaching on gender-related topics during their undergraduate and/or postgraduate studies on the following scale: 0 hours, 1-10, 10-20, 20-40, 40-60 or 60+ hours. Participants were also asked if they self-identified as feminist or not, and the strength of their identification as feminist was measured on a 5 point Likert scale, from “I strongly identify as a feminist” to “I strongly do not identify as a feminist”. 
\r\n\r\nThe Demographic Information Questionnaire also measured, on a 5 point Likert scale, the degree to which participants identified with feminist goals and the degree to which they agreed that the transformation of gender relations is needed in order to achieve gender equality. \r\n\r\nDesign \r\nThe study adopted a survey design and the variables measured are as follows: Independent and participant variables: gender, age, feminist identity, strength of feminist identity, feminist goal, sexism and exposure to feminist theory.\r\nDependent variables: bystander intervention, identification and evaluation of different forms of sexism, and the ambivalent sexism scale. \r\n\r\nProcedures \r\nEthical approval for this study was obtained from the Psychology department research ethics committee at Lancaster University on May 26th 2017. Once ethical approval was gained, the participant recruitment stage began. \r\n\r\nParticipants answered an invitation to complete an online survey which was hosted on the Qualtrics platform (2017). First, participants read the Participant Information Sheet (see Appendix E) and then completed the consent form (Appendix F). Participants then answered the Vignettes exercise, followed by the “Experiences of Gender Prejudice Instrument” (Brinkman et al., 2015), then “The Ambivalent Sexism Inventory” (Glick & Fiske, 1996), finishing with the Demographic Information Questionnaire. After answering, the participants were debriefed (Appendix G) through the same platform. Completion of the survey typically took 20-30 minutes. \r\n\r\nResults Section:\r\n\r\nDemographic information\r\nTable 1 shows the demographic data relating to the gender of the participants and identification as feminist; the category “rather not say” was excluded from all analyses of the gender variable owing to nil response. 
\r\n\r\nFrom the total of participants, 56 self-identified as feminist (68.3%) and 26 said they did not self-identify as feminist (31.7%). Chi-square analysis revealed significant gender differences in self-identification as feminist, χ2(1, N = 81) = 4.858, p < .05: significantly more female participants (77.4%) reported being feminist than did male participants (53.6%). \r\n\r\nEffect of exposure to feminist theory, effect of gender, and interactions\r\nThe purpose of this study was to examine the effect of exposure to feminist theory, the effect of gender, and the effect of the interaction between gender and exposure to feminist theory on the sexism levels of the participants, on their recognition of sexist scenarios, and on their responses to witnessing sexism in their lives. The effect of exposure to feminist theory on the strength of self-identification as feminist was also measured. \r\n\r\nEffect of exposure to feminist theory, effect of gender, and interactions on sexism levels\r\n\r\nParticipants were asked to quantify in hours their exposure to feminist research/teaching; their answers were then coded as “exposure” and “no exposure” and the results were compared. \r\n\r\nSexism was measured with the Ambivalent Sexism Inventory (Glick & Fiske, 1996), which provides three measures: the ambivalent (or overall) sexism level, the benevolent sexism level and the hostile sexism level. The levels of sexism were calculated for each participant, with higher numbers indicating higher levels of sexism. 
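The chi-square result reported above can be reproduced with a short sketch. Note that the 2x2 cell counts below are not taken from the data file: they are inferred from the reported percentages (77.4% of 53 female and 53.6% of 28 male participants identifying as feminist), so this is an illustrative reconstruction in Python rather than the original analysis.

```python
# Pearson chi-square for the gender x feminist-identification 2x2 table.
# Cell counts are inferred from the reported percentages, not taken
# from the original data file.
observed = [[41, 12],   # female: feminist, not feminist
            [15, 13]]   # male:   feminist, not feminist

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# chi2 = sum over cells of (O - E)^2 / E, with E = row_total * col_total / n
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)
# With these counts, chi2 comes out at about 4.86, in line with the
# reported value of 4.858 on 1 degree of freedom.
```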
\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"937"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"938"},["text","data/SPSS.sav"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"939"},["text","Zurita2017"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"940"},["text","John Towse"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"941"},["text","Open"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"942"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"943"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"944"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project 
supervisor"],["elementTextContainer",["elementText",{"elementTextId":"945"},["text","Chris Walton"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"946"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"947"},["text","Social Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"948"},["text","82 participants’ responses to the survey were analysed, of which 28 were male, 53 were female, and one preferred not to say"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"949"},["text","ANOVA\r\nChi-Square"]]]]]]]],["item",{"itemId":"105","public":"1","featured":"0"},["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaire(s)"]]]]]]]],["itemType",{"itemTypeId":"17"},["name","Software"],["description","A computer program in source or compiled form. 
Examples include a C source file, MS-Windows .exe executable, or Perl script."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2345"},["text","The effects of screen exposure on developmental skills among children at two and three years of age."]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2346"},["text","Afrah Alazemi"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2347"},["text","2015"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2348"},["text","Previous research into the topic of children’s development has tended to take place in Western nations (Kuta, 2017; Martinot, 2021). One aspect of development is language development, and one aspect of research on that matter is the use of electronic devices, with the potential for consequent effects on children’s language abilities. This paper reviews and builds upon the scope of the available research, with its disparate findings, by offering research from the context of Kuwait, a non-western nation where parents tend to be in favour of their children having access to new technologies regardless of their age (Dashti & Yateem, 2018). 
The increasing number of children being exposed to electronic devices of various descriptions raises concerns regarding the possible adverse effects of screen exposure on their development, particularly through displacement of educationally enriching activities, which provides the motivation here (Haughton, Aiken & Cheevers, 2015). Based on a review of the existing literature, the present research starts from the hypothesis that language development will be negatively correlated with media exposure. Valid data relating to 96 children of 24 to 36 months of age were collected using two questionnaires, one relating to the child’s knowledge of Arabic words on various topics (voices of animals, names of animals, vehicles, toys, food and drink, etc.) and the other quantifying the child’s daily screen time. Ordinary least squares analysis was performed using SPSS, version 26. While a statistically significant positive moderate correlation between language expression score and age was found – an increase in age was associated with an increase in language expression or the number of words understood and expressed – no significant effect of screen time on language expression was found after adjusting for age. This indicates, therefore, the value of employing non-western populations in research into cognitive development, and suggests the need for further research in order to attain generalisable findings."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2349"},["text","Developmental Psychology "]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2350"},["text","The parents of a total of 100 participant children took part in a questionnaire survey. 
The reports of 4 parents were excluded because their child’s age exceeded 36 months and the inclusion criteria for the study were set at 24 to 36 months. Participants were selected by means of opportunity sampling. An announcement was sent via WhatsApp to those of my contacts who had children of an age appropriate for inclusion in the study. Parents were recruited by sending a link to the survey through WhatsApp. Family and friends were then asked to pass the WhatsApp number on to those they knew who had children within the set age range. \r\nParents read information about the study and their informed consent to participate in the questionnaire survey was obtained via Qualtrics. The Lancaster University Psychology Department gave ethical approval for the present study. \r\n\r\nProcedure\r\nThe data for the present work were gathered by means of an online questionnaire via Qualtrics between 7 June 2021 and 22 June 2021. During this time, participants submitted answers to two questionnaires: a) the Arabic CDI, in which Arabic words are arranged according to groups (for example voices of animals, names of animals, vehicles, toys, food and drink, etc.), used to measure the child’s knowledge of the Arabic language (Abdel Wahab, 2020), and b) a questionnaire related to the number of hours the child spent in front of the screen, the parents’ opinion of the appropriate amount of screen time which children can spend at their screens, their control over their children’s viewing of the screens, and whether or not the children are allowed to watch while sleeping and eating. The survey instruments were designed to measure the extent to which screen viewing is related to the language development of Kuwaiti children aged between two and three years.\r\nMaterials\r\nCDI: The Arabic CDI language scale developed by Abdel Wahab (2020) is a questionnaire comprising a set of categories containing checklists for identifying variety and number of words. 
In front of each word there are three options (‘knows it’, ‘knows it and says it’, ‘does not know it’) and parents are asked to respond to each item according to their children’s knowledge of these words. The Arabic CDI questionnaire contains 100 words divided into the following categories: voices of animals, names of animals, transport, toys, food and drink, clothes, parts of body, home furniture, little things inside the house, things and places outside the home, people, games and daily routine, actions, time-related words, adjectives, pronouns, question words, prepositions, and number formulas.\r\nMedia exposure questionnaire: Following the language questionnaire, parents completed a second survey measuring their children’s screen viewing, stating how many hours per day they spent watching a screen. Parents were asked to report frequency of screen use by choosing among the following six options: None, 0 to 1 hour, 1 to 2 hours, 3 to 4 hours, 5 to 6 hours, and > 6 hours. Participating parents were then asked to state what length of time they would consider it appropriate for their children to watch a screen, with the same set of responses available to them. 
There was then an item asking the parents whether they were making any efforts to reduce their children’s screen time, such as setting specific days or times for viewing or preventing them from viewing their screens while eating or in the bedroom, for example.\r\n"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2785"},["text","Data"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2786"},["text","Kristy Dunn"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2787"},["text","100 "]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2788"},["text","correlation and regression. "]]]]]]]],["item",{"itemId":"106","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"134"},["src","https://www.johnntowse.com/LUSTRE/files/original/722ae4ceef6a14d9bbfc8bca41b825cf.pdf"],["authentication","657e3892388b2f3c175c84267315a3bb"]]],["collection",{"collectionId":"11"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"987"},["text","Secondary analysis"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2352"},["text","Film language affecting behaviour: A psycholinguistic approach"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2353"},["text","Aleksandra Tuneski"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2354"},["text","2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2355"},["text","Films are a popular form of art and entertainment that enable people to enjoy a story through multiple stimuli perception and stimulation of emotions. Many elements of a film shape the audience’s attitude towards it, yet language style has rarely been considered in research. This study examined whether a relationship exists between audiences’ preference for films and the linguistic style present in them, concentrating predominantly on the emotional factors of language in films. 
A dataset containing public ratings for a wide range of films was obtained from the Internet Movie Database platform and paired with the respective transcribed film dialogues provided by OpenSubtitles.org. The corpus transcripts (n=88,573) were analysed using the Linguistic Inquiry and Word Count software, and all the variables produced were then correlated with IMDb’s weighted film ratings. The project found that all types of emotion present in the transcripts of film language were significantly negatively associated with the IMDb rating outcomes, although the effect sizes were small. This finding suggests that, when it comes to films, emotions may be felt through other channels of stimulus perception rather than through verbal language. Additional exploratory analyses showed how other variables correlated with film rating scores, and practical applications of the study findings within the advertising industry were identified."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2356"},["text","Pearson’s correlation"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2357"},["text","Dataset\r\n\r\nThe dataset used for the study is purely secondary and consists of transcribed film dialogues (N=88,573) complemented with each film’s respective Internet Movie Database (IMDb) rating, which at the time of collection had a minimum of 100 user ratings per film. IMDb is an online film rating platform where members of the public register for an account and are then able to rate and review the films they have watched. Registered IMDb members rate films on a 10-point scale, with 1 indicating “terrible” and 10 indicating “excellent” (Boyd et al., 2020). 
IMDb’s rating algorithms produce ratings that are weighted by metrics associated with users, rather than simple average ratings. Although the algorithms are unavailable to the public, IMDb’s rating system has shown consistency across films because the weighted ratings provide reliability by reducing the possibility that a small group of users can take advantage of the rating system (IMDb, 2021). IMDb is one of the most popular and authoritative film rating websites, where the total ratings of a film are anonymous and voluntarily provided (Sawers, 2015). \r\n\r\nThe transcribed film dialogue data were provided by OpenSubtitles.org, and the corpus was previously organised and used in a study by Boyd et al. (2020); it was provided by the authors for the purpose of this project. OpenSubtitles.org is an online website that provides transcribed and translated captions of motion pictures, audio files and various other audio-visual files (OpenSubtitles.org, 2021). The corpus used by Boyd et al. (2020) contains only English-language film subtitles, corresponding to films originally released in English or foreign films whose dialogues have been translated into English. Boyd et al. (2020) combined the transcribed film dialogues provided by OpenSubtitles.org with the IMDb ratings, along with other IMDb categories such as film genre, year of release, country of production, et cetera. Almost 90% of the IMDb categories linked to the films’ ratings are irrelevant for the purposes of this project, thus only the film ratings will be taken into consideration for analysis.  \r\n\r\nAutomated Textual Analysis Software (LIWC)\r\n\r\nTo conduct the automated textual analysis, this research project will use the Linguistic Inquiry and Word Count (LIWC) tool, also called “Luke”. LIWC is a textual analysis program that measures the degree to which various dimensions of words are used in a text (Tausczik & Pennebaker, 2010). 
The LIWC program has two central features – the processing component and the dictionaries. The processing feature takes a text file and analyses it word by word, comparing each word with the dictionary files and sorting the word as, for example, a verb or a second-person pronoun (Boyd, 2017). Once the program finishes running, it produces an output listing all the LIWC categories used in the text, as well as the rates and percentages at which each category was used in the given text. \r\n\r\nThe dictionaries are at the heart of the LIWC program, and they identify the group of words that belong to each category (Pennebaker et al., 2015). When the program was being created, the authors aimed to develop measures to define the emotions present in words, cognitive processes, signs of self-reflection, et cetera, and in order to assign a psychological component to words, human judges contributed to developing the categories LIWC possesses today (Boyd, 2017). Across approximately 80 dimensions (see Appendix A), LIWC analyses the text in relation to various parts of speech, thinking styles, social concerns and emotions (Pennebaker et al., 2001). For example, the “positive emotion” category contains words such as “love”, “happy” and “nice”, while the “cognitive processes” category comprises words like “examine”, “think” and “understand”. \r\n\r\nOver the years, LIWC has been able to uncover psychological patterns and personalities purely from textual analysis; Petrie et al. (2008) used LIWC to investigate the Beatles’ lyrics and found that it was possible to distinguish each songwriter’s unique language style, and also to discover which Beatle’s style was predominant in collaboratively written songs. Research has shown LIWC to be one of the most reliable automated textual analysis tools for uncovering and predicting the psychological implications residing in written sources; this study will therefore employ this tool to test its hypothesis. 
\r\n\r\nData Preparation and Analysis\r\n\r\nThe initial corpus was subjected to cleaning procedures, in which data that did not meet all inclusion criteria were removed from the dataset. The inclusion criteria were that film ratings had at least 100 user votes, that transcribed dialogues contained at least 100 words, and that corpus variables contained all data values. The cleaned dataset (N=85,130) will be processed in the LIWC program, where each word within the transcripts will be counted and sorted among the LIWC dictionary categories to which it belongs. For the main hypothesis, the program will analyse the dataset for LIWC variables that have been shown to correlate with positive and negative evaluations in the past. In this way, the quantified rates of positive and negative emotion words in each dialogue will be identified. Once the rates have been extracted, a bivariate Pearson’s correlation will be conducted to assess whether a significant relationship exists between positive and negative emotion words in film dialogues and their IMDb ratings. 
Additionally, exploratory analyses will be run to search for significant relationships between the dataset variables and the film ratings, again by conducting Pearson’s correlation tests between the ratings and all LIWC variables produced.\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2358"},["text","Lancaster University"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2359"},["text","Tuneski (2021)"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2360"},["text","Amy Austin and Lesley Wu "]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2361"},["text","Open"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2363"},["text","English "]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2364"},["text","Secondary Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2365"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project 
information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2956"},["text","Ryan Boyd"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2957"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2958"},["text","Language psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2959"},["text","88,573"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2960"},["text","Pearson Correlation "]]]]]]]],["item",{"itemId":"130","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"125"},["src","https://www.johnntowse.com/LUSTRE/files/original/8084b44115d59813660e075ffce6d2ea.doc"],["authentication","f1a23c86f34e4f68a5974dbcff0f1e50"]],["file",{"fileId":"126"},["src","https://www.johnntowse.com/LUSTRE/files/original/262025a38e591b0c3482ff3dae927560.doc"],["authentication","b70dc83ed90fba9c6c24a6b32ae6b3de"]]],["collection",{"collectionId":"10"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"819"},["text","Interviews"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2765"},["text","Understanding the psychological, perceptual and emotional impact signage has on residents in a local community. "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2766"},["text","Alexander Wootton"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2767"},["text","2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2768"},["text","The placement of signage, street furniture and advertisements can have a profound impact on the appearance of a built environment. They play a vital role in shaping the cultural, physical and social identities that impact the perceptions that residents and other stakeholders hold towards local communities, which in turn impacts on behaviours. 
Adopting a qualitative approach, this study examined the psychological, perceptual and emotional impact that signage and other visual features can have on residents in a local community. Semi-structured interviews were conducted with residents in One Manchester property areas, One Manchester place officers and residents living near these areas. Participants were shown a variety of visual images of signage and were prompted to discuss their emotional responses and thoughts, and to propose suggestions to improve signage. A thematic analysis was conducted using the interview data and indicated the following four themes: signage design, reputation, community engagement and impact of signage. Reflecting upon these themes, the results suggested that existing signage was physically ill-fitting and visually dull, lacking positive influential stimuli and evocative colours, and that it lacked the authenticity and character needed to emotionally resonate with passers-by. This negatively impacted the reputation of the communities, leading them to be categorised as economically poor with high crime rates, resulting in stakeholders feeling alienated and, in some cases, fearful. The results highlighted that the signage needs to be revitalised as part of a wider placemaking strategy to rejuvenate local environments perceived to be run down. This should support the ongoing evolution of these areas and engage community members to install signage that is both influential and reflects an overall collective vision.  
"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2769"},["text","signage, placemaking, community engagement, qualitative research, community reputation"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2770"},["text","Design\r\nDue to the need to gain an in-depth understanding of the psychological, perceptual, and emotional impact signage has on residents in a community, and factoring in the Covid-19 pandemic, a qualitative approach was adopted consisting of semi-structured interviews. This style of interview was considered the most suitable method, as it provides rich data on participants’ thoughts that are not constrained by the bounds of tick-box exercises or strict discussion guides. Such interviews enable researchers to “assess, confirm, validate, refute, or elaborate upon existing knowledge and the discovery of new knowledge” (Mcintosh & Morse, 2015, p. 1). This enables the discussion between the moderator and participant to flow more smoothly and naturally (Roulston et al., 2003), while a flexible guide at the moderator’s disposal keeps the conversation on topic. Interviews in the project were conducted using Microsoft Teams and telephone communication. The data were then assessed using Braun and Clarke’s (2006) six-step thematic analysis.\r\nBraun & Clarke’s (2006) six-step thematic analysis: \r\nFamiliarisation: Getting to know the overall data collected through re-reads of transcripts. \r\nCoding: Reducing sentences and phrases into small fragments of meaning or “codes”.  \r\nGenerating themes: Identifying patterns among codes. \r\nReview themes: Ensuring that the meanings identified are relevant to the representation of data collected (research objectives). 
\r\nDefine themes: Refine the themes developed by establishing their essence and significance. \r\nAnalysing themes: Highlight the frequency of themes and meanings derived from qualitative data analysis. Generate conclusions agreed upon by all researchers.\r\n\r\nParticipants\r\nA sample of 24 participants was originally agreed; however, only 14 participants were interviewed for the project. Participants were recruited either by One Manchester or by the lead researcher from areas across south, east and central Manchester. Participants were made up of the following:\r\n\r\nEight One Manchester residents \r\nThree One Manchester Place Coordinators who worked in specific patch areas\r\nThree local residents living in areas where One Manchester own property \r\n\r\nThe lead researcher conducted site visits around areas of Manchester so that communities could be physically inspected to identify signage, which was used to aid the discussion guide. The site visits were conducted in Rusholme, Openshawe and Clayton. \r\n\r\nVisiting these locations first to view all the signage, symbols and other visual features was invaluable both to generating stimulus material for the interviews and to the discussion guides. The aim of the sample was to gain a diverse range of viewpoints from a variety of demographics across Manchester to generate rich data. Participants were recruited from: Clayton, Droysden, Fallowfield, Gorton, Hulme, Openshawe, Rusholme and Whalley Range. A £20 shopping voucher was offered to incentivise participation in the study. \r\n\r\n\r\nMaterials \r\nInterview guide \r\n\r\nTo obtain the most effective feedback from participants, a discussion guide was created, which provided a structured framework to guide discussions (see Appendix A; see Appendix B for discussed images). 
When formatting the discussion guide, the lead researcher took into consideration current literature on signage and sought to examine residents’ attitudes, perceptions and behaviours in connection with signage in their local community. \r\n\r\nThe discussion guide was composed of four sections:\r\nSection 1: A general introduction to the subject area and participants’ current awareness of signage and other visuals in their area.\r\nSection 2: Focused heavily on signage and other visuals gathered from site visits. In all of the interviews, participants were shown the images in the order reflected in Appendix B, and they were asked the same set of questions in relation to each image in order to generate an in-depth discussion of the images. One Manchester and the lead researcher agreed that participants would not be informed that figures 1-4 were the perceived negative images and figures 5-8 were the perceived positive images.\r\nSection 3: Focused on the future trajectory for signage and symbols. Participants were asked how their perceptions would be affected if any of the discussed signage were placed in their areas now and in the future. Following this, participants were invited to share any recommendations on the design of signage.\r\nSection 4: This was only for One Manchester residents. They were asked questions about One Manchester’s performance and potential future actions with their communities. The section was designed to give residents an active voice in how One Manchester can strengthen their relations with residents and enact positive change to protect the future of local communities.\r\n\r\nEach question in the discussion guide was designed to be open-ended, to give participants wider scope to share their opinions openly. 
The guide was configured to offer flexibility to discuss topics; therefore, when required, the lead researcher altered the order and wording of questions to maintain the natural flow of discussion with participants.\r\n\r\nProcedure\r\n\r\nInterviews were carried out between June and August 2021. Participants were asked to share their opinions on a variety of topics concerning how signage in local communities affects residents psychologically, perceptually and emotionally. Before the interviews began, participants were provided with an information sheet outlining the study procedure, purpose, confidentiality and their right to withdraw at any time during the study. If participants accepted the conditions of being interviewed and taking part in the project, a time was then arranged to administer the interview at the convenience of the participant. Nine of the interviews were conducted through Microsoft Teams; the remaining five were conducted by telephone at the request of the participants. Before proceeding with the interview, the lead researcher restated the aims of the project and received verbal permission to go ahead with the discussion. Interviews followed the discussion guide to ensure they remained structured whilst probing concepts tied to the research question. Attention was devoted to each interview to give participants adequate flexibility to discuss matters significant to them that were not included in the discussion guide. When required, to guarantee ample depth, follow-up questions and prompts were employed to encourage participants to delve deeper into essential and intriguing answers (DeJonckheere & Vaughn, 2019). Field notes were taken during discussions, underlining both relevant and vital points, which enabled the researcher to refer back to any major points and subsequently assisted with data analysis (Rapley, 2004). 
Once all the questions had been completed, participants were asked to share any other matters they deemed crucial. If participants were then satisfied with the feedback provided, the moderator would end the interview and debrief participants about the study, with the debrief sent electronically. Discussions typically lasted between 30 minutes and 1 hour, and all were then transcribed.\r\n\r\nAnalysis \r\n\r\nAs previously mentioned, Braun and Clarke’s (2006) six-step thematic analysis was used to detect themes and patterns underpinning residents’ psychological perceptions, attitudes and behaviours towards signage in local communities. To support Braun and Clarke’s (2006) thematic analysis, a bottom-up analysis was utilised due to the project’s exploratory nature, as this facilitates the identification of themes that arise from consistent patterns within the data set. Firstly, after each interview was completed, the researcher immediately made notes of the key concepts and beliefs and then transcribed the discussion. To guarantee the accuracy of the transcripts and the lead researcher’s familiarity with the data content, audio recordings and transcripts were reviewed several times. Subsequently, the process of creating codes began: the lead researcher analysed the data set and identified key extracts from the data on the basis of their significance and relevance, which led to the creation of the codes. Thereafter, provisional themes were produced through a thorough examination of the coded data set, in which shared patterns were discovered and judged to be similar or unified under a core notion. All codes were integrated into a central theme. The provisional themes were then revised and reviewed to ensure the themes remained well articulated and unique. During this period, the coded excerpts linked to each core theme were re-examined to verify that they could reinforce the central theme and featured no inconsistencies with that theme (Braun and Clarke, 2006). 
At this point, a number of themes were either excluded or merged due to the lack of sufficient data to uphold them. The procedure was repeated several times to consolidate the relevance of the themes to the research question while rigorously ensuring they mirrored the patterns found in the data set (Braun and Clarke, 2006). Ultimately, the final themes were selected and a detailed account of each theme was provided. Once the thematic analysis process had been completed, extracts from the content were chosen to illustrate and support the relevant themes in the report."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2771"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2772"},["text","Word doc."]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2773"},["text","Wootton2021"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2774"},["text","Reva Maria George"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2775"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2776"},["text","Consultancy - Commercial report "]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2777"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2778"},["text","Text"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2779"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2780"},["text","Leslie Hallam"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2781"},["text","MSc."]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2782"},["text","Psychology of Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2783"},["text","14"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2784"},["text","Qualitative (Thematic 
Analysis)"]]]]]]]],["item",{"itemId":"138","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"132"},["src","https://www.johnntowse.com/LUSTRE/files/original/a339e171ed4f4ad6da75e1f93c80db7c.pdf"],["authentication","74c6799c7cc96af439fc872b4f1cc5f2"]]],["collection",{"collectionId":"10"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"819"},["text","Interviews"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2889"},["text","Understanding the psychological, perceptual and emotional impact signage has on residents in a local community. 
"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"2890"},["text","Alexander Wootton"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2891"},["text","15/09/2021"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2892"},["text","The placement of signage, street furniture and advertisements can have a profound impact on the appearance of a built environment. They play a vital role in shaping the cultural, physical and social identities that shape the perceptions residents and other stakeholders hold of local communities, which in turn influence behaviour. Adopting a qualitative approach, this study examines the psychological, perceptual and emotional impact that signage and other visual features can have on residents in a local community. A number of semi-structured interviews were conducted amongst residents in One Manchester property areas, One Manchester place officers and residents near these areas. Participants were shown a variety of visual images of signage and were prompted to discuss their emotional responses and thoughts, and to propose suggestions to improve signage. A thematic analysis was conducted using the interview data and indicated the following four themes: signage design, reputation, community engagement and impact of signage. 
Reflecting upon these themes, the results suggested that existing signage was physically ill-fitting and visually dull, lacking positive influential stimuli and evocative colours, and that it lacked the authenticity and character needed to resonate emotionally with passers-by. This negatively impacted the reputation of the communities, leading them to be categorised as economically poor with high crime rates, and resulting in stakeholders feeling alienated and, in some cases, fearful. The results highlighted that the signage needs to be revitalised as part of a wider placemaking strategy to rejuvenate local environments perceived to be run down. This should support the ongoing evolution of these areas and engage community members to install signage that is both influential and reflective of an overall collective vision.\r\n\r\n"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2893"},["text","signage, placemaking, community engagement, qualitative research, community reputation\r\n"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2894"},["text","Design\r\nDue to the need to gain an in-depth understanding of the psychological, perceptual, and emotional impact signage has on residents in a community, and factoring in the Covid-19 pandemic, a qualitative approach was adopted consisting of semi-structured interviews. This style of interview was considered the most suitable method as it provides rich data on participants’ thoughts, which are not constrained by the bounds of tick-box exercises or strict discussion guides. It enables researchers to “assess, confirm, validate, refute, or elaborate upon existing knowledge and the discovery of new knowledge” (Mcintosh & Morse, 2015, p. 1). 
Semi-structured interviews enable the discussion between the moderator and participant to flow more smoothly and naturally (Roulston et al., 2003), while a flexible guide at the moderator’s disposal keeps the conversation on topic. Interviews in the project were conducted using Microsoft Teams and telephone communication. The data was then analysed using Braun and Clarke’s (2006) six-step thematic analysis.\r\nBraun & Clarke’s (2006) six-step thematic analysis: \r\nFamiliarisation: Getting to know the overall data collected through re-reads of transcripts. \r\nCoding: Reducing sentences and phrases into small fragments of meaning or “codes”.  \r\nGenerating themes: Identifying patterns among codes. \r\nReviewing themes: Ensuring that the meanings identified are relevant to the representation of the data collected (research objectives). \r\nDefining themes: Refining the themes developed by establishing their essence and significance. \r\nAnalysing themes: Highlighting the frequency of themes and meanings derived from the qualitative data analysis, and generating conclusions agreed upon by all researchers.\r\n\r\nParticipants\r\nA sample of 24 participants was originally agreed; however, only 14 participants were interviewed for the project. Participants were recruited either by One Manchester or by the lead researcher from areas across south, east and central Manchester. Participants were made up of the following:\r\n\r\nEight One Manchester residents \r\nThree One Manchester Place Coordinators who worked in specific patch areas\r\nThree local residents living in areas where One Manchester owns property \r\n\r\nThe lead researcher conducted site visits around areas of Manchester so that communities could be physically inspected to identify signage, which was then used to inform the discussion guide. The site visits were conducted in Rusholme, Openshawe and Clayton. 
\r\n\r\nVisiting these locations first to view the signage, symbols and other visual features was invaluable both for generating stimulus material for the interviews and for developing the discussion guide. The aim of the sample was to gain a diverse range of viewpoints from a variety of demographics across Manchester to generate rich data. Participants were recruited from: Clayton, Droysden, Fallowfield, Gorton, Hulme, Openshawe, Rusholme and Whalley Range. A £20 shopping voucher was offered to incentivise participation in the study. \r\n\r\n\r\nMaterials \r\nInterview guide \r\n\r\nTo obtain the most effective feedback from participants, a discussion guide was created, which provided a structured framework to guide discussions (see Appendix A; see Appendix B for the discussed images). When formatting the discussion guide, the lead researcher took into consideration current literature on signage and sought to examine residents’ attitudes, perceptions and behaviours in connection to signage in their local community. \r\n\r\nThe discussion guide was composed of four sections:\r\nSection 1: A general introduction to the subject area and participants’ current awareness of signage and other visuals in their area.\r\nSection 2: Focused heavily on signage and other visuals gathered from the site visits. In all of the interviews, participants were shown the images in the order reflected in Appendix B, and they were asked the same set of questions in relation to each image in order to generate an in-depth discussion of the images. One Manchester and the lead researcher agreed participants would not be informed that figures 1-4 were the perceived negative images and figures 5-8 were the perceived positive images.\r\nSection 3: Focused on the future trajectory for signage and symbols. Participants were asked how their perceptions would be impacted if any of the discussed signage were placed in their areas now and in the future. 
Following this, participants were invited to share any recommendations for the design of signage.\r\nSection 4: This section was only for One Manchester residents. They were asked questions about One Manchester’s performance and potential future actions within their communities. The section was designed to give residents an active voice in how One Manchester can strengthen its relations with residents and enact positive change to protect the future of local communities.\r\n\r\nEach question in the discussion guide was designed to be open-ended, to give participants wider scope to share their opinions openly. The guide was configured to offer flexibility, so when required the lead researcher altered the order and wording of questions to maintain the natural flow of discussion with participants.\r\n\r\nProcedure\r\n\r\nInterviews were carried out between June and August 2021. Participants were asked to share their opinions on a variety of topics concerning how signage in local communities affects residents psychologically, perceptually and emotionally. Before the interviews began, participants were provided with an information sheet outlining the study’s procedure, purpose and confidentiality arrangements, and their right to withdraw at any point during the study. If participants accepted the conditions of being interviewed and taking part in the project, a time was arranged to conduct the interview at the participant’s convenience. Nine of the interviews were conducted through Microsoft Teams; the remaining five were conducted by telephone at the participants’ request. Before proceeding with each interview, the lead researcher restated the aims of the project and obtained verbal permission to go ahead with the discussion. Interviews followed the discussion guide to ensure they remained structured while probing concepts tied to the research question. 
Care was taken in each interview to give participants adequate flexibility to discuss matters significant to them that were not included in the discussion guide. When required, follow-up questions and prompts were employed to encourage participants to expand in depth on important and intriguing answers (DeJonckheere & Vaughn, 2019). Field notes were taken during each discussion, highlighting relevant and important points, which enabled the researcher to refer back to any major points and subsequently assisted with data analysis (Rapley, 2004). Once all the questions had been covered, participants were asked to share any other matters they deemed important. If participants were satisfied with the feedback they had provided, the moderator ended the interview and debriefed participants about the study, with the debrief sent electronically. Discussions typically lasted between 30 minutes and one hour, and all were subsequently transcribed.\r\n\r\nAnalysis \r\n\r\nAs previously mentioned, Braun and Clarke’s (2006) six-step thematic analysis was used to detect themes and patterns underpinning residents’ psychological perceptions, attitudes and behaviours towards signage in local communities. In applying Braun and Clarke’s (2006) thematic analysis, a bottom-up approach was adopted because of the project’s exploratory nature, as this facilitates the identification of themes that arise from consistent patterns within the data set. First, after each interview was completed, the researcher immediately made notes of the key concepts and beliefs and then transcribed the discussion. To ensure the accuracy of the transcripts and the lead researcher’s familiarity with the data, audio recordings and transcripts were reviewed several times. Coding then began: the lead researcher analysed the data set and identified key extracts on the basis of their significance and relevance, from which the codes were created. 
Thereafter, provisional themes were produced through a thorough examination of the coded data set, whereby shared patterns were discovered and judged to be similar or unified under a core notion, and all codes were integrated into a central theme. The provisional themes were then revised and reviewed to ensure each remained clearly articulated and distinct. During this stage, the coded excerpts linked to each core theme were re-examined to verify that they reinforced the central theme and featured no inconsistencies with it (Braun and Clarke, 2006). At this point, a number of themes were either excluded or merged due to the lack of sufficient data to uphold them. The procedure was repeated several times to consolidate the relevance of the themes to the research question while rigorously ensuring they mirrored the patterns found in the data set (Braun and Clarke, 2006). Ultimately, the final themes were selected and a detailed account of each theme was provided. Once the thematic analysis process had been completed, extracts from the content were chosen to illustrate and support the relevant themes in the report. \r\n\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2895"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2896"},["text","Word doc"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2897"},["text","Wooton2022"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2898"},["text","Joel Fox"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2899"},["text","open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2900"},["text","Consultancy - Commercial report"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2901"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2902"},["text","Data"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2903"},["text","Leslie Hallam"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2904"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2905"},["text","Psychology of Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2906"},["text","14"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the 
project"],["elementTextContainer",["elementText",{"elementTextId":"2907"},["text","Qualitative (thematic analysis)"]]]]]]]],["item",{"itemId":"186","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"205"},["src","https://www.johnntowse.com/LUSTRE/files/original/42f25a4afae4681322de3eaca175d305.pdf"],["authentication","f34904e516c4c04821ec1e52402b3ea9"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaire(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3707"},["text","Cerebral Lateralisation for Emotion Processing of Chimeric Faces in Individuals with Autism Spectrum Disorder "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3708"},["text","Alexandra Crossley"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3709"},["text","5th September 2023"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3710"},["text","Many studies have suggested that typical lateralisation for emotion processing tasks, such as facial emotion recognition, is lateralised to the right-hemisphere, with different emotions eliciting differing strengths of lateralisation (Bourne, 2010). However, there has been much debate as to the lateralisation of individuals with autism spectrum disorder (ASD) (Ashwin et al., 2005; Shamay-Tsoory et al., 2010). This study assessed the cerebral lateralisation of 30 adults with ASD, five children with ASD, 435 neurotypical adults and ten neurotypical children in a chimeric faces task, and aimed to identify whether the atypical lateralisation seen in children with ASD persists into adulthood (Taylor et al., 2012). Furthermore, the study aimed to identify whether lateralisation strength is affected by the emotion of the facial stimuli. 
No emotion- or age-related change in lateralisation was found; however, participants with ASD demonstrated a weaker right-hemispheric lateralisation compared to neurotypical participants. Therefore, this study supported the concept that individuals with ASD show atypical lateralisation which persists into adulthood; however, no evidence was found to support the concept that different emotions elicit different strengths of lateralisation."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3711"},["text","autism spectrum disorder, cerebral lateralisation, emotion processing, adults, children, chimeric faces task"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3712"},["text","Method\r\nParticipants\r\nData from a total of 481 participants with native-level English proficiency (or age-expected language development in children), normal or corrected-to-normal vision and no history of neurological disease or hearing loss were analysed for the current study (Table 1). Participants in the group ‘adults with ASD’ (N = 30; age: M = 30.17, SD = 9.85) were recruited through adverts on social media, through Prolific Academic (www.prolific.co), and through word of mouth. Participants in the groups ‘children with ASD’ (N = 5; age: M = 6.8, SD = 1.48) and ‘neurotypical children’ (N = 11; age: M = 7.0, SD = 1.90) were recruited through primary schools and word of mouth (Brooks, 2023), and parents of potential child participants were required to email a researcher to express their interest in participation. Participants in the group ‘neurotypical adults’ (N = 435; age: M = 29.44, SD = 8.03) were recruited through Prolific Academic (www.prolific.co) as part of a larger online behavioural laterality battery (Parker et al., 2021). 
Of the 481 participants who took part in the study, 32 were excluded during the data cleaning process (see Table 1 and Data Analysis for further information).\r\nMeasures\r\nAs part of the study, a series of questionnaires was administered to collect information about the participants to ensure that individual differences could be accounted for. Participants were asked to complete these questionnaires and their associated tasks prior to beginning the main chimeric faces task, and were requested to use a desktop or laptop computer for the entirety of the study. For the ‘neurotypical children’ and ‘children with ASD’ groups, parents were asked to complete the questionnaires on behalf of the children and were asked to be present for the tasks, which were completed during a Microsoft Teams call with a researcher.\r\nThe study was completed online using the Gorilla Experiment Builder (www.gorilla.sc), a cloud-based tool for collecting data in the behavioural sciences. \r\nDemographic Questionnaire\r\nThe demographic questionnaire asked participants their age, gender, length of time in education (in years), language status, two questions assessing handedness (“Which is your dominant hand? / Which hand do you prefer to use for tasks such as writing, cutting, and catching a ball?”) and footedness (“Which foot do you normally use to step up on a ladder/step?”), and two eye dominance tests (Miles, 1929; Porac & Coren, 1976). Participants were also asked whether they had a diagnosis of any developmental disorders, including ASD, dyslexia, attention deficit hyperactivity disorder or a language disorder (such as 'developmental language disorder' or 'specific language impairment'). For each diagnosis, participants had the option to answer “Yes”, “No”, or “Prefer not to say”, with the exception of ASD, which also had the option to answer “No but I am self-diagnosed”. 
At this point, participants were sorted into their groups based on age (‘children’: five- to 11-years-old; or ‘adults’: 18- to 50-years-old) and ASD diagnosis (‘with ASD’, or ‘neurotypical’). Adults with a self-diagnosis of ASD were included in the ‘adults with ASD’ group.\r\nEdinburgh Handedness Inventory\r\nThe Edinburgh Handedness Inventory (EHI; Oldfield, 1971) was administered to provide a scaled score of handedness. Adult participants were asked to score ten daily tasks on a five-point Likert scale based on which hand they preferred to use during each task (“Left hand strongly preferred” = 2, “Left hand preferred” = 1, “No preference” = 0, “Right hand preferred” = 1, or “Right hand strongly preferred” = 2). These tasks included daily activities such as writing, brushing teeth, and opening a box. The EHI was scored by combining the direction and exclusiveness of the hand preference. Two totals were created: one of right-hand preference and one of left-hand preference. The difference was then found by subtracting the left-hand total from the right-hand total. This was then divided by the total score of both hand preference scores and multiplied by 100 (i.e., 100 x (right-hand total – left-hand total) / (right-hand total + left-hand total)). Final EHI scores ranged from -100 to +100, with positive scores indicating right-handedness, and negative scores indicating left-handedness. Child participants were not required to complete the EHI questionnaire.\r\nLexical Test for Advanced Learners of English\r\nA version of the Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer & Broersma, 2012) was provided to assess the participants’ level of proficiency in English. Within this, adult participants were shown 60 written stimuli comprised of English words and pseudowords (words that follow the orthographical and phonetic rules of the English language and are pronounceable but are otherwise nonsense words, e.g. 
‘proom’) and asked to assess whether each word was an existing English word or not. Scores were calculated by averaging the percentages of correct answers for English words and pseudowords, with final scores ranging from 0-100. Child participants were not required to complete the LexTALE task.\r\nAutism-Spectrum Quotient (Short Version)\r\nAn abridged version of the Autism-Spectrum Quotient (AQ-Short; Hoekstra et al., 2011) was used to provide a measure of ASD traits. Participants with ASD were asked to rate 28 statements on a four-point Likert scale based on their level of agreement, with each answer accruing a different number of points (“Definitely agree” = 1, “Slightly agree” = 2, “Slightly disagree” = 3, or “Definitely disagree” = 4). On items in which “Definitely agree” represented a characteristic of ASD, the scoring was reversed. The scores for each question were totalled, with potential scores ranging from 28 (no ASD traits) to 112 (endorsement of all ASD traits). Scores above 65 indicated ASD traits to a diagnosable degree. Neurotypical participants were not required to complete the AQ-Short questionnaire.\r\nProcedure\r\nLateralisation for Facial Emotion Processing Task\r\nA chimeric faces task was used to assess lateralisation for facial emotion processing.\r\nStimuli. The chimeric faces stimuli were created by Dr Michael Burt (Burt & Perrett, 1997) and provided by Parker et al. (2021).\r\nA collection of 16 different facial stimuli was created by merging two photographs of a man’s face depicting one of four emotions (‘happiness’, ‘sadness’, ‘anger’, or ‘disgust’) vertically down the centre of the face and blended at the midline (see Figure 1 for an example). Each emotion was paired either with itself, causing both hemifaces of the facial stimuli to match in emotion (a ‘same face’), or with a differing emotion, causing the two hemifaces of the facial stimuli to differ (a ‘chimeric face’). 
Of the 16 stimuli, 12 were ‘chimeric face’ and four were ‘same face’.\r\nTask. Each trial began with a fixation cross shown for 1000ms, followed by the face stimuli for 400ms. Participants then recorded which emotion they saw most strongly by clicking the corresponding button from a choice of the four emotions (Figure 2). For the children, emoticons were used instead of written words (Oleszkiewicz et al., 2017) (Figure 3). A response triggered the beginning of the next trial, with a time-out duration set at 10400ms after which the next trial was triggered automatically. Response choice and response times were recorded. \r\nThe task was split into four blocks of trials with a break between each block. Stimuli were presented in a random order and shown twice in each block, resulting in the participants being shown 32 stimuli per block and a total of 128 within the whole task. \r\n\r\n   \r\nParticipants were familiarised with the stimuli at the start of the task, with the ‘same face’ stimuli being shown alongside a label explaining which emotion was being presented, to ensure they could recognise the emotions. A practice block was given at the start of the task to ensure participants knew how to complete the task, using the emotions ‘surprise’ and ‘fear’. \r\nAdditional Measures\r\nAs data collection also included tasks for other studies, participants were also asked to complete a version of the Empathy Quotient – short (Wakabayashi et al., 2006), and undertake a dichotic listening task and its associated device checks (Parker et al., 2021). As these items were not part of the main study, participants were asked to complete these following the completion of the main study and its associated questionnaires and tasks, to ensure any findings from the study were not due to the additional measures.\r\nLaterality Index\r\nA laterality index (LI) for each participant was calculated using the same method as Parker et al. 
(2021) by finding the difference between the number of times the participant chose the right-hemiface emotion and the left-hemiface emotion. This difference was then divided by the total number of times they chose either the right- or left-hemiface emotion, and multiplied by 100 (i.e., 100 x (right hemiface – left hemiface) / (right hemiface + left hemiface)). Scores ranged between -100 and +100, with a negative LI indicating a left-hemiface bias, and thus a right-hemispheric dominance, and a positive LI indicating the opposite.\r\nData Analysis\r\nParticipants who scored less than 80 on the LexTALE task were removed, as their understanding of English was deemed not strong enough and might have caused issues with understanding the instructions (Parker et al., 2021). Furthermore, all trials with a response time faster than 200ms were removed, as responses at this speed were considered too quick to have been based on processing of the stimuli (Parker et al., 2021). In addition, outlier response times for each participant were removed using Hoaglin and Iglewicz's (1987) procedure: outliers were defined as response times more than 1.65 times the interquartile range below the first quartile or above the third quartile (i.e., below Q1 – (1.65 x (Q3 – Q1)) or above Q3 + (1.65 x (Q3 – Q1))). Following the removal of all outlying trials, any participant with fewer than 80% of trials remaining was removed. Participants who scored less than 75% on ‘same face’ trials (trials in which both hemifaces depicted the same emotion) were also noted, because emotion processing is an area of difficulty for individuals with ASD. 
Within this, three participants in the ‘children with ASD’ group (60%), three participants in the ‘neurotypical children’ group (27.27%), four participants in the ‘adults with ASD’ group (13.33%), and 30 participants in the ‘neurotypical adults’ group (7.41%) scored less than 75% on ‘same face’ trials, suggesting they had difficulties identifying the emotions.\r\nTo address the hypotheses, a linear model was fitted with LI as the outcome and group (‘ASD’ or ‘neurotypical’), age (‘adult’ or ‘child’) and emotion (‘happy’ and ‘angry’, or ‘sad’ and ‘disgust’) as the predictors, including interactions between each predictor (Group x Age; Group x Emotion; Age x Emotion; and a three-way interaction, Group x Age x Emotion)."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3713"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3714"},["text",".csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3715"},["text","Crossley2023"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3716"},["text","Alexandra Haslam \r\nAlexis McGuire\r\nxue guo"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3717"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related 
resource"],["elementTextContainer",["elementText",{"elementTextId":"3718"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3719"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3720"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"3721"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"3722"},["text","Margriet Groen"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"3723"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"3724"},["text","Developmental, Neuropsychology "]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"3725"},["text","481 participants with native level English proficiency, 164 Male, 240 female and 1 other."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"3726"},["text","Linear Mixed 
Effects Modelling and T-Test"]]]]]]]],["item",{"itemId":"200","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"227"},["src","https://www.johnntowse.com/LUSTRE/files/original/e062f8b5eaffecab9990636ba589a6b1.pdf"],["authentication","f34904e516c4c04821ec1e52402b3ea9"]]],["collection",{"collectionId":"6"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"187"},["text","RT & Accuracy"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"188"},["text","Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes"]]]]]]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"3988"},["text","Cerebral Lateralisation for Emotion Processing of Chimeric Faces in Individuals with Autism Spectrum Disorder "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"3989"},["text","Alexandra Crossley"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3990"},["text","5th September 2023"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3991"},["text","Many studies have suggested that emotion processing, in tasks such as facial emotion recognition, is typically lateralised to the right hemisphere, with different emotions eliciting differing strengths of lateralisation (Bourne, 2010). However, there has been much debate as to the lateralisation of individuals with autism spectrum disorder (ASD) (Ashwin et al., 2005; Shamay-Tsoory et al., 2010). This study assessed the cerebral lateralisation of 30 adults with ASD, five children with ASD, 435 neurotypical adults and ten neurotypical children in a chimeric faces task, and aimed to identify whether the atypical lateralisation seen in children with ASD persists into adulthood (Taylor et al., 2012). Furthermore, the study aimed to identify whether lateralisation strength is affected by the emotion of the facial stimuli. 
No emotion- or age-related change in lateralisation was found; however, participants with ASD demonstrated a weaker right-hemispheric lateralisation than neurotypical participants. This study therefore supported the concept that individuals with ASD show atypical lateralisation which persists into adulthood; however, no evidence was found to support the concept that different emotions elicit different strengths of lateralisation."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3992"},["text","autism spectrum disorder, cerebral lateralisation, emotion processing, adults, children, chimeric faces task"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"3993"},["text","Method\r\nParticipants\r\nData from a total of 481 participants with native-level English proficiency (or age-expected language development in children), normal or corrected-to-normal vision and no history of neurological disease or hearing loss were analysed for the current study (Table 1). Participants in the group ‘adults with ASD’ (N = 30; age: M = 30.17, SD = 9.85) were recruited through adverts on social media, through Prolific Academic (www.prolific.co), and through word of mouth. Participants in the groups ‘children with ASD’ (N = 5; age: M = 6.8, SD = 1.48) and ‘neurotypical children’ (N = 11; age: M = 7.0, SD = 1.90) were recruited through primary schools and word of mouth (Brooks, 2023), and parents of potential child participants were required to email a researcher to express their interest in participation. Participants in the group ‘neurotypical adults’ (N = 435; age: M = 29.44, SD = 8.03) were recruited through Prolific Academic (www.prolific.co) as part of a larger online behavioural laterality battery (Parker et al., 2021). 
Of the 481 participants who took part in the study, 32 were excluded during the data cleaning process (see Table 1 and Data Analysis for further information).\r\n\r\nMeasures\r\nAs part of the study, a series of questionnaires was administered to collect information about the participants so that individual differences could be accounted for. Participants were asked to complete these questionnaires and tasks prior to beginning the main chimeric faces task, and were requested to use a desktop or laptop computer for the entirety of the study. For the ‘neurotypical children’ and ‘children with ASD’ groups, parents were asked to complete the questionnaires on behalf of the children and to be present for the tasks, which were completed during a Microsoft Teams call with a researcher.\r\nThe study was completed online using the Gorilla Experiment Builder (www.gorilla.sc), a cloud-based tool for collecting data in the behavioural sciences.\r\n\r\nDemographic Questionnaire\r\nThe demographic questionnaire asked participants for their age, gender, length of time in education (in years) and language status, and included two questions assessing handedness (“Which is your dominant hand? / Which hand do you prefer to use for tasks such as writing, cutting, and catching a ball?”) and footedness (“Which foot do you normally use to step up on a ladder/step?”), and two eye dominance tests (Miles, 1929; Porac & Coren, 1976). Participants were also asked whether they had a diagnosis of any developmental disorders, including ASD, dyslexia, attention deficit hyperactivity disorder or a language disorder (such as 'developmental language disorder' or 'specific language impairment'). For each diagnosis, participants had the option to answer “Yes”, “No”, or “Prefer not to say”, with the exception of ASD, which also had the option to answer “No but I am self-diagnosed”. 
At this point, participants were sorted into their groups based on age (‘children’: five to 11 years old; or ‘adults’: 18 to 50 years old) and ASD diagnosis (‘with ASD’, or ‘neurotypical’). Adults with a self-diagnosis of ASD were included in the ‘adults with ASD’ group.\r\n\r\nEdinburgh Handedness Inventory\r\nThe Edinburgh Handedness Inventory (EHI; Oldfield, 1971) was administered to provide a scaled score of handedness. Adult participants were asked to score ten daily tasks on a five-point Likert scale based on which hand they preferred to use during each task (“Left hand strongly preferred” = 2, “Left hand preferred” = 1, “No preference” = 0, “Right hand preferred” = 1, or “Right hand strongly preferred” = 2). These included daily activities such as writing, brushing teeth, and opening a box. The EHI was scored by combining the direction and exclusiveness of the hand preference. Two totals were created: one of right-hand preference and one of left-hand preference. The difference was then found by subtracting the left-hand total from the right-hand total. This was then divided by the sum of the two totals and multiplied by 100 (i.e., 100 x (right-hand total – left-hand total) / (right-hand total + left-hand total)). Final EHI scores ranged from -100 to +100, with positive scores indicating right-handedness, and negative scores indicating left-handedness. Child participants were not required to complete the EHI questionnaire.\r\n\r\nLexical Test for Advanced Learners of English\r\nA version of the Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer & Broersma, 2012) was administered to assess the participants’ level of proficiency in English. Within this, adult participants were shown 60 written stimuli comprising English words and pseudowords (words that follow the orthographical and phonetic rules of the English language and are pronounceable but are otherwise nonsense words, e.g. 
‘proom’) and asked to judge whether each was an existing English word or not. Scores were calculated by averaging the percentages of correct answers for English words and pseudowords, with final scores ranging from 0 to 100. Child participants were not required to complete the LexTALE task.\r\n\r\nAutism-Spectrum Quotient (Short Version)\r\nAn abridged version of the Autism-Spectrum Quotient (AQ-Short; Hoekstra et al., 2011) was used to provide a measure of ASD traits. Participants with ASD were asked to rate 28 statements on a four-point Likert scale based on their level of agreement, with each answer accruing a different number of points (“Definitely agree” = 1, “Slightly agree” = 2, “Slightly disagree” = 3, or “Definitely disagree” = 4). On items in which “Definitely agree” represented a characteristic of ASD, the scoring was reversed. The scores for each question were totalled, with potential scores ranging from 28 (no ASD traits) to 112 (all ASD traits present). Scores above 65 indicated ASD traits to a diagnosable degree. Neurotypical participants were not required to complete the AQ-Short questionnaire.\r\n\r\nProcedure\r\nLateralisation for Facial Emotion Processing Task\r\nA chimeric faces task was used to assess lateralisation for facial emotion processing.\r\nStimuli. The chimeric faces stimuli were created by Dr Michael Burt (Burt & Perrett, 1997) and provided by Parker et al. (2021).\r\nA collection of 16 facial stimuli was created by merging two photographs of a man’s face, each depicting one of four emotions (‘happiness’, ‘sadness’, ‘anger’, or ‘disgust’), vertically down the centre of the face and blending them at the midline (see Figure 1 for an example). Each emotion was paired either with itself, so that both hemifaces of the stimulus matched in emotion (a ‘same face’), or with a differing emotion, so that the two hemifaces differed (a ‘chimeric face’). 
Of the 16 stimuli, 12 were ‘chimeric faces’ and four were ‘same faces’.\r\nTask. Each trial began with a fixation cross shown for 1000ms, followed by the face stimulus for 400ms. Participants then recorded which emotion they saw most strongly by clicking the corresponding button from a choice of the four emotions (Figure 2). For the children, emoticons were used instead of written words (Oleszkiewicz et al., 2017; Figure 3). A response triggered the beginning of the next trial, with a time-out duration set at 10400ms, after which the next trial was triggered automatically. Response choice and response times were recorded.\r\nThe task was split into four blocks of trials with a break between each block. Stimuli were presented in a random order and shown twice in each block, so that participants were shown 32 stimuli per block and a total of 128 across the whole task.\r\n\r\nParticipants were familiarised with the stimuli at the start of the task, with the ‘same face’ stimuli shown alongside a label stating which emotion was being presented, to ensure they could recognise the emotions. A practice block using the emotions ‘surprise’ and ‘fear’ was also given at the start to ensure participants knew how to complete the task.\r\n\r\nAdditional Measures\r\nAs data collection also included tasks for other studies, participants were also asked to complete a version of the Empathy Quotient – Short (Wakabayashi et al., 2006) and to undertake a dichotic listening task and its associated device checks (Parker et al., 2021). As these items were not part of the main study, participants completed them after the main study and its associated questionnaires and tasks, to ensure any findings were not influenced by the additional measures.\r\n\r\nLaterality Index\r\nA laterality index (LI) for each participant was calculated using the same method as Parker et al. 
(2021) by finding the difference between the number of times the participant chose the right-hemiface emotion and the left-hemiface emotion. This difference was then divided by the total number of times they chose either the right- or left-hemiface emotion, and multiplied by 100 (i.e., 100 x (right hemiface – left hemiface) / (right hemiface + left hemiface)). Scores ranged between -100 and +100, with a negative LI indicating a left-hemiface bias, and thus a right-hemispheric dominance, and a positive LI indicating the opposite.\r\n\r\nData Analysis\r\nParticipants who scored less than 80 on the LexTALE task were removed, as their understanding of English was deemed not strong enough and might have caused issues with understanding the instructions (Parker et al., 2021). Furthermore, all trials with a response time faster than 200ms were removed, as responses at this speed were considered too quick to have been based on processing of the stimuli (Parker et al., 2021). In addition, outlier response times for each participant were removed using Hoaglin and Iglewicz's (1987) procedure: outliers were defined as response times more than 1.65 times the interquartile range below the first quartile or above the third quartile (i.e., below Q1 – (1.65 x (Q3 – Q1)) or above Q3 + (1.65 x (Q3 – Q1))). Following the removal of all outlying trials, any participant with fewer than 80% of trials remaining was removed. Participants who scored less than 75% on ‘same face’ trials (trials in which both hemifaces depicted the same emotion) were also noted, because emotion processing is an area of difficulty for individuals with ASD. 
Within this, three participants in the ‘children with ASD’ group (60%), three participants in the ‘neurotypical children’ group (27.27%), four participants in the ‘adults with ASD’ group (13.33%), and 30 participants in the ‘neurotypical adults’ group (7.41%) scored less than 75% on ‘same face’ trials, suggesting they had difficulties identifying the emotions.\r\nTo address the hypotheses, a linear model was fitted with LI as the outcome and group (‘ASD’ or ‘neurotypical’), age (‘adult’ or ‘child’) and emotion (‘happy’ and ‘angry’, or ‘sad’ and ‘disgust’) as the predictors, including interactions between each predictor (Group x Age; Group x Emotion; Age x Emotion; and a three-way interaction, Group x Age x Emotion)."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"3994"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"3995"},["text",".csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"3996"},["text","Crossley2023"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"3997"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"3998"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"3999"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"4000"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"4001"},["text","LA1 4YF"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"4027"},["text","Mshary Al Jaber"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"4002"},["text","Margriet Groen"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"4003"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"4004"},["text","Developmental, Neuropsychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"4005"},["text","481 participants with native level English proficiency, 164 Male, 240 female and 1 other."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the 
project"],["elementTextContainer",["elementText",{"elementTextId":"4006"},["text","Linear Mixed Effects Modelling and T-Test"]]]]]]]]]