["itemContainer",{"xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance","xsi:schemaLocation":"http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd","uri":"https://www.johnntowse.com/LUSTRE/items/browse?output=omeka-json&page=12","accessDate":"2026-05-03T09:41:04+00:00"},["miscellaneousContainer",["pagination",["pageNumber","12"],["perPage","10"],["totalResults","148"]]],["item",{"itemId":"71","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"25"},["src","https://www.johnntowse.com/LUSTRE/files/original/2d240b7ef45b825fd4cfdb477cc8aa00.pdf"],["authentication","9b4db285519912b22505ae113ad6ad1b"]]],["collection",{"collectionId":"6"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"187"},["text","RT & Accuracy"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"188"},["text","Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1675"},["text","Contrast polarity of a stimulus does not affect the cueing effect"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1676"},["text","Eleni Sevastopoulou"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1677"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1678"},["text","According to the contrast polarity effect, people’s attention is sensitive to dark objects within light backgrounds. According to the gaze-cueing effect, a human gaze shift attracts people’s attention towards the direction of the darker region of the observed eyes; the gaze-cueing effect thus depends on the contrast polarity of the observed eyes, because a human gaze is perceived as a darker spot within a lighter background. In the present study, combining the contrast polarity effect and the gaze-cueing effect, we examined whether the colour contrast between a black and a white square that suddenly flip on a computer screen can have a similar effect to that of gaze-cueing. The prediction was that participants would perceive the side to which the black square moved after the flipping as an attentional cue; therefore, when an object appeared on the side to which the black square had moved, reaction times would be shorter compared to when the object appeared on the opposite side. 
The results showed that reaction times in the two conditions did not differ significantly. Thus, the contrast polarity of a stimulus does not affect the cueing effect. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1679"},["text","Gaze cueing\r\nContrast polarity\r\nGaze perception"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1680"},["text","The experiment was conducted using a single within-subject design. The independent variable was the cue congruency, which consisted of two conditions: the object appeared either congruently or incongruently with the attentional cue. The dependent variable was the reaction times of the participants, which were measured in milliseconds (ms). \r\nProcedure. Each participant was tested individually in a quiet room at the library of Lancaster University. Participants were tested on different days and at different times, including morning and evening hours. The only people present in the room during the conduct of the experiment were the participant and the experimenter.\r\nIn the beginning, participants were asked to read the experiment instructions from the computer screen and they were also given clarifications, if needed, by the researcher. Afterwards, the experiment started and two squares, one black and one white, sharing one side were presented on the screen for half a second. The side that the squares shared was located at the centre of the screen; therefore, one square appeared on the left side of the screen and the other on the right side. Then the squares flipped and changed position, and the apparent motion of the two squares was the cue. 
One second after flipping, the squares disappeared and a picture of an object randomly appeared either on the left or on the right side of the screen for one more second. Afterwards, the object disappeared and the screen remained blank. \r\nThe task of the participants was to press the appropriate keyboard button as fast and as accurately as possible, depending on the side of the screen where the object appeared. So, they had to press the «Q» button on the keyboard when the object appeared on the left side of the screen or the «P» button when the object appeared on the right side of the screen. They were given one second to respond to the object appearance. The sequence of the trials was the same for every participant. Each one of the 6 objects appeared in total 30 times congruently with the cue and 30 times incongruently with it. Thus, the total number of trials for every participant was 360: 180 trials in which the objects appeared congruently with the cue and 180 in which they appeared incongruently with it. The experiment lasted 20 minutes for each participant and, at the end of every session, a message appeared on the screen which informed the participants that the experiment was over.\r\nThe prediction was that the side to which the black square moved after the flipping would be perceived as an attentional cue by the participants. Their gaze would be attracted to the cue and an effect similar to the gaze-cueing effect would appear. So, their reaction times would be shorter for the trials where the objects appeared on the same side as the attentional cue compared to the trials where the objects appeared on the opposite side. The independent variable was the cue congruency, which included two conditions: the congruent trials (when the object appeared on the same side as the cue) and the incongruent trials (when the object appeared on the opposite side of the cue). 
\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1681"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1682"},["text","data/Excel.csv"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1683"},["text","Sevastopoulou2018"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1684"},["text","Ellie Ball"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1685"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"1686"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1687"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1688"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1689"},["text","LA1 
4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1690"},["text","Dr. Eugenio Parise"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1691"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1692"},["text","Cognitive Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1693"},["text","25 Participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1694"},["text","t-test"]]]]]]]],["item",{"itemId":"70","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"24"},["src","https://www.johnntowse.com/LUSTRE/files/original/59a7edf93d70608679c4404a6a2cf427.pdf"],["authentication","19db76515b8d3de5a0a79a15c9b3551a"]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1658"},["text","Visual engagement with different animals"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1659"},["text","Rebecca Gregson"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1660"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1661"},["text","People treat animals differently depending on how they are dichotomized. The present study tested the consequences of dichotomization on our visual engagement with still images of different animals. Fifty-seven participants took part in two identical image visualization tasks, the first preceding a short empathy-inducing video and the second following it. We used eye-tracking to study dwell time percentage oriented towards the eyes of companion, farmed and endangered animals. Eye-directed visual engagement was greatest for companion animals in the first image visualization task. This bias in visual engagement towards companion animals was attenuated in the second image visualization task. We hypothesised that the empathy-inducing video would change gaze towards farmed animals, evidencing either increased attentional avoidance or increased engagement. Although mean averages suggest a slight increase in visual engagement following the video, this difference was not significant. Participants reported the highest levels of negative emotion regarding the farmed animal videos. 
Empathic gaze with farmed animals correlated positively with participants’ level of meat consumption restriction. The findings support several pre-registered hypotheses but disconfirm others, and are discussed in terms of the extension of empathic gaze to animals. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1662"},["text","Animals, dichotomization, eye-tracking, empathic gaze, guilt"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1663"},["text","Participants\r\nOur pre-registered recruitment strategy was to collect fifty participants with complete data. Fifty participants were recruited through (1) Lancaster University’s research participation system, SONA, or (2) poster advertisement, and were paid £3 for their involvement. Each participant saw 9 images, presented twice, each for 10 seconds, totaling 180 seconds of eye-tracking data. On first inspection of the data we were forced to exclude seven participants whose eyes had not been tracked for 50% of the experiment. To reach our pre-registered participant pool of 50 we recruited seven more participants, one of whom had to be excluded on the same grounds as before. Our final data set comprised 49 participants, 36 females and 13 males. Age ranged between 18 and 30 (M = 21.10, SD = 2.13). Participants reported a range of nationalities, including: American (n=1), British (n=28), Bulgarian (n=3), Chinese (n=3), Croatian (n=2), German (n=2), Hungarian (n=2), Indian (n=3), Indonesian (n=1), Latvian (n=1), Nigerian (n=1), Malaysian (n=1) and Slovakian (n=1). 
Participants’ dietary classifications were as follows: Meat lover (n=1), Omnivore (n=23), Semi-vegetarian (n=16), Pescatarian (n=3), Lacto- or Ovo-vegetarian (n=5), Strict vegetarian (n=0), Dietary vegan (n=0), and Lifestyle vegan (n=1).\r\nDesign\r\nThe experiment employed a 3x2 fully within-subjects design. The independent variables were animal category and time. The variable animal category had three levels: farmed animals (sheep, cow, pig), companion animals (dog, cat), and endangered wild animals (chimpanzee, tiger, koala), and was operationalized using still images. Our main research interest was the distinction between farmed and companion animals, given the marginalized status of farmed animals in society and the privileged status of companion animals. Endangered animals are vulnerable to human interference and confer some value due to their endangered status, but they are not actively used by humans as objects of consumption. For this reason, endangered animals were used as a control or comparison group. The variable time had two levels, pre- and post-video task. Participants took part in two IVTs (image visualization tasks), one before a video watching task and one after. Our main dependent variable was dwell time percentage on the eyes of the animal. This was recorded during the presentation of each of the nine images in both IVTs. At no other point in the experiment were eye-movements recorded. \r\nAdditional outcome measures. We recorded the participants’ emotional state immediately after the video watching task. Participants’ emotion ratings were transformed into numerical values as follows: Extremely positive (+3), Fairly positive (+2), Slightly positive (+1), Neutral (0), Slightly negative (-1), Fairly negative (-2) and Extremely negative (-3). As a result, more negative responses were represented by a more negative value. We asked participants whether (Yes/No) they contribute to the suffering and to the well-being of each animal category. 
Participants were also asked to state their agreement (Yes/No) with two statements, the first regarding their outrage having heard about the harm inflicted on animals, and the second about the animal’s capacity to suffer as being meaningfully similar to a human’s capacity to suffer. However, due to an experimenter error, these four measures were not recorded by the experiment-analysis system, and therefore cannot be discussed further.\r\nMaterials\r\nImages. In total we sourced nine images, three for each animal category in our design. We sourced images of three different species of animal to make up each target category. The companion animal category was the only exception to this rule. For this category of animal, we used two dog images (Siberian Husky and Staffordshire Bull Terrier) and one cat image. In our original companion animal category, we had considered using the image of a horse, but decided against this for two reasons. Firstly, the composition of the face was noticeably different in comparison to the other eight images. The horse’s face was longer, with its eyes positioned laterally. Secondly, the category into which horses fall (i.e. farmed or companion) is often blurred. Whilst cows pose similar facial composition issues to the horse, there is no question that cows are members of the farmed animal category. We decided that this justified the inclusion of the cow in the experiment, but we could not justify the use of the horse. The original source for each image is displayed in Appendix A.\r\nDue to limited financial resources we were restricted to the use of free, open-source images. This meant that the images contain some background colour and contextual inconsistencies. Nonetheless, all images share the same key features: a forward-facing gaze, minimal to no background noise and the absence of other animals. We adjusted some of the images so that the body of the animals is mostly cropped out. 
As a result, all nine images have a central focus on the animal’s face. We ensured that the images did not objectively indicate animal harm or confinement. Finally, all animals were adult so as to avoid the baby schema effect, the finding that infantile features promote caregiving behaviour (Archer & Monton, 2011; Borgi, Cogliati-Dezza, Brelsford, Meints & Cirulli, 2014; Fridlund & MacDonald, 1998). This was an important consideration as the baby schema effect has been linked to stronger caregiving motivations with animals (Piazza, McLatchie & Olesen, 2018).\r\nVideos. Three videos were selected to induce empathic concern with each of the three animal categories. Each video targeted a specific class of animal (companion, farmed, or endangered) and was presented prior to the second viewing session. All three videos outlined the harm inflicted upon the relevant animal category. They included emotional but not graphic content and were selected for their empathy-arousing nature. To reduce any variation caused by the different music styles of the videos, all audio was removed. Videos were trimmed to ensure that they had similar durations. Supplementary details of each video can be found in Appendix B. Additionally, each video can be accessed in the “Materials” section of our OSF file. \r\nStimuli presentation. All stimuli were presented on a Windows 10 Pro HP laptop with a 14-inch monitor, a 60 Hz refresh rate and an Intel® Core™ i7-4710MQ CPU. Stimuli ran semi-automatically. The experiment was built using Experiment Centre (Version 3.6, SensoMotoric Instruments).\r\nEye-tracking device. Eye movements were recorded monocularly at a frequency of 30 Hz using the REDn Scientific eye-tracking device (SensoMotoric Instruments). Gaze was calibrated using a 5-point method and a calibration area of 1920 x 1080. We used a centered black cross for the fixation points during the initial calibration and throughout the experiment. 
These were in Arial font, size 72. The experiment was built to measure dwell time percentage during the IVTs only. \r\nDiet. Diet was assessed using an adapted version of the 5-item dietary practice scale used by Piazza, Ruby, Loughnan et al. (2015). We expanded the original scale to include 8 dietary practices. These included “Meat lover,” “Omnivore,” “Semi-vegetarian,” “Pescatarian,” “Lacto- or Ovo-vegetarian,” “Strict vegetarian,” “Dietary vegan,” and “Lifestyle vegan”. Definitions for each category are provided in Appendix C. \r\nProcedure\r\nPreliminary procedures. Participants were tested individually. Having been welcomed into the lab, each participant received an information sheet and consent form. All participants who arrived at the lab gave their consent. Each participant was seated on a stationary chair at a desk where the equipment stood. The experimenter explained that they would load up the experiment and leave them to complete it in privacy. The experiment ran an initial calibration of the eye before moving on to the task information. Task information was presented across three separate screens which outlined for the participant what would be required of them (See Appendix D). \r\nWarm-up. Participants took part in two identical IVTs. The first was framed as a warm-up. These warm-up trials ran automatically and did not require any participant action. Following task information, participants saw a screen which read “Warm-up” for 4000ms. The animal category was then announced (e.g. “Farmed Animals,” “Companion Animals” or “Endangered Animals”) and remained on screen for 4000ms. A centered fixation point appeared for 500ms before the first category animal image appeared for 10,000ms. It was during each 10,000ms image presentation that eye-movements were recorded. This same fixation point/image presentation routine was repeated three times over to cover all three images in each category. 
The order in which each animal category was presented was randomized across participants. Having completed the IVT for each animal category, participants were presented with a screen instructing them that the warm-up was now complete. This instruction screen was advanced manually by the participant. \r\nVideo watching task. Following the first IVT, participants took part in the video watching task. The animal category was first announced and remained on screen for 4000ms. The appropriate video then played and was concluded with a blank screen lasting 3000ms. Participants were then made aware that the video had finished. Having manually moved the experiment along, the participant was next asked to indicate their current emotional state. They read: “How positive or negative do you feel right now?” and selected their response via mouse-click on a 7-point scale with the following range: “Really negative,” “Fairly negative,” “Slightly negative,” “Neutral,” “Slightly positive,” “Fairly positive” and “Really positive”. Again, this screen was manually advanced. The participant was next presented with the statement “I contribute to the suffering of Farmed/ Companion/ Endangered animals” and was asked to indicate their response using the “Y” (Yes) and “N” (No) keys on the keyboard before pressing the space bar to advance. “I contribute to the well-being of Farmed/Companion/Endangered animals” was presented on the next screen and participants indicated their response as before. Responses to these Y/N questions failed to record due to a programming error, and therefore will not be discussed further. \r\nThe second IVT. As in the first IVT, participants saw a centered fixation point (500ms) followed by the first category animal image (10,000ms). Again, the REDn was programmed to record eye-movements during each of the 10,000ms image presentations. 
After each animal image the participant was then presented with the statement: “Thinking about how ___ (e.g. Cows) are slaughtered for their meat makes me feel outraged” and was again asked to indicate their response using the “Y” (Yes) and “N” (No) keys on the keyboard. This question was tailored to each animal category and target animal (see Appendix E for a list of the statements used). Next the participant read: “___ (e.g., Cows) possess a capacity to suffer that is meaningfully similar to humans” and was asked to indicate their response (Y/N) as before. This procedure was repeated three times over, once for each animal target. Due to a programming error, responses to these Y/N questions were not recorded, and therefore they will not be discussed further. The entire procedure from the beginning of the video watching task to the end of the second IVT was repeated for each animal category, the order of which was randomized for each participant. See Appendix F for a visual representation of the experiment flow."]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1664"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1665"},["text","SPSS data "]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1666"},["text","Gregson2018"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1667"},["text","Rebecca James"]]]],["element",{"elementId":"44"},["name","Language"],["description","A 
language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1668"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1669"},["text","SPSS.sav"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1695"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1670"},["text","Dr. Jared Piazza"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1671"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1672"},["text","Social"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1673"},["text","49 participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1674"},["text","ANOVA, correlation, 
t-test"]]]]]]]],["item",{"itemId":"69","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"23"},["src","https://www.johnntowse.com/LUSTRE/files/original/4517b206e143941069f6f7a9faebec5a.pdf"],["authentication","66b07a82533d2587067e7f9f510521af"]]],["itemType",{"itemTypeId":"14"},["name","Dataset"],["description","Data encoded in a defined structure. Examples include lists, tables, and databases. A dataset may be useful for direct machine processing."]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1638"},["text","How does metaphorical language affect individuals’ aesthetic perception in modern poetry: In the life span view"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1639"},["text","Qishan Liao"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1640"},["text","2015"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1641"},["text","This study examined the relationship between the degree of metaphoricity and beauty perception as well as between cognitive load and beauty perception, by controlling for other possibly confounding variables such as familiarity and imageability. 
While previous research has shown that the variables of metaphoricity, familiarity and imageability influence beauty perception, no study has investigated how the degree of metaphoricity and cognitive load influence beauty perception in poetic sentence reading. Therefore, this study aimed to bridge this gap. A beauty rating scale and a keypress experiment were conducted, involving 22 young adults and 18 elderly adults. Because of the collinearity among metaphoricity, familiarity and imageability, a new variable called interpretation of metaphors was used to explain the hypotheses in the present study. Rather than cognitive load, interpretability was the predictor of beauty perception in poetic sentence reading. Young adults’ beauty perception peaked at novel metaphors, while elderly adults considered dead metaphors the most beautiful stimuli. This study suggests that poetic sentences are generally perceived as more beautiful by young adults, rather than elderly adults, when their degree of interpretability is lower. These findings provide an initial implication for future longitudinal or neuroaesthetic studies to further the understanding of the relationship between metaphorical language and beauty perception."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1647"},["text","Beauty perception\r\nMetaphoricity\r\nFamiliarity\r\nImageability"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1648"},["text","This study was approved by the Psychology ethics committee at Lancaster University on 24/04/2018. 
This study was also preregistered on the ‘AsPredicted’ website (number 11034).\r\nParticipants.\r\nThe participants were 22 young adults between the ages of 18 and 30, and 20 elderly adults between the ages of 55 and 75. They were recruited via the SONA system and social media (e.g., a Facebook advert). None of the young participants had a history of any learning disability (e.g., dyslexia), and all were native English speakers. However, two elderly participants confirmed that they had a history of dyslexia, so they were excluded. Finally, 22 young adults with a mean age of 21.64 years (SD=3.05) and 18 elderly adults with a mean age of 63.22 years (SD=6.07) participated. Participants were required to give informed consent via an online consent form before completing the online survey, and they filled in a paper consent form before the keypress experiment. All participants received four pounds after finishing all experiments.\r\nMaterials.\r\nStimuli. A bank of 92 stimuli was generated by a previous student supervised by Dr Francesca Citron. Some sentences were excerpted from modern poetry. The remaining sentences were created by this student, inspired by other poetic works. Novel sentences were created to reduce the bias caused by participants being familiar with some stimuli. All stimuli were divided into five categories, and the degree of metaphoricity of these categories was increasing. The first category is the literal expression, which has a concrete, pragmatic meaning that is usually equal to its literal meaning. It is not part of metaphorical language. The second category is dead metaphors, a kind of metaphor that loses its imaginative space because of frequent use (Punter, 2007). The third is the conventional metaphor, which is commonly used in everyday life and is highly related to the specific culture. 
The fourth is novel metaphors, which are uncommon in everyday life and challenging for the layperson to understand. The last category is extremely novel metaphors, which are the most abstract and challenging; the semantic overlap between subject and predicate in these sentences is less obvious than in the other categories. Considering the potential fatigue of the elderly participants, the researcher randomly selected 50 stimuli from the original stimulus bank as experimental materials (see Appendix A), ten sentences for each category. All stimuli were given a specific code for identification in the analysis procedure. The creator of the stimulus bank had invited 85 participants to rate the degree of metaphoricity of each stimulus on a 7-point Likert scale (1 for the minimum and 7 for the maximum). The results showed that the degree of metaphoricity increased across categories as originally designed (see Figure 1).\r\n \r\nFigure 1. Scatterplot showing the trend of metaphoricity ratings of the stimuli. The categories correspond to stimulus numbers as follows: literal sentences (1-14), dead metaphors (15-28), conventional metaphors (29-53), novel metaphors (54-75), and extremely novel metaphors (76-92).\r\nApart from metaphoricity, these stimuli had been rated by the same group of participants on multiple sentence-level characteristics, including familiarity and imageability. Briefly, ratings were collected by asking participants to rate \"how familiar is this sentence to you?\" and \"how easy is it to imagine this sentence?\" on two separate 7-point Likert scales. These raw data were used in the analyses for this study.\r\nSurvey. The beauty rating scale was designed as a 7-point Likert scale using the online survey software ‘Qualtrics’. The survey included a digital version of the information sheet, consent form and debrief form, and it also collected basic demographic information such as age, biological sex and reading frequency (Appendix B). 
More importantly, the survey included questions checking whether participants were native British English speakers and whether they had a history of learning disability (e.g., dyslexia), since these factors can influence beauty ratings. In the formal test, the 50 stimuli were presented to participants in random order through Qualtrics. Participants saw each poetic sentence together with the question ‘How beautiful is this sentence to you?’ on the page, and responded by rating each sentence from 1 to 7 (1 for not at all beautiful and 7 for extremely beautiful). \r\nExperiment. The researcher created a keypress experiment in the ‘Presentation’ neurobehavioural software. The materials were identical to those in the online survey, with four additional filler sentences, five odd sentences, and four questions related to the poetic stimuli. All new stimuli were generated by the researcher but were not analysed, given their function (Appendix A). Filler sentences allowed participants to practise giving their responses by keypress. Odd sentences were nonsensical and were included to discourage mechanically repeated responses. Similarly, some poetic stimuli were followed by a question checking whether participants were answering seriously. To ensure the randomness of the experimental materials, six versions of the experiment were created. Participants were asked to read one sentence at a time and to evaluate whether it was sensible by pressing a key ("F" for "Yes" and "J" for "No"). To avoid habitual responding by participants familiar with traditional keypress experiments, six corresponding flipped versions of the experiment were also created. Overall, there were 12 versions of the experiment, randomly allocated to participants. 
Participants completed the experiment on the researcher’s computer, where the response and reaction time for each sentence were recorded by Presentation automatically and anonymously. \r\n\r\n\r\nProcedure. \r\n      Questionnaire. When participants agreed to take part in the project, the researcher sent them an anonymous questionnaire link by e-mail. The questionnaire could be completed on any electronic device, and participants could pause it at any time if they needed a break. After clicking the link, participants read the information sheet and the electronic consent form in turn, to ensure that they understood the necessary information about the questionnaire and gave their consent. They then answered screening questions asking whether they were native speakers and whether they had a previous or current learning disability; these questions confirmed that participants were eligible. Subsequently, demographic information was requested, and all answers were kept confidential.\r\nNext, brief instructions explained the basic operation of the questionnaire and some important terms (e.g., beauty). Then, the 50 poetic stimuli, which varied in degree of metaphoricity, were presented randomly, each followed by the question: How beautiful is this sentence to you? Participants gave their responses on the 7-point Likert scale (1 for not at all beautiful and 7 for extremely beautiful). All answers were automatically recorded by Qualtrics. \r\nAfter completing the ratings, participants read the debrief sheet explaining the purpose and design of the questionnaire. References for the questionnaire and the experimenter’s contact information were also given. 
When participants completed the questionnaire, they received an e-mail from the experimenter to make an appointment for the keypress experiment, usually one or two days after completing the questionnaire.\r\nKeypress experiment. All participants met the experimenter in person to complete the keypress experiment. Before the experiment began, participants signed the paper version of the consent form. The experimenter then verbally explained the operation of the experiment, randomly selected one of the twelve versions, and gave the participant a unique code. Participants were asked to evaluate whether each sentence presented on the screen was sensible: if they thought it was, they pressed the key representing ‘Yes’, and otherwise the key representing ‘No’. The ‘Yes/No’ questions were answered in the same way. Once participants understood the procedure, they pressed the F or J key to start the experiment. Before each poetic sentence was presented, a white fixation cross appeared at the centre of the black screen for 1000 ms. The stimulus was then presented on the screen for 8700 ms. Participants usually gave their responses during this period, and their reaction time was automatically recorded by the software. Each stimulus was followed by a blank screen lasting 300 ms with a white jittered fixation cross before the next sentence/question was presented. If a participant responded during this interval, the reaction time for that stimulus was recorded as the time elapsed in this interval plus 8700 ms. 
All stimuli were presented in white 12-point font on a black background.\r\n"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1649"},["text","Lauren McCann"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1650"},["text","Data"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1651"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1652"},["text","data/excel.xlsx"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1653"},["text","Liao2015"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1654"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"1655"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1656"},["text","English"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is 
relevant"],["elementTextContainer",["elementText",{"elementTextId":"1657"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1642"},["text","Francesca Citron"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1643"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1644"},["text","Beauty Perception"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1645"},["text","22 young adults and 20 elderly adults"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1646"},["text","Independent T-test\r\nPearson's correlation\r\nPartial correlation\r\nHierarchical regression\r\nSimple regression"]]]]]]]],["item",{"itemId":"68","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"21"},["src","https://www.johnntowse.com/LUSTRE/files/original/6e55fa69336c955afd8161d2c2f4951f.doc"],["authentication","4f750621696649cd87b16387c2a59e72"]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1620"},["text","Neural response to infant-directed speech: gamma band oscillatory activity in 4-month-old infants "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1621"},["text","Marina Ciampolini"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1622"},["text","2019"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1623"},["text","Infant-directed speech is an ostensive signal preferred by infants over adult-directed speech. We studied infants’ neural response to auditory stimuli by measuring gamma band oscillatory activity over the frontal area of the brain in response to ostensive infant-directed speech and non-ostensive adult-directed speech. Two groups of 4-month-old infants were presented with the same auditory stimuli, but the two groups differed in terms of visual stimuli (inverted vs. upright faces), as our study was part of a broader research project. We investigated only the auditory portion of the trial. We found that, in the inverted face group, activation to ostensive infant-directed speech was significantly enhanced, while in the upright group this effect was not found. These findings support the use of gamma band oscillations in assessing the basis of social communication and establish infants’ early specialization in understanding communicative signals directed to them. 
"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1624"},["text","Infant-directed speech; neural response; EEG; gamma oscillation"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1625"},["text","Experimental Design \r\nWe used data that had already been collected for a broader study, designed to observe the influence of auditory stimuli on face processing. In the main experiment, a total of 36 four-month-old infants were divided into two groups that differed in the visual stimulus presented at the end of each trial. Immediately after an auditory stimulus in IDS or ADS, the first group was presented with inverted faces, while the second group was exposed to upright faces. Participants in both groups were thus exposed to the same auditory stimuli, just before being presented with the visual stimuli, which differed depending on the group. In this research we focused only on the auditory portion of the trial, where participants were exposed to IDS or ADS (Fig. 1).  \r\n \r\n \r\nFigure 1. Representation of the complete trial presented to infants. In every trial, 18 infants were presented with upright faces, while the other 18 were presented with inverted faces. However, each infant was exposed to auditory stimuli in ADS or IDS, regardless of the visual stimulus. The green rectangle shows the portion of the trial analysed in this dissertation. \r\nParticipants \r\nInfants were recruited from the Lancaster Babylab database. All were free of any known neurological, ocular or auditory abnormality and met the screening criteria of normal birth, born full term (gestational age >37 weeks), in the normal weight range (>2500g) and with an Apgar score of at least 8 at five minutes after the birth.  
\r\nIn our study we focused on infants’ neural response to the auditory stimuli (Fig. 1). However, the distinction between the two groups was preserved in order to observe possible differences between them. The group exposed to inverted faces included 18 infants (5 females, age range 117 to 161 days, M= 135.61 days). Thirteen additional infants were excluded owing to an insufficient number of artifact-free segments (n=10), sleep (n=1), and technical issues during the experiment (n=2). The group presented with upright faces included 18 infants (5 females, age range 115 to 171 days, M= 145.22 days). Seventeen additional infants were excluded because of an insufficient number of artifact-free segments (n=14) and technical issues during the experiment (n=3). The final dataset (N=36) included only infants who provided artifact-free EEG recordings in at least 10 trials within each experimental condition.   \r\nStimuli \r\nThe auditory stimulus was the word “Hello” pronounced by a female voice using two different intonations: either IDS or ADS. The two words were recorded and edited with Audacity (v. 1.2.5) and Praat (v. 5.1) at 32-bit resolution and a sampling rate of 48 kHz. Both words were 850 ms long. The IDS stimulus had an average volume intensity of 61.86 dB, while the ADS stimulus had an average volume intensity of 61.50 dB.  \r\nApparatus \r\nInfants’ behaviour was video recorded for the entire duration of the test by a remote-control video camera placed behind the monitor. A pair of computer speakers situated behind the monitor were used for the presentation of the auditory stimuli. The infants’ EEG was recorded at a sampling rate of 500 Hz using a 124-channel Hydrocel Geodesic Sensor Net (Electrical Geodesic Inc., Eugene, OR, USA). \r\nProcedure \r\nInfants sat on their parent’s lap at a distance of 70 cm from a computer monitor. 
Each trial started with a dynamic fixation grabber at the centre of the monitor, for a duration of 2150 ms. Then the attention grabber stopped moving and the auditory stimulus (in IDS or ADS) was played through loudspeakers positioned behind the monitor, lasting 850 ms. The attention grabber remained still for an interval randomly varying between 200 and 400 ms. Then the grabber disappeared and the visual stimulus was presented for 1000 ms. A blank screen lasting between 1000 and 1200 ms served as the inter-trial interval between successive trials. Auditory stimuli in IDS or ADS were presented in a random order with the following constraint: no more than three successive trials of the same kind in a row. The trials were presented as long as the infants were willing to look at them. When they became fussy, the experimenters played a dynamic spiral together with an attractive sound. The session ended when the infant could no longer be attracted to the screen.  \r\nEEG measurement and data analysis \r\nThe electrical potential was band-pass filtered between 0.3 and 100 Hz. The filtered EEG was then segmented into epochs including 600 ms before stimulus onset and 1400 ms following stimulus onset for each trial. EEG epochs containing artifacts caused by body and eye movement were automatically eliminated whenever the average amplitude of an 80 ms gliding window exceeded 55 µV at horizontal Electrooculogram (EOG) channels or 150 µV at any other channel. In addition to automatic rejection, each individual epoch was visually inspected for further epoch selection. When <10% of the channels contained artifacts, the contaminated channels were replaced by means of spline interpolation, while segments in which >10% of the channels included artifacts were rejected. Infants exposed to upright faces contributed on average 17.5 artifact-free trials to the IDS condition (range: 8 to 36) and 18.34 trials to the ADS condition (range: 9 to 39). 
Infants exposed to inverted faces contributed on average 20.89 artifact-free trials to the IDS condition (range: 8 to 38) and 20.78 trials to the ADS condition (range: 10 to 39).  \r\nIn the artifact-free segments, induced gamma-band oscillations were uncovered through time-frequency analysis. These segments were imported into Matlab® and re-referenced to the average reference through the free toolbox EEGLAB (v. 9.0.5.6b). The custom-made scripts collection WTools (available on request) was used to compute complex Morlet wavelets for the frequencies 10-90 Hz with 1 Hz resolution. A continuous wavelet transformation of single trials of EEG in each channel was performed on 2000 ms long segments (600 ms pre-stimulus onset and 1400 ms after stimulus onset). The transformed segments were averaged for each condition separately. To remove the distortion in the time-frequency decomposition caused by convolution with the wavelets, 400 ms at each edge of the epochs were chopped, leaving a segment from -200 to 1000 ms around the auditory event. The average amplitude of the 200 ms pre-stimulus window was used as the baseline and was subtracted from the whole segment at each frequency. \r\nBased on prior findings (Parise & Csibra, 2013), we selected the scalp location over the forehead (the average of channels 3, 9, 10, 15, 16, 18, 22, 23, corresponding to Fp2, Fpz, Fp1, respectively, Figure 2), a time window from 200 ms to 600 ms, and a 25 to 45 Hz frequency window.  \r\nTo verify that there were no significant differences in the number of accepted segments for each participant, a t-test comparing the average number of accepted segments in each condition (speech) was performed, and the same procedure was repeated for each group (face orientation). The mean amplitude was assessed by a repeated-measures ANOVA with Speech (IDS x ADS) as a within-subject factor, and Group (upright x inverted) as a between-subject factor. 
Paired-Sample t-tests were used for post hoc comparisons between the induced gamma-band oscillatory activity in response to IDS and to ADS. One-sample t-tests against 0 were used to assess whether the analysed gamma-band oscillatory activity differed significantly from the baseline.  \r\n \r\nFigure 2. Sensor layout for the Electrical Geodesics Inc. (EGI) 124-channel hydrocel sensor net, showing the locations of the electrodes under study (circled in green), averaged for measurement of the oscillatory activation.  "]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1626"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1627"},["text","Excel files; Matlab files; SPSS files. "]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1628"},["text","Ciampolini2019"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1629"},["text","John Towse"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1630"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1631"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which 
the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1632"},["text","LA1 4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1633"},["text","Eugenio Parise "]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1634"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1635"},["text","Cognitive; developmental "]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1636"},["text","36 infants "]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1637"},["text","Anova; t-tests "]]]]]]]],["item",{"itemId":"64","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"51"},["src","https://www.johnntowse.com/LUSTRE/files/original/d2dc5985e57b07e35905e64acb47b7b4.doc"],["authentication","3370ed59d929ffce6ca5d977ec62bb7f"]],["file",{"fileId":"52"},["src","https://www.johnntowse.com/LUSTRE/files/original/99408598e35363745a56c58e81430f29.doc"],["authentication","628eb1ba4a73e232e13333647109334e"]]],["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaire(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1558"},["text","Assessing Inference Making in Listening Comprehension in Children in Special Education"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1559"},["text","Rebecca Hindle"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1560"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1561"},["text","Successful listening comprehension involves making accurate inferences to interpret the meaning of a story. We assessed inference making in listening comprehension of children in special education in years 4, 5, and 6 (n=12). 
Children listened to short stories and answered questions to assess local and global coherence inference after each story. Analysis of variance (ANOVA) revealed no significant main effects for children’s first responses for presentation type (whole, segmented) or inference type (local, global). However, after children had received prompts, a significant main effect of inference type emerged, with children performing better on global than local coherence inferences. Correlational analysis revealed no significant correlations between IQ and inference type, but the correlation between verbal IQ and inference type was stronger than that between non-verbal IQ and inference type. An independent t-test revealed no significant effect of diagnostic group on IQ or inference type, but children in the Autism group performed better than children in the MLD group on both IQ measures, and the MLD group scored better on both inference types. We conclude that inference type is important to consider when setting and asking comprehension questions, along with the use of prompts to reveal and assess children’s full comprehension ability. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1562"},["text","Developmental Disorders"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1563"},["text","\r\nParticipants\r\n\tThe participants were 12 children from years 4, 5 and 6, aged between 8 and 11 (N=12, 3 girls and 9 boys, M=9.67, SD=0.99), from a special needs school in the North West of England. All children had a statement of special educational needs, including: Autism, Foetal Alcohol Syndrome, Moderate Learning Disability, Noonan Syndrome, Foetal Valproate Syndrome, and Speech and Language Impairment. 
All children were verbal with English as their first language. Consent was provided by parents/carers, the Head of School and each class teacher. \r\nMeasures\r\n\tIQ Task\r\n\tThe WISC IV was used to determine children’s IQ levels. Children completed one verbal and one non-verbal measure of IQ. The verbal measure was a vocabulary task: children were first shown pictures of items and asked what each was, progressing to words and being asked what each meant. Children could score either 0, 1 or 2 points depending on the accuracy of their definitions according to the WISC IV manual. There were 36 items, increasing in difficulty, and testing stopped when children answered 5 questions incorrectly in a row. The non-verbal measure was a block design task, which comprised 14 items starting with simple designs and progressing to more difficult ones. Children had to copy patterns either demonstrated by the experimenter for the first 3 patterns or presented in picture format for the following items. There were time constraints for each pattern, starting at 30s and increasing to 120s for the more difficult items. Once children had failed to complete 3 patterns in a row, testing ended.\r\nListening comprehension task\r\n\tThe listening comprehension task was taken from Freed and Cain (2016), devised by the Language and Reading Research Consortium (LARRC) (2015). The full set of materials comprised 6 short stories; however, only 4 stories were used for the current study: Grandma’s Birthday, The Game, New Pet and A Family Day Out. The story topics were all appropriate to this age group. There were 8 questions paired with each story, assessing both local and global coherence inferences, 4 of each, with questions asked either throughout the stories (segmented format) or at the end of the stories (whole format). In 2 of the sessions, stories were presented in a whole format and in the other 2 sessions, the stories were presented in a segmented format. 
All the stories were pre-recorded by Freed and Cain (2016) and delivered as a PowerPoint presentation on the researcher’s laptop to ensure consistent delivery regarding pace and word emphasis. All stories were available in a whole and a segmented format. The format in which children listened to the stories was counterbalanced based on children’s IQ levels from low to high.  \r\n•\tWhole story format. Children listened to the full story and at the end were asked 8 comprehension questions. The delivery of each whole-format story followed the same format. \r\n•\tSegmented story format. Children listened to the story in 5 segments. After each segment the child was asked either 1 or 2 questions, with 8 questions in total. The delivery of each segmented story followed the same format. \r\n\tThe average story length was 157 words. No pictures were included in the PowerPoint on which the story recordings were presented; this was to avoid children using pictures to help them answer the questions. Children were provided with verbal prompts if incomplete answers were given, to direct them to the correct answer. If children were still unable to answer, full knowledge checks were provided (see Table 1). All prompts were pre-written to ensure all children received the same level of prompting.\r\nProcedure\r\n\tPre-test\r\n\tThe IQ assessments were administered individually in a quiet room in two separate sessions. Each session lasted between 10 and 15 minutes depending on how many questions/trials were completed. Children first completed the vocabulary test, then in a separate session completed the non-verbal block design measure. \r\nMain assessment \r\n\tChildren were presented with 4 short stories on 4 separate occasions, each story paired with 8 questions. Each story had to be completed in a separate session owing to the attention and engagement levels of the children being tested. 
Each session lasted approximately 10 minutes depending on children’s accuracy and speed of answering. The procedure was explained to the children at the beginning of each session using a script to ensure consistency. They were informed that they would either be asked questions throughout the story or at the end of the story. \r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1564"},["text","Lancaster University"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1565"},["text","Open"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1566"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1567"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1568"},["text","LA1 4YF"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2133"},["text","Data/SPSS.sav"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2134"},["text","Hindle2018"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2135"},["text","Ellie Ball"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2136"},["text","None"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2137"},["text","Professor Kate Cain"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2138"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2139"},["text","Developmental Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2140"},["text","12 Participants (9 boys and 3 girls- aged between 4-11)"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2141"},["text","ANOVA\r\nt-test\r\nCorrelation"]]]]]]]],["item",{"itemId":"63","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"53"},["src","https://www.johnntowse.com/LUSTRE/files/original/6706f99fb62f6749b7c0d33bae37059f.pdf"],["authentication","38f45aae780ada036b447d77607c2a80"]]],["collection",{"collectionId":"6"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"187"},["text","RT & Accuracy"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"188"},["text","Projects that focus on behavioural data, using chronometric analysis and accuracy analysis to draw inferences about psychological processes"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1552"},["text","Investigating the effects of dimensionality and referent variability on word learning in autism and typical development.\r\n"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1553"},["text","Fiona Smith\r\n"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1554"},["text","2015"]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1555"},["text","Dimensionality, referent variability, word learning."]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the 
resource"],["elementTextContainer",["elementText",{"elementTextId":"2142"},["text","The ability to learn words from pictures could give children another forum to develop their lexical understanding and vocabulary. This is particularly important for children with developmental disorders such as autism. This research investigated how word learning processes (referent selection, retention and generalisation) in autism and typical development are influenced by learning from pictures and objects, including single and multiple exemplars of symbols. The participants in this study were 16 typically developing (TD) children (M age = 3.68; 7 males, 43.75%, and 8 females, 56.25%) and 16 children diagnosed with ASD (M age = 9.37; 8 males, 50%, and 8 females, 50%). Participants looked at pictorial and object referents, to differentiate whether there was a preference in word acquisition and retention depending on the structure of the stimuli. It was expected that word referent selection, retention and generalisation would be more accurate in the object condition compared to the picture condition, as participants would not be relying on picture–word associations. Participants also examined words paired with either single or multiple exemplars of referents, to determine whether multiple exemplars of shape-matched referents would promote shape-based generalisation in the ASD group, which has been shown to be impaired (Hartley & Allen, 2014). It was expected that retention would be superior when learning directly from objects in both the ASD and TD groups, which was found in this research. We also anticipated that labelling from multiple exemplars, rather than single exemplars, may scaffold more consistent shape-based generalisation. We found that referent selection was more accurate in both groups in the multiple exemplar condition compared to the single exemplar condition. 
The implications of this research are that it can further our understanding of how symbols or objects benefit word learning, retention and generalisation in ASD and TD children, and of whether there are any cognitive differences between the ASD and TD groups when it comes to word learning processes. "]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"2143"},["text","Participants\r\nThe participants in this study were 16 minimally verbal children with ASD (M age = 10.42 years, SD = 3.29) and 16 typically developing children (M age = 3.64, SD = 1.64).\r\nChildren with ASD were recruited from the specialist schools Dee Banks School in Chester and Hinderton School in Ellesmere Port. Typically developing children were recruited via opportunity sampling, through advertisement on the social media platform Facebook.\r\nAll the children with ASD received their diagnosis from a qualified clinical or educational psychologist. This was obtained using standardised instruments (i.e. the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview–Revised; Lord, Rutter & Le Couteur, 1994; Lord, Rutter, DiLavore & Risi, 2002) and expert judgment. Clinical diagnosis was confirmed for children with autism using the Childhood Autism Rating Scale (CARS; Schopler, Van Bourgondien, Wellman & Love, 2010), which was completed by a class teacher (raw score M = 37.26, range = 27–53.5). The children with ASD were tested for receptive vocabulary using the British Picture Vocabulary Scale (BPVS; Dunn, Dunn, Whetton, & Burley, 1997), which was conducted by the experimenter. 
Mean receptive vocabulary of children with autism was 2.84 years (range = 2 years 4 months – 6 years).\r\nSome of the children diagnosed with ASD who participated in this study were current PECS users with impaired expressive language skills. Most of the children with ASD who participated in this study were functionally non-verbal (no spoken words), although some produced speech of 1–2 words in length (much of which was echolalia) and one child could speak some short phrases over three words in length. The sample was therefore linguistically representative of children with ASD who receive and may benefit from picture-based communication interventions. Participants had 1–6 years’ experience of using PECS.\r\nWhen recruiting the children diagnosed with ASD, the experimenter emailed specialist schools, explaining the study and asking whether the school would be interested in participating. When recruiting the TD children, advertisements were placed on social media platforms such as Facebook (see Appendix A). The information poster instructed parents to contact the experimenter via email if they were interested in their child participating.\r\nThe study was approved by the Lancaster University Ethics Committee and informed consent was obtained from parents before children were included in the study. See Appendix B for the completed and approved Lancaster University Ethics Committee form.\r\nMaterials\r\nFor the warm-up test trials in all tests, participants were shown three familiar objects (for example, dog, bus, chair); these were small laminated pictorial symbols. In the picture, single and multiple exemplar conditions, participants were shown 12 laminated pictorial symbols, 8 familiar and 4 novel. Participants saw each familiar symbol once and each novel symbol twice. 
Participants saw the same named novel symbols in the retention test trial; in this trial, the named novel objects were shown to each participant twice. In the generalisation test trial, participants saw shape matches (the same object or picture; for example, both would be paperclips) to the named novel objects from the referent selection and retention test trials; however, these were different colour variations (for example, a red and a blue paperclip). In the object condition, participants followed the same test layout and number of referents as the other conditions, the difference being that the stimuli were actual objects rather than pictorial symbols. The words for the familiar stimuli were gathered using the CDI database (Fenson, Dale, Reznick, Bates, Thal, Pethick & Stiles, 1994) and were appropriately age-matched to the non-verbal age range of the ASD children and the chronological age of the TD children (see Appendix C). The words for the novel stimuli were picked from the NOUN database (Horst & Hout, 2016); these were all picked to be two syllables long and to have different phonological sounds per set. The novel words in the picture condition were Gloop, Virdex, Akar and Teebu. The novel words for the object condition were Fiffin, Tranzer, Brisp and Pentants. For the single exemplar condition, the novel words were Tulver, Kaki, Jefa and Blicket. For the multiple exemplar condition, the novel words were Zepper, Toma, Modi and Chatten (see Appendix D).\r\nObjects were obtained through the equipment assistant at Lancaster University and purchased through Amazon. Appendix E is an example warm-up selection trial which a participant saw, and the response form completed by the experimenter. Appendix F is an example of a referent selection trial which a participant saw, and the response form completed by the experimenter. 
Appendix G is an example of a retention test trial which a participant saw, and the response form completed by the experimenter. Appendix H is an example of the generalisation test trial which a participant saw, and the response form completed by the experimenter. All test trials were pseudorandomised per participant, per condition and per trial. Therefore, while all participants saw the same number of familiar and novel objects or pictures, and each picture or object had the same name per shape-matched object, the items appeared in a different order. A different response form was therefore required per participant, to record the change in referent location and set order.\r\nProcedure\r\nPrior to the children participating, the parents received the information sheet (see Appendix I) and the consent form (see Appendix J). On the last day of experiments the experimenter brought the debrief forms (see Appendix K).\r\nParticipants were tested individually, in their schools for the children with ASD or in their own homes for the TD children, and were always accompanied by a familiar adult, teaching assistant or parent. The participants were seated at a table opposite the experimenter; the materials were placed within reaching distance of the participants. Children were reinforced throughout the session, although correct performance was only reinforced during the warm-up trial. The first test examined the picture condition vs the object condition; the second test examined single vs multiple exemplars. The tasks were between-participants, as they compared the results of the TD group to those of the ASD group; however, some within-participants analysis was carried out to determine accuracy between test conditions (e.g. picture vs object). 
Each task always consisted of a warm-up stage, referent selection trial, distractor familiarisation trial, retention test trial and generalisation test trial. The test trials were based on those used by Horst and Samuelson (2008), with the extension of the generalisation trial, which was not included in the Horst and Samuelson (2008) study.\r\nPicture Condition vs Object Condition Tests\r\nWarm Up Stage\r\nIn the object condition, participants were shown three sets of three familiar objects; in the picture condition, participants were shown three sets of three familiar pictures. Participants were asked to identify each in turn; the warm-up objects or pictures were pseudorandomised per participant, changing the order and location per participant per condition. The pictures or objects were removed and reordered after each set, and the participant’s responses recorded.\r\nReferent Selection Trial\r\nParticipants were shown four sets of stimuli (pictures for the picture condition and objects for the object condition); the sets of stimuli were different per condition, each consisting of two familiar items and one novel item. Each set was shown four times: the novel referent was requested twice and each familiar referent once. The order and location of the sets were pseudorandomised for each participant, the novel object was never in the same location twice consecutively, and a novel or familiar object or picture was never requested more than twice consecutively. Sets were not presented twice in a row.\r\nDistractor Familiarisation\r\nTo control for novelty or familiarity preferences in the subsequent test trials, children were shown all the novel objects that were used in the generalisation test trials. The new novel objects were a different colour variation of a previously seen novel object, which was named in the referent selection trial. 
Novel objects or pictures were shown against previously named novel objects or pictures that were not a shape or colour match to the new novel object; one previously named novel object was shown against one new novel object or picture. The objects or pictures were placed in front of the participant; participants were not asked to identify them, just to “look”.\r\nRetention Test Trial\r\nRetention trials assessed children’s memory of the newly learned word–referent pairings. Participants were shown four sets; each set was shown twice, with the target object requested twice. The sets were made up of three named novel objects. Names were picked from the NOUN database (Horst & Hout, 2016), each made up of two syllables; objects or pictures were picked on the basis that the items would be novel to participants, for instance gym or plumbing equipment. Objects and pictures were not shape or colour matches to each other and had been shown in the referent selection test trial. The order and location of each object or picture per set were pseudorandomised per participant per trial. The novel object was never in the same location twice consecutively, and a novel or familiar object or picture was never requested more than twice consecutively. Sets were not presented twice in a row.\r\nGeneralisation Test Trial\r\nGeneralisation trials assessed children’s extension of labels to new items. Participants were shown four sets, each consisting of three objects or pictures; each set was shown twice, with the target object requested twice. The objects or pictures in the sets were shape matches to the objects or pictures shown in the referent selection and retention trials, but in different colour variations. 
All the shape-matched objects or pictures were also colour matched to a non-shape-matched object from the previous conditions. The order and location of each object or picture per set were pseudorandomised per participant per trial. The novel object was never in the same location twice consecutively, and a novel or familiar object or picture was never requested more than twice consecutively. Sets were not presented twice in a row.\r\nSingle vs Multiple Exemplars Tests\r\nWarm Up Trial\r\nParticipants were shown three sets of three familiar pictures in both the single and multiple exemplar conditions. Participants were asked to identify each in turn; the pictures were pseudorandomised per participant, changing the order and location per participant per condition. The pictures were removed and reordered after each set, and the participant’s responses recorded.\r\nReferent Selection Trial\r\nParticipants were shown four sets of stimuli; the sets of stimuli were different per condition, each consisting of two familiar items and one novel item. Each set was shown four times: the novel referent was requested twice and each familiar referent once. In the multiple exemplar trial, two differently coloured versions of each unfamiliar object were named (one per novel trial for each set). The order and location of each object or picture per set were pseudorandomised per participant per trial. The novel object was never in the same location twice consecutively, and a novel or familiar object or picture was never requested more than twice consecutively. Sets were not presented twice in a row. 
\r\nDistractor Familiarisation\r\nTo control for novelty or familiarity preferences in the subsequent test trials, children were shown all the novel pictures that were used in the generalisation test trials. The new novel pictures were a different colour variation of a previously seen novel picture referent, which was named in the referent selection trial. Novel pictures were shown against a previously named novel picture, which was not a shape or colour match to the new novel picture; one previously named novel referent was shown against one new novel picture. The pictures were placed in front of the participant; participants were not asked to identify them, just to “look”.\r\nRetention Test Trial\r\nRetention trials assessed children’s memory of the newly learned word–referent pairings. Participants were shown four sets; each set was shown twice, with the target referent requested twice. The sets were made up of three named novel objects. Names were picked from the NOUN database (Horst & Hout, 2016), each made up of two syllables; pictures were picked on the basis that the items would be novel to participants, for instance gym or plumbing equipment. Pictures were not shape or colour matches to each other and had been shown in the referent selection test trial. The order and location of each picture per set were pseudorandomised per participant per trial. The novel object was never in the same location twice consecutively, and a novel or familiar object or picture was never requested more than twice consecutively. 
Sets were not presented twice in a row.\r\nGeneralisation Test Trial\r\nGeneralisation trials assessed children’s extension of labels to new items. Participants were shown four sets, each consisting of three pictures; each set was shown twice, with the target object requested twice. The pictures in each set were shape matches to the pictures shown in the referent selection and retention trials, but in different colour variations. All the shape-matched pictures were also colour matched to a non-shape-matched object from the previous conditions. The order and location of each picture per set were pseudorandomised per participant per trial. The novel object was never in the same location twice consecutively, and a novel or familiar picture was never requested more than twice consecutively. Sets were not presented twice in a row. In the multiple exemplar condition, the generalisation test trial introduced the shape-matched referent in a third colour that was colour matched to a referent of a different shape seen in the referent selection or retention test trial. 
"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"2144"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2145"},["text","Data/SPSS.sav"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"2146"},["text","Smith2015"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"2147"},["text","Rebecca James"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"2148"},["text","Open"]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"2149"},["text","None"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2150"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"2151"},["text","Data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"2152"},["text","LA1 
4YF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1556"},["text","Calum Hartley\r\n"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1557"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2153"},["text","Cognitive, Developmental Psychology"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2154"},["text","16 minimally verbal children with ASD and 16 typically developing children "]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2155"},["text","ANOVA, Correlation, quantitative, t-test"]]]]]]]],["item",{"itemId":"61","public":"1","featured":"0"},["fileContainer",["file",{"fileId":"57"},["src","https://www.johnntowse.com/LUSTRE/files/original/ee6c9a9fb70a964519577d2b8a098680.doc"],["authentication","dd2a4ec39b75345858daecc1f5050a4f"]]],["collection",{"collectionId":"4"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"183"},["text","Focus group"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"184"},["text","Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1521"},["text","Use This or You’ll Lose That: Investigating Appropriate Psychological Theories to Market the Bogallme Tracking System."]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1522"},["text","Elizabeth Wardman"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1523"},["text","2015"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1524"},["text","The Bogallme Tracking System is an anonymous ‘Lost and Found’ system which uses stickers with QR codes printed on them to facilitate the return of lost items. 
It is thought that the main motivations behind the purchasing of these stickers are fear appeal and loss aversion, as people fear losing their possessions and will do whatever they can to prevent this from occurring. This study aimed to investigate whether this is the case, using focus groups consisting primarily of students - the target audience for this specific product. The research also explored Rogers’ (1962; 1976) Diffusion of Innovations Theory (DOI) in relation to this product, as well as opinions regarding the product and brand. Findings suggested that all three of the above theories are relevant and useful in the development of this product and can be used to create an efficient marketing campaign, whilst creating scope for further research that would benefit the development of the brand and product. "]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1525"},["text","Marketing/Advertising\r\nQualitative (Thematic Analysis)"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1526"},["text","Methodology\r\nParticipants.\r\nSixteen participants took part in this study. Participants were recruited via opportunity sampling through various social media platforms and word of mouth. The age of participants ranged between 20 and 23. This age range was selected due to market segmentation suggesting that over 50% of QR code users were aged between 18 and 34 and that 18- to 24-year-olds were 36% more likely to scan them (14 Million Americans Scanned QR Codes on their Mobile Phones in June 2011, n.d.).\r\nMaterials. 
\r\nThe focus groups loosely followed a discussion guide (see Appendix D), which asked general questions corresponding to the product, brand and incentives, as well as questions related to Fear, Loss Aversion and Diffusion of Innovations theory. The majority of questions within the discussion guide were open-ended, as they encourage participants to express their views and opinions in full (Turner, 2010) and allow for further elaboration. During the focus group, participants were shown three potential names for the brand (Scannit, GlobalQR and the brand name Bogallme) and an example of the Diffusion of Innovations model (Figure 1). Participants were each given prototypes of the product, which they tested during the group and were allowed to keep at the end of the study. \r\nProcedure.\r\nFocus Groups\r\nFocus groups were used as the method of data collection for this study. Although focus groups cannot provide data as rich as that of individual interviews, they allow for group discussions. These group discussions and interactions allow for comparisons between participant experiences and opinions which could otherwise only be inferred after conducting individual interviews (Morgan, 1997). \r\nThis study consisted of two focus groups, each lasting approximately 60 minutes. Within each focus group, eight participants sat facing one another around a circular table. After participants had read the information sheet and signed the consent forms, the focus group started with introductory questions to make participants feel more comfortable and able to voice their opinions. After this brief period, participants were asked questions that followed the discussion guide (see Appendix D); however, elaboration was allowed and encouraged. Each participant was encouraged to answer all questions and to contribute to discussions as much as possible. Participants were also made aware that they did not have to answer anything that made them feel uncomfortable. 
Debrief sheets were handed out to participants at the end of each group and any further questions were answered.\r\nAnalysis\r\nBoth focus groups were audio recorded on an Edirol R-09HR recorder and then transferred to a computer so that they could be deleted from the device. Recordings were then transcribed verbatim using the app Audacity, with each participant being given an anonymous ID in case of withdrawal. From these transcriptions, thematic analysis was conducted using the software NVivo in order to identify themes and opinions and draw conclusions regarding the discussed theories of Fear, Loss Aversion and Diffusion of Innovations. Other themes and inferences also came to light, which will be outlined in the Results section. "]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1527"},["text","Lancaster University"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1528"},["text","Results\r\nThere were several overarching themes present in both focus groups which relate to the three discussed theories (Fear, Loss Aversion and DOI Theory) and the proposed areas for exploration, along with new themes which were not previously considered. In response to the second objective, relating to participant motivations to buy and use the product, the main theme of ‘motivations’ was created. Under this theme came the categories ‘fear’, ‘loss aversion’ and ‘adoption’. Following this, the sub-categories ‘effective’ and ‘ineffective’ were created for each category. The ‘adoption’ category under this main theme also included the further sub-categories ‘explicit’ and ‘implicit’. 
The category ‘explicit’ was based on what participants said outright, whereas the ‘implicit’ category was based on inferences and implications from the discussion. The next main theme, created in relation to the first objective which aimed to explore brand and product opinions, was named ‘brand ideas’ and contained the categories ‘name’, ‘product idea’, ‘incentives’ and ‘other opinions’. For the final and arguably most significant objective, the main theme of ‘development’ was created, containing the categories ‘audience’, ‘barriers’ and ‘ideas’, which aimed to assist in making informed suggestions as to how to proceed with product development. \r\nBrand Name\r\nThe opinions relating to the brand name were very clear: participants did not like it. After being presented with three options for a possible brand name, with no previous knowledge, not one participant deemed ‘Bogallme’ appropriate for the product. No participant worked out that the word ‘Bogall’ was an anagram of the word ‘Global’, and the majority of participants chose the name ‘Scannit’ as the most appropriate for both the product and the brand. Many participants also had trouble pronouncing the brand name correctly, and it was pointed out in the first group that some individuals may have trouble reading it.\r\n“It’s not compatible with my dyslexia that one! Not at all.” (PL: Age 22)\r\n“The other two also worked like internationally, you’d have to think about that as even as people who speak English we didn’t get that.” (SD: Age 22)\r\nParticipants in both groups suggested that the name seemed quite childish and was trying too hard to be ‘down with the kids’ instead of being marketed at their age range. Another point of consensus regarding the brand name was that it sounded similar to ‘Boggle’, the famous children’s board game, which again gave it a childish theme. 
\r\n“It’s like the game Boggle you used to play when you were a kid.” (GP: Age 22)\r\nOverall, it seems apparent that the brand name could have detrimental effects on the future development of the product.\r\nProduct Idea\r\nDespite the brand name, after reading the product description, participants liked the concept of the product and agreed that it was something they would use. \r\n“I need this in my life (laughs)” (EG: Age 22)\r\nThe suggested uses for the stickers included: phones, keys, laptops, passports, luggage and notebooks. Participants said they were more likely to use the service in its current state (using Safari or another web browser) as opposed to downloading an app. However, some participants did have concerns surrounding the legitimacy of the product and would be wary when asked to fill in their details on the website. In terms of pricing, ideas of how much participants would pay for one sticker ranged from £1 to £10, with some participants suggesting that they would prefer to pay for a subscription service. The suggested subscription service consisted of paying a yearly fee for a certain number of stickers.\r\n“Yeah you could subscribe for like a year and you get five stickers and you could use it on whatever you want” (RD: Age 22)\r\nDespite this suggestion, many participants still disliked the idea of a subscription service and compared it to services such as Amazon Prime, which continues to charge you if you forget to cancel it. As participants were all students or graduates, most liked the idea of paying per sticker best, as it was affordable and did not tie them in. However, another subscription idea came to light when participants were discussing potential problems with people forging the stickers. It was suggested that a subscription would include unlimited stickers and customers would instead be paying to use the service as a whole. 
This would stop people from forging stickers because it would not be necessary once payment had already been made.\r\n“Unless, if you do have a subscription then surely you’d be paying the same amount anyway no matter how many… so why would anyone copy theirs.” (GP: Age 22)\r\nThe issue of forging was quite a prominent topic within the second focus group. They suggested a variety of ways to overcome this: customisable stickers, laminated stickers and the creation of a unique QR code similar to that of Snapchat or Messenger. The idea of customisation was also popular in the first group. Several participants from this group said that they would not put the sticker on their mobile phone in its current form, for aesthetic reasons. They did, however, state that if the stickers came in different colours or were customisable, they would be much more likely to purchase the product. \r\n“I’d say make them customisable. If you could design your own stickers that would be… To match your phone case you could be like ‘ooh I’ll have it black with rose gold’ and then it would match and look cute” (GP: Age 22)\r\nThese participants did still agree that they would put the stickers on items other than phones, such as keys and passports, as it is not as important to participants for these items to be aesthetically pleasing. Stemming from this, the use of the stickers for travelling purposes was discussed in detail. Participants in the first group all agreed that it would be a useful addition to travelling supplies, as the stickers could be placed on passports and luggage items. This was a very popular idea with the group for a number of reasons. Firstly, a passport does not have the same sell-on value as a mobile phone, so it is much more likely to be returned. Another suggested reason was the speed of having the item returned to you. 
If you are travelling across several different countries and using many different transportation methods, it may be difficult to continue without documents such as your passport, and so a speedy return is very important. The final reason was that people often buy new products and innovations when they travel due to excitement.\r\n“You’re just looking for stuff to buy when you’re going travelling as well, like ‘what do I need, what do I need’ so yeah I think that would work quite well.” (KR: Age 23)\r\n\r\nFear and Loss Aversion\r\nWhen asked how they would feel if they lost an item, most participants described feelings of stress and anxiety along with anger. Not all participants had the experience of losing an important item, but all at least had a friend or family member who had had this experience. Participants suggested that the feelings they experience when losing something would make them want to return an item, and that they would be more likely to return an item of personal rather than financial value. \r\nOne of the main advantages of the product was discussed when participants compared the product to insurance. It was suggested that the product was a cheaper alternative that, although return is not guaranteed, is better than no back-up at all. In terms of product development, these findings suggest that there is potential to work with an insurance company to effectively market the Tracking System.\r\n“It’s kind of like an insurance isn’t it? 
Like for your phone so… I’d pay like a tenner if it was a one off because people pay, I don’t know, I think mine…well I don’t pay insurance lol but I think it’s like sixty pounds” (AB: Age 22)\r\nThe time-saving of the product compared to insurance also produced positive comments about the product as it was explained how long it takes for an item to be replaced through insurance and how much effort this can be.\r\n“Also, insurance is like an effort, like you have to file a claim and then it takes ages for them to get it back but if you could just like message someone you like might get it today. It’s easier” (TM: Age 20)\r\nAnother comparison to insurance was made in terms of the personal value of possessions. When discussing phones, participants pointed out that they’d prefer their original phone returned over a new phone of the same model as their original phone has all their photos, music and original settings on it which can often be difficult to retrieve if lost. \r\n“(Be)cause you’ve got your photos and everything…like everything is set up on your phone in the way you like it. I hate setting up a phone when you first get it and you have to download everything and set it back up again.” (GP: Age 22)\r\nParticipants in the first group felt so strongly about the insurance aspect of the product that one attendee suggested that the brand partner up with a phone company and sell the product as an add-on for phone contracts. \r\n“You need to have a partnership with like a phone company or something so when people start getting new phones and upgrades, say you partnership with O2 and you have it as part of your package on your phone or something.” (DF: Age 22)\r\n\r\nIncentives\r\nThe majority of participants stated that they would not require an incentive to use the service and to return an item and that empathy alone would be enough. Participants also suggested that the gratitude of the person who had lost the item could contribute towards them returning it. 
Some suggested that an incentive could add extra persuasion; however, it was quickly pointed out that there would be issues with monitoring any incentives. Examples of incentives discussed included: a lottery, money, and a points system whereby points could be collected to go towards a discount or a cash reward. Participants admitted that some of them would be likely to abuse the incentive, as there would be no way to monitor whether people are actually finding items or are just working together with friends to make some money or have a better chance in a lottery. Overall, it was decided that any incentive would either be abused or would not encourage someone who was unlikely to return the item to return it. \r\n“Yeah it would’ve been such a good idea saying five returns gets you a free sticker but people literally will just get each other’s items and be like oh” (BC: Age 22)\r\nHowever, it is quite naïve of participants to expect all individuals to return items via the service with no incentive. They made good points surrounding the potential abuse of incentives, yet the use of incentives is not something that should simply be ignored because of this potential hurdle. It would be best to suggest plausible alternatives, such as the individual who lost the item having to pay an incentive to the returner in order to retrieve their item. \r\nAdoption\r\nWhen presented with the Diffusion of Innovations Model, all participants initially suggested that they would personally be in the centre of the model, between Early and Late Majority, or in the Late Majority. However, after being asked what stage they thought they were at across different innovations, such as iPhones and apps, this altered somewhat. 
From broader discussion it could be inferred that most participants would fit in the ‘Early Majority’ stage of the model, as they would be more likely to buy the product if they could see it used successfully by someone else, but they also usually try new innovations earlier than the majority. \r\n“I was probably an early majority. I’d say I’m between early and late majority.” (GP: Age 22)\r\n“Yeah, I’d have to hear people like using it well, like see people all around using it” (RH: Age 22)\r\nWhen questioned as to the type of person that would be situated in the first two stages of the model, there was a variety of answers. In the first group, the most popular answer was people in an older age group, with many participants describing the habits and behaviours of their fathers. \r\n“I actually feel like older people like my dad or someone, he’d totally buy into this” (EG: Age 22)\r\nThey suggested that due to the simplicity of the product and its purpose, this would be the first market to adopt it. Many were surprised by their own responses to this question, as they initially assumed that the product would be more popular with a younger audience. The second group also agreed on an older audience, with suggestions of ‘overprotective mothers’ buying the product to protect their children’s possessions. The second group also indicated that, whilst they did not think that students would be the Innovators or Early Adopters, businesses targeting students would still be very interested in the product. \r\n“I think anyone who’s in the student-y industry. I reckon you could quite easily do this with like nightclubs. 
Anything to do with students people would want to get involved with.” (BC: Age 22)"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1529"},["text","Bogallmetrackingsystem2015"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1530"},["text","Frances Jackson "]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1531"},["text","There is no license suggested for this work as far as the research is aware."]]]],["element",{"elementId":"46"},["name","Relation"],["description","A related resource"],["elementTextContainer",["elementText",{"elementTextId":"1532"},["text","Leslie Hallam"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1533"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1534"},["text","Qualitative interview data"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1535"},["text","LA1 4YQ"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project 
supervisor"],["elementTextContainer",["elementText",{"elementTextId":"2216"},["text","Leslie Hallam"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"2217"},["text","MSc"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"2218"},["text","Marketing/Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"2219"},["text","16 participants"]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"2220"},["text","Qualitative (Thematic Analysis)"]]]]]]]],["item",{"itemId":"60","public":"1","featured":"0"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1520"},["text","temp"]]]]]]]],["item",{"itemId":"49","public":"1","featured":"0"},["collection",{"collectionId":"4"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"183"},["text","Focus group"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"184"},["text","Primarily qualitative analysis based on forming focus groups to collect opinions and attitudes on a topic of interest"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1394"},["text","An Exploration of the Use and Effectiveness of Nature Imagery, Metaphor, and Symbolism in Advertising. "]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1395"},["text","Konstantinos Perimenis"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1396"},["text","2018"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1397"},["text","Core participation of nature imagery, indoor scenery, visual metaphor, and literal image in the construction of commercials and advertising industry has been established through repeated research. 
The current study aims to investigate in greater depth the role of two specific components of aesthetic communication (nature imagery, poetry) in advertising. Results suggested a significant preference for nature imagery over indoor scenery. However, results from the comparison between visual metaphor and literal image indicated a more divided outcome, with participants suggesting that both were equally appealing. Overall, our results suggest that nature imagery was the most significant component in forming an appealing advertisement. We suggested that further research could investigate and highlight the effectiveness of other mediating components of aesthetics (verbal language, humour, music, etc.) in advertising."]]]],["element",{"elementId":"49"},["name","Subject"],["description","The topic of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1398"},["text","Nature imagery in advertising, symbolism in advertising, metaphors in advertising"]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1399"},["text","<p>In all focus groups, a digital voice recorder was used for further analysis. The first selected pair of ads, with indoor and outdoor imageries, was about the Coca-Cola brand. In the first Coca-Cola ad film, first broadcast in 2010, participants had the chance to watch two young people inside an overcrowded bus. Even though these two passengers were complete strangers, they finally broke the ice between them, thanks to an invisible Coca-Cola bottle. In the second Coca-Cola commercial, diversity in terms of gender, religion and race within the United States of America was presented. 
At the same time, the viewers were given the opportunity to admire some of the most breathtaking landscapes in the USA.</p>\r\n<p>The second selected pair of ads, in terms of connotative and denotative imageries, was about the Smirnoff brand. In Smirnoff’s connotative commercial, there were clear signs that its creators intended to show temptation and seduction. From the beginning it was clear that the starring couple was meant to represent a modern-day Adam and Eve. As the music picked up, snakes appeared from the bartender’s sleeves to help make an Apple Bite and the customers got up to dance to a fast-paced song. The bartender was leading ‘Adam and Eve’ to the apple flavour cocktail, and the fast-paced music suggested that something big would happen if the drink was taken. This also insinuated that the drink was so desirable that they would not be able to resist it. In the denotative one, there was a stylish, classy man who simply listed the values of Smirnoff vodka. The initial 40 advertisements were selected randomly from Coloribus.com (See Appendix H for full links), an online database of commercials and advertisements.</p>\r\n<p></p>\r\n<p><b>Data analysis </b></p>\r\n<p>Responses to the focus groups’ questions were thematically analysed. The current research followed the six-step thematic analysis approach as described by Braun and Clarke (2006). 
Detailed observation notes were used to generate and apply codes to the qualitative data and to identify potential themes, an approach made feasible by the small sample.</p>"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1400"},["text","Lancaster University"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1401"},["text","English"]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1402"},["text","Data"]]]],["element",{"elementId":"42"},["name","Format"],["description","The file format, physical medium, or dimensions of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1403"},["text","WAV"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1404"},["text","Perimenis2018"]]]],["element",{"elementId":"47"},["name","Rights"],["description","Information about rights held in and over the resource"],["elementTextContainer",["elementText",{"elementTextId":"1405"},["text","Open"]]]],["element",{"elementId":"38"},["name","Coverage"],["description","The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant"],["elementTextContainer",["elementText",{"elementTextId":"1406"},["text","LA2 0PF"]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project 
supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1407"},["text","Leslie Hallam"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1408"},["text","MSC"]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1409"},["text","Psychology of Advertising"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1410"},["text","For the purpose of advertisement selection a pilot group was conducted consisted of 3 participants. Following the advertisement selection, two focus groups were formed, 6 participants were included in the first focus group, and 7 in the second focus group. Participants recruited in the pilot group and both focus groups (N= 16) were students from Lancaster University (age range 22-28). Inclusion criteria required participants to be above the age of 18 and be able to physically attend the focus group. Participants of both focus groups were 5 males, 8 females, and participants consisted the pilot group 2 males, and 1 female.  "]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1411"},["text","Qualitative"]]]]]]]],["item",{"itemId":"48","public":"1","featured":"0"},["collection",{"collectionId":"5"},["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. 
For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"185"},["text","Questionnaire-based study"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"186"},["text","An analysis of self-report data from the administration of questionnaires(s)"]]]]]]]],["elementSetContainer",["elementSet",{"elementSetId":"1"},["name","Dublin Core"],["description","The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/."],["elementContainer",["element",{"elementId":"50"},["name","Title"],["description","A name given to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1358"},["text","Recalling Memories of Childhood Bullying: Links Between Early Victimisation and Anxiety in Adulthood\r\n\r\n"]]]],["element",{"elementId":"39"},["name","Creator"],["description","An entity primarily responsible for making the resource"],["elementTextContainer",["elementText",{"elementTextId":"1359"},["text","Jenna Rayner"]]]],["element",{"elementId":"40"},["name","Date"],["description","A point or period of time associated with an event in the lifecycle of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1360"},["text","2014"]]]],["element",{"elementId":"41"},["name","Description"],["description","An account of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1361"},["text","Objectives: This study investigated the relationship between retrospective reports of bullying (primary school, secondary school and general experiences of bullying) with social anxiety (SAD), generalised anxiety (GAD) and grit (perseverance). 
Method: Demographic information was obtained from participants (n=147), as well as measures of primary school bullying, secondary school bullying and general bullying experiences, utilising the Retrospective Bullying Questionnaire (RBQ; Schafer et al., 2004). The Social Phobia Inventory (Connor et al., 2000) measured social anxiety in participants, the Penn State Worry Questionnaire (Meyer et al., 1990) assessed general anxiety and the Grit Test (Duckworth et al., 2007) evaluated participants’ determination. Results: There was evidence that primary school bullying was associated with higher levels of GAD, whilst higher levels of SAD were associated with general bullying experiences. There was no evidence to suggest that the individual difference measure of grit impacted upon anxiety for participants. The results support previous studies which have linked anxiety disorders in adulthood to earlier experiences of bullying."]]]],["element",{"elementId":"48"},["name","Source"],["description","A related resource from which the described resource is derived"],["elementTextContainer",["elementText",{"elementTextId":"1362"},["text","  \tIn the Retrospective Bullying Questionnaire (RBQ; Schafer et al., 2004), there are a number of sections, three of which were used in this study. The first looks at bullying in primary school, the second at bullying in secondary school and the third at general bullying behaviour. The general bullying behaviour section concentrated on the long-term effects of any bullying the participants had experienced in primary or secondary school. This section asked such questions as “Do you ever have dreams or nightmares about the bullying events?” and “Do you ever feel distressed in situations which remind you of the bullying event(s)?” (Appendix A). \r\nThis questionnaire was subject to intensive pilot studies by Schafer et al. (2004), and insight was gained from the success of Rivers’ (2001) study, which also utilised a retrospective measure.  
Reliability of the RBQ was assessed in the Schafer et al. study, which found a good level of test-retest reliability (Spearman correlation coefficients, primary school r=.88, secondary school r=.87). \r\n   \tThe Social Phobia Inventory is a 17-item self-report questionnaire (Connor et al., 2000) that screens for social anxiety disorder and assesses its severity. The measure has three subsections which evaluate key symptoms of SAD: fear of social situations, avoidance of social situations and physiological discomfort within social situations. Each item is rated on a scale from zero to four. Scores range from 0 to 68, and a cut-off score of 19 or above distinguishes between healthy controls and SAD sufferers. The SPIN has previously demonstrated good internal consistency as well as suitable test-retest reliability.\r\n   \tThe Penn State Worry Questionnaire (Meyer, Miller, Metzger & Borkovec, 1990) is a 16-item questionnaire which has been used extensively in existing studies to measure generalised anxiety disorder in participants. This questionnaire has been shown to differentiate between different anxiety disorders, e.g. generalised anxiety sufferers score higher than phobics (Meyer et al., 1990). Questions 1, 3, 8, 10 and 11 were reverse scored for the analysis. Each answer is scored on a five-point Likert-type scale ranging from 1 = not at all typical to 5 = very typical. Scores can range from 16 to 80; the average score in a “normal” student population was 49, while the average score in a GAD population was 68 for men and women (Hawkins, 2008). \r\n   \tThe Grit Test (Duckworth, Peterson, Matthews, & Kelly, 2007) is a 12-item questionnaire which considers how ‘gritty’ a person is. It looks at how a person faces challenges and how they react to them. The scores are added up and divided by 12. 
The maximum score on this scale is 5 (extremely gritty), and the lowest score is 1 (not at all gritty). This measure was included as a personality measure to explore whether the type of person you are is linked to whether or not you are bullied. \r\nProcedure \r\n   \tFollowing the briefing sheet, participants received a consent form informing them of the nature of the study, their participation requirements, and their right to withdraw should they so wish. Once consent was gained, participants were asked to provide demographic information on the following: gender, age, educational achievement, relationship status, ethnicity and employment status. For the purposes of analysis, females were coded as 1 and males as 2.\r\n   \tQuestionnaires made up the materials for this research project. Once participants had completed these, they were informed of the end of the study and given more insight into its nature. Participants were also given helplines and details of advisory websites to which they could turn if they felt they had been affected by the nature of the research. 
Details of two journal articles whose research relates to the current study were also given, so that participants could gather more information if they so wished.\r\n"]]]],["element",{"elementId":"45"},["name","Publisher"],["description","An entity responsible for making the resource available"],["elementTextContainer",["elementText",{"elementTextId":"1363"},["text","Lancaster University"]]]],["element",{"elementId":"43"},["name","Identifier"],["description","An unambiguous reference to the resource within a given context"],["elementTextContainer",["elementText",{"elementTextId":"1364"},["text","Rayner2014\r\n\r\n"]]]],["element",{"elementId":"37"},["name","Contributor"],["description","An entity responsible for making contributions to the resource"],["elementTextContainer",["elementText",{"elementTextId":"1365"},["text","Anamarija Veic"]]]],["element",{"elementId":"44"},["name","Language"],["description","A language of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1366"},["text","English "]]]],["element",{"elementId":"51"},["name","Type"],["description","The nature or genre of the resource"],["elementTextContainer",["elementText",{"elementTextId":"1367"},["text","Data and a form "]]]]]],["elementSet",{"elementSetId":"4"},["name","LUSTRE"],["description","Adds LUSTRE specific project information"],["elementContainer",["element",{"elementId":"52"},["name","Supervisor"],["description","Name of the project supervisor"],["elementTextContainer",["elementText",{"elementTextId":"1368"},["text","Dr Kathleen McCulloch"]]]],["element",{"elementId":"53"},["name","Project Level"],["description","Project levels should be entered as UG or MSC"],["elementTextContainer",["elementText",{"elementTextId":"1369"},["text","MSc"]]]],["element",{"elementId":"56"},["name","Sample Size"],["description"],["elementTextContainer",["elementText",{"elementTextId":"1370"},["text","  \tA total of 167 adults participated in the study and were all informed of the 
nature of the research. Participation was voluntary and all participants completed the survey online via SurveyMonkey. The sample was an opportunity sample: the researcher posted links to the survey on Facebook, Twitter and www.thestudentroom.co.uk (a site where students offer advice and help to each other). Friends on Facebook reposted or shared the advertisement in order to reach a wider audience. Once participants followed the link to the survey on SurveyMonkey, they saw a briefing note which explained the nature of the study and their voluntary participation (describing how they could withdraw from the study with no repercussions).  \r\n   \tFrom the initial sample of 167 adults, data from 20 participants were excluded because they were incomplete. This left a total of 147 participants, 72% female (106 female, 41 male), with an age range of 16-63. Participants were predominantly Caucasian (95%)."]]]],["element",{"elementId":"55"},["name","Statistical Analysis Type"],["description","The type of statistical analysis used in the project"],["elementTextContainer",["elementText",{"elementTextId":"1371"},["text","Correlational Analysis   "]]]],["element",{"elementId":"54"},["name","Topic"],["description","Should contain the sub-category of Psychology the project falls under"],["elementTextContainer",["elementText",{"elementTextId":"1376"},["text","Social Psychology"]]]]]]]]]