Visual engagement with different animals

Dublin Core

Title

Visual engagement with different animals

Creator

Rebecca Gregson

Date

2018

Description

People treat animals differently depending on how they are dichotomized. The present study tested the consequences of dichotomization on our visual engagement with still images of different animals. Fifty-seven participants took part in two identical image visualization tasks, the first preceding a short empathy-inducing video and the second following it. We used eye-tracking to study the percentage of dwell time oriented towards the eyes of companion, farmed and endangered animals. Eye-directed visual engagement was greatest for companion animals in the first image visualization task; this bias towards companion animals was attenuated in the second. We hypothesised that the empathy-inducing video would change gaze towards farmed animals, evidencing either increased attentional avoidance or increased engagement. Although the means suggest a slight increase in visual engagement following the video, this difference was not significant. Participants reported the highest levels of negative emotion in response to the farmed-animal videos. Empathic gaze with farmed animals correlated positively with participants' level of meat consumption restriction. The findings support several pre-registered hypotheses but disconfirm others, and are discussed in terms of the extension of empathic gaze to animals.

Subject

Animals, dichotomization, eye-tracking, empathic gaze, guilt

Source

Participants
Our pre-registered recruitment strategy was to collect fifty participants with complete data. Fifty participants were recruited through (1) Lancaster University's research participation system, SONA, or (2) poster advertisement, and were paid £3 for their involvement. Each participant saw 9 images, presented twice, each for 10 seconds, totaling 180 seconds of eye-tracking data. On first inspection of the data we were forced to exclude seven participants whose eyes had been tracked for less than 50% of the experiment. To reach our pre-registered participant pool of 50 we recruited seven more participants, one of whom had to be excluded on the same grounds. Our final data set comprised 49 participants, 36 females and 13 males. Age ranged between 18 and 30 (M = 21.10, SD = 2.13). Participants reported a range of nationalities, including: American (n = 1), British (n = 28), Bulgarian (n = 3), Chinese (n = 3), Croatian (n = 2), German (n = 2), Hungarian (n = 2), Indian (n = 3), Indonesian (n = 1), Latvian (n = 1), Nigerian (n = 1), Malaysian (n = 1) and Slovakian (n = 1). Participants' dietary classifications were as follows: Meat lover (n = 1), Omnivore (n = 23), Semi-vegetarian (n = 16), Pescatarian (n = 3), Lacto- or Ovo-vegetarian (n = 5), Strict vegetarian (n = 0), Dietary vegan (n = 0), and Lifestyle vegan (n = 1).
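The 50% tracking-ratio exclusion rule is straightforward to apply programmatically. Below is a minimal sketch in Python, assuming a hypothetical per-participant summary table; the file name and column names are illustrative, not from the original study.

```python
import pandas as pd

# Hypothetical per-participant summary exported from the eye tracker;
# 'tracking_ratio' is the proportion (0-1) of the experiment during
# which the participant's eyes were successfully tracked.
summary = pd.read_csv("tracking_summary.csv")

# Pre-registered exclusion rule: drop participants whose eyes were
# tracked for less than 50% of the experiment.
included = summary[summary["tracking_ratio"] >= 0.50]
excluded = summary[summary["tracking_ratio"] < 0.50]

print(f"Retained {len(included)} participants; excluded {len(excluded)}.")
```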
Design

The experiment employed a 3 × 2 fully within-subjects design. The independent variables were animal category and time. The variable animal category had three levels: farmed animals (sheep, cow, pig), companion animals (dog, cat), and endangered wild animals (chimpanzee, tiger, koala), and was operationalized using still images. Our main research interest was the distinction between farmed and companion animals, given the marginalized status of farmed animals in society and the privileged status of companion animals. Endangered animals are vulnerable to human interference and confer some value due to their endangered status, but they are not actively used by humans as objects of consumption. For this reason, endangered animals were used as a control or comparison group. The variable time had two levels: pre- and post-video task. Participants took part in two image visualization tasks (IVTs), one before a video watching task and one after. Our main dependent variable was dwell time percentage on the eyes of the animal. This was recorded during the presentation of each of the nine images in both IVTs. At no other point in the experiment were eye movements recorded.

Additional outcome measures. We recorded participants' emotional state immediately after the video watching task. Emotion ratings were transformed into numerical values as follows: Extremely positive (+3), Fairly positive (+2), Slightly positive (+1), Neutral (0), Slightly negative (-1), Fairly negative (-2) and Extremely negative (-3). As a result, more negative responses were represented by more negative values. We also asked participants to indicate (Yes/No) whether they contribute to the suffering, and to the well-being, of each animal category. Participants were further asked to state their agreement (Yes/No) with two statements, the first regarding their outrage at the harm inflicted on animals, and the second describing the animal's capacity to suffer as meaningfully similar to a human's capacity to suffer. However, due to an experimenter error, these four measures were not recorded by the experiment-analysis system and therefore cannot be discussed further.
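For illustration, the label-to-number transformation described above amounts to a simple lookup. The dictionary below mirrors the coding reported in the Method; the function name is ours.

```python
# Coding scheme from the Method: more negative responses map to
# more negative values.
EMOTION_CODES = {
    "Extremely positive": 3,
    "Fairly positive": 2,
    "Slightly positive": 1,
    "Neutral": 0,
    "Slightly negative": -1,
    "Fairly negative": -2,
    "Extremely negative": -3,
}

def code_emotion(label: str) -> int:
    """Transform a verbal emotion rating into its numerical value."""
    return EMOTION_CODES[label]

assert code_emotion("Fairly negative") == -2
```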
Materials

Images. In total we sourced nine images, three for each animal category in our design. We sourced images of three different species to make up each target category. The companion animal category was the only exception to this rule: for this category we used two dog images (Siberian Husky and Staffordshire Bull Terrier) and one cat image. In our original companion animal category we had considered using the image of a horse, but decided against this for two reasons. Firstly, the composition of the face was noticeably different from the other eight images; the horse's face was longer, with its eyes positioned laterally. Secondly, the category into which horses fall (i.e., farmed or companion) is often blurred. Whilst cows pose similar facial composition issues to the horse, there is no question that cows are members of the farmed animal category. We decided that this justified the inclusion of the cow in the experiment, but we could not justify the use of the horse. The original source for each image is displayed in Appendix A. Due to limited financial resources we were restricted to the use of free, open-source images. This meant that the images contain some background colour and contextual inconsistencies. Nonetheless, all images share the same features: a forward-facing gaze, minimal to no background noise, and the absence of other animals. We adjusted some of the images so that the body of the animal is mostly cropped out. As a result, all nine images have a central focus on the animal's face. We ensured that the images did not objectively indicate animal harm or confinement. Finally, all animals were adults, so as to avoid the baby schema effect, the finding that infantile features promote caregiving behaviour (Archer & Monton, 2011; Borgi, Cogliati-Dezza, Brelsford, Meints & Cirulli, 2014; Fridlund & MacDonald, 1998). This was an important consideration, as the baby schema effect has been linked to stronger caregiving motivations with animals (Piazza, McLatchie & Olesen, 2018).

Videos. Three videos were selected to induce empathic concern with each of the three animal categories. Each video targeted a specific class of animal (companion, farmed, or endangered) and was presented prior to the second viewing session. All three videos outlined the harm inflicted upon the relevant animal category. They include emotional but not graphic content and were selected for their empathy-arousing nature. To reduce any variation caused by the different music styles of the videos, all audio was removed. Videos were trimmed to ensure that they had a similar duration. Supplementary details of each video can be found in Appendix B. Additionally, each video can be accessed in the "Materials" section of our OSF file.
Stimuli presentation. All stimuli were presented on a Windows 10 Pro HP laptop with a 14-inch monitor, a 60 Hz refresh rate and an Intel® Core™ i7-4710MQ CPU. Stimuli ran semi-automatically. The experiment was built using Experiment Centre (Version 3.6, SensoMotoric Instruments).

Eye-tracking device. Eye movements were recorded monocularly, at a frequency of 30 Hz, using the REDn Scientific eye-tracking device (SensoMotoric Instruments). Gaze was calibrated using a 5-point method and a calibration area of 1920 × 1080. We used a centered black cross (Arial, size 72) as the fixation point during the initial calibration and throughout the experiment. The experiment was built to measure dwell time percentage during the IVTs only.

Diet. Diet was assessed using an adapted version of the 5-item dietary practice scale used by Piazza, Ruby, Loughnan et al. (2015). We expanded the original scale to include 8 dietary practices: "Meat lover," "Omnivore," "Semi-vegetarian," "Pescatarian," "Lacto- or Ovo-vegetarian," "Strict vegetarian," "Dietary vegan," and "Lifestyle vegan". Definitions for each category are provided in Appendix C.
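Because the correlation reported in the abstract treats these dietary practices as ordered by meat consumption restriction, one plausible ordinal coding assigns increasing scores from "Meat lover" to "Lifestyle vegan". The 0-7 scores below are an illustrative assumption, not the study's documented scoring.

```python
# One possible ordinal coding of the 8 dietary practices, from least
# to most restricted meat consumption. The numeric scores are an
# assumption for illustration only.
DIET_RESTRICTION = {
    "Meat lover": 0,
    "Omnivore": 1,
    "Semi-vegetarian": 2,
    "Pescatarian": 3,
    "Lacto- or Ovo-vegetarian": 4,
    "Strict vegetarian": 5,
    "Dietary vegan": 6,
    "Lifestyle vegan": 7,
}
```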
Procedure

Preliminary procedures. Participants were tested individually. Having been welcomed into the lab, each participant received an information sheet and consent form. All participants who arrived at the lab gave their consent. Each participant was seated on a stationary chair at a desk where the equipment stood. The experimenter explained that they would load up the experiment and leave the participant to complete it in privacy. The experiment ran an initial calibration of the eye before moving into the task information. Task information was presented across three separate screens, which outlined for the participant what would be required of them (see Appendix D).

Warm-up. Participants took part in two identical IVTs, the first framed as a warm-up. These warm-up trials ran automatically and did not require any participant action. Following the task information, participants saw a screen which read "Warm-up" for 4000 ms. The animal category was then announced (e.g., "Farmed Animals," "Companion Animals," or "Endangered Animals") and remained on screen for 4000 ms. A centered fixation point appeared for 500 ms before the first category animal image appeared for 10,000 ms. It was during each 10,000 ms image presentation that eye movements were recorded. This fixation point/image presentation routine was repeated three times over to cover all three images in each category. The order in which the animal categories were presented was randomized across participants. Having completed the IVT for each animal category, participants were presented with a screen instructing them that the warm-up was now complete. This instruction screen was advanced manually by the participant.
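The warm-up timing just described (category announcement, then a fixation/image cycle per image) can be summarised as a schedule-building sketch. The experiment itself was built in Experiment Centre; the Python below is purely illustrative, with invented image file names.

```python
import random

# File names are invented placeholders for the nine study images.
CATEGORIES = {
    "Farmed Animals": ["sheep.jpg", "cow.jpg", "pig.jpg"],
    "Companion Animals": ["husky.jpg", "terrier.jpg", "cat.jpg"],
    "Endangered Animals": ["chimp.jpg", "tiger.jpg", "koala.jpg"],
}

def build_warmup_schedule():
    """Return (event, duration_ms) pairs following the reported timings."""
    schedule = [("text:Warm-up", 4000)]
    order = list(CATEGORIES)
    random.shuffle(order)  # category order randomized per participant
    for category in order:
        schedule.append((f"text:{category}", 4000))
        for image in CATEGORIES[category]:
            schedule.append(("fixation", 500))           # centered cross
            schedule.append((f"image:{image}", 10000))   # eye-tracking window
    return schedule
```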
Video watching task. Following the first IVT, participants took part in the video watching task. The animal category was first announced and remained on screen for 4000 ms. The appropriate video then played and concluded with a blank screen lasting 3000 ms. Participants were then made aware that the video had finished. Having manually moved the experiment along, the participant was next asked to indicate their current emotional state. They read: "How positive or negative do you feel right now?" and selected their response via mouse-click on a 7-point scale with the following range: "Really negative," "Fairly negative," "Slightly negative," "Neutral," "Slightly positive," "Fairly positive" and "Really positive". Again, this screen was manually advanced. The participant was next presented with the statement "I contribute to the suffering of Farmed/Companion/Endangered animals" and was asked to indicate their response using the "Y" (Yes) and "N" (No) keys on the keyboard before pressing the space bar to advance. "I contribute to the well-being of Farmed/Companion/Endangered animals" was presented on the next screen, and participants indicated their response as before. Responses to these Y/N questions failed to record due to a programming error and therefore will not be discussed further.

The second IVT. As in the first IVT, participants saw a centered fixation point (500 ms) followed by the first category animal image (10,000 ms). Again, the REDn was programmed to record eye movements during each 10,000 ms image presentation. After each animal image the participant was presented with the statement: "Thinking about how ___ (e.g., cows) are slaughtered for their meat makes me feel outraged" and was again asked to indicate their response using the "Y" (Yes) and "N" (No) keys. This question was tailored to each animal category and target animal (see Appendix E for a list of the statements used). Next the participant read: "___ (e.g., cows) possess a capacity to suffer that is meaningfully similar to humans" and was asked to indicate their response (Y/N) as before. This procedure was repeated three times over, once for each animal target. Due to a programming error, responses to these Y/N questions were not recorded and therefore will not be discussed further. The entire procedure, from the beginning of the video watching task to the end of the second IVT, was repeated for each animal category, the order of which was randomized for each participant. See Appendix F for a visual representation of the experiment flow.
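To make the dependent variable concrete: dwell time percentage on the eyes is the share of valid 30 Hz gaze samples falling inside the eye region during an image's 10,000 ms presentation. Below is a sketch assuming long-format sample data with hypothetical column names, followed by the kind of 3 × 2 repeated-measures ANOVA named in this record; pingouin is our choice of library, not necessarily the study's.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format gaze samples: one row per 30 Hz sample, with
# a Boolean 'in_eye_aoi' flag marking samples inside the eye region.
samples = pd.read_csv("gaze_samples.csv")

# Dwell time percentage per participant x animal category x time (pre/post):
# the mean of a Boolean column is the proportion of in-AOI samples.
dwell = (
    samples.groupby(["pid", "category", "time"])["in_eye_aoi"]
    .mean()
    .mul(100)
    .rename("dwell_pct")
    .reset_index()
)

# 3 x 2 fully within-subjects ANOVA on dwell time percentage.
aov = pg.rm_anova(data=dwell, dv="dwell_pct",
                  within=["category", "time"], subject="pid")
print(aov)
```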

Publisher

Lancaster University

Format

SPSS data

Identifier

Gregson2018

Contributor

Rebecca James

Language

English

Type

SPSS.sav

Coverage

LA1 4YF

LUSTRE

Supervisor

Dr. Jared Piazza

Project Level

MSC

Topic

Social

Sample Size

49 participants

Statistical Analysis Type

ANOVA, correlation, t-test

Files

Participant Consent Form.pdf

Citation

Rebecca Gregson, “Visual engagement with different animals,” LUSTRE, accessed April 19, 2024, https://www.johnntowse.com/LUSTRE/items/show/70.