An investigation into automatic imitation: Comparing live and video setups, the effect
of prior training and the influence on affective empathy

Dublin Core

Title

An investigation into automatic imitation: Comparing live and video setups, the effect
of prior training and the influence on affective empathy

Creator

Evangelos Baltatzis

Date

2017

Description

If decreased Automatic Imitation (AI) improves empathic abilities, then self-other distinction processes are probably the mediating factor between imitation and empathy. But if increased AI improves empathy, then imitation is probably at the core of socio-cognitive functions. To date, decreased AI has been shown to improve visual perspective taking, corticospinal empathy and self-reported empathy. Moreover, studies to date have focused on video AI stimuli. To test whether AI bears a more direct relation to mimicry, I also developed a live paradigm. My research questions were: first, what effect imitation training and inhibition training have on AI; second, whether live-stimulus training has the same effects on AI testing (inhibition versus imitation) and on arousal empathy testing; and third, whether the training effects transfer to arousal empathy. As expected, there was a significant decrease of AI in the video inhibition condition in comparison to the video imitation condition. Unexpectedly, a significant but weak increase in arousal empathy was observed in the video imitation condition and not in the video inhibition group. The differences in AI and arousal empathy between the live imitation group and the live inhibition group were not significant. The results offer a new perspective on AI. If they can be reproduced by further studies, then imitation may be more important than self-other distinction processes, or arousal empathy may differ from other forms of empathy. Finally, the non-significant results for live imitation versus live inhibition training suggest either that there are confounding factors in live AI research or that video AI designs are more artificial than is assumed.

Subject

automatic imitation, empathy, imitation training, inhibition
training, mirror neuron system, self-other distinction

Source

Participants
Sixty (N = 60) participants were recruited into a two-by-two factorial design and divided equally across two between-subjects factors. The first factor was Stimulus: AI was measured in response to hand actions performed by an experimenter seated across a table from the participant (Live), or to the actor's pre-recorded hand-action stimuli presented on a monitor (Video). The second factor was Training: participants undertook a brief period of either imitating the actions of the live or videoed hand-action stimuli (IMI) or performing the opposite actions (IMI-IN).
The participants were recruited from students at Lancaster University. Random selection could not be used because of practical and logistical difficulties; hence, most of the participants were Masters students and some were PhD students. Participants were either friends and acquaintances or were motivated by the chance to win a £10 Amazon voucher. In many cases they took part because they wanted me to participate in their own studies in return. We first conducted the experiments of the video paradigm (15 participants in the imitation training condition and 15 in the inhibition training condition) and then the experiments of the live paradigm (again 15 participants per training condition). We used random assignment for the training conditions: every participant was randomly assigned to either the imitation training or the inhibition training condition. That is, we did not run the first 15 experiments in the imitation condition followed by 15 in the inhibition condition; the training condition varied from participant to participant. Nevertheless, one possible limitation is that we did not do the same for the stimulus factor, as we conducted all the video-condition experiments before the live-condition experiments.
Materials
The experiment was run on the researcher's personal laptop. No specific room was required, which allowed more flexibility in data collection. The experiment script was written in MATLAB using the Cogent toolbox. To measure affective empathy we used the Multifaceted Empathy Test (MET). The full test consists of 40 images, but it was split into two halves so that empathy could also be measured before training. For the imitation and inhibition training we used three images of the researcher's hand: in one image the hand was in the neutral position, in the second the index finger was lifted, and in the third the middle finger was lifted.
Design and Procedure
First, we conducted the experiments of the video paradigm (30 participants) and then the experiments of the live paradigm. The experimental procedure was divided into four phases. First, participants completed the MET (Multifaceted Empathy Test). The first MET contained 22 images. The MET tests affective empathy: for each image, participants rate on a scale from 1 to 4 how strong their affective arousal is. The MET took approximately 5-10 minutes, depending on the participant.
After the first MET, participants did either imitation training or inhibition training. The default position in this task was to hold down two buttons with the right-hand index and middle fingers. In the video condition participants held down the A and Z keys, and in the live paradigm they held down the left and right arrow keys, with the right-hand index and middle fingers respectively. In imitation training, they had to lift their index finger when they saw a lifted index finger (video or live) and lift their middle finger when they saw a lifted middle finger, both as quickly as possible. In inhibition training, participants performed the opposite action to the observed movement: when they saw a lifted index finger they lifted their middle finger, and when they saw a lifted middle finger they lifted their index finger, again as quickly as possible. The training phase consisted of two tasks, each lasting approximately six minutes, with a short break in between.
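To make the two stimulus-response mappings concrete, here is a minimal sketch in plain MATLAB (the experiment itself was scripted in MATLAB with Cogent; the function name and the stimulus codes below are illustrative assumptions, not taken from the author's script):

    function required = requiredResponse(observedFinger, condition)
    % Map an observed finger movement to the finger the participant must lift.
    % observedFinger: 1 = index lifted, 2 = middle lifted (assumed coding)
    % condition: 'IMI' = imitation training, 'IMI-IN' = inhibition training
    switch condition
        case 'IMI'       % imitate: lift the same finger as the one observed
            required = observedFinger;
        case 'IMI-IN'    % inhibit: lift the opposite finger
            required = 3 - observedFinger;   % maps 1 -> 2 and 2 -> 1
        otherwise
            error('Unknown training condition: %s', condition);
    end
    end

For example, requiredResponse(1, 'IMI-IN') returns 2: a participant in the inhibition group who sees a lifted index finger must lift the middle finger.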
After the training came the testing phase, in which we tested the effects of training on Automatic Imitation. Like the training phase, it consisted of two tasks with a break between them. In the first task, participants had to lift only their index finger as quickly as possible, irrespective of which lifted finger they saw (in the video or the live condition). In the second task, they had to lift only their middle finger as quickly as possible, again irrespective of the observed finger.
Automatic imitation was measured as the difference in latency to lift the pre-defined finger when the observed action was the opposite finger movement relative to when it was the same movement. For instance, when the participant had to lift the index finger, we measured the reaction time both when they saw a lifted index finger (congruent) and when they saw a lifted middle finger (incongruent); automatic imitation is the difference between those two reaction times. The testing phase lasted 10 minutes, comprising 100 trials divided between two blocks. After lifting the finger, participants pressed the button again (returning to the default position), so reaction times were measured as the speed with which the participant lifted the finger.
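For illustration, a participant's AI score can be computed from the logged trials as below. This is a minimal sketch with made-up example values; the variable names are assumptions, not the author's analysis code (the record lists SPSS as the data format):

    rt        = [312 355 298 402 331 389];          % button-release latencies (ms), example values
    congruent = [true false true false true false]; % observed finger == required finger?

    meanCongruent   = mean(rt(congruent));          % e.g. lift index while seeing index
    meanIncongruent = mean(rt(~congruent));         % e.g. lift index while seeing middle

    % A positive AI effect means responses were slower when the observed
    % movement differed from the required one, i.e. stronger automatic imitation.
    aiEffect = meanIncongruent - meanCongruent;
    fprintf('AI effect: %.1f ms\n', aiEffect);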
To ensure that the training and testing genuinely targeted Automatic Imitation, and to exclude spatial compatibility confounds, the stimuli were oriented at an angle to the participants' hands (in both the video and the live condition). Unfortunately, the angle could not be identical in the two conditions, but the difference was small. In the video condition the angle was approximately 45 degrees (the participant's fingers rested on the A and Z keys while the stimuli appeared on the laptop screen), and in the live condition the stimulus hand was approximately 90 degrees (perpendicular) to the participant's hand (the participant's fingers rested on the left and right arrow keys while the experimenter's hand rested on the Tab and Shift keys).
In the final phase, participants did a second MET. It was exactly like the first, only with different images. The order of the two MET halves alternated across participants: one participant did MET 1 first and MET 2 at the end, while the next did MET 2 first and MET 1 at the end. Both METs were parts of the same original test, which we split arbitrarily in the middle so as to obtain a pre-training empathy baseline. The order was counterbalanced across participants to control for some images being easier than others. Thus, if we find a large and statistically significant difference in the final MET between the imitation and inhibition training groups, we can rule out the explanation that it arises from some images being easier or harder than others, because the order of the MET halves was counterbalanced equally within both training conditions.
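One straightforward way to test whether training condition changed arousal empathy is to compare pre-to-post MET change scores between the two groups. The sketch below uses simulated ratings and MATLAB's ttest2 (Statistics and Machine Learning Toolbox); the actual analyses were run as ANOVAs and t-tests on the SPSS data file, so this is only an assumed illustration:

    rng(1);                                   % reproducible demo data
    nPerGroup = 15;                           % participants per training group
    preIMI  = 2 + rand(nPerGroup, 1);         % mean MET arousal ratings (1-4 scale), simulated
    postIMI = 2.2 + rand(nPerGroup, 1);
    preIN   = 2 + rand(nPerGroup, 1);
    postIN  = 2 + rand(nPerGroup, 1);

    changeIMI = postIMI - preIMI;             % pre-to-post change, imitation group
    changeIN  = postIN - preIN;               % pre-to-post change, inhibition group

    % Independent-samples t-test on the change scores.
    [~, p, ~, stats] = ttest2(changeIMI, changeIN);
    fprintf('t(%d) = %.2f, p = %.3f\n', stats.df, stats.tstat, p);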

Publisher

Lancaster University

Format

Data/SPSS.sav

Identifier

Baltatzis2017

Contributor

Rebecca James

Rights

Open

Relation

None

Language

English

Type

Data

Coverage

LA1 4YF

LUSTRE

Supervisor

Dr. Daniel Shaw

Project Level

MSc

Topic

Social psychology

Sample Size

60 participants

Statistical Analysis Type

ANOVA, t-test

Files

Consent form.pdf

Collection

Citation

Evangelos Baltatzis, “An investigation into automatic imitation: Comparing live and video setups, the effect of prior training and the influence on affective empathy,” LUSTRE, accessed April 29, 2024, https://www.johnntowse.com/LUSTRE/items/show/90.