Does implicit mentalising involve the representation of others’ mental state content?

Dublin Core

Title

Does implicit mentalising involve the representation of others’ mental state content?

Creator

Malcolm Wong

Date

07/09/2022

Description

Implicit mentalising involves the automatic awareness of the perspectives of those around oneself. Its development is crucial to successful social functioning and joint action. However, the domain specificity of implicit mentalising is debated. The individual/joint Simon task is often used to demonstrate implicit mentalising in the form of a Joint Simon Effect (JSE), in which a spatial compatibility effect is elicited more strongly in a joint than in an individual condition. Some have proposed that the JSE stems from the automatic action co-representation of a social partner’s frame of reference, which creates a spatial overlap between stimulus and response locations in the joint (but not the individual) condition. However, others have argued that any sufficiently salient entity (not necessarily a social partner) can induce the JSE. To provide a fresh perspective, the present study investigated the content of co-representation (n = 65). We employed a novel variant of the individual/joint Simon task in which the typical geometric stimuli were replaced with a unique set of animal silhouettes, half of which were surreptitiously assigned to the participant and half to their partner. Critically, to examine the content of co-representation, participants were afterwards presented with a surprise image recognition task, and image memory accuracy was analysed to identify any partner-driven effects exclusive to the joint condition. However, the experiment failed to replicate the key JSE in the Simon task, as only a cross-condition spatial compatibility effect was found. This severely limited our ability to interpret the results of the recognition memory task and its implications for the contents of co-representation. Potential design-related reasons for these inconclusive results are discussed, and possible methodological remedies for future studies are suggested.

Subject

implicit mentalising, co-representation, joint action, domain specificity

Source

Pre-test: Selection of Suitable Stimuli
Participants
Twenty-five undergraduate students at Lancaster University were recruited via SONA systems (a University-managed research participation system) and gave informed consent to participate in an online pre-test that aided in the selection of suitable experimental stimuli for the main experiment. Ethical considerations were reviewed and approved by a member of the University Psychology department.
Stimuli and Materials
Pavlovia, the online counterpart to the experiment building software package PsychoPy (version 2022.2.0; Peirce et al., 2019), was used to remotely run the stimuli selection pre-test. One hundred images of common black-and-white animal silhouettes were initially selected and downloaded from PhyloPic (Palomo-Munoz, n.d.), an online database of taxonomic organism images, freely reusable under a Creative Commons Attribution 3.0 Unported license. All images were resized and standardised to fit within an 854 x 480-pixel rectangle.
Design and Procedure
An online pre-test was conducted to identify the recognisability of possible animal stimuli and to select the most recognisable set of 32 animal silhouettes for the main experiment. Recognisability was an important consideration because participants would only catch a brief glimpse of the animals; therefore, the ability to recognise the silhouettes quickly and subconsciously was paramount. The 100 chosen animal silhouettes (as outlined in the Stimuli and Materials section) were randomised and sequentially presented. Each image was displayed for 1000 ms to match the duration of stimulus exposure in the final experimental design.
The participant then rated each animal’s recognisability on a 7-point Likert scale (1 = Extremely Unrecognisable to 7 = Extremely Recognisable). Additionally, they were asked to guess each animal’s name by typing it in a text box, and to provide a confidence rating for each naming attempt (again on a 7-point Likert scale, from 1 = Extremely Unconfident to 7 = Extremely Confident). To choose the 32 animals, each animal’s recognisability ratings were averaged and the animals sorted in descending order of mean score. Duplicate species were excluded by removing all but the highest-scoring animal of each species. Because two animals tied for 32nd place, the one with the higher name-guessing confidence rating was selected.
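The selection procedure above can be sketched as follows. This is a minimal illustration only: the data shapes and names (`ratings`, `species_of`, `pick_top`) are assumptions, not the study's actual analysis code.

```python
def pick_top(ratings, species_of, k=32):
    """Rank animals by mean recognisability (ties broken by mean naming
    confidence), drop lower-ranked duplicates of a species, keep top k.

    ratings: {animal: {"recog": [scores], "conf": [scores]}} (hypothetical shape)
    species_of: {animal: species label}
    """
    mean = lambda xs: sum(xs) / len(xs)
    ranked = sorted(ratings,
                    key=lambda a: (mean(ratings[a]["recog"]),
                                   mean(ratings[a]["conf"])),
                    reverse=True)
    chosen, seen = [], set()
    for animal in ranked:
        if species_of[animal] in seen:
            continue  # exclude duplicate species, keeping the higher-ranked one
        seen.add(species_of[animal])
        chosen.append(animal)
        if len(chosen) == k:
            break
    return chosen
```

Sorting on a (recognisability, confidence) tuple implements the tie-break automatically: confidence only decides the order when mean recognisability is equal, as it did for the two animals tied at 32nd place.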
Main Experiment
Participants
Sixty-five participants who had not previously taken part in the pre-test gave informed consent to participate in the main experiment (Mage = 23.93 years, SDage = 8.06; 49 female). Fifty-one were students, staff, or members of the public at Lancaster University recruited via SONA systems or through opportunistic recruitment around the University campus (e.g., on University Open Days). The remaining 14 participants were A-level students from around Lancashire, recruited as part of a Psychology taster event at the University. All participants had normal or corrected-to-normal vision and normal colour vision.
Past studies of the JSE obtained medium-to-large effect sizes (e.g., Shafaei et al., 2020; Stenzel et al., 2014). An a priori power analysis was performed using G*Power (Version 3.1.9.6; Faul et al., 2009) to estimate the sample size required to detect a similar interaction. Because of the novel adaptation made to the Simon task (which could attenuate previously found effects) and the additional recognition memory task, a conservative effect size estimate was used. With power set to 0.8 and effect size f set to 0.2, the projected sample size needed to detect a small-to-medium repeated-measures, within-between interaction was approximately 52.
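A figure of this order can be approximated outside G*Power with a noncentral-F computation. The sketch below is not the authors' procedure: it assumes α = .05 and G*Power's default correlation of ρ = .5 among repeated measures, neither of which is stated above, so the resulting N may differ slightly from 52.

```python
from scipy.stats import f as f_dist, ncf

def rm_interaction_n(f_eff=0.2, power=0.8, alpha=0.05, k=2, m=2, rho=0.5):
    """Smallest total N (stepped in multiples of the k groups) at which
    the within-between interaction test reaches the target power,
    following G*Power's repeated-measures ANOVA conventions:
    noncentrality lambda = f^2 * N * m / (1 - rho)."""
    df1 = (k - 1) * (m - 1)          # interaction degrees of freedom
    n = 2 * k
    while True:
        lam = f_eff ** 2 * n * m / (1 - rho)
        df2 = (n - k) * (m - 1)      # error degrees of freedom
        crit = f_dist.ppf(1 - alpha, df1, df2)
        if 1 - ncf.cdf(crit, df1, df2, lam) >= power:
            return n
        n += k
```

Under these assumptions (k = 2 conditions, m = 2 compatibility levels) the loop lands in the low fifties, consistent with the reported estimate.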
Stimuli and Materials
The online survey software Qualtrics (Qualtrics, 2022) was used to provide participants in the main experiment with information and consent forms, and to obtain demographic information and (for participants in the joint condition) interpersonal relationship scores (see Appendix A for a list of the presented questions). The Simon and Recognition tasks were run using PsychoPy on three iMac desktop computers with screen sizes of 60 cm by 34 cm and screen resolutions of 5120 x 2880 @ 60 Hz. Responses to the Simon task were recorded using custom pushbuttons (see Appendix B for images) assembled and provided by Departmental technicians.
The 32 animals chosen via the pre-test for use in the main experiment (Simon/Recognition tasks) were recoloured entirely in either blue (hexadecimal colour code: #00FFFF) or orange (#FFA500). Varying by trial, each animal was displayed 1440 pixels to either the left or the right of the centre of the screen (for an example, see Figure 1).
Figure 1
Example of Stimuli Used in Simon Task

Note. Diagram (a) contains a screenshot of the Simon task in which the orange stimulus appeared on the left, whilst diagram (b) depicts a blue stimulus appearing on the right.
Design and Procedure
Simon Task. For the Simon task, a 2 x 2 mixed design was employed, with Compatibility (compatible vs. incompatible) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Participants were first individually directed to computers running Qualtrics to read and sign information and consent forms, and to provide demographic information. Afterwards, participants were guided to a third computer, where they sat approximately 60 cm from the screen, offset to either the left or right side (approximately 45° diagonally from the centre of the screen), with a custom pushbutton set directly in front of them. They were instructed to operate the pushbutton with their dominant hand. In the joint condition, each pair of participants sat side-by-side, approximately 75 cm apart. In the individual condition, an empty chair was placed in the equivalent location next to the participant.
In both conditions, participants were individually assigned a colour (either blue or orange) to attend to. Participants were instructed to “catch” the animals by pressing their pushbutton whenever an animal silhouette of their assigned colour appeared on the computer screen. Participants were not otherwise instructed to pay specific attention to any animal species, nor to the location (left/right) in which it appeared; the focus was solely on the animals’ colour. Crucially, participants were unaware of the recognition task that came afterwards. Sixteen of the 32 animal silhouettes selected during the pre-test were displayed during the Simon task. These 16 animals were divided in half and matched to each of the two colours, such that each participant was assigned eight animals in their respective colour. The remaining 16 animals were used as foils in the Recognition task. Participant seating location (left/right), stimulus colour (blue/orange), and animal assignment (as stimuli in the Simon task vs. as foils in the Recognition task) were counterbalanced between participants. Additionally, stimulus presentation position (left/right, and by extension, compatibility/incompatibility) was pseudorandomised on a within-subject, per-block basis.
After reading brief instructions, participants completed a practice section. Once a participant had achieved eight more cumulative correct trials than incorrect/time-out trials, they were allowed to proceed to the main experiment. This consisted of eight experimental blocks of 16 trials each (corresponding to the 16 chosen animals), totalling 128 trials. Trials in which the coloured stimulus and its correct corresponding response pushbutton were spatially congruent were coded as compatible, whilst spatially incongruent trials were coded as incompatible. Half of the trials in each block (i.e., eight) were spatially compatible and the remaining half incompatible, and each block contained the same number of compatible and incompatible trials for each participant (i.e., four of each per participant).
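The per-block constraints above can be sketched as follows. This is a hypothetical illustration of the pseudorandomisation, not the study's actual PsychoPy implementation, and the names (`make_block`, `animals_by_colour`) are assumptions.

```python
import random

def make_block(animals_by_colour, rng):
    """Build one 16-trial block: each of the 16 animals appears exactly
    once, and each colour's eight animals are split into four spatially
    compatible and four incompatible trials.

    animals_by_colour: e.g. {"blue": [8 animals], "orange": [8 animals]}
    """
    trials = []
    for colour, animals in animals_by_colour.items():
        order = animals[:]
        rng.shuffle(order)
        for i, animal in enumerate(order):
            trials.append({"animal": animal, "colour": colour,
                           "compatible": i < len(order) // 2})
    rng.shuffle(trials)  # mix colours and compatibility within the block
    return trials
```

Because the compatible/incompatible split is applied per colour before the final shuffle, every block satisfies both constraints (eight compatible trials overall, four per participant) while still varying trial order randomly.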
A mandatory 10-second break was included at the halfway point of the experiment (i.e., after block four, 64 trials). Each trial began with a fixation cross in the centre of the screen for 250 ms. Following this, a coloured stimulus (a circle in the practice trials, an animal silhouette in the main experiment) appeared on either the left or right of the screen for 1000 ms, followed by a 250 ms intertrial interval (blank screen). If a participant correctly pressed their pushbutton when a stimulus of their assigned colour appeared, they received the feedback “well done”. Incorrect responses (i.e., pressing the pushbutton when a stimulus not of the assigned colour appeared) and timeouts (i.e., failing to respond within 1000 ms) were met with the feedback “incorrect, sorry” or “timeout exceeded” respectively. In addition to accuracy (correct/incorrect responses), each trial’s reaction time (time elapsed between stimulus onset and pushbutton response) was recorded as a response variable.
Regardless of participants’ response time, each stimulus appeared for the full 1000 ms, and feedback was only provided after the full second had elapsed. This deviated from the design of previously used Simon tasks: in some studies, each trial (and thus stimulus presentation) terminated immediately upon any type of response (e.g., Dudarev et al., 2021); in others, each stimulus was displayed for only a fraction of a second (e.g., 150 ms; Dittrich et al., 2012), followed by a response window during which the stimulus was not displayed at all. Fixing the stimulus presentation duration at 1000 ms irrespective of participant response ensured that every animal colour/species was displayed for an equal duration. This was important so as not to bias participants’ incidental memory towards trials on which a participant was slower to respond (which would otherwise have kept the stimulus on screen for longer, disproportionately encouraging encoding).
Surprise Recognition Task. For the recognition task, a 2 x 2 mixed design was employed, with Colour Assignment (self-assigned vs. other-assigned) as a within-subject variable and Condition (individual vs. joint) as a between-subject variable. Colour Assignment refers to whether the animal was previously assigned to, and presented in the Simon task as, the participant’s personal colour (i.e., self-assigned) or their partner’s colour (in the individual condition, this simply refers to the not-self-assigned colour, i.e., other-assigned).
After completing the Simon task, participants were each guided back to the individual computers they had initially used to give consent and demographic information, so as to minimise bias from familiarity effects on memory. Using a PsychoPy programme, participants were shown 32 black-and-white animal silhouettes one-by-one; 16 of these had appeared in the Simon task, while the remaining 16 previously unseen animal images were included as foils. For each silhouette, participants were asked two questions: (1) “Do you recall seeing this animal in the task before?”, with binary “yes” or “no” response options; and (2) “How confident are you in your answer above?”, with a 7-point Likert scale from 1 = Extremely Unconfident to 7 = Extremely Confident as response options. For both questions, participants used a mouse to click on their desired response. Participants were additionally instructed that it did not matter what colour the animals had appeared in during the previous (Simon) task; so long as they remembered having seen the silhouette at all, they were asked to select “yes”. There was no time limit on this task. Participants’ responses to the two questions were recorded as the key response variables.
Check Questions and Interpersonal Closeness Ratings. At the end of the study, participants were asked several check questions which, depending on their answers, led to further questions. For example, they were asked whether they had any suspicions about what the study was testing, and whether they had deliberately paid specific attention to, or memorised, the animal species shown in the Simon task (see Appendix A for a full list of questions and associated branching paths). The latter questions served to identify whether participants had intentionally memorised the animals, which would undermine the usefulness of the data collected in the recognition task.
Additionally, participants in the joint condition were asked to individually rate their feelings of interpersonal closeness with their task partner via two questions. The first was a text-based question asking how well the participant knew their partner (Shafaei et al., 2020), with four possible responses ranging from “I have never seen him/her before: s/he is a stranger to me.” to “I know him/her very well and I have a familial/friendly/spousal relationship with him/her.” The second question contained the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992), which consisted of pictographic representations of the degree of interpersonal relationship. Specifically, as can be seen in Figure 2, the scale contained six diagrams, each consisting of two Venn-diagram-esque labelled circles representing the “self” (i.e., the participant) and the “other” (i.e., the participant’s partner) respectively. The six diagrams depicted the circles at varying levels of overlap, as a proxy measure of increasing interconnectedness. Participants were asked to rate which diagram best described their relationship with their partner during the study. Following Shafaei et al. (2020), the text-based question was included as a confirmatory measure for the IOS scale, which served as the primary measure of interpersonal closeness.
Figure 2
Inclusion of Other in the Self (IOS) scale

Publisher

Lancaster University

Format

Data/Excel.csv
Analysis/r_file.R

Identifier

Wong07092022

Contributor

Malcolm Wong
Aubrey Covill
Elisha Moreton

Rights

Open

Relation

N/A

Language

English

Type

Data

Coverage

LA1 4YF

LUSTRE

Supervisor

Dr. Jessica Wang

Project Level

MSc

Topic

Cognitive, Perception

Sample Size

25 in a pre-test, 65 in the main experiment

Statistical Analysis Type

Linear Mixed Effects Modelling

Files

Collection

Citation

Malcolm Wong, “Does implicit mentalising involve the representation of others’ mental state content?,” LUSTRE, accessed March 28, 2024, https://www.johnntowse.com/LUSTRE/items/show/149.