The Effect of Ambient Temperature on Cognitive Processing

Dublin Core

Title

The Effect of Ambient Temperature on Cognitive Processing

Creator

Nicola Cook

Date

2018

Description

Over recent decades, climate change has caused the world to become warmer, and this trend is set to continue. Relationships between increased temperature and changes in human behaviour, such as increased aggression, have been identified, so it is important to consider the impact temperature may have on other aspects of behaviour. At present, there is limited research on the effect of temperature on cognitive performance. Within the framework of dual-process theories of cognition, and using a Cognitive Reflection Task (CRT) and a Syllogisms Task, the current report investigates whether increased ambient temperature (artificially manipulated in a temperature-controlled lab) encourages the use of System 1 (i.e. fast, unconscious) processing as opposed to System 2 (i.e. slow, deliberate) processing. Specifically, the paper asks whether increased temperature leads to more heuristic answers on the CRT and more belief bias on the Syllogisms task. We observed no effect of temperature on performance on the CRT or the Syllogisms task. Similarly, we observed no effect of ambient temperature on belief bias or confidence in answers to the Syllogisms task. However, an effect of ambient temperature was found on how many heuristic responses were given on the CRT, with those in the cold condition giving more heuristic answers than those in the hot condition. We conclude that these findings do not provide support for increased temperature impairing these aspects of cognitive performance; we also explore the unexpected results and discuss potential reasons for them.

Subject

Ambient temperature
Cognitive reflection
Syllogistic reasoning
Logistic mixed effects modelling

Source

Participants
65 individuals participated in this research study. Three were excluded for not meeting the pre-decided eligibility criteria of being a native English speaker aged between 18 and 65. This left 62 participants, 19 male and 43 female (Mage = 25.29, SDage = 8.83). Prior to the study, 1.61% had attained a PhD, 9.68% a Master's degree, 40.32% a Bachelor's degree, 33.87% A-Levels, 3.23% GCSEs, 9.68% a Certificate or Diploma, and 1.61% had no qualifications. All participants completed the whole study, and none indicated awareness of the true aims of the study; thus, following the pre-agreed exclusion criteria, all participants were retained for analysis.
Materials
Cognitive Reflection Task. To test participants' cognitive reflection, a form of the CRT (Frederick, 2005) was utilised. The CRT consists of a series of problem-solving questions, each with four multiple-choice answers. For example, the question, 'A bat and a ball cost £1.10 in total. The bat costs £1.00 more than the ball. How much does the ball cost?' is presented alongside the following four options: '10p', '5p', '15p' and '90p'. In this case the gut instinct is usually to respond with '10p'; however, this is incorrect, and the correct response is '5p'.
Frederick's (2005) original version of the task consisted of only three items and has since been criticised for being too short; about 44% of participants who are given the task have previously seen the questions, and this leads to the inflation of their scores on subsequent testing sessions (Stieger & Reips, 2016). Consequently, both Primi, Morsanyi, Chiesi, Donati and Hamilton (2016) and Travers, Rolison and Feeney (2016) have since developed longer versions of the task: Primi et al.'s (2016) consists of six items, whilst Travers, Rolison and Feeney's (2016) consists of eight. The present study combined items from both papers, taking six critical items from Primi et al. (2016) and four items (adapted) from Travers, Rolison and Feeney (2016), used as fillers. The filler questions are included to reduce the chance of participants identifying the aims of the study. These questions differ from the critical questions in that the most obvious answer is the correct one. See Table A1 for a full list of the items used in the CRT.
Syllogisms Task. In order to test participants' syllogistic reasoning, 10 Syllogisms were presented to the participant. Six critical Syllogisms (where the answer was invalid) were taken from Morley, Evans and Handley (2004) and used in the present study. Half of these Syllogisms had believable conclusions, whilst half had unbelievable ones. The believable Syllogisms concluded with a statement that was believable in the real world (e.g. 'Some addictive things are not cigarettes'), but remained invalid given the two premises, whilst the unbelievable ones concluded with a statement that was both unbelievable in the real world (e.g. 'Some millionaires are not rich people') and illogical given the two premises. The task also consisted of four filler Syllogisms. Again, half of the filler items had believable conclusions and half had unbelievable conclusions; however, all of their conclusions were valid. See Table A2 for a full list of the items used in the Syllogisms task.
Procedure
Participants were either recruited through the University’s recruitment portal (SONA), or through individual volunteer sampling. Each testing session was pre-designated as either a hot or cold session and each session consisted of multiple testing slots which were advertised to participants. Participants were unaware of this temperature manipulation and blindly signed up to a testing slot under the pretence of completing a study which investigated behaviour in decision making tasks. As varying numbers of participants signed up to each session, the researchers updated the pre-designated condition of each session accordingly, to ensure there were the same number of participants, 31, within each condition overall.
The study was conducted in a temperature control lab at Lancaster University. This room contains a temperature control panel, which was used to set the ambient temperature of the room to either 16˚C in the cold condition, or 28˚C in the hot condition. A KTJ TA318 Thermometer (with precision of 0.1˚C) was used to record the exact temperature at which each participant completed the study. In the cold condition, the temperature ranged from 15.5˚C to 16.9˚C (M = 16.14) and in the hot condition the temperature ranged from 27.8˚C to 29.8˚C (M = 28.56).
The room consisted of five workstations, separated by partitions, meaning it was possible to test up to five participants at once. Each participant completed the study independently at one of the workstations, which contained a computer monitor, keyboard and mouse, stood on an individual-sized table. When participants arrived for the study, they were seated on an adjustable chair facing the computer, within easy reach of the keyboard and mouse. If participants commented on the temperature of the room, the researcher responded with short statements of agreement, such as 'yes, it is, isn't it', but did not elaborate further, to ensure that researcher influence was kept to a minimum.
Each participant was given time to read the information sheet and provide consent (both digitally presented). Participants then entered demographic information such as their age, nationality and education level. Following this, the main section of the study began, and participants completed both the CRT and the Syllogisms task along with two other short tasks administered on behalf of a separate researcher. These two other tasks were not part of this research study. As part of the Syllogisms task, participants were asked to rate how confident they were in their response to each item, on a sliding scale from 0 (completely unconfident) to 100 (extremely confident). The order in which all four tasks were presented was randomised and counterbalanced across participants to negate any potential order effects. Additionally, the order of items within a task was also randomised for the same reason. Participants were given 5 minutes to complete the CRT, as this is consistent with previous administrations of a CRT (e.g. Primi et al., 2016), and 30 seconds to complete each of the items on the Syllogisms task. These time limits were utilised to encourage participants to keep focus and to mimic the kind of time pressure associated with examinations.
After these tasks, participants were asked 3 debriefing questions (see Appendix B) to assess whether they had identified the aims of the study. Answers to these questions were reviewed independently by two members of the research team and if participants demonstrated a link between temperature and cognitive performance their data would have been removed from the analysis, as their results may have been influenced by their awareness. Both assessors agreed that there was no cause to remove any participant on this basis.
Finally, participants provided information about how comfortable they felt in the lab, on a 6-point scale, and then also how hot or cold they felt on average, on a sliding scale from -50 (extremely cold) to +50 (extremely hot). This second measure was taken to account for individual differences, as many people generally feel warmer or colder for reasons such as illness or a medical condition, and this may influence how hot or cold they felt in the lab.
At the end of the study participants were offered the chance to enter a prize draw to win one of twelve £10 Amazon vouchers. This remuneration method was chosen above the option of paying every participant, to mimic the uncertainty of reward which is common in many settings such as examinations.
Pre-registration
This project was verified and registered on the Open Science Framework on the 21st May 2018 (https://osf.io/p6879/). The present study deviated from the initial plans in the following ways. Firstly, the initial plan to recruit 120 participants proved unachievable within the time constraints, and therefore 62 participants were tested. Secondly, logistic mixed effects models were used for most analyses instead of linear mixed effects models. This was a consequence of reformatting the data to take into account the random effect of items on each task, resulting in the dependent variable being binary. Thirdly, the random effects of items and participants were not always included. This was because models with and without these factors were compared, and random factors were only included if they helped the model to better fit the variation in the data. Finally, the initial plan was to investigate the effect of mood as an exploratory factor. The data on mood was collected; however, further investigation was not possible due to project constraints.
Analysis Strategy
The aim of this paper was to determine whether increased temperature impairs cognitive performance as measured by a CRT and Syllogisms task. To facilitate assessment of results, the data was analysed using R (R Core Team, 2017). The numerical variables used as predictors in analysis were scaled using the 'scale' function from the 'standardize' package (Eager, 2017). To conduct the desired analysis, the data was transformed from wide to long format using the 'gather' function from the 'tidyr' package (Wickham & Henry, 2018).
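The preprocessing steps above might be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the column names (baseline_temp, q1–q3, etc.) and values are assumptions.

```r
# Minimal sketch of the preprocessing described above, using base R's
# scale() and tidyr's gather(). All column names and values are
# hypothetical stand-ins for the study's variables.
library(tidyr)

wide <- data.frame(
  participant   = 1:4,
  condition     = c("Hot", "Hot", "Cold", "Cold"),
  baseline_temp = c(10, -5, 20, 0),   # self-rated -50..+50 scale
  q1 = c(1, 0, 1, 1), q2 = c(0, 0, 1, 0), q3 = c(1, 1, 0, 0)
)

# Standardise a numeric predictor (centre and divide by SD)
wide$baseline_temp_z <- as.numeric(scale(wide$baseline_temp))

# Wide -> long: one row per participant x item, ready for mixed models
long <- gather(wide, key = "item", value = "correct", q1:q3)
head(long)
```

Each row of `long` then represents one participant's response to one item, which is the format the mixed effects models below require.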
To assess whether the data collected supported the hypotheses and therefore the extent to which temperature condition predicted test performance, several logistic mixed effects (LME) models were computed, using the ‘glmer’ function from the ‘lme4’ package (Bates, Maechler, Bolker & Walker, 2015). This was the most appropriate method of analysis to use as both the dependent and key independent variables were binary and it allowed the random effects of participants’ individual differences, as well as the random effect of items within each task, to be taken into account, which is necessary in a repeated measures design. The models contained the fixed effects of condition (Hot vs. Cold), baseline temperature and comfort level and the interaction effects of condition with comfort level and with baseline temperature. They also included the random effects of participants and/or items, depending on which random factors (if any) were found to aid the model to fit the variation in data best. To evaluate whether the inclusion of the random effects was required in each model, comparisons were made between the Akaike Information Criterion (AIC) of the final model and identical models with (a) the random effects removed, (b) only the random effect of items, and (c) only the random effect of participants, see Table C1.
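The model structure described above could be sketched in R roughly as follows. The variable names and the simulated data are illustrative assumptions made so the sketch runs; the real analyses used the study data.

```r
# Hedged sketch of the glmer models and the AIC-based comparison of
# random-effects structures described above (lme4; Bates et al., 2015).
library(lme4)
set.seed(1)

# Simulated long-format data purely to make the sketch runnable
long <- data.frame(
  participant     = factor(rep(1:20, each = 6)),
  item            = factor(rep(1:6, times = 20)),
  condition       = rep(c("Hot", "Cold"), each = 60),
  comfort_z       = rnorm(120),
  baseline_temp_z = rnorm(120),
  correct         = rbinom(120, 1, 0.5)
)

# Fixed effects: condition, comfort, baseline temperature, plus the
# condition x comfort and condition x baseline interactions.
full <- glmer(
  correct ~ condition * comfort_z + condition * baseline_temp_z +
    (1 | participant) + (1 | item),
  data = long, family = binomial
)

# Reduced random-effects structures, compared on AIC
no_item    <- update(full, . ~ . - (1 | item))
no_ppt     <- update(full, . ~ . - (1 | participant))
fixed_only <- glm(correct ~ condition * (comfort_z + baseline_temp_z),
                  data = long, family = binomial)

AIC(full, no_item, no_ppt)
AIC(fixed_only)
```

The model with the lowest AIC would be retained, mirroring the comparisons summarised in Table C1.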
When reporting logistic models, we give estimated coefficients (β), standard errors (SE), z-values (z) and p-values (p) of predicting variables. We also report the conditional R2 value (R2_c) for each model: a ratio which gives the variance explained by the fixed and random effects as a proportion of the total variance explained by the fixed effects, random effects and residuals. This is calculated using the 'r.squaredGLMM' function of the 'MuMIn' package (Barton, 2018). Where significant effects are found, estimated log odds are transformed into odds ratios by exponentiating the coefficients, to aid the interpretation of the effect.
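The log-odds-to-odds-ratio transformation is a one-liner; the coefficient value here is made up for illustration.

```r
# A logistic coefficient is on the log-odds scale; exponentiating it
# gives an odds ratio. A hypothetical estimate of 0.69 corresponds to
# roughly a doubling of the odds of a '1' response.
beta <- 0.69
odds_ratio <- exp(beta)
odds_ratio  # ~2.0
```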
Cognitive Reflection Task. To investigate whether there was a difference in performance on the CRT between individuals in the hot condition and individuals in the cold condition, the data was coded such that a correct answer was given the value of ‘1’ whilst incorrect answers were given the value of ‘0’. To address whether there was a difference in the number of heuristic responses given on the CRT, the data was recoded (‘1’ = Heuristic response, ‘0’ = Other response).
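The two CRT coding schemes amount to simple recodes of the same response column; the answer labels below are taken from the bat-and-ball example and are otherwise illustrative.

```r
# Illustrative recoding for the two CRT analyses: accuracy coding
# (1 = correct) and heuristic coding (1 = heuristic response).
response         <- c("5p", "10p", "90p", "5p")  # hypothetical answers
correct_answer   <- "5p"
heuristic_answer <- "10p"

accuracy  <- ifelse(response == correct_answer, 1, 0)
heuristic <- ifelse(response == heuristic_answer, 1, 0)
accuracy   # 1 0 0 1
heuristic  # 0 1 0 0
```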
Syllogisms Task. To investigate whether there was a difference in performance on the Syllogisms task between individuals in the hot condition and individuals in the cold condition, the data was coded such that a correct answer ('Invalid') to a critical item was given the value of '1' whilst incorrect answers ('Valid') were given the value of '0'. In order to investigate whether participants in the hot condition showed more belief bias than those in the cold condition, we extracted the three invalid believable Syllogisms and the two valid unbelievable Syllogisms. The data was recoded such that when a 'Valid' answer was given to an invalid but believable syllogism, or when an 'Invalid' answer was given to a valid but unbelievable syllogism, the response was given a value of '1', to signify belief bias. Other responses were given a value of '0'. To analyse the ratings of confidence in participants' answers to the Syllogisms task, a linear mixed effects model was used, as the dependent variable was continuous.
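The belief-bias recode can be sketched as below: a response counts as biased when the validity judgement follows believability rather than logic. The column names and example rows are assumptions about the data layout, not the study's actual code.

```r
# Sketch of the belief-bias coding described above. A '1' marks a
# response driven by believability: accepting an invalid-but-believable
# conclusion, or rejecting a valid-but-unbelievable one.
items <- data.frame(
  validity      = c("Invalid", "Invalid", "Valid"),
  believability = c("Believable", "Believable", "Unbelievable"),
  response      = c("Valid", "Invalid", "Invalid")
)

items$belief_bias <- ifelse(
  (items$validity == "Invalid" & items$believability == "Believable" &
     items$response == "Valid") |
  (items$validity == "Valid" & items$believability == "Unbelievable" &
     items$response == "Invalid"),
  1, 0
)
items$belief_bias  # 1 0 1
```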
Exploratory Analysis. Data collection was conducted during the summer months, partly whilst Britain was experiencing a period of unusually hot weather. It is therefore possible that participants may not have been fully affected by the temperature manipulation. For example, those in the cold condition may have still suffered the negative effects of heat as a result of spending time outside in the heat prior to the study. To address this, the actual environmental temperatures recorded at a local weather station for the times of participation were taken from 'WeatherOnline.co.uk' and added to the data set. The LME models included the outside temperature, condition, and the interaction between outside temperature and temperature condition as the fixed factors, and the random effects of items and participants.

Publisher

Lancaster University

Format

data/Excel.csv

Identifier

Cook2018

Contributor

Ellie Ball

Rights

Open

Relation

None

Language

English

Type

Data

Coverage

LA1 4YF

LUSTRE

Supervisor

Dr. Dermot Lynott

Project Level

MSc

Topic

Cognitive Psychology

Sample Size

62 Participants (19 male and 43 female)

Statistical Analysis Type

Logistic and linear mixed effects models

Files

Citation

Nicola Cook, “The Effect of Ambient Temperature on Cognitive Processing,” LUSTRE, accessed May 5, 2024, https://www.johnntowse.com/LUSTRE/items/show/92.