The Oxford Seminar in the Psychology of Music (OSPoM)
The Oxford Seminar in the Psychology of Music (OSPoM) features leading researchers presenting a wide variety of topics at the intersection of music and psychology. The Seminar is convened by Eric Clarke and Manuel Anglada-Tort (University of Oxford).
Enjoying a position at a neglected part of the clock, seminars will start at 4.56pm GMT and last for 90 minutes – 45 minutes of presentation followed by 45 minutes of discussion. These seminars are open to all and are hosted in a hybrid format: join in person (in the Committee Room of the Oxford Faculty of Music) or remotely via the Faculty’s YouTube channel.
Calendar of speakers 2022-23:
- 26 October 2022: Peter Harrison (University of Cambridge): Timbre and consonance
- 9 November 2022: Eldritch Priest (Simon Fraser University): A plague on both your ears... (or, a reverie on the technogenesis of earworms)
- 23 November 2022: Kelly Jakubowski (Durham University): The power(?) of music: Probing the relationship between music and autobiographical memories
- 25 January 2023: Manuel Anglada-Tort (University of Oxford): Studying the effect of oral transmission on music evolution using online singing experiments
- Next speaker, 8 February 2023: Freya Bailes (University of Leeds): Emotional Engagement with Music affects subsequent Musical Imagery: An Experimental Study
See the tabs below for speakers and abstracts of previous seminars.
Alexandra Lamont (Keele University): What can Musical Memories tell us about Musical Preferences? (17 February 2021)
Existing research into music preferences has illustrated, broadly, that different people seem to prefer different types of music, and that preferences shift with age, with some suggestions that context may also play a role. In this talk I present new data exploring preferences through musical memories, using a variety of recognition and recall techniques (experiments, self-reports, and interviews). The data shed light on people’s memories of a wide range of music over the lifespan and, more importantly, the influences on these memories. The talk will cover details of theories, methods and evidence on questions of musical memories and preferences.
Renee Timmers (University of Sheffield): A Probabilistic Analysis of Emotion and Meaning in Music (3 March 2021)
Perception of emotion and meaning in music is to a large extent probabilistic rather than deterministic. Certain properties of music may increase the likelihood that a particular emotion is perceived over another or a particular imagery or association is evoked. What emotion or imagery is perceived also depends on contextual factors such as the a priori probability of emotions, listeners’ sensitivities and biases, and the distinctiveness of the properties within the musical context. In this presentation, I will explore the use of Bayes’ rule to model the perception of emotion and meaning, and to capture the influence of these contextually shaping factors.
Considering emotion perception according to Bayes’ rule, the posterior probability of perceiving an emotion given a musical property M is equal to the likelihood of the observation of the musical property if the hypothesis of that emotion was true, times the prior probability of that emotion (in the context of competing emotions). To develop this method, measures of prior probability of emotions are required as well as probability estimates of musical properties in emotional expressions. Analogously, the posterior probability of multimodal imagery given musical property M is equal to the likelihood of that musical property in the context of the hypothesised multimodal phenomenon, additionally taking into account the prior probability of the phenomenon and the frequency of occurrence of the musical property across multimodal phenomena. Finally, probability calculations can be used to examine relationships between emotion and meaning in music: what is the posterior probability of an emotion given a multimodal association or conversely what is the probability of a given multimodal imagery given an emotion?
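The relationship described above can be written out compactly. This is a sketch in standard Bayesian notation rather than the speaker's own formulation; the symbols (E for a candidate emotion, M for an observed musical property, with competing emotions E_1, …, E_n) are labels introduced here for illustration:

```latex
% Posterior probability of perceiving emotion E given musical property M:
P(E \mid M) = \frac{P(M \mid E)\, P(E)}{\sum_{i=1}^{n} P(M \mid E_i)\, P(E_i)}
% P(M | E): likelihood of the property under that emotion
% P(E): prior probability of the emotion among its competitors
```

The same form applies, analogously, with multimodal imagery in place of emotion, and chaining the two posteriors gives the emotion-given-imagery (and imagery-given-emotion) probabilities the abstract raises at the end.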
Data from existing research articles are used to get a proof of concept of these applications of Bayes’ rule to model perception of emotion and meaning in music. Future directions for research are discussed as well as benefits and limitations of the adoption of a Bayesian approach to music cognition.
Diana Omigie (Goldsmiths, University of London): Music listening as a window into the aesthetic pleasure of information seeking (12 May 2021)
Information seeking may be defined as the motivation to seek and explore information in the environment. The availability of computational tools that allow the information theoretic properties of musical events to be objectively quantified thus makes music an optimal testbed for studying this important drive. In this talk, I will first present studies in which we have used depth-electrode intracranial recordings to examine the cortical and subcortical correlates of music-induced surprise and uncertainty. I will then present studies that, using computational modelling, provide evidence of music’s ability to induce curiosity as a function of the idiosyncrasies of its unfolding structure. Finally, I will provide evidence that individual differences in trait curiosity may account for variations in the timepoints at which listeners derive maximal enjoyment from music. Several theories suggest a role of curiosity and interest in the aesthetic response, but the potential of musical stimuli to throw light on these epistemic emotions is still under-exploited. I will close with recommendations as to how musical stimuli might be useful in addressing important open questions in the cognitive neurosciences of information seeking.
Ian Cross (University of Cambridge): Affiliative Interaction in Music and Speech (26 May 2021)
Most research into language-music relationships has privileged language in the comparisons that it makes between the two domains. Music has generally been explored as though it were a sonic domain made up of complex patterns that can elicit aesthetic or hedonic responses, while studies of language are founded on its capacity to express complex propositions that can reflect states of affairs in the world. While music may resemble language in its combinatorial properties, in comparison with language it lacks the all-important property of compositionality; it thus appears to be a pale analogue of language with limited utility and little relevance outside the realm of entertainment. This view is, however, completely controverted by the fact that across cultures music constitutes a participatory medium for communicative interaction with diverse and significant functions. Participatory music displays features and involves processes that equip it to manage social relations by inducing a sense of mutual affiliation between participants. At least some of those features and processes are present in other modes of human interaction, particularly those genres of speech concerned with establishing or continuing mutual affiliation or attachment, generally termed "phatic". I suggest that music as an interactive medium intersects so significantly with speech in the phatic register as to be indistinguishable from it. I hypothesise that affiliative communicative interaction need be neither music nor speech, but that these are best construed as culturally-constituted categories of human behaviour; the superordinate and generalisable category into which both fall is that of human affiliative communicative behaviour, which can be claimed in different cultures to be music, speech, or any one of a range of other categories in other possible taxonomies of human communicative behaviour. 
This paper will survey evidence from ethnomusicology, linguistics and the cognitive sciences of music, and from recent research at Cambridge into spontaneous interaction in speech and music, that lends support to this hypothesis.
Martin Clayton (Durham University): Interpersonal Entrainment in Music Performance (3 November 2021)
Entrainment has proved a useful tool to researchers seeking to understand how musicians, dancers and other participants in musical events get and remain ‘in time’ with one another. Just as mechanical systems can synchronise with each other if connected by a coupling force, biological systems can become synchronised through the exchange of sensory information, and in this view human beings use auditory, visual and other modalities to coordinate their musical actions. In the Interpersonal Entrainment in Music Performance (IEMP) project we distinguished two aspects of entrainment: synchronisation, which we studied through statistical analysis of onset timing information in instrumental music, and coordination, which we explored through analysis of body movement. The former approach allows synchronisation to be compared on a like for like basis between very different musical genres (e.g. Afrogenic drum ensembles, Indian instrumental duos, Tunisian stambeli groups), and allows us to speculate on the factors which may influence the precision of alignment between sound events. The latter approach draws attention to another dimension of interpersonal coordination, taking place over longer time-spans, which includes both deliberate and unconscious processes through which individuals manage the course of performances. In this presentation I focus on the implications of this work for understanding the ways in which interpersonal musical entrainment is socially effective and culturally mediated.
Maria Witek (University of Birmingham): Embodied Entrainment and DJing (16 February 2022)
We can explain how musicians, listeners and dancers synchronise their attention and movements to a musical beat by examining the process of entrainment. Entrainment can be modelled using two self-sustaining oscillators which gradually become synchronised as one oscillator drives the adaptations in phase and period of the other. In the normal understanding of rhythmic behaviour, these adaptations take the form of phase and period error corrections, allowing agents to adjust their periodic movements by moving faster or slower in order to reduce asynchrony. In DJing, this sensorimotor relationship between movement and timing is complicated. When mixing different records together, a DJ working without automated synchronisation has to manipulate the positions of the records and adjust the controllers on the turntables to align their phase and period – a process known as beatmatching. These movements are not periodic themselves, but rather exert second-order control over the synchrony of the moving records. What characterises the process of entrainment during beatmatching in DJing? What constraints are put on the mechanisms of temporal error correction, and what does it mean for our understanding of the mind that these corrections happen on the turntables, as opposed to inside the head of the DJ? Via the theory of Enactivism - in which the coupling between agents and their environments form the basis for embodied life and mind - I argue that beatmatching presents a form of entrainment that has yet to be considered in music psychology and philosophy of mind. In beatmatching, temporal error correction must be consciously controlled via the skilful manipulation of the records and the turntables, and the DJ embodies the driving force that synchronises the two records. In this way, the beatmatching DJ offers an unusually vivid example of the embodied distribution of rhythmic entrainment and its underlying temporal correction mechanisms.
Jonna Vuoskoski (University of Oslo): Empathy, Entrainment and Social Bonding (2 March 2022)
Music is an inherently social phenomenon. Even when we listen to music in solitude, social cognitive and affective processes play an important role in shaping our perception and experience. In my own work, I have explored how empathy in particular facilitates and modulates our engagement with music. Through recent empirical studies, I will demonstrate how empathy contributes to both affective attunement and bodily entrainment to music. Furthermore, I will argue that trait empathy may also facilitate the social bonding effects of musical engagement, whether in the context of music listening or joint action. Finally, I will also discuss how feelings of being moved by music could be understood through a ‘social lens’ as experiences and appraisals of connectedness, facilitated by empathic engagement.
Tal Chen Rabinowitch (University of Haifa): Musical Interaction: Between Tight and Loose (1 June 2022)
In this talk I will explore the possible social structures underlying different modes of performance: from improvisational to structured. Following a review of several musical genres, I will present a theoretical model based on the tight-loose paradigm and expand on the intricate social affordances that are embedded in different forms of musical performances.
Peter Harrison (University of Cambridge): Timbre and consonance (26 October 2022)
The phenomenon of ‘consonance’ in Western tonal music is traditionally considered to be a function of the underlying frequency ratios between chord tones. Here we explore the sense in which consonance also depends on the spectral properties of the chord tones themselves. Through large-scale behavioural experiments combined with computer modelling we show that this relationship between tone spectra and chordal consonance runs deep, and provides valuable new perspectives on the psychoacoustic and cognitive mechanisms underlying consonance perception.
Eldritch Priest (Simon Fraser University): A plague on both your ears... (or, a reverie on the technogenesis of earworms) (9 November 2022)
Almost everyone knows what it’s like to have a song “stuck in their head,” or more accurately, what it’s like to have the abstract refrains of a melodic shard or lyrical splinter spread to the finer tissues of feeling that we call thinking. But curiously, no one seems to understand why this occurs or how it may be remedied or prevented. Although research in experimental psychology and the neurosciences is being conducted to determine the memory systems and brain networks that are implicated in the production and maintenance of “earworms,” there’s very little philosophical or speculative thinking that seeks to address the technical and ecological nature of these attentional parasites, and the way their spontaneous capture of attention articulates with certain tactics of cognitive capitalism. In this work, I approach earworms as an artificial matter in the sense that cognition is always, as Bernard Stiegler argues, a function of technics. The earworm, I suggest, is not a simple neurological aberration but the way musical sounds pressed into the unconscious technological refrains of the everyday—where cognition, reflex and habit coincide as pre-individuated techniques of existence—make themselves felt as thought. Expanding on this premise of their technogenesis, I argue that earworms demonstrate a phase shift in audition initiated by the coupling of nervous and sound systems where listening becomes, like the “activity” of dying, something that both happens to us and something that we do. Moreover, I contend that this “deponent mode” of listening is implicated in a broader economic drift led by cognitive capitalism to make so-called “task-less” or stimulus independent psychic events productive activities, as evidenced by recent work in neuroscience to identify a “default-state” for the resting brain such that daydreaming and mind-wandering can be conceptually re-figured to function as the basis for an essentially distracted self.
Kelly Jakubowski (Durham University): The power(?) of music: Probing the relationship between music and autobiographical memories (23 November 2022)
Many people think that music is a particularly “powerful” cue for bringing back memories from our lives. Empirical research has partially supported this idea, by showing that music can evoke more vivid and emotional autobiographical memories than various other everyday cues. But it is still not well understood why music might be a particularly salient cue for such memories. In this talk I will discuss recent studies in which I've been probing this question, in an attempt to begin to identify the factors that underpin this complex relationship between music and autobiographical memories.
Manuel Anglada-Tort (University of Oxford): Studying the effect of oral transmission on music evolution using online singing experiments (25 January 2023)
Music has been transmitted orally for countless human generations, changing over time under the influence of biological, cognitive, and cultural factors. How does oral transmission shape the evolution of music, and why do human songs have the structure they do? Here we explored these questions by running large-scale music evolution experiments with singing, in which melodies were orally transmitted from one participant to the next. Our results show that oral transmission plays a profound role in the emergence of musical structures, shaping initially random sounds into more structured systems that increasingly reuse and combine fewer elements (e.g., small pitch sets, small pitch intervals, arch-shaped melodic contours). However, we find that the emergence of these structures depends on a complex interplay between individual factors (e.g., vocal constraints and memory biases) and social influences acting on participants during cultural transmission. Together, these results provide the first quantitative characterization of the rich collection of biases that oral transmission imposes on music evolution, giving us a new understanding of how human song structures emerge via cultural transmission.
Freya Bailes (University of Leeds): Emotional Engagement with Music affects subsequent Musical Imagery: An Experimental Study (8 February 2023)
Research has established that the encoding of events in memory can be impacted by our emotional state. However, when it comes to memory for music, remarkably little is known about the impact of emotional engagement with music on subsequently imagining that music. In collaboration with colleagues from the University of Leeds, we used a within-subjects musical imagery induction paradigm to investigate the relationship between emotional engagement when listening to music, and subsequent musical imagery. We hypothesised that Involuntary Musical Imagery (INMI) is more likely to occur, and to be more vivid, for music felt to be emotional than for affectively neutral music. Following pilot testing, we created stimuli by counterbalancing the pairing of emotionally neutral music tracks with positive, negative, and neutral film clips. Participants (N = 73) encountered these stimuli in an exposure phase, before completing a silent filler task. We then retrospectively asked about any experiences of imagining music during the filler task. Finally, a test of voluntary musical imagery accuracy (incorporating participants’ own music nominations) allowed us to test the hypothesis of greater imagery accuracy for music felt to be emotional than not. Binomial logistic analysis of INMI occurrence revealed that the most frequently imagined music came from the last stimulus presented, but also that music paired with the positive film significantly increased the odds of INMI occurrence. Neither INMI vividness nor accuracy in the voluntary musical imagery task was affected by emotional valence. We provide new evidence of a link between positive emotion and subsequent INMI occurrence, with scope for further exploration of the role of emotional intensity as a factor contributing to musical imagery formation.