Music and language, distinct in so many obvious ways, may be tethered together by shared cognitive resources. Professor Bill Thompson and colleagues investigated sensitivity to emotional speech (e.g., tone of voice) in individuals with congenital amusia, a disorder characterised by difficulties in perceiving and remembering music. Participants with amusia had difficulty understanding emotional speech, with decoding rates for some emotions 20% below those of matched controls. These participants also reported difficulty understanding emotional prosody in their daily lives. The findings support the idea that the human brain recruits the same mechanisms when generating emotional responses to music and to the sounds of language, lending weight to theories that music and language share a common evolutionary origin (Thompson, Marin, & Stewart, 2012). The published research led to follow-up studies on people’s emotional responses to sound. One study found that human emotions closely track changes in the acoustic environment, providing further evidence that a single acoustic ‘code’ underlies our perception of emotional signals in music, speech and environmental sounds (Ma & Thompson, 2015).