I have terrible hearing. I’m not hearing-impaired in any clinical sense, but whenever there is a lot of background noise – terrible music at a bar, the burbling of friends at a big party – I just cannot understand what people are saying, even when they’re right next to me. I honestly spend most of my time responding to what I guess they’re talking about. But separating what a friend is signaling from the background noise is not just a problem most of us manage to solve at “cocktail parties”; it is also something that ubiquitous technology like cell phones has been engineered to cope with.
A less well-understood problem is not how to detect and understand these signals, but how to convey them. Should you speak really loudly? Have a particularly distinctive voice? Animals in the wild have to deal with this all the time. Amid the cacophony of multiple species all chattering at once, each has to decide how to send messages that are both detectable and understandable against the background noise.
The traditional view has been that animals will act like separate channels: partitioning the signal space so that they don’t interfere with each other too much. This bird over here will squawk loudly, this dove will coo softly, and so on. That would be the most informative arrangement if each species were acting on its own. But of course there are other things to consider. Two birds may occupy the same ecological niche, worried about the same predators and needing to warn off other animals battling for the same food. If that were the driving evolutionary pressure, signals might end up more clustered than you’d otherwise expect.
In fact, the latter possibility is exactly what happens. Tobias et al. visited the Amazon and recorded the dense vocalizations of more than 300 animals throughout the day. Taking the principal components of these recordings, they found that the three most relevant ways to describe the data are the pitch, duration, and pace of the signal. And there is much more clustering along these dimensions than you’d expect from animals partitioning the signal space. Although they were not able to test it directly, this suggests that there could be a lot of communication between different species. Such interspecies communication shouldn’t be too shocking: we all understand a growl when we hear it, right?
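To make “taking the principal components” concrete, here is a minimal sketch of that step. Everything in it is hypothetical – random numbers standing in for acoustic measurements like pitch, duration, and pace – not the actual Tobias et al. dataset or analysis pipeline.

```python
# Toy PCA on a fake table of vocalization measurements (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# One row per recorded call, one column per acoustic feature.
n_calls, n_features = 300, 6
X = rng.normal(size=(n_calls, n_features))

# Center the data, then get principal components via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of variance each component explains; the first few
# rows of Vt are the directions (e.g. "pitch-like", "duration-like").
var_explained = s**2 / np.sum(s**2)
print(var_explained[:3])
```

On real recordings you would first extract the acoustic features per call; the paper’s point is that the calls cluster in this reduced space rather than spreading out.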
One of the fundamental questions in neuroscience is how our sensory neurons are able to represent the world. An extremely fruitful line of research has been to study how neurons respond to natural stimuli. It makes sense that sensory neurons would have evolved to represent as much information as possible about the natural world – after all, why would you throw away information at the very first step? An influential paper by Michael Lewicki proposed an answer for audition by finding the independent components of natural sounds. But no one has thought about this in an ecological context! Natural sounds have to compete – or cooperate – with vocalizations from other animals. Hopefully we will see evidence of that in the future.
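For flavor, here is a toy sketch of the independent-components idea: unmix overlapping signals without knowing how they were combined. The synthetic “sounds” and mixing matrix below are made up for illustration; this is not Lewicki’s actual method, which learned efficient codes from recorded natural sounds.

```python
# Toy ICA demo: two synthetic sources, linearly mixed, then recovered.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)            # smooth tone
s2 = np.sign(np.sin(2 * np.pi * 11 * t))  # buzzy square wave
S = np.c_[s1, s2]

# Mix the sources, as if two "microphones" heard both at once
# (the mixing matrix is arbitrary and unknown to the algorithm).
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T

# FastICA tries to undo the mixing by maximizing statistical independence.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print(S_est.shape)  # one recovered component per original source
```

Each recovered column should line up (up to sign and scale) with one of the original sources, which is the sense in which independent components pull apart overlapping signals.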
Tobias JA, Planqué R, Cram DL, & Seddon N (2014). Species interactions and the structure of complex communication networks. Proceedings of the National Academy of Sciences of the United States of America, 111 (3), 1020-1025. PMID: 24395769
Lewicki MS (2002). Efficient coding of natural sounds. Nature Neuroscience. DOI: 10.1038/nn831