Decoding the language of whales—and beyond
Project CETI's new Whale Acoustics Model and the movement toward interspecies communication
This is the newsletter of the Earth Action Index: a discovery platform for climate and ecology. Explore the Index as an early bird.
Dear Earthlings,
The conversation between humans and whales opened a new chapter this week. For the first time, we can translate any audio into the vocalizations of sperm whales. We still don’t understand the meaning of their language - but we’ve learned how to sound like whales.
It’s a milestone built on a fifty-five-year legacy. Roger Payne’s Songs of the Humpback Whale, released in 1970, was the first invitation to listen to cetacean communication. The record was a watershed moment - of oceanic proportions - for the early environmental movement. Carl Safina described its release as “A momentous turning point in the human relationship with life on the planet.” Humpback songs soundtracked early ‘70s environmental campaigns, appeared in pop music, arrived in ten million National Geographic magazines, and even sailed into space aboard the Voyager Golden Record. Critically, the album ignited the Save the Whales movement: “Within a few years, whale hunting was largely ended.”
Now, Payne’s mentee David Gruber, founder of Project CETI, is carrying the torch - hoping to spark a similar surge of energy. CETI is applying AI to listen to - and eventually understand - what whales are saying: to translate complex vocalizations that predate human language by millions of years. They moved closer this week.
This fall, in the clamshell amphitheater at the Frick Collection, I witnessed a live performance by Garth Stevenson: the double bassist who plays in concert with whales. His short show sent chills down my spine and reawakened my awe for the ocean - an enduring memory from the Atmos Blue Renaissance gathering.
For more than 15 years, Garth has rehearsed his recreation of the calls of humpback whales. In fact, he practiced by playing along with Payne’s record. Floating on tiny boats in Baja and Antarctica, balancing his standup bass, he amplifies the music with an underwater speaker. Soon, the whales breach the surface and join him in a chorus of calls. An interspecies concert. Listening to Garth’s performance, mixed with the real field recordings, I couldn’t tell: Where does the human music end and the whale song begin?
This question became shockingly and magically relevant on Tuesday. Project CETI released its Whale Acoustics Model (WhAM): an AI system able to transform any audio into the sounds of sperm whales. This is a first-of-its-kind generative model of animal communication. WhAM is like a performer with formal music training, who closely studied the sounds of sperm whales to recreate their rhythm - not unlike Garth Stevenson.
It’s been a staggering two years of progress for the team at CETI in Dominica. Last May, CETI discovered the sperm whale phonetic alphabet. This November, they published research showing that sperm whales have vowels. Now, in December, CETI can translate any audio into the acoustic signature of sperm whales - a series of rhythmic clicks called a coda - with uncanny precision. Diving in Dominica and observing whales with drones, CETI researchers have learned about the whales’ female-led social units, which share vocal dialects.
AI models now enable us to both classify and generate whale vocalizations. But CETI is still far from understanding what these sounds mean, and it will not broadcast synthetic vocalizations to real whales. Instead, WhAM provides an ethical research tool: scientists can run experiments with synthetic whale sounds rather than disrupting actual whale communities with messages we don’t yet understand.
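For the technically curious, here is a toy sketch of what a coda is in purely acoustic terms: a handful of broadband clicks arranged in a rhythm. This has nothing to do with WhAM’s actual architecture - it simply synthesizes a click train from a made-up “1+1+3”-style spacing (two spaced clicks, then three in quick succession) and writes it to a WAV file, so the “rhythmic clicks” idea becomes something you can hear.

```python
# Toy illustration only - not Project CETI's WhAM model.
# A sperm whale coda is a short, rhythmic pattern of clicks; this sketch
# synthesizes one click train from a list of inter-click intervals.
import wave

import numpy as np

SAMPLE_RATE = 44_100  # mono, CD-quality


def click(duration_s: float = 0.01) -> np.ndarray:
    """One broadband 'click': a burst of noise with a fast-decaying envelope."""
    n = int(SAMPLE_RATE * duration_s)
    envelope = np.exp(-np.linspace(0, 8, n))  # sharp attack, quick decay
    return np.random.uniform(-1, 1, n) * envelope


def coda(inter_click_intervals_s: list[float]) -> np.ndarray:
    """Place clicks along a timeline according to the coda's rhythm."""
    onsets = np.cumsum([0.0] + inter_click_intervals_s)
    out = np.zeros(int(SAMPLE_RATE * (onsets[-1] + 0.1)))
    for t in onsets:
        start = int(SAMPLE_RATE * t)
        c = click()
        out[start:start + len(c)] += c
    return out


if __name__ == "__main__":
    # Illustrative "1+1+3" spacing: the interval values are made up,
    # not measured from a real whale recording.
    audio = coda([0.25, 0.25, 0.08, 0.08])
    samples = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
    with wave.open("toy_coda.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(samples.tobytes())
```

Running it produces toy_coda.wav: five clicks in under a second, a pale imitation of the real thing, but a concrete picture of the kind of pattern the researchers are decoding.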
I’m endlessly fascinated by the potential of bioacoustics and interspecies communication. To bring us into closer connection with the living world. To expand our understanding of intelligence beyond human limitations. To inspire protection of more-than-human life. Project CETI is one node in this vast field, and we’ve been curating many other signals of interspecies connection on the Earth Action Index.
Invitations from the Index
For more, explore this collection of invitations to dive deeper into the movement to decode nonhuman language.
Project CETI: Understand what whales are saying
🐋 Support their research and 🎧 Listen to sperm whale codas
It sure seems that David Gruber is constantly speaking at in-person and online events, so stay tuned on the Project CETI socials to keep up with their work.
Earth Species Project: Decoding nonhuman communication
🐦⬛ Explore their work and 🎧 Test their model
While Project CETI focuses its scope on sperm whales, the Earth Species Project casts its net across any vocal species in the living world. Founded by Aza Raskin, Britt Selvitelle, and Katie Zacarian, ESP also uses machine learning to analyze the sounds of more-than-human life. Released after seven years of development, their flagship NatureLM model was trained on decades of bioacoustic archives and human speech, and can already detect and classify the sounds of thousands of species. The two projects are in direct conversation: in the WhAM paper, it’s clear that ESP leads the way on audio analysis and classification, while CETI has made the first breakthrough on audio generation.
More-Than-Human Life Project at NYU Law
📅 Attend a future MOTH Festival and 💡 Read stories from Ideas Hub
The MOTH Project is a close partner to Project CETI, working as their guides to navigate the legal and ethical implications of interspecies communication. Led by César Rodríguez-Garavito, the group advances the rights of nature with a collective of lawyers, scientists, indigenous leaders, writers, artists, musicians, and more. Housed at NYU Law—but venturing on field expeditions to the Amazon, Chile, and beyond—MOTH is a leading light in the movement and a “convener, connector, and incubator for ecocentric experiments.” For the public, the group hosts programming, podcasts, and gatherings worth following along.
Interspecies Internet
📰 Subscribe to the newsletter and 📹 Join the Slack
The Interspecies Internet is the think-tank bringing this conversation to the people. I subscribe to their newsletter to get looped into their monthly lecture series, which I always try to tune into. From them I’ve learned about our efforts to decode dolphins, that primates go by individual names, and that plants do “talk back,” if we can learn to listen. — Hannah Seckendorf
The Sounds of Invisible Worlds
📰 Read the essay and 📹 Watch her TED talk
The late Karen Bakker was a leading voice in the field of digital bioacoustics, bringing us closer to the worlds of animals and plants. She wrote wonderful books on the subject and has a great TED talk too. I especially love this longform essay she published in Noema, arguing: “Sonics is the optics of the 21st century.” The piece surveys everything from astronomers converting cosmic data into audio to biologists learning to decipher elephants’ infrasonic sounds and even plants’ ultrasonic signals. TLDR: We now know the elephant word for honeybee.
Appreciate your attention, more soon.
Michael & the Index team


