Lessons in Intelligence from a Cat (and Other Creatures)
What does it mean to be intelligent in a more-than-human paradigm?
This essay was originally published in the Polycene Design Manual, an ongoing attempt to articulate a design standard in response to the polycrisis. It was created by the Center for Complexity at the Rhode Island School of Design, in collaboration with Horizon 2045 and the 10x100 network.
This is the second essay in a series titled Embodying Planetarity that contextualizes what it means to function within a planetary paradigm. Read Part 1 here.
Blinking Cognition
Four years ago, I found myself locked up at home in New Delhi. Much like everywhere else, the deadening silence of a global lockdown had begun to take root, and communication—even with ourselves—had never been more difficult. Like others during the first lockdown, I quickly realized the need for a companion and decided to adopt a kitten (fig. 3). He was the size of my palm when I first brought him home, with a spotted grey-brown coat and large curious eyes. The minute he was let out of his cage, he did what many of us would for years after the pandemic began: he darted under the sofa and hid from the world. Now, unfortunately, I didn’t speak cat. For all my pss-pss-ing, the kitten mostly ignored me. The next few weeks involved multiple experiments in non-human communication, miscalculated guesses at his needs as I offered food, affection, and play completely at whim, desperately trying to overwrite my cluelessness.
Language is, arguably, a defining feature of human intelligence. It’s what helps us hold complex ideas, read and record our histories, build relationships, and cooperate at scale. With the kitten, however, language was failing me—clearly, I wasn’t as evolved as I thought. On the other hand, he was a different story. Within a week, he’d figured out when it was best to sneak out from under the sofa, where his food was stored, and how to turn the closet handle to steal as much as he could. I would find him some mornings carefully rubbing his scent on my furniture as he marked his way around the house, or patiently imitating bird calls by the window to get the attention of pigeons. He was a spicy character—unexpected and capable of significant damage. Naturally, I named him Wasabi. Eventually, bouts of notoriety aside, we fell into something of a routine: eat, play, nap, repeat. Simple cycles of soul and sustenance, wrapped in a quiet bubble of silence. Wasabi watched me, and I watched him, soundlessly, discovering that language takes on many forms.
The most poignant instance of this (a breakthrough!) was learning what’s considered the cat-equivalent of saying “I love you.” According to cat behaviorist Jackson Galaxy, it goes something like this: step one, look into the cat’s eyes and blink very slowly (in closing my eyes, I say I trust you); step two, hold out something that carries your scent (I offered my glasses); and crucially, step three, slowly extend your palm and…wait. So, there I was, sprawled out like a sphinx, arm stretching out under the sofa. Although my prior experiences had prepared me for rejection, I blinked and stretched as instructed. And much to my surprise (!!!!), this time Wasabi actually responded. He blinked back—slowly, delicately, and eventually tip-toed a little closer to put his paw on mine. Wordlessly, he let me know: I understand you; I trust you; you have nothing to be afraid of.
The Cooperation of the Fittest
Now imagine a conversation between a songbird and a computer. In the bird’s melody, a wisdom honed through millennia of evolution; in the computer’s response, a logic shaped by its extensive neural networks. This conversation unfolds not in words but in the patterned symphony of rhythms and computations, relaying geography, intent, history, and more, in its subtle configuration. What the bird communicates, what the computer processes—neither is completely known to humankind, only experienced. Elsewhere, a tree communicates with its neighbors through a network of subterranean fungi, while a robot vacuum cleaner learns to navigate the world through a matrix of sensors and algorithms. Humans, among them all, read wind patterns and satellite data alike. If language is a sign of intelligence, the planet clearly offers many examples. Ultimately, we find ourselves in a world of multiple worldings, where entangled models of cognition reflexively shape (and are shaped by) their experiences.
This leads us to the primary assertion of this essay: that embedded in the critical questions of relationality1 are equally crucial enquiries on the nature of intelligence. In misreading the Darwinian model of evolution as evidence of human exceptionalism (or at worst, justification for eugenics), we limit our ability to comprehensively engage with the planet. Indeed, the seeds of dissociation can be found at every step of popular epistemology. Heckert offers an alternate reading of Darwin2 as being:
…different from those who saw evolution as justification for Empire, those who imagined that survival of the fittest meant the most fit, the most dominant, the most masculine, the most “advanced.” For Kropotkin, and I think for Darwin, too, fittest meant best able to fit in with other beings in an ecosystem. In other words, to cooperate.3
This renewed reading of Darwin—the cooperation of the fittest—lays the groundwork for an expansive view of intelligence, including theories that see Earth itself as a living, self-regulating organism. For instance, the Gaia hypothesis, co-developed by scientist James Lovelock and microbiologist Lynn Margulis in the 1970s, posits that the biosphere, atmosphere, oceans, and soil form a complex, interconnected system that maintains the conditions necessary for life. It frames the Earth as a planetary superorganism that makes cooperation its foundational ordering mechanism. Some critics argue that this idea anthropomorphizes the planet, attributing intentionality and purpose where there may be none. Others question its scientific rigor, challenging the notion that the Earth’s systems are inherently stable or self-correcting. Even so, the Gaia hypothesis is useful because it invites us to consider the planet not just as a stage for life but as an active participant in its unfolding drama.
A Gaian Provocation
This is intelligence at a planetary scale. As radical as it may sound, the roots of this thinking can be traced back to early conceptualizations of the biosphere, first emphasized in 1926 by the Russian geophysicist Vladimir Vernadskiĭ, who viewed the biosphere as “a dynamic force that harnessed and redistributed solar energy”4 and believed that life’s ability to manipulate Earth’s fundamental energy flow steered the planet’s evolution in directions that would be inconceivable in a lifeless world. For Vernadskiĭ, life itself possessed a form of cognitive activity, a phenomenon he referred to as “cultural biogeochemical energy,” existing even before humanity’s emergence. To illustrate further, consider Lovelock’s Daisyworld model. In this hypothetical scenario, a planet is populated by black and white daisies. The black daisies absorb sunlight, warming their surroundings, while the white daisies reflect it, creating a cooling effect (fig. 2). As the environment changes, the balance between these two types of daisies shifts, maintaining a stable climate. Developed as a simplified metaphor for the Gaia hypothesis, Daisyworld demonstrates how interconnected life forms can contribute to the self-regulating, adaptive intelligence of an entire planet. It alludes to the intricate dance of cooperation that characterizes life at every scale. Indeed, beyond the singular conception of the human mind, these theories posit a model of cognition crucially centered on multispecies cooperation.
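To make the mechanism concrete, here is a minimal sketch of Daisyworld in Python, loosely following the form of Watson and Lovelock’s 1983 equations. The constants, initial daisy cover, and integration scheme are illustrative assumptions rather than a faithful reproduction of the published model; the point is only to show the surface temperature staying regulated as stellar luminosity rises.

```python
# A minimal Daisyworld sketch (after Watson & Lovelock, 1983).
# All constants and the integration scheme are illustrative, not exact.
import numpy as np

SIGMA = 5.67e-8                                # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0                                   # baseline stellar flux (W m^-2)
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25     # albedos of bare ground and daisies
Q = 2.06e9                                     # heat-transfer coefficient (K^4)
DEATH = 0.3                                    # daisy death rate

def growth(temp_k):
    """Parabolic growth rate that peaks near 295.5 K and falls to zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - temp_k) ** 2)

def equilibrium_temperature(luminosity, a_white=0.01, a_black=0.01, steps=500, dt=1.0):
    """Integrate daisy cover toward steady state and return the planetary temperature (K)."""
    for _ in range(steps):
        bare = max(0.0, 1.0 - a_white - a_black)
        albedo = bare * A_BARE + a_white * A_WHITE + a_black * A_BLACK
        t4 = FLUX * luminosity * (1.0 - albedo) / SIGMA   # planetary mean temperature^4
        # Local temperatures: darker patches run warmer than the planetary mean.
        t_white = (Q * (albedo - A_WHITE) + t4) ** 0.25
        t_black = (Q * (albedo - A_BLACK) + t4) ** 0.25
        a_white += dt * a_white * (bare * growth(t_white) - DEATH)
        a_black += dt * a_black * (bare * growth(t_black) - DEATH)
        a_white, a_black = max(a_white, 0.001), max(a_black, 0.001)  # keep a seed population
    return t4 ** 0.25

if __name__ == "__main__":
    for lum in np.linspace(0.6, 1.6, 11):
        print(f"luminosity {lum:.2f} -> surface temperature {equilibrium_temperature(lum):.1f} K")
```

In a typical run, black daisies dominate when the star is faint (warming a cold planet) and white daisies take over as it brightens (cooling a hot one), so the surface temperature stays within the habitable band far longer than it would on a lifeless world.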
The allure of such a conception is evident, particularly among environmentalists, mystics, and poets. Terence McKenna is believed to have said, “Human history is a Gaian dream.” However, the Daisyworld model, as illustrative as it is of cooperation and adaptation, may not fully account for the human-induced disruptions that threaten the delicate balance of life. Or perhaps, in framing itself in cooperative terms, it highlights the human inability to cooperate with planetary systems. Either way, the current polycrisis—marked by the interplay of multiple existential crises—reminds us that the intricate systems governing our planet are not invulnerable, turning the Gaia hypothesis into a call to action (a Gaian provocation, perhaps). In some ways, the conception of the Earth as being capable of self-regulation is akin to the faith of free-market fundamentalists in the invisible hand. As James Bridle points out:
“We humans live in such a narrow slice of time and space that we are incapable of thinking of, or thinking at, the pace and scale of the world, the changes we have wrought in it, and the changes we will have to make to survive it.”5
The Gaia hypothesis, Vernadskiĭ’s biosphere, and Daisyworld all point to a world where intelligence is not the sole domain of humanity but a shared, complex phenomenon that we are only beginning to grasp. These cognitive models may lie outside our comprehension, but in attempting to understand them we open the door to a profound understanding of the intricacies of our planet. Consider what Bridle refers to as a phenological mismatch,6 where climate change-induced shifts cause flowers to bloom out of sync with their pollinators. This isn’t just an ecological curiosity; it’s a manifestation of a different understanding of time and rhythm. Simply put, pollinators don’t care for the Gregorian calendar; the planet’s myriad intelligences operate on scales and cycles that appear to be catastrophically misaligned with our own, a reminder that the anthropo-centric (and -morphic) view of intelligence may be both limited and limiting.
Synthetic Futures
A recent paper by Frank, Grinspoon, and Walker7 proposes the following system to make sense of these limitations, offering four distinct states of planetary intelligence:
the immature biosphere, where life has just begun to take root, but its influence on the planet is minimal,
the mature biosphere, where life has evolved to significantly shape the environment,
the immature technosphere, which marks the emergence of technology, where human-made structures begin to integrate with natural systems but are not yet in harmony, and finally
the mature technosphere, which represents a fully integrated state where technology and nature coexist in a balanced relationship.
By this framing, the Earth is currently transitioning between the mature biosphere and the immature technosphere. The Anthropocene makes clear that human life profoundly impacts the planet; our technologies already interact with natural ecology at a planetary scale, often to disastrous effect. Benjamin Bratton8 highlights the emergence of what he calls “planetary scale computation,” i.e. the complex and evolving techno-exoskeleton—interconnected networks of sensors, satellites, fiber optic cables, electric lines, artificial intelligences, and other human-made infrastructures—that envelops the Earth and supports everything from monitoring environmental conditions to enabling real-time communication across continents. This techno-exoskeleton is a living entity that integrates technology with ecology, informing what Bratton calls planetary sapience.
At the time of writing, the discourse on techno-integration is at an all-time high, with the rising ubiquity of machine learning (fig. 2) and AI tools like ChatGPT, Midjourney, and more. Actors and writers in Hollywood are on strike, calling for comprehensive regulations of the use of AI in film and TV, with similar concerns raised across several other industries (education, journalism, and law, to name a few). These fears are not unfounded. The current usage of AI is mostly driven by a neoliberal focus on profit maximization and process efficiency rather than the explicit welfare of the planet or its inhabitants. As Bender, Gebru, et al. highlight in “On the Dangers of Stochastic Parrots,” the algorithms that power these technologies are frequently opaque, leading to concerns about bias, ethics, and accountability.9 The environmental impact of training and running large-scale AI models is enormous, making it pointedly misaligned with the climate demands of our time. Its benefits to neoliberal productivity are obvious—but now that the AI genie is out of the bottle, we can no longer tell how this might affect human dialogue and epistemology, either in their making or consumption. The technocratic zeitgeist has invented conditions that indenture the hyper-abstraction of an already-dissociated reality. “Post-truth” doesn’t even begin to cut it.
The optimistic position, however, argues that the problem isn’t in the technology itself but in our limited imagination around its use and development. Could these tools be harnessed to foster multispecies communication, ecological monitoring, and sustainable development while also ensuring a clear focus on labor rights and social justice? And this is not just about AI! Other non-AI-based technologies, such as satellite imaging, renewable energy systems, and biotechnology, hold immense potential for fostering a more compatible relationship with and within the planet. Could we align these technologies with the principles of cooperation, empathy, and sustainability, rather than domination and extraction?
Bratton and researchers at the Moscow-based Strelka Institute draw a distinction between “artificial” and “synthetic” intelligence, inspired by the economist Herbert Simon’s ideas from half a century ago. While artificial intelligence often aims to mimic human-like cognition, synthetic intelligence emphasizes the coalescence of human and machine intelligence, fostering insights or creativity that would be impossible for either on their own. This understanding of intelligence, Bratton argues, functions as an “accidental megastructure” extending from the Earth’s core to outer space. A planetary phenomenon. To him, synthetic intelligence is not confined to the virtual realm; it is a kind of “terraforming,” a deliberate shaping of our planet. Could we extend this synthesis by engaging not just the human and the machine, but also the plant, the animal, the ocean, the forest, and more? Relatedly, what would consensual terraforming look like? Indeed, how might we embed a commitment towards multispecies cooperation and planetary respect into our conceptions of synthetic intelligence? And how might we do so while continuing to stay with uncertainty and accept what might be unknowable?
We offer these as thought exercises and as signposts of what is possible. Consider, for instance, the slime mold, whose ability to swiftly establish efficient route networks (in its pursuit of nutrition) was used to map optimal transport routes in Tokyo. Or the successful demonstration of analogue computing using buckets of water, where researchers built a proof-of-concept computer that uses running water instead of traditional logical circuitry. This water-based computer, using an approach called “reservoir computing,” can model chaotic time series and make predictions, even outperforming high-performance digital computers in some cases.10 How? Here, Bridle:
Knowledge produced through the medium of the shifting surface of a bucket of water is made in cooperation with the world, rather than by conquering it.11
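The bucket itself is not easily reproduced on a laptop, but the principle behind it, reservoir computing, can be sketched in a few lines of Python using an echo state network: a fixed, random recurrent network stands in for the water, and only a simple linear readout is trained, analogous to merely observing the rippling surface. The reservoir size, the logistic-map input, and all parameters below are illustrative assumptions, not a reconstruction of the experiment Bridle describes.

```python
# A minimal reservoir-computing (echo state network) sketch.
# The fixed random recurrent network plays the role the bucket of water plays
# in the example above: an untrained dynamical system whose rich responses we
# only observe. Sizes and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(n, x0=0.5, r=3.9):
    """A chaotic input signal: iterates of the logistic map."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

N = 300                                          # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=N)            # fixed input weights (never trained)
W = rng.normal(0.0, 1.0, size=(N, N))            # fixed recurrent weights (never trained)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed reservoir with a 1-D signal and collect its internal states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

series = logistic_series(1200)
inputs, targets = series[:-1], series[1:]        # task: predict the next value
states = run_reservoir(inputs)

# Train only the linear readout, via ridge regression, on steps 100-999
# (the first 100 steps are discarded as a "washout" period).
train = slice(100, 1000)
ridge = 1e-6
w_out = np.linalg.solve(states[train].T @ states[train] + ridge * np.eye(N),
                        states[train].T @ targets[train])

predictions = states[1000:] @ w_out
rmse = np.sqrt(np.mean((predictions - targets[1000:]) ** 2))
print(f"one-step prediction RMSE on held-out data: {rmse:.4f}")
```

Only the readout is ever fit; the reservoir itself, like the water, is a rich dynamical system whose behavior is borrowed rather than designed, which is what makes physical substrates such as buckets plausible computing media.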
A World Where Many Worlds Fit
Colombian-American anthropologist Arturo Escobar envisions a cognitive model that is not just multi-species but also multi-world, drawn from the Zapatista pursuit12 of “a world where many worlds fit.” In what he terms the pluriverse13, different forms of knowing, being, and doing coexist, each with its own validity and wisdom, its own reality. This decentralized approach functions through the interaction of plural microcosmic biomes—for instance, algal chloroplasts that convert sunlight into food, microorganisms within the human body, mycorrhizal networks that connect trees in a forest, collective behaviors of animal and bird swarms, etc. Perhaps the nature of the current polycrisis, with its intertwined challenges, demands a pluriversal perspective. If the Daisyworld model suggests a delicate balance of cooperation on a planetary scale, then the pluriverse compels us to consider how this balance may be achieved across and through diverse, decentralized intelligences.
I am reminded of Ursula K. Le Guin’s 1976 science-fiction novel, The Word for World is Forest,14 where the Athshean world’s capacity for “dream-time” allows its people, the environment, and the planet itself to collectively share in a form of collaborative intelligence:
They did not act in history; they were history, and they were at rest in it. After all, they were very close to the dream-time, to the presence in the world of the unacted and unacting, pure potential. They were that presence; and so, they were content.
In the Athshean dream-time, Le Guin presents a vision of intelligence that weaves together the fabric of an entire world. As we grapple with the complexities of the polycrisis, we find ourselves at a crossroads, faced with choices that will shape our future, and the future of our planet. In the words of the Athsheans, to be at rest in history is to be a presence in the world, a presence that has the power to shape, heal, and transform. Our challenge, and our opportunity, is to become that presence. The word for world may indeed be forest but it is also a dream. Perhaps it can become a shared one.
We explore these questions with respect to how we relate to our planet in the first essay in the Embodying Planetarity series, titled “No One Lives on the Globe.”
Here, Heckert echoes the thinking of Peter Kropotkin. See: Kropotkin, P. (1902). Mutual Aid: A Factor of Evolution. McClure, Phillips & Co.
Heckert, J. (2012). Anarchy without Opposition. In C. B. Daring, J. Rogue, D. Shannon, & A. Volcano (Eds.), Queering Anarchism: Essays on Gender, Power and Desire.
Vernadskiĭ, V. I. (1926). Биосфера (The biosphere). University of Illinois at Urbana-Champaign.
Bridle, J. (2023). Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence. Penguin Books, p. 125.
Ibid., pp. 118-124.
Frank, A., Grinspoon, D., & Walker, S. (2022). Intelligence as a planetary scale process. International Journal of Astrobiology, 21(2), 47-61. doi:10.1017/S147355042100029X.
Bratton, B. (2021, June 17). Planetary Sapience. NOEMA. https://www.noemamag.com/planetary-sapience.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3442188.3445922.
Heil, B. (2021, April 5). How to Build a Neural Network from a Bucket of Water. Ben Heil - ML. https://autobencoder.com/2021-04-05-bucket.
Bridle, J. (2023). Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence. Penguin Books, p. 199.
Gahman, L. (2017). Building ‘a world where many worlds fit’: Indigenous autonomy, mutual aid, and an (anti-capitalist) moral economy of the (rebel) peasant. In Sustainable Food Futures (Vol. 1). Routledge.
Escobar, A. (2018). Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Duke University Press.
Le Guin, U. K. (1976). The Word for World is Forest.