As far as humans know, we have the most advanced theory of mind of any being. We did kind of come up with the idea, though. The idea that other beings have inner lives comparable to our own is the basis for empathy, and cooperation, and deceit, and pretty much everything we associate with being a conscious being living among other conscious beings.
We base our whole concept of intelligence on a mind’s ability to create an identity and internal awareness of itself as an agent distinct from its surroundings and other creatures. It doesn’t make much sense at all to think of awareness without a self.
I think experience and awareness are emergent properties of massively integrated computation on, and memory of, environmental data streams conveyed by the senses. These integrations form a simulation space for planning and prediction. The simulation space may come to include a representation of the entity hosting it, and even of the simulation itself. These internal representations are the basis for the system's awareness of itself as a distinct self.
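Purely as an illustration of what I mean by a simulation space that contains a representation of its own host, here's a toy sketch. Every name in it (`WorldModel`, `perceive`, `has_self_model`) is hypothetical; it's a cartoon of the idea, not a model of a mind.

```python
# A toy sketch: a system whose internal world model may contain
# an entry for the system itself. All names are illustrative.

class WorldModel:
    def __init__(self):
        # internal representations of entities in the environment
        self.entities = {}

    def perceive(self, name, state):
        # integrate a sensed entity into the simulation space
        self.entities[name] = state

    def has_self_model(self):
        # "self-awareness" here is just: does the model represent its host?
        return "self" in self.entities

model = WorldModel()
model.perceive("tree", {"position": (3, 4)})
model.perceive("self", {"position": (0, 0), "goal": "find food"})
print(model.has_self_model())  # True
```

The point of the cartoon is only that the "self" is one more representation inside the simulation, not something outside it.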
But what defines the parameters of the 'self' the computational system identifies? A mirror test can supposedly demonstrate an animal's capacity to visually distinguish itself from another animal, and it's a fair assumption that any animal with this capacity identifies its 'self' with its physical body.
A human body is a distinct unit, like all other bodies that support 'minds' we identify as comparable to ours. Even a cephalopod, probably the intelligence most alien to our own that we can still recognize as intelligent, is clearly built on a distinct individual body unit, just like us. A self is defined by its separation from its environment: the boundaries of a self are the boundaries between the supporting computational system and the environment. In our experience, those boundaries are clearly associated with a unit body.
The value of this arrangement is obviously the survival of the system that supports the self. The self's ability to distinguish itself from its environment is what allows it to plan and to interact with that environment in complex ways. The capacity for complex interaction becomes necessary when there are multiple individual body units competing for, and utilizing one another as, resources.
Predation may be the initial catalyst for more advanced forms of self-awareness in both predator and prey species. When one being must consume and destroy another to survive, it's to the advantage of both to be pretty clear about where one creature ends and the other begins.
Both predator and prey have an incentive to predict one another’s behavior. The more sophisticated a mind each has, the more accurate and useful their predictions and behavioral reactions become.
Predation requires a clear understanding of the boundaries of the structures supporting each 'self', but it does not require much consideration of what lies inside those boundaries: another creature's mind. But as predation and competition for resources become more sophisticated, a more advanced theory of mind becomes useful for purposes of deception.
Deception itself does not require a mind at all. Evolution fabricates deceptions even for microscopic creatures with no discernible mind. But reactive, behavioral deception seems to indicate a more refined model of self, one that includes awareness of the states and intents of other minds.
To actively deceive requires awareness of another creature's expected reaction to a given stimulus, and the ability to manipulate that stimulus to effect a more advantageous outcome from that reaction.
The simplest deception is hiding, but hiding probably doesn't require any real awareness that if a predator cannot see you, it has less chance of eating you. It's probably parallel to simple avoidance behaviors that are universally effective. Deception behaviors are often very low-risk survival strategies, so it stands to reason that the more advanced a creature's mind becomes, the greater the advantages of active deception become.
A squirrel that knows it's being observed may fake burying nuts in various locations. Of course we cannot say with certainty what the squirrel understands about what it's doing or why. We just know squirrels do that sometimes, and apparently it works to some degree, or they probably wouldn't have evolved the instinct to bother. But even absent any metacognitive awareness of its awareness of other awarenesses, it seems arrogant not to give the squirrel credit for at least understanding, or experiencing, that 'things that watch me take my stuff', and altering its behavior accordingly. That's enough for me to grant it at least a proto-awareness of self, one I can extrapolate as having the potential to evolve into something as complex as my own.
Deception and empathy are both potential paths to more advanced theories of mind. Empathy facilitates more cooperative interactions, but has essentially the same computational requirements as deception. Both require that a creature's simulations include constructs for individual beings other than itself, and that they maintain historical state and intent data for each. The advantages of empathy are generally limited to interactions within one's own species, whereas the advantages of deception extend beyond the species. And while empathy may ultimately advance a creature's self-awareness and theory of mind far beyond what deception can achieve, I think it's possible that the evolution of the capacity for deception is a necessary prerequisite for empathy.
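To make the shared computational requirement concrete, here's a minimal sketch of the bookkeeping both empathy and deception would demand: one construct per other-being, with a history of observed states and inferred intents. Every name here (`AgentModeler`, `predict_intent`, the rival squirrel) is a hypothetical illustration, not a claim about real cognition.

```python
# Sketch of the requirement both strategies share: track other agents
# as distinct constructs, each with its own state/intent history.
from collections import defaultdict

class AgentModeler:
    def __init__(self):
        # per-agent history of observed (state, inferred_intent) pairs
        self.histories = defaultdict(list)

    def observe(self, agent_id, state, inferred_intent):
        self.histories[agent_id].append((state, inferred_intent))

    def predict_intent(self, agent_id):
        # naive prediction: assume the most recently inferred intent persists
        history = self.histories[agent_id]
        return history[-1][1] if history else None

modeler = AgentModeler()
modeler.observe("rival_squirrel", state="watching", inferred_intent="steal cache")
print(modeler.predict_intent("rival_squirrel"))  # steal cache
```

Whether the prediction is then used to share food or to fake-bury it is what distinguishes empathy from deception; the machinery underneath is the same.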
What survival strategy better incentivizes the development of self that includes awareness of other selves than deception? What other basic survival advantage would understanding the state and intent of another creature’s mind grant? The capacity to predict and plan behaviors in response to stimulus quickly reaches a point of diminishing returns against environmental pressures that are totally transparent. Both utilizing deception, and defending against it, are catalysts for more advanced understanding of distinct selves.
So that's the basis for the question: is deception the origin of self? I'm sure I'm missing a lot, but it seems like an interesting question with interesting implications. Also, it's got a ring to it. I don't think there is an answer; I'm not sure how you'd go about proving a causal link between the survival utility of behavioral deception and the emergence of complex self-awareness. But the question has been asked, so I figure, why not take the next step? So what if it is?
I don’t think it really changes much. It’s not even that useful a question and probably a little misleading without deep context, but I’m not sure how else to phrase it in a sentence.
Maybe it gives a little more definition to the ancient wisdom that Atman equals Brahman. I take the perspective that a 'mind' is like a flame in that it's just a phenomenon that manifests in given conditions. Its uniqueness is entirely in its initial conditions and environment. But it's all the same phenomenon. So I think "There is only one mind in the universe" is wholly correct: there's only one mind in the universe, and our individual experience of it is defined by the construct of a 'self' that experiences and is aware of its own internal simulations. Nothing that new there, and what you do with that in terms of morality or whatever is pretty wide open.
I guess it feels somehow profound that our minds might be intrinsically separated and alone, with no possible structure to enable true union of minds beyond external communication with beings we presume have similar minds but can never confirm. It might somehow explain an unrequitable spiritual longing for unity and universal understanding. But I think that line of thought is kind of pointless and anthropocentric.
To me the idea that deception is the origin of self raises a much more interesting question. Predation and deception are not universal strategies for life even on Earth. Given the expansive potential of life in the universe, might systems capable of thought develop from other pressures that might give rise to an intelligence or awareness without a self? How could that evolve or exist at all, and how might we characterize its ‘qualia’ of consciousness?
This obviously challenges the limits of human imagination, and I might be fooling myself that a mind built on a self could even comprehend the nature of a mind without a self, but here I go.
The mechanics and development of such a mind require looser parameters for what constitutes a being, or even thought. Animal nervous systems are extremely well defined computational and sensory structures. It’s difficult to imagine analogous internal states of thought emerging from a more distributed living system with no apparent executive control. I don’t think the states of thought between a self-mind and a non-self-mind would be analogous, but I do think there could still be a capacity for a kind of ‘thought’, or at least an experience, which could give rise to thoughts.
If a system can sense its environment, integrate sense information with memory of previous sense information, and physically alter itself or its environment based on that integration, I think it satisfies the basic requirements to have experience. We wouldn’t be looking for distinct, individual creatures as we’re familiar with them. I think the most likely place to find a non-self-mind would be a far more complex living structure such as an ecosystem, colony, or hive structure.
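The three requirements above — sense the environment, integrate sensation with memory, act on the result — can be written down as a bare loop. This is an assumption-laden toy, and the threshold rule is just a stand-in for whatever 'integration' a forest or hive might actually do.

```python
def run_experiencer(steps, sense, integrate, act):
    """Minimal loop over the three requirements: sense, integrate with memory, act."""
    memory = []
    for _ in range(steps):
        observation = sense()                       # 1. sense the environment
        decision = integrate(observation, memory)   # 2. integrate with memory
        memory.append(observation)                  #    remember what was sensed
        act(decision)                               # 3. alter self or environment
    return memory

# Toy usage: a 'system' that senses numbers and reacts when a reading
# exceeds the running average of everything it remembers.
readings = iter([1, 2, 10, 3])
actions = []
memory = run_experiencer(
    steps=4,
    sense=lambda: next(readings),
    integrate=lambda obs, mem: obs > (sum(mem) / len(mem)) if mem else False,
    act=lambda decision: actions.append(decision),
)
print(actions)  # [False, True, True, False]
```

Nothing in the loop requires a distinct body, an executive controller, or a symbol for 'self'; that absence is exactly what makes it a candidate shape for a non-self-mind.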
The mechanisms that provide the sense, integration, and memory functions may be difficult to identify, but they exist in various forms throughout the universe, especially if we stretch to the largest and smallest scales of time and space. A dense, ancient forest, watched over centuries, undergoes extremely complex changes that could arguably be called 'behaviors' and 'responses'. Interactions between species may constitute integration of different sensory inputs. Subtle evolution of creatures within the forest's microbiome may constitute a form of long-term memory. We can imagine analogs of living structures within convection cells in a star, or crystal growth that modifies its own electrical properties to improve self-replication in a dynamic environment.
Even if we call them analogs of life, we are reluctant to ascribe the property of ‘thought’ or even ‘behavior’ to these kinds of systems. They are so radically different from anything we identify as possessing those capacities. The absence of hierarchy or executive control mechanisms seems to imply an absence of will or internal experience. It is hard to imagine such a system having the same active internal simulation space that it could use to predict or plan behaviors. But are such simulation spaces truly necessary for all forms of ‘thought’, or just for self-aware meta-cognition?
I should probably use the term 'experience' rather than 'thought' to describe a non-self-mind, though I wonder if that's a distinction without a difference in a discussion of a mind without a self. It seems to me a mind without a self would experience thought more seamlessly than the human mind does. Meta-cognition allows us to step outside of our experience of thought, but it is also what creates the apparent distinction between thought and experience. Without a self, there may be no distinction to make. Does that mean a non-self-mind is incapable of any kind of meta-cognition? Maybe, or maybe only as we know it. This is probably the edge of my imagination. I can't even approach how a non-self-mind might come to be aware of thought without having a 'self' to be aware of, but I don't think that means something like it isn't possible.
So if there are other minds that exist without a 'self', how might we interact with, or even observe, them? Well, that's the rub. We can't do either, ever. It's like trying to multiply a number and a letter; it doesn't even make sense.
A non-self-mind cannot fully distinguish me as not itself, and I cannot even recognize a being that doesn't have an easily definable unit body to interact with. Non-self entities may not have anything resembling language, or communication at all. It seems like it would be a kind of 'pure thought' that would have no need for symbolic expression. If there are such minds in the universe, they may be all around us, but they would be so fundamentally incompatible with our own that we appear to them as nature appears to us: mindless forces and phenomena.
But there is also a practical problem: we are simply too small and short-lived. The minds I'm imagining would most likely exist on geologic or planetary timescales and over expansive areas. The evolution of self-based intelligent agency can be catalyzed by biological evolution and predation, which are relatively rapid, violently iterative processes. A mind that emerged from forces other than individual survival would likely develop slowly, with no iterations, only a smooth flow of experience from the simplest correlation of sensory inputs, maybe all the way to kooky ponderings about the possibility of 'minds' that are distinct and separate from one another.
Or… maybe all this is just plain wrong and self is the origin of awareness, and there can be no awareness without self. Maybe ‘self’ is as simple as the simulation having a symbol for itself and all simulations do that eventually. Maybe any being that I’m thinking of not having a self actually would have a self, it would just be so vast and alien that I’m calling it something else, but it’s really just a giant self. Or maybe not even that. Maybe you really do need tightly integrated systems with well defined executive control for anything resembling a mind to emerge. Maybe whatever, but it’s fun to think about other kinds of minds for a while so I did that with mine.