How Do Systems Define What It Means to Be a Person?

For centuries, people have argued over who truly governs: kings, parliaments, experts, or markets. Machiavelli focused on how rulers gain and use power, while Foucault focused on how power spreads, shapes people, and seeps into everyday social life. Today, we are living in a phase where power often appears as infrastructure (code, platforms, data flows) that quietly organizes the field of what is thinkable, sayable and doable.

A Higher Narrative view takes all these earlier insights seriously but asks a further question: how do these infrastructures shape the evolution of human consciousness itself? AI does not only centralize or decentralize power. It is reconfiguring the space in which interiority, freedom and meaning-making take place.

The Algorithmic Environment of the Self

The familiar institutions that once visibly disciplined us (school timetables, factory clocks, bureaucratic forms) are being supplemented by invisible feedback systems: feeds, scores, recommendations, rankings. In this new environment, each person lives in continuous dialogue with an implicit, ever-updating judgment: “How did that perform?”

Instead of a single sovereign looking down, we navigate a dense ecology of micro-evaluations. Likes, impressions, engagement, visibility – these metrics become intuitive barometers of whether we are moving with or against the current. The result is a subtle but profound shift: identity is experienced less as a “given” to unfold and more as a project to optimize. 

This is a double-edged development. On one hand, reflexivity and self-authorship can be signs of a higher stage of consciousness: people become aware that identity is constructed. On the other, when that construction is relentlessly mediated by opaque algorithms and market incentives, self-authorship risks collapsing into self-marketing.

When Normalization Becomes Design

AI is built on pattern recognition. It needs norms: baselines of “typical” behavior and “likely” outcomes. Those baselines become powerful, though often implicit, standards for who is low-risk, high-value, trustworthy, relevant.

The modern subject then learns, often without naming it, a new existential lesson: “To move smoothly through the world, I must resemble the patterns the system prefers.” This is not enforced by a jailer or censor but by frictions and affordances: what is easy versus what is strangely hard. This is not just unjust; it is also developmentally regressive. It nudges people away from complexity, divergence and inner ambiguity, toward more legible and profitable forms of life.
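To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the feature, the thresholds, the "friction" tiers) is invented for exposition; real systems are vastly more complex, but the shape of the logic is the point: a statistical baseline quietly becomes a standard, and deviation becomes difficulty.

```python
import statistics

# Hypothetical illustration: a platform derives a baseline of "typical"
# behavior, then grades friction by distance from that baseline.
# All features, numbers and tiers here are invented for exposition.

population_session_lengths = [22, 25, 19, 24, 23, 21, 26, 20]  # minutes

baseline = statistics.mean(population_session_lengths)
spread = statistics.stdev(population_session_lengths)

def deviation_score(session_length: float) -> float:
    """How many standard deviations a user sits from the norm (a z-score)."""
    return abs(session_length - baseline) / spread

def friction_for(session_length: float) -> str:
    """No jailer, no censor: just graded ease or difficulty."""
    z = deviation_score(session_length)
    if z < 1.0:
        return "fast lane: instant approval, full visibility"
    if z < 2.0:
        return "soft friction: an extra verification step"
    return "hard friction: manual review, reduced reach"

print(friction_for(23))  # the typical user glides through
print(friction_for(70))  # the atypical user finds things strangely hard
```

Nothing in this sketch is malicious, and that is precisely the point: the pressure toward the norm falls straight out of the arithmetic.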

Aiming higher, we can ask: what would it mean to design AI systems that expand the range of viable ways of being, that honor the strange, the emergent, the not-yet-legible, instead of compressing them into what is most predictable and monetizable?

The Digital Double and the Flattening of Depth

Each person now coexists with a data-self: a profile synthesized from transactions, clicks, movements, biometrics, social ties. This digital double is increasingly the version of you that institutions “trust” when they evaluate risk, eligibility and worth.

By a purely instrumental logic, this makes sense. But it is deeply incomplete. Human beings are not just the sum of previous behaviors plus probabilistic forecasts. We are also carriers of interior depth: intentions, insights, remorse, creativity, spiritual experiences, sudden transformations.
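A small, hypothetical sketch (again in Python, with every field and weight invented for exposition) shows how narrow the data-self's vocabulary is: the score is assembled entirely from behavioral traces and forecasts, and there is simply no input for the interior qualities listed above.

```python
from dataclasses import dataclass

# Hypothetical "data-self": every field is a behavioral trace or a
# forecast derived from one. Fields and weights are invented for exposition.

@dataclass
class DataSelf:
    on_time_payments: int         # transactions
    late_payments: int
    daily_logins: float           # clicks and movements
    network_default_rate: float   # social ties, rendered as risk

def institutional_trust(profile: DataSelf) -> float:
    """A score built only from past behavior plus probabilistic forecasts.

    Note what has no input here: intentions, remorse, creativity, a sudden
    transformation. The model cannot register what it was never given a
    column for.
    """
    score = 0.5
    score += 0.02 * profile.on_time_payments
    score -= 0.05 * profile.late_payments
    score += 0.01 * profile.daily_logins
    score -= 0.40 * profile.network_default_rate
    return max(0.0, min(1.0, score))

you = DataSelf(on_time_payments=14, late_payments=3,
               daily_logins=2.5, network_default_rate=0.2)
print(f"institutional trust: {institutional_trust(you):.2f}")
```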

When decisions are made primarily with reference to the data-self, the qualities that matter most for long-term human and civilizational growth (unpredictability, moral breakthrough, existential reorientation) barely register. The danger is not only exclusion and bias. It is a slow cultural training in self-misrecognition: people start to see themselves as the numbers see them.

Subjectivity in the Age of Scores

A striking feature of our moment is how effortlessly metrics migrate into the space of meaning. Sleep becomes sleep scores. Friendship becomes follower counts. Creativity becomes engagement graphs. Even wellbeing becomes dashboard indicators.

This is not simply “bad” in a moral sense; it is alluring. Metrics offer clarity, comparability, a sense of progress. But therein lies the trap: when numbers become the main language in which we relate to ourselves, inner life is colonized by a logic that cannot speak to mystery, grace, paradox or transcendence.

Some of us should insist on preserving and cultivating spaces of non-quantified value: practices, institutions and norms where being cannot be reduced to performance, and where persons are held as more than their behavioral traces.

Designing for Higher Stages

Typical debates about AI oscillate between utopian promise and dystopian fear. A Higher Narrative stance moves differently. It acknowledges risk and harm but orients toward conscious design: How can these powerful tools be woven into a developmental arc that supports wiser, more compassionate, more complex forms of society?

That implies at least three commitments:

Multi-perspectival awareness

Systems that govern millions should be shaped by many lenses: technical, ethical, psychological, spiritual, ecological. No single discipline or ideology is sufficient. The very process of designing and regulating AI ought to mirror the complexity of the world it will help organize.

Institutional reflexivity

Platforms, states and firms must not only wield AI; they must study how their own use of it reshapes politics and the economy. This means building feedback loops that surface not just efficiency metrics, but impacts on meaning, agency and social cohesion.

Protection of interior freedom

At a certain point of development, freedom is not mainly about the absence of external constraint. It is about the capacity to remain in contact with one’s inner truth, even amid powerful incentives to perform a different self. Any serious politics of AI must therefore treat interiority (the right to be more than one’s predicted pattern) as a core good to be protected.

The Deeper Political Question

“Can we control AI?” is an important, but ultimately surface-level, question. The deeper inquiry is: Who are we becoming in relationship with these systems, and who might we become instead?

If we allow optimization logic to reign unchecked, we drift toward a civilization of highly functional, increasingly anxious performers – selves tuned to signals they did not choose, shaping their lives around scores. If we meet AI with greater consciousness, we have an opportunity of a different order: to use these tools as mirrors that reveal our patterns, while building cultures and institutions that refuse to mistake patterns for destiny.

The task, then, is not to return to a pre-digital past, nor to surrender to a post-human future. It is to inhabit this in-between with enough courage, humility and depth that the systems we build do not merely predict us, but help us grow beyond who we already are.
