I remember where I was the first time I encountered an alien intelligence.
It was 2012 and I was standing outside of R2, one of the offices at the Australian Department of Defence Headquarters, near the entrance to the underground carpark. In retrospect, I shouldn't have been surprised: the X-Files had conditioned me to expect to find them on a military base. It was a clear Canberra morning, and I don't remember freezing, so it must have been some time before Anzac Day.
I'd been for a walk to clear my head after thinking myself in circles for the past few days. The problem I'd been facing was how to take the fuzzy, contradictory information created by intelligence reporting and ground it with observations made in combat. I wanted to do it in an automated way, matching the objects and events so that the accuracy of the intelligence was easier to judge. If we'd had LLMs back then, it would have been a much easier task.
To reframe the problem, I visualised the information flowing through the department. There were dozens of sense analogues, including intelligence collection, and analytic units which translated the raw data they received into standardised information, building knowledge about the world. That knowledge made its way to decision makers who understood the capabilities and constraints of the institution, and how to use them to achieve the organisation's tactical, operational, and strategic goals. Decisions then passed to its instruments, the people and equipment, through which it could act on the world.
I stared at the grainy UFO photo for what felt like hours, until the blur became structure. Once seen, it couldn't be unseen:
Institutions are emergent, non-conscious intelligences. If we want them to share our values, we must design those values in and hold their architects to account.
The threads of information weren't just connecting people; they were moving through a larger structure that was shaping, filtering, and directing them. It wasn't enough to understand the data or the individuals. To make sense of what was happening, I had to see the organisation itself as the thing doing the thinking.
There's a popular idea that if we encountered consciousness unlike our own, like an alien mind with radically different structures, we wouldn't recognise it. I think there is something similar going on here. I don't think that Defence, or any other system, is conscious, and it doesn't feel like a mind. It's an institution. But once I saw it as an information-processing entity with goals, values, memory, and internal logic, I couldn't unsee it. In that sense, Defence behaved as an emergent intelligence: a system with autonomy, logic, and purpose, but without consciousness or empathy. And like any intelligence, its behaviour emerged less from the people inside it than from the system it had become.
It's no accident that I first saw this pattern in Defence. It's one of the few institutions where the features of an artificial intelligence are clearly visible. It has a rigid hierarchy which creates clear pathways that mimic algorithmic control flow. It has formal cognitive standards, things like minutes and reports, which define how information is expected to move in structured formats. There are also different channels for different types of information, and strict controls on which parts of the organisation have access to that information.
It has standardised cognitive subunits. By that I mean the people. Military personnel are trained, indoctrinated, evaluated, and reshaped into uniform decision makers. As instruments, they're interchangeable, comprehensible, and consistent. And it has an enormously strong culture and values which it imparts to those cognitive subunits. As an institution it has a worldview, a sense of humour, a memory, loyalty, and a preferred interpretation of reality.
This insight isn't about Defence. It's about institutions as systems of cognition. And I don't want to anthropomorphise institutions or more abstract systems during this series; they are a very different type of thing from humans. This is a conceptual lens rather than a statement about ultimate reality. But when I looked at Defence as a whole, I began to see it behave as if it were an intelligence optimised for resilience, predictability, and control. It became a very useful framework to apply to large organisations more broadly.
The echoes are everywhere. In corporations, incentives and compliance systems shape behaviour more than any CEO. In bureaucracies, legacy procedures exist long after their rationale is gone. Even in movements and social platforms, collective identity and internal logic outpace the intent of any one founder. This intelligence only emerges at a certain level of complexity, where systems outgrow the control of a small group of people. If goals and behaviours remain stable as people rotate, and the system learns from feedback, treat it as an agent. Or as an alien intelligence if that works better for you.
I managed to avoid a Mulderesque breakdown when I realised that the aliens were everywhere and they were controlling our lives. That might be because Mulder saw conspiracy where I saw structural misalignment.
On the ontological foundations of organisations as agents
I'm going to take a bit of a diversion here to talk about ontology. Ontology is the study of what exists: the types of entities in the world, their properties, and the relationships between them across time and space. It asks questions like: What kinds of things are there? What are their essential characteristics? How do they relate to each other?
In practical terms, ontologies are how we categorise things and make sense of the world. They shape how we interpret reality, and they underpin everything from our scientific models to our social systems. Whether we realise it or not, we all have ontologies. They're the invisible scaffolding of our worldview.
You might ask why ontology is relevant outside of philosophy departments. In intelligence analysis we use ontologies a lot. They are deployed as practical models of reality for whatever domain we need to subject to intelligence collection and analysis. They're necessary because different target sets, such as a tribal society, a state military, or a criminal syndicate, operate within distinct social, political, or religious structures. Each context may require different classes of relationships between people, organisations, equipment, and events. The shared ontology of our 21st-century material reality still provides the scaffolding; these domain ontologies provide the details.
Once you've established what you need to know and how the things you're collecting relate to each other, you can go about building knowledge bases or other types of information stores. A good example is a police department database, which will have classes of people defined by their relationship to a criminal incident, such as victim, witness, suspect, and offender. These classes of people have very precise definitions that are sometimes defined differently in other domains or contexts.
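To make the idea concrete, here's a minimal sketch of how those classes might be encoded in a knowledge store. The names and structure are my own illustration rather than any real police schema; the point is that each class is defined by a person's relationship to an incident, not by anything intrinsic to the person.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class IncidentRole(Enum):
    """Hypothetical roles relating a person to a criminal incident.
    Real systems define these far more precisely, and differently by jurisdiction."""
    VICTIM = "victim"
    WITNESS = "witness"
    SUSPECT = "suspect"
    OFFENDER = "offender"

@dataclass
class Person:
    name: str

@dataclass
class Participation:
    """Links a person to an incident through a defined role."""
    person: Person
    role: IncidentRole

@dataclass
class Incident:
    incident_id: str
    participants: List[Participation] = field(default_factory=list)

# The same person can hold different roles in different incidents:
# a witness to one burglary, a suspect in another.
burglary = Incident("INC-001")
burglary.participants.append(Participation(Person("J. Smith"), IncidentRole.WITNESS))
```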
What does this have to do with organisations as alien intelligences, you ask? Well, it's because organisations share some attributes with people but are also radically different in key areas. They share ontological features like intentionality, capability, and participation. It means they do stuff in the world because they have goals, the means to deploy resources to achieve them, and the access to the world needed to influence outcomes. This means that they are agents in the world.
Humans and organisations are both classified as agents, entities capable of acting and making decisions, which is why they're grouped together in the agent ontology of the Common Core Ontologies (CCO). CCO is a suite of mid-level ontologies developed to provide a consistent framework for representing entities and relationships across diverse domains. It's been adopted by the U.S. Department of Defense and Intelligence Community to standardise how information systems model reality.
Humans and organisations are grouped together as agents not because they are the same kind of entity, but because they exhibit similar external behaviour: they act on the world, pursue goals, and participate in events. From an ontological standpoint, this shared functionality justifies treating both as agents, even if one is conscious and the other isn't. That means you can relate them to the other things in the world, such as locations, time, and events, using many of the same patterns and properties.
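A minimal sketch of that pattern is below. The class and property names are illustrative rather than CCO's actual ones (the real ontology is expressed in OWL, not code like this); what matters is that both kinds of entity sit under a shared agent type, so the same relations can connect either of them to events, locations, and times.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Agent:
    """Anything that can act, pursue goals, and participate in events."""
    name: str

@dataclass
class Human(Agent):
    pass

@dataclass
class Organisation(Agent):
    pass

@dataclass
class Event:
    description: str
    when: date

@dataclass
class ParticipatesIn:
    """The same relation works whether the agent is a person or an institution."""
    agent: Agent
    event: Event

deployment = Event("overseas deployment", date(2010, 5, 1))
links = [
    ParticipatesIn(Human("an intelligence analyst"), deployment),
    ParticipatesIn(Organisation("Department of Defence"), deployment),
]
```

Nothing in that sketch says the two kinds of agent are the same kind of thing, only that they can be related to the world in the same way. That is exactly the distinction the next paragraph turns on.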
One critical difference from humans is that organisations are not moral agents. They lack consciousness, empathy, and the intuitive grasp of right and wrong that guide human behaviour. Without this moral compass they behave with purpose but not conscience.
As it turns out, that matters.
Goals and Values
There are ways in which we humans and our organisations are very well aligned. They are extremely good at achieving the goals that we set them, particularly for complex, ambitious, or resource-intensive activities. It makes sense: addressing complexity requires collective capability.
An organisation requires a purpose. We have companies that run power grids. We have departments that provide policing. We have statutory authorities whose role is to set the rules for participants in the economy. Organisations are goal driven. That goal can be as simple as enriching an individual or family. For the most part, though, the organisations we bring into existence have something that they want to achieve. This is a core part of their identity.
That defined purpose is part of what makes them effective. But humans aren't built that way. We don't need a fixed purpose to act meaningfully. We form values through experience, culture, and emotion, and we often live in ambiguity, exploring paths without predefined outcomes. That's not a flaw, it's a feature of moral agency. And it's what allows us to hold institutions to account when they pursue goals without regard for values.
There is a difference between having a purpose and having a conscience. Organisations are structured around goals aligned with this purpose. And once those goals are set, they pursue them with extraordinary efficiency. But efficiency without alignment can be dangerous. The deeper question isn't just what they aim to do, it's how they go about doing it.
That's where values come in. And unlike humans, organisations don't come with built-in moral instincts. If we want them to act in ways we find acceptable, those values must be explicitly designed into them. We do have external checks: parliaments, regulators, and public opinion. But the signals from that external moral sense are slow and retrospective.
Goals tell us what an organisation is trying to achieve. Values define the boundaries of what they are willing (or permitted) to do in pursuit of those goals. In humans, values often emerge through culture, emotion, and lived experience. But organisations are constructed. Their values must be articulated in frameworks, encoded in rules, and enforced through mechanisms.
Organisations often hit their goals while violating social or ethical expectations. It's typically not malice; it's because those constraints weren't designed in. In these cases, accountability should follow the levers: goal setting, constraint design, priorities, metric choices. It needs to include mechanisms for change, particularly for metrics, to avoid capture and gaming. And we should still punish individual transgressions where the responsibility falls on individual action. But the centre of gravity moves to design responsibility when harms are caused by features of the system itself. If we don't encode values, systems will succeed in ways that hurt.
You can generally find the values of a company on their website. They might mention respect, doing the right thing, delivering, being efficient, that sort of language. In general, they tend to be instrumental. They are very much about being tools to achieve organisational goals. That makes sense: we've established these organisations to achieve a purpose that we've set for them. But it means that their values are superficial and declarative rather than being embedded into the fabric of their institutional design.
We humans are very different. Our values are complex, contradictory, nuanced, and innate. An ethically mature individual won't act out of a fear of consequences. They'll act in a manner aligned with their own values. There absolutely are individuals with values that are problematic when compared to the population, and they will act badly as a result. But a healthy individual acting in this way will find themselves at risk of exclusion and judgement from their peers. This is embedded into our very being.
Our moral sense is, in part, a defence mechanism to regulate behaviour in social systems. We use it to protect the group from destructive individuals and to protect individuals from being exploited or excluded. It's a critical part of our toolkit for cooperating to achieve complex goals. We use it to detect who is safe and reliable and to correct behaviour to maintain equilibrium.
We didn't build our systems with the equivalent of a moral sense. As our systems scale, generate interactions, and become more complex, the moral distance between cause and effect grows. It's this omission that allows misalignment to occur.
My misaligned alien
"Brendon, the Army will never love you as much as you love the Army."
That killer line was delivered by my boss at the time. He was a full Colonel with nearly three decades of experience, the kind of officer who had been everywhere, done everything, and had earned the respect of everyone who had crossed paths with him. I was a public servant at the time and had never been in the Army (I was former Air Force) but the words still stirred something in me.
He was my last boss in Defence before I left to return to the forests of my ancestors in the southwest of Western Australia. I had enlisted four months after the start of the global war on terror, and after twelve years and two overseas deployments, I was tired. I didn't have the language to express it at the time, but I suspect I felt some misalignment as well.
Service. Courage. Respect. Integrity. Excellence.
These are the Defence values in Australia. And they are good ones. I can say without hesitation that I have never seen an organisation as committed to its values, where they are lived authentically by its members. Both the institution and some individuals have been involved in serious transgressions of community values, and I don't want to minimise that. But, for the most part, the values of the organisation become a core part of the identity of the individuals who serve.
Individuality. Consent. Agency. Autonomy. Democracy.
These values didn't make the list. It isn't a criticism, but it is illustrative. The business of Defence is highly consequential and requires significant suppression of individual rights to be effective.
Service is the selflessness of character to place the security and interests of the nation and its people ahead of one's own. Courage is the strength of character to say and do the right thing, especially in the face of adversity. Respect is the humanity to value others and treat them with dignity. Integrity is the consistency of character to align one's thoughts, words and actions to do what is right. Excellence is the willingness of character to strive each day to be the best one can be, both professionally and personally.
These are great values. And they are consistent with the expectations of the Australian people when we're acting at our best. But they are also instrumental. The organisation wouldn't function without subordinating the needs of the individual to the community, respecting the chain of command, being courageous enough to face danger, and aligning actions with system-level goals, even when they override personal moral judgement.
The values that institutions declare are often designed not to challenge the system, but to align you with it.
Defence is an extreme example, and it is more aware of its moral compromises than most other organisations. But overall, I think we should be asking more of our systems. If we are to let them loose on the world, to take decisions and perform actions of consequence, we should expect them to act in a way that is broadly consistent with the values of the communities they serve.
The Army will never be able to love its soldiers. It's a system, and it isn't like us, despite our ontological similarities. It can feed and house and clothe its members and give them a sense of purpose. But love is a peculiarly human emotion that my unfeeling alien intelligences won't ever experience.
I'm more optimistic that, with intentional design, we can give our systems not emotions as such, but something close: a moral sense embedded in their architecture, grounded in the values we would want them to live by.