There is a spot just down from the lighthouse in Bunbury called Wyalup Point. It's a basalt outcrop that formed when Australia split from India and Antarctica around 130 million years ago. The formation is unusual for Western Australia; the rock isn't found anywhere else. It's a great place for sunsets and picnics, a natural gathering place where land and ocean meet dramatically.

It's a great place to watch the sunset. Image generated by ChatGPT.

I was living back in Bunbury during an unusually calm time of my life, about a decade ago, when I'd put my career on hold to reassess my priorities, much the same as I'm doing now. My days were a swim in the ocean in the morning, literature classes at the local university in the afternoon, and work as a night fill captain at night. I spent my spare time in nature, meditated regularly, and was probably fitter than I'd been even during my military days. It was a perfect balance of mind, body, and soul. I was content in my simple life.

I was down at Wyalup one day, early afternoon, wind blowing in from the ocean, staring out to sea. Those who meditate will know that with regular practice you can empty your mind during such moments, shift into the blank spaces with ease, let the thoughts drift past unacknowledged. The rhythmic pulse of the water rushing up the channel in the rocks worked like a slow metronome, a point of focus which let me shift into such a space. I stood and watched the rolling ocean from the edge of the outcrop, feeling the spray of wave against rock.

After a time, I felt a presence on my right. I looked over and saw myself standing there, right next to me, looking out to sea. It wasn’t quite me as I’d seen myself in the mirror that morning; he was tanned, had a magnificently long beard and hair, and was dressed in skins, holding a spear. After a time, he looked over, gave a gentle nod, and then directed his gaze back to the ocean in front of us. I did the same, standing in quiet comfort with my ethereal companion, content in the moment.

The feeling I had at the time was continuity. He and I were the same in that moment, stripped of our social and temporal context, two identical beings sharing a sensory experience, the rest of the world invisible and irrelevant. I can feel it today as I remember it, vivid and profound, that sacred knowledge that I’m the product of ten thousand years of settled culture, hundreds of thousands of years of humanity, hundreds of millions of years of life, billions of years of existence.

I’m not one to be sentimental about the past. I’ll take clean drinking water over the authenticity of our nomadic ancestors any day. But perspective is important, even liberating in a way. It means that whatever choices we make are from among a lineage of options much richer than an ordering of candidates on a ballot paper.

Alignment is about people

You might ask why I chose to start this piece about alignment with this anecdote. These things are often a mystery to me, at least until after the fact. I think that this time what I was trying to say was that despite how much of my writing deals with systems and frameworks and artificial intelligence, alignment is about people. When we make moral choices about how we act, as individuals or as a collective, we're making those decisions with our peculiar human moral sense. This is as true for my nomadic ancestor as it is for me today.

I've mentioned a few times that the systems in our lives don't have a moral sense. A lot of what I've been saying, in the four-layer framework and other essays, is that at a certain scale of complexity they stop being able to rely on individual human moral judgement and start to rely on laws and regulations. A local tradesperson, someone who employs a few workers, will soon become known not only for the quality of their work but also for the ethics of their business practices. This means the business is responsive to signals from the local economy, particularly where complaints are communicated through its feedback mechanism: referrals and reputation.

Larger systems aren't nearly as responsive. People still generate signals about values violations by institutions, but the absence of a moral sense means those signals need to arrive through proxies. The structural power imbalance is so enormous that it often takes an individual approaching a regulator or journalist before a system will become aware of a harm. This has an impact on responsiveness, another leverage point in systemic intervention: the time it takes a system to intervene when harm occurs becomes much longer than it would be for an individual. The feedback takes its time to arrive.

That means the first part of the alignment story is about providing systems with a moral sense. I believe we have the tools to build a moral sense for systems, one that is continuously adaptive. I’m calling it Values Alignment Intelligence.

The underlying premise is that violations of commonly held values become visible when you observe human judgements about systems. If you sit back and think about it, we make these judgements all the time. Every complaint, every lamentation, every gripe and grievance contains the value seeds of our dissatisfaction. That doesn't mean those judgements are always explicit or valid. A rambling story from a four-year-old about how unfair it was when her ball was taken by another kid is loaded with values-rich information, if only you can extract it from the narrative.

And we can. Through the combination of everyone complaining on the internet and the powers of artificial intelligence, we have the capability to extract values signals about systems and surface them. You also have signals from investigations, mostly by journalists and regulators, which provide high-quality information about the behaviours that lead to reputational or regulatory breaches.

More significantly, internal information, such as complaints received, processes, and actions undertaken by a system, can be used to produce values signals which can be acted upon rapidly. These don't need to leave the organisation performing the analysis: as an input to risk management, these internal values signals are invaluable. Every Royal Commission report catalogues the behaviours that preceded a major violation.
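
By way of illustration, here's a minimal sketch in Python of what that extraction step could look like. Everything in it, the call_llm placeholder, the prompt wording, the example values, and the JSON shape, is an assumption made for the sake of the sketch rather than a working method.

```python
import json
from dataclasses import dataclass

# Hypothetical helper: stands in for whichever LLM you use (an API call,
# a local model, anything that takes a prompt and returns text).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM of choice")

PROMPT_TEMPLATE = """You are extracting values signals from a complaint.
Identify any commonly held values the complainant feels were violated
(for example fairness, honesty, safety, dignity), the system or process
involved, and the severity of the felt violation from 1 (minor) to 5
(severe). Respond as JSON with keys: values, system, severity, summary.

Complaint:
{complaint}
"""

@dataclass
class ValuesSignal:
    values: list[str]   # which values appear to have been violated
    system: str         # the system or process the complaint is about
    severity: int       # 1-5, how serious the complainant felt it was
    summary: str        # one-line restatement of the grievance

def extract_values_signal(complaint: str) -> ValuesSignal:
    """Turn one piece of unstructured feedback into a structured signal."""
    raw = call_llm(PROMPT_TEMPLATE.format(complaint=complaint))
    data = json.loads(raw)  # in practice you'd validate, retry, and log here
    return ValuesSignal(**data)
```

The point is only that one piece of unstructured grievance goes in and one structured, comparable signal comes out; the rest of the tradecraft sits on top of that.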

Intelligence is the ideal discipline for this type of activity. Its tradecraft is equipped to take large volumes of unstructured, ambiguous, variable-quality information from multiple sources and apply analysis to produce indications that action needs to be taken. It doesn't offer high levels of certainty like science or criminal investigation; its aim is to provide decision makers with defensible information they can act on. And that's what a moral sense should be doing. Its objective isn't to punish; that's what we have laws for. Instead, it should provide the information that can be fed back into a system to guide it back into alignment.

I’ve decided that I’m going to cover Values Alignment Intelligence in a separate piece. It needs its own essay; there is a lot of potential here.

Our lost senses

One night in 2019, I was walking home from a political meeting when I found myself stride for stride with another attendee. She was an older woman, well and truly retired, a former social worker. We got to talking and discovered that she was a contemporary of my auntie and knew my grandmother as well, a woman who was the federal secretary of their union back in the 70s.

Over the course of our stroll from the pub she recounted a time when they were going through some sort of industrial action. She described fondly how the social workers had gone on strike with mixed impact, until a blue-collar union joined in solidarity, giving them the extra clout they needed. Long story short, the social workers won.

That story comes to mind when I think about systems and a moral sense. The union example is an expression of collective discontent by a group of individuals who had sufficient power to produce change. We had other collective senses that acted similarly in the past. When news media was stronger and captured more of the population, its investigative functions were more impactful. Academics in secure, tenured positions functioned similarly, before universities became degree factories. Political parties were more grass roots, with higher participation. Religious and philosophical associations had significant clout. Professional guilds, career public servants, public intellectuals, influential artists, local festivals and ceremonies.

These things still exist. But they've been corporatised, flattened into the goals of efficiency and profit. The neoliberal era has turned academics into tenuous employees, mastheads into marketing arms, public service into contract management. The organs that once let society feel itself have been numbed by efficiency. When every institution is optimised for throughput, there's no time to interpret what the body is feeling, only to keep it moving. As the capacity to generate signals has weakened, trust and participation have dropped, further damaging the feedback loop.

We have built some new senses though. Well, maybe not senses, but a nervous system. This includes social media, forums, citizen journalism, vast troves of data. We have opportunities to network across the world, transparency tools, systems awareness, mutual aid, and belonging that bridges traditional geographic and demographic boundaries. All of these put out weak signals, if only our systems could interpret them.

Values alignment as democracy in action

I think this is the opportunity. We haven’t lost our senses, they’ve just scattered. The world’s attention moves through networks like an electric pulse, billions of tiny transmitters firing across the globe. The nervous system is there, hyperactive as it is, but there is no mind to make sense of what it feels. Every outrage flares and fades in the electronic aether, but nothing integrates.

That’s what Values Alignment Intelligence is meant to be. It’s not a layer of control; it’s a new layer of coherence. It’s a way for the moral signals already present in human interaction to be seen, understood, aggregated, and fed back into systems on our behalf. It’s the beginning of a moral nervous system capable of perception, reflection, and feedback: quick enough, and wise enough, to guide the systems it serves.

The hard problem here isn’t technical, it’s cultural. I’ve built enough prototypes with LLMs that I’m confident that we can extract values signals. But alignment only works if people believe that their moral intuitions matter. We need to believe that collective reflection can improve systems rather than be weaponised by them. To cultivate this belief, we need to teach systems literacy, rebuild trust in shared information, and design transparency so that ordinary citizens can see their values in the structures around them.

It has some interesting implications. Seen this way, democracy stops being an event and becomes a continuous act of moral calibration. Each value signal, whether a complaint, protest, policy submission, or regulator finding, becomes part of a living conversation about what we stand for. Elections remain important, but they’re a harder signal, one that resets leadership and broad direction at the top rather than influencing the day to day running of our systems. It’s an augmentation to our current way of representing our values, not a replacement. And it arrives faster and more focussed than our current feedback mechanisms.

Of course, there is a second challenge here. It’s one thing to sense misalignment, it’s quite another to be able to act on it.

Aligning our actions

Misalignment, or even transgression of a value, is not the same as breaking a law. Most of the decisions we make will require some sort of values trade-off. You can see the struggle in the paragon of virtue Doug Forcett, whose desire to get into The Good Place was so strong that he became terrified of doing anything for his own benefit. It isn't possible to be good all the time; you have to make compromises.

We don't need to punish systems for being misaligned with our values. In fact, we shouldn't punish them unless the violations cause harm or clearly have the potential to do so. Punishment should be reserved for violations of laws or regulations; that's why we have them.

What we should be doing is highlighting misalignment where it exists. We can then surface the trade-offs where they occur. Finally, we can correct behaviours to manage the risk of harms. Our current feedback doesn't do this proactively; values are only closely examined after a consequence of those behaviours has surfaced.

I'm not sure we've even had the language or the analytic frameworks within systems to proactively examine values conflicts. We have lawyers to interpret whether actions are consistent with law, but I haven't encountered an ethicist in my work in government or the private sector. I've certainly had conversations about whether an action is ethical during my work as an intelligence professional, and regularly made judgements about whether my personal actions were in alignment with the intent of compliance activities. But an assessment of an action undertaken by an individual or a small team is much simpler to make than one of the emergent behaviours of a complex system or process.

Without the expectation that systems, including those privately owned, should be aligned to our basic collective values, none of this is practical. That is a reasonable space for debate, but the future world I imagine in my brighter moments always includes institutions that will behave better than those we have today. So I'll restate the assertion that we should expect the same adherence to common values from systems as we do from one another.

Then there is the challenge of interpreting values. It's all very well for me to say that I can get AI to neutrally aggregate values signals from across the globe, not only to baseline human expectations but to create a framework for assessing internal systemic alignment, but there are so many opportunities for bias along the way. Intelligence analysis has the advantage of some processes that try to minimise bias, with mixed success. The important thing, though, is to acknowledge that bias will exist, discover where it might impact your collection and analysis, take corrective actions, and make decision makers aware of its potential impact.

It’s also important that the interpretation of values misalignment is left to humans. AI will surface signals of misalignment but isn’t a decision maker, at least not yet. I can see a model of analysts processing signals, risk owners producing advice in the context of their missions, and decision makers directing their enterprises in corrective response. I’d have it all overseen by ethicists to ensure the validity of the processes. This reflects how information flows through organisations at present and wouldn’t require a significant shift from how other risks are managed by large enterprises. There are more radical options, but this is the one with the least friction.
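
To show how modest a shift that would be, here's a rough sketch of those hand-offs as data, in Python. The role names, fields, and ethics sign-off are assumptions drawn from the description above, not a prescribed operating model.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative roles and hand-offs only; the titles and fields are
# assumptions, not a prescribed operating model.

@dataclass
class MisalignmentIndication:
    """Produced by analysts from aggregated values signals."""
    description: str
    values_at_risk: list[str]
    confidence: str           # e.g. "low" / "moderate" / "high"
    supporting_signals: int   # how many underlying signals contributed

@dataclass
class RiskAdvice:
    """Produced by a risk owner, framed in the context of their mission."""
    indication: MisalignmentIndication
    potential_harms: list[str]
    recommended_actions: list[str]

@dataclass
class Decision:
    """Recorded by the decision maker; the process is reviewed by an ethicist."""
    advice: RiskAdvice
    action_taken: str
    decided_at: datetime = field(default_factory=datetime.now)
    ethics_review: str = "pending"  # ethicist signs off on the validity of the process
```

The shape mirrors any other risk workflow: an indication, advice framed against a mission, and a recorded decision with oversight.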

Detecting misalignment across an enterprise or government department is incredibly complex and I don’t want to minimise the challenge. However, I’ve worked in enterprises which monitor networks in real time for availability, performance, and cyber intrusions, using mostly deterministic rules and simple statistics. Artificial intelligence, with its ability to process enormous volumes of unstructured information, makes the same kind of monitoring and aggregation of values signals from open-source information and internal artefacts possible. It will take time to develop the tools and techniques, but it’s essentially the application of existing tradecraft to a new challenge.
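
In that spirit, the aggregation can start as simply as the network monitoring does, with counts and thresholds. Here is a sketch, assuming the ValuesSignal shape from the earlier extraction example; the raise_indications name and the ALERT_THRESHOLD value are placeholders for whatever statistics an enterprise already trusts.

```python
from collections import Counter

# Deliberately simple, rule-based aggregation, by analogy with network
# monitoring: counts and thresholds rather than sophisticated modelling.
# The threshold is an arbitrary placeholder.

ALERT_THRESHOLD = 10  # signals citing the same value before we flag it

def raise_indications(signals, threshold: int = ALERT_THRESHOLD):
    """Flag any value cited by at least `threshold` signals.

    `signals` is an iterable of objects shaped like the ValuesSignal in the
    earlier sketch; a real pipeline would also window these by time and by
    business unit.
    """
    counts = Counter(value for signal in signals for value in signal.values)
    return {value: n for value, n in counts.items() if n >= threshold}
```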

Finally, there is the question of why systems would choose alignment. My answer is that people choose alignment. There is an instinct to reduce corporate and government leaders, or anyone acting on behalf of a bureaucracy, to cartoon villains acting out of malice. But that's not my experience of the people in leadership positions I have worked with. I don't want to pretend that there isn't an overrepresentation of self-interested sociopaths at the top levels of society, but they are still a minority. Most people want to do good, or at the very least be seen to be doing good by their peers.

They make bad moral decisions because the information and incentives don’t corral them into alignment. The signals that reach boards are financial performance, regulatory and legal exposure, risk dashboards filtered through committees, and growth and structural metrics. There may be internal cultural feedback and surveys of customers, or things like trends in complaints, but these lack the sharpness of more robust metrics. Executives and boards want to talk about values, about contributing to society, but they don’t have the tools to measure and correct.

There are other incentives for organisations. Values sit above laws and regulations, even above our rationales for enabling the creation of entities like governments and corporations. They are the invisible substrate of legitimacy, the reason why any system is tolerated in the first place. When a system loses contact with that layer, it begins to decay from within, no matter how efficient it remains.

Providing systems with an operational moral sense gives them access to the second highest layer of systemic leverage points:

“The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises.”

A system that can perceive the mindset of the culture it operates within becomes capable of participating in that culture’s moral evolution. It can sense when its legitimacy is waning and adapt before collapse. It can act with purpose and coherence rather than reflex. More practically, it can achieve its goals with less friction. There is competitive advantage hidden in that, the kind that accrues to systems that can listen.

It’ll make things better for us regular folks as well.

We have options

Turns out this is one of those times where I discover why I chose my opening anecdote only once I reach the end of the essay. Seeing nomadic Brendon beside me on the rocks was more than just a suggestion that I should grow my hair out again. It was a reminder that we are both points on a continuum of human existence, our ways of living equally valid for our cultural, technological, and worldview contexts.

The future will not be the same as today. We’re good at acknowledging the inevitability of technological change but sometimes fail to consider that we’ll change morally and culturally as well. It explains why we get locked in by systems logic, whether they be national constitutions or the incentives that companies respond to. Our systems are part of the lineage of human history too, and it’s important that they change alongside us. Alignment is how we keep them alive, how we let them grow with us rather than becoming relics of the past.

Acknowledging the inevitability that things in the future will be different makes it easier to become active participants in change. I can’t say what the outcome of better aligning our systems to our shared values would be. But I think it will make them better.

Next week I’ll have another simple demonstrator, mostly because I need to recover from this post. Then I’ll close this series out with Adaptation.

Brendon Hawkins

Intelligence professional exploring systems, values, and AI
