The goal of this tool is to look beyond the positions that actors assert when making arguments and to surface the underlying values used to justify them. Most political or policy debates are really clashes between value systems that remain invisible: we argue about outcomes without first acknowledging the moral assumptions that shape what each side considers legitimate or fair.

We're good at talking about issues, but not values. Image generated by ChatGPT.

I built it because I keep seeing people talk past one another: smart, well-intentioned individuals who weren't disagreeing about the facts; they were disagreeing about values. Our public debates have been flattened and have lost their moral literacy. The aim of this tool is to make those hidden assumptions visible again, so that conversations can start with understanding.

Our arguments aren’t just about facts or interests, they’re about what people care about most, often without realising it. By tracing those hidden values, we can start to see why certain conflicts feel unsolvable and where dialogue might actually begin. The Narrative Values Extractor doesn’t tell us who’s right or wrong; it helps us understand why people take the positions they do.

How it works

The custom GPT works by taking a narrative text such as a news article, editorial, or statement, and producing a short, structured values map. Instead of summarising events, it identifies the groups involved, the values they claim, how they frame the issue, and what solutions they prefer. It also surfaces the conflicts between groups and suggests possible ways forward. The result is a human-readable report that outlines the moral and normative information often hidden inside public narratives.

The tool follows a strict step-by-step process:

  1. Is given a purpose: to read a single narrative and output a compact, structured values map.
  2. Is given the output format and output mode.
  3. Ingests URL, file, or copied block of text.
  4. Discovers the actors named in the narrative:
    • Enumerates named groups and actors.
    • Merges duplicates.
    • Requires that actors be relevant to the values map before recording them.
  5. Extracts values:
    • Extracts value nouns and noun phrases.
    • Separates stated values from inferred values.
  6. Evaluates the evidence with discipline:
    • Uses quotations where possible.
    • Includes citations when browsing is on.
  7. Produces a conflict map:
    • Lists value-vs-value clashes as X ↔ Y pairs.
    • Notes narrative devices.
    • Surfaces asymmetries of power, voice, risk, or information.
  8. Proposes bridging hypotheses:
    • 2-4 practical ideas that honour both sides' values.
    • Makes concrete recommendations.
  9. Checks output for quality, bias, insufficient information, missing groups.
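The values map that emerges from the steps above can be pictured as a small data structure. This is a minimal sketch of my own, not the tool's actual schema; every field name and the example values are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    stated_values: list[str]      # values the actor explicitly claims
    inferred_values: list[str]    # values inferred from framing and word choice
    framing: str                  # how the actor frames the issue
    preferred_solution: str

@dataclass
class ValuesMap:
    actors: list[Actor] = field(default_factory=list)
    conflicts: list[tuple[str, str]] = field(default_factory=list)  # X <-> Y value clashes
    asymmetries: list[str] = field(default_factory=list)            # power, voice, risk, information
    bridging_hypotheses: list[str] = field(default_factory=list)    # 2-4 practical ideas

# Invented example: a two-actor dispute over a development approval
vm = ValuesMap(
    actors=[
        Actor("Residents", ["safety"], ["community continuity"],
              "risk to the neighbourhood", "stricter review"),
        Actor("Developer", ["economic growth"], ["efficiency"],
              "jobs and housing supply", "fast-track approval"),
    ],
    conflicts=[("safety", "economic growth")],
    asymmetries=["information: the developer holds the engineering reports"],
    bridging_hypotheses=["independent review with a fixed deadline"],
)
print(len(vm.actors), vm.conflicts[0])
```

Holding the output in a shape like this is what makes the later aggregation step possible: structured fields can be counted and compared across many narratives in a way free-form summaries cannot.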

Limitations

The tool sometimes infers actors from the text. These inferences are noted in the output, but it's something I'm considering explicitly excluding because it can cause confusion. You can see that the example at the end of this piece references the NSW licensing authority, who weren't quoted in the article. I've kept it in for transparency; it shows the limitations of this approach.

The stated values don’t map to any established values framework; they are the LLM’s best match to what it considers human values to be. That’s ok for this project, because this is a first pass: you extract the values from a narrative before aggregating them across a corpus and building the values map from that. In a larger project I’d take thousands (or hundreds of thousands) of these outputs, then collapse them into the main threads to discover the fundamental values.
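That collapsing step can be sketched very simply: count how often each extracted value appears across many outputs, and the most frequent labels approximate the main threads. This is a hypothetical sketch, not the larger project's actual method; each list below stands in for the values from one tool output, and the labels are invented.

```python
from collections import Counter

# Each inner list stands in for the values extracted from one narrative.
extractions = [
    ["fairness", "safety", "transparency"],
    ["safety", "economic growth"],
    ["fairness", "safety"],
    ["transparency", "fairness"],
]

# Flatten all per-narrative extractions and tally each value label.
counts = Counter(value for values in extractions for value in values)

# The most frequent labels approximate the corpus's fundamental values.
for value, n in counts.most_common(3):
    print(value, n)
```

A real version would also need to merge near-synonyms ("fairness" vs "justice") before counting, which is where most of the work in the larger project would sit.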

I also wouldn’t recommend using the outputs as a source of ultimate truth. These are designed the way I build intelligence tools: they point a user in the right direction, reduce uncertainty, and surface indicators that might inform more in-depth analysis. We’re the moral agents here, not the LLMs, which means we need to use our own judgement; this is just here to help.

Using the tool

Like the Terms of Service Evaluator, it’s pretty simple: open the custom GPT, paste the URL or text, and let it do its thing. If you can, turn on thinking mode; it gives a much better response.

The link to the tool is here.

This works best with articles that are rich in values statements and have at least two opposing sides. I have also tested it with texts like the poem The Man from Snowy River and the French national anthem La Marseillaise, and the results were pretty cool. Still, I’d try it with investigative news articles first, particularly those that describe a wrong being committed.

How it fits in the bigger picture

The aim of this proof of concept is to demonstrate how you can extract values from a text. The techniques it uses are the same as those in more complex processes I’ve built, such as the Political Values Analysis tool, simplified so that they can be used by the public. But what’s important is that it shows that LLMs can make explicit the values sitting just under the surface of contested issues.

I think this might be the most practical of the custom GPTs I’ve built so far. It’s something anyone can use when they’re trying to make sense of a complex issue by understanding the moral terrain underneath.

Next week we’re going to get to the good stuff: Alignment, the central challenge of this series.

Brendon Hawkins

Intelligence professional exploring systems, values, and AI
