Terms of Service Evaluator
AI-powered analysis of terms of service documents for value alignment and transparency.

Project Overview
The Terms of Service Evaluator is an AI-powered tool designed to analyse terms of service documents and identify potential value misalignments and ethical concerns. This tool helps users understand what they're agreeing to and highlights areas where corporate values might conflict with user interests.
This tool demonstrates how AI can be used to make complex legal documents more accessible and transparent to everyday users, embodying the core principles of the Systems of Value framework.
Key Features
Document Analysis
Automatically analyses terms of service documents to identify key clauses, potential risks, and value misalignments.
Value Alignment Assessment
Evaluates how well corporate terms align with stated user values and identifies potential conflicts.
Plain Language Translation
Translates complex legal language into understandable terms for everyday users.
Risk Identification
Highlights potential risks and areas of concern that users should be aware of before agreeing to terms.
Values Framework
The Terms of Service Evaluator assesses documents against six core ethical values. These values were generated by ChatGPT in an attempt to make them as objective as possible. The custom GPT's prompts also include examples of alignment and misalignment, which are not reproduced on this page.
Each value is described below: its core principle, followed by what it requires of systems.
Agency
Core Principle
Agency is the capacity to act intentionally and meaningfully within a system. To support agency, systems must enable voluntary participation, informed choice, and reversible decisions.
Agency is not simply freedom from constraint—it is the presence of coherent, supported options that align with a person's identity, intentions, and moral compass. Aligned systems must empower users to act with understanding and autonomy, not just navigate constraints.
What This Value Requires of Systems
- Voluntary Participation: Users should not be coerced or defaulted into participation. Terms and engagement should reflect true choice, not friction-based manipulation.
- Informed Choice: Options must be clear, contextualized, and comprehensible. Systems should reveal trade-offs and consequences of key decisions.
- Reversibility of Decisions: People must be able to change their minds. Account deletion, unsubscribing, or withdrawing data should be supported without penalty.
- Control Over Scope of Engagement: Users should be able to choose what parts of the system they participate in. This includes data sharing, visibility, and algorithmic targeting.
- Design Against Manipulation: Avoid dark patterns, infinite scrolls, and default options that undermine reflective choice. Surface control options at decision points.
Consent
Core Principle
Consent must be specific, informed, ongoing, and revocable.
Consent is not a one-time agreement or a checkbox to protect institutions—it is a continuous relationship between individuals and systems. For consent to be meaningful, it must be context-sensitive, actively maintained, and never assumed.
What This Value Requires of Systems
- Clarity of Scope: Consent must be requested with clear boundaries: what is being agreed to, why, and for how long.
- Informed Participation: Users must understand what they are agreeing to, in plain language, with accessible explanations of trade-offs.
- Ongoing Affirmation: Consent should not be presumed to persist indefinitely. Systems should check in, especially when contexts change.
- Easy Withdrawal: Users must be able to revoke consent at any time, without undue friction or punishment.
- Granularity of Control: Systems should offer layered consent, not all-or-nothing terms. Different actions should have separate, user-controlled permissions (a data-model sketch follows this list).
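As a concrete illustration of layered, revocable consent, a system's data model might look something like the hypothetical Python sketch below. The class and field names are assumptions for illustration, not drawn from the evaluator itself:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentGrant:
    """A single granular, revocable permission, rather than one
    all-or-nothing agreement covering every use."""
    scope: str                    # e.g. "email_marketing", "analytics_cookies"
    granted_at: datetime
    expires_at: datetime | None   # ongoing affirmation: re-ask once it lapses
    revoked: bool = False

    def active(self, now: datetime) -> bool:
        """Easy withdrawal and time limits both switch a grant off."""
        if self.revoked:
            return False
        return self.expires_at is None or now < self.expires_at
```

Modelling each scope as its own grant, with its own expiry and revocation flag, is what makes the "separate, user-controlled permissions" requirement testable rather than rhetorical.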
Care for Children
Core Principle
Children are a distinct category of moral agent who require proactive protection, support, and respect for their emerging agency.
They are not merely "undeveloped adults" or passive dependents. They are moral beings with evolving rights, vulnerabilities, and needs, and systems that engage with or affect children bear a higher burden of ethical responsibility.
What This Value Requires of Systems
To be aligned with Care for Children, a system must:
- Recognize Children as a Special Moral Category: Not just under-18s by law, but as a phase of life with specific developmental and ethical considerations. Policies must acknowledge that consent, autonomy, and understanding are emergent—not assumed.
- Actively Prevent Harm: Design out foreseeable risks (e.g., grooming, exploitation, overexposure). Include fail-safes, detection systems, and escalation pathways. Go beyond compliance—seek protective design.
- Support the Development of Agency: Respect their evolving capacity to make choices. Provide guidance, not manipulation. Enable age-appropriate participation in decision-making.
- Model Pro-Social Values and Safe Norms: Create environments where care, respect, and inclusion are practiced visibly. Avoid normalizing violence, misogyny, surveillance, or dehumanization.
- Engage Guardians and Institutions Transparently: Communicate clearly with parents, educators, and caregivers. Ensure oversight mechanisms are meaningful, not just checkbox obligations.
Transparency
Core Principle
Transparency is the foundation of trust and accountability. Systems must clearly reveal how they operate, make decisions, and affect those within or subject to them.
Transparency is not just about access to information; it is about making systems legible, navigable, and open to scrutiny. People must be able to understand what is happening, why it is happening, and what options they have to respond or challenge it.
What This Value Requires of Systems
- Clarity of Process: Systems must explain their processes in clear, accessible language. Pathways for engagement, recourse, and change must be visible and understandable.
- Decision-Making Visibility: Who made what decision, when, and based on what criteria should be traceable. Systems should disclose the logic of both human and automated decisions.
- Disclosure of Power Dynamics: Systems must reveal how power operates, including structural biases or limitations. Users should know what data is being used and for what purpose.
- Accessible Documentation: Policies, procedures, and updates must be publicly available and easy to interpret. Historical changes to terms, conditions, or structures should be tracked.
- Right to Explanation: Individuals impacted by decisions must be able to request and receive meaningful explanations. Transparency should be proactive, not only reactive to complaints.
Accountability
Core Principle
Accountability ensures that systems and individuals are answerable for their actions, especially when they cause harm, breach trust, or fail to meet obligations.
A system that lacks accountability enables harm without consequence. True accountability requires not only mechanisms for assigning responsibility, but also pathways for redress, repair, and institutional learning.
What This Value Requires of Systems
- Clear Assignment of Responsibility: Actions and decisions must be traceable to accountable parties. Roles, obligations, and escalation paths must be explicit.
- Answerability for Impact: Systems must respond to those affected by their actions. Explanation, engagement, and acknowledgement of harm are core components.
- Mechanisms for Redress: Users must be able to challenge decisions and seek fair remediation. The system must adapt in response to valid complaints.
- Consequences for Breach: Violations or failures must lead to transparent corrective action. Systemic failures should result in systemic responses, not scapegoating.
- Capacity for Repair and Learning: Systems must evolve in response to past harms. Institutional memory and ethical learning should be built in.
Pluralism
Core Principle
Pluralism is the recognition and respect for diverse identities, perspectives, values, and ways of life. Aligned systems must accommodate and support this diversity rather than suppress or erase it.
Pluralism is more than inclusion; it is the structural capacity of a system to remain open and adaptive in the face of difference. A pluralistic system does not demand uniformity—it enables coexistence and mutual flourishing.
What This Value Requires of Systems
- Respect for Difference: Systems must avoid enforcing narrow norms or cultural defaults. Language, imagery, and engagement must reflect human variety.
- Support for Multiple Worldviews: Users should be able to operate within their own moral, cultural, or epistemic frameworks. The system should not implicitly or explicitly pathologize non-dominant views.
- Inclusive Design and Participation: People from diverse backgrounds must be meaningfully involved in design, governance, and evaluation. Barriers to access or participation should be actively dismantled.
- Resilience to Monoculture Drift: Systems must resist the tendency to centre dominant groups over time. Practices that protect pluralism must be continually renewed.
- Space for Contestation: Disagreement must not be seen as failure. Systems should provide mechanisms for respectful dissent, dialogue, and reconfiguration.
Each value is evaluated using a five-level maturity model:
- 🔴 Negligent: System fails to recognise the value or creates harm
- 🟡 Compliant: Meets basic legal requirements but lacks ethical intentionality
- 🔵 Respectful: Acknowledges value and avoids harm through static policies
- 🟢 Ethically Aligned: Actively supports value through dynamic practices
- 🟣 [Value]-Centric: System shaped around the value as core design principle
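The evaluator itself is a custom GPT rather than code, but as a concrete illustration the scale could be encoded as a small enum. A minimal Python sketch, with names taken from the list above:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five-level maturity model applied to each value;
    higher numbers indicate stronger alignment."""
    NEGLIGENT = 1          # fails to recognise the value or creates harm
    COMPLIANT = 2          # meets legal minimums, lacks ethical intentionality
    RESPECTFUL = 3         # static policies acknowledge the value
    ETHICALLY_ALIGNED = 4  # dynamic practices actively support the value
    VALUE_CENTRIC = 5      # the value is a core design principle
```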
Technical Implementation
How Version 1 Works
The Terms of Service Evaluator Version 1 is a proof-of-concept AI tool built using ChatGPT's custom GPT functionality. Here's how it operates:
Input Processing
- Accepts website URLs or file uploads (PDF, DOCX, plain text)
- Validates that the input is actually a terms of service document
- Extracts and structures key sections automatically (a minimal sketch of this stage follows below)
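A minimal Python sketch of what this stage could look like, assuming a plain-text or HTML source (PDF/DOCX extraction omitted) and the third-party requests library. The marker phrases and threshold are illustrative assumptions, not the GPT's actual validation logic:

```python
from pathlib import Path

import requests  # assumed third-party dependency for URL fetching

# Illustrative marker phrases suggesting the text really is a ToS document.
TOS_MARKERS = ("terms of service", "terms of use", "agreement", "governing law")

def load_document(source: str) -> str:
    """Fetch raw text from a URL or read it from a local file path."""
    if source.startswith(("http://", "https://")):
        response = requests.get(source, timeout=30)
        response.raise_for_status()
        return response.text
    return Path(source).read_text(encoding="utf-8")

def looks_like_tos(text: str) -> bool:
    """Cheap heuristic: require at least two marker phrases before analysing."""
    lowered = text.lower()
    return sum(marker in lowered for marker in TOS_MARKERS) >= 2
```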
Evaluation Framework
- Uses six core values: Agency, Consent, Care for Children, Transparency, Accountability, Pluralism
- Applies a five-level maturity model for each value
- Provides evidence-based scoring with quotes from the document (see the data-model sketch below)
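Reusing the MaturityLevel enum sketched in the maturity-model section above, a per-value result might be modelled like this (the field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

# The six core values, in the order used throughout this page.
VALUES = (
    "Agency", "Consent", "Care for Children",
    "Transparency", "Accountability", "Pluralism",
)

@dataclass
class ValueScore:
    """One scorecard row: a value, its maturity rating, and the
    document quotes offered as evidence for that rating."""
    value: str
    level: MaturityLevel                       # enum from the earlier sketch
    evidence: list[str] = field(default_factory=list)
    assessment: str = ""                       # the written per-value assessment
```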
Analysis Process
- Parses documents for relevant sections (data use, consent, child protections, etc.)
- Identifies power dynamics and user limitations
- Evaluates alignment with ethical principles (a rough programmatic sketch follows)
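In the custom GPT this parsing happens through natural-language prompting; a rough programmatic equivalent might use keyword lookups to pull out candidate passages. The keyword lists here are assumed, illustrative only:

```python
# Assumed, illustrative keyword lists; the actual custom GPT locates
# relevant passages via prompting, not keyword matching.
SECTION_KEYWORDS = {
    "Consent": ["consent", "opt out", "withdraw", "revoke"],
    "Care for Children": ["children", "minor", "age", "parental"],
    "Transparency": ["disclose", "notify", "third party", "policy change"],
}

def relevant_passages(text: str, value: str, window: int = 300) -> list[str]:
    """Return short excerpts surrounding each keyword hit for a given value."""
    lowered = text.lower()
    excerpts = []
    for keyword in SECTION_KEYWORDS.get(value, []):
        start = lowered.find(keyword)
        while start != -1:
            excerpts.append(text[max(0, start - window): start + window])
            start = lowered.find(keyword, start + len(keyword))
    return excerpts
```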
Output Generation
- Creates a comprehensive scorecard table (rendering sketched below)
- Provides a written assessment per value
- Suggests specific improvement areas
- Assigns an overall alignment rating
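Continuing the sketches above, the scorecard and overall rating could be rendered like this; the table layout and the mean-of-levels overall rating are assumptions, not the GPT's actual output format:

```python
def render_scorecard(scores: list[ValueScore]) -> str:
    """Render per-value ratings as a plain-text table and append a
    naive overall rating (the mean maturity level, rounded)."""
    lines = [f"{'Value':<20}{'Rating':<20}Evidence (first quote)"]
    for score in scores:
        rating = score.level.name.replace("_", " ").title()
        quote = score.evidence[0][:60] if score.evidence else "-"
        lines.append(f"{score.value:<20}{rating:<20}{quote}")
    overall = MaturityLevel(round(sum(s.level for s in scores) / len(scores)))
    lines.append(f"Overall alignment: {overall.name.replace('_', ' ').title()}")
    return "\n".join(lines)
```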
Version 1 Limitations
This is a proof-of-concept demonstration created to show how AI might help evaluate ethical alignment when values are clearly defined. It's not an official rating system and should be treated as a thought experiment exploring how we might hold systems to better standards than "is it legal?"
Status: Proof of Concept
Version: 1.0
Related Content
Blog Post
Read more about the Terms of Service Evaluator in the Systems of Value Substack.