I started building these principles in 2018.
That’s not a brag, it’s a confession. Because if it took six years to get them right, I clearly didn’t have them right the first time.
Version 1 launched in October 2020. Twelve principles, each described in a single sentence. Things like “the system should invite diversity of personal identities, ideas, perspectives, and backgrounds.” Earnest. Well-intentioned. And, if I’m being honest with myself, almost entirely toothless.
Here’s what I’ve learned since then: a principle that can’t be violated can’t guide a decision. And most of what I wrote in 2020 was less a design constraint and more a declaration. “The system should invite diversity.” Great. What does a system that violates that look like? What’s the failure mode? How do you know when you’ve crossed the line? If you can’t answer those questions, you haven’t written a principle, you’ve written a hope.
And here’s the uncomfortable irony: that’s exactly the same reason a lot of organizational DEI work has failed. Declarations without design constraints. Aspirations without accountability. It turns out that the problems in my ethics framework were the same problems it was supposed to help us solve.
So I rewrote it.
Actually, I rewrote it several times, informed by my day-to-day work, by research I was already doing, by research I did just for the sake of the principles themselves, and by the escalating state of… *gestures wildly everywhere*.
It was a LinkedIn post by the great Lily Zheng that put the fire in my belly to put the finishing touches on V2.
What changed
The framework went from 12 principles to 18. The new ones (Algorithmic Accountability, Autonomy & Agency, Economic Justice, Environmental Sustainability, Labor Ethics, and Civic Responsibility) aren’t additions for the sake of completeness. They fill real gaps in V1 and address problems we’re seeing today in both product work and DEI efforts. The original 12 said nothing about the people who build and moderate our products. Nothing about what it means when a recommendation algorithm predictably radicalizes people. Nothing about the difference between a business model and an enshittified financial trap. Those aren’t edge cases anymore… they’re the increasingly depressing centre of the conversation.
Every principle now has a violable definition, written to describe what failure looks like, not just what success aspires to. Each one has anti-patterns, real-world failure modes with named examples, and a section I’m particularly proud of called “For the Builder,” because the ethical footprint of a product includes the team behind it, not just the users in front of it. And if you’re going to be using AI, you should damn well be a responsible builder.
I also introduced a severity framework, because not all violations are equal. Some are foundational: they actively harm users, and no business rationale overrides them. Some are structural: they limit who gets to benefit from what you’re building. Some are aspirational: you could be doing more good, and you’re not. Knowing which is which changes how you triage.
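To make that concrete, here’s a minimal sketch of how those three tiers might drive triage in, say, a review script. Everything here (the `Severity` type, the `Violation` shape, the `triage` function) is a hypothetical illustration I wrote for this post, not part of the framework’s actual tooling.

```typescript
// Hypothetical sketch: encoding the three severity tiers for triage.
// None of these names come from the framework itself.

type Severity = "foundational" | "structural" | "aspirational";

interface Violation {
  principle: string; // e.g. "Algorithmic Accountability"
  finding: string;   // what the review surfaced
  severity: Severity;
}

// Foundational violations block shipping outright; structural ones get
// scheduled as required work; aspirational ones land in the backlog.
function triage(violations: Violation[]) {
  return {
    blockers: violations.filter((v) => v.severity === "foundational"),
    required: violations.filter((v) => v.severity === "structural"),
    backlog: violations.filter((v) => v.severity === "aspirational"),
  };
}
```

The point of the split is that each tier maps to a different action, which is exactly what makes the framework usable in a sprint instead of a retrospective.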
The part I didn’t expect to build
Somewhere in the process of rewriting these, I realized the format was part of the problem too. A framework that lives in a bookmark is a framework that gets forgotten by Thursday. And I have to recognize how the world is changing and be part of the solution.
So I built Cursor rules and a Claude Code skill. The rules are six .mdc files that plug directly into Cursor and fire at the right moments: when you’re writing UI components, designing data models, drafting a product spec, or reviewing a pull request. The Claude Code skill is a CLAUDE.md file you drop into any project, and your AI coding assistant starts applying all 18 principles throughout its work.
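For a sense of the shape, here’s a hand-written illustration of what one of those rule files might look like, using Cursor’s rule frontmatter (description, globs, alwaysApply) as I understand it. The wording and glob patterns below are illustrative, not one of the six actual files:

```markdown
---
description: Apply the product-ethics principles when writing UI components
globs: **/*.tsx,**/*.jsx
alwaysApply: false
---

When generating or reviewing UI components, check the work against the
relevant principles before calling it done:

- Does any interaction nudge the user toward the business's preferred
  choice at the user's expense (dark patterns, confirm-shaming)?
- Does the component stay usable for keyboard and screen-reader users?
- Flag anything that looks like a foundational violation instead of
  silently shipping it.
```

Because the globs scope each rule to the kind of file being edited, the principles show up as a checklist at the moment of the decision rather than a document you have to remember to open.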
I don’t know of another ethics framework that ships as developer tooling. That’s either a novel idea or a sign that I’ve been spending too much time in the terminal lately. Possibly both.
All of it (the principles, the rules, the skill) is on GitHub at github.com/spencergoldade/Product-Ethics-Principles under CC BY 4.0. Take it, adapt it, use it, and tell me where I got it wrong.
Why this matters right now
We’re watching a lot of companies learn the hard way what happens when you optimize for engagement without asking what you’re actually optimizing for. The Facebook Papers. The Amazon AI hiring tool that discriminated against women. Instagram and its documented effects on teenage mental health. Infinite scroll. The attention economy. Enshittification as a business strategy.
None of these were accidents. They were decisions, made by designers, product managers, business stakeholders, and engineers who either didn’t have a framework for thinking about the downstream effects of their work, or who had one and were overruled, or who had one and quietly set it aside because shipping was more urgent than asking hard questions. Or they just made excuses about not having the time or money… except now those same teams are using AI tools that could apply these checks and help mitigate the harms for them, right?
I can’t fix the structural incentives that produce those decisions. I can try to make it easier to ask the questions earlier, when it’s still a design conversation and not a post-mortem.
That’s what this is for.
→ Read the updated principles
→ Get the Cursor rules and Claude Code skill on GitHub
If you have feedback, including “this principle is still wrong,” find me on LinkedIn. I mean it. Loose opinions, remember.