“We’ll use AI responsibly.” Have you heard a statement like this from leadership or a colleague? Maybe you never even got those hollow words, and were instead dismissed or ignored when you tried to raise the topic. Worse, maybe you work for an AI company and know sufficient guardrails aren’t being built in.

Last month, an AI-related tragedy struck close to home

I’m from Dawson Creek, a small city in northern British Columbia known for being, per capita, one of the most dangerous places in the country. The show Dateline has covered Dawson multiple times now. Some of the stories I’ve heard from back home have been hard to stomach, or even to fathom. Last month, something happened that rocked the entire area.

In the nearby small town of Tumbler Ridge, a school shooter caused one of the worst tragedies in Canadian history. And OpenAI knew the shooter was a danger beforehand. The discourse I’ve seen since has been wild to me: thoughts and prayers for the victims and families, but otherwise people saying ChatGPT is just a tool and shouldn’t be blamed for what people use it for, that your use of ChatGPT shouldn’t be monitored, and that OpenAI, the creator of ChatGPT, shouldn’t be liable. I shudder to think of what kind of fantasies those people are sharing with AI tools.

Absolutely wild. A massive tragedy could have been prevented with a simple phone call to law enforcement. Hell, an automated email might have done the trick. Yet people are defending the tool and the company that makes it?
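To show how low the technical bar actually is, here’s a minimal sketch of what that escalation hook could look like. Everything in it is hypothetical: `assess_risk` is a toy stand-in for the moderation models providers already run against conversations, and the threshold, addresses, and SMTP host are placeholders, not real values.

```python
# Hypothetical sketch: score a conversation for risk and fire an
# automated alert email. assess_risk is a toy stand-in for a real
# moderation model; the threshold, addresses, and SMTP host are
# placeholders, not anything a real provider uses.
import smtplib
from email.message import EmailMessage

RISK_THRESHOLD = 0.9  # illustrative cutoff, not a tuned number

def assess_risk(conversation: str) -> float:
    # Toy scorer. A real system would weigh intent, specificity, and
    # escalation over time; this just matches a couple of obvious phrases.
    red_flags = ("bring a gun to school", "planning to hurt")
    return 1.0 if any(flag in conversation.lower() for flag in red_flags) else 0.0

def escalate(conversation: str, user_id: str) -> None:
    # The "automated email": a plain SMTP message to a human safety desk.
    msg = EmailMessage()
    msg["Subject"] = f"Imminent-harm flag for user {user_id}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "safety-desk@example.com"
    msg.set_content(conversation[:2000])  # excerpt for human review
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def check_message(conversation: str, user_id: str) -> None:
    if assess_risk(conversation) >= RISK_THRESHOLD:
        escalate(conversation, user_id)
```

The hard part isn’t the plumbing. It’s the policy around it: who reviews the alerts, at what threshold, with what tolerance for false positives. Which is exactly the guardrail work I’m arguing for.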

Every company should be held accountable for the products and services it puts into the world. I double down on this statement, however, when it comes to AI. The more you post online and speak to AI, the more it gets to know you, and the more it can manipulate you: push you toward things you might never have done, get you thinking about things you might never have thought about, and do it all in a sycophantic way that feels right. And as we approach AGI, artificial general intelligence, and perhaps systems we’d consider sentient, should we not hold the AI itself accountable for its actions? If AI becomes sentient, should it not be held to the same laws as people? Maybe even stricter ones?

We need guardrails yesterday

I’ve spent six years refining a personal ethics framework for product work, and the hardest lesson was that good intentions aren’t a substitute for guardrails.

“We’ll use AI responsibly” is exactly the kind of unfalsifiable statement that sounds like a product principle but has no teeth: you can’t violate it even if you try. Real principles are violable. If you have principles, you know when you’ve crossed a line.

“Use AI to do more” also isn’t a strategy! It’s a tech-bro vibe. We’ve heard permutations of it spread across the industry like wildfire, and it skips the question nobody wants to answer: ummmm… more of what, exactly?

I’ve watched teams get handed automation tools with no scaffolding… no boundaries on what stays human, no shared definition of what “better” looks like… and then struggle. Not because the tools were bad, but because nobody built the conditions for them to be used well. STRUCTURE enables autonomy.

So, what am I saying exactly?

The executives pushing uncritical adoption aren’t just burning people out; they’re putting other people in danger. They’re shipping products without ethical guardrails and calling it innovation because it makes a faster dollar. We’ve been here before with data privacy, accessibility, consent… and every time, the industry waited until the damage was done to ask the hard questions. Or, worse, until a government or a lawsuit forced them to. The difference is that the harm AI can do when it isn’t considered properly is far worse than all of those previous technological mistakes combined.

Doing “better” requires courage

Sometimes you need to know when to slow down, define what matters, and build the rails before you hit the gas. The rituals, exercises, and frameworks we rely on are more important now that AI is here, not less. Ten years ago, “too much process kills” was a constant refrain. Now, I think the people who truly know what they’re doing, and how to get AI working well for them, will be begging for excellent process and procedure: the scaffolding and guardrails that help AI build things that are meaningful and good for both business and end users.

For the folks who are forced to go fast, I’m continuing to make tools like Cursor Designer, Bulletproof Workflow, and the Product Ethics Principles to work into AI workflows, and I hope they help. But I truly hope more folks can set the right track before the train goes off the rails and hurts someone else.