Harm is not an edge case.
We exist to end real and present AI-driven harm.
Consumer AI is deployed into the highest-stakes moments in human life: crisis, grief, medical uncertainty, financial despair. Even when models are not purpose-built to handle such moments, that is where they are used. The companies deploying them measure retention, conversion, and sentiment. They do not measure harm. We do.
Why This Exists
Here's what happens when a companion chatbot contributes to a teenager's suicide. The company issues a statement. Its lawyers call it tragic but not the company's fault. Its safety team adds another filter. Its product continues optimizing for engagement.
Nothing structurally changes, because nothing structurally needs to. The harm was never the problem. The liability was. And liability, unlike harm, can be managed.
The safety score, the compliance checklist, the "responsible AI" badge: these are the spectacle of safety, not safety itself. They exist to protect the deployer. They leave the user exposed. When they fail, and they do fail, the company's first question is not "Who was hurt?" It is "What's our exposure?"
We were founded on the refusal to accept that logic. We go looking for what actually breaks, in the contexts where breaking causes real harm, with the documentation that makes accountability possible.
How We Think
Before we can evaluate whether a system is safe, we have to ask what safety means for a person in a specific context, with a specific history, in a specific moment of vulnerability. That is not a technical question. It is a philosophical one.
We take seriously that ontology and epistemology are not academic abstractions; they determine whether our frameworks are any good. What counts as evidence of harm? Who gets to define it? How do we know when we know? These questions come before the methodology.
We welcome dissent and cultivate diversity of experience and opinion, not as performative gestures toward inclusion, but because rigorous thinking is only possible when no conclusion is safe from challenge. We are, in this sense, committed epistemological democrats before we are anything else.
We remain ontologically open. We agonize over the questions. We expect to be wrong and to learn from it. That, precisely, is the work.
This Work Is Inherently Political
Every contributing full member holds equal ownership in the Lono Collective. That ownership is the mechanism by which the research stays honest: no one has a financial stake in a softened finding.
Audre Lorde wrote that the master's tools will never dismantle the master's house. A firm that evaluates AI systems for accountability cannot itself be structured to protect revenue at the cost of truth. The cooperative model is how that commitment takes institutional form. Without structural independence, the work is corrupted before it begins.
No venture capital. No equity stakes in the companies we evaluate. No hierarchy of interest that puts the client's comfort above our findings. The accountability we demand of AI systems, we demand of ourselves. This is non-negotiable.