Welcome to From the Collective

We're launching a blog to share our thinking on AI safety evaluation, the technology's harm potential, and the regulatory landscape.


We’ve been working hard on the very difficult problem of evaluating epistemic integrity for the better part of a year now. At this point, we’re ready to share our thinking publicly in a sustained way.

From the Collective is where members of The Lono Collective will write about the things we’re working on, the problems that keep us up at night, and the ideas we think deserve a closer look. It won’t be polished corporate communications. It will be honest, sometimes inconvenient, and always in service of the core question: how do we make AI safer for the people who are most vulnerable to its failures?

What to expect

We’ll publish two kinds of posts:

Research — Long-form writing about evaluation methodology, liability frameworks, the state of AI, and what the evidence actually says. These posts may very well be uncomfortable because the truths we seek are uncomfortable. They may also occasionally be cringe. We hope you’ll forgive the latter.

Announcements — Updates on our work, new frameworks, regulatory developments, and what we’re paying attention to in the field.

Why now

AI adoption is moving fast. Products are being deployed into clinical and other critical settings before anyone has seriously asked whether they should be. The liability exposure and the harm potential are as real as they are severe. Regulatory pressure is building, and we foresee an explosion of regulatory action on the horizon.

We think more people need to be saying this out loud, so we’re going to start.

— Zach