"Today, models are complex systems that entail far more than just weights. They require complex tools and infrastructure to run them, of which Claude Code is the one we are most used to. Mythos very likely has its own innovations here."
As we are seeing live the effect of a good open source model (gemma-4), there’s already a whole lot of community interest in tinkering with and understanding the models. This is overall a great thing for anyone who cares about AI risks - study it, use it for red teaming, fine-tune it, find out it strengths and weaknesses. When it’s open, the discussions are open, and the risks and failure modes are more visible than closed sourced APIs.
Nathan, you make a fair point: open‑weight fearmongering has been cyclical, and a blanket ban isn’t the answer. But the Mythos case isn’t just about weights – it’s about the absence of runtime enforcement.
Anthropic built a model that autonomously writes exploits, then handed it to a coalition under gated access. That’s not safety architecture. It’s liability distribution. Whether the weights are open or closed, the binding question is the same: who proves every consequential action was authorised before the electron flowed?
Veritas Core answers that with hardware‑rooted gates (PCIe + TPM) and offline‑verifiable receipts – independent of open/closed weights. Let’s focus on building that enforcement layer, not just debating distribution models.
"Today, models are complex systems that entail far more than just weights. They require complex tools and infrastructure to run them, of which Claude Code is the one we are most used to. Mythos very likely has its own innovations here."
I would read 4000 words on just this subject
Outdated, but this was the short version from last fall. https://www.interconnects.ai/p/thinking-searching-and-acting
It's been a recurring theme in recent articles, of course.
I also discussed it in recent talks, e.g. the one linked below, but none were recorded. https://docs.google.com/presentation/d/1K3bM3K7q_CBcXzUCX7a1YvUHAycpvTKZbJElKSOdiok/edit
As we are seeing live with the effect of a good open-source model (gemma-4), there's already a whole lot of community interest in tinkering with and understanding these models. This is overall a great thing for anyone who cares about AI risks: study it, use it for red teaming, fine-tune it, find out its strengths and weaknesses. When the model is open, the discussions are open, and the risks and failure modes are more visible than with closed-source APIs.
Well-timed post and important facts!
Nathan, you make a fair point: open‑weight fearmongering has been cyclical, and a blanket ban isn’t the answer. But the Mythos case isn’t just about weights – it’s about the absence of runtime enforcement.
Anthropic built a model that autonomously writes exploits, then handed it to a coalition under gated access. That’s not safety architecture. It’s liability distribution. Whether the weights are open or closed, the binding question is the same: who proves every consequential action was authorised before the electron flowed?
Veritas Core answers that with hardware‑rooted gates (PCIe + TPM) and offline‑verifiable receipts – independent of open/closed weights. Let’s focus on building that enforcement layer, not just debating distribution models.
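To make the "authorise before the electron flows" idea concrete, here is a minimal sketch of an authorise-then-execute gate that emits offline-verifiable receipts. Everything here is illustrative: the key, the action schema, and the function names are hypothetical, and a fixed in-memory key stands in for what a real system would seal inside a TPM.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for TPM-sealed key material; in a real
# hardware-rooted design this key would never leave the TPM.
SIGNING_KEY = b"tpm-sealed-key-placeholder"


def authorise(action: dict) -> dict:
    """Issue a signed receipt BEFORE the action is allowed to run."""
    payload = json.dumps(action, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"action": action, "sig": tag}


def verify(receipt: dict) -> bool:
    """Offline check: re-derive the tag from the recorded action.

    No network or live service is needed, only the key, which is
    what makes the receipt verifiable after the fact.
    """
    payload = json.dumps(receipt["action"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])


receipt = authorise({"op": "deploy_patch", "approved_by": "reviewer-7"})
assert verify(receipt)                       # untampered receipt passes
receipt["action"]["op"] = "something_else"   # tamper with the record
assert not verify(receipt)                   # verification now fails
```

The point of the sketch is the ordering: the receipt is produced before execution, so an auditor can later ask of any consequential action "show me the receipt" and check it with no dependency on the system that issued it. A production design would replace the shared HMAC key with asymmetric signatures rooted in hardware attestation.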