
Tangential to the OpenAI situation, you mention your stance that open approaches are superior for safety. Openness in terms of external audits, yes, but hasn’t OpenAI been doing those? Openness in terms of open source I still don’t understand, especially if safe behavior (which is a slightly incoherent concept anyway, but that’s a different point) can be easily fine-tuned away. To me, that means essentially anyone can have access to powerful applications for any end, good or bad. I see the value in all the good uses, but if we’re worried about AI risks in the first place, it seems like we should be strictly more worried about open source.

Author:

I am definitely not an open-source absolutist in ML. Most of my argument is that we haven’t suitably explored the open approach, which has been used for most technologies in the past.

It’ll look like a spectrum of access, and I don’t think liability is necessarily the perfect example. More people using models *in a way that others can track* is warranted, given how little we know.

But yeah, you’re not wrong; the open argument often feels like grasping at straws. OpenAI hasn’t been doing external audits; they’ve been bringing in external people of their choosing to evaluate their systems. A real audit would look like external parties deciding who gets some access.
