Discussion about this post

Greg G:
Tangential to the OpenAI situation, you mention your stance that open approaches are superior for safety. Openness in the sense of external audits, yes, but hasn’t OpenAI been doing those? Openness in the sense of open source I still don’t understand, especially if safe behavior (which is a slightly incoherent concept anyway, but that’s a different point) can be easily fine-tuned away. To me, that means essentially anyone can have access to powerful applications for any end, good or bad. I see the value in all the good uses, but if we’re worried about AI risks in the first place, it seems like we should be strictly more worried about open source.
