I am unclear how the regulation proposals you like at the end are compatible with open-weights models. Rules on deepfakes, non-consensual imagery, watermarking, and the "right to be forgotten" all feel impossible to enforce on someone who interfaces with model weights instead of an API. Are you planning for these to be enforced on end users only, as in "if you generate unwatermarked text from a model, then you have violated the law/regulation"?
Let's go one at a time.
1. Yes, I'm much more worried about open image models. The only reason I don't think we should lock them down much is that they're already out there, and locking them down isn't really practical.
2. My proposal is to watermark *human* data, not AI data, so it doesn't really matter where you get your AI (see the sketch after this list for one way that could work).
3. My idea is that open models are needed to inform how to do this, as a less "gotcha" regime, but I see the point. It's debatable, and it would take serious regulation and habit change. I see this applying to closed models too (removal from the dataset more than from the model).
4. This one maybe doesn't apply.
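To make point 2 concrete, here is a minimal sketch of what watermarking *human* data could look like: content gets a cryptographic provenance tag at creation time (e.g., from a capture device or publishing tool), and downstream systems verify that tag instead of trying to detect AI output. Everything here is an assumption for illustration, not a scheme from the original discussion: the `sign_human_content` / `verify_human_content` helpers, the shared `DEVICE_KEY`, and the choice of HMAC are all placeholders; a real provenance system would likely use public-key signatures so verifiers never hold a signing secret.

```python
import hmac
import hashlib

# ASSUMPTION: a per-device or per-tool secret key, provisioned by some trusted
# authority. HMAC keeps this sketch stdlib-only; a real scheme would use
# public-key signatures so verifiers don't share the signing secret.
DEVICE_KEY = b"example-device-key"


def sign_human_content(content: bytes, key: bytes = DEVICE_KEY) -> str:
    """Attach a provenance tag at creation time, marking content as human-made."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()


def verify_human_content(content: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Check the tag; untagged or mismatched content simply isn't 'verified human'."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    photo = b"raw bytes captured by a camera"
    tag = sign_human_content(photo)                  # stamped when the human creates it
    print(verify_human_content(photo, tag))          # True: provenance intact
    print(verify_human_content(photo + b"!", tag))   # False: content was altered
```

The point of the design is that it sidesteps the open-weights problem: nothing needs to be enforced on the model or its user, because the burden of proof sits with human-made content carrying a verifiable tag.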