My Thoughts on OpenAI Not Releasing Weights

john

My Thoughts on OpenAI Not Releasing Weights

Post by john » Wed Feb 27, 2019 11:11 am

So, I've finally organized my thoughts on @OpenAI's decision not to release the weights. I really empathize, because I went through exactly this with the SNAP_R research three years ago. Thread incoming.

I think a lot of people are mischaracterizing @OpenAI's decision not to release the weights as something OpenAI believed was a full-blown solution, rather than a step back to define what responsible disclosure looks like for this domain, which I think is long overdue.

People can disagree on the efficacy of the decision. The delay in reproduction caused by withholding the weights will likely not affect how attacks are generated in the future.

But the fact of the matter is, this is the first time I'm seeing this addressed at all within the ML research community. We should first applaud them for the attempt, then explain to them why the attempt is ineffective and, at the same time, what would be more effective.

IMO, the part of the ML community working on synthetic data generation has been negligent in considering dual uses for the technology it creates, and in expecting others to develop countermeasures as it releases paper after paper improving SOTA.

The point is that the decision not to release the GPT-2 weights was not just a naive attempt to reduce the threat surface of the research output; it was a statement:

ML researchers need to consider what the output of their research might cause, baseline safeguards need to be debated and agreed upon, and we cannot wait for or rely on other researchers, inside or outside the field, to develop countermeasures (or even to think of dual uses).

There's a fundamental difference between responsible disclosure in the offsec world and in the world of GPT-2, Deepfakes, SNAP_R, etc. So it's not as simple as applying Project Zero's reporting methods.

We can't expect that technical countermeasures to synthetic content will be effective, or created in any reasonable amount of time (otherwise, we would have already solved the problem for similar manual attempts at content generation).

Complaining that @OpenAI's approach is inadequate without offering a reasonable alternative is simply armchair philosophy. Expecting other researchers to develop solutions, or that solutions will magically present themselves if we continue down our current path, is naive.

While I believe withholding the fully trained large GPT-2 model will ultimately be ineffective at reducing large-scale disinformation attacks, I personally cannot think of anything more effective.

Thanks for reading.



faith

Re: My Thoughts on OpenAI Not Releasing Weights

Post by faith » Sun Mar 03, 2019 1:27 pm

This is a nice post. I kind of agree with you 101%.

