OpenAI released its o3 model in December, bragging about the model’s unparalleled ability to do math and science problems. The model’s success on the FrontierMath benchmark — solving 25.2% of…
Besiroglu says OpenAI did have access to many of the FrontierMath problems and solutions — but he added “we have a verbal agreement that these materials will not be used in model training.”
ooh, a verbal agreement! incredible! altman & co didn’t even have to do the typical slimy corporate move and pay an intern to barely modify the original materials into the input for the training corpus, since that verbal agreement wasn’t legally binding and behind the scenes OpenAI can just go “oopsy woopsy we swear it won’t happen again” and who’s gonna stop them?
That’s what I was thinking as well. All they have to do is look the other way.
Oops, I did it again, all the way