RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors



In this short five-minute talk I introduce RAID, the largest and most challenging benchmark dataset for machine-generated text detection. I discuss two key findings from the paper: first, that all detectors, when calibrated to a false positive rate of 1%, achieve accuracies below 50%; and second, that many detectors are extremely susceptible to low edit-distance adversarial attacks such as whitespace insertion and article deletion.
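To give a sense of how small these adversarial perturbations are, here is a minimal sketch of the two attacks mentioned above. The function names and details are illustrative assumptions, not RAID's actual implementation:

```python
import random

def whitespace_insertion(text: str, n_edits: int = 3, seed: int = 0) -> str:
    """Insert a few extra spaces at random positions.

    A low edit-distance perturbation: each inserted space is a
    single-character edit, yet it can shift a detector's tokenization.
    (Illustrative sketch, not the benchmark's exact attack code.)
    """
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_edits):
        pos = rng.randrange(len(chars) + 1)
        chars.insert(pos, " ")
    return "".join(chars)

def article_deletion(text: str) -> str:
    """Delete English articles ('a', 'an', 'the').

    Another tiny perturbation that barely changes the text's meaning
    but alters surface statistics a detector may rely on.
    """
    return " ".join(w for w in text.split() if w.lower() not in {"a", "an", "the"})
```

The point is that both transformations change only a handful of characters, yet the paper finds they are enough to fool many detectors.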


This talk was given on June 12th, 2024 at the University of Maryland in College Park, Maryland, as part of the IARPA HIATUS program.

Slides, Paper, Code