Takeaway 🏃

Summary:: This paper tests whether current obfuscation methods show obvious signs of tampering in the resulting texts. The authors use neural language models to capture text-smoothness features, which are then fed into a binary classifier to detect obfuscation. They evaluate their results on two datasets.

Motivation:: The authors argue that true obfuscation should be stealthy.

Results:: Most state-of-the-art obfuscation methods to date are not stealthy. The obfuscation detectors developed for this paper are highly adept at identifying which documents were tampered with from text smoothness alone. Therefore, obfuscation has to preserve not only semantic meaning but also smoothness, measured as overall word likelihoods.

The code for this project is openly available: GitHub - asad1996172/Obfuscation-Detection

Keywords: authorship


  • 2235 | Adversarial methods should be stealthy: if a text is obviously using obfuscation methods, the reader can call the ruse and enact counter-measures or look for loopholes
  • 2236 | Literature review of common obfuscation methods, starting in 2012
    • Semi-automated
      • Imitating someone else’s style
      • Identify revealing words and phrases and manually change them
      • Machine translation and sentence suggesters
    • Fully automated
      • Rule-based changes (e.g. averaging over corpus)
      • Search-based: genetic algorithm (“maximum adverse effect”), heuristic
      • GANs that transfer the style of the input to a target
      • Autoencoders
  • 2237 | Capturing text smoothness to align source with changes on target
    • Text smoothness is measured through how likely each word is given its co-occurring words
    • Word likelihoods are fed into a supervised ML model to classify whether a text has been obfuscated or not (binary)
  • 2237 | BERT and GPT-2 are used to extract word likelihoods (first sketch after this list)
  • 2238 | They plot the likelihoods, store them as images, and then train an image classifier (VGG-19) to extract the smoothness features 🧐 (second sketch after this list)
  • 2239 | Overview of state-of-the-art obfuscation approaches
  • 2239 | evaded-vs-obfuscated-documents
  • 2241 | Detection fares better on the EBG data than on BLOG, with higher overall F1 scores
    • Evaded documents are easier to detect
    • State-of-the-art obfuscators are not stealthy because there is a clear degradation in text smoothness after obfuscation
  • 2243 | BERT + Image-based features + KNN/ANN
    • MUTANT-X [@mahmood2019] most stealthy obfuscator
    • Obfuscation detectors with F1 up to 0.90-0.95
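
First sketch: how per-word likelihoods could be pulled from GPT-2 as a smoothness signal, assuming the Hugging Face transformers API. This is a minimal illustration, not the authors’ exact feature-extraction code (that lives in the linked repository); BERT works analogously via masked-token prediction instead of left-to-right prediction.

```python
# Minimal sketch: per-token likelihoods from GPT-2 as a smoothness signal.
# Assumes the Hugging Face transformers API; not the paper's exact pipeline.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_likelihoods(text: str) -> list[float]:
    """Return the model probability of each token given its left context."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab)
    # The probability of token t comes from the prediction at position t-1.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    return probs[torch.arange(targets.size(0)), targets].tolist()

likelihoods = token_likelihoods("The quick brown fox jumps over the lazy dog.")
```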
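Second sketch: turning a likelihood sequence into an image and extracting VGG-19 features from it. The plot layout and the choice of feature layer are my assumptions; the paper only says the likelihood plots are stored as images and fed to VGG-19.

```python
# Sketch of the image-based smoothness features: render the likelihood
# sequence as a plot, then take VGG-19's convolutional representation of it.
# Plot format and feature layer are assumptions, not the paper's exact setup.
import io

import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import torch
from PIL import Image
from torchvision import models, transforms

def likelihoods_to_image(likelihoods: list[float]) -> Image.Image:
    """Render the per-token likelihood curve to an RGB image."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
    ax.plot(likelihoods)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def smoothness_features(likelihoods: list[float]) -> torch.Tensor:
    """Flattened convolutional features of the rendered likelihood plot."""
    img = preprocess(likelihoods_to_image(likelihoods)).unsqueeze(0)
    with torch.no_grad():
        feats = vgg.features(img)  # convolutional trunk only
    return torch.flatten(feats, start_dim=1).squeeze(0)
```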

Todo

  • Read @abbasi2008 on Writeprints ⏳ 2022-11-09 ✅ 2022-11-22
  • [b] Figure out what all this means (a sketch after this list unpacks the classifier setup):
    • We experiment with Support Vector Machine (SVM) with a linear kernel, Random Forest Classifier (RFC) an ensemble learning method, K Nearest Neighbor (KNN) which is a nonparametric method, Artificial Neural Network (ANN) which is a parametric method, and Gaussian Naive Bayes (GNB) which is a probabilistic method. All classifiers are trained using default parameters from scikit-learn except for ANN, where we use lbfgs solver instead of adam because it is more performant and works well on smaller datasets.
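
What the quoted passage amounts to, roughly: five standard scikit-learn classifiers are compared on the smoothness features, all with default parameters except the neural network, which uses the lbfgs solver. Below is a hedged sketch of that setup; `X` (feature matrix) and `y` (1 = obfuscated, 0 = original) are placeholders, and the 5-fold F1 comparison is my choice, not necessarily the paper’s evaluation protocol.

```python
# Sketch of the classifier comparison from the quoted passage: five
# scikit-learn classifiers with default parameters, except the ANN (MLP),
# which uses the lbfgs solver. X and y are placeholders for the smoothness
# features and the obfuscated/original labels.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

classifiers = {
    "SVM (linear kernel)": SVC(kernel="linear"),
    "Random Forest (RFC)": RandomForestClassifier(),
    "K Nearest Neighbor (KNN)": KNeighborsClassifier(),
    "ANN (MLP, lbfgs solver)": MLPClassifier(solver="lbfgs"),
    "Gaussian Naive Bayes (GNB)": GaussianNB(),
}

def compare(X, y):
    """Report cross-validated F1 for each detector on the same features."""
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
        print(f"{name}: mean F1 = {scores.mean():.2f}")
```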

Metadata


Mahmood, Asad, Zubair Shafiq & Padmini Srinivasan. 2020. A Girl Has A Name: Detecting Authorship Obfuscation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2235–2245. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.203.