Paper : arXiv

Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey

Most research attention has gone to adversarial techniques in Computer Vision (more than 3 times the work done in NLP). The most popular, state-of-the-art DNNs are vulnerable to slightly modified samples: owing to their black-box behaviour and overconfidence, they are easily fooled by perturbed inputs. Adversarial attacks designed for images won't work directly on textual data owing to underlying differences -

  1. Continuous vs discrete: gradient-based adversarial attacks (originally designed for images) applied to vectorised textual data produce invalid characters or word sequences, and are not useful even on word embeddings. Image data (pixel values) is continuous, whereas textual data (tokens) is discrete, so input perturbation is meaningless if we treat tokens as the input space; discrete data is also hard to optimize over.
  2. Perceivable vs unperceivable: small perturbations to image pixels are usually imperceptible to humans, but text is highly sensitive to small perturbations. A small perturbation can easily produce a sentence with an invalid syntactic structure or a completely different semantic meaning.
  3. Semantic vs semantic-less: small perturbations to an image rarely change its semantics, whereas perturbing text easily changes the semantics of a word or a sentence, so the change can be easily detected and heavily affects the model output. Changing the semantics of the input defeats the goal of an adversarial attack.

Attributes of threat (attacking) model:

  1. Black-box (no info [architecture, parameters, training data] about the DNN; only access to the victim model's predictions on chosen inputs) vs White-box (full knowledge of the victim model)
  2. Change the output to any incorrect label (un-targeted) vs a pre-specified label (targeted).
  3. Granularity - use of word, character, or sentence level embedding
  4. Attack (evaluate robustness of DNN) vs Defense (robustify DNN)

Constraints on attacks:

  1. Perturbation constraint - the perturbation \epsilon should not change the prediction of an ideal DNN (i.e., a human oracle), yet should not end up having a nil effect on the target DNN.
    1. Norm-based [on vectorised rep] : of little use for text since the data is discrete
    2. Grammar and syntax related
      • Grammar and syntax checker - check validity of adv. examples
      • Perplexity - measure quality of language model.
      • Paraphrase - type of adv. eg.
    3. Semantic preserving [on both] : measure semantic similarity between N-dim word vectors p & q (several of these metrics are sketched in code after this list)
      • Euclidean dist. - d(p,q) = sqrt( Σ_i (p_i - q_i)^2 )
      • Cosine similarity - works better than other distance measures because the norm of a word vector is related to the overall frequency of the word in the training corpus; the direction, and hence the cosine distance, is not affected by this. cos(p,q) = p·q / (||p|| ||q||)
    4. Edit-based [on orig rep]: edit distance is min changes from one string to other; used to quantify dissimilarity
      • Levenshtein dist. - insertion, removal, substitution ops on chars in string.
      • Word Mover’s dist. (WMD) - operates on word embeddings; the minimum distance the embedded words of one doc need to travel to reach the embedded words of the other doc
      • Number of changes
    5. Jaccard similarity coeff. [on orig] - uses intersection & union to measure similarity of finite sample sets. Large J means high sample similarity.
      J(A,B) = |A ∩ B| / |A ∪ B|
  2. Attack Evaluation - Choose metrics as per task at hand.
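
Below is a minimal sketch, assuming plain Python/NumPy, of several of the measures above (Euclidean distance, cosine similarity, Levenshtein edit distance, Jaccard coefficient); the example vectors and token lists are illustrative stand-ins for word embeddings and tokenized sentences.

```python
import numpy as np

def euclidean(p, q):
    # d(p, q) = sqrt( sum_i (p_i - q_i)^2 )
    return np.linalg.norm(p - q)

def cosine_similarity(p, q):
    # cos(p, q) = p.q / (||p|| * ||q||); unaffected by vector norm (i.e. word frequency)
    return np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

def levenshtein(a, b):
    # minimum number of char insertions, removals, substitutions turning string a into b
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,            # removal
                                     dp[j - 1] + 1,        # insertion
                                     prev + (ca != cb))    # substitution
    return dp[len(b)]

def jaccard(A, B):
    # J(A, B) = |A ∩ B| / |A ∪ B|; large J => high sample similarity
    A, B = set(A), set(B)
    return len(A & B) / len(A | B)

if __name__ == "__main__":
    p, q = np.random.rand(300), np.random.rand(300)          # stand-ins for word vectors
    print(euclidean(p, q), cosine_similarity(p, q))
    print(levenshtein("adversarial", "adversary"))
    print(jaccard("the movie was great".split(), "the movie was awful".split()))
```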

CNN for Sentence Classification [Yoon Kim] - Word2Vec to represent input; convolutions along the word-sequence direction with multiple filter widths; max-over-time pooling.
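
A minimal sketch of a Kim-style sentence-classification CNN, assuming PyTorch; vocabulary size, filter widths, and class count are illustrative placeholders. Parallel convolutions of several widths slide along the word sequence, followed by max-over-time pooling and a linear classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, num_classes=2,
                 filter_widths=(3, 4, 5), num_filters=100):
        super().__init__()
        # in Kim's setup the embedding would be initialized from Word2Vec
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # one 1-D convolution per filter width, sliding along the word sequence
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, w) for w in filter_widths])
        self.fc = nn.Linear(num_filters * len(filter_widths), num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        # max-over-time pooling after each convolution
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))        # (batch, num_classes) logits

# usage: logits = TextCNN()(torch.randint(0, 20000, (8, 40)))
```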

CNN for Text Classification [Zhang et al] - character level one-hot encoding; data augmentation

RNN for Language Modeling [Bengio et al] - estimate the probability of a word sequence in a recurrent manner; i/p is the feature vectors of preceding words; o/p is a conditional probability distribution over the output vocab.
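
A minimal recurrent language-model sketch, assuming PyTorch and an LSTM cell; hyper-parameters are placeholders. The i/p is embeddings of the preceding words and the o/p at each step is a (log-)conditional probability distribution over the vocab.

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):                  # (batch, seq_len) of preceding words
        h, _ = self.rnn(self.embedding(token_ids))
        logits = self.proj(h)                      # (batch, seq_len, vocab_size)
        # log P(w_t | w_1 .. w_{t-1}): conditional distribution over the vocab at each step
        return torch.log_softmax(logits, dim=-1)

# usage: log_probs = RNNLanguageModel()(torch.randint(0, 10000, (4, 20)))
```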

Seq2Seq model for NMT - OpenNMT; Seq2Seq models generate an output sequence from a given input sequence using an encoder-decoder arch; 2 RNNs - i) Encoder : processes the i/p and compresses it into a vector rep. ii) Decoder : predicts the o/p
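
A minimal encoder-decoder sketch, assuming PyTorch, GRU cells, and greedy decoding; vocabulary sizes and the BOS token id are placeholders, not values from any specific NMT system.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=8000, tgt_vocab=8000, emb=256, hidden=512, bos_id=1):
        super().__init__()
        self.bos_id = bos_id
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, max_len=30):
        # Encoder: process the i/p and compress it into a vector rep. (final hidden state)
        _, h = self.encoder(self.src_emb(src_ids))
        # Decoder: predict the o/p one token at a time (greedy decoding)
        tok = torch.full((src_ids.size(0), 1), self.bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):
            o, h = self.decoder(self.tgt_emb(tok), h)
            tok = self.out(o[:, -1]).argmax(dim=-1, keepdim=True)
            outputs.append(tok)
        return torch.cat(outputs, dim=1)            # (batch, max_len) predicted token ids

# usage: translation_ids = Seq2Seq()(torch.randint(0, 8000, (2, 15)))
```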

Attention model for Machine Comprehension - BiDAF; used to encode long sequences; attention allows the decoder to look back at the hidden states of the source seq. The weighted avg of these hidden states is given as another i/p to the decoder. Vanilla-attention models look at the input seq.; self-attention models look at the surrounding words in the same seq. to get context-sensitive word reps.
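
A minimal sketch of vanilla (encoder-decoder) attention with dot-product scores, assuming PyTorch: the decoder state is scored against every source hidden state, and the weighted average of those hidden states (the context vector) is what gets fed back to the decoder.

```python
import torch

def attention_context(decoder_state, encoder_states):
    """decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)."""
    # dot-product scores of the decoder state against every source hidden state
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)   # (batch, src_len)
    weights = torch.softmax(scores, dim=1)                                      # attention distribution
    # weighted average of the source hidden states = context vector fed back to the decoder
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)        # (batch, hidden)
    return context, weights

# usage: ctx, attn = attention_context(torch.rand(2, 512), torch.rand(2, 17, 512))
```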

Reinforcement Learning Models in Dialogue systems

Deep Generative Models - generate realistic textual data from a latent space; GANs and VAEs. VAE = encoder + generator: the encoder encodes the i/p into the latent space; the generator generates samples from the latent space.
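
A minimal text-VAE sketch, assuming PyTorch and GRU encoder/decoder; sizes are placeholders. The encoder maps the i/p to a latent Gaussian, a latent sample is drawn via the reparameterization trick, and the generator decodes from that latent code.

```python
import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab=5000, emb=128, hidden=256, latent=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.init_h = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, token_ids):
        # encoder compresses the i/p into a latent Gaussian (mu, logvar)
        _, h = self.encoder(self.emb(token_ids))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        # generator decodes a sequence conditioned on the latent sample z
        dec, _ = self.decoder(self.emb(token_ids), self.init_h(z).unsqueeze(0))
        return self.out(dec), mu, logvar   # reconstruction logits + params for the KL term
```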

Some adv examples' notations :

Attacks

  1. Model access group - knowledge of the attacked model at the time the attack is performed
    1. White-box attacks : full info of model; worst-case attack; more effective than black-box
      1. FGSM (Fast Gradient Sign Method) based [deets - https://www.tensorflow.org/tutorials/generative/adversarial_fgsm] (a minimal FGSM-on-embeddings sketch appears after this sub-list)
        1. TextFool - approximate the contribution (magnitude) of text items (hot phrases) that play a role in text classification using cost gradients; done manually; insertion, modification, removal strategies;
          1. find the cost gradient ∇J(f, x, c’) using backprop [f = model func, x = training sample, c’ = target class, c = orig class]
          2. Find hot characters - word level : the words with the highest gradient magnitude. Character level : Hot Training Phrases (HTPs) contain hot chars and occur frequently.
          3. Insertion : insert HTPs of c’ near the phrases of c
          4. Modification : identify Hot Sample Phrases (HSPs) and replace chars in them with misspellings etc.; follow the direction of the cost gradient ∇J(f,x,c) and the opposite direction of ∇J(f,x,c’)
          5. Removal : remove adjectives/adverbs from HSPs
          6. tested on a CNN text classifier
        2. Removal-addition-replacement strategy - words are ordered according to their contribution to the classification; greedy
          1. Remove the adverb (w_i) that contributes most to the text classification.
          2. If the o/p has incorrect grammar, insert a candidate word p_j before w_i
          3. If no p_j gives the highest cost gradient in the o/p, replace w_i with p_j
        3. Malware Detection - identify malicious software using PEs as features; rep. as an m-dim binary indicator vector, 1 = PE present, m = num of PEs; 2 works -
          1. 4 bounding methods to create adv.eg.
            1. First 2 use multi-step FGSM ; restrict perturbations to the binary domain using dFGSM & rFGSM
            2. 3rd method - multi-step Bit Gradient Ascent (BGA) : set the j’th feature bit if the corr. partial deriv. of the loss is >= ( loss gradient's l2-norm / √m )
            3. 4th - multi-step Bit Coordinate Ascent (BCA) : in each step, update the 1 bit with the max corr. partial deriv. of the loss
          2. Append a uniform random sequence of bytes (payload) to the orig binary. Then embed this new binary and run iterative FGSM on the embedding until the detector makes a wrong prediction. Reconstruct the adversarial embedding back to a valid binary seq by mapping the embedding to its closest neighbour in the valid embedding space.
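
As a rough illustration of the FGSM-based items above (a hedged sketch, not any single paper's exact method): one FGSM step taken on the word-embedding representation of a text input, followed by a nearest-neighbour projection back to valid tokens. `model` is assumed to map embeddings to class logits; PyTorch is the assumed framework.

```python
import torch
import torch.nn.functional as F

def fgsm_on_embeddings(model, emb_matrix, token_ids, labels, eps=0.1):
    """One FGSM step on the embedding representation of a batch of texts.

    model      : assumed to map embeddings (batch, seq_len, dim) -> class logits
    emb_matrix : (vocab, dim) embedding table, used to project back to real words
    """
    x = emb_matrix[token_ids].clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # move every word embedding in the direction that increases the loss
    x_adv = (x + eps * x.grad.sign()).detach()
    # project each perturbed embedding back to its nearest real word vector
    dists = torch.cdist(x_adv.reshape(-1, x_adv.size(-1)), emb_matrix)   # (batch*seq_len, vocab)
    return dists.argmin(dim=-1).reshape(token_ids.shape)                 # adversarial token ids
```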
      2. JSMA (Jacobian Saliency Map Adversary) based
        1. find the input components that contribute most towards the adversarial direction; compute the Jacobian via computational graph unfolding; craft adv. egs for 2 types of RNN o/p :
          1. Categorical - consider the column Jac_F[:, j] corr. to o/p component j; for word i, identify the perturbation direction using the sign of the Jacobian.
          2. Sequential - after finding the Jacobian, alter the subset of i/p steps with high absolute Jacobian values to achieve the desired modification on a subset of the o/p
        2. Malware Detector - binary feature vector to rep. the application; preserves functionality of apps; craft adv. egs on the i/p feature vector (0->1 or 1->0) using JSMA (see the sketch after this sub-list)
          1. Compute gradient of fwd deriv to estimate perturb dir
          2. Choose the perturbation δ for an i/p sample with the maximal positive gradient into the target class
          3. Bound num of features to 20
          4. Bound num of features modified using L1-norm
          5. For defense - feature reduction, distillation, adversarial training (most eff)
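
A hedged sketch of the JSMA-style loop described above for a binary (e.g. malware) feature vector, assuming PyTorch; `model` is assumed to return class logits. It flips the not-yet-set feature whose gradient toward the target class is largest, only in the 0 -> 1 direction (to preserve app functionality) and up to the 20-feature bound.

```python
import torch

def jsma_binary(model, x, target_class, max_changes=20):
    """x: (m,) binary feature vector; flip 0 -> 1 bits towards target_class."""
    x_adv = x.clone().float()
    for _ in range(max_changes):
        x_in = x_adv.clone().requires_grad_(True)
        logits = model(x_in.unsqueeze(0)).squeeze(0)
        if logits.argmax().item() == target_class:
            break                                     # already classified as the target class
        # forward derivative: gradient of the target-class logit w.r.t. every input feature
        grad = torch.autograd.grad(logits[target_class], x_in)[0]
        grad[x_adv == 1] = float("-inf")              # only 0 -> 1 flips, preserving functionality
        j = grad.argmax()
        if grad[j].item() <= 0:
            break                                     # no feature with a positive gradient left
        x_adv[j] = 1.0
    return x_adv
```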
      3. C&W Based
        1. Medical Records : detect susceptible events and measurements in each patient’s records & provide clinical help.
          1. Predictive model - LSTM
          2. Patient data matrix X_i ∈ R^(d×t) , d = num of medical features; t = time index of medical check
          3. Generate adv. eg. by optimizing over the perturbation: logit(.) - logit layer o/p , λ - reg. param of the L1-norm , y’ - target label , y - orig label
          4. Pick optimal eg.
          5. Use it to compute susceptibility score for record
        2. Seq2Sick : attack seq2seq models using 2 targeted attacks:
          1. non-overlapping attack : generate an adv. seq. entirely different from the orig o/p; hinge-like loss func. that optimizes on the logit layer (a hinge-loss sketch appears after this sub-list)
          2. keyword attack : targeted keywords should appear in the o/p seq; optimize on the logit layer so that the targeted keyword’s logit is the largest; solve a mask func. m to resolve the keyword collision problem; 2 reg. methods - (i) Group lasso reg. - for group sparsity (ii) Group gradient reg. - keep adversaries in the permissible range of the embedding space
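
A hedged sketch of a hinge-like, logit-layer loss in the spirit of the non-overlapping Seq2Sick objective, assuming PyTorch: at every decoding step, the original output token's logit is pushed below the best competing logit by a margin. The tensors are illustrative; the actual attack additionally applies the regularization terms listed above.

```python
import torch

def non_overlapping_hinge(logits, orig_out, margin=1.0):
    """logits: (T, vocab) decoder logit-layer outputs; orig_out: (T,) original output tokens."""
    orig_logit = logits.gather(1, orig_out.unsqueeze(1)).squeeze(1)    # z_{y_t} at every step t
    masked = logits.clone()
    masked.scatter_(1, orig_out.unsqueeze(1), float("-inf"))           # exclude the original token
    best_other = masked.max(dim=1).values                              # max_{k != y_t} z_k
    # hinge: zero once every step's original token is out-ranked by at least the margin
    return torch.clamp(margin + orig_logit - best_other, min=0).sum()
```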
      4. Direction-based
        1. HotFlip - generate adv. egs through atomic flip operations chosen using directional derivatives (see the flip-scoring sketch after this sub-list)
          1. Represent char level ops (swap, insert, delete) as vectors in i/p space
          2. Estimate change in loss J(x,y) by directional derivs wrt these vectors
          3. Using beam search, HotFlip finds best dir for multiple flips
          4. HotFlip is extended to targeted attacks using 1) controlled attack - remove a specific word from the o/p 2) targeted attack - replace a specific word with a chosen one
          5. For this, max J(x,y_t) and min J(x,y_t’) , t = target word; t’ = word to replace t
          6. 3 types of attacks
            1. One-hot :manipulate all words in text with best ops
            2. Greedy : pick best op from text + perform fwd & bwd pass
            3. Beam search : replace search method in greedy with beam search
          7. Threshold - only change 20% of chars
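
A minimal sketch of the HotFlip first-order flip score, assuming a character-level PyTorch model over one-hot inputs: the loss change from flipping the character at position i to character b is estimated by the directional derivative, i.e. grad[i, b] - grad[i, current char], and the best single flip is the argmax over all positions and characters. A beam search over multiple flips (as above) would reuse this score.

```python
import torch
import torch.nn.functional as F

def best_char_flip(model, one_hot, label):
    """one_hot: (seq_len, alphabet) one-hot chars; label: (1,) gold class.
    Returns (position, new_char, estimated loss increase) for the best single flip."""
    x = one_hot.clone().float().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), label)
    grad = torch.autograd.grad(loss, x)[0]                    # dJ/dx, same shape as x
    current = (grad * x).sum(dim=1, keepdim=True)             # gradient entry at the current char
    gain = grad - current                                     # first-order loss change per flip a -> b
    gain[x.bool()] = float("-inf")                            # flipping to the same char is meaningless
    idx = gain.argmax().item()
    pos, new_char = divmod(idx, gain.size(1))
    return pos, new_char, gain.max().item()
```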
      5. Attention-based - compare robustness of CNNs vs RNNs through 2 attacks; only uses the attention scores, not the attention mechanism itself
        1. First attack: use the model’s internal attention distribution to find the pivotal sentence, i.e. the sentence given the largest weight by the model when making the correct prediction.
        2. Exchange the words receiving the most attention with random words from the vocab.
        3. Second attack: remove the sentence that receives the highest attention
      6. Reprogramming - uses AP to attack sequence neural classifiers; AP -
        1. An adversarial reprogramming func. g_θ is trained so that the DNN performs an alternate task w/o modifying the DNN params
        2. Like transfer learning but no change in param
        3. Apply Gumbel-Softmax to train g_θ so that it works on discrete data (see the Gumbel-Softmax sketch below)
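
A minimal sketch of the Gumbel-Softmax relaxation mentioned above, assuming PyTorch: it yields a differentiable, approximately one-hot distribution over the vocab, which is what allows a reprogramming function g_θ that emits discrete tokens to be trained by backprop. The commented usage lines are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    """Differentiable approximation of sampling a one-hot token from `logits` (..., vocab)."""
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    return F.softmax((logits + gumbel_noise) / tau, dim=-1)   # soft one-hot; hardens as tau -> 0

# illustrative use inside a reprogramming function:
# soft_tokens = gumbel_softmax_sample(theta_logits)        # (seq_len, vocab), differentiable
# adv_embeddings = soft_tokens @ embedding_matrix          # fed to the frozen victim DNN
```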
      7. Hybrid - perturb i/p text at the word-embedding level using FGSM+DeepFool; round off adv. egs to the nearest meaningful word vectors using WMD
    2. Black-box attacks : no detailed info of NN; more practical