Malicious attackers can craft targeted adversarial examples by adding tiny
perturbations, forcing neural networks to produce specific incorrect outputs.
Because adversarial examples often transfer across models, networks remain
vulnerable even in black-box settings. Recent studies have demonstrated the
effectiveness of ensemble-based methods for generating transferable adversarial
examples. To further enhance transferability, model augmentation methods
generate additional networks to participate in the ensemble. However, existing
model augmentation methods have only been shown effective for untargeted
attacks. In this work, we
propose Diversified Weight Pruning (DWP), a novel model augmentation technique
for generating transferable targeted attacks. DWP leverages weight pruning, a
technique commonly used in model compression. Compared with prior work, DWP
simultaneously protects necessary connections and ensures the diversity of the
pruned models, both of which we show are crucial for targeted transferability.
Experiments on the ImageNet-compatible dataset under various and more
challenging scenarios confirm its effectiveness: transferring to adversarially
trained models, non-CNN architectures, and Google Cloud Vision. The results
show that DWP improves targeted attack success rates by up to $10.1\%$,
$6.6\%$, and $7.0\%$ in these three scenarios, respectively, when combined with
state-of-the-art methods. The source code will be made available after
acceptance.
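
To make the core idea concrete, below is a minimal PyTorch sketch of
diversified weight pruning. It is illustrative only: the function name
`diversified_weight_prune`, the `prune_ratio` and `protect_ratio` parameters,
and the magnitude-based scheme for protecting large weights while randomly
pruning among the remaining small-magnitude ones are our own assumptions for
exposition, not the paper's exact procedure.

```python
import copy
import torch


def diversified_weight_prune(model, prune_ratio=0.1, protect_ratio=0.3):
    """Return a randomly pruned copy of `model` (illustrative sketch).

    The largest `protect_ratio` fraction of weights in each weight matrix
    is kept intact (the "necessary connections"); a random subset of the
    remaining small-magnitude weights is zeroed, so repeated calls yield
    diverse pruned sub-models for the ensemble.
    """
    pruned = copy.deepcopy(model)
    with torch.no_grad():
        for param in pruned.parameters():
            if param.dim() < 2:  # skip biases / normalization parameters
                continue
            flat = param.abs().flatten()
            # Magnitude threshold below which weights are prunable.
            k = max(1, int(flat.numel() * (1 - protect_ratio)))
            thresh = flat.kthvalue(k).values
            prunable = param.abs() < thresh
            # Randomly drop `prune_ratio` of the prunable weights.
            drop = prunable & (torch.rand_like(param) < prune_ratio)
            param[drop] = 0.0
    return pruned


# Usage sketch: build a diverse ensemble of pruned copies of one surrogate
# model, then average their losses when crafting the targeted perturbation.
# ensemble = [diversified_weight_prune(base_model) for _ in range(8)]
```

Because each call samples a different random subset of prunable weights, the
pruned copies disagree slightly in their decision boundaries while preserving
the connections that matter most, which is the intuition behind using them as
augmented ensemble members.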