Which option is NOT a commonly cited defense against adversarial examples?

Multiple Choice

Which option is NOT a commonly cited defense against adversarial examples?

A. Adversarial training
B. Gradient masking
C. Defensive distillation
D. Data normalization for standard ML tasks

Correct answer: D. Data normalization for standard ML tasks

Explanation:

A defense against adversarial examples must specifically address how small, crafted perturbations can flip a model's decision. Adversarial training does this directly: it exposes the model to perturbed inputs during training so it learns to classify them correctly. Gradient masking tries to conceal gradient information from attackers, but they can often work around it with surrogate models or alternative optimization paths, so it is not reliable on its own. Defensive distillation smooths the model's outputs to make the decision boundary harder to exploit with small changes, yet strong attacks (notably Carlini and Wagner's) have been shown to defeat it.
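
To make adversarial training concrete, here is a minimal sketch in Python with PyTorch, using the one-step FGSM attack to generate perturbed inputs during training. The model, optimizer, and epsilon value are illustrative placeholders, not part of the exam material.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    # One-step FGSM: move each input feature by epsilon in the
    # direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    # Train on both clean and perturbed inputs so the model learns
    # to classify crafted examples correctly.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice the perturbed input is usually clamped back into the valid input range (for example [0, 1] for image pixels) before training on it.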

Data normalization for standard ML tasks, by contrast, is a general preprocessing step that puts features on comparable scales and improves training stability. It does not specifically harden the model against crafted perturbations intended to deceive it, so it is not considered a defense against adversarial examples on its own.
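
For contrast, a sketch of plain z-score normalization in Python shows it is nothing more than a rescaling of features; the function name and sample matrix are illustrative.

import numpy as np

def normalize(X):
    # Standardize each feature column to zero mean and unit variance.
    # This aids training stability but does nothing against crafted inputs.
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-8  # guard against constant columns
    return (X - mean) / std

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
print(normalize(X))  # both columns now sit on the same scale

An attacker's perturbation passes through this linear rescaling intact in relative terms, which is why normalization offers no adversarial protection.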
