Differentiate between data poisoning and model poisoning in an AI system, and give an example of each.



Explanation:

Data poisoning happens when an attacker corrupts the data used to train a model. By injecting mislabeled examples, corrupted feature values, or specially crafted samples into the training set, the attacker causes the model to learn incorrect mappings, degrading its overall performance or implanting targeted errors and backdoors.
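As a minimal sketch of the idea (hypothetical code, not from the exam material), a label-flipping attack only needs write access to the training labels; the function name `flip_labels` and the NumPy label vector here are illustrative assumptions:

```python
import numpy as np

def flip_labels(y, fraction=0.1, target_label=1, seed=0):
    """Return a copy of y with a fraction of labels flipped to target_label.

    Illustrates label-flipping data poisoning: the attacker touches only
    the training data, not the model or the training code.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = target_label
    return y_poisoned

# Example: 10% of a clean binary label vector is silently flipped to class 1.
y_clean = np.zeros(100, dtype=int)
y_bad = flip_labels(y_clean, fraction=0.1)
print(y_bad.sum())  # roughly 10 poisoned labels
```

Any model later trained on `y_bad` learns from corrupted ground truth, which is the defining trait of data poisoning.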

Model poisoning targets the training process or the model parameters themselves. The attacker manipulates how the model is learned, for example by compromising the training infrastructure or injecting malicious gradient updates in a distributed training setup, so the final model misbehaves even when the training data itself is clean.
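A toy sketch of the distributed-training case, assuming a naive federated-averaging aggregator (the names `federated_average` and `malicious_update` are illustrative, not a real library API):

```python
import numpy as np

def federated_average(updates):
    """Naive federated averaging: element-wise mean of client updates."""
    return np.mean(updates, axis=0)

def malicious_update(honest_update, scale=-10.0):
    """A poisoned client update: a scaled, inverted gradient designed to
    drag the aggregated model away from the honest direction.
    The training data is untouched; the update itself is hostile.
    """
    return scale * honest_update

# Three honest clients and one attacker in a single aggregation round.
honest = [np.ones(4), np.ones(4) * 0.9, np.ones(4) * 1.1]
attacker = malicious_update(np.ones(4))

print(federated_average(honest))                # ~[1, 1, 1, 1]
print(federated_average(honest + [attacker]))   # pulled toward the attacker
```

Because a plain mean has no defense against outliers, one hostile participant can dominate the round, which is why this counts as poisoning the model rather than the data.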

An example of data poisoning is adding mislabeled training samples to steer the model toward incorrect classifications. An example of model poisoning is submitting malicious gradient updates during distributed training to corrupt the final model weights and induce harmful behavior.

Other scenarios, such as inference-time tricks, data theft, or theft of model weights, pertain to different security risks and are not poisoning in the training sense.
