What are AI supply chain risks related to external datasets and pre-trained models?


Multiple Choice

What are AI supply chain risks related to external datasets and pre-trained models?

Explanation:
When evaluating AI supply chain risks tied to external datasets and pre-trained models, the major concerns are data poisoning, bias, licensing and compliance, drift, and backdoors. Data poisoning occurs when malicious or mislabeled examples are introduced into external training or fine-tuning data, steering the model's behavior in harmful or unintended ways. Bias or harmful content can creep in from training data that reflects societal prejudices or contains explicitly unsafe material, leading to biased outputs or harmful recommendations. Licensing and compliance issues arise from unclear data provenance or model licenses, potentially violating usage terms, privacy laws, or intellectual property rights when external sources are used. Drift occurs as data distributions change over time or as models interact with evolving inputs, causing performance degradation if the model is not monitored and updated. Backdoors can be implanted in data or pre-trained models, creating hidden triggers that alter behavior under specific conditions and compromise security and reliability. The other options focus on only a single issue (such as drift) or on unrelated hardware concerns, missing the breadth of risks that external datasets and models can introduce.
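As a hedged illustration of one common mitigation for tampering and backdoor risk, the sketch below verifies a downloaded pre-trained model artifact against a pinned SHA-256 digest before loading it. The URL and digest are placeholders, not real values; a production check would also cover dataset files and record their provenance.

```python
import hashlib
import urllib.request

# Placeholder values for illustration only: in practice the URL would point at the
# vendor's artifact and the digest would come from a trusted, out-of-band source.
MODEL_URL = "https://example.com/models/pretrained-model.bin"
PINNED_SHA256 = "0" * 64  # replace with the published SHA-256 of the artifact

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    local_path, _ = urllib.request.urlretrieve(MODEL_URL, "pretrained-model.bin")
    actual = sha256_of(local_path)
    if actual != PINNED_SHA256:
        raise RuntimeError(f"Integrity check failed: expected {PINNED_SHA256}, got {actual}")
    # Only deserialize/load the model after the integrity check passes.
    print("Artifact hash matches the pinned digest; safe to proceed with loading.")
```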

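Drift, in particular, is usually monitored continuously rather than fixed once. Below is a minimal sketch, assuming a single numeric feature and synthetic data, of how a two-sample Kolmogorov-Smirnov test could flag a shift between training-time and production distributions; the alert threshold is illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: "reference" mimics training-time values of one numeric feature,
# "production" mimics recent live traffic with a deliberate shift in mean and spread.
rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.2, size=5_000)

# Two-sample KS test: a small p-value suggests the two samples come from
# different distributions, i.e. the feature has drifted.
statistic, p_value = ks_2samp(reference, production)
ALERT_THRESHOLD = 0.01  # illustrative; tune per feature, sample size, and alert tolerance

if p_value < ALERT_THRESHOLD:
    print(f"Possible drift: KS statistic={statistic:.3f}, p-value={p_value:.2e}")
else:
    print("No significant drift detected for this feature.")
```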
