Identification
- Label (rdfs)
- Data Poisoning
- Preferred Label
- None
- Alternative Labels
- Adversarial Data Manipulation, Dataset Compromise, Malicious Data Tampering, Poisoning Attack, Training Data Corruption
- Identifier
- N/A
Definition and Examples
- Definition
- Data Poisoning is an adversarial attack in which malicious actors intentionally alter or manipulate a machine learning model's training data to compromise its integrity and performance, causing the model to learn incorrect patterns or make erroneous predictions.
- Examples
- N/A
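The definition above can be illustrated with a minimal sketch of one common poisoning technique, label flipping, in which an attacker relabels a fraction of the training points so the model learns a shifted decision boundary. The nearest-centroid classifier, the data values, and the flipped labels below are all hypothetical and chosen only to make the effect visible; this is not a method taken from the source entry.

```python
# Minimal, assumption-laden sketch: label-flipping data poisoning
# against a toy 1-D nearest-centroid classifier.

def centroids(xs, ys):
    """Compute the mean feature value for each class label."""
    c = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        c[label] = sum(pts) / len(pts)
    return c

def predict(c, x):
    """Assign x to the class whose centroid is nearest."""
    return min(c, key=lambda label: abs(c[label] - x))

def accuracy(c, xs, ys):
    return sum(predict(c, x) == y for x, y in zip(xs, ys)) / len(ys)

# Clean training set: class 0 clusters near 0, class 1 near 10.
train_x = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
train_y = [0, 0, 0, 1, 1, 1]

# Poisoned labels: the attacker flips two class-1 points to class 0,
# dragging the class-0 centroid toward the class-1 region.
poison_y = [0, 0, 0, 0, 0, 1]

test_x = [0.5, 1.5, 6.5, 8.5]
test_y = [0, 0, 1, 1]

clean_acc = accuracy(centroids(train_x, train_y), test_x, test_y)
poisoned_acc = accuracy(centroids(train_x, poison_y), test_x, test_y)
print(clean_acc, poisoned_acc)  # the poisoned model misclassifies near the shifted boundary
```

On the clean labels the centroids sit at 1.0 and 9.0 (boundary at 5.0); after flipping, they move to 4.0 and 10.0 (boundary at 7.0), so the class-1 test point at 6.5 is misclassified even though no test data was touched.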
Translations
Class Relationships
- Sub Class Of
- Parent Class Of
- N/A
- Is Defined By
- N/A
- See Also
- N/A
Additional Information
- Comment
- N/A
- Description
- N/A
- Notes
- N/A
- Deprecated
- False
Metadata
- History Note
- N/A
- Editorial Note
- N/A
- In Scheme
- N/A
- Source
- N/A
- Country
- N/A