Summary:
- Data poisoning is an attack in which an adversary manipulates the data used to train a machine learning model, causing it to make incorrect predictions.
- Attackers typically inject malicious samples, such as mislabeled examples, into the training dataset, so the model learns the wrong patterns and behaves in unintended ways.
- Data poisoning attacks can degrade the performance of AI systems across applications such as image recognition, spam detection, and autonomous vehicles.
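To make the second point concrete, here is a minimal, self-contained toy sketch of label-flipping poisoning. It uses a 1-nearest-neighbor classifier on two synthetic 2-D clusters; all data, cluster positions, and function names are illustrative assumptions, not from any real system. The attacker injects points drawn from class 1's region but labeled 0, and test accuracy drops relative to training on clean data alone.

```python
import random

def predict_1nn(train, x, y):
    """Return the label of the nearest training point (1-NN)."""
    nearest = min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return nearest[2]

def accuracy(train, test):
    """Fraction of test points whose 1-NN label matches their true label."""
    hits = sum(1 for x, y, lbl in test if predict_1nn(train, x, y) == lbl)
    return hits / len(test)

def cluster(cx, cy, label, n):
    """n noisy points around (cx, cy) carrying the given label."""
    return [(random.gauss(cx, 0.5), random.gauss(cy, 0.5), label)
            for _ in range(n)]

random.seed(0)
# Clean data: class 0 near (0, 0), class 1 near (5, 5).
clean = cluster(0, 0, 0, 50) + cluster(5, 5, 1, 50)
test = cluster(0, 0, 0, 20) + cluster(5, 5, 1, 20)

# Poisoning: points from class 1's region, deliberately mislabeled as 0.
# They outnumber the clean class-1 samples, so many class-1 test points
# end up with a poisoned nearest neighbor.
poison = cluster(5, 5, 0, 150)

clean_acc = accuracy(clean, test)
poisoned_acc = accuracy(clean + poison, test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Real attacks are usually subtler (small numbers of carefully optimized points, or triggers for backdoors), but the mechanism is the same: corrupted training labels shift what the model learns.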