Summary:
- This article discusses new research showing that large language models can exhibit hiring bias against white men.
- The research found that these models tend to rate job applicants with stereotypically white male names as less suitable for certain roles, even when their qualifications are identical to those of other applicants.
- This bias underscores the need to carefully design and test AI systems so they do not perpetuate unfair discrimination in hiring and other decision-making processes.