Many of us like to think that artificial intelligence could help eradicate biases, that algorithms could help humans avoid hiring or policing according to gender- or race-related stereotypes. But a new ...
Ever since Microsoft’s chatbot Tay started spouting racist commentary within 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice.