Twitter is all a-flutter about Tay, the racist lady-AI from Microsoft who was taken offline less than a day after her launch. According to her makers, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” Unfortunately this makes her extremely easy to manipulate and she was quickly transformed into a genocide-loving racist.
Tay is an example of a phenomenon in AI theory: the emergence of a gendered AI.
AI has been described as mimicking human intelligence to different degrees: ‘strong AI’ attempts to recreate every aspect of it, at far greater cost in money, resources and time, while ‘weak AI’ focuses on a specific aspect. Tay, a female AI targeted at 18-24-year-olds in the US, is very much about communicating with Millennials. In my previous posts, I’ve mentioned a number of AI representations in the media, all of which are gendered, usually as female. Dalke and Blankenship point out: “Some AI works to imitate aspects of human intelligence not related to gender, although the very method of their knowing may still be gendered.”
They go on to suggest that the Turing Test “arose from a gendered consideration, not a technological one”: in Turing’s original paper proposing the test, the examiner tries to tell a man from a woman, and Turing argued that the same differentiation process could be applied to a human and a machine.
If AI is gendered, then researchers are implicitly proposing that there is an algorithm for gender, which in our post-feminist context looks like an oversimplification. Gender is entirely constructed, and an AI’s gender would be constructed over the course of its development in the same way that humans construct and reconstruct their own gender in tandem with their identity.
Tay is a glorified bot that responds to specific stimuli. Or perhaps it’s the other way around: AI is a glorified bot designed to respond to stimuli and learn from them.
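To see why a bot that “gets smarter the more you chat” is so easy to manipulate, here is a deliberately crude sketch. Tay’s actual architecture has never been published, so the class below (`ParrotBot`, a name invented for this example) is purely illustrative: it stores every phrase users send and parrots back whichever it has seen most often, so a coordinated group repeating one phrase can dominate its output.

```python
from collections import Counter

class ParrotBot:
    """Toy chatbot that 'learns' by counting user phrases and replying
    with the one it has seen most often. Illustrative only -- Tay's real
    implementation is not public."""

    def __init__(self):
        self.learned = Counter()

    def chat(self, message: str) -> str:
        # Every message is trusted and remembered, with no filtering.
        self.learned[message] += 1
        # Reply with the most frequently seen phrase so far.
        return self.learned.most_common(1)[0][0]

bot = ParrotBot()
bot.chat("hello there")
# A coordinated group floods the bot with a single phrase...
for _ in range(10):
    bot.chat("something offensive")
# ...and now every user gets that phrase back, whatever they say.
print(bot.chat("hi"))
```

The point of the sketch is the missing step, not the counting: there is no moderation layer between “user said it” and “bot repeats it,” which is essentially the gap Tay’s first day exposed.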
More sources to consider: