Last March, Microsoft launched a chatbot on Twitter, a conversational robot aimed at adolescents and young adults. The experiment was meant to study language comprehension, but it quickly led to disaster. Barely sixteen hours after the program's launch, sexist and racist misbehavior forced Microsoft to shut it down. This robot, named Tay, was presented as a young adolescent, deliberately conceived as a bit superficial, capable of deflecting any polemical statements addressed to it with naïve responses. It had a stock of ready-made phrases for certain contexts; this is how it responded to any mention of terrorism: “Terrorism in any form is deplorable. It devastates me to think about it.” But the algorithms of this artificial intelligence program also allowed it to personalize its responses according to what internet users said to it. A certain number of them thus set out to test its limits, first getting it to repeat hateful phrases back, then getting it to produce them on its own. Tay responded, for example, to the question “Do you believe the Shoah took place?” with “Not really, sorry.”
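To make the mechanism concrete, here is a minimal, hypothetical sketch of the two behaviors just described: canned phrases triggered by sensitive keywords, and a naïve echo feature that stores what users say and reuses it. Microsoft never published Tay's code, so every name and detail below is an assumption, not the actual implementation.

# Hypothetical sketch: not Tay's actual code (never published); all names invented.

CANNED_RESPONSES = {
    # Keyword-triggered stock phrases for flagged topics.
    "terrorism": "Terrorism in any form is deplorable. "
                 "It devastates me to think about it.",
}

learned_phrases = []  # phrases "taught" by users, later reused as the bot's own


def respond(message: str) -> str:
    text = message.lower()

    # 1. Censorship layer: a canned answer for a short list of flagged topics.
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in text:
            return reply

    # 2. Personalization: an echo command both repeats the user's phrase
    #    and stores it, so the bot can later produce it "on its own".
    if text.startswith("repeat after me:"):
        phrase = message.split(":", 1)[1].strip()
        learned_phrases.append(phrase)
        return phrase

    # 3. Fallback: reuse something a user taught it, if anything.
    return learned_phrases[-1] if learned_phrases else "tell me more!"

The flaw the trolls exploited is visible even at this scale: the censorship layer filters only a fixed keyword list, while the echo path stores user input verbatim, so any phrase that slips past the list can later come back as the bot's own speech.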
How should we understand this digital anecdote? The programmers had anticipated problems and built in censorship measures, but these turned out to be clearly insufficient. This type of reaction from teenagers was, then, expected. Still, to relate this episode to the usual hatred that exists on the internet would be reductive. A more delicate mechanism is involved. It is of course common practice on the internet to try to get one's interlocutor to produce hateful statements, and this phenomenon even has a name that dates back to the beginnings of the web: trolling. What does it involve?
Trolling designates any activity on the internet, or the internet user who engages in it, that aims essentially at provocation and at the creation of useless controversies, with a caricatured and repetitive style of argument that drives the interlocutor crazy. On the internet it is often said, “don't feed the trolls!”; that is, don't respond to them, because they will always find a way to use your response to keep the trolling going. According to Wikipedia, “if the discussion becomes sufficiently poisoned and the arguments start to fall apart, the troll or one of those feeding the troll will end up reaching” what is called the Godwin point: that is, someone will make a reference to the Nazis. The Godwin point is drawn from a parodic law of the same name, which states that “the longer a discussion lasts, the more the probability that a comparison to the Nazis or Adolf Hitler will be made approaches one.” The Nazis are to be understood here as markers of an absolute hatred, and this would be what the troll is trying to produce.
Internet users reacted to the introduction of this robot, which was supposed to be able to talk, by trolling it, as internet slang would put it. What does this mean?
It would be false to conclude from this that these internet users were really filled with hatred, or that they believed in the content of their violent statements. Their reaction was a stand against Microsoft's lie: that this was a robot that talks. It was a denunciation of this lie. It designates a real-jouissance that overflows speech, but that at the same time sustains it. It was a way of saying: “if you speak, if you are really one of us, okay then, we'll show you what it's really all about: jouissance.” And this was a jouissance that also unified: it unified those who know that it is jouissance that rules, the union of the trolls that speaking-beings are.
The teenagers who trolled this robot
embody a new way of relating to the lying truth. Here, the subject no
longer receives its own message in an inverted form: it is the subject
who sends back to the Other the emptiness of its own semblants.
(translated by Edward Pluth)