We recently partnered with @Sprinklr for an independent assessment of hate speech on Twitter, which we’ve been sharing data on publicly for several months. Sprinklr’s AI-powered model found that the reach of hate speech on Twitter is even lower than our own model quantified 🧵
What’s driving the difference? The context of the conversation and how toxicity is determined. Sprinklr defines hate speech more narrowly, evaluating slurs in the nuanced context of their use. Twitter has, to this point, taken a broader view of the potential toxicity of slur usage.
@TwitterSafety @Sprinklr Good job! Keep changing the definition and you'll be able to get down to zero!
@TwitterSafety @Sprinklr Ahh yes, AI. AI, the saviour of everything. AI, the convenient thing to use when you can't be bothered to do some serious analysis. And people like @mariannaspring (and many others) will disagree with you about hate speech, trolls, and their actions on your platform.
@TwitterSafety @elonmusk @Sprinklr @elonmusk @CardanoFeed can you both do a cooperation?
@TwitterSafety @Sprinklr What is hate speech?