Attack of the Sex Bots: Brands and AFL Teams Hire Automated Comment ‘Security’ to Stop Trolls and Porn Bots Destroying Sales


Mi-3 covers how Respondology helps AFL teams and brands eliminate the trolls and porn bots destroying their sales:

A wave of sex bots is now competing with trolls to cripple brands’ social pages – just as Australian firms pile into social commerce, behind only China and the US. Facebook and Twitter don’t want to own the problem, but some tech firms have stepped into the void with technology that neuters the sex bots and leaves the trolls shouting only to themselves, while reportedly boosting ad spend returns. 

What you need to know:

  • Brands all over the world saw a wave of sex bots posting “soft porn” links on their social posts about a month ago, causing havoc for social media teams and hurting sales.
  • Sexually explicit material on social posts – as well as overly negative or abusive comments – can have a direct impact on sales, reputation and individuals. 
  • But as Australian brands pile into social commerce, they will find Facebook and Twitter refuse to take ownership of the problem.
  • Some tech firms, however, are stepping into the void, working with big brands, retailers and AFL teams to silence the trolls and neuter the sex bots – trolls do not even know that nobody else can see their posts.
  • One such tool has also reportedly helped improve brand return on ad spend (ROAS) by 34 per cent.

Social security

In a shopping centre, a person yelling racial slurs, abusing staff, or displaying sexually explicit images is rapidly tackled by police or security. Social media is a different story.

Yet social commerce is where the big platforms are heading in a race to catch Amazon. And where the money goes, the sex bots follow.

A recent report from e-Marketer found e-commerce sales in Australia grew by 53 per cent year on year. More than 30 per cent of Australians have made a purchase through social media – the third-highest rate in the world, after China and the US.

“Brands can control what happens in their brick-and-mortar stores, but what happens in their digital store – social media has been the Wild West. Two billion people can say whatever they want about your product, about your brand, about your service, your customer service, you name it,” said Erik Swain, president of moderation platform Respondology.

And while most brands say they welcome negative feedback – an open discussion with customers about bad experiences can be important in identifying issues – people feel far more comfortable sharing vitriol online than offline, and that vitriol often has a direct impact on sales.

“The way some clients have fed it back to me is they view social comments as the new product review section, like Amazon review comments,” per Swain. But trolls and bots require increasingly sophisticated solutions.

Respondology has developed The Mod, a platform that filters comments against pre-determined lists of keywords – mild swearing, severe swearing, sexual references and LGBTQ references, for example. There are many similar products. But Respondology also employs a team of more than 1,000 human moderators – known as Responders – who decide whether an unflagged comment is appropriate. Those who left the abusive or offensive comments can still see them, but the broader public cannot. The troll rarely realises they have been filtered, Swain said. Nobody can hear their screams, and they don’t even know it.
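The shape of that approach is simple enough to sketch. Below is a minimal, hypothetical illustration of keyword-list filtering with a “shadow-hide” outcome and an escalation path to human review; the category names, keyword lists and function names are assumptions made for the example, not Respondology’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional
import re

# Hypothetical keyword lists, grouped into the kinds of categories the article
# mentions (severe swearing, sexual references, spam links). Illustrative only.
KEYWORD_LISTS = {
    "severe_swearing": {"exampleslur"},
    "sexual_references": {"dm for pics", "adult content"},
    "spam_links": {"bit.ly/", "click my profile"},
}

# A URL in an otherwise clean comment is a common bot signal worth a second look.
LINK_PATTERN = re.compile(r"https?://|www\.")

@dataclass
class Verdict:
    hidden: bool                 # hidden from the public; the author still sees it
    category: Optional[str]      # which keyword list triggered, if any
    needs_human_review: bool     # route to a human moderator for a judgment call

def moderate(comment_text: str) -> Verdict:
    """Shadow-hide a comment if it matches any keyword list; otherwise
    escalate link-bearing comments to a human reviewer."""
    text = comment_text.lower()
    for category, keywords in KEYWORD_LISTS.items():
        if any(kw in text for kw in keywords):
            return Verdict(hidden=True, category=category, needs_human_review=False)
    return Verdict(hidden=False, category=None,
                   needs_human_review=bool(LINK_PATTERN.search(text)))

if __name__ == "__main__":
    print(moderate("Great win on the weekend, see you Saturday!"))
    print(moderate("Adult content here, bit.ly/xyz"))
    print(moderate("Is this jumper available in kids' sizes? www.example.com"))
```

In a real system the keyword lists would be far larger and configurable per brand, and – as the article describes – hidden comments would stay visible to their authors while disappearing for everyone else.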

Read the full article on Mi-3.com

