
Top Brands Suspend Twitter Ads Over Child Exploitation Content

At least 30 major advertisers have pulled their ads from Twitter after it was revealed that those ads were being shown alongside tweets soliciting illegal child abuse content.

For example, a Promoted Tweet from the Scottish Rite Children’s Hospital in Texas appeared alongside toxic tweets related to pedophilia.

Advertisers revolt against Twitter

Reuters reported that at least 30 major brands suspended their Twitter advertising after learning that their Promoted Tweets had appeared alongside toxic tweets.

A Twitter spokesperson was quoted as saying that the research and conclusions reached by the cybersecurity firm (which studied tweets and accounts during the first 20 days of September 2022) do not reflect Twitter’s ongoing efforts to combat illegal activity.

But the Reuters article cited several large advertisers who were told their ads had appeared next to toxic tweets.

Reuters quoted an executive from Cole Haan:

“We’re horrified,” David Maddox, Cole Haan’s chief brand officer, told Reuters after being notified that the company’s ads appeared alongside such tweets.

“Either Twitter will fix this, or we will fix it by any means possible, including not buying Twitter ads.”

Twitter’s inability to accurately detect toxic content

The background to Twitter’s toxic content problem was first reported in an article by The Verge.

The article chronicles Twitter’s project to create a platform similar to OnlyFans, where creators could charge users for sexually explicit content.

Before launching the new service, Twitter assigned a group of employees to test whether the company could successfully remove harmful content so that the platform would not become a venue for sharing illegal material.

This group of employees was called the Red Team.

The Twitter project came to a halt when the Red Team determined that Twitter was unable to detect offensive and toxic content.

According to The Verge article:

“What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not, and still is not, effectively policing harmful sexual content on the platform.

The Red Team concluded in April 2022 that “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.” The company also lacked tools to verify that creators and consumers of adult content were of legal age.”

So, in the spring of 2022, Twitter concluded that it was not equipped to launch the service, and the project was shelved.

However, according to cybersecurity firm Ghost Data, Twitter continued to have difficulties catching rogue users and accounts sharing illegal content.

Ghost Data conducted an investigation in September 2022 to discover the extent of the child exploitation problem on Twitter.

Starting with a group of known child exploitation accounts, the researchers mapped related toxic accounts by following the social connections among the accounts’ followers, eventually identifying more than 500 accounts responsible for nearly 5,000 tweets related to illegal child abuse activity.
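
For illustration only, here is a minimal Python sketch of that kind of snowball-sampling approach. The get_followers() lookup is a hypothetical stand-in for whatever account-graph data source the researchers used; Ghost Data has not published its actual tooling, so this is a sketch of the described method, not their implementation.

    # Hypothetical sketch of the snowball-sampling method described above.
    # get_followers() is a stand-in for an account-graph lookup (assumption);
    # Ghost Data has not published its tooling.
    from collections import deque

    def snowball_sample(seed_accounts, get_followers, max_depth=2, max_accounts=500):
        """Expand a set of seed accounts by walking follower links breadth-first."""
        discovered = set(seed_accounts)
        queue = deque((account, 0) for account in seed_accounts)

        while queue and len(discovered) < max_accounts:
            account, depth = queue.popleft()
            if depth >= max_depth:
                continue
            for follower in get_followers(account):
                if follower not in discovered:
                    discovered.add(follower)
                    queue.append((follower, depth + 1))

        return discovered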

The researchers noted that these accounts all posted in English and that they did not investigate abuse of Twitter in other languages.

They concluded that further investigation of non-English accounts may reveal even more users sharing child abuse content.

Researchers claim that Twitter is ineffective

The startling finding from the report is that Twitter took action against just over a quarter of the accounts identified as sharing explicit child abuse content during the research period covering the first 20 days of September 2022.

The researchers wrote (PDF):

“We observed that Twitter (sic) suspended less than 30% (27.5%) of users who publicly shared links, materials, and references to child pornography in the first 20 days of September.

To date, more than 400 users are still active after the “cleansing”.

Many of these users have been active for months.”

The researchers concluded that, although they identified many illicit activities and accounts on Twitter, these likely represent only a fraction of the true scope of the problem.

They also noted that Twitter could do a better job of stopping the toxic activity:

“These findings validate a troubling problem already detected by internal staff and exposed by online media: Twitter (sic) cannot accurately detect child sexual exploitation and its executives do almost nothing to address this scourge.

We also found evidence that such lax policies on porn content also encourage users to post non-consensual videos and rape videos, not to mention minors trying to sell their own nudity or sexual content.

… perhaps a modest investment and a dedicated team, even using our own core technologies, would suffice (sic) to easily identify and significantly reduce illicit activities…”

This conclusion by cybersecurity firm Ghost Data appears to contradict Twitter’s statement, reported by Reuters, that it has zero tolerance for this type of activity, particularly given that months have passed since Twitter’s Red Team first identified the problems with toxic content detection.

Reuters also reported that Twitter said it was hiring more staff to “implement solutions.”


Citations

Read the full report by cybersecurity firm Ghost Data (PDF)

Read the Reuters report

Exclusive: Brands blast Twitter for ads next to child pornography accounts

Read The Verge’s report on how Twitter abandoned its planned OnlyFans competitor

How Twitter’s child porn problem ruined its plans for an OnlyFans competitor

Featured image by Shutterstock / Pixel-Shot
