
Amazon, Google asked to explain why they were serving ads on sites hosting CSAM

US Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) on Friday sent letters to the CEOs of Amazon and Google asking why their ad businesses fund websites hosting child sexual abuse material (CSAM) and allow government ads to appear on sites with illegal imagery.

“Recent research indicates that Google, as recently as March 2024, has facilitated the placement of advertising on imgbb.com, a website that has been known to host CSAM since at least 2021, according to transparency reports released by the National Center for Missing & Exploited Children (NCMEC),” the letter to Google CEO Sundar Pichai says. “Just as concerning are reports that the United States government’s own advertising has appeared on this website.”

Amazon CEO Andy Jassy received a similar missive, as did ad verification firms DoubleVerify and Integral Ad Science, and industry trade groups Media Rating Council and Trustworthy Accountability Group.

The letters ask why the companies have facilitated the placement of ads on a website known to have hosted CSAM since 2021 and call into question the effectiveness of technology touted by these firms to prevent ads from appearing alongside illegal content.

“DoubleVerify states that its ‘Universal Content Intelligence’ capabilities ‘provide a holistic approach to content analysis and evaluation,’ asserting that ‘this sophisticated tool leverages AI and relies on DoubleVerify robust and proprietary content policy to provide advertisers with accurate content evaluation, broad coverage and brand suitability protection at scale,'” says the letter to Mark Zagorski, CEO of DoubleVerify.

“Yet, DoubleVerify advertiser customers paying for its sophisticated and ‘industry-leading’ technology have had their ads served on a website that hosts content involving heinous crimes against children.”

The Register understands from an industry expert who asked not to be named that brand safety tools tend to be more about appearances than results. These services can be black boxes that fail to provide customers with the data necessary to assess how they function. In some instances, the tools involved may be no more sophisticated than web page searches for unsafe keywords, which can cause problems for news sites that report on difficult subjects and can be bypassed by bad actors through deliberate misspellings.
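The keyword-matching approach the expert describes can be illustrated with a minimal sketch. This is a hypothetical illustration, not any vendor's actual implementation, which remains proprietary; the blocklist and function name are invented for the example:

```python
# Hypothetical sketch of naive keyword-based brand safety filtering.
# Real vendor tools are black boxes; this only illustrates the failure
# modes described above: false positives on news coverage and false
# negatives on deliberate misspellings.

UNSAFE_KEYWORDS = {"violence", "explicit"}  # example blocklist

def page_is_safe(page_text: str) -> bool:
    """Mark a page unsafe if any blocklisted keyword appears verbatim."""
    words = page_text.lower().split()
    return not any(kw in words for kw in UNSAFE_KEYWORDS)

# A news article reporting on a difficult subject gets blocked:
print(page_is_safe("report on gang violence in the city"))  # False

# A deliberate misspelling slips straight past the filter:
print(page_is_safe("expl1cit content here"))  # True
```

Because the check operates only on literal strings, it penalizes legitimate reporting while offering no resistance to trivially obfuscated content, which is the core criticism raised here.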

Millions
The inquiry from lawmakers follows a report from Adalytics that found past evidence of CSAM on free image sharing sites imgbb.com and ibb.co (currently redirects to imgbb.com), which reportedly receive more than 40 million page views per month – more than several popular news sites.

Adalytics says it found the abuse material while conducting research about how US government ads were served to bots and crawlers.

“Adalytics unintentionally and accidentally came across a historical, archived instance where a major advertiser’s digital ads were served to a URLScan.io bot that was crawling and archiving an ibb.co page which appeared to be displaying explicit imagery of a young child,” the report says, adding the biz immediately ceased viewing the archived page and reported the incident to the FBI, US Homeland Security special agents, America’s National Center for Missing and Exploited Children (NCMEC), and the Canadian Centre for Child Protection (C3P).

According to NCMEC, imgbb.com was alerted dozens of times in 2021, 2022, and 2023 about the presence of CSAM on its platform, the report says. The website does not state who owns it, where it is incorporated, or where it is located. The domain is registered through GoDaddy, via Domains By Proxy, and relies on Cloudflare as its name server.

A request for comment to imgbb.com has not been answered.

“In addition to hosting CSAM, imgbb.com and ibb.co appear to host explicit adult content as well as potentially copyright infringing materials,” the report says, and also cites potential animal abuse material.

Imgbb.com appears to support its free image hosting service by displaying online ads from ad tech vendors Amazon, Google, Criteo, Microsoft, Nexxen, Outbrain, Quantcast, TripleLift, and Zeta Global, among others.

Numerous major advertisers are said to have run ads on these websites since 2021, including Acer, Adidas, Adobe, Amazon Prime, Dyson, Google Pixel, Hallmark, Honda, HP, MasterCard, Starbucks, Unilever, and the US Department of Homeland Security, among others.

While Google AdX, Google Ad Manager, and DV 360 have not been showing ads on the image hosting service as of January 2025, other companies continue to do so, Adalytics claims.

The question is why companies place ads on web pages with illegal, explicit, or unacceptable content, given that the selling point of online ads is the ability to audit and measure how, where, and to whom ads get displayed.

One reason appears to be lack of visibility. Adalytics notes that several major brand advertisers whose ads were served on imgbb.com report that their ad tech providers, such as Amazon, fail to provide advertisers with page-level URL reporting that would allow brands to see where their ads are appearing. This is particularly an issue with programmatic advertising, where ads are bought through an exchange linking multiple networks, as opposed to specific networks like Google Ads or Microsoft Ads.

The opaqueness of the ad tech ecosystem is compounded, the report suggests, because ibb.co allows uploaded images to be marked “noindex,” which keeps them from showing up in Google or Bing search results.

Another reason is that brand safety services appear unable to guarantee safety, an issue raised by the Senators’ letters. According to Adalytics, numerous advertisers report that their brand safety vendors had marked 100 percent of the measured ad impressions on imgbb.com and ibb.co as safe – meaning the pages had content suitable for hosting ads. But Adalytics says that independent URLScan.io data shows some of the ads purchased were served with explicit sexual content.

It may be that AI image recognition technology, such as DoubleVerify’s Universal Content Intelligence, isn’t as adept at image identification as the company suggests.

The Register asked DoubleVerify whether it can provide data on false positives – unsafe images labeled safe.

Instead of answering that question, a company spokesperson responded with a link to an online statement that addresses the Adalytics report.

DoubleVerify’s statement, which accuses Adalytics of past inaccuracies, claims: The report only cites a single website, imgbb.com, that has a small advertising footprint (0.000047 percent of DoubleVerify-measured ads); DoubleVerify has blocked the site; and DoubleVerify has policies and processes in place to deal with illegal content.

“In light of this report’s claims, DV is conducting an additional comprehensive review of ad-supported image-hosting sites on the open web that are within our system – even those that may have very small ad impression volumes – and placing them under stricter classification standards,” the company’s statement says. “Additionally, we are defining a mechanism to block anonymous, profile-based image-hosting sites at scale. We will share our findings with customers to help inform their brand safety strategies as they evaluate campaigns, and DV’s content classifications and brand suitability controls.”

A Google spokesperson said, “We have zero tolerance when it comes to content promoting child sexual abuse and exploitation. As this report indicates, we took action on these sites last year. Our teams are constantly monitoring Google’s publisher network for this type of content and we refer information to the appropriate authorities.”

Google pointed to its “strict policies” about content eligible for ad support and said both relevant ad accounts have been terminated. The Chrome giant says it takes this issue very seriously and has invested significantly in both AI-based and human enforcement systems.

And in a statement, an Amazon spokesperson told us it is working to respond to the US lawmakers: “We regret that this occurred and have swiftly taken action to block these websites from showing our ads. We have strict policies in place against serving ads on content of this nature, and we are taking additional steps to help ensure this does not happen in the future. We’ve received the senators’ letter and are working on our response.”

Integral Ad Science did not immediately respond to requests for comment. ®
