Emergency mode sounds drastic. On Monday, Google announced it was using a new machine-learning tool to identify offensive content, which Schindler told Recode was able to find five times more videos that weren't brand-safe than before.
Google has been under fire lately after several ads were seen running next to offensive content in the United Kingdom, including neo-Nazi and jihadist videos, on YouTube and on other websites where the search giant serves ads. Since the boycott of its advertising services began, the company has planned a brand-safety reporting channel that would let YouTube ad placements be monitored by artificial-intelligence tools. "We have limited resources." But for Google, the ads are the important thing: roughly 90 percent of its revenue comes from selling them.
Several news outlets, including the Wall Street Journal and The Times of the United Kingdom, have published reports and evidence of ads appearing on anti-Semitic and racist videos over the past few weeks. Havas Worldwide, one of the largest advertising agencies, announced that its United Kingdom office would halt all media buying from YouTube and Google until the issues can be resolved.
"It has always been a small problem", Schindler said, noting that the only change is that it hasn't been pointed out, until now.
Cakmak estimates the financial hit to Google from the boycott could be as much as $500 million: a significant sum, but a small one for a company that posted $26 billion in sales last quarter.
"We needed to react very, very quickly", Nicklin said. When we watch a video clip we may see ads and we often skip them. And his statements about the boycott showed how convoluted Google's position is. Google has responded by promising greater transparency and saying it will be more aggressive in ensuring brand safety of ad placements.
The Australian federal government has reportedly pulled its advertising from Google's YouTube video platform, joining hundreds of clients worldwide. The offensive content at issue includes language that promotes negative stereotypes about targeted groups or denies "sensitive historical events" such as the Holocaust.
On this view, YouTube should not allow ads to be served unless the content has been approved and correctly classified: as news or documentary, comedy, children's content, and so on.
Some researchers argue digital platforms should rely on humans to make these editorial decisions.
YouTube is under fire once more. "The problem cannot be solved by humans, and it shouldn't be solved by humans."
Nor is the company willing to alter YouTube's fundamental formula. Advertisements are assigned to videos by an automated process, generating revenue for the video's creator. Google lets any user upload videos and sets thresholds for which ones can run ads.
Now Google is on the offensive, trying to wash away the concerns about brand safety. "You can just depress the error rate to the lowest level."