May 26, 2022

Meta, formerly known as Facebook, said on Thursday that it will test new content controls that let advertisers manage where their ads appear in Facebook and Instagram feeds. The tools are meant to help businesses keep their ads away from content they consider unsuitable, such as posts related to politics, tragedy, or violence.

The company will begin testing the new content controls in the second half of this year and plans to roll them out officially in early 2023. Meta says the testing phase will focus on English-speaking markets. Next year, Meta plans to extend the controls to ads in Stories, in-stream video, the Instagram Explore page, and more, and to eventually support additional languages.

“At Meta, we create eligibility checks to give advertisers control over the performance of their ads,” reads a Meta blog post announcing the tools. “We previously announced our intent to develop content-based eligibility checks to address advertisers’ concerns about showing ads alongside content that matches their brand preferences. We have worked closely with GARM (Global Alliance for Responsible Media) to develop these controls, which will be integrated with the GARM Suitability Framework.”

Meta also announced a partnership with Zefr, a brand-suitability platform that lets businesses measure, track, and report on the context in which their ads appear on Facebook, so they can verify that their ads are displayed only next to suitable content. Meta and Zefr will begin small-scale testing in the third quarter of this year.

The new tools are Meta’s response to growing demand from advertisers, who have repeatedly asked for greater control over where their online ads are placed to ensure they don’t appear alongside objectionable content.

The company has worked to address these issues in the past.

In November, Meta said it would expand feed controls for advertisers running English-language campaigns by giving them access to topic exclusion controls covering three topics: news and politics, social issues, and crime and tragedy. When an advertiser excludes one of these topics, its ads are not shown to people who have recently engaged with that topic. At the time, Meta acknowledged that the tool could not solve every advertiser’s problem and promised to develop content-based controls in the future.

Facebook’s algorithms are notorious for promoting inflammatory content and dangerous misinformation, and regulatory pressure on Meta to clean up its platforms and make its practices more transparent is increasing. The current news cycle and online advertising landscape make it difficult for brands to avoid placing their ads next to inappropriate content: because most companies buy ads by creating them and submitting them to Meta’s ad auction, they have little control over placement. The new tools should change that.
