May 25, 2022

The Australian Competition and Consumer Commission (ACCC) has filed a case against Meta, the owner of Facebook, alleging that it allowed fraudulent ads to run on its platforms and has not taken adequate steps to address the issue.

The watchdog said today it is seeking “declarations, injunctions, penalties, costs and other orders” against the social media giant for “false, misleading or deceptive conduct” under local consumer protection laws, over its publication of scam ads featuring well-known Australian public figures.

Specifically, it alleges that Meta’s conduct breaches the Australian Consumer Law (ACL) or the Australian Securities and Investments Commission (ASIC) Act.

The regulator’s contention extends to the allegation that Meta “aided and abetted or was knowingly concerned in false or misleading conduct and representations” by the advertisers (i.e. those who used its platform to lure victims into their scams).

Meta denies the allegations, saying it already uses technology to detect and block fraudulent ads.

In a statement on the ACCC’s action, attributed to a company spokesperson, the tech giant said:

“We don’t want ads seeking to scam people out of money or mislead people on Facebook — they violate our policies and are not good for our community. We use technology to detect and block scam ads, and work to stay ahead of scammers’ attempts to evade our detection systems. We have cooperated with the ACCC’s investigation into this matter to date. We will review the ACCC’s recent filing and intend to defend the proceedings. We’re unable to comment on the detail of the case as it is before the Federal Court.”

The ACCC says the scam ads it is taking action over promoted cryptocurrency investments or money-making schemes via Meta’s platforms and featured people likely to be well known to Australians — such as businessman Dick Smith, broadcaster David Koch and former Premier of New South Wales Mike Baird — who appeared in the ads endorsing the schemes when, in fact, these public figures had never approved or endorsed them.

“The ads contained links to a fake media article that included quotes attributed to the public figure endorsing a cryptocurrency or money-making scheme. Users were then invited to sign up and were subsequently contacted by scammers who used high-pressure tactics, such as repeated phone calls, to convince users to deposit funds into the fake schemes,” it notes.

The ACCC also notes that celebrity cryptocurrency scam ads continue to be posted on Facebook in Australia, even as public figures elsewhere around the world have complained that their names and images have been used in such ads without their permission.

A similar complaint was made against Facebook in the UK back in 2018, when local consumer rights champion Martin Lewis sued the platform for defamation over a spate of scam ads that used his picture and name without his permission — ads which, he said, were being used to mislead and deceive British consumers.

Lewis dropped that lawsuit against Facebook in 2019 after the company agreed to make some changes to its platform locally, including adding a button to report scam ads. (The company later also created a dedicated reporting form for misleading and fraudulent ads, which it has made available in Australia, the Netherlands and New Zealand.)

Despite settling the suit, Lewis has not stopped campaigning against fraudulent advertising — recently (and successfully) pressing for the UK’s incoming Online Safety Bill, which was introduced to the country’s Parliament yesterday, to be expanded to cover fraudulent ads. The incoming regime includes penalties of up to 10% of global annual turnover to incentivize tech giants to comply with the rules.

Meanwhile, Australia passed its own online safety law last year, and that Act came into force in January. However, its online safety rules are focused on other types of problematic content (e.g. CSAM, terrorism, cyberbullying, etc.).

The country is therefore relying on existing consumer protection and financial investment regulations to hold platforms to account over online scam ads.

It remains to be seen whether these laws are specific enough to be successfully applied to change Meta’s behavior around ads.

Adtech giants like Meta make money by profiling people in order to serve them targeted ads. Any restrictions on how it can run its advertising business — such as a requirement that all ads be manually reviewed before publication, and/or limits on its ability to target ads at users’ eyeballs — would substantially increase its costs and threaten its ability to generate so much revenue.

It is therefore notable that the ACCC’s case contemplates such measures — arguing, for example, that Meta’s targeting tools exacerbate the scam ads problem by enabling scammers to target the people “most likely to click on the link in an ad.”

This looks like the most interesting element of the proceedings — if the ACCC does end up digging into how scammers may be using Facebook’s ad targeting tools to increase the efficacy of their scams.

Major moves are already afoot in Europe to put legal limits on platforms’ ability to run tracking ads, and Meta has warned its investors of “regulatory headwinds” affecting its advertising business.

“The essence of our case is that Meta is responsible for these ads that it publishes on its platform,” ACCC chair Rod Sims wrote in a statement. “A key part of Meta’s business is to enable advertisers to target users who are most likely to click on the link in an ad and visit the ad’s landing page, using Facebook algorithms. Those visits to landing pages from ads generate substantial revenue for Facebook.

“We allege that Meta’s technology enabled these ads to be targeted to users most likely to engage with them, and that Meta assured its users it would detect and prevent spam and promote safety on Facebook — yet it failed to prevent the publication of other similar celebrity-endorsement cryptocurrency scam ads on its pages or to warn users.”

“Meta should have been doing more to detect and then remove false or misleading ads on Facebook, to prevent consumers from falling victim to ruthless scammers,” he added.

Sims also pointed to the “untold losses to consumers” — in one instance, the ACCC said, a consumer lost $650,000 to a scam advertised on Facebook as an investment opportunity — as well as the reputational damage to the public figures the scam ads falsely associated them with, reiterating that Meta failed to take “sufficient steps” to stop fake ads featuring public figures, even after those public figures had reported that their names and images were being used in celebrity cryptocurrency scam ads.

The notion that a technology platform which — a whole decade ago! — managed to deploy facial recognition to automatically tag users in uploaded photos could not successfully apply similar technology to automatically flag for review any ads featuring specific names and faces — after, or even before, a public figure reported a problem — looks highly suspect.

And while Meta blames “cloaking” — a technique scammers use to circumvent its review processes by showing different content to Facebook users than to Facebook’s crawlers or review tools — that is exactly the kind of technical problem you might imagine a tech giant with Meta’s vast engineering resources would be able to crack.

It certainly doesn’t look good that, nearly four years after the Lewis scam ads scandal, fraudsters still appear able to successfully run the same playbook across Facebook’s platforms around the world. If that counts as success, one has to wonder what failure would look like for Meta.

It’s not clear exactly how many scam ads Meta does “successfully” remove.

In its self-styled Community Standards Enforcement Report — in the section labelled “spam” (note: not “scam”; and “spam” there functions as a catch-all (and self-defined) term, which is problematic because it doesn’t refer to anything specific — in particular, it concerns actioning content generally, not deception in ads) — Meta writes that it “actioned” 1.2 billion pieces of “spam content” over the three months of Q4.

That figure is largely meaningless, because Meta gets to define what “spam” is for the purposes of this “transparency” reporting, as the company itself acknowledges in the report — hence scam “content” (not even discrete pieces such as an ad, a photo or a message) gets bundled into “spam.” Meta, of course, also gets to define what counts as spam in this context — with scam ads apparently folded into that even more obscure category.

Moreover, nowhere in the report does Meta even state that the 1.2 billion figure refers to 1.2 billion pieces of spam. (In any case, as noted above, a “piece” of spam — in the Meta universe — can actually refer to multiple items of content aggregated together for public reporting purposes, counting, say, several photos and text posts as one piece, as the report also indicates — which essentially means its transparency theater can be used to obscure what is actually happening on its platform.)

And one more thing: “actioned” — another self-serving Meta term — does not mean the content (in this case “spam”) was removed. That is because it bundles in a number of other possible responses, such as covering content with a warning or disabling accounts.

So — tl;dr — as ever with big adtech, it’s impossible to rely on a platform’s self-reported “actions” on the content it amplifies and monetizes, absent clear legal requirements defining exactly which data points it must report so that regulators can exercise effective oversight and achieve genuine accountability.
