Facebook enlisted the help of a social media analytics company to ferret out fake profiles and found some with computer-generated photos
Facebook has struck a hard blow against fake profiles. The Menlo Park-based social network has decided to crack down on fake profiles that use photos of faces created through artificial intelligence and that, according to social media analytics company Graphika, were used primarily to spread government propaganda messages on the platform.
The social network's announcement came on Tuesday, explaining that the operation included the removal of 155 accounts, 11 pages, and nine groups on Facebook, plus six accounts on Instagram, for "coordinated inauthentic behavior on behalf of a foreign or governmental entity." The content promoted by the accounts focused on events in Southeast Asia and the United States. In the first, larger group of profiles, the most common posts concerned Beijing's interests in the South China Sea and Hong Kong; in the second, smaller group, there were posts about U.S. presidential candidates Pete Buttigieg, Joe Biden, and Donald Trump, in both supportive and critical tones.
Fake profiles on Facebook: the results of the investigation
To better understand the issue and gather crucial information, Facebook called on Graphika to perform an in-depth analysis of the fake profiles before shutting them down for good.
From these investigations, the company, which has ties to the U.S. Department of Defense, extracted data that is in some cases particularly alarming, especially with regard to identity theft.
Fake profiles: where users' photos come from
In some cases, Graphika found that the profile images were entirely authentic but stolen from pictures that real people had posted online, and were easily traceable through reverse image search. In other cases, however, the study revealed that the fictitious personas had been created with Generative Adversarial Networks (GANs), which use neural networks to generate highly realistic human faces that are believable to the human eye.
The company's suspicions about the use of GANs were raised by 12 profile photos with irregularities in their appearance. The images showed distorted backgrounds, or elements such as ears and glasses that were not entirely symmetrical despite being so in real faces. To establish the nature of the photos precisely, the company overlaid the images after reducing their opacity, exposing the telltale alignment of key features across supposedly different people. In just a few steps, it was therefore possible to expose the artificial origin of the photos and the manipulations put in place to confuse the observer.
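The overlay check described above can be sketched as a simple pixel-wise average: because GAN face generators tend to place key features (especially the eyes) at the same coordinates in every output, those features stay sharp when many photos are stacked, while genuinely different photos blur into noise. The function and data format below are an illustrative assumption, not Graphika's actual tooling; grayscale images are represented as plain 2D lists of 0-255 values.

```python
def average_overlay(images):
    """Average a stack of equally sized grayscale images.

    Each image is a 2D list of pixel intensities (0-255). Features that
    sit at the same pixel coordinates in every image (e.g. the eyes of
    GAN-generated faces) survive the averaging and remain visible;
    features at varying positions are smoothed away.
    """
    n = len(images)
    height, width = len(images[0]), len(images[0][0])
    return [
        [sum(img[y][x] for img in images) / n for x in range(width)]
        for y in range(height)
    ]


# Toy example: a pixel that is bright in every image stays bright in
# the average, while pixels that disagree are pulled toward the middle.
stack = [
    [[0, 255], [255, 0]],
    [[255, 255], [255, 255]],
]
blended = average_overlay(stack)
```

In practice, an analyst would align the photos (e.g. by image size) before averaging; if the blended result still shows a crisp pair of eyes, that is a strong hint the source photos share a common generator.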
Fake profiles and Artificial Intelligence: a new trend?
Although careful observation makes it possible to identify fake profile photos and the fake profiles behind them, this is not the first time similar behavior has been reported from groups with unclear intentions. In September 2020, another network linked to the Russian Internet Research Agency (IRA) tried the same ploy to promote the bogus PeaceData site.