
Faking an honest woman: Why Russia, China & Big Tech all use faux females to get clicks


Quick Read

  • When disinformation researcher Wen-Ping Liu studied China’s use of fake social media accounts to influence Taiwan’s election, he found the most successful profiles were female.
  • Fake profiles claiming to be women got more engagement, views, and influence than those posing as men.
  • Gender stereotypes, such as perceiving women as warmer and less threatening, make fake female profiles more appealing.
  • AI chatbots and fake social media profiles often use female traits to seem more human and engaging.
  • Marketing professor Sylvie Borau’s research found that people prefer “female” bots and see them as more human than “male” bots.
  • Scarlett Johansson declined OpenAI CEO Sam Altman’s request to use her voice for ChatGPT, and threatened to sue when the company released an “eerily similar” voice.
  • Female profiles, especially those with attractive pictures, receive more views and engagement online.
  • Borau’s research also shows that “female” chatbots face more sexual harassment and threats than “male” bots.
  • An analysis by Cyabra found that female social media profiles receive over three times more views than male profiles, with younger female profiles getting the most views.
  • Nations like China and Russia use faux female profiles to spread propaganda and disinformation.
  • Researchers at NewsGuard found hundreds of fake accounts criticizing President Joe Biden, many posing as young women.
  • A UN report suggested that many fake accounts and chatbots are female because they are created by men, leading to embedded sexist stereotypes.
  • The report emphasized the need for diversity in programming and AI development to reduce sexist stereotypes in tech products.

The Associated Press has the story:


Newslooks- WASHINGTON (AP) —

When disinformation researcher Wen-Ping Liu looked into China’s efforts to influence Taiwan’s recent election using fake social media accounts, something unusual stood out about the most successful profiles.

They were female, or at least that’s what they appeared to be. Fake profiles that claimed to be women got more engagement, more eyeballs and more influence than supposedly male accounts.

“Pretending to be a female is the easiest way to get credibility,” said Liu, an investigator with Taiwan’s Ministry of Justice.

Whether it’s Chinese or Russian propaganda agencies, online scammers or AI chatbots, it pays to be female — proving that while technology may grow more and more sophisticated, the human brain remains surprisingly easy to hack thanks in part to age-old gender stereotypes that have migrated from the real world to the virtual.

People have long assigned human characteristics like gender to inanimate objects — ships are one example — so it makes sense that human-like traits would make fake social media profiles or chatbots more appealing. However, questions about how these technologies can reflect and reinforce gender stereotypes are getting attention as more voice assistants and AI-enabled chatbots enter the market, further blurring the lines between man (and woman) and machine.

“You want to inject some emotion and warmth and a very easy way to do that is to pick a woman’s face and voice,” said Sylvie Borau, a marketing professor and online researcher in Toulouse, France, whose work has found that internet users prefer “female” bots and see them as more human than “male” versions.

People tend to see women as warmer, less threatening and more agreeable than men, Borau told The Associated Press. Men, meanwhile, are often perceived to be more competent, though also more likely to be threatening or hostile. Because of this, many people may be, consciously or unconsciously, more willing to engage with a fake account that poses as female.

When OpenAI CEO Sam Altman was searching for a new voice for the ChatGPT AI program, he approached Scarlett Johansson, who said Altman told her that users would find her voice — which served as the eponymous voice assistant in the movie “Her” — “comforting.” Johansson declined Altman’s request and threatened to sue when the company went with what she called an “eerily similar” voice. OpenAI put the new voice on hold.

Feminine profile pictures, particularly ones showing women with flawless skin, lush lips and wide eyes in revealing outfits, can be another online lure for many men.

Users also treat bots differently based on their perceived sex: Borau’s research has found that “female” chatbots are far more likely to receive sexual harassment and threats than “male” bots.

Female social media profiles receive on average more than three times the views compared to those of males, according to an analysis of more than 40,000 profiles conducted for the AP by Cyabra, an Israeli tech firm that specializes in bot detection. Female profiles that claim to be younger get the most views, Cyabra found.

“Creating a fake account and presenting it as a woman will help the account gain more reach compared to presenting it as a male,” according to Cyabra’s report.

The online influence campaigns mounted by nations like China and Russia have long used faux females to spread propaganda and disinformation. These campaigns often exploit people’s views of women. Some appear as wise, nurturing grandmothers dispensing homespun wisdom, while others mimic young, conventionally attractive women eager to talk politics with older men.

Last month, researchers at the firm NewsGuard found hundreds of fake accounts — some boasting AI-generated profile pictures — being used to criticize President Joe Biden. The campaign emerged after some Trump supporters began posting a personal photo with the announcement that they “will not be voting for Joe Biden.”

While many of the posts were authentic, more than 700 came from fake accounts. Most of the profiles claimed to be young women living in states like Illinois or Florida; one was named PatriotGal480. But many of the accounts used nearly identical language and had profile photos that were AI-generated or stolen from other users. And while the researchers couldn’t say for sure who was operating the fake accounts, they found dozens with links to nations including Russia and China.

X removed the accounts after NewsGuard contacted the platform.

A report from the U.N. suggested there’s an even more obvious reason why so many fake accounts and chatbots are female: they were created by men. The report, entitled “Are Robots Sexist?”, looked at gender disparities in tech industries and concluded that greater diversity in programming and AI development could lead to fewer sexist stereotypes embedded in their products.

For programmers eager to make their chatbots as human as possible, this creates a dilemma, Borau said: if they select a female persona, are they encouraging sexist views about real-life women?

“It’s a vicious cycle,” Borau said. “Humanizing AI might dehumanize women.”
