The LinkedIn controversy over AI-generated accounts, explained

Technology best known for spreading misinformation has now crept into the corporate world as a tool to ramp up sales, a new investigation finds.

People have just a 50% chance of guessing correctly whether a profile picture was created by a computer, according to a study. (AFP)

When Renee DiResta, a Stanford Internet Observatory researcher, received a software sales pitch on LinkedIn, she didn’t know that it would lead her down a rabbit hole of more than 10,000 fake corporate accounts on the platform.

With her knowledge of information systems and how narratives spread, DiResta was quick to notice that something was not quite right: the profile picture of the sender, Keenan Ramsey, looked off. Her eyes were perfectly centred in the frame, she was wearing an earring in only one ear, and strands of her hair seemed to blur into the background.
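Eye position is one of the better-documented tells: portraits from StyleGAN-family generators tend to place the eyes at the same fixed spot in every image. As a toy illustration only, and not a method the researchers describe, a script along the following lines could flag portraits whose eyes sit suspiciously close to the centre of the frame. The function name, the OpenCV eye detector and the tolerance are all illustrative choices.

```python
# Toy heuristic (illustrative only): GAN-generated portraits often have
# the eyes centred at a fixed position, so we flag images where the
# midpoint between the two detected eyes lies almost exactly at the
# horizontal centre of the frame.
import cv2

def eyes_near_centre(path: str, tolerance: float = 0.05) -> bool:
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return False  # not enough detections to judge
    # Horizontal midpoint between the first two detected eyes,
    # expressed as a fraction of the image width.
    midpoints = [x + w / 2 for (x, y, w, h) in eyes[:2]]
    centre = sum(midpoints) / 2 / image.shape[1]
    return abs(centre - 0.5) < tolerance
```

A real detector would combine facial landmarks, texture and frequency-domain artefacts rather than lean on a single cue, but the sketch captures why DiResta’s observation mattered: genuine photos rarely centre the eyes this precisely.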

The researcher, along with her colleague Josh Goldstein, began digging into Ramsey’s profile, only to find that she was not a real person, and that thousands of other accounts on the platform, whose profile photos appeared to have been generated by artificial intelligence (AI), did not exist in real life either.

Who created the profiles?

An investigation by NPR, the public radio network of the United States, found that deploying such AI-generated profiles is a tactic now used by companies on LinkedIn to ramp up their sales.

When DiResta responded to the AI-generated salesperson’s message, she was eventually contacted by a real employee who continued the conversation.

NPR says the AI-generated profiles allow companies to reach more potential customers without hitting LinkedIn’s limit on messages. They also spare companies the need to hire more sales staff to reach customers.

The investigation identified more than 70 businesses that used fake profiles. Several of the companies said they had hired third-party marketers to help with sales, but that they had not authorised any use of AI-created profile photos and were surprised by the findings.

While the investigation could not determine who authorised the use of fake profiles to message users on the platform, nor did it find any illegal activity, it concluded that companies’ reliance on fake profiles illustrates how technology used to spread misinformation and propaganda has now “made its way to the corporate world.”

“It's not a story of mis-[information] or dis-[information], but rather the intersection of a fairly mundane business use case w/AI technology, and the resulting questions of ethics & expectations,” DiResta wrote in a Twitter thread reacting to the investigation.

“What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?” she asked. 

The researchers also notified LinkedIn of their findings. The company said that, following its own investigation, it had removed the fake accounts for violating its policies.

"Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defences to better identify fake profiles and remove them from our community, as we have in this case," LinkedIn spokesperson Leonna Spilman said in a statement. 

"At the end of the day, it's all about making sure our members can connect with real people, and we're focused on ensuring they have a safe environment to do just that."

Trustworthy faces

Fake profiles, on LinkedIn or elsewhere online, are not easy to detect. The investigation says the fake salesperson profiles were likely created with a “generative adversarial network, or GAN”: a machine-learning technique in which two neural networks are trained against each other, with a generator producing images and a discriminator trying to tell them apart from photographs of real people. Since the technique was introduced in 2014, GANs trained on large datasets of real faces gathered online have produced increasingly realistic synthetic images.
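To make that mechanism concrete, below is a minimal sketch of the adversarial training loop in Python with PyTorch. The network sizes, the random stand-in data and the tiny training loop are illustrative placeholders, not the system behind the LinkedIn images, but the core idea is the same: the generator improves by learning to fool the discriminator.

```python
# Minimal GAN sketch (illustrative): a generator maps random noise to a
# fake "image" while a discriminator learns to separate real images from
# fakes; each network trains against the other.
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 784  # placeholder sizes (e.g. a flattened 28x28 image)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit: real vs fake
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator call its fakes real.
    fakes = generator(torch.randn(batch, NOISE_DIM))
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Stand-in for a dataset of real face photos: random tensors in [-1, 1].
for _ in range(3):
    train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

In a production face generator such as StyleGAN, the same tug-of-war runs over millions of real photographs, which is how the resulting fakes come to share subtle statistical tells like the fixed eye position that tipped off DiResta.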

"If you ask the average person on the internet, 'Is this a real person or synthetically generated?' they are essentially at chance, (relying on luck)" said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored a study with Sophie Nightingale of Lancaster University. 

The same study found that, on average, people rated AI-generated faces as slightly more trustworthy than photos of real faces.

Some tools may help regular internet users detect such AI-generated content. One of them is a Google Chrome extension from V7 Labs that flags profile pictures it judges to be computer-generated.

However, many people are unlikely to even suspect that the profiles they come across may be fake.

Farid said he finds the proliferation of AI-generated content worrying, not just still images but also video and audio, and warned that it could foreshadow a new era of online deception.
