Can social media ever be ethical?

Designed by profit-oriented companies with little regard for what you need, social media is reshaping what it means to be human. Can that be turned around to our benefit?

Social media apps are not designed in the user's best interests, but ethical design principles could change that. (Getty Images)

Algorithms: you hear about them, you know they’re the power behind your screens, and you may even be reading this because of one.

In the wake of Facebook’s decision to merge WhatsApp user data with its own, many users of the popular chat app are migrating to alternatives like Signal and Telegram to avoid the comprehensive collection of private data Facebook relies on for profit.

This is nothing new, however. An entire market already exists for packaging ‘anonymised’ data about you from multiple sources, sold to the highest bidder to place better-targeted ads and to build better models that encourage your use of social media.

A crucial question now arises: can users demand more ethical social media? One where they are not treated as resources by competing social media companies all fighting for their attention. Is it possible for the end user to dictate how they want social media to affect them?

It’s a complicated question, and one the world has only recently had to face. It wasn’t very long ago that scientists and programmers developed self-improving algorithms, leading to the tremendous leaps in technology that make modelling and predicting your digital behaviour possible.

But the danger is real. Algorithms that show you what you want to see and get better at predicting what you want to consume eventually create echo chambers that don’t let in dissenting opinions. 

In many respects, the all-time high of ideological partisanship and political discord we see in our societies today is a direct product of social media. The divide only widens as social media allows each side to occupy entirely separate online spaces, feeding the belief that the other side is completely unrelatable.

The dangers are many, and algorithms have now, however unintentionally, been responsible for influencing politics and elections. Once upon a time, social media showed you feeds in reverse chronological order. Now, relevancy is the name of the game.

As such, the more you search for content, the more the algorithm learns about you and the better it gets at showing you what it thinks will engage you. But in a world without social media, what are the odds you’d come across someone making alarming radical statements about the need to change society?

In short, in their attempt to predict what we want, algorithms can influence the very way we see the world, selling us what we will engage with rather than what we need.
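
To make this concrete, here is a minimal sketch, in Python, of the difference between a chronological feed and an engagement-ranked one. The posts, topics and affinity scores are hypothetical; real platforms use vastly more signals and learned models, but the reinforcing loop works the same way.

```python
# A minimal sketch of the shift from chronological to "relevance" feeds.
# The posts, topics and affinity scores below are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: int  # seconds since epoch

def chronological_feed(posts):
    """The old model: newest first, the same for every user."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts, user_topic_affinity):
    """The new model: rank by predicted engagement for this user.

    user_topic_affinity maps a topic to how often this user has
    engaged with it before, so past behaviour shapes what is shown
    next, and the loop reinforces itself.
    """
    return sorted(posts,
                  key=lambda p: user_topic_affinity.get(p.topic, 0.0),
                  reverse=True)

posts = [
    Post("alice", "gardening", 100),
    Post("bob", "outrage_politics", 50),
    Post("carol", "sports", 75),
]
# A user who has clicked on political content before sees more of it,
# regardless of recency.
affinity = {"outrage_politics": 0.9, "sports": 0.4, "gardening": 0.1}
for post in engagement_feed(posts, affinity):
    print(post.author, post.topic)
```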

Hidden influence

It’s nothing new that social media algorithms can be controversial. Algorithms can influence us even when we’re not aware of it. The New York Times reports that YouTube’s recommendation algorithms can actually drive viewers to increasingly extreme content.

That’s not something you can change either. After years of searches, your ‘anonymised’ digital footprint is pretty comprehensive: search terms, purchases, likes, shares, reshares, and even the amount of time you spend looking at something without engaging with it.

In other words, your heart is on your sleeve, and that makes you incredibly vulnerable to companies at the forefront of psychology and big data, like Facebook or Cambridge Analytica. Enter the gnarly world of the A/B test.

With millions of users, Facebook and other social media companies are able to conduct localised tests, serving different versions of their apps to different targeted segments. The end game? To see if the changes they make shift behaviour. And they do.

For instance, an ad targeting a specific voter demographic could rapidly run multiple split tests until it found a version that mobilised an entire city to vote, or not to vote.
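
As an illustration, here is a minimal sketch of how a split test works. The experiment name and engagement rates are made up; real platforms run these at enormous scale with rigorous statistical analysis.

```python
# A minimal sketch of an A/B (split) test with invented numbers.
import hashlib
import random

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user so they always see the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Simulate an experiment where variant B's design nudges engagement up.
random.seed(0)
engagement_rate = {"A": 0.10, "B": 0.12}  # hypothetical underlying rates
clicks = {"A": 0, "B": 0}
shown = {"A": 0, "B": 0}

for i in range(100_000):
    variant = assign_variant(f"user{i}", "new_notification_style")
    shown[variant] += 1
    if random.random() < engagement_rate[variant]:
        clicks[variant] += 1

for v in ("A", "B"):
    print(f"Variant {v}: {clicks[v] / shown[v]:.3%} engagement")
# A small per-user difference, repeated across millions of users,
# is how design changes translate into shifts in behaviour.
```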

Blank page

Being able to start from a blank page would make all the difference, but that’s not possible in today’s online ecosystem.

More critically, if social media apps are deliberately designed to promote addiction to their platforms, what chance do our brains stand?

For the most part, these companies are largely unregulated and devise their own rules and standards. For companies like Facebook or Twitter, harvesting user data to sell ads and sharpen behavioural predictions isn’t a personal affront. It’s not personal, it’s just business.

“Facebook is an ads-supported platform, which means that selling ads allows us to offer everyone else the ability to connect for free,” Facebook says.

That’s sort of like saying feed is free for livestock set to be harvested, though. Facebook defends itself by saying that the data it collects is not “personally identifiable”. While that’s true, in large part due to privacy regulations, it’s still enough for data buyers and advertisers to target you and sell you products with ease, or for social media companies to influence your consumption choices, name or no name.

Ethical design

“Ethical” platform design is often touted as the solution to this modern mess. That means designing technology in a way that brings out the best in humans, and promotes better online behavior. 

For instance, this might mean technology that doesn’t test dozens of notification sounds, styles and themes to maximise the addictive feedback loops in your brain. It could mean technology that intentionally uses its algorithms to make you procrastinate less, engage more meaningfully, and feel better about yourself.

That’s all easier said than done. As the saying goes, the road to hell is paved with good intentions. Twitter’s blue verification tick is a good example of the disconnect between design intent and user interpretation. For that matter, social media like counts and follower counts are another example of something seemingly innocuous that has quickly spiralled out of control, redefining social relations entirely.

On Twitter, the tick has come to signify Twitter’s endorsement of a user. A VIP status symbol, if you will. But Twitter says it’s only intended to authenticate and protect the voices of people vulnerable to identity theft. 

In the confusion, Twitter users have accused Twitter of endorsing white supremacists who spread hate speech on its platform.

If Twitter’s design were ethical in nature, perhaps it wouldn’t have chosen a check mark, a symbol that usually signifies correctness or approval, to denote authenticity.

But can ethical design make a difference? So far, ethical design has been limited to technology that prioritises the end user: think of digital apps that predict what you will buy at a local store based on your prior purchases.

More and more, designers and technology companies are beginning to recognise the ethical responsibility they bear. But balancing profit and human interest is a difficult challenge for companies whose ways of working are already rooted in monetisation strategies set in stone.

There’s still a lot they can do. 

Ethical design takes responsibility for its impact on a user’s life. For instance, how can a designer create technology that gives the user positive emotional experiences, taking into account how people connect with technology through fear, pleasure, trust and anxiety?

That shapes how the end product looks, feels, is used, and how you feel after using it. 

Another form of ethical design is simplifying decision-making and using algorithms to maximise what’s best for the user instead of what’s most profitable for the company. An ethical algorithm would suggest what you need over what benefits the company but harms you.
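
In code, the difference can be as simple as changing what a ranking function optimises. The following Python sketch uses invented scores and weights, not any platform’s real ranking function, to show the idea.

```python
# A minimal sketch of re-weighting a recommendation objective toward
# user wellbeing. All scores and weights below are hypothetical.
def profit_first_score(item):
    # The status quo: rank purely by expected revenue.
    return item["ad_revenue"]

def user_first_score(item, wellbeing_weight=0.8):
    # Blend predicted usefulness to the user with revenue, and penalise
    # content flagged as harmful or compulsive to consume.
    return (wellbeing_weight * item["predicted_usefulness"]
            + (1 - wellbeing_weight) * item["ad_revenue"]
            - item["harm_estimate"])

items = [
    {"name": "clickbait", "ad_revenue": 0.9, "predicted_usefulness": 0.1, "harm_estimate": 0.5},
    {"name": "tutorial", "ad_revenue": 0.2, "predicted_usefulness": 0.9, "harm_estimate": 0.0},
]

print(max(items, key=profit_first_score)["name"])  # clickbait
print(max(items, key=user_first_score)["name"])    # tutorial
```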

From there, the options are nearly limitless. Is it possible to develop technology that actively reduces inequality, supports democratic values, is reliable and joyful to use, useful and effective? If a teenager is using social media for too long, or oversharing, would the technology suggest they exercise or meditate? Do apps recommend meaningful engagement, or virtual substitutes?

But what can you do right now?

Aside from lobbying for more ethical design in technology, you can become aware of how much of your data is given away on a daily basis. One way is to change your social media privacy settings to limit how much of your data networks can use. That means opening each social media app and going into its privacy settings, where you can restrict which platforms the app can share your data with.

You can also use third-party apps on your internet browser to minimise your digital footprint, prevent targeted ads, and keep data collection companies from harvesting your data.

But even if you restrict your data on a platform like Facebook, content you like, look at without clicking, or even read can still be collected. The alternative is to spend less time on social media. 

Some mobile devices have ‘screen time’ tools that tell you when you’ve spent more than your allotted time on an app, but they’re not solutions in themselves.

In the long run, limiting how personal data is used and sold will require legal measures at scale. While new laws won’t fix everything, they can still bring more accountability and change. This means better data privacy protection laws, like the General Data Protection Regulation that came into force in Europe in 2018.

Until that happens, we’re going to continue to see the effects social media has on the world, as it slowly redefines our idea of what is normal, acceptable and possible, for better or for worse.
