In today’s era of news, fake news, and social media propaganda, one subject is on everyone’s lips: algorithms. We spoke to ESCP Professor Lorena Blasco-Arcas and digital marketing expert David Moreno to shed some light on how algorithms really work.
Lorena Blasco-Arcas, Professor at ESCP
David Moreno, Head of Growth (B2C) Iberia, KitchenAid SDA
Hello David, hello Lorena! First and foremost, can you provide some context on how algorithms work in the media landscape?
Lorena Blasco-Arcas: Algorithms are generally used to categorise and sort content, and thus personalise the user experience and content consumption. By noting and retaining specific topics and keywords based on user behaviour, the platform is able to show users more ‘relevant’ content adapted to their interests and preferences.
David Moreno: For example, in the case of social platforms, algorithms might determine what content the user should be exposed to, when to display it and in which order it should be shown, based on user profiling and data management. But of course, since algorithms are developed by humans, they can suffer from bias and design flaws.
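The kind of interest-based ranking David describes can be sketched in a few lines of Python. This is purely an illustrative toy, not any platform's real system: the topic tags, affinity weights and scoring rule are all assumptions made for the example.

```python
# Toy sketch of interest-based feed ranking: score each post by how well its
# topics match the user's learned interest profile, then sort the feed.
# All names and weights here are illustrative assumptions.

def rank_feed(posts, user_interests):
    """Order posts by overlap between their topics and the user's interests."""
    def score(post):
        # Sum the user's affinity for each topic the post is tagged with.
        return sum(user_interests.get(topic, 0.0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

# Interest profile inferred from past behaviour (topic -> affinity weight).
user_interests = {"politics": 0.9, "sport": 0.2, "cooking": 0.6}

posts = [
    {"id": 1, "topics": ["sport"]},
    {"id": 2, "topics": ["politics", "cooking"]},
    {"id": 3, "topics": ["cooking"]},
]

feed = rank_feed(posts, user_interests)
print([p["id"] for p in feed])  # highest-affinity post first: [2, 3, 1]
```

Real systems add many more signals (recency, social graph, predicted engagement), but the principle is the same: what the user has done before determines what they see next.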
People often discuss the prevalence of algorithms within the world of social media. Where else are algorithms used most effectively?
David: Aside from social media and search engines, many media outlets also use algorithms to serve their news. The BBC is a good example – it uses algorithms to recommend content from its audio and video streaming services via its website and app.
With that said, it’s clear that social media is acting as the current digital battleground. Year on year, more and more people are using social media platforms as a source for news, with engagement levels continuing to rise across these platforms.
“I believe that social media has a ‘human connection component’ that search engines lack, which makes news shareable and viral amongst users and hence, more effective.” – David Moreno
Concretely, how have algorithms changed our media consumption habits?
David: The old question of media concentration and media neutrality has not disappeared with the rise of the internet. In fact, it’s become bigger than ever. As we head into the third decade of the 21st century, the digital landscape is controlled by an ever-shrinking number of companies.
Meta and Alphabet are the dominant players, controlling most of the advertising market and sidelining news companies. It’s certainly true that our relationship with media has changed as we’ve moved online, but the implementation of algorithms to drive profit, without ethical and legal considerations, has drastically changed today’s news landscape.
Today, we know that algorithms play a major role in increasing the time users spend on these platforms. TikTok is a great example: users receive content based on their previous consumption, which increases stickiness and time on the platform.
Algorithms are often held up as a negative example of today’s changing media landscape. Can they have a positive impact?
Lorena: This is an interesting question. I’d say that algorithms are influencing the media industry in terms of content distribution and optimisation, as they’re often a very cost-effective solution.
However, it’s true that algorithmic personalisation may reduce exposure to diverse content or different viewpoints. This in turn can create echo chambers, polarisation of ideas and, in the worst cases, massive circulation of fake news.
David: In more traditional media landscapes, algorithms can certainly facilitate the impact of relevant and enlightening news when used properly and according to clear editorial guidelines.
For example, the BBC implements its algorithms with the help of cross-functional teams that include not only technical experts but journalists and editorial teams. In short, algorithms are only as good – or as bad – as the purpose they’re created for.
“Users need to remain critical and understand how using media that automates access to specific content may have a negative impact on themselves and on others.” – Lorena Blasco-Arcas
Can you share some comments about the ‘echo chamber’ effect?
Lorena: Online echo chambers are environments where a person only encounters information or opinions that reflect or reinforce their own – and of course, social media has made it easier than ever for users to find those spaces. More worrying, in my view, is the fact that algorithms can create ‘filter bubbles’ that prevent users from ever discovering new ideas or content, because of the automated selection of what is shown on some websites.
While personalisation of content from the user perspective is (in theory) positive, the fact that, on certain platforms, automatic personalisation can potentially stop users from accessing a wide variety of information is a potential risk we should be aware of. Users need to remain critical and understand how using media that automates access to specific content may have a negative impact on themselves and on others.
David: As tech companies become the biggest news distributors, editorial decisions traditionally carried out by human editors are increasingly being handed over to algorithms. This means that news is shown or excluded based on metrics such as engagement or clicks, regardless of quality or reliability.
The implementation of algorithms that were originally created to drive revenue through engagement provokes these echo chambers. Instead of increasing the relevance and quality of the content provided to users, they facilitate the virality of questionable content and disinformation.
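The feedback loop David and Lorena describe can be made concrete with a small simulation. This is a deliberately simplified toy model, not any platform's actual recommender: the topics, click probabilities and weighting scheme are all assumptions chosen to illustrate how engagement-driven recommendation narrows a feed over time.

```python
import random

# Toy "filter bubble" simulation: the recommender shows topics in proportion
# to past clicks, and the user mostly clicks one preferred topic, so that
# topic's share of the feed grows round after round. Illustrative only.

random.seed(0)  # fixed seed so the run is reproducible
topics = ["politics", "sport", "cooking", "science"]
clicks = {t: 1 for t in topics}   # start from a uniform click history
user_preference = "politics"      # the topic this user almost always clicks

for _ in range(200):
    # Recommend one topic, weighted by past engagement.
    shown = random.choices(topics, weights=[clicks[t] for t in topics])[0]
    # The user clicks their preferred topic every time it appears,
    # and anything else only 10% of the time; each click feeds back
    # into the weights used for the next recommendation.
    if shown == user_preference or random.random() < 0.1:
        clicks[shown] += 1

share = clicks["politics"] / sum(clicks.values())
print(f"politics share of clicks: {share:.0%}")  # well above the initial 25%
```

The rich-get-richer dynamic is the point: even a mild initial preference, amplified by engagement-weighted recommendation, ends up dominating what the user sees.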
Lorena, you cited ‘fake news’ earlier as one of the main risks of poorly implemented algorithms. During the 2020 US Presidential election, Facebook was accused of filtering news based on users’ political affiliations. Do you see this as a privacy concern?
Lorena: Meta has been in the spotlight regarding its role in propagating fake news for quite a while now. For example, back in 2016, Facebook tried to create a feature to highlight Trending Topics from around the site in people’s feeds.
First, it had a team of humans edit the feature, but controversy erupted when some accused the platform of being biased against conservatives. Facebook then turned the job over to algorithms only to find that they could not discern real news from fake news.
What I find very concerning here is to what extent we can trust corporations, with clear profit interests, to keep a neutral and balanced approach when dealing with such hugely important issues.
David: I quite agree. When analysing the available data about social media usage and market size, we can see that more people are spending more time than ever consuming this type of content.
Today, companies such as Meta and Alphabet have the power to decide what news content reaches their users and, logically, content generating revenue is prioritised over quality news content.
These companies should aim to be more transparent and explain clearly to their users how they use their data, how their algorithms work and how they monetise this information. Until then, it should remain a cause for concern for citizens and governments.
To conclude, how do you see algorithms affecting the media landscape in years to come?
David: Whether as users or as companies providing these services, there’s a lot that we don’t know. It’s important to think about the future, but big tech companies and regulators should also be acting now to regulate and direct this new media landscape.
That means acting with transparency and taking responsibility. Otherwise, algorithms will continue to negatively impact our global society, influencing political agendas and polarising us further.
Lorena: Personally, I expect news apps to continue taking cues from social media, building user profiles and automating personalisation on news websites in order to attract readers and keep them on their platforms.
Many news websites are already implementing these kinds of options to stay relevant to their users and personalise content on their websites and apps.
Considering how the media industry and technology are evolving, we can only expect algorithms to become more pervasive in the future. With that in mind, what I think is crucial is how businesses and governments develop and design policies with a human-centric perspective to ensure positive outcomes and value creation for all actors.
For that, we need transparency and oversight on algorithmic design and deployment, and more policy guidelines to facilitate algorithmic literacy on the part of Internet users.