Human Behaviour in the Digital World: Desire, Knowledge, and the Algorithmic Mirror


“Human behaviour flows from three main sources: desire, emotion, and knowledge.” — Plato

Even after 2,500 years, Plato’s idea still fits surprisingly well with how we behave online. These days, algorithms don’t just reflect what we want, feel, or know — they learn from it, tweak it, and slowly reshape it. This piece takes a closer look at how digital systems feed our desires, stir our emotions, and filter our knowledge. It’s not about scaring anyone — it’s about helping people understand what’s really going on behind the screen (Sunstein, 2017; Zuboff, 2019).


Desire → Engagement: How Algorithms Shape What We Want

Online platforms turn attention into something they can measure and optimise. Recommender systems, like those on Netflix, YouTube, or social media, learn what's likely to grab us next. Netflix, for example, runs a portfolio of personalised ranking and page-layout algorithms designed to keep us watching (Gómez-Uribe & Hunt, 2015/2016). YouTube's system goes a step further: a two-stage deep learning pipeline first narrows millions of videos to a shortlist of candidates, then ranks them by how long each is predicted to hold our attention (Covington et al., 2016).
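
To make the mechanics concrete, here is a minimal sketch of the two-stage pattern Covington et al. describe: a cheap candidate-generation step narrows a huge catalogue to a shortlist, and a ranking step orders that shortlist by predicted engagement (here, a crude watch-time proxy). The item pool, fields, and scoring are illustrative assumptions, not any platform's real system.

    # Minimal two-stage recommender sketch (illustrative; not YouTube's or Netflix's actual system).
    from dataclasses import dataclass

    @dataclass
    class Video:
        video_id: str
        topic: str
        avg_watch_minutes: float  # stands in for a learned engagement prediction

    def generate_candidates(catalog, watch_history, k=100):
        """Stage 1: cheaply narrow the catalogue to topics the user already watches."""
        watched_topics = {v.topic for v in watch_history}
        return [v for v in catalog if v.topic in watched_topics][:k]

    def rank_candidates(candidates, n=10):
        """Stage 2: order the shortlist by predicted engagement (here, average watch time)."""
        return sorted(candidates, key=lambda v: v.avg_watch_minutes, reverse=True)[:n]

    catalog = [Video("v1", "cooking", 8.2), Video("v2", "politics", 12.5),
               Video("v3", "cooking", 4.1), Video("v4", "music", 3.3)]
    history = [Video("v0", "cooking", 6.0)]
    print([v.video_id for v in rank_candidates(generate_candidates(catalog, history))])

The design point is the second stage: once the ranking objective is "time spent watching", the system's notion of what we want quietly becomes whatever keeps us watching.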

Here’s the thing: we’re drawn to what’s new and emotionally charged. A huge study of Twitter found that false news spreads faster than the truth — mainly because it’s more surprising and stirs stronger feelings (Vosoughi, Roy & Aral, 2018).

We also tend to want what our friends want. On Facebook, it turns out that people's own choices about what to click or skip limit their exposure to opposing viewpoints more than the ranking algorithm does. So it's not just the tech, it's us too (Bakshy, Messing & Adamic, 2015).


Emotion → Amplification: Why Outrage Spreads Faster Than Empathy

Online, emotions act like a megaphone. Tweets with moral or emotional words are more likely to be shared, especially within like-minded groups. This creates a perfect storm for polarising stories (Brady et al., 2017).
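
Brady et al.'s measure is disarmingly simple: count the moral-emotional words in a post and see how that count relates to sharing. A toy version of the count, using a made-up word list rather than their actual dictionary, looks like this.

    # Toy moral-emotional word counter (the word list is illustrative, not Brady et al.'s dictionary).
    import re

    MORAL_EMOTIONAL_WORDS = ("shame", "hate", "disgrace", "betray", "corrupt")

    def moral_emotional_count(text):
        """Count tokens that begin with a word from the illustrative moral-emotional list."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return sum(1 for t in tokens if any(t.startswith(w) for w in MORAL_EMOTIONAL_WORDS))

    posts = ["New budget figures were released today.",
             "This corrupt deal is a disgrace and a betrayal of voters."]
    for p in posts:
        print(moral_emotional_count(p), "-", p)

In the study, posts scoring higher on this kind of count spread further, and mostly within like-minded networks rather than across them.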

Facebook once ran a large (and controversial) experiment showing that shifting the emotional tone of people's feeds nudged their own posts in the same direction. The effects were small, but they provided evidence that emotions can spread online even without face-to-face contact (Kramer, Guillory & Hancock, 2014).

And here’s another twist: attention spans are shrinking. Across platforms like Twitter, Reddit, and even Wikipedia, topics flare up and die out faster than ever. That means content that gets quick, strong reactions tends to win (Lorenz-Spreen et al., 2019).

Why does this happen? Neuroscience gives us a clue. Social media taps into deep human needs — like wanting to belong or be seen. Getting likes or seeing something new triggers reward centres in the brain, making us come back for more (Meshi, Tamir & Heekeren, 2015).


Knowledge → Personalisation: What We See (and Miss) Online

The internet once promised endless knowledge. Now, personalisation means we mostly see what algorithms think we want. Some experts warned early on that this could trap us in “Daily Me” bubbles (Pariser, 2011; Sunstein, 2017). And while it’s not always that simple, echo chambers do exist — and they vary depending on the platform and the person.
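
The mechanism behind that worry is a feedback loop: the system recommends what it thinks you like, you click within that narrow slice, and the next round of recommendations narrows further. A minimal simulation of the loop, with made-up topics and a deliberately crude click model, shows how exposure diversity can collapse.

    # Minimal filter-bubble feedback loop (topics, click model, and parameters are illustrative).
    import random

    TOPICS = ["politics", "science", "sport", "culture", "tech"]

    def recommend(weights, k=5):
        """Show k items in proportion to the system's current estimate of interest."""
        return random.choices(TOPICS, weights=[weights[t] for t in TOPICS], k=k)

    def simulate(rounds=30, favourite="politics"):
        weights = {t: 1.0 for t in TOPICS}               # the system starts neutral
        for _ in range(rounds):
            for topic in recommend(weights):
                if topic == favourite:                   # the user only clicks one topic
                    weights[topic] += 1.0                # each click reinforces it
        total = sum(weights.values())
        return {t: round(w / total, 2) for t, w in weights.items()}

    random.seed(1)
    print(simulate())  # the favourite topic ends up dominating what gets shown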

For example, Facebook and Twitter show pronounced echo-chamber structure, while Reddit, where feeds are organised around communities users choose to join, shows much weaker segregation. So yes, design matters (Cinelli et al., 2021).

But more content diversity isn't always the answer. One study found that exposing Republicans to opposing views on Twitter actually made them more conservative, while Democrats shifted only slightly and not significantly. So it's not just about variety; it's about how messages are framed and whether they feel threatening (Bail et al., 2018).

Also, let’s not overstate things. Some solid research shows only small links between digital use and wellbeing. The effects aren’t the same for everyone — they depend on context, personality, and how the tech is used (Orben & Przybylski, 2019; Guess et al., 2020).


Bigger Forces: Bias and Surveillance Capitalism

Algorithms are built on data, and that data often carries bias. Friedman and Nissenbaum distinguish preexisting bias (inherited from society), technical bias (built into design choices), and emergent bias (arising only once the system is in use). When these mix, they can produce unfair search results or skewed visibility (Friedman & Nissenbaum, 1996; Baeza-Yates, 2018).
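
The emergent kind is the least intuitive, so a sketch helps: if a ranker favours items that have already been clicked, a small early difference in visibility compounds into a large gap even when the items are equally good. The simulation below is a simplified rich-get-richer illustration, not a model of any real search engine.

    # Rich-get-richer sketch of emergent bias: equal-quality items, click-count ranking.
    import random

    def simulate_clicks(n_items=5, n_users=2000, seed=7):
        random.seed(seed)
        clicks = [1] * n_items                    # every item starts equal
        for _ in range(n_users):
            # The ranker surfaces items in proportion to past clicks; users click what they see.
            shown = random.choices(range(n_items), weights=clicks, k=1)[0]
            clicks[shown] += 1
        return clicks

    print(simulate_clicks())  # a few items absorb most clicks despite identical quality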

Some critics say platforms turn our behaviour into something they can sell, using it to predict and control us for advertising. Whether or not you agree with the strongest claims, one thing’s clear: the way platforms are built affects what we count as “knowledge” online (Zuboff, 2019; Noble, 2018).


The Changing News Landscape: Platforms as Gateways

Around the world, platforms, and increasingly video-based ones, are becoming the main gateway to news. The 2024 Digital News Report, covering 47 markets, found traditional outlets losing ground while TikTok, Instagram, and YouTube grow fast. In the US, just over half of adults now get at least some news from social media, with TikTok growing quickly among younger users. This shift makes algorithmic curation more important than ever (Reuters Institute, 2024; Pew Research Center, 2024).


What Can We Do? Curiosity, Empathy, and Wisdom

For individuals: Taking a break from social media can boost wellbeing — but it might also reduce how much political info you pick up. So it’s a mixed bag. Good digital habits mean following trusted voices, slowing down your scroll, and going straight to quality sources. And remember: most mental health effects are small and vary from person to person (Allcott et al., 2020; Orben & Przybylski, 2019; Vuorre, Orben & Przybylski, 2021).

For platforms and regulators: If we want systems that encourage curiosity instead of mindless scrolling, we need to change how they work. That means:

  • Making algorithms more transparent and accountable (Friedman & Nissenbaum, 1996; Baeza-Yates, 2018).
  • Giving users more control, such as sliders for content diversity or friction before sharing (a sketch of such a slider follows this list).
  • Supporting quality journalism in a world dominated by creators and video (Reuters Institute, 2024).
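
As one concrete reading of a "diversity slider", a feed could re-rank items with a user-controlled trade-off between predicted relevance and topical novelty, in the spirit of maximal-marginal-relevance re-ranking. The item structure and scores below are assumptions for illustration, not any platform's API.

    # Illustrative diversity slider: lambda_ = 1.0 is pure relevance, 0.0 is pure topical novelty.
    from dataclasses import dataclass

    @dataclass
    class Item:
        item_id: str
        topic: str
        relevance: float  # the platform's predicted interest for this user, in [0, 1]

    def rerank(items, lambda_=0.5, k=5):
        """Greedily pick items, trading predicted relevance against topic novelty."""
        chosen, seen_topics, pool = [], set(), list(items)
        while pool and len(chosen) < k:
            def score(it):
                novelty = 0.0 if it.topic in seen_topics else 1.0
                return lambda_ * it.relevance + (1 - lambda_) * novelty
            best = max(pool, key=score)
            chosen.append(best)
            seen_topics.add(best.topic)
            pool.remove(best)
        return chosen

    feed = [Item("a", "politics", 0.9), Item("b", "politics", 0.85),
            Item("c", "science", 0.6), Item("d", "culture", 0.5)]
    print([i.item_id for i in rerank(feed, lambda_=0.3)])  # diverse topics jump ahead of near-duplicates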

For researchers: Mixed results aren’t mistakes — they’re signs of complex systems. We should reward open methods, better data sharing, and studies that ask: who benefits, who’s harmed, and under what conditions (Orben & Przybylski, 2019).


Plato’s trio — desire, emotion, and knowledge — isn’t just philosophy. It’s what today’s platforms optimise for. Our challenge is to make those systems serve people, not just profit. That means designing for curiosity, empathy, and wisdom. And it’ll take both personal effort and big structural changes to make that happen (Sunstein, 2017; Zuboff, 2019; Noble, 2018).

Bibliography

Allcott, H., Braghieri, L., Eichmeyer, S. & Gentzkow, M. (2020) ‘The welfare effects of social media’, American Economic Review, 110(3), pp. 629–676.

Baeza‑Yates, R. (2018) ‘Bias on the web’, Communications of the ACM, 61(6), pp. 54–61.

Bail, C.A., Argyle, L.P., Brown, T.W., Bumpus, J.P., Chen, H., Hunzaker, M.B.F. et al. (2018) ‘Exposure to opposing views on social media can increase political polarization’, Proceedings of the National Academy of Sciences, 115(37), pp. 9216–9221.

Bakshy, E., Messing, S. & Adamic, L.A. (2015) ‘Exposure to ideologically diverse news and opinion on Facebook’, Science, 348(6239), pp. 1130–1132.

Brady, W.J., Wills, J.A., Jost, J.T., Tucker, J.A. & Van Bavel, J.J. (2017) ‘Emotion shapes the diffusion of moralized content in social networks’, Proceedings of the National Academy of Sciences, 114(28), pp. 7313–7318.

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W. & Starnini, M. (2021) ‘The echo chamber effect on social media’, Proceedings of the National Academy of Sciences, 118(9), e2023301118.

Covington, P., Adams, J. & Sargin, E. (2016) ‘Deep neural networks for YouTube recommendations’, in Proceedings of the 10th ACM Conference on Recommender Systems (RecSys), pp. 191–198.

Friedman, B. & Nissenbaum, H. (1996) ‘Bias in computer systems’, ACM Transactions on Information Systems, 14(3), pp. 330–347.

Gómez‑Uribe, C.A. & Hunt, N. (2015/2016) ‘The Netflix recommender system: Algorithms, business value, and innovation’, ACM Transactions on Management Information Systems, 6(4), Article 13.

Guess, A.M., Lockett, D., Lyons, B., Montgomery, J.M., Nyhan, B. & Reifler, J. (2020) ‘“Fake news” may have limited effects beyond increasing beliefs in false claims’, Harvard Kennedy School Misinformation Review, 1(1).

Kramer, A.D.I., Guillory, J.E. & Hancock, J.T. (2014) ‘Experimental evidence of massive-scale emotional contagion through social networks’, Proceedings of the National Academy of Sciences, 111(24), pp. 8788–8790.

Lorenz‑Spreen, P., Mønsted, B.M., Hövel, P. & Lehmann, S. (2019) ‘Accelerating dynamics of collective attention’, Nature Communications, 10, 1759.

Meshi, D., Tamir, D.I. & Heekeren, H.R. (2015) ‘The emerging neuroscience of social media’, Trends in Cognitive Sciences, 19(12), pp. 771–782.

Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Orben, A. & Przybylski, A.K. (2019) ‘The association between adolescent well‑being and digital technology use’, Nature Human Behaviour, 3, pp. 173–182.

Pariser, E. (2011/2012) The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin.

Pew Research Center (2024) Social Media and News Fact Sheet. Washington, DC: Pew Research Center.

Reuters Institute for the Study of Journalism (2024) Digital News Report 2024. Oxford: University of Oxford.

Sunstein, C.R. (2017) #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.

Vosoughi, S., Roy, D. & Aral, S. (2018) ‘The spread of true and false news online’, Science, 359(6380), pp. 1146–1151.

Vuorre, M., Orben, A. & Przybylski, A.K. (2021) ‘There is no evidence that associations between adolescents’ digital technology engagement and mental health problems have increased’, Clinical Psychological Science, 9(5), pp. 823–835.

Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.