Nowadays, we feel like the world is at our fingertips. Literally: in today's world, a person needs only a smartphone and a stable connection to access a myriad of information, thanks to that tiny little thing called the internet. This, after all, is the principle on which the web was originally conceived: accessibility, transparency, generativity. That is why one page refers to another, which refers to yet another, and so on and so forth, over and over again. A world constantly growing, without barriers – at least in theory.
Currently, however, barriers within the web abound. Over the last few decades, the web's fragmentation has become increasingly evident. There is a plethora of closed spaces – some bigger, others smaller – which end up creating microcosms of their own, completely autonomous and cut off from the rest of the net. The most fitting examples of this tendency toward restriction are social media platforms such as Twitter, Facebook, and Instagram.
These platforms are so cut off from the rest of the web that they have been described as “walled gardens”: spaces where users ensconce themselves and from which they rarely exit. They epitomise the web's tendency towards restriction, which manifests on four different levels.

The first level is linguistic. That a linguistic fragmentation of the web exists is hardly surprising; it is, in fact, to be expected. As the web gradually expanded and more people gained access to it, new sections in different languages emerged. The exclusivity of English was surpassed, and various linguistic enclaves formed: places that a user who speaks only one language would hardly want to leave.

The second level is social, and it is tightly connected to the concept of homophily, human beings' propensity to privilege relations with those who already share their ideas, lifestyles, social background, culture, and interests. This behavioural pattern is woven into the third level of restriction, the ideological one. Just as we like to be surrounded by people similar to us, we also want them to share our ideas and beliefs. We want to be reassured by people who think the way we do, and we do not want our opinions to be criticised or challenged; we fear open discussion and tend to avoid it. This is why, when talking about ideology and beliefs on the internet, it is so easy to run into discussions of echo chambers. Echo chambers are, by definition, closed digital spaces where the chances for open debate are negligible and the same idea is repeated over and over until it is all one hears. They offer fertile ground for infamous conspiracy theories, serving as the uncontested realm of phenomena like QAnon or of groups such as Incels.
If the first three levels of the web's closure are strictly connected to human nature, the fourth and last is purely technical. It is based on the functioning of social media and search engines, and particularly on how these platforms – Google above all, but Facebook and Instagram, too – filter information for their users. This phenomenon is called the filter bubble, precisely because both social media and search engines, thanks to cookies, track their users' behaviour and personalise their experience on the basis of what they have previously searched. In this way they create a bubble, a sort of comfort zone, made of filtered information expressly chosen for each individual's tastes.
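The filtering mechanism just described can be sketched in a few lines of code. The following toy model – an illustrative assumption of my own, not any platform's actual algorithm – ranks candidate stories by how much their topics overlap with a user's past activity; with a short feed, stories outside the user's interests never surface at all:

```python
from collections import Counter

def personalised_ranking(history, candidates, k=3):
    """Toy filter bubble: rank candidate items by how well their topics
    overlap with the topics a user has engaged with before. Items
    outside the user's past interests sink to the bottom and, with a
    small k, never reach the feed at all. Hypothetical logic, not any
    real platform's algorithm."""
    interest = Counter(topic for item in history for topic in item["topics"])
    def score(item):
        return sum(interest[t] for t in item["topics"])
    return sorted(candidates, key=score, reverse=True)[:k]

history = [
    {"title": "Transfer rumours", "topics": ["football"]},
    {"title": "Derby preview",    "topics": ["football", "local news"]},
]
candidates = [
    {"title": "Match report",      "topics": ["football"]},
    {"title": "Stadium reopening", "topics": ["football", "local news"]},
    {"title": "Climate summit",    "topics": ["politics"]},
    {"title": "Art biennale",      "topics": ["culture"]},
]

feed = personalised_ranking(history, candidates, k=2)
print([item["title"] for item in feed])
# → ['Stadium reopening', 'Match report']
# The politics and culture stories never surface: the bubble closes.
```

The point of the sketch is that no explicit censorship occurs anywhere; mere ranking by past behaviour, applied to a feed of finite length, is enough to make dissimilar content invisible.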
Some might object that none of the above is particularly revolutionary: most of the behaviours described depend on human nature, so what happens within the web is apparently no different from what happens offline. As for the filter bubble, the phenomenon could be compared to reading one newspaper rather than another: after all, keeping up with the news through a single source means living in a bubble of sorts.
However, the filter bubble is much more insidious than that. Firstly, the search experience on Google is so personalised to the individual that the danger of isolation is even greater. Secondly, traditional media, like television and newspapers, produce a set of visions within a sufficiently regulated framing. This does not apply to Google, where the bubbles and the visions of reality they offer are not only invisible but also opaque: their mechanisms are not easily perceivable and their functioning is obscure. Ultimately, users do not choose to access their bubble; they are exposed to it passively. It is built and adapted around each individual in order to envelop them completely.
The issue has become so relevant that programmes and browser extensions – plug-ins that allow users to diversify their feeds and, in theory, exit their bubble – have been created. This may seem like a godsend, but there are some critical issues. First of all, these tools are mainly based on the American context and are thus better suited to detecting the cultural, political, and religious biases typical of that specific society. Their functioning in a different context, for instance the European one, could therefore be misleading.
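As a sketch of what such a diversifying plug-in might do – hypothetical logic of my own, not the algorithm of any real extension – one could replace a fraction of the personalised feed with stories whose topics fall entirely outside the user's interest profile:

```python
import random

def diversify(feed, pool, user_topics, ratio=0.25, seed=42):
    """Swap a fraction of a personalised feed for out-of-bubble stories.

    Hypothetical illustration: real plug-ins differ, but the idea is the
    same - reintroduce content the personalisation filter would discard.
    """
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    # Stories with no topic in common with the user's interest profile.
    outside = [s for s in pool if not set(s["topics"]) & set(user_topics)]
    n = min(max(1, int(len(feed) * ratio)), len(outside))
    picks = rng.sample(outside, n)
    # Keep the head of the personalised feed, append the outside picks.
    return feed[: len(feed) - n] + picks

titles = ["Match report", "Derby preview", "Transfer rumours", "Injury news"]
feed = [{"title": t, "topics": ["football"]} for t in titles]
pool = [
    {"title": "Climate summit", "topics": ["politics"]},
    {"title": "Art biennale",   "topics": ["culture"]},
]
mixed = diversify(feed, pool, user_topics={"football"})
print([s["title"] for s in mixed])
```

Note how the sketch also illustrates the contextual limit discussed above: the diversity it injects is only as broad as the topic labels it is given, so a tool calibrated on one society's categories may mislabel what counts as "outside the bubble" in another.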
Moreover, the issue is complicated by the fact that the filter bubble is a theory, and not a universally accepted one. It has certainly been influential and successful, but some scholars take the opposite view. They doubt, in particular, the validity of the evidence supporting the theory, arguing that it is inadequate to confirm the existence of a phenomenon of algorithmically driven selective exposure. Recent studies on social media even suggest that these platforms are more likely to foster weak ties, which increase users' opportunities for cross-cutting exposure.
It is worth noting, however, that support for the positions opposing the filter bubble theory probably stems not only from research data, but also from a desire to avoid the wholesale demonisation of the web. Defending digital platforms' ability to offer varied content answers the need to counter the accusations levelled at the web, especially those linked to its fragmented conformation, which is said to add fuel to the fire of radicalisation, be it religious, political, or cultural.
Miconi, A., 2013. Teorie e pratiche del web (Vol. 659). Il Mulino.
Cardenal et al., 2019. Digital technologies and selective exposure: How choice and filter bubbles shape news media exposure. The International Journal of Press/Politics, 24(4), pp. 465–486.