By Cintia Nogueira
The ubiquity of digital technology has given rise to the teaching and learning of a series of multiliteracies related to digital media and to the processes involved in dealing with information online. Even though many of these literacies encompass different skills that have been part of the educational process for a long time, the possibilities provided by a multimedia culture and the various social implications of a digital information society have brought digital literacy to the forefront of education and cross-disciplinary curricula.
Digital literacy is generally understood as a set of skills and competencies that can be used to access, find, filter, evaluate, use, and produce information and content using digital media, as well as skills to protect one’s privacy online.[i] It also involves the ability to connect with others online and engage in democratic discourse, respecting divergent opinions and different political positions.

The concept of digital literacy thus usually includes not only the ability to operate and manipulate digital equipment, but also to think critically about information and media: making relevant connections between ideas, exercising scepticism, differentiating between fact and opinion, recognising biases, checking the credibility of information, and understanding how social media can manipulate human thought and influence elections.
However, the growing pervasiveness of algorithms and artificial intelligence in society, as well as the many social and political implications of the monopoly of information and digital systems operated by Big Tech[ii], call for a new layer in the teaching and learning of digital literacy skills – the social and political dimensions of tech.
It is not possible to discuss the use of technological tools without also addressing the social contexts of their creation, development and distribution. When approaching something as simple as writing an e-mail, for instance, it is not enough to train students in genre appropriateness, digital etiquette and information safety; they should also be encouraged to consider how the e-mail service exists within our social context. If you are using a free e-mail service provided by a private company, why is this service free? In what ways are you “paying” for it? Are you fully aware of how your data is used?
In 1983, in conversation with educator Sérgio Guimarães about the possible uses of media in education, Paulo Freire said (about television, then booming in Brazil):
“To me, the television cannot be understood within itself. It is not a purely technical instrument; its use is political. (…) Means of communication are not good or bad in themselves. (…) The issue is asking “whom” or “what” these means of communication serve. And this is an issue that has to do with power and is therefore political.”[iii]
Freire goes on to say that “behind these antennas and flying from them, there is a whole ideology, a whole comprehension of the world and of reality, an understanding of beauty and ugliness, of gender, of race, of class, that corresponds precisely with the ideology of those who have the power, those in power.”[iv]
And it is the same dominant ideology – white, male, American, heterosexual, upper class – that is behind most digital technology tools that students and educators will encounter in their lives. Likewise, these tools, their purpose and uses, operate within a capitalist framework.
Because these technologies are created within a society that discriminates against people based on race, gender, sexual orientation, social class and origin (among other factors), they tend to reflect the same biases that occur in society and its institutions, favouring the white and wealthy, even though most users believe technology to be neutral and objective.
Discrimination can occur, for example, in various modes of seeing – or not seeing – through digital imaging technology. In her now classic “coded gaze” experiment, activist and “poet of code” Joy Buolamwini realised that the facial recognition software she was using did not recognise her (Black) face, even though it could easily identify all her white friends.[v] Buolamwini coined the term “coded gaze” to describe “algorithmic bias that can lead to social exclusion and discriminatory practices – a reflection of the priorities, preferences, and prejudices of those who have the power to shape technology”.[vi]
However, the coded gaze also means that digital imaging technology can make the underprivileged more visible, and thus more susceptible to unfair judgement. Facial recognition software is just one of many data-collecting tools that usually target the least privileged in society. In April 2021, the Brazilian NGO CUFA (Central Única das Favelas) suspended a facial recognition system it had been using to register people eligible to receive donations.[vii]
Activists such as Brazilian computer scientist and researcher Nina da Hora raised concerns that people’s data could be used for other purposes and that the use of biometric data by an NGO operating in favelas requires a completely different framework, as the data can easily be weaponised and turned against people who might not be aware of the implications of sharing it.
Facial recognition has historically been used for criminal identification and is known to have harmed minorities. Research from the US National Institute of Standards and Technology (NIST) has shown that Black and Asian faces are 10 times more likely to be wrongly identified by such software than white faces.[viii] In this case, as opposed to Joy Buolamwini’s invisibility in her experiment, being more visible can actually put people in danger.
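To make concrete how such disparities are measured, here is a minimal classroom sketch in Python. The data and the matcher are invented and deliberately exaggerated stand-ins, not the NIST methodology or any real system; the point is only to show how a false match rate can be compared across demographic groups:

```python
# Classroom sketch: checking whether a face matcher's false match rate differs
# across demographic groups. Everything here is toy data; a real audit would
# plug in an actual model and a labelled evaluation set.
from collections import defaultdict

def false_match_rate_by_group(pairs, matcher):
    """pairs: iterable of (img_a, img_b, same_person, group) tuples.
    matcher(img_a, img_b) -> True if the model claims "same person".
    Returns the false match rate per group, counted over impostor pairs only."""
    errors, totals = defaultdict(int), defaultdict(int)
    for img_a, img_b, same_person, group in pairs:
        if not same_person:                # only impostor pairs can yield false matches
            totals[group] += 1
            if matcher(img_a, img_b):
                errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Deliberately exaggerated stand-in for a biased matcher: it confuses any two
# faces from group "B" while never confusing faces from group "A".
def toy_matcher(img_a, img_b):
    return img_a["group"] == "B" and img_b["group"] == "B"

toy_pairs = [
    ({"group": "A"}, {"group": "A"}, False, "A"),
    ({"group": "A"}, {"group": "A"}, False, "A"),
    ({"group": "B"}, {"group": "B"}, False, "B"),
    ({"group": "B"}, {"group": "B"}, False, "B"),
]
print(false_match_rate_by_group(toy_pairs, toy_matcher))
# {'A': 0.0, 'B': 1.0} -- an (exaggerated) disparity of the kind NIST measures
```

In a real audit the toy matcher and toy pairs would be replaced by an actual model and a labelled evaluation set; the exercise shows students that claims of algorithmic bias are testable measurements, not opinions.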
Facial recognition software and digital imaging tech are not the only ways in which technology can be biased against minorities. Artificial intelligence can be used to amplify inequality in a myriad of ways, such as:
- Misrepresentation and stereotyping of minorities, ethnic and religious groups, especially through search engine results, which can negatively affect perceptions of the group.[ix]
- Dehumanisation, such as when Google Photos labelled two African-Americans as gorillas in 2015[x] (and then quick-fixed the algorithm by simply blocking it from identifying gorillas at all).[xi]
- Unfair sentencing and racial bias in algorithms that calculate recidivism rates, punishing individuals based on the neighbourhoods they come from and the people they associate with rather than on their actions.[xii]
- Predictive policing systems that calculate where crime is most likely to occur and send police to discourage it, which in practice can work like a self-fulfilling prophecy in impoverished communities where crimes without victims, such as vagrancy, selling and consuming drugs, or aggressive panhandling, are common, thus increasing the incarceration of poor people (a minimal simulation after this list illustrates the feedback loop).
- Use of credit scores to evaluate which people should have access to jobs or promotions.
- New forms of control and surveillance that target the least privileged while the most privileged can opt out of privacy invasion.
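The feedback loop behind predictive policing can be made visible with a deliberately simplified simulation. In the sketch below (Python, with invented numbers), two neighbourhoods have exactly the same underlying rate of low-level offences, but patrols are allocated according to previously recorded incidents, and the share of offences that gets recorded is proportional to patrol presence:

```python
# Toy simulation of a predictive-policing feedback loop (invented numbers).
# Both neighbourhoods have the SAME true rate of low-level offences, but
# patrols are allocated according to previously recorded incidents, and only
# a fraction of offences proportional to patrol presence gets recorded.
import random

random.seed(0)
TRUE_OFFENCE_RATE = 0.3                          # identical in both areas
recorded = {"wealthy_area": 5, "poor_area": 10}  # historical records differ

for week in range(52):
    total = sum(recorded.values())
    for area in recorded:
        patrol_share = recorded[area] / total    # patrols follow past records
        offences = sum(random.random() < TRUE_OFFENCE_RATE for _ in range(100))
        recorded[area] += round(offences * patrol_share)

print(recorded)
# The area that started with more records ends the year with roughly twice as
# many, even though behaviour in both areas was identical: the system
# "predicts" its own history back -- the self-fulfilling prophecy described above.
```

After a simulated year, the neighbourhood that started with more records has accumulated roughly twice as many, even though behaviour in both was identical; what the “prediction” reproduces is the historical disparity, not the underlying reality.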
These issues should be part and parcel of any digital literacy curriculum that considers technology from a social and political perspective, understanding the contexts involved in the creation of digital technology and what social and commercial purposes it serves, as well as its potential to cause harm and increase inequality. It is also fundamental to understand that in certain situations technology cannot be used as a shortcut or quick fix to social equity.
Sociologist Ruha Benjamin says that “imagination is a field of action”[xiii], in the sense that the least privileged in society need to find ways to stop living in white people’s idea of the world and to imagine – and create – their own future. Richard Sennett corroborates this vision when he says that “modern capitalism works by colonising people’s imagination of what is possible.”[xiv] Thus it is very likely that most people consider the monopolistic practices of Big Tech inevitable simply because they are not aware that there are alternatives to these services that do not come from Silicon Valley and are not privately owned – services, software and tech initiatives that are open source, user-generated, decentralised and community-based, and that embody a whole different viewpoint on technology, one that does not operate within the profit framework of capitalism.[xv]
It is the job of educators, then, to help students develop what Benjamin calls a “liberatory imagination” and Freire calls “generative imagination”, engaging students in critical analyses of our current scenarios but also in active participation in the creation of these futures. Let us move towards a digital literacy that equips students not only with technical skills but also with an understanding of the social and political dimensions of the tools they use, of where these tools are situated historically and of how they serve our economic system, fostering an education that can actually change the future of tech, valuing equity over efficiency.[xvi]
[i] For some definitions of digital and media literacy and digital intelligence, visit:
https://www.commonsensemedia.org/news-and-media-literacy/what-is-digital-literacy
https://educamidia.org.br/educacao-midiatica
(All websites accessed on June 4 2021.)
[ii] MOROZOV, Evgeny. Big Tech – A ascensão dos dados e a morte da política. São Paulo: Ubu Editora, 2018.
[iii] FREIRE, Paulo and GUIMARÃES, Sérgio. Educar com a mídia: novos diálogos sobre educação. São Paulo: Editora Paz e Terra, 2013.
[iv] Idem.
[v] Buolamwini describes the process in her TED Talk, “How I’m fighting bias in algorithms”, available at https://youtu.be/UG_X_7g63rY (Accessed on June 3 2021).
[vi] https://www.fordfoundation.org/just-matters/just-matters/posts/fighting-the-coded-gaze/ (Accessed on June 3 2021).
[vii] https://g1.globo.com/economia/tecnologia/noticia/2021/04/27/por-que-a-cufa-interrompeu-o-uso-de-reconhecimento-facial-apos-polemica.ghtml (Accessed on June 3 2021).
[viii] https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software (Accessed on June 3 2021).
[ix] NOBLE, Safiya. Algorithms of oppression: how search engines reinforce racism. New York: NYU Press, 2018.
[x] https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/?sh=7e9e1b72713d (Accessed on June 3 2021).
[xi] https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai (Accessed on June 3 2021).
[xii] O’NEIL, Cathy. Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown, 2016.
[xiii] BENJAMIN, Ruha. Race after technology. Medford, MA: Polity Press, 2019.
[xiv] https://brasil.elpais.com/brasil/2018/08/09/cultura/1533824675_957329.html (Accessed on June 3 2021).
[xv] Have a look at some initiatives by visiting their websites:
https://www.olabi.org.br/project/pretalab
https://educatransforma.com.br/
[xvi] BENJAMIN, 2019.