On February 27th, the Computation & Data seminar series welcomed Gerhard Schreiber for a thought-provoking talk on ‘Data Toxicality: Observations and Reflections from a Techno-Ethical Perspective.’ His presentation introduced a compelling new concept—data toxicality—which extends the notion of toxicity beyond its traditional biochemical roots into the digital realm.
But what exactly does “data toxicality” mean, and why should we be concerned about it?
In biochemistry, a substance is considered toxic if it causes harm when it interacts with a biological system. Over time, this concept has been metaphorically extended into social and psychological domains. We now speak of toxic masculinity, toxic relationships, and even toxic positivity—all referring to harmful behaviors, mindsets, or structures that erode well-being and social cohesion.
Schreiber’s work takes this metaphor a step further by exploring how data itself can become toxic. His argument is not that data is inherently dangerous, but rather that the way data is collected, processed, and used can create real harm in human relationships, societies, and institutions.
Unlike traditional forms of toxicity, such as chemical or genetic toxicity, data toxicality is less about physical harm and more about social and psychological consequences. Schreiber highlighted several key ways in which data can take on toxic properties:
False or misleading data can distort reality, leading to misinformed decisions, erosion of trust, and social division. Example: Disinformation campaigns and algorithmically amplified conspiracy theories.
Excessive data collection can create an oppressive atmosphere where individuals feel constantly monitored, inhibiting free expression. Example: Workplace surveillance that pressures employees to self-censor.
When biased data feeds into decision-making systems, it can reinforce inequalities and institutionalize discrimination. Example: AI-driven hiring processes that disadvantage certain demographic groups (see the short sketch after this list).
Exposure of personal data can lead to real-world distress, anxiety, and reputational damage. Example: Doxxing, revenge porn, and AI-generated deepfakes.
The sheer volume of digital information can negatively impact mental health by inducing stress and cognitive overload. Example: Constant notifications and never-ending data streams.
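Schreiber's talk stayed at the conceptual level, but the third mechanism above is concrete enough to sketch. The minimal Python example below, built on entirely hypothetical screening outcomes and group labels, shows one standard way such bias can be surfaced in practice: comparing per-group selection rates from a screening model's outputs against the "four-fifths" disparate-impact rule used in US employment practice. The check is a common fairness heuristic, not a method drawn from the presentation.

```python
# Minimal sketch (hypothetical data): checking an automated screening
# pipeline for disparate impact across demographic groups.

# Screening outcomes produced by a hypothetical resume-ranking model:
# (applicant_group, passed_screen)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", True), ("B", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` that the model screened in."""
    passed = [p for g, p in outcomes if g == group]
    return sum(passed) / len(passed)

rate_a = selection_rate("A")  # 0.80
rate_b = selection_rate("B")  # 0.40

# Disparate-impact ratio: the "four-fifths rule" flags a selection
# process when one group's rate falls below 80% of the most-favored
# group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A selection rate: {rate_a:.0%}")  # 80%
print(f"Group B selection rate: {rate_b:.0%}")  # 40%
print(f"Disparate-impact ratio: {ratio:.2f}")   # 0.50 -> flagged
if ratio < 0.8:
    print("Warning: screening model may encode historical bias.")
```

A ratio well below 0.8, as in this toy run, suggests the model has absorbed a pattern from its training data rather than from genuine job requirements, which is precisely the kind of institutionalized harm the concept of data toxicality names.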
While Schreiber’s presentation highlighted the risks, it also pointed towards solutions: he called for a techno-ethical framework that balances innovation with responsibility.
Data is often seen as a neutral, objective entity—something that simply “exists.” However, Schreiber’s concept of data toxicality reminds us that data is never neutral. It is shaped by human choices, systems, and biases—and when mismanaged, it can create real harm.
By recognizing how data affects us socially and psychologically, we can begin to develop strategies for healthier digital environments—ones that foster trust, fairness, and well-being rather than division, surveillance, and manipulation.
The discussion on data toxicality is just beginning, and as our digital lives continue to expand, this concept will likely become even more relevant. How we collect, interpret, and regulate data today will determine whether we harness its potential—or allow its toxic effects to spread.