‘Data Toxicality’: The hidden harms of information

Review of the talk by Prof. Gerhard Schreiber (HSU) at the latest session of the seminar series ‘Computation and Data’

Thumbnail: ChatGPT-generated image from the prompt ‘Create a picture to represent data toxicality’

On February 27th, the Computation & Data seminar series welcomed Prof. Gerhard Schreiber for a thought-provoking talk on ‘Data Toxicality: Observations and Reflections from a Techno-Ethical Perspective’.¹ In his presentation, Prof. Schreiber introduced a compelling and original concept, data toxicality, which extends the notion of toxicity beyond its traditional biochemical roots into the digital realm.

But what exactly does the concept of “data toxicality”, coined by Prof. Schreiber, mean, and why should we be concerned about it?

From Toxic Substances to Toxic Data

In biochemistry, a substance is considered toxic if it causes harm when it interacts with a biological system. Over time, this concept has been metaphorically extended into social and psychological domains. We now speak of toxic masculinity, toxic relationships, and even toxic positivity, all referring to harmful behaviors, mindsets, or structures that erode well-being and social cohesion.

Prof. Schreiber takes this metaphor a step further by examining how data itself can become toxic. His argument is not that data is inherently harmful, but rather that harm arises from the ways in which data is collected, processed, and used—often with consequences for human relationships, societies, and institutions.

How Can Data Become Toxic?

Unlike traditional notions of toxicity, such as chemical or genetic toxicity, data toxicality as described by Prof. Schreiber is less about physical harm and more about socio-psychological consequences. He identified several key ways in which data can acquire toxic properties:

  1. Misinformation and Manipulation

False or misleading data can distort reality, leading to misinformed decisions, erosion of trust, and social division. This increasingly common phenomenon is particularly problematic where disinformation meets algorithmic distribution mechanisms, such as the recommendation feeds of social networks.
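To make that interplay concrete, here is a minimal Python sketch of how an engagement-maximizing ranker can amplify false content. This is my own illustration, not an example from the talk; all posts, engagement scores, and accuracy labels are invented for the demonstration.

```python
# Illustrative sketch: an engagement-driven feed ranker that ignores accuracy.
# All posts, scores, and labels below are invented for demonstration purposes.

posts = [
    {"text": "Sober fact-check of viral claim", "engagement": 0.2, "accurate": True},
    {"text": "Outrage-bait conspiracy post",    "engagement": 0.9, "accurate": False},
    {"text": "Measured expert explainer",       "engagement": 0.3, "accurate": True},
    {"text": "Sensational misleading headline", "engagement": 0.8, "accurate": False},
]

# The ranker optimizes only for predicted engagement ...
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# ... so inaccurate but emotionally charged content floats to the top.
for rank, post in enumerate(feed, start=1):
    flag = "OK" if post["accurate"] else "FALSE"
    print(f"{rank}. [{flag}] {post['text']} (engagement={post['engagement']})")
```

Nothing in the ranking objective penalizes falsehood, which is the structural point: the toxicity emerges from the distribution mechanism, not from any single data point.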

  2. Surveillance and Loss of Privacy

Excessive data collection can create an oppressive atmosphere in which individuals feel constantly monitored, inhibiting free expression. One example is workplace surveillance that pressures employees to self-censor.

  3. Algorithmic Bias and Discrimination

When biased data feeds into decision-making systems, it can reinforce inequalities and institutionalize discrimination. This problem can be observed in AI-based technologies, for example in hiring processes, where it can systematically disadvantage certain demographic groups.
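As a hedged illustration (again my own sketch, not material from the talk), consider a naive screening rule that invites applicants who resemble past hires. The feature, group labels, and numbers are all invented; the point is only that a skewed training signal reproduces its skew.

```python
# Illustrative sketch: a naive "hire like we always hired" screening rule.
# The historical data is deliberately skewed; the rule inherits that skew.

# Invented past hires, described by years of unpaid internships: a feature
# that tracks socioeconomic background rather than ability.
past_hires = [3, 4, 2, 3]

# "Look like previous hires": invite anyone at or above the historical average.
threshold = sum(past_hires) / len(past_hires)

applicants = [
    {"name": "A", "group": "privileged",    "unpaid_internships": 3},
    {"name": "B", "group": "disadvantaged", "unpaid_internships": 0},
    {"name": "C", "group": "privileged",    "unpaid_internships": 4},
    {"name": "D", "group": "disadvantaged", "unpaid_internships": 1},
]

for a in applicants:
    decision = "invite" if a["unpaid_internships"] >= threshold else "reject"
    print(f"{a['name']} ({a['group']}): {decision}")

# Every "disadvantaged" applicant is rejected, not because of ability, but
# because the training signal encoded historical inequality.
```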

  4. Emotional and Psychological Harm

Personal data exposure (e.g., doxxing, revenge porn, or AI-generated deepfakes) can lead to real-world distress, anxiety, and reputational damage.

  5. Data Exhaustion and Decision Fatigue

The overwhelming volume of digital information—from notifications to endless data streams—can negatively impact mental health by inducing stress and cognitive overload.

A Techno-Ethical Framework for Addressing Data Toxicality

While Prof. Schreiber’s presentation highlighted risks, it also pointed toward potential solutions. He outlined several approaches, including a techno-ethical framework that seeks to balance technological innovation with social responsibility. This framework includes:

  • Stronger data governance: Clearer regulations on what data can be collected and how it can be used.

  • Transparency in algorithms: Making AI decision-making processes understandable and auditable.

  • Digital literacy and critical thinking: Empowering individuals to recognize and mitigate harmful data practices.

  • Ethical AI design: Ensuring that technology development prioritizes fairness, privacy, and societal well-being.

Why This Matters

Data is often perceived as a neutral, objective entity—something that simply ‘exists’. But as Prof. Schreiber emphasized, data is never neutral. It is shaped by human decisions, systems, and biases. When mismanaged, it can lead to tangible harm.

By acknowledging the ways in which data affects us socially and psychologically, we can begin to develop strategies for healthier digital environments—ones that foster trust, fairness, and well-being rather than division, surveillance, and manipulation.

The conversation around data toxicality is still in its early stages. Yet, as our digital lives grow increasingly complex, Prof. Schreiber’s concept offers a valuable lens for understanding and addressing the risks ahead.


  1. Schreiber, Gerhard (2025): Data Toxicality: A Techno-Philosophical Inquiry into Digital Harm. In: Filozofia, vol. 80 (3), pp. 300–313. Link