Data Toxicality: Understanding the Hidden Harms of Information in the Digital Age

Review of the talk by Prof. Gerhard Schreiber (HSU) at the latest session of the ‘Computation and Data’ seminar series

[Image generated from the ChatGPT prompt: “Create a picture to represent ‘Data Toxicality’”]

On February 27th, the Computation & Data seminar series welcomed Gerhard Schreiber for a thought-provoking talk on ‘Data Toxicality: Observations and Reflections from a Techno-Ethical Perspective.’ His presentation introduced a compelling new concept—data toxicality—which extends the notion of toxicity beyond its traditional biochemical roots into the digital realm.

But what exactly does “data toxicality” mean, and why should we be concerned about it?

From Toxic Substances to Toxic Data

In biochemistry, a substance is considered toxic if it causes harm when it interacts with a biological system. Over time, this concept has been metaphorically extended into social and psychological domains. We now speak of toxic masculinity, toxic relationships, and even toxic positivity—all referring to harmful behaviors, mindsets, or structures that erode well-being and social cohesion.

Schreiber’s work takes this metaphor a step further by exploring how data itself can become toxic. His argument is not that data is inherently dangerous, but rather that the way data is collected, processed, and used can create real harm in human relationships, societies, and institutions.

How Can Data Become Toxic?

Unlike traditional notions of toxicity—such as chemical or genetic toxicity—data toxicality is less about physical harm and more about social-psychological consequences. Schreiber highlighted several key ways in which data can take on toxic properties:

  1. Misinformation and Manipulation

False or misleading data can distort reality, leading to misinformed decisions, erosion of trust, and social division. Example: Disinformation campaigns and algorithmically amplified conspiracy theories.

  2. Surveillance and Loss of Privacy

Excessive data collection can create an oppressive atmosphere where individuals feel constantly monitored, inhibiting free expression. Example: Workplace surveillance that pressures employees to self-censor.

  3. Algorithmic Bias and Discrimination

When biased data feeds into decision-making systems, it can reinforce inequalities and institutionalize discrimination. Example: AI-driven hiring processes that disadvantage certain demographic groups.

  4. Emotional and Psychological Harm

Personal data exposure (e.g., doxxing, revenge porn, or AI-generated deepfakes) can lead to real-world distress, anxiety, and reputational damage.

  5. Data Exhaustion and Decision Fatigue

The overwhelming volume of digital information—from notifications to endless data streams—can negatively impact mental health by inducing stress and cognitive overload.
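The third point above, algorithmic bias, can be made concrete with a small thought experiment in code. The following Python sketch is purely illustrative and not from Schreiber's talk: all data, thresholds, and the "model" (a per-group cutoff learned from historical decisions) are invented. It shows how a system that merely learns from biased past hiring decisions reproduces the disparity on new candidates, even when both groups are equally skilled.

```python
import random

random.seed(0)

# Hypothetical synthetic "historical hiring" data: two groups, A and B.
# Candidates in both groups have identical skill distributions, but past
# (human) decisions required a higher skill from group B -- the bias we feed in.
def make_history(n=10_000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()            # true qualification, same for both groups
        threshold = 0.5 if group == "A" else 0.7  # biased historical decision rule
        records.append((group, skill, skill > threshold))
    return records

history = make_history()

# A naive "learned" rule: hire anyone whose skill matches the least-skilled
# person previously hired *from the same group* -- a stand-in for a model
# that has picked up group membership as a predictive feature.
def group_cutoff(records, group):
    hired_skills = [s for g, s, hired in records if g == group and hired]
    return min(hired_skills)               # the model inherits the old threshold

cutoff = {g: group_cutoff(history, g) for g in ("A", "B")}

# Evaluate on fresh, equally skilled applicants: the disparity persists.
applicants = [(g, random.random()) for g in ("A", "B") for _ in range(5_000)]
rates = {}
for g in ("A", "B"):
    pool = [s for grp, s in applicants if grp == g]
    rates[g] = sum(s > cutoff[g] for s in pool) / len(pool)

print(rates)  # group B is hired at a noticeably lower rate despite equal skill
```

Nothing in the learned rule mentions group membership maliciously; the discrimination is inherited entirely from the training data, which is exactly the mechanism the bullet point describes.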

A Techno-Ethical Framework for Addressing Data Toxicality

While Schreiber’s presentation highlighted the risks, it also pointed towards solutions. He called for a techno-ethical framework that balances innovation with responsibility. This includes:

  • Stronger data governance: Clearer regulations on what data can be collected and how it can be used.
  • Transparency in algorithms: Making AI decision-making processes understandable and auditable.
  • Digital literacy and critical thinking: Empowering individuals to recognize and mitigate harmful data practices.
  • Ethical AI design: Ensuring that technology development prioritizes fairness, privacy, and societal well-being.

Why This Matters

Data is often seen as a neutral, objective entity—something that simply “exists.” However, Schreiber’s concept of data toxicality reminds us that data is never neutral. It is shaped by human choices, systems, and biases—and when mismanaged, it can create real harm.

By recognizing how data affects us socially and psychologically, we can begin to develop strategies for healthier digital environments—ones that foster trust, fairness, and well-being rather than division, surveillance, and manipulation.

The discussion on data toxicality is just beginning, and as our digital lives continue to expand, this concept will likely become even more relevant. How we collect, interpret, and regulate data today will determine whether we harness its potential—or allow its toxic effects to spread.