The Etiquette of AI: Why Sharing AI Output Can Be Rude

2025-07-20
ℹ️Note on the source

This blog post was automatically generated (and translated). It is based on the following original, which I selected for publication on this blog:
It’s rude to show AI output to people | Alex Martsinovich.

The proliferation of AI-generated content raises an important question: is it rude to share AI output with others? The original post argues that it is, drawing a parallel to the 'scramblers', the alien species in Peter Watts' novel Blindsight. These aliens treat humanity's constant stream of meaningless chatter as an act of war because it wastes their cognitive resources.

The Scrambler Analogy: Information as a Virus

The core idea is that a signal which appears intelligent but is ultimately meaningless acts like a virus: it consumes the recipient's resources, time and attention, for zero payoff, reducing their fitness. The argument is that AI-generated text can function the same way when no human has thoughtfully adopted it and the receiver has not consented to it.

Until recently, written text carried an inherent 'proof-of-thought': a guarantee that a human had invested time and effort in its creation. AI, however, has drastically reduced the cost of generating text, images, code, and video. Consequently, one can no longer assume that a body of text reflects genuine human thought, and this flood of easily generated content risks overwhelming our cognitive capacities with 'AI slop'.

Navigating the AI Landscape: Consent and Adoption

Unlike in the fictional scenario, where Earth's broadcasts reach the scramblers unsolicited, AI-generated content reaches an audience only through a human intermediary. The problem therefore lies with the humans who choose to disseminate it. The post argues that using AI for personal exploration is acceptable; the trouble begins when AI-generated content is presented to others without context or consent, lending it a false 'proof-of-thought'.

Towards AI Etiquette: Respecting Cognitive Space

The proposed solution is to establish an AI etiquette: AI output should be shared only if the sender explicitly adopts it as their own or the receiver has given explicit consent.

Consider the following examples:

  • Stating "I asked ChatGPT and this is what it said: <...>" is considered rude.
  • Offering to share a ChatGPT log you found helpful after an earlier conversation, and sending it only once the other person has explicitly consented, is acceptable.
  • Claiming to have 'vibe-coded' a pull request in minutes raises an obvious question: did the author review it themselves before asking others to?
  • Presenting a pull request with a clear explanation of the work done is still considered polite.

While the scramblers in Blindsight had no choice but to process Earth's noise, humans can choose what they consume. By practicing AI etiquette, we can keep communication respectful and meaningful and avoid the pitfalls of information overload. Which standards and social norms should we adopt to keep human-computer interaction useful?

