
AI Detector vs Generator

Interactive dual-panel exploring the boundary between human and machine writing

React · Next.js · OpenAI · Semantic Analysis · SSE

Status: shipped

Introduction

What happens when AI learns to write like a human — and another AI learns to catch it? This project explores the arms race between AI text generation and detection, putting both systems side-by-side so you can observe the dynamic firsthand.

Generate text with a GPT-powered language model, then watch a detection system analyze it in real time — scoring patterns, structure, and style to estimate whether the text was written by a human or an AI.

[Interactive demo: the zLLM Generator panel writes a short piece from any topic or scenario, side-by-side with an AI Text Detector panel that analyzes the generated text as it streams in.]

When an unstoppable generator meets an immovable detector, which wins? As generation improves, detection must evolve — and vice versa. This feedback loop drives both technologies forward in an ongoing arms race.

Overview

The generator uses OpenAI's GPT-3.5-turbo to produce creative, contextual text streamed word-by-word in real time. The detector runs client-side heuristic analysis, examining sentence structure, vocabulary patterns, and stylistic markers to estimate AI authorship probability.
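The page doesn't show the detector's actual heuristics, but the kind of client-side signal scoring described above can be sketched as follows. Everything here is illustrative: the signals (sentence-length uniformity, vocabulary diversity, stock connectors), the weights, and the function name are assumptions, not the project's real implementation.

```typescript
// Minimal sketch of client-side heuristic AI-text detection (all names and
// weights hypothetical). Blends three stylistic signals into a [0, 1] score.
function scoreAiLikelihood(text: string): number {
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (sentences.length === 0 || words.length === 0) return 0;

  // Signal 1: uniform sentence lengths (AI text tends to be evenly paced).
  const lengths = sentences.map(s => s.split(/\s+/).length);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance =
    lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  const uniformity = 1 / (1 + variance); // 1 = perfectly uniform

  // Signal 2: low type-token ratio (repetitive vocabulary).
  const ttr = new Set(words).size / words.length;

  // Signal 3: stock transition phrases common in generated prose.
  const connectors = ["furthermore", "moreover", "in conclusion", "additionally"];
  const hits = connectors.filter(c => text.toLowerCase().includes(c)).length;

  // Weighted blend, clamped to [0, 1]. Weights are illustrative only.
  const score = 0.4 * uniformity + 0.4 * (1 - ttr) + 0.2 * Math.min(hits / 2, 1);
  return Math.min(1, Math.max(0, score));
}
```

A real detector would tune these signals against labeled data; the point is that every input stays in the browser.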

This creates a feedback loop: the better generators become at mimicking human writing, the more sophisticated detection must become. Placing both systems side-by-side reveals how AI-generated text differs from human writing in subtle but measurable ways.

How it Works

01
Text Generation: GPT model produces streaming responses from user prompts
02
Pattern Detection: Heuristic analysis scores text for AI-authorship markers
03
Real-time Analysis: Detection updates live as generated text streams in
04
Client-side Processing: All detection runs locally in the browser — no data leaves your machine
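The streaming in steps 01 and 03 arrives over Server-Sent Events (the "SSE" in the stack list). The SSE frame format below matches OpenAI's standard streaming responses; the helper itself is a sketch, not the project's code.

```typescript
// Sketch: extracting text deltas from an OpenAI-style SSE stream.
// Each frame is a line "data: {...json...}"; the stream ends with "data: [DONE]".
function extractDeltas(sseChunk: string): string[] {
  const deltas: string[] = [];
  for (const line of sseChunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    try {
      const json = JSON.parse(payload);
      // Chat-completions delta shape: choices[0].delta.content
      const content = json.choices?.[0]?.delta?.content;
      if (typeof content === "string") deltas.push(content);
    } catch {
      // Ignore frames split across network chunks; a real client buffers them.
    }
  }
  return deltas;
}
```

In the browser, each decoded chunk from `response.body.getReader()` would be fed through a parser like this, with the accumulated text handed to the detector on every update.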

Features

Four systems working in concert to demonstrate the generator-detector dynamic.

01

GPT Text Generation

Creative writing from any prompt, streamed word-by-word with real-time display

02

Live Detection

Heuristic authorship analysis with confidence scoring as text generates

03

Client-side Analysis

All detection runs in the browser — text never leaves your machine

04

Semantic Visualization

Interactive word embedding map showing how AI understands language relationships

Demo

The visualization below maps words into a 2D semantic space, clustering them by meaning. This is the core technology behind both generation and detection — understanding how words relate to each other in high-dimensional space.

Semantic Word Relationships

Demo Mode

This is a simplified demonstration showing how words might be positioned based on semantic similarity. In the full version, AI embeddings and UMAP would provide more accurate positioning.

Sample words: happy, joy, sad, angry, cat, dog, computer, technology, nature, forest, ocean, mountain, love, friendship, war, peace

Word Categories (Demo):

Emotions
Animals
Technology
Nature
Conflict/Peace
Other

Note: This is a demonstration version. The full implementation would use:

  • @xenova/transformers for AI-generated word embeddings
  • UMAP algorithm for accurate dimensionality reduction
  • Real semantic similarity calculations

Word Embeddings

Words with similar meanings cluster together — "happy" near "joy," "cat" near "dog." Generators use these relationships to produce coherent text. Detectors exploit them to identify patterns that distinguish AI writing from human writing.
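The clustering described above comes down to cosine similarity between embedding vectors: nearby vectors mean related words. The function below is standard; the three-dimensional "embeddings" are toy stand-ins for real model output (which a library like @xenova/transformers would produce in hundreds of dimensions).

```typescript
// Cosine similarity between embedding vectors. With normalized embeddings,
// this reduces to a plain dot product.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy 3-d "embeddings" standing in for real model output:
const happy = [0.9, 0.1, 0.0];
const joy   = [0.8, 0.2, 0.1];
const cat   = [0.1, 0.9, 0.2];
```

With real embeddings, `cosineSimilarity(happy, joy)` comes out far higher than `cosineSimilarity(happy, cat)`, which is exactly the structure the 2D map projects down with UMAP.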

Conclusion

As AI text generation approaches human-level fluency, the line between human and machine authorship grows increasingly blurred. This project demonstrates that tension firsthand — generators that write convincingly, and detectors that must find ever-subtler signals to distinguish the source.

The implications reach beyond technology into content authenticity, academic integrity, and digital trust. Understanding both sides of this arms race is essential for navigating the future of human-AI communication.

Next Steps

01

Integrate transformer-based detection models for higher accuracy

02

Add statistical watermarking to generated text

03

Expand detection to handle paraphrasing and style transfer

04

Explore adversarial training between generator and detector
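Statistical watermarking (step 02 above) usually works by biasing generation toward a pseudo-randomly keyed "green list" of tokens; detection then checks whether green tokens appear more often than chance. The toy sketch below illustrates only the detection side, with a deliberately simple hash standing in for the seeded PRNG a real scheme would use.

```typescript
// Toy sketch of green-list watermark detection. A keyed hash of the previous
// word splits the vocabulary in half; watermarked text over-represents
// "green" words, so the observed fraction drifts above 0.5.
function isGreen(prevWord: string, word: string, key: number): boolean {
  // Tiny deterministic hash (illustrative only, not a real watermark scheme).
  let h = key;
  for (const ch of prevWord + "|" + word) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 2 === 0;
}

function greenFraction(text: string, key: number): number {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (words.length < 2) return 0.5; // too short to measure
  let green = 0;
  for (let i = 1; i < words.length; i++) {
    if (isGreen(words[i - 1], words[i], key)) green++;
  }
  return green / (words.length - 1);
}
```

Unwatermarked text hovers near 0.5; a generator nudged to prefer green tokens pushes the fraction measurably higher, giving the detector a statistical signal that survives fluent prose.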