Speculative Product · Information Architecture · AI-Assisted Build · 2026

Aurafact

A spatial archive for environmental audio designed from a cultural thesis, built in collaboration with AI, and hosted as a living product.

Visit Aurafact

Role

Lead Product Designer — Systems & IA

Type

Self-directed speculative product

Built With

Claude · Cursor · ChatGPT · p5.js · GSAP

Outcome

Live multi-page product hosted on GitHub — 11 archived specimens across 4 regions

Aurafact — hero screen showing animated waveform visualisation over a dark ground with the headline: Archiving the Acoustic Memory of Place

The Problem

We photograph every surface of the earth. We archive almost none of its sound.

We live in an era of visual over-saturation. Every landscape is routinely mapped, photographed, and filmed with clinical precision. Yet its acoustic identity disappears without record: the density of birdsong before a wildfire, the rhythmic clatter of a tram line that no longer runs, the low drone of a harbour at dawn.

Existing approaches to sound archiving sit at two unhelpful extremes. Academic sound libraries are rich in data but absent of experience: digital graveyards that no one browses. Ambient sound apps are beautiful to inhabit but impossible to study: toys, not tools. Neither treats environmental audio as what it actually is: a primary historical artifact.

Design question: How might we design a digital archive that treats environmental audio as a cultural artifact — searchable, interpretable, and accessible to radically different audiences?

Digital Graveyards

Rich in data, absent of experience. British Library Sound Archive — authoritative but with zero immersion.

Aurafact occupies the gap

Immersive utility. A research-grade archive that you actually want to inhabit.


Key Insight

Sound is not a file. It is an environmental event.

The insight that shaped everything: most audio interfaces treat a recording as a media object to be played. Aurafact treats it as a documented specimen, inseparable from the exact temperature, barometric pressure, microphone topology, and ecological conditions of its capture.

This reframed the entire information architecture. A recording doesn't belong in a playlist. It belongs in an archive with full environmental provenance, documented with the same rigour a museum applies to a physical artifact. The interface had to be built around that idea, not bolted onto it.
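The specimen idea can be sketched as a data shape: a recording bound to the conditions of its capture, archivable only when that provenance is complete. The field names below are illustrative, not Aurafact's actual schema.

```javascript
// Hypothetical specimen record: a recording inseparable from its capture
// conditions. Field names are illustrative, not Aurafact's actual schema.
const specimen = {
  id: "AF-014",
  artifact: "Harbour drone at dawn",
  location: { region: "Harbour", lat: 53.5461, lon: 9.9661 },
  capture: {
    recordedAt: "2026-03-02T05:40:00Z",
    temperatureC: 4.2,
    pressureHPa: 1013,
    micTopology: "ORTF stereo pair",
  },
  taxonomy: { source: "Anthropophony", texture: "Drone", environment: "Harbour" },
};

// A specimen qualifies for the archive only with full environmental provenance.
function hasFullProvenance(s) {
  const c = s.capture || {};
  return ["recordedAt", "temperatureC", "pressureHPa", "micTopology"]
    .every((k) => c[k] !== undefined);
}
```

The point of the shape is that playback is just one field among many; the environmental context is first-class data, not a caption.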

A place without its sound is only half remembered.

— Aurafact field notes, 2026
Aurafact Specimen Archive — searchable table showing specimen IDs, artifact names, locations, source types, subtypes, and frequency profiles with a detail panel open on the right
The Specimen Archive: each recording documented as a fully provenance-traced artifact, not a file entry.

AI Collaboration

Built with AI. Directed by design.

Aurafact was built in active collaboration with a small set of AI tools, each used for the specific role where it added the most value. This wasn't AI-as-shortcut. It was AI-as-medium: a new way of working that required the same design discipline as any other tool, but demanded a different kind of precision.

Claude

Copy, rationale, and case study structure

Used to review and refine all written content: the Manifesto, the Field Guide principles, specimen field notes, and the language of the taxonomy system. Every output was reviewed, edited, and approved before use. Claude's role was to push the writing toward precision, not to write autonomously.

Cursor

Front-end build and interaction development

Used to write and iterate on the HTML, CSS, and JavaScript across all four pages. The p5.js waveform simulation on the homepage, the GSAP page transitions, the archive table with its split-panel detail view, and the taxonomy colour system were all developed through directed prompting in Cursor — with each output reviewed against design intent before shipping.

ChatGPT + Perplexity

Research, taxonomy, and acoustic metadata

Used to develop the three-axis taxonomy system (Source · Texture · Environment), the acoustic metadata schema for each specimen, and the supporting research behind the accessibility framework. All outputs were cross-verified with Perplexity before being incorporated into the product. ChatGPT was a research assistant, not an authority.

The most important insight from this way of working: AI tools amplify the clarity of your thinking — or the confusion of it. Every prompt that returned something useful started with a precise design constraint. The quality of the output was a direct function of the quality of the brief.


Design Decisions

Four decisions that made sound navigable

01

The interface begins in darkness

The homepage opens on a near-black ground with a p5.js waveform animation: vertical frequency columns that pulse slowly across the viewport. There is no hero image. No bright photography. The darkness forces the eye to read the sound rather than consume a picture.

Tradeoff: A dark homepage is harder to read in bright environments and can feel cold to casual visitors. The product is not for casual visitors, and the darkness is the first signal of that.
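The slow, drifting pulse of the columns can be sketched as a pure amplitude function, independent of p5.js: each column gets a phase offset proportional to its position, so the pulse travels across the viewport. The column count and speed parameters are assumptions, not Aurafact's actual values.

```javascript
// Amplitude (0..1) for column i at time t (seconds): a slow sine pulse,
// phase-offset per column so the pulse drifts across the viewport.
// `columns` and `speed` are illustrative defaults, not production values.
function columnAmplitude(i, t, { columns = 48, speed = 0.2 } = {}) {
  const phase = (i / columns) * Math.PI * 2;
  return 0.5 + 0.5 * Math.sin(t * speed * Math.PI * 2 + phase);
}
```

In a p5.js draw loop, each column's height would be scaled by this amplitude every frame; blending in live spectrum data then modulates the same columns.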
02

Dual typography registers: serif for narrative, monospace for data

Libre Baskerville carries the editorial voice: the thesis, the field notes, the cultural argument. DM Mono carries the technical layer: specimen IDs, frequency profiles, GPS coordinates, metadata labels. The typographic switch is not decorative; it tells the user at a glance which register they are reading.

Tradeoff: Two typefaces in a single interface risk visual conflict. The risk is managed by restricting monospace strictly to data fields; it never appears in prose.
03

A three-axis taxonomy, not a tag cloud

Every specimen is classified across three independent axes: Source (Geophony · Biophony · Anthropophony), Texture (Rhythmic · Drone · Transient · Granular), and Environment (Forest · Harbour · Transit · Street). This was developed through structured research and validated against acoustic ecology literature. The taxonomy is the product's intellectual spine — it's what separates Aurafact from a folder of audio files.

Tradeoff: A rigid taxonomy breaks down at the edges, where some recordings sit between categories. The current version uses primary classification only; a future version would support multi-axis tagging per specimen.
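The three independent axes can be expressed as a small validator over the values named above. The function and its error strings are illustrative, not Aurafact's implementation.

```javascript
// The three independent axes and the values named in the case study.
const AXES = {
  source: ["Geophony", "Biophony", "Anthropophony"],
  texture: ["Rhythmic", "Drone", "Transient", "Granular"],
  environment: ["Forest", "Harbour", "Transit", "Street"],
};

// Validate a classification: every axis must be present and hold a known
// value. Returns a list of problems; an empty list means it is valid.
function validateClassification(c) {
  const problems = [];
  for (const [axis, allowed] of Object.entries(AXES)) {
    if (!(axis in c)) problems.push(`missing axis: ${axis}`);
    else if (!allowed.includes(c[axis])) problems.push(`unknown ${axis}: ${c[axis]}`);
  }
  return problems;
}
```

Because the axes are independent, every specimen occupies exactly one cell in a 3 × 4 × 4 space; that is what makes the archive filterable rather than a tag cloud.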
04

The archive is a split-panel database, not a media gallery

The Archive page presents specimens as rows in a research database — Specimen ID, Artifact, Location, Source Type, Subtype, Frequency Profile. Selecting a row opens a detail panel on the right: audio playback, metadata fields, and field notes. No thumbnails. No cards. This is a deliberate break from how audio is usually presented online.

Tradeoff: A table-first layout is less immediately engaging than a card grid for casual browsers. It is significantly more useful for the researchers, archivists, and sound designers who are Aurafact's actual audience.
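The table's live search and column sorting can be sketched as pure functions over specimen rows. Column names follow the case study's table; the matching logic is an assumption, not the shipped code.

```javascript
// Live search: case-insensitive substring match across the visible
// text columns. An empty query returns all rows unchanged.
function searchRows(rows, query) {
  const q = query.trim().toLowerCase();
  if (!q) return rows;
  return rows.filter((r) =>
    [r.id, r.artifact, r.location, r.sourceType, r.subtype]
      .some((v) => String(v).toLowerCase().includes(q))
  );
}

// Column sort that copies before sorting, so the archive's source
// order is never mutated by the view.
function sortRows(rows, column, ascending = true) {
  return [...rows].sort((a, b) => {
    const cmp = String(a[column]).localeCompare(String(b[column]));
    return ascending ? cmp : -cmp;
  });
}
```

Keeping these as pure functions means the split-panel detail view can re-render from the same source data no matter how the table is currently filtered or sorted.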
Visit the live product
Aurafact Field Guide — design principles panel on the left alongside acoustic studies photography grid on the right
The Field Guide: design principles and acoustic studies, the product's intellectual and visual foundation made public.

Outcome

A live, multi-page product that demonstrates what a research-grade audio archive looks and feels like.

Aurafact is hosted on GitHub and live at jenniferyaya.github.io/aura-fact. The product comprises four pages — Homepage, Archive, Field Guide, and Glossary — with a design system built on a strict 8pt grid, dual-register typography, and a consistent token set. The archive currently holds 11 documented specimens across 4 regions, each with full environmental provenance metadata.

The homepage's p5.js waveform animation responds to live audio spectrum data. The archive table supports sorting and live search. The detail panel opens on row selection with audio playback, metadata, and field notes. Every interaction was built through directed AI prompting, reviewed against design intent, and refined until it matched the product's archival character.

The project demonstrated something worth naming: when you work with AI as a collaborator rather than a generator, the constraint is always your own precision. The clearer the design brief, the better the output. The more specific the prompt, the less revision required. AI fluency, at its core, is design fluency.

Aurafact homepage — below-fold section showing acoustic preservation statement and recent additions table with specimen data
Homepage below fold: the acoustic preservation manifesto alongside the most recent archive additions.

Reflection

What designing for sound revealed about interfaces

Aurafact made visible how thoroughly visual modern interface paradigms are. Search, browse, filter, sort — every standard pattern is built for the eye. Designing a product where sound is the primary artifact required building a different representational vocabulary from scratch: taxonomy as sensory translation, metadata as environmental context, typography register as content signal.

Working with AI throughout the build also changed how I think about the relationship between design thinking and tool use. A prompt is a brief. A revision is a critique. Cursor and Claude don't know what good looks like in your product — you do. The designer's job doesn't disappear; it becomes more precise.

The next horizon is using AI not just to build the archive but to recover what it contains: reconstructing lost acoustic environments from fragmentary recordings.

Next Project

Clara: Clarity and collaboration for every journey