A spatial archive for environmental audio designed from a cultural thesis, built in collaboration with AI, and hosted as a living product.
Visit Aurafact
We live in an era of visual over-saturation. Every landscape is routinely mapped, photographed, and filmed with clinical precision. Its acoustic identity disappears without record: the density of birdsong before a wildfire, the rhythmic clatter of a tram line that no longer runs, the low drone of a harbour at dawn.
Existing approaches to sound archiving sit at two unhelpful extremes. Academic sound libraries are rich in data but absent of experience — digital graveyards that no one browses. Ambient sound apps are beautiful to inhabit but impossible to study: toys, not tools. Neither treats environmental audio as what it actually is: a primary historical artifact.
Design question: How might we design a digital archive that treats environmental audio as a cultural artifact — searchable, interpretable, and accessible to radically different audiences?
Digital Graveyards
Rich in data, absent of experience. British Library Sound Archive — authoritative but with zero immersion.
Aurafact occupies the gap
Immersive utility. A research-grade archive that you actually want to inhabit.
The insight that shaped everything: most audio interfaces treat a recording as a media object to be played. Aurafact treats it as a documented specimen, inseparable from the exact temperature, barometric pressure, microphone topology, and ecological conditions of its capture.
This reframed the entire information architecture. A recording doesn't belong in a playlist. It belongs in an archive with full environmental provenance, documented with the same rigour a museum applies to a physical artifact. The interface had to be built around that idea, not bolted onto it.
A place without its sound is only half remembered.
— Aurafact field notes, 2026
Aurafact was built in active collaboration with three AI tools — each used for a specific role where it added the most value. This wasn't AI-as-shortcut. It was AI-as-medium: a new way of working that required the same design discipline as any other tool, but demanded a different kind of precision.
Claude
Copy, rationale, and case study structure
Used to review and refine all written content: the Manifesto, the Field Guide principles, specimen field notes, and the language of the taxonomy system. Every output was reviewed, edited, and approved before use. Claude's role was to push the writing toward precision, not to write autonomously.
Cursor
Front-end build and interaction development
Used to write and iterate on the HTML, CSS, and JavaScript across all four pages. The p5.js waveform simulation on the homepage, the GSAP page transitions, the archive table with its split-panel detail view, and the taxonomy colour system were all developed through directed prompting in Cursor — with each output reviewed against design intent before shipping.
ChatGPT + Perplexity
Research, taxonomy, and acoustic metadata
Used to develop the three-axis taxonomy system (Source · Texture · Environment), the acoustic metadata schema for each specimen, and the supporting research behind the accessibility framework. All outputs were cross-verified with Perplexity before being incorporated into the product. ChatGPT was a research assistant, not an authority.
The most important insight from this way of working: AI tools amplify the clarity of your thinking — or the confusion of it. Every prompt that returned something useful started with a precise design constraint. The quality of the output was a direct function of the quality of the brief.
The homepage opens on a near-black ground with a p5.js waveform animation: vertical frequency columns that pulse slowly across the viewport. There is no hero image. No bright photography. The darkness forces the eye to read the sound rather than consume a picture.
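A minimal sketch of that idea, for readers who want to see the mechanics: slow, noise-driven columns on a near-black ground. The column count, colours, and noise parameters here are illustrative assumptions, not the shipped values.

```javascript
// Minimal sketch of the homepage waveform: vertical frequency columns
// pulsing slowly over a near-black ground. Column count, colours, and
// noise parameters are illustrative, not the production values.
const COLUMNS = 64;

function setup() {
  createCanvas(windowWidth, 320);
  noStroke();
}

function draw() {
  background(8); // near-black ground
  const colWidth = width / COLUMNS;
  for (let i = 0; i < COLUMNS; i++) {
    // Perlin noise drives a slow, organic pulse per column
    const pulse = noise(i * 0.15, frameCount * 0.01);
    const barHeight = pulse * height * 0.8;
    fill(220); // pale column against the dark ground
    rect(i * colWidth + colWidth * 0.25, (height - barHeight) / 2,
         colWidth * 0.5, barHeight);
  }
}

function windowResized() {
  resizeCanvas(windowWidth, 320);
}
```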
Libre Baskerville carries the editorial voice: the thesis, the field notes, the cultural argument. DM Mono carries the technical layer: specimen IDs, frequency profiles, GPS coordinates, metadata labels. The typographic switch is not decorative. It tells the user which register they are reading in at a glance.
Every specimen is classified across three independent axes: Source (Geophony · Biophony · Anthropophony), Texture (Rhythmic · Drone · Transient · Granular), and Environment (Forest · Harbour · Transit · Street). This was developed through structured research and validated against acoustic ecology literature. The taxonomy is the product's intellectual spine — it's what separates Aurafact from a folder of audio files.
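To make the taxonomy concrete, here is an illustrative shape for a single specimen record. The field names, ID format, and example values are assumptions made for this sketch, not the shipped schema; only the three axes and the provenance fields described above are taken from the project itself.

```javascript
// Illustrative shape of one specimen record under the three-axis
// taxonomy. Field names, the ID format, and example values are
// assumptions for the sake of the sketch, not the production schema.
const specimen = {
  id: "AF-000",                      // hypothetical specimen ID format
  artifact: "Harbour at dawn",       // placeholder title
  location: { region: "Harbour", coordinates: null },
  taxonomy: {
    source: "Anthropophony",         // Geophony | Biophony | Anthropophony
    texture: "Drone",                // Rhythmic | Drone | Transient | Granular
    environment: "Harbour"           // Forest | Harbour | Transit | Street
  },
  provenance: {
    temperatureC: null,              // capture conditions, left blank here
    barometricPressureHPa: null,
    microphone: null,
    ecologicalNotes: null
  },
  frequencyProfile: null,
  fieldNotes: null
};
```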
The Archive page presents specimens as rows in a research database — Specimen ID, Artifact, Location, Source Type, Subtype, Frequency Profile. Selecting a row opens a detail panel on the right: audio playback, metadata fields, and field notes. No thumbnails. No cards. This is a deliberate break from how audio is usually presented online.
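The split-panel interaction reduces to a small amount of behaviour. The sketch below assumes element IDs, class names, a `data-specimen-id` attribute, and a `specimens` lookup keyed by ID (like the record sketch above); none of these names come from the shipped code.

```javascript
// Minimal sketch of the split-panel pattern: selecting a table row
// fills the detail panel with that specimen's metadata and notes.
// Element ids, data attributes, and the `specimens` lookup are
// assumptions for illustration.
const panel = document.getElementById("detail-panel");

document.querySelectorAll("#archive-table tbody tr").forEach((row) => {
  row.addEventListener("click", () => {
    const record = specimens[row.dataset.specimenId]; // assumed lookup by ID
    panel.querySelector(".panel-title").textContent = record.artifact;
    panel.querySelector(".panel-notes").textContent = record.fieldNotes;
    panel.querySelector("audio").src = record.audioUrl;
    panel.hidden = false;
  });
});
```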
A live, multi-page product that demonstrates what a research-grade audio archive looks and feels like.
Aurafact is hosted on GitHub and live at jenniferyaya.github.io/aura-fact. The product comprises four pages — Homepage, Archive, Field Guide, and Glossary — with a design system built on a strict 8pt grid, dual-register typography, and a consistent token set. The archive currently holds 11 documented specimens across 4 regions, each with full environmental provenance metadata.
The homepage's p5.js waveform animation responds to live audio spectrum data. The archive table supports sorting and live search. The detail panel opens on row selection with audio playback, metadata, and field notes. Every interaction was built through directed AI prompting, reviewed against design intent, and refined until it matched the product's archival character.
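The live search behaves like a straightforward text filter over the table rows. This is a sketch of that behaviour under assumed selectors, not the production implementation.

```javascript
// Minimal sketch of the archive's live search: hide any row whose
// visible text does not contain the query. Selectors are assumptions.
const searchInput = document.getElementById("archive-search");
const rows = document.querySelectorAll("#archive-table tbody tr");

searchInput.addEventListener("input", () => {
  const query = searchInput.value.trim().toLowerCase();
  rows.forEach((row) => {
    row.hidden = query !== "" && !row.textContent.toLowerCase().includes(query);
  });
});
```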
The project demonstrated something worth naming: when you work with AI as a collaborator rather than a generator, the constraint is always your own precision. The clearer the design brief, the better the output. The more specific the prompt, the less revision required. AI fluency, at its core, is design fluency.
Aurafact made visible how thoroughly visual modern interface paradigms are. Search, browse, filter, sort — every standard pattern is built for the eye. Designing a product where sound is the primary artifact required building a different representational vocabulary from scratch: taxonomy as sensory translation, metadata as environmental context, typography register as content signal.
Working with AI throughout the build also changed how I think about the relationship between design thinking and tool use. A prompt is a brief. A revision is a critique. Cursor and Claude don't know what good looks like in your product — you do. The designer's job doesn't disappear; it becomes more precise.
The next horizon is using AI not just to build the archive, but to recover what it contains: reconstructing lost acoustic environments from fragmentary recordings.