Planners Are Accountable. The AI Isn't.
Municipal planners make consequential decisions (zoning approvals, housing density, displacement risk) that are increasingly informed by AI outputs they can't fully interrogate. When those decisions fail politically or legally, the model doesn't answer for it. The planner does.
Most tools are designed for speed. None are designed for defensibility.
"The model might be accurate, but accuracy isn't enough. I need transparency."
The Design Challenge
How might we structure AI outputs so planners can defend every decision under legal, political, and public scrutiny?
Sole UX designer on a cross-functional team.
I worked closely with three software engineers and a data scientist to translate simulation model logic into interfaces planners could read, interrogate, and act on.
- Persona development and problem framing
- Information architecture and interaction design
- Transparency, flagging, and audit trail patterns
- High-fidelity interface design (Figma)
- Stakeholder presentation design and content
Planners don't need faster models. They need outputs they can defend.
This reframed the entire project. The design question was no longer about efficiency — it was about accountability. Every interface decision followed from that shift.
CAN-SIMPLAN is not a decision-maker. It's a structured oversight layer.
- Runs multi-domain simulation scoring
- Surfaces confidence levels and flagged risks
- Requires planner engagement before approval
- Generates council-ready, audit-backed outputs
Core principle: Defensible over fast.
01 — Blocking Flags
Flags require a written rationale before the workflow can proceed. In a governance context, the interface has to enforce engagement, not rely on professional diligence.
02 — Visible Confidence
Model confidence, version, and last-run timestamp surface in the proposal header, not buried in a technical panel. Planners need to know how much to trust an output before they engage with it. Surfacing uncertainty is not a weakness; it's honesty.
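The trust metadata the header surfaces can be modelled as a small immutable record. Field names and the summary format are assumptions for illustration, not the product's data model.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class ModelProvenance:
    """Trust metadata shown in the proposal header (field names illustrative)."""
    confidence: float   # 0.0 to 1.0
    model_version: str
    last_run: datetime  # assumed to be UTC

    def header_summary(self) -> str:
        """One line a planner reads before engaging with the output."""
        return (f"Confidence {self.confidence:.0%} | "
                f"Model {self.model_version} | "
                f"Last run {self.last_run:%Y-%m-%d %H:%M} UTC")
```

Making the record frozen mirrors the design intent: provenance is reported, never edited.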
03 — Plain-Language Translation
Every technical flag includes a plain-language explanation written for planners, not data scientists. The planner's job is judgment. The interface's job is translation. The flag descriptions do the interpretation work so cognitive load stays where it belongs.
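The translation layer amounts to a maintained mapping from technical flag codes to planner-facing language, with an honest fallback when no translation exists yet. The codes and wording below are hypothetical, not CAN-SIMPLAN's actual flag taxonomy.

```python
# Illustrative mapping; codes and copy are hypothetical examples.
PLAIN_LANGUAGE = {
    "LOW_SAMPLE_N": (
        "This estimate is based on very few comparable cases, so treat it "
        "as a rough indication rather than a firm prediction."
    ),
    "COVARIATE_DRIFT": (
        "Local conditions have changed since the model was trained; "
        "recent data may not match what the model expects."
    ),
}


def explain(code: str) -> str:
    """Return planner-facing copy; admit the gap rather than show raw jargon."""
    return PLAIN_LANGUAGE.get(code, f"No plain-language explanation yet for {code}.")
```

The fallback matters as much as the mapping: showing "no explanation yet" keeps the interface honest instead of leaking untranslated model output into a governance workflow.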
04 — Embedded Audit Trail
All overrides are logged, timestamped, and immutable within the proposal view, not in a separate admin panel. Placing the audit trail inside the primary workflow signals that documentation is part of the job, not a bureaucratic add-on. If the trail is buried, the message is that accountability is secondary.
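The logging pattern is append-only: entries can be recorded and read, never edited or removed. A minimal sketch, assuming frozen entries and a read-only view (names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    """One immutable record of a planner action (illustrative shape)."""
    actor: str
    action: str
    rationale: str
    timestamp: datetime


class AuditTrail:
    """Append-only log: record and read, no edit or delete API exists."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, actor: str, action: str, rationale: str) -> AuditEntry:
        entry = AuditEntry(actor, action, rationale, datetime.now(timezone.utc))
        self._entries.append(entry)
        return entry

    @property
    def entries(self) -> tuple[AuditEntry, ...]:
        return tuple(self._entries)  # read-only snapshot for the proposal view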
Designing for governance changes the constraints entirely.
Usability is not the primary constraint; accountability is. That distinction reshaped how I thought about friction, structure, and trust.
- Friction can be protective. Blocking flags aren't bad UX; they're the right UX for a governance context.
- Structure builds trust. When a system enforces documentation, it signals to every stakeholder that the process is serious.
- Auditability is a UX problem. If the audit trail is buried or hard to read, it doesn't function as accountability; it's just data.
Working cross-functionally also pushed me to get comfortable at the boundary between technical model logic and human decision workflows, translating in both directions.
Given more time and access to users, I'd prioritise:
- Usability testing with senior planners on the flag engagement and override flows
- WCAG 2.1 AA accessibility audit — high-stakes civic tools must be universally accessible
- Mobile workflow for field visits and on-site proposal review
- Design of the post-approval feedback loop, where real outcomes recalibrate the model over time