Transform UX: AI Co-Creator for Data Pipelines

A ground-up reimagining of data transforms into an AI co-authoring workflow—AI drafts, humans verify, the system proves it with preview/diff/history, run + compare, and rollback—so it feels like collaboration, not editing scripts.

Role

Lead Product Designer

Timeline

2025 — Present

Team

Core pod of 4 (Product, Engineering, Design)

At a Glance

  • ↓ 35% time to first successful transform
  • ↓ 25% runs to success — faster iteration cycles
  • ~40% of transforms completed without any manual code writing
  • Established a reusable UI pattern for the platform — downstream teams can now inherit this interaction model

Overview

C3 Transform is the data transformation engine at the core of C3's platform product, AI Studio: it is how enterprise teams shape raw data into inputs for AI models and, eventually, serve it into apps. We began with a manual-first redesign to support existing low-code and high-code workflows, with AI generation added as a secondary path. After user testing, it became clear that AI should not sit on the side as a helper. It should be the starting point. V2 shifted to a chat-first co-authoring model where users guide the system, iterate in real time, and move into code whenever needed. That pivot is what made the experience both faster and more usable.

My Role

  • Owned the UX end to end — from strategy through final shipped design across authoring, review, and execution flows
  • Designed the AI co-authoring workflow: a chat-based drafting and review experience that lets less-technical users start from intent while technical users can inspect, edit, or take over in code
  • Built trust into the verification layer — preview, diff, history, run + compare, and rollback — so teams could safely validate AI-generated transforms before committing
  • Validated iteratively with data engineers, solution engineers, and app developers across multiple rounds of testing

The Problem

Users ran into three core problems. Newer users had no clear starting point and were expected to understand schemas and mappings before they could do anything useful. Experienced users often avoided the low-code experience because it obscured the underlying logic, which is a dealbreaker in data engineering where transparency matters. And the workflow itself was broken: the wizard forced a linear process onto a task that is naturally iterative, so troubleshooting became slow and painful, with users manually translating error logs into code fixes.

What I Learned from Users

01

Users want to steer, not spectate — the best moments in testing were when someone pushed back on a suggestion, redirected the AI, or jumped into the code themselves. The worst were when they felt like they were just waiting for something to approve.

02

Real-time preview was the trust unlock — seeing a transform execute against live data before committing changed everything. No amount of diffs or confirmation dialogs came close to the confidence that gave people.

03

The expression language was a wall, not a floor — even experienced data engineers were copy-pasting from old projects rather than writing from scratch, because the syntax rewarded institutional knowledge over logic.

The Approach

We started with a competitive sweep of AI-native tools with strong co-authoring and code-generation patterns — not to copy them, but to understand how they handled trust, control, and transparency with AI in the loop. From there, we ran PDE (Product, Design, Engineering) whiteboarding sprints to align on the authoring model before touching UI. I moved quickly into low-fidelity prototypes to test the workflow without over-investing in visuals, then validated concepts with data engineers, solution engineers, and app developers. That process shaped the final co-authoring model and helped us prioritize the verification layer: preview, diff, history, run + compare, and rollback.

Key Design Decisions

V1 — Passive AI

Inline widget: AI generates, human approves

The first version gave users an inline widget next to the editor. Select a target field, describe what you want, the AI writes the expression. Fast and contained — looked great on a whiteboard. In practice, users felt like they were filling out a form for the AI and waiting for it to hand something back. They couldn't steer it, only accept or reject it.

Transform Select Target

Transform AI Widget

Diff view: transparency without agency

We added a line-by-line diff so users could see exactly what the AI was changing before committing. It reduced anxiety and built some trust. But transparency isn't the same as agency — users were still reviewing something they didn't write, in a language they barely knew. The core problem wasn't visibility. It was that the AI was driving and users were passengers.

Transform AI Diff

Transform AI Done

V2 — Chat-First

Full pivot: chat-based co-authoring

We removed the widget and built a persistent chat panel alongside the editor. Users push, redirect, and refine in conversation — less-technical users start from plain language; technical users can inspect, edit, and take over in code at any point. The AI stopped being a vending machine and started being a collaborator you can actually argue with.

AI Chat

Transform Diff

Verification layer and the pipeline canvas

With a trusted authoring surface in place, we redesigned the pipeline canvas to show the full data flow at a glance, then built out the verification layer: real-time preview, run + compare, full history, and one-click rollback. Every feature was earned by something we saw fail in testing — not designed upfront, discovered through it.
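For readers curious how these verification features can hang together, here is a minimal, purely illustrative TypeScript sketch (not C3's actual implementation — all names are hypothetical): every AI or human edit becomes an immutable version in an append-only history, which is what makes diff, history, and one-click rollback cheap and safe.

```typescript
// Hypothetical model of the verification layer described above.
// Each edit — AI-drafted or human-written — is committed as an
// immutable version, so nothing is ever lost or overwritten.
type Version = { id: number; expression: string; author: "ai" | "human" };

class TransformHistory {
  private versions: Version[] = [];

  // Commit a new version to the append-only history.
  commit(expression: string, author: "ai" | "human"): Version {
    const v: Version = { id: this.versions.length + 1, expression, author };
    this.versions.push(v);
    return v;
  }

  // The head of the history — what preview and diff compare against.
  current(): Version | undefined {
    return this.versions[this.versions.length - 1];
  }

  // One-click rollback: re-commit an earlier version as the new head,
  // so the rollback itself is also recorded in history.
  rollback(id: number): Version {
    const target = this.versions.find((v) => v.id === id);
    if (!target) throw new Error(`no version ${id}`);
    return this.commit(target.expression, "human");
  }
}
```

Modeling rollback as a new commit (rather than truncating history) mirrors the trust principle in the case study: users can always see what happened and undo any step, including the rollback itself.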

Transform Complete

Data Pipeline Nodes

Success Targets

  • ↓ 35% time to first successful transform
  • ↓ 25% runs to success — faster iteration cycles
  • ~40% of transforms completed without any manual code writing
  • Established a reusable UI pattern for the platform — downstream teams can now inherit this interaction model

What I Would Do Next

  • Extend AI assistance to other parts of the pipeline beyond transforms (data source configuration, model tuning)
  • Build a transform library where teams can share and reuse proven patterns across projects
  • Explore proactive AI suggestions that detect common data quality issues before users encounter them