
AI-Augmented Operations: How One Person Owns the Entire Value Chain

March 2026 · 6 min read

The bottom line

I own Groupon's Merchant Lifecycle Engine — from BigQuery data pipelines to codebase architecture proposals to cross-functional alignment with the COO. The scope would traditionally need 6 people. I cover ~80% of it with Claude Code, 6 MCP servers, and concurrent agents — with a skilled fullstack engineer who reviews, corrects, and ships what I design. AI isn't magic — it writes inefficient SQL, misunderstands architecture, and times out at the worst moments. Here's what works, what doesn't, and why domain knowledge is still the bottleneck.

What I own

At Groupon, I own the Merchant Lifecycle Engine — a system that detects when merchant deals are underperforming (sold out, low conversion, expiring, high refunds) and triggers automated outreach across SMS, email, Salesforce, Google Chat, and Bloomreach.

That means I'm responsible for the data (querying 228M+ row tables in BigQuery, building Keboola pipelines), the systems (configuring campaigns, proposing code changes via GitHub), the process (mapping 40+ merchant workflows across 13 countries), the people (aligning COO, sales leadership, BI, product, and international teams), and the measurement (Tableau dashboards, attribution models).

The traditional org chart says you need:

  • Data Engineer
  • Data Analyst
  • Product Manager
  • Solutions Architect
  • Project Manager
  • Sales Ops Lead

Instead: me + AI, plus one fullstack engineer.

I design, prototype, and hand off. The engineer reviews, corrects, and ships to production.

What's actually live

Not everything is in production yet. To be transparent about what's deployed vs. what's still in progress:

Live in production

  • SMS alerts — generating $100K+ quarterly with direct attribution
  • Email via Salesforce Marketing Cloud — operational
  • Inventory alerts with closed-loop attribution
  • Salesforce task creation and Google Chat escalation

In progress / POC

  • Bloomreach email — working POC, migrating from SFMC
  • Some alert types still being built — not all 40+ are automated
  • Attribution model V1 — works for inventory alerts, still rough for others
  • Magic link architecture — designed, engineering in progress

The multiplier effect

  • Explore a new data source. Before: file a ticket with BI, wait 3-5 days. After: query the BigQuery MCP directly (5 min).
  • Build a data pipeline. Before: a BI engineer sprint (2 weeks). After: prototype in the Keboola MCP (2 hours).
  • Find duplicate workstreams. Before: weeks of cross-team meetings. After: search the Asana MCP across all projects (10 min).
  • Design a system integration. Before: requirements doc, then a discovery sprint. After: search the codebase via gh CLI and propose the architecture (1 session).
  • Test an unfamiliar API. Before: an engineering spike (1 week). After: live API testing in the terminal (1 hour).
  • Cross-reference 4 data sources. Before: 4 requests to 4 teams. After: 4 concurrent Claude Code agents (5 min).

What I do — and how AI changed it

1. Data: BigQuery · Keboola
2. Systems: APIs · integrations
3. Discovery: concurrent agents
4. People: Asana · stakeholders
5. Code: GitHub → engineer

1. Data — BigQuery MCP + Keboola MCP

Before

Filed a ticket with BI. Waited 3-5 days for a query result.

After

Queried a 4.5 billion row, 7.2TB table directly. 3 minutes.

I connect directly to Groupon's BigQuery through Claude Code via MCP. When I need to understand a data source, I explore it live — schema, distributions, volumes. The Keboola MCP lets me create extraction configs, SQL transformations, and flow orchestrations. A pipeline that would take a BI engineer a sprint to scope and build, I can prototype in an afternoon.
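Exploring a new table starts with one profiling pass, not dozens of ad-hoc queries. A minimal sketch of that first query, with hypothetical table and column names, shown here as Python assembling the SQL rather than an actual MCP call:

```python
# Sketch: profile an unfamiliar table in one GROUP BY pass.
# Table name, dimensions, and the created_at column are hypothetical.
def profile_query(table: str, dims: list[str]) -> str:
    """Build a single query that profiles row counts and recency per dimension."""
    cols = ", ".join(dims)
    return (
        f"SELECT {cols}, COUNT(*) AS row_count, "
        f"MIN(created_at) AS first_seen, MAX(created_at) AS last_seen "
        f"FROM `{table}` GROUP BY {cols} ORDER BY row_count DESC LIMIT 50"
    )

sql = profile_query("project.dataset.merchant_deals", ["country", "deal_status"])
print(sql)
```

One aggregated scan answers "what's in here, how much, and how fresh" before any pipeline work begins.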

2. Systems — direct API testing

Before

Engineering spike to test an unfamiliar API. 1 week minimum.

After

Built a working Bloomreach email POC in one session. Never used their API before.

When I designed the Bloomreach email channel, the AI read their docs, tried endpoints, debugged authentication errors (wrong API group, wrong consent category), and iterated until it worked. Not an MCP — direct API calls via curl in the terminal. From “we should add email” to working POC with tracked clicks in one sitting.
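The debug loop from that session, reduced to a sketch: build the request, inspect it before sending, read the API's error, adjust, retry. The endpoint, auth scheme, and payload fields below are illustrative placeholders, not Bloomreach's actual API surface:

```python
import json
import urllib.request

# Hypothetical endpoint standing in for the real campaign-trigger API.
ENDPOINT = "https://api.example.test/v1/email/send"

def build_request(token: str, payload: dict) -> urllib.request.Request:
    """Assemble the POST so headers and body can be inspected before sending."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("TEST_TOKEN", {"campaign_id": "replenish-v1",
                                   "recipient": "merchant@example.com"})
# Inspect before sending; urllib normalizes header capitalization.
print(req.get_method(), req.get_header("Content-type"))
```

Each failed attempt (wrong API group, wrong consent category) changes one field in the payload or headers, and the loop repeats until the API accepts it.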

3. Discovery — concurrent agents

Before

Weeks of meetings across teams to understand who's building what.

After

4 concurrent agents found 3 teams building the same system. 5 minutes.

The real multiplier: running multiple Claude Code agents simultaneously. One searching Asana MCP, one querying BigQuery MCP, one searching GitHub via gh CLI, one fetching Jira MCP tickets. In minutes, I had a comprehensive picture that would have taken weeks of meetings. I discovered that three different teams were independently building merchant notification systems with overlapping triggers.
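The fan-out pattern above, as a minimal sketch: four independent lookups run in parallel and join into one picture. The lookup functions are stand-ins for the real agents (Asana MCP, BigQuery MCP, gh CLI, Jira MCP), not actual integrations:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in "agents" -- each would really be a Claude Code session
# querying its own tool. Result strings are illustrative.
def search_asana(q):    return f"asana: tasks matching {q!r}"
def query_bigquery(q):  return f"bigquery: tables referencing {q!r}"
def search_github(q):   return f"github: files mentioning {q!r}"
def fetch_jira(q):      return f"jira: tickets tagged {q!r}"

def discover(topic: str) -> list[str]:
    """Fan one question out to all agents concurrently, collect in order."""
    agents = [search_asana, query_bigquery, search_github, fetch_jira]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, topic) for agent in agents]
        return [f.result() for f in futures]

for line in discover("merchant notifications"):
    print(line)
```

The point is the shape, not the code: the question goes out once, four answers come back at the speed of the slowest lookup instead of the sum of all four.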

4. People — Asana MCP

Before

Status meetings, slide decks, manual data pulls for leadership.

After

COO checks Asana anytime — no meeting needed.

Every significant decision goes into Asana with supporting data, tagged stakeholders, and follow-up tasks. Using the Asana MCP, I search across all workspaces for duplicate workstreams. Found 12 tasks across 4 projects owned by 4 teams — consolidated into one unified catalogue.
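The duplicate-workstream pass looks roughly like this: group tasks across projects by a normalized title and flag any title owned by more than one team. The task data below is made up; in practice it comes back from Asana MCP searches:

```python
from collections import defaultdict

# Hypothetical search results from different projects/teams.
tasks = [
    {"title": "Merchant Notification System",   "project": "Sales Ops", "team": "A"},
    {"title": "merchant notification system",   "project": "Platform",  "team": "B"},
    {"title": "Notification system (merchant)", "project": "Intl",      "team": "C"},
    {"title": "Quarterly deal review",          "project": "Sales Ops", "team": "A"},
]

def normalize(title: str) -> str:
    # Crude normalization: lowercase, strip punctuation, sort the words.
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return " ".join(sorted(words))

def find_duplicates(tasks):
    """Return title groups that more than one team is working on."""
    groups = defaultdict(list)
    for t in tasks:
        groups[normalize(t["title"])].append(t)
    return {k: v for k, v in groups.items() if len({t["team"] for t in v}) > 1}

dupes = find_duplicates(tasks)
print(len(dupes), "duplicated workstream(s)")
```

A fuzzier matcher would catch more, but even this word-sort trick surfaces the obvious collisions worth a human look.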

5. Code — GitHub gh CLI → engineer handoff

Before

Ambiguous requirements. Engineering spent 2 sprints on discovery.

After

I trace the architecture and propose the implementation. The engineer ships in 3 days.

I search the monorepo via gh CLI, trace the architecture, and propose specific implementations. But I don't merge code directly. Every proposal gets handed off to engineering for review, correction, and production deployment. The design is specific enough that it's a build task, not a discovery task. Engineering time goes from “2 sprints to figure out what to build” to “3 days to build what's already designed.”
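The tracing itself is a chain of gh CLI searches: find where a symbol is defined, then who calls it. A sketch of composing those invocations, with a hypothetical repo and hypothetical identifiers:

```python
import shlex

def gh_code_search(repo: str, query: str, limit: int = 20) -> str:
    """Compose a `gh search code` invocation scoped to one repo."""
    return f"gh search code {shlex.quote(query)} --repo {repo} --limit {limit}"

# Hypothetical trace: definition first, then call sites.
steps = [
    gh_code_search("acme/monorepo", "MerchantAlertConfig"),  # where is it defined?
    gh_code_search("acme/monorepo", "sendMerchantAlert("),   # who calls it?
]
for cmd in steps:
    print(cmd)
```

Each search narrows the next, until the proposal can name exact files and functions rather than "somewhere in the alerting service."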

What doesn't work

This isn't a story about AI being perfect. It's a story about it being useful despite being deeply flawed. Here's what breaks regularly:

Inefficient SQL

The BigQuery and Keboola MCPs generate working queries, but often terrible ones: one query per account instead of a single query for all accounts, token budgets burned downloading raw rows when a simple aggregation would do, sloppy filters that return 10x more data than needed. Every query needs human review for efficiency.
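The most common review fix, in miniature: collapsing the per-account N+1 pattern into one aggregated statement. Table and column names are hypothetical:

```python
# Hypothetical account IDs; in practice this list comes from a prior query.
accounts = ["acc_01", "acc_02", "acc_03"]

# What the MCP tends to emit: one round-trip per account.
naive = [
    f"SELECT SUM(revenue) FROM deals WHERE account_id = '{a}'"
    for a in accounts
]

# What a reviewer rewrites it to: one query, one scan, grouped results.
in_list = ", ".join(f"'{a}'" for a in accounts)
rewritten = (
    "SELECT account_id, SUM(revenue) AS revenue "
    f"FROM deals WHERE account_id IN ({in_list}) "
    "GROUP BY account_id"
)

print(f"{len(naive)} queries -> 1")
```

On a 228M+ row table the difference between N scans and one scan is the difference between a usable prototype and a cost incident.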

Architecture confusion

When proposing GitHub code changes, AI regularly confuses backend and frontend boundaries. It hardcodes configuration values instead of connecting to real config services. It doesn't understand the relationship between the data layer, the backend Encore TS service, and the frontend — and merges concepts that should be separate. Every proposal needs engineering review and correction.

MCP reliability

MCP servers time out. Authentication breaks. Access permissions are wrong. The Bloomreach integration required iterating through multiple auth failures (wrong API group, wrong consent category) before it worked. This isn't plug-and-play — it's plug-and-debug.

Domain knowledge is still the bottleneck

AI can query any table and search any codebase — but it can't understand why the sales team in Germany handles merchant outreach differently than the US. It can't read the room in a stakeholder meeting. It can't make the judgment call about which alerts to prioritize. The 80% it covers is the mechanical work. The 20% that matters most is still human.

Results

$2-3M incremental profit from early replenishments · 40+ processes catalogued · 13 countries researched

  • $2-3M verified annualised incremental profit captured from early replenishments when deals sell out
  • SMS alerts live in the USA; internal Salesforce tasks operational; legacy SFMC emails being merged into a unified Bloomreach system
  • Bloomreach email: zero to working POC in one session (now migrating to production)
  • 3 data pipelines (Encore → Keboola → BigQuery) built and operational
  • Duplicate workstreams across teams identified and consolidated

Tool stack

  • Claude Code (core): primary AI interface for analysis, code proposals, API testing, orchestration
  • Concurrent agents (core): multiple Claude Code agents running in parallel on different problems
  • BigQuery MCP: direct SQL queries against Groupon's data warehouse (228M+ row tables)
  • Keboola MCP: data pipeline creation, SQL transformations, flow orchestration
  • Asana MCP: cross-project task search, stakeholder alignment, decision logging
  • Jira MCP (Atlassian): engineering ticket tracking, sprint coordination
  • Gmail MCP: email monitoring, stakeholder communication
  • Google Calendar MCP: meeting coordination, availability management
  • GitHub gh CLI: codebase search, PR review, architecture exploration (direct CLI, not an MCP)
  • Bloomreach API: campaign config, customer management, event tracking (direct API calls in the terminal)

This doesn't eliminate the need for specialists. Engineers review and correct every code proposal. BI manages the production infrastructure. Frontend builds the UI. The AI covers ~80% of the mechanical work across 6 traditional roles — but the 20% that requires judgment, context, and stakeholder trust is entirely human.

What changes is the gap between strategy and execution. The person who understands the business problem can directly prototype the solution, test it, and hand off a specific build task — rather than an ambiguous requirement. The handoff tax disappears. The domain expert becomes the operator.

October 2025 — March 2026. Groupon, Chicago.