Operational AI.
Agents that automate what
still runs on manual effort.

Program delivery, collections recovery, client communication, content pipelines — each one reimagined with AI agents, structured outputs, and multi-provider orchestration. Not demos. Production.

Simulation Complete
Sherpa
AI PMO Engine

Five AI agents monitoring 200 enterprise projects worth $53.1M in parallel. One reads the plan. One detects historical patterns. One converts risks into actions. One handles escalation. One writes the executive summary. A graph engine computes critical paths and cascades task slips before humans notice.

  • 5 agents running in parallel with distinct personas and word limits
  • BFS graph engine — critical path + cascade analysis across dependencies
  • 171-day simulation with AI-generated week narratives
Multi-Agent · Structured Output · Graph Engine · Cascade Analysis
sherpa.ajayai.io
Arjun: reads plan, flags deviations from baseline
Drishti: matches patterns across 200 project histories
Karna: risk → action with owner + deadline + reminder
Bheem: escalation when thresholds breach
Vidura: writes the executive summary
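The cascade analysis above can be sketched as a plain breadth-first search over the task dependency graph. This is an illustrative reconstruction, not Sherpa's actual engine, and the task names are invented:

```typescript
type TaskId = string;

// Adjacency list: each task maps to the tasks that depend on it.
// Illustrative project, not real Sherpa data.
const dependents: Record<TaskId, TaskId[]> = {
  design: ["build"],
  build: ["test", "docs"],
  test: ["release"],
  docs: ["release"],
  release: [],
};

// BFS from a slipped task: every task reachable through the
// dependency edges inherits the slip.
export function cascade(slipped: TaskId): TaskId[] {
  const impacted: TaskId[] = [];
  const seen = new Set<TaskId>([slipped]);
  const queue: TaskId[] = [slipped];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const next of dependents[current] ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        impacted.push(next);
        queue.push(next);
      }
    }
  }
  return impacted;
}
```

One slipped task surfaces every downstream task in dependency order, which is how a slip can be escalated before anyone reports it.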
Complete
Mint
AI Agent Toolkit

Four-layer agent framework. Layer 1: MCP server exposing database as tools. Layer 2: Multi-agent pipeline — Scout explores schema, Analyst queries data. Layer 3: RAG with vector embeddings for semantic search. Layer 4: Browser automation for legacy system extraction.

  • MCP Server — database-as-tools for any AI agent
  • Multi-Agent Pipeline — Scout + Analyst in orchestrated sequence
  • RAG with pgvector — semantic search over knowledge bases
MCP Server · Multi-Agent · RAG + pgvector · Playwright
mint.ajayai.io
> "Give me a customer overview"
L1 MCP: list_tables → describe_table × 4
L2 Scout: schema mapped → 14 tables, 4 key entities
L2 Analyst: query × 8 → customer overview assembled
Output → structured report from 4 tables · 3,638 tokens
// 12 tool calls · 8.2s end-to-end
Live · Production Users
Ark
Collections & Cashflow Recovery

Command centre for finance teams drowning in aged receivables. AI drafts collection emails in three escalation tones — the model sees only data the user's role permits via row-level security. Full workflow: disputes, promises-to-pay, partial payments, team assignments.

  • Tone-controlled email generation — professional, firm, escalation
  • RLS-gated context — AI sees only what the collector's role permits
  • Full collection lifecycle — disputes, promises, partial payments, SOA
Tone-Controlled Gen · JSON Schema Output · RLS-Gated Context · Nodemailer
ark.ajayai.io
> Draft collection email for Invoice #4872 · Tone: FIRM
Context: load invoice + customer + payment history (RLS-filtered)
Generate: structured JSON → subject, body, tone_score, next_action
Review: collector reviews, edits, approves
Send → Nodemailer via Zoho SMTP · logged to timeline
// collector sees only their portfolio · least privilege
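The structured-output contract in the Generate step can be sketched as a runtime guard. Field names follow the pipeline above; the exact schema is assumed, and the live code defines it with Zod rather than this dependency-free check:

```typescript
// Hypothetical shape of one generated draft; Ark's real schema may differ.
export interface CollectionEmail {
  subject: string;
  body: string;
  tone_score: number;  // assumed 0..1: how well the draft matches the requested tone
  next_action: string; // e.g. "follow up in 3 days"
}

// Validate the model's raw JSON before anything reaches a customer inbox.
export function parseDraft(raw: string): CollectionEmail {
  const data = JSON.parse(raw) as Partial<CollectionEmail>;
  if (
    typeof data.subject !== "string" || data.subject.length === 0 ||
    typeof data.body !== "string" || data.body.length === 0 ||
    typeof data.tone_score !== "number" ||
    data.tone_score < 0 || data.tone_score > 1 ||
    typeof data.next_action !== "string"
  ) {
    throw new Error("model output failed schema validation");
  }
  return data as CollectionEmail;
}
```

A draft that fails validation never enters the review queue, which keeps the human approval step from absorbing malformed model output.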
Live · Production Users
NutriZen
AI Diet Planning Platform

A nutritionist's co-pilot that generates personalized 10-day meal plans using 11 tools in a multi-turn agent loop. The AI doesn't invent meals — it selects from 2,300+ validated meals the nutritionist has already approved. Plans that took 2 hours now take under a minute.

  • 11 agent tools — get_client, search_meals, create_plan, save_learning
  • Assembly over generation — constrained selection eliminates hallucination
  • Multi-provider failover with automatic rerouting on 402/429
11 Agent Tools · Tool Calling · Structured Output · Row-Level Security
nutrizen.ajayai.io
> "Create a 10-day plan for Kavita Joshi"
Tool 1: get_client_profile → allergies, preferences, BMR
Tools 2–4: search_meals × 3 → breakfast, lunch, dinner pools
Tool 5: create_plan_draft → 10 days × 7 meals assembled
Tool 6: save_plan → persisted with RLS, ready for review
// 11 tools · multi-turn loop · ~45 seconds
Live · Real Conversations
Elska
AI WhatsApp Assistant

An autonomous assistant over the WhatsApp Business API. Detects intents via pattern matching — deliberately not an LLM, because deterministic matching is faster and cheaper. Uses sub-100ms LLM inference for open-ended replies. Transcribes voice notes. Schedules reviews with slot pickers. The ghost-writer that never sleeps.

  • Regex intent classification — 100% deterministic for known patterns
  • Sub-100ms LLM inference for natural language replies
  • Voice note transcription + auto-scheduling with calendar sync
LLM Orchestration · Speech-to-Text · WhatsApp Business API · Intent Engine
whatsapp · pro.ajayai.io
Incoming: "send call slots" → WhatsApp webhook
Classify: regex match → scheduling_request (no LLM)
Action: fetch slots → build interactive slot picker
Reply → "Pick a time for your Progress Review" + 10 slots
Confirm → "Thu 5 Mar, 6:15 PM. See you then."
// no LLM called · 4ms · deterministic
Live · Production
Maya
AI Content Creator

Topic in, Instagram content out. AI drafts titles and descriptions from a single prompt. Three wizards — Social Cards, Carousels, Reels. A two-phase async pipeline handles generation that exceeds serverless timeout limits.

  • AI-suggested titles from topic prompts — 3 style modes
  • Two-phase async pipeline — generate, poll, deliver
  • Social cards (OG), carousels (multi-slide), reels (video)
LLM Orchestration · AI Content Gen · Async Pipeline · AWS Amplify
maya.ajayai.io/wizard
> topic: "brisk walking"
Suggest → "Brisk Walk 15 Min Daily: Slash Mortality 20%!"
Compose: title + description + brand + style: MINIMAL
Render → 1200×630 social card · PNG + WebP
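The two-phase pipeline can be sketched as a job store split between a fast request handler and a separate worker. Everything here is in-memory and synchronous for illustration; the real pipeline runs the worker asynchronously, and the names are not Maya's actual API:

```typescript
type Job = {
  id: string;
  topic: string;
  status: "pending" | "done";
  result?: string;
};

const jobs = new Map<string, Job>();

// Phase 1: fast request handler — record the job and return an id
// immediately, well inside the serverless timeout window.
export function startRender(topic: string): string {
  const id = `job_${jobs.size + 1}`;
  jobs.set(id, { id, topic, status: "pending" });
  return id; // in production, phase 2 is enqueued here
}

// Phase 2: slow worker — runs outside the request/response cycle.
export function runWorker(id: string): void {
  const job = jobs.get(id);
  if (!job) return;
  job.result = `1200×630 card for "${job.topic}"`; // stand-in for the real render
  job.status = "done";
}

// Client side: poll by id until status flips to "done".
export function poll(id: string): Job | undefined {
  return jobs.get(id);
}
```

Splitting the request from the render is what lets a 60-second generation survive a platform that kills any single request after a few seconds.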
// Design Decisions

Real trade-offs from production code.

Not theory. Decisions made with real users waiting on the other side.

// Sherpa

Five Agents, Not One Prompt

Five agents in parallel — each with a distinct persona and word limit. One agent can't corrupt another's reasoning. Failures are isolated, not cascading.
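A minimal sketch of that isolation, assuming each agent is an independent async call. Agent names come from the card above; the interfaces are hypothetical:

```typescript
interface Agent {
  name: string;
  wordLimit: number;
  run: (context: string) => Promise<string>;
}

export async function runAgents(agents: Agent[], context: string) {
  // allSettled, not all: one rejected agent is reported as a failure,
  // but it can never abort or corrupt its siblings.
  const settled = await Promise.allSettled(agents.map((a) => a.run(context)));
  return settled.map((result, i) => ({
    agent: agents[i].name,
    ok: result.status === "fulfilled",
    output: result.status === "fulfilled" ? result.value : null,
  }));
}
```

The executive summary still ships even if, say, the escalation agent times out; its slot simply reports a failure instead of silently polluting the others' context.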

// Elska

Know When Not to Use AI

“Want to cancel” is detected via regex, not an LLM. For known intents, pattern matching is faster, cheaper, and 100% deterministic. The right model is sometimes no model.
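A sketch of that deterministic layer: an ordered pattern table, with the LLM reserved strictly as the fallback. The patterns here are illustrative, not Elska's real table:

```typescript
// Known intents, checked in order. First match wins; no model involved.
const INTENTS: Array<[RegExp, string]> = [
  [/\b(cancel|call off)\b/i, "cancel_request"],
  [/\bsend\b.*\bslots?\b/i, "scheduling_request"],
  [/\breschedul/i, "reschedule_request"],
];

export function classify(message: string): string {
  for (const [pattern, intent] of INTENTS) {
    if (pattern.test(message)) return intent;
  }
  return "needs_llm"; // only genuinely open-ended messages pay for inference
}
```

Every message that matches costs microseconds and zero tokens, and the behaviour is testable: the same input always yields the same intent.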

// NutriZen

Assembly Over Generation

The AI doesn't invent meals — it picks from 2,300+ approved meals. Constraining selection eliminates hallucination. The nutritionist edits 10% instead of 60%.
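A sketch of constrained selection, assuming a pre-approved meal table and a model that returns only an id. The types and data are hypothetical:

```typescript
export interface Meal {
  id: number;
  name: string;
  slot: "breakfast" | "lunch" | "dinner";
  allergens: string[];
}

// Build the pool the model is allowed to choose from: right slot,
// no conflict with the client's allergies.
export function candidatePool(
  meals: Meal[],
  slot: Meal["slot"],
  allergies: string[]
): Meal[] {
  return meals.filter(
    (m) => m.slot === slot && !m.allergens.some((a) => allergies.includes(a))
  );
}

// The LLM's "choice" is just an id, validated against the pool.
// An id outside the pool is rejected, so a hallucinated meal cannot exist.
export function acceptChoice(pool: Meal[], chosenId: number): Meal {
  const meal = pool.find((m) => m.id === chosenId);
  if (!meal) throw new Error(`meal ${chosenId} is not in the approved pool`);
  return meal;
}
```

The model does the ranking; the database does the truth. That division is what turns a 60% edit rate into a 10% one.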

// Ark

LLM Context = Least Privilege

Every AI call goes through row-level security. The model sees only what the user's role permits. LLM context treated like a database query — least privilege, always.
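In the live products PostgreSQL row-level security does this filtering inside the database; the sketch below mimics the same effect in application code so the principle is visible. Types and data are hypothetical:

```typescript
export interface Invoice {
  id: number;
  collectorId: string;
  customer: string;
  amount: number;
}

// RLS equivalent of: WHERE collector_id = current_user's id.
export function visibleInvoices(all: Invoice[], collectorId: string): Invoice[] {
  return all.filter((inv) => inv.collectorId === collectorId);
}

// Build the prompt context ONLY from rows the caller may see.
// This string, and nothing else, reaches the model.
export function buildContext(all: Invoice[], collectorId: string): string {
  return visibleInvoices(all, collectorId)
    .map((inv) => `#${inv.id} ${inv.customer} $${inv.amount}`)
    .join("\n");
}
```

Doing it in the database rather than the app means even a buggy prompt builder cannot leak another collector's portfolio into the context window.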

// Mint

Tool Calling Over Prompt-and-Parse

11 tools in a multi-turn loop. The model reasons about what data it needs, fetches it, acts. When a tool fails, it retries with different parameters.
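A sketch of such a loop, with the LLM stubbed as a step function that either requests a tool call or produces a final answer. Tool and type names are hypothetical:

```typescript
export type ToolCall = { tool: string; args: Record<string, unknown> };
export type Step = { call?: ToolCall; answer?: string };

export async function agentLoop(
  nextStep: (history: string[]) => Promise<Step>, // stand-in for the LLM
  tools: Record<string, (args: Record<string, unknown>) => Promise<string>>,
  maxTurns = 10
): Promise<string> {
  const history: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const step = await nextStep(history);
    if (step.answer !== undefined) return step.answer;
    if (!step.call) throw new Error("model produced neither a call nor an answer");
    try {
      const result = await tools[step.call.tool](step.call.args);
      history.push(`ok:${step.call.tool}:${result}`);
    } catch (err) {
      // Feed the failure back; the model retries with revised parameters.
      history.push(`error:${step.call.tool}:${String(err)}`);
    }
  }
  throw new Error("turn budget exhausted");
}
```

The turn budget matters as much as the retry: it bounds cost and latency when the model keeps asking for data it already has.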

// All Products

Multi-Provider Failover

If one provider returns 402, the request fails over to another using the same model. Single points of failure don't belong in production.
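A sketch of the reroute logic, assuming providers share a completion interface and signal quota problems by status code. The statuses come from the text above (402 payment required, 429 rate limited); everything else is illustrative:

```typescript
export interface Provider {
  name: string;
  complete: (prompt: string) => Promise<string>;
}

export class ProviderError extends Error {
  constructor(public status: number) {
    super(`provider error ${status}`);
  }
}

export async function completeWithFailover(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      lastError = err;
      // Only quota/billing errors reroute; real bugs should surface immediately.
      if (err instanceof ProviderError && (err.status === 402 || err.status === 429)) {
        continue;
      }
      throw err;
    }
  }
  throw lastError;
}
```

The distinction between retryable and fatal errors is the whole design: rerouting on a 500 caused by a malformed request would just replay the bug against every provider in the list.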

// Technology

The stack behind every product.

AI & Agent Architecture

  • Multi-Agent Orchestration — parallel + sequential pipelines
  • MCP (Model Context Protocol) — database-as-tools for agents
  • RAG + pgvector — semantic search, embeddings
  • Agentic Tool Calling — multi-turn loops, retry, fallback
  • Structured Output / JSON Schema — type-safe LLM responses
  • Multi-Provider Failover — auto-reroute on 402/429
  • Speech-to-Text — voice transcription pipeline
  • Tone-Controlled Generation — escalation-aware content

Infrastructure & Cloud

  • AWS Amplify — CI/CD, hosting, SSR
  • PostgreSQL — Row-Level Security, RPC
  • Edge Functions — serverless compute at the edge
  • pgvector — vector embeddings in Postgres
  • WhatsApp Business API — Meta Cloud, webhooks
  • Async Pipeline Architecture — two-phase generation
  • Browser Automation — Playwright for legacy extraction
  • Transactional Email — SMTP integration, templating

Full-Stack Engineering

  • TypeScript — strict mode, end-to-end
  • Next.js 16 + React 19 — App Router, Server Actions
  • Tailwind CSS v4 — CVA + clsx + merge
  • Graph Algorithms — BFS, critical path, cascade
  • Zod — runtime schema validation
  • Real-time Subscriptions — Postgres changes, live UI
  • Image Optimization — Sharp, server-side processing
  • Video Assembly — FFmpeg, Ken Burns, TTS mixing
// The Background

Two decades in enterprise software and services.

Program delivery across telecom operators, global banks, and enterprise transformation houses. 250+ programs. Multi-region. Varied complexity — from regulatory rollouts to full-stack platform migrations across 14 countries.

P&L ownership across $120M+ in program delivery portfolios. $150M+ in aged receivables recovered. Revenue assurance wasn't a KPI — it was the job.

Customer escalations at 2am across three time zones. Boardrooms where “red status” meant someone's career. Process gaps that cost more in a quarter than most startups raise in a Series A.

Built customer success stories from stalled handovers. Turned delivery governance into a competitive advantage. Rose from process engineering to the CEO's office — ending as Group Vice President, Head of Program Delivery & Customer Success at a global enterprise software company.

Then the question that changed everything: what if the tools were smarter?

Every product above was born from a process that was too manual, too slow, or too dependent on tribal knowledge. The domain expertise comes from running these operations at scale. The AI just makes it fast enough to matter.

$120M+
Program delivery P&L ownership across multi-region portfolios
250+
Programs delivered — enterprise transformation, telecom, banking
$150M+
Aged receivables recovered through structured governance
14
Countries — Middle East, Southeast Asia, Africa, Europe
Ajay Gurnani
Ex-GVP, Program Delivery & Customer Success
Enterprise Software & Services

Let's build something
together.

Looking for roles where AI meets operations — fractional CTO, AI product builds, delivery transformation, or program turnarounds. A broken process and a willingness to fix it properly are all it takes.

Start a conversation →