

General Published on: Tue Sep 30 2025

Intents, Not Tickets: Why AI SDLC Is Eating Software Engineering

For two years the industry has chased "AI productivity" by sprinkling copilots across old delivery pipelines. We got faster snippets but the same bottlenecks: context thrash, governance bolted on at the end, and output measured in story points instead of outcomes. The SE 3.0 vision, AI-native, intent-first engineering with AI teammates across the IDE, compiler, and runtime, is the first credible path out of that trap.

A quick primer: SE 1.0 → SE 2.0 → SE 3.0

Software Engineering 1.0: Plan-driven, human-centric engineering

Classic software engineering: humans translate requirements → design → implementation → test → ops via process frameworks (waterfall, later agile or DevOps). Governance is document- and review-heavy; artifacts are specs, code, tests, and runbooks. If you learned from the IEEE SWEBOK, you grew up here.

Software Engineering 2.0: AI augmented delivery inside the old model

GenAI tools (for example, GitHub Copilot) accelerate parts of the lifecycle, mostly coding. In a controlled experiment, developers completed a coding task 55.8% faster with Copilot. Yet end-to-end enterprise gains are limited because the process did not change, and cognitive load rose as teams juggled more tools. (Side note: Karpathy's "Software 2.0" uses the term differently, meaning neural networks as the primary "code." Useful distinction, but separate from "SE 2.0" here.)

Software Engineering 3.0: AI-native, intent-first engineering with AI teammates

Hassan et al. propose flipping the unit of work from tickets to business intents and elevating AI from "autocomplete" to teammate across a new stack: Teammate.next, IDE.next, Compiler.next, and Runtime.next. Governance becomes computable (policies as machine-checkable tests), and cognitive overload drops because work centers on intents instead of tool sprawl.
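The phrase "policies as machine-checkable tests" can be made concrete with a small sketch. Assuming a simple decision-log schema (the function names and dict keys below are illustrative, not a real API), the SDN/PEP recall policy discussed in this piece becomes an executable check rather than a paragraph in a compliance PDF:

```python
# Sketch only: a governance policy expressed as a machine-checkable test.
# The schema (is_sanctioned, flagged) and function names are assumptions.

def sanctions_recall(decisions):
    """Fraction of known SDN/PEP cases that the system flagged."""
    hits = [d for d in decisions if d["is_sanctioned"]]
    if not hits:  # no sanctioned cases in this sample
        return 1.0
    return sum(1 for d in hits if d["flagged"]) / len(hits)

def check_sdn_pep_policy(decisions):
    # Policy intent, encoded as an assertion: 100% recall on SDN/PEP hits.
    return sanctions_recall(decisions) == 1.0

# Example decision log: both sanctioned cases were flagged, so the policy passes.
log = [
    {"is_sanctioned": True,  "flagged": True},
    {"is_sanctioned": True,  "flagged": True},
    {"is_sanctioned": False, "flagged": False},
]
assert check_sdn_pep_policy(log)
```

A check like this can run in CI and again at runtime, which is what turns governance from a review artifact into a gate.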

The stack is AI native, not AI adjacent

  • Teammate.next - personal, adaptive AI partners that learn your domain, constraints, and style
  • IDE.next - a conversational IDE where requirements, code, tests, and runbooks co-evolve from intents
  • Compiler.next - multi-objective synthesis (not just "make it compile," but "meet latency SLAs, pass controls, respect data residency")
  • Runtime.next - SLA-aware execution with edge support and continuous negotiation between system and intent
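To make "intent as the unit of work" concrete, here is a minimal sketch of what such an artifact might look like as data. The Intent and Constraint shapes below are assumptions for illustration, not anything the paper prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    metric: str   # e.g. "approval_rate", "turnaround_hours", "sdn_recall"
    op: str       # one of "<=", ">=", "=="
    target: float

    def satisfied(self, value):
        return {"<=": value <= self.target,
                ">=": value >= self.target,
                "==": value == self.target}[self.op]

@dataclass
class Intent:
    goal: str
    constraints: list = field(default_factory=list)

    def evaluate(self, observed):
        """Per-constraint pass/fail against observed runtime metrics."""
        return {c.metric: c.satisfied(observed[c.metric])
                for c in self.constraints}

# The KYC intent from this article, expressed as a structured artifact.
kyc = Intent(
    goal="Approve 95% of standard KYC cases in under 6 hours",
    constraints=[
        Constraint("approval_rate", ">=", 0.95),
        Constraint("turnaround_hours", "<=", 6.0),
        Constraint("sdn_recall", "==", 1.0),
    ],
)
checks = kyc.evaluate({"approval_rate": 0.97,
                       "turnaround_hours": 5.2,
                       "sdn_recall": 1.0})
assert all(checks.values())  # the intent is currently satisfied
```

In this framing, a Compiler.next-style layer would synthesize code that optimizes toward these constraints, and a Runtime.next-style layer would evaluate them continuously.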

Translation: the tools are not the ceiling; our process model is.

What SE 3.0 changes (and why WealthTech should care)

SE 3.0 makes business intent the primary artifact (“Approve 95% of standard KYC cases in under 6 hours; 100% recall on SDN or PEP hits”) and lets AI teammates synthesize plans, code, tests, and runbooks under explicit constraints (latency, cost, controls). Governance becomes computational and continuous, not a PDF after the fact.

This maps cleanly onto WealthTech reality: advisory workflows, suitability, T+0 post-trade confirmations, and books-and-records obligations, areas where regulators now call out AI explicitly:

  • SEC exam priorities highlight AI use and controls; firms should expect scrutiny of policies, recordkeeping, and deployment
  • FINRA documents expanding AI use cases among broker-dealers and the need for robust governance
  • The NIST AI RMF gives a widely cited blueprint for making AI trustworthy and risk-managed; the 2024 GenAI Profile extends it to frontier models. SE 3.0 lets us operationalize those controls in code

What this means for WealthTech (examples that bite)

  • Onboarding and KYC: Express the policy intent ("Approve 95% of vanilla retail accounts in under 6 hours; flag PEP or SDN with 100% recall"), and let the AI teammate derive the integrations to KYC vendors, route exceptions, and back-calculate controls. Change the policy, and the system regenerates flows and tests to comply
  • Portfolio operations: Instead of writing a rebalance module ticket by ticket, capture the intent ("Minimize tax drag; respect the IPS; trade within venue X during window Y"). Compiler.next materializes the algorithms, Runtime.next enforces latency and cost SLAs, and the AI teammate proposes monitoring dashboards tuned to desk behavior
  • Surveillance and compliance: Define the behaviors to deter (front-running, suitability breaches) as intents plus constraints. The AI teammate maps data lineage, generates detection rules, and scaffolds case-management automations, auditable by design

The services provocation

Big IT still optimizes for ticket velocity and effort-based SOWs. Research shows that value capture depends on rewiring the operating model, not just rolling out tools; leadership and system-level metrics decide whether AI moves the needle.

SE 3.0 flips three defaults:

  • Intent over tickets: changes are intent deltas, not Jira floods
  • Computational governance: policies become machine-checkable tests aligned to NIST, SEC, and FINRA expectations
  • Outcome pricing: contracts tied to onboarding time, exception rates, and STP percentage, not hours
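What an "intent delta" might look like in practice, as a rough sketch: the two versions of a policy are compared as data, and only the difference needs review and regeneration. The `{metric: (operator, target)}` encoding below is an assumption for illustration:

```python
# Current vs. proposed onboarding intent, each as {metric: (operator, target)}.
old = {"approval_rate": (">=", 0.95), "turnaround_hours": ("<=", 6.0)}
new = {"approval_rate": (">=", 0.97), "turnaround_hours": ("<=", 4.0),
       "sdn_recall":    ("==", 1.0)}

def intent_delta(old, new):
    """Summarize a policy change as added/removed/changed constraints."""
    return {
        "added":   {k: new[k] for k in new.keys() - old.keys()},
        "removed": {k: old[k] for k in old.keys() - new.keys()},
        "changed": {k: (old[k], new[k]) for k in old.keys() & new.keys()
                    if old[k] != new[k]},
    }

delta = intent_delta(old, new)
assert delta["added"] == {"sdn_recall": ("==", 1.0)}
assert not delta["removed"]
assert set(delta["changed"]) == {"approval_rate", "turnaround_hours"}
```

A delta like this, not a stack of tickets, becomes the reviewable unit of change; downstream flows and tests are regenerated from it.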

Why big IT services will struggle (unless they change fast)

  • Governance is performative, not computational. Many providers still treat compliance as documents and audits. SE 3.0 bakes governance into generation and runtime; if your controls are not machine-checkable, you are invisible to the new stack
  • Delivery maximizes tickets, not outcomes. SE 3.0 turns scope into solvable intents and optimizes for SLAs and metrics, not story points
  • The talent model is wrong. You need Agentic Solution Engineers who can encode business rules and controls as machine-navigable intents and orchestrate AI teammates, not just "prompt-savvy devs"

Why Hexaview is leaning in

  • HAKI AI: our loop (intent capture → design debate → code and controls co-generation → SLA-aware ops) tracks the SE 3.0 stack
  • DocMolt and CodeMolt: we extract executable intents (policies, SLAs, data residency) from legacy code and docs so AI teammates can reason over them, which is crucial for regulated estates
  • AI Pods and AI University: we train every role to author intents and evaluators, not just code, in line with findings that operating-model rewiring is the unlock

Call to action (for CIOs, CCOs, and Heads of Ops)

If your AI dashboard cannot show, on one screen, which business intents are drifting, why, and what the AI changed to restore them, you are still doing prettier SE 2.0.

Sources

  • Hassan et al., "Towards AI-Native Software Engineering (SE 3.0)," arXiv, Oct 2024
  • IEEE Computer Society, Guide to the Software Engineering Body of Knowledge (SWEBOK)
  • Microsoft/GitHub, controlled Copilot experiment (55.8% faster on a coding task)
  • NIST, AI RMF 1.0 (2023) and Generative AI Profile (2024)
  • SEC, Division of Examinations FY2025 priorities (AI focus)
  • FINRA, AI in the Securities Industry (governance and use cases)
  • McKinsey, The State of AI 2025: How Organizations Are Rewiring to Capture Value

Ankit Agarwal

CTO

Ankit Agarwal co-founded Hexaview with a vision to build a team of unmatched technical excellence, empowering businesses with the right technologies. With 18+ years of experience, he works closely on group strategy, offering domain expertise, delivering advanced solutions, and supporting tactical plans. A true technocrat, he ensures the Hexaview team stays equipped with the latest software tools and technologies, both open source and commercial.