Nobody decided to build a human-optional workflow — they just kept making reasonable procurement decisions, task by task, until the human became optional across hiring, contracting, finance, and security operations. Sean Martin traces what organizations have actually assembled, where accountability lives when it goes wrong, and why the regulatory window for getting ahead of it is closing faster than most leaders realize.
In this edition of Lens Four, Sean Martin looks at the agentic AI landscape through three lenses — programs, innovation, and messaging — to connect the signals that matter.
🔍 In this episode:
Fourth Lens: The vendors knew what they were building. The buyers didn't ask the right questions. The auditors haven't arrived yet. The organizations that use the remaining window to map what they've assembled — and make explicit decisions about what requires human judgment — will be positioned when the frameworks arrive. The ones that don't will discover that the workflow they built by default is not the workflow they would have chosen under scrutiny.
📖 Read the full Lens Four analysis on seanmartin.com: https://www.seanmartin.com/lens-four/task-by-task-workflows-handing-to-ai-one-decision-at-a-time
🎧 Listen to the Redefining CyberSecurity Podcast conversation with Edward Wu of Dropzone AI at Black Hat USA 2025: https://www.itspmagazine.com/their-stories/dropzone-ai-brings-agentic-automation-to-black-hat-usa-2025-a-drop-zone-ai-pre-event-coverage-of-black-hat-usa-2025-las-vegas-brand-story-with-edward-wu-founder/ceo-at-dropzone-ai
🎧 Listen to the Redefining CyberSecurity Podcast conversation with Subo Guha of Stellar Cyber at RSAC 2025: https://www.itspmagazine.com/their-stories/simplifying-cybersecurity-operations-at-scale-automation-with-a-human-touch-a-brand-story-with-subo-guha-from-stellar-cyber-an-on-location-rsac-conference-2025-brand-story
🎧 Listen to the Redefining CyberSecurity Podcast conversation with Subo Guha of Stellar Cyber at Black Hat 2025: https://www.itspmagazine.com/their-stories/stellar-cyber-revolutionizes-soc-cybersecurity-operations-with-human-augmented-autonomous-platform-at-black-hat-2025a-stellar-cyber-event-coverage-of-black-hat-usa-2025-las-vegas
🎧 Listen to the Random and Unscripted episode — "We're Becoming Dumb and Numb" — with Sean Martin and Marco Ciappelli: https://randomandunscripted.com/episodes/were-becoming-dumb-and-numb-why-black-hat-2025s-ai-hype-is-killing-cybersecurity-and-our-ability-to-think-random-and-unscripted-weekly-update-with-sean-martin-and-marco-ciappelli
🔔 Subscribe to the Future of Cybersecurity newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to Lens Four on seanmartin.com and "The Future of Cybersecurity" newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity
Sincerely, Sean Martin and TAPE9
Sean Martin is a lifelong musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️
Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location
To learn more about Sean, visit his personal website.
🔎 Keywords
agentic AI, workflow automation, task-specific AI agents, AI hiring tools, resume screening automation, HireVue, Paradox Olivia, legal AI, Harvey AI, LegalOn, contract review automation, agentic SOC, Dropzone AI, Stellar Cyber, Token Security, AI agent identity, RSAC 2026, Nintex, Microsoft Copilot Studio, agentic orchestration platform, human accountability in AI, agentwashing, AI augmentation vs replacement, AI governance, enterprise AI adoption, Gartner agentic AI, Forrester AI forecast, AI decision accountability, AI regulatory compliance, AI workforce impact
By Sean Martin. Lens Four, at seanmartin.com.
I look at the intersection of business, technology, and messaging regularly through three lenses: how organizations operate and run their programs, how innovation and market forces are reshaping what's possible, and how the language and narrative around technology shapes what gets funded, prioritized, and trusted.
This week, all three lenses are pointing at the same thing, and the picture is clearer than most people are comfortable admitting.
Nobody decided to remove the human from the workflow.
That's the part worth sitting with. In boardrooms, in budget reviews, in vendor evaluations, nobody stood up and said "let's build a business process with no humans in the loop." What happened instead was a series of smaller decisions, each of them reasonable, each of them local, each of them defensible. An HR team bought a screening tool to handle application volume. A legal department licensed an AI drafting platform to reduce outside counsel spend. A finance team deployed automated invoice processing to close faster.
None of those decisions, on its own, looks like giving up control. But map them together, task by task, across a single workflow, and something significant emerges. The human is already optional across most of the process.
That's what I want to examine this week. Not whether AI should take on more of the work. That debate is largely settled in the data. But whether organizations have consciously mapped what they've actually handed over, and what that means for how businesses operate, compete, and carry accountability going forward.
Let's start with the business operations lens. Are we delegating efficiently, or giving up control?
The honest answer is: both. And most organizations can't tell the difference yet.
Let me trace two workflows that most businesses run every week. Not as edge cases, not as experiments, but as normal operating processes with deployed tools and real outcomes.
The first is the hiring workflow.
A job requisition opens. What happens next used to require a recruiter's judgment at every step. Here's what the same process looks like with current tools.
The first task is resume screening. Unilever reported saving over one million pounds annually after deploying AI screening tools across its hiring pipeline. McDonald's rolled out Paradox's conversational AI called Olivia across thousands of locations to handle applicant screening and scheduling. Candidates move from application to interview without a human recruiter touching the file. AI tools now rank and filter applicants in seconds, against criteria a human set once and that now runs autonomously at scale.
The second task is interview scheduling. A large US financial services firm using GoodTime reduced time-to-fill by weeks by automating calendar coordination alone. The moment a candidate cleared screening, an interview invite went out within hours. No recruiter coordination required.
The third task is first-round interviewing and assessment. HireVue, used by JPMorgan, Goldman Sachs, Amazon, Microsoft, Emirates Airlines, and hundreds of others, conducts asynchronous first-round interviews with no human present. The candidate records answers to pre-set questions on their own schedule. The AI analyzes speech, language, and behavioral indicators and returns a ranked score. No recruiter watches the recording until after the AI has already filtered and ranked the pool. Emirates Airlines reduced its hiring cycle from sixty days to seven using this approach. The human interviewer enters at round two, but by then, the AI has already determined who gets that meeting.
The fourth task is offer generation and outreach. Recruiting platforms including Lindy and Recruiterflow's Agent Mode draft, personalize, and send offer communications and follow-up sequences autonomously. The offer letter is written before a recruiter opens their inbox.
The fifth task is onboarding initiation. End-to-end workflow automation, deployable today through platforms like n8n, covers the full pipeline from CV submission through assessment, scheduling, and status tracking, without human intervention at any step.
Five tasks. Five separate vendor decisions. Each one made independently, each one with its own ROI story. And together: a process where a candidate can move from application to offer without a single human making an active decision along the way.
Now let's run the same analysis across a legal department's standard contracting process.
The first task is matter intake and triage. Checkbox AI handles incoming legal requests through intelligent chatbots that capture context, ask clarifying questions, and route matters to the right team automatically. No paralegal spending the morning clearing an email queue.
The second task is legal research. Harvey AI, now embedded in Am Law one hundred firms, surfaces relevant case law, statutes, and precedent across large document sets in minutes. Lexis Plus AI provides contextual legal reasoning on demand. What used to be a junior associate's full day is now a prompt.
The third task is contract drafting. Spellbook drafts contracts inside Microsoft Word. LegalOn users report NDA reviews dropping from two hours to thirty minutes. One managing partner reported a forty percent increase in billing capacity, not from doing better work, but because AI wrote the first draft on every matter.
The fourth task is contract review and redlining. Luminance identifies anomalies and flags deviations from playbooks across thousands of contracts simultaneously. Kira extracts specific clauses at scale. Across the category, AI contract review tools are reducing review time by seventy-five to eighty-five percent. A task that defined legal practice for decades is compressing toward minutes.
The fifth task is approval routing and post-execution management. ContractPodAi handles routing, obligation tracking, and compliance monitoring after signature. Ironclad manages the full contract lifecycle, renewals, expirations, obligation triggers, without a paralegal maintaining a spreadsheet.
Again: five tasks, five products, five separate procurement decisions. And end to end: a contracting workflow where a contract can move from request to executed agreement without a lawyer authoring a single original clause.
And this pattern runs across the business, not just in these two functions.
Finance has it. AI invoice processing platforms handle capture, validation, approval routing, and payment scheduling end to end, with one hospital association reporting a reduction in batch processing time from ten hours to minutes.
Customer service has it. Gartner projects that agentic AI will resolve eighty percent of common customer service issues without human intervention by twenty twenty-nine, up from effectively zero in twenty twenty-four.
Security operations has it. And I've had direct conversations with the vendors building these tools. Edward Wu, founder of Dropzone AI, told me ahead of Black Hat USA twenty twenty-five: "Nobody wants to be a tier-one analyst forever." Subo Guha of Stellar Cyber described a digital army of AI agents that filter seventy to eighty percent of alerts before a human analyst sees them. The pattern looks the same whether the workflow is closing a contract or closing a security incident.
The business question this raises isn't whether the tools work. Most of them do. The question is whether organizations have a clear, deliberate answer to: which tasks require a human decision, and why? Because right now, many organizations are answering that question by default, one purchase at a time, rather than by design.
Gartner puts a number on the trajectory: at least fifteen percent of day-to-day work decisions will be made autonomously through agentic AI by twenty twenty-eight, up from essentially zero in twenty twenty-four. That isn't a distant forecast. It's a projection from a baseline that is already moving inside most mid-to-large enterprises today.
Now the innovation and market shifts lens. What is the market building, and how fast is it moving?
The market knows exactly what it's building. It's not naming it directly, but the architecture is unmistakable.
Gartner predicts that forty percent of enterprise applications will include integrated task-specific AI agents by the end of twenty twenty-six, up from less than five percent today. Not AI assistants that help people do their jobs. Agents that do the job, within defined parameters, without waiting for a human to initiate each step. By twenty thirty-five, Gartner's best-case scenario has agentic AI driving approximately four hundred and fifty billion dollars in enterprise software revenue, roughly thirty percent of the entire market.
Notice what the market is selling: "task-specific." Not "workflow-replacing." Not "role-eliminating." Task-specific. One task at a time, each one rationalized locally, each one with a discrete budget line and an ROI model. The cumulative effect, a workflow that no longer requires human participation, isn't what's being sold. It's what's being assembled.
This is where the business opportunity gets genuinely interesting, and where the strategic gap between leading and lagging organizations is opening up.
The companies deploying these tools aggressively are not doing so because they ran an experiment. They're doing it because the economics are compelling in ways that compound over time. Recruiterflow data shows recruiters saving six or more hours per week, a thirty-three percent productivity increase per person. LegalOn users report reducing outside counsel dependency by thousands of dollars per contract cycle. Individually, those numbers are meaningful. Across a workforce, across a fiscal year, they represent a structural cost advantage that competitors without these tools cannot match.
But the more consequential shift isn't cost reduction. It's speed and scale. A hiring process that moves at machine speed doesn't just save money. It changes who gets the best candidates. A legal team that can review, redline, and execute contracts in minutes rather than days doesn't just reduce billable hours. It changes how fast the business can move on deals.
The organizations that figured this out early are already operating at a different tempo than those still treating AI as a pilot program. Forrester projects that distributed AI workflows will capture forty-five percent of enterprise workload capacity by twenty twenty-six. That's not adoption at the margin. That's a structural shift in how work gets done.
The complication, and it's a real one, is that the vendor market is significantly ahead of organizational readiness for what these tools actually do. Gartner estimates that only around one hundred and thirty of the thousands of companies now claiming to offer agentic AI are delivering genuine agentic capability. The rest are rebranding existing automation and RPA under a new label. A genuine agentic system reasons across tasks, adjusts based on outcomes, and handles exceptions without a human rewriting the playbook. A rebranded chatbot executes a fixed sequence and breaks at the edge case. Buying the latter under the belief it's the former is how organizations end up with expensive tools that create new workflow fragility instead of removing old bottlenecks.
The cybersecurity sector is already several steps ahead of most enterprise functions on this curve, and the outcomes are real, not experimental. As I wrote in the first Lens Four article, "The Seventy-Two-Minute Gap," organizations deploying agentic SOC automation are realizing documented, measurable budget savings.
Dropzone AI's Edward Wu described it plainly when we spoke at Black Hat USA twenty twenty-five. At roughly thirty-six thousand dollars per year, their platform ran four thousand automated alert investigations, a number that simply cannot be staffed at comparable cost. Subo Guha of Stellar Cyber, in two separate conversations with me at RSAC twenty twenty-five and Black Hat twenty twenty-five, described their digital army of AI agents filtering seventy to eighty percent of incoming alerts, allowing analysts to focus on the fraction that require human judgment. Both companies are emphatic that the value isn't hypothetical. The savings are already in the operating budget.
The market is also generating the next layer of infrastructure, which is itself a leading indicator of how far adoption has already gone. When AI agent identity governance becomes a funded product category, and it has, it means organizations have already deployed enough autonomous agents into production that they've discovered they can't see what those agents are doing or control what systems they can reach.
Token Security, named a finalist in the RSAC twenty twenty-six Innovation Sandbox, was built entirely around this problem: governing AI agent identities with the same rigor applied to human users. Continuous discovery, intent-aware access controls, lifecycle management from deployment through decommissioning. Moderna has already scaled from seven hundred and fifty to more than three thousand internal AI agents in a single year. The governance market doesn't emerge until the adoption that requires governance is already underway. That tells you where the actual baseline is.
Here is the structural shift worth watching closely. Right now, organizations are assembling workflows task by task through separate vendor decisions. The next phase of the market eliminates that friction entirely, and the infrastructure for it is already being built.
What's emerging is the agentic orchestration platform: a single governed environment where workflows can be defined in plain language, purpose-built agents can be selected, configured, guardrailed, and monitored, and the cumulative workflow is visible as a designed whole rather than discovered after the fact as an accumulated pile of vendor contracts.
Nintex, which serves more than seven thousand organizations across one hundred countries, announced its Agentic Business Orchestration platform in September twenty twenty-five, explicitly positioning it as a single governed layer unifying legacy systems, manual processes, and AI agents. Their incoming Agent Designer feature enables IT leaders and business technologists to build, evaluate, and orchestrate specialized agents in a low-code environment, without writing code and without leaving the governance framework.
IDC's Maureen Fleming framed it directly: "Agentic business orchestration represents a shift toward coordinating people, systems and AI agents in governed ways that ensure automation and AI deliver measurable results at scale."
Microsoft is moving in the same direction at enterprise scale. Copilot Studio, already connected to more than fourteen hundred systems, allows agents to be built in natural language, configured, monitored, and governed from a single interface. Every agent now gets a Microsoft Entra Agent ID, an identity credential that enables governance across the fleet. Microsoft's own framing for twenty twenty-six is pointed: the transition is from AI that helps people do work faster to AI that handles work on behalf of the organization, with humans escalating into exceptions rather than executing by default.
The pattern is the same whether you're watching Nintex, Microsoft, Salesforce Agentforce, or ServiceNow's agentic capabilities. The market is converging on a platform model where the workflow is defined up front in plain language, agents are scoped to specific tasks with explicit permissions, guardrails are set before deployment rather than bolted on after, human oversight points are designed in rather than assumed, and the full workflow is auditable and measurable as a system.
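For readers who build these things, the structural difference between an accumulated workflow and a designed one can be sketched in a few lines of code. This is illustrative only: the class names, permission fields, and human-gate flag below are my assumptions for the sake of the sketch, not any vendor's actual API. The point it makes is simple — when the workflow is declared up front, "which tasks require a human decision, and why" becomes a question you can answer, and audit, mechanically.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentTask:
    """One task delegated to a scoped agent. All field names are illustrative."""
    name: str
    agent: str                    # which agent executes the task
    permissions: tuple = ()       # systems the agent is allowed to touch
    requires_human: bool = False  # an explicit, designed-in human decision gate

@dataclass
class Workflow:
    """A workflow declared as a designed whole, not discovered after the fact."""
    name: str
    tasks: list = field(default_factory=list)

    def human_decision_points(self):
        """Every task where a person, not an agent, makes the call."""
        return [t.name for t in self.tasks if t.requires_human]

    def is_human_optional(self):
        """True if a case can traverse the entire workflow with no human gate."""
        return not any(t.requires_human for t in self.tasks)

# The five hiring tasks from the article, mapped explicitly. Here the
# organization has decided that offer generation keeps a human gate.
hiring = Workflow("hiring", [
    AgentTask("resume screening", "screener", ("ats",)),
    AgentTask("interview scheduling", "scheduler", ("calendar",)),
    AgentTask("first-round assessment", "assessor", ("video",)),
    AgentTask("offer generation", "drafter", ("email",), requires_human=True),
    AgentTask("onboarding initiation", "onboarder", ("hris",)),
])

print(hiring.is_human_optional())
print(hiring.human_decision_points())
```

Run against the five-tool hiring stack described above with no `requires_human` flags set anywhere, `is_human_optional()` returns true — which is exactly the condition most organizations have assembled without ever writing it down.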
The organizations that get ahead of this transition will enter it with clear workflow maps and defined accountability structures. The ones that don't will find themselves importing their accumulated default choices into the new architecture and inheriting all the governance gaps that came with them.
Now the language and messaging lens. Why does everyone say "augment" when the direction is "replace"?
Because "augment" gets funded, "replace" gets scrutinized, and the actual outcome is somewhere neither word honestly describes.
There is a phrase that appears in virtually every vendor pitch, analyst briefing, and enterprise communication about AI and automation: "we augment human capabilities, we don't replace them." It surfaces in hiring tech. It surfaces in legal AI. It surfaces in financial automation and customer service platforms. It is, at this point, essentially obligatory in the category.
The phrase is doing real work. It manages three audiences simultaneously: employees watching their job functions shift, procurement committees answering to boards who want to see AI investment justified, and regulators scrutinizing how AI is deployed in consequential decisions. "Augment, not replace" threads all three needles cleanly. It implies human oversight remains intact, accountability structures are unchanged, and the organization is being measured and responsible.
But walk the data back against that framing and it doesn't hold up.
Swimlane projects AI will resolve or escalate over ninety percent of Tier One security alerts by twenty twenty-six. Not assist with them. Resolve them. Gartner projects autonomous AI handling eighty percent of customer service issues without human involvement by twenty twenty-nine. The legal contract review tools marketing seventy-five to eighty-five percent time reduction aren't augmenting lawyers. They're doing the task, and asking the lawyer to review the output. The hiring platforms aren't helping recruiters screen faster. They're screening, and asking the recruiter to validate the ranking.
When the AI handles eighty percent of the task and the human handles exceptions after the fact, that's not augmentation in any meaningful operational sense. That's oversight of an autonomous system. The distinction has direct implications for where accountability lives, what skills the organization needs to maintain, and what happens when the output is wrong.
I explored a version of this tension from an unexpected angle on the Music Evolves Podcast, in a conversation with Chandler Lawn, AI Innovation and Law Fellow at the University of Texas School of Law, Drew Thurlow, Adjunct Professor at Berklee College of Music, and Puya Partow-Navid, Partner at Seyfarth Shaw.
We were talking about AI-generated music and who owns the output, but the underlying question was the same one running through every enterprise workflow: when the system produces the thing that used to require a human, what does the human's role actually become?
The music industry is a few years ahead of most enterprise functions on this curve. Universal Music Group and Warner Music Group both reached landmark settlements with AI music platforms in late twenty twenty-five. The answers they're landing on involve drawing explicit lines around what requires human creative judgment and what can be systematically produced. Enterprise operations will need to draw the same kinds of lines, probably with less drama, but with the same underlying logic.
The language gap has a practical consequence beyond messaging. When leadership describes every AI deployment as augmentation, it becomes difficult to have honest internal conversations about what the organization has actually delegated, where the accountability gaps are, and what happens when a consequential decision turns out to be wrong. That conversation is easier to have before the workflow is fully assembled than after.
Gartner's prediction that over forty percent of agentic AI projects will be cancelled by end of twenty twenty-seven is worth reading through this lens. It's not because the technology fails. It's because organizations bought capability without building the governance, accountability structures, and organizational clarity to run it responsibly. The language that got the tool funded made those harder conversations easier to avoid at purchase time. They don't stay avoided.
At Black Hat USA twenty twenty-five, Marco Ciappelli and I talked after walking the floor about exactly this. When every vendor claims the same positioning, the actual distinctions disappear from the buyer's view. In our post-show episode, we called it the marketing milkshake problem: every vendor's message going into the same promotional blender and coming out tasting the same, regardless of what the underlying technology actually does. The agentwashing problem isn't just a market integrity issue. It's a decision-quality issue for every organization trying to figure out what they're actually acquiring, and what decision authority they're actually transferring.
And now, the fourth lens.
When did you decide to hand over control, and who was in the room when you did?
Here is what I keep coming back to when I look at all three lenses together, and it's the thing I find myself saying in conversations that rarely makes it into polished conference presentations: we are already past the point of no return.
The human-optional workflow is not the exception being cautiously piloted. It is the operational default for hiring, contracting, finance, customer service, and security operations in organizations that made five individually rational procurement decisions and never looked at what those decisions assembled.
That's not naivety. I want to be clear about that. The organizations deploying these tools are not confused about what they're buying. What they haven't done, and what the vendors selling to them have never required them to do, is map the cumulative shape of those decisions before committing to them.
And I don't think that's an accident.
When I look at the language the vendor market has built around this transition, "augment, not replace," "human in the loop," "AI-assisted," I don't read it as imprecise. I read it as precise in exactly the right direction. These are companies staffed with product managers, lawyers, and communications teams who understand exactly what their tools do when deployed at scale. "Augment, not replace" threads every needle it needs to thread: employee relations, procurement approval, regulatory scrutiny, board optics. It's not a description. It's a strategy. And it has worked, because organizations bought the framing along with the capability, and now have workflows they couldn't describe as "augmented" with a straight face.
So where does accountability land when an AI-assembled workflow produces a bad outcome? Right now: nowhere. That is not hyperbole. The procurement signer approved a task-specific tool with its own contained ROI case. The vendor sold a product that performs as specified. The workflow that those tools assembled collectively sits in a gap: between contracts, between org chart lines, outside the legal definitions anyone drafted when they wrote the terms of service. Nobody owns the workflow. Everybody owns a task.
That should be alarming. Not because bad outcomes are inevitable, but because the accountability structure that would catch and correct a bad outcome before it becomes a crisis, that structure doesn't exist yet in most organizations. The efficiency gains are real and already in the budget. The accountability architecture is still theoretical.
What I'm watching closely is whether the agentic orchestration platform changes this dynamic or accelerates it. My honest read: both, depending on the organization. A small group of mature, deliberate organizations, the ones who were already doing workflow mapping before procurement, who already had security and legal at the design table, will use Nintex, Copilot Studio, and platforms like them to do exactly what those platforms were designed to enable: define the workflow first, configure the agents inside it, set the guardrails before deployment, and maintain a complete audit trail of what was delegated and why. For those organizations, the platform genuinely forces the design conversation, because you cannot configure guardrails without deciding what you're guarding.
For everyone else, the platform will make accumulation faster and cheaper. The same five decisions, each locally rational, collectively unexamined, will just be easier to make in one place.
Here's the structural reality I think most organizations are not yet reckoning with: the auditors haven't arrived yet. The regulatory frameworks that will eventually require organizations to account for autonomous workflow decisions, who authorized them, under what criteria, with what human oversight, and how exceptions are handled, those frameworks are being drafted right now. GDPR took years to land. The EU AI Act is already in motion. The US regulatory posture is slower but not absent. The window between "we accumulated this workflow through procurement" and "we need to demonstrate we designed it with intention" is open, but it is not going to stay open.
The organizations that use that window to map what they've built, establish where accountability sits, and make explicit decisions about what requires human judgment, not because the AI can't do it, but because the organization has determined that accountability requires a person, will be positioned to operate without disruption when the frameworks arrive. The ones that don't will discover that the workflow they built by default is not the workflow they would have chosen under scrutiny.
The vendors knew what they were building. The buyers, in most cases, didn't ask the right questions. The auditors haven't arrived yet.
That window is closing.
If this analysis is useful to you, the full article, with all references, data points, and links to every podcast conversation mentioned, is on seanmartin.com. Search for Lens Four.
And if these are the kinds of conversations you want more of, the Redefining CyberSecurity Podcast is where I explore them in depth every week with the people building, buying, and breaking these systems. You can find it wherever you listen to podcasts, or at redefiningcybersecuritypodcast.com.
Thanks for listening.