The ITSPmagazine Podcast

Agentic AI, Bot Economics, and the New Arms Race | A Brand Spotlight at RSAC Conference 2026 with Kevin Gosschalk, Founder and CEO of Arkose Labs

Episode Summary

When AI agents started masquerading as real users on the web, Arkose Labs founder Kevin Gosschalk recognized the pattern immediately -- because his company has been fighting bots for nearly a decade. In this RSAC Conference 2026 conversation, Gosschalk maps the agentic AI threat landscape, explains how intent-based detection is replacing blanket bot blocking, and walks through the surprisingly creative economics driving modern fraud.

Episode Notes

A decade ago, Kevin Gosschalk was talking CAPTCHAs and bot mitigation with Marco Ciappelli at a security conference. Today, at RSAC Conference 2026, the conversation has shifted to agentic AI -- autonomous systems that browse, click, and transact on behalf of users. For Gosschalk, the Founder and CEO of Arkose Labs, the technology has changed but the challenge is familiar: how do you tell the difference between a legitimate automated actor and a malicious one?

Gosschalk explains that the vast majority of agentic traffic today is not self-identifying. Rather than announcing themselves as AI agents, these systems impersonate real Chrome browsers on macOS -- choosing configurations with stronger privacy features to evade fingerprinting. There are two technical categories to contend with: headless browsers running in the cloud, which can be caught through device-spoofing checks, and on-device agents that control a real browser instance, which require a deeper look at behavioral patterns and intent signals. Arkose Labs builds intent models around payment fraud, fake account creation, and account compromise to distinguish the good agents from the bad.
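The two detection categories Gosschalk describes can be sketched as a first-pass triage. This is purely illustrative, not Arkose Labs' actual logic; the field names, thresholds, and the `triage` function are hypothetical stand-ins (the user-agent tokens shown are real self-identifying crawler tokens published by OpenAI and Anthropic):

```python
# Illustrative triage of incoming requests into the categories described above.
# All request field names and heuristics here are hypothetical.

KNOWN_AGENT_TOKENS = {"GPTBot", "ClaudeBot", "OAI-SearchBot"}  # self-identifying agents

def triage(request: dict) -> str:
    ua = request.get("user_agent", "")
    # Easy case: the agent announces itself in the User-Agent string.
    if any(token in ua for token in KNOWN_AGENT_TOKENS):
        return "self-identified-agent"
    # Headless-browser tells: the claimed platform disagrees with observed device signals.
    claims_mac = "Macintosh" in ua
    if claims_mac and request.get("gpu_renderer", "").startswith("SwiftShader"):
        return "suspected-headless"   # software renderer behind a "real Mac" claim
    if request.get("webdriver_flag"):
        return "suspected-headless"   # automation flag exposed by the browser
    # On-device agents look exactly like the real browser they control,
    # so they fall through to behavioral and intent models.
    return "needs-behavioral-analysis"
```

The point of the sketch is the ordering: cheap declarative checks first, spoofing-consistency checks second, and expensive behavior/intent analysis only for traffic that is indistinguishable at the device level.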

The economic framing Gosschalk brings to this conversation is striking. He describes SMS toll fraud -- where bad actors acquire millions of premium phone numbers and trigger OTP messages from victim companies, earning three to six cents per message while costing those companies tens of millions of dollars annually. He walks through micro deposit fraud targeting fintechs. His core thesis: fraud is an economic activity, and the best defense is making attacks more expensive than they are worth. Arkose Labs builds challenge mechanisms designed to raise that cost through novel stimuli that ML models have not been trained to solve -- presenting something genuinely new forces a brute-force approach that is less effective than purpose-built attacks.
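The "make attacks more expensive than they are worth" thesis reduces to simple break-even arithmetic. The kickback figures below are the three-to-six-cents range quoted in the episode; the per-message attack costs are hypothetical, chosen only to show the sign flip the defense aims for:

```python
# Back-of-the-envelope economics of the SMS toll fraud scheme described above.

def attacker_profit(messages: int, payout_per_msg: float, cost_per_msg: float) -> float:
    """Net profit for the fraudster: premium-number kickback minus attack cost."""
    return messages * (payout_per_msg - cost_per_msg)

# 10M triggered OTP messages at a 4.5-cent kickback, nearly free to automate:
baseline = attacker_profit(10_000_000, 0.045, 0.001)
# The defensive thesis: push the attacker's per-message cost (e.g. via novel
# challenges that force human labor or expensive solving) past the payout,
# and the same campaign runs at a loss.
defended = attacker_profit(10_000_000, 0.045, 0.06)
```

No detection needs to be perfect under this framing; it only needs to move `cost_per_msg` past `payout_per_msg` so the rational attacker moves on.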

The platform's consortium model is a key differentiator. Arkose Labs protects large enterprises including Expedia and Meta, and when an attack signature appears on one customer but nowhere else in the network, its uniqueness is itself a strong fraud signal. Customers can also feed labeled outcome data back into the system -- if something slips through and later proves malicious, that label sharpens the model for the entire consortium.
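The consortium mechanics -- uniqueness of a signature as a fraud signal, plus labeled outcome feedback -- can be illustrated with a toy model. The class, risk levels, and labels below are hypothetical, not the real system:

```python
# Toy sketch of the consortium idea: a signature seen at exactly one customer,
# nowhere else in the network, is itself suspicious; downstream labels override.
from collections import defaultdict

class Consortium:
    def __init__(self):
        self.sightings = defaultdict(set)   # signature -> set of customer ids
        self.labels = {}                    # signature -> "benign" / "malicious"

    def observe(self, customer: str, signature: str) -> None:
        self.sightings[signature].add(customer)

    def feedback(self, signature: str, label: str) -> None:
        # Customers label outcomes after the fact; the label benefits everyone.
        self.labels[signature] = label

    def risk(self, signature: str) -> str:
        if self.labels.get(signature) == "malicious":
            return "block"                  # confirmed by downstream feedback
        if len(self.sightings.get(signature, set())) == 1:
            return "elevated"               # unique to one customer: strong fraud signal
        return "baseline"
```

The design choice worth noting is that the network effect works in both directions: breadth of sightings lowers suspicion, while a confirmed label from any single customer hardens the whole consortium at once.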

Gosschalk is equally clear about the opportunity side of agentic AI. Blocking all automated traffic is no longer viable -- legitimate agentic commerce is coming, where consumers will delegate shopping, comparison, and purchasing to AI assistants. The future is not blanket blocking but granular, policy-driven enforcement: letting each customer define what kinds of agentic behavior they want to permit on their platforms. Integration is accessible -- a basic JavaScript deployment for web, SDKs for mobile, and extended support for IoT devices and CDN integrations.

This is a Brand Spotlight. A Brand Spotlight is a ~15 minute conversation designed to explore the guest, their company, and what makes their approach unique. Learn more: https://www.studioc60.com/creation#spotlight

GUEST

Kevin Gosschalk, Founder and CEO, Arkose Labs
LinkedIn: https://www.linkedin.com/in/kgosschalk/

RESOURCES

Arkose Labs: https://www.arkoselabs.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS

Kevin Gosschalk, Arkose Labs, Sean Martin, Marco Ciappelli, brand story, brand marketing, marketing podcast, brand spotlight, agentic AI, bot detection, bot mitigation, fraud prevention, SMS toll fraud, micro deposit fraud, behavioral biometrics, intent detection, CAPTCHA, account takeover, synthetic identity, RSAC Conference 2026, cybersecurity

Episode Transcription

Agentic AI, Bot Economics, and the New Arms Race | A Brand Spotlight at RSAC Conference 2026 with Kevin Gosschalk, Founder and CEO of Arkose Labs

[00:00:10] Marco Ciappelli: Kevin? It's been a long time.

[00:00:12] Kevin Gosschalk: It has been.

[00:00:13] Marco Ciappelli: Is it aging us or not?

[00:00:15] Kevin Gosschalk: The other one is getting older. Not me.

[00:00:17] Marco Ciappelli: You do. You don't. I --

[00:00:17] Kevin Gosschalk: I get younger.

[00:00:18] Marco Ciappelli: You're a vampire. A vampire, right? That's right.

[00:00:20] Kevin Gosschalk: Yeah.

[00:00:21] Marco Ciappelli: Good to see you, man.

[00:00:22] Kevin Gosschalk: Good to see you too.

[00:00:22] Marco Ciappelli: You know, when we ran into each other yesterday in the street -- in three seconds, a lot of memories.

[00:00:28] Kevin Gosschalk: Yeah.

[00:00:29] Marco Ciappelli: Of uh, early -- it's been what --

[00:00:29] Kevin Gosschalk: Been what --

[00:00:30] Marco Ciappelli: Nine years? Ten years? Yeah. Something like that. The company was just formed -- fledgling.

[00:00:35] Kevin Gosschalk: Fledgling. Yes.

[00:00:36] Marco Ciappelli: And I remember we were talking about bots and CAPTCHA at the time.

[00:00:42] Kevin Gosschalk: Yeah.

[00:00:42] Marco Ciappelli: And now we're here talking about --

[00:00:43] Kevin Gosschalk: Bots.

[00:00:44] Marco Ciappelli: Agentic AI.

[00:00:45] Kevin Gosschalk: Yeah. Which is also just bots -- slightly smarter bots perhaps.

[00:00:48] Marco Ciappelli: Bots that act different.

[00:00:49] Kevin Gosschalk: Yep. Slightly smarter bots perhaps.

[00:00:51] Marco Ciappelli: Yeah. But I know the company has been growing quite a bit.

[00:00:55] Kevin Gosschalk: Yeah, we've been very successful.

[00:00:56] Marco Ciappelli: Lot of stories I'm sure to tell.

[00:00:58] Kevin Gosschalk: Oh yeah. We have all kinds of fun stories.

[00:01:00] Marco Ciappelli: And I know you're a great storyteller, so that's why I'm excited to have this chat with you, of course, at RSAC Conference 2026.

[00:01:07] Kevin Gosschalk: Oh yeah. It's very --

[00:01:08] Marco Ciappelli: It's very RSAC, always busy. And I like to say it's the place where nobody wants to go, but then everyone is glad they went.

[00:01:18] Kevin Gosschalk: So when we first met, I used to live in Australia back then. Yeah. I was coming over here every month for an event or a conference. I think we'd been at AppSec California or AppSec USA, whatever it was.

[00:01:29] Marco Ciappelli: Yeah, that too.

[00:01:30] Kevin Gosschalk: And about eight years ago I moved here to San Francisco and been living this madness that is San Francisco ever since. But it's been fantastic for us. We've had an incredible amount of success at Arkose Labs. We're very fortunate to work with really incredible companies. And I think more importantly, we get to solve really fun problems. So we're always in the way of bad guys trying to get through us to attack our customers. And that really puts us on the front line of figuring out what we need to build to disrupt that, prevent that, and all that kind of stuff.

[00:02:10] Marco Ciappelli: Hmm.

[00:02:10] Kevin Gosschalk: And I think RSAC itself -- every other year it's like, what's the big trend? What's the big threat vector? It certainly changes and ebbs and flows.

[00:02:17] Marco Ciappelli: The bingo card.

[00:02:18] Kevin Gosschalk: Yeah, yeah. To do. It's interesting now that I've come here so many times since I started the company, but this year I feel like it's like reinventing the category we've been in for so long. So for me it's kind of really quite a lot of fun. Everyone, once again, is now talking about bots. Bots are gonna be buying things, bots are gonna be going and signing up for stuff, bots are gonna be writing code or whatever it may be. But agents at the end of the day are autonomous systems doing things for users -- that's really the core of what we've been doing for so long now.

[00:02:50] Marco Ciappelli: So is it just -- would you say it's smarter?

[00:02:55] Kevin Gosschalk: Yeah. It's good at jerry-rigging stuff together better than maybe humans have. Humans have had to kind of be in the loop to jerry-rig stuff, and now bots can kind of also jerry-rig stuff. So that enables and changes the way you can use these things, I think. And I think the most profound one so far from my perspective is probably what OpenAI's computer use will end up becoming. I think that's the most interesting.

[00:03:19] Marco Ciappelli: Mm-hmm.

[00:03:19] Kevin Gosschalk: Which is kind of like a harness that allows you to string multiple agents together to do a more complicated thing. And it can just keep going. Like in theory it can reason with: am I doing this task well, have I done this task? They talk to --

[00:03:34] Marco Ciappelli: Each other.

[00:03:35] Kevin Gosschalk: Yeah, exactly. They brainstorm with each other, and it writes it to a file. So it's got a bit of a memory going on around what worked well, what didn't work well. And I think that's gonna be super important for getting this kind of technology to do actions for us on a day-to-day basis. But it's also incredibly important from an anti-adversarial mindset, because all the bad actors are absolutely looking at this technology and thinking: how can I use this to generate synthetic identities and go enroll for a bunch of bank accounts, credit cards, and reap promo fraud bonuses and all that stuff? Like that's already absolutely being discussed.

[00:04:12] Marco Ciappelli: Mm-hmm.

[00:04:13] Kevin Gosschalk: And the way these tools interact with the internet is the same way bots have been interacting with the internet for many years. So it's really the same kind of stuff, but with a little bit more intelligence piloting it as opposed to a human having to do that now.

[00:04:26] Marco Ciappelli: Right. It's kind of funny because the story is reinventing itself in a way. Like the industry thinks it's going in one direction, then we kind of walk back. Right. I had a conversation today about how it's not a good idea to just be all on the cloud -- we need to be hybrid.

[00:04:43] Kevin Gosschalk: Interesting. We're going backwards.

[00:04:44] Marco Ciappelli: Yeah, we're going backwards a little bit, but maybe with a little bigger brain at that point. But one thing that is very important in what you do is to prevent fraud, prevent bad things from happening with identity. But you do this on the website where people -- the user, the client, the consumer -- are present. So you need to balance --

[00:05:09] Kevin Gosschalk: Your experience.

[00:05:10] Marco Ciappelli: You can't stop everything, right?

[00:05:11] Kevin Gosschalk: That's right. And there's also good bots. That's the big difference here. So historically we've had this concept of good bots -- like SEO bots, like Google Bots going to your website. If you block them, your website doesn't turn up on Google. So that's been a known concept for a long time. But the concept of a bot going to your login screen or your account creation screen, or using your app, or buying something -- that's always been a no-no. You don't want bots doing that. But now that's different. So we're gonna keep seeing the bad stuff trying to do those things, but there are absolutely also gonna be these good agents. Agentic commerce -- where you want to enable that. People are gonna say to their app: 'Hey, I want to buy this pair of shoes. Go find me the right price.' This is the size, color, shipping timeframe -- go find it, and the agent will do that. Right? That's a sale you'd miss out on if you block all bots. So now our customers want the ability to enforce the policies they want, as opposed to just a blanket 'block all bots.'

[00:06:22] Marco Ciappelli: Right. So let's talk about this -- let's name the elephant in the room: agentic AI. So how do you identify an agent? How do you give them an ID? And then decide: yep, you're good, or nope, you're not?

[00:06:41] Kevin Gosschalk: It's complicated. No surprise. So there are agents who are self-identifying as agents -- when they connect to your website in their URL or in the user agent equivalent, they'll add some information in there and you can look that up in a database and say, okay, this is owned by OpenAI, or this is owned by Anthropic. So that's an easy part of the problem.

[00:07:04] Marco Ciappelli: That's easy.

[00:07:05] Kevin Gosschalk: Yeah. In certain scenarios they just self-identify. But that is almost 0% of what's actually occurring right now. The vast majority of agentic traffic is showing up pretending to be a normal Chrome browser, pretending to be macOS -- because that has more privacy-preserving features, so it's harder to fingerprint. It's masquerading as a real user because it doesn't want to be redirected to an MCP server. It doesn't want to be told it can't buy something. It just wants to make the action happen. So the industry right now is a bit of the Wild West -- there's no real regulation. Everyone's just doing whatever they want. It's kind of like Waymo -- there's no specific lane for a Waymo. Waymo's just on the same streets that we drive on.

[00:07:46] Marco Ciappelli: Mm-hmm.

[00:07:46] Kevin Gosschalk: And I think that's gonna be where agents end up as well -- they're gonna be interacting with the web the way humans interact with the web. I don't think MCP and these protocols, when it comes to consumer stuff, is going to be where adoption is. For internal business use cases, yes -- the APIs would be the way to go. But that comes back to the original question: if that's true, how do we identify? Because we still want a policy -- if it's an agent, we want to decide what to do with it. There are two categories of what agentic traffic will look like to an e-commerce website.

[00:08:23] Marco Ciappelli: Mm-hmm.

[00:08:23] Kevin Gosschalk: One is headless browsers running in the cloud. So you ask ChatGPT in agent mode to go to a website and buy an item -- it will spin up a headless browser in the cloud and navigate as a headless browser using Selenium, Puppeteer, and all these browser engine technologies. There are a number of ways of identifying that, because it's not a real device. Device spoofing checks and all these different kinds of things can identify that it's not what it claims to be. The other kind of agentic traffic is the agent running on the device locally.

[00:09:00] Marco Ciappelli: Okay.

[00:09:01] Kevin Gosschalk: Where it's actually controlling your laptop and visiting a website -- that's just going to look like your laptop, because it is. Right? So you can't look at how the browser looks, because it's gonna look like your browser -- it's controlling your Chrome instance.

[00:09:15] Marco Ciappelli: Mm-hmm.

[00:09:15] Kevin Gosschalk: So in that case, what you need to look at is two other things. One is the behavior of how it's controlling your Chrome and what it's doing -- there are behavioral biometric signals absolutely available today. The other thing is: what's the intent? If it's going through and doing stuff in a normal user way, you probably categorize that as a good agent. If it's signing up for an account and then bailing once it gets a promo credit -- you put that more in the suspicious category. So this concept of intent is also important. We at Arkose Labs have intent signals around payment fraud, fake account creation, and account compromise. And there are a number of signals we look at to determine if someone's trying to compromise an account, phish a user, or create a fake account.

[00:10:08] Marco Ciappelli: You know, I cannot talk about these things without having my brain going into a sci-fi movie. And we are there. This is not about writing letters in a box anymore, although I still see that. So I think about you every time.

[00:10:22] Kevin Gosschalk: So the biggest irony -- everyone's like, the concept of a CAPTCHA is meant to be something machines can't solve. I think that concept is not achievable. Like anything a human can do, you can train a machine to do. So this idea of a 'human-proof test' -- I don't think that's ever truly existed. Even text CAPTCHAs, you can always train a model to solve them. What we focus on is economic disruption. If we can make the effort and cost of what they're trying to achieve higher than their profit -- that is the entire strategy we take at Arkose Labs. So we have technology --

[00:11:00] Marco Ciappelli: Because they'll go with the lower-hanging fruit.

[00:11:02] Kevin Gosschalk: Yeah. Or ideally go get a real job. We have challenge mechanisms designed to be expensive for human labor attacks and machine vision attacks. No agentic AI system can get through something it's never seen before. If you want to defeat a large multimodal model, you show it something novel that it hasn't been trained on -- it doesn't know what to do. Because at the end of the day, however much we emphasize that it's AI, it's really predictive text modeling. It's not intelligent -- it's just predictive. It doesn't conceptualize. And so if you present something to it that it has no concept of, it doesn't know how to actually reason. It will try to brute force, but that's less effective than the purpose-built ML models attackers train for known challenges. I think it's gonna be that way for quite a while. AGI is a different beast entirely, of course.

[00:12:22] Marco Ciappelli: Yeah. And let's go back to how easy it remains to use the website for the user. So is this control -- this identity tracing -- happening in the backend?

[00:12:39] Kevin Gosschalk: Yes.

[00:12:39] Marco Ciappelli: So the user doesn't see anything?

[00:12:41] Kevin Gosschalk: Ideally, they should never see anything. It's all risk-based, so it should be frictionless for good customers -- unless you're doing something that happens to overlap with a bad actor. Like if you're at a cafe that a bad actor is launching an attack from, you might be collateral damage as part of that signal. But we have ML models that look for patterns of behavior. If it's a new pattern of behavior that's never been seen before -- and our product has full consortium data, so we protect very large brands like Expedia and Meta and others -- if I see an attack signature on one customer that I don't see anywhere else, there's a really high likelihood that's a bad signature.

[00:13:23] Marco Ciappelli: Right.

[00:13:24] Kevin Gosschalk: The uniqueness of a signature in itself is a giveaway that it's most likely a bad actor.

[00:13:28] Marco Ciappelli: Yeah. And Arkose Labs works by sharing intelligence in real time -- like what you just said. We just learned this on this occasion here, and we apply it across --

[00:13:39] Kevin Gosschalk: Across the whole customer base. That's right. And it's something where customers can also label data and share it back with us. Like if something does get through Arkose Labs but at the time we didn't flag it as a threat, and it goes on to do something bad -- they can actually label that and give it back to us so our systems also get intelligence from their downstream risk models as well.

[00:13:56] Marco Ciappelli: Yeah.

[00:13:57] Kevin Gosschalk: Which is very powerful.

[00:13:58] Marco Ciappelli: Yeah. Well, tell me about the economics of cybercrime. We were talking before recording about how there's customer support, marketing for bad guys on the dark web -- and how this has become kind of like the epic historic battle between good and evil.

[00:14:27] Kevin Gosschalk: It's always going to be an arms race. There's no silver bullet in security. As long as there is an incentive to do something, people will keep trying to achieve that outcome. The economics of fraud are really favorable for the bad actors. There's a lot to attack. They get the advantage of people not knowing when they're going to attack or what they're going after. There are an incredible number of ways of making money by doing bad things on the internet. Like there's one attack that costs companies we work with tens of millions of dollars -- to the point where they're not profitable in certain countries because of this attack vector. So when you sign up to a website, sometimes they ask you to put a phone number in --

[00:15:12] Marco Ciappelli: Mm-hmm.

[00:15:13] Kevin Gosschalk: To text you a verification code -- as part of the signup flow or 2FA. So it turns out in a lot of countries like Egypt, Ukraine, and many places in Europe, sending that text message is pretty expensive. It can cost up to 29 or 30 cents to send an OTP code to someone in Egypt or Vietnam because you've got to use the local telecom infrastructure. Now, telecom providers have premium phone numbers where the person receiving the text message gets a small cut of the cost charged to the sender -- they use this for game shows on TV where texting a number costs a dollar and the show keeps 80 cents.

[00:16:07] Marco Ciappelli: A referral fee.

[00:16:08] Kevin Gosschalk: Yeah, basically. An affiliate fee. So bad actors get access to millions of premium phone numbers, then use a bot to sign up for millions of accounts and make three to six cents every time you send them an OTP message. Companies are losing tens of millions of dollars to this. You wouldn't even think of that. And it's not a type of fraud that hits consumers -- it's attacking the platform itself. So consumers aren't even aware it's happening. But fraudsters are very creative in finding ways to make money. Another one I find interesting is micro deposit fraud. Fintechs don't have a way to verify you own a bank account via an API, so they deposit a few cents -- like 13 cents -- and ask you to confirm the amount. Of course, bad actors create bots and sign up for huge numbers of bank accounts and collect 13 cents times thousands and thousands of accounts. That's called micro deposit fraud.

[00:17:15] Marco Ciappelli: Someone's making money out of that and nobody noticed.

[00:17:17] Kevin Gosschalk: Yeah. Someone's making money out of that. There are all these different types of attack vectors. And I think agentic technology will resurface fraud schemes that aren't currently profitable because they require a human in the loop or have other costs that make them expensive. Agentic AI changes that -- you can now do them at massive scale, and the cost has plummeted. So we're gonna see a reinvention of what was old fraud, coming back with an agentic flavor at much higher scale.

[00:17:43] Marco Ciappelli: And again, history repeating itself. When you look at phishing, all the various schemes -- I always bring up the example of the Spanish letter. That goes back a long way, and people still fall for it. But what people need to understand is that in a lot of cases it's happening in the background, and the consumer isn't even aware of it.

[00:18:08] Kevin Gosschalk: Yes. And then of course there's plenty of fraud that hits consumers too -- social engineering scams, pig butchering, all that kind of stuff.

[00:18:17] Marco Ciappelli: Well, I really enjoy catching up with you. Always a great story. But to end this conversation -- you already work with big companies. If somebody is watching this right now and thinks: 'Hey, I want to work with Arkose Labs' -- how easy is it to implement what you have on their system? And what can they do to get started?

[00:18:40] Kevin Gosschalk: Yeah, that's a good question. It's a pretty basic JavaScript integration for websites. We have SDKs for mobile apps. We can also protect IoT devices -- we work with a lot of big media companies who have our full capabilities on web but also have TVs connecting to their authentication servers, and those need protection too. And then there are CDN integrations as well.

[00:19:04] Marco Ciappelli: Okay, that's great. Just go to arkoselabs.com.

[00:19:07] Kevin Gosschalk: That's the one.

[00:19:08] Marco Ciappelli: And you're gonna find a lot of very helpful people there. Hopefully not agentic AI -- not yet.

[00:19:13] Kevin Gosschalk: Not yet.

[00:19:13] Marco Ciappelli: Or maybe the good ones eventually.

[00:19:15] Kevin Gosschalk: Yeah. Maybe. We'll find out. If it's a week from now, it might be all agents.

[00:19:21] Marco Ciappelli: There you go. Keep an eye open for that.

[00:19:23] Kevin Gosschalk: Yeah.

[00:19:24] Marco Ciappelli: Good to have a chat. And everybody stay tuned for more conversations -- see you on the next one.