ITSPmagazine

The Silent Risk in AI-Powered Business Automation: Why No-Code Needs Serious Oversight | A Conversation with Walter Haydock | Redefining CyberSecurity with Sean Martin

Episode Summary

AI-driven automation with no-code tools is empowering business teams to move fast—but at what risk? In this episode, Walter Haydock, founder of StackAware, joins Sean Martin to outline the hidden dangers, governance gaps, and practical safeguards every organization needs to understand before letting no-code AI fly free.

Episode Notes

GUEST

Walter Haydock, Founder, StackAware | On Linkedin: https://www.linkedin.com/in/walter-haydock/

HOST

Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

EPISODE NOTES

No-Code Meets AI: Who’s Really in Control?

As AI gets embedded deeper into business workflows, a new player has entered the security conversation: no-code automation tools. In this episode of Redefining CyberSecurity, host Sean Martin speaks with Walter Haydock, founder of StackAware, about the emerging risks when AI, automation, and business users collide—often without traditional IT or security oversight.

Haydock shares how organizations are increasingly using tools like Zapier and Microsoft Copilot Studio to connect systems, automate tasks, and boost productivity—all without writing a single line of code. While this democratization of development can accelerate innovation, it also introduces serious risks when systems are built and deployed without governance, testing, or visibility.

The conversation surfaces critical blind spots. Business users may be automating sensitive workflows involving customer data, proprietary systems, or third-party APIs—without realizing the implications. AI prompts gone wrong can trigger mass emails, delete databases, or unintentionally expose confidential records. Recursion loops, poor authentication, and ambiguous access rights are all too easy to introduce when development moves this fast and loose.

Haydock emphasizes that this isn’t just a technology issue—it’s an organizational one. Companies need to decide: who owns risk when anyone can build and deploy a business process? He encourages a layered approach, including lightweight approval processes, human-in-the-loop checkpoints for sensitive actions, and upfront evaluations of tools for legal compliance and data residency.

Security teams, he notes, must resist the urge to block no-code outright. Instead, they should enable safer adoption through clear guidelines, tool allowlists, training, and risk scoring systems. Meanwhile, business leaders must engage early with compliance and risk stakeholders to ensure their productivity gains don’t come at the expense of long-term exposure.

For organizations embracing AI-powered automation, this episode offers a clear takeaway: treat no-code like production code—because that’s exactly what it is.

ADDITIONAL INFORMATION

✨ More Redefining CyberSecurity Podcast: 

🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast

Redefining CyberSecurity Podcast on YouTube:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/

Interested in sponsoring this show with a podcast ad placement? Learn more:

👉 https://itspm.ag/podadplc

⬥KEYWORDS⬥

sean martin, walter haydock, automation, ai, nocode, compliance, governance, orchestration, data privacy, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast

Episode Transcription

The Silent Risk in AI-Powered Business Automation: Why No-Code Needs Serious Oversight | A Conversation with Walter Haydock | Redefining CyberSecurity with Sean Martin
 

[00:00:36] Sean Martin: And hello everybody. You're very welcome to a new episode of Redefining Cybersecurity. This is Sean Martin, your host, where I get to talk about all kinds of cool things, uh, related to technology and cybersecurity and, and, uh, some cool folks I get to connect with as well to have these conversations. 
 

Walter, good to see you. 
 

[00:00:56] Walter Haydock: Sean, thanks for having me on. 
 

[00:00:59] Sean Martin: Glad to have you [00:01:00] on and, uh. We met, I believe it was at, uh, HITRUST Collaborate over a 
 

[00:01:05] Walter Haydock: Almost exactly a year ago. 
 

[00:01:07] Sean Martin: Yeah, right, right around there. And, uh, yeah, I remember having some good conversations with you then. And, uh, we've stayed in touch, and, as with many of my episodes, they're usually triggered or inspired by something that somebody smart like you posts, and that's the case here. 
 

And we're looking at, uh, kind of this, this world of automation and orchestration and low-code slash no-code, and tools to do a bunch of things for the business, automate a bunch of processes, connect people, connect data, connect processes, and, and of course we'll just sprinkle AI on all of that as well, just for some fun. 
 

Right? And that's kind of where we're gonna go today. Kind of, how does AI fit into all this stuff? Are teams and organizations using tools and connecting AI? [00:02:00] What some of the outcomes might look like, but, but also what some of the, uh, pitfalls might be if they're not thinking about this stuff clearly, which you do all day, all night. 
 

I'm pretty sure, Walter, 
 

[00:02:10] Walter Haydock: It's my job. 
 

[00:02:11] Sean Martin: that's your job. So on, on that note, maybe a few words, uh, from you about some of the things you worked on in the past, uh, what you're up to now and, and maybe the inspiration behind that post to kind of kick things into gear. 
 

[00:02:27] Walter Haydock: So I'm the founder of StackAware, and we help AI-powered companies measure and manage cybersecurity, compliance, and privacy risk. And I have the pleasure of working with a lot of organizations at the cutting edge when it comes to AI development and deployment. And a trend that I'm seeing that's quite common, and something that I'm participating in, is the use of no-code tools plus non-deterministic artificial intelligence systems, and it provides [00:03:00] a very powerful way to accelerate productivity, but at the same time can bring with it a whole series of risks. 
 

[00:03:11] Sean Martin: Yeah, no, no question about that. And I'm certain, uh, my audience has been beaten over the head a gazillion times at this point: there's risk in AI, pay attention to what you're doing. Um, we're gonna get a little, uh, deeper into a couple areas now. You mentioned, uh, a few tools here. I'm not necessarily looking to, uh, mention brands, but Zapier is one that I happen to use. 
 

One of the two you mentioned in this post, and I'll include the link to the post for folks as well. Um, and I'll just kind of put it out there what I do. Some of the things I do with this tool, it's primarily connecting with, um, our guests on shows and connecting with, um, our audience on social media, and [00:04:00] kind of automating the ingestion of certain types of data and the creation of certain processes and the outputs of data that we can use, uh, across our portfolio. And, uh, we're not putting a lot of IP in here, but it does make me wonder: what am I training certain models on when I use this? Are there things that I'm exposing my organization to as I pump data through this system? Um, and is it worth it? So hopefully I'm painting a picture of how I use it to some degree, but I'm curious, what are some common areas where you see organizations using tools to gain efficiencies, to reduce human error? 
 

I don't know. What are, what are some of the things they're doing? 
 

[00:04:50] Walter Haydock: Yeah, I've seen companies do everything from automating the [00:05:00] menial stuff that you alluded to, like processing information, taking forms, moving data from one place to another, all the way up to it being the core of their business, and, you know, that's how they process, uh, sensitive information. Obviously there's some restrictions on what you can do with some of the no-code tools out there in terms of business associate agreements and, and data residency and things like that. But there's not necessarily a reason why you couldn't use a no-code tool as the backbone for your business operations. The key is just understanding the security, compliance, and privacy implications of doing that. 
 

[00:05:38] Sean Martin: And do organizations think along those lines at this point in time? Um, I'm just thinking, yeah, what types of data are they putting in there? And you mentioned being bound to certain laws and regulations and agreements, BAAs certainly in the healthcare [00:06:00] space, where you may not be allowed to put data in certain places, specifically in AI, because it extends that data access beyond your own world, where maybe the BAA says otherwise. 
 

[00:06:18] Walter Haydock: Yeah, I mean, really understanding the organizational context is key to making informed decisions about any information system, but specifically when it comes to no code. So if you're a very early stage startup and you are building an AI-powered meme generator, then your requirements for data confidentiality might be almost zero. 
 

Your requirements for integrity also might not be that high, because, you know, sometimes you might get something funny that comes out there. Availability might be a concern for you if your platform requires people to be able to access it during certain times. So that's kind of at one end of the spectrum. Conversely, if you're in the healthcare space, then [00:07:00] you've got all three. You've got confidentiality, integrity, and availability that you need to protect, and you need to make sure that the system that you're working with is giving you what you need from, uh, that requirements perspective. 
 

[00:07:15] Sean Martin: And so where do organizations, I think we touched on this a little bit when we met the first time, but where do organizations kind of put that analysis? Um, is it, because it's no code, does it land in the business world, or is there a program team? Um, we're not talking about engineers here necessarily, but there may be some false understanding around no code: it still means there's code, just you're not writing it, right? There's still coding going on somewhere. Um, and who knows what's going on behind there. So how does an organization understand their business processes, their data sets, the things they're trying to build with it, and who kind of owns that [00:08:00] picture of what's appropriate to do versus not? 
 

[00:08:05] Walter Haydock: Just like with any information system, I think a cross-functional business leader is the person who should be ultimately accountable for making risk decisions, because that person will usually be owning the profit and loss statement and is ultimately accountable for mission success or failure. But security teams can help enable that process by highlighting risks to the business and the business unit from the use of no-code tools, for example, and helping the business leader establish policies and procedures, secure development guides, checklists that can assist individual business users when they're building automations, when they're integrating third-party tools, to understand exactly what they're doing to help avoid some of those unacceptable risks. 
 

[00:08:53] Sean Martin: Let's talk about some of those, 'cause you list a few here. I think if you, [00:09:00] if you hire a team of engineers and you set some requirements and they set off to build something, and let's just use the first one. Or actually let's use the, uh, the second one, where a bad prompt triggers a database delete, or an override or something. If you have engineers, with a product team, you might say, this is what it's supposed to do, these are the things it shouldn't do, and you have developers building to that and you have quality assurance testing against that. Um, usually a formal process to do that. 
 

Um, uh, my expectation is that some of that formality of what's what and what's not, and kind of the overview of managing all that, might be a little looser when we're talking about no-code tools. Um, how does an organization need to know, or do they know, that they should look for bad prompts that delete their database? I mean, who's [00:10:00] gonna check for that? And how would they check for that with no-code, uh... 
 

[00:10:06] Walter Haydock: Specifically in the case of no-code and data integrity, as you're alluding to, having a human-in-the-loop approval step for anything that is modifying, deleting, or updating records is gonna be something that I would recommend in most cases, unless, you know, there's some reason why you don't especially care about the data that you're impacting. So having human review of database delete and update operations, I would say, is a key control. 
 

[00:10:42] Sean Martin: Another scenario you, uh, you provided here is chained AI functions, and specifically the example you gave is it pings customers by mistake. I don't know if this is something you've actually seen, 'cause clearly you might have, uh, [00:11:00] either one chain of AI prompts that continue to build on each other, or you might have multiple LLMs where you're using data and outputs and cross-checking or redoing or revising content. Um, presumably you might have some filters or some redaction taking place in one area, but it might get fed to another, and the redaction gets lost and the privacy gets lost. So in the example you gave, a customer gets pinged; maybe that customer is not supposed to be known, let alone pinged. Um, so what do you see there, and what are some of the things that organizations can do for that scenario? 
 

[00:11:44] Walter Haydock: Specifically in the world of no-code, it's possible to build recursive functions that, maybe not infinitely, but extensively execute over a period of time, and maybe you accidentally send [00:12:00] 50 emails to a customer by running a loop over a function that you didn't intend to. And if you're using a third-party artificial intelligence service, like OpenAI or something like that, you may be racking up a bill doing, you know, compute processes while you're executing that loop. So it's important for business users who are developing no-code systems integrated with AI to understand these risks and to have the right safeguards in place. So understanding that functions can loop over themselves for a long period of time, and applying timeouts, rate limits, things like that, are all safeguards that organizations can put in place to help manage some of these risks. 
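
The loop safeguards Haydock describes, a hard iteration cap, a dedupe guard, and a rate limit on billable AI calls, can be sketched roughly like this (the function names and thresholds are hypothetical stand-ins, not any no-code platform's real configuration surface):

```python
import time

# Illustrative guardrails for a workflow step that calls a billable AI
# service inside a loop; constants and callbacks are invented for this sketch.
MAX_ITERATIONS = 10            # hard cap so a recursive trigger can't run away
MAX_EMAILS_PER_CUSTOMER = 1    # dedupe guard against re-pinging the same person
MIN_SECONDS_BETWEEN_CALLS = 2  # crude rate limit on billable API calls

def run_workflow(customers, send_email, call_ai):
    """`send_email(customer, body)` and `call_ai(customer)` stand in for platform actions."""
    sent = {}          # customer -> emails already sent this run
    last_call = 0.0
    for iteration, customer in enumerate(customers):
        if iteration >= MAX_ITERATIONS:
            break      # stop instead of looping indefinitely
        if sent.get(customer, 0) >= MAX_EMAILS_PER_CUSTOMER:
            continue   # skip duplicates instead of sending 50 emails
        wait = MIN_SECONDS_BETWEEN_CALLS - (time.monotonic() - last_call)
        if wait > 0:
            time.sleep(wait)
        body = call_ai(customer)       # the billable, non-deterministic step
        last_call = time.monotonic()
        send_email(customer, body)
        sent[customer] = sent.get(customer, 0) + 1
    return sent
```

Even this toy version shows the point: the caps live outside the AI step, so a bad prompt or accidental recursion hits a hard floor instead of a customer inbox or an API bill.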
 

[00:12:45] Sean Martin: I know, um, my own experience with, uh, with the Zapier tool in particular, um, I always get upset, but it has a very fail-hard, um, [00:13:00] model, I think, where if it hits something, it basically is done, which sucks if you actually wanted it to go through. Um, but I would imagine some teams find a way to work through that, um, to get that complete autonomy, but the whole idea is that it's set it and forget it. What are some things there? I know we talk about, in terms of granting access and granting rights and giving, um, applications, uh, control over data and the things you can do with it, that those things tend to creep, right? People get more access, systems have more power once they learn more, especially when we're talking about LLMs and AI models coming into play. 
 

So how do organizations kind of get a handle on, uh, that scenario? 
 

[00:13:51] Walter Haydock: I think data access is a perennial problem that all organizations need to deal with, and making sure that you have a [00:14:00] lightweight process for getting approvals for which data sources you're going to integrate into is important. I know specifically on Zapier, on the enterprise plan, you can allowlist or blocklist certain types of applications, which would give you some of that control there. Make sure that anything that's being integrated is meeting your business and security requirements. 
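
The allowlist/blocklist control he mentions amounts to something like this sketch (the app names and the review function are invented for illustration; enterprise platforms implement this check natively):

```python
# Hypothetical allow/block policy for apps a business user wants to wire
# into an automation. The sets below are examples, not real product names.
ALLOWED_APPS = {"crm", "calendar", "forms"}
BLOCKED_APPS = {"personal-email", "unvetted-llm"}

def review_workflow(connected_apps):
    """Return (approved, findings) for a proposed automation's app list."""
    findings = []
    for app in connected_apps:
        if app in BLOCKED_APPS:
            findings.append(f"blocked app in use: {app}")
        elif app not in ALLOWED_APPS:
            findings.append(f"app not yet risk-assessed: {app}")
    return (len(findings) == 0, findings)
```

The useful property is the middle case: anything not explicitly approved gets flagged for review rather than silently allowed, which is the lightweight approval process Haydock keeps coming back to.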
 

[00:14:26] Sean Martin: And the category you have this under is, uh, excessive autonomy. Um, we kind of moved over into confidentiality, perhaps, um, providing access. Um, you talk here about, uh, routing or misrouting, um, and I touched on it a little bit earlier, exposing sensitive records. I think the other thing that, that, uh, I recognize, maybe not everybody, uh, has a view into this world, but [00:15:00] it's not just a no-code environment that you're pumping data into, um, some AI models. It's a bunch of applications connected through API keys, and they're instructing each other to do different things, and that can basically connect things that shouldn't connect, perhaps. Um, 'cause you can nest functions, you can nest workflows. Somebody might build something over here that gets reused across the organization. It does something, they make a change there, you don't know that. Again, that scope creep of what's possible. Um, clearly guardrails need to be put in place in terms of: this is what the data is, here's what the access rules are, here's how the usage rules sit. A lot of that can be defined in LLM environments, but how do organizations do that when they start pumping data through no-code? [00:16:00] Are there other rules or instructions or something that they can set somewhere early on, so the whole process kind of checks itself? Or what does that look like? 
 

[00:16:15] Walter Haydock: With a lot of no-code tools, the enterprise security features that you might expect could be a little bit lacking in some cases. So I mentioned Zapier gives you the ability to allow or block certain types of applications, but that's kind of where it ends, and you're gonna be relying quite a bit on user discretion to make sure that data is correctly being transferred from one application to another, that you're not violating any residency requirements, any compliance requirements, that you're not sending it to sources where it's being trained on when you have agreed or decided that you don't want it to be trained on. So user education is gonna be really key at this stage, because the no-code platforms don't quite have the [00:17:00] full governance infrastructure in place to manage these types of security requirements and enforce them at a machine level. 
 

[00:17:10] Sean Martin: And how different is this particular environment from, I'll say, a traditional app development world, where, um, presumably you have some visibility and some control? Granted, there's still third-party services and API-driven stuff that takes place there, but at least you can pick and choose some of those things and, again, write code-based guardrails and wrappers and tokens and things like that. Where, I don't know, in the world of no code, the idea is to enable people who don't know how to code to do this stuff, so are they gonna know what tokens are, instead of passing straight data through? So how do we get around some of the world of opening up everybody [00:18:00] to improve their own business workflows and use AI to do that, but then also not put the organization at risk? 
 

[00:18:12] Walter Haydock: Business users are going to need to figure out how to implement secure no-code systems. And that's because the traditional distinction between a product manager, a developer, a QA tester, so on and so forth, is kind of disappearing in the age of AI. You're seeing that the person who's coming up with a requirement may be the person who's implementing it now, and no-code systems accelerate this trend. Now, if that's gonna happen in a productive and secure and compliant manner, then the person who is implementing the requirements, who probably understands them best, will also need to understand the basics of security architecture: things like least privilege, like limiting data access [00:19:00] to the absolute minimum, to requiring authentication where appropriate. So these basics are things that business users are going to need to understand. 
 

[00:19:11] Sean Martin: And in your experience thus far, who's driving this awareness and understanding, um, throughout the organization? Typically, security, sadly, has been driven by regulation. So a new law comes out, says you can't do this; we then put some security controls in place to make sure we don't do that. Is that the same with this, and do you see it being effective if it is? 
 

[00:19:42] Walter Haydock: I don't foresee no-code-specific regulation or legislation coming out, but what I will say is that organizations can always do a better job of tracking the assets that they are using, and how they define that is up to them. [00:20:00] But I would say, at a minimum, you need to understand: are business users going into freemium SaaS tools? Like, Zapier has a free plan that you can go into. Do you have a way to detect that if that's happening? Additionally, if you're using an enterprise product like Microsoft 365, understanding what capabilities something like Microsoft Copilot Studio brings with it is also important, because Microsoft's actually been making it quite difficult to disable, uh, Copilot Studio at an individual user level. I think they're pushing hard for adoption here, but at the same time, that has some security implications if you have a very innovative, but not necessarily educated, user whip up a bunch of copilots and start making them available for the organization. 
 

[00:20:46] Sean Martin: And on that note, I'm assuming that's their goal, to spin up a bunch of agents that do things for the org. Um, I mean, even in one of the tools I'm using, there's a big, uh, [00:21:00] create-the-code button, and then there's the publish, and it's very tempting to wanna hit the publish button, but that's not what I wanna do, because I'm not looking to make that thing public. I just want the code for my own use. Um, and you kind of mentioned it earlier, the freemium versions of these tools, um, it might not be easy to detect that they're being used. Um, they're not costing money, they're maybe not hitting the network and slowing it down. So how do organizations kind of do that? I presume culture is part of it, um, policies that employees adhere to, or say they'll adhere to, but then there's the reality of shadow everything, right? 
 

[00:21:50] Walter Haydock: I think it's incumbent on security teams to come up with lightweight approval processes for low risk use cases, especially involving no-code, because it's [00:22:00] gonna be such a backbone of business productivity. So at a minimum, having pre-approved tools that people can go into and start using right away for certain use cases is a big help. 
 

A more advanced level would be something like an automated system for evaluating the risk of a third party tool and comparing that to your organization's risk appetite. And something that we're working on right now is a database of these types of risk assessments that are off the shelf to allow much more rapid adoption of no-code tools, while at the same time managing the most critical risks. 
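
That "automated risk evaluation versus risk appetite" idea could be sketched like this (the factors, weights, and threshold are invented for illustration and are not StackAware's actual scoring model):

```python
from dataclasses import dataclass

# Hypothetical third-party tool assessment: a few yes/no risk factors,
# a toy weighting, and a comparison against an organization's appetite.
@dataclass
class ToolAssessment:
    name: str
    signs_baa: bool               # will the vendor sign a business associate agreement?
    trains_on_customer_data: bool # does the vendor train models on your data?
    data_residency_ok: bool       # does hosting meet your residency requirements?

def risk_score(t: ToolAssessment) -> int:
    """Higher = riskier. Weights are illustrative only."""
    score = 0
    if not t.signs_baa:
        score += 3   # rules out regulated-health use cases outright
    if t.trains_on_customer_data:
        score += 2
    if not t.data_residency_ok:
        score += 2
    return score

def within_appetite(t: ToolAssessment, appetite: int) -> bool:
    """Approve automatically only when the score fits the stated appetite."""
    return risk_score(t) <= appetite
```

An off-the-shelf database of assessments like these is what lets the approval step stay lightweight: most requests resolve against pre-scored tools instead of triggering a fresh review.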
 

[00:22:36] Sean Martin: So I'm going through, uh, the list here. You have a playbook that you mentioned, and I'll just go down the list: constrain, contract, decide, design, train, prove, test, and strip. So a lot of steps there. Um, what does that look like from a project [00:23:00] management perspective? Is that a linear, um, process? Is it a waterfall? Is it a loop? Are there a lot of dependencies in that cycle? Maybe kind of paint a picture for us of what that looks like. 
 

[00:23:15] Walter Haydock: So when you're building a no-code governance program, there are some upfront steps that you can take, for example, allowlisting certain applications, that you might only adjust periodically. Also, looking at the contractual obligations or requirements of the tool, that's something that might be a more periodic review. On a more granular level would be implementing those human approval gates or post hoc human analyses of the performance of the system; that's something that would be more regular. And then something that would be continuous would be having the ability to strip out certain types of data that may not be certified for use in the tool. [00:24:00] A combination of more periodic control enforcement and continuous implementation of your security requirements will be the most effective path. 
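
That continuous "strip out uncertified data" step could be sketched as a simple redaction pass (the patterns below are illustrative, not comprehensive; a production deployment would rely on a vetted DLP or redaction service):

```python
import re

# Redact obvious identifiers before a record leaves for a third-party
# service that isn't certified for them. Two toy patterns for the sketch.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def strip_uncertified_fields(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

The design point matches his periodic-versus-continuous split: the allowlist gets revisited on a schedule, but a filter like this sits inline and runs on every record.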
 

[00:24:12] Sean Martin: I wanna go to the human in the loop, 'cause it's been mentioned a few times and I'm all for it. Um, but it kind of goes back to the picture I was painting earlier, where there's a system I've set up. I want it to go from A to Z, um, unaltered, straight through; hopefully I've worked out all the errors so that it completes on its own, fully autonomous. Um, I get upset when it doesn't, if there's an error. Um, if the whole point of this is to get the human out of the loop, how do organizations convince themselves to put the human back in, and how does that look? Um, 'cause if the goal is to take some process from multiple days down to a couple hours, and you throw a human in there, and [00:25:00] they're not in the office at the moment, so it's gonna take until the next day, or they're on vacation, you can see I can paint a gazillion scenarios here, but humans can slow things back down. So how do orgs convince themselves that the human is appropriate to keep in the loop? 
 

[00:25:17] Walter Haydock: There are three main ways that I would recommend applying human-in-the-loop to artificial intelligence systems, and organizations should look at the risk posed by the use case in deciding which technique to use. So first of all, there would be a default-deny approach, where a human needs to affirmatively approve something every time it happens. You should reserve this for the highest-risk use cases, anything related to healthcare, financial data, things like that. Next, you could do human approval within a certain window, but then the system automatically proceeds if there's no human input. So [00:26:00] this would be kind of a medium-risk use case, where, you know, maybe you're sending marketing emails or things like that, or moving data in a database. And then the use case where you have the lowest risk would be appropriate for kind of a post hoc review, where you look at the performance of the AI system after the fact and confirm or deny whether it meets your business and security requirements. 
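
Those three modes, default deny, approve-within-a-window, and post hoc review, can be sketched as a simple dispatch on risk tier (the tier names and the action callback are hypothetical stand-ins):

```python
from enum import Enum
from typing import Callable, Optional

class Mode(Enum):
    DEFAULT_DENY = "affirmative approval required every time"
    APPROVE_WINDOW = "proceeds automatically unless rejected within a window"
    POST_HOC = "runs now, reviewed after the fact"

# Hypothetical mapping of use-case risk tier to oversight mode.
TIER_TO_MODE = {
    "high": Mode.DEFAULT_DENY,      # e.g. healthcare or financial data
    "medium": Mode.APPROVE_WINDOW,  # e.g. marketing emails, database moves
    "low": Mode.POST_HOC,           # e.g. low-stakes internal automation
}

def execute(action: Callable[[], None], tier: str,
            approved: Optional[bool] = None) -> bool:
    """Run `action` under the oversight mode for `tier`; return whether it ran.

    `approved` is the human's decision; None means no input arrived."""
    mode = TIER_TO_MODE[tier]
    if mode is Mode.DEFAULT_DENY and approved is not True:
        return False  # highest risk: nothing happens without explicit approval
    if mode is Mode.APPROVE_WINDOW and approved is False:
        return False  # medium risk: a rejection within the window blocks it
    action()          # otherwise run; POST_HOC relies on after-the-fact review
    return True
```

Note the asymmetry that encodes the risk tiers: for high risk, silence blocks the action; for medium risk, silence lets it through, which is exactly why the tiering decision matters.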
 

[00:26:30] Sean Martin: I am curious, are there scenarios where teams feel this is a benign process? The data is not very critical, there's not a lot of IP, the privacy is not really an issue. Um, but then you come in and you say, but have you thought about X, Y, and Z? And then the opposite is true: the risk is higher, beyond their appetite. Are there scenarios that you've seen, [00:27:00] I don't know, some that stand out or some that are common across different, uh, sectors? 
 

[00:27:08] Walter Haydock: I'd say that unknown unknowns, which is what you're alluding to, are one of the most difficult challenges in cybersecurity, privacy, and compliance, because if you have done an analysis and you think you've covered everything, but you haven't, then that's where the most dangerous risks come from. So I would say getting a third-party opinion on your architecture, on your obligations, on how you've deployed your no-code and AI systems, is probably the most effective way to help spot those types of issues before they materialize. 
 

[00:27:47] Sean Martin: And I'm looking at, um, I think it's your second-to-last point here. Um, yeah: treat no-code automation as a production system, with audits, approvals, and [00:28:00] contracts. And it just, uh, sparks this note in my mind that, we kind of go back to the business associate agreements, you might not even be allowed to do some of this stuff. So doing it and then trying to figure out if it's secure enough, uh, you may not have that ability. Are there laws and regulations that just say you can't, and that should really be the, uh, initial line in the sand for some business processes? Or are there ways around it, where you can do it, but only if, I don't know, you bring the stuff in on site, on premises, or use a certain model with certain controls in place? Are there hard lines or soft lines that can be maneuvered? 
 

[00:28:50] Walter Haydock: There are both hard lines and soft lines when it comes to no code and compliance. So I'll give an example. The tool [00:29:00] n8n, which is a pretty popular automation system, they will not enter into a business associate agreement with customers. They just don't do that, so you can't process PHI on their cloud-based platform. Now, they do offer a customer-managed version that you can download and deploy into your own environment, and it's conceivable that you could operate it in a HIPAA-compliant manner, if you're applying the appropriate controls and you have the right access, um, controls on the environment. So there are some hard lines, for example, don't give PHI to an organization that won't enter into a BAA, but there are some softer lines where you could potentially do a customer-managed version or deployment method for a tool and still use it in a HIPAA-compliant way. 
 

[00:29:54] Sean Martin: That's great, Walter. I appreciate all that. Uh, a lot of good insights throughout. I want, [00:30:00] in the last few minutes here, to maybe speak directly to, uh, the executives that, uh, listen to the show and watch the YouTube videos, specifically CISOs, CIOs, Chief Risk Officers. Um, if there's a bit of advice, uh, a lesson learned, um, working with that group yourself, um, anything you wanna share with them, to say, you think you might, but you might wanna look at this, or best practices, always ask this question and you'll do very well down the line, having that initial peace of mind. I'm just making stuff up here. Um, anything you wanna highlight for that group, to say, here's how you can do better with this particular program? 
 

[00:30:48] Walter Haydock: I would advise security leaders to not attempt to ban no-code implementations, because that is going to both slow down the [00:31:00] business and also create the likelihood of greater shadow IT and shadow AI in your environment. The key is having a risk-based approach to these types of systems, to make sure that you are managing the biggest risks while at the same time not being a blocker for development and operations and the business generating value. Because at the end of the day, that's why you exist in your role: to help preserve that value. So facilitate it in a secure and compliant way, but don't get in the way. 
 

[00:31:35] Sean Martin: And then let's speak a little bit to the business leaders, the executive leadership team. I'm going to go out on a limb and suggest that when an organization is enabling their employees to do things... I was at a legal conference where there were firms saying, we're opening up this world so our lawyers [00:32:00] and attorneys can go in and vibe code and pull in automation and orchestration and all this stuff. 
 

Go for it, have at it. Sounds amazing, a little scary, but in that environment you're opening it up for that. Have you seen organizations do that well, and if so, maybe some advice for business leaders? To say: this is the culture we want, we want to empower you, this is what it looks like. We're going to partner with our risk and security teams to ensure that we're safeguarding you, but we want to hear from you. 
 

What are you trying to accomplish? What are your initial ideas? Almost like a think tank or a suggestion box, or, I don't know, skunkworks projects, right? Though skunkworks is known, whereas shadow IT is not known. So if you've come across any organizations that do that well, that create a culture with the guardrails, any advice or thoughts on that, speaking to that audience? [00:33:00] 
 

[00:33:01] Walter Haydock: My biggest recommendation for business leaders when it comes to no-code and AI deployments would be to engage with the security, privacy, and compliance stakeholders in your company early. Make clear that your goal is to deliver business value, but that you are willing to take heed of their recommendations and their concerns, and that, all things being equal, you'd rather do it in the most secure way possible, while at the same time making clear that you have requirements you need to deliver on, and that really only ethical or legal lines will be the things that stop you. So I would say bringing in the appropriate risk advisors early on is going to be key to a successful rollout of a no-code and AI program. 
 

[00:33:52] Sean Martin: And final thought here: is there anything that either of those groups should pass down as [00:34:00] a... if there's one thing to consider as you're building your no-code thing, ask this one question. What would that question be? 
 

[00:34:12] Walter Haydock: Why am I doing this? That would be an important question to ask first, because throughout my entire career I've seen lots of organizations embark on these big projects where it isn't entirely clear what the final business objective is for the technical implementation the organization is pursuing. And at the end of the day, by far the cheapest risk management technique is avoidance. So if there's no business justification for doing something, for messing with a new tool, then don't do it. 
 

[00:34:50] Sean Martin: Yeah, I love that advice. And Marco, my co-founder, wishes I'd follow that advice myself. It's fun to build things. I get [00:35:00] creative, I build some things, and Marco goes, why'd you do that? It doesn't fit in with what we're doing. I'm like, yeah, but it was cool, it was fun. So those projects get abandoned, but some of them do stick. 
 

And one of them we tried a while back; we haven't revisited it since. We have almost 3,000 episodes of podcasts that we've produced, and I'd love an agent that analyzes all of that and feeds up a Q&A session with somebody that says: have you talked about this topic in relation to this regulation, in this industry, in this part of the world? 
 

And what were some of the highlights of that conversation? I'd love to do that. We tried, and two things scared me. First was the cost to make that possible, just from the tokens and everything you needed to pay for. And the other was, what if it pumped out something wrong? It was saying the guests were the hosts, and that they [00:36:00] worked for companies they didn't work for, so there was a lot of hallucination. 
 

I'm just like, I can't risk the credibility of the responses to the prompts that are given. So I shut that project down, but the why was there. There was a lot more to it than just building something, letting it fly, and hoping it works. And that's before even getting into, beyond the cost and the integrity of the response to the prompt, whether there is information in there that could be misused in some way or provides access to something that shouldn't be given access to. So anyway, that's my own personal story. Any final thoughts, Walter, after I told that fun one? 
 

[00:36:48] Walter Haydock: No, I appreciate the time and thank you for having me on the show, Sean. 
 

[00:36:53] Sean Martin: It's my pleasure. Appreciate you taking the time. You're very prolific on LinkedIn; I haven't [00:37:00] checked the other platforms, but I encourage everybody to follow you on LinkedIn for sure. You put a lot of good information out there, and I will include the link to this post in the show notes so people can check that one, find you, connect with you, and stay in touch. 
 

So thank you again, and everybody listening and watching, thanks for joining me here on Redefining Cybersecurity. Hopefully you learned a few nuggets and can apply them to your own programs and to your own culture. As Walter pointed out, let's not get in the way; let's help the organization grow in a way that's safe and secure. 
 

That's the whole objective here. So thanks, everybody. Stay tuned for more. You're on Redefining Cybersecurity, and we'll catch you on the next one. 
 

[00:38:00]