Q&A: What Does Responsible AI Look Like for Infrastructure Security?

Originally shared on January 15, 2026, by The CyberVault, and edited for length and clarity.
In this episode of The CyberVault, Irfahn Khimji, Field CTO at BackBox, joins cybersecurity podcast host Katie Soper to discuss how we bring safety, trust, and responsibility to AI-driven environments and what resilience actually looks like as the attack surface continues to shift.
Q: How has AI changed the way you think about infrastructure resilience and reliability today?
Irfahn: People are using AI everywhere, from their phones to their laptops to their TVs. As I get more into the infrastructure resilience and reliability space, the key thing is that AI is there to make things easier, update things faster, and do things that previously would take weeks or months to do, in minutes. It’s now a matter of how to govern that and make sure we’re working together.
Katie: For sure, and I think it can sometimes look like a balance where people seem to be either really pro-AI or really against it. As you mentioned there, I think it’s more about adopting it the right way. And obviously, rolling out AI sounds very exciting to a lot of people, but, as you just mentioned, doing it securely and responsibly is a whole different story. It’s easy to get wrapped up in the idea that we’ve got this new product, we’ve got this new technology, it’s going to do all these amazing things for us, and to underestimate the challenges, gaps, and risks that come with jumping into that new initiative.
Q: What are some of the main things you think people really underestimate when they are jumping into these new AI initiatives?
Irfahn: On the surface, the output you get from AI looks fantastic, and it’s improving every day. Now, the challenge is building trust in the actual data that’s coming in. What is it presenting to you? Is that legitimate data, right?
Much of today’s AI is based on large language models, which need a lot of language data behind them, and much of that data lives in the cloud. When you start bringing AI into your infrastructure, you need to connect it to a large data lake. If you’re a large enough organization, you may have enough data of your own, but chances are you need to connect to the cloud and to other people’s data as well.
If you were to bring that on-site and cut that connection off, you suddenly don’t have much data to go through, and the AI model must make more guesses, which leads to more hallucinations. And so, I don’t think everyone necessarily understands that you need a lot of accurate data to pull from. That way, when it gives you something, you can verify that it’s legit and true. You can ask these models where they got the information, show you their sources, and show you that they’re actually getting the correct data, much like you would an intern, right?
If you were to hire someone and bring them in as an intern or co-op student, you would have them do work. Students these days are very good at what they do. They’re good at presenting information, but you don’t necessarily trust them to make decisions for your organization, even if they’ve been there for a long time. You would verify by asking them, “Hey, how did you get to this result? How did you get to this output?”
Similarly, you can treat AI like an intern rather than a full-time employee or an expert. You can then take what the AI model has built in minutes and spend a little extra time verifying it, so that you can start trusting whatever it gives you.
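To make that “ask it for its sources” step concrete, here is a minimal sketch in Python. The call_llm helper is hypothetical, standing in for whichever model API you actually use; the point is simply that the model is asked to return its sources in a structure a person can spot-check before trusting the answer.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to whichever LLM API you use
    and return the raw text response. Wire this to your own client."""
    raise NotImplementedError("swap in your model provider here")

def ask_with_sources(question: str) -> dict:
    # Ask for the answer AND where it came from, in a structure a human
    # reviewer can check, the same way you would ask an intern.
    prompt = (
        "Answer the question below. Respond in JSON with two keys: "
        "'answer' (your answer) and 'sources' (a list of URLs or document "
        "names you based it on).\n\n"
        f"Question: {question}"
    )
    raw = call_llm(prompt)
    result = json.loads(raw)  # may fail if the model ignores the format
    # The human step: verify every source before trusting the answer.
    for src in result.get("sources", []):
        print(f"verify before trusting: {src}")
    return result
```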
Katie: I think you mentioned something critical there, because it’s easy to forget, especially if you’re not necessarily in security, that AI is an evolving tool. Just because a product has come to market doesn’t mean there won’t always be improvements. It’s the same way our laptops now need continuous updates because the landscape is continuously evolving; this new technology also has to keep up with that.
I love the analogy you’re making, though, because I think it’s easy to forget that you do need to check in and make sure where it’s getting its information from and where it’s learning from, because it must learn from something. And I think there’s a real tension between staying modern and staying secure, because everybody wants the benefits of AI, but there’s hesitation to trust it too early.
Q: 2025 has been the year when we’ve really seen people become a bit more comfortable adopting this technology. How can you navigate that balance to make sure we bring this new tool in without being behind the curve or too far ahead of it?
Irfahn: It comes back to verifying the data, right? At the end of the day, you’re an expert, but if you’re relying on AI to be the expert, you’re going to be in trouble. You won’t know what’s happening.
But if you look at the AI as an intern and verify what it’s doing, you catch things. There was a recent example someone posted on LinkedIn: they had a bunch of AI agents they had built running in the background, and they did a verification. The AI model’s output was correct, but how it got there was interesting.
It said, “Hey, this job is complete,” when, in reality, it wasn’t. It had assigned a secondary agent to finish it and said, “Yeah, done.” And many times, you know, if I say, hey, Katie, can you go do something for me, and you say, “Done!” Have you done it yet? No, but it’s a figure of speech.
Now, because he verified it, he caught an interesting case and said, hey, wait a second, I need to tweak this to say, don’t tell me it’s done until it actually is done, right?
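A rough sketch of that “don’t tell me it’s done until it actually is done” tweak, assuming hypothetical run_agent and job_really_finished helpers: the agent’s own completion claim is never accepted on its own; an independent check of the actual system state decides.

```python
def run_agent(task: str) -> str:
    """Hypothetical agent call: kicks off the work and returns the
    agent's own status report, e.g. 'done'."""
    raise NotImplementedError

def job_really_finished(task: str) -> bool:
    """Hypothetical independent check: inspect the actual system state
    (ticket closed, config applied, file produced) instead of taking
    the agent's word for it."""
    raise NotImplementedError

def run_and_verify(task: str) -> bool:
    claim = run_agent(task)
    finished = job_really_finished(task)
    if claim.strip().lower() == "done" and not finished:
        # The agent said "done" as a figure of speech; treat the task as open.
        print(f"Agent claimed '{claim}' but verification failed for: {task}")
    return finished
```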
One of the ways we’re using AI internally at BackBox is for collecting data. You limit the scope of what it can do; you don’t just give it everything. That’s a way to tweak it so you can have that balance.
For example, we’ve got our AI models going out to find vulnerability data, because that data is dispersed across multiple sources and takes a lot of time to research. The AI goes to all these different sources, pulls all that data, which comes in different formats, normalizes it, and gives me a standard output I can provide to my customers: here’s what the vulnerability is, here’s how to fix it, here’s what the risk is, and if you can’t remediate it, here are the mitigating factors. It presents all of that simply.
That used to take network administrators a long time to do, and I can just kind of go, bang, here’s everything you need to do, all streamlined. And that’s an example of narrowing the scope, so you can trust the output.
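This isn’t BackBox’s actual implementation, but a minimal sketch of the normalization idea: two hypothetical feeds describe the same vulnerability facts under different field names, and a small mapping step turns them into one standard shape an administrator can act on.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    severity: str
    description: str
    remediation: str

def normalize(record: dict, source: str) -> Vulnerability:
    # Each hypothetical feed reports the same facts under different names.
    if source == "feed_a":
        return Vulnerability(record["cve"], record["sev"],
                             record["summary"], record["fix"])
    if source == "feed_b":
        return Vulnerability(record["id"], record["impact"],
                             record["details"], record.get("mitigation", "n/a"))
    raise ValueError(f"unknown source: {source}")

# Placeholder records, just to show the two formats converging.
records = [
    ({"cve": "CVE-0000-0001", "sev": "high", "summary": "example issue",
      "fix": "upgrade"}, "feed_a"),
    ({"id": "CVE-0000-0002", "impact": "medium", "details": "example issue",
      "mitigation": "restrict access"}, "feed_b"),
]

for raw, src in records:
    v = normalize(raw, src)
    print(f"{v.cve_id}: {v.severity}, remediation: {v.remediation}")
```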
Katie: For sure, and I think you touched on something there, because a huge part of it is the data security side: what data it’s getting access to, how it’s getting that access, and also what decisions it can make.
Q: Security teams are among the earliest adopters of AI, so they’re often the guinea pigs for figuring this technology out. As a security team, you don’t want other teams adopting a new technology you don’t understand, because then there’s probably a whole host of risks you’re not prepared for. When security is always ahead of the curve, does it sometimes pay the price?
Irfahn: One of the challenges is that you want technology to make your life easier, right? Often, there are long terms of service that not everyone reads to see what’s in them. If you’re a user of the tech, you’re like, hey, this makes my life easier, how do I use it? And then security needs to get a handle on it, understand the actual service, and figure out what data is leaving the enterprise and what is coming in.
All that stuff must be controlled, and one example I’ll give is the cloud. A few years ago, everyone wanted data on-premises, and suddenly, this whole concept of cloud infrastructure came along, like infrastructure as a service, platform as a service, and this is going to make your life easier. You don’t have to maintain infrastructure. There are all these cool benefits.
You know, cloud storage lets you share data, right? Back in the day, if you wanted to copy a file, you had to use a USB drive on a computer that was on-premises. But hey, I can just upload this to cloud storage, and suddenly I can share this stuff.
Organizations and security teams must get ahead of that curve and really understand the impacts on their organization. I take that model and, you know, history repeats itself: AI is very similar. You have this cool thing that’s supposed to make your life easier, but what are the risks involved, right? What data is leaving my organization? What are its hallucinations? What are its biases?
It’s up to the security team and CISOs to present these risks to the board and say, hey, this is what the risk is, these are some mitigating factors we can put in place, this is how we can provide governance around it. It really goes back to how security has to partner with the business to enable them to be successful. If security is a blocker, always saying no, you’ve got a problem in the organization. At the same time, you can’t have the business running wild. There needs to be governance and structure in place for how this technology comes in.
Katie: You mentioned something there that I think is critical, and it’s often the case that security is perceived as a blocker to wider operations because it creates additional hurdles, whether in technology or in processes that people want to adopt. As you mentioned, I suppose it’s about creating that synergy between the two. Another thing to consider here is that within organizations, not everyone sees AI risk the same way, and I think that’s potentially another reason why security is sometimes treated as a blocker rather than an enabler.
Q: When you look at AI adoption through the lens of a CISO, then through the lens of a vendor, and then through the lens of someone else within the business adopting that technology, whether it’s HR or the data team or whatever it may be, what do you think shifts when we’re looking through these different lenses?
Irfahn: I think the good thing is, the industry over the last decade or so has gotten better. Security and non-security people alike are becoming more aware of the risks involved in being online, sharing information, sharing data, and connecting to things you’re not aware of. Years ago, you could just plug in a random API connection or enter your credentials somewhere and get phished. Nowadays, people are better at spotting that. Everyone’s security awareness has grown.
That’s a real positive. But again, I think it goes back to really understanding what data will be leaving my organization. What data is this technology going to be accessing? And then, on the vendor side, it’s up to us to be very clear about what that is.
I had a customer about a month ago who said, “Hey, tell me about your roadmap.” I walked them through it. Here’s what we’re doing: here’s what we’re doing with AI. I thought everyone wanted to hear about AI, so I told this guy what we’re working on. I was so excited. And you know, he politely nodded on camera, said thank you, and then said, “Don’t give me any AI. I don’t want any AI in my environment.”
He went on to say that if we’re going to have any AI touching any of their on-premises systems (and this is an organization with lots of PII and access to customer health records), it has to go through their very strict governance process. He said, if there’s a valid business justification for using AI, we don’t want to stop you. We want to make sure we’re asking the right questions to understand whether that use case is beneficial, whether you can do it with something we already have, and what the cost is. If the cost is, hey, we’re going to send all our customer or patient records to the cloud, then no, that’s not acceptable for us as a business.
Vendor transparency is key, along with users understanding what that transparency means, and then the business being able to justify the use case against its own risk tolerance principles and adjust accordingly.
Katie: I think you mentioned something else that’s important there, because when we talk about the cost of AI, people automatically assume the financial cost, the time and usage, but, as you mentioned, what are we having to give? What does it need to learn from? What information from our business do we have to give it?
I think we can talk a lot about the upsides of AI, but we don’t always discuss the negative repercussions. We acknowledge that, if you’re looking at the threat landscape, there are vulnerabilities, and we’re obviously aware of that, but I think there are some things we’re not always honest about. In the industry, especially in security, vendors are trying to bring these amazing products to market that will help CISOs sleep at night.
Q: If we’re talking about being totally honest about both the good and the bad, specifically when it comes to AI, infrastructure, and operations, what do you think some of the key things are that people may be burying their heads in the sand about, or pushing away from the limelight?
Irfahn: We talked about some good, right? Being able to automate things and make things easier, right? Like the example I gave earlier about pulling vulnerabilities from multiple sources in disparate formats. That makes your life a lot easier. It saves hours.
When I first started as an intern in vulnerability management, almost 20 years ago, it was mostly manual work. We would literally spend time building the report, formatting it, and getting it out a certain way, just so our users would have very clean, structured data. Come to think of it, you probably wouldn’t need my job from when I started as an intern back then. But that’s all the good. Now, the bad is, where is that data coming from? What’s the source, right? You know the old saying, garbage in, garbage out? Well, what is the model learning from, right?
Let me give you another example. One of the things we want to do at BackBox is automate. We build automations for our customers to execute tasks and other operations on their network devices. Today, we’re building all of those by hand. If you have AI build some of those automations, it makes things much easier. But if you’re thinking of network infrastructure, that’s the backbone an entire organization runs on; you don’t want to just be making random changes. You want to make sure you thoroughly check what’s happening there.
Firstly, you need good source data. From our perspective, you need to source data from the different vendors. Fortinet has a specific command line, Cisco has a specific command line, Juniper has a specific command line; everyone has different ways of doing the same thing. If I want to make a configuration change, like turning SSH or telnet off or on, I have to connect to the device, run the command, and verify the output. Conceptually it’s the same everywhere, but the commands are different.
To address this challenge, you need the AI model to understand that these are the commands for this type of device. This is what those commands look like. Once you’ve built that structure, you can then give it the leeway to say, “Hey, I’m seeing this type of change needs to be made. I already know what the skeleton looks like, the shell for what the actual automation will be. I just need to choose the variables.”
Once you’ve built a shell, the AI model can just change specific variables; you’re really limiting its capability, like that intern example, right? Hey, intern, you can change variable ABC without touching the structure, and then you’re in a good spot. I think the challenge for a lot of people today is assuming, oh, yeah, it can just go figure it out.
Again, it’s artificial intelligence, not the expert. At the end of the day, the person working there is the expert. You’ve got to tell it what to do and structure it correctly.
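To illustrate the “skeleton with variables” approach, here is a rough sketch, not BackBox’s automation engine, and the CLI strings are illustrative placeholders rather than exact vendor syntax: the per-vendor templates are fixed and human-reviewed, and the model, like the intern, may only fill in whitelisted variable values.

```python
# Per-vendor skeletons, written and reviewed by humans up front.
# The command strings are illustrative placeholders, not guaranteed to
# match each vendor's exact CLI syntax or OS version.
TEMPLATES = {
    "cisco":    "configure terminal\nline vty 0 4\ntransport input {protocol}\nend",
    "juniper":  "set system services {protocol}",
    "fortinet": "config system interface\nedit {interface}\nset allowaccess {protocol}\nend",
}

# The only values the model is ever allowed to choose.
ALLOWED = {"protocol": {"ssh", "telnet"}, "interface": {"port1", "port2"}}

def build_change(vendor: str, **variables: str) -> str:
    """The AI (the intern) proposes only the variable values; the
    structure of the change itself stays fixed."""
    for name, value in variables.items():
        if value not in ALLOWED.get(name, set()):
            raise ValueError(f"{name}={value!r} is outside the allowed set")
    return TEMPLATES[vendor].format(**variables)

# Example: the model decided SSH should be enabled on a Juniper device.
print(build_change("juniper", protocol="ssh"))
```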
Katie: I think it’s an important thing to look at, especially when we go back to that analogy of the intern. So many people, when they look to implement AI, aren’t doing it for the intern-level tasks; like you say, sometimes they expect it to bring that expert knowledge.
You mentioned you wouldn’t necessarily give it access to all this information and data to make decisions, because there’s privileged access you need to think about. What should it know? It doesn’t need to know the CEO’s salary, the C-suite execs’ salaries, or whatever.
I think that’s another thing people look at when it comes to AI, because they don’t want AI adoption to slow down business in any way. But also, there’s that element of control. How much do you control, and how much do you intentionally not control?
Q: If we can’t give them access to all that information, they need to make decisions without it. But if you do give them access to that information, there’s a fine balance between how it’s used and who it’s shared with, without that oversight. And I think that’s a difficult balance. How do you identify that? How do you say, okay, this is how much you should control this?
Irfahn: Those are great points, and I think it goes back to history repeating itself. If I go back to earlier in my career, we used to be in an office. There was wired internet; we had laptops, but we had to be at our desks, connected to the hardwired network to run them. When I first started, I was doing vulnerability scanning, and we found this random device. We’re like, hey, what’s this device? It’s plugged into this area of the network. Let’s go check it out.
Somebody had a Wi-Fi access point, just an old Linksys router, plugged in under their desk. And instead of connecting their internet to their laptop directly, they set up their own Wi-Fi network.
The security team said, “That’s a big no-no,” because suddenly everyone was connected to the corporate network via a random, unsecured Wi-Fi connection. The business had to quickly understand the issue and find a way to provide secure Wi-Fi so people could move from building to building or office to office in a secure manner.
Another example: USB keys were blocked, so you couldn’t insert one. I used to work in finance, and we had financial advisors who needed to provide files to their customers. Well, they couldn’t plug in their USB keys, and they couldn’t just send the files from their corporate email either; that was blocked too. So they found a way around it with Gmail. Now they’re sending sensitive files via Gmail.
Wait a second, that’s not okay. Clearly, users are finding ways to get around the security controls in place. That goes back to the example you mentioned earlier, where security is a blocker.
The security team needs to understand the AI use cases here. First, awareness and education are important considerations. These risks affect the business, and the entire organization needs to be aware of them.
Secondly, what’s that use case? Why do you feel you need to do that? In a very non-judgmental way: are you writing content? Are you editing it? Are you creating nice, cool fireplaces using AI? What’s the need for this? Ultimately, we want to help you solve the challenge you’re currently solving with an unauthorized use of AI, in a manner that makes your life more efficient. Let’s just do it correctly and safely.
Katie: It’s always kind of a difficult balance, in the sense that there are only so many factors you can control, because remote work is such a big thing now. Everyone pretty much does it. Whether you’re hybrid or remote, it’s very rare that someone has to be in the office five days a week, and even when that does happen, you can take that work home and access it remotely in the evenings or on weekends.
It’s such a big and relevant topic; there’s a reason everybody’s talking about it. One thing we’ve seen is that AI can start to feel normal, but trust isn’t static. What works today doesn’t necessarily work tomorrow, because things are changing: there’s more data the tool might be able to access, and more vulnerabilities are emerging in the landscape.
Q: When we look at responsible AI in practice, do you have any key advice for anyone who’s tuned in? How can we ensure that the systems we trust today remain trustworthy tomorrow?
Irfahn: This is one of the most challenging questions. You have AI everywhere now. You can disable it or enable it, but it goes back to the theme we’ve had throughout this whole session: history repeats itself.
Let’s take a step back and look at digital advertising. You know, I’m older than Gmail. I was one of the first people who had to get an invite for Gmail, and we were like, this is so cool, I got this invite-only email that works fast and everything. But then you noticed digital advertising was a big piece of it. Whatever free email you were using, they were scanning your emails and posting advertisements in a bar at the top, based on what email was coming in.
If you look at today, even if you turn off settings on your phone, if you’ve ever tried just speaking about something or thinking about something near your phone, suddenly you get ads on social media and whatever it is for that product.
That same principle applies today: if you look at countries putting legislative practices and guardrails in place, some say, hey, if you’re under 16, you’re not even allowed on social media. You’re not allowed this stuff. Every time you visit a website, you can now accept or reject cookies. Privacy policies are becoming a little easier to read.
The biggest thing is going to be the ability to turn AI capabilities on or off. For many organizations, when you want to use their app, you have no choice but to accept their policies. You either don’t use it, or you think, well, is that worth the efficiencies, or whatever fun this app is going to bring me? Or is it up to the vendors to legitimately give you the option to turn it on or off?
For example, the customer feedback we’ve received is that AI is great. I want it in my environment, so give me the capability to turn it on and off. But you can have these two extremes: I want you to give me everything AI, and I don’t want it; keep it away from me.
As vendors, developers, or whoever is building a product that has AI: if AI is the core function of your business, great. Make it clear what’s happening, where you’re using that data, and how you’re using it, so users have that choice. If you’re building a product where the core capability and function are not AI, but you’re using AI in it, make that clear and transparent, and give the customer the capability to turn it on or off.
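One way a vendor can honor that on/off ask is an explicit, default-off switch per AI-backed capability. The sketch below is only an illustration, with made-up feature names, of keeping every AI feature disabled until the customer opts in.

```python
from dataclasses import dataclass

@dataclass
class AIFeatureFlags:
    """Hypothetical per-customer settings: every AI-backed capability is
    off unless the customer explicitly enables it."""
    vulnerability_summaries: bool = False   # made-up feature name
    automation_suggestions: bool = False    # made-up feature name

    def enabled(self):
        return [name for name, on in vars(self).items() if on]

# A customer who wants AI help with vulnerability research and nothing else:
flags = AIFeatureFlags(vulnerability_summaries=True)
print(flags.enabled())   # ['vulnerability_summaries']
```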
The market will dictate where it goes. If, suddenly, we have a broad spectrum of AI companies that people aren’t buying, or AI-enabled products where the AI is turned off everywhere, that will tell us something. Ultimately, consumers decide where things go and where development goes. Or, if everyone has been given the choice, understands the risks and benefits, and decides to go ahead and use it, then maybe AI is the way to go. We’ve decided in an informed manner as opposed to being forced to adopt it regardless.
Katie: Great advice! I think this entire conversation shows that it’s still evolving. We’re working with really exciting technology that has the potential to keep evolving day in and day out, and I think that’s what’s exciting, but as I mentioned, it’s also daunting; you’ve got to stay alert. There’s so much to look at when it comes to adoption, and I suppose it’s also about being less intimidated, because, as you mentioned, there are people afraid to use it, and I think that’s an education piece as well. As we see wider adoption, we’ll also see a refinement of where we can really leverage this technology, and maybe where it’s not quite ready for us to bring into practice just yet.
Listen to the full podcast. For more details on using AI for network automation, visit our BackBox Platform page. Ready to get started? Request a demo to see our solution in action.


