7 Key Questions About AI for Network Automation Vendors and 3 Red Flags

When exploring network automation, it’s crucial to partner with the right vendor to ensure the technology is secure, reliable, and delivers value. This also applies to the vendor’s use of AI.
According to Gartner, by 2027, more than 40% of the tasks performed in infrastructure and operations (I&O) will be augmented by GenAI, up from less than 5% in 2024. It’s important to choose a network automation partner that responsibly integrates AI, enabling you to transition to agentic NetOps when the time is right for your organization.
To help you navigate the due diligence process, here are the key questions to ask and the red flags to watch for when assessing vendors’ AI capabilities and offerings, so you can make an informed decision and ensure a seamless, efficient network infrastructure.
Data Privacy, Ownership, and Security
1. Will our data be used to train your models?
Make sure your confidential data isn’t leaking into the vendor or public training sphere. Policies regarding the acceptable use of data for training models should be clear and include an opt-out option.
BackBox approach: BackBox doesn’t use your data to train our models. We utilize AI to provide contextualized information on CVE severity, remediation, and workarounds through a data feed that consolidates data from CIS, NVD, NIST, and vendor websites. Our focus is on the network and security devices our customers use. Because BackBox is vendor-agnostic, that means monitoring vendor advisories from more than 180 vendors.
We also continuously monitor updates from these sources and integrate them into the BackBox platform. Instead of visiting multiple sites to understand the latest vulnerabilities and mitigation strategies, you have a single, normalized source of truth for vulnerabilities affecting devices in your network infrastructure.
2. Where is our data processed and stored?
Sending data off-site for processing, analysis, or storage can expose it to risk. Make sure you know where the data is handled so you can ensure it aligns with your company’s data residency policies. Inquire about data access, retention, and deletion policies to validate that these processes meet your requirements for privacy and security.
BackBox approach: With BackBox, you can be confident that data about your devices remains within your BackBox instance, where it is updated, managed, and maintained. The platform maps the latest vulnerability information to your real-time asset inventory. Your network engineers can access the AI-aggregated and AI-contextualized data to quickly prioritize their remediation efforts. Role-based access controls add an extra layer of data privacy and security.
AI Model Performance, Transparency, and Reliability
3. What specific AI model(s) power this feature?
Determine whether it is proprietary or built on a public LLM. Unmanaged interactions with LLMs can expose data to third-party risks. Ensure there are measures to regularly test for vulnerabilities and protect the AI-driven parts of the solution from threat actors.
BackBox approach: BackBox isolates the AI components of the solution from the rest of the platform, intentionally separating AI processing from the core automation platform. We utilize a robust proprietary AI model and provide relevant data via a secure data feed into the BackBox platform.
4. Is the AI decision-making process explainable?
It’s important to understand whether the platform offers a traceable audit trail showing how it arrived at a specific output. A lack of transparency creates uncertainty for NetOps teams concerned about potential disruptions to business continuity and compliance issues.
BackBox approach: BackBox provides transparency into how AI generates responses and recommendations. Source data is clear and accessible to authorized stakeholders. This allows your team to verify consistent, accurate results and, if needed, adjust the data or processes using their knowledge and expertise to enhance the AI’s performance and the reliability of its outputs.
Compliance & Ethics
5. Does the AI adhere to relevant regulations and standards?
Evaluate the vendor’s policies to ensure they align with your organization’s governance, risk, and compliance (GRC) program.
BackBox approach: Because the AI-enabled data feed is integrated into the customer’s BackBox platform, you control which systems and teams can view and access data, as well as how it is handled. Customers can also ensure alignment with their GRC programs.
6. What human-in-the-loop oversight is built into the process?
For high-stakes decisions, it’s important to ensure a human can review, override, or intervene if needed. AI should serve as a tool to assist skilled engineers, freeing them from time-consuming tasks, while ensuring checks and balances so they can focus on strategic decisions and new business initiatives.
BackBox approach: BackBox promotes a human-centered control approach, treating AI as a trusted advisor rather than the sole decision-maker. As a best practice, BackBox customers can easily set up guardrails with a few clicks, ensuring that engineers are involved at the right time to review the process, validate the output, prioritize their mitigation strategies, and decide whether to take the next steps themselves or approve automated actions.
Vendor Stability & Risk
7. Is AI built in-house as part of the platform, or is it a third-party API wrapper?
AI approaches that rely solely on a third-party solution behind an API wrapper require extra due diligence in your vendor evaluation. You need confidence in the vendor’s approach and viability, as well as in the third-party AI provider. This means understanding their track record with customers, their product roadmaps, and the role of AI in their plans, so you can be sure of their long-term commitment to AI and to the partnership.
BackBox approach: BackBox doesn’t rely on a third party for AI expertise. We have our own guiding principles for AI that demonstrate our strong commitment to ethical AI development and deployment. Our team is knowledgeable in AI and prepared to answer your questions, including how we use AI and where it fits within the platform, best practices for implementation, key customer use cases, and our plans.
3 Red Flags to Watch For
As you work through these questions with vendors, here are three potential “gotchas” to watch out for.
- Vague responses regarding data usage: If a vendor can’t explicitly say no to training on your data, assume they are using it.
- “Fully autonomous” claims: The absence of human-in-the-loop oversight for high-risk decisions will slow adoption and, even worse, expose your organization to risk.
- Lack of clarity about data sources: Transparency into the sources of data the model will use to provide answers and make recommendations is a “must-have”; get the details.
Interested in taking the next step? Contact us for a demo or check out these resources to learn more:


