AI Won't Replace Network Engineers
After spending some time working with Model Context Protocol (MCP) servers that integrate with select systems, I've gotten very familiar with what GenAI can and can't do for network troubleshooting. The verdict? Our jobs are safe, but they're about to change dramatically.
First off, it's pretty clear that AI really doesn't understand intent, at least not yet. AI can tell you that 50 devices just joined a network, but it can't tell you whether that's a planned deployment or someone screwing up a config and putting a bunch of users on the wrong VLAN. That distinction matters when you're deciding whether to start troubleshooting or keep on doing what you're doing. It's also something unique to humans who have context for what's happening within an infrastructure and an organization.
AI is great at correlation but struggles with causation. Say your monitoring tools see a series of routing changes across multiple devices. AI might correctly identify the topology changes, but determining whether they were caused by someone adding new networks, a design problem causing routing instability, a failing optic causing flapping, or legitimate maintenance requires understanding the broader context of what's happening in the physical world.
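To make the correlation-versus-causation point concrete, here's a minimal sketch (all event data and names are hypothetical) of the kind of time-window grouping a monitoring tool or AI can automate easily. Grouping the events is trivial; assigning a cause to the group is the part that still needs a human with context.

```python
from datetime import datetime, timedelta

# Hypothetical routing-change events: (timestamp, device, message).
events = [
    (datetime(2024, 5, 1, 2, 0, 5), "core-rtr-1", "OSPF neighbor down"),
    (datetime(2024, 5, 1, 2, 0, 9), "core-rtr-2", "OSPF neighbor down"),
    (datetime(2024, 5, 1, 2, 0, 12), "dist-sw-3", "route table changed"),
    (datetime(2024, 5, 1, 6, 30, 0), "edge-rtr-9", "BGP session reset"),
]

def correlate(events, window=timedelta(seconds=30)):
    """Group events whose timestamps fall within `window` of the group's first event."""
    groups, current = [], []
    for ts, device, msg in sorted(events):
        if current and ts - current[0][0] > window:
            groups.append(current)
            current = []
        current.append((ts, device, msg))
    if current:
        groups.append(current)
    return groups

groups = correlate(events)
# The first three events correlate into one incident; the 06:30 event stands alone.
# Whether that incident was maintenance, a failing optic, or a design flaw
# is a causation question the grouping can't answer.
```

That's the whole trick: correlation is a sort and a subtraction. Causation is everything the data doesn't contain.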
The dirty secret about using AI for troubleshooting is that it's only as good as the data it's fed, and many network problems are unique. Sure, AI can pattern match against known issues, but networks fail in creative new ways every day. That novel bug after a software update that's causing your device to act unexpectedly? Good luck finding that in any training data, because it didn't exist when the model was trained. The model also may not have purview of all the bugs and issues sitting in TAC databases, and it can't account for vendor bugs that weren't public during training. Experienced engineers who have lived through a few of these events know a fresh bug when they see one.
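This is essentially what signature-based pattern matching looks like, stripped to the bone. A minimal sketch (the signature database and log lines here are invented for illustration; real TAC and bug databases are vastly larger): anything that matches a known signature gets a canned diagnosis, and anything novel falls straight through.

```python
import re

# Hypothetical database of known-issue signatures. A model only "knows"
# the bugs that existed, and were public, when it was trained.
KNOWN_SIGNATURES = {
    r"OSPF neighbor .* down.*dead timer": "Known: dead-timer mismatch",
    r"%MEM-2-MALLOC_FAIL": "Known: memory-leak bug, upgrade advised",
}

def match_known_issue(log_line):
    """Return a diagnosis for a known signature, or None for a novel failure."""
    for pattern, diagnosis in KNOWN_SIGNATURES.items():
        if re.search(pattern, log_line):
            return diagnosis
    return None  # novel failure: no training data, no signature, no answer

print(match_known_issue("%MEM-2-MALLOC_FAIL in process bgp"))
print(match_known_issue("weird crash right after the 9.3.1 upgrade"))  # None
```

Every `None` in that second column is where the experienced engineer earns their keep.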
The liability question alone ensures humans stay in the loop. When a critical network goes down, no one's accepting "ah shoot, the AI screwed that one up!" as an explanation. Someone with a name, credentials, and maybe even insurance needs to own critical infrastructure decisions. I've reviewed enough service contracts to know that clients expect accountability, so blaming AI systems probably won't fly.
Let's talk about what AI actually does change: the traditional escalation model. Level 1 techs who just reset passwords, power-cycle devices, or check cable connections? That role is probably not long for this world. But the idea that AI can handle everything up through Level 2 or 3? No way. Most issues at these levels require judgment calls that AI can't make. Should we restart that service during business hours? Is this warning actually critical for this specific environment? Is this the accounting department, and is it tax time? These are business decisions, not technical questions. Rejoice, Level 1 techs: with AI's assistance, you might be managing higher-level systems and getting out of the trenches sooner rather than later.
One thing is for sure: AI will make good engineers better and will expose mediocre ones. If your value proposition is memorizing CLI commands or being a human grep for log files, then yes, you might need to be worried. But if you understand business requirements, can translate between technical and human speak, and know when a technically correct solution is still the wrong answer, you're going to become even more valuable to the org.
I see a future where AI handles the mundane pattern matching and correlation analysis while engineers focus on architecture, automation, and complex troubleshooting. We're not being replaced so much as being promoted to work on problems that actually require contextual thinking, which is a good thing!
Network engineering has always been about understanding many systems. Not just technical systems, but human systems, business systems, and political systems. AI can model the technical stuff all day long, but it can't navigate the minefield of competing priorities, budget constraints, and the pet projects leadership insists on while more important work waits.
The robots aren't coming for your job; they're coming for the parts of your job you never liked anyway. Personally, I'm ready to let AI handle the boring stuff so I can spend more time on the interesting problems.