Five Azure Automation Mistakes I’ve Made (So You Don’t Have To)
Over the years, working with Azure and various other cloud providers, I’ve seen — and made — my fair share of automation mistakes. These weren’t theoretical slip-ups. They were real-world, forehead-smacking, “well, I won’t do that again” kinds of experiences. I’ve had the privilege (and sometimes the pain) of learning lessons the hard way, and in sharing them, I hope to help others take a smoother path.
This isn’t just about Azure-specific issues, though some of them do show up more often there. These are patterns that repeat across teams and projects, especially in environments new to cloud automation. Here’s what I’ve learned.
Watch the vBrownBag episode.
№1 Boil the Ocean
Early on, I fell into the trap of trying to make everything perfect before deploying anything. I’d try to define the entire infrastructure, with every dependency, integration, and configuration baked into a single deployment attempt. It felt like I couldn’t move forward until everything was pristine.
This kind of thinking led to endless planning, long delays, and environments that were fragile the moment they touched reality. The biggest breakthrough came when I embraced an iterative approach. Starting small — just a resource group, for example — let me build incrementally and safely. Terraform helped reinforce this mindset. The word “apply” itself suggests repetition, not finality. I apply, then I apply again. I don’t deploy once and hope for the best.
Start with the basics. One small piece. Then build from there.
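In Terraform terms, that first small piece can be a single resource group. A minimal sketch, assuming the `azurerm` provider; the names and region below are placeholders:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# The smallest useful starting point: one resource group.
# Apply this, confirm it works, then layer on networking,
# identity, and workloads in later applies.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo-dev"   # placeholder name
  location = "eastus2"       # placeholder region
}
```

Each subsequent `terraform apply` builds on the last, which is exactly the repetition the word suggests.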
№2 The Cloud Shotgun
At some point, I got really good at spinning up environments quickly. The problem was that I wasn’t thinking about what happened after that. I just needed to get the infrastructure up and running. Day-one operations were everything. Day two? Not so much.
This shotgun approach to cloud provisioning created environments that were difficult to maintain, impossible to evolve, and full of surprises. It reminded me of T-shirt cannons at sports events — shooting things into the crowd with zero concern about what happens after impact.
If I’m not thinking about how changes will be made later, how observability will work, or how other teams will support the infrastructure, then I’m just building technical debt in real time. I’ve had to slow down and ask: Can this be maintained? Can someone else modify it safely? If the answer is no, then it’s not ready.
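One cheap way to make those day-two questions answerable is to bake ownership metadata into the resources themselves. A hedged sketch in Terraform; the tag names and values are hypothetical conventions, not anything Azure mandates:

```hcl
# Hypothetical tag scheme; adapt keys and values to your own team.
locals {
  common_tags = {
    owner       = "platform-team" # who to call on day two
    managed_by  = "terraform"     # a warning against ad-hoc portal edits
    environment = "dev"
  }
}

resource "azurerm_resource_group" "app" {
  name     = "rg-app-dev" # placeholder name
  location = "eastus2"    # placeholder region
  tags     = local.common_tags
}
```

When every resource carries tags like these, another team can at least find out who owns a thing before they change it.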
№3 Deploy by Hand
There was a time when I could spin up an environment in the Azure Portal faster than you could blink. But then I’d have to do it again. And again. And maybe fix a small issue. Or rebuild it. Suddenly, the charm of speed wore off, and I realized I’d created something unrepeatable.
Manual deployments invite inconsistency, human error, and downtime. They might feel productive in the moment, but they come back to haunt you. And let’s be real — people make mistakes. Even with good intentions, it’s easy to forget a step, copy the wrong key, or click the wrong thing.
Automating deployments through CI/CD pipelines, infrastructure as code, and GitOps changed everything. It was hard at first — setting up permissions, pipelines, and identity integration takes effort. But once in place, these systems removed me from the equation in the best way. They made deployment safe, predictable, and repeatable. If it’s worth doing more than once, it’s worth automating.
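One concrete prerequisite for moving Terraform out of one person’s terminal and into a pipeline is shared remote state. A minimal sketch of an `azurerm` backend block; every name here is a placeholder, and the storage account must be created once, out of band:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"     # placeholder: pre-created out of band
    storage_account_name = "sttfstate12345" # placeholder: globally unique name
    container_name       = "tfstate"
    key                  = "app-dev.terraform.tfstate"
  }
}
```

With state in a shared storage account, a pipeline identity can plan and apply instead of a person, and the backend’s blob-lease locking keeps two concurrent applies from trampling each other.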
№4 Security Last
Security is often treated like the final step — something to worry about once everything’s working. I’ve been guilty of this too. I’d get the infrastructure set up, prove that it functioned, and then think, “Okay, now let’s lock it down.”
That’s the wrong order.
The better way? Start with security in place. Use Azure Policy in audit mode to see where things fall short early. Make mistakes in dev and test environments, not in production. When I took this approach seriously, I found security issues I would never have caught until it was too late — over-permissioned identities, missing firewall rules, open endpoints, and more.
Applying least-privilege access early and often forces better design. It also keeps me from having to unbuild insecure environments later. The path to secure infrastructure is through small, intentional steps — what I like to call “baby steps.” Security is a process, not a patch job.
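Putting Azure Policy in place from day one can be expressed in Terraform too. A sketch assuming the `azurerm` provider v3+; the built-in policy’s display name below is only an example, so substitute the definition you actually care about:

```hcl
data "azurerm_subscription" "current" {}

# Look up a built-in policy definition by display name.
# This display name is an example; swap in the built-in you need.
data "azurerm_policy_definition" "storage_network" {
  display_name = "Storage accounts should restrict network access"
}

resource "azurerm_subscription_policy_assignment" "audit_storage" {
  name                 = "audit-storage-network"
  subscription_id      = data.azurerm_subscription.current.id
  policy_definition_id = data.azurerm_policy_definition.storage_network.id

  # Many built-ins expose an "effect" parameter; "Audit" reports
  # violations without blocking deployments, which is ideal for
  # dev and test environments.
  parameters = jsonencode({
    effect = { value = "Audit" }
  })
}
```

Running in audit mode first surfaces the over-permissioned identities and open endpoints while they are still cheap to fix.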
№5 The One-Man Band
There was a time when I was “the Terraform person,” “the pipeline person,” or “the only one who knows how the environments are built.” At first, it felt good. Job security. Recognition. Then the reality set in: everything slowed down when I wasn’t available. I became the bottleneck.
No one should be a one-person automation team. It doesn’t scale. It’s stressful. And it’s dangerous for the project. If I go on vacation, or move to another team, what happens? It turns out — nothing good.
The solution isn’t more automation. It’s shared knowledge. Cross-training. Documentation. Involving others in how things are built and deployed. Infrastructure as code isn’t hard. It’s not a secret club. Anyone on the team who builds applications should understand the infrastructure they’re targeting.
If I can write it, others can learn it. If others can learn it, we all move faster — and no one wears the whole band on their back.
Conclusion
What I’ve learned is that none of these mistakes exist in isolation. Trying to do everything at once leads to brittle environments. Deploying by hand makes it hard to enforce security. Not sharing knowledge leads to burnout. These issues compound.
The good news? The solutions compound too. Start small. Think long-term. Automate early. Build securely. Share what you know.
Cloud automation doesn’t have to be painful. But avoiding that pain means approaching it with humility, discipline, and a willingness to learn — ideally from someone else’s mistakes.
Happy Azure Terraforming.