Introduction to Automating Infrastructure
In this module, you will learn about automation. Automation is the use of code to configure, deploy, and manage applications, together with the compute, storage, and network infrastructure and the services on which they run.
The tools in this area include Ansible, Puppet, and Chef, to name a few. For automation with Cisco infrastructure, Cisco platforms can integrate with those common tools or provide direct API access to the programmable infrastructure. Whether in a campus or branch configuration, in your own data center, or as a service provider, there are Cisco tools that do more than network configuration management.
When you understand what automation is and what it can do for you, you’ll be ready to visit the Cisco DevNet Automation Exchange to explore solutions that can work for you.
Cisco Automation Solutions
There are several use cases for network automation. Depending on the operational model you want to follow, you have choices in how to programmatically control your network configurations and infrastructure. Let’s look at Cisco automation through the DevNet Automation Exchange to understand the levels of complexity and the choices available.
Walk: Read-only automation
Using automation tools, you can gather information about your network configuration. This scenario answers the most basic and common question you can ask: “What changed?”
By gathering read-only data, you minimize the risk of causing a change that could break your network environment. Using GET requests is also a great way to start writing code solutions for data-collection tasks. Plus, you can use a read scenario to audit configurations and take the next natural step, which is to bring the configuration back into compliance. In the Automation Exchange, this shift is categorized as a walk-run-fly progression.
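As a minimal sketch of this read-only “Walk” approach, the Python snippet below issues a single RESTCONF GET and prints the interface list so it can be compared against a known-good baseline. The device address and credentials are hypothetical, and it assumes the device supports RESTCONF and the standard IETF interfaces data model.

```python
# Read-only "Walk" sketch: one RESTCONF GET, nothing on the device changes.
# Device address and credentials are hypothetical lab values.
import requests

DEVICE = "https://198.51.100.10"      # hypothetical lab device
AUTH = ("admin", "password")          # hypothetical credentials
HEADERS = {"Accept": "application/yang-data+json"}

# Retrieve the interface configuration (assumes the IETF interfaces model).
url = f"{DEVICE}/restconf/data/ietf-interfaces:interfaces"
response = requests.get(url, auth=AUTH, headers=HEADERS, verify=False)
response.raise_for_status()

# Print each interface so the output can be diffed against a known-good
# baseline to answer "What changed?"
for interface in response.json()["ietf-interfaces:interfaces"]["interface"]:
    print(interface["name"], interface.get("description", ""))
```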
Run: Activate policies and provide self-service across multiple domains
With these “Run stage” automation scenarios, you can safely enable users to provision their own network updates. You can also automate onboarding workflows, manage day-to-day network configurations, and run through Day 0, Day 1, and daily (Day n) scenarios.
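The snippet below is a hedged sketch of a Run-stage change: a small, self-service update pushed over RESTCONF. The device address, credentials, and interface name are assumptions, and a production workflow would wrap this in validation, approval, and rollback steps.

```python
# "Run" stage sketch: a self-service change that sets an interface
# description via RESTCONF. All device details are hypothetical.
import requests

DEVICE = "https://198.51.100.10"      # hypothetical lab device
AUTH = ("admin", "password")          # hypothetical credentials
HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

interface = "GigabitEthernet1"        # hypothetical interface name
payload = {
    "ietf-interfaces:interface": {
        "name": interface,
        "type": "iana-if-type:ethernetCsmacd",
        "description": "Provisioned by self-service workflow",
        "enabled": True,
    }
}

# PUT the desired state for this one interface resource.
url = f"{DEVICE}/restconf/data/ietf-interfaces:interfaces/interface={interface}"
response = requests.put(url, auth=AUTH, headers=HEADERS, json=payload, verify=False)
response.raise_for_status()
print(f"{interface} updated: HTTP {response.status_code}")
```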
Fly: Deploy applications, network configurations, and more through CI/CD
For more complex automation and programmability examples, you want to go to the Fly stage of the DevNet Automation Exchange. Here you can get ahead of needs by monitoring and proactively managing your users and devices, while also gaining insights from telemetry data.
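A minimal sketch of a Fly-stage check is shown below. Run as a stage in a CI/CD pipeline after a deployment, it polls interface counters from a hypothetical device and fails the stage if error counters exceed a threshold; streaming telemetry would be the richer alternative to polling. The device address, credentials, and threshold are assumptions.

```python
# "Fly" stage sketch: a post-deployment health check a CI/CD pipeline could
# run. A non-zero exit code fails the pipeline stage. Device details and the
# threshold are hypothetical.
import sys
import requests

DEVICE = "https://198.51.100.10"      # hypothetical device
AUTH = ("admin", "password")          # hypothetical credentials
HEADERS = {"Accept": "application/yang-data+json"}
ERROR_THRESHOLD = 100                 # hypothetical acceptance criterion

# Poll operational interface statistics (assumes the IETF interfaces model).
url = f"{DEVICE}/restconf/data/ietf-interfaces:interfaces-state"
stats = requests.get(url, auth=AUTH, headers=HEADERS, verify=False).json()

failed = False
for interface in stats["ietf-interfaces:interfaces-state"]["interface"]:
    errors = int(interface["statistics"]["in-errors"])
    if errors > ERROR_THRESHOLD:
        print(f"{interface['name']}: {errors} input errors, above threshold")
        failed = True

sys.exit(1 if failed else 0)          # failing exit code stops the pipeline
```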
There are many use cases for infrastructure automation, and you are welcome to add to the collection in the DevNet Automation Exchange.
Why Do We Need Automation?
Enterprises compete and control costs by operating quickly and being able to scale their operations. Speed and agility enable the business to explore, experiment with, and exploit opportunities ahead of their competition. Scaling operations lets the business capture market share efficiently and match capacity to demand.
Developers need to accelerate every phase of software building: coding and iterating, testing, and staging. DevOps practices require developers to deploy and manage apps in production, so developers should also automate those activities.
Below are some of the risks incurred in manually deployed and managed environments.
Disadvantages of manual operations
Building a simple, monolithic web application server can take a practiced IT operator 30 minutes or more, especially when preparing for production environments. When this process is multiplied across dozens or hundreds of enterprise applications, multiple physical locations, data centers, and/or clouds, manual processes will, at some point, cause a break or even a network failure. This adds costs and slows down the business.
Manual processes such as waiting for infrastructure availability, manually configuring and deploying applications, and maintaining production systems are slow and very hard to scale. They can prevent your team from delivering new capabilities to colleagues and customers. Manual processes are always subject to human error, and documentation meant for humans is often incomplete and ambiguous, hard to test, and quickly outdated. This makes it difficult to encode and leverage hard-won knowledge about known-good configurations and best practices across large organizations and their disparate infrastructures.
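To illustrate the alternative, the sketch below encodes a known-good baseline as code and applies it to a list of devices, so the procedure is repeatable, reviewable, and testable instead of living only in a runbook. The device addresses, credentials, and payload values are hypothetical, and the payload keys follow the standard IETF system data model as an assumption.

```python
# Sketch of replacing a manual runbook with code: the same known-good
# settings are pushed to every device in a list. All values are hypothetical.
import requests

DEVICES = ["https://198.51.100.10", "https://198.51.100.11"]  # hypothetical
AUTH = ("admin", "password")                                   # hypothetical
HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

# The known-good baseline lives in version control, not a wiki page.
BASELINE = {
    "ietf-system:system": {
        "contact": "netops@example.com",
        "ntp": {"enabled": True},
    }
}

def apply_baseline(device: str) -> int:
    """Merge the baseline into one device's config; return the HTTP status."""
    url = f"{device}/restconf/data/ietf-system:system"
    response = requests.patch(url, auth=AUTH, headers=HEADERS,
                              json=BASELINE, verify=False)
    return response.status_code

for device in DEVICES:
    print(device, apply_baseline(device))
```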
Financial costs
Outages and breaches are most often caused when systems are misconfigured. This is frequently due to human error while making manual changes. An often-quoted Gartner statistic (from 2014) places the average cost of an IT outage at upwards of $5,600 USD per minute, or over $300,000 USD per hour. The cost of a security breach can be even greater; in the worst cases, it represents an existential threat to human life, property, business reputation, and/or organizational survival.
Figure: Financial Costs of Server Outages
Dependency risks
Today’s software ecosystem is decentralized. Developers no longer need to build and manage monolithic, full-stack solutions. Instead, they specialize by building individual components according to their needs and interests. Developers can mix and match the other components, infrastructure, and services needed to enable complete solutions and operate them efficiently at scale.
This modern software ecosystem aggregates the work of hundreds of thousands of independent contributors, all of whom share the benefits of participating in this vast collaboration. Participants are free to update their own work as needs and opportunities dictate, letting them bring new features to market quickly, fix bugs, and improve security.
Responsible developers attempt to anticipate and minimize the impact of updates and new releases on users by hewing closely to standards, deliberately engineering backwards compatibility, committing to provide long-term support for key product versions (e.g., the “LTS” versions of the Ubuntu Linux distribution), and other best practices.
This ecosystem introduces new requirements and new risks:
- Components need to be able to work alongside many other components in many different situations (known as being flexibly configurable), showing no more preference for specific companion components or architectures than absolutely necessary (known as being unopinionated).
- Component developers may abandon support for obsolete features and rarely encountered integrations, which disrupts processes that depend on those features. It is also difficult or impossible to test a release exhaustively, accounting for every configuration.
- Dependency-ridden application setups tend to get locked into fragile and increasingly insecure deployment stacks. They effectively become monoliths that cannot easily be managed, improved, scaled, or migrated to new, perhaps more cost-effective infrastructures. Updates and patches may be postponed because changes are risky to apply and difficult to roll back.