Hello, and welcome to our course, Ansible Fundamentals. My name is Chris Caillouet, a technical training content developer at Red Hat. I look forward to taking you through an exploration of Ansible and its core concepts. We'll take a look at managing the inventory, or collection of systems you wish to manage with Ansible. We'll learn to write one-off tasks we call ad hoc commands. We'll then take those ad hoc commands and author our first Ansible playbooks, or sets of tasks we wish to use to automate configuration management. We'll templatize those instructions using the Jinja2 template engine, and then we'll transition to using roles and the community-based Ansible Galaxy. After that, we'll take a look at a few complex inventory scenarios that represent real-world environmental architectures you may encounter. For this course, we expect some limited experience with Linux. Certainly it will be helpful if you have command line interface exposure and are proficient with a Linux-based text editor. From there, we'll explore Ansible within that environment, and then you'll be off to automating your system administrative tasks. I look forward to seeing you throughout the course. Now let's get started with our next module. See you there.
My environment will consist of several machines. I'll be using cloud-deployed Red Hat Enterprise Linux systems, specifically version 8 of Red Hat Enterprise Linux. You could deploy your own system locally with this operating system or a similar one like CentOS or Fedora. If you have the ability to deploy virtual machines, you can use that approach as well. Lastly, if you have cloud-based accounts, you can certainly deploy a number of machines in that fashion to interact with throughout this course. My own environment will consist of an Ansible control node. From here, I'll author and run all of my Ansible workloads. The four other systems available to me will consist of two I'll call web servers and two others I'll call database servers. This is purely an organizational technique so that I can qualify the different purposes and roles of the machines in my environment. If you wish to practice these techniques, you can do so on a local workstation only; however, it can be valuable to see Ansible work against a number of hosts in your inventory. Now that I've explained the environment I'll be working with, let's get into exploring Ansible itself. I look forward to taking you on this journey, and I hope you'll discover the value that Ansible provides to a system administrator and the ease of automation you can adopt within your own environments.
Hello, and welcome to this module from our Ansible Fundamentals course. In this module, we'll discuss managing the inventory. Ansible takes advantage of a user-defined inventory to describe and target the hosts you wish to manage. In this section, we'll take a look at creating your first static inventories to manage your hosts. We'll examine inventory files and explain their format, create our first inventory files that define the Linux-based hosts we wish to target with Ansible, and define groups and collect those hosts into them. When we need to manage hosts with Ansible, we'll need to be able to describe those hosts within the inventory system. A host can be collected into groups you define in the inventory, and groups can contain subgroups or child groups. A host can be organized into as many groups as make sense for your environment and then targeted through your work in Ansible. Lastly, through the inventory system, you can apply variables to the hosts and groups you define. The first way to define inventory is with a static inventory file. This is a simple text file, but with a very specific format known as INI style. Additionally, it can be defined in YAML, but users are typically more comfortable writing the INI style format for these files. When you're first getting started with Ansible, it's very easy to create these static inventory files, especially in the learning phase. However, they do need to be updated manually, and this can become cumbersome over time. There's also the concept of dynamic inventory.
In dynamic inventory, we take advantage of scripting to automatically generate the inventory for Ansible to consume. We'll take a look at that a little bit later in this course. First, let's explore static inventory. The first thing you need to know is where your inventory files are located. These are defined within the Ansible configuration file. If you're unsure which configuration file is in use, Ansible provides a handy command: ansible --version will supply the path to the Ansible configuration file that's currently in use. When you explore that file, in the defaults section, you should see a key for inventory, which will be set to a path, either relative or absolute, depending on any alterations made to your file, pointing to the inventory Ansible is currently using. If this configuration is not set, by default, /etc/ansible/hosts will be consumed. Inventory files for static inventory are written in the INI format. This is the simplest form in which to author your inventory, and you can have a look at this example where we see a list of hosts we'll want to target using Ansible. These hosts can take the form of a fully qualified domain name or an IP address. Once you understand which systems you want to target using Ansible, you'll next need to understand how to organize those for intelligent targeting within the inventory file. To do so, you can organize your hosts into groups. Here you can see an example of an INI formatted inventory file that defines two groups, webservers and db_servers. Those group names are contained within bracket notation. Beneath the heading for each group, we list the hosts that belong to that group. Remember that any host can belong to as many groups as necessary for your work. Expanding on that concept, we can see a number of organizational groups in these sample files. Here we have the systems collected into web servers and DB servers.
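An INI-style inventory with purpose-based groups of this kind might look like the following sketch (the hostnames are illustrative placeholders):

```ini
# Hosts organized into two purpose-based groups
[webservers]
web1.example.com
web2.example.com

[db_servers]
db1.example.com
db2.example.com
```

Any host listed here could also appear in additional groups, such as ones describing its data center or environment.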
We have different regions, such as the East or West data centers, and different environments, such as production and development. You can see the same systems appearing in multiple groups. For example, the web1.example.com server is a web server, so it belongs to that group, but it also belongs in our east_datacenter group. Since it's a production server, we include it in the production group as well. Ansible predefines two special host groups. The host group all includes every host in your inventory. Additionally, the host group ungrouped includes any host that is not a member of another group. When defining your group names, you can include underscores, but dashes should be avoided. Lastly, a helpful hint is to not give a group the same name as a host. This can become confusing, especially if you have other members of your team or organization who rely on your authored inventory files. Groups can also contain other groups as their members. We call these nested groups, and we use the :children suffix to define a group of this type. In this example, we have systems in our usa group, as well as others in our canada group. We define a nested group of north_america using the :children nomenclature, and you can see we then list out the groups that nested group contains. When managing a large collection of systems, it can be necessary and definitely helpful to understand some shortcuts for defining large system ranges. For ranges, you can use bracket notation with a starting and ending sequence number separated by a colon. For example, the pattern 192.168.[4:7].[0:255] expands to match the entire CIDR range of 192.168.4.0/22. It's very helpful when you have large networks or a large number of host systems to understand this shorthand. Additionally, in fully qualified domain names, you can use ranges to expand things.
If we have server01 through server20 of example.com, you can see we're using bracket notation to write server[01:20], which will be expanded to include each of the entries from server01 all the way through server20. This isn't limited to numbers; we can use alphabetical expansion as well. This last example uses an expansion of a through c to expand a.dns.example.com, as well as b and c of the same qualified domain name. Here are two other examples to understand how Ansible expands these ranges. In the usa group, we have washington with a range of 1 through 2, and in canada, a range of 01 through 02. When you include leading zeros within the brackets, the expansion Ansible performs will also include those zeros. It's key to note that ontario01 is a match, but ontario1 is not. If you need both of those to match, you'll need individual entries for those expansions. While INI format is the typical and certainly the easiest way to author your first inventory files, Ansible can also consume these files in YAML format. You can see a side-by-side comparison of an INI format file and a YAML formatted file on the right. We'll discuss more about YAML and its syntax as we get into the course, but for now, you can see the indentation is quite specific in YAML, whereas all entries in an INI file are left-justified to the margin. Once you've authored an inventory, you can use the ansible-inventory command to peruse its contents. The -i option can be used if you need to check any file other than the currently defined inventory; just supply the path to the alternative inventory file. If you were to use the command below, it would display the current inventory in YAML format. This is an easy way to convert INI files into YAML format if you ever had that need. When working with large inventories, it can be cumbersome to peruse them to determine if a host is contained within a particular inventory.
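As an aside, the bracket-range expansion described above can be sketched in a few lines of Python. This is a simplified illustration of the behavior, not Ansible's actual implementation; it covers the numeric (with leading zeros preserved) and alphabetic ranges discussed here.

```python
import re

def expand_range(pattern):
    """Expand Ansible-style bracket ranges such as 'server[01:20].example.com'
    or '[a:c].dns.example.com'. Simplified sketch, not Ansible's real code."""
    m = re.search(r"\[([^:\]]+):([^:\]]+)\]", pattern)
    if not m:
        return [pattern]
    start, end = m.group(1), m.group(2)
    head, tail = pattern[:m.start()], pattern[m.end():]
    if start.isdigit():
        # Zero-padding in the start value is preserved in the expansion,
        # so [01:02] yields 01, 02 -- matching ontario01 but not ontario1.
        width = len(start)
        items = [str(i).zfill(width) for i in range(int(start), int(end) + 1)]
    else:
        # Alphabetic range, e.g. [a:c] -> a, b, c
        items = [chr(c) for c in range(ord(start), ord(end) + 1)]
    # Recurse so patterns with several ranges, like 192.168.[4:7].[0:255], work
    return [out for item in items for out in expand_range(head + item + tail)]

print(expand_range("db[01:02].example.com"))
# ['db01.example.com', 'db02.example.com']
```

Trying it against the examples above, expand_range("[a:c].dns.example.com") yields the three a/b/c hostnames, and a pattern with no brackets is returned unchanged.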
We can use the ansible command with --list-hosts to query whether or not a host is listed within a given inventory. Here's an example of this technique. Given the ansible command, we want to find out if washington1.example.com is contained within our current inventory. We use the --list-hosts argument and can see that one host of that matching name is defined within the inventory. If we wanted to query whether washington01 was included, you can see that in the second entry: washington01 does not return any hosts available within the inventory. Throughout this course, I'm going to hop into our terminal so that we can put these techniques to use. My environment consists of a singular Ansible control node running Red Hat Enterprise Linux version 8; you could consider using CentOS version 8 as well. For the hosts that I'll be managing, I have two web servers and two database servers. I've called these web01 and web02 for the web servers and db01 and db02 for the database servers. We'll utilize these four systems to put all the techniques we discuss throughout the course into practice. To get started, I want to go ahead and author an inventory that will allow us to manage these four hosts throughout the rest of the course. Currently, I'm on my machine with Ansible already installed. To understand the default locations for inventory, I'll take a look at the Ansible config file. We'll take a look at this file throughout the course to determine various default settings, as well as the options available to us to control the behavior of Ansible. The default Ansible config file can be found at /etc/ansible/ansible.cfg. We'll explore more about this file throughout the remainder of the course.
Here, I want to take a quick glance and specifically call out where our inventory is configured. As you can see by the blue text, almost the entirety of this file is a set of comments. This is intentional. Other than the section headings, Ansible is providing you with a glimpse into the default values it has chosen. You need only supply overriding values when you wish to change from these defaults. First, we'll be authoring an inventory file. The default location for an inventory in Ansible is the /etc/ansible/hosts file. This is a fantastic file to visit when you're new to Ansible and trying to author your first inventory file. As you can see, once again, all of this file is commented out, but it does provide helpful tips on how to author inventory using the INI format. In the first example, you can see a set of four ungrouped hosts. These are a mix of both fully qualified domain names and IP addresses, as we can see in the four entries. The second example includes some grouping. Here we have a group named webservers, which also contains four hosts. To take advantage of some patterns, you can see the range of 001 through 006 on this line. This will expand to include all hosts from www001 through www006, including the rest of the fully qualified domain name, .example.com. If we wished to group a set of DB servers, we could supply a group name within brackets and then iterate those DB servers. Again, we see another example of range notation here. In this example, we're omitting any leading zeros, so the expansion will not include them. Let's now author our first inventory file for the systems we'll have throughout the rest of the course. While the default location for Ansible inventory is /etc/ansible/hosts, we can supply any inventory file we author. I'll utilize an ansible folder in my home directory for the demo user throughout this course.
I'll go ahead and author my first inventory file, called inventory. At its most basic, we need only list the host names here. As stated, I'll be utilizing four systems throughout this course. I've given them the names web01, web02, db01, and db02. As these are the four hosts I wish to manage, this in itself would be a simple enough inventory to control all of those machines. Now, as we've seen, groupings can be very helpful, so let's add some groups to organize these. As you can see, we've put the web nodes in the webservers group and the db nodes in the databases group. We could further use range shorthand, although with only four entries it's not especially interesting; I'll do so with the database servers. In the top example we've listed these nodes individually, while in the second example group we've used a range to expand db01 and db02. We can save our work and make sure that Ansible can list the hosts from that inventory. Using the ansible-inventory command, I can pass the -i flag to name any inventory file, such as the one I just authored, and then use the --list option. This should iterate through all of the hosts we've listed in our inventory and give us a JSON-formatted output of those contained within. Here you can see the databases group containing db01 and db02, expanded from the range we entered, as well as the webservers group for web01 and web02. That concludes this section, and I look forward to seeing you in the next videos.
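Assembled from the steps in this demo, the finished inventory file would look roughly like this:

```ini
# ~/ansible/inventory -- the four hosts used throughout this course
[webservers]
web01
web02

[databases]
db[01:02]
```

The databases group uses the range shorthand, which Ansible expands to db01 and db02.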
Welcome back! In this section, we'll explore managing connection settings and privilege escalation. Ansible allows us to configure the connections we'll use when contacting hosts, as well as how we'll escalate privileges when needed within playbooks and play tasks. We'll also take a look at how Ansible determines the configuration file that will be used and how it gets applied. A real benefit of Ansible is its agentless architecture. What we mean by this is that there's no custom agent installed on any of the hosts you manage. This is great because Ansible natively takes advantage of the communication methods available within your operating system; SSH, for example, on Linux-based hosts. No additional software need be installed and no additional ports need be opened, which avoids introducing vulnerabilities, as well as the maintenance headaches that could arise if you were managing different custom agents on these machines. Since we're going to be using the typical ways you communicate with your managed systems as a system administrator, Ansible won't require you to learn any new tooling. You'll manage your systems the same way you do manually, but in an automated fashion. Now that we understand how an inventory describes the list of hosts we wish for Ansible to manage, we can go forward and understand how to describe to Ansible the various bits of information it will need in order to do so. First, the location of the inventory file can be found within the Ansible configuration file. If you're unsure which configuration file you're using, you can use the --version flag for the ansible command. By default, Linux-based systems will take advantage of the SSH protocol. If you wish to define a separate protocol or a nonstandard port, you'll need to describe that to Ansible as well. The user used to log into the system can also be described in inventory.
Once you've gained access to a system, if you need to escalate privileges to the administrative or root credentials, Ansible will need to understand how that occurs in your environment. By default, it will use the sudo command to do so. Other options exist; if you use a different privilege escalation method, such as the su command, that's available and simply needs to be specified within the Ansible configuration. Lastly, you can describe to Ansible whether an SSH password should be provided, or whether key-based authentication is in place. All of these defaults can be adjusted within the Ansible configuration file or by passing a set of flags on the command line during invocation. It is not uncommon to have several configuration files for different Ansible workloads in your environment, so you'll need to understand which one Ansible looks for, and in which order. Ansible first consults the ANSIBLE_CONFIG environment variable. If set, it should contain the path to an Ansible configuration file. If this is not set, Ansible looks for the configuration file in specific paths, the first being your current working directory. If it doesn't find an ansible.cfg file in the current working directory, it will then look in your home directory for a dot file, or hidden file. Lastly, if it hasn't found a configuration file in any of those locations, it will use the default installation file at /etc/ansible/ansible.cfg. Again, just a reminder that the --version flag for the ansible command will clearly spell out which configuration file is being consulted. If you navigate around the file system, be sure to check this, because you may have switched into a directory that contains an alternative ansible.cfg file; you'll want to know if this is the case. The ansible.cfg file consists of several sections. We'll take a look at this config file momentarily.
Each section contains a heading and a collection of key-value pairs. The section headings, or titles, are enclosed within square brackets, and the key-value pairs are set as key equals value. The basic operation of Ansible takes advantage of two main sections. One is the defaults section for Ansible operations, and the second is the privilege_escalation section, where Ansible looks to understand how to escalate privileges on your managed hosts when invoked. The connection settings we discussed previously are defined within the defaults section of the configuration file. This includes three main pieces of information for Ansible. The remote_user key explains which user to use when connecting to managed hosts. If you do not supply a remote_user argument, it will use your current username. The remote_port key specifies which SSH port you'll use to contact your managed hosts; by default, this is port 22. The ask_pass key controls whether or not Ansible will prompt you for an SSH password. By default, it does not prompt for a password, as it is most customary and a best practice to use key-based authentication for SSH connections. In the privilege_escalation section of the configuration file, several main keys are used for Ansible to understand how to escalate privileges to a higher tiered user, such as the root user. The become key describes whether you will automatically use privilege escalation. This is a Boolean, and the default is set to no. The become_user key defines which user to switch to when privilege escalation occurs. By default, this is the root user. The become_method key determines how Ansible will switch to that escalated user. Sudo is the default implementation; however, there are other options, such as su. Lastly, the become_ask_pass key controls whether or not Ansible prompts you for a password when escalating privileges.
By default, this is set to no. It is very typical to make adjustments to the default behaviors of Ansible. Managing settings within the configuration file is common, not only for your environment but potentially for different workloads. In general, an ansible.cfg file should contain only the keys you're overriding from defaults. Here you can see an example of an Ansible config file that points the inventory at the current working directory, sets remote_user to ansible, sets the ask_pass argument to false, and then defines some privilege escalation rules: become is set to true, the user to become is set to root, and the ask pass feature is disabled. This ansible.cfg example specifies the defaults that Ansible is assuming. Should you need to override any of these, you can edit your ansible.cfg. In a typical environment, not all hosts are equal, and there could be different properties we wish to set as variables on specific hosts. There are many ways to do this within Ansible, but here let's take a look at one or two. One of the easiest ways to provide host-specific variables is to create a host_vars directory. In that directory, you'll create a text file that matches the host name. Within this text file, you can supply a list of key-value pairs that are unique to that host. Any variables provided in this fashion will override the ones set within the ansible.cfg file. There's also a slightly different syntax and naming when it comes to using this method. Let's have a look at the host-based connection and privilege escalation variables. The ansible_host variable specifies a different IP or hostname to use when connecting to the host instead of the one specified in inventory; think of this as a secondary IP or alternative hostname for that host. The ansible_port variable specifies the SSH port you prefer to use for connecting to that host. The ansible_user variable specifies the user for that connection.
The ansible_become variable specifies whether or not to use privilege escalation for that host, and ansible_become_user specifies which user to become on that host. Lastly, ansible_become_method specifies how privilege escalation works, whether that be sudo, su, or an alternative. Let's have a look at an example of some host-based connection variables contained within those files in a host_vars subdirectory. Here we have a subdirectory host_vars containing the file server1.example.com, which contains this example. These variables will be used when connecting to and manipulating server1.example.com only. Here we can see an ansible_host IP address being specified, an alternative port for the connection, the user root to use when connecting to that server, and disablement of the become facility. No other servers will inherit this, but this will override any defaults contained in ansible.cfg when you interact with server1.example.com. Now that we've determined how to create inventory and supply additional information on how we manage specific hosts, let's talk about how to prepare managed hosts for management through Ansible. The first and highly encouraged approach is to set up SSH key-based authentication to a user that can use sudo to become root without a password. Requiring passwords to be supplied when invoking an Ansible play or playbook forces manual interaction. There are some advanced techniques to get around that, but simple SSH key-based authentication allows a higher degree of automation when working with Ansible, simulating password-less authentication and allowing full automation of Ansible scripting. Either way, Ansible is flexible enough to allow either implementation, or alternatives I'm not even discussing here, to best match your security policy and preferred managerial style for your systems.
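Pulling the pieces of that example together, a host_vars/server1.example.com file might contain entries along these lines (the specific IP address and port values here are illustrative placeholders, not taken from the original example):

```yaml
# host_vars/server1.example.com -- overrides applied to this host only
ansible_host: 192.0.2.104   # alternative IP to use for the connection (illustrative value)
ansible_port: 34102         # nonstandard SSH port (illustrative value)
ansible_user: root          # connect as the root user
ansible_become: false       # disable the become facility for this host
```

Because the file name matches the inventory host name, these values apply only to server1.example.com and override the corresponding ansible.cfg defaults.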
Let's have a look at some of these concepts as we put them into practice. Revisiting our ansible.cfg file, let's have a look at the privilege settings. I can search for the keyword become to find the area that deals with this. Here, the heading privilege_escalation defines the various arguments for how we will, or whether we will, allow privilege escalation. You can see the values used in a default installation of Ansible. By default, the ability to escalate privileges, or the become argument, is set to true; the become method is sudo; the user it will switch to is the root user; and asking for a password is set to false. The assumption Ansible is making here is that you will have SSH key-based authentication and password-less root privilege escalation. Let's go ahead and begin creating our own custom ansible.cfg file to adopt the changes we'll perform throughout this course. As shown previously, we should have a defaults section. We'll use this to define the inventory we previously created, setting it equal to our inventory file located in my home directory. Next, we'll paste in the privilege_escalation values from the main ansible.cfg file. In this fashion, we can then start altering those values. We'll be explicit here: while these are the defaults, we'll still make sure they're written as we would expect to see them. Let's go ahead and save that file. Taking a look at what we've built so far, we now have our defaults pointing inventory to the localized inventory in the current working directory. We also have the privilege_escalation values, which match the default values. We'll make modifications to this file throughout the rest of this course. Now that we have a custom configuration file in our working directory, we can use ansible --version. The reason to do so is to make sure we understand the specific config file that Ansible will utilize when performing its work.
Here you can see that config file points to this localized entry. One of the easiest ways to collect host-based variables, specifically ones around connection settings and privilege escalation, is in their own files. We'll contain these files in a host_vars directory. Let's create that structure now. Switching into that directory, we'll now create an individual file per host to supply any unique values that override the defaults. I'll do this for one of my systems by creating a file for the db01 system. Now that we have our file, I'll begin by putting a comment at the top that describes the file's intention: to contain the host variables for the db01 system. This is helpful for yourself and subsequent administrators who encounter this file so that they can understand its purpose. Now that we have our comment, we can begin providing our key-value pairs in proper YAML syntax. To begin, we'll use the three dashes. Once those are in place, we can supply key-value pairs, one per line. Here we'll override the become_ask_pass value that we've set to false in our ansible.cfg, setting it to true for only the db01 system. Let's save our work and have a look at what we've written. Great. Let's give this a quick test. I'll use an Ansible ad hoc command with a simple module, the ping module. Here you see me targeting the databases group, yet limiting it to only the db01 system. Oh, I encountered an error. My error here is because I'm located in the host_vars directory. If I change to the directory one level higher, where our inventory actually lives, we shouldn't have this problem. Rerunning the command, we can now see that our variables are supplied in a proper format and do not cause an error. Let's continue to expand on this work.
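As authored in this demo, the db01 host variables file would look something like the following sketch (the key name here follows the narration's description):

```yaml
# Host variables for the db01 system
---
become_ask_pass: true   # prompt for a privilege escalation password on db01 only
```

The leading comment documents the file's intent, and the three dashes mark the start of the YAML document before any key-value pairs.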
I've changed back into the directory, and now I'll re-edit the file. Here we can envision a scenario where a custom DB port may be in use for the database that will run on this system. I'll supply a key-value pair for custom_db_port and set it to some example value like 1234. This custom_db_port variable will now be available to us in all of our Ansible workloads that interact with the db01 system. Cleaning up our work, we can have a look at the file we authored. Now let's go ahead and rerun that command to ensure that it parses our variable pairs correctly and that we can interact with these variables throughout our Ansible workloads. Moving up a directory and rerunning that Ansible ad hoc command, I can now see that both of those variables are appropriately parsed and available for our Ansible workloads. Now that we've configured an inventory, populated our keys, and can effectively manage these systems with Ansible, let's recap the work we've done to this point. First, inside our Ansible working directory, we have several files. Let's take a look at our ansible.cfg. You can see we've supplied several arguments. We've pointed the inventory setting at the inventory file we authored. Additionally, while these are default values, we have extracted some of the privilege_escalation values and contained them in our own custom ansible.cfg file, and we've made a number of changes to the default behavior of Ansible. We have some helpful commands to help us understand at a glance what we've overridden. Ansible-config is a command that allows us to peruse the running Ansible configuration. Specifically, we can use the dump subcommand to view all parameters Ansible is using. However, in this case, we really only wish to see the ones we've changed, so we can provide the --only-changed argument. When we execute this command, we'll see the output of the variables we're overriding with our custom ansible.cfg.
Note that these match one-to-one with the parameters we see above. After creating and populating our hosts with keys, we now have the ability to SSH directly into those hosts without supplying a password. This will be very helpful for automating our workloads with Ansible. Next, we'll take a look at using the ansible command to start interacting with these hosts. That concludes this section and the module. I look forward to seeing you in the next video.
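As a recap, a custom ansible.cfg along the lines built in this module would look roughly like this (the privilege_escalation values restate the defaults explicitly):

```ini
# ~/ansible/ansible.cfg -- custom configuration for this course
[defaults]
inventory = ./inventory

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
```

Running ansible-config dump --only-changed from this directory would then report just these overridden settings.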
Welcome back to our course, Ansible Fundamentals. In this module, we'll take a look at running one‑off tasks using ad hoc commands. Within Ansible, you can use command line ad hoc task execution to trigger a single Ansible task. This is very useful for quick interactions with our managed hosts. Ansible provides a feature known as ad hoc commands. These are simple one‑line operations that run without authoring any playbooks. These are great for quick tests, making easy changes, or even doing simple exploration on your managed hosts. We'll look at some great examples of this in action, as well as discuss the limitations of using ad hoc commands. Before we take a look at ad hoc commands directly, we need to understand the concept of Ansible modules. Ansible provides a catalog of modules. These are the underlying units of code that define how Ansible performs the automation tasks we'll leverage. Modules exist for a large number of system administrative tasks such as creating and managing users, installing, removing, or even updating software, deploying configurations, as well as configuring the network services that run on your systems. Ansible modules are what is known as idempotent. In other words, they'll always check to see if the work being requested is required on the system or if it's already in the desired state. If a system is already in the desired state described by your Ansible work, then it will skip that work and report back that no change was necessary. If a change is required, Ansible will then perform that change and report that as well. An ad hoc command runs a single module against a specified host to perform a single task. To run ad hoc commands, we'll use the ansible command. After the ansible command, you'll need to supply a host‑pattern. This host‑pattern will specify which hosts this task will run on. Additionally, you'll need to specify a module using the ‑m flag. 
Each module takes a unique set of arguments, which you'll provide with the ‑a flag. Lastly, you'll specify an inventory file with the ‑i flag, where the hosts can be found for Ansible. One of the simplest ad hoc commands, as well as one of the most common system administrative tasks, is ping. The ping module doesn't actually send an ICMP packet like the ping command we're used to as system administrators, but it does check to see if Ansible can contact the managed host. Specifically, in a default Linux implementation, this would be an SSH interaction. In the example here, you can see the command, ansible, targeting all hosts, calling the module ping. In the output, you can see a success in that it was able to contact the host servera.lab.example.com, which replied with the answer pong. This shows a proper and valid interaction via ping using the Ansible module. When using Ansible ad hoc commands, there are a number of flags available to the user to override default behaviors. The default behaviors are defined in the ansible.cfg configuration file, as we discussed in our previous module. We may find it necessary to tell Ansible to prompt for a password for our ad hoc command. You can use the ‑k flag or the ‑‑ask‑pass flag for this behavior. When you need to specify a particular user for the interaction, the ‑u flag will allow you to do so. This overrides the REMOTE_USER setting contained within ansible.cfg. A ‑b enables privilege escalation, akin to the become argument within our configuration file. The capital ‑K flag denotes that we need to be prompted for a password during privilege escalation, and the flag ‑‑become‑method overrides the default privilege escalation method. With Ansible, the default is sudo. Other valid choices exist, such as su, and can be seen using the ansible‑doc command. Most Ansible modules take a set of arguments to describe the actions you wish for Ansible to perform. 
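The general shape just described can be sketched as follows (the host pattern and inventory path here are placeholders for your own environment):

```shell
# General ad hoc command syntax (illustrative placeholders):
#   ansible <host-pattern> -m <module> -a '<arguments>' -i <inventory>

# The simplest example: contact every host via the ping module.
ansible all -m ping -i inventory

# Each reachable host reports back with "pong" on success.
```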
The ‑a flag in an ad hoc command allows you to supply those arguments. Syntactically, we'll contain those within single quotes and put a space between each key‑value pair. In this example, we're using the user module to ensure that a user named newbie exists and has a UID of 4000. You can see we're using the ‑m flag to declare the module user and the ‑a flag to specify those arguments. The output also shows that a successful interaction occurred, that work was changed, and that the user was created and set to the UID of 4000. Also important to consider is the concept of state. Here we've declared a state=present. We'll look at state throughout this course, as it is the main way Ansible has you describe the behavior you wish for it to perform. For example, if we wish to remove this user, we could change the state to absent and rerun this command. That would then remove the user we had just created. Here's a helpful list showing some of the flags you have available to you during ad hoc commands. Take a moment to peruse these arguments, as they will become very helpful as you craft your ad hoc commands. With a configured inventory, we can now start running Ansible commands. To take a look at the list of modules available to us for ad hoc or one‑off commands, we can use the ansible‑doc command. With the ‑l flag, it will list all modules available to us. This can be quite an overwhelming amount of output, so you may want to use other operators to drill down and find the modules you need. I'll look for one called ping. Great! With a subset here, we can see the very simple ping module. To get more information about that module, you'll use ansible‑doc and simply name that module. Here we get the full details of the ping module itself. At the bottom of each module's documentation, we also get some helpful examples. Now, this module is not very sophisticated, so this gets more interesting as you use more elaborate modules. 
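The user example described above might be sketched as follows (this assumes privilege escalation is configured, since user management requires it):

```shell
# Ensure a user named newbie exists with UID 4000.
ansible all -m user -a 'name=newbie uid=4000 state=present'

# Flipping state to absent would remove the same user:
ansible all -m user -a 'name=newbie state=absent'
```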
Now that we have a module that we'd like to try out, let's put it to use. We use the ansible command. We can either rely on the default inventory Ansible will find and the default configuration files it'll use, but to know what those are, we can use the ‑‑version flag to display them. We can see that we have the Ansible config file located in our /home/demo/ansible directory. We can consult this file to see if any values for inventory are overridden. Here we can see that we are overriding the default /etc/ansible/hosts file and using one that we've authored here in this location. Taking a look at that inventory file, we can see it simply contains two groups and the four hosts that we intend to manage during this course. Now that we understand how Ansible will behave, we can get to using Ansible commands. In this case, we'll run a simple one‑off command naming a module with the ‑m flag. We'll use the ping module. Before we specify the module that we wish, we also need to target some hosts contained in our inventory. I'll target all hosts. If we wish to be explicit about the inventory we're using instead of relying on any defaults or overridden values, we can simply provide the ‑i flag. I'll go ahead and try this command now. Great! We can see that a number of hosts from our inventory, specifically web02, db01, web01, and db02, were all successfully contacted. A proper response from the ansible ping module is pong. We can see all four of our hosts were able to be contacted and properly responded. It's also possible to supply additional flags to alter the behavior of the Ansible interaction. For example, if we wish to force a password handshake for authentication instead of relying on our SSH keys, we can supply the ‑k flag. Let's take a look at this. I'll target all hosts, and I'll show a new technique of limiting to a single host. 
I'll then use an explicit statement for our inventory, the ‑k flag to tell Ansible that I wish for it to prompt me for a password when authenticating to this host, and then a simple module call to ping. We're prompted for that password, and supplying our user's password for that system, the command continues. Note that instead of all systems responding here, only the limited web01 system responds. Let's try a few other modules as well. I'll use ansible‑doc, listing out all modules, and I'm going to search for one that manages services. With a large catalog available with the keyword service, I find in there one named explicitly service. Through this output, I can see the module name service listed. Taking a look at the documentation for just this module, I can see a bit more about its purpose. This module controls services on remote hosts, and the documentation includes helpful examples of how to use it in playbooks. Before we get into that, I'll simply use this in an ad hoc command. Each of our systems has the SSH service running on it, named sshd. Let's try an ad hoc command to restart sshd on our targeted hosts. I'll use an ansible command. This time I'll allow it to rely on the inventory we know it'll be using instead of explicitly stating it, to show you how simple ad hoc commands can be. I'll target all hosts. I'll call the module service. And I'll supply the arguments for that module. We'll use the key state, setting it to the value restarted. Additionally, we'll name the service we wish to restart. Now let's give that a try. Great! With a large list of output, we can see that each of our systems has restarted the sshd service. I'll take a look at just one of these. Here we can see db02 has a changed status, and the sshd service has been restarted. That concludes our look at a simple interaction through ad hoc commands with our managed hosts. Let's try one more example of ad hoc commands. Let's try to create users on our target machines. 
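The two interactions described above might look like this (inventory path and host names follow the demo environment):

```shell
# Limit a run to a single host and force a password prompt with -k:
ansible all --limit web01 -i inventory -k -m ping

# Restart sshd everywhere via the service module, relying on the
# inventory configured in ansible.cfg:
ansible all -m service -a 'name=sshd state=restarted'
```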
First, let's take a look at the Ansible module for user. We can note that the equal sign denotes mandatory fields, and not many of them are mandatory here, but we do have a lot of flexibility with the user module itself. I'll create a simple user by name. Here we can see that name is one of the mandatory fields, so I'll create a simple user using name and set a simple password. Let's use the user module to create a simple user across our web server systems. We'll start with the command ansible. We'll then target the group webservers from our inventory, which includes both the web01 and web02 systems. We'll call the user module and supply the arguments necessary to complete our work. The first thing we'll supply is the name of a new user we'll call test. We'll create a password for this user we'll call secure_password. Lastly, we'll set the state to present. This means that we'll create the user if it doesn't already exist on the machine. I'll go ahead and press Enter, and we'll see the work occur. Great! Looking at web02, but also including web01, we can see that both were changed and that we created the user test and a home directory. Our output does not show the password we created, for security purposes, but we know we didn't set a very secure password. Now that we've seen that user get created, let's log into that system and check it out. I'll hop onto the web01 system. If we take a look at our /etc/passwd file that contains all users on the system, we can see that the test user has, in fact, been created. Because we have now introduced users with insecure passwords, although we may have called it secure_password, we don't want to leave those users on the systems. To remove those users, we can simply switch the state to absent and run the same command again. 
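A sketch of the create-and-remove sequence described above (note that in real use the user module's password argument expects a crypted hash rather than plain text, so this value is for demonstration only):

```shell
# Create the test user on the webservers group (illustrative password).
ansible webservers -m user -a 'name=test password=secure_password state=present'

# Verify on a managed host, e.g. after ssh'ing to web01:
#   grep test /etc/passwd

# Remove the user again by switching state to absent:
ansible webservers -m user -a 'name=test state=absent'
```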
Here we can see that it did remove those users, and we can verify that by logging into one of these systems. If I take a look at that same password file, we can see that the test user has been removed. I'll see you in the next sections.
Welcome to our section on selecting modules for ad hoc commands. While we've discussed ad hoc commands, this module explains how Ansible modules are provided so that you can perform simple command line interactions on your targeted hosts. Ansible provides a catalog of modules to perform the various system administrative tasks we'd like to use via ad hoc commands. To see an entire list of the modules within your terminal, you can use the ansible‑doc command. The ‑l flag will list out all modules. In this output, the name and description of the module are displayed. Since there are thousands of modules, you may consider using a grep command to filter through the results. Additionally, this command's output is consistent with the online documentation. Let's take a look at that documentation. Here, we're visiting docs.ansible.com, and if you scroll down a bit, the module index is available. By clicking in, we have a categorization of all the modules within Ansible. To peruse the entire module catalog, you can click All modules. As you can see, a large library of modules is available for your needs. With such a large catalog of modules, it may be beneficial to select the category that most closely matches your needs, for example, Files modules. Here you can see a subset of the library that includes only the modules dealing with file management. Extracting just a few of the most commonly used modules, let's have a look at those that achieve the most typical system administrative tasks. When working with files, it's really common to need to place a file onto a managed host. We can do so using the copy module. With this module, we'll describe to Ansible the path on our local system to the file we wish placed on the destination server and where we'd like that placed. The file module then allows us to set permissions or properties on that file, such as read, write, and execute permissions. 
When we need to edit a single line in a file, such as a port setting in a configuration file for a service, we can utilize lineinfile. The synchronize module is available to perform functions similar to those available with the command rsync. Managing software on systems is a very common administrative task. You can see we have the yum, dnf, and gem modules to provide that functionality. System modules exist to manage many of the typical services, as well as reboot the machine. The service module allows us to control starting, stopping, enabling, and disabling services on our managed hosts. The user module allows for user management, such as creating new users, managing user accounts, or removing user accounts we no longer need. We have net tools modules that allow us to manipulate network‑based interactions. The get_url module allows us to download files over the various protocols HTTP, HTTPS, or FTP. The nmcli module gives us command line access to manage our networking. The uri module allows us to communicate with the APIs of web services. From our command line, when we wish to peruse deeper details about a specific module, we can use the ansible‑doc command and name a specific module. For example, here we're showing the ansible‑doc explanation of the module ping. While the output is omitted here, we'll take a look at that in the command line momentarily. One of the most helpful components of Ansible documentation in the command line is that each module includes helpful examples of how to use it within ad hoc commands or playbook authoring. Let's take a look at an example using one of these modules. You could use the ansible‑doc list to discover the module user. This module manages user accounts. From there, you could then run ansible‑doc user to learn more about how the module works. 
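The file and network modules just described might be used in ad hoc form like this (all paths, settings, and the URL are examples only, not from the course environment):

```shell
# copy: place a local file onto the managed hosts (example paths).
ansible all -m copy -a 'src=files/motd dest=/etc/motd'

# file: set ownership and permissions on that file.
ansible all -m file -a 'path=/etc/motd owner=root group=root mode=0644'

# lineinfile: ensure a single configuration line is present (example setting).
ansible all -m lineinfile -a 'path=/etc/ssh/sshd_config regexp=^PasswordAuthentication line="PasswordAuthentication no"'

# get_url: download a file over HTTP/HTTPS (example URL).
ansible all -m get_url -a 'url=https://example.com/app.tar.gz dest=/tmp/app.tar.gz'
```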
Specifically, you could take a look at the arguments required and available for this module, such as name, naming the particular user account, UID if you wish to specify a UID number for the account, and state. The Ansible concept of state allows us to declare the end result of the work we would like Ansible to perform, the end state we wish the machine to be in. In this case, we could have absent to remove users or present to add them. There are additional options that you could provide for this module, such as setting a password. In this simple ad hoc command, we are creating the user newbie, setting the UID to 4000, and ensuring that user exists on the machine by setting the state to present. As we had seen before, we're crafting our command using the ansible command, targeting the hosts, all, calling the module user, and supplying the arguments for that module. Here's another example for group management. The same user module can allow us to adjust group membership. The group argument allows us to set the user's primary group. The groups argument will allow us to supply a list of other groups we wish for this user to be assigned to. The argument append allows us to control whether we are adding this user to the groups we list or replacing all of their existing groups with this list. If we wish to add the user newbie to the group developers, as well as the group wheel, without changing the user's primary group or removing them from any other groups, we could craft this ad hoc command. Here, we're targeting all hosts and calling the module user. The arguments we're providing are the name of the user set to newbie, the groups we wish to add them to, developers and wheel, we're setting append to yes as we wish to keep them in any other groups, specifically their primary group, and we're setting the state to present. Managing software can be done through the package module. 
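The group-membership command described above could be sketched as:

```shell
# Add newbie to developers and wheel while keeping all existing
# group memberships (append=yes preserves the primary group too).
ansible all -m user -a 'name=newbie groups=developers,wheel append=yes state=present'
```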
We could discover this module using the ansible‑doc command and listing out all modules. If you did so, you could then run ansible‑doc package to find out more about how this module works. Specifically, the arguments for name, to detail the package we wish to manage, and state will need to be supplied. State can take an argument of present to add this piece of software, absent if we wish to remove it, or latest if we're trying to update already installed software. If we wanted to make sure that the httpd package is installed on all hosts, we could craft this Ansible ad hoc command. Here we're targeting all hosts, calling the module package, and supplying the arguments for the name, httpd, and state, present. Other modules are also available for package management such as yum, dnf, and apt. These work in a similar way, but only support the specific environments that have those package managers. The package module itself aims to be system agnostic and will determine the most appropriate way to manage software across a variety of operating systems. There are several modules we call command modules. These allow you to directly run commands on the managed hosts. These are provided to users in the event that there's no other module available to perform your preferred actions. You should always peruse the catalog to see if an Ansible module exists to perform the tasks you're attempting. However, in the absence of a module, you can rely on these. Please note that these are not idempotent. Ansible will be unaware of the intended outcomes, and therefore can't make determinations about idempotence. We have the modules command, which allows you to run a single command, shell, which allows you to run a command on the remote system shell, and raw, which simply runs a command with no processing whatsoever. Be advised that this can be a bit dangerous as Ansible will have no ability to provide fail safes for rogue commands. 
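The package installation just described might look like:

```shell
# Ensure the httpd package is installed on all hosts.
ansible all -m package -a 'name=httpd state=present'

# state=latest would update an already installed package;
# state=absent would remove it.
```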
As stated before, you should always rely on authored Ansible modules where available. Here's an example of us using the command module to run an arbitrary command. In this case, we're running the simple command hostname. The command module cannot access environment variables, as it does not have a shell, nor perform things like redirection or piping we would typically have available within a shell session. Here, you can see the ad hoc command ansible mymanagedhosts, a simulated inventory for this example, calling the module command and supplying the argument, which is that command. In this case, we do have to fully path to the location of the command we wish to run. Here's an example of our shell module being used. Note in the first example, we invoke the module command and have a failed interaction for the argument set. In the second example, using the shell module, we supply the same argument set. This will then properly change the shell, setting it to /bin/sh. Both of these modules require working Python installations on the managed hosts. Having a working Python installation on the managed hosts will be a prerequisite to proper Ansible management. The raw module, however, can run commands directly on the systems using the remote shell, bypassing the module subsystem entirely. This is useful for systems that do not or cannot have properly installed Python. A good example of a system that would be in that state is a network router. Ad hoc commands are available to you for a number of reasons. These are great when you need to make single, quick changes to a large number of systems. They're also great when you're perusing systems for a single piece of information, such as hostname, as we saw in our examples. They're very powerful and a great weapon to have in your arsenal; however, they do have some disadvantages. Ad hoc commands can only leverage one module at a time. This will limit them to simple interactions only. 
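The contrast between command and shell described here might be sketched as follows (mymanagedhosts is the example host pattern from the slides):

```shell
# command: no shell is involved, so we give the full path and
# cannot use shell features like pipes or redirection.
ansible mymanagedhosts -m command -a '/usr/bin/hostname'

# A shell built-in such as set fails under command but works under shell:
ansible mymanagedhosts -m command -a set   # fails: set is not an executable
ansible mymanagedhosts -m shell -a set     # succeeds: runs in a remote shell
```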
Additionally, you'll have to retype the same command again if you wish to reuse it. The list of options can grow long and complex as well, and in general, ad hoc commands are mostly manual in nature. It's great to take advantage of the library of Ansible modules; however, you're still typing out in true system administrative fashion all the interactions you wish to have with your managed hosts. In many cases, a better approach would be to author Ansible playbooks. Playbooks allow you to take advantage of multiple modules with various conditionals and other processing available to you as you author your workloads. Playbooks themselves are text files that can be tracked in version control systems. They can also be easily reused with a single command. Playbooks open up the vast array of Ansible's powers to any system administrator, and truly open the opportunity to automation. Let's get into our terminal and have a look at some of these concepts. Now that we're getting comfortable with the ad hoc commands, let's try more real‑world examples. Let's add users to a group we create. We'll look at how to do that using a few different modules, and we'll take a look at each using the ansible‑doc command. ansible‑doc has the list option with ‑l, and we can search through that with the grep command to find a module to manage group creation. With a large list of those, we're looking for plain group creation, so let's further trim that output with less so that we see it one screen at a time. You can see there's a lot of different ways we can manage groups, depending on the methodology in use. Here, we're wanting to just add or remove system groups, and the module we'll use to do that is simply called group. Let's have a look at the group documentation itself. This will manage the presence of groups on a host. We'll want to create a group we'll call demo, and we'll place a user called test in that group. 
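The discovery workflow described here might look like:

```shell
# List all modules, filter for group management, and page the results.
ansible-doc -l | grep group | less

# Read the documentation for the plain group module.
ansible-doc group
```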
We shouldn't have any artifacts from our previous exercises, as we also demonstrated the removal of the test user. Here with the group module, let's take a look at some of the options that we'll use to create this workload. We'll want to use name so that we can name our group. Again, we'll create a group called demo. We'll definitely need to declare a state of present or absent. In this case we're creating the group, so we could say present, or understand that the default behavior of this module is to assume present. So in this case, from an ad hoc command, we can omit that option. Let's give that a try. Our ansible command will target our web servers for this exercise. We'll call the module group, and then we'll supply the arguments. The name of the group we're creating is test. Since the default assumed state is present, we don't need to supply that key. Great. It's created the group on both of those systems, web01 and web02. To demonstrate the concept of idempotency that Ansible has, I'm going to rerun the same command. Note the color change in the output from yellow to green. In the first example, where we created the groups, we had a status of changed for those tasks. That meant that work was done by Ansible. In the second run, idempotency, or the fact that Ansible checks to see if it needs to do the work before simply performing it, meant that it found the group already existed and successfully reported that the group was present. This didn't require any work on Ansible's part, aside from the validation that it was in the desired state. Now let's proceed with creating a user and adding them to this group. We've already seen an example of using the user module, so I'll simply dive right in. I'll again target the webservers group, and I'll use the module named user. The arguments we'll supply here are the name of the user we want to create, which is the test user, and we're going to add them to the group demo. Excellent. 
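The two runs demonstrating idempotency might be sketched as:

```shell
# First run: creates the group, reported as "changed" (yellow output).
ansible webservers -m group -a 'name=test'

# Second run: the idempotency check finds the group already present,
# so it is reported as "ok" (green output) with no work performed.
ansible webservers -m group -a 'name=test'
```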
We can see that work was done with the changed status and the yellow output. Rerunning the same arguments, we see that it changes to green, as the work was already completed, but Ansible was able to validate that the hosts were in the declared state. Like before, we could declare a state of absent and remove these users. Great. It was able to remove the user. And if we were to rerun the same command, we would see that green output and simply report a success. Let's try the same for the group. Here we supply the name of the test group, and we'll change the state from its assumed default of present to an explicit absent. This should remove the group we created. Now, like I did before, I like rerunning those commands just to show that idempotency that Ansible provides. One final example, let's install some software. Since we're working with web servers, why not install the Apache web server? We'll use an Ansible module. Targeting the webservers group, call package. The arguments we'll supply for this are the name of the package we wish, in this case httpd, or the Apache web server, and the state of present. Let's give it a try. As package installations can take more time, you should expect a little more delay in the reporting status of an Ansible command of this nature. Here we see, it was able to install it on both of our web systems. As I did before, rerunning that command should report a status of ok, and the fact that it was already in our desired state. Nothing was changed, as it was already in the state, so there was nothing to do. If we wanted to remove this package, we'd simply change the state in our argument to absent. And there, we've removed that installed package. Similarly, as the package module allowed us to do installations, we can use one more specific to our distribution. We had used package, but for Red Hat‑based systems, the package manager yum has its own module. Supplying that module in place, we should see a very similar result. In effect, we do. 
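The software install-and-remove sequence described here might look like:

```shell
# Install the Apache web server on the webservers group.
ansible webservers -m package -a 'name=httpd state=present'

# Remove it by flipping state to absent.
ansible webservers -m package -a 'name=httpd state=absent'

# The distribution-specific yum module behaves very similarly on
# Red Hat-based systems.
ansible webservers -m yum -a 'name=httpd state=present'
```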
I'll use the yum module to remove it as well. While the yum module is very specific to Red Hat‑based systems as they utilize the yum package manager, the package module aims to be more system agnostic. You can experiment to see which one best serves your needs, depending on the given architecture and operating systems you're managing through Ansible. This concludes our section and our module. I look forward to seeing you in the next videos.
Welcome back to our course, Ansible Fundamentals. In this module, we'll discuss writing Ansible playbooks. We'll learn to create simple playbooks and then create and reference variables within them. We'll learn about conditional task handling, as well as triggering tasks with handlers. We'll recover task errors with blocks and then explore templating with the Jinja2 template engine. Writing Ansible playbooks is the primary way to automate tasks within Ansible. Playbooks are lists of one or more plays, and they're authored in a YAML‑based syntax. These are simple‑to‑create text files, and we'll learn their specific syntax, as well as how to create them, throughout the rest of this course. As a playbook contains one or more plays, a play is an ordered list of tasks to run against hosts in your inventory. Each task will take advantage of a specific Ansible module to perform some action against your managed hosts. Most of the tasks authored through the modules are idempotent and can safely be executed over and over again without issue. The intention of a playbook is to turn lengthy, complex manual system administration into easily repeatable routines. This should provide predictability, as well as reusability, for the work you author with Ansible. In this section, we'll look at creating a simple playbook. The first thing to know when formatting an Ansible playbook is about YAML. YAML is a simple‑to‑author structure with standard file extensions ending in .yml. Two‑space indentation with the space character only is the main concept behind the syntax within YAML files. Note that the spaces cannot be substituted with the tab character. The tab character is not allowed in proper YAML. YAML doesn't place strict requirements on how many spaces are used for the indentation, but two basic rules come into play. Data elements at the same level in the hierarchy must align with the same indentation. 
Items that are children of another item must be indented more than their parents. All children of the same data element, again, must be indented with the same indentation. Common practice is that this indentation obeys two spaces at a time. Let's take a look at this example and explore the various aspects of it, as well as discuss the YAML syntax in play. Proper playbooks always begin with three dashes to denote the start of a file. These will be fully left justified. Our first call out here in this diagram is the naming of the play. With a fully left justified dash, a single space, and the keyword name, we can then supply a human‑readable name. While this is the only optional aspect of Ansible playbook authoring we'll discuss, it would be foolhardy to omit this line. This allows any consumer to easily understand the purpose of the playbook, or even of the individual tasks that include the name field as well. The second call out we see is the hosts that will be targeted. Notice the two‑space indentation that aligns the keyword hosts with the keyword name above it. This falls in line with our previously discussed concept that elements at the same level of the hierarchy should align with their indentation. Our third call out denotes the privilege escalation enablement. The become keyword is set to yes. Once we've completed this, we can begin iterating our tasks by creating the tasks section. Our first task, listed below tasks, is further indented two additional spaces. We can see that the name of this task, user exists with the UID 4000, is then supplied. Aligned with the keyword name is now the name of the module we'll be using from the Ansible library of modules. In this example, we're taking a look at using the user module. As the arguments for this module are its children, they are further indented two spaces. The arguments for name, uid, and state are then supplied. This is a great simple example of a playbook containing a single task. 
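The playbook being described could be sketched as follows (the play name and user name here follow the slide's example, so treat the exact values as illustrative):

```yaml
---
- name: User newbie exists
  hosts: all
  become: yes

  tasks:
    - name: user exists with the UID 4000
      user:
        name: newbie
        uid: 4000
        state: present
```

Note how each level of the hierarchy (play keys, task list, module arguments) is indented two additional spaces, with siblings aligned.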
When we're ready to execute the playbook, Ansible provides a command ansible‑playbook. Once you've properly formatted a YAML file for your playbook, you can simply call that by file name. This can be either relative or absolute pathing. In this case, we're showing an example of relative pathing. Consider the file site.yml that contains the previous example. Given that, we could then call ansible‑playbook site.yml and see the execution as contained in this example. Note the play name is then displayed, as well as the task Gathering Facts. We'll discuss Gathering Facts a bit further. Gathering Facts is a built‑in feature of Ansible executions where Ansible will profile all the targeted hosts to understand as much as it can about them. After that's completed, the tasks we listed within our playbook, user exists with UID 4000, for example, are then run in top‑down order. Once the play completes, we get a Play Recap showing the ok status, changed status, unreachable, failed, skipped, rescued, and ignored states of all of the various tasks it encountered. In this case, one task resulted in a change, and both the Gathering Facts task, as well as our user creation, were okay. Failures will result in immediately halting the play execution. Within our inventories, we may not always find it appropriate to target every node within a group or even the entire inventory. This is where the ‑‑limit flag will allow us to target specific hosts within our inventory. The limit is a host pattern that further limits the hosts for the play. Given our playbook targeting all hosts, we could then supply a ‑‑limit argument and call out a singular host, or even a host pattern for this to execute upon. In an example given here, where we have the hosts argument targeting web servers, we can then supply a limit argument to specifically only target those that match with datacenter2. In this situation, datacenter2 is an additional group in our inventory. 
So, while targeting all the webservers, it will then only execute upon those that appear in both the webservers and the datacenter2 group. The ansible-playbook command also provides us a helpful syntax-check argument. We can call ansible-playbook, passing in the --syntax-check argument, and then simply name the YAML file you wish the syntax check to be performed upon. If any errors are found, Ansible will do its best to denote where in the file that error exists. In this example, you can see an error being called out with improper YAML. We would then be able to open the webserver.yml file and make corrections as appropriate. It can often be advantageous, before performing the actual execution of a playbook, to do a test or dry run of its work. The ansible-playbook command provides the -C flag to do just that. You can see an example here of ansible-playbook using the -C flag on the webserver.yml playbook file. The resulting output simulates what would occur if you removed that flag, but does not actually perform the work. Once the work has been validated and you approve it to carry on, simply remove the flag and run this again. Let's take a look at a few examples of this in our command line. Now that we've learned a bit about playbooks, let's author our first simple playbook. We'll take a look at creating a user, and we'll look at a few ways to execute that playbook once it's authored. Currently located in my /home/demo/ansible directory, you can see our previous work, including our custom ansible.cfg, our host_vars directory, and the inventory we authored. I'll create our first playbook with an editor. I'm going to use vim; however, you can use any editor you prefer. I'll call this one example.yml. The .yml extension is the standard for YAML files, and we'll take a look at how to author our first playbook using YAML. The initial line of a new playbook will always be three dashes to denote the start of the file.
Next, we'll want to declare a few parameters for this playbook. The first parameter we'll declare is the name of our playbook. We use a dash, a space, and the key name. Naming playbooks is a common convention, and you'll need to determine what's right for your organization. I'll name mine New user is created. To follow the indentation rules, I'll use two spaces to create the keys that follow, in line with the indentation of our previous key, name. I'll target our hosts, webservers. From here, we'll just need to list out the tasks we wish to use in this playbook. Tasks call an individual module and provide the arguments that module takes as parameters. Since we're going to create a user, we'll use the user module. Since this is a new child of the key tasks, we'll indent two spaces, use a dash, and name this individual task. Underneath there, we need to name our module. Beneath the module name, we'll supply the arguments. As these are children of the user key, we'll use another two-space indentation, as is customary in YAML syntax. The name of the user we'll create will be test. The state we want for this user is present. Since user creation may require escalated privileges, we can go to the top of our play and set become to true. While the defaults in our ansible.cfg allow this, we always want to be as explicit as possible in playbook creation to make sure playbooks will execute despite any alterations we may make to our Ansible config file over time. Once your playbook is authored, you can save the file. We'll use the ansible-playbook command to run any authored playbooks, in this case our example.yml playbook. Since we're not providing arguments, it will rely on the default values that we have in place with our custom ansible.cfg file, namely the inventory we previously authored. We hit Enter, and we can see the execution of this playbook. Great. We can see the playbook summary.
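For reference, the example.yml built in this walkthrough would look roughly like this, reconstructed from the narration:

```yaml
---
- name: New user is created
  hosts: webservers
  become: true

  tasks:
    - name: User gets created
      user:
        name: test
        state: present
```

With become set explicitly, the play does not depend on whatever default happens to be in ansible.cfg at run time.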
This is a very helpful output that Ansible provides for us. Note the color coding of the statuses: from our Gathering Facts in green, meaning no changes were made and it was in a good state, to yellow in our User gets created task, where the two users were created, one on each system, for the test user. This is color coded yellow to denote that work did occur, and our Play Recap summary shows the breakdown of all the tasks that were executed through playbook execution and the status that each encountered. If we were to rerun this play, we could see that, due to Ansible's idempotency, it wouldn't add these users again. It would simply validate that they exist in the state we declared, present. Let's update our playbook. Let's remove these users by setting the state here to absent. Save your work. Now, here I'll do these hosts individually so that we can show the limit feature of targeting inventory. I'll once again use the ansible-playbook command, but I'll limit the field of our inventory to the web01 system. I'll then declare the playbook we wish to execute and hit Enter. Now you can see we're targeting just the singular node. If we wanted to do the same for web02, we could replace that in the arguments. This allows you a unique way to target exactly the members of your inventory you wish to perform work on. And just to show idempotency, rerunning that last command, we can see that web02 wouldn't require any changes to make sure that the test user is absent. That concludes our section. I look forward to seeing you in the next video.
In this section, we'll explore using variables in plays. There are several key places where we can insert variables throughout our plays. We'll take a look at the basic rules of variable precedence, as well as author and run a playbook that takes advantage of the technique. Ansible variables can be provided in a number of places throughout your workload. These variables allow a powerful way to reuse values throughout the bits of Ansible automation you'll author. This allows for a simplified approach to the creation and maintenance of a project and also reduces the number of errors we might make when handling these values manually. Variables are a helpful way to manage dynamic values, or values that may change with different executions of the plays. Variables could contain things like different usernames to create, modify, or delete; different bits of software we wish to manage; various services that may need to be started, stopped, or restarted; a list of files that could be created, modified, or removed; or archives that we want retrieved from the Internet. When naming our variables, there are a few rules. Variable names always start with a letter. Additionally, they can only contain letters, numbers, and underscores. Periods and dashes are not allowed in variable names. Have a look at the table below. Some invalid variable names are provided on the left, with suggestions for valid variable name substitutions you could utilize. For example, the space in the name web server will not work in a variable name, but substituting an underscore for that space, as in web_server, makes a perfectly valid name. Once you've begun to create variables, it's important to understand the scope, or the available reach, of each variable you've created. The concepts of global, host, and play-based scopes exist. A global variable is one that is set for every host. An example of this would be extra variables we create within a job template.
Host-based values are set for a particular host or host group. These would include variables we set in the inventory or in our host_vars directory, as explored in a previous module. Play-scoped variables are available for all hosts in the context of a currently executing play. These include variables set in the vars section at the top of a play or loaded with include_vars tasks. When variables are defined in multiple places, precedence also has to be considered. If a variable is defined at multiple levels, the level with the highest precedence wins. A narrower-scoped variable, in general, will take precedence over a wider-scoped variable. Considering the types we discussed in our previous slide, this means that a play-scoped variable would override a host-scoped variable set in inventory. Variables defined within a playbook are, in turn, overridden by extra variables defined on the command line during execution. To override in this manner, simply provide the -e option and the substituted value for any variables you wish to override when calling ansible-playbook. You can have a look at some helpful documentation online at the docs.ansible website for further understanding of variable precedence. Within playbooks, variables can be defined in several ways. A common method is to place a vars block at the beginning of the play and then list the variables you wish to define. You can see an example of vars being defined this way in this block, where user_name and user_state are defined in a vars block at the top of a play. You could additionally define these variables in an external file. If you do so, we use the vars_files keyword at the top of a play to load variables in from a file located elsewhere. You can see an example here where the vars_files block is created, and the relative path to the vars directory and a users.yml file has been provided.
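A sketch of both styles, using the variable names from the slide (the file path and tasks are illustrative):

```yaml
---
# Variables defined inline in a vars block:
- name: Create a user from inline vars
  hosts: all
  vars:
    user_name: joe
    user_state: present
  tasks:
    - name: Ensure the user matches the desired state
      user:
        name: "{{ user_name }}"    # quotes are required when a value starts with {{
        state: "{{ user_state }}"

# The same variables loaded from an external file instead:
- name: Create a user from vars_files
  hosts: all
  vars_files:
    - vars/users.yml   # this file would define user_name and user_state
  tasks:
    - name: Ensure the user matches the desired state
      user:
        name: "{{ user_name }}"
        state: "{{ user_state }}"
```

Either way, the variables are play-scoped: available to every task for every host in that play.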
Once defined, variables can then be used within the tasks contained in a playbook. When we're ready to reference a variable within a play execution, we'll substitute its value by using double braces. The double braces contain the name of the variable we wish to substitute in. You can see an example here where we have a variable defined in the vars block at the top of the play as user_name. The value this is set to is joe. Within our task, we're creating the user joe by using variable interpolation. You can see the double-brace nomenclature utilized to substitute the value joe for the variable user_name. We're doing so in two places: both in the name of the task, as well as in the name argument provided for the user module. When referencing one variable as another variable's value, the double brace will start the value. When it does, you also need quotes around this value. This prevents Ansible from interpreting the variable reference as starting a YAML dictionary. You can see an example of the error that results when you omit these quotation marks around the double braces. Ansible provides the helpful hint that the with_items without the quotation marks should be written as with_items including the quotation marks around the double braces. When you encounter this error, update your YAML file to make this change and rerun your playbook. Two concepts that can be quite helpful in Ansible are host-based variables and group-based variables. As the names denote, host variables apply to a specific host, while group variables apply to all hosts in a host group or group of groups. Host variables will take precedence over any group variables supplied on a host, but variables defined inside a play will then override either of these.
You can define both host and group variables in the inventory itself, or in subdirectories that contain YAML files matching the names of the hosts (in a host_vars subdirectory) or groups (in a group_vars subdirectory). These YAML files will then contain the list of variables you wish applied at those scopes. Variables defined in the host_vars and group_vars directories have a higher precedence than those defined as inventory variables. To utilize this technique, you'll need to create directories at the same level as your Ansible playbook. Creating the two directories, group_vars and host_vars, will give you areas to provide YAML files defining variables with this technique. If we had a group defined in inventory named servers, we could then create a group_vars subdirectory that contains the YAML file servers. Any variables we define in the servers file will then be supplied as variables on all hosts in the servers group. In the example to the right, we see proper YAML syntax for setting variables in this fashion. The ansible_user variable is set to the string devops, while the newfiles variable is a list of two different values. If you wish to create variables for a specific host with this technique, create a host_vars directory and contain those variables in a YAML file that matches the host's name. Here's a look at a proper file hierarchy that has examples of this technique. You can see we have the group_vars and host_vars subdirectories at the same tier our playbook is contained. Underneath the group_vars subdirectory, we have files for all, datacenters, datacenters1, and datacenters2. These represent groups we've defined in inventory, and each of these files will contain a list of variables that are applied specifically to the members of those groups. In the host_vars subdirectory, we have four different files that correspond to each of our hosts, demo1 through demo4.
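The layout and the group file just described might look like this sketch. The host and group names come from the narration; the newfiles entries are invented for illustration, since the slide's actual values aren't narrated:

```yaml
# Project layout:
#   playbook.yml
#   group_vars/
#     all   datacenters   datacenters1   datacenters2
#   host_vars/
#     demo1.example.com   demo2.example.com   demo3.example.com   demo4.example.com
#
# group_vars/servers — applied to every host in the "servers" group:
ansible_user: devops
newfiles:          # a list of two values, as on the slide
  - /tmp/a.conf    # illustrative entries
  - /tmp/b.conf
```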
The files contained here will have a list of variables that explicitly apply to those individual hosts. Let's take a look at defining variables within a playbook. In this example, we're doing exactly that. We have a vars block that has a variable named packages. The packages variable then contains a list of packages we'll use a task to install. The packages syntax uses proper indentation of two spaces before listing out each of the members of the packages list. Once we've defined this list, we can then call a task using a package installer module, such as yum. In the yum module's task, the name argument can then use the variable, with both double quotation marks and double braces, to call packages. The packages variable contains a list of five individual packages that would then be looped through to install all five of these pieces of software. Here we can see a more elaborate structure for a variable named users. This users variable is an array of values. It is possible to return one value from the array within this variable. When we wish to do so, we'll use bracketed syntax, as well as single quotation marks around each of the elements. For example, we could reference the users variable, then the aditya user, and then their first name with the syntax users, opening the bracket, opening the quotation mark, and naming the username aditya, and then following that with an open bracket and open quotation mark for the fname. In a similar fashion, we can get Carlotta's home directory reference with users, open bracket, open quotation, carlotta, close both of those, and then open bracket, open quotation, home. Ansible provides something called the register statement. The register statement allows us to capture the output of a task and store it in a variable during execution. The output is saved into a temporary variable that can be used throughout the rest of the playbook for either debugging or utilization by another task.
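A sketch combining the bracketed dictionary lookups and the register statement described above (the user details, home paths, and the registered command are illustrative assumptions):

```yaml
---
- name: Dictionary lookups and register
  hosts: all
  vars:
    users:
      aditya:
        username: aditya
        fname: Aditya
        home: /home/aditya     # illustrative path
      carlotta:
        username: carlotta
        fname: Carlotta
        home: /home/carlotta   # illustrative path
  tasks:
    - name: Show Aditya's first name
      debug:
        msg: "{{ users['aditya']['fname'] }}"     # brackets and single quotes select one value

    - name: Show Carlotta's home directory
      debug:
        msg: "{{ users['carlotta']['home'] }}"

    - name: Capture a task's output with register
      command: whoami                             # illustrative command
      register: whoami_result                     # held in memory for this run only

    - name: Reuse the registered variable
      debug:
        msg: "The command ran as {{ whoami_result.stdout }}"
```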
This is a common technique that allows us to take advantage of the return values from each module, store them in a variable, and reuse them throughout the rest of our workloads. These registered variables are only stored in memory and are destroyed once playbook execution completes. Let's open our terminals and take a look at these techniques. From our previous play, let's have a look at what it looks like currently. We can evolve the value that we're supplying for the username, test, into a variable at the top of the play. Let's go ahead and evolve this playbook. Alright, up here in the heading keys, we can add a new key for vars. Its children will need a two-space indent beneath it. Let's create a variable named username. We'll set this username to test. We'll then utilize it down below in the play by doing variable substitution. Since we're substituting a variable, we'll need to first enclose it in quotation marks, then the double braces, and include the variable name in the middle here, username. Let's give this a try after saving our work to make sure that this works. We can take a look at the evolved playbook to see that we've now got our variable section, as well as a usage of that variable. We'll use our ansible-playbook command and call the example playbook. Excellent. In this case, the playbook is verifying that the user is absent. Let's go back in and make sure we can add the user (Working) and rerun the same command. Excellent. This shows that we've been able to evolve a hardcoded value into a variable that we can now override. Let's supply a command-line variable substitution. We'll say ansible-playbook and, providing extra variables, we can supply username and set it equal to a second username. We'll use student, and then we'll call our playbook. Great. We can see changes were made. Let's log in to web01 and have a look to make sure that the new user has been added.
To do that, we'll just take a look at the last few entries of the /etc/passwd file. Excellent. We can see that both our test user and student user have been added. Having a look at our playbook as it currently stands, we've got our basic variable in place. Oftentimes, we may want the variable to contain a list of values. Usernames are a really good example, where we may care about more than just the simple string of the username. For example, we may have additional information we want to supply. Let's evolve this playbook a bit further. We'll transform the username variable into more of a dictionary of values. Here, I'll remove test for now and then begin building child values underneath username. Each of these child keys will be indented two spaces further. And as we provide more values for the test user, we can then further indent two additional spaces. (Working) Now that we've evolved our variable, we'll need to update the interpolation below. The notation we'll use here will involve brackets and single quotation marks to iterate through the fields. (Working) Let's also take advantage of the new value that we've supplied. The user module also provides a key, comment, to allow for additional commentary within the /etc/passwd file. I'll take advantage of that here. (Working) Once you've made your changes, save your file, and we can re-execute this using the same command as before. (Working) Great. Now that that's completed, let's log into web01 and see the changes. (Working) We can see that the comment has now been supplied in the comment field for the test user. This shows that we were able to take multiple values out of the dictionary variable approach that we've now structured. We can log out of our web01 system at this point. We've previously taken a look at the host_vars concept, but let's also evolve our playbook to take advantage of that technique.
I'm going to go ahead and create the group_vars directory alongside the host_vars directory. (Working) This is what we currently have. To clean up our work for further exercises, I'm going to remove the db01 host variables we previously set by simply deleting that file. After that cleanup, we have the current structure in place. Since we've been performing the work on the webservers group, we can create a file in the group_vars directory for the webservers group. We can migrate the variables we just created into that file and take advantage of them from the playbook in the exact same fashion. Let's perform that work now. (Working) We want to take these values here and supply them within our webservers file in the group_vars directory. Let me first create the file. Then I'll come back to this playbook and remove the values. (Working) Let's create the file group_vars/webservers. In this file, we'll simply paste our variable information. It can be helpful to provide a comment at the beginning of each file so we understand the file's intention. (Working) Let's save our work. Now we'll need to go into our example.yml and remove those values. Since we have no additional variables currently, we could leave the vars key in place and blank, but as we're no longer providing variables here, I'll go ahead and remove it as well. It's best to keep your playbooks as clean as possible. Let's save this new playbook. We should be able to run the same ansible-playbook command we had run previously and see that it completes while referencing those variables. While it performed no work, it also gave no errors, meaning it was able to find and refer to the variables used in our tasks in this fashion. If we wanted to see actual changes to ensure that this technique is working, let's make an update to the example.yml to change the state from present to absent. (Working) Here we'll replace present with absent, which should perform the removal of these users. (Working) Great.
We see that it worked. Once again, let's edit and make sure we can add these users back. (Working) Great. We'll verify this, and this time we can verify on a different system. Let's log into web02. We'll tail the /etc/passwd file, and we can see that even with our new approach, we're still able to add that test user. This concludes our section. I look forward to seeing you in the next video.
In this section, we'll take a look at protecting sensitive data using Ansible. We'll have a look at encrypting files that contain sensitive data using Ansible Vault. We'll execute playbooks that reference Ansible Vault-encrypted files, and then we'll make sure we know how to update Ansible Vault-encrypted files. Ansible provides a secure data store called Ansible Vault. Within our Ansible playbooks, it is not uncommon to need access to sensitive data, such as passwords, API keys, or other secrets. Oftentimes, we'll pass this information to Ansible through variables, but it is a security risk when these are stored in plaintext. Ansible Vault provides us a way to encrypt and decrypt that information when using it within a playbook, and Ansible provides the ansible-vault command and its subcommands to do so. Three main things we'll do with Ansible Vault are create, view, and edit files that we encrypt. When we want to create a new encrypted file, ansible-vault has a create subcommand. Passing in the name of any file, Ansible Vault will then create a file with that filename in an encrypted fashion. Using the subcommand view, we can then view the contents of that file. When we need to alter one of these encrypted files, we'll use the subcommand edit and pass in the same filename to make changes to that existing encrypted file. If the file already exists and we want to encrypt it, we can then use the encrypt subcommand. The create subcommand is used when we're authoring new files to encrypt, while the encrypt subcommand is used to encrypt already existing files. If we need to save the encrypted file using a new name, we have the --output argument where we'll supply this new filename. When we want to decrypt a file, we have the decrypt subcommand. This will remove Ansible Vault encryption for that file. Now that we have encrypted information, we'll want to use that within our playbooks.
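In command form, the subcommands described above look like this (filenames are illustrative):

```console
$ ansible-vault create secret.yml                      # author a new encrypted file
$ ansible-vault view secret.yml                        # display its decrypted contents
$ ansible-vault edit secret.yml                        # change an encrypted file in place
$ ansible-vault encrypt existing.yml                   # encrypt a file that already exists
$ ansible-vault encrypt existing.yml --output=new.yml  # encrypt, saving under a new name
$ ansible-vault decrypt secret.yml                     # remove the encryption
```

Each of these prompts for the vault password before performing its work.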
We can provide the vault password that we set when encrypting the file with the --vault-id option. You can see an example command here that does exactly that: ansible-playbook --vault-id, an argument that says @prompt, and then the vault-encrypted file. The @prompt option ensures that Ansible understands it needs to receive user input for the password. If you do not provide that password, Ansible will return an error. You may have different passwords for various files that are encrypted using Ansible Vault. When you need to supply multiple passwords, we'll have to understand the technique that allows us to do that. Using the --vault-id option, we can set labels on the encrypted files. We can then use this option as many times as necessary to label the various files we have encrypted and ask Ansible to prompt us for the different passwords when we need to supply them. Have a look at this last example. ansible-playbook calls --vault-id and supplies a vars@prompt argument. It calls it again, providing a playbook@prompt argument, before then calling the playbook site.yml. With this execution, Ansible will then prompt you for two different passwords, one for vars and one for playbook. Given that you provide the two appropriate passwords, the files will be decrypted when utilized by the playbook, and execution will proceed; otherwise Ansible will provide an error. Once we've created a password, we may need to change it on an encrypted file. To change the password of an encrypted file, we'll use the rekey subcommand of the ansible-vault command. You can use this subcommand on multiple data files at once, providing a helpful way to rekey a bunch of files to the same password. The rekey subcommand will prompt for the current password and then the new password you wish to set for these encrypted files. While we're discussing sensitive information, sometimes Ansible output can include sensitive values.
When this is the case, you may want to suppress the output from a given task that could reveal them. When we want to suppress that output, we can use the key no_log. By using this value, Ansible will suppress the output of the task so that sensitive information is not displayed. Have a look at these two examples. In the top example, we're debugging a variable called secret, and you can see in the output on the right that the secret's value is displayed. In the second example, on the bottom left, we've added the no_log keyword and set it to true. Note that when we debug this variable, the output on the right does not display the value of that variable. This can be very helpful when you have passwords, API keys, or various other secrets you don't want displayed in your task summaries. Let's open up our terminals and give this a try. When working with secrets, Ansible gives us the command ansible-vault. Here we see the commands, options, and subcommands available to us when working with encrypted or secret files. We'll take a look at some of these, like decrypt, create, encrypt, and so on. Let's take a look at a simple file I've authored that I'm calling secret. Here, I've created a single variable named secret, set to a value of our_secret_data. I want to be able to encrypt this file so that it can't be displayed or accessed by users without a password, and then use that data within Ansible workflows. First things first, let's encrypt that file. We'll use the ansible-vault subcommand encrypt and simply name the file. It'll prompt us for a password, which we'll enter and confirm. Encryption is now successful. When we try to display the contents of that file again, we'll notice we don't have access to the data itself, and we do have a handy note that Ansible Vault is in use here. Let's evolve our workloads in the existing playbook to include this concept. Let me open up my editor.
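The playbook addition examined next in the editor is roughly this, reconstructed from the narration (the encrypted variables file is named secret; the play header is assumed from the earlier examples):

```yaml
---
- name: Use vault-encrypted variables
  hosts: webservers
  tasks:
    - name: Load variables from the encrypted file
      include_vars: secret   # reads the vault-encrypted file of variables

    - name: Display the loaded variable
      debug:
        var: secret
```

Later in the demo, no_log: true is added to the task so the secret's value is not echoed in the run output.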
You can see what I've done here: I've now loaded in the variables from our encrypted file, whose filename is secret. Ansible has a module named include_vars that allows us to do exactly that. I have simply presupplied this information to make it a little quicker for us to take a look at this example. Next, I'm using the debug module, which allows us to display the contents of variables within our Ansible workspaces, and I'm naming the variable we created. Let's go ahead and run this. I'll run this with no additional options so we can see what happens when we try to access this vault-encrypted file without using the proper technique to decrypt it during usage. Let's see what happens. Uh oh, it's attempting to decrypt the file, but no vault secrets were found. Okay, let's see what happens when we now supply the proper flag that allows us to pass the password in. (Working) We'll see what happens this time. Oh, great! It was able to decrypt our data, and we can see the contents of our variable called secret. Excellent. Displaying the variable may not be wise, especially if it's sensitive data. But let's take a look at how we can use this in a more meaningful way, such as to supply a password for some of those users we are creating. Let's take a look at the group_vars/webservers set of variables we've created for the webservers group. Let's supply another value here for the password for this user, and we'll use the variable we've created. Excellent. Let's save our work. Now we'll adjust our playbook to supply the password for these users. (Working) Okay, now we've added that argument. So let's see if we are able to supply the password for our encrypted file and have this playbook create these users and set their password to that secret we've now encrypted. I'll run the playbook as we did previously, asking it to prompt us for our vault password. We do get a warning about the fact that we're passing the password in plaintext.
However, we know that that password is stored in an encrypted location. But from Ansible's perspective, it's decrypted at time of execution. We can ignore this for now because this is just a simple example. Additionally, we're displaying the contents of that secret data anyway. So let's use one more argument to make sure we're not exposing that secret data in the Ansible output. Let's add one more argument to our example playbook. (Working) The argument we'll add will go here: we're simply going to supply the no_log option and set it to true. Let's save our work and rerun it one final time. (Working) Excellent. Note that we're no longer displaying the variable in this section here. This is the most appropriate way to handle sensitive information and ensure that Ansible isn't revealing data that you do not wish displayed in the Ansible output. That's just a small example of the ways we can use Ansible Vault to handle our secrets and ensure that we're passing sensitive data responsibly. Now that we are familiar with working with these encrypted files in our Ansible playbooks, let's talk about a few other subcommands available with the ansible-vault command. Specifically, we know that we were able to encrypt with ansible-vault encrypt. But if we no longer wanted that file encrypted, we could use ansible-vault decrypt and name the file. It would ask for our password, and now we see that we can simply see our data. I'll re-encrypt this so that we can see other subcommands. (Working) Notice that I could've set a new password during this encryption, but we wouldn't want to have to decrypt our sensitive data just to supply a new password. That's why Ansible Vault allows the rekey subcommand. (Working) The rekey subcommand asks for the current password and then allows us to supply a new password and confirm it. If we needed to edit this file, Ansible Vault does supply an edit subcommand. It'll prompt for our password and then allow us to edit the file.
When we save the file, note that we still cannot see the data without the password. That's just a glimpse at the power of Ansible Vault. The intention of Ansible Vault is to give you an easy method to securely pass your data when using secrets in your Ansible workloads. These techniques can go far to make sure you're keeping your secrets safe. That concludes this section. I'll see you in the next video.
In this section, we'll look at task iteration with loops. We'll want to demonstrate basic looping functionality within Ansible in order to iterate over tasks. Using loops can save system administrators the need to write multiple tasks that do the same thing with various different arguments. If we needed to create several users, for example, we could use one single task with a loop to create five users instead of five individual tasks that each take a unique argument. Ansible provides this functionality using the keyword loop. Loops can then be configured to repeat a task using each item in a given list. The loop variable item holds the value during each iteration. Here's an example on the left of a technique that does not use loops to create three different users. We can see the user module called three different times, each supplying a different name. All three of these tasks look identical, with the exception that the name changes. Using what we learned before, we can define a variable called myusers. This myusers variable will then contain the three names that we were using in the left example. Once we've created this variable, we can then take advantage of the loop keyword. Using the loop keyword in the example on the right, we can call the myusers variable. We'll substitute it into the name field, and to do so, we'll use the keyword item. Once we execute either of these playbooks, we should get the same results. In the first example, it would be three separate tasks to create the users. Here we're displaying the output of the looped example. You can see we have a single task that creates all three users. This is much more efficient and allows you to author much cleaner and smaller playbooks when you're creating your Ansible workloads. Additionally, after Ansible completes, we could check our /etc/passwd file to ensure that all of these users were created.
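The looped version described above might be written like this (the three usernames come from the example; the play header is assumed):

```yaml
---
- name: Create several users with one task
  hosts: all
  become: true
  vars:
    myusers:
      - aditya
      - boris
      - carlotta
  tasks:
    - name: Users exist
      user:
        name: "{{ item }}"   # item holds the current list value on each iteration
        state: present
      loop: "{{ myusers }}"
```

Adding a fourth user now means adding one line to the myusers list rather than an entire new task.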
Here at the bottom of that file, we can see the three users have been created: Aditya, Boris, and Carlotta. More advanced looping techniques exist besides loop. We also have the concept of with_dict, which takes in a list, but each item in the list is actually a hash or dictionary instead of just a simple value. There's great Ansible documentation on this, but here we can see a simple example. In this example, we're defining the with_dict to have both name and groups for each element. You can see that we have two groups, Flintstones and Rubbles, that get created in our first task. In the second task, we take advantage of the with_dict to have various members of these two groups created. Note the nomenclature for item.name and item.groups that corresponds with the two fields created underneath with_dict. In the first element, we can see that the name is set to Fred and the groups field corresponds to Flintstones. In this way, item.name would translate to Fred, and item.groups would translate to Flintstones. While loops provide us tremendous power within Ansible, they're not always the most efficient way to accomplish a task. Depending on the module, you can consider whether it's more appropriate to use a loop or not. In the example here, we're taking advantage of the yum module to install several pieces of software. Using the loop structure on the left, we'll call the yum module three separate times to install individual packages. Due to the functionality provided within the yum module, we could have just passed those in as three different names for the argument within the module. The example on the right would call the yum module one single time, providing all three arguments for installation. In this case, the task on the right will be more efficient and faster. This is very contextual per module, so consider how each module may work; some testing and exploration may be necessary to determine whether or not loops are appropriate in your workload. 
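As a recap of the two patterns above, here is a hedged sketch; the module arguments follow the narration, and the package names in the yum examples are illustrative stand-ins.

```yaml
# One looped task replaces three near-identical user tasks:
- name: Create users
  user:
    name: "{{ item }}"
    state: present
  loop: "{{ myusers }}"   # myusers: [aditya, boris, carlotta]

# Loop version: the yum module runs once per package (slower)
- name: Install packages one at a time
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - httpd
    - mariadb-server
    - php

# List version: one yum transaction installs all three (faster)
- name: Install packages in a single transaction
  yum:
    name:
      - httpd
      - mariadb-server
      - php
    state: present
```

Whether the list form is available depends on the module, as noted above; yum accepts a list for its name argument, so the single-transaction form is preferred there.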
Let's hop in our terminal and give this a try. Let's try creating a simple loop in a playbook so that we can add a couple users. Up to this point, we've only been managing single users at a time, but the power of loops is to allow us to do multiple items in a given task. Let's have a look at a variable file I've just created. Note we have two files now. The databases and webservers group each have their own variables file. I've populated the databases file with a simple list of what I'm calling db_users. Namely, the usernames will be test, dev, and qa. We'll create a simple playbook using the user module that adds these three users to our database systems. We'll do so using a loop. I'm going to create a file I'm calling loop.yml. I'll start with the typical kind of headings, only this time, instead of webservers, I'm going to target databases, the name of our group that includes our DB systems. And since it's multiple users instead of a single user, I'll update that line as well. Let's fill out the arguments for the user module that we're going to use to iterate through our loop of database users. We'll supply the name field, and in this case, since we're going to use a loop, loops require the keyword item, and we represent that in the typical variable fashion included in both quotation marks, as well as double braces. We'll still declare a state, and in this case, we'll set the state to present. Now we need to include our loop field. We called our variable db_users, so that's the variable that we'll want to supply here to the loop argument. Now, as the loop argument also takes a variable, we'll need to come back and add the proper syntax of the double braces and quotation marks in order to have this function properly. Now that we've created that simple playbook, let's go ahead and execute it using ansible‑playbook and call the name of the playbook we just created loop.yml. Excellent. 
We can see that the user creation task has added three users to two systems each, six entries altogether, a very powerful example of how loop can be used to iterate through our variable lists. You could see how this could be very valuable for supplying larger, more complex environmental structures, and enabling you to use the loop feature certainly trims down on the number of tasks you're required to author in your Ansible workloads. That concludes this section. I'll see you in the next video.
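For reference, the loop.yml assembled in this walkthrough might look like the following sketch; the db_users list is expected to come from the group_vars file for the databases group, as described above.

```yaml
---
# loop.yml -- create every user in db_users on the database systems
- name: Add database users
  hosts: databases
  become: true
  tasks:
    - name: Create multiple users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"
```

Run it with ansible-playbook loop.yml, as shown in the demonstration.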
In this section, we'll discuss running conditional tasks. Ansible implements both conditionals and handlers for us to be able to control if or when tasks execute. We'll take a look at that in this section. Ansible conditionals allow us to qualify whether to run or skip certain tasks. Both variables and facts are available to be tested using conditionals. Conditionals will leverage operators, such as greater than, less than, or equal, on various numerical data or boolean values, to qualify whether or not tasks should execute. A few good use cases exist for when you may want to qualify a task's execution using a conditional. Perhaps you would only want to do certain tasks if there is system memory available. You could use an Ansible fact on available memory to qualify whether a task should execute. You could create users on a managed host depending on which domain it may belong to. Certain tasks may need to be skipped if a variable is or isn't set to a certain value. And using the register technique we learned in previous sections, you could store the data gathered throughout task execution in variables and leverage it to determine whether or not to run further tasks. Conditionals take advantage of a when statement to qualify if or when they should run. If the condition is met, then a task will execute. However, if the condition is not met, the task is skipped. Let's take a look at this example. In this example, we have a variable we've created called run_my_task and initially set its value to true. In the task area, we have the installation of the httpd package. This package will only be installed when run_my_task is set to true. In this case, since we've initially set that value to true, the httpd package would be installed. Let's have a look at a slightly more sophisticated example. In this example, we're going to test whether or not the my_service variable has a value. 
If it does, then the value of my_service is used as the name of the package to install. If the variable is not defined, then the task is skipped without an error. You can see in our vars block, we've set my_service to httpd. And then in the yum task, we install the my_service variable's value. The when conditional qualifies the task to only do so when the my_service variable is defined. In this case, we would install the httpd package, since it is. Here's a table that shows some examples of various conditionals we can take advantage of. When we wish to test if a variable is equal to a string, we'll quote that string and use the double equal sign. If we wish to test for a numeric value, we don't need the quotation marks. Less than and greater than examples exist, as well as less than or equal to examples. The exclamation point provides us the ability to say not equal to. And if we simply want to test for a variable's existence, we can use the keyword is defined. The converse of that is if we want to test to make sure a variable does not exist. In this case, we'll use is not defined. When we want to test for boolean true values, or one, or yes, we can simply name the variable. If we wish to test whether the boolean value is false, or zero, or no, we'll say not variable name. We can also start to create more complex associations, such as whether the first variable's value is present within a second variable's list. You can see an example of that at the bottom of this table: ansible_distribution in supported_distros. In this case, we have a gathered Ansible fact that stores the value of the Ansible distribution. We also could define our own supported_distros variable. If the gathered Ansible fact for the distribution is listed within our defined supported_distros, then this task will execute. Now that I've mentioned Ansible facts, let's take a look at how to use some of those in constructing your conditionals. 
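Before moving on to facts, the two conditional examples above might be sketched like this; the values follow the narration, and task-level vars blocks are used to keep each sketch self-contained.

```yaml
# The task runs only while run_my_task is true:
- name: Install httpd only when run_my_task is true
  vars:
    run_my_task: true
  yum:
    name: httpd
    state: present
  when: run_my_task

# The variable's value names the package; undefined means skip:
- name: Install the package named by my_service, if defined
  vars:
    my_service: httpd
  yum:
    name: "{{ my_service }}"
    state: present
  when: my_service is defined
```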
The distribution Ansible fact is gathered and set when the play runs. This fact identifies the operating system of the current host. In our play to the right, we also define a supported_os variable. We list two operating systems that we consider supported OSes. Using the conditional at the bottom of this play, we'll create a when statement that consults the Ansible fact for distribution and determines whether or not it is in the list of supported operating systems we've created. We can see the syntax for that uses ansible_facts and opens a bracket and quotes the fact we wish to test, in this case distribution. Then we use the keyword in and consult the variable we created at the top of this play, supported_os. If the gathered Ansible fact for distribution is either RedHat or Fedora, then this task will execute. Building on this concept, we can test for multiple conditions. We can use a single when statement to evaluate several conditions. We'll combine those conditions using either the and or the or keywords. If we have multiple of these statements, we'll group them with parentheses. Let's have a look at a few examples. In this first example, we're testing for the ansible_distribution to be equal to RedHat or the ansible_distribution to be equal to Fedora. If either of these is true, the task will execute. In the next example, we're using ansible_distribution_version, testing it to match 7.5, and the ansible_kernel set to a specific value of 3.10.0. Both of these will have to evaluate as true for this task to execute. We can utilize lists to describe a list of conditionals as well. When we use this technique, the conditions are combined as an and operation. In other words, both of these conditions must be met for the task to execute. You can see an example of that here, where we've adapted the previous slide's example into this list. If both the distribution version and the kernel version in the Ansible facts match these values, then the task will execute. 
If either do not, then the task will be skipped without error. As we evolve that to more complex conditional statements, we can group conditions with parentheses. This will allow us to ensure that Ansible correctly interprets the expressions we're authoring. Here you can see a complex example of a when statement where we're testing for the system to be RedHat at version 7 or Fedora at version 28. If either of those is true, then it will execute. But it must be RedHat version 7 or Fedora version 28. The concepts of loops and conditionals can certainly be combined. In this example, the mariadb-server package is installed by the yum module. It's only installed if there's a file system mounted on / with more than 300MB free. We're using ansible_mounts as a conditional test. This fact is a list of dictionaries, each representing facts about one mounted file system. The loop will iterate over the list, and the conditional statement is not met unless there is an actual mount found to have 300MB free or greater. Both of these conditions must be true. If both of these conditions are met, then the yum task will install mariadb-server. Here's another example where a playbook will restart the httpd web server only if the Postfix server is running. In the first task, we're going to find out whether Postfix is running or not. We'll run a command using the command module and then register its output into a variable we're calling result. In the next task, we're then going to consult that output using a when statement on result.rc and verify that it is 0. If that is true, then we will restart the httpd service using the service module. Let's hop in our terminals and try this out. Let's consider a scenario where we want to add users to both our database systems and our web server systems. We can write a playbook with two simple tasks to add these users. 
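The slide examples just described might be sketched as follows. Note these are hedged reconstructions: the narration does not show the exact command used to check Postfix, so the systemctl call below is an assumption, and ignore_errors is added so a nonzero return code doesn't halt the play before the when test.

```yaml
# Fact-based conditional: run only on supported distributions
- name: Install httpd on supported systems only
  vars:
    supported_os: [RedHat, Fedora]
  yum:
    name: httpd
    state: present
  when: ansible_facts['distribution'] in supported_os

# A when list combines its conditions with "and"
- name: Run only on a specific version and kernel
  debug:
    msg: Version and kernel both matched.
  when:
    - ansible_distribution_version == "7.5"
    - ansible_kernel == "3.10.0"

# Register a command's result, then test its return code
- name: Check whether postfix is running
  command: systemctl is-active postfix   # assumed check command
  register: result
  ignore_errors: true                    # don't fail the play on rc != 0

- name: Restart httpd only if postfix is running
  service:
    name: httpd
    state: restarted
  when: result.rc == 0
```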
With conditionals, we now have the ability to target specific systems based on criteria and ensure that the proper users are added to only the systems they belong on. In this example, I'll add web users to our web systems and DB users to our database systems. Let's hop into a simple playbook that I've gone ahead and created. I'm calling it conditional.yml. Let me show you what I've set up before we go ahead and add our conditionals. I've added a new vars area to the top of our playbook and created a variable called web_users. This is a list of three different web users: member, admin, and developer. This first task should look familiar, as this is the create database users task that we've seen previously. We're using the user module, and we're using a loop of item, a state of present, and our db_users variable that's contained in our group_vars. Here we'll need to fill out the same for our web users. In this case, we're going to use the same technique of a loop. So let's go ahead and fill that out. The state is also going to be present, and our loop in this case is going to be that variable we defined at the top, web_users. Now as this currently stands, we're targeting all hosts, which means both of these tasks will run on all systems. We're going to want to use a when conditional on each of these tasks to make sure that these users are added to only the systems that we expect; namely, the first task should only target systems in the databases group, and the second one should target only web servers. Let's set up a when conditional for each of these that does exactly that. We can put when statements just here at the bottom. I'll add both of the keywords first. Now let's consider some things available to us that we can use in conditionals. In this case, I want to verify whether a system belongs to a specific group. There's a helpful syntax that allows us to do that. I'll give you that example now. 
Here we can say databases in, and we have a key here that is available to us through Ansible known as a magic variable. The magic variable we'll use here is group_names. group_names is a list of all the inventory groups the current host belongs to, and Ansible makes it available during the gather facts phase of all playbook executions. We'll take advantage of this to target the databases group within that list. Let's do the same thing for the webservers section. Here we'll say webservers in group_names, targeting a different section of our inventory. Let's save our work and execute the playbook. We'll say ansible-playbook conditional.yml. Great. We can see the blue output denoting that we're skipping certain systems for these tasks; namely, when we're creating the database users, we're skipping both web01 and web02. When we're creating the web server users, note that we're skipping those tasks on the database systems. You can see the summary at the bottom shows that we have performed the proper changes on the appropriate systems and have added those users. As with all Ansible playbooks, we can always test for idempotency in our work by rerunning the playbook command. Aside from our skipping, the full green output shows that idempotency was intact and that the work was not duplicated. As these users already existed, we simply see statuses of ok for all of our tasks. Using Ansible facts throughout your conditionals, as well as the whole catalog of abilities you have to condition task execution, really makes it easy for you to target the proper systems for various aspects of your Ansible playbooks and workloads. That concludes this section. I'll see you in the next video.
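For reference, the conditional.yml assembled in this walkthrough might look like the following sketch; as before, db_users is expected to come from group_vars for the databases group.

```yaml
---
# conditional.yml -- add each user list only on the matching group
- name: Add users to the proper systems
  hosts: all
  become: true
  vars:
    web_users:
      - member
      - admin
      - developer
  tasks:
    - name: Create database users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"
      when: "'databases' in group_names"

    - name: Create web server users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ web_users }}"
      when: "'webservers' in group_names"
```

Note the quoting on the when lines: the group name is a string literal being tested for membership in the group_names list, so it needs its own quotes inside the YAML value.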
In this section, we'll talk about triggering tasks using handlers. We'll learn to author handlers that run tasks when another task makes changes on a managed host. Ansible handlers can be created within our workloads. They'll take advantage of all the same modules we use throughout all of our other Ansible workloads. These Ansible modules are designed to be idempotent. As they are idempotent, Ansible only tries to do work when it's absolutely necessary, not just because it came up in a task. To do so, it will always validate the state of a machine before performing any actions and only perform those actions when it's necessary to remediate to the desired state. An example of where we consider the concept of a handler is when we may want to run the same kind of module task at various points within our playbooks, but performing the same action. A good example of this is when we want to reboot a server after a certain set of actions. We may have three or four tasks within a playbook that require a restart when successfully executing, but we wouldn't want to restart the server after each one of those. We can author a handler to restart the server and call it for each of these tasks. Ansible will then keep track of the request to restart the server and perform that only once using the authored handler. As stated before, handlers are simply tasks just like we've seen before. But these are defined in a way that they respond to a notification triggered by another task. A task will only notify the handler when the task makes a change on something on your managed host. A handler has a globally unique name for your workloads, and it's triggered at the end of a block of tasks in a playbook. If there isn't a task that notifies the handler, then the handler will not run. If multiple tasks notify the same handler, the handler will only run once at the end of those tasks' execution. 
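A minimal sketch of this notify/handler relationship follows; the file paths are illustrative stand-ins.

```yaml
tasks:
  - name: Deploy the Apache configuration
    template:
      src: httpd.conf.j2
      dest: /etc/httpd/conf/httpd.conf
    notify:
      - restart apache   # must match the handler name exactly

handlers:
  - name: restart apache
    service:
      name: httpd
      state: restarted
```

The handler only fires if the template task reports changed; an unchanged run leaves the service untouched.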
Since they're simply tasks like any others, you have access to the full library of Ansible modules that you've seen so far. Typical things like reboots and service restarts are commonplace usages for task handlers. You can consider a handler an inactive task that will only execute when triggered and explicitly invoked using a notify statement in another task. Here's an example of a defined handler. We can see that we're using a template module to create some work. And at the bottom of that task, we use the keyword notify. The notify keyword then supplies the argument restart apache. This argument must directly match the name of an authored handler somewhere within our workload. As you get started with handlers, it's customary to author them at the bottom of your YAML files. In this case, we have done exactly that. Our handler is defined with the matching name of restart apache. This handler takes advantage of a task using the service module. This task will then restart the httpd service. In this example, if the template task performs any work, it will then notify the handler restart apache. Once notified, the handler will restart Apache at the end of task completion. It is possible for a task to call multiple handlers upon execution. In the example here, we're notifying two separate handlers. Those handlers are both defined at the bottom of the file and have directly matching names of restart mysql and restart apache. We can see that both of them use the service module to restart their proper services. If the task at the top performs any work, it will then notify both of these handlers. These handlers will run their service restarts at the end of playbook execution. A list of handlers called throughout playbook execution will not necessarily run in the order of their calls. They do not run in the order in which they're listed in the notify statements of a task or in the order in which tasks notify them. 
They're executed in the order in which they're defined within your playbook structures. Handlers typically run after all other tasks in a play complete. A handler called by a task in the tasks part of a playbook will not run until all of those tasks have been processed. The names of handlers exist in a per-play namespace. If two or more handlers are incorrectly given the same name, only one will run, the one first defined. If multiple tasks notify the same handler, the handler only runs once. That's really the purpose of handlers here. If no tasks notify a handler, then it will not run. Again, this is really at the heart of the purpose of handlers. Tasks that include a notify statement do not actually notify their handler unless they report a state of changed. In other words, if a task does not perform work, it will not notify its handler. If the task does not notify its handler, then the handler is not executed. Consider our playbook that added users for the DB and web server systems. Potentially, we may want to reboot those systems when users get added. If we add DB users, we could create a task that reboots the machines. And then if we add web server users, we'd create another task to reboot machines. Potentially, this could result in multiple reboots across all the systems. We can evolve our playbook a bit further to use a handler to accomplish this sort of task in a more graceful way. Let's take a look at my handler.yml file I've created. This is just an evolution of the file that we were already using up to this point. If we wanted to reboot after database users are added, we could insert a task at this location to go ahead and call the reboot module. Then after web server users get added, we could add another task to reboot. However, handlers allow for a more graceful approach. To start authoring handlers, we'll put them at the same hierarchy as tasks. From there, they're authored in the exact same fashion as our task modules. 
I'll give this handler a name of Reboot system. With this module, if we wish to reboot a machine, no other arguments are required. Now that we have this handler authored, how do we invoke it in our tasks above? Well, that's where the notify keyword comes in. So let's scroll up, and we'll insert a line directly in line with loop, when, and user, and call it notify. Notify must then match the exact name of the handler defined, including capital letters. So the capital R here is important. We'll do the same for our web server users. We'll add a notify statement, and we'll call the exact same handler. And since these users exist, if we were to run the playbook in its current form, no work would be done, and therefore no handler would be called. So let me log in to one of these systems. Let's call it db02, and I'm going to use the command userdel. Let's just show real quick that in /etc/passwd we have those users, test, dev, and qa, that we expected to see. I'm going to remove the qa user. I'll use the userdel command, and I'll say qa. Since I'm not a privileged user, I need to invoke sudo to do that. If we tail the file again, we now see that the qa user is removed. Now at least that task in our playbook should require some work to get that qa user added. So I think we're in a good situation to test our handler. Alright, so let's take a look at our handler one more time, handler.yml. Great. Okay, so now let's go ahead and execute this playbook. Notice the changed status there for adding that qa user to db02. The handler is, in fact, being run. So now we'll wait for the machines to reboot. Great. Now we can see that the handler was invoked and rebooted the system db02. We can see that change reflected in the play recap as well. This is a great, more elegant way to have task execution conditioned upon changes on systems. That completes this section. I'll see you in the next video.
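For reference, the handler.yml evolved in this walkthrough might look like the following sketch; it builds on the conditional playbook from the previous section, with db_users coming from group_vars.

```yaml
---
# handler.yml -- reboot only when a user task actually changes something
- name: Add users and reboot on change
  hosts: all
  become: true
  vars:
    web_users: [member, admin, developer]
  tasks:
    - name: Create database users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"
      when: "'databases' in group_names"
      notify: Reboot system

    - name: Create web server users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ web_users }}"
      when: "'webservers' in group_names"
      notify: Reboot system

  handlers:
    - name: Reboot system   # name must match the notify value exactly
      reboot:
```

Even if both tasks report changed on the same host, the Reboot system handler runs only once at the end of the play.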
In this section, we'll explore recovering from errors with blocks. We'll learn to use blocks to group tasks together in a play so that we can recover from errors that may occur in the block. Blocks are available in Ansible as a logical grouping of tasks into a unit. They can then be used to control how tasks are executed. Blocks can have when conditionals applied to the entire block, for example. That would mean that all the tasks in the block only run when the conditional is met. Here's an example of using that technique. We give the task a name of installing and configuring Yum versionlock plugin. We start our block and list out two separate tasks, one for the yum module and one for the lineinfile module. We supply a conditional on this block that says that the ansible_distribution fact must equal RedHat. If we're working on a Red Hat system, then both of these tasks will execute. You can also utilize blocks with the keyword rescue. This will help you recover from task failure. A block can have a set of tasks grouped into a rescue statement that will execute only if the block fails. Normally, the tasks in the rescue statement will recover the host from some sort of failure that could have occurred during the block tasks. This is a really helpful technique when the block exists because multiple tasks are needed to accomplish some outcome. In addition to rescue, a block can also be paired with always. This third section will run no matter the outcome of the block and rescue. After the block runs, if there was a failure, the rescue tasks will then execute. No matter the outcome of block and rescue, the tasks in the always section will run. Now, the always section is only limited by the conditional that may have been set on the block. 
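The three-part structure described above might be sketched like this; the command strings and service name are illustrative stand-ins for the database upgrade scenario.

```yaml
tasks:
  - name: Upgrade the database
    block:
      - name: Attempt the upgrade
        shell: /usr/local/bin/upgrade-database   # illustrative command
    rescue:
      - name: Revert the failed upgrade
        shell: /usr/local/bin/revert-database    # illustrative command
    always:
      - name: Restart the database service
        service:
          name: mariadb                          # illustrative service
          state: restarted
```

If the block task succeeds, rescue is skipped; either way, the always task restarts the service. A when on the block would gate all three sections.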
So to summarize, block will define your main tasks to execute, rescue will be utilized when the block clauses fail, and then the always tasks will run independently of the success or failure of the tasks defined in block and rescue. Here's an example of all three of these put to use. We have a block statement defining a shell command. We have a rescue section that defines a different shell command. And then we have an always section that restarts the service. In the block section, you can tell we're trying to upgrade a database. If this were to fail, we're going to use the rescue task to revert that database. No matter which of these tasks was successful, the always section will enable the restart of the database. We could further supply a when conditional on the block clause, and that would be applied to both the rescue and always clauses. Perhaps we wish to match on an operating system, such as Red Hat. If we had a conditional that matches for the OS Red Hat, and we were working on a different system, none of these tasks would run. Let's open our terminal and try some of this out. Block, rescue, and always are three great ways that we can declare tasks that we wish to attempt, ways we can mitigate any failures, and then tasks we want to run no matter the outcome, to clean up or finish our workloads. Let's have a look at a sample playbook that I've authored to utilize this approach. I have called this playbook block_rescue_always.yml for obvious reasons. Here I'm going to use a block, rescue, and always approach to attempt to update a database system. That being said, we'll attempt to update this system package. And if it fails, we'll restart the database so we can ready that system to attempt that again or mitigate whatever issues may have occurred. And lastly, we'll reboot the system so we can put it back into production. Let's take a look at my work. With hosts, I'm going to target our databases. 
We'll utilize become true because, dealing with package updates and installation-type tasks, you'll certainly need those privileges. In my tasks section, I have the task update database. Opening the block section, I'll enumerate a few tasks that are going to attempt to update the database. You can see my first task is going to message the users that the database is being updated. Then we'll use the yum module with a state set to latest to update our PostgreSQL database server. The name here corresponds to the package name for that system. If all goes well, excellent. However, we have a rescue section in case it doesn't. In this case, the rescue section supplies an error message that the database will be restarted, the message itself being: Update failed. Restarting database to correct issues. Once we've messaged the user, we'll restart the database since the update had failed. We'll use the service module to manage our services and then declare the service we wish to manage. From here, I'll need to declare a state. And here, the state I want is restarted. Our always section will run no matter whether the upgrade was successful or not. Here, given either status, we'll still want a reboot of the system. We'll notify the user that the reboot and update process has completed, pointing to the previous output for a status of failure or completion, and then reboot the system. We'll call the reboot module, as we've seen previously. It requires no arguments to reboot a system immediately. Let's save this work and give it an execution. Now, something I haven't called out previously is the syntax check option for ansible-playbook. I'll run it here just to make sure that all of the work we've put into the block_rescue_always.yml file is appropriate YAML syntax. Since no issues arose, we know that we're in good YAML. I'll use the ansible-playbook command now to run the play. We can see the updating databases message. It's only working on our two DB servers, as we expected. 
The update PostgreSQL server to latest version task was able to make the changes necessary. The restart of that service did not occur, since the rescue section was skipped because the block was successful. And now we can see that we've notified the users of a reboot and that the reboot is taking place. I'll save us some time on waiting for that reboot. However, the system should reboot, and once returned to normal status, this task will complete. Block, rescue, and always give us great power to determine outcomes when tasks fail instead of simply halting the playbook. Here we have rescue-type output that would have been run had the block failed. Since our block was successful, we proceeded to always, which would have run under any circumstance. That concludes this section, as well as our module. I look forward to seeing you in the next video.
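For reference, the block_rescue_always.yml built in this walkthrough might look like the following sketch; the package and service names (postgresql-server, postgresql) are assumptions based on the narration, so adjust them for your system.

```yaml
---
# block_rescue_always.yml -- update, recover on failure, always reboot
- name: Update database systems
  hosts: databases
  become: true
  tasks:
    - name: Update database
      block:
        - name: Notify users of the update
          debug:
            msg: Updating the database...
        - name: Update the PostgreSQL server package
          yum:
            name: postgresql-server
            state: latest
      rescue:
        - name: Notify users of the failure
          debug:
            msg: Update failed. Restarting database to correct issues.
        - name: Restart the database service
          service:
            name: postgresql
            state: restarted
      always:
        - name: Notify users that the process has finished
          debug:
            msg: Update process complete. See previous output for status.
        - name: Reboot the system
          reboot:
```

As shown in the demonstration, you can validate the YAML first with ansible-playbook --syntax-check block_rescue_always.yml before running the play.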
Welcome back to our Ansible Fundamentals course. In this module, we'll look at using Jinja2 templates and filters. In this section, we'll explore deploying files with a Jinja2 template. We'll understand how the Jinja2 template engine is available to us within Ansible and allows us to deploy customized files on the hosts we manage with Ansible. In previous modules, we've had a look at the ways we can deploy files on our managed hosts. Ansible has a number of modules that enable this ability. We've had a look at the copy module, which allows us to copy a file from our source machine onto the managed hosts. We've seen that the file module allows us to manipulate the permissions and settings on those files. Additionally, the synchronize module allows us to take advantage of the rsync-type abilities within Linux systems. For existing files on our targeted hosts, the lineinfile module allows us to edit certain lines within an existing file on a targeted host. Let's consider a situation, however, where we need to deploy a customized file on each of our managed hosts. Each host may need specific values altered relative to that host. In this situation, a template could be very valuable. Ansible has the Jinja2 templating engine available for us to meet exactly this need. The Jinja2 templating engine allows us to template files and then deploy them using an Ansible playbook. Within Jinja2, we can substitute variables with values that are relative to the unique managed host. In this example, we're having a look at an sshd_config file and how we could template that to be used on each of our managed hosts. You can see that we have the port value written in the way that we've seen with variables previously, using the double braces for the variable we're calling here, ssh_port. Given that configuration file with the variable substitution we're requiring, we can now use the Ansible template module. 
The Ansible template module allows us to deploy a Jinja2-templated file. It is similar to the copy module in the number of arguments and the style that is used. In this example, we're taking a look at the template shown on the previous slide. Notice that the file ends with the extension .j2 when we're using a Jinja2 template file. We're also using the template module available from the Ansible library. We're showing the source as this template file, sshd_config.j2, using that standard file extension. We then declare the destination on the targeted host; here, that destination is /etc/ssh/sshd_config. You can see additional file parameters we're setting for this template, available through the various arguments. We've discussed a bit about Ansible facts as special variables available to us, which Ansible gathers during the setup phase of each playbook execution. At the start of each play, Ansible will gather these facts and make them available to us throughout our workloads. Additionally, you can collect facts at any time by running the setup module. Once Ansible has gathered these facts, they're available and stored in a special variable called ansible_facts. This variable is structured as a dictionary. Lots of information is included in our Ansible facts, such as network address information about each host, the host names, storage information, and operating system data, as well as many other aspects of the hardware and software available on the managed host. An example here shows how we can display all facts for a managed host, as well as a subset of facts, specifically for an IP version 4 address. To display variable information, we utilize the debug module. The argument the debug module takes is the name of the variable we wish to display; the key var will be used to choose the variable.
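The fact-display technique described here can be sketched as two debug tasks (a minimal illustration; the fact key names are the standard ones Ansible gathers):

```yaml
# Show everything Ansible gathered for the host, then just one subset
- name: Display all facts for a managed host
  debug:
    var: ansible_facts

- name: Display only the IPv4 addresses
  debug:
    var: ansible_facts['all_ipv4_addresses']
```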
Additionally, when we wish to explore subsets of the Ansible fact information, we'll use the bracket-and-quote notation to declare a specific value contained within ansible_facts. In this example, the first task displays all facts by simply debugging the variable named ansible_facts. This can be a helpful approach when you need to view all the facts about a given host to determine what is valuable for your Ansible workload. The second task lists all of the IPv4 addresses for a specific host. It does so using the debug module and specifying the all_ipv4_addresses key contained within the ansible_facts dictionary. Now that we understand that facts are available to us, let's look at how we could use those in a Jinja2 template. Here we're using a message-of-the-day template, or motd template file. As per standard nomenclature, we'll call this file motd.j2. The standard location for this file will be /etc/motd on our managed Linux hosts. The Ansible fact fqdn can then be utilized to substitute the fully qualified DNS name of the host into various configuration files, specifically the message-of-the-day file. You can see the example at right does exactly this. We're using the double-brace notation and including the Ansible fact variable we wish to take advantage of, specifically fqdn in this case. In the second box, you can see that we're taking advantage of the template module to use this new motd.j2 file and deploy it on our managed hosts into the location /etc/motd, as declared by the dest, or destination, value of the template argument. The example at the bottom shows this variable substitution as server1.example.com for that specific targeted host; each target host will substitute its fully qualified domain name in this fashion. When we wish to supply comments in a template, we have a special syntax to do so: we use a brace followed by a pound sign, or hash, and then include our comments.
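A minimal motd.j2 along these lines might look like the following (the comment syntax and the fqdn fact reference follow the slide; the greeting text is illustrative):

```jinja
{# templates/motd.j2 -- comment lines like this never reach the deployed file #}
Welcome to {{ ansible_facts['fqdn'] }}
```

Deploying it is then a single template task with src pointing at motd.j2 and dest set to /etc/motd.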
The comments contained in this way in our templates won't appear in the final file. Have a look at this example. The first line of this example includes a comment that will not be included when this file gets deployed. The variable references on the second line are then interpolated with the facts gathered by Ansible for the specific targeted host. Using the Jinja2 template engine, we have control structures available that we can take advantage of when we need more complex substitution. We have the for statement, providing us a way to loop over a particular set of items. In the example below, we're using groups['all'], a special variable that lists all the members contained in a group. Note the syntax using the brace and percent sign to declare these values. When we're done with the loop, we close the for statement using the endfor keyword, again contained in the brace-and-percent-sign syntax. The middle line in this example uses a specific set of variables we have available to us, in this case provided by hostvars. The result of this specific line is to generate something like an /etc/hosts-formatted file that contains the IP address that matches the fully qualified domain name of each host within your inventory. For all the hosts in the inventory, you should generate one line per host. That line would contain the IP address that matches the fully qualified domain name and thus fills out an /etc/hosts file. Jinja2 also makes available to us the use of conditionals. We'll again use the brace-and-percent-sign syntax for any expressions or logic we wish to take advantage of. These expressions are available to us within template files, but you shouldn't use these within your authored Ansible playbooks. In this example, we see the utilization of the if and endif keywords for evaluation.
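The two control structures discussed, the for loop and the conditional, use this brace-and-percent syntax (a bare-bones sketch; the finished variable is the example from the slide):

```jinja
{# Loop over every host in the inventory group "all" #}
{% for host in groups['all'] %}
{{ host }}
{% endfor %}

{# Render the line below only when finished evaluates to true #}
{% if finished %}
All done
{% endif %}
```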
The if statement will check the Boolean value of the finished variable as declared. If the finished variable is set to true, the body will be rendered. The endif statement then closes out that structure. Let's get into our terminal and try some of these techniques. When we're ready to create our first templates, first we'll organize them into a subdirectory. I'll make a subdirectory called templates. Switching into that subdirectory, I'll create our first templated file. In this case, I'll deploy a message-of-the-day file to some of our systems, so I'll create the template called motd.j2. .j2 is the standard extension for Jinja2 templates. Let's use some of the techniques we just discussed. Let's start our file with a simple comment just to explain the purpose. So here I'll use the comment syntax, which is the brace that contains a hash, or pound sign, inside. We can place any comment we like here; I'll just state the intention of this file. Now we proceed with templating out the file we wish to have in place. For a simple message of the day, I guess we could just state the host name of the system we're logging into, so let's create some text. Now that we have this simple text, we need to understand how to insert a variable here. We want to use a variable that supplies the host name for the system. I'll use the example that we had in our content, the fqdn fact, to supply this value. You can see the syntax is using the double braces, taking advantage of our list of Ansible facts, and then using the bracket-and-quote notation to supply fqdn. This is a good, simple template, so let's get started with this, and we can evolve it with a few techniques. Now that we have the motd Jinja2 template, we'll need to write an Ansible playbook that deploys it.
So switching out of this, I've already created a skeleton called template.yml. Let me take a look at the template.yml file. I have not yet filled out the task explicitly, so let's go ahead and do so now. The module that Ansible provides for us to supply Jinja2 templates is the template module. It takes a number of arguments, so let's go ahead and fill those out now. The first argument it takes is the name of the template we wish to deploy; in this case, I called it motd.j2. Next, we need to declare where on the target system we wish for this file to be placed. A proper motd goes in /etc/motd. Next, we can declare permissioning. The owner of the motd is root, the group is also root, and the mode, or permission setting, is 644. For proper nomenclature, we'll say 0644, contained in quotation marks. That should place our motd on these systems in the correct location with correct permissions and ownership. Let's give this a test. We'll call ansible-playbook and then name template.yml. This is targeting all four hosts, and we see that all four hosts were changed. Let's log into one of them to view the changes. We can see that we got a result not quite to our liking, so perhaps a different Ansible fact may be more appropriate for us. Let's consider what else is available to us. Since the fqdn fact didn't give us the exact value we were looking for, instead of an Ansible fact, let's try one of the special variables that are available to us, in this case inventory_hostname. This should reference the host names we supplied directly in our inventory. Let's save our file and rerun our playbook. All right, great. Now we see that's been deployed. Let's log back into one of these systems. Ah, that's perfect. That's exactly what we want to see. All right, great. Let's clear the screen. In order to craft our template file for the hosts file, we need to understand a little bit about how to traverse Ansible facts.
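Assembled from the steps in this walkthrough, template.yml might look like the following sketch (the play header is an assumption, since only the task arguments are dictated on screen):

```yaml
---
- name: Deploy a templated message of the day
  hosts: all
  tasks:
    - name: Place motd from the Jinja2 template
      template:
        src: motd.j2
        dest: /etc/motd
        owner: root
        group: root
        mode: '0644'   # quoted so YAML keeps the leading zero
```

Run it with ansible-playbook template.yml.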
I've created a list of the Ansible facts for the web01 system and have stored those in a temporary file. We can see the list of all Ansible facts that are available to us within this file. When we're ready to utilize one of these, we'll need to know the nomenclature for using an Ansible fact, any sub-fact, and then the values contained within. Specifically, I'll be looking for the default IPv4 address, so I need this keyword here under the Ansible facts, default_ipv4, as well as the address within it. We'll use this in our variable so that we can place the IP address for each system and then correspond it to the host name for that system. Now let's author our hosts.j2 file, to be located in our templates folder as hosts.j2. Here you can see I have a boilerplate file where we'll begin to enter our /etc/hosts template. At the beginning of the file, I always like to include a comment to denote the purpose of the file. In proper Jinja2 syntax, we'll use the brace and hash for our comment at the top. You can additionally see that I have a for loop I've created using the brace-and-percent-sign syntax. Here I am declaring host within the group all. Let's begin to fill out our for loop so that we can denote the specific fields we want entered into our /etc/hosts file. First, we'll begin with the double-curly-brace syntax. In this case, we need to use the host variables, or hostvars, for each host, as we saw previously. We want to call the specific fact that correlates to the IP address, specifically the IPv4 address of each node, and couple that with the inventory host name that we have for each of our systems. We'll use the syntax shown to declare that we wish to use one of our Ansible facts. The fact that we found that relates here is the one for the default IPv4, so we'll say default_ipv4.
And then, within that field in the JSON output we saw from the facts, we need the specific name for the address. We'll close this particular set of double braces. The next field in the /etc/hosts file corresponds to the host name you wish for this IP address to map to. In this case, we have a very quick variable available to us that Ansible creates, called inventory_hostname. Now, for each of our systems, we should have a line entered into the /etc/hosts file that gets generated from this template. Let's save our file and execute our playbook. Voilà. Now that we've logged in to web01, let's display our hosts file to see if we like what we see. Perfect. Now it looks like what we expect to see. That concludes this section. I'll see you in the next video.
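Putting the walkthrough's pieces together, the finished hosts.j2 might read as follows (a sketch; note that inside the loop the loop variable host already holds each system's inventory hostname, so it serves the same purpose that inventory_hostname does outside a loop):

```jinja
{# templates/hosts.j2 -- one line per host in the inventory #}
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_facts']['default_ipv4']['address'] }} {{ host }}
{% endfor %}
```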
In this section, we'll look at processing variables with Jinja2 filters. Jinja2 has a number of filters we can take advantage of to process and reformat the values contained in our variables. Within the Jinja2 engine, we have a number of filters that are supported for our expressions of variables. Filters allow us the ability to modify and process variable information to meet our needs. Some of these filters are provided by the Jinja2 language itself, and others are included as specific plugins for Ansible. You can also author custom filters, but that's beyond the scope of this course. If you require further information on that, have a look within the Ansible documentation for playbooks_filters. These filters can be very powerful and allow us to prepare data for use within our playbook or within templated files for our various Ansible workloads. Now that we're ready to process data using the Jinja2 filters, we'll need to understand how to do exactly that. To apply a filter, you'll need to first reference the variable. You'll follow that variable name with the pipe character. After that character, you'll then add the name of the filter you want to apply. Some filters require a series of arguments, or accept optional additional arguments, contained within parentheses. You can utilize multiple filters within a pipeline to get the formatted output you require. In this example, we can see how the capitalize filter allows us to capitalize the first letter of a string. If we included a variable such as myname, and that variable included a value such as james, all letters being lowercase, we could then use the capitalize filter to ensure that the J in james is capitalized upon output. Oftentimes, we may need multiple transformations of our data. In this case, we can take advantage of multiple filters. The unique filter will get a unique set of items from a list, removing any duplicated entries. The sort filter then sorts that list of items.
In this example, we can see that the mylist variable has a series of numbers contained within. We'll then pass that list through the unique and sort filters. The duplicate 9 should be removed by the unique filter; sort will then put them in numerical order. The resulting output, as shown, would be 1, 3, 7, 9. A more complex example is the ipaddr filter. This filter can perform a number of operations on IP addresses. If we were to pass a single IP address, it will return true if it is in the proper format for an IP address, and false if it is not. If we were to pass in a list of multiple IP addresses, the filter will return a list of the ones that are properly formed. Let's have a look at the example at right. We can see we have the variable mylist containing three IP addresses. Without getting too heavy into the networking concepts here, the bottom IP address is considered an invalid IP address. If we were to use the ipaddr filter on this variable, as shown in the task in the example, the output would remove this last entry, leaving us with just the 192. and the 10. IP addresses. As we get into more complex examples using CIDR information, you can see an example entry contained in this box. With the parenthetical arguments we are allowed to supply for the ipaddr filter, we can supply network/prefix to describe how we'd like to see this information output. With the variable mylist defined in this play, we can see that long-form netmask notation is provided within the list of IP addresses. With the network/prefix argument appended to our filter ipaddr, we're asking that the output truncate that to proper CIDR notation from the long netmask notation. The output would then show the 192 /24 address and so forth, properly showing these IP addresses and ranges with CIDR notation. This not only changed the order in which these values are displayed, but specifically the format.
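The filter examples from this section can be sketched as debug tasks (a hedged illustration; the sample values are assumptions, and note that in recent Ansible releases the ipaddr filter ships in a collection, typically as ansible.utils.ipaddr, and requires the Python netaddr library):

```yaml
- name: Capitalize the first letter of a string
  debug:
    msg: "{{ myname | capitalize }}"        # james -> James
  vars:
    myname: james

- name: Remove duplicates, then sort
  debug:
    msg: "{{ mylist | unique | sort }}"     # [9, 1, 7, 3, 9] -> [1, 3, 7, 9]
  vars:
    mylist: [9, 1, 7, 3, 9]

- name: Keep only well-formed IP addresses
  debug:
    msg: "{{ iplist | ipaddr }}"            # drops the malformed last entry
  vars:
    iplist: ['192.0.2.10', '10.0.0.1', '900.450.0.1']
```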
This can be very powerful when your workloads require specific types and formats of input. A key concept of processing variables with Jinja2 filters is that they don't actually change the value stored in the variable. They're really just transforming it into more appropriate output to be utilized in your workloads. There are far more filters available, both as standard Jinja2 filters and as ones provided specifically for Ansible's usage, than we can cover within this conversation. You can see here in this slide a small list of the filters available for you to utilize within your Ansible workloads. If you need to see a full list or further conversation about this concept, please visit the Ansible documentation online. That concludes this section. I'll see you in the next video.
In this section, we'll explore Templating External Data with Lookup Plugins. We'll want to understand how to use lookup plugins so that we can template external data using the Jinja2 template engine. Lookup plugins are available within Ansible. They are an Ansible extension to the Jinja2 templating language that provides additional functions and features for Ansible workloads. These lookup plugins import and format data from external sources so that it can be utilized in variables and templates. Lookup plugins will allow you to use the contents of a file, for example, as a value within a variable. Additionally, they'll allow you to look up information from other sources, including external sources, and then supply it through a template. Running ansible-doc -t lookup -l will list all available lookup plugins. When you wish to see documentation for one in particular, you can then supply its name; ansible-doc -t lookup file, for example, will display the documentation specifically for the file lookup plugin. We have two main ways that we can call a lookup plugin. Lookup will return a string in comma-separated form. Query returns an actual YAML-formatted list of items. If you require further processing of this information, query is often easier, since Ansible natively works with YAML lists. The example at right takes a look at using the dig lookup plugin, so that we can look up the DNS MX records for a specific gmail.com entry. This lookup returns a list where each item is one specific DNS MX record. Once it's gathered this information, it then prints the list one item at a time. You can see in the example we're creating a variable mxvar and utilizing the format for the query approach of lookup plugins, naming the lookup plugin dig, supplying the domain we wish to do that dig upon, gmail.com, and then naming the record type, MX. Once that variable is created, we simply use a debug module statement to list those out using a loop.
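The dig example described can be sketched as a small play (hedged: the dig lookup requires the Python dnspython library, and in collection-based Ansible releases it is addressed as community.general.dig; the qtype keyword shown here is its documented way of naming the record type):

```yaml
---
- name: Look up MX records with the dig lookup plugin
  hosts: localhost
  vars:
    mxvar: "{{ query('dig', 'gmail.com', qtype='MX') }}"
  tasks:
    - name: Print one MX record per iteration
      debug:
        msg: "{{ item }}"
      loop: "{{ mxvar }}"
```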
When we want to load the contents of a file into a variable, we can use the file lookup plugin. We can provide either a relative or absolute path to the file we wish to load in this fashion. In the example at right, we use the Ansible module authorized_key to copy the contents of a specific file located at files/naoko.key.pub into a specified area of a targeted machine within the .ssh folder. In this case, we're using a lookup plugin because the value of key must be the actual public key and not a file name. The file itself contains this actual key. When we need to look at each line within a file or output, we can use the lines lookup plugin. This is often helpful to use in tandem with filters. At right, we're looking at an example that uses the lines lookup plugin to build a list consisting of the lines contained within the file /etc/passwd. Each of the lines in that file contains a specific user's information. The debug task in this example uses the regex_replace filter to print out the name of each user contained in each line of the /etc/passwd file. We can use the template lookup plugin when we want to take a Jinja2 template and evaluate each of the values when setting a variable. When passing a relative path to the template, Ansible will look in the playbook's templates subdirectory. Consider that the templates subdirectory has a file, my.template.j2. That file could contain the content Hello, interpolating the variable named my_name. We could then author the play at right that prints out the text, Hello class! It does so using that variable, my_name, set to the value class, and then the lookup value for template. The lookup arguments for template also need to include the template name, my.template.j2, to be able to find the variable contents contained within. Perhaps one of the most useful lookup plugins is the url lookup plugin. This one allows you to grab the content of a web page or the output of an API call.
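Sketches of the file and template lookups just described (the user name and file paths mirror the slide examples; treat them as illustrative):

```yaml
# The key argument needs the actual public key text, so we read the file
- name: Install an SSH public key for a user
  authorized_key:
    user: naoko
    state: present
    key: "{{ lookup('file', 'files/naoko.key.pub') }}"

# Render templates/my.template.j2 ("Hello {{ my_name }}!") into a message
- name: Print the rendered template
  debug:
    msg: "{{ lookup('template', 'my.template.j2') }}"   # -> Hello class!
  vars:
    my_name: class
```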
This can be very powerful in your Ansible workloads when you need to probe an API or grab content from a specific web page, like status pages. In this example, we're having a look at querying the Amazon API to print out the IPv4 and IPv6 network addresses used by our AWS systems. You can see we create a variable called amazon_ip_ranges. This variable uses the lookup plugin url and then specifies the URL we wish to probe. An additional argument of split_lines is provided and set to false. From there, we use several debug tasks to be able to peruse the various IP ranges and prefixes. The first one shows the IPv4 ranges. The second task does the same, but for IPv6. When you're ready to learn more about lookup plugins, you can have a look at ansible-doc -t lookup. This is a helpful way to find documentation about lookup plugins directly within your command line interface. The -l argument will list all lookup plugins available to you in your environment. Further, you can use additional commands like grep to drill down and filter these results. By supplying a single lookup plugin as the final argument, you'll see documentation on that specific lookup plugin. Once you have the skills for using variables and filters, lookup plugins are a natural growth in enabling your Ansible workloads in more sophisticated and complex environments. This concludes our section, as well as our module. I look forward to seeing you in the next video.
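A sketch of the url lookup against Amazon's published ranges (the URL and the prefixes/ip_prefix field names come from AWS's public ip-ranges.json feed; treat the exact structure as an assumption):

```yaml
---
- name: Query the AWS ip-ranges feed
  hosts: localhost
  vars:
    amazon_ip_ranges: "{{ lookup('url', 'https://ip-ranges.amazonaws.com/ip-ranges.json', split_lines=false) | from_json }}"
  tasks:
    - name: Show the IPv4 prefixes
      debug:
        msg: "{{ amazon_ip_ranges['prefixes'] | map(attribute='ip_prefix') | list }}"

    - name: Show the IPv6 prefixes
      debug:
        msg: "{{ amazon_ip_ranges['ipv6_prefixes'] | map(attribute='ipv6_prefix') | list }}"
```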
Welcome back to our course, Ansible Fundamentals. In this module, we'll look at working with roles for automation reuse. Roles are a very powerful tool available to you within Ansible. We'll look at how they're structured and how you can utilize them within your playbooks. We'll describe how to create your own roles and then use them within a playbook. We'll look at the directory structure required to do so and then run one as part of a play. Lastly, we'll look at how you can select and retrieve roles from Ansible Galaxy, the online community that collects shared roles. This section focuses on creating roles. Within Ansible, roles allow you to make automation code far more reusable. Roles package tasks that can be configured through variables. A playbook will call a role, passing in the proper values for the variables for the use case. This allows you to create very generic code that can be reused between projects or even shared with others. There are many benefits to using Ansible roles. Roles allow you to group content, and this allows you to easily share code with others or between projects. Roles can be written in a way that defines the essential elements of a system type, such as web server, database server, or repository. Roles are bite-size pieces of larger projects, making the code base far more manageable. Since you have many components making up the larger project, different administrators can develop roles in parallel and share their work to comprise the larger project. When we create Ansible roles, we use the same toolkit we do when authoring playbooks. There are three steps involved in creating a role. The first is to create the directory structure that a role utilizes. Second, you'll author the content for the role. A common approach to authoring roles is to start by writing a play and then refactoring it into a role that makes it more generic.
A key thing to note is that you should never store secrets within a role. The concept of a role is to make it reusable and shareable, and you wouldn't want secrets to be passed in this fashion. A proper approach would be to pass your secrets as parameters from within the play. Roles have a very specific directory structure. This directory structure is a standardized approach that makes sharing and consuming other roles easy. The top-level directory defines the name of the role itself. Contained within this top-level directory is the very predictable role directory structure. Each of the files for your role is organized into subdirectories that are named according to the purpose of each of these files. Subdirectories include things such as tasks and handlers. While you can manually create this directory structure, Ansible provides a command that makes it easy to do so in an automated fashion. The ansible-galaxy command with its init subcommand allows you to name a role, and it will automatically create the skeleton directory for you. Here's a look at the default layout of the role skeleton directory structure. At the top level, we have the name of our role. In this example, we're calling it role_example. Beneath there, we have a series of subdirectories that contain our Ansible files. Each of these subdirectories has a main.yml where you'll author your work. The defaults subdirectory contains the values for default variables used within the role. These can be overridden during role invocation. These particular variables have a low precedence, as they're intended to be changed and customized when you consume the role within a play. The files subdirectory contains static files referenced throughout the role. The handlers subdirectory contains the definitions of the handlers used within the role. The meta folder defines specific information about the role, such as the author, license, or optional role dependencies.
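For reference, running ansible-galaxy init role_example produces a skeleton along these lines (abbreviated; exact contents vary slightly between Ansible versions):

```
role_example/
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
│   ├── inventory
│   └── test.yml
└── vars/main.yml
```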
A tasks subdirectory is included where the tasks performed by the role are defined. This is similar to a tasks section within a play. The templates subdirectory contains all the Jinja2 templates you'll use throughout the role. The tests subdirectory can contain an inventory that can then be used to test the role. And lastly, the vars subdirectory defines values of variables used internally by the role. These variables have a high precedence and are not intended to be changed through the play. As mentioned before, it's really common to start with a fully authored playbook and transition that work into a role. In this example, we have a simple playbook that creates an FTP server on all systems in an inventory group we're calling ftpservers. You can see the three tasks contain one to install vsftpd, another to place a templated configuration