Prepare better with the best interview questions and answers, and walk away with top interview tips. These interview questions and answers will boost your core interview skills and help you perform better. Be smarter with every interview.
If you are attending an interview for the position of DevOps Engineer, you really need in-depth knowledge of the DevOps tools, software, and processes targeted at automating IT. There are various popular automation tools, both open source and commercial, targeted at enterprise IT. One of the most popular modern automation platforms is Ansible. It is, at its core, an IT automation tool: it can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero-downtime rolling updates. The major reasons Ansible is so popular are its simplicity and ease of use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, the use of OpenSSH for transport (with other transports and pull modes as alternatives), and a language designed around auditability by humans, even those not familiar with the program.
Let us agree that implementing DevOps tools, software, and processes can help revolutionize your organization, but adopting a DevOps framework doesn't require updating your entire IT stack to newer agile implementations first. Quite simply, your organization can adopt DevOps through automation, whether you are running only on bare metal, migrating to the cloud, or already going full force into containers. Ansible caters to this need fantastically and is hugely popular. Listed below are the top five reasons for its popularity:
Multiple IT automation tools like Puppet, Chef, and CFEngine appeared in the mid-to-late 2000s. They came with their own documentation, which was still not up to the mark for sysadmins to learn and adopt inside the datacenter. One reason many developers and sysadmins stuck with shell scripting and command-line configuration was that these are simple and easy to use, and they had years of experience with bash and command-line tools. "Why learn yet another IT automation tool and syntax?" was a common concern raised when so many of these tools appeared around the same time.
Ansible was primarily built by developers and sysadmins who love the command line and wanted a tool that helps them manage their servers exactly as they have in the past, but in a repeatable and centrally managed way. One of Ansible's greatest strengths is its ability to run regular shell commands verbatim, so you can take existing scripts and commands and work on converting them into idempotent playbooks as time allows.
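For instance, an existing command can be run verbatim through the shell module, and later replaced with an idempotent task; a sketch assuming a webservers group and a hypothetical restart script:

$ ansible webservers -m shell -a "/usr/local/bin/restart-nginx.sh"

The idempotent playbook equivalent describes the desired state instead:

- name: Ensure nginx is running
  service:
    name: nginx
    state: started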
If Ansible tops the popularity chart, Puppet is the second most popular automation platform, available both as open source and as a commercial product. Below is a list of the major differences between Puppet and Ansible that you should be aware of:
Ansible | Puppet |
---|---|
Developed to simplify complex orchestration and configuration management tasks | Puppet can be difficult for new users who must learn Puppet DSL or Ruby, as advanced tasks usually require input from CLI. |
The platform is written in Python, and users script automation in YAML | Puppet is written in Ruby |
Automated workflow for Continuous Delivery | Visualization and reporting |
Ansible doesn’t require agents on every system, and modules can reside on any server. | Puppet uses an agent/master architecture. Agents manage nodes and request relevant info from masters that control configuration info. The agent polls status reports and queries regarding its associated server machine from the master Puppet server, which then communicates its response and required commands using the XML-RPC protocol over HTTPS |
The Self-Support offering starts at $5,000 per year, and the Premium version goes for $14,000 per year, for 100 nodes each. | Puppet Enterprise is free for up to 10 nodes; standard pricing starts at $120 per node. |
Good GUI | GUI is a work in progress |
CLI accepts commands in almost any language | Must learn the Puppet DSL |
This interview question identifies a candidate's experience with Ansible, both theoretical and practical. A simple way to answer this question could be:
Ansible works by pushing changes out to all your servers (by default), and requires no extra software to be installed on your servers (thus no extra memory footprint, and no extra daemon to manage), unlike most other configuration management tools.
Consider any configuration management (CM) tool. One of its abilities is to ensure the same configuration is maintained no matter whether you run it once or 1,000 times. Many shell scripts have unintended consequences if you execute them more than once or twice, but Ansible can deploy the same configuration to a server over and over again without making any changes after the first deployment.
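As a hedged illustration of that idempotence (the path and mode are hypothetical), the following task creates a directory on the first run and reports no change on every run after that:

- name: Ensure the application directory exists
  file:
    path: /opt/app
    state: directory
    mode: 0755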
Ansible products offer the following capabilities:
Ansible is quite popular in streamlining the entire process. Provisioning with Ansible is simple and allows you to seamlessly transition into configuration management, orchestration and application deployment using the same simple, human-readable, automation language.
If you're looking for a simple CM solution available in the market today, Ansible is the de facto choice. It requires nothing more than a password or SSH key in order to start managing systems, and it can start managing them without installing any agent software, avoiding the problem of "managing the management" common in many automation systems. There's no more wondering why configuration management daemons are down, when to upgrade management agents, or when to patch security vulnerabilities in those agents.
Ansible is the simplest solution for configuration management available. It's designed to be minimal in nature, consistent, secure and highly reliable, with an extremely low learning curve for administrators, developers and IT managers.
With very simple data descriptions of your infrastructure (both human-readable and machine-parsable), Ansible ensures that everyone on your team will be able to understand the meaning of each configuration task. New team members will be able to quickly dive in and make an impact. Existing team members can get work done faster - freeing up cycles to attend to more critical and strategic work instead of configuration management.
App deployment is a matter of minutes compared to hours in the traditional approach to system management. When you define and manage your application deployment, teams are able to effectively manage the entire application lifecycle from development to production.
Ansible provides not only multi-tier but also a multi-step orchestration platform. The push-based architecture of Ansible allows very fine-grained control over operations. It is able to orchestrate configuration of servers in batches, all while working with load balancers, monitoring systems, and cloud or web services. Slicing 1000s of servers into manageable groups and updating them 100 at a time is incredibly simple, and can be done in a half page of automation content.
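A minimal sketch of such batching using Ansible's serial keyword (the group and package names are hypothetical); the play updates at most 100 hosts at a time:

- hosts: webservers
  serial: 100
  tasks:
    - name: Update the application package
      yum:
        name: myapp
        state: latest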
And this is all possible today using Ansible Playbooks. They keep your applications properly deployed (and managed) throughout their entire lifecycle.
Ansible has the capability to simply define your systems for security. Ansible's easily understood playbook syntax allows you to define and secure any part of your system, whether it's setting firewall rules, locking down users and groups, or applying custom security policies.
This part needs a special mention of Ansible Tower. Ansible Tower self-service surveys help you to delegate your complex orchestration to whomever in your organization needs it. With Ansible and Ansible Tower, orchestrating the most complex tasks becomes merely the click of a button even for the non-technical people in your organization.
This is a very important question that identifies a candidate's understanding of the limitations of Ansible and the tools being used. Undoubtedly, every automation tool available in the market has limitations. Ansible, too, has certain pros and cons.
Below is the list of Pros of Ansible which is self-explanatory:
Cons:
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized–it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
Ansible by default manages machines over the SSH protocol. Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.
Ansible uses an inventory file (basically, a list of servers) to communicate with your servers. Like a hosts file (at /etc/hosts) that matches IP addresses to domain names, an Ansible inventory file matches servers (IP addresses or domain names) to groups. Inventory files can do a lot more, but for now, we'll just create a simple file with one server. One can easily create a file at /etc/ansible/hosts (the default location for the Ansible inventory file) and add one server to it as shown below:
$ sudo mkdir /etc/ansible
$ sudo touch /etc/ansible/hosts
The entry in this file looks like the following:
[test]
www.test.com
…where test is the group of servers you’re managing and www.test.com is the domain name (or IP address) of a server in that group. If you’re not using port 22 for SSH on this server, you will need to add it to the address, like www.test.com:2222, since Ansible defaults to port 22 and won’t get this value from your ssh config file.
Now that you’ve installed Ansible and created an inventory file, it’s time to run a command to see if everything works! Enter the following in the terminal (we’ll do something safe so it doesn’t make any changes on the server):
$ ansible test -m ping -u [username]
…where [username] is the user you use to log into the server. If everything worked, you should see a message that shows www.test.com | success >>, then the result of your ping. If it didn’t work, run the command again with -vvvv on the end to see the verbose output. Chances are you don’t have SSH keys configured properly—if you log in with ssh username@www.test.com and that works, the above Ansible command should work, too.
Currently, Ansible can be run from any machine with Python 2 (versions 2.6 or 2.7) or Python 3 (versions 3.5 and higher) installed (Windows isn’t supported for the control machine). This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
The supported operating system versions are:
Windows Nano Server is not currently supported by Ansible since it does not have access to the full .NET Framework that is used by the majority of the modules and internal components.
On the managed nodes, you need a way to communicate, which is normally ssh. By default this uses sftp. If that’s not available, you can switch to scp in ansible.cfg. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later).
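A sketch of the scp switch mentioned above, in ansible.cfg:

[ssh_connection]
scp_if_ssh = True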
If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can use the yum module or dnf module in Ansible to install this package on remote systems that do not have it.
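For example, a task for yum-based remote systems might look like this:

- name: Ensure libselinux-python is present
  yum:
    name: libselinux-python
    state: present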
---
- hosts: all
  vars:
    collab_file: /opt/collab
    package_list:
      - 'git'
  tasks:
    - name: Check for collab file
      stat:
        path: "{{ collab_file }}"
      register: collab_f
    - name: Install git if collab file exists
      become: "yes"
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ package_list }}"
      when: collab_f.stat.exists
As shown in the example above, the first task uses the 'stat' module to check whether the file exists, and captures the output in a variable called 'collab_f' using the 'register' keyword. The registered variable can then be used in any other task. In our case, we capture the stats of the '/opt/collab' file, and in the next task, we install the package list if the file exists.
Installing Ansible on macOS is a one-liner.
It can be installed with the help of “pip”, the Python package manager.
Run the below command to install pip on macOS:
$ sudo easy_install pip
Then install Ansible with:
$ sudo pip install ansible
Yes, it is possible to change the Ansible reboot module's default wait of 600 seconds to a custom value. You can use the syntax below:
- name: Reboot a Linux system
reboot:
reboot_timeout: 1200
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default tmp directory Ansible uses (~/.ansible/tmp). If you see module failures on Solaris machines, this is likely the problem. There are several workarounds:
You can set remote_tmp to a path that will expand correctly with the shell you are using (see the plugin documentation for C shell, fish shell, and Powershell). For example, in the ansible config file you can set:
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this:
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
You can set ansible_shell_executable to the path to a POSIX compatible shell. For instance, many Solaris hosts have a POSIX shell located at /usr/xpg4/bin/sh so you can set this in inventory like so:
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
For Linux, the protocol used is SSH.
For Windows, the protocol used is WinRM.
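A minimal inventory sketch illustrating both cases (hostnames are hypothetical):

# Linux hosts are managed over SSH (the default)
[linux]
web1.example.com

# Windows hosts are managed over WinRM
[windows]
win1.example.com

[windows:vars]
ansible_connection=winrm
ansible_port=5986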
Often a user of a configuration management system will want to keep inventory in a different software system. Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey CMDB software.
Ansible easily supports all of these options via an external inventory system. The inventory directory contains some of these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack.
Create an AWS infrastructure using the Ansible EC2 dynamic inventory
Suppose you have a requirement to launch an instance and install some packages on top of it in one go; what would your approach be?
To set up dynamic inventory management, you need two files:
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
An ec2.py file is a Python script, which is responsible for fetching details of the EC2 instances, whereas the ec2.ini file is a configuration file which is used by ec2.py.
Ansible uses AWS Python library boto to communicate with AWS using APIs. To allow this communication, export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.
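For example (the key values are placeholders):

$ export AWS_ACCESS_KEY_ID='AK123EXAMPLE'
$ export AWS_SECRET_ACCESS_KEY='abc123EXAMPLE'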
You can use the inventory in two ways:
Below is an example ad-hoc command with EC2 dynamic inventory, which will simply ping all machines:
$ ansible -i ec2.py all -m ping
Ansible modules are components installed with Ansible that do all the heavy lifting. They can be classified as core and extra modules. The main difference between the two is that core modules come with Ansible and are built and maintained by Ansible Inc. and Red Hat employees. Extra modules can be easily installed using your distribution's package manager or directly from GitHub.
Below is the table for core modules :
Module | Function |
---|---|
copy | Copies files or folders from the local machine to the configured server |
user | Creates, deletes or alters user accounts on the configured server |
npm | Manages Node.js packages |
ping | Checks SSH connection to servers defined in inventory |
setup | Collects various information about servers |
cron | Manages crontab |
The majority of modules expect one or more arguments that tune the way a module works; for example, the copy module has src and dest arguments that tell the module the source and destination of the file or directory to be copied.
The command below will copy a file named "my_app.zip" from the current directory to the "/var/www/html" directory on the configured server.
# ansible -m copy -a "src=my_app.zip dest=/var/www/html"
Ansible tasks are atomic actions defined by name and an accompanying module.
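For example, the task dissected below, which installs MySQL with the yum module, might look like this:

- name: install mysql
  yum:
    name: mysql
    state: installed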
The anatomy of this task is quite simple: its name is "install mysql", the module in use is "yum", and it has two arguments: the name argument refers to the package, and the state argument says the package needs to be "installed".
This brings us to one important Ansible feature: Ansible does not expect commands or functions that do something – Ansible tasks describe the desired state of the configured server. If a package named “mysql” is installed, Ansible will not install it again. This means that it is perfectly safe to run tasks several times as they will not alter the system if its configuration is in the state described in those tasks.
A single task can only use one module. If, for example, I wanted to install MySQL and start the mysqld service, I would need two tasks to achieve that.
Tasks by themselves have no real use case, so we combine them into playbooks. Playbooks, therefore, are collections of tasks that describe the desired state of the configured server and configure it. Playbooks are written in YAML because it is extremely human- and machine-readable.
An example playbook may look like this:
- name: Common tasks
  hosts: webservers
  become: true
  tasks:
    - name: task 1
      . . . .
  handlers:
    - name: handler 1
Reading from the top, the line starting with "name" is the playbook name.
Note: Tasks will be executed one by one in the order they are written in. It is important to note that in the situation where Ansible executes a playbook on several servers, tasks are running in parallel on all servers.
During the configuration process, there is sometimes a need to conditionally execute the task. Handlers are one of the conditional forms supported by Ansible. A handler is similar to a task, but it runs only if it was notified by a task.
A task will fire the notification if Ansible recognizes that the task has changed the state of the system. An example situation where handlers are useful is when a task modifies a configuration file of some service, MySQL for example. In order for changes to take effect, the service needs to be restarted.
- name: change mysql max_connections
  copy: src=edited_my.cnf dest=/etc/my.cnf
  notify:
    - restart_mysql
The notify keyword acts as a trigger for the handler named "restart_mysql".
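The handler itself is not shown above; a minimal sketch of how it might be defined in the handlers section:

handlers:
  - name: restart_mysql
    service:
      name: mysqld
      state: restarted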
Yes. ansible-doc displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short "snippet" which can be pasted into a playbook.
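Typical usage (the copy module here is just an example):

$ ansible-doc -l
$ ansible-doc copy
$ ansible-doc -s copy

The -l flag prints the terse plugin listing, a bare module name prints its DOCUMENTATION string, and -s prints the playbook snippet.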
Yes. vmware_guest can deploy a virtual machine with required settings on a standalone ESXi server.
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
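For example, a quick ad-hoc reboot of all machines in a hypothetical atlanta group, 10 hosts in parallel:

$ ansible atlanta -a "/sbin/reboot" -f 10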
Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms of Working With Playbooks, it actually means what hosts to apply a particular configuration or IT process to.
Below is a sample of pattern usage:
# ansible <pattern_goes_here> -m <module_name> -a <arguments>
# ansible webservers -m service -a "name=httpd state=restarted"
A pattern usually refers to a set of groups (sets of hosts) – in the above case, machines in the “webservers” group.
The following patterns are equivalent and target all hosts in the inventory:
all
*
Ansible Vault feature can encrypt any structured data file used by Ansible. This can include group_vars/ or host_vars/ inventory variables, variables loaded by include_vars or vars_files, or variable files passed on the ansible-playbook command line with -e @file.yml or -e @file.json. Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you’d like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
The password used with vault currently must be the same for all files you wish to use together at the same time.
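For reference, new or existing files are protected with the create and encrypt subcommands (the filenames here are hypothetical):

$ ansible-vault create secret_vars.yml
$ ansible-vault encrypt group_vars/all/vault.yml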
How to update the encrypted data using ansible vault?
To update the AWS keys added to the encrypted file, you can later use ansible-vault's edit subcommand as follows:
$ ansible-vault edit aws_creds.yml
Vault password:
The edit command does the following operations:
Another way to update the content of the file is to decrypt it first:
$ ansible-vault decrypt aws_creds.yml
Vault password:
Decryption successful
Once updated, this file can then be encrypted again.
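Using the encrypt subcommand:

$ ansible-vault encrypt aws_creds.yml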
Blocks allow for logical grouping of tasks and in-play error handling. Most of what you can apply to a single task can be applied at the block level, which also makes it much easier to set data or directives common to the tasks. A directive does not affect the block itself but is inherited by the tasks enclosed by the block; e.g., a when condition is applied to the tasks, not the block itself.
Block example
tasks:
  - name: Install Apache
    block:
      - yum:
          name: "{{ item }}"
          state: installed
        with_items:
          - httpd
          - memcached
      - template:
          src: templates/src.j2
          dest: /etc/foo.conf
      - service:
          name: bar
          state: started
          enabled: True
    when: ansible_distribution == 'CentOS'
    become: true
    become_user: root
If you know you don’t need any factual data about your hosts and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms.
In any play, just do this:
- hosts: whatever
  gather_facts: no
It is also possible to make groups of groups using the :children suffix in INI or the children: entry in YAML. You can apply variables using :vars or vars::
[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest
Ansible allows you to ‘become’ another user, different from the user that logged into the machine (remote user). This is done using existing privilege escalation tools such as sudo, su, pfexec, doas, pbrun, dzdo, ksu, runas, machinectl and others.
For example, to manage a system service (which requires root privileges) when connected as a non-root user (this takes advantage of the fact that the default value of become_user is root):
- name: Ensure the httpd service is running
  service:
    name: httpd
    state: started
  become: yes
By default, variables are merged/flattened to the specific host before a play is run. This keeps Ansible focused on the host and task, so groups don't really survive outside of inventory and host matching. By default, Ansible overwrites variables, including the ones defined for a group and/or host (see the hash_merge setting to change this). The order/precedence is (from lowest to highest): the all group, parent groups, child groups, and finally the host.
When groups of the same parent/child level are merged, it is done alphabetically, and the last group loaded overwrites the previous groups. For example, a_group will be merged with b_group, and the b_group vars that match will overwrite the ones in a_group.
A cache plugin implements a backend caching mechanism that allows Ansible to store gathered facts or inventory source data without the performance hit of retrieving them from source.
The default cache plugin is the memory plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs.
Enabling Cache Plugins
Only one cache plugin can be active at a time. You can enable a cache plugin in the Ansible configuration, either via an environment variable:
export ANSIBLE_CACHE_PLUGIN=jsonfile
or in the ansible.cfg file:
[defaults]
fact_caching=redis
You will also need to configure other settings specific to each plugin. Consult the individual plugin documentation or the Ansible configuration for more details.
Ansible executes playbooks over SSH but it is not limited to this connection type. With the host-specific parameter ansible_connection=<connector>, the connection type can be changed. The following non-SSH based connectors are available:
local
This connector can be used to deploy the playbook to the control machine itself.
docker
This connector deploys the playbook directly into Docker containers using the local Docker client.
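A minimal inventory sketch for the docker connector, assuming a running container named my_container:

[containers]
my_container ansible_connection=docker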
With fact caching enabled, it is possible for the machine in one group to reference variables about machines in the other group, despite the fact that they have not been communicated within the current execution of /usr/bin/ansible-playbook.
To benefit from cached facts, you will want to change the gathering setting to smart or explicit or set gather_facts to False in most plays.
Currently, Ansible ships with two persistent cache plugins: redis and jsonfile.
To configure fact caching using redis, enable it in ansible.cfg as follows:
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
# seconds
Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of "facts" in Ansible. Effectively, registered variables are just like facts.
When using register with a loop, the data structure placed in the variable during the loop will contain a results attribute, which is a list of all responses from the module.
- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True
    - shell: /usr/bin/bar
      when: foo_result.rc == 5
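A hedged sketch of register combined with a loop, where the registered variable exposes the results list described above:

- shell: "echo {{ item }}"
  with_items:
    - one
    - two
  register: echo_out
- debug:
    var: echo_out.results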
Unlike most Ansible modules, network modules do not run on the managed nodes. From a user’s point of view, network modules work like any other modules. They work with ad-hoc commands, playbooks, and roles. Behind the scenes, however, network modules use a different methodology than the other (Linux/Unix and Windows) modules use. Ansible is written and executed in Python. Because the majority of network devices cannot run Python, the Ansible network modules are executed on the Ansible control node, where ansible or ansible-playbook runs.
Network modules also use the control node as a destination for backup files, for those modules that offer a backup option. With Linux/Unix modules, where a configuration file already exists on the managed node(s), the backup file gets written by default in the same directory as the new, changed file. Network modules do not update configuration files on the managed nodes, because network configuration is not written in files. Network modules write backup files on the control node, usually in the backup directory under the playbook root directory.
Set the hostname on a Cisco switch using network modules.
If the network device is running the Cisco IOS operating system, use the ios_config module, which manages Cisco IOS configuration sections.
Below is a playbook for setting the hostname of a Cisco switch:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: set a hostname
      ios_config:
        lines: hostname sw2
        provider:
          host: 10.0.0.15
          username: admin
          password: adc123
          authorize: true
          auth_pass: abcjfe767
Run the playbook
$ ansible-playbook playbook.yml -v
Verify that the Cisco switch config is saved correctly:
$ ssh admin@10.0.0.15
Password:
sw2>
Because network modules execute on the control node instead of the managed nodes, they can support multiple communication protocols. The communication protocol (XML over SSH, CLI over SSH, API over HTTPS) selected for each network module depends on the platform and the purpose of the module. Some network modules support only one protocol; some offer a choice. The most common protocol is CLI over SSH.
You set the communication protocol with the ansible_connection variable:
Value of ansible_connection | Protocol | Requires | Persistent? |
---|---|---|---|
network_cli | CLI over SSH | network_os setting | yes |
netconf | XML over SSH | network_os setting | yes |
httpapi | API over HTTP/HTTPS | network_os setting | yes |
local | depends on provider | provider setting | no |
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to validate the certificate WinRM is using for an HTTPS connection. If the certificate cannot be validated (such as in the case of a self-signed cert), it will fail the verification process.
To ignore certificate validation, add ansible_winrm_server_cert_validation: ignore to inventory for the Windows host.
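In INI inventory, continuing the conventions used elsewhere in this document (the group name is hypothetical):

[windows:vars]
ansible_winrm_server_cert_validation=ignore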
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
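For example, a group_vars/webservers file might carry shared settings (a sketch with hypothetical values):

ansible_user: alice
ansible_port: 5000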
The AWX Project -- AWX for short -- is an open source community project, sponsored by Red Hat, that enables users to better control their Ansible project use in IT environments. AWX is the upstream project from which the Red Hat Ansible Tower offering is ultimately derived.
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:
# ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host.
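The setup module also accepts a filter argument to narrow the output; for example, to show only facts whose names match a pattern:

# ansible hostname -m setup -a 'filter=ansible_eth*'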
One needs to first install Ansible on a Linux or Windows system. One can use the playbook format below to create an AWS EC2 key, as shown:
File: myec2.key.yml
---
- hosts: local
  connection: local
  gather_facts: no
  tasks:
    - name: Create a new EC2 key
      ec2_key:
        name: collab-key
        region: us-east-1
      register: myec2_key_result
    - name: Save private key
      copy: content="{{ myec2_key_result.key.private_key }}" dest="./aws.collab.pem" mode=0600
      when: myec2_key_result.changed
Where,
ec2_key: – Creates/maintains the EC2 key pair.
name: collab-key – Name of the key pair.
region: us-east-1 – The AWS region to use.
register: myec2_key_result – Saves the result of the generated key to the myec2_key_result variable.
copy: content="{{ myec2_key_result.key.private_key }}" dest="./aws.collab.pem" mode=0600 – Sets the contents of myec2_key_result.key.private_key to a file named aws.collab.pem in the current directory, and sets the mode of the file to 0600 (Unix file permissions).
when: myec2_key_result.changed – Only save when myec2_key_result.changed is set to true, so we don't overwrite our key file.
It is pretty much doable. Let's take an example: suppose you want to find and replace all instances of "collab" with "collabera" within a file named /opt/collab.conf:
- replace:
    path: /opt/collab.conf
    regexp: 'collab'
    replace: 'collabera'
    backup: yes
Yes, it is definitely an insecure way of logging. In order to prevent a task from writing confidential information to syslog (for example), set no_log: true on the task:
- name: mysecret stuff
  command: "echo {{ secret_root_password }} | sudo su -"
  no_log: true
One can easily upgrade the Ansible version to a specific version using the one-liner below:
sudo pip install ansible==<version-number>
You can refer to the playbook YAML file below to deploy a WordPress application inside Docker containers using Ansible:
---
- hosts: localhost
  gather_facts: no
  vars:
    docker_volume: database_data
    docker_network: ansible_network
    database_name: database
    wp_name: wordpress
    wp_host_port: 8000
    wp_container_port: 80
  tasks:
    - name: "Create a Volume"
      docker_volume:
        name: "{{ docker_volume }}"
    # ansible 2.2 only
    - name: "Create a network"
      docker_network:
        name: "{{ docker_network }}"
    - name: "Launch database container"
      docker_container:
        name: "{{ database_name }}"
        image: mysql:5.7
        volumes:
          - "{{ docker_volume }}:/var/lib/mysql:rw"
        restart: true
        networks:
          - name: "{{ docker_network }}"
            aliases:
              - "{{ database_name }}"
        env:
          MYSQL_ROOT_PASSWORD: wordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress
    - name: "Launch wordpress container"
      docker_container:
        name: "{{ wp_name }}"
        image: wordpress:latest
        ports:
          - "{{ wp_host_port }}:{{ wp_container_port }}"
        restart: true
        networks:
          - name: "{{ docker_network }}"
            aliases:
              - "{{ wp_name }}"
        env:
          WORDPRESS_DB_HOST: "{{ database_name }}:3306"
          WORDPRESS_DB_PASSWORD: wordpress
The mkpasswd utility that is available on most Linux systems is a great option:
mkpasswd --method=sha-512
In OpenBSD, a similar option is available in the base system called encrypt(1):
encrypt
If the above utilities are not installed on your system, then you can still easily generate these passwords using Python and the passlib hashing library.
# pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
# python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Vault can be used in playbooks to keep secret data.
If you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:
- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The no_log attribute can also apply to an entire play:
- hosts: all
  no_log: True
Note that the use of the no_log attribute does not prevent data from being shown when debugging Ansible itself via the ANSIBLE_DEBUG environment variable.
Using the docker modules requires having docker-py installed on the host running Ansible. You will need docker-py >= 1.7.0 installed.
$ pip install 'docker-py>=1.7.0'
The docker_service module also requires docker-compose
$ pip install 'docker-compose>=1.7.0'
You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command line parameters first, then environment variables. If neither a command line option nor an environment variable is found, a default value will be used. The default values are provided under Parameters.
Control how modules connect to the Docker API by passing parameters such as docker_host, api_version, timeout, and tls_verify to each task.
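Alternatively, a sketch of the corresponding environment variables (the host address and path are hypothetical):

$ export DOCKER_HOST=tcp://192.168.99.100:2376
$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=/path/to/client/certs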
If you are attending an interview for the position of DevOps Engineer, you really need to have in-depth knowledge around DevOps tool, software & processes which are targeted for automating IT. There are various popular automation tools, both open source and commercial product targeted for Enterprise IT. One of the most popular modern automation platforms is Ansible. It is basically an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. The major reasons why Ansible is so popular are simplicity and ease-of-use. Not only this, it has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans–even those not familiar with the program.
Let us agree to the fact that implementing a DevOps tool, software & processes can help revolutionize your organization but adopting a DevOps framework doesn’t require updating your entire IT stack to newer agile implementations first. Quite simply, your organization can adopt DevOps through automation, even if you are running only on bare metal, migrating to the cloud, or already going full force into containers. Ansible caters to this need fantastically and is damn popular. The listed below are the top 5 reasons of its popularity:
Multiple IT automation tools like Puppet, Chef, CFEngine etc.appeared in the mid and late 2000-2002. They came with their own documentation which was still not up-to-mark for sysadmins to learn and adopt inside Datacenter. One reason why many developers and sysadmins stick to shell scripting and command line configuration was it's simple, easy to use and years of experience using bash and command-line tools. Why learn yet another IT automation tool and syntax? - was one of concern showed when a lot of such tools appeared during the same year.
Ansible was primarily built by developers and sysadmins who love the command line and want to make a tool that helps them manage their servers exactly the same as they have in the past but in a repeatable and centrally-managed way. One of Ansible’s greatest strengths is its ability to run regular shell commands verbatim, so you can take existing scripts and commands, and work on converting them into idempotent playbooks as time allows.
If Ansible tops the chart of popularity, Puppet is the 2nd most popular automation platform which is available both as open source as well as the commercial product. Below is a list of major differences between Puppet and Ansible which you should be aware of:
Ansible | Puppet |
---|---|
Developed to simplify complex orchestration and configuration management tasks | Puppet can be difficult for new users who must learn Puppet DSL or Ruby, as advanced tasks usually require input from CLI. |
The platform is written in Python and allows users to script commands in YAML as an imperative programming paradigm. Written in YAML language | Puppet is written in Ruby language |
Automated workflow for Continuous Delivery | Visualization and reporting |
Ansible doesn’t require agents on every system, and modules can reside on any server. | Puppet uses an agent/master architecture. Agents manage nodes and request relevant info from masters that control configuration info. The agent polls status reports and queries regarding its associated server machine from the master Puppet server, which then communicates its response and required commands using the XML-RPC protocol over HTTPS |
The Self-Support offering starts at $5,000 per year, and the Premium version goes for $14,000 per year for 100 nodes each. (Get more info here.) | Puppet Enterprise is free for up to 10 nodes. Standard pricing starts at $120 per node. (Get more info here.) |
Good GUI | GUI - work under progress |
CLI accepts commands in almost any language | Must learn the Puppet DSL |
This interview question identifies a candidate experience around Ansible both theoretically and practically. A Simple way to answer this question could be -
Ansible works by pushing changes out to all your servers (by default), and requires no extra software to be installed on your servers (thus no extra memory footprint, and no extra daemon to manage), unlike most other configuration management tools
Consider any configuration management(CM) tool. One of its ability is to ensure the same configuration is maintained, no matter if you run it once or 1000s times. Various shell scripts have unintended consequences if you execute them more than once or twice, but Ansible is the tool which can deploy the same configuration to a server over and over again without making any changes after the first deployment activity.
Ansible products offer the following capabilities:
Ansible is quite popular in streamlining the entire process. Provisioning with Ansible is simple and allows you to seamlessly transition into configuration management, orchestration and application deployment using the same simple, human-readable, automation language.
If you’re looking out for a simple solution for CM available in the market today, Ansible is the de-facto. It requires nothing more than a password or SSH key in order to start managing systems and can start managing them without installing any agent software, avoiding the problem of "managing the management" common in many automation systems. There's no more wondering why configuration management daemons are down, when to upgrade management agents, or when to patch security vulnerabilities in those agents.
Ansible is the simplest solution for configuration management available. It's designed to be minimal in nature, consistent, secure and highly reliable, with an extremely low learning curve for administrators, developers and IT managers.
With very simple data descriptions of your infrastructure (both human-readable and machine-parsable), Ansible ensures that everyone on your team will be able to understand the meaning of each configuration task. New team members will be able to quickly dive in and make an impact. Existing team members can get work done faster - freeing up cycles to attend to more critical and strategic work instead of configuration management.
App deployment is a matter of minutes compared to hours in the traditional approach to system management. When you define and manage your application deployment, teams are able to effectively manage the entire application lifecycle from development to production.
Ansible provides not only multi-tier but also a multi-step orchestration platform. The push-based architecture of Ansible allows very fine-grained control over operations. It is able to orchestrate configuration of servers in batches, all while working with load balancers, monitoring systems, and cloud or web services. Slicing 1000s of servers into manageable groups and updating them 100 at a time is incredibly simple, and can be done in a half page of automation content.
And this is all possible today using Ansible Playbooks. They keep your applications properly deployed (and managed) throughout their entire lifecycle.
Ansible has the capability to simply define your systems for security. Ansible easily understood Playbook syntax allows you to define secure any part of your system, whether it’s setting firewall rules, locking down users and groups, or applying custom security policies.
This part needs a special mention of Ansible Tower. Ansible Tower self-service surveys help you to delegate your complex orchestration to whomever in your organization needs it. With Ansible and Ansible Tower, orchestrating the most complex tasks becomes merely the click of a button even for the non-technical people in your organization.
This is a very important question which identified candidate understanding about the limitations around Ansible and tools being used. Undoubtedly, every automation tools available in the market have limitations. Ansible too have certain pros and cons.
Below is the list of Pros of Ansible which is self-explanatory:
Cons:
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized–it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
Ansible by default manages machines over the SSH protocol. Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.
Ansible uses an inventory file (basically, a list of servers) to communicate with your servers. Like a hosts file (at /etc/hosts) that matches IP addresses to domain names, an Ansible inventory file matches servers (IP addresses or domain names) to groups. Inventory files can do a lot more, but for now, we’ll just create a simple file with one server. One can easily create a file at /etc/ansible/hosts (the default location for Ansible inventory file), and add one server to it as shown below:
$ sudo mkdir /etc/ansible $ sudo touch /etc/ansible/hosts
The entry under this file look like as shown below:
[example]
www.test.com
…where test is the group of servers you’re managing and www.test.com is the domain name (or IP address) of a server in that group. If you’re not using port 22 for SSH on this server, you will need to add it to the address, like www.test.com:2222, since Ansible defaults to port 22 and won’t get this value from your ssh config file.
Now that you’ve installed Ansible and created an inventory file, it’s time to run a command to see if everything works! Enter the following in the terminal (we’ll do something safe so it doesn’t make any changes on the server):
$ ansible test -m ping -u [username]
…where [username] is the user you use to log into the server. If everything worked, you should see a message that shows www.test.com | success >>, then the result of your ping. If it didn’t work, run the command again with -vvvv on the end to see the verbose output. Chances are you don’t have SSH keys configured properly—if you log in with ssh username@www.test.com and that works, the above Ansible command should work, too.
Currently, Ansible can be run from any machine with Python 2 (versions 2.6 or 2.7) or Python 3 (versions 3.5 and higher) installed (Windows isn’t supported for the control machine). This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
The supported operating system versions are:
Windows Nano Server is not currently supported by Ansible since it does not have access to the full .NET Framework that is used by the majority of the modules and internal components.
On the managed nodes, you need a way to communicate, which is normally ssh. By default this uses sftp. If that’s not available, you can switch to scp in ansible.cfg. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later).
If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can use the yum module or dnf module in Ansible to install this package on remote systems that do not have it.
--- - hosts: all vars: mario_file: /opt/collab package_list: - 'git' tasks: - name: Check for collab file stat: path: "{{ collab_file }}" register: collab_f - name: Install git if collab file exists become: "yes" package: name: "{{ item }}" state: present with_items: "{{ package_list }}" when: collab_f.stat.exists
As shown in the example above, the first task verifies the ‘stat’ module to check if the file exists then captures the output in a variable called ‘collab_f’ using the ‘register’ term. One uses the registered variable in any other task. In our case, we capture the stats of ‘/opt/collab’ file and in the next task, we install the package list if the file exists.
Installing Ansible on macOS is a single-liner command.
It can be installed with the help of “pip”, the Python package manager.
Run the below command to install pip on macOS:
$ sudo easy_install pip
Then install Ansible with:
$ sudo pip install ansible
Yes, it is possible to increase Ansible reboot module which wait for 600 seconds to certain values. All you can use the below syntax:
- name: Reboot a Linux system
reboot:
reboot_timeout: 1200
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default tmp directory Ansible uses ( ~/.ansible/tmp). If you see module failures on Solaris machines, this is likely the problem. There are several workarounds:
You can set remote_tmp to a path that will expand correctly with the shell you are using (see the plugin documentation for C shell, fish shell, and Powershell). For example, in the ansible config file you can set:
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this:
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
You can set ansible_shell_executable to the path to a POSIX compatible shell. For instance, many Solaris hosts have a POSIX shell located at /usr/xpg4/bin/sh so you can set this in inventory like so:
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
For Linux , the protocol used is SSH
For Windows, Protocol used in WinRM
Often a user of a configuration management system will want to keep inventory in a different software system. Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey CMDB software.
Ansible easily supports all of these options via an external inventory system. The inventory directory contains some of these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack.
Create an AWS infrastructure using the Ansible EC2 dynamic inventory
Suppose you have the requirement to launch an instance and install some packages on top of it in one go, what would be your approach be?
To set up dynamic inventory management, you need two files:
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
An ec2.py file is a Python script, which is responsible for fetching details of the EC2 instance, whereas the ec2.ini file is a configuration file which is used by ec2.py
Ansible uses AWS Python library boto to communicate with AWS using APIs. To allow this communication, export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.
You can use the inventory in two ways:
An example playbook with EC2 dynamic inventory, which will simply ping all machines
$ ansible -i ec2.py all -m ping
Ansible modules are components installed with Ansible that do all the heavy lifting. They can be classified as core and extra modules. The main difference between the two is that core modules come with Ansible and are built and maintained by Ansible Inc. and RedHat employees. Extra modules can be easily installed using your distribution’s package manager or directly from GitHub.
Below is the table for core modules :
Module | Function |
---|---|
copy | Copies files or folders from the local machine to the configured server |
user | Creates, deletes or alters user accounts on the configured server |
npm | Manages Node.JS packages |
ping | Checks SSH connection to servers defined in inventory |
setup | Collects various information about servers |
cron | Manages crontab |
Majority of modules expect one or more arguments that tune the way a module works; for example, the copy module has src and dest arguments that tell the module what is the source and destination of the file or directory to be copied.
Below command will copy a file named "my_app.zip" from the current directory to "/var/www/html" directory on the configured server.
# ansible -m copy -a “src=my_app.zip dest=/var/www/html”
Ansible tasks are atomic actions defined by name and an accompanying module.
Anatomy of this task is quite simple; its name is “install mysql”, a module in use is “yum”, and it has two arguments; name argument refers to the package which needs to be in the state of “installed”.
This brings us to one important Ansible feature: Ansible does not expect commands or functions that do something – Ansible tasks describe the desired state of the configured server. If a package named “mysql” is installed, Ansible will not install it again. This means that it is perfectly safe to run tasks several times as they will not alter the system if its configuration is in the state described in those tasks.
A single task can only use one module. If, for example, I wanted to install MySQL and start the mysqld service, I would need two tasks to achieve that.
Tasks for themselves have no real use case so we combine them into playbooks. Therefore, playbooks are collections of tasks that describe a state of the configured server and configure it. Playbooks are written in YAML because it is extremely human and machine-readable.
An example playbook may look like this:
name : Common tasks
hosts : webservers become : true tasks : - name : task 1 . . . . handlers : - name : handler 1
Reading from the top, the line starting with “name” is playbook name.
Note: Tasks will be executed one by one in the order they are written in. It is important to note that in the situation where Ansible executes a playbook on several servers, tasks are running in parallel on all servers.
During the configuration process, there is sometimes a need to conditionally execute the task. Handlers are one of the conditional forms supported by Ansible. A handler is similar to a task, but it runs only if it was notified by a task.
A task will fire the notification if Ansible recognizes that the task has changed the state of the system. An example situation where handlers are useful is when a task modifies a configuration file of some service, MySQL for example. In order for changes to take effect, the service needs to be restarted.
name : change mysql max_connections
copy : src=edited_my.cnf dest=/etc/my.cnf notify :
restart_mysql
Notify keyword acts as a trigger for the handler named “restart_mysql”
Yes. Ansible-Doc displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short “snippet” which can be pasted into a playbook.
Yes. vmware_guest can deploy a virtual machine with required settings on a standalone ESXi server.
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms of Working With Playbooks, it actually means what hosts to apply a particular configuration or IT process to.
Below is the sample example of pattern usage
# ansible <pattern_goes_here> -m <module_name> -a <arguments> # ansible webservers -m service -a "name=httpd state=restarted"
A pattern usually refers to a set of groups (sets of hosts) – in the above case, machines in the “webservers” group.
The following patterns are equivalent and target all hosts in the inventory:
# all *
Ansible Vault feature can encrypt any structured data file used by Ansible. This can include group_vars/ or host_vars/ inventory variables, variables loaded by include_vars or vars_files, or variable files passed on the ansible-playbook command line with -e @file.yml or -e @file.json. Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you’d like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
The password used with vault currently must be the same for all files you wish to use together at the same time.
How to update the encrypted data using ansible vault?
To update the AWS keys added to the encrypted file, you can later use Ansible-vault's edit subcommand as follows:
$ ansible-vault edit aws_creds.yml Vault password:
The edit command does the following operations:
Another way to update the content of the file. Decrypt the file as follows:
$ ansible-vault decrypt aws_creds.yml Vault password: Decryption successful
Once updated, this file can then be encrypted again
Blocks allow for logical grouping of tasks and in play error handling. Most of what you can apply to a single task can be applied at the block level, which also makes it much easier to set data or directives common to the tasks. This does not mean the directive affects the block itself but is inherited by the tasks enclosed by a block. i.e. a when will be applied to the tasks, not the block itself.
Block example
tasks:
  - name: Install Apache
    block:
      - yum:
          name: "{{ item }}"
          state: installed
        with_items:
          - httpd
          - memcached
      - template:
          src: templates/src.j2
          dest: /etc/foo.conf
      - service:
          name: bar
          state: started
          enabled: True
    when: ansible_distribution == 'CentOS'
    become: true
    become_user: root
If you know you don’t need any factual data about your hosts and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms.
In any play, just do this:
- hosts: whatever
  gather_facts: no
It is also possible to make groups of groups using the :children suffix in INI or the children: entry in YAML. You can apply variables using :vars or vars::
[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest
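A hedged sketch of the same structure in the YAML inventory format — only the southeast branch is shown, and only a couple of the vars are carried over:

all:
  children:
    usa:
      children:
        southeast:
          children:
            atlanta:
              hosts:
                host1:
                host2:
            raleigh:
              hosts:
                host2:
                host3:
          vars:
            some_server: foo.southeast.example.com
            halon_system_timeout: 30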
Ansible allows you to ‘become’ another user, different from the user that logged into the machine (remote user). This is done using existing privilege escalation tools such as sudo, su, pfexec, doas, pbrun, dzdo, ksu, runas, machinectl and others.
For example, to manage a system service (which requires root privileges) when connected as a non-root user (this takes advantage of the fact that the default value of become_user is root):
- name: Ensure the httpd service is running
  service:
    name: httpd
    state: started
  become: yes
By default, variables are merged/flattened to the specific host before a play is run. This keeps Ansible focused on the Host and Task, so groups don’t really survive outside of inventory and host matching. By default, Ansible overwrites variables, including the ones defined for a group and/or host (see the hash_behaviour setting to change this). The order/precedence is (from lowest to highest): the all group (the parent of all other groups), parent groups, child groups, and finally the host itself.
When groups of the same parent/child level are merged, it is done alphabetically, and the last group loaded overwrites the previous groups. For example, an a_group will be merged with b_group and b_group vars that match will overwrite the ones in a_group.
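A small hypothetical illustration of the precedence order: in the inventory below, host1 ends up with ntp_server=ntp.atlanta.example.com because the host-level value wins, while host2 gets the southeast value, which in turn overrides the all-group default:

[atlanta]
host1 ntp_server=ntp.atlanta.example.com
host2

[southeast:children]
atlanta

[southeast:vars]
ntp_server=ntp.southeast.example.com

[all:vars]
ntp_server=ntp.example.com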
A cache plugin implements a backend caching mechanism that allows Ansible to store gathered facts or inventory source data without the performance hit of retrieving them from the source.
The default cache plugin is the memory plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs.
Enabling Cache Plugins
Only one cache plugin can be active at a time. You can enable a cache plugin in the Ansible configuration, either via an environment variable:
export ANSIBLE_CACHE_PLUGIN=jsonfile
or in the ansible.cfg file:
[defaults]
fact_caching = redis
You will also need to configure other settings specific to each plugin. Consult the individual plugin documentation or the Ansible configuration for more details.
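For example, a hedged jsonfile configuration — the cache directory path is arbitrary and must be writable by the user running Ansible:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400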
Ansible executes playbooks over SSH but it is not limited to this connection type. With the host-specific parameter ansible_connection=<connector>, the connection type can be changed. The following non-SSH based connectors are available:
local
This connector can be used to deploy the playbook to the control machine itself.
docker
This connector deploys the playbook directly into Docker containers using the local Docker client.
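A quick, hedged example: assuming a running container named mycontainer with Python available inside it, an ad-hoc ping can target the container directly (the trailing comma tells Ansible the -i value is a literal host list):

$ ansible all -i "mycontainer," -c docker -m ping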
With fact caching enabled, it is possible for a machine in one group to reference variables about machines in another group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.
To benefit from cached facts, you will want to change the gathering setting to smart or explicit or set gather_facts to False in most plays.
Currently, Ansible ships with two persistent cache plugins: redis and jsonfile.
To configure fact caching using redis, enable it in ansible.cfg as follows:
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400   # seconds
Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of “facts” in Ansible. Effectively, registered variables are just like facts.
When using register with a loop, the data structure placed in the variable during the loop contains a results attribute, which is a list of all responses from the module.
- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True

    - shell: /usr/bin/bar
      when: foo_result.rc == 5
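For the loop case described above, a minimal sketch — the results attribute holds one entry per item:

- shell: "echo {{ item }}"
  with_items:
    - one
    - two
  register: echo_out

# echo_out.results is a list with one module response per loop item
- debug:
    var: echo_out.results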
Unlike most Ansible modules, network modules do not run on the managed nodes. From a user’s point of view, network modules work like any other modules. They work with ad-hoc commands, playbooks, and roles. Behind the scenes, however, network modules use a different methodology than the other (Linux/Unix and Windows) modules use. Ansible is written and executed in Python. Because the majority of network devices cannot run Python, the Ansible network modules are executed on the Ansible control node, where ansible or ansible-playbook runs.
Network modules also use the control node as a destination for backup files, for those modules that offer a backup option. With Linux/Unix modules, where a configuration file already exists on the managed node(s), the backup file gets written by default in the same directory as the new, changed file. Network modules do not update configuration files on the managed nodes, because network configuration is not written in files. Network modules write backup files on the control node, usually in the backup directory under the playbook root directory.
Set the hostname on a Cisco switch using network modules
If the network device runs the Cisco IOS operating system, use the ios_config module, which manages Cisco IOS configuration sections.
Below is a playbook for setting the hostname of a Cisco switch:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: set a hostname
      ios_config:
        lines: hostname sw2
        provider:
          host: 10.0.0.15
          username: admin
          password: adc123
          authorize: true
          auth_pass: abcjfe767
Run the playbook
$ ansible-playbook playbook.yml -v
Verify that the switch configuration was applied correctly:
$ ssh admin@10.0.0.15
Password:
sw2>
Because network modules are executed on the control node rather than on the managed nodes, they can support multiple communication protocols. The communication protocol (XML over SSH, CLI over SSH, API over HTTPS) selected for each network module depends on the platform and the purpose of the module. Some network modules support only one protocol; some offer a choice. The most common protocol is CLI over SSH.
You set the communication protocol with the ansible_connection variable:
Value of ansible_connection | Protocol | Requires | Persistent? |
---|---|---|---|
network_cli | CLI over SSH | network_os setting | yes |
netconf | XML over SSH | network_os setting | yes |
httpapi | API over HTTP/HTTPS | network_os setting | yes |
local | depends on provider | provider setting | no |
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to validate the certificate WinRM is using for an HTTPS connection. If the certificate cannot be validated (such as in the case of a self-signed cert), it will fail the verification process.
To ignore certificate validation, add ansible_winrm_server_cert_validation: ignore to inventory for the Windows host.
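For example, in an INI inventory — the group name windows is just illustrative:

[windows:vars]
ansible_winrm_server_cert_validation=ignore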
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
The AWX Project -- AWX for short -- is an open source community project, sponsored by Red Hat, that enables users to better control their Ansible project use in IT environments. AWX is the upstream project from which the Red Hat Ansible Tower offering is ultimately derived.
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:
# ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host.
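Individual facts can then be referenced directly in plays and templates; a minimal sketch using two standard facts:

- hosts: all
  tasks:
    # prints e.g. "CentOS 7.4" depending on the managed host
    - debug:
        msg: "{{ ansible_distribution }} {{ ansible_distribution_version }}"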
One needs to first install Ansible on a Linux or Windows system. One can then use the below playbook format, based on the ec2_key module, to create an AWS EC2 key pair:
File: myec2.key.yml

---
- hosts: local
  connection: local
  gather_facts: no
  tasks:
    - name: Create a new EC2 key
      ec2_key:
        name: collab-key
        region: us-east-1
      register: myec2_key_result

    - name: Save private key
      copy: content="{{ myec2_key_result.key.private_key }}" dest="./aws.collab.pem" mode=0600
      when: myec2_key_result.changed
Where,
ec2_key: – The module that maintains EC2 key pairs.
name: collab-key – Name of the key pair.
region: us-east-1 – The AWS region to use.
register: myec2_key_result – Saves the result of the generated key to the myec2_key_result variable.
copy: content="{{ myec2_key_result.key.private_key }}" dest="./aws.collab.pem" mode=0600 – Writes the contents of myec2_key_result.key.private_key to a file named aws.collab.pem in the current directory and sets the file mode to 0600 (Unix file permissions).
when: myec2_key_result.changed – Only saves the file when myec2_key_result.changed is true, so an existing key file is not overwritten.
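With the playbook in place, run it as usual:

$ ansible-playbook myec2.key.yml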
It is pretty much doable. Let’s take an example: to find and replace all instances of “collab” with “collabera” within a file named /opt/collab.conf, use the replace module:
- replace:
    path: /opt/collab.conf
    regexp: 'collab'
    replace: 'collabera'
    backup: yes
Yes, it is definitely an insecure way of logging. To prevent a task from writing confidential information to syslog (for example), set no_log: true on the task:
- name: mysecret stuff
  command: "echo {{ secret_root_password }} | sudo su -"
  no_log: true
One can easily upgrade Ansible to a specific version using the below one-liner:
sudo pip install ansible==<version-number>
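For example, to pin a specific release (2.4.1.0 is just an illustrative version number):

$ sudo pip install ansible==2.4.1.0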
You can refer to playbook YAML file below to deploy WordPress application inside Docker container using Ansible:
---
- hosts: localhost
  gather_facts: no
  vars:
    docker_volume: database_data
    docker_network: ansible_network
    db_name: database
    wp_name: wordpress
    wp_host_port: 8000
    wp_container_port: 80
  tasks:
    - name: "Create a Volume"
      docker_volume:
        name: "{{ docker_volume }}"

    # ansible 2.2 only
    - name: "Create a network"
      docker_network:
        name: "{{ docker_network }}"

    - name: "Launch database container"
      docker_container:
        name: "{{ db_name }}"
        image: mysql:5.7
        volumes:
          - "{{ docker_volume }}:/var/lib/mysql:rw"
        restart: true
        networks:
          - name: "{{ docker_network }}"
            aliases:
              - "{{ db_name }}"
        env:
          MYSQL_ROOT_PASSWORD: wordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress

    - name: "Launch wordpress container"
      docker_container:
        name: "{{ wp_name }}"
        image: wordpress:latest
        ports:
          - "{{ wp_host_port }}:{{ wp_container_port }}"
        restart: true
        networks:
          - name: "{{ docker_network }}"
            aliases:
              - "{{ wp_name }}"
        env:
          WORDPRESS_DB_HOST: "{{ db_name }}:3306"
          WORDPRESS_DB_PASSWORD: wordpress
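Assuming the playbook is saved as wordpress.yml (the filename is arbitrary), run it and then browse to the mapped host port — 8000 in the vars above:

$ ansible-playbook wordpress.yml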
The mkpasswd utility that is available on most Linux systems is a great option:
mkpasswd --method=sha-512
In OpenBSD, a similar option is available in the base system called encrypt(1):
encrypt
If the above utilities are not installed on your system then you can still easily generate these passwords using Python with the passlib hashing library.
# pip install passlib

Once the library is ready, SHA-512 password values can then be generated as follows:

# python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Vault can be used in playbooks to keep secret data.
If you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:
- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The no_log attribute can also apply to an entire play:
- hosts: all
  no_log: True
Note that the use of the no_log attribute does not prevent data from being shown when debugging Ansible itself via the ANSIBLE_DEBUG environment variable.
Using the docker modules requires having docker-py >= 1.7.0 installed on the host running Ansible.
$ pip install 'docker-py>=1.7.0'
The docker_service module also requires docker-compose
$ pip install 'docker-compose>=1.7.0'
You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command-line parameters first, then environment variables. If neither a command-line option nor an environment variable is found, a default value is used; the default values are documented under each module's Parameters section.
Control how modules connect to the Docker API by passing parameters such as docker_host, api_version, timeout, tls, tls_verify, cacert_path, cert_path, key_path, and ssl_version.
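A hedged sketch of passing connection parameters on a task — the daemon address and certificate paths below are placeholders:

- name: Run a container against a remote Docker daemon
  docker_container:
    name: web
    image: nginx
    docker_host: tcp://192.0.2.10:2376     # placeholder daemon address
    tls_verify: yes
    cacert_path: /path/to/ca.pem           # placeholder certificate paths
    cert_path: /path/to/client-cert.pem
    key_path: /path/to/client-key.pem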