ansible | radically simple IT automation platform | DevOps library
kandi X-RAY | ansible Summary
- Run AnsibleModule
- Return an error message about the required library
- Convert obj to text
- Spawn the daemon
- Create a symlink
- Convert obj to bytes
- Ensures that the file is present in the destination
- Sets the mode of the given path
- Recursively change ownership of a directory
- Run a wrapped module
- Construct a rule string
- Enforces the host state
- Determine if this file is unarchived
- List all files in the archive
- Convert a permstr to octal mode
- Return CRC32 for a given path
- Get service tools
- Get the status of the service
- Recursively change the owner of a directory
- Enable the service
- Return distro release info
- Runs Ansible module
- Create a new user
- Construct the rule
- Enforce the state of a host
- Set the mode of a file or a diff
- Enable the service
- Modify usermod
- Ensures that a file exists
- Modify the user
- Enable new service
- Ensure a symlink exists
- Start the daemon
- Create a user
ansible Key Features
ansible Examples and Code Snippets
- name: Generate Switch Configurations
  vars:
    dest_dir: /home/usr/complete_config/{{ project_name }}
    switch_device_name: '{{ device_name|lower }}-{{ site_abbrev|lower }}-{{ item.installation_floor }}{{ "%02d" | format(item.id) }}'
  template:
    src: /home/usr/templates/switch-template.j2
    dest: "{{ dest_dir }}/{{ switch_device_name }}"
    lstrip_blocks: yes
  delegate_to: localhost
  loop: "{{ switch_stacks }}"
.
├── filter_plugins
│   └── orgas_utils.py
└── playbook.yml
#!/usr/bin/python
class FilterModule(object):
    def __init__(self):
        self.orga_nodes_list = []

    def filters(self):
        return {
            'orgas_flattened': self.orgas_flattened
        }

    def _process_orga(self, orga, direct_parent=None):
        if direct_parent:
            current_parents = [x for x in self.orga_nodes_list if x['key'] == direct_parent][0]['parents'] + [direct_parent]
        else:
            current_parents = []
        current_node = {
            'key': orga['key'],
            'description': orga['description'],
            'parents': current_parents
        }
        self.orga_nodes_list.append(current_node)
        for child in (orga['children'] if 'children' in orga.keys() else []):
            self._process_orga(child, direct_parent=current_node['key'])

    def orgas_flattened(self, orgas):
        for orga in orgas:
            self._process_orga(orga)
        return self.orga_nodes_list
---
- hosts: localhost
  gather_facts: false
  vars:
    orgas:
      - key: orga1
        description: "Description of orga"
      - key: orga2
        description: "Description of orga"
      - key: orga3
        description: "Description of orga"
        children:
          - key: sub-orga1
            description: "Description of sub-orga"
      - key: orga4
        description: "Description of orga"
        children:
          - key: sub-orga2
            description: "Description of sub-orga"
          - key: sub-orga3
            description: "Description of sub-orga"
            children:
              - key: sub-sub-orga1
                description: "Description of sub-sub-orga"
          - key: sub-orga4
            description: "Description of sub-orga"
            children:
              - key: sub-sub-orga2
                description: "Description of sub-sub-orga"
  tasks:
    - name: Show list processed by custom filter
      debug:
        msg: "{{ orgas | orgas_flattened }}"
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Show list processed by custom filter] ***********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
{
"description": "Description of orga",
"key": "orga1",
"parents": []
},
{
"description": "Description of orga",
"key": "orga2",
"parents": []
},
{
"description": "Description of orga",
"key": "orga3",
"parents": []
},
{
"description": "Description of sub-orga",
"key": "sub-orga1",
"parents": [
"orga3"
]
},
{
"description": "Description of orga",
"key": "orga4",
"parents": []
},
{
"description": "Description of sub-orga",
"key": "sub-orga2",
"parents": [
"orga4"
]
},
{
"description": "Description of sub-orga",
"key": "sub-orga3",
"parents": [
"orga4"
]
},
{
"description": "Description of sub-sub-orga",
"key": "sub-sub-orga1",
"parents": [
"orga4",
"sub-orga3"
]
},
{
"description": "Description of sub-orga",
"key": "sub-orga4",
"parents": [
"orga4"
]
},
{
"description": "Description of sub-sub-orga",
"key": "sub-sub-orga2",
"parents": [
"orga4",
"sub-orga4"
]
}
]
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
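For quick iteration outside of Ansible, the filter class above can also be exercised directly from Python. A minimal sketch, assuming it is run from the project root shown in the tree; the test script name and the sample data are made up for illustration:
# test_orgas_utils.py - hypothetical local check of the custom filter
from filter_plugins.orgas_utils import FilterModule

orgas = [
    {'key': 'orga1', 'description': 'Description of orga'},
    {'key': 'orga2', 'description': 'Description of orga',
     'children': [{'key': 'sub-orga1', 'description': 'Description of sub-orga'}]},
]

# Look up the filter by name, exactly as Ansible would
flattened = FilterModule().filters()['orgas_flattened'](orgas)
for node in flattened:
    print(node['key'], node['parents'])
# Expected output:
# orga1 []
# orga2 []
# sub-orga1 ['orga2']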
shell> cat hosts
sw1
sw2
sw3
shell> cat playbook.yml
- name: Playbook
  hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        cli_result:
          stdout:
            - TABLE_vrf:
                ROW_vrf: to be continued
    - copy:
        content: "{{ _dict|to_nice_yaml(indent=2) }}"
        dest: cli_result.json
      delegate_to: localhost
      run_once: true
      vars:
        _keys: "{{ ansible_play_hosts }}"
        _vals: "{{ ansible_play_hosts|
                   map('extract', hostvars, ['cli_result', 'stdout'])|
                   list }}"
        _dict: "{{ dict(_keys|zip(_vals)) }}"
sw1:
- TABLE_vrf:
    ROW_vrf: to be continued
sw2:
- TABLE_vrf:
    ROW_vrf: to be continued
sw3:
- TABLE_vrf:
    ROW_vrf: to be continued
"msg": {
"affectedResources": [
"/Manager/tes/objects/bew/13"
],
"completedTime": "2022-03-16T..."
"createdTime": "2022...."
}
# Navigate to the appropriate element in the JSON object
string = json_object["msg"]["affectedResources"][0]
for item in string.split('/'):   # iterate over the items between '/'
    try:
        return_val = int(item)   # if the item is an integer
        break                    # quit and keep that value
    except ValueError:
        return_val = None        # or other default value
        continue
# Result
print(return_val, type(return_val))   # 13 <class 'int'>
result_job:
  json:
    affectedResources:
      - /Manager/tes/objects/bew/13
    completedTime: 2022-03-16T...
    createdTime: 2022....
"{{ result_job.json.affectedResources.0 }}"
/Manager/tes/objects/bew/13
"{{ result_job.json.affectedResources.0.split('/')[-1] }}"
"{{ result_job.json.affectedResources.0.split('/')|last }}"
'13'
- hosts: localhost
  gather_facts: no
  vars:
    json: "{{ lookup('file', './file2.json') | from_json }}"
  tasks:
    - name: display
      debug:
        msg: "{{ server.0.name }} -> {{ cpath[0]['server-start'][2]['java-home'] }}"
      loop: "{{ json[1].domain }}"
      vars:
        server: "{{ item.server | selectattr('name', 'defined') }}"
        cpath: "{{ item.server | selectattr('server-start', 'defined') }}"
      when: item.server is defined and (item.server | selectattr('server-start', 'defined')) != []
skipping: [localhost] => (item={'name': 'mydom'})
skipping: [localhost] => (item={'domain-version': '12.2.1.3.0'})
skipping: [localhost] => (item={'server': [{'name': 'AdminServer'}, {'ssl': {'name': 'AdminServer'}}, {'listen-port': '12400'}, {'listen-address': 'mydom.host1.bank.com'}]})
skipping: [localhost] => (item={'server': [{'name': 'myserv1'}, {'ssl': [{'name': 'myserv1'}, {'login-timeout-millis': '25000'}]}, {'log': [{'name': 'myserv1'}, {'file-name': '/web/bea_logs/domains/mydom/myserv1/myserv1.log'}]}]})
ok: [localhost] => (item={'server': [{'name': 'myserv2'}, {'ssl': {'name': 'myserv2'}}, {'reverse-dns-allowed': 'false'}, {'log': [{'name': 'myserv2'}, {'file-name': '/web/bea_logs/domains/mydom/myserv2/myserv2.log'}]}, {'server-start': [{'name': 'CANVL01'}, {'java-vendor': 'Sun'}, {'java-home': '/web/bea/platform1221/jdk'}]}]}) => {
"msg": "myserv2 -> /web/bea/platform1221/jdk"
}
- name: Three
  set_fact:
    second_test_var: >-
      {{
        test_var
        if test_var is defined
        else groups | my_custom_filter_plugin("should not execute")
      }}
- hosts: localhost
  gather_facts: no
  tasks:
    - set_fact:
        second_test_var: >-
          {{
            test_var
            if test_var is defined
            else I_do_not_exists | int
          }}
      vars:
        test_var: foobar
    - debug:
        var: second_test_var
    - name: Showing that doing the same with `default` errors
      set_fact:
        second_test_var: "{{ test_var | default(I_do_not_exists | int) }}"
      vars:
        test_var: foobar
TASK [set_fact] *****************************************************************
ok: [localhost]
TASK [debug] ********************************************************************
ok: [localhost] =>
second_test_var: foobar
TASK [Showing that doing the same with `default` errors] ************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: 'I_do_not_exists' is undefined
The error appears to be in '/usr/local/ansible/play.yml': line 18, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Showing that doing the same with `default` errors
^ here
l = map(str, l)
l = list(map(str, l))
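The difference between the two lines matters in Python 3, where map() returns a lazy iterator rather than a list. A minimal sketch showing why the list() wrapper is usually what you want when the result is reused or serialized:
l = [1, 2, 3]

m = map(str, l)
print(m)          # <map object at 0x...> - a lazy iterator in Python 3, consumed once
print(list(m))    # ['1', '2', '3']
print(list(m))    # [] - the iterator is already exhausted

l2 = list(map(str, l))
print(l2)         # ['1', '2', '3'] - a real list, safe to reuse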
Trending Discussions on ansible
QUESTION
I'm trying to use my Ansible playbook to call upon a site YAML reference to create filenames that increment for multiple switches. What am I doing wrong? I believe the playbook is pulling from the host YAML.
Format: {device_name}-{site_abbrev}-{installation_floor}{id}.txt
e.g.: with two switches:
- swi-lon-101.txt
- swi-lon-202.txt
host_vars/host.yaml
project_name: test
device_name: swi
site_abbrev: lon
device_type: switch
switch_stacks:
  - id: 01
    installation_floor: 1
  - id: 02
    installation_floor: 2
templates/switch-template.j2
{% for stack in switch_stacks %}
set system host-name {{ device_name }}-{{ site_abbrev }}-{{ stack.installation_floor }}{{ stack.id }}
{% endfor %}
The playbook is where the problem lies: how do I get the hostname to be created correctly for each of the two switches?
My playbook:
- name: Create Folder Structure
  hosts: junos
  gather_facts: false
  tasks:
    - name: Create Site Specific Folder
      file:
        path: /home/usr/complete_config/{{ project_name }}
        state: directory
        mode: 0755
    - name: Set Destination Directory & Filename for Switch Configurations
      set_fact:
        dest_dir: /home/usr/complete_config/{{ project_name }}
        full_device_name: "{{ device_name|lower }}-{{ site_abbrev|lower }}-{{ switch_stacks.installation_floor }}{{ switch_stacks.id }}.txt"
      when: device_type == 'switch'
Ansible error, running:
ansible-playbook playbooks/site-playbook.yaml
TASK [Set Destination Directory & Filename for Switch Configurations] **************************************************
fatal: [site-switch]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'list object' has no attribute 'installation_floor'\n\nThe error appears to be in '/home/usr/playbooks/switch-playbook.yaml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Set Destination Directory & Filename for Switch Configurations\n ^ here\n"}
ANSWER
Answered 2022-Mar-31 at 18:39

So, you do need a loop in order to set this fact; otherwise you are trying to access an installation_floor attribute on a list, which cannot work.
You will also face an issue with the id of your items in switch_stacks, as 01 is an int and will simply end up displayed as 1. So you either need to declare those as strings, or pad them with a format filter.
So, you end up with this task:
- set_fact:
    full_device_name: >-
      {{
        full_device_name
        | default([])
        + [
          device_name | lower ~ '-' ~
          site_abbrev | lower ~ '-' ~
          item.installation_floor ~
          "%02d" | format(item.id) ~ '.txt'
        ]
      }}
  loop: "{{ switch_stacks }}"
  when: device_type == 'switch'
Which will create a list:
full_device_name:
- swi-lon-101.txt
- swi-lon-202.txt
Given the playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - set_fact:
        full_device_name: >-
          {{
            full_device_name
            | default([])
            + [
              device_name | lower ~ '-' ~
              site_abbrev | lower ~ '-' ~
              item.installation_floor ~
              "%02d" | format(item.id) ~ '.txt'
            ]
          }}
      loop: "{{ switch_stacks }}"
      when: device_type == 'switch'
      vars:
        device_name: swi
        site_abbrev: lon
        device_type: switch
        switch_stacks:
          - id: 01
            installation_floor: 1
          - id: 02
            installation_floor: 2
    - debug:
        var: full_device_name
This yields:
TASK [set_fact] ************************************************************
ok: [localhost] => (item={'id': 1, 'installation_floor': 1})
ok: [localhost] => (item={'id': 2, 'installation_floor': 2})
TASK [debug] ***************************************************************
ok: [localhost] =>
full_device_name:
- swi-lon-101.txt
- swi-lon-202.txt
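From there, the generated names can be fed back into a template task that actually writes one configuration file per stack member, which is essentially what the snippet at the top of this section does. A sketch, assuming the same host variables; the paths are illustrative, not part of the original answer:
- name: Generate Switch Configurations
  template:
    src: /home/usr/templates/switch-template.j2
    dest: "/home/usr/complete_config/{{ project_name }}/{{ device_name|lower }}-{{ site_abbrev|lower }}-{{ item.installation_floor }}{{ '%02d' | format(item.id) }}.txt"
  loop: "{{ switch_stacks }}"
  when: device_type == 'switch'
  delegate_to: localhost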
QUESTION
This is my ansible code
- name: no need to import it.
  ansible.builtin.uri:
    url: >
      https://{{ vertex_region }}-aiplatform.googleapis.com/v1/projects/{{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems
    method: GET
    headers:
      Content-Type: "application/json"
      Authorization: Bearer "{{ gcloud_auth }}"
  register: images
When checking with ansible-lint, it reports line too long (151 > 120 characters) (line-length).
The error is for the url part of the code. I already used > to break up the URL; I'm not sure how I can reduce it even more to fit within the line-length constraint imposed by ansible-lint.
ANSWER
Answered 2022-Mar-28 at 15:22

If you want to obey the lint line-length rule, you need to split your url across several lines.
> is the YAML folded block scalar indicator: newlines will be replaced by spaces, which is not what you want here.
The best solution is to use a double-quoted flow scalar, where you can escape newlines so that they are not converted to spaces, e.g.:
url: "https://{{ vertex_region }}-aiplatform.googleapis.com/v1/projects/\
{{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems"
You can add as many escaped newlines as you wish if the line is still too long.
https://yaml-multiline.info/ is a good resource to learn all the possibilities for multiline strings in YAML.
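To make the difference concrete, here is a small sketch contrasting the two scalar styles (the keys and values are made up): the folded scalar keeps a space where each newline was, while the escaped double-quoted scalar joins the pieces with nothing in between.
# Folded block scalar: the line break becomes a space -> "https://example .com/path" (broken URL)
url_folded: >-
  https://example
  .com/path

# Double-quoted flow scalar with an escaped newline -> "https://example.com/path"
url_escaped: "https://example\
  .com/path"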
QUESTION
I made a playbook with two tasks.
The first task is for getting all the directories in the selected directory.
The second task is for deleting the directories. But, I only want to delete a directory if the list length is longer than two.
---
- name: cleanup Backend versions
  hosts: backend
  become: true
  become_user: root
  vars:
    backend_deploy_path: /opt/app/test/
  tasks:
    - name: Get all the versions
      ansible.builtin.find:
        paths: "{{ backend_deploy_path }}"
        file_type: directory
      register: file_stat
    - name: Delete old versions
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ file_stat.files }}"
      when: file_stat.files|length > 2
When I run this playbook it deletes all the directories instead of keeping three directories.
My question is: how can I keep the variable updated, so that it keeps checking every time it tries to delete a directory?
ANSWER
Answered 2022-Mar-15 at 14:40

This won't be possible: once a module is executed, the result is saved in the variable and won't dynamically change with the state of the node.
What you should do instead is limit the list you are looping on with slice notation to exclude the last three items of the list: files[:-3].
So, your task deleting files would look like this:
- name: Delete old versions
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ file_stat.files[:-3] }}"
Side note: you probably want to sort that find result based on the creation date of the folders, too, something like:
loop: "{{ (file_stat.files | sort(attribute='ctime'))[:-3] }}"
QUESTION
I'm looking for a way of specifying that a module argument can't be used if another argument has a certain value. You can specify required_if to require an argument if another argument has a specific value, but I need the opposite: something that's conceptually similar to mutually_exclusive and might be called forbidden_if.
I'm developing a module that creates a login for an SQL server. It can either be a SQL login that's specific to the server or a Windows login that uses the domain controller. For an SQL login you must specify a password, but you can't for a Windows login, as this is set by the domain controller. Logins have an identifier (SID) that may be specified by the user for SQL logins but can't be for Windows.
Although it's a PowerShell module for a Windows host, I'll use Python examples because that's what the documentation is in.
This is the spec for a module that creates an SQL login
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
        password=dict(type='str', no_log=True, required=True),
        sid=dict(type='str', required=False),
    ),
    supports_check_mode=True
)
and one for a Windows login
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
    ),
    supports_check_mode=True
)
This is my current attempt at a spec for a combined module
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
        password=dict(type='str', no_log=True, required=False),
        sid=dict(type='str', required=False),
        login_type=dict(
            type='str',
            choices=['sql', 'windows'],
            default='sql',
            required=False
        )
    ),
    required_if=[
        ('login_type', 'sql', ('password')),
    ],
    supports_check_mode=True
)
I was able to make password required for the sql login_type. Since password and sid can't be specified for a windows login, I'd like to prevent them being used if login_type is windows. Is this possible and if so, how do I do it?
ANSWER
Answered 2022-Mar-12 at 16:17

I don't see a solution to your problem without coding the test:
arguments = dict(
    username=dict(type='str', required=True),
    password=dict(type='str', no_log=True, required=False),
    sid=dict(type='str', required=False),
    login_type=dict(
        type='str',
        choices=['sql', 'windows'],
        default='sql',
        required=False
    )
)

module = AnsibleModule(
    argument_spec=arguments,
    required_if=[
        ('login_type', 'sql', ('password',)),
    ],
    supports_check_mode=True
)

if module.params['login_type'] == 'windows' and (module.params['password'] or module.params['sid']):
    module.fail_json(msg="unable to use 'login_type=windows' with args 'password' or 'sid'")
FYI: I noticed an error in your code, you forgot the , in the test:
required_if=[
    ('login_type', 'sql', ('password',)),
],
Result:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "unable to use 'login_type=windows' with args 'password' or 'sid'"}
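A hypothetical task showing how the guard would surface to a playbook author; the module name sql_login and the variable are invented here purely for illustration:
- name: A windows login must not carry a password or sid
  sql_login:
    username: svc_app
    login_type: windows
    password: "{{ vault_sql_password }}"   # triggers the fail_json() guard above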
QUESTION
I am having a hard time learning how to create Ansible roles that follow the best practices according to the documentation. The use-case I am looking at is, for example, enabling Filebeat on a host. Filebeat can be configured by placing a module definition in the /etc/filebeat/modules.d folder.
It works fine when I am adding modules. Idempotence works: on each run of the role (playbook), a given set of modules is enabled.
But what should I do when I decide that a given module is no longer needed? I remove it from the role and rerun the playbook, so that all the remaining modules are enabled. But the previous run already enabled a module that the role no longer installs, so my server state is still altered in a way that differs from what the role now imposes.
My question is: should I take care of removing modules before I apply them, so I always start from, let's say, a fresh state?
E.g.:
- name: Remove modules
  file:
    dest: "/etc/filebeat/modules.d/{{ item }}"
    state: absent
  loop:
    - "module1.yml"
    - "module2.yml"
    - "module3.yml" # It was being installed in previous role, but not now
- name: Enable modules via 'modules.d' directory
  template:
    src: "modules.d/{{ item }}"
    dest: "/etc/filebeat/modules.d/{{ item }}"
    mode: '0644'
  loop:
    - "module1.yml"
    - "module2.yml"
So I remove module3.yml, because I remember that I've installed it before, and install module1.yml and module2.yml.
Instead of just installing what I need, no matter what has been installed before:
- name: Enable modules via 'modules.d' directory
  template:
    src: "modules.d/{{ item }}"
    dest: "/etc/filebeat/modules.d/{{ item }}"
    mode: '0644'
  loop:
    - "module1.yml"
    - "module2.yml"
Leaving me with module1.yml and module2.yml (desired) and, unfortunately, module3.yml (from the previous role).
How do I manage that to avoid such situations? And avoid treating the server as one big stateful machine where, even if I run a role, the output differs from what I want because something was done before that I cannot see in the current Ansible role code.
Do you code revert playbooks in your Ansible workflow to return to the initial state when needed?
I am curious. Thanks in advance for your reply.
ANSWER
Answered 2022-Mar-02 at 11:07

In a nutshell:
- name: Configure filebeat modules
  hosts: all
  vars:
    fb_modules_d:
      - file: module1.yml
        state: present
      - file: module2.yml
        state: present
      - file: module3.yml
        state: absent
  tasks:
    - name: Make sure all needed module files are present
      template:
        src: "modules.d/{{ item.file }}"
        dest: "/etc/filebeat/modules.d/{{ item.file }}"
        mode: '0644'
      loop: "{{ fb_modules_d | selectattr('state', '==', 'present') }}"
      notify: restart_filebeat
    - name: Make sure all disabled modules are removed
      file:
        dest: "/etc/filebeat/modules.d/{{ item.file }}"
        state: "{{ item.state }}"
      loop: "{{ fb_modules_d | selectattr('state', '==', 'absent') }}"
      notify: restart_filebeat
  handlers:
    - name: Restart filebeat service
      listen: restart_filebeat
      systemd:
        name: filebeat
        state: restarted
Note: I declared the variable inside the playbook for the example, but that one should most probably go inside your inventory (group or host level), and certainly not in a role (except in defaults, for documentation).
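Moved to the inventory, it could look like this; a sketch assuming a flat group_vars layout (the file path is illustrative):
# group_vars/all.yml
fb_modules_d:
  - file: module1.yml
    state: present
  - file: module2.yml
    state: present
  - file: module3.yml
    state: absent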
QUESTION
Is there a way to output only the last 5 lines of an Ansible shell output, for example?
Maybe using loops?
Example:
- name: Running Migrations
  ansible.builtin.shell: /some-script-produces-lot-of-output.sh
  register: ps
- debug: var=ps.stdout_lines
Debug should only output the last 5 lines.
ANSWER
Answered 2022-Jan-26 at 00:00

You can use Python's slicing notation for this:
- debug:
    var: ps.stdout_lines[-5:]
This will output the five lines counting from the end of the list (hence the negative index) up to the end of the list.
Given the tasks
- command: printf "line1\nline2\nline3\nline4\nline5\nline6\nline7\nline8\nline9\nline10"
  register: ps
- debug:
    var: ps.stdout_lines[-5:]
This yields:
TASK [command] ************************************************************
changed: [localhost]
TASK [debug] **************************************************************
ok: [localhost] =>
ps.stdout_lines[-5:]:
- line6
- line7
- line8
- line9
- line10
QUESTION
I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master" is easy to set up with Ansible. However, setting up the second server node, which I refer to as "node" is giving me a challenge because I need to pull the value of the node-token from the master and use it to call the k3s install command on the "node" vm.
I'm using Ansible roles, and this is what my playbook looks like:
- hosts: all
  roles:
    - { role: k3sInstall, when: 'server_type is defined' }
    - { role: k3sUnInstall, when: 'server_type is defined' }
This is my main.yml file from the k3sInstall role directory:
- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install
This is my k3s_install_server.yml:
---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"
    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken
    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"
    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"
    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"
I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I'm setting when server_role == master.
This is the output of the debug:
TASK [k3sInstall : Print Node-Token fact] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [server1] => {
"msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
"msg": ""
}
My host file:
[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221
And I have the following host_vars files assigned:
server1.yml:
server_role: master
server2.yml:
server_role: node
I've tried assigning the nodeToken variable inside of k3sInstall/vars/main.yml as well as one level up from the k3sInstall role inside group_vars/all.yml but that didn't help.
I tried searching for a way to use block-level variables but couldn't find anything.
ANSWER
Answered 2022-Jan-24 at 20:03

If you set the variable for master only, it's not available for other hosts, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken
gives
ok: [master] =>
nodeToken: K10cf129cfedaf
ok: [node] =>
nodeToken: VARIABLE IS NOT DEFINED!
If you want to "apply all results and facts to all the hosts in the same batch" use run_once: true, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken
gives
ok: [master] =>
nodeToken: K10cf129cfedaf
ok: [node] =>
nodeToken: K10cf129cfedaf
In your case, add 'run_once: true' to the task
- name: Set Node-Token fact
  set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
  run_once: true
The above code works because the condition when: server_role == "master" is applied before run_once: true. Quoting from run_once:
"Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."
Safer code would be adding a standalone set_fact instead of relying on the precedence of the condition when: and run_once, e.g.
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: inventory_hostname == 'master'
- set_fact:
    nodeToken: "{{ hostvars['master'].nodeToken }}"
  run_once: true
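Translated to the inventory from the question (where server1 plays the master role), the same pattern would look roughly like this; a sketch, not tested against the original roles:
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
- set_fact:
    nodeToken: "{{ hostvars['server1'].nodeToken }}"
  run_once: true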
QUESTION
I am deploying multiple R versions on multiple virtual desktops. I've built R 3.6.3 and 4.1.2 from source on Ubuntu 18.04.3 LTS. Neither of them finds the system-wide Rprofile.site file in /etc/R or the system certificates in /usr/share/ca-certificates. However R (3.4.4) installed with APT has no such problems. I used Ansible, but for the sake of this question I reproduced the deployment for one host with a shell script.
#!/bin/bash
set -euo pipefail
# install build dependencies
(command -v apt && apt-get build-dep r-base) || (command -v dnf && dnf builddep R)
version='4.1.2'
major_version=$(echo "$version" | cut -c 1)
wget "https://cran.rstudio.com/src/base/R-$major_version/R-$version.tar.gz"
tar -xzf R-$version.tar.gz
cd R-$version
./configure \
--prefix=/opt/R/$version \
--sysconfdir=/etc/R \
--enable-R-shlib \
--with-pcre1 \
--with-blas \
--with-lapack
make -j 8
make install
Note: It should run on most Linux distros with APT or RPM package managers. Increase the -j argument of make if you have enough cores but not enough time.
So I defined the installation prefix as /opt/R/$version, but I want it to read config files from /etc/R (defined via --sysconfdir=/etc/R). However, when I open the R interactive shell (/opt/R/4.1.2/bin/R) and try to install a package:
install.packages("remotes")
then I am prompted to choose an R package mirror, even though one is already defined in /etc/R/Rprofile.site:
local({
    r <- getOption("repos")
    r["CRAN"] <- "https://cloud.r-project.org"
    options(repos = r)
})
I can force the R shell to find the Rprofile.site file by defining it with the R_PROFILE environment variable:
export R_PROFILE=/etc/R/Rprofile.site
/opt/R/4.1.2/bin/R
then call install.packages("remotes") again in the R shell. Now no mirror selection prompt is shown, but the following error appears:
Warning: unable to access index for repository https://cloud.r-project.org/src/contrib:
cannot open URL 'https://cloud.r-project.org/src/contrib/PACKAGES'
Warning message:
package ‘remotes’ is not available for this version of R
A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
So it cannot access the repository index (the real problem), then concludes that the ‘remotes’ package is not available for my R version. Which is nonsense, since it was not able to read the index in the first place. So I tried a simple HTTP call in the same R shell.
curlGetHeaders("https://example.com")
and got this error:
Error in curlGetHeaders("https://example.com") : libcurl error code 77:
unable to access SSL/TLS CA certificates
So it cannot find the CA certificates in /usr/share/ca-certificates.
The R installed by APT has none of these problems, so the compiled R simply does not search the right places. Even if I omit the --sysconfdir=/etc/R build option and copy or symlink the /etc/R directory under the prefix, so that it sits at /opt/R/4.1.2/etc, it will still not find its config files.
The greater problem is that I do not even know how to specify /usr/share so it may find the certificates. The rsharedir build option (the -- is also missing in the makefile) will not do, because it should point to /usr/share/R/, not /usr/share, which would be bad practice anyway.
I also tried all of this with the 3.6.3 R version and got the same results.
Question: How can I make the compiled R installations find the system-wide (or any) config files and the certificates?
Update 1
I ran the build script on an Ubuntu server which I do not manage with the same Ansible code. On both of them R successfully finds the certificates. So the problem is not with the build script but with the system state.
Update 2
I created a simple R script (install-r-package.R) which installs a package:
install.packages("renv", repos="https://cran.wu.ac.at/")
then I executed it with Rscript and traced which files it opens on both the correct and erroneous hosts:
strace -o strace.log -e trace=open,openat,close,read,write,connect,accept ./Rscript install-r-package.R
It turned out that on the problematic system R does not even try to open the certificate files.
The relevant trace snippet on the correct system:
connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("137.208.57.37")}, 16) = -1 EINPROGRESS (Operation now in progress)
openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 6
read(6, "-----BEGIN CERTIFICATE-----\nMIIH"..., 200704) = 200704
read(6, "--\n", 4096) = 3
read(6, "", 4096) = 0
close(6) = 0
on the problematic system:
connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("137.208.57.37")}, 16) = -1 EINPROGRESS (Operation now in progress)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so", O_RDONLY|O_CLOEXEC) = 6
read(6, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220A\0\0\0\0\0\0"..., 832) = 832
close(6) = 0
In both cases R connects to the mirror (137.208.57.37). After that, on the correct system it reads the ca-certificates.crt certificate file and many other .crt files. However, the erroneous system skips this step altogether.
ANSWER
Answered 2022-Jan-14 at 17:25

Finally I found the solution.
Since both systems have the same arch and OS, I cross-copied the compiled R installations between them. The R which was compiled on the problematic system but run on the correct one gave the warnings below after calling install.packages("renv", repos="https://cran.wu.ac.at/"):
Warning: unable to access index for repository https://cran.wu.ac.at/src/contrib:
internet routines cannot be loaded
Warning messages:
1: In download.file(url, destfile = f, quiet = TRUE) :
unable to load shared object '/opt/R/4.1.2/lib/R/modules//internet.so':
libcurl-nss.so.4: cannot open shared object file: No such file or directory
2: package ‘remotes’ is not available for this version of R
A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
If I do the reverse then the installation works.
The libcurl-nss.so.4: cannot open shared object file: No such file or directory line gave me the clue that different libcurl4 flavors were used as build dependencies. I checked which dev dependencies were installed on the systems: libcurl4-nss-dev 7.58.0-2ubuntu3 was installed on the problematic system and libcurl4-gnutls-dev 7.58.0-2ubuntu3.16 on the correct system.
So I purged libcurl4-nss-dev from the problematic system:
apt purge libcurl4-nss-dev -y
and installed libcurl4-gnutls-dev:
aptitude install libcurl4-gnutls-dev
I used aptitude because I had to downgrade libcurl3-gnutls 7.58.0-2ubuntu3.16 (now) -> 7.58.0-2ubuntu3 (bionic), which is a dependency of libcurl4-gnutls-dev. Then I ran make clean in the R-4.1.2 source directory. Finally I re-ran the build script from the question and got a working R, which can read the certificates and hence can reach the HTTPS package mirrors.
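A quick way to confirm which libcurl flavor R's internet module ended up linked against (using the module path from the warning above) is to inspect it with ldd; a sketch:
ldd /opt/R/4.1.2/lib/R/modules/internet.so | grep -i curl
# expect libcurl-gnutls.so.4 (or plain libcurl.so.4) rather than libcurl-nss.so.4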
QUESTION
I have Ansible 2.9.27 and I am trying to add an upstream remote for git repositories which I previously cloned with Ansible. Let's assume that the already cloned repositories are located in the /home/user/Documents/github/ directory and I want to add an upstream remote for each of them (git remote add upstream for each repo).
The task looks like this:
- name: Add remote upstream to github projects
  # TODO: how to add remote with git module?
  command: git remote add upstream git@github.com:{{ git_user }}/{{ item }}.git
  changed_when: false
  args:
    chdir: /home/user/Documents/github/{{ item }}
  loop: "{{ github_repos }}"
The issue is that ansible-lint doesn't like using command instead of the git module:
WARNING Listing 1 violation(s) that are fatal
command-instead-of-module: git used in place of git module
tasks/github.yaml:15 Task/Handler: Add remote upstream to github projects
What do I need to do to add the upstream remote for these repositories with the git module?
ANSWER
Answered 2021-Dec-29 at 18:44

Since what you want to achieve is not (yet...) supported by the git module, this is a very legitimate use of command.
In such cases, it is possible to silence the specific rule in ansible-lint for that specific task.
To go a bit further, your changed_when: false clause looks a bit like a quick-and-dirty fix to silence the no-changed-when rule; it can be enhanced in conjunction with a failed_when clause to detect cases where the remote already exists.
Here is how I would write that task to be idempotent, documented and passing all needed lint rules:
- name: Add remote upstream to github projects
  # Git module does not know how to add remotes (yet...)
  # Using command and silencing corresponding ansible-lint rule
  # noqa command-instead-of-module
  command:
    cmd: git remote add upstream git@github.com:{{ git_user }}/{{ item }}.git
    chdir: /home/user/Documents/github/{{ item }}
  register: add_result
  changed_when: add_result.rc == 0
  failed_when:
    - add_result.rc != 0
    - add_result.stderr | default('') is not search("remote .* already exists")
  loop: "{{ github_repos }}"
QUESTION
We've had a working Ansible AWX instance running on v5.0.0 for over a year, and suddenly all jobs stopped working -- no output is rendered. They start "running" but hang indefinitely without printing any logging.
The AWX instance is running in a docker compose container setup as defined here: https://github.com/ansible/awx/blob/5.0.0/INSTALL.md#docker-compose
Observations
Standard troubleshooting such as restarting the containers, the host OS, etc. hasn't helped. There were no configuration changes in either environment.
Upon debugging an actual playbook command, we observe that the command to run a playbook from the UI is like the below:
ssh-agent sh -c ssh-add /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data && rm -f /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data && ansible-playbook -vvvvv -u ubuntu --become --ask-vault-pass -i /tmp/awx_11021_0fmwm5uz/tmppo7rcdqn -e @/tmp/awx_11021_0fmwm5uz/env/extravars playbook.yml
That's broken down into three commands in sequence:
ssh-agent sh -c ssh-add /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data
rm -f /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data
ansible-playbook -vvvvv -u ubuntu --become --ask-vault-pass -i /tmp/awx_11021_0fmwm5uz/tmppo7rcdqn -e @/tmp/awx_11021_0fmwm5uz/env/extravars playbook.yml
You can see in part 3 that -vvvvv is the debugging argument -- however, the hang is happening on command #1, which has nothing to do with Ansible or AWX specifically, so it's not going to give us much debugging info.
I tried doing an strace to see what is going on, but for the reasons given below, it is pretty difficult to follow what it is actually hanging on. I can provide this output if it might help.
So one natural question with command #1 -- what is 'ssh_key_data'?
Well it's what we set up to be the Machine credential in AWX (an SSH key) -- it hasn't changed in a while and it works just fine when used in a direct SSH command. It's also apparently being set up by AWX as a file pipe:
prw------- 1 root root 0 Dec 10 08:29 ssh_key_data
Which starts to explain why it could be potentially hanging (if nothing is being read in from the other side of the pipe).
Running a normal ansible-playbook from command line (and supplying the SSH key in a more normal way) works just fine, so we can still deploy, but only via CLI right now -- it's just AWX that is broken.
Conclusions
So the question then becomes "why now?" and "how to debug?". I have checked the health of awx_postgres and verified that the Machine credential is indeed present in the expected format (in the main_credential table). I have also verified that I can use ssh-agent on the awx_task container without the use of that pipe keyfile. So it really seems to be this piped file that is the problem -- but I haven't been able to glean from any logs where the other side of the pipe (the sender) is supposed to be or why it isn't sending the data.
ANSWER
Answered 2021-Dec-13 at 04:21

Had the same issue starting this Friday in the same timeframe as you. It turned out that the Crowdstrike (Falcon sensor) agent was the culprit. I'm guessing they pushed a definition update that is breaking or blocking FIFO pipes. When we stopped the CS agent, AWX started working correctly again, with no issues. See if you are running a similar security product.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install ansible
You can use ansible like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
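A minimal sketch of that workflow, assuming a recent Python 3 with the venv module available:
python3 -m venv ansible-venv
source ansible-venv/bin/activate
python -m pip install --upgrade pip setuptools wheel
python -m pip install ansible
ansible --version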