Popular New Releases in Ansible
awx
20.1.0
sealos
v4.0.0-alpha.3
kind
v0.11.1
flannel
v0.17.0
kubeasz
3.0.1
Popular Libraries in Ansible
by ansible python
52834 GPL-3.0
Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain. Automate everything from code deployment to network configuration to cloud management, in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com.
by hashicorp ruby
23316 MIT
Vagrant is a tool for building and distributing development environments.
by bregman-arie python
22045 NOASSERTION
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
by ansible python
10864 NOASSERTION
AWX Project
by sovereign html
10012 NOASSERTION
A set of Ansible playbooks to build and maintain your own private cloud: email, calendar, contacts, file sync, IRC bouncer, VPN, and more.
by ansible shell
9577
A few starter examples of ansible playbooks, to show features and how they work together. See http://galaxy.ansible.com for example roles from the Ansible community for deploying many popular applications.
by fanux go
8465 Apache-2.0
A cloud operating system distribution with Kubernetes as its kernel: one-command, highly available installation of a custom Kubernetes cluster in about 3 minutes, a roughly 500 MB footprint, 100-year certificates, support for a very wide range of Kubernetes versions, and rock-solid in production 🔥 ⎈ 🐳
by kubernetes-sigs go
8441 Apache-2.0
Kubernetes IN Docker - local clusters for testing Kubernetes
by openshift go
8070 Apache-2.0
Conformance test suite for OpenShift
Trending New libraries in Ansible
by armosec go
5495 Apache-2.0
Kubescape is a K8s open-source tool providing a multi-cloud K8s single pane of glass, including risk analysis, security compliance, RBAC visualizer and image vulnerabilities scanning.
by lyft go
1307 Apache-2.0
Extensible platform for infrastructure management
by kubesphere go
914 Apache-2.0
Installs Kubernetes/K3s alone, or Kubernetes/K3s together with KubeSphere, plus related cloud-native add-ons; supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
by erjadi go
691 MIT
by k3s-io html
578 Apache-2.0
by k8s-at-home shell
499 MIT
Highly opinionated template for deploying a single k3s cluster with Ansible and Terraform backed by Flux, SOPS, GitHub Actions, Renovate and more!
by swarmlet shell
495 MIT
A self-hosted, open-source Platform as a Service that enables easy swarm deployments, load balancing, automatic SSL, metrics, analytics and more.
by ansible-collections python
456 GPL-3.0
Ansible Community General Collection
by microsoft powershell
425 NOASSERTION
Automated Azure Arc environments
Top Authors in Ansible
1
81 Libraries
3279
2
72 Libraries
20308
3
63 Libraries
1171
4
56 Libraries
3278
5
50 Libraries
498
6
48 Libraries
84812
7
41 Libraries
282
8
35 Libraries
99
9
32 Libraries
341
10
32 Libraries
9149
Trending Kits in Ansible
No Trending Kits are available at this moment for Ansible
Trending Discussions on Ansible
Ansible playbook loop from site yaml or template?
Line too long: Ansible lint
Ansible update variable in function
Can I specify that an argument can't be used with a specific choice in Ansible module spec?
Ansible: how to achieve idempotence with tasks that append files on host (w/o reverting to initial state)
Ansible: Show last X output lines
Ansible, how to set a global fact using roles?
I compiled R from source and it doesn't find certificates
Add `git remote add upstream` to repositories but using Ansible
AWX all jobs stop processing and hang indefinitely -- why
QUESTION
Ansible playbook loop from site yaml or template?
Asked 2022-Apr-01 at 14:16. I'm trying to use my Ansible playbook to call upon a site YAML reference to create a filename that increments for multiple switches. What am I doing wrong? I believe the playbook is pulling from the host YAML?
Format: <switch>-<site>-<floor><stackid>.txt
e.g.: with two switches:
- swi-lon-101.txt
- swi-lon-202.txt
host_vars/host.yaml
project_name: test
device_name: swi
site_abbrev: lon
device_type: switch
switch_stacks:
- id: 01
  installation_floor: 1
- id: 02
  installation_floor: 2
templates/switch-template.j2
{% for stack in switch_stacks %}
set system host-name {{ device_name }}-{{ site_abbrev }}-{{ stack.installation_floor }}{{ stack.id }}
{% endfor %}
The playbook is where the problem lies: how do I get the hostname to be created correctly for each of the two switches?
My playbook:
- name: Create Folder Structure
  hosts: junos
  gather_facts: false

  tasks:
    - name: Create Site Specific Folder
      file:
        path: /home/usr/complete_config/{{ project_name }}
        state: directory
        mode: 0755

    - name: Set Destination Directory & Filename for Switch Configurations
      set_fact:
        dest_dir: /home/usr/complete_config/{{ project_name }}
        full_device_name: "{{ device_name|lower }}-{{ site_abbrev|lower }}-{{ switch_stacks.installation_floor }}{{ switch_stacks.id }}.txt"
      when: device_type == 'switch'
Ansible error when running:
ansible-playbook playbooks/site-playbook.yaml

TASK [Set Destination Directory & Filename for Switch Configurations] **************************************************
fatal: [site-switch]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'list object' has no attribute 'installation_floor'\n\nThe error appears to be in '/home/usr/playbooks/switch-playbook.yaml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Set Destination Directory & Filename for Switch Configurations\n ^ here\n"}
ANSWER
Answered 2022-Mar-31 at 18:39. So, you do need a loop in order to set this fact; otherwise you are trying to access installation_floor on a list, which cannot work.
You will also face an issue with the id of your items in switch_stacks, as 01 is an int and will simply end up displayed as 1. So you either need to declare those as strings, or pad them with a format filter.
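For reference, a quick sketch of that padding in isolation (a standalone debug task made up for illustration, not part of the original answer):
- debug:
    msg: "{{ '%02d' | format(1) }}"  # renders "01"; an unpadded integer 01 would simply print as 1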
So, you end up with this task:
- set_fact:
    full_device_name: >-
      {{
        full_device_name
        | default([])
        + [
          device_name | lower ~ '-' ~
          site_abbrev | lower ~ '-' ~
          item.installation_floor ~
          "%02d" | format(item.id) ~ '.txt'
        ]
      }}
  loop: "{{ switch_stacks }}"
  when: device_type == 'switch'
Which will create a list:
full_device_name:
  - swi-lon-101.txt
  - swi-lon-202.txt
Given the playbook:
- hosts: localhost
  gather_facts: false

  tasks:
    - set_fact:
        full_device_name: >-
          {{
            full_device_name
            | default([])
            + [
              device_name | lower ~ '-' ~
              site_abbrev | lower ~ '-' ~
              item.installation_floor ~
              "%02d" | format(item.id) ~ '.txt'
            ]
          }}
      loop: "{{ switch_stacks }}"
      when: device_type == 'switch'
      vars:
        device_name: swi
        site_abbrev: lon
        device_type: switch
        switch_stacks:
          - id: 01
            installation_floor: 1
          - id: 02
            installation_floor: 2

    - debug:
        var: full_device_name
This yields:
TASK [set_fact] ************************************************************
ok: [localhost] => (item={'id': 1, 'installation_floor': 1})
ok: [localhost] => (item={'id': 2, 'installation_floor': 2})

TASK [debug] ***************************************************************
ok: [localhost] =>
  full_device_name:
    - swi-lon-101.txt
    - swi-lon-202.txt
QUESTION
Line too long: Ansible lint
Asked 2022-Mar-28 at 18:27. This is my Ansible code:
- name: no need to import it.
  ansible.builtin.uri:
    url: >
      https://{{ vertex_region }}-aiplatform.googleapis.com/v1/projects/{{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems
    method: GET
    headers:
      Content-Type: "application/json"
      Authorization: Bearer "{{ gcloud_auth }}"
  register: images
When checking with ansible-lint, it reports line too long (151 > 120 characters) (line-length).
The error is for the uri part of the code. I already used > to break up the URL, and I'm not sure how I can reduce it further to fit within the line-length constraint imposed by ansible-lint.
ANSWER
Answered 2022-Mar-28 at 15:22. If you want to obey the lint line-length rule, you need to split your URL over several lines.
> is the YAML folded block scalar indicator: newlines will be replaced by spaces, which is not what you want here.
The best solution is to use a double-quoted flow scalar, where you can escape newlines so that they are not converted to spaces, e.g.:
    url: "https://{{ vertex_region }}-aiplatform.googleapis.com/v1/projects/\
      {{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems"
You can add as many escaped newlines as you wish if the line is still too long.
https://yaml-multiline.info/ is a good resource to learn all the possibilities for multiline strings in YAML.
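As an illustration of the difference (a minimal sketch with a made-up URL, not taken from the original answer):
# Folded block scalar: the line break becomes a space, so the value is
# "https://example.com/ api/items" (note the unwanted space).
folded_url: >-
  https://example.com/
  api/items

# Double-quoted flow scalar with an escaped newline: the break and the leading
# whitespace of the continuation line are dropped, giving "https://example.com/api/items".
quoted_url: "https://example.com/\
  api/items"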
QUESTION
Ansible update variable in function
Asked 2022-Mar-15 at 14:40. I made a playbook with two tasks.
The first task is for getting all the directories in the selected directory.
The second task is for deleting the directories. But, I only want to delete a directory if the list length is longer than two.
---
- name: cleanup Backend versions
  hosts: backend
  become: true
  become_user: root
  vars:
    backend_deploy_path: /opt/app/test/
  tasks:
    - name: Get all the versions
      ansible.builtin.find:
        paths: "{{ backend_deploy_path }}"
        file_type: directory
      register: file_stat

    - name: Delete old versions
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ file_stat.files }}"
      when: file_stat.files|length > 2
When I run this playbook it deletes all the directories instead of keeping three directories.
My question is how can I keep the variable updated? So that it keeps checking every time it tries to delete a directory?
ANSWER
Answered 2022-Mar-15 at 14:40. This won't be possible: once a module is executed, the result is saved in the variable and won't dynamically change with the state of the node.
What you should do instead is limit the list you are looping on with slice notation, excluding the last three items of that list: files[:-3].
So, your task deleting files would look like this:
- name: Delete old versions
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ file_stat.files[:-3] }}"
Side note: you probably also want to sort that find result based on the creation date of the folders, with something like:
loop: "{{ (file_stat.files | sort(attribute='ctime'))[:-3] }}"
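Putting the two suggestions together, the delete task could look like this (a sketch assembled from the snippets above):
- name: Delete old versions, keeping the three most recent directories
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ (file_stat.files | sort(attribute='ctime'))[:-3] }}"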
QUESTION
Can I specify that an argument can't be used with a specific choice in Ansible module spec?
Asked 2022-Mar-12 at 16:17. I'm looking for a way of specifying that a module argument can't be used if another argument has a certain value.
You can specify required_if to require an argument if another argument has a specific value, but I need the opposite: something conceptually similar to mutually_exclusive that might be called forbidden_if.
I'm developing a module that creates a login for an SQL server. It can either be a SQL login that's specific to the server or a Windows login that uses the domain controller. For an SQL login you must specify a password, but you can't for a Windows login as that is set by the domain controller. Logins have an identifier (SID) that may be specified by the user for SQL logins but can't be for Windows.
Although it's a PowerShell module for a Windows host, I'll use Python examples because that's what the documentation is in.
This is the spec for a module that creates an SQL login:
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
        password=dict(type='str', no_log=True, required=True),
        sid=dict(type='str', required=False),
    ),
    supports_check_mode=True
)
and one for a Windows login
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
    ),
    supports_check_mode=True
)
This is my current attempt at a spec for a combined module
module = AnsibleModule(
    argument_spec=dict(
        username=dict(type='str', required=True),
        password=dict(type='str', no_log=True, required=False),
        sid=dict(type='str', required=False),
        login_type=dict(
            type='str',
            choices=[ 'sql', 'windows' ],
            default='sql',
            required=False
        )
    ),
    required_if=[
        ('login_type', 'sql', ('password')),
    ],
    supports_check_mode=True
)
I was able to make password required for the sql login_type. Since password and sid can't be specified for a Windows login, I'd like to prevent them being used if login_type is windows. Is this possible and if so, how do I do it?
ANSWER
Answered 2022-Mar-12 at 16:17. I don't see a solution to your problem without coding the test:
arguments = dict(
    username=dict(type='str', required=True),
    password=dict(type='str', no_log=True, required=False),
    sid=dict(type='str', required=False),
    login_type=dict(
        type='str',
        choices=[ 'sql', 'windows' ],
        default='sql',
        required=False
    )
)
module = AnsibleModule(
    argument_spec=arguments,
    required_if=[
        ('login_type', 'sql', ('password',)),
    ],
    supports_check_mode=True
)

if module.params['login_type'] == 'windows' and (module.params['password'] or module.params['sid']):
    module.fail_json(msg="unable to use 'login_type=windows' with args 'password' or 'sid'")
FYI: I noticed an error in your code, you forgot the trailing , in the test:
required_if=[
    ('login_type', 'sql', ('password',)),
],
Result:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "unable to use 'login_type=windows' with args 'password' or 'sid'"}
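For example, a task like the following would trip that check (the module name and values are hypothetical, used only to show the forbidden combination):
# Hypothetical module name; login_type=windows combined with a password is the
# combination the coded check rejects with the message above.
- name: Create a Windows login (invalid, password supplied)
  sql_login:
    username: DOMAIN\svc-app
    password: not-allowed-here
    login_type: windows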
QUESTION
Ansible: how to achieve idempotence with tasks that append files on host (w/o reverting to initial state)
Asked 2022-Mar-02 at 14:22. I am having a hard time getting to know how to create Ansible roles that follow the best practices according to the documentation. The use case I am looking at is, for example, enabling Filebeat on a host. Filebeat can be configured by placing a module definition in the /etc/filebeat/modules.d folder.
It works fine when I am adding modules. Idempotence is working: on each run of the role (playbook), a given set of modules is enabled.
But what should I do when I decide that a given module is no longer needed? I remove it from the role and rerun the playbook, so that all the other modules are enabled. But the previous run enabled a module that I am no longer installing after the changes, so my server state is still altered in a way that differs from what the role itself now imposes.
My question is: should I take care of removing modules before I apply them, so I always start from, let's say, a fresh state?
E.g.:
- name: Remove modules
  file:
    dest: "/etc/filebeat/modules.d/{{ item }}"
    state: absent
  loop:
    - "module1.yml"
    - "module2.yml"
    - "module3.yml" # It was being installed in previous role, but not now

- name: Enable modules via 'modules.d' directory
  template:
    src: "modules.d/{{ item }}"
    dest: "/etc/filebeat/modules.d/{{ item }}"
    mode: '0644'
  loop:
    - "module1.yml"
    - "module2.yml"
So I remove module3.yml, because I remember that I've installed it before, and install module1.yml and module2.yml.
Instead of just installing what I need, no matter what has been installed before:
- name: Enable modules via 'modules.d' directory
  template:
    src: "modules.d/{{ item }}"
    dest: "/etc/filebeat/modules.d/{{ item }}"
    mode: '0644'
  loop:
    - "module1.yml"
    - "module2.yml"
This leaves me with module1.yml and module2.yml (desired) and, unfortunately, module3.yml (from the previous role).
How do I manage that and avoid such situations? And avoid treating the server as one big stateful machine where, even if I run a role, the result differs from what I want because something was done before that I cannot see in the current Ansible role code.
Do you write revert playbooks in your Ansible workflow to return to the initial state when needed?
I am curious. Thanks in advance for your reply.
ANSWER
Answered 2022-Mar-02 at 11:07. In a nutshell:
- name: Configure filebeat modules
  hosts: all

  vars:
    fb_modules_d:
      - file: module1.yml
        state: present
      - file: module2.yml
        state: present
      - file: module3.yml
        state: absent

  tasks:
    - name: Make sure all needed module files are present
      template:
        src: "modules.d/{{ item.file }}"
        dest: "/etc/filebeat/modules.d/{{ item.file }}"
        mode: '0644'
      loop: "{{ fb_modules_d | selectattr('state', '==', 'present') }}"
      notify: restart_filebeat

    - name: Make sure all disabled modules are removed
      file:
        dest: "/etc/filebeat/modules.d/{{ item.file }}"
        state: "{{ item.state }}"
      loop: "{{ fb_modules_d | selectattr('state', '==', 'absent') }}"
      notify: restart_filebeat

  handlers:
    - name: Restart filebeat service
      listen: restart_filebeat
      systemd:
        name: filebeat
        state: restarted
Note: I declared the variable inside the playbook for the example, but that one should most probably go inside your inventory (group or host level), and certainly not in a role (except in defaults, for documentation).
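For instance, a minimal sketch of that variable at the inventory group level (the file path is only a suggestion; any group or host level works):
# group_vars/all.yml (hypothetical location)
fb_modules_d:
  - file: module1.yml
    state: present
  - file: module2.yml
    state: present
  - file: module3.yml
    state: absent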
QUESTION
Ansible: Show last X output lines
Asked 2022-Jan-26 at 00:00. Is there a way to output only the last 5 lines of an Ansible shell output, for example?
Maybe using loops?
Example:
- name: Running Migrations
  ansible.builtin.shell: /some-script-produces-lot-of-output.sh
  register: ps

- debug: var=ps.stdout_lines
Debug should only output the last 5 lines.
ANSWER
Answered 2022-Jan-26 at 00:00. You can use Python's slicing notation for this:
- debug:
    var: ps.stdout_lines[-5:]
This will output the list from the fifth element from the end (hence the negative value) up to the end of the list.
Given the tasks
- command: printf "line1\nline2\nline3\nline4\nline5\nline6\nline7\nline8\nline9\nline10"
  register: ps

- debug:
    var: ps.stdout_lines[-5:]
This yields:
TASK [command] ************************************************************
changed: [localhost]

TASK [debug] **************************************************************
ok: [localhost] =>
  ps.stdout_lines[-5:]:
    - line6
    - line7
    - line8
    - line9
    - line10
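The same slice notation also works from the start of the list; for example, a hypothetical variation that shows the first five lines instead:
- debug:
    var: ps.stdout_lines[:5]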
QUESTION
Ansible, how to set a global fact using roles?
Asked 2022-Jan-24 at 20:03. I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master", is easy to set up with Ansible. However, setting up the second server node, which I refer to as "node", is giving me a challenge because I need to pull the value of the node-token from the master and use it to call the k3s install command on the "node" vm.
I'm using Ansible roles, and this is what my playbook looks like:
- hosts: all

  roles:
    - { role: k3sInstall , when: 'server_type is defined'}
    - { role: k3sUnInstall , when: 'server_type is defined'}
This is my main.yml file from the k3sInstall role directory:
- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install
This is my k3s_install_server.yml:
---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"

    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken

    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"

    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"

    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"

    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"
I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I'm setting when server_role == master.
This is the output of the debug:
TASK [k3sInstall : Print Node-Token fact] ***************************************************************************
ok: [server1] => {
    "msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
    "msg": ""
}
My host file:
[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221
And I have the following host_vars files assigned:
server1.yml:
server_role: master
server2.yml:
server_role: node
I've tried assigning the nodeToken variable inside of k3sInstall/vars/main.yml as well as one level up from the k3sInstall role inside group_vars/all.yml but that didn't help.
I tried searching for a way to use block-level variables but couldn't find anything.
ANSWER
Answered 2022-Jan-24 at 20:03. If you set the variable for master only, it's not available for other hosts, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: VARIABLE IS NOT DEFINED!
If you want to "apply all results and facts to all the hosts in the same batch" use run_once: true, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: K10cf129cfedaf
In your case, add 'run_once: true' to the task
- name: Set Node-Token fact
  set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
  run_once: true
The above code works because the condition when: server_role == "master" is applied before run_once: true. Quoting from the run_once documentation: "Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."
Safer code would be adding a standalone set_fact instead of relying on the precedence of the condition when: and run_once, e.g.
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: inventory_hostname == 'master'

- set_fact:
    nodeToken: "{{ hostvars['master'].nodeToken }}"
  run_once: true
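For completeness, here is a minimal sketch of how those two tasks could sit inside a full play. This is an illustration added here rather than part of the original answer; the master/node group names, the node_token_filepath variable and the nodetoken register name are assumptions carried over from the question, not a verified drop-in.

- hosts: master,node
  tasks:
    - name: Read the Node-Token file on the master
      become: yes
      command: cat {{ node_token_filepath }}
      register: nodetoken
      when: inventory_hostname == 'master'

    - name: Set the fact on the master only
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
      when: inventory_hostname == 'master'

    - name: Propagate the fact to every host in the play
      set_fact:
        nodeToken: "{{ hostvars['master'].nodeToken }}"
      run_once: true

    - name: Every host now sees the same token
      debug:
        var: nodeToken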
QUESTION
I compiled R from source and it doesn't find certificates
Asked 2022-Jan-14 at 17:25
I am deploying multiple R versions on multiple virtual desktops. I've built R 3.6.3 and 4.1.2 from source on Ubuntu 18.04.3 LTS. Neither of them finds the system-wide Rprofile.site file in /etc/R or the system certificates in /usr/share/ca-certificates. However, R 3.4.4 installed with APT has no such problems. I used Ansible, but for the sake of this question I reproduced the deployment for one host with a shell script.
#!/bin/bash
set -euo pipefail

# install build dependencies
(command -v apt && apt-get build-dep r-base) || (command -v dnf && dnf builddep R)

version='4.1.2'
major_version=$(echo "$version" | cut -c 1)

wget "https://cran.rstudio.com/src/base/R-$major_version/R-$version.tar.gz"
tar -xzf R-$version.tar.gz
cd R-$version

./configure \
    --prefix=/opt/R/$version \
    --sysconfdir=/etc/R \
    --enable-R-shlib \
    --with-pcre1 \
    --with-blas \
    --with-lapack
make -j 8
make install
Note: It should run on most Linux distros with APT or RPM package managers. Increase the -j argument of make if you have enough cores but not enough time.
So I defined the installation prefix as /opt/R/$version, but I want it to read config files from /etc/R (hence --sysconfdir=/etc/R). However, when I open the R interactive shell (/opt/R/4.1.2/bin/R) and try to install a package:
install.packages("remotes")
then I am prompted to choose an R package mirror, even though one is already defined in /etc/R/Rprofile.site:
local({
    r <- getOption("repos")
    r["CRAN"] <- "https://cloud.r-project.org"
    options(repos = r)
})
I can force the R shell to find the Rprofile.site file by pointing the R_PROFILE environment variable at it:
export R_PROFILE=/etc/R/Rprofile.site
/opt/R/4.1.2/bin/R
then call install.packages("remotes") again in the R shell. Now no mirror selection prompt is shown, but the following error appears:
Warning: unable to access index for repository https://cloud.r-project.org/src/contrib:
  cannot open URL 'https://cloud.r-project.org/src/contrib/PACKAGES'
Warning message:
package ‘remotes’ is not available for this version of R

A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
So it cannot access the repository index (the real problem), and then concludes that the ‘remotes’ package is not available for my R version, which is misleading since it could not read the index in the first place. So I tried a simple HTTP call in the same R shell:
curlGetHeaders("https://example.com")
and got this error:
Error in curlGetHeaders("https://example.com") : libcurl error code 77:
  unable to access SSL/TLS CA certificates
So it cannot find the CA certificates in /usr/share/ca-certificates.
Since the R installed by APT has none of these problems, the compiled R must not be searching the right places. Even if I omit the --sysconfdir=/etc/R build option and copy or symlink the /etc/R directory under the prefix (so it ends up at /opt/R/4.1.2/etc), it still does not find its config files.
The bigger problem is that I do not even know how to specify /usr/share so that it may find the certificates. The rsharedir configure option (which also appears without the leading -- in the makefile) will not do, because it should point to /usr/share/R/ rather than /usr/share, which would be bad practice anyway.
I also tried all of this with R 3.6.3 and got the same results.
Question: How can I make the compiled R installations find the system-wide (or any) config files and the certificates?
Update 1: I ran the build script on an Ubuntu server which I do not manage with the same Ansible code. There both R builds successfully find the certificates, so the problem is not with the build script but with the system state.
Update 2: I created a simple R script (install-r-package.R) which installs a package:
install.packages("renv", repos="https://cran.wu.ac.at/")
then I executed it with Rscript and traced which files it opens on both the correct and the erroneous host:
strace -o strace.log -e trace=open,openat,close,read,write,connect,accept ./Rscript install-r-package.R
It turned out that on the problematic system R does not even try to open the certificate files.
The relevant trace snippet on the correct system:
connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("137.208.57.37")}, 16) = -1 EINPROGRESS (Operation now in progress)
openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 6
read(6, "-----BEGIN CERTIFICATE-----\nMIIH"..., 200704) = 200704
read(6, "--\n", 4096) = 3
read(6, "", 4096) = 0
close(6) = 0
on the problematic system:
connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("137.208.57.37")}, 16) = -1 EINPROGRESS (Operation now in progress)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so", O_RDONLY|O_CLOEXEC) = 6
read(6, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220A\0\0\0\0\0\0"..., 832) = 832
close(6) = 0
In both cases R connects to the mirror (137.208.57.37). On the correct system it then reads the ca-certificates.crt file and many other .crt files; the erroneous system skips this step altogether.
ANSWER
Answered 2022-Jan-14 at 17:25
Finally I found the solution.
Since both systems have the same arch and OS, I cross-copied the compiled R installations between them. The R compiled on the problematic system but run on the correct one gave the warnings below after calling install.packages("renv", repos="https://cran.wu.ac.at/"):
Warning: unable to access index for repository https://cran.wu.ac.at/src/contrib:
  internet routines cannot be loaded
Warning messages:
1: In download.file(url, destfile = f, quiet = TRUE) :
  unable to load shared object '/opt/R/4.1.2/lib/R/modules//internet.so':
  libcurl-nss.so.4: cannot open shared object file: No such file or directory
2: package ‘remotes’ is not available for this version of R

A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
If I do the reverse then the installation works.
The libcurl-nss.so.4: cannot open shared object file: No such file or directory line gave me the clue that different libcurl4 flavors were used as build dependencies. I checked which dev dependencies were installed on the systems: libcurl4-nss-dev 7.58.0-2ubuntu3 was installed on the problematic system and libcurl4-gnutls-dev 7.58.0-2ubuntu3.16 on the correct one.
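As an aside (a sketch added here, not part of the original answer): since the root cause was a drift in build dependencies between hosts, a quick Ansible check with package_facts can reveal which libcurl dev flavor each machine carries before building:

- hosts: all
  tasks:
    - name: Gather installed package facts
      package_facts:
        manager: apt

    - name: Report which libcurl dev flavor is installed
      debug:
        msg: >-
          nss-dev present: {{ 'libcurl4-nss-dev' in ansible_facts.packages }},
          gnutls-dev present: {{ 'libcurl4-gnutls-dev' in ansible_facts.packages }}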
So I purged libcurl4-nss-dev from the problematic system:
apt purge libcurl4-nss-dev -y
and installed libcurl4-gnutls-dev:
aptitude install libcurl4-gnutls-dev
I used aptitude because I had to downgrade libcurl3-gnutls 7.58.0-2ubuntu3.16 (now) -> 7.58.0-2ubuntu3 (bionic), which is a dependency of libcurl4-gnutls-dev. Then I ran make clean in the R-4.1.2 source directory. Finally I re-ran the build script from the question and got a well-working R, which can read the certificates and hence can reach the HTTPS package mirrors.
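If the same fix has to be rolled out through the Ansible code that manages these desktops, the package swap might look roughly like the sketch below. This is an added sketch, not part of the answer: the r_version and r_build_dir variables and the rebuild step are assumptions, and the libcurl3-gnutls downgrade that required aptitude above is not handled here.

- name: Remove the NSS flavor of the libcurl dev package
  become: yes
  apt:
    name: libcurl4-nss-dev
    state: absent
    purge: yes

- name: Install the GnuTLS flavor instead
  become: yes
  apt:
    name: libcurl4-gnutls-dev
    state: present

- name: Rebuild R from a clean source tree
  become: yes
  shell: make clean && ./configure --prefix=/opt/R/{{ r_version }} --sysconfdir=/etc/R --enable-R-shlib --with-pcre1 --with-blas --with-lapack && make -j 8 && make install
  args:
    chdir: "{{ r_build_dir }}"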
QUESTION
Add `git remote add upstream` to repositories but using Ansible
Asked 2021-Dec-29 at 18:44
I have Ansible 2.9.27 and I am trying to add an upstream remote for git repositories which I previously cloned with Ansible. Let's assume the already-cloned repositories are located in the /home/user/Documents/github/ directory and I want to add an upstream remote for each of them (git remote add upstream for each repo).
The task looks like this:
- name: Add remote upstream to github projects
  # TODO: how to add remote with git module?
  command: git remote add upstream git@github.com:{{ git_user }}/{{ item }}.git
  changed_when: false
  args:
    chdir: /home/user/Documents/github/{{ item }}
  loop: "{{ github_repos }}"
The issue is that ansible-lint doesn't like using command instead of the git module:
WARNING  Listing 1 violation(s) that are fatal
command-instead-of-module: git used in place of git module
tasks/github.yaml:15 Task/Handler: Add remote upstream to github projects
What do I need to do to add the upstream remote for these repositories with the git module?
ANSWER
Answered 2021-Dec-29 at 18:44
Since what you want to achieve is not (yet...) supported by the git module, this is a very legitimate use of command.
In such cases, it is possible to silence the specific ansible-lint rule for that specific task.
To go a bit further, your changed_when: false clause looks a bit like a quick and dirty fix to silence the no-changed-when rule; it can be enhanced in conjunction with a failed_when clause to detect cases where the remote already exists.
Here is how I would write that task to be idempotent, documented, and passing all needed lint rules:
- name: Add remote upstream to github projects
  # Git module does not know how to add remotes (yet...)
  # Using command and silencing corresponding ansible-lint rule
  # noqa command-instead-of-module
  command:
    cmd: git remote add upstream git@github.com:{{ git_user }}/{{ item }}.git
    chdir: /home/user/Documents/github/{{ item }}
  register: add_result
  changed_when: add_result.rc == 0
  failed_when:
    - add_result.rc != 0
    - add_result.stderr | default('') is not search("remote .* already exists")
  loop: "{{ github_repos }}"
QUESTION
AWX all jobs stop processing and hang indefinitely -- why
Asked 2021-Dec-21 at 14:42
We've had a working Ansible AWX instance running on v5.0.0 for over a year, and suddenly all jobs stopped working -- no output is rendered. They start "running" but hang indefinitely without printing any logging.
The AWX instance is running in a docker compose container setup as defined here: https://github.com/ansible/awx/blob/5.0.0/INSTALL.md#docker-compose
Observations
Standard troubleshooting such as restarting containers, the host OS, etc. hasn't helped. There were no configuration changes in either environment.
Upon debugging an actual playbook command, we observe that the command to run a playbook from the UI is like the below:
ssh-agent sh -c ssh-add /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data && rm -f /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data && ansible-playbook -vvvvv -u ubuntu --become --ask-vault-pass -i /tmp/awx_11021_0fmwm5uz/tmppo7rcdqn -e @/tmp/awx_11021_0fmwm5uz/env/extravars playbook.yml
That's broken down into three commands in sequence:
ssh-agent sh -c ssh-add /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data
rm -f /tmp/awx_11021_0fmwm5uz/artifacts/11021/ssh_key_data
ansible-playbook -vvvvv -u ubuntu --become --ask-vault-pass -i /tmp/awx_11021_0fmwm5uz/tmppo7rcdqn -e @/tmp/awx_11021_0fmwm5uz/env/extravars playbook.yml
You can see in part 3 that -vvvvv is the debugging argument -- however, the hang is happening on command #1, which has nothing to do with Ansible or AWX specifically, but it also means we don't get much debugging info.
I tried doing an strace
to see what is going on, but for reasons given below, it is pretty difficult to follow what it is actually hanging on. I can provide this output if it might help.
So one natural question with command #1 -- what is 'ssh_key_data'?
Well it's what we set up to be the Machine credential in AWX (an SSH key) -- it hasn't changed in a while and it works just fine when used in a direct SSH command. It's also apparently being set up by AWX as a file pipe:
prw------- 1 root root 0 Dec 10 08:29 ssh_key_data
Which starts to explain why it could be potentially hanging (if nothing is being read in from the other side of the pipe).
Running a normal ansible-playbook from command line (and supplying the SSH key in a more normal way) works just fine, so we can still deploy, but only via CLI right now -- it's just AWX that is broken.
Conclusions
So the question then becomes "why now?" and "how to debug?". I have checked the health of awx_postgres and verified that the Machine credential is indeed present in the expected format (in the main_credential table). I have also verified that I can use ssh-agent on the awx_task container without the use of that piped keyfile. So it really seems to be this piped file that is the problem -- but I haven't been able to glean from any logs where the other side of the pipe (the sender) is supposed to be or why it isn't sending the data.
ANSWER
Answered 2021-Dec-13 at 04:21
Had the same issue starting this Friday, in the same timeframe as you. It turned out that the CrowdStrike (Falcon sensor) agent was the culprit. I'm guessing they pushed a definition update that is breaking or blocking FIFO pipes. When we stopped the CrowdStrike agent, AWX started working correctly again, with no issues. See if you are running a similar security product.
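If you suspect the same class of problem, a quick way to test whether FIFO pipes are being interfered with is a small smoke test like the sketch below (an added illustration, not from the original answer; it assumes you can run a play or ad-hoc shell inside the awx_task container):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Write to and read from a named pipe with a timeout
      shell: |
        p=$(mktemp -u)
        mkfifo "$p"
        ( echo ok > "$p" & )
        timeout 5 cat "$p"
        rc=$?
        rm -f "$p"
        exit $rc
      register: fifo_test
      changed_when: false

    - name: A healthy system prints "ok"; a hang or timeout suggests the pipe is blocked
      debug:
        var: fifo_test.stdout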
Community Discussions contain sources that include Stack Exchange Network
Tutorials and Learning Resources in Ansible
Tutorials and Learning Resources are not available at this moment for Ansible