
I'm trying to write an Ansible playbook to bootstrap my servers. By default on Linode I can only login as root with a password, so my playbook logs in as root, creates a non-root user with an SSH key, and disables root and password SSH.

This is a problem because I now can't run that playbook again since root login is disabled! I would like the playbook to be idempotent and not have to add and remove hosts after bootstrapping them.


4 Answers


I like to do it this way:

- hosts: all
  remote_user: root
  gather_facts: no
  tasks:
    - name: Check ansible user
      command: ssh -q -o BatchMode=yes -o ConnectTimeout=3 ansible@{{ inventory_hostname }} "echo OK"
      delegate_to: 127.0.0.1
      changed_when: false
      failed_when: false
      register: check_ansible_user

    - block:
        - name: Create Ansible user
          user:
            name: ansible
            comment: "Ansible user"
            password: $6$u3GdHI6FzXL01U9q$LENkJYHcA/NbnXAoJ1jzj.n3a7X6W35rj2TU1kSx4cDtgOEV9S6UboZ4BQ414UDjVvpaQhTt8sXVtkPvOuNt.0
            shell: /bin/bash

        - name: Add authorized key
          authorized_key:
            user: ansible
            key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
            exclusive: yes

        - name: Allow sudo for ansible
          copy:
            content: "ansible ALL=(ALL) ALL\n"
            dest: /etc/sudoers.d/ansible
            mode: 0600
      when: check_ansible_user.rc != 0

I try to connect to the remote host as my ansible user. If that is impossible (as it is on the first run), I connect as root instead and create the ansible user along with its authorized_keys file and sudo rights.

On subsequent runs, connecting as the ansible user succeeds, so the block of tasks is skipped.

Once the remote host is bootstrapped, I can go on with the ansible user and become:

- hosts: all
  remote_user: ansible
  become: yes
  roles:
    - ...
  • Are you manually changing the remote_user in your playbook after that first run? That isn't idempotent. I hope I'm missing something. Commented Sep 16, 2018 at 2:30
  • You do, I don't change anything manually. The two code samples represent two different plays (maybe it helps to imagine them as being called bootstrap.yml and site.yml, where site.yml includes bootstrap.yml before anything else). If the first task of bootstrap.yml fails, all other tasks of this play are skipped and site.yml takes over. Commented Sep 16, 2018 at 15:25
  • I copy-pasted the code snippet, but Ansible skips that block of tasks: "skip_reason": "Conditional result was False". Running the play with -vvv shows the ssh call returns "msg": "non-zero return code", "rc": 255. Commented Sep 30, 2019 at 9:55
  • I fixed it by changing the when condition to: when: not "OK" in check_ansible_user.stdout Commented Sep 30, 2019 at 13:27

If you create your servers on Linode with the linode module, you can register the return value of the linode task and include the bootstrap tasks with a condition checking the output of the linode task. That should be idempotent. Try something like this:

- linode:
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    plan: 1
    datacenter: 2
    distribution: 99
    password: 'superSecureRootPassword'
    private_ip: yes
    ssh_pub_key: 'ssh-rsa qwerty'
    swap: 768
    wait: yes
    wait_timeout: 600
    state: present
  register: linode_node

- include: bootstrap.yml
  when: linode_node.changed

bootstrap.yml would then contain all the tasks necessary to disable SSH root login and so on.
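A minimal sketch of what that bootstrap.yml might contain; the user name, key path, and handler name are assumptions, not taken from the question:

```yaml
# bootstrap.yml -- sketch only; 'deploy', the key path and the
# 'restart sshd' handler are placeholders for your own setup
- name: Create a non-root user
  user:
    name: deploy
    shell: /bin/bash

- name: Install an SSH public key for that user
  authorized_key:
    user: deploy
    key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

- name: Disable root login and password authentication
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
    - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
  notify: restart sshd
```

Because these tasks only run when the linode task reports a change, re-running the playbook against an already-bootstrapped host never tries to log in as root.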


I would do the following:

  • create a role (something like 'base') where you (amongst other things), create a suitable user (and sudo rules) for ansible to use
  • create or adapt your role for SSH, to manage sshd_config (I would tend to recommend you manage the entire file, using a template, but that is up to you), and disable root logins
  • make your SSH role depend on the base role, e.g. using meta.

For the first role (the base one), I tend to use something like:

- name: base | local ansible user | create user
  user:
    name: "{{ local_ansible_user }}"
    group: "{{ local_ansible_group }}"
    home: "/home/{{ local_ansible_user }}"
    state: present
    generate_ssh_key: "{{ local_ansible_generate_key }}"
    ssh_key_bits: 4096
    ssh_key_type: rsa
  tags:
    - ansible
    - local_user

- name: base | local ansible user | provision authorised keys
  authorized_key:
    user: "{{ local_ansible_user }}"
    state: present
    key: "{{ item }}"
  with_items: "{{ local_ansible_authorised_keys }}"
  tags:
    - ansible
    - authorised_keys

For the SSH config, I would use:

- name: openssh | server | create configuration
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0640"
    validate: "/usr/sbin/sshd -tf %s"
  notify:
    - openssh | server | restart
  tags:
    - ssh
    - openssh

Ansible's role dependencies are documented in the Ansible documentation.
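As a sketch, the dependency would be declared in the SSH role's meta file; the role names and paths here are assumptions based on the roles described above:

```yaml
# roles/openssh/meta/main.yml -- hypothetical role layout
dependencies:
  - role: base
```

With this in place, running the openssh role automatically runs the base role first, so the ansible user always exists before root logins are disabled.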

You could also just use ordering within your playbook to do this.
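If you prefer explicit ordering over meta dependencies, listing the base role before the SSH role in the play has the same effect; the role names are again assumptions:

```yaml
# site.yml -- sketch: base runs before openssh within the play
- hosts: all
  become: yes
  roles:
    - base
    - openssh
```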

I have some Ansible code on GitHub (from which the above is taken), if you want to see it in context.


Maybe you could just modify the ansible_ssh_user in the inventory after you have bootstrapped the host?

[targets]
# new host
other1.example.com ansible_connection=ssh ansible_ssh_user=root
# bootstrapped host
other2.example.com ansible_connection=ssh ansible_ssh_user=user
