Conversation

@abandholm

In our setup we depend on allocator tags, so we have made a simple extension of secondary/install_stack.yml.

It's a primitive solution because it only checks whether an allocator tag is defined in a variable, not whether the host being created is actually an allocator. But it works for us :-)

We intend to use the Ansible role to rebuild the hosts (terraform taint + apply) at regular intervals, instead of doing OS upgrades on them. For that automation, adding the tags at creation time (based on the inventory) is very useful to us.
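For context, a minimal sketch of how such a variable might be defined per inventory group. The group_vars path and the key:value tag format are assumptions on our side, modelled on the ECE installer's --allocator-tags option; adjust to your inventory layout:

# group_vars/allocators.yml (hypothetical path)
# Comma-delimited key:value pairs, passed straight through to --allocator-tags
allocator_tags: "instanceType:highmem,env:production"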

Maybe this PR is not the right way to do it - we just want to open up the discussion...

@cla-checker-service

cla-checker-service bot commented Nov 19, 2021

💚 CLA has been signed

Document allocator tags in README.md

See merge request ddp/elastic/ansible-elastic-cloud-enterprise!1
@abandholm
Author

Documentation added.

@abandholm
Author

Resolves #142

@abandholm
Author

@obierlaire - sorry to disturb!

Is there any chance that this PR will ever be merged - or just reviewed?

Comment on lines +22 to +36
- name: Execute installation - no allocator-tags
  shell: /home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }}
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation
  when: allocator_tags is undefined

- name: Execute installation - with allocator-tags
  shell: /home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }} --allocator-tags {{ allocator_tags }}
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation
  when: allocator_tags is defined
Contributor

@m-a-leclercq Apr 4, 2022


You should be able to inline that with Jinja conditionals; it would avoid having two tasks for the same thing with only one option differing.

I have not tested this code, but maybe you could try it and see if it works?

Suggested change
- name: Execute installation - no allocator-tags
  shell: /home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }}
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation
  when: allocator_tags is undefined
- name: Execute installation - with allocator-tags
  shell: /home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }} --allocator-tags {{ allocator_tags }}
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation
  when: allocator_tags is defined

- name: Execute installation
  shell: "/home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }}{% if allocator_tags is defined %} --allocator-tags {{ allocator_tags }}{% endif %}"
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation

Just my personal review, as I am not maintaining that repo, but this is what I would do 😄

Author


Thanks for the review! I didn't realise that you can review without being a maintainer. That's nice.

I was focused on the YAML side of it, but you are right of course: utilising Jinja2 for the logic as well is the way to go.
I can probably use that to make sure that allocator tags get added to allocators only. That was my original concern: whether this quick hack would be accepted as-is, or whether a check for the host being an allocator was needed. I expected some feedback on that question, but didn't get any.

So instead of just accepting your suggestion, I will try to fix both problems in one go. I hope to find time to work on this soon.
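For the record, a rough sketch of what that combined condition might look like, building on your single-task suggestion. It assumes the host's roles are in ece_roles (the same list that is joined into --roles) and that the role name is literally "allocator"; untested:

# Untested sketch: append --allocator-tags only when tags are defined
# AND the host actually has the allocator role.
- name: Execute installation
  shell: "/home/elastic/elastic-cloud-enterprise.sh --coordinator-host {{ primary_hostname }} --roles-token '{{ roles_token.json.token }}' --roles '{{ ece_roles | join(',') }}' --availability-zone {{ availability_zone }} --cloud-enterprise-version {{ ece_version }} --docker-registry {{ ece_docker_registry }} --ece-docker-repository {{ ece_docker_repository }} --host-storage-path {{ data_dir }}/elastic --memory-settings '{{ memory_settings }}' --runner-id {{ ece_runner_id }}{% if allocator_tags is defined and 'allocator' in ece_roles %} --allocator-tags {{ allocator_tags }}{% endif %}"
  become: yes
  become_method: sudo
  become_user: elastic
  register: installation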

Contributor

@m-a-leclercq Apr 21, 2022


You're welcome.

I did another PR that is up to date with the master branch command and reworks the syntax for better readability. It can be found here: https://github.com/elastic/ansible-elastic-cloud-enterprise/pull/151/files. You should be able to Jinja your way to an allocator-tags command from there.

I agree with you on the validation part, but I think it should be expanded to the whole command. Since no check is done via the API, you can also pass garbage in the --roles part of the command, and that can have disastrous consequences (I tried and had a really bad time; I couldn't remove the newly added machine from the cluster without shutting it down and removing it forcefully).
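A rough sketch of what such a pre-flight check could look like, run before the installation task. The accepted role names and the rules themselves are assumptions on my side, not something the installer enforces; untested:

# Untested sketch: fail early instead of letting the installer run with bad input.
- name: Validate ECE roles before running the installer
  assert:
    that:
      - ece_roles | length > 0
      - ece_roles | difference(['director', 'coordinator', 'proxy', 'allocator']) | length == 0
    fail_msg: "ece_roles contains values the ECE installer may not recognise: {{ ece_roles }}"

- name: Refuse allocator tags on hosts without the allocator role
  assert:
    that:
      - "'allocator' in ece_roles"
    fail_msg: "allocator_tags is defined but this host does not have the allocator role"
  when: allocator_tags is defined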

@abandholm marked this pull request as draft April 21, 2022 11:02