
Refactor to support Ansible 2.8 (#1549)

* bump ansible to 2.8.3

* DigitalOcean: move to the latest modules

* Add Hetzner Cloud

* Scaleway and Lightsail fixes

* lint missing roles

* Update roles/cloud-hetzner/tasks/main.yml

Add api_token

Co-Authored-By: phaer <phaer@phaer.org>

* Update roles/cloud-hetzner/tasks/main.yml

Add api_token

Co-Authored-By: phaer <phaer@phaer.org>

* Try to run apt until succeeded

* Scaleway modules upgrade

* GCP: Refactoring, remove deprecated modules

* Doc updates (#1552)

* Update README.md

Adding links and mentions of Exoscale aka CloudStack and Hetzner Cloud.

* Update index.md

Add the Hetzner Cloud to the docs index

* Remove link to Win 10 IPsec instructions

* Delete client-windows.md

Unnecessary since the deprecation of IPsec for Win10.

* Update deploy-from-ansible.md

Added sections and required variables for CloudStack and Hetzner Cloud.

* Update deploy-from-ansible.md

Added sections for CloudStack and Hetzner, added req variables and examples, mentioned environment variables, and added links to the provider role section.

* Update deploy-from-ansible.md

Cosmetic changes to links, fix typo.

* Update GCE variables

* Update deploy-from-script-or-cloud-init-to-localhost.md

Fix a finer point, and make variables list more readable.

* update azure requirements

* Python3 draft

* set LANG=c to the p12 password generation task

* Update README

* Install cloud requirements to the existing venv

* FreeBSD fix

* env->.env fixes

* lightsail_region_facts fix

* yaml syntax fix

* Update README for Python 3 (#1564)

* Update README for Python 3

* Remove tabs and tweak instructions

* Remove cosmetic command indentation

* Update README.md

* Update README for Python 3 (#1565)

* DO fix for "found unpermitted parameters: id"

* Verify Python version

* Remove ubuntu 16.04 from readme

* Revert back DigitalOcean module

* Update deploy-from-script-or-cloud-init-to-localhost.md

* env to .env
Branch: bubble_ubuntu_18_04
Jack Ivanov authored 4 years ago; committed by Paul Kehrer
Commit 8bdd99c05d
64 changed files with 879 additions and 951 deletions
  1. .dockerignore (+1 -1)
  2. .gitignore (+1 -1)
  3. .travis.yml (+4 -4)
  4. Dockerfile (+6 -6)
  5. README.md (+52 -67)
  6. algo (+1 -1)
  7. algo-showenv.sh (+3 -3)
  8. ansible.cfg (+1 -0)
  9. config.cfg (+4 -4)
  10. docs/client-linux-ipsec.md (+37 -0)
  11. docs/client-windows.md (+0 -6)
  12. docs/cloud-hetzner.md (+3 -0)
  13. docs/deploy-from-ansible.md (+30 -10)
  14. docs/deploy-from-fedora-workstation.md (+0 -115)
  15. docs/deploy-from-redhat-centos6.md (+48 -37)
  16. docs/deploy-from-script-or-cloud-init-to-localhost.md (+16 -4)
  17. docs/deploy-from-windows.md (+1 -1)
  18. docs/deploy-to-ubuntu.md (+1 -1)
  19. docs/index.md (+2 -2)
  20. docs/troubleshooting.md (+8 -42)
  21. input.yml (+2 -1)
  22. install.sh (+7 -7)
  23. inventory (+1 -1)
  24. library/gce_region_facts.py (+0 -138)
  25. library/gcp_compute_location_info.py (+93 -0)
  26. library/lightsail_region_facts.py (+1 -1)
  27. library/scaleway_compute.py (+132 -74)
  28. main.yml (+10 -2)
  29. playbooks/cloud-post.yml (+1 -1)
  30. requirements.txt (+1 -1)
  31. roles/cloud-azure/defaults/main.yml (+0 -1)
  32. roles/cloud-azure/tasks/main.yml (+31 -34)
  33. roles/cloud-azure/tasks/venv.yml (+23 -22)
  34. roles/cloud-cloudstack/defaults/main.yml (+0 -1)
  35. roles/cloud-cloudstack/tasks/main.yml (+0 -1)
  36. roles/cloud-cloudstack/tasks/venv.yml (+1 -8)
  37. roles/cloud-digitalocean/defaults/main.yml (+0 -1)
  38. roles/cloud-digitalocean/tasks/main.yml (+29 -104)
  39. roles/cloud-digitalocean/tasks/prompts.yml (+8 -1)
  40. roles/cloud-digitalocean/tasks/venv.yml (+0 -12)
  41. roles/cloud-ec2/defaults/main.yml (+0 -1)
  42. roles/cloud-ec2/tasks/main.yml (+19 -22)
  43. roles/cloud-ec2/tasks/venv.yml (+1 -8)
  44. roles/cloud-gce/defaults/main.yml (+0 -1)
  45. roles/cloud-gce/tasks/main.yml (+74 -51)
  46. roles/cloud-gce/tasks/prompts.yml (+28 -16)
  47. roles/cloud-gce/tasks/venv.yml (+3 -9)
  48. roles/cloud-hetzner/tasks/main.yml (+31 -0)
  49. roles/cloud-hetzner/tasks/prompts.yml (+48 -0)
  50. roles/cloud-hetzner/tasks/venv.yml (+7 -0)
  51. roles/cloud-lightsail/defaults/main.yml (+0 -1)
  52. roles/cloud-lightsail/tasks/main.yml (+35 -38)
  53. roles/cloud-lightsail/tasks/prompts.yml (+1 -1)
  54. roles/cloud-lightsail/tasks/venv.yml (+1 -8)
  55. roles/cloud-openstack/defaults/main.yml (+0 -1)
  56. roles/cloud-openstack/tasks/main.yml (+62 -65)
  57. roles/cloud-openstack/tasks/venv.yml (+1 -8)
  58. roles/cloud-scaleway/tasks/main.yml (+1 -0)
  59. roles/common/tasks/ubuntu.yml (+4 -0)
  60. roles/dns/tasks/freebsd.yml (+0 -1)
  61. roles/strongswan/tasks/main.yml (+1 -1)
  62. roles/strongswan/tasks/ubuntu.yml (+1 -1)
  63. tests/local-deploy.sh (+1 -1)
  64. tests/update-users.sh (+1 -1)

.dockerignore (+1 -1)

@@ -9,6 +9,6 @@ README.md
config.cfg
configs
docs
-env
+.env
logo.png
tests

.gitignore (+1 -1)

@@ -3,7 +3,7 @@
configs/*
inventory_users
*.kate-swp
-env
+*env
.DS_Store
venvs/*
!venvs/.gitinit

.travis.yml (+4 -4)

@@ -1,6 +1,6 @@
---
language: python
-python: "2.7"
+python: "3.7"
dist: xenial

services:
@@ -12,7 +12,7 @@ addons:
- sourceline: 'ppa:ubuntu-lxc/stable'
- sourceline: 'ppa:wireguard/wireguard'
packages: &default_packages
-  - python-pip
+  - python3-pip
- lxd
- expect-dev
- debootstrap
@@ -22,7 +22,7 @@ addons:
- build-essential
- libssl-dev
- libffi-dev
-  - python-dev
+  - python3-dev
- linux-headers-$(uname -r)
- wireguard
- libxml2-utils
@@ -63,7 +63,7 @@ stages:
- pip install ansible-lint
- shellcheck algo install.sh
- ansible-playbook main.yml --syntax-check
-  - ansible-lint -v *.yml
+  - ansible-lint -v *.yml roles/{local,cloud-*}/*/*.yml

- &deploy-local
stage: Deploy


Dockerfile (+6 -6)

@@ -1,4 +1,4 @@
-FROM python:2-alpine
+FROM python:3-alpine

ARG VERSION="git"
ARG PACKAGES="bash libffi openssh-client openssl rsync tini"
@@ -16,11 +16,11 @@ RUN mkdir -p /algo && mkdir -p /algo/configs
WORKDIR /algo
COPY requirements.txt .
RUN apk --no-cache add ${BUILD_PACKAGES} && \
-python -m pip --no-cache-dir install -U pip && \
-python -m pip --no-cache-dir install virtualenv && \
-python -m virtualenv env && \
-source env/bin/activate && \
-python -m pip --no-cache-dir install -r requirements.txt && \
+python3 -m pip --no-cache-dir install -U pip && \
+python3 -m pip --no-cache-dir install virtualenv && \
+python3 -m virtualenv .env && \
+source .env/bin/activate && \
+python3 -m pip --no-cache-dir install -r requirements.txt && \
apk del ${BUILD_PACKAGES}
COPY . .
RUN chmod 0755 /algo/algo-docker.sh


README.md (+52 -67)

@@ -4,7 +4,7 @@
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/fold_left.svg?style=social&label=Follow%20%40AlgoVPN)](https://twitter.com/AlgoVPN)
[![TravisCI Status](https://api.travis-ci.org/trailofbits/algo.svg?branch=master)](https://travis-ci.org/trailofbits/algo)

-Algo VPN is a set of Ansible scripts that simplify the setup of a personal IPSEC and Wireguard VPN. It uses the most secure defaults available, works with common cloud providers, and does not require client software on most devices. See our [release announcement](https://blog.trailofbits.com/2016/12/12/meet-algo-the-vpn-that-works/) for more information.
+Algo VPN is a set of Ansible scripts that simplify the setup of a personal Wireguard and IPSEC VPN. It uses the most secure defaults available, works with common cloud providers, and does not require client software on most devices. See our [release announcement](https://blog.trailofbits.com/2016/12/12/meet-algo-the-vpn-that-works/) for more information.

## Features

@@ -14,7 +14,7 @@ Algo VPN is a set of Ansible scripts that simplify the setup of a personal IPSEC
* Blocks ads with a local DNS resolver (optional)
* Sets up limited SSH users for tunneling traffic (optional)
* Based on current versions of Ubuntu and strongSwan
-* Installs to DigitalOcean, Amazon Lightsail, Amazon EC2, Vultr, Microsoft Azure, Google Compute Engine, Scaleway, OpenStack, or [your own Ubuntu server](docs/deploy-to-ubuntu.md)
+* Installs to DigitalOcean, Amazon Lightsail, Amazon EC2, Vultr, Microsoft Azure, Google Compute Engine, Scaleway, OpenStack, CloudStack, Hetzner Cloud, or [your own Ubuntu server](docs/deploy-to-ubuntu.md)

## Anti-features

@@ -27,49 +27,57 @@ Algo VPN is a set of Ansible scripts that simplify the setup of a personal IPSEC

## Deploy the Algo Server

-The easiest way to get an Algo server running is to let it set up a _new_ virtual machine in the cloud for you.

-1. **Setup an account on a cloud hosting provider.** Algo supports [DigitalOcean](https://m.do.co/c/4d7f4ff9cfe4) (most user friendly), [Amazon Lightsail](https://aws.amazon.com/lightsail/), [Amazon EC2](https://aws.amazon.com/), [Vultr](https://www.vultr.com/), [Microsoft Azure](https://azure.microsoft.com/), [Google Compute Engine](https://cloud.google.com/compute/), [Scaleway](https://www.scaleway.com/), and [DreamCompute](https://www.dreamhost.com/cloud/computing/) or other OpenStack-based cloud hosting.

-2. **[Download Algo](https://github.com/trailofbits/algo/archive/master.zip).** Unzip it in a convenient location on your local machine.

-3. **Install Algo's core dependencies.** Open the Terminal. The `python` interpreter you use to deploy Algo must be python2. If you don't know what this means, you're probably fine. `cd` into the `algo-master` directory where you unzipped Algo, then run:

-  - macOS:
-  ```bash
-  $ python -m ensurepip --user
-  $ python -m pip install --user --upgrade virtualenv
-  ```
-  - Linux (deb-based):
-  ```bash
-  $ sudo apt-get update && sudo apt-get install \
-  build-essential \
-  libssl-dev \
-  libffi-dev \
-  python-dev \
-  python-pip \
-  python-setuptools \
-  python-virtualenv -y
-  ```
-  - Linux (rpm-based): See the pre-installation documentation for [RedHat/CentOS 6.x](docs/deploy-from-redhat-centos6.md) or [Fedora](docs/deploy-from-fedora-workstation.md)
-  - Windows: See the [Windows documentation](docs/deploy-from-windows.md)

-4. **Install Algo's remaining dependencies.** Use the same Terminal window as the previous step and run:
+The easiest way to get an Algo server running is to run it on your local system and let it set up a _new_ virtual machine in the cloud for you.

+1. **Setup an account on a cloud hosting provider.** Algo supports [DigitalOcean](https://m.do.co/c/4d7f4ff9cfe4) (most user friendly), [Amazon Lightsail](https://aws.amazon.com/lightsail/), [Amazon EC2](https://aws.amazon.com/), [Vultr](https://www.vultr.com/), [Microsoft Azure](https://azure.microsoft.com/), [Google Compute Engine](https://cloud.google.com/compute/), [Scaleway](https://www.scaleway.com/), [DreamCompute](https://www.dreamhost.com/cloud/computing/) or other OpenStack-based cloud hosting, [Exoscale](https://www.exoscale.com) or other CloudStack-based cloud hosting, or [Hetzner Cloud](https://www.hetzner.com/).

+2. **Get a copy of Algo.** The Algo scripts will be installed on your local system. There are two ways to get a copy:

+  - Download the [ZIP file](https://github.com/trailofbits/algo/archive/master.zip). Unzip the file to create a directory named `algo-master` containing the Algo scripts.

+  - Run the command `git clone https://github.com/trailofbits/algo.git` to create a directory named `algo` containing the Algo scripts.

+3. **Install Algo's core dependencies.** Algo requires that **Python 3** and at least one supporting package are installed on your system.

+  - **macOS:** Apple does not provide Python 3 with macOS. There are two ways to obtain it:
+    * Use the [Homebrew](https://brew.sh) package manager. After installing Homebrew install Python 3 by running `brew install python3`.

+    * Download and install the latest stable [Python 3 package](https://www.python.org/downloads/mac-osx/). Be sure to run the included *Install Certificates* command from Finder.

+    Once Python 3 is installed on your Mac, from Terminal run:
+    ```bash
+    python3 -m pip install --upgrade virtualenv
+    ```

+  - **Linux:** Recent releases of Ubuntu, Debian, and Fedora come with Python 3 already installed. Make sure your system is up-to-date and install the supporting package(s):
+    * Ubuntu and Debian:
+    ```bash
+    sudo apt install -y python3-virtualenv
+    ```
+    * Fedora:
+    ```bash
+    sudo dnf install -y python3-virtualenv
+    ```
+    * Red Hat and CentOS: See this [documentation](docs/deploy-from-redhat-centos6.md).

+  - **Windows:** Use the Windows Subsystem for Linux (WSL) to create your own copy of Ubuntu running under Windows from which to install and run Algo. See the [Windows documentation](docs/deploy-from-windows.md).

+4. **Install Algo's remaining dependencies.** You'll need to run these commands from the Algo directory each time you download a new copy of Algo. In a Terminal window `cd` into the `algo-master` (ZIP file) or `algo` (`git clone`) directory and run:
```bash
-$ python -m virtualenv --python=`which python2` env &&
-source env/bin/activate &&
-python -m pip install -U pip virtualenv &&
-python -m pip install -r requirements.txt
+python3 -m virtualenv --python="$(command -v python3)" .env &&
+source .env/bin/activate &&
+python3 -m pip install -U pip virtualenv &&
+python3 -m pip install -r requirements.txt
```
-On macOS, you may be prompted to install `cc`. You should press accept if so.
+On Fedora add the option `--system-site-packages` to the first command above. On macOS install the C compiler if prompted.

-5. **List the users to create.** Open `config.cfg` in your favorite text editor. Specify the users you wish to create in the `users` list. If you want to be able to add or delete users later, you **must** select `yes` for the `Do you want to retain the CA key?` prompt during the deployment. Make a unique user for each device you plan to setup.
+5. **List the users to create.** Open the file `config.cfg` in your favorite text editor. Specify the users you wish to create in the `users` list. Create a unique user for each device you plan to connect to your VPN. If you want to be able to add or delete users later, you **must** select `yes` at the `Do you want to retain the keys (PKI)?` prompt during the deployment.

-6. **Start the deployment.** Return to your terminal. In the Algo directory, run `./algo` and follow the instructions. There are several optional features available. None are required for a fully functional VPN server. These optional features are described in greater detail in [deploy-from-ansible.md](docs/deploy-from-ansible.md).
+6. **Start the deployment.** Return to your terminal. In the Algo directory, run `./algo` and follow the instructions. There are several optional features available. None are required for a fully functional VPN server. These optional features are described in greater detail [here](docs/deploy-from-ansible.md).

-That's it! You will get the message below when the server deployment process completes. You now have an Algo server on the internet. Take note of the p12 (user certificate) password and the CA key in case you need them later, **they will only be displayed this time**.
+That's it! You will get the message below when the server deployment process completes. Take note of the p12 (user certificate) password and the CA key in case you need them later, **they will only be displayed this time**.

-You can now setup clients to connect it, e.g. your iPhone or laptop. Proceed to [Configure the VPN Clients](#configure-the-vpn-clients) below.
+You can now set up clients to connect to your VPN. Proceed to [Configure the VPN Clients](#configure-the-vpn-clients) below.

```
"# Congratulations! #"
@@ -111,36 +119,13 @@ WireGuard is used to provide VPN services on Windows. Algo generates a WireGuard

Install the [WireGuard VPN Client](https://www.wireguard.com/install/#windows-7-8-81-10-2012-2016-2019). Import the generated `wireguard/<username>.conf` file to your device, then setup a new connection with it.

-### Linux Network Manager Clients (e.g., Ubuntu, Debian, or Fedora Desktop)

-Network Manager does not support AES-GCM. In order to support Linux Desktop clients, choose the "compatible" cryptography during the deploy process and use at least Network Manager 1.4.1. See [Issue #263](https://github.com/trailofbits/algo/issues/263) for more information.

-### Linux strongSwan Clients (e.g., OpenWRT, Ubuntu Server, etc.)

-Install strongSwan, then copy the included ipsec_user.conf, ipsec_user.secrets, user.crt (user certificate), and user.key (private key) files to your client device. These will require customization based on your exact use case. These files were originally generated with a point-to-point OpenWRT-based VPN in mind.

-#### Ubuntu Server example

-1. `sudo apt-get install strongswan libstrongswan-standard-plugins`: install strongSwan
-2. `/etc/ipsec.d/certs`: copy `<name>.crt` from `algo-master/configs/<server_ip>/ipsec/manual/<name>.crt`
-3. `/etc/ipsec.d/private`: copy `<name>.key` from `algo-master/configs/<server_ip>/ipsec/manual/<name>.key`
-4. `/etc/ipsec.d/cacerts`: copy `cacert.pem` from `algo-master/configs/<server_ip>/ipsec/manual/cacert.pem`
-5. `/etc/ipsec.secrets`: add your `user.key` to the list, e.g. `<server_ip> : ECDSA <name>.key`
-6. `/etc/ipsec.conf`: add the connection from `ipsec_user.conf` and ensure `leftcert` matches the `<name>.crt` filename
-7. `sudo ipsec restart`: pick up config changes
-8. `sudo ipsec up <conn-name>`: start the ipsec tunnel
-9. `sudo ipsec down <conn-name>`: shutdown the ipsec tunnel
+### Linux WireGuard Clients

-One common use case is to let your server access your local LAN without going through the VPN. Set up a passthrough connection by adding the following to `/etc/ipsec.conf`:
+WireGuard works great with Linux clients. See [this page](docs/client-linux-wireguard.md) for an example of how to configure WireGuard on Ubuntu.

-conn lan-passthrough
-leftsubnet=192.168.1.1/24 # Replace with your LAN subnet
-rightsubnet=192.168.1.1/24 # Replace with your LAN subnet
-authby=never # No authentication necessary
-type=pass # passthrough
-auto=route # no need to ipsec up lan-passthrough
+### Linux strongSwan IPsec Clients (e.g., OpenWRT, Ubuntu Server, etc.)

-To configure the connection to come up at boot time replace `auto=add` with `auto=start`.
+Please see [this page](docs/client-linux-ipsec.md).

### Other Devices

@@ -177,7 +162,7 @@ where `user` is either `root` or `ubuntu` as listed on the success message, and
_If you chose to save the CA key during the deploy process,_ then Algo's own scripts can easily add and remove users from the VPN server.

1. Update the `users` list in your `config.cfg`
-2. Open a terminal, `cd` to the algo directory, and activate the virtual environment with `source env/bin/activate`
+2. Open a terminal, `cd` to the algo directory, and activate the virtual environment with `source .env/bin/activate`
3. Run the command: `./algo update-users`

After this process completes, the Algo VPN server will contain only the users listed in the `config.cfg` file.
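The user-update procedure above can be condensed into a single shell session (a sketch; it assumes the virtualenv lives at `.env` inside your Algo directory, as created during installation, and that a server has already been deployed):

```bash
cd algo                     # your Algo directory
source .env/bin/activate    # activate the Python virtualenv
./algo update-users         # re-sync the server with the users list in config.cfg
```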


algo (+1 -1)

@@ -4,7 +4,7 @@ set -e

if [ -z ${VIRTUAL_ENV+x} ]
then
-ACTIVATE_SCRIPT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/env/bin/activate"
+ACTIVATE_SCRIPT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.env/bin/activate"
if [ -f "$ACTIVATE_SCRIPT" ]
then
# shellcheck source=/dev/null


algo-showenv.sh (+3 -3)

@@ -68,10 +68,10 @@ elif [[ -f LICENSE && ${STAT} ]]; then
fi

# The Python version might be useful to know.
-if [[ -x ./env/bin/python ]]; then
-./env/bin/python --version 2>&1
+if [[ -x ./.env/bin/python3 ]]; then
+./.env/bin/python3 --version 2>&1
elif [[ -f ./algo ]]; then
-echo "env/bin/python not found: has 'python -m virtualenv ...' been run?"
+echo ".env/bin/python3 not found: has 'python3 -m virtualenv ...' been run?"
fi

# Just print out all command line arguments, which are expected


ansible.cfg (+1 -0)

@@ -6,6 +6,7 @@ host_key_checking = False
timeout = 60
stdout_callback = default
display_skipped_hosts = no
+force_valid_group_names = ignore

[paramiko_connection]
record_host_keys = False


config.cfg (+4 -4)

@@ -18,9 +18,6 @@ pki_in_tmpfs: true
# If True re-init all existing certificates. Boolean
keys_clean_all: False

-# Clean up cloud python environments
-clean_environment: false

# Deploy StrongSwan to enable IPsec support
ipsec_enabled: true

@@ -159,9 +156,12 @@ cloud_providers:
size: nano_1_0
image: ubuntu_18_04
scaleway:
-size: START1-S
+size: DEV1-S
image: Ubuntu Bionic Beaver
arch: x86_64
+hetzner:
+  server_type: cx11
+  image: ubuntu-18.04
openstack:
flavor_ram: ">=512"
image: Ubuntu-18.04


docs/client-linux-ipsec.md (+37 -0)

@@ -0,0 +1,37 @@
# Linux strongSwan IPsec Clients (e.g., OpenWRT, Ubuntu Server, etc.)

Install strongSwan, then copy the included ipsec_user.conf, ipsec_user.secrets, user.crt (user certificate), and user.key (private key) files to your client device. These will require customization based on your exact use case. These files were originally generated with a point-to-point OpenWRT-based VPN in mind.

## Ubuntu Server example

1. `sudo apt-get install strongswan libstrongswan-standard-plugins`: install strongSwan
2. `/etc/ipsec.d/certs`: copy `<name>.crt` from `algo-master/configs/<server_ip>/ipsec/manual/<name>.crt`
3. `/etc/ipsec.d/private`: copy `<name>.key` from `algo-master/configs/<server_ip>/ipsec/manual/<name>.key`
4. `/etc/ipsec.d/cacerts`: copy `cacert.pem` from `algo-master/configs/<server_ip>/ipsec/manual/cacert.pem`
5. `/etc/ipsec.secrets`: add your `user.key` to the list, e.g. `<server_ip> : ECDSA <name>.key`
6. `/etc/ipsec.conf`: add the connection from `ipsec_user.conf` and ensure `leftcert` matches the `<name>.crt` filename
7. `sudo ipsec restart`: pick up config changes
8. `sudo ipsec up <conn-name>`: start the ipsec tunnel
9. `sudo ipsec down <conn-name>`: shutdown the ipsec tunnel

One common use case is to let your server access your local LAN without going through the VPN. Set up a passthrough connection by adding the following to `/etc/ipsec.conf`:

conn lan-passthrough
leftsubnet=192.168.1.1/24 # Replace with your LAN subnet
rightsubnet=192.168.1.1/24 # Replace with your LAN subnet
authby=never # No authentication necessary
type=pass # passthrough
auto=route # no need to ipsec up lan-passthrough

To configure the connection to come up at boot time replace `auto=add` with `auto=start`.

## Notes on SELinux

If you use a system with SELinux enabled you might need to set appropriate file contexts:

````
semanage fcontext -a -t ipsec_key_file_t "$(pwd)(/.*)?"
restorecon -R -v $(pwd)
````

See [this comment](https://github.com/trailofbits/algo/issues/263#issuecomment-328053950).

docs/client-windows.md (+0 -6)

@@ -1,6 +0,0 @@
# Windows client setup

## Installation via profiles

1. Install the [WireGuard VPN Client](https://www.wireguard.com/install/#windows-7-8-81-10-2012-2016-2019) and start it.
2. Import the corresponding `wireguard/<name>.conf` file to your device, then setup a new connection with it.

docs/cloud-hetzner.md (+3 -0)

@@ -0,0 +1,3 @@
## API Token

Sign in to the [Hetzner Cloud Console](https://console.hetzner.cloud/), choose a project, go to `Access` → `Tokens`, and create a new token. Make sure to copy the token, because it won't be shown to you again. A token is bound to a project; to interact with the API of another project you must create a new token inside that project.
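To sanity-check a freshly created token, you can query the Hetzner Cloud API directly (a hedged example; the endpoint and `Authorization` header come from the public Hetzner Cloud API, and the token value is a placeholder):

```bash
# Placeholder token; Algo can also read it from this environment variable (HCLOUD_TOKEN).
export HCLOUD_TOKEN="your-token-here"
# A valid token returns a JSON list of the project's servers; an invalid one returns an error object.
curl -H "Authorization: Bearer $HCLOUD_TOKEN" "https://api.hetzner.cloud/v1/servers"
```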

docs/deploy-from-ansible.md (+30 -10)

@@ -41,13 +41,16 @@ Cloud roles can be activated by specifying an extra variable `provider`.

Cloud roles:

-  - role: cloud-digitalocean, provider: digitalocean
-  - role: cloud-ec2, provider: ec2
-  - role: cloud-vultr, provider: vultr
-  - role: cloud-gce, provider: gce
-  - role: cloud-azure, provider: azure
-  - role: cloud-scaleway, provider: scaleway
-  - role: cloud-openstack, provider: openstack
+  - role: cloud-digitalocean, [provider: digitalocean](#digital-ocean)
+  - role: cloud-ec2, [provider: ec2](#amazon-ec2)
+  - role: cloud-gce, [provider: gce](#google-compute-engine)
+  - role: cloud-vultr, [provider: vultr](#vultr)
+  - role: cloud-azure, [provider: azure](#azure)
+  - role: cloud-lightsail, [provider: lightsail](#lightsail)
+  - role: cloud-scaleway, [provider: scaleway](#scaleway)
+  - role: cloud-openstack, [provider: openstack](#openstack)
+  - role: cloud-cloudstack, [provider: cloudstack](#cloudstack)
+  - role: cloud-hetzner, [provider: hetzner](#hetzner)

Server roles:

@@ -180,8 +183,8 @@ Additional variables:

Required variables:

-  - gce_credentials_file
-  - [region](https://cloud.google.com/compute/docs/regions-zones/)
+  - gce_credentials_file: e.g. /configs/gce.json if you use the [GCE docs](https://trailofbits.github.io/algo/cloud-gce.html) - can also be defined in environment as GCE_CREDENTIALS_FILE_PATH
+  - [region](https://cloud.google.com/compute/docs/regions-zones/): e.g. `us-east1`

### Vultr

@@ -238,12 +241,29 @@ Possible options can be gathered via cli `aws lightsail get-regions`
Required variables:

- [scaleway_token](https://www.scaleway.com/docs/generate-an-api-token/)
-  - region: e.g. ams1, par1
+  - region: e.g. `ams1`, `par1`

### OpenStack

You need to source the rc file before running Algo. Download it from the OpenStack dashboard under Compute → API Access and source it in your shell (e.g., `source /tmp/dhc-openrc.sh`).

### CloudStack

Required variables:

- [cs_config](https://trailofbits.github.io/algo/cloud-cloudstack.html): /path/to/.cloudstack.ini
- cs_region: e.g. `exoscale`
- cs_zones: e.g. `ch-gva-2`

The first two can also be defined in your environment, using the variables `CLOUDSTACK_CONFIG` and `CLOUDSTACK_REGION`.

### Hetzner

Required variables:

- hcloud_token: Your [API token](https://trailofbits.github.io/algo/cloud-hetzner.html#api-token) - can also be defined in the environment as HCLOUD_TOKEN
- region: e.g. `nbg1`
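Putting these together, a scripted Hetzner deployment might be invoked like this (a sketch; the token and region values are placeholders, and any variable left out is assumed to be requested interactively by the playbook's prompts):

```bash
# Hypothetical token value; region taken from the required-variables list above.
ansible-playbook main.yml -e "provider=hetzner hcloud_token=your-token-here region=nbg1"
```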

### Update users

Playbook:


docs/deploy-from-fedora-workstation.md (+0 -115)

@@ -1,115 +0,0 @@
# Deploy from Fedora Workstation

These docs were written based on experience on Fedora Workstation 30.

## Prerequisites

### DNF counterparts of apt packages

The following table lists `apt` packages with their `dnf` counterpart. This is purely informative.
Using `python2-*` in favour of `python3-*` as per [declared dependency](https://github.com/trailofbits/algo#deploy-the-algo-server).

| `apt` | `dnf` |
| ----- | ----- |
| `build-essential` | `make automake gcc gcc-c++ kernel-devel` |
| `libssl-dev` | `openssl-devel` |
| `libffi-dev` | `libffi-devel` |
| `python-dev` | `python2-devel` |
| `python-pip` | `python2-pip` |
| `python-setuptools` | `python2-setuptools` |
| `python-virtualenv` | `python2-virtualenv` |

### Install requirements

First, let's make sure our system is up-to-date:

````
dnf upgrade
````

Next, install the required packages:

````
dnf install -y \
ansible \
automake \
gcc \
gcc-c++ \
kernel-devel \
openssl-devel \
libffi-devel \
libselinux-python \
python2-devel \
python2-pip \
python2-setuptools \
python2-virtualenv \
python2-crypto \
python2-pyyaml \
python2-pyOpenSSL \
python2-libselinux \
make
````

## Get Algo


[Download](https://github.com/trailofbits/algo/archive/master.zip) or clone:

````
git clone git@github.com:trailofbits/algo.git
cd algo
````

If you downloaded Algo, unzip to your prefered location and `cd` into it.
We'll assume from this point forward that our working directory is the `algo` root directory.


## Prepare algo

Some steps are needed before we can deploy our Algo VPN server.

### Check `pip`

Run `pip -v` and check the python version it is using:

````
$ pip -V
pip 19.0.3 from /usr/lib/python2.7/site-packages (python 2.7)
````

`python 2.7` is what we're looking for.

### Setup virtualenv and install requirements

````
python2 -m virtualenv --system-site-packages env
source env/bin/activate
pip -q install --user -r requirements.txt
````

## Configure

Edit the userlist and any other settings you desire in `config.cfg` using your prefered editor.

## Deploy

We can now deploy our server by running:

````
./algo
````

Note the IP and password of the newly created Algo VPN server and store it safely.

If you want to setup client config on your Fedora Workstation, refer to [the Linux Client docs](client-linux.md).

## Notes on SELinux

If you have SELinux enabled, you'll need to set appropriate file contexts:

````
semanage fcontext -a -t ipsec_key_file_t "$(pwd)(/.*)?"
restorecon -R -v $(pwd)
````

See [this comment](https://github.com/trailofbits/algo/issues/263#issuecomment-328053950).

docs/deploy-from-redhat-centos6.md (+48 -37)

@@ -5,8 +5,8 @@ Many people prefer RedHat or CentOS 6 (or similar variants like Amazon Linux) fo
## Step 1: Prep for RH/CentOS 6.8/Amazon

```shell
-yum -y -q update
-yum -y -q install epel-release
+yum -y update
+yum -y install epel-release
```

Enable any kernel updates:
@@ -17,53 +17,64 @@ reboot

## Step 2: Install Ansible and launch Algo

-Fix GPG key warnings during Ansible rpm install:
+RedHat/CentOS 6.x uses Python 2.6 by default, which is explicitly deprecated and produces many warnings and errors, so we must install a safe, non-invasive 3.6 tool set which has to be expressly enabled (and will not survive login sessions and reboots):

- Install the Software Collections Library (to enable Python 3.6)
```shell
rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
yum -y install centos-release-SCL
yum -y install \
openssl-devel \
libffi-devel \
automake \
gcc \
gcc-c++ \
kernel-devel \
rh-python36-python \
rh-python36-python-devel \
rh-python36-python-setuptools \
rh-python36-python-pip \
rh-python36-python-virtualenv \
rh-python36-python-crypto \
rh-python36-PyYAML \
libselinux-python \
python-crypto \
wget \
unzip \
nano
```

Fix GPG key warning during official Software Collections (SCL) package install:

```shell
rpm --import https://raw.githubusercontent.com/sclorg/centos-release-scl/master/centos-release-scl/RPM-GPG-KEY-CentOS-SIG-SCLo
- 3.6 will not be used until explicitly enabled, per login session. Enable 3.6 default for this session (needs re-run between logins & reboots)
```
scl enable rh-python36 bash
```

-RedHat/CentOS 6.x uses Python 2.6 by default, which is explicitly deprecated and produces many warnings and errors, so we must install a safe, non-invasive 2.7 tool set which has to be expressly enabled (and will not survive login sessions and reboots):
+- We're now defaulted to 3.6. Upgrade required components
+```
+python3 -m pip install -U pip virtualenv pycrypto setuptools
+```

```shell
# Install the Software Collections Library (to enable Python 2.7)
yum -y -q install centos-release-SCL

# 2.7 will not be used until explicitly enabled, per login session
yum -y -q install python27-python-devel python27-python-setuptools python27-python-pip
yum -y -q install openssl-devel libffi-devel automake gcc gcc-c++ kernel-devel wget unzip ansible nano

# Enable 2.7 default for this session (needs re-run between logins & reboots)
# shellcheck disable=SC1091
source /opt/rh/python27/enable
# We're now defaulted to 2.7

# Upgrade pip itself
pip -q install --upgrade pip
# python-devel needed to prevent setup.py crash
pip -q install pycrypto
# pycrypto 2.7.1 needed for latest security patch
pip -q install setuptools --upgrade
# virtualenv to make installing dependencies easier
pip -q install virtualenv

wget -q https://github.com/trailofbits/algo/archive/master.zip
- Download and unzip Algo
```
wget https://github.com/trailofbits/algo/archive/master.zip
unzip master.zip
cd algo-master || echo "No Algo directory found"
```

# Set up a virtualenv and install the local Algo dependencies (must be run from algo-master)
virtualenv env && source env/bin/activate
pip -q install -r requirements.txt
- Set up a virtualenv and install the local Algo dependencies (must be run from algo-master)
```
python3 -m virtualenv --python="$(command -v python3)" .env
source .env/bin/activate
python3 -m pip install -U pip virtualenv
python3 -m pip install -r requirements.txt
```

# Edit the userlist and any other settings you desire
- Edit the userlist and any other settings you desire
```
nano config.cfg
# Now you can run the Algo installer!
```

- Now you can run the Algo installer!
```
./algo
```



docs/deploy-from-script-or-cloud-init-to-localhost.md (+16 -4)

@@ -8,7 +8,7 @@ The script doesn't configure any parameters in your cloud, so it's on your own t

You can copy-paste the snippet below to the user data (cloud-init or startup script) field when creating a new server.

For now it is only possible for [DigitalOcean](https://www.digitalocean.com/docs/droplets/resources/metadata/), Amazon [EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) and [Lightsail](https://lightsail.aws.amazon.com/ls/docs/en/articles/lightsail-how-to-configure-server-additional-data-shell-script), [Google Cloud](https://cloud.google.com/compute/docs/startupscript), [Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init) and [Vultr](https://my.vultr.com/startup/), although Vultr doesn't [officially support cloud-init](https://www.vultr.com/docs/getting-started-with-cloud-init).
For now this has only been successfully tested on [DigitalOcean](https://www.digitalocean.com/docs/droplets/resources/metadata/), Amazon [EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) and [Lightsail](https://lightsail.aws.amazon.com/ls/docs/en/articles/lightsail-how-to-configure-server-additional-data-shell-script), [Google Cloud](https://cloud.google.com/compute/docs/startupscript), [Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init) and [Vultr](https://my.vultr.com/startup/), although Vultr doesn't [officially support cloud-init](https://www.vultr.com/docs/getting-started-with-cloud-init).

```
#!/bin/bash
@@ -18,18 +18,30 @@ The command will prepare the environment and install AlgoVPN with the default pa

## Variables

- `METHOD`: Which method of the deployment to use. Possible values are local and cloud. Default: cloud. The cloud method is intended to use in cloud-init deployments only. If you are not using cloud-init to deploy the server you have to use the local method.
- `METHOD`: which method of the deployment to use. Possible values are local and cloud. Default: cloud. The cloud method is intended to use in cloud-init deployments only. If you are not using cloud-init to deploy the server you have to use the local method.

- `ONDEMAND_CELLULAR`: "Connect On Demand" when connected to cellular networks. Boolean. Default: false.

- `ONDEMAND_WIFI`: "Connect On Demand" when connected to Wi-Fi. Default: false.

- `ONDEMAND_WIFI_EXCLUDE`: List the names of any trusted Wi-Fi networks where macOS/iOS IPsec clients should not use "Connect On Demand". Comma-separated list.

- `STORE_PKI`: Whether to retain the PKI (required to add users in the future, but less secure). Default: false.

- `DNS_ADBLOCKING`: To install an ad blocking DNS resolver. Default: false.

- `SSH_TUNNELING`: Enable SSH tunneling for each user. Default: false.

- `ENDPOINT`: The public IP address or domain name of your server (IMPORTANT: this is used to verify the certificate). It will be gathered automatically for DigitalOcean, AWS, GCE, Azure or Vultr if the `METHOD` is cloud. Otherwise you need to define this variable according to your public IP address.

- `USERS`: list of VPN users. Comma-separated list. Default: user1.

- `REPO_SLUG`: Owner and repository used to fetch the installation scripts. Default: trailofbits/algo.

- `REPO_BRANCH`: Branch for `REPO_SLUG`. Default: master.

- `EXTRA_VARS`: Additional extra variables.

- `ANSIBLE_EXTRA_ARGS`: Any available Ansible parameters, e.g. `--skip-tags apparmor`.

## Examples
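A minimal cloud-init user data sketch built from the variables above. All values are illustrative, and the final line assumes the installer is fetched from the default `REPO_SLUG`/`REPO_BRANCH` location (a hypothetical invocation; adapt it to however you actually retrieve the script):

```
#!/bin/bash
# Illustrative values only; any variable you omit keeps its default.
export METHOD=local                      # not using cloud-init metadata discovery
export ENDPOINT=203.0.113.10             # required when METHOD is local
export USERS=alice,bob
export ONDEMAND_WIFI=true
export ONDEMAND_WIFI_EXCLUDE=HomeNet,CafeNet
curl -s "https://raw.githubusercontent.com/trailofbits/algo/master/install.sh" | sudo -E bash -x
```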


docs/deploy-from-windows.md (+1 -1)

@@ -21,7 +21,7 @@ Wait a minute for Windows to install a few things in the background (it will eve
Install additional packages:

```shell
sudo apt-get update && sudo apt-get install git build-essential libssl-dev libffi-dev python-dev python-pip python-setuptools python-virtualenv -y
sudo apt-get update && sudo apt-get install git build-essential libssl-dev libffi-dev python3-dev python3-pip python3-setuptools python3-virtualenv -y
```

Clone the Algo repository:


docs/deploy-to-ubuntu.md (+1 -1)

@@ -1,6 +1,6 @@
# Local Installation

You can use Algo to configure a pre-existing server as an AlgoVPN rather than using it create and configure a new server on a supported cloud provider. This is referred to as a **local** installation rather than a **cloud** deployment.
You can use Algo to configure a pre-existing server as an AlgoVPN rather than using it to create and configure a new server on a supported cloud provider. This is referred to as a **local** installation rather than a **cloud** deployment.

Install the Algo scripts following the normal installation instructions, then choose:
```


docs/index.md (+2 -2)

@@ -1,7 +1,6 @@
# Algo VPN documentation

* Deployment instructions
- Deploy from [Fedora Workstation (26)](deploy-from-fedora-workstation.md)
- Deploy from [RedHat/CentOS 6.x](deploy-from-redhat-centos6.md)
- Deploy from [Windows](deploy-from-windows.md)
- Deploy from a [Docker container](deploy-from-docker.md)
@@ -11,9 +10,9 @@
- Setup [Android](client-android.md) clients
- Setup [Generic/Linux](client-linux.md) clients with Ansible
- Setup Ubuntu clients to use [WireGuard](client-linux-wireguard.md)
- Setup Linux clients to use [IPSEC](client-linux-ipsec.md)
- Setup Apple devices to use [IPSEC](client-apple-ipsec.md)
- Setup Macs running macOS 10.13 or older to use [Wireguard](client-macos-wireguard.md)
- Manual Windows 10 client setup for [IPSEC](client-windows.md)
* Cloud provider setup
- Configure [Amazon EC2](cloud-amazon-ec2.md)
- Configure [Azure](cloud-azure.md)
@@ -21,6 +20,7 @@
- Configure [Google Cloud Platform](cloud-gce.md)
- Configure [Vultr](cloud-vultr.md)
- Configure [CloudStack](cloud-cloudstack.md)
- Configure [Hetzner Cloud](cloud-hetzner.md)
* Advanced Deployment
- Deploy to your own [FreeBSD](deploy-to-freebsd.md) server
- Deploy to your own [Ubuntu](deploy-to-ubuntu.md) server


docs/troubleshooting.md (+8 -42)

@@ -36,6 +36,10 @@ First of all, check [this](https://github.com/trailofbits/algo#features) and ens

Look here if you have a problem running the installer to set up a new Algo server.

### Python version is not supported

The minimum Python version required to run Algo is 3.6. Most modern operating systems should include it by default, but if yours doesn't meet the requirement you will have to upgrade. See the official documentation for your OS, or download it manually from https://www.python.org/downloads/. Otherwise, you may [deploy from Docker](deploy-from-docker.md).
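Before upgrading, you can confirm what the system interpreter reports. The one-liner below mirrors the playbook's minimum-version assertion (a sketch for checking only, not Algo's actual code):

```shell
# Print the interpreter version, then whether it meets the 3.6 minimum.
python3 --version
python3 -c 'import sys; print(sys.version_info >= (3, 6))'
```

If the second command prints `False` (or `python3` is missing entirely), upgrade before running Algo.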

### Error: "You have not agreed to the Xcode license agreements"

On macOS, you tried to install the dependencies with pip and encountered the following error:
@@ -105,25 +109,13 @@ Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/p
Storing debug log for failure in /Users/algore/Library/Logs/pip.log
```

You are running an old version of `pip` that cannot download the binary `cryptography` dependency. Upgrade to a new version of `pip` by running `sudo pip install -U pip`.

### Error: "TypeError: must be str, not bytes"

You tried to install Algo and you see many repeated errors referencing `TypeError`, such as `TypeError: '>=' not supported between instances of 'TypeError' and 'int'` and `TypeError: must be str, not bytes`. For example:

```
TASK [Wait until SSH becomes ready...] *****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: must be str, not bytes
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/x_/nvr61v455qq98vp22k5r5vm40000gn/T/ansible_6sdjysth/ansible_module_wait_for.py\", line 538, in <module>\n main()\n File \"/var/folders/x_/nvr61v455qq98vp22k5r5vm40000gn/T/ansible_6sdjysth/ansible_module_wait_for.py\", line 483, in main\n data += response\nTypeError: must be str, not bytes\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```

You may be trying to run Algo with Python3. Algo uses [Ansible](https://github.com/ansible/ansible) which has issues with Python3, although this situation is improving over time. Try running Algo with Python2 to fix this issue. Open your terminal and `cd` to the directory with Algo, then run: ``virtualenv -p `which python2.7` env && source env/bin/activate && pip install -r requirements.txt``
You are running an old version of `pip` that cannot download the binary `cryptography` dependency. Upgrade to a new version of `pip` by running `sudo python3 -m pip install -U pip`.

### Error: "ansible-playbook: command not found"

You tried to install Algo and you see an error that reads "ansible-playbook: command not found."

You did not finish step 4 in the installation instructions, "[Install Algo's remaining dependencies](https://github.com/trailofbits/algo#deploy-the-algo-server)." Algo depends on [Ansible](https://github.com/ansible/ansible), an automation framework, and this error indicates that you do not have Ansible installed. Ansible is installed by `pip` when you run `python -m pip install -r requirements.txt`. You must complete the installation instructions to run the Algo server deployment process.
You did not finish step 4 in the installation instructions, "[Install Algo's remaining dependencies](https://github.com/trailofbits/algo#deploy-the-algo-server)." Algo depends on [Ansible](https://github.com/ansible/ansible), an automation framework, and this error indicates that you do not have Ansible installed. Ansible is installed by `pip` when you run `python3 -m pip install -r requirements.txt`. You must complete the installation instructions to run the Algo server deployment process.

### Could not fetch URL ... TLSV1_ALERT_PROTOCOL_VERSION

@@ -137,9 +129,9 @@ No matching distribution found for SecretStorage<3 (from -r requirements.txt (li

It's time to upgrade your python.

`brew upgrade python2`
`brew upgrade python3`

You can also download python 2.7.x from python.org.
You can also download python 3.7.x from python.org.

### Bad owner or permissions on .ssh

@@ -414,32 +406,6 @@ Certain cloud providers (like AWS Lightsail) don't assign an IPv6 address to you

Manually disconnecting and then reconnecting should restore your connection. To solve this, you need to either "force IPv4 connection" if available on your phone, or install an IPv4 APN, which might be available from your carrier tech support. T-mobile's is available [for iOS here under "iOS IPv4/IPv6 fix"](https://www.reddit.com/r/tmobile/wiki/index), and [here is a walkthrough for Android phones](https://www.myopenrouter.com/article/vpn-connections-not-working-t-mobile-heres-how-fix).

### Error: name 'basestring' is not defined

```
TASK [cloud-digitalocean : Creating a droplet...] *******************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NameError: name 'basestring' is not defined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "name 'basestring' is not defined"}
```

If you get something like the above it's likely you're not using a python2 virtualenv.

Ensure running `python2.7` drops you into a python 2 shell (it looks something like this)

```
user@homebook ~ $ python2.7
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Then rerun the dependency installation explicitly using python 2.7

```
python2.7 -m virtualenv --python=`which python2.7` env && source env/bin/activate && python2.7 -m pip install -U pip && python2.7 -m pip install -r requirements.txt
```

### IPsec: Difficulty connecting through router

Some routers treat IPsec connections specially because older versions of IPsec did not work properly through [NAT](https://en.wikipedia.org/wiki/Network_address_translation). If you're having problems connecting to your AlgoVPN through a specific router using IPsec you might need to change some settings on the router.


input.yml (+2 -1)

@@ -14,9 +14,10 @@
- { name: DigitalOcean, alias: digitalocean }
- { name: Amazon Lightsail, alias: lightsail }
- { name: Amazon EC2, alias: ec2 }
- { name: Vultr, alias: vultr }
- { name: Microsoft Azure, alias: azure }
- { name: Google Compute Engine, alias: gce }
- { name: Hetzner Cloud, alias: hetzner }
- { name: Vultr, alias: vultr }
- { name: Scaleway, alias: scaleway}
- { name: OpenStack (DreamCompute optimised), alias: openstack }
- { name: CloudStack (Exoscale optimised), alias: cloudstack }


install.sh (+7 -7)

@@ -27,10 +27,10 @@ installRequirements() {
build-essential \
libssl-dev \
libffi-dev \
python-dev \
python-pip \
python-setuptools \
python-virtualenv \
python3-dev \
python3-pip \
python3-setuptools \
python3-virtualenv \
bind9-host \
jq -y
}
@@ -39,11 +39,11 @@ getAlgo() {
[ ! -d "algo" ] && git clone "https://github.com/${REPO_SLUG}" -b "${REPO_BRANCH}" algo
cd algo

python -m virtualenv --python="$(command -v python2)" .venv
python3 -m virtualenv --python="$(command -v python3)" .venv
# shellcheck source=/dev/null
. .venv/bin/activate
python -m pip install -U pip virtualenv
python -m pip install -r requirements.txt
python3 -m pip install -U pip virtualenv
python3 -m pip install -r requirements.txt
}

publicIpFromInterface() {


inventory (+1 -1)

@@ -1,2 +1,2 @@
[local]
localhost ansible_connection=local ansible_python_interpreter=python
localhost ansible_connection=local ansible_python_interpreter=python3

library/gce_region_facts.py (+0 -138)

@@ -1,139 +0,0 @@
#!/usr/bin/python
# Copyright 2013 Google Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type


ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}


DOCUMENTATION = '''
module: gce_region_facts
version_added: "5.3"
short_description: Gather facts about GCE regions.
description:
- Gather facts about GCE regions.
options:
service_account_email:
version_added: "1.6"
description:
- service account email
required: false
default: null
aliases: []
pem_file:
version_added: "1.6"
description:
- path to the pem file associated with the service account email
This option is deprecated. Use 'credentials_file'.
required: false
default: null
aliases: []
credentials_file:
version_added: "2.1.0"
description:
- path to the JSON file associated with the service account email
required: false
default: null
aliases: []
project_id:
version_added: "1.6"
description:
- your GCE project ID
required: false
default: null
aliases: []
requirements:
- "python >= 2.6"
- "apache-libcloud >= 0.13.3, >= 0.17.0 if using JSON credentials"
author: "Jack Ivanov (@jackivanov)"
'''

EXAMPLES = '''
# Gather facts about all regions
- gce_region_facts:
'''

RETURN = '''
regions:
returned: on success
description: >
Each element consists of a dict with all the information related
to that region.
type: list
sample: "[{
"name": "asia-east1",
"status": "UP",
"zones": [
{
"name": "asia-east1-a",
"status": "UP"
},
{
"name": "asia-east1-b",
"status": "UP"
},
{
"name": "asia-east1-c",
"status": "UP"
}
]
}]"
'''
try:
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver
    from libcloud.common.google import GoogleBaseError, QuotaExceededError, ResourceExistsError, ResourceNotFoundError
    _ = Provider.GCE
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.gce import gce_connect, unexpected_error_msg


def main():
    module = AnsibleModule(
        argument_spec=dict(
            service_account_email=dict(),
            pem_file=dict(type='path'),
            credentials_file=dict(type='path'),
            project_id=dict(),
        )
    )

    if not HAS_LIBCLOUD:
        module.fail_json(msg='libcloud with GCE support (0.17.0+) required for this module')

    gce = gce_connect(module)

    changed = False
    gce_regions = []

    try:
        regions = gce.ex_list_regions()
        for r in regions:
            gce_region = {}
            gce_region['name'] = r.name
            gce_region['status'] = r.status
            gce_region['zones'] = []
            for z in r.zones:
                gce_zone = {}
                gce_zone['name'] = z.name
                gce_zone['status'] = z.status
                gce_region['zones'].append(gce_zone)
            gce_regions.append(gce_region)
        json_output = {'regions': gce_regions}
        module.exit_json(changed=False, results=json_output)
    except ResourceNotFoundError:
        pass


if __name__ == '__main__':
    main()

library/gcp_compute_location_info.py (+93 -0)

@@ -0,0 +1,93 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function

__metaclass__ = type

################################################################################
# Documentation
################################################################################

ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ["preview"], 'supported_by': 'community'}

################################################################################
# Imports
################################################################################
from ansible.module_utils.gcp_utils import navigate_hash, GcpSession, GcpModule, GcpRequest
import json

################################################################################
# Main
################################################################################


def main():
    module = GcpModule(argument_spec=dict(filters=dict(type='list', elements='str'), scope=dict(required=True, type='str')))

    if module._name == 'gcp_compute_image_facts':
        module.deprecate("The 'gcp_compute_image_facts' module has been renamed to 'gcp_compute_regions_info'", version='2.13')

    if not module.params['scopes']:
        module.params['scopes'] = ['https://www.googleapis.com/auth/compute']

    items = fetch_list(module, collection(module), query_options(module.params['filters']))
    if items.get('items'):
        items = items.get('items')
    else:
        items = []
    return_value = {'resources': items}
    module.exit_json(**return_value)


def collection(module):
    return "https://www.googleapis.com/compute/v1/projects/{project}/{scope}".format(**module.params)


def fetch_list(module, link, query):
    auth = GcpSession(module, 'compute')
    response = auth.get(link, params={'filter': query})
    return return_if_object(module, response)


def query_options(filters):
    if not filters:
        return ''

    if len(filters) == 1:
        return filters[0]
    else:
        queries = []
        for f in filters:
            # For multiple queries, all queries should have ()
            if f[0] != '(' and f[-1] != ')':
                queries.append("(%s)" % ''.join(f))
            else:
                queries.append(f)

        return ' '.join(queries)


def return_if_object(module, response):
    # If not found, return nothing.
    if response.status_code == 404:
        return None

    # If no content, return nothing.
    if response.status_code == 204:
        return None

    try:
        module.raise_for_status(response)
        result = response.json()
    except getattr(json.decoder, 'JSONDecodeError', ValueError) as inst:
        module.fail_json(msg="Invalid JSON response with error: %s" % inst)

    if navigate_hash(result, ['error', 'errors']):
        module.fail_json(msg=navigate_hash(result, ['error', 'errors']))

    return result


if __name__ == "__main__":
    main()

library/lightsail_region_facts.py (+1 -1)

@@ -93,7 +93,7 @@ def main():
response = client.get_regions(
includeAvailabilityZones=False
)
module.exit_json(changed=False, results=response)
module.exit_json(changed=False, data=response)
except (botocore.exceptions.ClientError, Exception) as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())



library/scaleway_compute.py (+132 -74)

@@ -29,6 +29,15 @@ extends_documentation_fragment: scaleway

options:

public_ip:
description:
- Manage public IP on a Scaleway server
- Could be Scaleway IP address UUID
- C(dynamic) Means that IP is destroyed at the same time the host is destroyed
- C(absent) Means no public IP at all
version_added: '2.8'
default: absent

enable_ipv6:
description:
- Enable public IPv6 connectivity on the instance
@@ -88,26 +97,6 @@ options:
description:
- Commercial name of the compute node
required: true
choices:
- ARM64-2GB
- ARM64-4GB
- ARM64-8GB
- ARM64-16GB
- ARM64-32GB
- ARM64-64GB
- ARM64-128GB
- C1
- C2S
- C2M
- C2L
- START1-XS
- START1-S
- START1-M
- START1-L
- X64-15GB
- X64-30GB
- X64-60GB
- X64-120GB

wait:
description:
@@ -126,6 +115,13 @@ options:
- Time to wait before every attempt to check the state of the server
required: false
default: 3

security_group:
description:
- Security group unique identifier
- If no value provided, the default security group or current security group will be used
required: false
version_added: "2.8"
'''

EXAMPLES = '''
@@ -141,6 +137,19 @@ EXAMPLES = '''
- test
- www

- name: Create a server attached to a security group
  scaleway_compute:
    name: foobar
    state: present
    image: 89ee4018-f8c3-4dc4-a6b5-bca14f985ebe
    organization: 951df375-e094-4d26-97c1-ba548eeb9c42
    region: ams1
    commercial_type: VC1S
    security_group: 4a31b633-118e-4900-bd52-facf1085fc8d
    tags:
      - test
      - www

- name: Destroy it right after
scaleway_compute:
name: foobar
@@ -161,34 +170,6 @@ from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote
from ansible.module_utils.scaleway import SCALEWAY_LOCATION, scaleway_argument_spec, Scaleway

SCALEWAY_COMMERCIAL_TYPES = [

# Virtual ARM64 compute instance
'ARM64-2GB',
'ARM64-4GB',
'ARM64-8GB',
'ARM64-16GB',
'ARM64-32GB',
'ARM64-64GB',
'ARM64-128GB',

# Baremetal
'C1', # ARM64 (4 cores) - 2GB
'C2S', # X86-64 (4 cores) - 8GB
'C2M', # X86-64 (8 cores) - 16GB
'C2L', # x86-64 (8 cores) - 32 GB

# Virtual X86-64 compute instance
'START1-XS', # Starter X86-64 (1 core) - 1GB - 25 GB NVMe
'START1-S', # Starter X86-64 (2 cores) - 2GB - 50 GB NVMe
'START1-M', # Starter X86-64 (4 cores) - 4GB - 100 GB NVMe
'START1-L', # Starter X86-64 (8 cores) - 8GB - 200 GB NVMe
'X64-15GB',
'X64-30GB',
'X64-60GB',
'X64-120GB',
]

SCALEWAY_SERVER_STATES = (
'stopped',
'stopping',
@@ -204,6 +185,17 @@ SCALEWAY_TRANSITIONS_STATES = (
)


def check_image_id(compute_api, image_id):
    response = compute_api.get(path="images")

    if response.ok and response.json:
        image_ids = [image["id"] for image in response.json["images"]]
        if image_id not in image_ids:
            compute_api.module.fail_json(msg='Error in getting image %s on %s' % (image_id, compute_api.module.params.get('api_url')))
    else:
        compute_api.module.fail_json(msg="Error in getting images from: %s" % compute_api.module.params.get('api_url'))


def fetch_state(compute_api, server):
compute_api.module.debug("fetch_state of server: %s" % server["id"])
response = compute_api.get(path="servers/%s" % server["id"])
@@ -242,17 +234,51 @@ def wait_to_complete_state_transition(compute_api, server):
compute_api.module.fail_json(msg="Server takes too long to finish its transition")


def public_ip_payload(compute_api, public_ip):
    # We don't want a public ip
    if public_ip in ("absent",):
        return {"dynamic_ip_required": False}

    # IP is only attached to the instance and is released as soon as the instance terminates
    if public_ip in ("dynamic", "allocated"):
        return {"dynamic_ip_required": True}

    # We check that the IP we want to attach exists, if so its ID is returned
    response = compute_api.get("ips")
    if not response.ok:
        msg = 'Error during public IP validation: (%s) %s' % (response.status_code, response.json)
        compute_api.module.fail_json(msg=msg)

    ip_list = []
    try:
        ip_list = response.json["ips"]
    except KeyError:
        compute_api.module.fail_json(msg="Error in getting the IP information from: %s" % response.json)

    lookup = [ip["id"] for ip in ip_list]
    if public_ip in lookup:
        return {"public_ip": public_ip}


def create_server(compute_api, server):
compute_api.module.debug("Starting a create_server")
target_server = None
response = compute_api.post(path="servers",
data={"enable_ipv6": server["enable_ipv6"],
"boot_type": server["boot_type"],
"tags": server["tags"],
"commercial_type": server["commercial_type"],
"image": server["image"],
"name": server["name"],
"organization": server["organization"]})
data = {"enable_ipv6": server["enable_ipv6"],
"tags": server["tags"],
"commercial_type": server["commercial_type"],
"image": server["image"],
"dynamic_ip_required": server["dynamic_ip_required"],
"name": server["name"],
"organization": server["organization"]
}

if server["boot_type"]:
data["boot_type"] = server["boot_type"]

if server["security_group"]:
data["security_group"] = server["security_group"]

response = compute_api.post(path="servers", data=data)

if not response.ok:
msg = 'Error during server creation: (%s) %s' % (response.status_code, response.json)
@@ -325,7 +351,7 @@ def present_strategy(compute_api, wished_server):
if compute_api.module.check_mode:
return changed, {"status": "Server %s attributes would be changed." % target_server["id"]}

server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)
target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)

return changed, target_server

@@ -347,7 +373,7 @@ def absent_strategy(compute_api, wished_server):
return changed, {"status": "Server %s would be made absent." % target_server["id"]}

# A server MUST be stopped to be deleted.
while not fetch_state(compute_api=compute_api, server=target_server) == "stopped":
while fetch_state(compute_api=compute_api, server=target_server) != "stopped":
wait_to_complete_state_transition(compute_api=compute_api, server=target_server)
response = stop_server(compute_api=compute_api, server=target_server)

@@ -388,7 +414,7 @@ def running_strategy(compute_api, wished_server):
if compute_api.module.check_mode:
return changed, {"status": "Server %s attributes would be changed before running it." % target_server["id"]}

server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)
target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)

current_state = fetch_state(compute_api=compute_api, server=target_server)
if current_state not in ("running", "starting"):
@@ -432,7 +458,7 @@ def stop_strategy(compute_api, wished_server):
return changed, {
"status": "Server %s attributes would be changed before stopping it." % target_server["id"]}

server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)
target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)

wait_to_complete_state_transition(compute_api=compute_api, server=target_server)

@@ -479,7 +505,7 @@ def restart_strategy(compute_api, wished_server):
return changed, {
"status": "Server %s attributes would be changed before rebooting it." % target_server["id"]}

server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)
target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server)

changed = True
if compute_api.module.check_mode:
@@ -518,8 +544,8 @@ state_strategy = {
def find(compute_api, wished_server, per_page=1):
compute_api.module.debug("Getting inside find")
# Only the name attribute is accepted in the Compute query API
url = 'servers?name=%s&per_page=%d' % (urlquote(wished_server["name"]), per_page)
response = compute_api.get(url)
response = compute_api.get("servers", params={"name": wished_server["name"],
"per_page": per_page})

if not response.ok:
msg = 'Error during server search: (%s) %s' % (response.status_code, response.json)
@@ -535,6 +561,7 @@ PATCH_MUTABLE_SERVER_ATTRIBUTES = (
"tags",
"name",
"dynamic_ip_required",
"security_group",
)


@@ -546,29 +573,51 @@ def server_attributes_should_be_changed(compute_api, target_server, wished_serve
for x in PATCH_MUTABLE_SERVER_ATTRIBUTES
if x in target_server and x in wished_server)
compute_api.module.debug("Debug dict %s" % debug_dict)

try:
return any([target_server[x] != wished_server[x]
for x in PATCH_MUTABLE_SERVER_ATTRIBUTES
if x in target_server and x in wished_server])
for key in PATCH_MUTABLE_SERVER_ATTRIBUTES:
if key in target_server and key in wished_server:
# When you are working with dict, only ID matter as we ask user to put only the resource ID in the playbook
if isinstance(target_server[key], dict) and wished_server[key] and "id" in target_server[key].keys(
) and target_server[key]["id"] != wished_server[key]:
return True
# Handling other structure compare simply the two objects content
elif not isinstance(target_server[key], dict) and target_server[key] != wished_server[key]:
return True
return False
except AttributeError:
compute_api.module.fail_json(msg="Error while checking if attributes should be changed")


def server_change_attributes(compute_api, target_server, wished_server):
compute_api.module.debug("Starting patching server attributes")
patch_payload = dict((x, wished_server[x])
for x in PATCH_MUTABLE_SERVER_ATTRIBUTES
if x in wished_server and x in target_server)
patch_payload = dict()

for key in PATCH_MUTABLE_SERVER_ATTRIBUTES:
if key in target_server and key in wished_server:
# When you are working with dict, only ID matter as we ask user to put only the resource ID in the playbook
if isinstance(target_server[key], dict) and "id" in target_server[key] and wished_server[key]:
# Setting all key to current value except ID
key_dict = dict((x, target_server[key][x]) for x in target_server[key].keys() if x != "id")
# Setting ID to the user specified ID
key_dict["id"] = wished_server[key]
patch_payload[key] = key_dict
elif not isinstance(target_server[key], dict):
patch_payload[key] = wished_server[key]

response = compute_api.patch(path="servers/%s" % target_server["id"],
data=patch_payload)
if not response.ok:
msg = 'Error during server attributes patching: (%s) %s' % (response.status_code, response.json)
compute_api.module.fail_json(msg=msg)

try:
target_server = response.json["server"]
except KeyError:
compute_api.module.fail_json(msg="Error in getting the server information from: %s" % response.json)

wait_to_complete_state_transition(compute_api=compute_api, server=target_server)

return response
return target_server


def core(module):
@@ -581,12 +630,19 @@ def core(module):
"enable_ipv6": module.params["enable_ipv6"],
"boot_type": module.params["boot_type"],
"tags": module.params["tags"],
"organization": module.params["organization"]
"organization": module.params["organization"],
"security_group": module.params["security_group"]
}
module.params['api_url'] = SCALEWAY_LOCATION[region]["api_endpoint"]

compute_api = Scaleway(module=module)

check_image_id(compute_api, wished_server["image"])

# IP parameters of the wished server depends on the configuration
ip_payload = public_ip_payload(compute_api=compute_api, public_ip=module.params["public_ip"])
wished_server.update(ip_payload)

changed, summary = state_strategy[wished_server["state"]](compute_api=compute_api, wished_server=wished_server)
module.exit_json(changed=changed, msg=summary)

@@ -597,15 +653,17 @@ def main():
image=dict(required=True),
name=dict(),
region=dict(required=True, choices=SCALEWAY_LOCATION.keys()),
commercial_type=dict(required=True, choices=SCALEWAY_COMMERCIAL_TYPES),
commercial_type=dict(required=True),
enable_ipv6=dict(default=False, type="bool"),
boot_type=dict(default="bootscript"),
boot_type=dict(choices=['bootscript', 'local']),
public_ip=dict(default="absent"),
state=dict(choices=state_strategy.keys(), default='present'),
tags=dict(type="list", default=[]),
organization=dict(required=True),
wait=dict(type="bool", default=False),
wait_timeout=dict(type="int", default=300),
wait_sleep_time=dict(type="int", default=3),
security_group=dict(),
))
module = AnsibleModule(
argument_spec=argument_spec,


main.yml (+10 -2)

@@ -17,7 +17,15 @@
when: '"ansible" in item'
with_items: "{{ lookup('file', 'requirements.txt').splitlines() }}"

- name: Verify Ansible meets Algo VPN requirements.
- name: Verify Python meets Algo VPN requirements
assert:
that: (ansible_python.version.major|string + '.' + ansible_python.version.minor|string)|float is version('3.6', '>=')
msg: >
Python version is not supported.
You must upgrade to at least Python 3.6 to use this version of Algo.
See for more details - https://trailofbits.github.io/algo/troubleshooting.html#python-version-is-not-supported
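The assert above joins `major.minor` into a string and compares it as a float. The same check can be sketched in plain Python; an integer-tuple comparison is a more robust equivalent (a sketch, not Algo's code), since `float("3.10")` parses as `3.1` and would wrongly fail a `>= 3.6` check for a hypothetical minor version of 10.

```python
def version_ok(major, minor, required=(3, 6)):
    # Compare as an integer tuple rather than joining "major.minor" and
    # casting to float: float("3.10") is 3.1, which would wrongly fail
    # a >= 3.6 check even though 3.10 is newer than 3.6.
    return (int(major), int(minor)) >= required
```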

- name: Verify Ansible meets Algo VPN requirements
assert:
that:
- ansible_version.full is version(required_ansible_version.ver, required_ansible_version.op)
@@ -25,7 +33,7 @@
msg: >
Ansible version is {{ ansible_version.full }}.
You must update the requirements to use this version of Algo.
Try to run python -m pip install -U -r requirements.txt
Try to run python3 -m pip install -U -r requirements.txt

- name: Include prompts playbook
import_playbook: input.yml


playbooks/cloud-post.yml (+1 -1)

@@ -1,5 +1,5 @@
---
- name: Set subjectAltName as afact
- name: Set subjectAltName as a fact
set_fact:
IP_subject_alt_name: "{{ (IP_subject_alt_name if algo_provider == 'local' else cloud_instance_ip) | lower }}"



requirements.txt (+1 -1)

@@ -1,2 +1,2 @@
ansible==2.7.12
ansible==2.8.3
netaddr

roles/cloud-azure/defaults/main.yml (+0 -1)

@@ -1,5 +1,4 @@
---
azure_venv: "{{ playbook_dir }}/configs/.venvs/azure"
_azure_regions: >
[
{


roles/cloud-azure/tasks/main.yml (+31 -34)

@@ -2,40 +2,37 @@
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Include prompts
import_tasks: prompts.yml
- name: Include prompts
import_tasks: prompts.yml

- set_fact:
algo_region: >-
{% if region is defined %}{{ region }}
{%- elif _algo_region.user_input %}{{ azure_regions[_algo_region.user_input | int -1 ]['name'] }}
{%- else %}{{ azure_regions[default_region | int - 1]['name'] }}{% endif %}
- set_fact:
algo_region: >-
{% if region is defined %}{{ region }}
{%- elif _algo_region.user_input %}{{ azure_regions[_algo_region.user_input | int -1 ]['name'] }}
{%- else %}{{ azure_regions[default_region | int - 1]['name'] }}{% endif %}

- name: Create AlgoVPN Server
azure_rm_deployment:
state: present
deployment_name: "{{ algo_server_name }}"
template: "{{ lookup('file', role_path + '/files/deployment.json') }}"
secret: "{{ secret }}"
tenant: "{{ tenant }}"
client_id: "{{ client_id }}"
subscription_id: "{{ subscription_id }}"
resource_group_name: "{{ algo_server_name }}"
location: "{{ algo_region }}"
parameters:
sshKeyData:
value: "{{ lookup('file', '{{ SSH_keys.public }}') }}"
WireGuardPort:
value: "{{ wireguard_port }}"
vmSize:
value: "{{ cloud_providers.azure.size }}"
imageReferenceSku:
value: "{{ cloud_providers.azure.image }}"
register: azure_rm_deployment
- name: Create AlgoVPN Server
azure_rm_deployment:
state: present
deployment_name: "{{ algo_server_name }}"
template: "{{ lookup('file', role_path + '/files/deployment.json') }}"
secret: "{{ secret }}"
tenant: "{{ tenant }}"
client_id: "{{ client_id }}"
subscription_id: "{{ subscription_id }}"
resource_group_name: "{{ algo_server_name }}"
location: "{{ algo_region }}"
parameters:
sshKeyData:
value: "{{ lookup('file', '{{ SSH_keys.public }}') }}"
WireGuardPort:
value: "{{ wireguard_port }}"
vmSize:
value: "{{ cloud_providers.azure.size }}"
imageReferenceSku:
value: "{{ cloud_providers.azure.image }}"
register: azure_rm_deployment

- set_fact:
cloud_instance_ip: "{{ azure_rm_deployment.deployment.outputs.publicIPAddresses.value }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ azure_venv }}/lib/python2.7/site-packages/"
- set_fact:
cloud_instance_ip: "{{ azure_rm_deployment.deployment.outputs.publicIPAddresses.value }}"
ansible_ssh_user: ubuntu

roles/cloud-azure/tasks/venv.yml (+23 -22)

@@ -1,10 +1,4 @@
---
- name: Clean up the environment
file:
dest: "{{ azure_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name:
@@ -13,29 +7,36 @@
- azure-cli-core==2.0.35
- azure-cli-nspkg==3.0.2
- azure-common==1.1.11
- azure-mgmt-batch==4.1.0
- azure-mgmt-compute==2.1.0
- azure-mgmt-containerinstance==0.4.0
- azure-mgmt-authorization==0.51.1
- azure-mgmt-batch==5.0.1
- azure-mgmt-cdn==3.0.0
- azure-mgmt-compute==4.4.0
- azure-mgmt-containerinstance==1.4.0
- azure-mgmt-containerregistry==2.0.0
- azure-mgmt-containerservice==3.0.1
- azure-mgmt-dns==1.2.0
- azure-mgmt-keyvault==0.40.0
- azure-mgmt-containerservice==4.4.0
- azure-mgmt-dns==2.1.0
- azure-mgmt-keyvault==1.1.0
- azure-mgmt-marketplaceordering==0.1.0
- azure-mgmt-monitor==0.5.2
- azure-mgmt-network==1.7.1
- azure-mgmt-network==2.3.0
- azure-mgmt-nspkg==2.0.0
- azure-mgmt-rdbms==1.2.0
- azure-mgmt-resource==1.2.2
- azure-mgmt-sql==0.7.1
- azure-mgmt-storage==1.5.0
- azure-mgmt-redis==5.0.0
- azure-mgmt-resource==2.1.0
- azure-mgmt-rdbms==1.4.1
- azure-mgmt-servicebus==0.5.3
- azure-mgmt-sql==0.10.0
- azure-mgmt-storage==3.1.0
- azure-mgmt-trafficmanager==0.50.0
- azure-mgmt-web==0.32.0
- azure-mgmt-web==0.41.0
- azure-nspkg==2.0.0
- azure-storage==0.35.1
- msrest==0.4.29
- msrestazure==0.4.31
- msrest==0.6.1
- msrestazure==0.5.0
- azure-keyvault==1.0.0a1
- azure-graphrbac==0.40.0
- azure-mgmt-cosmosdb==0.5.2
- azure-mgmt-hdinsight==0.1.0
- azure-mgmt-devtestlabs==3.0.0
- azure-mgmt-loganalytics==0.2.0
state: latest
virtualenv: "{{ azure_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-cloudstack/defaults/main.yml (+0 -1)

@@ -1,2 +0,0 @@
cloudstack_venv: "{{ playbook_dir }}/configs/.venvs/cloudstack"

roles/cloud-cloudstack/tasks/main.yml (+0 -1)

@@ -60,7 +60,6 @@
cloud_instance_ip: "{{ cs_server.default_ip }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ cloudstack_venv }}/lib/python2.7/site-packages/"
CLOUDSTACK_CONFIG: "{{ algo_cs_config }}"
CLOUDSTACK_REGION: "{{ algo_cs_region }}"



roles/cloud-cloudstack/tasks/venv.yml (+1 -8)

@@ -1,15 +1,8 @@
---
- name: Clean up the environment
file:
dest: "{{ cloudstack_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name:
- cs
- sshpubkeys
state: latest
virtualenv: "{{ cloudstack_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-digitalocean/defaults/main.yml (+0 -1)

@@ -1,2 +0,0 @@
digitalocean_venv: "{{ playbook_dir }}/configs/.venvs/digitalocean"

roles/cloud-digitalocean/tasks/main.yml (+29 -104)

@@ -1,105 +1,30 @@
---
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Include prompts
import_tasks: prompts.yml

- name: Set additional facts
set_fact:
algo_do_region: >-
{% if region is defined %}{{ region }}
{%- elif _algo_region.user_input %}{{ do_regions[_algo_region.user_input | int -1 ]['slug'] }}
{%- else %}{{ do_regions[default_region | int - 1]['slug'] }}{% endif %}
public_key: "{{ lookup('file', '{{ SSH_keys.public }}') }}"

- block:
- name: "Delete the existing Algo SSH keys"
digital_ocean:
state: absent
command: ssh
api_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
register: ssh_keys
until: not ssh_keys.changed
retries: 10
delay: 1

rescue:
- name: Collect the fail error
digital_ocean:
state: absent
command: ssh
api_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
register: ssh_keys
ignore_errors: yes

- debug: var=ssh_keys

- fail:
msg: "Please, ensure that your API token is not read-only."

- name: "Upload the SSH key"
digital_ocean:
state: present
command: ssh
ssh_pub_key: "{{ public_key }}"
api_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
register: do_ssh_key

- name: "Creating a droplet..."
digital_ocean:
state: present
command: droplet
name: "{{ algo_server_name }}"
region_id: "{{ algo_do_region }}"
size_id: "{{ cloud_providers.digitalocean.size }}"
image_id: "{{ cloud_providers.digitalocean.image }}"
ssh_key_ids: "{{ do_ssh_key.ssh_key.id }}"
unique_name: yes
api_token: "{{ algo_do_token }}"
ipv6: yes
register: do

- set_fact:
cloud_instance_ip: "{{ do.droplet.ip_address }}"
ansible_ssh_user: root

- name: Tag the droplet
digital_ocean_tag:
name: "Environment:Algo"
resource_id: "{{ do.droplet.id }}"
api_token: "{{ algo_do_token }}"
state: present

- block:
- name: "Delete the new Algo SSH key"
digital_ocean:
state: absent
command: ssh
api_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
register: ssh_keys
until: not ssh_keys.changed
retries: 10
delay: 1

rescue:
- name: Collect the fail error
digital_ocean:
state: absent
command: ssh
api_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
register: ssh_keys
ignore_errors: yes

- debug: var=ssh_keys

- fail:
msg: "Please, ensure that your API token is not read-only."
environment:
PYTHONPATH: "{{ digitalocean_venv }}/lib/python2.7/site-packages/"
- name: Include prompts
import_tasks: prompts.yml

- name: "Upload the SSH key"
digital_ocean_sshkey:
oauth_token: "{{ algo_do_token }}"
name: "{{ SSH_keys.comment }}"
ssh_pub_key: "{{ lookup('file', '{{ SSH_keys.public }}') }}"
register: do_ssh_key

- name: "Creating a droplet..."
digital_ocean_droplet:
state: present
name: "{{ algo_server_name }}"
oauth_token: "{{ algo_do_token }}"
size: "{{ cloud_providers.digitalocean.size }}"
region: "{{ algo_do_region }}"
image: "{{ cloud_providers.digitalocean.image }}"
wait_timeout: 300
unique_name: true
ipv6: true
ssh_keys: "{{ do_ssh_key.data.ssh_key.id }}"
tags:
- Environment:Algo
register: digital_ocean_droplet

- set_fact:
cloud_instance_ip: "{{ digital_ocean_droplet.data.ip_address }}"
ansible_ssh_user: root

roles/cloud-digitalocean/tasks/prompts.yml (+8 -1)

@@ -22,7 +22,7 @@
Authorization: "Bearer {{ algo_do_token }}"
register: _do_regions

- name: Set facts about thre regions
- name: Set facts about the regions
set_fact:
do_regions: "{{ _do_regions.json.regions | sort(attribute='slug') }}"

@@ -44,3 +44,10 @@
[{{ default_region }}]
register: _algo_region
when: region is undefined

- name: Set additional facts
set_fact:
algo_do_region: >-
{% if region is defined %}{{ region }}
{%- elif _algo_region.user_input %}{{ do_regions[_algo_region.user_input | int -1 ]['slug'] }}
{%- else %}{{ do_regions[default_region | int - 1]['slug'] }}{% endif %}
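The region-selection fallback in this `set_fact` (an explicit `region` variable wins, else the prompt answer, else the default index) can be sketched as a small function; the name `choose_region` is illustrative, and indexes are 1-based as in the prompt.

```python
def choose_region(regions, region=None, user_input="", default_index="1"):
    # Precedence mirrors the Jinja expression above:
    # explicit region > interactive choice > default index.
    if region is not None:
        return region
    if user_input:
        return regions[int(user_input) - 1]
    return regions[int(default_index) - 1]
```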

roles/cloud-digitalocean/tasks/venv.yml (+0 -12)

@@ -1,13 +0,0 @@
- name: Clean up the environment
file:
dest: "{{ digitalocean_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name: dopy
version: 0.3.5
virtualenv: "{{ digitalocean_venv }}"
virtualenv_python: python2.7

roles/cloud-ec2/defaults/main.yml (+0 -1)

@@ -3,5 +3,4 @@ encrypted: "{{ cloud_providers.ec2.encrypted }}"
ec2_vpc_nets:
cidr_block: 172.16.0.0/16
subnet_cidr: 172.16.254.0/23
ec2_venv: "{{ playbook_dir }}/configs/.venvs/aws"
existing_eip: ""

roles/cloud-ec2/tasks/main.yml (+19 -22)

@@ -2,29 +2,26 @@
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Include prompts
import_tasks: prompts.yml
- name: Include prompts
import_tasks: prompts.yml

- name: Locate official AMI for region
ec2_ami_facts:
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
owners: "{{ cloud_providers.ec2.image.owner }}"
region: "{{ algo_region }}"
filters:
name: "ubuntu/images/hvm-ssd/{{ cloud_providers.ec2.image.name }}-amd64-server-*"
register: ami_search
- name: Locate official AMI for region
ec2_ami_facts:
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
owners: "{{ cloud_providers.ec2.image.owner }}"
region: "{{ algo_region }}"
filters:
name: "ubuntu/images/hvm-ssd/{{ cloud_providers.ec2.image.name }}-amd64-server-*"
register: ami_search

- name: Set the ami id as a fact
set_fact:
ami_image: "{{ (ami_search.images | sort(attribute='creation_date') | last)['image_id'] }}"
- name: Set the ami id as a fact
set_fact:
ami_image: "{{ (ami_search.images | sort(attribute='creation_date') | last)['image_id'] }}"

- name: Deploy the stack
import_tasks: cloudformation.yml
- name: Deploy the stack
import_tasks: cloudformation.yml

- set_fact:
cloud_instance_ip: "{{ stack.stack_outputs.ElasticIP }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ ec2_venv }}/lib/python2.7/site-packages/"
- set_fact:
cloud_instance_ip: "{{ stack.stack_outputs.ElasticIP }}"
ansible_ssh_user: ubuntu

roles/cloud-ec2/tasks/venv.yml (+1 -8)

@@ -1,15 +1,8 @@
---
- name: Clean up the environment
file:
dest: "{{ ec2_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name:
- boto>=2.5
- boto3
state: latest
virtualenv: "{{ ec2_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-gce/defaults/main.yml (+0 -1)

@@ -1,2 +0,0 @@
gce_venv: "{{ playbook_dir }}/configs/.venvs/gce"

roles/cloud-gce/tasks/main.yml (+74 -51)

@@ -2,60 +2,83 @@
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Include prompts
import_tasks: prompts.yml
- name: Include prompts
import_tasks: prompts.yml

- name: Network configured
gce_net:
name: "{{ algo_server_name }}"
fwname: "{{ algo_server_name }}-fw"
allowed: "udp:500,4500,{{ wireguard_port }};tcp:22"
state: "present"
mode: auto
src_range: 0.0.0.0/0
service_account_email: "{{ service_account_email }}"
credentials_file: "{{ credentials_file_path }}"
project_id: "{{ project_id }}"
- name: Network configured
gcp_compute_network:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
name: algovpn
auto_create_subnetworks: true
routing_config:
routing_mode: REGIONAL
register: gcp_compute_network

- name: Firewall configured
gcp_compute_firewall:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
name: algovpn
network: "{{ gcp_compute_network }}"
direction: INGRESS
allowed:
- ip_protocol: udp
ports:
- '500'
- '4500'
- '{{ wireguard_port|string }}'
- ip_protocol: tcp
ports:
- '22'
- ip_protocol: icmp

- block:
- name: External IP allocated
gce_eip:
service_account_email: "{{ service_account_email }}"
credentials_file: "{{ credentials_file_path }}"
project_id: "{{ project_id }}"
name: "{{ algo_server_name }}"
region: "{{ algo_region.split('-')[0:2] | join('-') }}"
state: present
register: gce_eip
- block:
- name: External IP allocated
gcp_compute_address:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
name: "{{ algo_server_name }}"
region: "{{ algo_region }}"
register: gcp_compute_address

- name: Set External IP as a fact
set_fact:
external_ip: "{{ gce_eip.address }}"
when: cloud_providers.gce.external_static_ip
- name: Set External IP as a fact
set_fact:
external_ip: "{{ gcp_compute_address.address }}"
when: cloud_providers.gce.external_static_ip

- name: "Creating a new instance..."
gce:
instance_names: "{{ algo_server_name }}"
zone: "{{ algo_region }}"
external_ip: "{{ external_ip | default('ephemeral') }}"
machine_type: "{{ cloud_providers.gce.size }}"
image: "{{ cloud_providers.gce.image }}"
service_account_email: "{{ service_account_email }}"
credentials_file: "{{ credentials_file_path }}"
project_id: "{{ project_id }}"
metadata:
ssh-keys: "ubuntu:{{ ssh_public_key_lookup }}"
user-data: |
#!/bin/bash
sudo apt-get remove -y --purge sshguard
network: "{{ algo_server_name }}"
tags:
- name: Instance created
gcp_compute_instance:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
name: "{{ algo_server_name }}"
zone: "{{ algo_zone }}"
machine_type: "{{ cloud_providers.gce.size }}"
disks:
- auto_delete: true
boot: true
initialize_params:
source_image: "projects/ubuntu-os-cloud/global/images/family/{{ cloud_providers.gce.image }}"
metadata:
ssh-keys: "ubuntu:{{ ssh_public_key_lookup }}"
user-data: |
#!/bin/bash
sudo apt-get remove -y --purge sshguard
network_interfaces:
- network: "{{ gcp_compute_network }}"
access_configs:
- name: "{{ algo_server_name }}"
nat_ip: "{{ gcp_compute_address|default(None) }}"
type: ONE_TO_ONE_NAT
tags:
items:
- "environment-algo"
register: google_vm
register: gcp_compute_instance

- set_fact:
cloud_instance_ip: "{{ google_vm.instance_data[0].public_ip }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ gce_venv }}/lib/python2.7/site-packages/"
- set_fact:
cloud_instance_ip: "{{ gcp_compute_instance.networkInterfaces[0].accessConfigs[0].natIP }}"
ansible_ssh_user: ubuntu

roles/cloud-gce/tasks/prompts.yml (+28 -16)

@@ -21,36 +21,32 @@

- block:
- name: Get regions
gce_region_facts:
service_account_email: "{{ credentials_file_lookup.client_email }}"
credentials_file: "{{ credentials_file_path }}"
project_id: "{{ credentials_file_lookup.project_id }}"
register: _gce_regions
gcp_compute_location_info:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
scope: regions
filters: status=UP
register: gcp_compute_regions_info

- name: Set facts about the regions
set_fact:
gce_regions: >-
[{%- for region in _gce_regions.results.regions | sort(attribute='name') -%}
{% if region.status == "UP" %}
{% for zone in region.zones | sort(attribute='name') %}
{% if zone.status == "UP" %}
'{{ zone.name }}'
{% endif %}{% if not loop.last %},{% endif %}
{% endfor %}
{% endif %}{% if not loop.last %},{% endif %}
[{%- for region in gcp_compute_regions_info.resources | sort(attribute='name') -%}
'{{ region.name }}'{% if not loop.last %},{% endif %}
{%- endfor -%}]

- name: Set facts about the default region
set_fact:
default_region: >-
{% for region in gce_regions %}
{%- if region == "us-east1-b" %}{{ loop.index }}{% endif %}
{%- if region == "us-east1" %}{{ loop.index }}{% endif %}
{%- endfor %}

- pause:
prompt: |
What region should the server be located in?
(https://cloud.google.com/compute/docs/regions-zones/)
(https://cloud.google.com/compute/docs/regions-zones/#locations)
{% for r in gce_regions %}
{{ loop.index }}. {{ r }}
{% endfor %}
@@ -60,8 +56,24 @@
register: _gce_region
when: region is undefined

- set_fact:
- name: Set region as a fact
set_fact:
algo_region: >-
{% if region is defined %}{{ region }}
{%- elif _gce_region.user_input %}{{ gce_regions[_gce_region.user_input | int -1 ] }}
{%- else %}{{ gce_regions[default_region | int - 1] }}{% endif %}

- name: Get zones
gcp_compute_location_info:
auth_kind: serviceaccount
service_account_file: "{{ credentials_file_path }}"
project: "{{ project_id }}"
scope: zones
filters:
- "name={{ algo_region }}-*"
- "status=UP"
register: gcp_compute_zone_info

- name: Set random available zone as a fact
set_fact:
algo_zone: "{{ (gcp_compute_zone_info.resources | random(seed=algo_server_name + algo_region + project_id) ).name }}"
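The seeded `random` filter above makes the zone choice deterministic per deployment, so repeated runs pick the same zone. The same idea in Python (a sketch; Jinja's filter seeds differently, so the exact picks differ, but the determinism property is the point):

```python
import random

def pick_zone(zones, server_name, region, project_id):
    # The same seed always yields the same zone, so re-runs are idempotent.
    seed = server_name + region + project_id
    return random.Random(seed).choice(zones)
```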

roles/cloud-gce/tasks/venv.yml (+3 -9)

@@ -1,14 +1,8 @@
---
- name: Clean up the environment
file:
dest: "{{ gce_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name:
- apache-libcloud
- requests>=2.18.4
- google-auth>=1.3.0
state: latest
virtualenv: "{{ gce_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-hetzner/tasks/main.yml (+31 -0)

@@ -0,0 +1,31 @@
---
- name: Build python virtual environment
import_tasks: venv.yml

- name: Include prompts
import_tasks: prompts.yml

- name: Create an ssh key
hcloud_ssh_key:
name: "algo-{{ 999999 | random(seed=lookup('file', SSH_keys.public)) }}"
public_key: "{{ lookup('file', SSH_keys.public) }}"
state: present
api_token: "{{ algo_hcloud_token }}"
register: hcloud_ssh_key

- name: Create a server...
hcloud_server:
name: "{{ algo_server_name }}"
location: "{{ algo_hcloud_region }}"
server_type: "{{ cloud_providers.hetzner.server_type }}"
image: "{{ cloud_providers.hetzner.image }}"
state: present
api_token: "{{ algo_hcloud_token }}"
ssh_keys: "{{ hcloud_ssh_key.hcloud_ssh_key.name }}"
labels:
Environment: algo
register: hcloud_server

- set_fact:
cloud_instance_ip: "{{ hcloud_server.hcloud_server.ipv4_address }}"
ansible_ssh_user: root

roles/cloud-hetzner/tasks/prompts.yml (+48 -0)

@@ -0,0 +1,48 @@
---
- pause:
prompt: |
Enter your API token (https://trailofbits.github.io/algo/cloud-hetzner.html#api-token):
echo: false
register: _hcloud_token
when:
- hcloud_token is undefined
- lookup('env','HCLOUD_TOKEN')|length <= 0

- name: Set the token as a fact
set_fact:
algo_hcloud_token: "{{ hcloud_token | default(_hcloud_token.user_input|default(None)) | default(lookup('env','HCLOUD_TOKEN'), true) }}"
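The `default()` chain above resolves the token with a fixed precedence: an explicit `hcloud_token` variable, then the prompt answer, then the `HCLOUD_TOKEN` environment variable. A sketch of the same logic (function name is illustrative; `or` mirrors Jinja's `default(..., true)`, which treats empty values as unset):

```python
import os

def resolve_token(hcloud_token=None, prompt_input=None, env=os.environ):
    # Precedence: variable > interactive input > HCLOUD_TOKEN env var.
    return hcloud_token or prompt_input or env.get("HCLOUD_TOKEN", "")
```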

- name: Get regions
hcloud_datacenter_facts:
api_token: "{{ algo_hcloud_token }}"
register: _hcloud_regions

- name: Set facts about the regions
set_fact:
hcloud_regions: "{{ hcloud_datacenter_facts | sort(attribute='location') }}"

- name: Set default region
set_fact:
default_region: >-
{% for r in hcloud_regions %}
{%- if r['location'] == "nbg1" %}{{ loop.index }}{% endif %}
{%- endfor %}

- pause:
prompt: |
What region should the server be located in?
{% for r in hcloud_regions %}
{{ loop.index }}. {{ r['location'] }} {{ r['description'] }}
{% endfor %}

Enter the number of your desired region
[{{ default_region }}]
register: _algo_region
when: region is undefined

- name: Set additional facts
set_fact:
algo_hcloud_region: >-
{% if region is defined %}{{ region }}
{%- elif _algo_region.user_input %}{{ hcloud_regions[_algo_region.user_input | int -1 ]['location'] }}
{%- else %}{{ hcloud_regions[default_region | int - 1]['location'] }}{% endif %}

roles/cloud-hetzner/tasks/venv.yml (+7 -0)

@@ -0,0 +1,7 @@
---
- name: Install requirements
pip:
name:
- hcloud
state: latest
virtualenv_python: python3

roles/cloud-lightsail/defaults/main.yml (+0 -1)

@@ -1,2 +0,0 @@
lightsail_venv: "{{ playbook_dir }}/configs/.venvs/aws"

roles/cloud-lightsail/tasks/main.yml (+35 -38)

@@ -2,43 +2,40 @@
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Include prompts
import_tasks: prompts.yml
- name: Include prompts
import_tasks: prompts.yml

- name: Create an instance
lightsail:
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
name: "{{ algo_server_name }}"
state: present
region: "{{ algo_region }}"
zone: "{{ algo_region }}a"
blueprint_id: "{{ cloud_providers.lightsail.image }}"
bundle_id: "{{ cloud_providers.lightsail.size }}"
wait_timeout: 300
open_ports:
- from_port: 4500
to_port: 4500
protocol: udp
- from_port: 500
to_port: 500
protocol: udp
- from_port: "{{ wireguard_port }}"
to_port: "{{ wireguard_port }}"
protocol: udp
user_data: |
#!/bin/bash
mkdir -p /home/ubuntu/.ssh/
echo "{{ lookup('file', '{{ SSH_keys.public }}') }}" >> /home/ubuntu/.ssh/authorized_keys
chown -R ubuntu: /home/ubuntu/.ssh/
chmod 0700 /home/ubuntu/.ssh/
chmod 0600 /home/ubuntu/.ssh/*
test
register: algo_instance
- name: Create an instance
lightsail:
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
name: "{{ algo_server_name }}"
state: present
region: "{{ algo_region }}"
zone: "{{ algo_region }}a"
blueprint_id: "{{ cloud_providers.lightsail.image }}"
bundle_id: "{{ cloud_providers.lightsail.size }}"
wait_timeout: "300"
open_ports:
- from_port: 4500
to_port: 4500
protocol: udp
- from_port: 500
to_port: 500
protocol: udp
- from_port: "{{ wireguard_port }}"
to_port: "{{ wireguard_port }}"
protocol: udp
user_data: |
#!/bin/bash
mkdir -p /home/ubuntu/.ssh/
echo "{{ lookup('file', '{{ SSH_keys.public }}') }}" >> /home/ubuntu/.ssh/authorized_keys
chown -R ubuntu: /home/ubuntu/.ssh/
chmod 0700 /home/ubuntu/.ssh/
chmod 0600 /home/ubuntu/.ssh/*
test
register: algo_instance

- set_fact:
cloud_instance_ip: "{{ algo_instance['instance']['public_ip_address'] }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ lightsail_venv }}/lib/python2.7/site-packages/"
- set_fact:
cloud_instance_ip: "{{ algo_instance['instance']['public_ip_address'] }}"
ansible_ssh_user: ubuntu

roles/cloud-lightsail/tasks/prompts.yml (+1 -1)

@@ -32,7 +32,7 @@

- name: Set facts about the regions
set_fact:
lightsail_regions: "{{ _lightsail_regions.results.regions | sort(attribute='name') }}"
lightsail_regions: "{{ _lightsail_regions.data.regions | sort(attribute='name') }}"

- name: Set the default region
set_fact:


roles/cloud-lightsail/tasks/venv.yml (+1 -8)

@@ -1,15 +1,8 @@
---
- name: Clean up the environment
file:
dest: "{{ lightsail_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name:
- boto>=2.5
- boto3
state: latest
virtualenv: "{{ lightsail_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-openstack/defaults/main.yml (+0 -1)

@@ -1,2 +0,0 @@
openstack_venv: "{{ playbook_dir }}/configs/.venvs/openstack"

roles/cloud-openstack/tasks/main.yml (+62 -65)

@@ -6,77 +6,74 @@
- name: Build python virtual environment
import_tasks: venv.yml

- block:
- name: Security group created
os_security_group:
state: "{{ state|default('present') }}"
name: "{{ algo_server_name }}-security_group"
description: AlgoVPN security group
register: os_security_group
- name: Security group created
os_security_group:
state: "{{ state|default('present') }}"
name: "{{ algo_server_name }}-security_group"
description: AlgoVPN security group
register: os_security_group

- name: Security rules created
os_security_group_rule:
state: "{{ state|default('present') }}"
security_group: "{{ os_security_group.id }}"
protocol: "{{ item.proto }}"
port_range_min: "{{ item.port_min }}"
port_range_max: "{{ item.port_max }}"
remote_ip_prefix: "{{ item.range }}"
with_items:
- { proto: tcp, port_min: 22, port_max: 22, range: 0.0.0.0/0 }
- { proto: icmp, port_min: -1, port_max: -1, range: 0.0.0.0/0 }
- { proto: udp, port_min: 4500, port_max: 4500, range: 0.0.0.0/0 }
- { proto: udp, port_min: 500, port_max: 500, range: 0.0.0.0/0 }
- { proto: udp, port_min: "{{ wireguard_port }}", port_max: "{{ wireguard_port }}", range: 0.0.0.0/0 }
- name: Security rules created
os_security_group_rule:
state: "{{ state|default('present') }}"
security_group: "{{ os_security_group.id }}"
protocol: "{{ item.proto }}"
port_range_min: "{{ item.port_min }}"
port_range_max: "{{ item.port_max }}"
remote_ip_prefix: "{{ item.range }}"
with_items:
- { proto: tcp, port_min: 22, port_max: 22, range: 0.0.0.0/0 }
- { proto: icmp, port_min: -1, port_max: -1, range: 0.0.0.0/0 }
- { proto: udp, port_min: 4500, port_max: 4500, range: 0.0.0.0/0 }
- { proto: udp, port_min: 500, port_max: 500, range: 0.0.0.0/0 }
- { proto: udp, port_min: "{{ wireguard_port }}", port_max: "{{ wireguard_port }}", range: 0.0.0.0/0 }

- name: Keypair created
os_keypair:
state: "{{ state|default('present') }}"
name: "{{ SSH_keys.comment|regex_replace('@', '_') }}"
public_key_file: "{{ SSH_keys.public }}"
register: os_keypair
- name: Keypair created
os_keypair:
state: "{{ state|default('present') }}"
name: "{{ SSH_keys.comment|regex_replace('@', '_') }}"
public_key_file: "{{ SSH_keys.public }}"
register: os_keypair

- name: Gather facts about flavors
os_flavor_facts:
ram: "{{ cloud_providers.openstack.flavor_ram }}"
- name: Gather facts about flavors
os_flavor_facts:
ram: "{{ cloud_providers.openstack.flavor_ram }}"

- name: Gather facts about images
os_image_facts:
image: "{{ cloud_providers.openstack.image }}"
- name: Gather facts about images
os_image_facts:
image: "{{ cloud_providers.openstack.image }}"

- name: Gather facts about public networks
os_networks_facts:
- name: Gather facts about public networks
os_networks_facts:

- name: Set the network as a fact
set_fact:
public_network_id: "{{ item.id }}"
when:
- item['router:external']|default(omit)
- item['admin_state_up']|default(omit)
- item['status'] == 'ACTIVE'
with_items: "{{ openstack_networks }}"
- name: Set the network as a fact
set_fact:
public_network_id: "{{ item.id }}"
when:
- item['router:external']|default(omit)
- item['admin_state_up']|default(omit)
- item['status'] == 'ACTIVE'
with_items: "{{ openstack_networks }}"

- name: Set facts
set_fact:
flavor_id: "{{ (openstack_flavors | sort(attribute='ram'))[0]['id'] }}"
image_id: "{{ openstack_image['id'] }}"
keypair_name: "{{ os_keypair.key.name }}"
security_group_name: "{{ os_security_group['secgroup']['name'] }}"
- name: Set facts
set_fact:
flavor_id: "{{ (openstack_flavors | sort(attribute='ram'))[0]['id'] }}"
image_id: "{{ openstack_image['id'] }}"
keypair_name: "{{ os_keypair.key.name }}"
security_group_name: "{{ os_security_group['secgroup']['name'] }}"

- name: Server created
os_server:
state: "{{ state|default('present') }}"
name: "{{ algo_server_name }}"
image: "{{ image_id }}"
flavor: "{{ flavor_id }}"
key_name: "{{ keypair_name }}"
security_groups: "{{ security_group_name }}"
nics:
- net-id: "{{ public_network_id }}"
register: os_server
- name: Server created
os_server:
state: "{{ state|default('present') }}"
name: "{{ algo_server_name }}"
image: "{{ image_id }}"
flavor: "{{ flavor_id }}"
key_name: "{{ keypair_name }}"
security_groups: "{{ security_group_name }}"
nics:
- net-id: "{{ public_network_id }}"
register: os_server

- set_fact:
cloud_instance_ip: "{{ os_server['openstack']['public_v4'] }}"
ansible_ssh_user: ubuntu
environment:
PYTHONPATH: "{{ openstack_venv }}/lib/python2.7/site-packages/"
- set_fact:
cloud_instance_ip: "{{ os_server['openstack']['public_v4'] }}"
ansible_ssh_user: ubuntu

roles/cloud-openstack/tasks/venv.yml (+1 -8)

@@ -1,13 +1,6 @@
---
- name: Clean up the environment
file:
dest: "{{ openstack_venv }}"
state: absent
when: clean_environment

- name: Install requirements
pip:
name: shade
state: latest
virtualenv: "{{ openstack_venv }}"
virtualenv_python: python2.7
virtualenv_python: python3

roles/cloud-scaleway/tasks/main.yml (+1 -0)

@@ -24,6 +24,7 @@
scaleway_compute:
name: "{{ algo_server_name }}"
enable_ipv6: true
public_ip: dynamic
boot_type: local
state: running
image: "{{ images[0] }}"


roles/common/tasks/ubuntu.yml (+4 -0)

@@ -9,6 +9,10 @@
update_cache: true
install_recommends: true
upgrade: dist
register: result
until: result is succeeded
retries: 30
delay: 10
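The `until`/`retries`/`delay` keywords added above re-run the apt task until it reports success, which papers over transient lock or mirror errors. The equivalent control flow, simplified into a sketch (function names are illustrative, and Ansible's exact attempt accounting differs slightly):

```python
import time

def retry(task, retries=30, delay=10, sleep=time.sleep):
    # Re-run `task` until it reports success, waiting `delay` seconds
    # between attempts, much like Ansible's until/retries/delay.
    for attempt in range(retries):
        result = task()
        if result.get("succeeded"):
            return result
        if attempt < retries - 1:
            sleep(delay)
    return result
```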

- name: Check if reboot is required
shell: >


roles/dns/tasks/freebsd.yml (+0 -1)

@@ -7,4 +7,3 @@
lineinfile:
path: /etc/rc.conf
line: 'dnscrypt_proxy_mac_portacl_enable="YES"'
when: listen_port|int == 53

roles/strongswan/tasks/main.yml (+1 -1)

@@ -2,7 +2,7 @@
- include_tasks: ubuntu.yml
when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'

- name: Ensure that the strongswan user exist
- name: Ensure that the strongswan user exists
user:
name: strongswan
group: nogroup


roles/strongswan/tasks/ubuntu.yml (+1 -1)

@@ -38,7 +38,7 @@
- strongswan
- netfilter-persistent

- name: Ubuntu | Ensure that the strongswan service directory exist
- name: Ubuntu | Ensure that the strongswan service directory exists
file:
path: /etc/systemd/system/strongswan.service.d/
state: directory


tests/local-deploy.sh (+1 -1)

@@ -6,7 +6,7 @@ DEPLOY_ARGS="provider=local server=10.0.8.100 ssh_user=ubuntu endpoint=10.0.8.10

if [ "${DEPLOY}" == "docker" ]
then
docker run -it -v $(pwd)/config.cfg:/algo/config.cfg -v ~/.ssh:/root/.ssh -v $(pwd)/configs:/algo/configs -e "DEPLOY_ARGS=${DEPLOY_ARGS}" travis/algo /bin/sh -c "chown -R root: /root/.ssh && chmod -R 600 /root/.ssh && source env/bin/activate && ansible-playbook main.yml -e \"${DEPLOY_ARGS}\" --skip-tags apparmor"
docker run -it -v $(pwd)/config.cfg:/algo/config.cfg -v ~/.ssh:/root/.ssh -v $(pwd)/configs:/algo/configs -e "DEPLOY_ARGS=${DEPLOY_ARGS}" travis/algo /bin/sh -c "chown -R root: /root/.ssh && chmod -R 600 /root/.ssh && source .env/bin/activate && ansible-playbook main.yml -e \"${DEPLOY_ARGS}\" --skip-tags apparmor"
else
ansible-playbook main.yml -e "${DEPLOY_ARGS}" --skip-tags apparmor
fi

tests/update-users.sh (+1 -1)

@@ -6,7 +6,7 @@ USER_ARGS="{ 'server': '10.0.8.100', 'users': ['desktop', 'user1', 'user2'], 'lo

if [ "${DEPLOY}" == "docker" ]
then
docker run -it -v $(pwd)/config.cfg:/algo/config.cfg -v ~/.ssh:/root/.ssh -v $(pwd)/configs:/algo/configs -e "USER_ARGS=${USER_ARGS}" travis/algo /bin/sh -c "chown -R root: /root/.ssh && chmod -R 600 /root/.ssh && source env/bin/activate && ansible-playbook users.yml -e \"${USER_ARGS}\" -t update-users"
docker run -it -v $(pwd)/config.cfg:/algo/config.cfg -v ~/.ssh:/root/.ssh -v $(pwd)/configs:/algo/configs -e "USER_ARGS=${USER_ARGS}" travis/algo /bin/sh -c "chown -R root: /root/.ssh && chmod -R 600 /root/.ssh && source .env/bin/activate && ansible-playbook users.yml -e \"${USER_ARGS}\" -t update-users"
else
ansible-playbook users.yml -e "${USER_ARGS}" -t update-users
fi

