
Fix backup and restore test (#16)

Lower time waiting for restore instance to start in test

Show better message on Vultr start server failure

Delete plan instead of network in test

Merge branch 'master' into kris/fix_backup_restore_test_cont_2

# Conflicts:
#	bubble-server/src/test/resources/models/include/new_bubble.json
#	utils/cobbzilla-wizard

Change and add awaiting URLs required in restore test

Do not lock accounts on node restore

Set username in test in a single place

Add test for new debug call and test helpers

Update lib

Merge branch 'master' into kris/fix_backup_restore_test_cont_2

# Conflicts:
#	utils/cobbzilla-utils
#	utils/cobbzilla-wizard

Remove MITM nat iptables entries from ansible setup

Log admin port

Change log format to more standard one

Rename algo related ansible tag

Add some more grace time for restore

Add more info in log messages and some more logging

Revert wrong meter tick reordering

Fix error in bash var reference

Fix response from debug echo call

Update algo's meter tick pattern

Add debug echo API call

Fix name position within service task in ansible

Fix required indent

Use ansible tags to properly run post-restore tasks

Revert "Reference install_type where required"

This reverts commit 9184cd8113.

Reorder ticks appropriately

Look for only non-promotional payment methods in new bubble script

Reference install_type where required

Rename algo ansible tasks uniquely

Wait for bubble to stop for real in the test

Remove not needed stored vars from test script

Use grace period in await_url for new bubbles

Set waiting for keys in test inbox

Set global mapping to be global static

Show full exception logs

Add activation key for new bubble's first login in test

Call new bubble's API before DNS list in test

update libs

Beautify code

Fix DNS list API request URL in test

Better DNS listing APIs usage

Co-authored-by: Kristijan Mitrovic <kmitrovic@itekako.com>
Reviewed-on: #16
tags/v0.10.5
Kristijan Mitrovic 4 years ago
committed by jonathan
parent
commit
6cd5e5c16a
16 changed files with 183 additions and 128 deletions
  1. +8 -11 automation/roles/algo/tasks/main.yml
  2. +15 -9 automation/roles/bubble/files/bubble_restore_monitor.sh
  3. +3 -1 automation/roles/bubble/templates/full_reset_db.sh.j2
  4. +5 -1 automation/roles/mitmproxy/tasks/main.yml
  5. +11 -29 automation/roles/mitmproxy/tasks/route.yml
  6. +7 -1 bubble-server/src/main/java/bubble/cloud/compute/vultr/VultrDriver.java
  7. +3 -3 bubble-server/src/main/java/bubble/cloud/storage/StorageServiceDriverBase.java
  8. +18 -0 bubble-server/src/main/java/bubble/resources/DebugResource.java
  9. +1 -1 bubble-server/src/main/java/bubble/resources/notify/InboundNotifyResource.java
  10. +6 -1 bubble-server/src/main/resources/ansible/install_local.sh.hbs
  11. +1 -1 bubble-server/src/main/resources/bubble/node_progress_meter_ticks.json
  12. +14 -0 bubble-server/src/test/java/bubble/test/DebugCallsTest.java
  13. +6 -12 bubble-server/src/test/resources/models/include/get_network_keys.json
  14. +12 -10 bubble-server/src/test/resources/models/include/new_bubble.json
  15. +48 -0 bubble-server/src/test/resources/models/tests/debug_echo.json
  16. +25 -48 bubble-server/src/test/resources/models/tests/live/backup_and_restore.json

+8 -11 automation/roles/algo/tasks/main.yml

@@ -48,18 +48,15 @@
src: supervisor_wg_monitor_connections.conf
dest: /etc/supervisor/conf.d/wg_monitor_connections.conf

# Don't setup algo when in restore mode, bubble_restore_monitor.sh will set it up after the CA key has been restored
- name: Run algo playbook to install algo
shell: /root/ansible/roles/algo/algo/install_algo.sh
when: restore_key is not defined
block:
- name: Run install algo script including playbook
shell: /root/ansible/roles/algo/algo/install_algo.sh

# Don't start monitors when in restore mode, bubble_restore_monitor.sh will start it after algo is installed
- name: Run algo playbook to install algo
shell: bash -c "supervisorctl reload && sleep 5s && supervisorctl restart algo_refresh_users_monitor && supervisorctl restart wg_monitor_connections"
when: restore_key is not defined
- name: Restart algo related services
shell: bash -c "supervisorctl reload && sleep 5s && supervisorctl restart algo_refresh_users_monitor && supervisorctl restart wg_monitor_connections"

- name: Run algo playbook to install algo
shell: bash -c "supervisorctl reload && sleep 5s && supervisorctl stop algo_refresh_users_monitor && supervisorctl stop wg_monitor_connections"
when: restore_key is defined
- include: algo_firewall.yml
# Don't setup algo when in restore mode, bubble_restore_monitor.sh will set it up after the CA key has been restored
tags: algo_related

- include: algo_firewall.yml

+15 -9 automation/roles/bubble/files/bubble_restore_monitor.sh

@@ -21,7 +21,7 @@ function die {
}

function log {
echo "${1}" >> ${LOG}
echo "$(date): ${1}" >> ${LOG}
}

START=$(date +%s)
@@ -92,7 +92,7 @@ cp ${RESTORE_BASE}/bubble.sql.gz ${BUBBLE_HOME}/sql/ \
&& chgrp -R postgres ${BUBBLE_HOME}/sql \
&& chmod 550 ${BUBBLE_HOME}/sql \
&& chmod 440 ${BUBBLE_HOME}/sql/* || die "Error restoring bubble database archive"
su - postgres bash -c "cd ${BUBBLE_HOME}/sql && full_reset_db.sh drop" || die "Error restoring database"
su - postgres bash -c "cd ${BUBBLE_HOME}/sql && full_reset_db.sh drop restored_node" || die "Error restoring database"

# Remove old keys
log "Removing node keys"
@@ -107,21 +107,27 @@ log "Flushing redis"
echo "FLUSHALL" | redis-cli || die "Error flushing redis"

# restore algo configs
log "Restoring algo configs"
CONFIGS_BACKUP=/home/bubble/.BUBBLE_ALGO_CONFIGS.tgz
if [[ ! -f ${CONFIGS_BACKUP} ]] ; then
log "Warning: Algo VPN configs backup not found: ${CONFIGS_BACKUP}, not installing algo"
else
ALGO_BASE=/root/ansible/roles/algo/algo
ANSIBLE_HOME="/root"
ANSIBLE_DIR="${ANSIBLE_HOME}/ansible"
ID_FILE="${ANSIBLE_HOME}/.ssh/bubble_rsa"
SSH_OPTIONS="--ssh-extra-args '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -i ${ID_FILE}'"

ALGO_BASE=${ANSIBLE_DIR}/roles/algo/algo
if [[ ! -d ${ALGO_BASE} ]] ; then
die "Error restoring Algo VPN: directory ${ALGO_BASE} not found"
fi
cd ${ALGO_BASE} && tar xzf ${CONFIGS_BACKUP} || die "Error restoring algo VPN configs"

# install/configure algo
${ALGO_BASE}/install_algo.sh || die "Error configuring or installing algo VPN"
# ensure user monitor is running
supervisorctl restart algo_refresh_users_monitor
cd "${ANSIBLE_DIR}" && \
. ./venv/bin/activate && \
bash -c \
"ansible-playbook ${SSH_OPTIONS} --tags 'algo_related,always' --inventory ./hosts ./playbook.yml 2>&1 >> ${LOG}" \
|| die "Error running ansible in post-restore. journalctl -xe = $(journalctl -xe | tail -n 50)"
fi

# restart mitm proxy service
@@ -135,7 +141,7 @@ supervisorctl restart bubble
# verify service is running OK
log "Pausing for a bit, then verifying bubble server has successfully restarted after restore"
sleep 60
curl https://$(hostname):${ADMIN_PORT}/api/.bubble || log "Error restarting bubble server"
curl https://$(hostname):${ADMIN_PORT}/api/.bubble || log "Error restarting bubble server - port ${ADMIN_PORT}"

# remove restore markers, we are done
log "Cleaning up temp files"


+3 -1 automation/roles/bubble/templates/full_reset_db.sh.j2

@@ -5,6 +5,8 @@ function die {
exit 1
}

INSTALL_MODE=${2:-{{install_type}}}

if [[ $(whoami) == "root" ]] ; then
su - postgres ${0} ${@}
exit $?
@@ -15,5 +17,5 @@ if [[ $(whoami) != "postgres" ]] ; then
fi

cd ~bubble/sql \
&& init_bubble_db.sh {{ db_name }} {{ db_user }} {{ is_fork }} {{ install_type }} ${1} \
&& init_bubble_db.sh {{ db_name }} {{ db_user }} {{ is_fork }} ${INSTALL_MODE} ${1} \
|| die "error reinitializing database"

+5 -1 automation/roles/mitmproxy/tasks/main.yml

@@ -88,13 +88,17 @@
state: link

- name: Restart dnscrypt-proxy
shell: service dnscrypt-proxy restart
service:
name: dnscrypt-proxy
state: restarted
tags: algo_related

- name: restart supervisord
service:
name: supervisor
enabled: yes
state: restarted
tags: always

- import_tasks: route.yml



+11 -29 automation/roles/mitmproxy/tasks/route.yml

@@ -14,7 +14,7 @@
value: 0
sysctl_set: yes

- name: "Allow MITM private port"
- name: Allow MITM private port
iptables:
chain: INPUT
action: insert
@@ -26,33 +26,15 @@
jump: ACCEPT
comment: Accept new local TCP DNS connections on private port
become: yes
tags: algo_related

- name: Route port 80 through mitmproxy
iptables:
table: nat
chain: PREROUTING
action: insert
rule_num: 1
protocol: tcp
destination_port: 80
jump: REDIRECT
to_ports: "{{ mitm_port }}"

- name: Route port 443 through mitmproxy
iptables:
table: nat
chain: PREROUTING
action: insert
rule_num: 2
protocol: tcp
destination_port: 443
jump: REDIRECT
to_ports: "{{ mitm_port }}"
- name: Setup for MITM and save iptables
block:
- name: save iptables rules
shell: iptables-save > /etc/iptables/rules.v4
become: yes

- name: save iptables rules
shell: iptables-save > /etc/iptables/rules.v4
become: yes

- name: save iptables v6 rules
shell: ip6tables-save > /etc/iptables/rules.v6
become: yes
- name: save iptables v6 rules
shell: ip6tables-save > /etc/iptables/rules.v6
become: yes
tags: always

+7 -1 bubble-server/src/main/java/bubble/cloud/compute/vultr/VultrDriver.java

@@ -120,7 +120,13 @@ public class VultrDriver extends ComputeServiceDriverBase {
// create server, check response
final HttpResponseBean serverResponse = serverRequest.curl(); // fixme: we can do better than shelling to curl
if (serverResponse.getStatus() != 200) return die("start: error creating server: " + serverResponse);
final String subId = json(serverResponse.getEntityString(), JsonNode.class).get(VULTR_SUBID).textValue();
final JsonNode responseJson;
try {
responseJson = json(serverResponse.getEntityString(), JsonNode.class);
} catch (IllegalStateException e) {
return die("start: error creating server (error parsing response as JSON): " + serverResponse);
}
final var subId = responseJson.get(VULTR_SUBID).textValue();

node.setState(BubbleNodeState.booting);
node.setTag(TAG_INSTANCE_ID, subId);


+3 -3 bubble-server/src/main/java/bubble/cloud/storage/StorageServiceDriverBase.java

@@ -20,7 +20,7 @@ import static org.cobbzilla.util.security.ShaUtil.sha256_hex;

public abstract class StorageServiceDriverBase<T> extends CloudServiceDriverBase<T> implements StorageServiceDriver {

private final Map<String, WriteRequest> requestMap = new ConcurrentHashMap<>();
private static final Map<String, WriteRequest> requestMap = new ConcurrentHashMap<>();

private static final Map<String, WriteRequestCleaner> cleaners = new ConcurrentHashMap<>();

@@ -66,11 +66,11 @@ public abstract class StorageServiceDriverBase<T> extends CloudServiceDriverBase
} catch (IllegalStateException e) {
if (e.getMessage().contains("timeout")) {
if (countBytes == 0) {
return die("StorageServiceDriverBase._write: error: " + e);
return die("StorageServiceDriverBase._write: error (no bytes) ", e);
}
}
} catch (Exception e) {
return die("StorageServiceDriverBase._write: error: " + e);
return die("StorageServiceDriverBase._write: exception ", e);
}
}
}


+18 -0 bubble-server/src/main/java/bubble/resources/DebugResource.java

@@ -13,6 +13,8 @@ import bubble.model.account.message.AccountAction;
import bubble.model.account.message.AccountMessageType;
import bubble.model.account.message.ActionTarget;
import bubble.server.BubbleConfiguration;
import com.fasterxml.jackson.databind.JsonNode;
import lombok.NonNull;
import lombok.extern.slf4j.Slf4j;
import org.glassfish.jersey.server.ContainerRequest;
import org.springframework.beans.factory.annotation.Autowired;
@@ -20,9 +22,12 @@ import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

import javax.annotation.Nullable;
import javax.validation.Valid;
import javax.ws.rs.*;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import java.io.IOException;
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
@@ -32,6 +37,7 @@ import static bubble.cloud.auth.RenderedMessage.filteredInbox;
import static org.cobbzilla.util.daemon.ZillaRuntime.die;
import static org.cobbzilla.util.daemon.ZillaRuntime.empty;
import static org.cobbzilla.util.http.HttpContentTypes.APPLICATION_JSON;
import static org.cobbzilla.util.json.JsonUtil.*;
import static org.cobbzilla.util.reflect.ReflectionUtil.forName;
import static org.cobbzilla.util.reflect.ReflectionUtil.instantiate;
import static org.cobbzilla.wizard.resources.ResourceUtil.*;
@@ -117,4 +123,16 @@ public class DebugResource {
}
}

@POST @Path("/echo")
public Response echoJsonInLog(@Context ContainerRequest ctx,
@Valid @NonNull final JsonNode input,
@QueryParam("respondWith") @Nullable final String respondWith) throws IOException {
final var output = "ECHO: \n" + toJsonOrDie(input);
log.info(output);

if (empty(respondWith)) return ok();

log.debug("Responding with value in path: " + respondWith);
return ok(getNodeAsJava(findNode(input, respondWith), ""));
}
}

+1 -1 bubble-server/src/main/java/bubble/resources/notify/InboundNotifyResource.java

@@ -146,7 +146,7 @@ public class InboundNotifyResource {
? stream(APPLICATION_OCTET_STREAM, data)
: notFound(storageRequest.getKey());
} catch (Exception e) {
return die("readStorage: "+e);
return die("readStorage: exception", e);
} finally {
storageStreamService.clearToken(token);
}


+6 -1 bubble-server/src/main/resources/ansible/install_local.sh.hbs

@@ -49,9 +49,14 @@ sudo pip3 install setuptools psycopg2-binary || die "Error pip3 installing setup

SSH_OPTIONS="--ssh-extra-args '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -i ${ID_FILE}'"

SKIP_TAGS=""
if [[ -n "{{restoreKey}}" ]] ; then
SKIP_TAGS="--skip-tags algo_related"
fi

cd "${ANSIBLE_DIR}" && \
virtualenv -p python3 ./venv && \
. ./venv/bin/activate && \
pip3 install ansible && \
bash -c "ansible-playbook ${SSH_OPTIONS} --inventory ./hosts ./playbook.yml" \
bash -c "ansible-playbook ${SSH_OPTIONS} ${SKIP_TAGS} --inventory ./hosts ./playbook.yml" \
|| die "Error running ansible. journalctl -xe = $(journalctl -xe | tail -n 50)"
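The `SKIP_TAGS` logic above is the install-time half of the tag scheme: when `{{restoreKey}}` is set, `algo_related` tasks are skipped here and re-run later by `bubble_restore_monitor.sh` with `--tags 'algo_related,always'`. A sketch of how the flag is composed, using a hypothetical `build_cmd` helper that echoes the command instead of invoking ansible:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the SKIP_TAGS branch in install_local.sh.hbs.
# It only prints the ansible-playbook command line; it does not run ansible.
build_cmd() {
  local restore_key=$1
  local skip_tags=""
  if [[ -n "${restore_key}" ]]; then
    skip_tags="--skip-tags algo_related"
  fi
  echo "ansible-playbook ${skip_tags} --inventory ./hosts ./playbook.yml"
}
build_cmd ""          # fresh install: run every task
build_cmd "abc123"    # restore: defer algo_related tasks to the restore monitor
```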

+1 -1 bubble-server/src/main/resources/bubble/node_progress_meter_ticks.json

@@ -13,7 +13,7 @@
{ "percent": 44,"messageKey":"role_bubble_jar", "pattern":"TASK \\[bubble : Install bubble jar] \\*{5,}" },
{ "percent": 48,"messageKey":"role_bubble_db", "pattern":"TASK \\[bubble : Populate database] \\*{5,}" },
{ "percent": 51,"messageKey":"role_bubble_restore", "pattern":"TASK \\[bubble : Install restore helper scripts] \\*{5,}" },
{ "percent": 52,"messageKey":"role_bubble_algo", "pattern":"TASK \\[algo : Run algo playbook to install algo] \\*{5,}" },
{ "percent": 52,"messageKey":"role_bubble_algo", "pattern":"TASK \\[algo : [\\w\\s]+] \\*{5,}" },
{ "percent": 76,"messageKey":"role_nginx", "pattern":"TASK \\[nginx : [\\w\\s]+] \\*{5,}" },
{ "percent": 81,"messageKey":"role_nginx_certbot", "pattern":"TASK \\[nginx : Init certbot] \\*{5,}" },
{ "percent": 91,"messageKey":"role_mitmproxy", "pattern":"TASK \\[mitmproxy : [\\w\\s]+] \\*{5,}" },
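The loosened tick pattern `TASK \[algo : [\w\s]+] \*{5,}` now matches any renamed algo task header rather than one fixed task name. A rough `grep -E` equivalent for experimenting with it, with the caveat that POSIX classes only approximate Java's `[\w\s]`:

```shell
#!/usr/bin/env bash
# Rough POSIX-ERE equivalent of the Java pattern "TASK \[algo : [\w\s]+] \*{5,}";
# [[:alnum:]_[:space:]] approximates Java's [\w\s].
pattern='TASK \[algo : [[:alnum:]_[:space:]]+\] \*{5,}'
matches() { echo "$1" | grep -qE "$pattern" && echo match || echo no-match; }
matches 'TASK [algo : Run install algo script including playbook] ********'  # match
matches 'TASK [algo : Restart algo related services] ********'               # match
matches 'TASK [nginx : Init certbot] ********'                               # no-match
```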


+14 -0 bubble-server/src/test/java/bubble/test/DebugCallsTest.java

@@ -0,0 +1,14 @@
/**
* Copyright (c) 2020 Bubble, Inc. All rights reserved.
* For personal (non-commercial) use, see license: https://getbubblenow.com/bubble-license/
*/
package bubble.test;

import org.junit.Test;

public class DebugCallsTest extends ActivatedBubbleModelTestBase {

@Override protected String getManifest() { return "manifest-empty"; }

@Test public void testEcho() throws Exception { modelTest("debug_echo"); }
}

+6 -12 bubble-server/src/test/resources/models/include/get_network_keys.json

@@ -16,15 +16,12 @@
},

{
"before": "sleep 3s",
"comment": "check email inbox, expect network keys request",
"request": {
"uri": "debug/inbox/email/<<rootEmail>>?type=request&action=password&target=network"
},
"comment": "await and get network keys request in email inbox",
"before": "await_url debug/inbox/email/<<rootEmail>>?type=request&action=password&target=network 3m 5s len(await_json) > 0",
"request": { "uri": "debug/inbox/email/<<rootEmail>>?type=request&action=password&target=network" },
"response": {
"store": "emailInbox",
"check": [
{"condition": "json.length >= 1"},
{"condition": "'{{json.[0].ctx.message.messageType}}' == 'request'"},
{"condition": "'{{json.[0].ctx.message.action}}' == 'password'"},
{"condition": "'{{json.[0].ctx.message.target}}' == 'network'"}
@@ -41,15 +38,12 @@
},

{
"before": "sleep 3s",
"comment": "check email inbox for key request confirmation",
"request": {
"uri": "debug/inbox/email/<<rootEmail>>?type=confirmation&action=password&target=network"
},
"comment": "await and get key request confirmation from email inbox",
"before": "await_url debug/inbox/email/<<rootEmail>>?type=confirmation&action=password&target=network 3m 5s len(await_json) > 0",
"request": { "uri": "debug/inbox/email/<<rootEmail>>?type=confirmation&action=password&target=network" },
"response": {
"store": "emailInbox",
"check": [
{"condition": "json.length >= 1"},
{"condition": "'{{json.[0].ctx.message.messageType}}' == 'confirmation'"},
{"condition": "'{{json.[0].ctx.message.action}}' == 'password'"},
{"condition": "'{{json.[0].ctx.message.target}}' == 'network'"}


+12 -10 bubble-server/src/test/resources/models/include/new_bubble.json

@@ -111,13 +111,13 @@

{
"comment": "list all payment methods",
"request": { "uri": "me/paymentMethods?all=true" },
"request": { "uri": "me/paymentMethods" },
"response": { "store": "paymentMethods" }
},

{
"comment": "add payment method for the user",
"onlyIf": "len(paymentMethods) == 0",
"onlyIf": "!match_any(paymentMethods, function(m) { return !m.hasPromotion() && m.getPaymentMethodType() != `promotional_credit`; })",
"before": "stripe_tokenize_card",
"request": {
"uri": "me/paymentMethods",
@@ -127,15 +127,15 @@
},

{
"comment": "list all payment methods again after creating one",
"onlyIf": "len(paymentMethods) == 0",
"request": { "uri": "me/paymentMethods?all=true" },
"response": { "store": "paymentMethods", "check": [{ "condition": "len(json) == 1" }] }
"comment": "wait for the one created above and fetch all payment methods again including that one",
"onlyIf": "!match_any(paymentMethods, function(m) { return !m.hasPromotion() && m.getPaymentMethodType() != `promotional_credit`; })",
"before": "await_url me/paymentMethods 5m 10s match_any(await_json, function(m) { return !m.hasPromotion() && m.getPaymentMethodType() != `promotional_credit`; })",
"request": { "uri": "me/paymentMethods" },
"response": { "store": "paymentMethods" }
},

{
"comment": "add plan, using the first found payment method for the new bubble",
"before": "sleep 24s",
"request": {
"uri": "me/plans",
"method": "put",
@@ -147,7 +147,9 @@
"plan": "<<plan>>",
"footprint": "<<footprint>>",
"sendMetrics": <<sendMetrics>>,
"paymentMethodObject": { "uuid": "{{ paymentMethods.[0].uuid }}" }
"paymentMethodObject": {
"uuid": "{{ js '_find(paymentMethods, function(m) { return !m.hasPromotion() && m.getPaymentMethodType() != `promotional_credit`; }).getUuid()' }}"
}
}
},
"response": { "store": "plan" }
@@ -166,7 +168,7 @@

{
"comment": "call API of deployed node after some grace period, ensure it is running",
"before": "await_url .bubble 20m:40m 20s",
"before": "await_url .bubble 20m:20m 20s",
"connection": {
"name": "<<bubbleConnectionVar>>",
"baseUri": "https://{{<<networkVar>>.host}}.<<network>>.<<domain>>:{{serverConfig.nginxPort}}/api"
@@ -218,4 +220,4 @@
]
}
}
]
]

+48 -0 bubble-server/src/test/resources/models/tests/debug_echo.json

@@ -0,0 +1,48 @@
[
{
"comment": "Simplest example of using ECHO debug call without the query param",
"request": { "uri": "debug/echo", "entity": { "anything": "something" } },
"response": { "check": [{ "condition": "len(json) == 0" }] },
"after": "add_to_ctx { \"added\": \"val\", \"addedInner\": { \"inner\": \"value\", \"another\": \"variable\" }, \"addedArray\": [ \"abc\", \"def\" ] }"
},

{
"comment": "Example of using ECHO debug call and echo_in_log after",
"request": {
"uri": "debug/echo?respondWith=inner.comment",
"entity": {
"inner": { "comment": "Fixed text comment" },
"non-existing": "{{notexistingvar}}"
}
},
"response": {
"raw": true,
"store": "echoedResponse",
"check": [{ "condition": "response.json == 'Fixed text comment'" }]
},
"after": "echo_in_log Test:\n\tnon existent value: {{somethingThatDoesntExist}}\n\tjust stored response: {{echoedResponse}}"
},

{
"comment": "Another example of using ECHO debug call and echo_in_log after",
"before": "add_to_ctx { \"brand\": \"new\" }",
"request": {
"uri": "debug/echo?respondWith=inner",
"entity": {
"previouslyStored": "{{echoedResponse}}",
"inner": { "comment": "Another fixed text comment" }
}
},
"response": {
"store": "echoedJson",
"check": [
{ "condition": "json.get('comment') == 'Another fixed text comment'" },
{ "condition": "'{{brand}}' == 'new'" },
{ "condition": "'{{addedInner.another}}' == 'variable'" },
{ "condition": "'{{added}}' == 'val'" }
]
},
"after": "echo_in_log \"And now the stored value is: {{echoedJson.comment}}\""
// echo_in_log is, of course, available within `before` also
}
]

+25 -48 bubble-server/src/test/resources/models/tests/live/backup_and_restore.json

@@ -1,6 +1,7 @@
[
{
"comment": "login as root on sage node",
"comment": "login as root on sage node (adding username to ctx also)",
"before": "add_to_ctx { \"username\": \"bubble-user\" }",
"connection": {
"name": "sageConnection",
"baseUri": "https://{{sageFqdn}}:{{serverConfig.nginxPort}}/api"
@@ -41,11 +42,11 @@
"params": {
"sageFqdn": "{{sageFqdn}}",
"rootPassword": "{{sageRootPass}}",
"username": "bubble-user",
"username": "{{username}}",
"password": "password1!",
"userSessionVar": "userSession",
"network": "bubble-{{rand 5}}",
"email": "bubble-user@example.com",
"email": "{{username}}@example.com",
"plan": "bubble",
"networkVar": "bubbleNetwork",
"bubbleConnectionVar": "bubbleConnection",
@@ -61,9 +62,6 @@
"uri": "me/networks/{{bubbleNetwork.network}}/storage/write/test_file_{{bubbleNetwork.network}}.txt",
"headers": { "Content-Type": "multipart/form-data" },
"entity": {"file": "data:this is a test file: {{rand 20}}"}
},
"response": {
"store": "fileMeta"
}
},

@@ -71,11 +69,11 @@
"comment": "add verified email to root account on new node",
"include": "add_approved_contact",
"params": {
"username": "bubble-user",
"username": "{{username}}",
"userSession": "bubbleUserSession",
"userConnection": "bubbleConnection",
"contactInfo": "bubble-user@example.com",
"contactLookup": "bubble-user@example.com",
"contactInfo": "{{username}}@example.com",
"contactLookup": "{{username}}@example.com",
"authFactor": "not_required",
"rootSession": "bubbleUserSession",
"rootConnection": "bubbleConnection"
@@ -94,9 +92,6 @@
"agreeToTerms": true,
"contact": {"type": "email", "info": "user-{{rand 5}}@example.com"}
}
},
"response": {
"store": "newUser"
}
},

@@ -108,28 +103,14 @@
},

{
"comment": "backup network",
"request": {
"uri": "me/networks/{{bubbleNetwork.network}}/backups/test_backup",
"method": "put"
},
"response": {
"store": "backup"
}
"comment": "backup network for later restore",
"request": { "method": "put", "uri": "me/networks/{{bubbleNetwork.network}}/backups/test_backup" }
},

{
"before": "await_url me/networks/{{bubbleNetwork.network}}/backups/test_backup?status=backup_completed 5m 10s",
"comment": "find completed backup",
"request": {
"uri": "me/networks/{{bubbleNetwork.network}}/backups/test_backup"
},
"response": {
"store": "backup",
"check": [
{"condition": "json.getStatus().name() == 'backup_completed'"}
]
}
"comment": "await completed backup and store in the context",
"before": "await_url me/networks/{{bubbleNetwork.network}}/backups/test_backup?status=backup_completed 90s:10m 15s",
"request": { "uri": "me/networks/{{bubbleNetwork.network}}/backups/test_backup?status=backup_completed" }
},

{
@@ -137,7 +118,7 @@
"include": "get_network_keys",
"params": {
"network": "{{bubbleNetwork.network}}",
"rootEmail": "bubble-user@example.com",
"rootEmail": "{{username}}@example.com",
"networkKeysVar": "networkKeys",
"networkKeysPassword": "Passw0rd!!"
}
@@ -155,13 +136,9 @@
},

{
"comment": "verify network is stopped",
"request": {"uri": "me/networks/{{bubbleNetwork.network}}" },
"response": {
"check": [
{"condition": "json.getState().name() == 'stopped'"}
]
}
"comment": "wait for network to stop",
"before": "await_url me/networks/{{bubbleNetwork.network}} 5m 10s await_json.getState().name() == 'stopped'",
"request": { "uri": "me" }
},

{
@@ -179,12 +156,12 @@
},

{
"before": "await_url .bubble 40m 20s",
"comment": "restore node using restoreKey",
"before": "await_url .bubble 16m:20m 20s",
"connection": {
"name": "restoredBubbleConnection",
"baseUri": "https://{{restoreNN.fqdn}}:{{serverConfig.nginxPort}}/api"
},
"comment": "restore node using restoreKey",
"request": {
"uri": "auth/restore/{{restoreNN.restoreKey}}",
"entity": {
@@ -193,17 +170,16 @@
},
"method": "put"
},
"after": "sleep 240s" // give the restore some time to stop the server, restore and restart
"after": "await_url .bubble 9m:10m 20s" // give the restore some time to stop the server, restore and restart
},

{
"before": "await_url .bubble 10m 20s",
"comment": "login to restored bubble",
"request": {
"session": "new",
"uri": "auth/login",
"entity": {
"name": "bubble-user",
"name": "{{username}}",
"password": "password1!"
}
},
@@ -240,15 +216,16 @@
"request": {
"uri": "me/networks/{{restoreNN.network}}/actions/stop",
"method": "post"
}
},
"after": "await_url me/networks/{{restoreNN.network}} 5m 10s await_json.getState().name() == 'stopped'"
},

{
"comment": "delete restored bubble network from sage",
"comment": "delete restored bubble network from sage by deleting plan which should have the same name",
"request": {
"uri": "me/networks/{{restoreNN.network}}",
"uri": "me/plans/{{restoreNN.network}}",
"method": "delete"
},
"after": "verify_unreachable https://{{restoreNN.fqdn}}:{{serverConfig.nginxPort}}/api/me"
}
]
]
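The rewritten test steps lean heavily on the harness's `await_url <path> <timeout> <interval> [condition]` helper. Its polling core can be sketched as follows; this is a hypothetical simplification that ignores the `min:max` grace-period syntax and the `await_json` condition expressions the real helper supports:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the await_url polling loop used throughout these tests:
# retry a URL every <interval> seconds until it answers 2xx, or fail on timeout.
await_url() {
  local url=$1 timeout=$2 interval=$3
  local deadline=$(( $(date +%s) + timeout ))
  while (( $(date +%s) < deadline )); do
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
      return 0    # endpoint answered successfully
    fi
    sleep "$interval"
  done
  return 1        # timed out without a successful response
}
# e.g. await_url "https://host:1443/api/.bubble" 1200 20
```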
