#18 Introduce packer support

Merged
jonathan merged 80 commits from cobbzilla/introduce_packer into master 4 years ago
jonathan commented 4 years ago

@kris please review. This is a big one

kris was assigned by jonathan 4 years ago
kris commented 4 years ago
Collaborator

This looks like a branch created out of another branch (sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService) which is not yet merged into master.
So I’ll review diff: https://git.bubblev.org/bubblev/bubble/compare/sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService...cobbzilla/introduce_packer and put my comments here. I guess most of changed files here are actually either new or deleted files, so it should be easy for me to do so.

kris changed title from cobbzilla/introduce_packer to WIP: (blocked by https://git.bubblev.org/bubblev/bubble/pulls/4 )cobbzilla/introduce_packer 4 years ago
kris changed title from WIP: (blocked by https://git.bubblev.org/bubblev/bubble/pulls/4 )cobbzilla/introduce_packer to WIP: (blocked by https://git.bubblev.org/bubblev/bubble/pulls/4 ) cobbzilla/introduce_packer 4 years ago
kris requested changes 4 years ago
kris left a comment

Taking a break; will continue the review from
bubble-server/src/main/java/bubble/cloud/storage/local/LocalStorageDriver.java

@@ -38,3 +38,3 @@
else
echo "Patching and restarting..."
ssh ${HOST} "cd /tmp && cp ~bubble/current/bubble.jar . && cd classes && jar uvf ../bubble.jar . | egrep -v '*/\(*' && cat ../bubble.jar > ~bubble/current/bubble.jar && supervisorctl restart bubble" || die "Error patching remote jar"
ssh ${HOST} "cd /tmp && cp ~bubble/api/bubble.jar . && cd classes && jar uvf ../bubble.jar . | egrep -v '*/\(*' && cat ../bubble.jar > ~bubble/api/bubble.jar && supervisorctl restart bubble" || die "Error patching remote jar"
kris commented 4 years ago

I did something similar while I didn’t have root access to btest5. Only root has rights to do supervisorctl, so it would be good to add a check - if user is set and it is not root, do not allow execution of this script without norestart option.

Also, do rm -rf /tmp/classes afterwards, so the same script may be called with other user afterwards. The one created here is owned by the currently used user, so if it is root, next call with bubble@hostname will fail.

jonathan commented 4 years ago

These would be nice-to-haves, but it’s working OK for me now -- so maybe in a future PR

These would be nice-to-haves, but it's working OK for me now -- so maybe in a future PR
kris commented 4 years ago

You're introducing usage of this script by some other user and not the default root within the header comment at the top of this script. And for that to be complete, this change should be done. However, this PR is already a huge one, so I have to agree that this can go in a future PR.

@@ -55,3 +55,3 @@
else
echo "Patching and restarting..."
ssh ${HOST} "cat /tmp/bubble.jar > ~bubble/current/bubble.jar && supervisorctl restart bubble"
ssh ${HOST} "cat /tmp/bubble.jar > ~bubble/api/bubble.jar && supervisorctl restart bubble"
kris commented 4 years ago

Same as in bpatch:

  • restart case should be restricted to root user only
  • /tmp/bubble.jar file should be removed at the end of this script

Also, any currently existing /tmp/bubble.jar and /tmp/classes should be deleted from all existing servers once this change is applied, so these scripts will actually work for users other than root. This can be done simply by calling these 2 patch scripts as the root user on all servers before using them with any other user.

jonathan commented 4 years ago

These would be nice-to-haves, but it’s working OK for me now -- so maybe in a future PR

@@ -14,3 +14,3 @@
# Environment variables
#
# BUBBLE_ENV : env file to load. Default is ~/.bubble.env or /home/bubble/current/bubble.env (whichever is found first)
# BUBBLE_ENV : env file to load. Default is ~/.bubble.env or /home/bubble/api/bubble.env (whichever is found first)
kris commented 4 years ago

I’m not yet sure where the file came from as I cannot find it in this PR, but grep -rn /home/bubble/current . returns:
./bubble-server/src/main/resources/packer/roles/bubble/tasks/main.yml:71: dest: /home/bubble/current

I guess, the current -> api replacement should be done in there also.

jonathan commented 4 years ago

We can probably remove that. That line is creating a symlink from current -> api so that, if anyone still refers to the jar or env file under current instead of api, they will get the same file.

@@ -13,3 +13,3 @@

# Install packages
sudo apt install openjdk-11-jdk maven postgresql-10 redis-server jq python3 python3-pip npm webpack curl -y || die "Error installing apt packages"
sudo apt install openjdk-11-jdk maven postgresql-10 redis-server jq python3 python3-pip npm webpack curl unzip -y || die "Error installing apt packages"
kris commented 4 years ago

jar command is not available on btest5 while it is used within bpatch script. Should we add openjdk-11-jdk-headless here, or is it added somewhere else in this PR (I’ll try finding it, but still, I’ll leave this comment as a reminder).

jonathan commented 4 years ago

this script is only intended to be run in development environments, and it installs openjdk-11-jdk, so we have the jar command.

@@ -0,0 +7,4 @@

# Install packer
if [[ ! -f ${HOME}/packer/packer ]] ; then
PACKER_VERSION=1.5.6
kris commented 4 years ago

Version 1.6.0 is available on https://www.packer.io/downloads/ - any reason not to use newer version?

jonathan commented 4 years ago

1.5.6 was the latest version when I started this. we can upgrade later.

@@ -0,0 +17,4 @@

# Install packer Vultr plugin
if [[ ! -f ${HOME}/.packer.d/plugins/packer-builder-vultr ]] ; then
PACKER_VULTR_VERSION=1.0.8
kris commented 4 years ago

This one has new version too: 1.0.9

kris commented 4 years ago

Maybe it would be good to have these version numbers in env file so it would be easier to upgrade versions.

jonathan commented 4 years ago

Yes, nice additions, for later :)

@@ -117,3 +43,3 @@
echo "Updated $(ls -1 ${ROLES_DIR} | wc -l) roles in ${DEFAULT_ROLES}"
cd ${CLASSES_DIR} && jar uvf ${BUBBLE_JAR} scripts || die "Error updating ${BUBBLE_JAR} with scripts"

rm -f "${UPDATED}"
kris commented 4 years ago

Remove. There’s no UPDATED folder any more.

jonathan commented 4 years ago

good catch, I’ll remove that line.

@@ -0,0 +47,4 @@
public boolean isOptionalConfigName(String cfgName) {
final String[] names = getOptionalConfigNames();
if (names == null) return false;
return Arrays.asList(names).contains(cfgName);
kris commented 4 years ago

Arrays.stream(names).anyMatch(cfgName::equals) might work a bit faster, but I guess this is not so important here as the names array should not be large.

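A minimal sketch of the suggested one-liner, assuming the method shape from the hunk above (only getOptionalConfigNames and names come from the diff; the rest is illustrative):

// requires java.util.Arrays
public boolean isOptionalConfigName(String cfgName) {
    final String[] names = getOptionalConfigNames();
    // anyMatch short-circuits on the first hit and skips building a List wrapper
    return names != null && Arrays.stream(names).anyMatch(cfgName::equals);
}
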
@@ -59,0 +55,4 @@
for (CloudRegion configRegion : config.getRegions()) {
final CloudRegion cloudRegion = getCloudRegions().stream()
.filter(s -> s.getInternalName().equalsIgnoreCase(configRegion.getInternalName()))
.findFirst()
kris commented 4 years ago

No need to use null below. Instead:

...
.findFirst()
.map(cr -> configRegion.setId(cr.getId()))
.ifPresentOrElse(
        cloudRegions::add,
        () -> log.warn("initRegions: config region not found: " + configRegion.getInternalName()));

So actually no need for the cloudRegion variable here.
@@ -59,0 +63,4 @@
cloudRegions.add(configRegion.setId(cloudRegion.getId()));
}
}
return cloudRegions;
kris commented 4 years ago

Strange naming and a bit confusing. According to the code above, this returned array list actually contains (slightly altered) configRegion-s (line 63), so its name cloudRegions is not really correct. Maybe just call it regions.

@@ -66,0 +70,4 @@
private List<ComputeNodeSize> initSizes() {
final ArrayList<ComputeNodeSize> cloudSizes = new ArrayList<>();
for (ComputeNodeSize configSize : config.getSizes()) {
final ComputeNodeSize cloudSize = getCloudSizes().stream().filter(sz -> sz.getInternalName().equals(configSize.getInternalName())).findFirst().orElse(null);
kris commented 4 years ago

Same as above with ifPresentOrElse

kris commented 4 years ago

Not using equalsIgnoreCase here?

jonathan commented 4 years ago

can cleanup later

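For reference, a hedged sketch of initSizes with both suggestions applied (ifPresentOrElse plus equalsIgnoreCase); the setter chain applied to the matched cloud size at the end of the hunk is elided here, and the log message is kept as in the current code:

private List<ComputeNodeSize> initSizes() {
    final List<ComputeNodeSize> sizes = new ArrayList<>();
    for (ComputeNodeSize configSize : config.getSizes()) {
        getCloudSizes().stream()
                .filter(sz -> sz.getInternalName().equalsIgnoreCase(configSize.getInternalName()))
                .findFirst()
                .ifPresentOrElse(
                        sizes::add,
                        () -> log.warn("initSizes: config region not found: " + configSize.getInternalName()));
    }
    return sizes;
}
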
@@ -66,0 +74,4 @@
if (cloudSize == null) {
log.warn("initSizes: config region not found: "+configSize.getInternalName());
} else {
cloudSizes.add(cloudSize
kris commented 4 years ago

And here it is inverted as opposed to the regions method - here the result is really made of cloudSize entries. Is this intentional? Or are these cloud vs. config entries really the same thing except for the few fields updated in these methods?

jonathan commented 4 years ago

The cloud regions are (for Vultr and DigitalOcean) coming from the vendor API. The config regions come from the JSON config for the CloudService. It’s possible that the vendor has new regions that we don’t have, or that we have obsolete regions that the vendor has decommissioned.

@@ -69,0 +85,4 @@
@Getter(lazy=true) private final OsImage os = initOs();
private OsImage initOs() {
final OsImage os = getCloudOsImages().stream()
.filter(s -> s.getName().equals(config.getOs()))
kris commented 4 years ago

Not using equalsIgnoreCase here?

jonathan commented 4 years ago

These are vendor keys, they will be case sensitive.

@@ -172,2 +165,2 @@
final ComputeNodeSize size = config.getSize(node.getSize());
final String os = config.getConfig("os");
final CloudRegion region = getRegions().stream()
.filter(r -> r.getInternalName().equals(node.getRegion()))
kris commented 4 years ago

Here also, do we need equalsIgnoreCase?

@@ -260,1 +234,4 @@

@Override public List<PackerImage> getPackerImages() {
final List<PackerImage> images = getResources(PACKER_IMAGES_URI, new DigitalOceanPackerImageParser(configuration.getVersion(), packerService.getPackerPublicKeyHash(), configuration.getJarSha()));
return images == null ? Collections.emptyList() : images;
kris commented 4 years ago

According to the code, I would say that this cannot be null. One less check to do here. Just add lombok's @NonNull annotation to getResources and all Parser classes' newResults methods.

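A hedged sketch of the caller-side simplification once getResources never returns null; the getResources body below is illustrative shorthand, not the actual driver code, and fetchResources is a hypothetical placeholder for the existing HTTP + parse logic:

@NonNull protected <T> List<T> getResources(String uri, ResourceParser<T, List<T>> parser) {
    final List<T> results = fetchResources(uri, parser);  // hypothetical placeholder
    return results == null ? Collections.emptyList() : results;
}

@Override public List<PackerImage> getPackerImages() {
    // no null check needed once getResources is guaranteed non-null
    return getResources(PACKER_IMAGES_URI, new DigitalOceanPackerImageParser(
            configuration.getVersion(), packerService.getPackerPublicKeyHash(), configuration.getJarSha()));
}
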
@@ -0,0 +20,4 @@
} else {
return die("parse: id not found");
}
if (item.has("name")) {
kris commented 4 years ago

Checking for name here while using slug in the next line. I'm guessing we should check for slug here.

jonathan commented 4 years ago

good catch, will fix.

@@ -0,0 +62,4 @@

@Override protected List<OsImage> getCloudOsImages() {
// todo
return null;
kris commented 4 years ago

It would be good to return notSupported("...") in these 3 methods here also for now.

@@ -0,0 +14,4 @@
if (!item.has("vcpu_count")) return die("parse: vcpu_count not found");
if (!item.has("ram")) return die("parse: ram not found");
if (!item.has("disk")) return die("parse: disk not found");
return new ComputeNodeSize()
kris commented 4 years ago

Add check for bandwidth above.

jonathan commented 4 years ago

yup, will fix.

@@ -0,0 +20,4 @@
.setVcpu(item.get("vcpu_count").asInt())
.setMemoryMB(item.get("ram").asInt())
.setSsdGB(item.get("disk").asInt())
.setTransferGB(item.get("bandwidth_gb").asInt());
kris commented 4 years ago

According to https://www.vultr.com/api/#plans_plan_list the property’s name is just bandwidth.

But then again https://www.vultr.com/api/#plans_plan_list_baremetal API call has bandwidth_tb.

I’m just guessing that we should use bandwidth here, and that it is given in TB again.

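Putting the two comments together, a hedged sketch of the parser change; the bandwidth field name follows kris's guess from the API docs, and the unit (GB vs TB) still needs to be confirmed:

if (!item.has("vcpu_count")) return die("parse: vcpu_count not found");
if (!item.has("ram")) return die("parse: ram not found");
if (!item.has("disk")) return die("parse: disk not found");
if (!item.has("bandwidth")) return die("parse: bandwidth not found");  // new presence check
return new ComputeNodeSize()
        .setVcpu(item.get("vcpu_count").asInt())
        .setMemoryMB(item.get("ram").asInt())
        .setSsdGB(item.get("disk").asInt())
        // "bandwidth" per the plans API docs; if it is reported in TB, a conversion to GB is still needed
        .setTransferGB(item.get("bandwidth").asInt());
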
@@ -60,3 +66,1 @@
@Getter(lazy=true) private static final Map<String, Integer> regionMap = getResourceMap(REGIONS_URL);
@Getter(lazy=true) private static final Map<String, Integer> plansMap = getResourceMap(PLANS_URL);
@Getter(lazy=true) private static final Map<String, Integer> osMap = getResourceMap(OS_URL);
@Getter(lazy=true) private final List<CloudRegion> cloudRegions = loadCloudResources(REGIONS_URL, new VultrRegionParser());
kris commented 4 years ago

Why the change in method name here? A similar method in DigitalOcean's class was named just getResources.

BTW, I like this loadCloudResources name better as it is not a single getter.

@@ -64,0 +71,4 @@
private int initSnapshotOsId() {
final OsImage snapshot = getCloudOsImages().stream()
.filter(i -> i.getName().equals("Snapshot"))
.findFirst()
kris commented 4 years ago

Here also you may use ifPresentOrElse method, and remove snapshot

@@ -67,3 +83,3 @@
public static final long SERVER_STOP_CHECK_INTERVAL = SECONDS.toMillis(5);

private static Map<String, Integer> getResourceMap(String uri) {
private <T> List<T> loadCloudResources(String uri, ResourceParser<T, List<T>> parser) {
kris commented 4 years ago

This method is also @NonNull.

@@ -366,1 +395,3 @@
final HttpRequestBean destroyKeyRequest = auth(new HttpRequestBean(POST, DESTROY_SSH_KEY_URL, keyData));
@Override public List<PackerImage> getPackerImages() {
final List<PackerImage> images = loadCloudResources(SNAPSHOT_URL, new VultrPackerImageParser(configuration.getVersion(), packerService.getPackerPublicKeyHash(), configuration.getJarSha()));
return images == null ? Collections.emptyList() : images;
kris commented 4 years ago

And here then loadCloudResources is @NonNull, so no need for check - just return.

@@ -367,0 +399,4 @@

public PackerImage getPackerImage(AnsibleInstallType installType) {
final List<PackerImage> images = getPackerImages();
return images == null ? null : images.stream()
kris commented 4 years ago

An additional null check that is not needed (even with the current code)... No need for this one.

kris requested changes 4 years ago
kris left a comment

I had to leave. Continuing tomorrow from
bubble-server/src/main/java/bubble/service/cloud/NodeProgressMeter.java

@@ -43,29 +41,56 @@ public abstract class ComputeServiceDriverBase
}

@Autowired protected BubbleNodeDAO nodeDAO;
@Autowired protected PackerService packerService;
kris commented 4 years ago

Autowiring a class of the same type (both are Services). And this instance is used only once, while configuration is available. Please use configuration.getBean instead.

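A minimal sketch of the suggested lookup at the single call site, assuming the usual configuration bean accessor:

// one-off lookup where the service is used, instead of an @Autowired field on the base class
final PackerService packerService = configuration.getBean(PackerService.class);
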
@@ -367,0 +444,4 @@
return tag != null && tag.contains("_"+installType.name()+"_") && tag.endsWith(jarSha);
});
} catch (IOException e) {
log.error("finalizeIncompletePackerRun: error listing servers: "+shortError(e), e);
kris commented 4 years ago

log messages’ prefixes are wrong in this method.

jonathan commented 4 years ago

thanks, will fix.

@@ -78,4 +77,2 @@
}

// Special handling when bubble has not been activated for bootstrapping ansible roles
if (activated() || !key.startsWith(ROLE_PATH)) return false;
kris commented 4 years ago

Completely unrelated to ROLE_PATH, this line was returning false whenever activated() returned true. Was this a bug, or should we leave just this activated() check here? Same question about the removed line 117 right after this change.

kris commented 4 years ago

Also, this activated method is used only in this class, and so can be changed to private.

jonathan commented 4 years ago

We don’t need the check anymore since roles are no longer stored in localstorage. Can probably remove activated everywhere in this class, but will wait on that for now.

@@ -327,3 +324,3 @@

@Transactional(Transactional.TxType.REQUIRES_NEW)
private void deleteTransactional(@NonNull final String uuid) {
protected void deleteTransactional(@NonNull final String uuid) {
kris commented 4 years ago

This method is not used outside of this class. Any reason why changing it to protected?

jonathan commented 4 years ago

iirc I got some kind of spring error keeping it as private and had to change it. some kind of reflection problem i guess.

@@ -73,1 +68,3 @@
.collect(Collectors.toList());
final String[] installRoles = installType == AnsibleInstallType.sage
? ApiConstants.ROLES_SAGE
: ApiConstants.ROLES_NODE;
kris commented 4 years ago

If the ansible scripts are the same, then a node started as a restore should execute only the sage roles initially (or at least some tasks from algo and mitm should be excluded). The remaining two roles should be executed after the restore call is run on that restored instance's bubble API.

Changes I recently made to the automation files should be applied to the bubble-server/src/main/resources/ansible/roles files for restoring to work properly as it is implemented now.

Another way would be to use packer for the full backup and restore process instead of the one we have right now. Should we do that? It would take more space, right?

jonathan commented 4 years ago

It’s an interesting idea to use packer for backup/restore, but (1) it would take a LOT more space and (2) what if you wanted to restore to a different cloud? there’s no way to restore a Vultr image to a DigitalOcean droplet, for example. I think the existing way of backing up/restoring through a CloudService of type storage is nice because it is cloud-independent.

@@ -124,6 +122,7 @@ public class StandardNetworkService implements NetworkService {
@Autowired private NodeService nodeService;
@Autowired private GeoService geoService;
@Autowired private AnsiblePrepService ansiblePrep;
@Autowired private PackerService packerService;
kris commented 4 years ago

Autowiring a class of the same type (Service). Please use configuration.getBean instead.

@@ -0,0 +122,4 @@
// determine lat/lon to find closest cloud region to perform build in
final GeoLocation here = geoService.locate(account.getUuid(), getExternalIp());
final List<CloudRegionRelative> closestRegions = findClosestRegions(configuration, new SingletonList<>(cloud), null, here.getLatitude(), here.getLongitude());
if (empty(closestRegions)) return die("writePackerImages: no closest region could be determined");
kris commented 4 years ago

Wrong log message prefix here. There is no method name as a prefix in log messages elsewhere in this method AFAICS.

jonathan commented 4 years ago

yup, removed the prefix. it’s all packer.

kris requested changes 4 years ago
kris left a comment

Reached gitea's limit. Following comments will be plain comments within the PR.

@@ -174,0 +183,4 @@
super(meter.nn, meter.redis);
this.meter = meter;
}
@Override public void close() {}
kris commented 4 years ago

I couldn't find out why this is needed. And does this influence memory usage anyhow? It's currently happening only within the newNode method, but nothing limits its usage in other places in the future (e.g. a JavaDoc comment which would discourage usage of this).

jonathan commented 4 years ago

In StandardNetworkService we re-enabled retries if the SSH command fails due to SSH not yet being available. In this case, the progressMeter was being closed, and no further writes were allowed. Thus, if the second attempt to run ansible succeeded, it would run but you would not see any output and the progress meter would be stuck.

@@ -238,3 +225,4 @@
node.setState(BubbleNodeState.starting);
nodeDAO.update(node);

final ExecutorService backgroundJobs = DaemonThreadFactory.fixedPool(3);
kris commented 4 years ago

Only 2 jobs are submitted into this pool. I guess we can set 2 here also.

@@ -0,0 +15,4 @@

private static final long DNS_TIMEOUT = MINUTES.toMillis(60);

private CloudServiceDAO cloudDAO;
kris commented 4 years ago

Used only in one place where you can get it from already available configuration.

jonathan commented 4 years ago

Yeah it’s faster to pass it because the caller already has it. Getting it from configuration requires a spring bean lookup. Probably cached, but still.

@@ -0,0 +12,4 @@

public class NodeStartJob implements Runnable {

private StandardNetworkService networkService;
kris commented 4 years ago

Never used.

jonathan commented 4 years ago

good catch, will remove

@@ -0,0 +51,4 @@
public static final AnsibleInstallType[] PACKER_TYPES = {AnsibleInstallType.sage, AnsibleInstallType.node};
public static final String PACKER_TEMPLATE = PACKER_DIR+"/packer.json.hbs";
public static final String PACKER_IMAGE_NAME_VAR = "packerImageName";
public static final String PACKER_IMAGE_PREFIX = "packer_bubble_";
kris commented 4 years ago

All constants here except this one are used only within this class. It would be good to show that by setting them to be private.

@@ -0,0 +67,4 @@
public static final String VARIABLES_VAR = "packerVariables";
public static final String BUILD_REGION_VAR = "buildRegion";
public static final String IMAGE_REGIONS_VAR = "imageRegions";
public static final String BUILDERS_VAR = "builders";
kris commented 4 years ago

It would be good to have all the var names here as constants - I was looking for roles as I found that one in json.hbs, but there are some others used in this class.

@@ -0,0 +129,4 @@
// set environment variables
final Map<String, String> env = new HashMap<>();
for (NameAndValue variable : packerConfig.getVars()) {
env.put(variable.getName(), HandlebarsUtil.apply(configuration.getHandlebars(), variable.getValue(), ctx, '[', ']'));
kris commented 4 years ago

As this is actually using ctx to build a value which will be set as an ENV var, should this env.put part be moved further down in the code - after the last ctx.put call and before the use of env in command execution (line 221)?

As ctx is not really read and used in such a way anywhere in between, it would be safe to move this complete // set environment variables part there.

jonathan commented 4 years ago

this was some complex stuff to get right and it’s working. we can revisit later.

@@ -0,0 +162,4 @@
} else {
final List<CloudRegion> existingRegions = new ArrayList<>();
for (PackerImage image : existingForInstallType) {
existingRegions.addAll(Arrays.asList(image.getRegions()));
kris commented 4 years ago

What if within these images we find one that is for all regions (image.getRegions() == null as checked above in line 155)? Should we break there and use that one?

If so, maybe a better check in line 155 would be

final var existingForAllRegions = existingForInstallType.stream().filter(i -> i.getRegions() == null).findFirst();
if (existingForAllRegions.isPresent()) {
   // found an image for all regions: `existingForAllRegions.get()`
   ...
} else {
   ...
}
@@ -0,0 +165,4 @@
existingRegions.addAll(Arrays.asList(image.getRegions()));
}
log.info("packer images already exist for "+installType+" for regions: "+existingRegions.stream().map(CloudRegion::getInternalName).collect(Collectors.joining(", ")));
final List<String> existingRegionNames = existingRegions.stream().map(CloudRegion::getInternalName).collect(Collectors.toList());;
kris commented 4 years ago

Collect this list above the log line so mapping here will not be done twice.

jonathan commented 4 years ago

Can you call .collect() on a Stream more than once? I wasn’t sure.

kris commented 4 years ago

I guess I thought of just calling it up to map. Even if this collecting is done and then we do String.join in the log, that String.join is executed without need.

But then again, collect has to be done twice, and I'm not sure if this is optimized properly in Java streams.
Anyhow, the list should be short, right, so you can just skip this. Sorry for this one.

@@ -0,0 +177,4 @@
if (imagesRef != null) imagesRef.set(images);
return images;
}
ctx.put(IMAGE_REGIONS_VAR, toInnerStringList(imagesToCreate));
kris commented 4 years ago

'"' + String.join("\", \"", imagesToCreate) + '"' instead of creating a new method for this.

@@ -0,0 +202,4 @@

final List<String> builderJsons = new ArrayList<>();
if (packerConfig.iterateRegions()) {
for (CloudRegion region : computeDriver.getRegions()) {
kris commented 4 years ago

So this will actually build packer image for each and every region on the first call. I hope that packer will create those images in parallel so there will be no significant delay.

kris commented 4 years ago

On the other hand, VultrDriver is calling just getPackerImage(node.getInstallType()) which takes just the first packer image. Do we have any need for different images in different regions yet?

@@ -0,0 +267,4 @@
public void addAllRegions(ComputeServiceDriver computeDriver, Map<String, Object> ctx) {
ctx.put(IMAGE_REGIONS_VAR, toInnerStringList(computeDriver.getRegions().stream()
.map(CloudRegion::getInternalName)
.collect(Collectors.toList())));
kris commented 4 years ago

Just use Collectors.joining("\", \""), and do not forget to add surrounding quotes as in the current implementation if those are really needed around each element.

jonathan commented 4 years ago

That would probably be easier. A good change to make in the future. fwiw, the leading/trailing quotes are intentionally omitted, they are present in the template.

This is so we can write valid JSON in cloudService.json, for example "something": "<<listOfValues>>", when listOfValues is replaced, we want the inner quotes but the outer ones are already there.

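A hedged sketch of addAllRegions using Collectors.joining; per the reply above, the outer quotes stay in the template, so only the inner separators carry quotes:

public void addAllRegions(ComputeServiceDriver computeDriver, Map<String, Object> ctx) {
    // produces: region1", "region2", "region3  (outer quotes come from the template)
    ctx.put(IMAGE_REGIONS_VAR, computeDriver.getRegions().stream()
            .map(CloudRegion::getInternalName)
            .collect(Collectors.joining("\", \"")));
}
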
@@ -0,0 +270,4 @@
.collect(Collectors.toList())));
}

private String toInnerStringList(List<String> list) {
kris commented 4 years ago

Not needed as per last 2 comments here.

@@ -0,0 +54,4 @@
if (images != null) return images;
activeJobs.computeIfAbsent(cacheKey, k -> {
final PackerJob packerJob = configuration.autowire(new PackerJob(cloud, installType, imagesRef));
return pool.submit(packerJob);
kris commented 4 years ago

Is this really allowed to return Future<T> where List<PackerImage> is expected???

Anyhow, this writePackerImages is used in such a way that it doesn't expect any return value, so void as the return type would be more understandable.

jonathan commented 4 years ago

It's returning a Future where a Future is expected. The Future is a Future<List>.

Probably could declare this void.

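A hedged sketch of the "declare it void" idea; activeJobs, pool, PackerJob and configuration.autowire come from the hunk above, while the exact signature and the cache key construction are assumptions:

public void writePackerImages(CloudService cloud, AnsibleInstallType installType,
                              AtomicReference<List<PackerImage>> imagesRef) {
    final String cacheKey = cloud.getUuid() + ":" + installType;  // hypothetical key
    activeJobs.computeIfAbsent(cacheKey, k -> {
        final PackerJob packerJob = configuration.autowire(new PackerJob(cloud, installType, imagesRef));
        return pool.submit(packerJob);  // the map holds the Future<List<PackerImage>>
    });
}
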
kris commented 4 years ago
Collaborator
  • Compare and use newer from master: ansible/roles/algo/tasks/main.yml (master was merged after, but these ansible files were moved to other folder, so I guess you just missed this one).
  • Same for roles/mitmproxy/tasks/main.yml
  • I assume that packer/roles/... are executed only for building a new packer image, and so these backup restore changes are not needed there.

I’ll have to make a break here. Continuing tomorrow from /bubble-server/src/main/resources/ansible/roles/bubble/tasks/main.yml

kris commented 4 years ago
Collaborator
  • /bubble-server/src/main/resources/ansible/roles/bubble/tasks/main.yml restore.yml is not imported any more, but bubble_restore_monitor.sh was required for restore. Is this monitor started somewhere else now? I couldn't find 'restore.yml' anywhere.

  • Restart dnscrypt-proxy in bubble-server/src/main/resources/ansible/roles/mitmproxy/tasks/main.yml - how do we (and do we at all) build packer instances for restored bubbles? Is algo installed there before the actual restoring? Note that algo installs this dnscrypt-proxy, and so this will fail if there's no algo

  • Is API-Key used inside of our code fully replaced with apiKey - please check if activation works (bubble/config/activation.json line 26 still uses API-Key)

  • should we use tags in DigitalOcean packer builds in bubble-server/src/main/resources/models/defaults/cloudService.json also, as in Vultr? Also in bubble-server/src/test/resources/models/system/cloudService.json, while adding yet another tag within this one: test_instance.

  • bubble-server/src/main/resources/packer/roles/algo/defaults/main.yml where is this used and why adding it together with bubble-server/src/main/resources/packer/roles/algo/files/config.cfg.hbs ?

  • bubble-server/src/main/resources/packer/roles/bubble/files/init_bubble_db.sh lines 83 and 85 - arguments with indexes 3 and 4 are already read into consts IS_FORK and INSTALL_MODE in lines 42-43. And with that, their usage here is a bit strange: IS_FORK == "INIT" and INSTALL_MODE > .BUBBLE_DB_ENCRYPTION_KEY. I saw the usage - that's cheating :) Can you please just move this new part of code above the IS_FORK and INSTALL_MODE definitions, just to be clear. Not sure if you would need to move the used db_exists method also.

  • bubble-server/src/main/resources/packer/roles/bubble/tasks/main.yml - at the very end - line: shell: su - bubble bash -c install_packer.sh it would be better to use runuser -l bubble -c instead of su - bubble. And also, do we really need bash -c there - will this runuser or su - actually start a new bash, and therefore there’s no need to start yet another inside?

  • Do we need bubble-server/src/main/resources/ansible/roles/bubble/templates/snapshot_ansible.sh.j2 now that we have bubble-server/src/main/resources/packer/roles/finalizer/files/snapshot_ansible.sh?

  • Yes, bubble-server/src/main/resources/packer/roles/finalizer/tasks/main.yml kept just the task Touch first-time setup file, while there is no similar one for the case when restore_key is defined. Should the one that is left really have the condition when: restore_key is not defined? Will this restore_key be defined at any time when running packer ansible scripts? And how will we build instances when a user is trying to restore a bubble - building a full clean instance with full algo and mitmproxy setup, and then altering the setup on it by restoring the backed-up setup?

  • bubble-server/src/main/resources/packer/roles/mitmproxy/files/supervisor_mitmproxy.conf current line command=sudo -H -u mitmproxy bash -c "/home/mitmproxy/mitmproxy/run_mitmdump.sh 8888" - should the port 8888 really be fixed here? {{ mitm_port }} is still used in bubble-server/src/main/resources/ansible/roles/mitmproxy/tasks/route.yml:23 and it is setup in bubble-server/src/main/resources/ansible/roles/mitmproxy/files/bubble_role.json:6, and so in configuration.defaultMitmProxyPort. If fixed, then we can fix/remove all this other references.

  • Learning some of these things would require much more time for me, so I just ran through things like all that algo setup (dnscrypt setup, strongswan & charon, upgrades, ... - is this all just taken from algo's installation?) - looks ok, so I hope it works fine.

  • Where can we test this? I would like to run backup_and_restore test there.

And more:

  • DigitalOcean has available flags for regions and sizes in regions. We are not processing those. Should we? If some region or size is not available, we shouldn't use/show it, right?
  • Migration is missing!
  • We don’t need scripts/update_role.json and bin/update_role any more, right? If so, please delete those.

The 1st Review iteration done... I’ll start with reviewing new commits now

kris commented 4 years ago
Collaborator

Review of newer commit 828cdcf736

  • Should we replace __REGION__ in config template at line 5 in bin/aws/init_aws_configs.sh ?
  • New version available on https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-ec2 (1.11.800). Looks like they are regularly updating, so we should remember to update this version again when the PR is ready.
  • It would be better to return an empty collection from getAllPackerImages and getPackerImagesForRegion instead of null, and set those to be @NonNull, so some previous comments I added here may be applicable, and some parts of code will be simplified (no null checks).
  • ComputeServiceDriverBase.getPackerImage(BubbleNode node) - a better name would be getOrCreatePackerImage to clearly differentiate it from the simpler getPackerImage(AnsibleInstallType installType, String region) which only does get.
  • Also, this other method getPackerImage(AnsibleInstallType installType, String region) is used only in this class - set it to private, please.
  • Class AmazonEC2Driver line 121 - will this return the oldest image? What is order resulting from comparing(Image::getCreationDate) - ASC or DESC for time(stamp)?
  • Looks like this method getPackerImagesForRegion should be protected now.
  • VPC_IPV6_CIDR_BLOCK is not used - Amazon’s provided value is used instead.
  • Class AmazonEC2Driver line 285 - NPE for subnet comparing with == null check in the next line. And again, please try to get this default value just by using orElseGet(SupplierFunction) in line 284.
  • Class AmazonEC2Driver: Similarly, instead of usage of or(() -> Optional.ofNullable(inner_supplier_function)).get() in line 309 use orElseGet(inner_supplier_function), making it clear that the result should be an existing object. or() itself returns Optional again, and so it is not required to have a real existing object as the result of its supplier.
  • Class AmazonEC2Driver: AmazonEC2Driver.getSubnetsByRegion is a huge method. And it looks like it is easy to extract some of those parts for fetching different objects from Amazon.
  • As you can already see, I'm avoiding using null as much as I can - Class AmazonEC2Driver, lines 319-323 - just use .orElseGet(() -> die(...)) at the far end of the stream command. orElseThrow would be nicer here, but die has some other logging stuff which is a better thing here. (See the sketch after this list.)
  • Class AmazonEC2Driver lines 326-331 - dead code?

Reached bubble-server/src/main/java/bubble/cloud/compute/vultr/VultrDriver.java today. Continuing on Monday...

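An illustrative sketch of the Optional cleanups suggested for AmazonEC2Driver above; findDefaultSubnet and createDefaultSubnet are placeholder names, not the actual driver methods, while Vpc/Subnet are the AWS SDK model classes:

// instead of: .or(() -> Optional.ofNullable(createDefaultSubnet(ec2, region))).get()
final Subnet subnet = findDefaultSubnet(ec2, region)
        .orElseGet(() -> createDefaultSubnet(ec2, region));  // supplier must yield a real Subnet

// instead of a null check after findFirst(), fail inside the stream pipeline
final Vpc vpc = ec2.describeVpcs().getVpcs().stream()
        .filter(Vpc::isDefault)
        .findFirst()
        .orElseGet(() -> die("getSubnetsByRegion: no default VPC found"));
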
kris commented 4 years ago
Collaborator

No new comments on commit 828cdcf736 and no comments on commit d97c7641bc

Continuing CR with next commit 1900999670

  • Again, some backup-restore fixes were reverted - tags: algo_related was removed from the Allow MITM private port task in bubble-server/src/main/resources/ansible/roles/mitmproxy/tasks/route.yml, so we should definitely try this out with backup and restore before releasing it.
  • Are you updating files like bubble-server/src/main/resources/packer/roles/algo/files/config.cfg.hbs or is this a part of algo version update? Is there need to upgrade ubuntu version to 20.04 there for lightsail and openstack cloud_provider-s also?
  • Good thing I just ran through those charon and dnscrypt-proxy setup files at the initial code review here, as they are removed now :)

CR on currently existing commits done.

jonathan commented 4 years ago
Owner

Awesome feedback. I’ve made a bunch of changes and will have one last commit coming, then I’ll merge.

jonathan changed title from WIP: (blocked by https://git.bubblev.org/bubblev/bubble/pulls/4 ) cobbzilla/introduce_packer to Introduce packer support 4 years ago
jonathan referenced this issue from a commit 4 years ago
Introduce packer support (#18) cleanups and fixes, packer is ready to roll add errorApi endpoint initialize mitmproxy dependencies packer deployments finally working fix virtualenv call for ubuntu 20.04 WIP. update to ubuntu 20.04. fixing algo installation WIP. packer fixes remove unused constant WIP. packer basics working for ec2 Merge branch 'master' of git.bubblev.org:bubblev/bubble into cobbzilla/introduce_packer remove automation dir, all moved to resources log pg autovacuum if longer than 250ms touch install marker for algo set mitmproxy as owner of all mitmproxy files add hostname to packer image name avoid closing progress meter prematurely WIP. parallelize node startup, fix packer bugs clarify docs add missing vars, algo tweaks add missing vars, update algo hash WIP. improving algo/mitmproxy packer stuff install packer for sage, call packer from proper location Use compute driver to get regions improve comments wait longer before polling new vultr server, avoid spurious ok status unquote simple filenames WIP. Use packer key, no more instance ssh key. Change API installation. Simplify packer/ansible. rename bubble_finalizer to just finalizer, remove default_roles filter servers/images based on installType for now, consider packer image OK if bubble version matches add algo/mitm roles to packer. add installType to BubbleNode fix NODE_ROLES file templatize packer file and playbook, use same template for sage and node WIP: refactor addAllRegions WIP: do not re-create identical images WIP: packer build for vultr now working Merge branch 'master' of git.bubblev.org:bubblev/bubble into cobbzilla/introduce_packer WIP. working on vultr packer builds WIP: packer image creation working for digitalocean WIP: packer basics working for digitalocean add packer endpoints, introduce packer support to cloud compute drivers remove roles endpoints AnsibleRole is no longer an model entity. Introduce Packer. Merge branch 'sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService' of git.bubblev.org:bubblev/bubble into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Merge branch 'master' of git.bubblev.org:bubblev/bubble into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Merge branch 'master' into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Merge branch 'master' into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Merge branch 'master' into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Merge branch 'master' into sfedoriv/APIAddSupportForAmazonEC2ComputeCloudService Add instance count to script Co-authored-by: Jonathan Cobb <jonathan@kyuss.org> Co-authored-by: Svitlana <sfedoriv@itekako.com> Reviewed-on: https://git.bubblev.org/bubblev/bubble/pulls/18
jonathan closed this pull request 4 years ago
jonathan deleted branch cobbzilla/introduce_packer 4 years ago

The pull request has been merged as 6bfd3be018.