LAB navigation

Each tab keeps one lane legible: control plane, workload, modern workloads, cloud terminals, or the microservice fabric.

Public site on Vercel, private execution through LAB lanes.
Read-only until sign-in.

AWS command lane

Keep the AWS lane focused on the live public-cloud proof: inspect EC2, disks, and Systems Manager activity from a server-side route without leaking credentials to the browser.

AWS
live route + STS + SSM
Numbered workflow junctions · live command highlights the active path · Figma ref

Multi-cloud control plumbing


AWS live workflow

Expand to see the phase-by-phase operator sequence for this tab.

6 phases
phase 01: Create tiny EC2 with one-hour TTL (live)

Bring up the smallest practical demo host, tag it for the lab, and stamp a one-hour self-destruct deadline.

4 cmds
  1. 01
    Print the exact policy statement required for LAB to read the public Amazon Linux 2023 AMI parameter before host creation can succeed.
    selected command
    lab aws iam fix create-db-host
    raw step 01
    Action needed on assumed role: ssm:GetParameter
    raw step 02
    Resource: arn:aws:ssm:us-east-1::parameter/aws/service/ami-amazon-linux-latest/*
  2. 02
    LAB resolves the latest Amazon Linux 2023 AMI, launches a t3.nano, and either arms a real one-time cleanup schedule or clearly reports that only TTL tags were stamped.
    selected command
    aws ssm get-parameter --name /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64 --query Parameter.Value --output text
    raw step 01
    aws ec2 run-instances --image-id <ami-id> --instance-type t3.nano --tag-specifications ResourceType=instance,... ResourceType=volume,... --credit-specification CpuCredits=standard --metadata-options HttpTokens=required
    raw step 02
    aws ec2 create-tags --resources <instance-id> --tags Key=delete-after,Value=<utc-plus-1-hour> Key=lab-ttl-policy,Value=terminate
    raw step 03
    aws scheduler create-schedule --name lab-host-ttl-<instance-id> --schedule-expression at(<utc-plus-1-hour>) --target Arn=arn:aws:scheduler:::aws-sdk:ec2:terminateInstances,RoleArn=<ttl-scheduler-role>,Input='{"InstanceIds":["<instance-id>"]}'
    raw step 04
    Dependency: true TTL enforcement requires AWS_TTL_SCHEDULER_ROLE_ARN.
  3. 03
    Verify the instance, TTL tag, private IP, and attached devices through LAB before storage prep begins.
    selected command
    lab aws verify db-host
    raw step 01
    aws ec2 describe-instances --instance-ids <instance-id> --query Reservations[].Instances[].{Id:InstanceId,State:State.Name,DeleteAfter:Tags[?Key=='delete-after']|[0].Value}
  4. 04
    List every TTL-tagged DB host and the tagged data volume so you can spot cost leakage and confirm whether real scheduler cleanup was armed.
    selected command
    lab aws inspect ttl resources
    raw step 01
    aws ec2 describe-instances --filters Name=tag:Name,Values=lab-db-aws-ttl01
    raw step 02
    aws ec2 describe-volumes --filters Name=tag:Name,Values=lab-db-aws-data01
    raw step 03
    aws scheduler get-schedule --group-name default --name lab-host-ttl-<instance-id>
    raw step 04
    aws scheduler get-schedule --group-name default --name lab-volume-ttl-<volume-id>
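The TTL plumbing above turns on a single `<utc-plus-1-hour>` timestamp shared by the `delete-after` tag and the scheduler's `at()` expression. A minimal local sketch of how that value can be derived and checked, assuming GNU `date` (BSD/macOS `date` takes `-v+1H` instead) and a bash shell:

```shell
# Derive the one-hour deadline in the ISO-8601 UTC form that both the
# delete-after tag and the EventBridge Scheduler at() expression expect.
deadline="$(date -u -d '+1 hour' '+%Y-%m-%dT%H:%M:%S')"
echo "schedule-expression: at(${deadline})"

# ISO-8601 UTC timestamps sort lexicographically, so a plain string
# comparison is enough to tell whether a host is past its deadline.
now="$(date -u '+%Y-%m-%dT%H:%M:%S')"
if [[ "$now" > "$deadline" ]]; then
  echo "host expired: past delete-after"
else
  echo "host within TTL"
fi
```

The same string comparison works against the `DeleteAfter` value surfaced by the verify query in step 03.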
phase 02: Provision and attach storage (live)

Create a small dedicated data disk so the future PostgreSQL import is isolated from the root volume.

4 cmds
  1. 01
    LAB reads the tiny host placement, then creates a 12 GiB gp3 volume in the matching AZ, tags it as the AWS migration data disk, and arms delayed cleanup when scheduler enforcement is configured.
    selected command
    aws ec2 describe-instances --instance-ids <instance-id> --query Reservations[].Instances[].Placement.AvailabilityZone --output text
    raw step 01
    aws ec2 create-volume --availability-zone <az> --size 12 --volume-type gp3 --tag-specifications ResourceType=volume,...
    raw step 02
    aws scheduler create-schedule --name lab-volume-ttl-<volume-id> --schedule-expression at(<utc-plus-75-minutes>) --target Arn=arn:aws:scheduler:::aws-sdk:ec2:deleteVolume,RoleArn=<ttl-scheduler-role>,Input='{"VolumeId":"<volume-id>"}'
  2. 02
    Attach the tagged data volume to the tiny EC2 host at /dev/sdf so PostgreSQL can land on dedicated storage.
    selected command
lab aws attach db-volume
    raw step 01
    aws ec2 attach-volume --volume-id <volume-id> --instance-id <instance-id> --device /dev/sdf
  3. 03
    If the volume is still attached to an older TTL host, detach it cleanly first so the current migration target can claim it.
    selected command
    lab aws detach stale db-volume
    raw step 01
    aws ec2 detach-volume --volume-id <volume-id> --instance-id <older-instance-id>
    raw step 02
    aws ec2 describe-volumes --volume-ids <volume-id>
  4. 04
    Confirm the dedicated data disk exists and is attached before guest prep begins.
    selected command
    lab aws inspect db-volume
    raw step 01
    aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=<instance-id> --query Volumes[].{Id:VolumeId,Device:Attachments[0].Device,Size:Size,State:State}
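A detached EBS volume passes through `detaching` before it reports `available`, and attaching too early fails. A sketch of the wait loop between steps 03 and 02, with a stand-in transition where real code would re-run `aws ec2 describe-volumes --volume-ids <volume-id> --query Volumes[0].State`:

```shell
# Poll until the volume leaves "detaching"; cap attempts so a stuck
# detach surfaces instead of hanging the workflow.
polls=0
state=detaching
while [ "$state" != "available" ] && [ "$polls" -lt 10 ]; do
  polls=$((polls + 1))
  # Real code would `sleep 5` here and refresh state from describe-volumes;
  # this stand-in flips to available on the third poll.
  if [ "$polls" -ge 3 ]; then state=available; fi
done
echo "volume state=$state after $polls polls"
# → volume state=available after 3 polls
```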
phase 03: Prepare the VM for the migrated database (live)

Use SSM to partition, mount, and install PostgreSQL tooling without opening SSH in the browser.

5 cmds
  1. 01
    Confirm whether the current EC2 host actually has an attached instance profile, which is the prerequisite for SSM registration to become real.
    selected command
    lab aws inspect instance-profile
    raw step 01
    aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=<instance-id>
    raw step 02
    Dependency: AWS_EC2_INSTANCE_PROFILE_NAME or AWS_EC2_INSTANCE_PROFILE_ARN must be configured for LAB launches, or a profile must be attached outside LAB.
  2. 02
    Confirm the new EC2 VM is registered with Systems Manager before guest validation starts.
    selected command
    lab aws verify ssm
    raw step 01
    aws ssm describe-instance-information --filters Key=InstanceIds,Values=<instance-id> --query InstanceInformationList[].PingStatus
  3. 03
    Queue the SSM preparation job that installs PostgreSQL, initializes the service, and gets the host ready for the incoming migration workflow.
    selected command
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=tag:Name,Values=lab-db-aws-ttl01 --parameters commands='sudo dnf install -y postgresql15-server postgresql15','sudo /usr/bin/postgresql-setup --initdb || true','sudo systemctl enable postgresql','sudo systemctl restart postgresql'
  4. 04
    Queue a read-only validation pass that checks PostgreSQL, block devices, and mount points before any migration staging begins.
    selected command
    lab aws validate db-host
    raw step 01
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=tag:Name,Values=lab-db-aws-ttl01 --parameters commands='sudo systemctl is-active postgresql','psql --version','lsblk -o NAME,SIZE,TYPE,MOUNTPOINT','findmnt /var/lib/pgsql || findmnt /var/lib/postgresql || true'
    raw step 02
    Dependency: Systems Manager registration must be online before this validation can succeed.
  5. 05
    Run one aggregated readiness check that reports host state, volume state, SSM registration, and whether the staging bucket, secret, and source export path are configured.
    selected command
    lab aws verify migration-ready
    raw step 01
    aws ec2 describe-instances --filters Name=tag:Name,Values=lab-db-aws-ttl01
    raw step 02
    aws ec2 describe-volumes --filters Name=tag:Name,Values=lab-db-aws-data01
    raw step 03
    aws ssm describe-instance-information --filters Key=tag:Name,Values=lab-db-aws-ttl01
    raw step 04
    Expected config keys: AWS_MIGRATION_STAGING_BUCKET, AWS_MIGRATION_SECRET_ARN, LAB_DB_EXPORT_PATH
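A hypothetical local approximation of the config portion of that readiness gate, checking only that the three expected keys are present in the environment (the real verb also inspects host, volume, and SSM state):

```shell
# Collect any expected config key that is unset or empty; the gate is
# "ready" only when none are missing.
missing=""
for key in AWS_MIGRATION_STAGING_BUCKET AWS_MIGRATION_SECRET_ARN LAB_DB_EXPORT_PATH; do
  # Indirect lookup: resolve the value of the variable named by $key.
  eval "val=\${$key:-}"
  [ -n "$val" ] || missing="$missing $key"
done
if [ -z "$missing" ]; then
  echo "config gate: ready"
else
  echo "config gate: blocked, missing:$missing"
fi
```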
phase 04: Migrate database from VCF to AWS (staged)

Export from the VCF PostgreSQL VM, import into the AWS target, then validate the row count and service health.

2 cmds
  1. 01
    Read the exact prerequisites for a real transfer: host, data volume, staging bucket, secret material, and a repeatable VCF dump artifact path.
    selected command
    lab aws stage migration placeholder
    raw step 01
    Prerequisite check: lab aws verify migration-ready
    raw step 02
    lab db export --source vcf --format custom --output /tmp/labapp-vcf.dump
    raw step 03
    aws s3 cp /tmp/labapp-vcf.dump s3://<staging-bucket>/labapp-vcf.dump
    raw step 04
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='aws s3 cp s3://<staging-bucket>/labapp-vcf.dump /tmp/labapp-vcf.dump'
    raw step 05
    Dependency: the transfer remains blocked until the staging bucket, secret ARN, and export path are configured.
  2. 02
    See the exact raw transfer order LAB will follow once host, volume, SSM, staging bucket, secret, and export path are all ready.
    selected command
    lab aws migration status
    raw step 01
    lab aws verify migration-ready
    raw step 02
    lab aws stage migration placeholder
    raw step 03
    lab db export --target-file /tmp/labapp-vcf.dump
    raw step 04
    pg_dump -Fc -f /tmp/labapp-vcf.dump labapp
    raw step 05
    aws s3 cp /tmp/labapp-vcf.dump s3://<staging-bucket>/labapp-vcf.dump
    raw step 06
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='sudo -u postgres psql -c "CREATE ROLE labapp WITH LOGIN PASSWORD ''<db-password>'';"','sudo -u postgres createdb -O labapp labapp'
    raw step 07
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='sudo -u postgres pg_restore -d labapp /tmp/labapp-vcf.dump'
    raw step 08
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='sudo -u postgres psql -d labapp -c "select count(*) from operator_notes;"'
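One way to harden the S3 leg of that sequence is to checksum the dump before upload and compare after download. A sketch with a temp file standing in for `/tmp/labapp-vcf.dump` and a local `cp` standing in for the two `aws s3 cp` hops:

```shell
# Checksum the artifact on the source side, move it, checksum again on
# the target side, and only proceed with pg_restore when they match.
src="$(mktemp)"; dst="$(mktemp)"
printf 'stand-in dump bytes' > "$src"
cp "$src" "$dst"   # stands in for: aws s3 cp up, then aws s3 cp down
a="$(sha256sum "$src" | cut -d' ' -f1)"
b="$(sha256sum "$dst" | cut -d' ' -f1)"
if [ "$a" = "$b" ]; then
  echo "dump checksum ok"
else
  echo "dump corrupted in transit"
fi
rm -f "$src" "$dst"
# → dump checksum ok
```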
phase 05: Migrate database back from AWS to VCF (staged)

Export from the AWS target, bring the dump back to the VCF database VM, and confirm the fallback path is proven.

2 cmds
  1. 01
    Keep the repatriation order visible without pretending the reverse transfer is armed yet.
    selected command
    lab aws migration status
    raw step 01
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='sudo -u postgres pg_dump -Fc labapp -f /tmp/labapp-aws.dump'
    raw step 02
    aws ssm send-command --document-name AWS-RunShellScript --targets Key=InstanceIds,Values=<instance-id> --parameters commands='aws s3 cp /tmp/labapp-aws.dump s3://<staging-bucket>/labapp-aws.dump'
    raw step 03
    lab db import --target vcf --format custom --input /tmp/labapp-aws.dump
    raw step 04
    Dependency: the reverse path is documentation-only until an AWS export artifact path and return staging workflow are wired.
  2. 02
    Read back the latest rows on the VCF side after the return migration completes.
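That proof reduces to comparing the count read back on VCF with the count captured on AWS before repatriation. A sketch with placeholder values standing in for the two `psql -t -c "select count(*) from operator_notes;"` outputs (the same table the phase 04 validation counts):

```shell
# Placeholder query outputs; psql -t emits the count with leading
# whitespace, so strip it before comparing.
aws_out='   42'   # captured on the AWS host before repatriation
vcf_out='   42'   # read back on the VCF VM after the import
aws_count="$(echo "$aws_out" | tr -d '[:space:]')"
vcf_count="$(echo "$vcf_out" | tr -d '[:space:]')"
if [ "$aws_count" = "$vcf_count" ]; then
  echo "fallback proven: row counts match ($vcf_count)"
else
  echo "fallback failed: aws=$aws_count vcf=$vcf_count"
fi
# → fallback proven: row counts match (42)
```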
phase 06: Future Azure migration placeholder (staged)

Keep the Azure branch visible so the operator flow has a ready-made place for the next migration chapter.

3 cmds
  1. 01
    Start by reading the staged Azure compute inventory.
  2. 02
    Surface the future Azure database landing zone.
  3. 03
    Reserve the exact operator verb that the future Azure workflow will use.
live terminal · aws@lab

AWS terminal

Live route

command line (idle)
aws@lab$
Public visitors can still fill and copy commands. Sign in or create an account to browse with a member session.
operator index · fill command line

AWS

history buffer

Run a command to capture the last five entries here. Each row stays compact until you expand it.