
Developing for AWS differs depending on how far you are from the infrastructure - or how much of the shared responsibility model you defer to Amazon.  For example, developing on EC2 IaaS will be familiar and largely unchanged from traditional Spring-based Java development with Spring Boot jar or Tomcat war deployments.  However, as you climb the stack - past PaaS Elastic Beanstalk, to container-based ECS and EKS on EC2, past managed Kubernetes clusters on EKS Fargate, all the way to fully managed, event-driven Lambda functions - you will need to develop and debug differently.

AWS Architecture Considerations

Determining how far to go up the serverless architecture tiers requires reviewing some tradeoffs first - Docker volume support, for example.

Reference AWS Architecture

Public Subnets can connect to an Internet Gateway - Private Subnets do not

If the subnet has a route to an IG (internet gateway) it is public; a VPC Endpoint can also be used to reach AWS services without one.  Private subnets do not use internet gateways.
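A quick way to verify this for a given subnet is to check its route table for an internet-gateway route. A minimal sketch (the `describe-route-tables` call and route-table id in the usage comment are standard AWS CLI, but the helper itself is hypothetical):

```shell
# classify_subnet: prints "public" if the route-table JSON contains a route
# through an internet gateway (igw-*), otherwise "private".
classify_subnet() {
  if echo "$1" | grep -q '"GatewayId": *"igw-'; then
    echo "public"
  else
    echo "private"
  fi
}

# Usage against a live account (route-table id is a placeholder):
# classify_subnet "$(aws ec2 describe-route-tables --route-table-ids rtb-0abc1234)"
```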

L4 vs L7 ELB for SSL traffic to ECS clusters

An L4 (network) load balancer is better for end-to-end encrypted traffic, as it does not need to decrypt and re-encrypt traffic the way an L7 load balancer does for path-based routing

Persistent Volume Support

In a serverless architecture, support for persistent volumes may not be directly available.  By contrast, when running your own Kubernetes cluster you can create a persistent volume claim backed directly by a drive or by an NFS/EFS share.
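For example, on a self-managed Kubernetes cluster a claim against an EFS/NFS-backed storage class might look like the following sketch (the claim and storage-class names are hypothetical, and assume an EFS CSI driver is installed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim              # hypothetical claim name
spec:
  accessModes: [ReadWriteMany] # NFS/EFS supports multiple writers
  storageClassName: efs-sc     # assumes an EFS storage class is defined
  resources:
    requests:
      storage: 5Gi
```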

K8S and the AWS VPC CNI

The following tiers are ordered from minimum developer responsibility (Lambda functions) to maximum developer responsibility (EC2 VMs)

AWS Serverless Development

In these two architectural patterns the notion of a server (EC2 based) is completely abstracted away.  You deploy individual function definitions (Lambda) or microservice tasks - Docker-image-based wrappers of the traditional Kubernetes pod (EKS) or Docker definition (ECS).

FaaS - Serverless Function as a Service Development via AWS Lambda

Lambda Quickstart

AWS SAM | AWS Event Driven Architecture#DI2:InvestigateDynamicwebsiteviaS3andLambda | 

CaaS - Serverless Containers as a Service Development via AWS Fargate

EKS Fargate

ECS Fargate

AWS Server based Development

These methods of deployment either deploy directly to EC2 instances, or indirectly to EC2 via your own managed ECS or EKS (Kubernetes) cluster.

CMaaS - Container Management as a Service Development via AWS ECS/EKS

Native EKS

Native ECS

CloudFormation Designer

AWS Artifacts for ECS EC2 based deployment


Network ACL

ECS Container Instance

ECS EC2 Container Instance
upgrade

ECS Service

ECS Task

Hybrid CMaaS using AWS EKS and on-premises Kubernetes cluster VMs

PaaS - Platform as a Service via AWS Elastic Beanstalk

IaaS - Infrastructure as a Service with AWS EC2

AWS DevOps

VPC with Public and Private Subnets


Create a 2 subnet Public/Private VPC with NAT Gateway and Bastion

If you want to be able to communicate from the bastion to an instance in the private VPC subnet - you can either add all the instances to the bastion's security group - or open all traffic on the 0.0.0.0/0 (or ::/0 for IPv6) CIDR, which is not recommended
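A tighter alternative to opening a wide CIDR is to allow SSH into the private instances only from the bastion's security group. A hedged sketch (the security-group ids are placeholders; the leading echo makes it a dry run):

```shell
# allow_ssh_from_bastion: print the CLI command that permits SSH (tcp/22)
# into the private subnet's security group, sourced only from the bastion's
# security group rather than from an open CIDR.
allow_ssh_from_bastion() {
  local private_sg="$1" bastion_sg="$2"
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$private_sg" \
    --protocol tcp --port 22 \
    --source-group "$bastion_sg"
}

# Remove the leading 'echo' to actually apply the rule:
# allow_ssh_from_bastion sg-private1234 sg-bastion5678
```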

20191213 see AWS ECS E2E Architecture

Verify private instance initiated web traffic

We need to verify that instances in the private subnet can reach github or any other public repos.

# better to tunnel - but for now scp your key to the bastion
$ scp -i ~/.ssh/obrien_systems*.pem ~/.ssh/obrien_system*.pem ubuntu@bastion*

# ssh into the bastion
$ ssh -i ~/.ssh/obrien_systems*.pem ubuntu@bastion-*
ubuntu@ip-10-0-0-129:~$ sudo chmod 400 obrien*.pem
ubuntu@ip-10-0-0-129:~$ sudo cp obrien_*.pem ~/.ssh
ubuntu@ip-10-0-0-129:~$ sudo chown  ubuntu:ubuntu ~/.ssh/obrien*.pem
# test connectivity
ubuntu@ip-10-0-0-129:~$ curl

# ssh from the bastion into a private test instance
ubuntu@ip-10-0-0-129:~$ ssh -i ~/.ssh/obrien_....5.pem ubuntu@

# initiate web traffic
ubuntu@ip-10-0-1-110:~$ curl

# (HTML response truncated - the private instance can reach the public web)

Use an SSH tunnel to connect to the private EC2 from a Bastion

Use SSH Key Forwarding to connect to the private EC2 from a Bastion

Add the following to your ~/.ssh/config

Then just connect normally (After doing an ssh-add on the key) - assuming you are using the same key to access the bastion in the public subnet and the instance in the private subnet.
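The ~/.ssh/config block referenced above is not reproduced here; a typical ProxyJump setup matching this description looks like the following (host names and IPs are placeholders):

```
# ~/.ssh/config
Host bastion
    HostName ec2-x-x-x-x.compute-1.amazonaws.com   # public DNS of the bastion
    User ubuntu
    ForwardAgent yes    # with ssh-add done locally, no key copy to the bastion

Host private-ec2
    HostName 10.0.1.110                            # private-subnet instance
    User ubuntu
    ProxyJump bastion
```

With this in place, `ssh private-ec2` hops through the bastion transparently.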

obrienlabs:infrastructure $ ssh
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1051-aws x86_64)
ubuntu@ip-10-0-0-121:~$ ssh ubuntu@
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)

AWS Cloud Native Services


AWS2 is in preview

curl "" -o ""
sudo ./aws/install
biometric:install $ aws2 --version
aws-cli/2.0.0dev1 Python/3.7.4 Darwin/19.2.0 botocore/2.0.0dev1

# upgrade aws cli v2, redo above curl and unzip - but add --update to the ./aws/install
biometric:install $ sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
biometric:install $ aws2 --version
aws-cli/2.0.0dev2 Python/3.7.4 Darwin/19.2.0 botocore/2.0.0dev1


see Developer Guide#OSX


To get debug/trace logs run with -Ddebug=true

AWS SAM - Serverless Application Model
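The SAM CLI workflow generally follows the same few steps; this is a hedged sketch (the runtime and project name are assumptions), with each step echoed rather than executed:

```shell
# sam_workflow: print the typical SAM CLI steps from scaffold to deploy.
sam_workflow() {
  echo "sam init --runtime java11 --name biometric-lambda"  # scaffold a project
  echo "sam build"                                          # compile and stage artifacts
  echo "sam local invoke"                                   # run the function in a local container
  echo "sam deploy --guided"                                # package and deploy via CloudFormation
}
sam_workflow
```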







Installing OSX AWS CLI



# specific to OSX
$ curl "" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
installer: Installing at base path /
$ aws --version
aws-cli/2.1.29 Python/3.8.8 Darwin/19.6.0 exe/x86_64 prompt/off
obrienlabs:biometric.web.docker $ aws configure
AWS Access Key ID [None]: A..PA
AWS Secret Access Key [None]: Zi..Pa
Default region name [None]: us-east-1 
Default output format [None]: json
obrienlabs:biometric.web.docker $ aws s3 ls
2019-06-28 17:02:12 config-bucket-2...
2019-05-31 14:44:52 ...-public

Installing Linux AWS CLI

# specific to Linux
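The Linux steps mirror the OSX ones above. A sketch using the official Linux x86_64 bundle URL, wrapped in functions so nothing runs until explicitly invoked:

```shell
# awscli_v2_linux_url: the documented download location for the
# AWS CLI v2 Linux x86_64 bundle.
awscli_v2_linux_url() {
  echo "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
}

# install_awscli_v2: download, unzip and install (requires sudo and network).
install_awscli_v2() {
  curl "$(awscli_v2_linux_url)" -o "awscliv2.zip"
  unzip -q awscliv2.zip
  sudo ./aws/install    # add --update to upgrade an existing install
  aws --version
}

# install_awscli_v2    # uncomment to run
```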

Installing Windows AWS CLI

# specific to windows
# python can be installed either from an msi or using the web installer at
mfobrien@biometricvm MINGW64 ~
$ python --version
Python 3.7.4

mfobrien@biometricvm MINGW64 ~
$ pip --version
pip 19.0.3 from c:\users\mfobrien\appdata\local\programs\python\python37\lib\site-packages\pip (python 3.7)

# check versions
@biometrics MINGW64 ~
$ python --version
Python 2.7.10
@biometrics MINGW64 ~
$ pip --version
pip 7.0.1 from C:\opt\Python27\lib\site-packages (python 2.7)

# install AWS CLI
$ pip install awscli
Collecting awscli
  Downloading (2.0MB)
    100% |████████████████████████████████| 2.0MB 3.4MB/s

# optional
python -m pip install --upgrade pip

$ aws --version
aws-cli/1.16.230 Python/3.7.4 Windows/10 botocore/1.12.220
# copy a credentials file containing the appropriate aws_secret_access_key and aws_access_key_id
@biometrics MINGW64 ~
$ cp ~/.aws/cred_obriensystems ~/.aws/credentials

$ aws s3 ls
2019-06-28 17:02:12 config-bucket-2493099999

on some systems running self signed certificates - disable SSL checking if you get
SSL validation failed for [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)

$ aws s3 ls --no-verify-ssl
2019-06-28 17:02:12 config-bucket-249309999999

Cloning CodeCommit repos using temporary IAM credentials

Install GIT


install python (will install pip)

relaunch MINGW64 (comes with Git)

set keys
micha@carbon MINGW64 /c/wse_codecommit

check versions

micha@carbon MINGW64 ~
$ python --version
Python 3.10.0

micha@carbon MINGW64 ~
$ pip --version
pip 21.2.3 from C:\opt\python3\lib\site-packages\pip (python 3.10)


Step 2: Set up the AWS CLI Credential Helper
Set up your connection to AWS CodeCommit repositories using the credential helper included in the AWS CLI. This is the only connection method for AWS CodeCommit repositories that does not require an IAM user, so it is the only method that supports root access, federated access, and temporary credentials.

git config --global credential.helper "!aws codecommit credential-helper $@"
git config --global credential.UseHttpPath true

micha@carbon MINGW64 ~
$ cat ~/.gitconfig
        helper = "aws codecommit help codecommit credential-helper "
        UseHttpPath = true

Fix it to be:
micha@carbon MINGW64 /c/wse_codecommit
$ cat ~/.gitconfig
        helper = !aws codecommit credential-helper $@
        UseHttpPath = true

get url from

micha@carbon MINGW64 /c/wse_codecommit
$ git clone                                                              

Cloning into 'uipath'...
fatal: User cancelled the authentication prompt.
fatal: Failed to write item to store. [0x6c6]
fatal: The array bounds are invalid
remote: Counting objects: 304, done.
Receiving objects: 100% (304/304), 3.77 MiB | 4.16 MiB/s, done.
Resolving deltas: 100% (179/179), done.

micha@carbon MINGW64 /c/wse_codecommit
$ ls uipath/
cloudformation/  doc/  pom.xml  rpa_bastion.pem

Connecting to EC2 instances via SSM

Create a Bastion/Jump box VM for CLI access

Don't run CLI commands directly from one of your PCs - it is better to set up an account on a bastion VM, ideally inside a VPC.

Follow the install guide - however, depending on the VM you use (I am using a T3a.micro) under Ubuntu 18.04, python3 will be missing the distutils package required by pip.  Run the following.

A T3a.micro is 13% cheaper than a T3 or T2 and runs $48/year

Installing the AWS CLI will also enable Terraform - see the Terraform Developer Guide.

# windows
PS C:\Windows\system32> ssh -i some.pem
# mac

ubuntu@ip-172-31-94-184:~$ python -version
Command 'python' not found, but can be installed with:
You also have python3 installed, you can run 'python3' instead.
ubuntu@ip-172-31-94-184:~$ python3 get-pip.py --user
ModuleNotFoundError: No module named 'distutils.util'

ubuntu@ip-172-31-94-184:~$ sudo apt-get install python3-distutils
ubuntu@ip-172-31-94-184:~$ python3 get-pip.py --user
Successfully installed pip-19.1.1 setuptools-41.0.1 wheel-0.33.4

ubuntu@ip-172-31-94-184:~$ pip3 --version
pip 19.1.1 from /home/ubuntu/.local/lib/python3.6/site-packages/pip (python 3.6)
ubuntu@ip-172-31-94-184:~$ pip --version
pip 19.1.1 from /home/ubuntu/.local/lib/python3.6/site-packages/pip (python 3.6)

ubuntu@ip-172-31-94-184:~$  pip3 install awscli --upgrade --user
Successfully installed awscli-1.16.172 botocore-1.12.162 docutils-0.14 jmespath-0.9.4 python-dateutil-2.8.0 rsa-3.4.2 s3transfer-0.2.1

ubuntu@ip-172-31-94-184:~$ aws --version
aws-cli/1.16.172 Python/3.6.7 Linux/4.15.0-1039-aws botocore/1.12.162
ubuntu@ip-172-31-94-184:~$ pip3 install awscli --upgrade --user

# configure
ubuntu@ip-172-31-94-184:~$ aws configure
AWS Access Key ID [None]: B***
AWS Secret Access Key [None]: C****
Default region name [None]: us-east-1
Default output format [None]: json

ubuntu@ip-172-31-94-184:~$ aws s3 ls
2019-05-16 18:32:40

Create an EC2 instance for Kubernetes RKE installation and EFS share

Allocate an EIP static public IP (one-time)

$ aws ec2 allocate-address
{
    "PublicIp": "35.172..",
    "Domain": "vpc",
    "AllocationId": "eipalloc-2f743..."
}

Create a Route53 Record Set - Type A (one-time)

$ cat route53-a-record-change-set.json 
{"Comment": "comment","Changes": [
    { "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "A", "TTL": 300,
        "ResourceRecords": [
          { "Value": "35.172.36.." }]}}]}
$ aws route53 change-resource-record-sets --hosted-zone-id Z...7 --change-batch file://route53-a-record-change-set.json
{
    "ChangeInfo": {
        "Status": "PENDING",
        "Comment": "comment",
        "SubmittedAt": "2018-02-17T15:02:46.512Z",
        "Id": "/change/C2QUNYTDVF453x"
    }
}
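The change starts out in PENDING status; a small hypothetical helper can poll `route53 get-change` until it reaches INSYNC (requires live AWS credentials, so it is defined here but not invoked):

```shell
# wait_for_change: poll a Route53 change until it leaves PENDING.
wait_for_change() {
  local change_id="$1"    # e.g. /change/C2QUNYTDVF453x from the change-batch output
  while true; do
    status=$(aws route53 get-change --id "$change_id" \
      --query 'ChangeInfo.Status' --output text)
    [ "$status" = "INSYNC" ] && break
    sleep 10
  done
}

# wait_for_change /change/C2QUNYTDVF453x    # uncomment to run
```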

$ dig
; <<>> DiG 9.9.7-P3 <<>>
;; ANSWER SECTION:
	300	IN	A	35.172.36..
;; AUTHORITY SECTION:
	172800	IN	NS

Request a spot EC2 Instance

# request the usually cheapest $0.13 spot 64G EBS instance at AWS
aws ec2 request-spot-instances --spot-price "0.25" --instance-count 1 --type "one-time" --launch-specification file://aws_ec2_spot_cli.json

# don't pass in the following - it will be generated for the EBS volume
            "SnapshotId": "snap-0cfc17b071e696816"
launch specification JSON:
{
  "ImageId": "ami-c0ddd64ba",
  "InstanceType": "r4.2xlarge",
  "KeyName": "obrien_systems_aws_201",
  "BlockDeviceMappings": [
    {
      "DeviceName": "/dev/sda1",
      "Ebs": {
        "DeleteOnTermination": true,
        "VolumeType": "gp2",
        "VolumeSize": 120
      }
    }
  ],
  "SecurityGroupIds": [ "s2" ]
}
# results
{    "SpotInstanceRequests": [{   "Status": {
                "Message": "Your Spot request has been submitted for review, and is pending evaluation.", 
                "Code": "pending-evaluation", 

Get EC2 instanceId after creation

aws ec2 describe-spot-instance-requests  --spot-instance-request-id sir-1tyr5etg
            "InstanceId": "i-02a653592cb748e2x",

Associate EIP with EC2 Instance

This can be done separately, as long as it happens within the first 30 seconds of initialization - before Rancher starts on the instance.

$ aws ec2 associate-address --instance-id i-02a653592cb748e2x --allocation-id eipalloc-375c1d0x
{    "AssociationId": "eipassoc-a4b5a29x"}

Reboot EC2 Instance to apply DNS change to Rancher in AMI

$ aws ec2 reboot-instances --instance-ids i-02a653592cb748e2x

EFS share for shared NFS

From the EFS wizard:

Setting up your EC2 instance

  1. Using the Amazon EC2 console, associate your EC2 instance with a VPC security group that enables access to your mount target. For example, if you assigned the "default" security group to your mount target, you should assign the "default" security group to your EC2 instance. Learn more
  2. Open an SSH client and connect to your EC2 instance. (Find out how to connect)

  3. If you're not using the EFS mount helper, install the NFS client on your EC2 instance:
    • On an Ubuntu instance:
      sudo apt-get install nfs-common

Mounting your file system

  1. Open an SSH client and connect to your EC2 instance. (Find out how to connect)
  2. Create a new directory on your EC2 instance, such as "efs".
    • sudo mkdir efs
  3. Mount your file system. If you require encryption of data in transit, use the EFS mount helper and the TLS mount option. Mounting considerations
    • Using the EFS mount helper:
      sudo mount -t efs fs-43b2763a:/ efs
    • Using the EFS mount helper and encryption of data in transit:
      sudo mount -t efs -o tls fs-43b2763a:/ efs
    • Using the NFS client:
      sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-43b2763a.efs.us-east-1.amazonaws.com:/ efs

If you are unable to connect, see our troubleshooting documentation.
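To survive reboots, the same NFS mount can go in /etc/fstab - a sketch reusing the file-system id above and assuming region us-east-1 and mount point /efs:

```
# /etc/fstab entry (file-system id from above; region is an assumption)
fs-43b2763a.efs.us-east-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
```

The `_netdev` option delays the mount until networking is up.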

EFS/NFS Provisioning Script for AWS


ECS - Elastic Container Service - Development


ECS DevOps Architecture

see Architecture#AWSECSEC2basedArchitecture

ECS Networking

ECS Local Container Networking

ECS Task Definition networkMode
ECS task definition networkMode is bridge

ECS service discovery is available via route53

ECS task definition networkMode is awsvpc


Set hostPort to 0 for the default bridge networkMode, and set it equal to the container port (e.g. HTTP_PORT) for host/awsvpc - or you will get

An error occurred (ClientException) when calling the RegisterTaskDefinition operation: When networkMode=awsvpc, the host ports and container ports in port mappings must match.

The fix is to use the same port for hostPort and containerPort.
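The difference shows up only in the portMappings fragment of the task definition (container port 8080 is an assumption). Bridge mode with a dynamic host port:

```
"portMappings": [ { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" } ]
```

host/awsvpc mode, where the ports must match:

```
"portMappings": [ { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" } ]
```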

ECS External REST API Networking

ECS Local Development Testing

ECR upload

where the . in the docker build command is the current directory path

obrienlabs:biometric.web.docker $ $(aws ecr get-login --no-include-email --region us-east-1)
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

obrienlabs:biometric.web.docker $ docker build -f DockerFile -t ecr .
Sending build context to Docker daemon 50.96MB
Step 1/2 : FROM tomcat:8.0.48-jre8
---> e072422ca96f
Step 2/2 : COPY target/biometric.web-*-SNAPSHOT.war /usr/local/tomcat/webapps/biometric.web.war
---> Using cache
---> 5cd1152a967b
Successfully built 5cd1152a967b
Successfully tagged ecr:latest

obrienlabs:biometric.web.docker $ docker tag ecr:latest
obrienlabs:biometric.web.docker $ docker push
The push refers to repository []
2d64f287a896: Pushed 

latest: digest: sha256:dac09eb8c16fe9d5fb0080479e183acddbd178f77ab60d24fe7072aef0d7d073 size: 3048
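The flow above can be parameterized; note that on AWS CLI v2 the deprecated `get-login` is replaced by `get-login-password`. A hedged sketch (account id, region and repo name are placeholders):

```shell
# ecr_uri: build the registry/repository URI from account, region and repo name.
ecr_uri() {
  local account="$1" region="$2" repo="$3"
  echo "${account}.dkr.ecr.${region}.amazonaws.com/${repo}"
}

# push_to_ecr: login (CLI v2 style), build, tag and push.
push_to_ecr() {
  local uri
  uri="$(ecr_uri "$1" "$2" "$3")"
  aws ecr get-login-password --region "$2" | docker login --username AWS --password-stdin "$uri"
  docker build -f DockerFile -t ecr .
  docker tag ecr:latest "$uri:latest"
  docker push "$uri:latest"
}

# push_to_ecr 123456789012 us-east-1 biometric    # uncomment to run
```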


aws codebuild start-build --project-name name --environment-variables-override "name=BUILD_ID,value=32,type=PLAINTEXT"

#aws ecs register-task-definition --cli-input-json file://biometric.json
# using S3 to retain artifacts
#aws deploy create-deployment --application-name --deployment-config-name CodeDeployDefault.ECSAllAtOnce --deployment-group-name deployment-group --description desc --s3-location "bucket=dev-build,key=dev/$build/appspec.json,bundleType=json"

ECS Tasks

ECS task examples -


Connecting to ECS instances and containers

There are three connection use cases here: standard bastion SSH access, the Docker 18.09 remote Docker CLI, and the Session Manager in AWS Systems Manager

Using the Session Manager in AWS Systems Manager to run docker commands against an ECS container


Cost: free.  Limits: generous - 100 concurrent sessions, 20 minute timeout.

You need to set up the SSM agent first on your EC2 instances -
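Once the agent is registered, opening a shell is a single command - sketched as a dry run (the instance id is a placeholder; remove the echo to start a real session):

```shell
# ssm_session_cmd: print the Session Manager command for a given instance id.
# No bastion or open port 22 is needed for SSM sessions.
ssm_session_cmd() {
  echo aws ssm start-session --target "$1"
}

# ssm_session_cmd i-02a653592cb748e2x
```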

Using the AWS Secrets Manager for RDS credentials

see AWS Secrets Manager

ECS docker task connectivity to Secrets Manager

Roles must be specified for the call to secrets manager to work

Debugging ECS containers locally

ECS NFS share on AWS



follow the VPC CNI plugin -

and 20190121 work with John Lotoski on groups.io

Network Diagram

Provision access to EKS cluster

DynamoDB Development

DynamoDB Use Cases
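A typical starting point is a simple on-demand table; a hedged sketch with hypothetical table and key names, echoed as a dry run:

```shell
# create_table_cmd: print the CLI command for an on-demand (PAY_PER_REQUEST)
# DynamoDB table with a single string hash key named "id".
create_table_cmd() {
  echo aws dynamodb create-table \
    --table-name "$1" \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
}

# Remove the 'echo' to execute:
# create_table_cmd biometric-events
```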