AWS
Launch an instance using EC2
Use a foundation model in the Amazon Bedrock
playground
Set up a cost budget using AWS Budgets
Create a web app using AWS Lambda
Create an Amazon RDS Database
AWS Lumberyard: An Overview
Amazon Web Services (AWS) Lumberyard was a free,
cross-platform game engine developed by Amazon, designed to empower creators
with professional-grade tools for building high-quality games and interactive
experiences. Based on CryEngine technology, Lumberyard was first introduced in
2016 as Amazon’s foray into the highly competitive world of game development
engines. Its creation reflects Amazon’s broader vision to integrate cloud
computing, multiplayer networking, and content creation into one cohesive ecosystem
for developers.
Foundation and Technology
At its core, Lumberyard is built on the
architecture of CryEngine, a renowned graphics engine known for its
high-fidelity visuals and rendering capabilities. Amazon licensed the CryEngine
framework and expanded upon it, layering in AWS cloud services and new features
aimed at both small studios and AAA developers. Lumberyard was designed to
appeal to developers seeking cinematic quality graphics, real-time rendering,
and powerful asset pipelines. It provides a visual scripting system (Script
Canvas), a component entity system, physics simulation, and support for C++
coding, allowing teams with varying expertise to collaborate.
AWS Integration
What sets Lumberyard apart from other engines
like Unity or Unreal is its deep integration with Amazon Web Services.
Developers can seamlessly connect their games to AWS cloud infrastructure for
scalable multiplayer servers, analytics, data storage, and live game
operations. For example, with Amazon GameLift, teams can deploy and manage
multiplayer servers with relative ease, ensuring low-latency experiences for
players worldwide. This makes Lumberyard particularly appealing for developers
who want to build large, online, persistent worlds without the overhead of
building server infrastructure from scratch.
Multiplayer and Networking
Lumberyard includes a sophisticated networking
layer tailored for modern online games. It allows developers to build real-time
multiplayer features, matchmaking, and in-game communications. The engine’s
architecture supports peer-to-peer and client-server models, with AWS cloud
services enhancing scalability. This focus on online connectivity reflects
Amazon’s recognition that multiplayer and social features are central to
today’s gaming ecosystem.
Visual Tools and Developer Features
Lumberyard emphasizes accessibility through tools
like Script Canvas, a node-based visual scripting environment, and an advanced
animation editor known as EMotion FX. These features lower barriers for
non-programmers, enabling artists and designers to contribute directly to
gameplay logic and character development. Lumberyard also supports popular
workflows such as integration with Autodesk Maya, 3ds Max, and Photoshop. Its
rendering pipeline supports cutting-edge effects like physically based
rendering (PBR), dynamic global illumination, and realistic vegetation and
terrain systems.
Community and Open Source Transition
Initially, Lumberyard was distributed as a
proprietary engine, free of royalties and licensing fees, but tied to AWS for
cloud services. Over time, Amazon recognized the benefits of openness and, in
2021, transitioned Lumberyard into an open-source project called the Open 3D
Engine (O3DE) under the Linux Foundation. This shift gave developers more
freedom to customize and contribute to the engine’s evolution while maintaining
AWS as the preferred cloud backend.
Conclusion
AWS Lumberyard represented Amazon’s bold attempt
to merge world-class graphics with cloud-based scalability. By combining
CryEngine’s rendering power, AWS’s cloud services, and user-friendly tools, it
offered a compelling choice for developers seeking to build visually stunning,
connected games. Though now succeeded by the community-driven O3DE,
Lumberyard’s legacy lies in shaping a new era of open, cloud-integrated game
development.
Launch an instance using EC2
Here’s a clear step-by-step explanation of the
process to launch an instance using Amazon EC2 (Elastic Compute Cloud):
Launching an Instance in Amazon EC2
Amazon EC2 provides scalable virtual servers in
the cloud. Launching an instance essentially means creating and running a
virtual machine (VM) in the AWS environment. The process involves several
steps, each allowing you to customize the instance to your needs.
1. Log in to the AWS Management Console
- Go to the AWS Management Console.
- Navigate to EC2 under the “Services” menu.
- Click Launch Instance to begin the setup process.
2. Choose an Amazon Machine Image (AMI)
- An AMI is a preconfigured template for your instance that includes an operating system (Linux, Windows, etc.) and optional applications.
- AWS offers:
  - Quick Start AMIs (common OS templates like Ubuntu, Amazon Linux, Windows Server).
  - AWS Marketplace AMIs (vendor-provided software packages).
  - Custom AMIs you or your organization have created.
3. Select an Instance Type
- Choose the instance type, which defines the hardware of the host computer:
  - vCPU (virtual CPUs)
  - Memory (RAM)
  - Storage type
  - Networking performance
- For example, t2.micro or t3.micro are commonly used free-tier eligible types (see the CLI sketch below).
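If you want to check an instance type’s specs before choosing, a quick CLI sketch (assumes the AWS CLI is configured):

aws ec2 describe-instance-types --instance-types t3.micro \
  --query 'InstanceTypes[0].{vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB}'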
4. Configure Instance Details
- Set the number of instances.
- Choose the network (VPC) and subnet.
- Assign IAM roles (if needed) for secure access to other AWS services.
- Configure advanced features such as shutdown behavior, monitoring (CloudWatch), and tenancy (shared or dedicated hardware).
5. Add Storage
- Specify the size and type of storage (Elastic Block Store – EBS).
- By default, the selected AMI determines the root volume size (e.g., 8 GB).
- You can add additional volumes or configure options like encryption.
6. Add Tags (Optional)
- Tags are key–value pairs to organize and manage resources.
- For example:
  - Key: Name
  - Value: WebServer1
7. Configure Security Group
- A security group acts as a virtual firewall.
- Define inbound and outbound rules to control traffic.
- Example: allow SSH (port 22) for Linux or RDP (port 3389) for Windows, and allow HTTP (port 80) for web traffic.
8. Review and Launch
- Review all configuration details.
- When ready, click Launch.
- You’ll be prompted to select or create a key pair:
  - A key pair consists of a public key (stored by AWS) and a private key (a .pem file you download).
  - This private key is required for secure SSH or RDP access to your instance.
9. Connect to the Instance
- Once the instance status is running, you can connect:
  - Linux instance → use SSH from your terminal:
    ssh -i your-key.pem ec2-user@public-dns-name
  - Windows instance → use RDP with the Administrator password (retrieved from the AWS console using your key pair, or via the CLI as shown below).
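For a Windows instance, the Administrator password can also be fetched from the CLI once it’s available (the instance ID is a placeholder):

aws ec2 get-password-data \
  --instance-id i-0123456789abcdef0 \
  --priv-launch-key my-key.pem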
Summary
Launching an EC2 instance involves:
- Logging into the AWS Console
- Choosing an AMI
- Selecting an instance type
- Configuring details
- Adding storage
- Tagging resources
- Setting up security groups
- Reviewing and launching with a key pair
- Connecting to your running instance
Here’s a practical, copy-pasteable guide to automate EC2 launches with the AWS CLI (no console needed).
0) Prereqs
- AWS CLI v2 installed and aws configure done (access key, secret, default region).
- An IAM user/role with the EC2 permissions used below (at minimum: RunInstances, CreateTags, CreateSecurityGroup, AuthorizeSecurityGroupIngress, CreateKeyPair, etc.).
1) Pick an AMI programmatically (keeps things current)
Use AWS Systems Manager public parameters so you don’t hardcode AMI IDs:

REGION=us-east-1
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64 \
  --query 'Parameters[0].Value' --output text --region $REGION

Save it:

AMI_ID=$(aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64 \
  --query 'Parameters[0].Value' --output text --region $REGION)
2) Create (or reuse) a key pair
KEY_NAME=my-ec2-key
aws ec2 create-key-pair --key-name $KEY_NAME \
  --query 'KeyMaterial' --output text --region $REGION > ${KEY_NAME}.pem
chmod 400 ${KEY_NAME}.pem

(If it exists already, skip and reuse.)
3) Create a minimal security group
VPC_ID=$(aws ec2 describe-vpcs --query 'Vpcs[0].VpcId' --output text --region $REGION)
SG_ID=$(aws ec2 create-security-group \
  --group-name web-sg --description "Web SG" --vpc-id $VPC_ID \
  --query 'GroupId' --output text --region $REGION)

# Allow SSH from your IP (replace 1.2.3.4/32) and HTTP for a demo
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --ip-permissions '[
    {"IpProtocol":"tcp","FromPort":22,"ToPort":22,"IpRanges":[{"CidrIp":"1.2.3.4/32"}]},
    {"IpProtocol":"tcp","FromPort":80,"ToPort":80,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}
  ]' --region $REGION
4) (Optional) User-data to bootstrap the instance
Example that installs a tiny web page on Amazon Linux:

cat > user-data.sh <<'EOF'
#!/bin/bash
dnf -y install nginx
echo "<h1>Hello from AWS CLI</h1>" > /usr/share/nginx/html/index.html
systemctl enable --now nginx
EOF
5) Launch directly with run-instances
Quick one-off:

INSTANCE_TYPE=t3.micro
aws ec2 run-instances \
  --image-id $AMI_ID \
  --instance-type $INSTANCE_TYPE \
  --key-name $KEY_NAME \
  --security-group-ids $SG_ID \
  --user-data file://user-data.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=cli-web-1}]' \
  --count 1 \
  --region $REGION

Add --dry-run first to validate permissions without launching.
Wait until it’s ready and fetch its public IP:

INSTANCE_ID=$(aws ec2 describe-instances \
  --filters Name=tag:Name,Values=cli-web-1 Name=instance-state-name,Values=pending,running \
  --query 'Reservations[0].Instances[0].InstanceId' --output text --region $REGION)

aws ec2 wait instance-running --instance-ids $INSTANCE_ID --region $REGION

PUBLIC_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text --region $REGION)
echo "IP: $PUBLIC_IP"

Test:

curl -I http://$PUBLIC_IP/
ssh -i ${KEY_NAME}.pem ec2-user@$PUBLIC_IP
6) Reusable automation with Launch Templates
Create a template once, then launch as many as you want.

Template (JSON)

cat > lt-data.json <<EOF
{
  "ImageId": "$AMI_ID",
  "InstanceType": "t3.micro",
  "KeyName": "$KEY_NAME",
  "SecurityGroupIds": ["$SG_ID"],
  "UserData": "$(base64 -w0 user-data.sh)"
}
EOF

Create the Launch Template

LT_NAME=web-lt
aws ec2 create-launch-template \
  --launch-template-name $LT_NAME \
  --version-description "v1" \
  --launch-template-data file://lt-data.json \
  --region $REGION

Launch from the template

aws ec2 run-instances \
  --launch-template LaunchTemplateName=$LT_NAME,Version=1 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=cli-web-from-lt}]' \
  --region $REGION

You can evolve the template via new versions (e.g., different AMI, instance type, user-data) and refer to Version='$Latest' (quote it so your shell doesn’t try to expand $Latest).
7) Optional: Spot & scaling
- Spot Instances (cheap, can be interrupted):

aws ec2 run-instances \
  --launch-template LaunchTemplateName=$LT_NAME,Version='$Latest' \
  --instance-market-options 'MarketType=spot' \
  --region $REGION

- Auto Scaling Group: pair the launch template with an ASG to keep N instances running automatically.
8) Clean up

aws ec2 terminate-instances --instance-ids $INSTANCE_ID --region $REGION
# Wait for termination so the security group is no longer in use
aws ec2 wait instance-terminated --instance-ids $INSTANCE_ID --region $REGION
aws ec2 delete-launch-template --launch-template-name $LT_NAME --region $REGION
aws ec2 delete-security-group --group-id $SG_ID --region $REGION
aws ec2 delete-key-pair --key-name $KEY_NAME --region $REGION
rm -f ${KEY_NAME}.pem user-data.sh lt-data.json
Tips
- Use profiles: add --profile myprofile.
- Pin a region consistently to avoid “AMI not found” errors.
- Prefer launch templates for repeatability and ASG integration.
- Always start with --dry-run in production environments.
Use a foundation model in the Amazon Bedrock
playground
Here’s a clear explanation of the process to use
a foundation model in the Amazon Bedrock playground:
Using a Foundation Model in the Amazon Bedrock
Playground
Amazon Bedrock is a fully managed service that
lets you build and scale generative AI applications without needing to manage
infrastructure or train models from scratch. It provides access to multiple foundation
models (FMs) from providers like Anthropic, Cohere, Meta, Mistral, and
Stability AI through an API. The Bedrock playground is an interactive
web-based environment in the AWS Console where you can experiment with these
models before integrating them into your applications.
1. Log in to the AWS Management Console
- Sign in to your AWS account.
- From the services menu, navigate to Amazon Bedrock.
- Open the Bedrock Playground section.
2. Choose a Foundation Model
- The playground lists several foundation models across categories such as:
  - Text generation (chatbots, summarization, Q&A).
  - Embeddings (semantic search, recommendations).
  - Image generation (create visuals from text prompts).
- Select a model provider (e.g., Anthropic Claude, AI21 Jurassic, Cohere, Stability AI).
- Each model has different strengths—some are optimized for reasoning and conversation, others for creative writing or image synthesis.
3. Set Up Your Prompt
- In the prompt editor, type your input (for example, “Write a summary of cloud computing in simple terms”).
- For text models: you can structure prompts as instructions, questions, or even simulated chat dialogues.
- For image models: you provide a descriptive text prompt (e.g., “A futuristic city skyline at sunset in cyberpunk style”).
4. Adjust Parameters
- The playground allows you to tweak generation settings:
  - Temperature – controls creativity vs. precision (higher values = more creative, lower = more deterministic).
  - Maximum tokens – sets the length of the output.
  - Top-p / Top-k – influence randomness and diversity of responses.
- These settings let you experiment with how the model behaves.
5. Run and Review Output
- Click Submit / Generate to run the model.
- The foundation model processes your prompt and displays the output directly in the playground.
- For conversational models, you can continue the dialogue interactively by entering follow-up prompts.
6. Compare Models (Optional)
- The playground supports side-by-side comparisons.
- You can select multiple models and run the same prompt through them to see differences in style, accuracy, and tone.
- This helps you evaluate which foundation model best fits your use case.
7. Save and Export
- You can save successful prompts and outputs for reference.
- The playground often provides code snippets (Python, JavaScript, etc.) showing how to call the same model via the Bedrock API in your applications.
8. Move Toward Deployment
- Once you’re satisfied with model behavior in the playground:
  - You can integrate the chosen foundation model into your app using the Amazon Bedrock API.
  - This involves calling the model programmatically through the Bedrock runtime endpoint and embedding it in your workflows, as in the sketch below.
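For example, the same kind of prompt can be sent from the AWS CLI through the Bedrock runtime. A minimal sketch, assuming CLI v2 and that this example Claude model ID is enabled in your account:

aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-3-haiku-20240307-v1:0 \
  --body '{"anthropic_version":"bedrock-2023-05-31","max_tokens":200,"messages":[{"role":"user","content":"Summarize cloud computing in simple terms."}]}' \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  out.json && cat out.json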
Summary
Using a foundation model in the Amazon Bedrock playground involves:
- Logging into the AWS Console and opening Bedrock.
- Selecting a foundation model (text, embedding, or image).
- Writing prompts in the editor.
- Adjusting parameters like temperature and token limits.
- Generating and reviewing outputs.
- Optionally comparing multiple models.
- Saving prompts and outputs.
- Transitioning from testing to application integration via APIs.
Here’s a guided, click-by-click demo (with text “screenshots”) for
using a foundation model in the Amazon Bedrock Playground.
Step-by-step: Use a model in Amazon
Bedrock Playground
0) Prereqs (one-time)
- You have an AWS account and model
access enabled for at least one Bedrock model (you can request/modify
access in the console). (AWS Documentation)
1) Open the Playground
- In the AWS Console, search “Bedrock”
and open Amazon Bedrock.
- In the left nav, under Playgrounds,
choose Chat/text (for text/chat) or Image (for image
generation). (AWS Documentation)
[Screenshot (text)]
- Left nav: “Home · Model catalog ·
Knowledge bases · Guardrails · Playgrounds ▸ Chat/text
| Image”
- Page header: “Playgrounds” with
tabs “Chat” and “Single prompt.”
2) Pick a foundation model
- At the top of the playground,
open the Model dropdown and select a model (e.g., Claude, Cohere,
Mistral, Meta, Amazon Nova for text/chat; Stability
for images). If you don’t see a model, click Manage model access
and enable it for your account. (AWS Documentation)
[Screenshot (text)]
- Top toolbar: “Model: [Select…] ▾ Mode: [Chat |
Single prompt]”
- Banner hint: “You don’t have
access to this model. Manage model access.”
3) Compose your prompt (Chat/text)
- In Chat mode, type your
instruction (e.g., “Summarize zero-trust networking in 120 words”).
- (Optional) Click Attach to
include an image or document as extra context (multimodal chat).
- Click Generate. Subsequent
turns keep the conversation context. (AWS Documentation)
[Screenshot (text)]
- Prompt box at bottom: “Message
the model…” with a paperclip icon (Attach).
- Right pane: “Response” area
streams model output.
Tip: Switch to Single prompt mode for one-off prompts without chat
history. (AWS Documentation)
4) Tune output with parameters
- In the Response settings/parameters
panel, adjust: Temperature, Max tokens, Top-p, Top-k
(names may vary by model). Then re-run to see differences. (AWS Documentation)
[Screenshot (text)]
- Side panel: “Generation settings”
- Temperature [0.7]
- Max tokens [512]
- Top-p [0.9]
- Top-k [50]
5) Compare models side-by-side
(optional)
- Use Compare to run the same
prompt across multiple models and review outputs in columns. (Note:
speech-to-speech models aren’t supported in compare.) (AWS Documentation)
[Screenshot (text)]
- Toolbar: “Compare ▸ Select models
(up to 3)”
- Grid with columns labeled by
model names; identical prompt shown above each result.
6) Image generation (optional)
- In Playgrounds ▸ Image, enter a descriptive prompt (e.g., “futuristic city at dusk, wide
angle”).
- (Optional) Upload a reference
image to edit or generate variations. Click Generate. (AWS Documentation)
[Screenshot (text)]
- Left: prompt box + advanced
options (resolution, steps, guidance).
- Right: tiled image results with Download
and Variations buttons.
7) After you generate
- Use the playground to iterate
quickly. When satisfied, note that what you do here corresponds to the Bedrock
runtime APIs (e.g., InvokeModel, Converse/streaming). This makes it straightforward to reproduce in code
outside the console. (AWS Documentation)
[Screenshot (text)]
- Info banner: “Running a prompt in
a playground is equivalent to calling the Bedrock runtime APIs.”
- Link: “Learn more about Bedrock
runtime.”
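To reproduce a chat turn outside the console, here is a hedged Converse API sketch (the model ID is an example; any Converse-capable model you have access to works):

aws bedrock-runtime converse \
  --model-id anthropic.claude-3-haiku-20240307-v1:0 \
  --messages '[{"role":"user","content":[{"text":"Summarize zero-trust networking in 120 words"}]}]' \
  --inference-config '{"maxTokens":256,"temperature":0.7}' \
  --region us-east-1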
8) Troubleshooting & tips
- Model not listed? You likely need to request
access for that provider/model. (AWS
Documentation)
- Region mismatch? Ensure you’re in a Bedrock-supported
region (e.g., us-east-1 / us-west-2). (AWS Documentation)
- Costs: You’re billed for inference
usage per model’s pricing; use the playground sparingly when exploring.
(See Bedrock docs/pricing from the service homepage.) (Amazon
Web Services, Inc.)
Quick recap
- Console → Amazon Bedrock →
Playgrounds.
- Choose Chat/text or Image.
- Select a model, write a
prompt, tweak parameters, and Generate.
- (Optional) Compare models.
- Transition to code using the
corresponding runtime APIs. (AWS Documentation)
As a next step, turn this into a one-page printable checklist, or run a mini lab: evaluate the same prompt on three models and record the differences.
Set up a cost budget using AWS Budgets
Here’s a clear step-by-step explanation of the
process to set up a cost budget using AWS Budgets:
Setting Up a Cost Budget in AWS Budgets
AWS Budgets helps you plan and monitor your cloud
spending. By creating a cost budget, you can track how much you’re
spending in AWS against a threshold you define. You’ll also receive alerts (via
email or SNS) when your actual or forecasted costs exceed that threshold.
1. Log in to the AWS Management Console
- Sign in to your AWS account.
- In the Services menu, search for and open AWS Budgets.
2. Start Creating a Budget
- On the AWS Budgets dashboard, click Create budget.
- You’ll be prompted to choose a budget type. Select Cost budget, then click Next.
3. Set Budget Details
- Enter a name for your budget (e.g., “Monthly EC2 Budget”).
- Choose the period for tracking:
  - Monthly (most common)
  - Quarterly or annually
- Define the budget renewal type:
  - Recurring budget (resets each month, quarter, or year).
  - Expiring budget (runs for a specific time frame only).
4. Define Budget Scope
- Choose whether to track:
  - Linked accounts (if you’re in an AWS Organization).
  - Specific services (e.g., EC2, S3, RDS).
  - Tags (to monitor specific projects or environments).
- This helps narrow your budget to the resources that matter most.
5. Set the Budgeted Amount
- Enter the budget limit (e.g., $100 for monthly costs).
- Choose how to measure costs:
  - Unblended costs (default; costs as they are billed).
  - Amortized costs (spreads upfront RI or Savings Plan costs).
  - Net unblended costs (after discounts, credits, and refunds).
6. Configure Alerts (Optional but Recommended)
- Add an alert threshold to notify you when costs exceed a percentage of the budget:
  - Example: send an alert at 80% of the budgeted amount and again at 100%.
- Select actual costs or forecasted costs:
  - Actual costs alert you when you’ve already spent the amount.
  - Forecasted costs alert you when AWS predicts you’ll exceed the budget by the end of the period.
- Enter an email address (or an Amazon SNS topic) to receive alerts.
7. Review and Create
- Review all settings (budget name, amount, scope, alerts).
- Click Create budget to finalize.
- Your budget will now appear in the AWS Budgets dashboard.
8. Monitor and Take Action
- After setup, AWS will track your costs in near real time.
- If your usage crosses the defined threshold, you’ll get an alert.
- Use these insights to optimize resources (e.g., stop unused EC2 instances, move data to cheaper S3 storage tiers, or buy Savings Plans).
Summary
Setting up a cost budget in AWS Budgets involves:
- Logging into AWS Budgets.
- Choosing Cost budget.
- Setting name, period, and renewal type.
- Defining scope (accounts, services, or tags).
- Entering the budget amount.
- Adding alert thresholds and recipients.
- Reviewing and creating the budget.
- Monitoring costs and responding to alerts.
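The same budget can be created programmatically. A minimal sketch, assuming a $100 monthly cost budget with an 80% actual-cost email alert (the account ID and address are placeholders):

aws budgets create-budget \
  --account-id 111122223333 \
  --budget '{"BudgetName":"Monthly-100","BudgetType":"COST","TimeUnit":"MONTHLY","BudgetLimit":{"Amount":"100","Unit":"USD"}}' \
  --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":80,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"you@example.com"}]}]'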
AWS Budgets by itself only sends alerts (emails or SNS
notifications).
If you want to go beyond alerts and automate actions (e.g., stop EC2
instances when a cost threshold is crossed), you need to combine Budgets with AWS
Budgets Actions, SNS, and sometimes AWS Lambda or SSM.
Here’s the breakdown:
1. Alerts Only (default Budgets
feature)
- When creating a budget, you can
add alerts.
- Alerts are sent to email
addresses or SNS topics when actual or forecasted
spend crosses your threshold.
- These alerts don’t take direct
action—they’re notifications only.
- Good for: Teams that just need
visibility into overspending.
2. Automated Actions via AWS
Budgets Actions
Since 2020, AWS Budgets has supported actions tied to budget thresholds. With this, you can:
- Stop, start, or terminate EC2 and RDS instances automatically.
- Update IAM policies to restrict usage.
How it works:
- When setting up your budget, go
to the “Configure actions” step.
- Choose an action type:
- EC2 or RDS instance control (stop, start, terminate).
- IAM permissions control (apply a policy to restrict
usage).
- Select the resources to act on.
- Link an SNS topic for
notifications (optional but recommended).
- Confirm with an IAM role
that allows Budgets to perform these actions on your behalf.
Example:
- If your monthly spend exceeds $50,
AWS Budgets can automatically stop all t3.micro instances in a
chosen region.
3. More Complex Automation with
Lambda + SNS
If you need more flexibility than built-in Budgets Actions:
- Create an SNS topic.
- Subscribe an AWS Lambda
function to it.
- Configure your Budget to send
alerts to that SNS topic.
- In Lambda, write custom logic—for
example:
- Stop or downsize all running
EC2 instances.
- Move data from expensive storage
classes to cheaper ones (S3 IA/Glacier).
- Notify Slack/Teams with a
webhook.
Sample Lambda pseudocode (Python):
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # Get all running instances
    response = ec2.describe_instances(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
    )
    instance_ids = [i['InstanceId']
                    for r in response['Reservations']
                    for i in r['Instances']]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopped instances: {instance_ids}")
4. Which Should You Use?
- Alerts only → If you just want cost
visibility.
- Budgets Actions → If you need quick, built-in
automation (stop/start/terminate EC2/RDS, update IAM).
- SNS + Lambda → If you want full control and
can code custom responses.
✅ Summary:
By default, AWS Budgets provides alerts only. If you want automatic
cost control, enable Budgets Actions to stop/start/terminate EC2 or
RDS or adjust IAM permissions. For advanced workflows, integrate alerts with SNS
+ Lambda for customized automation.
As a follow-up exercise, set up a Budget Action that stops EC2 instances when a threshold is crossed, following the same click-path style as the console demo earlier; a CLI sketch follows.
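A hedged CLI sketch of such an action, assuming the Monthly-100 budget from earlier exists (the instance ID, role ARN, and account ID are placeholders; the execution role must trust budgets.amazonaws.com):

aws budgets create-budget-action \
  --account-id 111122223333 \
  --budget-name Monthly-100 \
  --notification-type ACTUAL \
  --action-type RUN_SSM_DOCUMENTS \
  --action-threshold '{"ActionThresholdValue":80,"ActionThresholdType":"PERCENTAGE"}' \
  --definition '{"SsmActionDefinition":{"ActionSubType":"STOP_EC2_INSTANCES","Region":"us-east-1","InstanceIds":["i-0123456789abcdef0"]}}' \
  --execution-role-arn arn:aws:iam::111122223333:role/BudgetsActionRole \
  --approval-model AUTOMATIC \
  --subscribers '[{"SubscriptionType":"EMAIL","Address":"you@example.com"}]'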
Create a web app using AWS Lambda
Here’s a structured explanation of the process to
create a web app using AWS Lambda:
Creating a Web App Using AWS Lambda
AWS Lambda is a serverless compute service
that lets you run code without managing servers. Instead of provisioning EC2
instances or containers, you write functions that execute in response to events
such as HTTP requests, file uploads, or database changes. To build a simple web
app, you combine Lambda with Amazon API Gateway (to handle
HTTP requests) and optionally DynamoDB or S3 (for data storage).
1. Set Up Your AWS Environment
- Sign in to the AWS Management Console.
- Ensure you have the necessary permissions for Lambda, API Gateway, IAM, and S3/DynamoDB (if used).
2. Create a Lambda Function
- Go to the AWS Lambda Console.
- Click Create function.
- Choose:
  - Author from scratch
  - Name: e.g., MyWebAppFunction
  - Runtime: select a language (Python, Node.js, Java, etc.).
  - Execution role: assign an IAM role with basic Lambda permissions (and permissions for other services if needed).
- Click Create function.
3. Write the Lambda Code
- In the Lambda editor, write your application logic.
- Example: a simple Node.js handler returning a web response:

exports.handler = async (event) => {
  const response = {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: "<h1>Hello from My Web App on AWS Lambda!</h1>",
  };
  return response;
};

- Save and Deploy the function.
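Before wiring up HTTP, you can sanity-check the function from the CLI (assumes AWS CLI v2 and the function name above):

aws lambda invoke \
  --function-name MyWebAppFunction \
  --payload '{}' \
  --cli-binary-format raw-in-base64-out \
  response.json && cat response.json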
4. Create an API Gateway Endpoint
- Go to the Amazon API Gateway Console.
- Choose Create API → HTTP API (simpler) or REST API (more customizable).
- Define a new API (e.g., MyWebAppAPI).
- Create a route (e.g., /) with method GET.
- Connect the route to your Lambda function (MyWebAppFunction).
- Deploy the API to a stage (e.g., prod).
5. Test the Web App
- API Gateway generates an endpoint URL like:
  https://abc123.execute-api.us-east-1.amazonaws.com/prod/
- Open this URL in your browser.
- You should see: Hello from My Web App on AWS Lambda!
6. (Optional) Add Storage or Database
- Amazon DynamoDB – store user data (e.g., profiles, posts, messages).
- Amazon S3 – store static assets like images, CSS, or JavaScript files.
- Update your Lambda function to interact with these services.
7. Secure and Scale
- Use IAM roles and resource policies to secure access.
- Enable CORS in API Gateway if your web app is accessed from browsers.
- Lambda scales automatically with requests, so no server management is needed.
8. (Optional) Host a Front-End
- If you’re building a full-stack app, host your front-end (HTML, CSS, JS) in an S3 bucket with static website hosting or AWS Amplify Hosting, and connect it to your Lambda-powered API.
Summary
The process of creating a web app with AWS Lambda involves:
- Creating a Lambda function with your backend logic.
- Using Amazon API Gateway to expose it as a web endpoint.
- Writing and deploying code to return web responses.
- Testing the API in a browser.
- Optionally integrating storage (S3, DynamoDB).
- Securing and scaling automatically.
Create a simple CRUD web app using API Gateway + Lambda (Node.js) + DynamoDB, with copy-pasteable bits you can deploy fast.

What you’ll build
- DynamoDB table: Todos (partition key id).
- Single Lambda handling all CRUD via HTTP methods.
- API Gateway (HTTP API) routes:
  - POST /items → Create
  - GET /items/{id} → Read one
  - GET /items → Read all
  - PUT /items/{id} → Update
  - DELETE /items/{id} → Delete
- CORS enabled so a browser front-end can call it.
1) Create the DynamoDB table
aws dynamodb create-table \
--table-name Todos \
--attribute-definitions AttributeName=id,AttributeType=S \
--key-schema AttributeName=id,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
2) Lambda code (Node.js 20)
Create index.mjs:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  PutCommand, GetCommand, ScanCommand, UpdateCommand, DeleteCommand,
  DynamoDBDocumentClient
} from "@aws-sdk/lib-dynamodb";
import { randomUUID } from "crypto";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.TABLE_NAME || "Todos";

// Helper: build an HTTP API response with permissive CORS headers
const json = (statusCode, body) => ({
  statusCode,
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS"
  },
  body: JSON.stringify(body)
});

export const handler = async (event) => {
  try {
    const { routeKey, body, pathParameters, queryStringParameters } = event;
    const parsed = body ? JSON.parse(body) : {};
    switch (routeKey) {
      case "POST /items": {
        const item = {
          id: parsed.id || randomUUID(),
          title: parsed.title ?? "",
          done: !!parsed.done,
          createdAt: new Date().toISOString()
        };
        await client.send(new PutCommand({ TableName: TABLE, Item: item }));
        return json(201, item);
      }
      case "GET /items/{id}": {
        const id = pathParameters?.id;
        const res = await client.send(new GetCommand({ TableName: TABLE, Key: { id } }));
        return res.Item ? json(200, res.Item) : json(404, { message: "Not found" });
      }
      case "GET /items": {
        // Optional basic pagination with limit & start key
        const limit = Number(queryStringParameters?.limit || 25);
        const startKey = queryStringParameters?.lastKey
          ? { id: queryStringParameters.lastKey }
          : undefined;
        const res = await client.send(new ScanCommand({
          TableName: TABLE, Limit: limit, ExclusiveStartKey: startKey
        }));
        return json(200, { items: res.Items ?? [], lastKey: res.LastEvaluatedKey?.id });
      }
      case "PUT /items/{id}": {
        const id = pathParameters?.id;
        const { title, done } = parsed;
        const res = await client.send(new UpdateCommand({
          TableName: TABLE,
          Key: { id },
          UpdateExpression: "SET #t = :t, #d = :d",
          ExpressionAttributeNames: { "#t": "title", "#d": "done" },
          ExpressionAttributeValues: { ":t": title, ":d": !!done },
          ReturnValues: "ALL_NEW"
        }));
        return json(200, res.Attributes);
      }
      case "DELETE /items/{id}": {
        const id = pathParameters?.id;
        await client.send(new DeleteCommand({ TableName: TABLE, Key: { id } }));
        return json(204, {});
      }
      case "OPTIONS /{proxy+}":
      case "OPTIONS /items":
      case "OPTIONS /items/{id}":
        return json(204, {});
      default:
        return json(400, { message: `Unsupported routeKey: ${routeKey}` });
    }
  } catch (err) {
    console.error(err);
    return json(500, { message: "Server error", error: String(err) });
  }
};
Create package.json:
{
"type": "module",
"dependencies": {
"@aws-sdk/client-dynamodb": "^3.614.0",
"@aws-sdk/lib-dynamodb": "^3.614.0"
}
}
Install deps and zip:

npm i
zip -r function.zip index.mjs node_modules
3) Create the Lambda + permissions

REGION=us-east-1
ROLE_NAME=crud-lambda-role

aws iam create-role --role-name $ROLE_NAME \
  --assume-role-policy-document '{
    "Version":"2012-10-17",
    "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
  }'

aws iam attach-role-policy --role-name $ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

aws iam put-role-policy --role-name $ROLE_NAME --policy-name DDBAccess --policy-document '{
  "Version":"2012-10-17",
  "Statement":[
    {"Effect":"Allow","Action":["dynamodb:PutItem","dynamodb:GetItem","dynamodb:Scan","dynamodb:UpdateItem","dynamodb:DeleteItem"],"Resource":"*"}
  ]
}'

# Wait a few seconds for role propagation
sleep 10
ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME --query 'Role.Arn' --output text)

aws lambda create-function \
  --function-name crud-todos-fn \
  --runtime nodejs20.x \
  --role $ROLE_ARN \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --environment "Variables={TABLE_NAME=Todos}" \
  --region $REGION
4) Create the HTTP API + routes

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

API_ID=$(aws apigatewayv2 create-api \
  --name crud-todos-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:$REGION:$ACCOUNT_ID:function:crud-todos-fn \
  --cors-configuration 'AllowOrigins=["*"],AllowMethods=["GET","POST","PUT","DELETE","OPTIONS"],AllowHeaders=["*"]' \
  --query 'ApiId' --output text --region $REGION)

# Grant API Gateway permission to invoke Lambda (any stage/route on this API)
aws lambda add-permission \
  --function-name crud-todos-fn \
  --statement-id apigw \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$REGION:$ACCOUNT_ID:$API_ID/*"

# Define routes (don't name the loop variable PATH -- that would clobber the shell's command search path)
for r in "POST /items" "GET /items" "GET /items/{id}" "PUT /items/{id}" "DELETE /items/{id}" "OPTIONS /{proxy+}"; do
  METHOD=$(echo $r | cut -d' ' -f1); ROUTE_PATH=$(echo $r | cut -d' ' -f2-)
  INTEGRATION_ID=$(aws apigatewayv2 create-integration \
    --api-id $API_ID \
    --integration-type AWS_PROXY \
    --integration-uri arn:aws:lambda:$REGION:$ACCOUNT_ID:function:crud-todos-fn \
    --payload-format-version 2.0 \
    --query 'IntegrationId' --output text --region $REGION)
  aws apigatewayv2 create-route --api-id $API_ID --route-key "$METHOD $ROUTE_PATH" \
    --target "integrations/$INTEGRATION_ID" \
    --region $REGION
done

# Deploy stage
aws apigatewayv2 create-stage --api-id $API_ID --stage-name prod --auto-deploy --region $REGION

API_URL=$(aws apigatewayv2 get-apis \
  --query "Items[?ApiId=='$API_ID'].ApiEndpoint" --output text --region $REGION)
echo "API: $API_URL/prod"
5) Test with curl

BASE="$API_URL/prod"

# Create
curl -s -X POST "$BASE/items" -H "Content-Type: application/json" \
  -d '{"title":"Buy milk","done":false}'

# List
curl -s "$BASE/items"

# Read one (replace {id} with a real ID)
curl -s "$BASE/items/{id}"

# Update
curl -s -X PUT "$BASE/items/{id}" -H "Content-Type: application/json" \
  -d '{"title":"Buy oat milk","done":true}'

# Delete
curl -s -X DELETE "$BASE/items/{id}"
6) Front-end (optional)
Host a static SPA (HTML/JS) on S3 + CloudFront or Amplify
Hosting; call the API with fetch. CORS is already open to * for quick demos—tighten in production.
7) Clean up

aws apigatewayv2 delete-api --api-id $API_ID --region $REGION
aws lambda delete-function --function-name crud-todos-fn --region $REGION
aws dynamodb delete-table --table-name Todos
aws iam delete-role-policy --role-name $ROLE_NAME --policy-name DDBAccess
aws iam detach-role-policy --role-name $ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam delete-role --role-name $ROLE_NAME
Notes & production tips
- Scope IAM to the specific
table ARN (least privilege).
- Add request validation and
auth (e.g., Cognito authorizer) in API Gateway.
- Replace full-table Scan
with Query when you design keyed access patterns.
- Use SAM / CDK to codify
this stack for repeatable deploys.
As a next step, turn this into a SAM or CDK template so you can deploy the whole stack with one command.
Create an Amazon RDS Database
Here’s a clear, step-by-step explanation of the
process to create an Amazon RDS database:
Creating an Amazon RDS Database
Amazon Relational Database Service (RDS) is a managed
database service that simplifies the setup, operation, and scaling of
relational databases in the cloud. It supports popular engines like MySQL,
PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server. With RDS, AWS
handles tasks such as backups, patching, monitoring, and scaling so you can
focus on applications instead of database administration.
1. Log in to the AWS Management Console
- Open the AWS Management Console.
- From the Services menu, select RDS.
2. Start the Database Creation Wizard
- In the RDS dashboard, click Create database.
- You’ll be prompted to choose between:
  - Standard create (gives you full configuration control).
  - Easy create (automatically selects recommended defaults).
- For more flexibility, select Standard create.
3. Choose a Database Engine
- Select the engine you want to use:
  - Amazon Aurora (MySQL and PostgreSQL compatible, highly scalable).
  - MySQL
  - PostgreSQL
  - MariaDB
  - Oracle
  - Microsoft SQL Server
4. Select a Database Template
- Choose a template depending on your use case:
  - Production (Multi-AZ deployment, backups, monitoring enabled).
  - Dev/Test (lower cost, fewer high-availability features).
  - Free Tier (limited resources, great for practice).
5. Configure Settings
- DB Instance Identifier: a unique name for your database.
- Master Username: the admin account for the database.
- Master Password: choose a secure password (and confirm it).
6. Choose DB Instance Size
- Select the instance type (compute and memory resources).
  - For example, db.t3.micro is free-tier eligible.
- Pick a storage type (General Purpose SSD, Provisioned IOPS, or Magnetic).
- Define allocated storage size (e.g., 20 GB).
7. Configure Connectivity
- Choose the Virtual Private Cloud (VPC) where the DB will run.
- Decide if the database should be publicly accessible (open to the internet) or private (internal only).
- Assign a VPC security group to control inbound and outbound traffic.
  - Example: allow connections on port 3306 for MySQL.
8. Set Advanced Options (Optional)
- Enable automated backups (recommended).
- Set the backup retention period (1–35 days).
- Enable Multi-AZ deployment for high availability.
- Enable monitoring and logging (CloudWatch).
9. Review and Launch
- Review all chosen settings.
- Click Create database.
- RDS will start provisioning the instance (this may take several minutes).
10. Connect to the Database
- Once the DB status is available, click on the DB instance to view details.
- Copy the endpoint (hostname) and port.
- Use a SQL client (e.g., MySQL Workbench, pgAdmin, SQL Server Management Studio) or application code to connect.

Example (MySQL from terminal):

mysql -h mydbinstance.xxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
Summary
Creating an Amazon RDS database involves:
- Logging into AWS RDS.
- Starting the database creation wizard.
- Choosing an engine (MySQL, PostgreSQL, etc.).
- Selecting a template (production, dev/test, free tier).
- Configuring the DB name, admin credentials, and instance size.
- Setting up connectivity (VPC, security groups, public/private).
- Configuring backups, monitoring, and high availability.
- Launching the DB instance.
- Connecting to the DB using the provided endpoint.
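The console wizard maps to a single CLI call. A minimal sketch for a free-tier-style MySQL instance (the identifier and password are placeholders):

aws rds create-db-instance \
  --db-instance-identifier mydb \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'ChangeMe-123' \
  --backup-retention-period 7 \
  --no-publicly-accessible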
Beyond the initial setup, here are the main ways to scale Amazon RDS—what each option does, the trade-offs, and exactly how to do it (Console + CLI).
1) Scale compute (change instance
class)
What it is: Move to a larger/smaller DB instance class (more vCPU/RAM; e.g., db.t3.micro → db.m7g.large).
Impact: Usually requires a restart; brief downtime unless you use
Multi-AZ (failover minimizes it).
Console
- RDS → Databases → select
your instance.
- Modify → DB instance class →
choose new class.
- (Optional) Apply immediately
or during the next maintenance window.
- Continue → Modify DB instance.
CLI
aws rds modify-db-instance \
--db-instance-identifier mydb \
--db-instance-class db.m7g.large \
--apply-immediately
2) Scale storage capacity (increase
size / autoscaling)
What it is: Grow storage (e.g., 20→200 GB) and optionally enable Storage
Autoscaling with a Max allocated storage cap.
Impact: Online operation; RDS grows storage without downtime (cannot
shrink).
Console
- DB → Modify.
- Allocated storage: raise GB.
- Enable storage autoscaling and set Max allocated storage.
- Continue → Modify.
CLI
# Increase storage and enable autoscaling up to 1000 GB
aws rds modify-db-instance \
--db-instance-identifier mydb \
--allocated-storage 200 \
--max-allocated-storage 1000 \
--apply-immediately
3) Scale storage performance (type
& IOPS)
What it is: Move between gp3 / io2 and adjust provisioned IOPS
for steady throughput/latency.
Impact: May cause a brief interruption when switching types.
Console
- DB → Modify.
- Storage type: pick gp3 or io2.
- Set IOPS (for io2) or Throughput/IOPS (for gp3).
- Modify.
CLI
# Switch to io2 with 20k IOPS
aws rds modify-db-instance \
--db-instance-identifier mydb \
--storage-type io2 \
--iops 20000 \
--apply-immediately
4) High availability vs read
scaling (Multi-AZ vs Read Replicas)
- Multi-AZ = HA/failover (synchronous standby). It does not add read capacity (except the Multi-AZ DB cluster deployment with two readable standbys—a distinct feature; check your engine).
- Read Replicas (asynchronous) do add
read capacity. Supported for MySQL, MariaDB, PostgreSQL, and Aurora
(engine specifics vary). You point read traffic to the replicas; you can promote
a replica to standalone if needed.
4a) Enable Multi-AZ
Console: DB → Modify → Availability & durability → choose Multi-AZ
option → Modify.
CLI:
aws rds modify-db-instance \
--db-instance-identifier mydb \
--multi-az \
--apply-immediately
4b) Create Read Replicas
Console
- DB → Actions → Create
read replica.
- Choose instance class/size,
AZ/Region, storage type/IOPS.
- Create read replica.
CLI (same region):

aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-1 \
  --source-db-instance-identifier mydb \
  --db-instance-class db.m7g.large

CLI (cross-region — run the command in the destination region):

aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-usw2 \
  --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydb \
  --source-region us-east-1 \
  --region us-west-2 \
  --db-instance-class db.m7g.large

Promote a replica (break replication):

aws rds promote-read-replica --db-instance-identifier mydb-replica-1
5) Aurora-specific notes (if you
use Aurora)
- Scale reads by adding Aurora Replicas;
apps use the reader endpoint to load balance.
- Scale writes by changing the writer
instance class.
- Aurora Serverless v2 scales capacity units
automatically with near-instant adjustments—great for spiky/variable
loads.
6) Connection & query scaling
(cheap wins)
- Add connection pooling
(e.g., RDS Proxy for MySQL/Postgres) to reduce connection storm
overhead.
- Use proper indexes and avoid full-table scans (watch the queries your ORM generates).
7) Observability before/after
scaling
- CloudWatch metrics: CPUUtilization, FreeableMemory,
ReadIOPS/WriteIOPS, Read/WriteLatency, FreeStorageSpace,
DatabaseConnections, ReplicaLag.
- Enhanced Monitoring and Performance Insights
to find bottlenecks (CPU vs I/O vs SQL).
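For example, pulling one of these metrics from the CLI (the instance identifier and time window are placeholders):

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=mydb \
  --start-time 2025-01-01T00:00:00Z \
  --end-time 2025-01-02T00:00:00Z \
  --period 300 \
  --statistics Average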
8) Typical upgrade playbook
- Turn on Performance Insights
and review slow SQL.
- If CPU-bound → bigger instance;
if I/O-bound → IOPS / storage type; if read-heavy → Read
Replicas.
- Add RDS Proxy for bursty
connections.
- Enable Multi-AZ for HA.
- Configure storage autoscaling
to avoid surprises.
Quick examples (one-liners)
Bump instance class + apply later (maintenance window):
aws rds modify-db-instance \
--db-instance-identifier mydb \
--db-instance-class db.r7g.xlarge
Turn on Performance Insights:
aws rds modify-db-instance \
--db-instance-identifier mydb \
--enable-performance-insights \
--performance-insights-retention-period 7 \
--apply-immediately
Final tip
Plan changes during low-traffic windows, and if you need near-zero
downtime, pair Multi-AZ with connection draining on the app side
(retry logic), then modify with Apply immediately to fail over
quickly.
From here, map out a targeted scaling path for your own workload using the console clicks and CLI lines above.