
2024/02/09

Running the AWS PutMetricData operation fails with the error: The parameter MetricData.member.1.Timestamp must specify a time no more than two hours in the future

Problem

The AWS PutMetricData commands are as follows

#!/bin/bash
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:05:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:06:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:07:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:08:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:09:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:10:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:11:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:12:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:13:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:14:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:15:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:16:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:17:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:18:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:19:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:20:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T11:21:00.000Z


Running the script produces the following error

An error occurred (InvalidParameterValue) when calling the PutMetricData operation: The parameter MetricData.member.1.Timestamp must specify a time no more than two hours in the future.



Root Cause

A timestamp passed to PutMetricData may be at most two hours ahead of the current time. The 11:05Z through 11:21Z values above were more than two hours in the future when the script ran, so every call was rejected. The limit exists to prevent data inconsistency caused by incorrectly set timestamps.


How-To

Correct the timestamps so that none is more than two hours in the future

#!/bin/bash
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:05:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:06:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:07:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:08:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:09:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:10:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:11:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:12:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:13:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:14:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:15:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:16:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:17:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:18:00.000Z 
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:19:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:20:00.000Z
aws cloudwatch put-metric-data --metric-name CriticalError --namespace MyService --value 1 --timestamp 2024-02-09T03:21:00.000Z


The data points now appear in CloudWatch Metrics as expected.
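A sketch that derives each timestamp from the current time instead of hard-coding it, so no data point can fall outside the accepted window (GNU date syntax assumed; the loop prints the commands, so drop the echo to actually send the data):

```shell
#!/bin/bash
# Emit one put-metric-data call per minute for the previous 17 minutes.
# Because each timestamp is computed relative to "now", none of them can
# be more than two hours in the future.
for i in $(seq 17 -1 1); do
  ts=$(date -u -d "-${i} minutes" +%Y-%m-%dT%H:%M:00.000Z)
  echo aws cloudwatch put-metric-data \
    --metric-name CriticalError --namespace MyService \
    --value 1 --timestamp "$ts"
done
```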





AWS CloudWatch log group: the configured log group name does not appear as expected

Problem

I installed the CloudWatch Agent on an EC2 instance and configured it to ship metrics and log data to an AWS CloudWatch log group, but the specified log group name never appears in the console.

The contents of config.json are as follows

{
        "agent": {
                "metrics_collection_interval": 1,
                "run_as_user": "cwagent"
        },
        "logs": {
                "logs_collected": {
                        "files": {
                                "collect_list": [
                                        {
                                                "file_path": "/var/log/messages",
                                                "log_group_class": "STANDARD",
                                                "log_group_name": "messages",
                                                "log_stream_name": "{instance_id}",
                                                "retention_in_days": 1
                                        }
                                ]
                        }
                }
        },
        "metrics": {
                "aggregation_dimensions": [
                        [
                                "InstanceId"
                        ]
                ],
                "append_dimensions": {
                        "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
                        "ImageId": "${aws:ImageId}",
                        "InstanceId": "${aws:InstanceId}",
                        "InstanceType": "${aws:InstanceType}"
                },
                "metrics_collected": {
                        "cpu": {
                                "measurement": [
                                        "cpu_usage_idle",
                                        "cpu_usage_iowait",
                                        "cpu_usage_user",
                                        "cpu_usage_system"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ],
                                "totalcpu": false
                        },
                        "disk": {
                                "measurement": [
                                        "used_percent",
                                        "inodes_free"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ]
                        },
                        "diskio": {
                                "measurement": [
                                        "io_time"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ]
                        },
                        "mem": {
                                "measurement": [
                                        "mem_used_percent"
                                ],
                                "metrics_collection_interval": 1
                        },
                        "statsd": {
                                "metrics_aggregation_interval": 10,
                                "metrics_collection_interval": 10,
                                "service_address": ":8125"
                        },
                        "swap": {
                                "measurement": [
                                        "swap_used_percent"
                                ],
                                "metrics_collection_interval": 1
                        }
                }
        }
}


Root Cause

Checking the CloudWatch Agent log under /opt/aws/amazon-cloudwatch-agent/logs/ reveals a permission problem; run_as_user should be changed to root

[inputs.logfile] Failed to tail file /var/log/messages with error: open /var/log/messages: permission denied


How-To

Modify config.json as follows (run_as_user changed to root)

{
        "agent": {
                "metrics_collection_interval": 1,
                "run_as_user": "root"
        },
        "logs": {
                "logs_collected": {
                        "files": {
                                "collect_list": [
                                        {
                                                "file_path": "/var/log/messages",
                                                "log_group_class": "STANDARD",
                                                "log_group_name": "messages",
                                                "log_stream_name": "{instance_id}",
                                                "retention_in_days": 1
                                        }
                                ]
                        }
                }
        },
        "metrics": {
                "aggregation_dimensions": [
                        [
                                "InstanceId"
                        ]
                ],
                "append_dimensions": {
                        "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
                        "ImageId": "${aws:ImageId}",
                        "InstanceId": "${aws:InstanceId}",
                        "InstanceType": "${aws:InstanceType}"
                },
                "metrics_collected": {
                        "cpu": {
                                "measurement": [
                                        "cpu_usage_idle",
                                        "cpu_usage_iowait",
                                        "cpu_usage_user",
                                        "cpu_usage_system"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ],
                                "totalcpu": false
                        },
                        "disk": {
                                "measurement": [
                                        "used_percent",
                                        "inodes_free"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ]
                        },
                        "diskio": {
                                "measurement": [
                                        "io_time"
                                ],
                                "metrics_collection_interval": 1,
                                "resources": [
                                        "*"
                                ]
                        },
                        "mem": {
                                "measurement": [
                                        "mem_used_percent"
                                ],
                                "metrics_collection_interval": 1
                        },
                        "statsd": {
                                "metrics_aggregation_interval": 10,
                                "metrics_collection_interval": 10,
                                "service_address": ":8125"
                        },
                        "swap": {
                                "measurement": [
                                        "swap_used_percent"
                                ],
                                "metrics_collection_interval": 1
                        }
                }
        }
}
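After editing config.json, the agent must be told to reload it before the new configuration takes effect. A sketch, assuming the default on-instance install path of the agent control script (run on the instance itself):

```shell
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
```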


2024/02/07

How to send mail from AWS Lambda via SES (Simple Email Service)

The steps are as follows

1. Create verified identities to serve as the test sender and receiver email addresses


2. After creating the Lambda function, go to Configuration => Permissions and confirm the execution role has SES permissions
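A minimal identity-policy sketch for step 2, granting the role the standard SES send actions (this policy is an assumption, not the lab's exact one; in practice, scope Resource down to your verified identity ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```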


3. Python code that calls SES

import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Create an SES client
    ses_client = boto3.client('ses', region_name='us-east-1')  # replace with your SES region as needed

    # Email parameters
    SENDER = "hekarey795@giratex.com"  # replace with your sender address
    RECIPIENT = "hekarey795@giratex.com"  # replace with your recipient address
    SUBJECT = "AWS SES Test Email from Lambda"
    BODY_TEXT = ("This is a test email sent from AWS Lambda using SES")
    CHARSET = "UTF-8"

    # Attempt to send the email
    try:
        response = ses_client.send_email(
            Destination={
                'ToAddresses': [
                    RECIPIENT,
                ],
            },
            Message={
                'Body': {
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=SENDER,
        )
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        print("Email sent! Message ID:")
        print(response['MessageId'])

4. Execution result


5. Check the mailbox






A command fails to run in AWS CloudShell

Problem

I ran the following command in CloudShell, but it would not execute

aws cognito-identity set-identity-pool-roles \
--identity-pool-id "us-east-1:xxxx-xxxx-xxxx-xxxx-xxxxxx” \
--roles unauthenticated=arn:aws:iam::xxxx:role/Cognito_DynamoPoolUnauth --output json


Root Cause

The second double quote on the second line is wrong: it is a typographic (curly) quote, which the shell treats as a literal character rather than a string delimiter

aws cognito-identity set-identity-pool-roles \
--identity-pool-id "us-east-1:xxxx-xxxx-xxxx-xxxx-xxxxxx” \
--roles unauthenticated=arn:aws:iam::xxxx:role/Cognito_DynamoPoolUnauth --output json


How-To

After correcting it to a straight double quote, the command runs normally

aws cognito-identity set-identity-pool-roles \
--identity-pool-id "us-east-1:xxxx-xxxx-xxxx-xxxx-xxxxxx" \
--roles unauthenticated=arn:aws:iam::xxxx:role/Cognito_DynamoPoolUnauth --output json


2024/02/06

AWS CloudFormation Error: TemplateURL must be a supported URL

Problem

When I deploy the following CloudFormation template:

{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Resources" : {
        "myStack" : {
	       "Type" : "AWS::CloudFormation::Stack",
	       "Properties" : {
              "TemplateURL" : "https://s3.amazonaws.com/nested-demo-531193295833/s3static.json",
              "TimeoutInMinutes" : "60"
	       }
        },
        "myStack2" : {
            "Type" : "AWS::CloudFormation::Stack",
            "Properties" : {
               "TemplateURL" : "https://s3.amazonaws.com/nested-demo-531193295833/noretain.json",
               "TimeoutInMinutes" : "60"
            }
         }    
    }
}


The following error appears



Root Cause

The TemplateURL values were entered incorrectly, which produces the error above: both nested stacks reference their templates with path-style URLs (https://s3.amazonaws.com/&lt;bucket&gt;/&lt;key&gt;) that the service rejects as unsupported.


How-To

Correct the TemplateURL values in the template and the stack deploys successfully

{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Resources" : {
        "myStack" : {
	       "Type" : "AWS::CloudFormation::Stack",
	       "Properties" : {
              "TemplateURL" : "https://nested-demo-531193295833.s3.amazonaws.com/s3static.json",
              "TimeoutInMinutes" : "60"
	       }
        },
        "myStack2" : {
            "Type" : "AWS::CloudFormation::Stack",
            "Properties" : {
               "TemplateURL" : "https://nested-demo-531193295833.s3.amazonaws.com/noretain.json",
               "TimeoutInMinutes" : "60"
            }
         }    
    }
}


Execution result



2024/02/05

AWS CloudFormation Error Code: InvalidAMIID.NotFound

Problem

When I create resources from an already-prepared CloudFormation YAML file, the following error message appears



Root Cause

The image ID in the YAML file does not exist in that region, which causes the error above.


How-To

Fixing the image ID allows the resources to be created successfully

AWSTemplateFormatVersion: 2010-09-09

Description: Template to create an EC2 instance and enable SSH

Parameters: 
  KeyName:
    Description: Name of SSH KeyPair
    Type: 'AWS::EC2::KeyPair::KeyName'
    ConstraintDescription: Provide the name of an existing SSH key pair

Resources:
  MyEC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0277155c3f0ab2930
      KeyName: !Ref KeyName
      SecurityGroups:
       - Ref: InstanceSecurityGroup
      Tags:
        - Key: Name
          Value: My CF Instance
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access via port 22
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0

Outputs: 
  InstanceID:
    Description: The Instance ID
    Value: !Ref MyEC2Instance
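To avoid hard-coding an AMI ID that may not exist in the deployment region, the current Amazon Linux 2023 AMI ID can be looked up at deploy time from the public SSM parameter (parameter name as published by AWS; requires AWS CLI credentials):

```shell
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64 \
  --query 'Parameters[0].Value' --output text
```

In a template, the same lookup can be done by declaring an ImageId parameter of type AWS::SSM::Parameter::Value&lt;AWS::EC2::Image::Id&gt;.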







AWS CloudFormation creation failure: The key pair 'irkp' does not exist (Service: AmazonEC2; Status Code: 400; Error Code: InvalidKeyPair.NotFound)

Problem

When I create the AWS resources via the CLI

aws cloudformation create-stack --stack-name CodeDeployDemoStack-2 \
--template-url https://my-cf-template-347854199521.s3.amazonaws.com/CF_Template.json \
--parameters ParameterKey=InstanceCount,ParameterValue=1 \
ParameterKey=InstanceType,ParameterValue=t3.micro \
ParameterKey=KeyPairName,ParameterValue=irkp \
ParameterKey=OperatingSystem,ParameterValue=Linux \
ParameterKey=SSHLocation,ParameterValue=0.0.0.0/0 \
ParameterKey=TagKey,ParameterValue=Name \
ParameterKey=TagValue,ParameterValue=CodeDeployDemo \
--capabilities CAPABILITY_IAM


The following error appears


Root Cause

The key pair named in the command does not exist.


How-To

Correct the AWS CLI command to use the name of an existing key pair

aws cloudformation create-stack --stack-name CodeDeployDemoStack-3 \
--template-url https://my-cf-template-347854199521.s3.amazonaws.com/CF_Template.json \
--parameters ParameterKey=InstanceCount,ParameterValue=1 \
ParameterKey=InstanceType,ParameterValue=t3.micro \
ParameterKey=KeyPairName,ParameterValue=nvkp \
ParameterKey=OperatingSystem,ParameterValue=Linux \
ParameterKey=SSHLocation,ParameterValue=0.0.0.0/0 \
ParameterKey=TagKey,ParameterValue=Name \
ParameterKey=TagValue,ParameterValue=CodeDeployDemo \
--capabilities CAPABILITY_IAM
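Before calling create-stack, the existing key pair names in the target region can be listed to confirm the value for KeyPairName (requires only the ec2:DescribeKeyPairs permission):

```shell
aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName' --output text
```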


Execution result



2024/02/04

AWS CodeDeploy failure: Zip end of central directory signature not found

Problem

I uploaded the source code to the designated S3 bucket, intending to have CodeDeploy deploy it from the bucket to the target EC2 instances, but the deployment failed



Root Cause

Checking the deployment lifecycle events shows the failure occurred during the DownloadBundle step.

The error message is: Zip end of central directory signature not found

This error means that when CodeDeploy downloaded the deployment bundle from S3 and tried to unpack it as a ZIP archive, the file could not be read as a valid ZIP; it usually indicates a corrupted (for example, truncated) file.



Solution

Re-uploading the source code to S3 allowed CodeDeploy to deploy the application successfully.
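Before re-uploading, a quick local check can confirm the bundle is a well-formed ZIP; a minimal sketch (the file name is a placeholder):

```python
import zipfile

def is_valid_zip(path):
    """Return True if path is a complete, uncorrupted ZIP archive.

    A truncated or corrupted upload is exactly what triggers CodeDeploy's
    "Zip end of central directory signature not found" error at the
    DownloadBundle step.
    """
    if not zipfile.is_zipfile(path):  # checks the end-of-central-directory record
        return False
    with zipfile.ZipFile(path) as zf:
        return zf.testzip() is None   # None means every member's CRC checks out
```

For example, run is_valid_zip('bundle.zip') before `aws s3 cp`.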


2024/01/14

"errorMessage": "require is not defined in ES module scope, you can use import instead"

Problem

I was following a cloudguru lab, learning how to write a simple Node.js Lambda function


const https = require('https');
let url = "https://www.amazon.com";

exports.handler = async function(event) {
    let statusCode;
    await new Promise(function(resolve, reject) {
        https.get(url, (res) => {
            statusCode = res.statusCode;
            resolve(statusCode);
        }).on("error", (e) => {
            reject(Error(e));
        });
    });
    console.log(statusCode);
    return statusCode;
};

Running a test event produces the following error message
Test Event Name
MyTestEvent

Response
{
  "errorType": "ReferenceError",
  "errorMessage": "require is not defined in ES module scope, you can use import instead",
  "trace": [
    "ReferenceError: require is not defined in ES module scope, you can use import instead",
    "    at file:///var/task/index.mjs:1:15",
    "    at ModuleJob.run (node:internal/modules/esm/module_job:218:25)",
    "    at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)",
    "    at async _tryAwaitImport (file:///var/runtime/index.mjs:1008:16)",
    "    at async _tryRequire (file:///var/runtime/index.mjs:1057:86)",
    "    at async _loadUserApp (file:///var/runtime/index.mjs:1081:16)",
    "    at async UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:1119:21)",
    "    at async start (file:///var/runtime/index.mjs:1282:23)",
    "    at async file:///var/runtime/index.mjs:1288:1"
  ]
}

Root Cause

When you use the Node.js 20 runtime in AWS Lambda with a handler file that has the .mjs extension, the function is treated as an ES module. ES modules do not support the CommonJS require syntax, so the exports.handler assignment must also be rewritten in ES module form. (Renaming the handler file to index.js would instead keep CommonJS semantics.)


Solution

The code needs to be modified as follows

import https from 'https';
let url = "https://www.amazon.com";

export async function handler(event) {
    let statusCode;
    await new Promise(function(resolve, reject) {
        https.get(url, (res) => {
            statusCode = res.statusCode;
            resolve(statusCode);
        }).on("error", (e) => {
            reject(Error(e));
        });
    });
    console.log(statusCode);
    return statusCode;
};


Execution result



2024/01/01

Unable to install mysqlclient package on EC2 instance (Amazon Linux 2023)

Problem

When creating the EC2 instance, I supplied the following bootstrap script as user data


#!/bin/bash  
yum update -y
yum install mysql -y

But after the instance starts, connecting via EC2 Instance Connect and running mysql --version gives the following error

mysql: command not found

The boot log shows that the mysql client failed to install

[ec2-user@ip-172-31-44-29 ~]$ cat /var/log/cloud-init-output.log
Last metadata expiration check: 0:00:03 ago on Mon Jan  1 04:21:12 2024.
No match for argument: mysql
Error: Unable to find a match: mysql
2024-01-01 04:21:15,423 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2024-01-01 04:21:15,424 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3.9/site-packages/cloudinit/config/cc_scripts_user.py'>) failed
Cloud-init v. 22.2.2 finished at Mon, 01 Jan 2024 04:21:15 +0000. Datasource DataSourceEc2.  Up 32.23 seconds


Solution

Amazon Linux 2023 ships a different default package set from earlier AMIs (there is no mysql package), so the following commands are needed instead

# install pip (AL 2023 does not have one by default)
sudo dnf install -y pip

# install dependencies
sudo dnf install -y mariadb105-devel gcc python3-devel

# install mysqlclient
pip install mysqlclient


Execution result

   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Jan  1 04:24:46 2024 from 18.206.107.28
[ec2-user@ip-172-31-44-29 ~]$ mysql --version
mysql  Ver 8.0.35 for Linux on x86_64 (MySQL Community Server - GPL)


Increase/Decrease Font Size in iTerm2

Problem
How to increase or decrease the font size in iTerm2

How-To
View => Make Text Bigger (or Make Text Smaller)


2022/12/03

AWS S3 Durability and Availability

AWS S3 Durability

  • The same across all S3 storage classes
  • S3 durability is 99.999999999% (11 nines): if you store 10,000,000 objects, you can expect to lose one object every 10,000 years on average. The arithmetic:
    • average annual expected loss = 100% - 99.999999999% = 0.000000001%, i.e. a probability of 1e-11
    • expected objects lost over 10,000 years = 10,000,000 * 10,000 * 1e-11 = 1


AWS S3 Availability

  • Varies across S3 storage classes
  • For S3 Standard, availability is 99.99%, which works out to about 53 minutes of unavailability per year. The arithmetic:
    • minutes per year = 365 * 24 * 60 = 525,600
    • non-availability = 1 - 0.9999 = 0.0001
    • minutes unavailable per year = 525,600 * 0.0001 = 52.56
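Both calculations can be checked in a few lines of Python (the constants mirror the bullets above):

```python
# Durability: 11 nines means an annual object-loss probability of 1e-11.
durability = 0.99999999999         # 99.999999999%
annual_loss_prob = 1 - durability  # ~1e-11
objects = 10_000_000
years = 10_000
expected_losses = objects * years * annual_loss_prob
print(expected_losses)             # ~1 object lost per 10,000 years

# Availability: S3 Standard promises 99.99%.
availability = 0.9999
minutes_per_year = 365 * 24 * 60   # 525,600
downtime_minutes = minutes_per_year * (1 - availability)
print(downtime_minutes)            # ~52.56 minutes per year
```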

2022/11/01

Connecting to AWS EC2 with WinSCP

2022/08/04

[AWS] How to pick the AWS Region with the lowest latency from my current location

Requirement

How to pick the Region with the lowest latency from where I am


How-To

Run a query at https://www.cloudping.info/ ; for example, the following result

Amazon Web Services

Region                        Latency
us-east-1 (Virginia)          209 ms
us-east-2 (Ohio)              209 ms
us-west-1 (California)        141 ms
us-west-2 (Oregon)            160 ms
ca-central-1 (Central)        228 ms
eu-west-1 (Ireland)           289 ms
eu-west-2 (London)            284 ms
eu-west-3 (Paris)             286 ms
eu-central-1 (Frankfurt)      295 ms
eu-south-1 (Milan)            303 ms
eu-north-1 (Stockholm)        300 ms
me-south-1 (Bahrain)          252 ms
af-south-1 (Cape Town)        424 ms
ap-east-1 (Hong Kong)         45 ms
ap-southeast-3 (Jakarta)      109 ms
ap-south-1 (Mumbai)           121 ms
ap-northeast-3 (Osaka-Local)  49 ms
ap-northeast-2 (Seoul)        73 ms
ap-southeast-1 (Singapore)    95 ms
ap-southeast-2 (Sydney)       149 ms
ap-northeast-1 (Tokyo)        51 ms
sa-east-1 (São Paulo)         311 ms
cn-north-1 (Beijing)          66 ms
cn-northwest-1 (Ningxia)      180 ms
us-gov-east-1                 206 ms
us-gov-west-1                 159 ms

2022/08/01

AWS Price Calculator Example

AWS Cloud cost calculation example

  • Utilization: 100%. All infrastructure components run 24 hours per day, 7 days per week.
  • Instance: t3a.xlarge. 16 GB memory, 4 vCPU.
  • Storage: Amazon EBS SSD gp2. 1 EBS volume per instance with 30 GB of storage per volume.
  • Data backup: daily EBS snapshots. 1 EBS volume per instance with 30 GB of storage per volume.
  • Data transfer: data in 1 TB/month, data out 1 TB/month. 10% incremental change per day.
  • Instance scale: 4. On average per day, there are 4 instances running.
  • Load Balancing: 20 GB/hour. Elastic Load Balancing is used 24 hours per day, 7 days per week; it processes a total of 20 GB/hour (data in + data out).
  • Database: MySQL, db.m5.large instance with 8 GB memory, 2 vCPUs, 100 GB storage. Multi-AZ deployment with synchronous standby replica in a separate Availability Zone.

Cost breakdown

  • Elastic Load Balancing: number of Network Load Balancers (1), processed bytes per NLB for TCP (20 GB per hour)
  • EC2: operating system (Linux), quantity (4), storage for each EC2 instance (General Purpose SSD (gp2)), storage amount (30 GB), instance type (t3a.xlarge)
  • Amazon Elastic IP address: number of EC2 instances (1), number of EIPs per instance (1)
  • Amazon RDS for MySQL: quantity (1) db.m5.large, storage for each RDS instance (General Purpose SSD [gp2]), storage amount (100 GB)
  • Amazon Route 53: hosted zones (1), number of Elastic Network Interfaces (2), basic checks within AWS (0)
  • Amazon Virtual Private Cloud (Amazon VPC): data transfer cost, inbound (from: Internet) 1 TB per month, outbound (to: Internet) 1 TB per month, intra-Region 0 TB per month

Estimate

Reference link