AWS NETWORKING AND CONTENT DELIVERY PRACTICAL NOTES

Updating Existing Content with a CloudFront Distribution:

If you need to remove a file from CloudFront edge caches before it expires, you can do one of the following:

  • Invalidate the file from edge caches. The next time a viewer requests the file, CloudFront returns to the origin to fetch the latest version of the file.
  • Use file versioning to serve a different version of the file that has a different name. For more information, see Updating Existing Files Using Versioned File Names.
  • In the scenario where an application's content has been updated, the best option is to invalidate all of the application's objects from the edge caches (for example, using the /* path). The new objects are then cached the next time a request is made for them; a minimal boto3 sketch follows this list.
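The console works fine for one-off invalidations, but the same call can be scripted. A minimal sketch with boto3 (the distribution ID below is a placeholder, and the /* path assumes you really do want to invalidate everything):

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate every object in the distribution's edge caches.
# "E1EXAMPLE" is a placeholder distribution ID.
response = cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Status"])  # typically "InProgress"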

Use Latency-based routing to improve application performance for your users:

If your application is hosted in multiple AWS Regions, you can improve performance for your users by serving their requests from the AWS Region that provides the lowest latency.

  • To use latency-based routing, you create latency records for your resources in multiple AWS Regions. When Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you’ve created latency records for, determines which Region gives the user the lowest latency, and then selects a latency record for that Region. Route 53 responds with the value from the selected record, such as the IP address for a web server. A short boto3 sketch of creating such records follows.
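Latency records are ordinary records that additionally carry a Region and a SetIdentifier. As an illustration only (the hosted zone ID, record name, and IP addresses are placeholders), a boto3 sketch:

import boto3

route53 = boto3.client("route53")

# One latency record per Region for the same name; Route 53 answers with the
# record from the Region that gives the requester the lowest latency.
changes = []
for region, ip in [("us-east-1", "203.0.113.10"), ("eu-west-1", "203.0.113.20")]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "acme.example.com",
            "Type": "A",
            "SetIdentifier": f"app-{region}",  # must be unique among the latency records
            "Region": region,                  # Region whose latency Route 53 compares
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={"Comment": "latency-based routing records", "Changes": changes},
)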

Client-Side and Server-Side Errors Debugging:

If CloudFront requests an object from your origin, and the origin returns an HTTP 4xx or 5xx status code, there’s a problem with communication between CloudFront and your origin.

  • To determine the number of client-side errors captured in a given period, the Developer should look at the 4XX Error metric. To determine the number of server-side errors captured in a given period, the Developer should look at the 5XX Error metric; a sketch of retrieving these from CloudWatch follows.
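In CloudWatch, CloudFront exposes these under the AWS/CloudFront namespace as the 4xxErrorRate and 5xxErrorRate metrics (percentages of total requests). A rough boto3 sketch for pulling the last hour of client-side error rates (the distribution ID is a placeholder):

from datetime import datetime, timedelta
import boto3

# CloudFront publishes its metrics in us-east-1 with the "Global" Region dimension.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/CloudFront",
    MetricName="4xxErrorRate",   # use "5xxErrorRate" for server-side errors
    Dimensions=[
        {"Name": "DistributionId", "Value": "E1EXAMPLE"},  # placeholder distribution ID
        {"Name": "Region", "Value": "Global"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])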

References:
How CloudFront Processes and Caches HTTP 4xx and 5xx Status Codes from Your Origin
Troubleshooting Error Responses from Your Origin

S3 Access from within VPC resources:

When using a private subnet with no Internet connectivity, there are only two options for connecting to Amazon S3 (remember, S3 is a service with a public endpoint; it is not inside your VPC).

  • The first option is to enable Internet connectivity through either a NAT Gateway or a NAT Instance.
  • The other option is to enable a VPC endpoint for S3. The specific type of VPC endpoint for S3 is a Gateway endpoint. EC2 instances running in private subnets of a VPC can use the endpoint to enable controlled access to S3 buckets, objects, and API functions that are in the same Region as the VPC. You can then use an S3 bucket policy to indicate which VPCs and which VPC endpoints have access to your S3 buckets. A sketch of both steps follows this list.
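A rough boto3 sketch of both steps, creating the Gateway endpoint and then locking a bucket down to it (the VPC ID, route table ID, and bucket name are placeholders, and the Deny statement should be adapted to your own access model):

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# 1. Create a Gateway endpoint for S3 and attach it to the private subnet's route table.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",     # S3 service name in the VPC's Region
    RouteTableIds=["rtb-0123456789abcdef0"],      # placeholder route table ID
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# 2. Bucket policy that denies any request not arriving through that endpoint.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": endpoint_id}},
    }],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(bucket_policy))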

Example Diagram (VPC Gateway Endpoint): Using a gateway endpoint to access Amazon S3

Reference:
Gateway VPC endpoints

Enable API Caching in Amazon API Gateway to enhance responsiveness

You can enable API caching in Amazon API Gateway to cache your endpoint’s responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.

  • When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled. A boto3 sketch of enabling caching on a stage follows this list.
  • A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header.
  • The client receives the response directly from the integration endpoint instead of the cache, provided that the client is authorized to do so. This replaces the existing cache entry with the new response, which is fetched from the integration endpoint.
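Caching is configured per stage. A sketch using boto3 (the REST API ID and stage name are placeholders; the /*/*/ path applies the TTL override to all methods in the stage):

import boto3

apigateway = boto3.client("apigateway")

# Enable the stage cache cluster and set a 300-second TTL for every method.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",     # placeholder REST API ID
    stageName="prod",           # placeholder stage name
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},          # cache size in GB
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)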

To grant permission for a client, attach a policy of the following format to an IAM execution role for the user.
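A policy in that format looks like the following (the Region, account ID, API ID, stage name, and resource path in the ARN are placeholders to replace with your own values):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:InvalidateCache"
      ],
      "Resource": [
        "arn:aws:execute-api:region:account-id:api-id/stage-name/GET/resource-path-specifier"
      ]
    }
  ]
}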

This policy allows the API Gateway execution service to invalidate the cache for requests on the specified resource (or resources).

Reference:
Enabling API caching to enhance responsiveness

Enabling VPC Flow Logs

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you’ve created a flow log, you can retrieve and view its data in the chosen destination.

Flow logs can help you with a number of tasks, such as:
• Diagnosing overly restrictive security group rules
• Monitoring the traffic that is reaching your instance
• Determining the direction of the traffic to and from the network interfaces

As you can see in the diagram linked below, you can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored.

https://miro.medium.com/max/1200/0*qpgTpyyfjiacJTRp
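For example, a flow log for an entire VPC that publishes to CloudWatch Logs can be created with a call along these lines (the VPC ID, log group name, and IAM role ARN are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Capture ALL traffic (accepted and rejected) for the VPC and publish it to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],        # placeholder VPC ID
    ResourceType="VPC",                           # or "Subnet" / "NetworkInterface"
    TrafficType="ALL",                            # or "ACCEPT" / "REJECT"
    LogDestinationType="cloud-watch-logs",
    LogGroupName="my-vpc-flow-logs",              # placeholder log group name
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)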

Reference:

How to log, view and analyze network traffic flows using VPC Flow Logs?
