How to serve static HTML files using NGINX

My HTML folder structure is shown below; it is located in the /var/www folder.

html/
├── index.html
├── index.nginx-debian.html
├── one.html
└── sub
    └── one.html

To serve these files using NGINX, you can use either of the following NGINX configurations. I have given two versions and both work. The first version uses the root directive and the second uses the alias directive. Personally, I prefer alias, as it is easier to understand.

server {
    listen 80;
    server_name staticweb.com;

  # access_log /var/log/nginx/staticweb_access.log;
  # error_log  /var/log/nginx/staticweb_error.log;

    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    location /sub/ {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

}

Here is another version using alias. Please note that here you don't need to add a "/" after sub in the location block.

server {
    listen 80;
    server_name staticweb.com;

  # access_log /var/log/nginx/staticweb_access.log;
  # error_log  /var/log/nginx/staticweb_error.log;

    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    location /sub {
        alias /var/www/html/sub;
        index index.html;
        try_files $uri $uri/ =404;
    }

}
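The practical difference between the two directives is how the request URI is mapped to a file path. This can be modeled in a few lines of Python (a simplified sketch, not NGINX's actual resolution logic):

```python
# Simplified model of NGINX path mapping.
# With root, the full request URI is appended to the root path.
# With alias, the location prefix is replaced by the alias path.

def map_with_root(root, uri):
    """root /var/www/html;  ->  path = root + uri"""
    return root + uri

def map_with_alias(location, alias, uri):
    """location /sub { alias /var/www/html/sub; }  ->  prefix swapped out"""
    assert uri.startswith(location)
    return alias + uri[len(location):]

print(map_with_root("/var/www/html", "/sub/one.html"))
# /var/www/html/sub/one.html
print(map_with_alias("/sub", "/var/www/html/sub", "/sub/one.html"))
# /var/www/html/sub/one.html
```

With root, the location prefix stays part of the filesystem path, so the directory layout on disk must mirror the URI; with alias, the prefix is swapped for the alias path, which is why location /sub pairs with alias /var/www/html/sub.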


AWS Analytics Services: S3 Select, Athena, QuickSight, Glue, Redshift, EMR

AWS offers multiple services for analytics, ranging from simple S3 Select all the way to EMR, a managed Hadoop cluster. In this article we will look at all these offerings and understand the differences and their use cases:

S3 Select

  • Amazon S3 Select and S3 Glacier Select support only the SELECT SQL command.
  • Data in object storage has traditionally been accessed as whole entities: when you ask for a 5 gigabyte object, you get all 5 gigabytes. S3 Select and S3 Glacier Select allow you to use simple SQL expressions to pull out only the bytes you need from those objects.
  • This partial data retrieval ability is especially useful for serverless applications built with AWS Lambda.
  • Amazon Athena, Amazon Redshift, and Amazon EMR, as well as partners like Cloudera, Databricks, and Hortonworks, all support S3 Select. A sample S3 Select expression:

SELECT d.dir_name, d.files FROM S3Object[*] d
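As a sketch of how this is used programmatically, the parameters below are what a boto3 select_object_content call could take. The bucket and key names are made up for illustration, and the actual API call is left commented out because it requires real AWS credentials:

```python
# Parameters for s3.select_object_content (bucket/key names are hypothetical).
select_params = {
    "Bucket": "my-example-bucket",
    "Key": "data/listing.json",
    "ExpressionType": "SQL",
    "Expression": "SELECT d.dir_name, d.files FROM S3Object[*] d",
    "InputSerialization": {"JSON": {"Type": "DOCUMENT"}},
    "OutputSerialization": {"JSON": {}},
}

# With AWS credentials configured, you would run something like:
#   import boto3
#   s3 = boto3.client("s3")
#   response = s3.select_object_content(**select_params)
#   for event in response["Payload"]:
#       if "Records" in event:
#           print(event["Records"]["Payload"].decode())
```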

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. You don’t even need to load your data into Athena; it works directly with data stored in S3.

  • Supports only S3

Features

  • Serverless. Zero infrastructure. Zero administration.
  • Easy to query, just use standard SQL
  • Pay per query
  • Integrated with AWS Glue
  • Amazon Athena integrates with Amazon QuickSight for easy visualization.
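As a hedged sketch of the pay-per-query workflow, these are the kinds of parameters a boto3 start_query_execution call takes. The database, table, and bucket names are illustrative, and the call itself is commented out since it needs real AWS credentials:

```python
# Parameters for athena.start_query_execution (names are hypothetical).
query_params = {
    "QueryString": "SELECT region, COUNT(*) FROM web_logs GROUP BY region",
    "QueryExecutionContext": {"Database": "my_analytics_db"},
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}

# With credentials configured, you would run something like:
#   import boto3
#   athena = boto3.client("athena")
#   execution = athena.start_query_execution(**query_params)
#   # then poll athena.get_query_execution(QueryExecutionId=...)
#   # until the query succeeds, and read results from the S3 output location
```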

Amazon QuickSight

  • Amazon's BI offering, comparable to Power BI
  • Data sources: file upload (CSV, Excel), S3, Redshift, RDS, Salesforce, Athena
  • QuickSight is built with “SPICE” – a Super-fast, Parallel, In-memory Calculation Engine
  • Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. It can automatically scale to tens of thousands of users without any infrastructure to manage or capacity to plan for.
  • Scale from tens to tens of thousands of users
  • Embed BI dashboards in your applications
  • Ask questions of your data, receive answers
  • Pay-per-session pricing
  • Content
    • Analysis
    • Dashboard
    • Dataset
      • Can be from file upload (CSV, Excel), S3, Redshift, RDS, Salesforce, Athena

AWS Glue

AWS Glue is a fully managed ETL service. Glue has three main components:

  • The AWS Glue Data Catalog
    • The AWS Glue Data Catalog is your persistent metadata store.
    • It is a managed service that lets you store, annotate, and share metadata in the AWS Cloud in the same way you would in an Apache Hive metastore.
    • The AWS Glue Data Catalog is an index to the location, schema, and runtime metrics of your data.
    • The AWS Glue Data Catalog contains references to data that is used as sources and targets of your extract, transform, and load (ETL) jobs in AWS Glue.
    • The AWS Glue Data Catalog is compatible with, and can serve as a replacement for, an Apache Hive metastore
  • AWS Glue Crawlers and Classifiers
    • AWS Glue also lets you set up crawlers that can scan data in all kinds of repositories, classify it, extract schema information from it, and store the metadata automatically in the AWS Glue Data Catalog.
  • Fully Managed ETL
    • A fully managed ETL service that allows you to transform and move data to various destinations

AWS Glue provides both visual and code-based interfaces to make data integration easier.

  • AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development.
    • Serverless
    • Crawlers that infer schema
    • Autogen ETL scripts
  • AWS Glue provides a console and API operations to set up and manage your extract, transform, and load (ETL) workload.
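As a sketch of the crawler setup described above, these are the kinds of parameters a boto3 create_crawler call takes. The crawler name, IAM role ARN, database, and bucket path are all made up for illustration, and the calls are commented out since they need real AWS credentials:

```python
# Parameters for glue.create_crawler (name/role/bucket are hypothetical).
crawler_params = {
    "Name": "sales-data-crawler",
    "Role": "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "DatabaseName": "sales_catalog",
    "Targets": {"S3Targets": [{"Path": "s3://my-raw-data/sales/"}]},
}

# With credentials configured, you would run something like:
#   import boto3
#   glue = boto3.client("glue")
#   glue.create_crawler(**crawler_params)
#   glue.start_crawler(Name=crawler_params["Name"])
#   # the crawler infers the schema and stores it in the Data Catalog
```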

AWS Redshift

  • Cloud data warehouse
  • Features
    • Deepest integration with your data lake and AWS services
    • Best performance
    • Most scalable
    • Best Value
    • Easy to manage
    • Most secure and compliant
  • It allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.
  • Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes.
  • with Redshift Spectrum, it also makes it easy to analyze large amounts of data in its native format without requiring you to load the data
  • AQUA (Advanced Query Accelerator) is a new distributed and hardware-accelerated cache that enables Redshift to run up to 10x faster than any other enterprise cloud data warehouse.
  • You can load data into Amazon Redshift from a range of data sources including Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon EMR, AWS Glue, AWS Data Pipeline, or any SSH-enabled host on Amazon EC2 or on-premises.
  • There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection


AWS EMR

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.

  • Features
    • Easy to use
    • Low cost
    • Elastic
    • Reliable
    • Secure
    • Flexible
  • You can deploy your workloads to EMR using Amazon EC2, Amazon Elastic Kubernetes Service (EKS), or on-premises AWS Outposts.
  • Amazon EMR lets you focus on transforming and analyzing your data without having to worry about managing compute capacity or open-source applications, and saves you money. Using EMR, you can instantly provision as much or as little capacity as you like on Amazon EC2 and set up scaling rules to manage changing compute demand.

How these work together

Below is a sample architecture diagram showing how these different services can work together.

AWS Analytics

Comparison

  • S3 Select – Keyword: partial file fetch; Input: S3; Purpose: fetch selected data from a file (avoid loading the whole file)
  • Athena – Keyword: run ad-hoc queries; Input: S3; Purpose: query data in S3 for analytics
  • QuickSight – Keyword: dashboards; Input: upload (CSV, Excel), S3, Redshift, RDS, Salesforce, Athena; Purpose: load data and display dashboards & analytics
  • Glue – Keyword: ETL; Input: RDS, Redshift, DynamoDB, S3, MySQL; Purpose: ETL
  • Redshift – Keyword: data warehouse; Input: Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon EMR, AWS Glue, AWS Data Pipeline; Purpose: data warehouse
  • EMR – Keyword: big data (Hadoop); Purpose: big data processing

References

https://aws.amazon.com/blogs/aws/s3-glacier-select/

https://www.youtube.com/watch?v=Gn7lxQiSZPQ
https://www.youtube.com/watch?v=WnFYoiRqEHw

AWS RDS Multi AZ deployment and Read Replica

RDS makes it easy to set up, operate, and scale a relational database in the cloud. It comes with many features that make customers' lives easier. In this article, we will discuss the somewhat related concepts of Multi-AZ deployments and read replicas. This should help you answer Multi-AZ and read replica related questions.

Amazon RDS Multi-AZ deployment

  • In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.
  • Multi-AZ deployment provides high availability, durability and automatic failover support
  • RDS automatically provisions and manages a synchronous standby instance in a different AZ
  • The standby replica can’t be used to serve read traffic.
  • RDS automatically fails over to the standby so that database operations can resume quickly without administrative intervention
  • Failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance.
  • Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption

Multi AZ

Amazon RDS Read Replicas

  • Amazon RDS uses DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance.
  • The source DB instance becomes the primary DB instance.
  • Updates made to the primary DB instance are asynchronously copied to the read replica.
  • You can reduce the load on your primary DB instance by routing read queries from your applications to the read replica.
  • When you create a read replica, you first specify an existing DB instance as the source. Then Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. Amazon RDS then uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the primary DB instance. The read replica operates as a DB instance that allows only read-only connections. Applications connect to a read replica the same way they do to any DB instance. Amazon RDS replicates all databases in the source DB instance.
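The steps above can be sketched with the kind of parameters a boto3 create_db_instance_read_replica call takes. The instance identifiers and class are made up for illustration, and the call is commented out since it requires real AWS credentials:

```python
# Parameters for rds.create_db_instance_read_replica (identifiers are
# hypothetical).
replica_params = {
    "DBInstanceIdentifier": "myapp-db-replica-1",      # new read replica
    "SourceDBInstanceIdentifier": "myapp-db-primary",  # existing primary
    "DBInstanceClass": "db.t3.medium",
}

# With credentials configured, you would run something like:
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_instance_read_replica(**replica_params)
#   # RDS snapshots the source, creates the read-only instance, then keeps
#   # it up to date via asynchronous replication
```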

Read Replica

Comparison

Multi-AZ deployments
  • Purpose is availability
  • Synchronous replication (non-Aurora); asynchronous replication (Aurora)
  • Always spans at least two Availability Zones within a single region
  • Automatic failover to the standby (non-Aurora) or a read replica (Aurora) when a problem is detected

Multi-Region deployments
  • Purpose is DR & local performance
  • Asynchronous replication
  • Each region can have a Multi-AZ deployment
  • Aurora allows promotion of a secondary region to be the master

Read replicas
  • Purpose is scalability
  • Asynchronous replication
  • Can be within an Availability Zone, cross-AZ, or cross-Region
  • Can be manually promoted to a standalone database instance (non-Aurora) or to be the primary instance (Aurora)

References

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
https://aws.amazon.com/rds/features/multi-az/

What is the difference between AWS Direct Connect, AWS Storage Gateway, AWS Site-to-site VPN & AWS Direct Connect Gateway

Direct Connect is mainly used to establish a dedicated private connection between an on-premises network and the AWS network. This can provide higher bandwidth than your standard ISP. Storage Gateway, on the other hand, is for hybrid cloud storage. This service can help in situations where you want to save on storage costs by moving some or most of your data to the AWS Cloud while keeping low-latency access (as though you were accessing it on a local disk).

AWS Direct Connect

  • AWS Direct Connect is a networking service that provides an alternative to using the internet to connect to AWS.
  • Using AWS Direct Connect, data that would have previously been transported over the internet is delivered through a private network connection between your facilities and AWS
  • AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you create a private connection between AWS and your datacenter, office, or colocation environment. This can increase bandwidth throughput and provide a more consistent network experience than internet-based connections.

AWS Direct Connect is compatible with all AWS services accessible over the Internet, and is available in speeds starting at 50 Mbps and scaling up to 100 Gbps.

  • Supports only 802.1Q VLAN encapsulation
  • Types
    • Direct Connect colocation
    • Contract with a Direct Connect partner (LOA-CFA): the partner will help you connect a router from your data center, office, or colocation environment to an AWS Direct Connect location.
    • Connect directly at an AWS Direct Connect location: using 1 Gbps, 10 Gbps, or 100 Gbps ports
  • Supports IPv4 & IPv6
  • An actual physical setup is required
  • Pricing: port hours and data transfer

AWS Storage Gateway

AWS Storage Gateway is a set of hybrid cloud services that gives you on-premises access to virtually unlimited cloud storage. AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure

AWS Storage Gateway offers the following:

  1. File-based file gateways (Amazon S3 File and Amazon FSx File),
    • Amazon S3 File Gateway supports a file interface into Amazon S3. You can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB).
    • Amazon FSx File Gateway (FSx File) is a new file gateway type that provides low latency, and efficient access to in-cloud Amazon FSx for Windows File Server file shares from your on-premises facility
  2. Volume-based (Cached and Stored)
    • Volume Gateway – A volume gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers.
    • The volume gateway is deployed into your on-premises environment as a VM running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor
      • Cached volumes – You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
      • Stored volumes – If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3.
  3. Tape-based storage solutions
    • Tape Gateway – A tape gateway provides cloud-backed virtual tape storage. The tape gateway is deployed into your on-premises environment as a VM running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor

Storage gateway

AWS Site-to-Site VPN

Although the term VPN connection is a general term, in AWS terms, a VPN connection refers to the connection between your VPC and your own on-premises network. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections.

Components

  • VPN connection: A secure connection between your on-premises equipment and your VPCs.
  • VPN tunnel: An encrypted link where data can pass from the customer network to or from AWS. Each VPN connection includes two VPN tunnels which you can simultaneously use for high availability.
  • Customer gateway: An AWS resource which provides information to AWS about your customer gateway device.
  • Customer gateway device: A physical device or software application on your side of the Site-to-Site VPN connection.
  • Virtual private gateway: The VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
  • Transit gateway: A transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.

Your Site-to-Site VPN connection is either an AWS Classic VPN or an AWS VPN.

  • Each Site-to-Site VPN connection has two tunnels, with each tunnel using a unique virtual private gateway public IP address. It is important to configure both tunnels for redundancy. When one tunnel becomes unavailable (for example, down for maintenance), network traffic is automatically routed to the available tunnel for that specific Site-to-Site VPN connection.

How it works

A Site-to-Site VPN connection offers two VPN tunnels between a virtual private gateway or a transit gateway on the AWS side, and a customer gateway (which represents a VPN device) on the remote (on-premises) side.

VPN

Transit gateway

A transit gateway is a transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.

Transit GTW

Multiple Site-to-Site VPN connections with a transit gateway

The VPC has an attached transit gateway, and you have multiple Site-to-Site VPN connections to multiple on-premises locations.

multiple sites


  • Your Site-to-Site VPN connection on a transit gateway can support either IPv4 traffic or IPv6 traffic inside the VPN tunnels.
  • AWS-managed VPN is a hardware IPsec VPN that enables you to create an encrypted connection over the public Internet between your Amazon VPC and your private IT infrastructure. The VPN connection lets you extend your existing security and management policies to your VPC as if they were running within your own infrastructure.

VPN is a great connectivity option for businesses that are just getting started with AWS. It is quick and easy to set up. Keep in mind, however, that VPN connectivity utilizes the public Internet, which can have unpredictable performance and, despite being encrypted, can present security concerns.

  • You can monitor VPN tunnels using CloudWatch, which collects and processes raw data from the VPN service into readable, near real-time metrics.

AWS Direct Connect Gateway

  • AWS Direct Connect gateway is a relatively new service from AWS. Connecting from a single Direct Connect location to multiple AWS VPCs wasn't straightforward before. AWS Direct Connect gateway is aimed at making it easier to connect from a single Direct Connect location to multiple AWS regions or VPCs.
  • An AWS Direct Connect gateway is a grouping of virtual private gateways and private virtual interfaces that belong to the same AWS account.
  • A Direct Connect gateway is a grouping of virtual private gateways (VGWs) and private virtual interfaces (VIFs). A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any Region and access it from all other Regions.
  • AWS Direct Connect Gateway is a service built on top of the AWS Direct Connect. It allows AWS Direct Connect users to connect multiple VPCs in the same or different AWS regions to their Direct Connect connection.

Virtual private gateway associations

In the following diagram, the Direct Connect gateway enables you to use your AWS Direct Connect connection in the US East (N. Virginia) Region to access VPCs in your account in both the US East (N. Virginia) and US West (N. California) Regions.

Each VPC has a virtual private gateway that connects to the Direct Connect gateway using a virtual private gateway association. The Direct Connect gateway uses a private virtual interface for the connection to the AWS Direct Connect location. There is an AWS Direct Connect connection from the location to the customer data center.

Direct Connect gateway

Consider this scenario of a Direct Connect gateway owner (Account Z) who owns the Direct Connect gateway. Account A and Account B want to use the Direct Connect gateway. Account A and Account B each send an association proposal to Account Z. Account Z accepts the association proposals and can optionally update the prefixes that are allowed from Account A’s virtual private gateway or Account B’s virtual private gateway. After Account Z accepts the proposals, Account A and Account B can route traffic from their virtual private gateway to the Direct Connect gateway. Account Z also owns the routing to the customers because Account Z owns the gateway.

Direct connect gateway

References:

https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
https://aws.amazon.com/blogs/storage/aws-storage-gateway-in-2019/

What is the difference between AWS Shield, AWS WAF, Amazon GuardDuty, AWS Firewall Manager and Amazon Inspector

Amazon has lots of services, and some of them overlap. While attempting practice tests, I used to get confused about the exact differences between them and when to use which service. The article below is my notes about these services.

AWS Shield

  • Protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS.
  • AWS Shield is a managed service
  • Infrastructure (Layer 3 and 4) security
  • AWS Shield Standard is automatically enabled to all AWS customers at no additional cost.
  • There are two tiers of AWS Shield:
    • Standard
    • Advanced (with Shield Advanced, you get AWS WAF included)
  • When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks
  • For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced.
  • AWS Shield Advanced is available globally on
    • all Amazon CloudFront,
    • AWS Global Accelerator, and
    • Amazon Route 53 edge locations.

AWS WAF

  • Blocks common attack patterns, such as SQL injection or cross-site scripting.
  • Layer 7 (application layer) security
  • Can inspect HTTP/HTTPS traffic
  • Fully managed
  • OWASP Top 10 assessment
  • AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources
  • customize rules that filter out specific traffic patterns.
  • You can deploy AWS WAF on
    • Amazon CloudFront as part of your CDN solution,
    • the Application Load Balancer that fronts your web servers or
    • origin servers running on EC2,
    • Amazon API Gateway for your REST APIs, or
    • AWS AppSync for your GraphQL APIs.
  • Web data on site.
  • Traffic filtering

Amazon GuardDuty

  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3.
  • Detective service and not preventive
  • GuardDuty is a monitoring service that analyzes AWS CloudTrail management and Amazon S3 data events, VPC flow logs, and DNS logs to generate security findings for your account. Once GuardDuty is enabled, it starts monitoring your environment immediately. GuardDuty can be disabled at any time to stop it from processing all AWS CloudTrail events, VPC Flow Logs, and DNS logs.
  • GuardDuty is a Regional service, meaning any of the configuration procedures you follow on this page must be repeated in each region that you want to monitor with GuardDuty.
  • Analyzes and processes the following Data sources:
    • VPC Flow Logs,
    • AWS CloudTrail management event logs,
    • CloudTrail S3 data event logs, and
    • DNS logs.
  • It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within your AWS environment.

AWS Firewall Manager

  • AWS Firewall Manager is a security management service which allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.
  • Using AWS Firewall Manager, you can easily roll out AWS WAF rules for your Application Load Balancers, API Gateways, and Amazon CloudFront distributions. You can create AWS Shield Advanced protections for your Application Load Balancers, ELB Classic Load Balancers, Elastic IP Addresses and CloudFront distributions. You can also configure new Amazon Virtual Private Cloud (VPC) security groups and audit any existing VPC security groups for your Amazon EC2, Application Load Balancer (ALB) and ENI resource types. You can deploy AWS Network Firewalls across accounts and VPCs in your organization. Finally, with AWS Firewall Manager, you can also associate your VPCs with Amazon Route 53 Resolvers DNS Firewall rules.
  • Integrated with Organizations to enable AWS WAF rules across multiple AWS accounts. (Global rules, local rules/account wise can still be applied)

Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.

  • Select workloads to assess and define frequency
  • Supports only EC2 at the moment
  • Amazon Inspector provides you with security assessments of your applications’ settings and configurations while Amazon GuardDuty helps with analysing the entirety of your AWS accounts for potential threats.

References

Videos

  • https://www.youtube.com/watch?v=WI4EVgShkn0&t=1s
  • https://www.youtube.com/watch?v=eLQIVLTALDk
  • https://www.youtube.com/watch?v=lU_zPruIL9w&t=10s
  • https://www.youtube.com/watch?v=4P_J3OiH42g&t=16s

Documentation

  • https://aws.amazon.com/shield/faqs/
  • https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

AWS VPC Architecture Diagram and notes

VPC

VPC stands for virtual private cloud. VPC is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

Following are the key components of VPC

  1. Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.
  2. Internet gateway — A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.
  3. Route table — A set of rules, called routes, that are used to determine where network traffic is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table.
  4. Network ACL – It is a list of rules to determine whether traffic is allowed in or out of any subnet associated with the network ACL
  5. Subnet — A subnet is a logical partition within a VPC. It is a range of IP addresses in your VPC. You can launch AWS resources into a specified subnet. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet .
  6. VPC endpoint — Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  7. CIDR block —Classless Inter-Domain Routing. An internet protocol address allocation and route aggregation methodology. For more information, see Classless Inter-Domain Routing in Wikipedia.

When a new account is created, the following components are already created for you.

  1. Default VPC
  2. Internet Gateway
  3. Main route table
  4. Main network ACL: associated with all subnets
  5. Subnets in each of the availability zones
  6. Security Group

When you create a new VPC, no subnets are created automatically. However, in each region AWS automatically provides a default VPC with default subnets, a main route table, a default security group, and a default NACL.

  • Subnet level
    • NACL
  • Instance level
    • Security Groups

If you delete a VPC, the following are deleted along with it:

  • IGW
  • Subnets
  • Route tables

CIDR

There are different ways to represent a range of IP addresses. The shortest way is by CIDR notation, sometimes called slash notation. For example, the CIDR 172.16.0.0/16 includes all addresses from 172.16.0.0 to 172.16.255.255—a total of 65,536 addresses!

Valid IPv4 prefix lengths range from /0 to /32. Although you can specify any valid IP range for your VPC CIDR, it’s best to use one in the RFC 1918 range to avoid conflicts with public Internet addresses.

  • 10.0.0.0–10.255.255.255 (10.0.0.0/8)
  • 172.16.0.0–172.31.255.255 (172.16.0.0/12)
  • 192.168.0.0–192.168.255.255 (192.168.0.0/16)

192.168.0.0/24 – the /24 indicates that the first 24 bits (the first three octets) identify the network. When you manually create a subnet, its IP range must fall within the VPC IP range.

  • VPC 11.0.0.0/16
  • sub 11.0.1.0/24
  • /32 is a network mask of 255.255.255.255
  • /24 is a network mask of 255.255.255.0

Points to note:

192.168.0.254/32 = the single IP address 192.168.0.254

192.168.0.0/24 = the range of IPs from 192.168.0.0 to 192.168.0.255
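These prefix-length rules can be verified with Python's standard ipaddress module:

```python
import ipaddress

# A /16 covers 65,536 addresses, from x.y.0.0 to x.y.255.255.
net16 = ipaddress.ip_network("172.16.0.0/16")
print(net16.num_addresses)        # 65536
print(net16.network_address)      # 172.16.0.0
print(net16.broadcast_address)    # 172.16.255.255

# Prefix length <-> netmask.
print(ipaddress.ip_network("10.0.0.0/24").netmask)  # 255.255.255.0
print(ipaddress.ip_network("10.0.0.0/32").netmask)  # 255.255.255.255

# A /32 is a single host address.
print(ipaddress.ip_network("192.168.0.254/32").num_addresses)  # 1
```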

Subnets

A subnet is a logical partition within a VPC that holds your EC2 instances. A subnet lets you isolate instances from each other, control how traffic flows to and from your instances, and lets you organize them by function. For example, you can create one subnet for public web servers that need to be accessible from the Internet and create another subnet for database servers that only the web instances can access.

Each subnet has its own CIDR block that must be a subset of the VPC CIDR that it resides in. For example, if your VPC has a CIDR of 172.16.0.0/16, one of your subnets may have a CIDR of 172.16.100.0/24. This range covers 172.16.100.0–172.16.100.255, which yields a total of 256 addresses.
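This subnet-within-VPC relationship can be checked with the ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("172.16.0.0/16")
subnet = ipaddress.ip_network("172.16.100.0/24")

# The subnet CIDR must be a subset of the VPC CIDR.
print(subnet.subnet_of(vpc))   # True
print(subnet.num_addresses)    # 256 (172.16.100.0 - 172.16.100.255)
```

Note that of those 256 addresses, AWS reserves the first four and the last one in every subnet, so only 251 are actually usable for instances.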

A subnet can exist within only one availability zone

VPC and Subnet

aws ec2 create-subnet --vpc-id [VPC resource ID] --cidr-block 172.16.100.0/24 --availability-zone us-east-1a

Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet. Every subnet that you create is automatically associated with the main route table for the VPC. You can change the association, and you can change the contents of the main route table.

To connect to the internet, the following are needed:

  1. a connection to the internet (an Internet gateway)
  2. a route to it (in the route table)
  3. a public IP address

Internet Gateways

An Internet gateway gives instances the ability to receive a public IP address, connect to the Internet, and receive requests from the Internet.

When you create a VPC, it does not have an Internet gateway associated with it. You must create an Internet gateway and associate it with a VPC manually. You can associate only one Internet gateway with a VPC. But you may create multiple Internet gateways and associate each one with a different VPC.

Route Table

A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. The main route table is the route table that automatically comes with your VPC; it controls the routing for all subnets that are not explicitly associated with any other route table.

Your VPC has an implicit router, and you use route tables to control where network traffic is directed. Each subnet in your VPC must be associated with a route table, which controls the routing for the subnet (subnet route table). You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same subnet route table.

IP routing is destination-based, meaning that routing decisions are based only on the destination IP address, not the source. To enable Internet access for your instances, you must create a default route pointing to the Internet gateway.

Destination Target
172.31.0.0/16 Local
0.0.0.0/0 igw-0e538022a0fddc318
  1. A local route that allows instances in different subnets to communicate with each other.
  2. A default route that enables the subnet to access the Internet through an Internet gateway.

Any subnet whose route table contains a default route pointing to an Internet gateway (that is, a subnet whose traffic is routed to an Internet gateway) is called a public subnet. Contrast this with a private subnet, which has no such default route.
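
Destination-based, longest-prefix routing can be sketched in a few lines of Python. The route table below is the hypothetical one from the example above (the igw- ID is illustrative):

```python
import ipaddress

# Hypothetical route table: (destination CIDR, target) pairs
routes = [
    ("172.31.0.0/16", "local"),
    ("0.0.0.0/0", "igw-0e538022a0fddc318"),
]

def route_for(dest_ip: str) -> str:
    """Destination-based routing: the most specific (longest) matching prefix wins."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [
        (ipaddress.ip_network(cidr).prefixlen, target)
        for cidr, target in routes
        if ip in ipaddress.ip_network(cidr)
    ]
    return max(matches)[1]  # highest prefix length = most specific route

print(route_for("172.31.5.10"))   # local (intra-VPC traffic)
print(route_for("8.8.8.8"))       # igw-0e538022a0fddc318 (Internet-bound, via the default route)
```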

Security Groups

  • A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
  • Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.

Security groups are distributed firewalls, and they are stateful.

Basic Rules:

  • You can specify allow rules, but not deny rules.
  • You can specify separate rules for inbound and outbound traffic.
  • Security group rules enable you to filter traffic based on protocols and port numbers.
  • Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
  • You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group.
  • When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules.

Inbound Rules

When you create a security group, it doesn’t contain any inbound rules. Security groups use a default-deny approach, also called whitelisting, which denies all traffic that is not explicitly allowed by a rule.

Inbound rules specify what traffic is allowed into the attached ENI. An inbound rule consists of three required elements:

  • Source
  • Protocol
  • Port range
Source Protocol Port Range
198.51.100.10/32 TCP 22
0.0.0.0/0 TCP 443
  • SSH access only from IP 198.51.100.10
  • HTTPS access from the Internet. The prefix 0.0.0.0/0 covers all valid IP addresses.
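
The default-deny behavior described above can be simulated in Python. The two hypothetical rules below are the ones from the table (SSH from a single IP, HTTPS from anywhere):

```python
import ipaddress

# Hypothetical inbound rules, mirroring the table above
inbound_rules = [
    {"source": "198.51.100.10/32", "protocol": "tcp", "ports": range(22, 23)},
    {"source": "0.0.0.0/0", "protocol": "tcp", "ports": range(443, 444)},
]

def is_allowed(src_ip: str, protocol: str, port: int) -> bool:
    """Default-deny: traffic is allowed only if some rule matches it."""
    ip = ipaddress.ip_address(src_ip)
    return any(
        ip in ipaddress.ip_network(rule["source"])
        and protocol == rule["protocol"]
        and port in rule["ports"]
        for rule in inbound_rules
    )

print(is_allowed("198.51.100.10", "tcp", 22))  # True  (SSH from the allowed IP)
print(is_allowed("203.0.113.9", "tcp", 22))    # False (SSH from any other IP is denied)
print(is_allowed("203.0.113.9", "tcp", 443))   # True  (HTTPS from anywhere)
```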

Outbound Rules

Typically, the outbound rules of a security group are less restrictive than the inbound rules. When you create a security group, AWS automatically creates the default outbound rule listed below. Like an inbound rule, an outbound rule consists of three required elements:

  • Destination
  • Protocol
  • Port range
Destination Protocol Port Range
0.0.0.0/0 All All

Network Access Control Lists

Like a security group, a network access control list (NACL) functions as a firewall in that it contains inbound and outbound rules to allow traffic based on a source or destination CIDR, protocol, and port. Also, each VPC has a default NACL that can’t be deleted. But the similarities end there.

A NACL differs from a security group in many respects. Instead of being attached to an ENI, a NACL is attached to a subnet. The NACL associated with a subnet controls what traffic may enter and exit that subnet. This means that NACLs can’t be used to control traffic between instances in the same subnet. If you want to do that, you have to use security groups.

  • NACL rule order matters!
  • NACL rules are processed in ascending order of rule number. Traffic not matching a rule is evaluated against the next rule, and traffic matching no rule is denied by the implicit default rule.

Inbound rules determine what traffic is allowed to ingress the subnet. Each rule contains the following elements:

  1. Rule number
  2. Protocol
  3. Port range
  4. Source
  5. Action
Rule Number Protocol Port Range Source Action
100 All All 0.0.0.0/0 Allow
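
Because NACL rules are evaluated in ascending order with a first-match-wins policy, rule numbering changes behavior. A minimal sketch, using a hypothetical NACL that denies SSH from one CIDR before the catch-all allow rule:

```python
import ipaddress

# Hypothetical NACL: (rule number, protocol, port range, source CIDR, action).
# Rules are evaluated in ascending rule-number order; first match wins.
nacl_rules = [
    (90,  "tcp", range(22, 23), "198.51.100.0/24", "deny"),
    (100, "all", None,          "0.0.0.0/0",       "allow"),
]

def evaluate(src_ip: str, protocol: str, port: int) -> str:
    ip = ipaddress.ip_address(src_ip)
    for num, proto, ports, source, action in sorted(nacl_rules):
        proto_match = proto == "all" or proto == protocol
        port_match = ports is None or port in ports
        if proto_match and port_match and ip in ipaddress.ip_network(source):
            return action
    return "deny"  # the implicit default (*) rule denies everything else

print(evaluate("198.51.100.7", "tcp", 22))  # deny  (rule 90 matches first)
print(evaluate("203.0.113.9", "tcp", 22))   # allow (falls through to rule 100)
```

If the deny rule were numbered 110 instead of 90, rule 100 would match first and the SSH traffic would be allowed, which is exactly why rule order matters.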

Outbound Rules As you might expect, the outbound NACL rules follow an almost identical format as the inbound rules. Each rule contains the following elements:

  1. Rule number
  2. Protocol
  3. Port range
  4. Destination
  5. Action
Rule Number Protocol Port Range Destination Action
100 All All 0.0.0.0/0 Allow

Using Network Access Control Lists and Security Groups Together

You may want to use a NACL in addition to a security group so that you aren’t dependent upon users to specify the correct security group when they launch an instance. Because a NACL is applied to the subnet, the rules of the NACL apply to all traffic ingressing and egressing the subnet, regardless of how the security groups are configured.

When you make a change to a NACL or security group rule, that change takes effect immediately.

Network Address Translation Devices

Although network address translation occurs at the Internet gateway, there are two other resources that can also perform NAT.

  • NAT gateway
  • NAT instance

AWS calls these NAT devices. The purpose of a NAT device is to allow an instance to access the Internet while preventing hosts on the Internet from reaching the instance directly. This is useful when an instance needs to go out to the Internet to fetch updates or to upload data but does not need to service requests from clients.

When you use a NAT device, the instance needing Internet access does not have a public IP address allocated to it. Incidentally, this makes it impossible for hosts on the Internet to reach it directly. Instead, only the NAT device is configured with a public IP. Additionally, the NAT device has an interface in a public subnet.

Configuring Route Tables to Use NAT Devices Instances that use the NAT device must send Internet-bound traffic to it, while the NAT device must send Internet-bound traffic to an Internet gateway. Hence, the NAT device and the instances that use it must use different default routes. Furthermore, they must also use different route tables and hence must reside in separate subnets.

Route Table for private instance using NAT gateway

Destination Target
10.0.0.0/16 local
0.0.0.0/0 NAT Device

Route table for public instance

Destination Target
10.0.0.0/16 local
0.0.0.0/0 igw-0e538022a0fddc318

NAT Device

NAT Gateway

A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.

  • It automatically scales to accommodate your bandwidth requirements. You set it and forget it.
  • You create a public NAT gateway in a public subnet and must associate an elastic IP address with the NAT gateway at creation.
  • You can use a public NAT gateway to connect to other VPCs or your on-premises network. In this case, you route traffic from the NAT gateway through a transit gateway or a virtual private gateway.

When you create a NAT gateway, you must assign it an EIP. A NAT gateway can reside in only one subnet, which must be a public subnet for it to access the Internet. AWS selects a private IP address from the subnet and assigns it to the NAT gateway. For redundancy, you may create additional NAT gateways in different availability zones.

NAT Instance

A NAT instance is a normal EC2 instance that uses a preconfigured Linux-based AMI. You have to perform the same steps to launch it as you would any other instance. It functions like a NAT gateway in many respects, but there are some key differences.

Unlike a NAT gateway, a NAT instance doesn’t automatically scale to accommodate increased bandwidth requirements. Therefore, it’s important that you select an appropriately robust instance type. If you choose an instance type that’s too small, you must manually upgrade to a larger instance type.

Attribute NAT gateway NAT instance
Availability Highly available. NAT gateways in each Availability Zone are implemented with redundancy. Create a NAT gateway in each Availability Zone to ensure zone-independent architecture. Use a script to manage failover between instances.
Bandwidth Scale up to 45 Gbps. Depends on the bandwidth of the instance type.
Maintenance Managed by AWS. You do not need to perform any maintenance. Managed by you, for example, by installing software updates or operating system patches on the instance.
Performance Software is optimized for handling NAT traffic. A generic AMI that’s configured to perform NAT.
Cost Charged depending on the number of NAT gateways you use, duration of usage, and amount of data that you send through the NAT gateways. Charged depending on the number of NAT instances that you use, duration of usage, and instance type and size.
Type and size Uniform offering; you don’t need to decide on the type or size. Choose a suitable instance type and size, according to your predicted workload.
Public IP addresses Choose the Elastic IP address to associate with a public NAT gateway at creation. Use an Elastic IP address or a public IP address with a NAT instance. You can change the public IP address at any time by associating a new Elastic IP address with the instance.
Private IP addresses Automatically selected from the subnet’s IP address range when you create the gateway. Assign a specific private IP address from the subnet’s IP address range when you launch the instance.
Security groups You cannot associate security groups with NAT gateways. You can associate them with the resources behind the NAT gateway to control inbound and outbound traffic. Associate with your NAT instance and the resources behind your NAT instance to control inbound and outbound traffic.
Network ACLs Use a network ACL to control the traffic to and from the subnet in which your NAT gateway resides. Use a network ACL to control the traffic to and from the subnet in which your NAT instance resides.
Flow logs Use flow logs to capture the traffic. Use flow logs to capture the traffic.
Port forwarding Not supported. Manually customize the configuration to support port forwarding.
Bastion servers Not supported. Use as a bastion server.
Traffic metrics View CloudWatch metrics for the NAT gateway. View CloudWatch metrics for the instance.
Timeout behavior When a connection times out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet). When a connection times out, a NAT instance sends a FIN packet to resources behind the NAT instance to close the connection.
IP fragmentation Supports forwarding of IP fragmented packets for the UDP protocol. Does not support fragmentation for the TCP and ICMP protocols. Fragmented packets for these protocols will get dropped. Supports reassembly of IP fragmented packets for the UDP, TCP, and ICMP protocols.

A NAT gateway allows only response traffic to come back in; it does not allow the Internet to initiate connections to the private subnet.

Elastic IP Addresses

An elastic IP address (EIP) is a type of public IP address that AWS allocates to your account when you request it. Once AWS allocates an EIP to your account, you have exclusive use of that address until you manually release it. Outside of AWS, there’s no noticeable difference between an EIP and an automatically assigned public IP.

When you initially allocate an EIP, it is not bound to any instance. Instead, you must associate it with an ENI. You can move an EIP around to different ENIs, although you can associate it with only one ENI at a time. Once you associate an EIP with an ENI, it will remain associated for the life of the ENI or until you disassociate it.

Ingress traffic is composed of all the data communications and network traffic originating from external networks and destined for a node in the host network.

Egress traffic is the reverse of ingress traffic: all traffic directed toward an external network that originates from inside the host network.

If you want your instances to be accessible from the Internet, you must provision an Internet gateway, create a default route, and assign public IP addresses. Those are the basics. If you choose to use a NAT gateway or instance or a VPC peering connection, you’ll have to modify multiple route tables.

A NAT instance gives hosts in a private subnet within your VPC outbound access to the Internet; that is, it allows instances within your VPC to go out to the Internet. A bastion host, in contrast, allows inbound access from known IP addresses and authenticated users, providing access to internal private services for approved sources.

In short, a bastion host is used for incoming access, while a NAT instance provides outgoing access for your instances.

VPC Flow logs

VPC Flow Logs capture information about the IP traffic going to and from network interfaces in your VPC. The logs can be stored in S3 or sent to CloudWatch Logs.

  • CloudTrail : API Calls
  • Flow Logs : IP Traffic

All encompassing VPC design

Here are sample designs presented in one of the AWS presentations.

VPC Batteries include

Hope this is helpful !

Sample NGINX config files

An NGINX conf reference often comes in handy. Here I am copying a couple of NGINX conf setups for future reference.

Single WordPress

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        #server_name _;
        server_name example.com www.example.com;

        location / {
                #try_files $uri $uri/ =404;
                #try_files $uri/ index.php$args;
	        try_files $uri /index.php$is_args$args;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                include snippets/fastcgi-php.conf;

                # With php7.0-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                # With php7.0-fpm:
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

	location /wp-admin/ {
		index index.php;
		try_files $uri $uri/ /index.php?$args;
	}
}

WordPress multi site

Please note the additional details at the top and before the closing bracket of the server block. For installing a multisite setup, please check the post How to Install WordPress Multisite.

map $http_host $blogid {
    default 0;
    include /var/www/yourwebsite/wp-content/uploads/nginx-helper/map.conf;
}

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        #server_name _;
        server_name example.com www.example.com;

        location / {
                #try_files $uri $uri/ =404;
                #try_files $uri/ index.php$args;
	        try_files $uri /index.php$is_args$args;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                include snippets/fastcgi-php.conf;

                # With php7.0-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                # With php7.0-fpm:
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

	location /wp-admin/ {
		index index.php;
		try_files $uri $uri/ /index.php?$args;
	}
	
	location ~ ^/files/(.*)$ {
	  try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ;
	  access_log off; log_not_found off; expires max;
	}

	location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
		expires 24h;
		log_not_found off;
	}

	location ^~ /blogs.dir {
		internal;
		alias /var/www/fintrekking/wp-content/blogs.dir ;
		access_log off; log_not_found off;      expires max;
	}


	if (!-e $request_filename) {
		rewrite /wp-admin$ $scheme://$host$uri/ permanent;
		rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
		rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
	}
	
}

How to check and Change timezone in Ubuntu

When you create an Ubuntu server, by default the timezone is that of the region where the host is located, or one predefined by the cloud service provider. If you need to check the timezone, use the following command.

$ timedatectl
               Local time: Sun 2021-06-27 11:09:28 CEST
           Universal time: Sun 2021-06-27 09:09:28 UTC 
                 RTC time: Sun 2021-06-27 09:09:29     
                Time zone: Europe/Paris (CEST, +0200)  
System clock synchronized: yes                         
              NTP service: active                      
          RTC in local TZ: no 

You can check the timezone using following command as well

$ cat /etc/timezone
Europe/Paris

Now let us see how to change the timezone. First you need the long name of the timezone, which you can fetch using the following command.

$ timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Algiers
Africa/Bissau
Africa/Cairo
Africa/Casablanca
.
.
Asia/Kolkata
.
.
Pacific/Tongatapu
Pacific/Wake
Pacific/Wallis
UTC

To change the timezone, you need to run the command below.

$sudo timedatectl set-timezone your_time_zone

In the current scenario, I need to change it to IST (Indian Standard Time). Here is the output of timedatectl after changing the timezone.

$ sudo timedatectl set-timezone Asia/Kolkata
$ timedatectl
               Local time: Sun 2021-06-27 14:46:44 IST
           Universal time: Sun 2021-06-27 09:16:44 UTC
                 RTC time: Sun 2021-06-27 09:16:45    
                Time zone: Asia/Kolkata (IST, +0530)  
System clock synchronized: yes                        
              NTP service: active                     
          RTC in local TZ: no 

Installing NGINX on Ubuntu

First you need to update Ubuntu's package index, and then simply run sudo apt install as below.

sudo apt update

When NGINX is installed, it creates a folder called www inside the /var directory in Ubuntu. Here is how the folder looks before installation.

/var$ ll
total 52
drwxr-xr-x 13 root root 4096 Apr 30 23:26 ./
drwxr-xr-x 19 root root 4096 Jun 15 14:39 ../
drwxr-xr-x 2 root root 4096 Apr 15 2020 backups/
drwxr-xr-x 12 root root 4096 Jun 15 14:40 cache/
drwxrwxrwt 2 root root 4096 Apr 30 23:25 crash/
drwxr-xr-x 38 root root 4096 Jun 15 14:39 lib/
drwxrwsr-x 2 root staff 4096 Apr 15 2020 local/
lrwxrwxrwx 1 root root 9 Apr 30 23:15 lock -> /run/lock/
drwxrwxr-x 9 root syslog 4096 Jun 15 14:39 log/
drwxrwsr-x 2 root mail 4096 Apr 30 23:15 mail/
drwxr-xr-x 2 root root 4096 Apr 30 23:15 opt/
lrwxrwxrwx 1 root root 4 Apr 30 23:15 run -> /run/
drwxr-xr-x 6 root root 4096 Apr 30 23:36 snap/
drwxr-xr-x 4 root root 4096 Apr 30 23:17 spool/
drwxrwxrwt 6 root root 4096 Jun 15 14:39 tmp/

Note that there is no www directory, as we have not installed NGINX yet. If you already have another server such as Apache installed, /var/www will already be there.
Now let us install NGINX.

sudo apt install nginx
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  fontconfig-config fonts-dejavu-core libfontconfig1 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libtiff5 libwebp6
  libxpm4 nginx-common nginx-core
Suggested packages:
  libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
  fontconfig-config fonts-dejavu-core libfontconfig1 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libtiff5 libwebp6
  libxpm4 nginx nginx-common nginx-core
0 upgraded, 17 newly installed, 0 to remove and 60 not upgraded.
Need to get 2431 kB of archives.
After this operation, 7891 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://us-west-1.ec2.archive.ubuntu.com/ubuntu focal/main amd64 fonts-dejavu-core all 2.37-1 [1041 kB]
Get:2 http://us-west-1.ec2.archive.ubuntu.com/ubuntu focal/main amd64 fontconfig-config all 2.13.1-2ubuntu3 [28.8 kB]
Get:3 http://us-west-1.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libfontconfig1 amd64 2.13.1-2ubuntu3 [114 kB]
Get:4 http://us-west-1.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libjpeg-turbo8 amd64 2.0.3-0ubuntu1.20.04.1 [117 kB]
Get:5 http://us-west-1.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libjpeg8 amd64 8c-2ubuntu8 [2194 B]
......
Setting up libnginx-mod-stream (1.18.0-0ubuntu1.2) ...
Setting up libtiff5:amd64 (4.1.0+git191117-2ubuntu0.20.04.1) ...
Setting up libfontconfig1:amd64 (2.13.1-2ubuntu3) ...
Setting up libgd3:amd64 (2.2.5-5.2ubuntu2) ...
Setting up libnginx-mod-http-image-filter (1.18.0-0ubuntu1.2) ...
Setting up nginx-core (1.18.0-0ubuntu1.2) ...
Setting up nginx (1.18.0-0ubuntu1.2) ...
Processing triggers for ufw (0.36-6) ...
Processing triggers for systemd (245.4-4ubuntu3.6) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...

After installation, /var/www will be created. Here is how it looks after installation.

var$ ll
total 56
drwxr-xr-x 14 root root 4096 Jun 15 14:42 ./
drwxr-xr-x 19 root root 4096 Jun 15 14:39 ../
drwxr-xr-x 2 root root 4096 Apr 15 2020 backups/
drwxr-xr-x 12 root root 4096 Jun 15 14:40 cache/
drwxrwxrwt 2 root root 4096 Apr 30 23:25 crash/
drwxr-xr-x 39 root root 4096 Jun 15 14:42 lib/
drwxrwsr-x 2 root staff 4096 Apr 15 2020 local/
lrwxrwxrwx 1 root root 9 Apr 30 23:15 lock -> /run/lock/
drwxrwxr-x 10 root syslog 4096 Jun 15 14:42 log/
drwxrwsr-x 2 root mail 4096 Apr 30 23:15 mail/
drwxr-xr-x 2 root root 4096 Apr 30 23:15 opt/
lrwxrwxrwx 1 root root 4 Apr 30 23:15 run -> /run/
drwxr-xr-x 6 root root 4096 Apr 30 23:36 snap/
drwxr-xr-x 4 root root 4096 Apr 30 23:17 spool/
drwxrwxrwt 6 root root 4096 Jun 15 14:39 tmp/
drwxr-xr-x 3 root root 4096 Jun 15 14:42 www/

NGINX registers itself as a service with ufw after installation.

$ sudo ufw app list
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH

Let us allow HTTP traffic using the 'Nginx HTTP' profile with the following command.

$sudo ufw allow 'Nginx HTTP'
Rules updated
Rules updated (v6)

Finally, to ensure NGINX is running fine, you can check the status using the following command.

$ sudo service nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-06-15 14:42:10 UTC; 2min 35s ago
Docs: man:nginx(8)
Main PID: 2030 (nginx)
Tasks: 2 (limit: 1160)
Memory: 5.3M
CGroup: /system.slice/nginx.service
├─2030 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─2031 nginx: worker process

Jun 15 14:42:10 ip-172-31-7-66 systemd[1]: Starting A high performance web server and a reverse proxy server...
Jun 15 14:42:10 ip-172-31-7-66 systemd[1]: Started A high performance web server and a reverse proxy server.

Other useful commands (note that enabling and disabling the service at boot is done through systemctl):

$sudo service nginx status
$sudo service nginx start
$sudo service nginx stop
$sudo service nginx restart
$sudo service nginx reload
$sudo systemctl enable nginx
$sudo systemctl disable nginx

Now you can access your server, and it will display the default NGINX welcome page.

Hope this is helpful. Please let me know if you have any query or getting any unexpected error.

Getting Started with django with ubuntu

The tag line for django says "The web framework for perfectionists with deadlines." and most would agree. django is ridiculously fast to implement, it is fully loaded with lots of utilities, it is secure, and at the same time it is highly scalable. Some of the most popular sites, like Instagram and Pinterest, are built with django.

Let us get our hands dirty by getting started with django instead of talking about django features.

Which django version should I use ?

Here are the details about django releases. You can choose the latest version; however, I will stick with the LTS version.

Currently I am using Django 2.2 for most of my projects.

Which Python version should I use ?

This is the most common question, and the following table will clarify it:

Django version Python versions
  1.11 2.7, 3.4, 3.5, 3.6
  2.0 3.4, 3.5, 3.6, 3.7
  2.1, 2.2 3.5, 3.6, 3.7

In this tutorial we are going to use Python version 3.6 (3.6.4 to be very specific) and django version 1.11.

Install django version 1.11

$ sudo python3.6 -m pip install django==1.11
Collecting django==1.11
Downloading https://files.pythonhosted.org/packages/47/a6/078ebcbd49b19e22fd560a2348cfc5cec9e5dcfe3c4fad8e64c9865135bb/Django-1.11-py2.py3-none-any.whl (6.9MB)
100% |████████████████████████████████| 6.9MB 339kB/s 
Requirement already satisfied: pytz in /usr/local/lib/python3.6/site-packages (from django==1.11) (2017.3)
Installing collected packages: django
Successfully installed django-1.11

You can check the version as below

$ python3.6 -m django --version
1.11

Creating django project

$ django-admin startproject mysite

Now cd into the directory "mysite" and run the following command:

$python3.6 manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).

You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.

July 18, 2018 - 17:43:11
Django version 1.11, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
[18/Jul/2018 17:43:31] "GET / HTTP/1.1" 200 1716
Not Found: /favicon.ico

Now you can access django website at location http://127.0.0.1:8000/

here is the output

Creating application

Now that your environment – a “project” – is set up, you’re set to start doing work.

$ python3.6 manage.py startapp polls

After creating the polls app, please add the following code to polls/views.py:


from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, world. You're at the polls index.")

Now create a file polls/urls.py and add the following code:


from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.index, name='index'),
]

Now add the following code to mysite/urls.py:


from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    url(r'^polls/', include('polls.urls')),
    url(r'^admin/', admin.site.urls),
]

Here is the output

Hope this is helpful…