Subnetting

Content
1. Subnetting
2. CIDR (Classless Inter-Domain Routing)
3. Variable Length Subnet Mask (VLSM)
4. Who manages IP addresses?
5. Why do we need subnetting?
6. How is an IP address assigned to a device?

Subnetting
Subnetting is the process of dividing a large network into smaller networks
based on the layer 3 (Network Layer) IP address.
A subnet is a logical subdivision of an IP network. The practice of dividing a
network into two or more networks is called subnetting.
Subnetting provides a method of allocating part of the host address space to
network addresses, which creates more networks.
Subnetting allows an organization to add sub-networks without needing to
acquire new IP addresses from its ISP.
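
As a rough illustration, this split can be computed with Python's standard ipaddress module; the 192.168.0.0/24 block below is just an example private network, not an address from the text.

    # Divide one /24 network into four /26 subnets (two extra network bits).
    import ipaddress

    network = ipaddress.ip_network("192.168.0.0/24")
    for subnet in network.subnets(prefixlen_diff=2):
        print(subnet, "-", subnet.num_addresses, "addresses")
    # 192.168.0.0/26 ... 192.168.0.192/26, 64 addresses each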

Benefits of Subnetting
It reduces network traffic by reducing the size of broadcast domains.
It enables users to access a work network from their homes.
Subnetting helps in reducing network traffic and network complexity.
It increases the security options in the network.
Subnetting decentralizes network addresses, meaning the administrator of the
network can monitor each subnet.

Classless Inter Domain Routing (CIDR)
Classless Inter-Domain Routing is a method for allocating IP addresses and
routing IP packets in a network.
CIDR was introduced in 1993 by the Internet Engineering Task Force (IETF).
It replaced the previous classful addressing method for designing networks on
the Internet.
Its goal was to slow the rapid exhaustion of IPv4 addresses.

An IP address consists of two groups of bits:
The most significant bits are the network address or network prefix, which
identifies a whole network or subnet.
The least significant bits are the host address, which specifies a particular
interface of a host on that network.
This division is used in CIDR to perform subnetting.
CIDR allocates address space to ISPs and to end users on any address-bit
boundary.
CIDR is based on the variable-length subnet masking (VLSM) technique.
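
The prefix/host split can be inspected directly; here is a minimal sketch using Python's ipaddress module with 203.0.113.17/26, a documentation address chosen only for illustration.

    # Split a CIDR address into its network prefix and host space.
    import ipaddress

    iface = ipaddress.ip_interface("203.0.113.17/26")
    print(iface.network)                # 203.0.113.0/26 (26 network bits)
    print(iface.network.netmask)        # 255.255.255.192
    print(iface.network.num_addresses)  # 64 = 2 ** (32 - 26) host addresses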

Variable Length Subnet Mask (VLSM)
The variable-length subnet mask (VLSM) technique is used in CIDR.
VLSM is a process of dividing an IP address space into subnets of different
sizes without wasting IP addresses, for example 192.168.1.160/30.
VLSM is closely related to CIDR.
VLSM allows different subnets of a network to have different subnet masks.
CIDR allows routers to group routes together to reduce the amount of routing
information carried by the core routers, whereas VLSM helps optimise the
available address space.

Who manages IP addresses?
The Internet Assigned Numbers Authority (IANA) manages IP addresses.
It defines address space allocations globally and formed five Regional
Internet Registries (RIRs) to allocate IP address blocks to ISPs such as
BSNL, Airtel, and Vodafone.
The five Regional Internet Registries (RIRs) are:
RIPE NCC (Réseaux IP Européens Network Coordination Centre) – Europe
APNIC (Asia Pacific Network Information Centre) – Asia Pacific
AFRINIC (African Network Information Centre) – Africa
ARIN (American Registry for Internet Numbers) – North America
LACNIC (Latin America and Caribbean Network Information Centre) – Latin America

Who manages IP addresses?
If a device wants to connect to the Internet, it requests an IP address from
its ISP.
The ISP gets a range of IP addresses from the Internet Assigned Numbers
Authority (IANA) through one of the five Regional Internet Registries,
according to the location of the device.
In this way, the device gets an IP address from that range of IP addresses.

Why do we need Subnetting?
Let's take an example: an Internet Service Provider (ISP) requires 150 IP
addresses to install a network, so the ISP requests IP addresses from APNIC.
APNIC provides the address block 193.172.16.0/24 to the ISP.
As this is a Class C address, and a Class C network has 254 usable IP
addresses in total,
the ISP required 150 IP addresses while APNIC provided 254. This leads to a
wastage of 104 IP addresses.
To stop this wastage of IP addresses, a method known as subnetting was
introduced.

IP addresses are very costly, so to stop the wastage of IP addresses we do
subnetting.
For example, a Class C network has 256 IP addresses in total (254 of them
usable).
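
The arithmetic of the example can be checked quickly; this sketch just reproduces the numbers given above.

    import ipaddress

    block = ipaddress.ip_network("193.172.16.0/24")    # the /24 from APNIC
    usable = block.num_addresses - 2   # 254: network and broadcast excluded
    print("usable:", usable, "wasted:", usable - 150)  # wasted: 104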

Assigning an IP address
An IP address can be assigned to a device in two ways:
1. Static IP address
2. Dynamic IP address

Static IP address
A static IP address is an IP address that is manually configured for a device.
A static IP address is called static because it doesn’t change.
Static IP addresses are also known as fixed IP addresses or dedicated IP addresses.
Dynamic IP address
A dynamic IP address is an IP address that is automatically assigned to each device in a network.
This automatic assignment of IP addresses is done by a DHCP server.
A dynamic IP address is called dynamic because it may change on future connections to the network.

 

AWS SERVICES

What is AWS?

Amazon Web Services (AWS), a subsidiary of Amazon.com, offers
cloud-computing services.
Cloud computing, or simply "the cloud", means using a network of remote
servers hosted on the Internet to store, manage, and process data, rather
than a local server or a personal computer.
Cloud computing provides on-demand access to a shared pool of
configurable computing resources (e.g., computer networks, servers,
storage, applications, and services).

AWS Global infrastructure

● AWS locations: regions and Availability Zones
● 43 Availability Zones
● 16 regions
● 11 more Availability Zones and 4 more regions planned to launch
● Placement of data and resources in multiple locations.
● Regions are isolated from each other.

Accessing the platform

To access AWS cloud services, you can use:
● AWS Management Console
● AWS Command Line Interface
● AWS Software Development Kits (SDKs)

AWS Management Console

● It is a web application for managing AWS cloud services. It provides an
interactive user interface. Each service has its own console, which can
be accessed through the AWS Management Console.
● It also provides information about the account and billing.

AWS Command Line Interface

● It is a unified tool used to manage AWS cloud services.
● With just one tool to download and configure, you can control
multiple services from the command line and automate them using
scripts.

AWS Software Development Kits

● They provide application programming interfaces that interact with the
web services that fundamentally make up the AWS platform.
● SDKs provide support for many different programming languages.
● SDKs can take the complexity out of coding by providing
programmatic access to many of the services.
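
For example, with the AWS SDK for Python (boto3), a few lines are enough to call a service; this minimal sketch assumes boto3 is installed and AWS credentials are already configured on the machine.

    # List the S3 buckets in the account using the Python SDK (boto3).
    import boto3

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])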

Elastic Load Balancing

  • Elastic Load Balancing is a web service that distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones.
  • This increases the fault tolerance of users' applications.
  • There are two types of load balancer: the Application Load Balancer and the Classic Load Balancer.
  • The load balancer serves as a single point of contact for clients.
  • Users can configure health checks, which are used to monitor the health of the registered instances so that the load balancer sends requests only to the healthy instances.
  • Users can also offload the work of encryption and decryption to their load balancer so that their instances can focus on their main work.

Elastic Load Balancing supports two types of load balancers:

  • Application Load Balancers
  • Classic Load Balancers

The load balancer can be chosen according to the user's needs.

Users can create, access, and manage their own load balancers using any of the following interfaces:

  • AWS Management Console
  • AWS Command Line interface (AWS-CLI)
  • AWS SDKs
  • Query API
  • AWS Management Console

Provides a web interface that can be used to access Elastic Load Balancing.

  • AWS Command Line Interface (AWS CLI)

Provides commands for a broad set of AWS services, including Elastic Load Balancing.

It is supported on Windows, Mac, and Linux.

  • AWS SDKs

Provides language-specific APIs.

SDKs also manage connection details, such as calculating signatures, handling request retries, and error handling.

  • Query API

Provides low-level API actions using HTTPS requests.

It provides the most direct way to access Elastic Load Balancing, but it requires that the user's application handle low-level details such as generating the hash to sign the request and handling errors.

Elastic Load Balancing works with the following services to increase the availability and scalability of users' applications:

  • Amazon EC2
  • Amazon ECS
  • Amazon Route 53
  • Amazon CloudWatch
  • Auto Scaling
  • Amazon EC2

Provides virtual servers to run users' applications in the cloud.

Users can configure their load balancer to route traffic to their EC2 instances.

  • Amazon ECS

It enables users to run, stop, and manage Docker containers on a cluster of EC2 instances.

Users can configure their load balancer to route traffic to their containers.

  • Amazon Route 53

It provides a reliable and cost-effective way to route viewers to websites by translating domain names into their corresponding IP addresses.

AWS assigns URLs to your resources, such as load balancers.

Amazon Route 53 helps to get a website or web application up and running.

  • Amazon CloudWatch

It enables users to monitor their load balancer and take action as needed.

For example, users can monitor the CPU usage and disk reads and writes of their Amazon EC2 instances and then use this data to determine whether to launch additional instances to handle the increased load.

  • Auto Scaling

If users enable Auto Scaling with Elastic Load Balancing,

then instances that are launched by Auto Scaling are automatically registered with the load balancer,

and instances that are terminated by Auto Scaling are automatically deregistered from the load balancer.
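
As a hedged sketch of this pairing, boto3's Auto Scaling client can attach a target group to an Auto Scaling group so that launched instances register automatically; the group name and target group ARN below are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")
    # Instances launched by "my-asg" will now register with the target group.
    autoscaling.attach_load_balancer_target_groups(
        AutoScalingGroupName="my-asg",
        TargetGroupARNs=["arn:aws:elasticloadbalancing:region:account:targetgroup/my-targets/abc123"],
    )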

How Does Elastic Load Balancing Work?

  • A load balancer accepts incoming traffic from clients and routes requests to its registered EC2 instances in one or more Availability Zones.
  • The load balancer then monitors the health of its registered instances and routes traffic only to the healthy instances.
  • Users can configure their load balancer by specifying one or more listeners to accept incoming traffic.
  • A listener is a process that checks for connection requests.
  • It is configured with a protocol and port number for connections from clients to the load balancer and a protocol and port number for connections from the load balancer to the instances.
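
The listener model can be sketched with boto3's elbv2 client; all resource IDs below (subnets, VPC, instance) are placeholders, and error handling is omitted.

    import boto3

    elbv2 = boto3.client("elbv2")

    # An Application Load Balancer spanning two Availability Zones.
    lb = elbv2.create_load_balancer(
        Name="my-load-balancer",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        Type="application",
    )["LoadBalancers"][0]

    # A target group, with one EC2 instance registered as a target.
    tg = elbv2.create_target_group(
        Name="my-targets", Protocol="HTTP", Port=80, VpcId="vpc-12345678",
    )["TargetGroups"][0]
    elbv2.register_targets(TargetGroupArn=tg["TargetGroupArn"],
                           Targets=[{"Id": "i-0123456789abcdef0"}])

    # The listener: HTTP on port 80 from clients, forwarded to the targets.
    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancerArn"], Protocol="HTTP", Port=80,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": tg["TargetGroupArn"]}],
    )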

Elastic Load Balancing supports two types of load balancer:

  • Classic Load Balancer:

registers instances with the load balancer directly.

  • Application Load Balancer:

registers instances as targets in a target group and routes traffic to the target group.

AWS Managed Services

Managed Services


  • AWS Managed Services include AWS Health checks.
  • AWS Health provides personalized information.

AWS Health

AWS Health provides ongoing visibility into the state of AWS resources, services, and accounts.

AWS Health provides relevant and timely information to help manage events in progress, as well as to be aware of and prepare for planned activities.

AWS Management Console

  • The AWS Management Console is a web application for managing Amazon Web Services.
  • The console provides an intuitive user interface for performing many AWS tasks such as working with Amazon S3 buckets, launching and connecting to Amazon EC2 instances, setting Amazon CloudWatch alarms, and so on.

AWS Command Line Interface

  • The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services.
  • With minimal configuration, users can start using all of the functionality provided by the AWS Management Console from a terminal program, such as:
  • Linux shells
  • Windows command line
  • Remotely (for example, on Amazon EC2 instances)

AWS Tools for Windows PowerShell

  • The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are PowerShell modules that are built on the functionality exposed by the AWS SDK for .NET.
  • The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are flexible in how they enable the user to handle credentials, including support for the AWS IAM infrastructure.

Cloud Computing

Cloud computing is a type of Internet-based computing that provides the delivery of hosted services over the Internet.

It provides a network of remote servers to store, manage, and process data over the Internet.

Companies offering these computing services are called cloud providers, and they charge for cloud computing services based on usage.

Example: Microsoft Windows Azure, Amazon Web Services, Huawei GalaX cloud, etc.

2

Cloud Services

Cloud services are broadly divided into three categories:

1. Cloud Software as a Service (SaaS)

2. Cloud Platform as a Service (PaaS)

3. Cloud Infrastructure as a Service (IaaS)

These three models are independent of each other.

Cloud Software as a Service (SaaS)

Software as a Service is a way of delivering applications over the Internet, as a service. The provider manages access to the application, including security, availability, and performance.

SaaS customers have no hardware or software to buy, install, maintain, or update.

Applications are easy to access with just an Internet connection.

Example: Google Apps, Salesforce, Workday, Cisco WebEx.

Cloud Platform as a Service (PaaS)

In the Platform as a Service model, a cloud provider delivers hardware and software tools as a service to its users, which are used for application development. A PaaS provider hosts the hardware and software on its own infrastructure.

PaaS allows developers to frequently change or upgrade operating system features. Users access PaaS through a web browser. PaaS providers charge for that access on a per-use basis or as a monthly fee for access to the platform.

Examples of PaaS vendors are Salesforce.com's Force.com, Google, and Amazon.

PaaS platforms for the development and management of software include Appear IQ, Amazon Web Services (AWS) Elastic Beanstalk, and Google App Engine.

Cloud Infrastructure as a Service (IaaS)

This cloud model offers infrastructure resources such as hardware, software, servers, and storage.

Users can use these resources over the Internet and deploy applications on them.

IaaS platforms offer highly scalable resources that can be adjusted on demand.

Example: Amazon Web Services (AWS), Windows Azure, Google Compute Engine.

Advantages of Cloud Computing Services

1. Reduced Capital Cost

2. Device and Location Independence

3. Scalability and Elasticity

4. Agility

5. Maintenance

Cloud computing deployment models are:

1. Cloud-based deployments

2. Hybrid deployments

Cloud-based deployment

A cloud-based application is fully deployed in the cloud.

All parts of the application run in the cloud.

Applications have either been created in the cloud or have been migrated from an existing infrastructure.

This migration is done to take advantage of the benefits of cloud computing. Applications can be built on low-level infrastructure pieces or can use higher-level services.

Hybrid deployment

A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud.

It is used to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to internal systems.

Features of Cloud Computing

  • On-demand computing resources
  • Elastic resources—scale up or down quickly and easily to meet demand
  • Metered service, so you only pay for what you use
  • Self-service—all the IT resources you need, with self-service access

Cloud Infrastructure as a Service

In the 2016 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Gartner placed Amazon Web Services in the "Leaders" quadrant for the sixth straight year, naming AWS as having both the furthest completeness of vision and the highest ability to execute.


AWS – Amazon CloudFront

CloudFront


  • Amazon CloudFront is a web service.
  • It quickly distributes user content over a worldwide network of data centers.
  • It increases network performance by reducing latency (time delays).
  • CloudFront is compliant with HIPAA and PCI DSS.

Amazon CloudFront

Amazon CloudFront is a web service that speeds up distribution of static and dynamic web content, such as .html, .css, .php, and image files, to end users.

CloudFront delivers user content through a worldwide network of data centers called edge locations.

When a user requests content that is being served with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately.

If the content is not in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server).

Amazon CloudFront content delivery:

1. Configuring CloudFront to deliver content.

2. How does CloudFront deliver content to users?

  • Configuring CloudFront to deliver content:

1. First, configure your origin servers, from which CloudFront gets your files for distribution from CloudFront edge locations all over the world.

An origin server stores the original, definitive version of your objects. If you are serving content over HTTP, the origin server is either an Amazon S3 bucket or an HTTP server, such as a web server.

The HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage.

These servers are also known as custom origins.

2. Then upload your files to your origin servers.

Your files, also known as objects, typically include web pages, images, and media files, and can be served over HTTP or a supported version of Adobe RTMP, the protocol used by Adobe Flash Media Server.

3. Then create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application.

4. CloudFront assigns a domain name to your new distribution and displays it in the CloudFront console or returns it in the response to a programmatic request, for example, an API request.

5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations—collections of servers in geographically dispersed data centers where CloudFront caches copies of your objects.


  • How does CloudFront deliver content to users?

After configuring CloudFront to deliver your content, here is what happens when users request your objects:

1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.

2. DNS routes the request to the CloudFront edge location that can best serve the user's request, typically the nearest CloudFront edge location in terms of latency.

3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:

a. CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for HTML files.

b. The origin server sends the files back to the CloudFront edge location.

c. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
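
The cache-then-origin flow in step 3 can be pictured as a lookup with a fallback; this is only a toy model of the idea, not the real service.

    # Toy edge cache: serve from cache on a hit; on a miss, fetch from the
    # origin, store a copy, then serve it.
    edge_cache = {}

    def fetch_from_origin(path):
        return f"<contents of {path}>"     # stand-in for an S3/HTTP origin

    def serve(path):
        if path in edge_cache:             # cache hit: answer immediately
            return edge_cache[path]
        content = fetch_from_origin(path)  # cache miss: go to the origin
        edge_cache[path] = content         # keep it for the next requester
        return content

    serve("/index.html")   # miss -> origin
    serve("/index.html")   # hit  -> cache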


Amazon CloudFront

CloudFront Regional Edge Caches bring more of your content closer to viewers.

They also retain content that is not popular enough to remain at a CloudFront edge location.

This helps to improve performance for viewers while lowering the operational burden and cost of scaling origin resources.

This feature helps with all types of content, particularly content that tends to become less popular over time.

Features of CloudFront Regional Edge Caches:

  • There is no need to make any changes to CloudFront distributions; Regional Edge Caches are enabled by default for all CloudFront distributions.
  • There is no additional cost for using this feature.
  • Regional Edge Caches have feature parity with edge locations. For example, a cache invalidation request removes an object from both edge caches and Regional Edge Caches before it expires (see the sketch after this list).
  • Regional Edge Caches are available for custom origins; Amazon S3 origins are not supported.
  • Dynamic content, as determined at request time (a cache behavior configured to forward all headers), does not flow through the Regional Edge Caches but goes directly to the origin.
  • Users can measure the performance improvements from this feature by using the cache-hit ratio metrics available in the console.
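
A cache invalidation request itself is a single API call; this hypothetical sketch uses boto3's cloudfront client, with a placeholder distribution ID.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId="E1EXAMPLE12345",           # placeholder
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/images/*"]},
            "CallerReference": str(time.time()),   # unique per request
        },
    )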

Amazon Web Services (AWS) publishes its current IP address ranges in JSON format.

To view the current ranges, download the .json file.

To maintain history, save successive versions of the .json file.
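
Fetching and filtering the published ranges needs only the standard library; this sketch pulls the public ip-ranges feed and keeps the CloudFront entries.

    import json
    import urllib.request

    URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
    with urllib.request.urlopen(URL) as response:
        data = json.load(response)

    cloudfront_ranges = [p["ip_prefix"] for p in data["prefixes"]
                         if p["service"] == "CLOUDFRONT"]
    print(len(cloudfront_ranges), "CloudFront IPv4 ranges")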

Amazon CloudFront is compliant with:

  • PCI DSS
  • HIPAA

  • PCI DSS:

The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard.

It is administered by the PCI Security Standards Council, which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide, and Visa Inc.

CloudFront supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as compliant with the Payment Card Industry Data Security Standard (PCI DSS).

  • HIPAA:

A large and growing number of healthcare providers, payers, and IT professionals are using AWS's utility-based cloud services to process, store, and transmit protected health information (PHI).

AWS supports covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act (HIPAA).

They can leverage the secure AWS environment to process, maintain, and store protected health information.

AWS Direct Connect

Direct Connect


  • AWS Direct Connect links the user's internal network with an AWS Direct Connect location.
  • It uses 1-gigabit or 10-gigabit Ethernet fiber-optic cable.

AWS Direct Connect links the user's internal network to an AWS Direct Connect location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic cable.

One end of the cable is connected to the user's router, the other to an AWS Direct Connect router.

With this connection in place, users can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing Internet service providers in the network path.

An AWS Direct Connect location provides access to AWS in the region with which it is associated.

Users can provision a single connection to any AWS Direct Connect location in North America and use it to access public AWS services in all North American regions and AWS GovCloud (US).

The following diagram (omitted here) shows how AWS Direct Connect interfaces with the user's network.


Key components of AWS Direct Connect are:

  • Connection
  • Virtual interface

  • Connection:

A connection is created at an AWS Direct Connect location to establish a network connection from the user's premises to an AWS region.

To create a connection, consider the following:

1. AWS Direct Connect location

2. Port speed

  • Connection:

1. AWS Direct Connect location

Partners in the AWS Partner Network (APN) help to establish network circuits between an AWS Direct Connect location and the user's data center, office, or colocation environment.

They can also provide colocation space within the same facility as the AWS Direct Connect location.

  • Connection:

2. Port speed:

AWS Direct Connect supports two port speeds:

1 Gbps: 1000BASE-LX (1310 nm) over single-mode fiber

10 Gbps: 10GBASE-LR (1310 nm) over single-mode fiber

The port speed cannot be changed after the connection request has been created. If the user needs to change the port speed, they must create and configure a new connection.

  • Virtual interface:

Create a virtual interface to enable access to AWS services.

A public virtual interface enables access to public-facing services, such as Amazon S3.

A private virtual interface enables access to the user's VPC.

Users can configure multiple virtual interfaces on a single AWS Direct Connect connection.
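
As a hedged sketch with boto3's directconnect client: list the available Direct Connect locations, then request a 1 Gbps connection at one of them. The location code and connection name below are placeholders.

    import boto3

    dx = boto3.client("directconnect")

    # Discover the Direct Connect locations available to this region.
    for loc in dx.describe_locations()["locations"]:
        print(loc["locationCode"], "-", loc["locationName"])

    # Request a dedicated connection; the port speed is fixed once created.
    dx.create_connection(
        location="EqDC2",                  # placeholder location code
        bandwidth="1Gbps",                 # or "10Gbps"
        connectionName="my-dx-connection",
    )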

Network Requirements for AWS Direct Connect

To use AWS Direct Connect at an AWS Direct Connect location, the user's network must meet one of the following conditions:

  • The user's network is colocated with an existing AWS Direct Connect location.
  • The user is working with an AWS Direct Connect partner who is a member of the AWS Partner Network (APN).

AWS Direct Connect supports both the IPv4 and IPv6 communication protocols.

AWS Direct Connect supports a maximum transmission unit (MTU) of up to 1522 bytes at the physical connection layer

(14-byte Ethernet header + 4-byte VLAN tag + 1500-byte IP datagram + 4-byte FCS).