The last few years have seen the IT industry get excited about Cloud. Cloud technology has been a major focus of large IT firms and consultancies. They have invested billions of dollars, pounds, and yen in it. What’s the deal?
Cloud may be producing more heat than light, but it’s still giving us something to think about, and something to offer our customers. Cloud is not new in some ways, but it is groundbreaking in others. It will change the way businesses provide their customers with apps and services.
It is already happening. Users can access their own Processing, Memory, Storage, and Network (PMSN) resources, and receive services and applications at other levels of the stack, from almost any device. Cloud can make users more productive, reduce IT costs, make remote working more practical, and simplify IT management. Depending on the type of Cloud it uses, a business need only pay for the services and applications it actually consumes. Some IT professionals may perceive this as a threat; others will see it as liberating.
What is Cloud, then?
Cloud is best understood through the principles, base technologies, and drivers that underpin it.
The industry has spent a very busy decade consolidating server rooms and data centres from racks full of tin to fewer, denser racks of tin, while the number of applications hosted within this smaller footprint keeps growing.
Virtualization: Why do you want it?
Servers hosting a single application typically run at utilisation levels of around 15%. A data centre full of servers running at 15% is a financial nightmare: at that rate a server is unlikely to return the initial investment over its lifecycle, which is approximately three years, by the end of which depreciation has reduced its corporate book value to effectively nothing.
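To make the point concrete, here is a back-of-envelope sketch of the utilisation argument. All figures are assumptions for illustration (a notional purchase price and the 15% figure from the text), not vendor data:

```python
# Illustrative model: a server depreciated straight-line over a 3-year
# lifecycle, costed per hour of useful work at two utilisation levels.
# Purchase price and utilisation figures are assumptions, not real data.

def cost_per_useful_hour(purchase_price, lifecycle_years, utilisation):
    """Hardware cost attributed to each hour of useful work."""
    total_hours = lifecycle_years * 365 * 24
    useful_hours = total_hours * utilisation
    return purchase_price / useful_hours

single_app = cost_per_useful_hour(10_000, 3, 0.15)   # one app per server
virtualised = cost_per_useful_hour(10_000, 3, 0.80)  # consolidated host

print(f"15% utilisation: {single_app:.2f} per useful hour")
print(f"80% utilisation: {virtualised:.2f} per useful hour")
```

The same box, driven harder through consolidation, costs a fraction as much per useful hour, which is the whole financial case for virtualisation.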
Modernised tool-sets now allow us to virtualise almost any server and to build clusters of virtualised servers, each able to host multiple applications or services. The benefits are many: by hosting a larger number of application servers on fewer resource servers, the data centre can offer more services and applications from the same footprint.
It’s cooler, it’s greener
Alongside the smaller hardware footprint that virtualisation brings, data-centre designers and hardware manufacturers have introduced new technologies and methods to lower the power needed to cool the halls and systems. Servers and other hardware now use directional airflow: directional fans push the heated air along paths that match the air-flow design of the data centre. Air-flow is the new science of the IT industry, and a hot-aisle/cold-aisle matrix is now common in data-centre halls. Systems able to respond to and participate in this design can make a significant difference to power consumption, and choosing the right location for a data centre is becoming ever more important.
Then there is the Green agenda. The movement is popular, and companies want to be seen to take part. Large data centres require a lot of power, which is not Green, so hardware manufacturers are working to reduce the power their products consume, and data-centre designers are making greater use of natural airflow and cooling. Taken together, these efforts are making a significant difference. Being green is all the better when it saves money.
There are downsides
High hardware utilisation brings higher failure rates, most often due to heat. In the old 1:1 model (one application per server), a server was idle, cool, and under-used: a longer lifecycle, but a poor return on investment. Virtualisation drives utilisation per host up, and with it the heat. Heat damages components and shortens their lifespan, which affects both TCO (Total Cost of Ownership, the bottom line) and ROI (Return On Investment). It also raises the cooling requirement, which in turn raises power consumption. Massive Parallel Processing, the engine of Cloud, can harness tens to thousands of servers/VMs, large storage environments, and large, complex networks, and that level of processing demands yet more cooling and power. You can't have it both ways.
Virtualisation has another downside: VM density. The recommended number of VMs per core limits the average number of virtual machines per host server; on a server with 16 cores you might create 12 VMs per core, depending on the purpose of each VM. The maths is easy: that is 192 VMs per host, so 500 hardware servers can host 96,000 virtual machines. Architects designing large virtualisation infrastructures take this into account and ensure that sprawl is kept under control, but the danger is real.
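The density arithmetic above can be sketched in a few lines, using the figures from the text (16 cores per host, 12 VMs per core, a 500-host estate):

```python
# Density arithmetic from the text: cores per host x VMs per core gives
# VMs per host; multiplied across the estate, the totals grow fast.

def vms_per_host(cores, vms_per_core):
    """Maximum VMs a single host supports at the recommended ratio."""
    return cores * vms_per_core

def estate_vm_count(hosts, cores, vms_per_core):
    """Total VMs across an estate of identical hosts."""
    return hosts * vms_per_host(cores, vms_per_core)

print(vms_per_host(16, 12))          # 192 VMs on a single host
print(estate_vm_count(500, 16, 12))  # 96000 VMs across the estate
```

Ninety-six thousand VMs from five hundred boxes is exactly why sprawl control is a design-time concern, not an afterthought.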
Virtualisation: the basics
Install software on a computer (a server) that allows abstraction of the hardware resources: Memory, Processing, Storage, and Networking. Once this virtualisation-capable software is configured, it can fool various operating systems into believing they are being installed into a familiar environment they recognise. The virtualisation software should include all the drivers the operating system needs to communicate with the hardware.
The hardware Host sits at the bottom of the virtualisation stack, and the hypervisor is installed on it. The hypervisor abstracts the hardware resources and presents them to the virtual machines (VMs); the appropriate operating system, and then the application(s), are installed on each VM. Depending on the purpose of the VMs and the number of processing cores in the Host, a single hardware host can support multiple Guest operating systems or Virtual Machines. Each hypervisor vendor publishes its own ratio of VMs to cores, but you still need to know exactly what the virtual machines will be supporting in order to decide how to provision them. Provisioning and sizing virtual infrastructures is the new art in IT, complex and difficult, and although many utilities and tools exist to help with this crucial job, sizing still comes down to intuition and experience. The humans are still in control; the machines haven't taken over yet!
Two options are available for installing the hypervisor:
1. Install an operating system that contains hypervisor code, then tick a few boxes after installation to activate the hypervisor. This method is called Host Virtualisation, because a Host OS, such as Windows 2008 or a Linux distribution, acts as the controller and foundation for the hypervisor. The base operating system is installed as usual, directly onto the server hardware; the system is then modified and rebooted, and the hypervisor configuration appears as a bootable option the next time the system loads.
2. Install the hypervisor directly onto the server hardware. Once installed, it abstracts the hardware resources and makes them accessible to multiple Guest operating systems via Virtual Machines. This type is called a bare-metal hypervisor; VMware's ESXi and Xen are examples.
Microsoft's Hyper-V and VMware's ESXi are the most popular hypervisors. ESXi is a standalone hypervisor installed directly onto the hardware, whereas Hyper-V is an integral part of Windows 2008: to use the hypervisor, Windows 2008 must be installed first. Although Hyper-V may seem appealing, it cannot match ESXi's footprint (Hyper-V is approximately 2GB, ESXi about 70MB), nor does it reduce overhead to a comparable level.
Other applications are required to manage virtual environments: Microsoft offers System Center Virtual Machine Manager, and VMware has vCenter Server. A variety of third-party tools can enhance these activities.
Which hypervisor should I use?
Choosing virtualisation software calls for informed decisions. To ensure that time and money are spent efficiently and that the implementation works, you need to determine the size of the hosts, how to provision the VMs, which support toolsets to use, and more.
What is Cloud Computing?
There are many definitions on the Internet. Here is mine: “Cloud Computing is billable, virtualised, elastic services.”
The “Cloud” refers to the Internet: the means by which users access services and applications.
Everything from the access layer down to the bottom of the stack is kept in the data centre and never leaves it.
This stack includes many applications and services that monitor Processing, Memory, Storage, and Network usage; those figures feed the metering and billing (chargeback) applications.
Cloud Computing Models
There are two model types: the Deployment Model and the Delivery Model.
The Deployment Models
– Private Cloud
– Public Cloud
– Community Cloud
– Hybrid Cloud
Private Cloud Deployment Model
The Private Cloud Deployment Model is the best choice for most businesses. It offers a high level of security and is the best fit for companies and organisations that must comply with data-security legislation.
Note: some providers sell hosting as Cloud, trading on the hype and confusion surrounding the term. Examine carefully what the product actually offers; it may not be Cloud at all, or may not deliver all of Cloud's attributes.
Public Cloud Deployment Model
Amazon EC2 is a great example of the Public Cloud Deployment Model. Although the users in this instance are largely the Public, more and more businesses find Public Cloud useful to augment their existing delivery models.
The Public Cloud is affordable for small businesses, especially where security is not a major issue. Large organisations, government agencies, and enterprises can also use Public Cloud; it all depends on their legal and data-security requirements.
Community Cloud Deployment Model
In a Community Cloud, users allow their computers to be used as part of a peer-to-peer (P2P) network. Modern PCs/workstations come with multiple processors, large amounts of RAM, and large SATA storage drives, so it makes sense to pool these resources into a community of users, each contributing PMSN and sharing the available applications and services. Large numbers of servers and PCs can be connected over a single subnet, and the Community Cloud's users are both the consumers of and contributors to its compute resources, applications, and services.
The advantage of the Community Cloud is that it is not tied to a vendor or subject to a vendor's business case, so the community sets its own prices and costs; it can run as a cooperative or even as a free service.
Security may matter less in such a setting, but low-level access available to the entire group could lead to security breaches, and to bad blood.
Vendor detachment can benefit user communities, but it need not mean excluding vendors altogether: a vendor/provider can deliver a Community Cloud for a fee.
Community Cloud is also open to large companies with similar needs, such as a group of car manufacturers or oil firms. If one member suffers a major loss of services or is hit by a disaster, the Community Cloud can provide a useful fallback; such services might equally be sourced from elsewhere in the Cloud.
Hybrid Cloud Deployment Model
The Hybrid Cloud is used when access to the Public Cloud is required but certain security restrictions on users or data must remain within a Private Cloud. A company may run a data centre from which it provides Private Cloud services to its employees, yet also need to deliver ubiquitous services to the general public or to users outside its network. The Hybrid Cloud provides this environment: providers can offer the massive scaling of the Public Cloud while the company retains control over its critical data and compliance requirements.
Federated Cloud
Although this isn't a Cloud delivery or deployment model, it will become an integral part of Cloud Computing services.
The Cloud market will grow and spread around the globe, and the sheer variety of providers will make it harder to manage and even to see clearly. Many Cloud providers may be hostile to one another and unwilling to share their Clouds. Yet business and end users alike want to be able to diversify their Cloud delivery and provisioning options, because multiple Clouds increase the availability of services and applications.
A company might decide it makes sense to use multiple Cloud providers, placing data in different Clouds for different groups. The problem is managing this many-headed delivery model. IT can act as the clearing house for all the Clouds and take back control: different workloads may require different service levels and compliance regimes, and using multiple Clouds for different workloads is an advantage over the one-size-fits-all approach of a single provider. Federated Cloud answers the question “How can I avoid vendor lock-in?” and is the solution to managing multiple Clouds.
What is stopping this from happening? Mostly the differences between platforms and operating systems. Moving a single virtual machine of over 100GB can prove difficult; imagine moving thousands of them at once. This is why true Cloud Federation isn't yet possible, though some companies are working towards it: you cannot currently move a VM from EC2 to Azure or OpenStack.
True federation will allow disparate Clouds to be managed seamlessly together, with VMs moving easily between them.
The hypervisor abstracts the physical-layer resources to provide a virtual environment for the Guest operating systems, and this layer of abstraction is managed by the vendor's virtualisation management tools (in VMware's case, vSphere vCenter Server and its APIs). Above the virtualisation layer sits the Cloud management layer (vCloud Director, for VMware), which takes the VMs, the applications, and the users, organises them into groups, and makes them available to those users.
It is possible to provide IaaS, PaaS, and SaaS to private, public, community, and hybrid cloud users by using the abstracted virtual layers.
Cloud Delivery Models
IaaS – Infrastructure as a Service (lower layer)
IaaS customers receive the complete compute infrastructure: power/cooling, storage, networking, and virtual machines supplied as servers. Customers install the operating systems and manage the infrastructure themselves, though the exact terms vary by vendor/provider and by contract.
PaaS – Platform as a Service (middle layer)
With PaaS, the customer is provided with a platform: either a Linux or a Windows environment. All the necessary operating systems are available to software developers (the primary users of PaaS) to help them create and test their products. Billing can be based on how much resource is used over a given period, and many billing models exist to meet different requirements.
SaaS – Software as a Service (top layer)
SaaS provides a complete computing environment with all the applications ready for users; it is the most popular offering in the Public Cloud, and Microsoft Office 365 is an example. In this environment the customer has no responsibility for managing the infrastructure.
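The split between the three delivery models is really a split of management responsibility across the stack. Here is a rough sketch of that split; the layer names and boundaries are generalisations for illustration, since they vary by provider and contract, as noted above:

```python
# Who manages each layer under IaaS/PaaS/SaaS: "P" = provider,
# "C" = customer. Layer boundaries are a generalisation; real
# contracts differ, as the text notes.

LAYERS = ["facility", "network", "storage", "virtualisation",
          "operating_system", "middleware", "application"]

RESPONSIBILITY = {
    "IaaS": dict(zip(LAYERS, "P P P P C C C".split())),
    "PaaS": dict(zip(LAYERS, "P P P P P P C".split())),
    "SaaS": dict(zip(LAYERS, "P P P P P P P".split())),
}

def customer_managed(model):
    """Layers the customer is responsible for under a given model."""
    return [layer for layer, who in RESPONSIBILITY[model].items()
            if who == "C"]

print(customer_managed("IaaS"))  # OS upwards is the customer's job
print(customer_managed("SaaS"))  # empty: the provider runs everything
```

Reading the table top-down: the further up the stack the provider's responsibility reaches, the closer the offering is to SaaS.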
Cloud Metering & Billing
The infrastructure chargeback information (metering) is used to calculate the billable amount. Depending on the service ordered, the bill will include the resources listed below.
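As a sketch of how metered usage becomes a billable amount, the chargeback calculation can be reduced to rates times quantities. The resource names and unit prices here are invented for illustration; every provider defines its own billing dimensions and price book:

```python
# Minimal metering/chargeback sketch. Resource names and rates are
# invented for illustration, not taken from any real provider.

USAGE_RATES = {              # price per unit (assumed figures)
    "vcpu_hours": 0.05,
    "ram_gb_hours": 0.01,
    "storage_gb_hours": 0.0002,
    "network_gb": 0.08,
}

def bill(metered_usage):
    """Turn metered usage figures into a billable amount."""
    return sum(USAGE_RATES[resource] * quantity
               for resource, quantity in metered_usage.items())

# A month (720 hours) of a small VM: 2 vCPUs, 4 GB RAM, 50 GB disk,
# plus 10 GB of network traffic.
month = {"vcpu_hours": 2 * 720,
         "ram_gb_hours": 4 * 720,
         "storage_gb_hours": 50 * 720,
         "network_gb": 10}
print(f"{bill(month):.2f}")
```

This is the "pay only for what you use" attribute of Cloud in its simplest form: the monitoring stack supplies the quantities, and the billing layer applies the rates.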