The idea of the hyperscaler was born at the online retailer Amazon in the early 2000s. To meet the retailer's own growing needs, virtual server instances were developed under the name Amazon Elastic Compute Cloud (EC2 for short). These instances are virtual servers that can run either a Linux distribution or a Microsoft Windows Server operating system.

Using different configuration types

The concept of public cloud computing introduces a dynamic ecosystem where users can harness the power of remote servers and resources. Within this ecosystem, various configurations of virtual CPU cores (vCPU) and virtual memory (vRAM) play a pivotal role in optimizing performance and resource allocation.

A vCPU, or virtual Central Processing Unit, functions as a slice or share of a physical CPU that is exclusively assigned to a specific virtual machine (VM). A hypervisor, a software layer residing on the underlying host system, orchestrates this assignment. The hypervisor acts as a mediator, efficiently distributing computational tasks among the VMs and managing the utilization of the physical CPU resources. This approach not only enables multitasking within a single server but also facilitates the seamless scaling of resources as needed.
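The mediating role of the hypervisor can be illustrated with a toy simulation. This is a hypothetical sketch, not any real hypervisor's scheduling algorithm: it simply hands out CPU time slices of one physical core to VMs in round-robin order.

```python
from collections import defaultdict

# Illustrative sketch only: a "hypervisor" time-slicing one physical
# core among several VMs in round-robin fashion. VM names and slice
# counts are made up for this example.
def schedule(vms, total_slices):
    """Distribute CPU time slices round-robin across the given VMs."""
    usage = defaultdict(int)
    for i in range(total_slices):
        vm = vms[i % len(vms)]   # pick the next VM in round-robin order
        usage[vm] += 1           # grant it one time slice
    return dict(usage)

# Three VMs sharing 90 slices of one physical core get 30 each.
print(schedule(["vm-a", "vm-b", "vm-c"], 90))
```

Real hypervisors use far more sophisticated, priority- and load-aware schedulers, but the principle is the same: the physical CPU is shared, while each VM sees its own vCPU.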

Hyperscalers use huge data centers

In a similar vein, vRAM (virtual Random Access Memory) delineates a portion of the underlying physical RAM of the host system. This allocation mechanism ensures that each VM has access to a dedicated segment of the physical memory, preventing resource contention and guaranteeing consistent performance. It also allows for the dynamic adjustment of memory allocations, adapting to the changing demands of applications and workloads hosted in the cloud.
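The idea of dedicated memory segments can likewise be sketched in a few lines. This is a simplified model under made-up numbers: a host carves fixed vRAM segments out of its physical RAM and refuses requests that would overcommit it.

```python
# Hypothetical sketch: carving dedicated vRAM segments out of a host's
# physical RAM. Sizes and VM names are illustrative; real hosts also
# support overcommitment and dynamic ballooning, omitted here.
class Host:
    def __init__(self, ram_gb):
        self.ram_gb = ram_gb
        self.allocations = {}            # VM name -> dedicated vRAM in GB

    def allocate(self, vm, vram_gb):
        used = sum(self.allocations.values())
        if used + vram_gb > self.ram_gb:
            # refusing the request avoids contention between VMs
            raise MemoryError(f"not enough free RAM for {vm}")
        self.allocations[vm] = vram_gb   # dedicated segment for this VM

host = Host(ram_gb=64)
host.allocate("vm-a", 16)
host.allocate("vm-b", 32)
# host.allocate("vm-c", 32) would raise MemoryError: only 16 GB remain
```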

This flexible configuration of vCPU and vRAM is at the core of the public cloud's versatility, enabling users to tailor their virtual environments to meet specific computational needs. Whether you require a high-performance virtual machine with a substantial vCPU allocation or a memory-intensive VM with ample vRAM, the public cloud empowers users to optimize their resources for various tasks and workloads.
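Choosing between compute-heavy and memory-heavy configurations amounts to matching workload requirements against a catalog of vCPU/vRAM combinations. The sketch below uses an entirely fictitious catalog; the type names and sizes are invented for illustration, and real hyperscaler catalogs are far larger.

```python
# Hypothetical instance catalog: name -> (vCPU count, vRAM in GB).
# All names and sizes are made up for this example.
CATALOG = {
    "small":       (2, 4),
    "general":     (4, 16),
    "compute-opt": (16, 32),   # substantial vCPU for compute-heavy work
    "memory-opt":  (8, 64),    # ample vRAM for memory-intensive VMs
}

def pick_instance(need_vcpu, need_vram_gb):
    """Return the smallest catalog entry that covers the requirements."""
    fits = [(cpu + ram, name)
            for name, (cpu, ram) in CATALOG.items()
            if cpu >= need_vcpu and ram >= need_vram_gb]
    return min(fits)[1] if fits else None

print(pick_instance(4, 8))    # → general
print(pick_instance(2, 48))   # → memory-opt
```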


The public cloud is decentralized

Creating a decentralized infrastructure system is the foundational concept behind the emergence of the public cloud. This innovative approach laid the groundwork for the remarkable evolution of cloud computing as we know it today. Central to this pioneering concept were virtual servers, commonly referred to as virtual machines (VMs); at AWS these are called EC2 instances.

Amazon Web Services (AWS), recognized today as a leading hyperscaler, played a pivotal role in shaping the landscape of public cloud services. AWS ingeniously paired EC2 instances with a cutting-edge scalable storage solution. This groundbreaking storage service, now widely recognized as Simple Storage Service (or simply S3), proved to be a revolutionary development in cloud technology. It provided users with the ability to store and retrieve data effortlessly and at an unprecedented scale.

These pioneering AWS services marked a significant milestone in the history of cloud computing, representing the world’s first publicly accessible cloud infrastructure service. They laid the foundation for the rapid growth and ongoing dominance of AWS in the hyperscaler market. As of 2021, this Seattle-based company boasted staggering annual sales of $62.2 billion. This is a remarkable contrast to its 2013 figures, which stood at a mere $3.1 billion.

Decentralized networks as the basis for hyperscalers

Read the book – SAP on Hyperscaler

You can now read about this, and many more facts about the large hyperscalers and SAP, for yourself. Our new book "SAP auf Hyperscaler Clouds" by Steffi Dünnebier and me is now available, initially in German only.

Buy now: SAP auf Hyperscaler-Clouds | SAP PRESS

❓Will you pick up the book ❓ Let us know in the comments.
