
International Telecommunication Union
ITU-T
Y.3507
(12/2018)
Telecommunication Standardization Sector of ITU
Series Y: Global Information Infrastructure, Internet Protocol Aspects, Next-Generation Networks, Internet of Things and Smart Cities – Cloud computing

Cloud computing – Functional requirements of physical machine

International Telecommunication Union

Place des Nations 1211 Geneva 20 Switzerland
itumail@itu.int
www.itu.int


INTELLECTUAL PROPERTY RIGHTS

ITU draws attention to the possibility that the practice or implementation of this Recommendation may involve the use of a claimed Intellectual Property Right. ITU takes no position concerning the evidence, validity or applicability of claimed Intellectual Property Rights, whether asserted by ITU members or others outside of the Recommendation development process.

As of the date of approval of this Recommendation, ITU had received notice of intellectual property, protected by patents, which may be required to implement this Recommendation. However, implementers are cautioned that this may not represent the latest information and are therefore strongly urged to consult the TSB patent database at http://www.itu.int/ITU-T/ipr/.



Summary

The physical machine is a type of computing machine that provides physical resources. Since all cloud services have to reside and operate on physical machines, it is important for cloud service providers as well as for manufacturers to identify specific functional requirements of the physical machine.

Recommendation ITU-T Y.3507 provides an introduction to the physical machine, including the physical machine components, physical machine types, virtualization in the physical machine, as well as the scalability of components in the physical machine.

In addition, this Recommendation provides functional requirements for the physical machine derived from various use cases described in Appendix II. The relationship with related specifications developed in other Standards Development Organizations (SDOs) is introduced in Appendix I.

History

Edition | Recommendation | Approval | Study Group | Unique ID a)
1.0 | ITU-T Y.3507 | 2018-12-14 | 13 | 11.1002/1000/13812

a)  To access the Recommendation, type the URL http://handle.itu.int/ in the address field of your web browser, followed by the Recommendation's unique ID. For example, http://handle.itu.int/11.1002/1000/11830-en.

Recommendation ITU-T Y.3507

Cloud computing - Functional requirements of physical machine

1.  Scope

This Recommendation provides the functional requirements of the physical machine for cloud computing based on the cloud computing infrastructure requirements presented in [ITU-T Y.3510]. This Recommendation addresses the following:

  • overview of the physical machine;

  • functional requirements for a physical machine.

NOTE 1 – This Recommendation does not advocate, imply, or assume the use of any specific set or sets of technical specifications. Examples of such sets of technical specifications can be found in Appendix I.

NOTE 2 – This Recommendation addresses a set of use cases which are included in Appendix II.

2.  References

The following ITU-T Recommendations and other references contain provisions which, through reference in this text, constitute provisions of this Recommendation. At the time of publication, the editions indicated were valid. All Recommendations and other references are subject to revision; users of this Recommendation are therefore encouraged to investigate the possibility of applying the most recent edition of the Recommendations and other references listed below. A list of the currently valid ITU-T Recommendations is regularly published. The reference to a document within this Recommendation does not give it, as a stand-alone document, the status of a Recommendation.

[ITU-T X.1601]   Recommendation ITU-T X.1601, Security framework for cloud computing, 2nd edition.
[ITU-T Y.3100]   Recommendation ITU-T Y.3100, Terms and definitions for IMT-2020 network, 1st edition.
[ITU-T Y.3500]   Recommendation ITU-T Y.3500 | ISO/IEC 17788, Information technology – Cloud computing – Overview and vocabulary, 1st edition.
[ITU-T Y.3510]   Recommendation ITU-T Y.3510, Cloud computing infrastructure requirements, 2nd edition.
[ITU-T Y.3521/M.3070]   Recommendation ITU-T Y.3521/M.3070, Overview of end-to-end cloud computing management, 1st edition.

3.  Definitions

3.1.  Terms defined elsewhere

This Recommendation uses the following terms defined elsewhere:

3.1.1.  cloud computing: [ITU-T Y.3500] Paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand.

3.1.2.  cloud service: [ITU-T Y.3500] One or more capabilities offered via cloud computing invoked using a defined interface.

3.1.3.  cloud service customer: [ITU-T Y.3500] Party which is in a business relationship for the purpose of using cloud services.

3.1.4.  cloud service provider: [ITU-T Y.3500] Party which makes cloud services available.

3.1.5.  physical resource: [ITU-T Y.3100] A physical asset for computation, storage and/or networking.

NOTE – Components, systems and equipment can be regarded as physical resources

3.2.  Terms defined in this Recommendation

This Recommendation defines the following terms:

3.2.1.  physical machine: A type of computing machine providing physical resources.

NOTE – A computing machine provides allocation and scheduling of processing resources. Types of computing machine are physical or virtual [ITU-T Y.3510].

4.  Abbreviations and acronyms

This Recommendation uses the following abbreviations and acronyms:

AC      Alternating Current
AI      Artificial Intelligence
API     Application Programming Interface
ATA     AT Attachment
CD      Compact Disc
CPU     Central Processing Unit
CSC     Cloud Service Customer
CSP     Cloud Service Provider
DC      Direct Current
DRAM    Dynamic Random Access Memory
ECC     Error Correcting Code
FSC     Fan Speed Control
GPU     Graphics Processing Unit
HDD     Hard Disk Drive
I2C     Inter-Integrated Circuit
IDE     Integrated Drive Electronics
IaaS    Infrastructure as a Service
IPMI    Intelligent Platform Management Interface
IT      Information Technology
I/O     Input/Output
iSCSI   Internet Small Computer System Interface
NIC     Network Interface Card
NFV     Network Function Virtualization
NGFF    Next Generation Form Factor
mSATA   Mini-Serial AT Attachment
OPEX    Operational Expenditure
OS      Operating System
PCI     Peripheral Component Interconnect
PCI-E   Peripheral Component Interconnect Express
PDU     Power Distribution Unit
PMBus   Power Management Bus
PWM     Pulse Width Modulation
RAID    Redundant Array of Independent Disks
RPM     Revolutions Per Minute
ROM     Read-Only Memory
SAS     Serial Attached SCSI
SATA    Serial AT Attachment
SCSI    Small Computer System Interface
SEL     System Event Log
SoC     System-on-a-Chip
SRAM    Static Random Access Memory
TCP     Transmission Control Protocol
UART    Universal Asynchronous Receiver/Transmitter
USB     Universal Serial Bus
VGA     Video Graphics Array
VM      Virtual Machine

5.  Conventions

In this Recommendation:

The keywords "is required to" indicate a requirement which must be strictly followed and from which no deviation is permitted if conformance to this document is to be claimed.

The keywords "is recommended" indicate a requirement which is recommended but which is not absolutely required. Thus this requirement need not be present to claim conformance.

The keywords "is not recommended" indicate a requirement which is not recommended but which is not specifically prohibited. Thus, conformance with this specification can still be claimed even if this requirement is present.

The keywords "can optionally" indicate an optional requirement which is permissible, without implying any sense of being recommended. This term is not intended to imply that the vendor's implementation must provide the option and the feature can be optionally enabled by the network operator/service provider. Rather, it means the vendor may optionally provide the feature and still claim conformance with the specification.

6.  Overview of the physical machine

6.1.  Introduction to the computing machine

Cloud infrastructure includes processing, storage, networking and other hardware resources, as well as software assets; for more information, see clause 6 of [ITU-T Y.3510]. Processing resources are used to provide essential capabilities for cloud services and to support other system capabilities such as resource abstraction and control, management, security and monitoring.

A computing machine provides allocation and scheduling of processing resources. Types of computing machine are physical or virtual [ITU-T Y.3510]. The capability of a computing machine is typically expressed in terms of configuration, availability, scalability, manageability and energy consumption [ITU-T Y.3510].

The requirements of the virtual machine, as one of the categories of the computing machine, have been specified in [ITU-T Y.3510]. Those requirements include virtualization technologies that can be applied to resource types such as the central processing unit (CPU), memory, input/output (I/O) and network interfaces. Several requirements regarding virtual machine management have also been identified, e.g., duplication of a virtual machine (VM), dynamic/static migration of a VM and management automation.

For the physical machine, [ITU-T Y.3510] defines three requirements as follows.

  • It is recommended to support hardware resource virtualization.

  • It is recommended to support horizontal scalability (e.g., adding more computing machines) and vertical scalability (e.g., adding more resources within a computing machine).

  • It is recommended to use power optimization solutions to reduce energy consumption.

It is inferred from these requirements that the physical machine supports scalable resources while taking energy consumption into consideration.

Figure 1 shows the conceptual diagram of a computing machine in [ITU-T Y.3510].

Figure 1 — Concept of a computing machine in [ITU-T Y.3510]

A virtual machine provides virtualized resource pools using virtualization technologies specific to physical resource types like CPU, memory, I/O and network from a physical machine. The virtual machine also covers management issues.

Since all cloud services have to reside and operate on physical machines, it is important for cloud service providers, especially for the infrastructure as a service (IaaS) cloud service provider (CSP) that builds the cloud infrastructure, as well as for the manufacturer that sells the cloud infrastructure, to identify specific requirements of the physical machine.

6.2.  Introduction to the physical machine

The physical machine is a type of computing machine on which cloud services reside and operate, and which provides physical resources such as processing, storage and networking.

Figure 2 depicts an overview of the physical machine. The scope of this Recommendation focuses on the physical machine.

Figure 2 — Overview of the physical machine

The physical machine is composed of multiple components, which are described as follows:

  • Processing units: A processing unit has CPUs, memory, storage and I/O devices. These sub-components of a processing unit are physically implemented on a motherboard. The processing unit is the basic element of a hardware processing resource, and normally multiple processing units are involved to provide resource capacity. The processing unit is a mandatory component for the physical machine. A single processing unit type physical machine has only one processing unit, while a multi-processing unit type physical machine has two or more processing units.

  • Interconnect network: An interconnect network connects multiple processing units so that resources in individual processing units can be shared through virtualization. In addition, the interconnect network provides a communication interface to other external physical machines. An interconnect network is an optional component used only for multi-processing unit type physical machines.

  • Enclosure: An enclosure holds multiple processing units and other components, such as the power supply, cooling and interconnect network (in some cases), by providing the form factor with a metal apparatus that specifies the physical dimensions of a physical machine. The enclosure also provides electromagnetic shielding and helps to dissipate heat from the other components. An enclosure is a mandatory component for a physical machine.

  • Power supply: A power supply provides electrical power to all components in an enclosure. The power supply converts AC power into DC power, which all components use to operate, and provides redundancy to ensure that the stability and operability of a physical machine are maintained even in the case of a power supply failure. The power supply is a mandatory component for the physical machine.

  • Cooling: A cooling system maintains the temperature in an enclosure within a certain range by removing the heat generated by operation of the physical machine. The implementation type can vary depending on the cooling materials (e.g., air-cooled or water-cooled type) and form factors (e.g., air flow or water pipes) of an enclosure. Cooling is a mandatory component for the physical machine.

  • Management component: A management component monitors and controls all components in a physical machine by analyzing the status information gathered from those components. A management component is a mandatory component for the physical machine.

    NOTE – Standard interfaces (e.g., I2C, PMBus, Ethernet, UART, PWM) are normally used to communicate between the management component and the other components.
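As an illustrative sketch only, the following Python code shows how a management component might poll a board temperature sensor over a standard I2C interface. The bus number, device address and register are assumed values chosen for the example, and the smbus2 package is assumed to be available; none of these are defined by this Recommendation.

```python
# Illustrative sketch: read a temperature register from a hypothetical I2C sensor.
# Bus 1, address 0x48 and register 0x00 are assumptions, not values from this
# Recommendation.
from smbus2 import SMBus

SENSOR_ADDR = 0x48     # hypothetical 7-bit I2C address of a board temperature sensor
TEMP_REGISTER = 0x00   # hypothetical register holding the temperature reading

def read_board_temperature(bus_id: int = 1) -> int:
    """Return the raw temperature byte reported by the sensor."""
    with SMBus(bus_id) as bus:
        return bus.read_byte_data(SENSOR_ADDR, TEMP_REGISTER)

if __name__ == "__main__":
    print("Board temperature (raw):", read_board_temperature())
```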

Besides these components, the following are needed to manage and operate the physical machine:

  • I/O interfaces are used by I/O devices to communicate with other physical machines or with the CSC/CSP. The I/O interface has two capabilities: (i) the capability to provide the channel for data input and output of the physical machine; and (ii) the capability to provide the channel for the CSP/CSC to access the physical machine. The I/O interface follows industry standards so that the CSP can select and replace components from multiple vendors. Through the standard management interface, the cloud computing management system communicates with the physical machines without any additional development.

  • Physical machine operation periodically reports and maintains the machine's running information, as well as its environmental conditions, to the cloud computing management system [ITU-T Y.3521/M.3070]. In addition, the administrator can operate the physical machine using operation capabilities.

  • Scalability of components in the physical machines allows the physical machines to extend their resources elastically in the processing units, power supply and cooling system.

  • Security of the physical machine provides access control of the processing units.

  • Reliability of the physical machine keeps the physical machine consistently performing as expected. To provide reliability, the physical machine needs to support the detection and location of faulty components when some components fail.

6.3.  Types of physical machine

6.3.1.  Single processing unit type

The single processing unit type of physical machine has one processing unit, a single management component as well as one or more power supplies and cooling components. Since a single processing unit type has only one processing unit, no interconnect network component is involved in this type.

NOTE – An example of single processing unit type is a rack server [b-OCP BS].

Figure 3 — Example of single processing unit type

6.3.2.  Multi-processing unit type

The multi-processing unit type has two or more processing units, as well as one or more power supplies and cooling components, a single management component and a single interconnect network.

NOTE – Examples of a multi-processing unit type are blade servers [b-OCP OSR] and rack scale servers [b-OCP OCSC].

Figure 4 — Example of a multi-processing unit type

6.4.  Virtualization in physical machines

This clause identifies different types of virtualization of the components in processing units such as CPUs, memory and I/Os. The mode of virtualization in each component can be software based mode or hardware-assisted mode. The requirements in this Recommendation only consider the hardware‑assisted mode for virtualization.

6.4.1.  CPU virtualization

CPU virtualization technology makes a single CPU act as if it were multiple individual CPUs. There are different ways to implement CPU virtualization. CPU virtualization can be implemented in software-based mode and in hardware-assisted mode:

  • In software based mode, the privileged instructions are simulated by software.

  • In hardware-assisted mode, the privileged instructions can be directly run by the physical CPU to achieve higher performance. Hardware-assisted mode requires the CPU to support a virtualization instruction set.

NOTE – The difference between the two modes is in the execution of privileged instructions in VM's operating system (OS).
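As an illustration of how support for a hardware virtualization instruction set can be detected, the following Python sketch (Linux-specific, assuming /proc/cpuinfo is readable) checks the CPU flags for "vmx" (Intel VT-x) or "svm" (AMD-V); on some Intel systems the "ept" flag for second-level address translation may also be listed.

```python
# Minimal sketch: detect hardware-assisted CPU virtualization support on Linux
# by scanning the "flags" lines of /proc/cpuinfo for vmx/svm (and optionally ept).
def cpu_virtualization_flags(path: str = "/proc/cpuinfo") -> set:
    wanted = {"vmx", "svm", "ept"}
    found = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                found |= wanted & set(line.split(":", 1)[1].split())
    return found

if __name__ == "__main__":
    flags = cpu_virtualization_flags()
    print("Hardware-assisted CPU virtualization supported:",
          bool({"vmx", "svm"} & flags))
```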

6.4.2.  Memory virtualization

Memory virtualization abstracts physical memory to a divided virtual memory for use by a virtual machine. There are two modes of memory virtualization: software-based and hardware-assisted memory virtualization:

  • Software-based mode builds a software based memory mapping table.

  • In hardware-assisted mode, a memory mapping table is implemented in hardware with better performance.

NOTE – The difference between the two modes is the mapping between virtual memory and physical memory.

6.4.3.  I/O virtualization

I/O virtualization refers to dividing a single physical I/O into multiple isolated logical I/Os. There are two modes of I/O virtualization: software-based and hardware-assisted I/O virtualization:

  • Software based mode simulates I/O devices based on software.

  • The hardware-assisted mode provides better performance by reducing a hypervisor's participation in I/O processing by using hardware.

A network adapter is an I/O device specifically for data transmission. A network adapter provides isolated logical I/Os based on a single physical I/O to receive and send data packets inside and outside of a physical machine as virtual network interfaces, in order to improve interface utilization.
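As an illustrative sketch of network adapter I/O virtualization, the following Python code (Linux-specific) enumerates network adapters that expose SR-IOV virtual functions through the standard sysfs attributes sriov_numvfs and sriov_totalvfs; the interface names found depend entirely on the machine, and SR-IOV is only one example of a hardware-assisted technique.

```python
# Illustrative sketch: list SR-IOV capable network adapters via Linux sysfs.
import os

SYSFS_NET = "/sys/class/net"

def sriov_capable_adapters() -> dict:
    """Map interface name -> (configured VFs, maximum VFs) for SR-IOV NICs."""
    result = {}
    for iface in os.listdir(SYSFS_NET):
        dev = os.path.join(SYSFS_NET, iface, "device")
        total_path = os.path.join(dev, "sriov_totalvfs")
        num_path = os.path.join(dev, "sriov_numvfs")
        if os.path.isfile(total_path) and os.path.isfile(num_path):
            with open(total_path) as f:
                total = int(f.read())
            with open(num_path) as f:
                num = int(f.read())
            result[iface] = (num, total)
    return result

if __name__ == "__main__":
    for iface, (num, total) in sriov_capable_adapters().items():
        print(f"{iface}: {num}/{total} virtual functions configured")
```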

6.5.  Scalability of components in the physical machine

Scalability of components in a physical machine allows enhancing the processing unit, power supply and cooling components of the physical machine.

6.5.1.  Scalability of the processing unit

Scalability of the processing unit allows the processing units of a physical machine to be expanded. Scalability of the processing unit provides more hardware processing resources in order to meet potential growth needs, such as providing more CPU and memory resources to host more VMs with the growth of business needs.

There are several ways to expand processing units, as shown hereafter, depending on the availability of motherboard interfaces and the enclosure:

  • Replacing components of a processing unit with other components with higher capability, such as a CPU, memory, storage and I/O devices;

  • Adding components to a processing unit, such as a CPU, memory, storage and I/O devices;

  • Replacing processing units with other processing units with higher capability;

  • Adding processing units to the physical machine.

6.5.2.  Scalability of power supply

Scalability of power supply allows the power supply of a physical machine to be expanded. Scalability of power supply provides more power in future for the potential increasing power consumption needs of the physical machine, such as providing more power for additional processing units.

There are several ways to expand the power supply, as shown hereafter, depending on the availability of the enclosure:

  • Replacing power supplies of a physical machine with other power supplies with higher capability;

  • Adding power supplies to the physical machine.

6.5.3.  Scalability of cooling

Scalability of cooling allows the cooling capability of a physical machine to be increased. Scalability of cooling provides a higher cooling capability to meet the potential increasing cooling needs of the physical machine.

There are several ways to expand cooling capability, as shown hereafter, depending on the availability of the enclosure:

  • Replacing cooling components of a physical machine with other cooling components with higher capability;

  • Adding cooling components to the physical machine.

7.  Functional requirements for a physical machine

7.1.  Component requirements

7.1.1.  Processing unit requirements

7.1.1.1.  CPU requirements

  • Virtualization instruction set: It is recommended that a physical machine supports a CPU virtualization instruction set to improve the performance of CPU virtualization.

  • CPU replacement: It is recommended that a physical machine supports substitution of CPU with other CPUs to allow CPU upgrade or replacement of faulty CPUs.

  • Multiple CPUs: It is recommended that a physical machine supports multiple CPUs to achieve higher performance.

  • Low power consumption of CPU: It is recommended that a physical machine supports low power consumption of CPU to reduce the operational expenditure (OPEX).

7.1.1.2.  Memory requirements

  • Hardware-assisted memory virtualization: It is recommended that a physical machine supports hardware-assisted memory virtualization to improve the performance of memory virtualization.

  • Memory replacement: It is recommended that a physical machine supports substitution of memory with other memories to allow memory upgrade or replacement of faulty memory.

  • Memory reliability: It is recommended that a physical machine supports memory reliability using memory redundancy and memory error correction technologies.

    NOTE 1 – Memory reliability refers to technologies to improve the reliability of the physical machine by preventing permanent loss of data or downtime caused by memory failure. One example is memory mirroring, as one implementation of memory redundancy. Memory mirroring replicates and stores data on a different physical memory within different channels simultaneously. If a primary physical memory failure occurs, subsequent reads and writes will use the backup memory.

  • Supporting various types of memory: It is recommended that a physical machine provides various types of memory such as non-volatile and volatile memory depending on the CPU's memory usage.

NOTE 2 – Examples of CPU's memory usage with non-volatile and volatile types are booting up and storing temporary data as main memory, respectively. Non-volatile type includes ROM and volatile type is classified into static random access memory (SRAM) and dynamic random access memory (DRAM).

7.1.1.3.  Storage requirements

  • Multiple interfaces for storage: It is recommended that a physical machine supports interfaces of storage for different media, such as magnetic storage, optical storage and semiconductor storage.

    NOTE 1 – Examples of interfaces include integrated drive electronics (IDE), serial AT attachment (SATA), serial attached SCSI (SAS), small computer system interface (SCSI), AT attachment (ATA), M.2 (formerly known as NGFF), peripheral component interconnect express (PCI-E) and mini-serial AT attachment (mSATA).

  • Storage replacement: It is recommended that a physical machine supports substitution of storage with other storage to allow external storage upgrade or replacement of faulty external storage.

  • Storage redundancy hardware: It is recommended that a physical machine supports storage redundancy hardware.

    NOTE 2 – An example of storage redundancy hardware is a RAID card. A RAID card supports data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both; a toy illustration of parity-based redundancy is sketched after this list.

  • Storage hibernation: It is recommended that a physical machine supports hibernation of storages without I/O for a long time to reduce energy consumption.

NOTE 3 – An example of storage hibernation is hard disk drive (HDD) hibernation. The HDD spins continuously at 5400/7200 revolutions per minute (RPM), consuming considerable power. During HDD hibernation, the HDD stops spinning to reduce power consumption.
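The following toy Python sketch (not a real RAID implementation) illustrates the parity principle behind the storage redundancy hardware mentioned in NOTE 2: with XOR parity, as used in RAID 5-style layouts, the content of one failed member can be rebuilt from the remaining members plus the parity block.

```python
# Toy illustration of XOR parity: rebuild one lost data block from the survivors
# and the parity block. This is a teaching sketch, not a storage implementation.
from functools import reduce

def parity(blocks):
    """Compute the XOR parity of equally sized data blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild(surviving, parity_block):
    """Reconstruct the missing block from survivors and the parity block."""
    return parity(surviving + [parity_block])

if __name__ == "__main__":
    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
    p = parity([d0, d1, d2])
    assert rebuild([d0, d2], p) == d1   # d1 is recoverable after its disk fails
    print("Rebuilt block:", rebuild([d0, d2], p))
```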

7.1.1.4.  I/O device requirements

  • Hardware-assisted I/O virtualization: It is recommended that a physical machine supports hardware-assisted I/O virtualization to improve the performance of I/O virtualization.

  • I/O devices direct accessing: It is recommended that a physical machine supports I/O devices direct accessing so that a virtual machine can directly access hardware I/O devices.

    NOTE 1 – I/O device direct accessing refers to technologies supporting a VM's native access to physical I/O devices. One example is I/O device pass-through, in which an I/O device is assigned directly to a VM. The VM can then access the I/O device without a hypervisor's participation.

  • Workload offload: It is recommended that a physical machine supports offloading workloads to I/O devices to reduce the load on the CPU.

    NOTE 2 – In workload offloading, hardware I/O devices execute the workload instead of software on a CPU in order to relieve the CPU's overhead. An example of workload offloading is checking the transmission control protocol (TCP) checksum in a network interface card (NIC) rather than in the CPU.

  • Hardware acceleration: It is recommended that a physical machine supports application‑specific hardware acceleration to perform specific applications more efficiently.

NOTE 3 – Application-specific hardware is customized for a particular use, rather than intended for general‑purpose use. An example of application-specific hardware is a graphics processing unit (GPU).

7.1.2.  Power supply requirements

  • Power supply replacement: It is recommended that a physical machine supports substitution with other power supplies to allow power supply upgrade or replacement of a faulty power supply.

  • Supporting power redundancy: It is recommended that a physical machine supports a redundant power supply to keep the physical machine powered on in case of main power supply failure.

    NOTE 1 – N+1 redundancy of the power supply is widely used (N: number of power supplies based on the total power budget). For example, if the total power budget requires three power supplies, a fourth is installed as the redundant unit.

  • Minimum energy consumption: It is recommended that a physical machine provides minimum energy consumption.

  • Interface for monitoring power: It is recommended that a physical machine supports an interface to a management component for monitoring status of the power supply.

NOTE 2 – An example of the interface for monitoring power is a power management bus (PMBus).
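As an illustrative sketch of power monitoring through the management component, the following Python code invokes the ipmitool command-line utility to read the instantaneous power consumption exposed by the baseboard management controller. It assumes that ipmitool is installed and that the controller supports the DCMI power management commands; these assumptions are not requirements of this Recommendation.

```python
# Illustrative sketch: query the instantaneous power reading reported by the BMC.
# Assumes ipmitool is installed and DCMI power management is supported.
import subprocess

def current_power_reading() -> str:
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The output typically contains a line such as
    # "Instantaneous power reading: 182 Watts"; return that line if present.
    for line in out.splitlines():
        if "Instantaneous" in line:
            return line.strip()
    return out.strip()

if __name__ == "__main__":
    print(current_power_reading())
```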

7.1.3.  Cooling requirements

  • Cooling component replacement: It is recommended that a physical machine supports substitution with other cooling components to allow substitution of a faulty cooling component.

  • Cooling component redundancy: It is recommended that a physical machine supports cooling component redundancy to maintain temperature in case of main cooling component failure.

  • Interface for controlling fan speed: It is recommended that a physical machine supports an interface to a management component to control fan speed.

NOTE – An example of an interface for controlling fan speed is a pulse width modulation (PWM) interface to the management component.
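The following Python sketch illustrates a simple fan speed control (FSC) policy of the kind a management component could apply over a PWM interface: it maps a measured temperature to a PWM duty cycle between a floor value and 100%. The temperature thresholds and duty-cycle floor are illustrative assumptions, not values specified by this Recommendation.

```python
# Minimal sketch of a fan speed control policy: temperature -> PWM duty cycle.
def pwm_duty_cycle(temp_c: float,
                   idle_temp: float = 30.0,   # assumed "quiet" threshold
                   max_temp: float = 75.0,    # assumed full-speed threshold
                   min_duty: int = 20) -> int:
    """Return a PWM duty cycle (percent) for the given temperature."""
    if temp_c <= idle_temp:
        return min_duty
    if temp_c >= max_temp:
        return 100
    span = (temp_c - idle_temp) / (max_temp - idle_temp)
    return round(min_duty + span * (100 - min_duty))

if __name__ == "__main__":
    for t in (25, 40, 60, 80):
        print(f"{t} degC -> {pwm_duty_cycle(t)} % duty cycle")
```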

7.1.4.  Enclosure requirements

  • Monitoring status of the physical machine: It is recommended that a physical machine provides a status panel to check whether components of the physical machine are installed and working correctly.

  • Visual indications: It is recommended that a physical machine provides visual indications of working state (e.g., starting, running, stopped, faulty), suitable for administrators of the physical machine to understand.

  • Equipment for mounting and removal: It is recommended that a physical machine supports safe mounting and easy removal of all components in the enclosure.

  • Circulation of air flow: It is recommended that a physical machine supports the circulation of sufficient air flow, together with the cooling components, to minimize the heat generated inside the enclosure.

7.1.5.  Interconnect network requirements

These functional requirements apply to the multi-processing unit type.

  • Interconnect network support: It is recommended that a physical machine supports a non-Ethernet based interconnect network as well as an Ethernet based interconnect network among the multiple processing units.

    NOTE 1 – For a non-Ethernet based interconnect network, a CSP: cloud operations manager employs a CPU I/O (e.g., PCI Express) of the processing units to construct the interconnect network.

  • Sharing processing unit components: It is recommended that a physical machine allows a component in one processing unit to be shared with other processing units via the interconnect network.

    NOTE 2 – Examples of shared components are memory, storage and I/O.

  • Network topology: It is recommended that a physical machine supports various types of network topology (e.g., Ring, Tree, Mesh, Cube, etc.) for multiple processing units.

  • Configuration of multiple processing units: It is required that a physical machine provides configuration of multiple processing units.

7.1.6.  Management component requirements

  • Providing running information: It is recommended that a physical machine provides running information on all components of the physical machine.

    NOTE 1 – Examples of running information are CPU temperature, CPU utilization, memory utilization, storage read/write load, fan speed and the traffic load of interconnect network.

  • Automatic power operation: It is recommended that a physical machine supports automatic management of power on, power off and restart operations, scheduled automatically according to the load of the physical machine.

  • Monitoring of environment conditions: It is recommended that a physical machine provides monitoring of environment conditions, such as air temperature and air humidity.

  • Self-checking mechanism: It is recommended that a physical machine supports self‑checking to ensure the stability of the physical machine after power on.

NOTE 2 – Self-checking is a process to verify the CPU and memory, to initialize the BIOS and to identify booting devices after a physical machine is powered on.
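As an illustrative sketch of how running information can be collected from the management component, the following Python code queries the DMTF Redfish interface (see Appendix I) for the thermal readings of a chassis. The BMC address, credentials and chassis identifier are placeholders; a real deployment discovers them from the Redfish service root and would use proper certificate validation.

```python
# Illustrative sketch: read temperatures and fan speeds via the Redfish API.
# bmc.example.net, the credentials and chassis id "1" are placeholders.
import requests

BMC = "https://bmc.example.net"
AUTH = ("admin", "password")

def chassis_thermal(chassis_id: str = "1") -> dict:
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Thermal"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = chassis_thermal()
    for sensor in data.get("Temperatures", []):
        print(sensor.get("Name"), sensor.get("ReadingCelsius"))
    for fan in data.get("Fans", []):
        print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
```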

7.2.  I/O interface requirements

  • Provide I/O interface to administrator: A physical machine can optionally provide an I/O interface to administrators for I/O devices such as a monitor, a mouse and a keyboard.

    NOTE 1 – Examples of I/O interface to administrators are a video graphics array (VGA) and a universal serial bus (USB).

  • Provide I/O interface to external storage device: A physical machine can optionally provide an I/O interface for an external storage device to install the hypervisor, operating system and/or other software applications.

    NOTE 2 – Examples of external storage device are CD ROM and USB flash disk.

  • Network interface virtualization: It is recommended that a physical machine supports network interface virtualization to improve interface utilization.

    NOTE 3 – Network interface virtualization is sharing a network interface into multiple virtual network interfaces.

  • Device driver and API supports: It is required that a physical machine supports device drivers and APIs for I/O interface.

7.3.  Operation requirements

  • Processing unit operation: It is recommended that a physical machine provides operations for processing units, such as power operation and monitoring of configuration information for each processing unit.

    NOTE 1 – The power operation for a processing unit controls the power status (e.g., power on, power off and restart) of each of the processing units. The monitoring of configuration information of processing units collects and reports the parameters of the processing units (e.g., CPU type, CPU clock speed, memory frequency and storage capacity).

  • Remote management: It is recommended that a physical machine supports being managed remotely through a network.

    NOTE 2 – Examples of remote management of a physical machine are remote power operation, firmware update and log querying.

  • Diagnostics of the physical machine: It is recommended that a physical machine supports diagnostics to analyze conditions before and after a hardware fault, as well as changes to firmware and components of the physical machine.

NOTE 3 – The fault prediction is accomplished by software.
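As an illustrative sketch of the remote management operations named above, the following Python code issues a power operation (a graceful restart) against a physical machine through the Redfish ComputerSystem.Reset action. The BMC address, credentials and system identifier are placeholders, and Redfish is only one example of a remote management interface.

```python
# Illustrative sketch: remote power operation through the Redfish Reset action.
# bmc.example.net, the credentials and system id "1" are placeholders.
import requests

BMC = "https://bmc.example.net"
AUTH = ("admin", "password")

def reset_system(system_id: str = "1", reset_type: str = "GracefulRestart") -> int:
    url = f"{BMC}/redfish/v1/Systems/{system_id}/Actions/ComputerSystem.Reset"
    resp = requests.post(url, json={"ResetType": reset_type},
                         auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.status_code

if __name__ == "__main__":
    print("Reset request accepted with HTTP status", reset_system())
```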

7.4.  Scalability requirements

  • Expansion of interconnect network: It is recommended that a physical machine provides external expansion of the interconnect network among multiple physical machines to meet the computing performance level required by a cloud service user.

  • I/O interface for device extensions: It is recommended that a physical machine provides an I/O interface for device extensions that can be used to extend high performance network cards, graphics card and so forth.

  • Processing unit replacement: It is recommended that a physical machine supports substitution with other processing units to allow processing unit upgrade.

  • Adding processing units: It is recommended that a physical machine supports the addition of more processing units to the physical machine.

  • Adding components of processing units: It is recommended that a physical machine supports the addition of more components to the processing units, including CPU, memory, storage and I/O device.

  • Adding power supply: It is recommended that a physical machine supports the addition of more power supply components to the physical machine.

  • Adding cooling component: It is recommended that a physical machine supports the addition of more cooling components to the physical machine.

7.5.  Security requirements

  • No additional ports: It is recommended that a physical machine does not expose network ports that are not used.

  • Authorized access: It is recommended that a physical machine supports authorized access.

7.6.  Reliability requirement

  • Support fault location: It is recommended that a physical machine supports fault location, so that the operator can easily replace the failing components.

  • Hot-plug support: A physical machine can optionally support hot-plug without damage.

NOTE – Hot-plug is the plugging in and removal of components of the physical machine while it is running. An example of hot-plug support is the hot-plug disk, which can be plugged into or removed from the physical machine without damage while the physical machine is running.

8.  Security considerations

Security aspects for consideration within the cloud computing environment are addressed by security challenges for the CSPs as described in [ITU-T X.1601]. In particular, [ITU-T X.1601] analyses security threats and challenges and describes security capabilities that could mitigate these threats and meet the security challenges.


Appendix I

Comparison between functional requirements and other specifications

(This appendix does not form an integral part of this Recommendation.)

I.1.  Specifications of other SDOs

I.1.1.  Open Compute Project

The Open Compute Project (OCP) is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data centre hardware designs available for scalable computing.

The OCP Server Project provides standardized server system specifications for scale computing. Standardization is key to ensure that the OCP specification pool does not get fragmented by point solutions that plague the industry today. The Server Project collaborates with the other OCP disciplines to ensure broad adoption and achieve optimizations throughout all aspects from validation, to manufacturing, deployments, data centre operations and de-commissioning.

Table I.1 lists OCP related specifications.

Table I.1 — OCP related specifications

Family | Specification | Summary | Published
OpenRack V2 | Twin Lakes 1S Server Design Specification V1.00 [b-OCP 1S] | This specification describes the design of the Twin Lakes 1S server based on the Intel Xeon Processor D-2191 System-on-a-Chip (SoC). | 2018
| Facebook 2S Server Tioga Pass Specification V1.0 [b-OCP 2S] | This specification describes Facebook dual sockets server Intel Motherboard v4.0 (Project name: Tioga Pass) design and design requirement to integrate Tioga Pass into Open Rack V2. | 2018
| Big Basin-JBOG Specification V1.0 [b-OCP JBOG] | This document describes technical specifications for Facebook's Big Basin-JBOG for use in Open Rack V2. | 2018
| Inspur Server Project San Jose V1.01 [b-OCP SJ] | This document defines the technical specification for the San Jose motherboard and chassis used in Open Compute Project Open Rack V2. | 2017
| Facebook Multi-Node Server Platform: Yosemite V2 Design Specification V1.0 [b-OCP Yose] | This specification describes the design of the Yosemite V2 platform that hosts four One Socket (1S) servers, or two sets of 1S server/device card pairs. | 2017
| Facebook Server Intel Motherboard V4.0 Project Tioga Pass V0.30 [b-OCP TP] | This specification describes Facebook dual sockets server Intel Motherboard v4.0 (Project name: Tioga Pass) design and design requirement to integrate Intel Motherboard v4.0 into Open Rack V2. | 2017
| Facebook Server Intel Motherboard V3.1 [b-OCP MB] | This specification describes Intel Motherboard v3.0 design and design requirement to integrate Intel Motherboard v3.0 into Open Rack V1 and Open Rack V2. | 2016
| Open Rack- Intel Motherboard Hardware V2.0 [b-OCP IMBH] | This document defines the technical specifications for the Intel motherboard used in Open Compute Project servers. | 2016
OpenRack v1 | Open Rack- AMD Motherboard Hardware V2.0 [b-OCP AMBH] | This document defines the technical specifications for the AMD motherboard used in Open Compute Project servers. | 2012
| Facebook server Fan Speed Control Interface Draft V0.1 [b-OCP FSCI] | This document describes Facebook's FSC algorithm and its update methodology. Using the OpenIPMI fan speed control (FSC) is an intelligent method for controlling server fans to provide adequate cooling while managing thermal constraints and power efficiency. This document will help to manage FSC settings and FSC updates by using intelligent platform management interface (IPMI) commands to vary the fan control profile on either local or remote systems. | 2017
Olympus | Project Olympus AMD EPYC Processor Motherboard Specification [b-OCP OAPM] | This specification describes the Project Olympus AMD Server Motherboard. This is an implementation specific specification under the Project Olympus Universal Motherboard Specification. | 2017
| Project Olympus Cavium ThunderX2 ARMx64 Motherboard Specification [b-OCP OCTAM] | This specification focuses on the Project Olympus Cavium ThunderX2 ARMx64 Motherboard. This is an implementation specific specification under the Project Olympus Universal Motherboard Specification. | 2017
| Project Olympus 1U Server Mechanical Specification [b-OCP O1USM] | This specification focuses on the Project Olympus full-width server mechanical assembly. It covers the mechanical features and supported components of the server, as well as the interfaces with the mechanical and power support structure. | 2017
| Project Olympus 2U Server Mechanical Specification [b-OCP O2USM] | This specification focuses on the Project Olympus 2U server mechanical assembly. It covers the mechanical features and supported components of the server, as well as the interfaces with the mechanical and power support structure. | 2017
| Project Olympus Intel Xeon Scalable Processor BIOS Specification [b-OCP OBIOS] | The system BIOS is an essential platform ingredient which is responsible for platform initialization that must be completed before booting of an operating system. Thus, the BIOS execution phase of the boot process is often referred to as the pre-boot phase. | 2017
| Project Olympus Intel Xeon Scalable Processor Motherboard Specification [b-OCP OMB] | This specification describes the Project Olympus Intel Server Motherboard. This is an implementation specific specification under the Project Olympus Universal Motherboard Specification. | 2017
OCS | Open CloudServer OCS Programmable Server Adapter Mezzanine Programmables V1.0 [b-OCP OCSPSAM] | This document defines physical and interface requirements for the programmable NIC mezzanine card that can be installed on an Open Cloud Server (OCS) server blade. This server adapter is programmable and provides CPU offload for host-based SDN, virtual switch data path and tunneling protocols. | 2016
| Open CloudServer OCS Chassis Manager Specification V2.1 [b-OCP OCSCM] | This specification is an addendum to the OCS Open CloudServer Chassis Management v2.0 specification. It defines the requirements for the upgrade to the Chassis Manager v1.0 made necessary by end of production of the CPU. | 2016
| Open CloudServer OCS Blade Specification V2.1 [b-OCP OCSB] | This document is intended for designers and engineers who will be building blades for an OCS system. | 2016
| Open CloudServer OCS Solid State Drive V2.1 [b-OCP OCSSSD] | This specification, Open CloudServer Solid State Drive (OCS SSD), describes the low-cost, high-performance flash-based storage devices deployed first in the Open CloudServer OCS Blade V2 specification. The OCS Blade V2 supports four PCI-Express riser cards and eight Open CloudServer Solid State Drive M.2 modules. Table 1 of that specification briefly describes the required features. | 2015
| Open CloudServer OCS Power Supply V2.0 [b-OCP OCSPS] | This specification, Open CloudServer Chassis Power Supply Version 2.0, describes the power supply family requirements for the Windows Cloud Server system. The mechanical and electrical interfaces are identical between power supply options to enable a common slot, universal, modular foundation power supply system for the Microsoft Windows Cloud Server systems. | 2015
| Open CloudServer SAS Mezzanine I/O specification V1.0 [b-OCP OCSSAS] | This document outlines specifications for the Open CloudServer Serial Attached SCSI (SAS) mezzanine card. | 2015
| Open CloudServer JBOD specification V1.0 [b-OCP OCSJBOD] | This document provides the technical specifications for the design of the 6G half-width JBOD blade for the Open CloudServer system. | 2015
| Open CloudServer OCS Tray Mezzanine Specification V2.0 [b-OCP OCSTRAY] | This specification, Open CloudServer OCS Tray Mezzanine Version 2.0, describes the physical and interface requirements for the Open CloudServer (OCS) tray mezzanine card. The mezzanine card will be installed on the tray backplane and will have a Peripheral Component Interconnect Express (PCIe) x16 Gen3 interface. This interface can be used as one x16, two x8, or four x4 channels. | 2015
| Open CloudServer Chassis Specification V2.0 [b-OCP OCSC] | Describes the hardware used in the Version 2.0 (V2.0) OCS system, including the chassis, tray and systems management. | 2015
| Open CloudServer OCS NIC Mezzanine Specification V2.0 [b-OCP OCSNIC] | This specification, Open CloudServer NIC Mezzanine Version 2.0, describes the physical and interface requirements for the Open CloudServer (OCS) NIC mezzanine card to be installed on an OCS blade. | 2014
OCP Mezzanine | Mezzanine Card 2.0 Design Specification V1.0 [b-OCP MEZZ] | The Mezzanine card 2.0 specification is developed based on the original OCP Mezzanine card. It extends the card mechanical and electrical interface to enable new use cases for Facebook and other users in the OCP community. The extension takes backward compatibility with existing OCP platforms designed for the original OCP Mezzanine card specification V0.5 into consideration, and some trade-offs are made between backward compatibility and new requirements. | 2016
| Mezzanine Card for Intel v2.0 Motherboard [b-OCP MEZZMB] | This document describes the mezzanine card design for use with Open Compute Project Intel v2.0 motherboards. The mezzanine card is installed on an Intel v2.0 OCP motherboard to provide extended functionality, such as support for 10GbE PCI-E devices. | 2012
19" Server | QCT Big Sur Product Architecture Following Big Sur Specification V1.0 [b-OCP BS] | The QCT Big Sur is a 4OU/21" chassis using IA-64 based dual-socket servers that support the Grantley-EP processors in combination with the Wellsburg PCH to provide a balanced feature set between technology leadership and cost. The QCT Grantley platform has 16 DIMMs and supports 8 GPGPU cards and a maximum of 8 x 2.5" HDDs. | 2017
| Hyve Solutions Ambient Series-E V1.2 [b-OCP HSAS] | This document defines the technical specifications for the Hyve Solutions Ambient Series-E server, including motherboard, chassis and power supply. | 2017
| QuantaGrid D51B-1U V1.1 [b-OCP QGD] | The QuantaGrid D51B-1U will be an IA-64 based dual-socket server that supports the Grantley-EP processors in combination with the Wellsburg PCH to provide a balanced feature set between technology leadership and cost. | 2015
| Decathlete Server Board Standard V2.1 [b-OCP DSBS] | This standard provides board-specific information detailing the features and functionality of a general purpose 2-socket server board for adoption by the Open Compute Project community. The purpose of this document is to define a dual socket server board that is capable of deployment in scale-out data centres as well as traditional data centres with 19" rack enclosures. | 2013
SOC Boards | Panther+ Micro-Server Card Hardware V0.8 [b-OCP PMSCH] | This document describes the technical specifications used in the design of an Intel Avoton SoC based micro-server card for the Open Compute Project, known as the Panther+. | 2016
| Micro-Server Card Hardware V1.0 [b-OCP HSCH] | This specification provides a common form factor for emerging micro-server and SOC (System-On-Chip) server designs. | 2016
Barreleye | Barreleye G2 Specification [b-OCP G2] | This document describes the specifications for the Zaius POWER9 motherboard, the Barreleye G2 server (2OU) and the Zaius server (1.5OU). | 2017
| Barreleye G1 Specification [b-OCP G1] | This document describes the specification of Barreleye, an OpenPOWER-based Open Compute server, with a mechanical and electrical package designed for Open Rack. | 2016
| Facebook, Microsoft, M.2 Carrier Card Design Specification V1.0 [b-OCP M2] | This specification provides the requirements for a PCIe Full Height Half Length (FHHL) form factor card that supports up to four M.2 form factor solid-state drives (SSDs). The card shall support 110 mm (Type 22110) or 80 mm (Type 22080) dual sided M.2 modules. | 2018
| Facebook PCIe Retimer Card V1.1 [b-OCP PCIRC] | This specification describes the design and design requirements for a PCIe add-in card that converts an internal PCIe connection to an external PCIe connection. | 2017
| Add-on-Card Thermal Interface Spec for Intel Motherboard V3.0 [b-OCP ACTI] | The goal of this document is to define a standard interface for the Facebook Intel motherboard V3.0 to poll thermal data from an add-on card, including the Mezzanine card. | 2017
Debug Card | OCP debug card with LCD spec V1.0 [b-OCP Debug] | The specification defines the OCP Debug Card with LCD for server system debug. | 2018
Mezz Card | 25G Dual Port OCP 2.0 NIC Mezzanine Card V1.0 [b-OCP 25GDual] | This document specifies a technical design implementation to define a 25G Ethernet card which meets the requirements of the OCP Mezzanine card 2.0 type-A design; the heat sink design allows this card to be deployed in an OCP server or a standard server. | 2018
| OCP NIC 3.0 Design Specification V0.8 [b-OCP NIC] | The OCP NIC 3.0 specification is a follow-on to the OCP Mezz 2.0 rev 1.00 design specification. The OCP NIC 3.0 specification supports two basic card sizes: Small Card and Large Card. The Small Card allows for up to 16 PCIe lanes on the card edge while the Large Card supports up to 32 PCIe lanes. | 2018

I.1.2.  DMTF

The Distributed Management Task Force (DMTF) is an industry standards organization working to simplify the manageability of network-accessible technologies through open and collaborative efforts by leading technology companies. DMTF creates and drives the international adoption of interoperable management standards, supporting implementations that enable the management of diverse traditional and emerging technologies including cloud, virtualization, network and infrastructure.

DMTF has developed specifications related to management interface, which are related to the management of physical machines.

Table I.2 — DMTF related specifications

Specification | Summary | Published
Redfish Scalable Platforms Management API Specification [b-DMTF RFAPI] | This specification defines the protocols, data model and behaviors, as well as other architectural components needed for an interoperable, cross-vendor, remote and out-of-band capable interface that meets the expectations of cloud and web-based IT professionals for scalable platform management. While large scale systems are the primary focus, the specifications are also capable of being used for more traditional system platform management implementations. | 2018-08-23
Redfish Host Interface Specification [b-DMTF RFHI] | This specification defines functional requirements for Redfish Host Interfaces. In the context of this document, the term "Host Interface" refers to interfaces that can be used by software running on a computer system to access the Redfish Service that is used to manage that computer system. | 2017-12-11
Redfish Interoperability Profiles [b-DMTF RFP] | The Redfish Interoperability Profile is a JSON document that contains Schema-level, Property-level and Registry-level requirements. At the property level, these requirements can include a variety of conditions under which the requirement applies. | 2018-05-15

I.1.3.  SNIA

The Storage Networking Industry Association (SNIA) is a non-profit organization made up of member companies spanning information technology. A globally recognized and trusted authority, SNIA's mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.

Table I.3 — SNIA related specifications

Specification | Summary | Published
SNIA Swordfish Specification V1.0.6 [b-SNIA SF] | The Swordfish Scalable Storage Management API ("Swordfish") defines a RESTful interface and a standardized data model to provide a scalable, customer-centric interface for managing storage and related data services. It extends the Redfish Scalable Platforms Management API Specification (DSP0266) from the DMTF. | 2018-05-25

I.1.4.  ETSI

The European Telecommunications Standards Institute (ETSI) is the recognized regional standards body – European Standards Organization (ESO) – dealing with telecommunications, broadcasting and other electronic communications networks and services.

The ETSI NFV EVE Working Group seeks to develop the necessary requirements to enable a common set of hardware elements and physical environments (e.g., data centres) that can be used to support network function virtualization (NFV) services [b-ETSI EVE007].

Table I.4 — ETSI related specifications

Specification | Summary | Published
Hardware Interoperability Requirements Specification [b-ETSI EVE007] | The document develops a set of normative interoperability requirements for the NFV hardware ecosystem and telecommunications physical environment to support NFV deployment. | 2017-03


Appendix II

Use cases of the physical machine for cloud computing

(This appendix does not form an integral part of this Recommendation.)

This appendix describes physical machine related use cases.

Table II.1 — Providing IaaS service with processing resource – use case

Title: Providing IaaS service with processing resource use case
Description: This use case concerns the infrastructure capabilities type of cloud service, especially the provision of VMs, which reside on a physical machine. The virtual machine provides a virtualized and isolated computing environment for each guest operating system (OS) with processing, storage, networking and other hardware resources. Usually a physical machine carries more than one virtual machine and needs to provide more virtual CPUs than physical CPUs, and similarly for memory and I/O devices. Furthermore, the physical machine usually provides a management function for the CSP to manage and monitor the physical machines. The loading information helps the CSP select which host to deploy the VMs on, and the fault location information helps the CSP easily replace the failed component. As more applications run on one physical machine, reliability is required so that it can continue to run and bear the services without migration when minor failures occur. As some applications may require other hardware components with special features, the physical machine has to reserve one or more extended interfaces for adding cloud resources. To improve VM performance, physical machines usually use CPUs that support a virtualization instruction set. For situations with heavy network traffic processing, physical machines can offload traffic processing to I/O devices to improve performance and reduce the load on CPUs.
Roles/sub-roles: CSP: cloud service operations manager
Figure
Pre-conditions (optional): The virtual machine resides on the physical machine.
Post-conditions (optional): CSP: cloud service operations manager wants to build or expand a resource pool to provide VM service.
Derived requirements

Table II.2 — Preparing a physical machine – use case

Title: Preparing a physical machine
Description

A cloud service operations manager installs multiple processing units on the backplane of an enclosure to increase the computing resources of a physical machine. He or she also installs more than one storage device per CPU in a processing unit through standard storage interfaces.
To connect all multiple processing units together, a cloud service operations manager mounts a switch device which is responsible for system interconnects and network topology to the backplane of the enclosure.
In addition, the enclosure is equipped with a power supply to provide power to an entire physical machine and a cooling device to reduce heating. A cloud service operations manager should be able to check whether each device is installed in a physical machine correctly through a console panel on the enclosure.
System software and operating systems are installed on the corresponding devices or rebooted, and consequently the physical machine completes the set-up of its operations.
Based on the above steps to construct a physical machine, several physical machines are connected with each other by their external interfaces and can be installed in a standard rack.

Roles/sub-roles: CSP: cloud service operations manager.
Figure
Pre-conditions (optional): CSP: cloud service operations manager wants to modify or expand the resources in a physical machine.
Post-conditions (optional): A physical machine provides a cloud service.
Derived requirements

Table II.3 — Cold data storage resource – use case

Title: Cold data storage resource use case
Description: Cold data means data that is stored but almost never read again. The cloud storage service for cold data provides almost unlimited storage capacity at a very low cost, and the physical machine for cold data storage requires the highest capacity and the lowest cost. The typical use case is a series of sequential writes, but random reads. As the physical machine is recommended to provide as much storage capacity as possible, it would have many large-capacity disks. To provide a highly durable storage service, the physical machine is recommended to provide a management function for the CSP to manage and monitor the physical machine cluster, so that the CSP can rebuild the data in time when failures happen; the hot-plug function is also needed. Physical machines usually support storage of different media to balance cost and performance for different situations.
Roles/sub-roles: CSP: cloud service operations manager
Figure
Pre-conditions (optional): The cloud storage service resides on the physical machine.
Post-conditions (optional): CSP: cloud service operations manager wants to build or expand a resource pool to provide a cloud storage service for cold data.
Derived requirements

Table II.4 — Using an interconnect network for high-performance service – use case

Title: Using an interconnect network for high-performance service
Description

When a cloud service user requests a high-performance computing service (e.g., parallel processing, big data processing), a cloud service operations manager is responsible for setting up an interconnection network among processing units, because multiple computing resources are required to perform such a service.
Ethernet is a widely used protocol for the interconnection network, but the cloud service operations manager may provide other protocols. Since the performance of the interconnect network depends on the protocol, the cloud service operations manager can select and provide the interconnect network appropriate to the cloud service user.
To use this interconnect network, a cloud service user can employ socket-based applications when the Ethernet protocol is used (see the note following this table). For other networks, applications are provided to the cloud service user through application program interfaces (APIs) from the cloud service operations manager, because the cloud service operations manager is also responsible for providing the device driver and API needed to use a proprietary network.
Occasionally, a cloud service user may request multiple physical machines for such a service. In this case, the cloud service operations manager is responsible for supporting multiple shared resources through the network. Since the physical machines are independent of the network devices, this network is arranged as cabling in an interconnect network.
When the cloud service operations manager builds this cabled network, it is constructed to support various topologies (e.g., ring, mesh, tree) to meet the different performance levels of the service requested by the cloud service user.

Roles/sub-roles: CSC: cloud service user and CSP: cloud service operations manager
Figure
Pre-conditions (optional): CSC: cloud service user wants to use a high-performance computing application.
Post-conditions (optional): Physical machines can run a high-performance computing application through a system interconnect network and cabled network.
Derived requirements
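
NOTE – As noted in the description, socket-based applications can be used directly when the interconnect network is Ethernet. The following non-normative sketch exchanges one message between two endpoints; the loopback address keeps the sketch self-contained, whereas in practice the address of a peer processing unit would be used.

# Non-normative sketch of a socket-based exchange over an Ethernet interconnect:
# one endpoint echoes a message sent by the other.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000    # placeholder; in practice a peer processing unit

# The listening socket is created first so the client cannot connect too early.
server = socket.create_server((HOST, PORT))


def echo_once() -> None:
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))    # echo one message back


threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"interconnect test")
    print(client.recv(1024).decode())    # prints: interconnect test

server.close()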

Table II.5 — Physical machine for hyper-scale deployment – use case

Title: Physical machine for hyper-scale deployment
Description: This use case covers the situation where very large numbers of physical machines will be employed in multiple data centres around a region or around the world. In this case, a single physical machine will typically be implemented as a server blade that fits into a specially built rack. The rack will typically also include storage, management and networking equipment, which might or might not be implemented using similar blades.
Roles/sub-roles: CSP: cloud service operations manager.
Figure

In this type of deployment, physical machines are constructed for deployment in high-density racks specifically designed for the purpose. Each rack provides all the infrastructure required to support the machines, including power, network connectivity and ventilation/cooling. Individual physical machines are automatically initialised and provisioned with necessary software when plugged into an active rack (see the note following this table). Once running, each machine is made available for the deployment of VMs or other functions requested by CSCs or CSP administrators. Fault tolerance is often provided by software across multiple machines, so an individual machine can fail without adverse effect on the overall system.

Pre-conditions (optional): CSP: The CSP wishes to deploy many physical machines in two or more locations, using minimal staffing.
Post-conditions (optional):

A physical machine is plugged into a rack, configures itself to the local network and is automatically provisioned with all necessary software including the host operating system, network stacks and hypervisor. The data centre management system is then able to deploy workloads to the machine. In the event of failure, the machine can be removed from the rack and a replacement plugged in with minimal impact on deployed cloud services.
A data centre can be left unmanned for several days or weeks at a time. When visited, the failed machines can be quickly removed and replaced by new or refurbished machines, which configure and provision themselves automatically.

Derived requirements
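
NOTE – The automatic initialisation of a machine plugged into an active rack can be illustrated with the following non-normative sketch, in which a newly installed machine announces itself to a data centre management system. The registration URL and payload fields are hypothetical; a real deployment would use the CSP's own provisioning protocol.

# Non-normative sketch: on first boot, the physical machine registers itself with
# a data centre management system, which can then push software and workloads to
# it. REGISTRY_URL, the payload schema and the capability values are placeholders.
import json
import socket
import urllib.request
import uuid

REGISTRY_URL = "https://dcm.example.net/machines"   # hypothetical endpoint

payload = {
    "hostname": socket.gethostname(),
    "machine_id": str(uuid.uuid4()),                 # stand-in for a burned-in ID
    "capabilities": {"cpus": 64, "memory_gib": 512}, # illustrative values
}

request = urllib.request.Request(
    REGISTRY_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=10) as response:
    print("registered, HTTP status", response.status)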

Table II.6 — Physical machine for unmanned deployment – use case

Title: Physical machine for unmanned deployment
Description: This use case covers the situation where physical machines will be in places where physical access is extremely difficult or only available at infrequent intervals. In this case, a single physical machine will typically be implemented as a server blade that fits into a specially built rack. The rack will typically also include storage, management and networking equipment, which might or might not be implemented using similar blades.
Roles/sub-roles: CSP: cloud service operations manager.
Figure

In this type of deployment, physical machines are constructed for deployment in locations where physical access is very tightly constrained. These will typically be inside some form of self-contained "capsule" rather than a normal building.
Examples are systems deployed in underwater containers for positioning close to coastal urban centres.

Pre-conditions (optional): CSP: The CSP wishes to deploy physical machines in locations where human access will not be possible for the majority of operational time.
Post-conditions (optional):

The physical machines are left running without human access for periods of six months or more.
The data centre management system can deploy workloads to the machine. In the event of failure, the machine can be taken offline, remote diagnostics can be run and the machine either returned to service or taken permanently offline.

Derived requirements

Table II.7 — Physical machine for network edge – use case

Title: Physical machine for network edge
Description: This use case covers the situation where physical machines will be in network edge locations such as 5G cell towers, telephone exchanges and cable TV head-end multiplexers. This is one use of the term "micro data centre". The primary reason for this is to minimise network latency between end users and the cloud services running at the network edge, or to concentrate network traffic at the edge to manage load on core network servers (e.g., for massive IoT telemetry applications).
Roles/sub-roles: CSP: cloud service operations manager.
Figure

In this type of deployment, physical machines are constructed for deployment at the edge of the network, usually co-located with access network transmission and/or multiplexing equipment.
Examples of such locations include:

  • Local telephone exchanges

  • Street multiplexers (e.g., FTTC)

  • Cellular towers

  • Cable TV head-ends

  • Airliner WiFi/Cellular network equipment.

Pre-conditions (optional): CSP: The CSP wishes to deploy physical machines to support cloud services running at the edge of their (or a partner's) physical network.
Post-conditions (optional):

The physical machine(s) run co-located with other network equipment at the edge of the network. The CSP can deploy cloud services or service components to these network edge micro data centres.
CSUs can access the cloud service with minimal network latency.
The CSP can process large amounts of data from CSUs, without imposing heavy loads on the backhaul network or the core network cloud services.

Derived requirements

Table II.8 — Configurations of clustering processing units – use case

Title: A use case of configurations of clustering processing units
Description

When a CSC: cloud service user requests a cloud service, a CSP: cloud service operations manager is responsible for providing computing resources that can run the cloud service. In this case, based on the multiple processing units in the physical machine, it is possible for the CSP: cloud service operations manager to configure a clustering system that distributes the computational load among the multiple processing resources.
Therefore, in order to utilize highly integrated computing resources efficiently, the CSP: cloud service operations manager is responsible for configuring the clustered processing units. The configuration can be changed dynamically and elastically according to the cloud service requirements of the CSC: cloud service user. In other words, the configuration of the cluster system can vary according to the characteristics of the required cloud service (e.g., network usage ratio, computing capability, the proportion of memory-intensive computation, the efficiency of distributed computing).
For a cloud service based on distributed processing, a clustering configuration is suitable because the data analysis workload can be divided among several processing units. On the other hand, when a CSC: cloud service user requests a cloud service in which data communications occur intensively among processing units, a minimum number of processing units is recommended as the clustered resource (see the note following this table).
In addition, the networking between the processing units of a cluster can basically be based on a legacy network such as Ethernet. For a cloud service that is clustering-favourable but has a high proportion of network usage among processing units, a proprietary network for the clustering environment can be provided to reduce the overhead of network communications.

Roles/sub-roles: CSP: cloud service operations manager, CSC: cloud service user
Figure
Pre-conditions (optional): CSC: cloud service user wants to use a cloud service.
Post-conditions (optional): Physical machines can run a cloud service efficiently and elastically according to the desired performance.
Derived requirements
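
NOTE – The following non-normative sketch shows one possible, purely hypothetical heuristic for choosing the number of clustered processing units from the service characteristics discussed above; the thresholds and formula are illustrative only and are not part of this Recommendation.

# Non-normative sketch of a (hypothetical) cluster-sizing heuristic: highly
# parallel, low-communication services get many processing units, while
# communication-intensive services are kept on as few units as possible.
def select_cluster_size(parallel_fraction: float,
                        comm_intensity: float,
                        max_units: int = 32) -> int:
    """Return a number of processing units to cluster.

    parallel_fraction: share of the workload that can run in parallel (0..1).
    comm_intensity:    share of time spent in inter-unit communication (0..1).
    """
    if comm_intensity > 0.5:
        return 1   # communication-dominated: minimum number of units
    size = round(max_units * parallel_fraction * (1.0 - comm_intensity))
    return max(1, min(max_units, size))


print(select_cluster_size(0.9, 0.1))   # distributed analysis -> large cluster
print(select_cluster_size(0.7, 0.6))   # communication-intensive -> single unit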

Table II.9 — Replacing components of a physical machine – use case

Title: A use case of replacing components of a physical machine
Description: This use case covers the situation where components of a physical machine are replaced. After deployment and operation, a component of a physical machine may need to be replaced with a new one due to component failure. In addition, a component could be replaced with a component of another model to upgrade performance. Typically, a physical machine should support replacement of the main components, including CPU, memory, storage, power supply and cooling components.
Roles/sub-roles: CSP: cloud service operations manager
Figure

To support replacement, the components are not tightly coupled to the physical machine, so that they can be installed or uninstalled dynamically. Typically, the components are designed to follow certain specifications to ensure a compatible interface and shape.

Pre-conditions (optional): CSP wishes to replace some components of physical machines to fix component failures or to upgrade components.
Post-conditions (optional): The physical machine runs normally with the new components.
Derived requirements

Table II.10 — High reliability deployment – use case

Title: A use case for high reliability deployment
Description: This use case covers the situation where a physical machine is deployed with high reliability. To achieve high reliability, the physical machine needs to be deployed with redundant main components, such as CPU, power supply and cooling components. In addition, the physical machine needs to support technologies that reduce data errors and data loss during operation, such as memory error correction and RAID.
Through RAID technology, the data of the physical machine is distributed across multiple drives in different ways, referred to as RAID levels, depending on the required level of redundancy and performance. Taking RAID 1 as an example, as shown in the figure below, RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks. The data of the physical machine will not be lost as long as at least one member drive is operational (see the note following this table).
Roles/sub-roles: CSP: cloud service operations manager
Figure
Pre-conditions (optional): CSP wishes to deploy physical machines with high reliability.
Post-conditions (optional): Physical machines can still work when some components fail. The physical machine can reduce data errors and losses due to memory and storage failures.
Derived requirements
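
NOTE – The RAID 1 behaviour described in this use case can be illustrated with the following non-normative sketch, in which every block is written to two in-memory "member drives" and remains readable after one of them fails. It illustrates the mirroring principle only and is not a storage implementation.

# Non-normative sketch of the RAID 1 principle: each block is written to every
# operational member drive, so data stays readable while at least one member
# remains operational. The in-memory "drives" are purely illustrative.
class Mirror:
    def __init__(self) -> None:
        self.drives = [dict(), dict()]    # two member "drives": block -> data
        self.failed = [False, False]

    def write(self, block: int, data: bytes) -> None:
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                drive[block] = data        # identical copy on each member

    def read(self, block: int) -> bytes:
        for i, drive in enumerate(self.drives):
            if not self.failed[i] and block in drive:
                return drive[block]        # any surviving member can serve reads
        raise IOError("all member drives have failed")


raid1 = Mirror()
raid1.write(0, b"cloud data")
raid1.failed[0] = True                     # simulate failure of one member drive
print(raid1.read(0))                       # data is still readable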

Bibliography

[b‑DMTF RFAPI]  Redfish Scalable Platforms Management API Specification.
[b‑DMTF RFHI]  Redfish Host Interface Specification.
[b‑DMTF RFP]  Redfish Interoperability Profiles.
[b‑ETSI EVE007]  Hardware Interoperability Requirements Specification.
[b‑OCP 1S]  Twin Lakes 1S Server Design Specification V1.00.
[b‑OCP 25GDual]  25G Dual Port OCP 2.0 NIC Mezzanine Card V1.0.
[b‑OCP 2S]  Facebook 2S Server Tioga Pass Specification V1.0.
[b‑OCP ACTI]  Add-on-Card Thermal Interface Spec for Intel Motherboard V3.0.
[b‑OCP AMBH]  Open Rack- AMD Motherboard Hardware V2.0.
[b‑OCP BS]  QCT Big Sur Product Architecture Following Big Sur Specification V1.0.
[b‑OCP DSBS]  Decathlete Server Board Standard V2.1.
[b‑OCP Debug]  OCP debug card with LCD spec V1.0.
[b‑OCP FSCI]  Facebook server Fan Speed Control Interface Draft V0.1.
[b‑OCP G1]  Barreleye G1 Specification.
[b‑OCP G2]  Barreleye G2 Specification.
[b‑OCP HSAS]  Hyve Solutions Ambient Series-E V1.2.
[b‑OCP HSCH]  Micro-Server Card Hardware V1.0.
[b‑OCP IMBH]  Open Rack- Intel Motherboard Hardware V2.0.
[b‑OCP JBOG]  Big Basin-JBOG Specification V1.0.
[b‑OCP M2]  Facebook, Microsoft, M.2 Carrier Card Design Specification V1.0.
[b‑OCP MB]  Facebook Server Intel Motherboard V3.1.
[b‑OCP MEZZ]  Mezzanine Card 2.0 Design Specification V1.0.
[b‑OCP MEZZMB]  Mezzanine Card for Intel v2.0 Motherboard.
[b‑OCP NIC]  OCP NIC 3.0 Design Specification V0.8.
[b‑OCP O1USM]  Project Olympus 1U Server Mechanical Specification.
[b‑OCP O2USM]  Project Olympus 2U Server Mechanical Specification.
[b‑OCP OAPM]  Project Olympus AMD EPYC Processor Motherboard Specification.
[b‑OCP OBIOS]  Project Olympus Intel Xeon Scalable Processor BIOS Specification.
[b‑OCP OCSB]  Open CloudServer OCS Blade Specification V2.1.
[b‑OCP OCSC]  Open CloudServer Chassis Specification V2.0.
[b‑OCP OCSCM]  Open CloudServer OCS Chassis Manager Specification V2.1.
[b‑OCP OCSJBOD]  Open CloudServer JBOD specification V1.0.
[b‑OCP OCSNIC]  Open CloudServer OCS NIC Mezzanine Specification V2.0.
[b‑OCP OCSPS]  Open CloudServer OCS Power Supply V2.0.
[b‑OCP OCSPSAM]  Open CloudServer OCS Programmable Server Adapter Mezzanine Programmables V1.0.
[b‑OCP OCSSAS]  Open CloudServer SAS Mezzanine I/O specification V1.0.
[b‑OCP OCSSSD]  Open CloudServer OCS Solid State Drive V2.1.
[b‑OCP OCSTRAY]  Open CloudServer OCS Tray Mezzanine Specification V2.0.
[b‑OCP OCTAM]  Project Olympus Cavium ThunderX2 ARMx64 Motherboard Specification.
[b‑OCP OMB]  Project Olympus Intel Xeon Scalable Processor Motherboard Specification.
[b‑OCP OSR]  Project Olympus Server Rack Specification.
[b‑OCP PCIRC]  Facebook PCIe Retimer Card V1.1.
[b‑OCP PMSCH]  Panther+ Micro-Server Card Hardware V0.8.
[b‑OCP QGD]  QuantaGrid D51B-1U V1.1.
[b‑OCP SJ]  Inspur Server Project San Jose V1.01.
[b‑OCP TP]  Facebook Server Intel Motherboard V4.0 Project Tioga Pass V0.30.
[b‑OCP Yose]  Facebook Multi-Node Server Platform: Yosemite V2 Design Specification V1.0.
[b‑SNIA SF]  SNIA Swordfish Specification V1.0.6.