Deploying Baremetal Instances with OpenStack Ver1.1 2013/02/10 Etsuji Nakai
$ who am i
Etsuji Nakai
Senior solution architect and cloud evangelist at Red Hat. Working for NII (National Institute of Informatics, Japan) as a cloud technology consultant. The author of the Professional Linux Systems series, available only in Japanese. Translation offers from publishers are welcome ;-)
Professional Linux Systems: Technology for the Next Decade
Professional Linux Systems: Deployment and Management
Professional Linux Systems: Network Management
Background of the project
Why does baremetal matter?
General use cases:
I/O-intensive applications (RDBs)
Realtime applications (deterministic latency)
Native processor features, etc.
Specific use case in the Academic Research Cloud (ARC) of NII:
Flexible extension of existing server clusters.
Flexible extension of existing cloud infrastructure.
Academic Research Cloud (ARC) in NII, today.
This is a prototype of the Japan-wide research cloud. It is now running in NII's laboratories, and will be extended to a Japan-wide research cloud.
Research labs can extend their existing clusters (HPC clusters, cloud infrastructures, etc.) by attaching baremetal servers from the baremetal resource pool over an L2 connection (VLAN), with on-demand provisioning/de-provisioning through a self-service portal.
Future plan of the ARC.
ARC will be extended to a Japan-wide cloud with SINET4 WAN connections. SINET4 is an MPLS-based wide-area Ethernet service for academic facilities in Japan, operated by NII.
http://www.sinet.ad.jp/index_en.html
Overview of dodai-compute1.0
What is dodai-compute? A baremetal driver extension of Nova, currently used in ARC. Designed and developed by NII in 2012, based on Diablo with Ubuntu 11.10.
Source code: https://github.com/nii-cloud/dodai-compute
Upside: a simple extension aimed at the specific use case :-)
Downside: unsuitable for general use cases :-(
Cannot manage a mixed environment of baremetal and hypervisor hosts.
One-to-one mapping from instance flavor to baremetal host. (No scheduling logic to select a suitable host automatically.)
Nonstandard use of availability zones. (Used for host status management.)
The most outstanding issue: it's not merged in upstream. No community support, no future!
Planning of the ARC baremetal provisioning feature
It should be designed based on the framework in the upstream.
Existing framework: GeneralBareMetalProvisioningFramework, so called NTTdocomo-openstack.
Blueprint: http://wiki.openstack.org/generalbaremetalprovisioningframework
Source code: https://github.com/nttdocomo-openstack/nova
As a first step, we compared the architectures of dodai-compute and NTTdocomo-openstack, and considered the following:
What's common and what's uncommon?
What can be further generalized in NTTdocomo-openstack?
What should be added so it can be used for ARC?
The goal of the project dodai-compute2.0 is to extend the upstream framework for ARC, and not to become a private branch but to stay in the upstream.
Note: the NTTdocomo-openstack branch has been merged into the upstream with many modifications. Although this slide is based on the NTTdocomo-openstack branch, the future extension will be done directly on the upstream.
By the way, what does dodai stand for? 1. Base, foundation, framework, etc. 2. A sub flight system (SFS) featured in Mobile Suit Gundam.
Comparison of dodai-compute1.0 and NTTdocomo-openstack
Today's Topics 1. Coupling Structure with Nova Scheduler. 2. OS Provisioning Mechanism. 3. Network Virtualization.
Coupling Structure with Nova Scheduler
General flow of instance launch
Question: How can we use baremetal servers in place of VM instances in this structure?
(1) Each compute driver registers its host to Nova Scheduler. (2) Nova Scheduler selects a host for the new instance and asks it to launch the instance. (3) The compute driver on the selected host launches the VM.
A1. Register the baremetal pool as an instance host
dodai-compute takes this approach. Its driver acts as a single host which accommodates multiple baremetal servers.
(1) The compute driver registers baremetal pools to Nova Scheduler. (2) Nova Scheduler selects a pool for the new instance and asks the driver to launch the instance. (3) The driver selects a baremetal server from the pool and launches it.
A2. Register each baremetal server as a single-instance host
NTTdocomo-openstack takes this approach. Its driver acts as a proxy for baremetal servers, each of which accommodates just one instance.
(1) The compute driver registers each baremetal server as a host. (2) Nova Scheduler selects a baremetal server for the new instance and asks the driver to launch it. (3) The driver launches the selected baremetal server.
Class structure for coupling with Nova
dodai-compute1.0 and NTTdocomo-openstack have basically the same class structure in terms of coupling with Nova. The drawing shows the case of dodai-compute1.0; NTTdocomo-openstack uses BareMetalDriver in place of DodaiConnection.
Base class of different kinds of virtualization hosts.
Driver for libvirt-managed hypervisors (KVM/LXC).
Driver for baremetal management.
https://github.com/nii-cloud/dodai-compute/wiki/developer-guide
How does Nova Scheduler see baremetal servers?
dodai-compute's driver acts as a single host which accommodates multiple baremetal servers. It's like representing a baremetal pool as a single host which runs baremetal servers as its VMs.
Scheduling policy is implemented on the driver side (Nova Scheduler has no choice of hosts): the scheduler recognizes the driver as a single host, and the driver chooses the server to provision by referring to the dodai db (baremetal server information).
How does Nova Scheduler see baremetal servers?
The NTTdocomo-openstack driver acts as a proxy for all baremetal hosts. Each baremetal server is seen as an independent host which can accommodate up to one instance.
Scheduling policy is implemented as a part of Nova Scheduler. It uses extra_specs metadata (e.g. extra_specs=cpu_arch:x86_64) to distinguish baremetal hosts from hypervisor hosts.
The scheduler recognizes all baremetal hosts, which the driver registers by referring to the baremetal db (baremetal server information).
Considerations on the Nova Scheduler coupling
dodai-compute: Scheduling (server selection logic) is up to the driver. Currently there's no intelligence in the driver's scheduler: one-to-one mappings between physical servers and instance types are pre-defined. However, it enables users to choose a baremetal server explicitly.
NTTdocomo-openstack: Scheduling (server selection logic) is up to Nova Scheduler. Currently the standard Filter Scheduler is used, and instance_type_extra_specs=cpu_arch:x86_64 is used to distinguish baremetal hosts from hypervisor hosts. Users cannot explicitly choose which baremetal server to use.
This must be addressed for the ARC use case. We may use additional labels in instance_type_extra_specs, like instance_type_extra_specs=cpu_arch:x86_64,racklocation:a32
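The extra_specs-based matching described above can be sketched as follows. This is a minimal illustration in the spirit of the Filter Scheduler, not the actual Nova code; the host names and the rack_location label are made-up assumptions.

```python
# Minimal sketch of extra_specs-based host filtering (illustrative only,
# not Nova's actual implementation).

def host_passes(host_caps, extra_specs):
    """A host passes only if every key in the flavor's extra_specs
    matches the capability the host reports."""
    return all(host_caps.get(key) == value
               for key, value in extra_specs.items())

# Hypothetical host capabilities, as a scheduler might see them.
hosts = {
    "bm-node-01":  {"cpu_arch": "x86_64", "rack_location": "a32"},
    "bm-node-02":  {"cpu_arch": "x86_64", "rack_location": "b07"},
    "kvm-host-01": {"cpu_arch": "x86_64", "hypervisor_type": "kvm"},
}

# Flavor extra_specs combining the cpu_arch label used by
# NTTdocomo-openstack with an additional rack-location label for ARC.
specs = {"cpu_arch": "x86_64", "rack_location": "a32"}

candidates = [name for name, caps in hosts.items()
              if host_passes(caps, specs)]
print(candidates)  # ['bm-node-01']
```

The extra label narrows the candidate set to one specific baremetal server, which is the effect the ARC use case needs.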
OS Provisioning Mechanism
OS installation mechanism of dodai-compute1.0
The basic flow of OS installation in dodai-compute1.0:
Management IPs (IPMI) of baremetal servers are stored in the database. The driver prepares a boot image and an installation script. The actual installation work is handled by the script.
(1) Fetch the target image from Glance (a tar ball of the root filesystem contents) and prepare the installation script.
(2) Pass the installation script URL as a kernel parameter.
(3) The baremetal server PXE-boots, fetches the installation script, and runs it.
(4) The script fetches the image tar ball and expands it onto the local disk.
OS installation mechanism of NTTdocomo-openstack
The basic flow of OS installation in NTTdocomo-openstack:
Management IPs (IPMI) of baremetal servers are stored in the database. The driver prepares a boot image and an installation script. The actual installation work is handled by the script.
(1) Fetch the target image from Glance (a dd image of the root filesystem) and prepare the installation script.
(2) Embed the installation script into the init script.
(3) The baremetal server PXE-boots, exports its local disk as an iSCSI LUN, and asks the installation service to fill it.
(4) The OS installation server attaches the iSCSI LUN and fills it with the dd image.
OS installation mechanism
The basic framework is the same for both of them:
Management IPs (IPMI) of baremetal servers are stored in the database. The driver prepares a PXE boot image to start OS installation. The actual installation work is handled by scripts in the boot image.
The difference lies only in the actual installation method.
Installation script of dodai-compute1.0: Make partitions and filesystems on the local disk. Fetch the tar.gz image and unbundle it directly onto the local filesystem. Install grub on the local disk.
Installation script of NTTdocomo-openstack: Start tgtd (the iSCSI target daemon) and export the local disk as an iSCSI LUN. Ask the external installation server to install the OS into that LUN. The installation server attaches the LUN and copies the dd image into it. Grub is not installed: the baremetal server relies on PXE boot even for bootstrapping the OS provisioned on the local disk.
So...
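The two installation scripts above can be contrasted as a sketch. Rather than executing anything, each function just returns the sequence of commands its script would run; the device path, image URL, and iSCSI target name are illustrative assumptions, and the real scripts in both projects differ in detail.

```python
# Hedged sketch of the two installation methods. Returns command lists
# instead of running them; all device/image names are made up.

def dodai_tarball_install(disk="/dev/sda",
                          image_url="http://glance.example.com/img.tar.gz"):
    """dodai-compute1.0 style: partition, untar the image, install grub."""
    return [
        f"parted -s {disk} mklabel msdos mkpart primary ext4 1MiB 100%",
        f"mkfs.ext4 {disk}1",
        f"mount {disk}1 /mnt",
        f"curl {image_url} | tar xzf - -C /mnt",
        f"grub-install --root-directory=/mnt {disk}",  # boots from local disk
    ]

def nttdocomo_iscsi_install(disk="/dev/sda",
                            target="iqn.2013-02.example:bm01"):
    """NTTdocomo-openstack style: export the disk over iSCSI so the
    installation server can dd the image into it; no bootloader is
    installed, since the node keeps relying on PXE boot."""
    return [
        f"tgtadm --lld iscsi --op new --mode target --tid 1 -T {target}",
        f"tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b {disk}",
        "notify-install-server",  # placeholder for the call to the install server
    ]
```

The key structural difference is visible in the last steps: the tarball method ends with a bootloader install, while the iSCSI method hands the disk to an external server and installs no bootloader at all.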
Considerations on OS installation mechanism
We could provide a more general framework which allows multiple installation methods. Registered machine images need metadata to specify: the type of installation service, and the installation service's FQDN. We may use the properties attribute of the image.
(1) Prepare the target image in the corresponding installation service.
(2) Prepare a PXE boot image (initrd script) corresponding to the selected installation service.
(3) The script in the initrd starts the installation using the selected installation service.
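The metadata-driven dispatch proposed above might look like this. The property names (install_service_type, install_service_fqdn) and the service labels are illustrative assumptions, not an existing Glance convention.

```python
# Sketch of selecting an installation service from Glance image
# properties. Property names and service labels are hypothetical.

INSTALL_SERVICES = {
    "tarball":   "tarball install (dodai-compute1.0 style)",
    "dd-iscsi":  "dd over iSCSI (NTTdocomo-openstack style)",
    "kickstart": "Kickstart install (proposed new method)",
}

def select_install_service(props):
    """Pick the installation method and server from image properties."""
    svc_type = props.get("install_service_type")
    fqdn = props.get("install_service_fqdn")
    if svc_type not in INSTALL_SERVICES:
        raise ValueError("unknown installation service: %r" % svc_type)
    return INSTALL_SERVICES[svc_type], fqdn

method, server = select_install_service({
    "install_service_type": "kickstart",
    "install_service_fqdn": "install-b.example.com",
})
print(method)  # Kickstart install (proposed new method)
```

The driver would then fetch the matching PXE boot image/initrd and hand the FQDN to the script, so adding a new installation method only means registering one more entry.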
Considerations on OS installation mechanism
Candidates for installation services:
Existing ones such as those in dodai-compute and NTTdocomo-openstack.
We'd like to add a Kickstart method, too: the image contains a ks.cfg file instead of an actual binary image, and the installation service installs the baremetal server using Kickstart. Kickstart gives more flexibility and ease of use for customizing image contents.
Network Virtualization
Network configuration of dodai-compute1.0
L2 separation is done by VLAN. Each lab has its own fixed VLAN ID assigned on SINET4 (VLAN trunking on the service network switches).
dodai-compute asks the OpenFlow controller to set up a port/VLAN mapping. The VLAN is explicitly specified by the user. Mappings between baremetal NICs and the associated switch ports are stored in the database.
OS-side configuration is done by the local agent: the service IP and bonding configuration are set by the local agent based on requests from dodai-compute. NIC bonding is also configured for redundancy (NIC bonding is mandatory in ARC).
Each baremetal server has a fixed management IP on the management network, used for PXE boot and agent operations.
Network configuration of NTTdocomo-openstack
The virtual network is managed by the Quantum API and the NEC OpenFlow plug-in. L2 separation is done by port-based packet separation using flowtable entries. Mappings between baremetal NICs and the associated switch ports are stored in the database. VLAN-based separation needs to be added for the ARC use case.
When a user specifies two or more NICs, the driver chooses unused NICs from the database and sets up the flowtable entries for the associated ports. A NIC bonding mechanism needs to be added for the ARC use case.
Each baremetal server has a fixed management IP on the management network, used for PXE boot.
How will the Quantum API be used for the ARC use case?
Using the Quantum API and plugins is the preferable choice for ARC, but we need some modifications/extensions, too.
VLAN-based separation needs to be added for the ARC use case. Our plan is to add a BareMetal VLAN plugin which configures port/VLAN mappings using flowtable entries, or directly configures port VLANs on Cisco switches. This enables not only SINET4 VLAN connections but also interconnection with VM instances using the OVS plugin (via VLAN).
A NIC bonding mechanism needs to be added for the ARC use case. As all NICs of baremetal servers are registered in the database, we may add redundancy information there (e.g. NIC-A should be paired with NIC-B for bonding). We may still need a local agent to do the actual bonding configuration.
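The per-NIC redundancy information suggested above could be modeled as in the sketch below. The schema (a bond_peer column referencing the partner NIC) and all identifiers are hypothetical; the real database layout would be whatever the baremetal driver already uses.

```python
# Sketch of deriving bonding pairs from per-NIC redundancy information
# kept in the baremetal database. Schema and names are hypothetical.

nics = [
    {"id": "nic-a", "mac": "52:54:00:00:00:01", "bond_peer": "nic-b"},
    {"id": "nic-b", "mac": "52:54:00:00:00:02", "bond_peer": "nic-a"},
    {"id": "nic-c", "mac": "52:54:00:00:00:03", "bond_peer": None},
]

def bonding_pairs(nics):
    """Return each bonding pair exactly once, as a sorted (id, id) tuple,
    skipping NICs with no registered peer."""
    by_id = {n["id"]: n for n in nics}
    pairs = set()
    for n in nics:
        peer = n["bond_peer"]
        if peer and peer in by_id:
            pairs.add(tuple(sorted((n["id"], peer))))
    return sorted(pairs)

print(bonding_pairs(nics))  # [('nic-a', 'nic-b')]
```

The local agent would then translate each pair into the actual bonding configuration on the provisioned OS, while unpaired NICs are configured as plain interfaces.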
Summary
Summary
Target areas for the future extension:
1. Scheduler extension for grouping of baremetal servers, allowing users to specify the baremetal servers to be used.
2. Multiple OS provisioning methods, allowing multiple types of OS images such as: dd image (NTTdocomo-openstack style), tar ball (dodai-compute style), Kickstart installation (new feature).
3. A baremetal Quantum plugin for VLAN interconnection, allowing interconnection with existing VLAN networks and NIC bonding configuration.
As the NTTdocomo-openstack branch has been merged into the upstream, the future extension will be done directly on the upstream.
Thank You! Etsuji Nakai Twitter @enakai00