How OpenStack is implemented at the GMO Public Cloud service
GMO Internet, Inc. Technical Evangelist: Hironobu Saitoh
GMO Internet, Inc. Architect: Naoto Gohko
Japan's Leading All-in Provider of Internet Services: http://gmo.jp/en
Developing OpenStack-related tools
- Vagrant provider for ConoHa: https://github.com/hironobu-s/vagrant-conoha
- Docker Machine: a Golang tool that creates Docker hosts. Fixed a problem and sent a pull request.
Developing OpenStack-related tools
- WordPress plugin that saves media files to Swift (Object Storage): https://wordpress.org/plugins/conoha-object-sync/
- Golang CLI tool that handles ConoHa-specific APIs: https://github.com/hironobu-s/conoha-iso
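For illustration, a minimal sketch of what such a media-sync tool does against the Swift API, here using the modern openstacksdk in Python (the plugin itself is PHP and the CLI is Golang; the cloud name, container name, and file path are hypothetical):

    import openstack

    # Authenticate against Keystone via a clouds.yaml entry (name is hypothetical)
    conn = openstack.connect(cloud='conoha')

    # Ensure the target container exists, then upload one media file
    conn.object_store.create_container(name='wordpress-media')
    with open('2015/10/photo.jpg', 'rb') as f:
        conn.object_store.upload_object(
            container='wordpress-media',
            name='2015/10/photo.jpg',
            data=f.read())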
OpenStack service: Onamae.com VPS (Diablo)
- Service XaaS model: VPS (KVM, libvirt)
- Network: 1Gbps
- Network model: Flat-VLAN (Nova Network), IPv4 only
- Public API: none (web panel only)
- Glance: none
- Cinder: none
- Object Storage: none
OpenStack service: Onamae.com VPS (Diablo)
- Nova Network: very simple (Linux bridge)
- Flat networking is scalable, but it offers no added value, such as letting customers freely configure their own networks.
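As a rough sketch, a Diablo-era nova-network FlatDHCP setup needed little more than a handful of flags in the nova flagfile (interface, bridge, and address range values here are hypothetical):

    # /etc/nova/nova.conf (Diablo-style flagfile)
    --network_manager=nova.network.manager.FlatDHCPManager
    --flat_network_bridge=br100
    --flat_interface=eth1
    --fixed_range=10.0.0.0/16

Every host shares one Linux bridge on one flat L2 segment, which is why it scales simply but cannot offer per-tenant network topologies.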
OpenStack service: ConoHa (Grizzly)
- Service XaaS model: VPS + private networks (KVM + libvirt)
- Network: 10Gbps wired (10GBASE-T)
- Network model: Flat-VLAN + Quantum ovs-gre overlay, IPv6/IPv4 dual stack
- Public API: none (web panel only)
- Glance: none
- Cinder: none
- Object Storage: Swift (after Havana)
OpenStack service: ConoHa (Grizzly)
- Quantum network: used the initial version of the Open vSwitch full-mesh GRE-VLAN overlay network.
- But as the scale grew, GRE mesh-tunnel traffic concentrated on specific nodes (combined with undercloud L2 network problems; broadcast storms?).
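The Grizzly-era OVS plugin built this full mesh from per-host tunnel settings along these lines (the local_ip is hypothetical); since every node peers with every other node, tunnel count and flooded traffic grow with the square of the cluster size:

    # /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (Grizzly)
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    enable_tunneling = True
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 192.0.2.11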
OpenStack service: GMO AppsCloud (Havana)
- Service XaaS model: KVM compute + private VLAN networks + Cinder + Swift
- Network: 10Gbps wired (10GBASE SFP+)
- Network model: IPv4 Flat-VLAN + Neutron LinuxBridge (not ML2) + original Brocade ADX L4 LBaaS driver
- Public API: provided (with Ceilometer)
- Glance: provided (GlusterFS)
- Cinder: HP 3PAR (original active-active multipath) + NetApp
- Object Storage: Swift cluster
- Bare-metal compute: modified cobbler bare-metal deploy driver
GMO AppsCloud (Havana) public API
- Endpoint: L7 reverse proxy
- Web panel (httpd, PHP) and API wrapper proxy (httpd, PHP; framework: FuelPHP) for the customer system API
- Behind the proxy: Havana Swift proxy, Ceilometer, Keystone, Nova, Neutron, Glance, and Cinder APIs
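A minimal sketch of the L7 reverse-proxy idea in Apache httpd terms (the URL paths and upstream hostnames are hypothetical; the slides do not show the real endpoint mapping):

    # httpd reverse proxy fronting the Havana API services
    ProxyPass        /compute/  http://nova-api.internal:8774/
    ProxyPassReverse /compute/  http://nova-api.internal:8774/
    ProxyPass        /network/  http://neutron-api.internal:9696/
    ProxyPassReverse /network/  http://neutron-api.internal:9696/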
Havana: bare-metal compute cobbler driver
- Bare-metal network: bonded NICs (sketch below)
- Tagged VLANs (allowed VLANs) + DHCP on the native VLAN
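In iproute2 terms, the bare-metal node networking sketched above looks roughly like this (interface names and the VLAN ID are hypothetical); DHCP runs untagged on the native VLAN while tagged VLANs carry tenant traffic:

    # Bond two NICs, then add a tagged VLAN interface on top of the bond
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down; ip link set eth0 master bond0
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set bond0 up
    ip link add link bond0 name bond0.100 type vlan id 100
    ip link set bond0.100 up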
Swift cluster (Havana to Juno upgrade)
- SSD storage: container/account servers in every zone
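Putting account and container databases on SSD comes down to which devices go into which ring; a minimal sketch with swift-ring-builder (the part power, replica count, IP, and device names are hypothetical):

    # Account/container rings reference only SSD devices; the object ring uses HDDs
    swift-ring-builder account.builder create 18 3 1
    swift-ring-builder account.builder add r1z1-10.0.0.1:6002/ssd0 100
    swift-ring-builder account.builder rebalance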
Havana: bare-metal compute, Cisco IOS on the southbound side (switch configuration)
OpenStack Juno: two service clusters released
- Mikumo = 美雲 = beautiful cloud
- Mikumo ConoHa and Mikumo Anzu
- New Juno region released: 10/26/2015
OpenStack Juno: two service clusters released
ConoHa (Juno):
- Service model: public cloud by KVM
- Network: 10Gbps wired (10GBASE SFP+)
- Network model: Flat-VLAN + Neutron ML2 ovs-vxlan overlay + ML2 LinuxBridge (SaaS only), IPv6/IPv4 dual stack
- LBaaS: LVS-DSR (original)
- Public API: provided (v2 domain)
- Compute node: all SSD for booting OS, without Cinder boot
- Glance: provided
- Cinder: SSD NexentaStor ZFS (SDS)
- Swift: shared Juno cluster
- Cobbler deploy on the undercloud, Ansible configuration
- Original SaaS services with Keystone auth: email, web, CPanel and WordPress
GMO AppsCloud (Juno):
- Service model: public cloud by KVM
- Network: 10Gbps wired (10GBASE SFP+)
- Network model: L4-LB-NAT + Neutron ML2 LinuxBridge VLAN, IPv4 only
- LBaaS: Brocade ADX L4-NAT-LB (original)
- Public API: provided
- Compute node: flash-cached or SSD
- Glance: provided (NetApp offload)
- Cinder: NetApp storage
- Swift: shared Juno cluster
- Ironic on the undercloud; compute servers deployed with Ansible configuration
- Ironic bare-metal compute; Cisco Nexus for tagged VLANs; ioMemory configuration
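For the ConoHa side, a Juno ML2 ovs-vxlan overlay corresponds to configuration along these lines (the VNI range and local_ip are hypothetical):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (Juno)
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,linuxbridge

    [ml2_type_vxlan]
    vni_ranges = 1000:2000

    [ovs]
    local_ip = 192.0.2.21

    [agent]
    tunnel_types = vxlan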
Compute and Cinder (ZFS): SSD
- Toshiba enterprise SSD: the balance of cost and performance we wanted; excellent IOPS performance, low latency
- Benefits of local SSD storage on compute nodes:
  - Faster storage than Cinder boot
  - Easy online live snapshots of VM instances
  - Fast VM deployment
- ConoHa: the compute option was modified to take online live snapshots of VM instances.
http://toshiba.semicon-storage.com/jp/product/storageproducts/publicity/storage-20150914.html
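The online live snapshot maps to the standard Nova image-create call; with the unified CLI it looks like this (server and snapshot names are hypothetical):

    # Create an image from a running instance without stopping it
    openstack server image create --name web01-snap-20151026 web01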
NexentaStor ZFS Cinder: ConoHa cloud (Juno)
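A minimal sketch of the Juno-era NexentaStor iSCSI backend wiring in cinder.conf (the host address, volume name, and credentials are hypothetical):

    # /etc/cinder/cinder.conf
    [DEFAULT]
    volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
    nexenta_host = 192.0.2.30
    nexenta_volume = cinder
    nexenta_user = admin
    nexenta_password = secret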
Designate DNS: ConoHa cloud (Juno)
- Components of the DNS and GSLB (original) back-end services
- Identity/endpoint: OpenStack Keystone
- Designate components: API, Central, RabbitMQ, Storage DB, and DB-backed backend DNS servers queried by DNS clients
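Client-side, zone management goes through the Designate API; a sketch with the modern openstacksdk (the cloud name, zone, and addresses are hypothetical, and the Juno-era deployment predates the current v2 API, so details differed):

    import openstack

    conn = openstack.connect(cloud='conoha')  # cloud name is hypothetical

    # Create a zone, then an A record pointing at a VM
    zone = conn.dns.create_zone(name='example.com.', email='admin@example.com')
    conn.dns.create_recordset(zone, name='www.example.com.',
                              type='A', records=['203.0.113.10'])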
NetApp storage: GMO AppsCloud (Juno)
- If the same clustered Data ONTAP NetApp system backs both Glance and Cinder, copies between the OpenStack services can be offloaded to the NetApp side (cinder.conf sketch below).
- Create volume from Glance image (requires that the image needs no format conversion, e.g. qcow2 to raw).
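The Juno NetApp NFS driver enables this with the copy-offload tool; a cinder.conf sketch (hostnames, credentials, and the tool path are hypothetical):

    # /etc/cinder/cinder.conf (Juno, clustered Data ONTAP over NFS)
    [DEFAULT]
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_server_hostname = 192.0.2.40
    netapp_login = admin
    netapp_password = secret
    netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64

Keeping images in raw format matters here: if a format conversion is needed, the data path falls back through the host and the offload is lost.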
Ironic with undercloud: GMO AppsCloud (Juno)
- For compute server deployment: Kilo Ironic in an all-in-one undercloud
- Compute server: 10G boot; cloud-init: network; compute setup: Ansible (sketch below)
- The undercloud Ironic (Kilo) uses a network and bare-metal DHCP range separate from the service-facing bare-metal compute Ironic (Kilo).
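After Ironic lays down the image and cloud-init brings up the network, Ansible takes over host configuration; a minimal hypothetical playbook sketch (the role names are invented for illustration):

    # site.yml: configure freshly deployed compute nodes (roles are hypothetical)
    - hosts: compute
      become: true
      roles:
        - common
        - openvswitch
        - nova_compute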
Ironic (Kilo) bare metal: GMO AppsCloud (Juno)
- Boot bare-metal instance: bare-metal server (with SanDisk Fusion ioMemory)
- 1G x4 bonding + tagged VLAN; cloud-init: network + LLDP
- Network: Cisco Nexus, allowed-VLAN security
- Ironic Kilo + Juno: works fine; Ironic Python driver, whole-image write
Finally
- GMO AppsCloud on Juno OpenStack was released on 10/27/2015.
- SanDisk Fusion ioMemory can also be deployed by Kilo Ironic on Juno OpenStack.
- Compute servers were deployed by Kilo Ironic with an all-in-one undercloud OpenStack; compute server configuration was deployed by Ansible.
- Cinder and Glance are provided with the NetApp copy-offload storage mechanism.
- LBaaS is an original Brocade ADX NAT-mode driver.
On the other hand:
- Juno OpenStack ConoHa was released on 05/18/2015.
- Designate DNS and a GSLB service were started on ConoHa.
- Cinder storage is SDS, provided by NexentaStor ZFS as a single volume type.
- LBaaS is an original LVS-DSR driver.