docs.rackspace.com/api
Rackspace Cloud Big Data Developer API v1.0 (2015-04-23)

2015 Rackspace US, Inc.

This guide is intended for software developers interested in developing applications using the Rackspace Cloud Big Data Application Programming Interface (API).

The document is for informational purposes only and is provided AS IS. RACKSPACE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS DOCUMENT AND RESERVES THE RIGHT TO MAKE CHANGES TO SPECIFICATIONS AND PRODUCT/SERVICES DESCRIPTION AT ANY TIME WITHOUT NOTICE. RACKSPACE SERVICES OFFERINGS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS MUST TAKE FULL RESPONSIBILITY FOR APPLICATION OF ANY SERVICES MENTIONED HEREIN. EXCEPT AS SET FORTH IN RACKSPACE GENERAL TERMS AND CONDITIONS AND/OR CLOUD TERMS OF SERVICE, RACKSPACE ASSUMES NO LIABILITY WHATSOEVER, AND DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO ITS SERVICES INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT.

Except as expressly provided in any written license agreement from Rackspace, the furnishing of this document does not give you any license to patents, trademarks, copyrights, or other intellectual property. Rackspace, Rackspace logo and Fanatical Support are registered service marks of Rackspace US, Inc. All other product names and trademarks used in this document are for identification purposes only and are property of their respective owners.

Table of Contents

1. Overview
   1.1. Intended audience
   1.2. Document change history
   1.3. Prerequisites
   1.4. API contract changes
   1.5. Pricing and service level
   1.6. Additional resources
2. Concepts
3. General API information
   3.1. Authentication
   3.2. Role Based Access Control
        3.2.1. Assigning roles to account users
        3.2.2. Roles available for Cloud Big Data
        3.2.3. Resolving conflicts between RBAC multiproduct vs. custom (product-specific) roles
        3.2.4. RBAC permissions cross-reference to Cloud Big Data API operations
   3.3. Service access endpoints
   3.4. Request and response types
   3.5. Faults
   3.6. Limits
        3.6.1. Rate limits
        3.6.2. Absolute limits
   3.7. Date and time format
   3.8. Pagination
   3.9. Data node instances
   3.10. Cluster status
   3.11. Node status
4. API operations
   4.1. Profiles
        4.1.1. Create or update profile
        4.1.2. View profile information
   4.2. Clusters
        4.2.1. Create cluster
        4.2.2. List all clusters
        4.2.3. Show cluster details
        4.2.4. Delete cluster
        4.2.5. Resize cluster
   4.3. Nodes
        4.3.1. List cluster nodes
        4.3.2. Show node details
   4.4. Flavors
        4.4.1. List available flavors
        4.4.2. Show flavor details
        4.4.3. List supported cluster types for a flavor
   4.5. Types
        4.5.1. List cluster types
        4.5.2. Show cluster type details
        4.5.3. List supported flavors for a type

   4.6. Resource limits
        4.6.1. Show resource limits
Glossary

List of Figures

3.1. Cluster states with valid operations

List of Tables

3.1. Cloud Big Data product roles and capabilities
3.2. Multiproduct (global) roles and permissions
3.3. Regionalized service endpoints
3.4. Response formats
3.5. Default rate limits
3.6. Absolute limits
3.7. Explanation of date and time format codes
3.8. Data node instances

List of Examples

3.1. User name and API key
3.2. User name and password
3.3. Authentication request with multi-factor authentication credentials
3.4. curl get profile request: JSON
3.5. Cloud Big Data service date and time format
3.6. List nodes paged request: JSON
3.7. List nodes paged response: JSON
4.1. Create or update profile: JSON request
4.2. Create or update profile: JSON response
4.3. View profile information: JSON response
4.4. Create cluster: JSON request
4.5. Create cluster: JSON response
4.6. List all clusters: JSON response
4.7. Show cluster details: JSON request
4.8. Show cluster details: JSON response
4.9. Delete cluster: JSON request
4.10. Delete cluster: JSON response
4.11. Resize cluster: JSON request
4.12. Resize cluster: JSON response
4.13. List cluster nodes: JSON request
4.14. List cluster nodes: JSON response
4.15. Show node details: JSON request
4.16. Show node details: JSON response
4.17. List available flavors: JSON response
4.18. Show flavor details: JSON response
4.19. List supported cluster types for a flavor: JSON response
4.20. List cluster types: JSON response
4.21. Show cluster type details: JSON response
4.22. List supported flavors for a type: JSON response
4.23. Show resource limits: JSON response

1. Overview

Rackspace Cloud Big Data is an on-demand Apache Hadoop service on the Rackspace open cloud. The service supports a RESTful API and alleviates the pain associated with deploying, managing, and scaling Hadoop clusters.

Cloud Big Data is just as flexible and feature-rich as Hadoop. With Cloud Big Data, you benefit from on-demand servers, utility-based pricing, and access to the full set of Hadoop features and APIs. However, you do not have to worry about provisioning, growing, or maintaining your Hadoop infrastructure. The Cloud Big Data service uses an environment that is specifically optimized for Hadoop, which ensures that your jobs run efficiently and reliably. Note that you are still responsible for developing, troubleshooting, and deploying your applications.

The primary use cases for Cloud Big Data are as follows:

- Create on-demand infrastructure for applications in production where physical servers would be too costly and time-consuming to configure and maintain.
- Develop, test, and pilot data analysis applications.

Cloud Big Data provides the following benefits:

- Create or resize Hadoop clusters in minutes and pay only for what you use.
- Access the Hortonworks Data Platform (HDP), an enterprise-ready distribution that is 100 percent Apache open source.
- Provision and manage Hadoop using an easy-to-use Control Panel and a RESTful API.
- Seamlessly access data in Cloud Files containers.
- Gain interoperability with any third-party software tool that supports HDP.
- Fanatical Support on a 24x7x365 basis via chat, phone, or ticket.

1.1. Intended audience

This guide is intended to assist software developers who want to develop applications by using the Cloud Big Data API. It assumes that the reader has a general understanding of Big Data concepts and is familiar with the following technologies:

- Hadoop, Apache Hadoop Distributed File System (HDFS), and MapReduce
- Hortonworks Data Platform (HDP)
- RESTful web services
- HTTP/1.1 conventions
- JSON serialization formats

1.2. Document change history

This version of the guide replaces and obsoletes all earlier versions. The most recent changes are described in the following table:

Revision Date: Summary of Changes

April 23, 2015: Corrected links in Assigning roles to account users.

March 4, 2015: Removed the London endpoint for the Rackspace Cloud Identity service. Rackspace now has one global endpoint for authentication. See Section 3.1, Authentication.

January 15, 2015: Updated Section 3.1, Authentication, with information about using multi-factor authentication for added security when a user authenticates with username and password credentials.

October 15, 2014: Added the IAD region to Section 3.3, Service access endpoints.

August 19, 2014: Added the test terms for the Preview release of Spark to Section 1.5, Pricing and service level. Added the Spark cluster type to the list cluster types example response in Section 4.5, Types.

July 22, 2014: Added the onmetal-io flavor to the table in Section 3.9, Data node instances, and to the example responses in Section 4.4, Flavors. Added the Medium Hadoop Instance and Large Hadoop Instance flavors to example responses in Section 4.4, Flavors. Added a link to guidance on choosing a regionalized endpoint in Section 3.3, Service access endpoints.

June 2, 2014: Initial General Availability (GA) release, v1.

April 9, 2014: Updated Section 3.6.2, Absolute limits, to show the current default values.

February 14, 2014: Added Section 3.11, Node status.

February 3, 2014: In the API operations chapter, updated the description of the operation to create or update a profile and the description of the operation to create a cluster, and updated the nodes operations to include a cross-reference to Section 3.11, Node status, and valid values for postinitscriptstatus. Added Role Based Access Control (RBAC). For more information, see Section 3.2, Role Based Access Control.

January 14, 2014: Initial Limited Availability (LA) release, v1.

October 29, 2013: Corrected the URI for the operation to get node details in Section 4.3, Nodes.

September 16, 2013: Initial Early Access (EA) release, v1.

1.3. Prerequisites

To work with the Cloud Big Data API, you must have the following prerequisites:

- A Rackspace Cloud account
- A Rackspace Cloud username and password, as specified during registration

The following OS and Hadoop distributions are supported:

- CentOS 6.5
- HDP versions 2.1 and 1.3

By using the Cloud Big Data API, you understand and agree to the following limitations and conditions:

Cloud Big Data includes a Swift integration feature wherein Hadoop, MapReduce, or Apache Pig jobs can directly reference Cloud Files containers.

The following resource limits apply:

- Up to 3 data nodes
- Up to 6 virtual CPUs
- Up to 23040 GB of RAM
- Up to 4500 GB of disk space

1.4. API contract changes

The API contract is not locked and might change. If the contract changes, Rackspace will notify customers in release notes.

1.5. Pricing and service level

Cloud Big Data is part of the Rackspace Cloud, and your use through the API will be billed according to the pricing schedule at http://www.rackspace.com/cloud/big-data/pricing/. The Service Level Agreement (SLA) for Cloud Big Data is available at http://www.rackspace.com/cloud/legal/sla. The Preview release of the Spark cluster type, which is included in Cloud Big Data, is subject to the Rackspace test terms at Legal Information - Test Terms.

1.6. Additional resources

You can download the most current versions of the API-related documents from docs.rackspace.com.

For information about Rackspace Cloud products, go to www.rackspace.com/cloud. This site also offers links to official Rackspace support channels, including knowledge base articles, forums, phone, chat, and email. Email all support questions to <cbdteam@rackspace.com>.

For information about getting started using Cloud Big Data, refer to Getting Started with Rackspace Cloud Big Data at docs.rackspace.com.

You can follow Rackspace updates and announcements via Twitter at www.twitter.com/rackspace.

This API uses standard HTTP 1.1 response codes as documented at www.w3.org/protocols/rfc2616/rfc2616-sec10.html.

2. Concepts

To use the Cloud Big Data API effectively, you should understand the terminology defined in the Glossary at the end of the book.

3. General API information

The Cloud Big Data API is implemented using a RESTful web service interface. Cloud Big Data shares a common token-based authentication system with other products in the Rackspace Cloud suite. This system enables seamless access between Rackspace products and services.

3.1. Authentication

All requests to authenticate against and operate the service are performed using SSL over HTTP (HTTPS) on TCP port 443. Each REST request against the Cloud Big Data service requires the inclusion of a specific authorization token, supplied in the X-Auth-Token HTTP header. Customers obtain this token by first using the Rackspace Cloud Identity service and supplying a valid user name and API access key. To authenticate, you submit a POST /v2.0/tokens request, presenting valid Rackspace customer credentials in the message body to a Rackspace authentication endpoint.

1. GET YOUR CREDENTIALS

You can use either of the following sets of credentials:

- Your user name and password
- Your user name and API key

Your user name and password are the ones that you use to log in to the Rackspace Cloud Control Panel. After you are logged in, you can use the Rackspace Cloud Control Panel to obtain your API key. US and UK based accounts use the Cloud Control Panel at https://mycloud.rackspace.com/.

Note: If you authenticate with username and password credentials, you can use multi-factor authentication to add an additional level of account security. This feature is not implemented for username and API key credentials. For more information, see Multi-factor authentication in the Cloud Identity Client Developer Guide.

2. USE THE GLOBAL AUTHENTICATION ENDPOINT

Use the following endpoint for authentication using the Cloud Identity service:

   https://identity.api.rackspacecloud.com/v2.0/

3. SEND YOUR CREDENTIALS TO YOUR AUTHENTICATION ENDPOINT

If you know your credentials and your authentication endpoint, and you can issue a POST /v2.0/tokens request in an API call, you have all the basic information that you need to use the Rackspace Cloud Identity service. You can use curl to perform the authentication process in two steps: get a token, and send the token to a service.

1. Get an authentication token by providing your user name and either your API key or your password. Following are examples of both approaches:

Example 3.1. User name and API key

   curl -X POST https://auth.api.rackspacecloud.com/v2.0/tokens \
     -d '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "yourusername", "apiKey": "yourapikey"}}}' \
     -H "Content-Type: application/json"

Example 3.2. User name and password

   curl -X POST https://auth.api.rackspacecloud.com/v2.0/tokens \
     -d '{"auth": {"passwordCredentials": {"username": "yourusername", "password": "yourpassword"}}}' \
     -H "Content-Type: application/json"

2. Review the authentication response. Successful authentication returns a token that you can use as evidence that your identity has already been authenticated, along with a service catalog, which lists the endpoints that you can use for Rackspace Cloud services. To use the token, pass it to other services as an X-Auth-Token header.

If the Identity service returns a 401 message with a request for additional credentials, your account requires multi-factor authentication. To complete the authentication process, submit a second POST tokens request with these multi-factor authentication credentials:

- The session ID value returned in the WWW-Authenticate: OS-MF sessionid header parameter that is included in the response to the initial authentication request.
- The passcode from the mobile phone associated with your user account.

Example 3.3. Authentication request with multi-factor authentication credentials

   curl https://identity.api.rackspacecloud.com/v2.0/tokens \
     -X POST \
     -d '{"auth": {"RAX-AUTH:passcodeCredentials": {"passcode": "1411594"}}}' \
     -H "X-SessionId: $SESSION_ID" \
     -H "Content-Type: application/json" --verbose | python -m json.tool

3. Use the authentication token to send a GET request to a service that you want to use. Example 3.4 shows passing an authentication token to the Cloud Big Data service by using the Cloud Big Data service catalog endpoint that was returned along with the token.
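Step 2 above notes that the service catalog lists the endpoints you can use, and that the service type attribute is the stable key for finding a service. The following Python sketch shows one way to pull the token and a regional endpoint out of a successful authentication response. The sample below is a trimmed, hypothetical response shape: the service name, the type string rax:bigdata, the token value, and the account number are illustrative assumptions, not values from a live response.

```python
import json

# Trimmed, hypothetical sample of an Identity v2.0 authentication response.
sample = json.loads("""
{
  "access": {
    "token": {"id": "yourauthtoken", "expires": "2015-04-24T12:00:00Z"},
    "serviceCatalog": [
      {"name": "cloudBigData",
       "type": "rax:bigdata",
       "endpoints": [
         {"region": "DFW",
          "publicURL": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234"}
       ]}
    ]
  }
}
""")

def find_endpoint(auth_response, service_type, region):
    """Select a publicURL by service *type* (the stable key) and region."""
    for svc in auth_response["access"]["serviceCatalog"]:
        if svc["type"] == service_type:
            for ep in svc["endpoints"]:
                if ep.get("region") == region:
                    return ep["publicURL"]
    return None

token = sample["access"]["token"]["id"]        # value for the X-Auth-Token header
endpoint = find_endpoint(sample, "rax:bigdata", "DFW")
```

Subsequent requests would send token as the X-Auth-Token header against endpoint.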

Example 3.4. curl get profile request: JSON

   curl -i -X GET https://dfw.bigdata.api.rackspacecloud.com/v1.0/yourAccountID/profile \
     -H "X-Auth-Token: yourauthtoken" \
     -H "Accept: application/json" \
     -H "Content-Type: application/json"

Authentication tokens are typically valid for 24 hours. Applications should be designed to re-authenticate after receiving a 401 (Unauthorized) response from a service endpoint.

Important: If you are programmatically parsing an authentication response, be aware that service names are stable for the life of the particular service and can be used as keys. You should also be aware that a user's service catalog can include multiple uniquely named services that perform similar functions. In Cloud Identity 2.0, the service type attribute can be used as a key by which to recognize similar services.

3.2. Role Based Access Control

Role Based Access Control (RBAC) restricts access to the capabilities of Rackspace Cloud services, including the Cloud Big Data API, to authorized users only. RBAC enables Rackspace Cloud customers to specify which account users of their Cloud account have access to which Cloud Big Data API service capabilities, based on roles defined by Rackspace (see Table 3.1, Cloud Big Data product roles and capabilities). The permissions to perform certain operations in the Cloud Big Data API (create, read, update, and delete) are assigned to specific roles, and these roles can be assigned by the Cloud account admin user to account users of the account.

3.2.1. Assigning roles to account users

The account owner (identity:user-admin) can create account users on the account and then assign roles to those users. The roles grant the account users specific permissions for accessing the capabilities of the Cloud Big Data service. Each account has only one account owner, and that role is assigned by default to any Rackspace Cloud account when the account is created.

See the Cloud Identity Client Developer Guide for information about how to perform the following tasks:

- Add account users
- Assign roles to account users
- Delete roles from account users

Note: The account admin user (identity:user-admin) role cannot hold any additional roles because it already has full access to all capabilities by default.

3.2.2. Roles available for Cloud Big Data

Three roles (admin, creator, and observer) can be used to access the Cloud Big Data API specifically. The following table describes these roles and their permissions.

Table 3.1. Cloud Big Data product roles and capabilities

- bigdata:admin: Provides Create, Read, Update, and Delete permissions in Cloud Big Data, where access is granted.
- bigdata:creator: Provides Create, Read, and Update permissions in Cloud Big Data, where access is granted.
- bigdata:observer: Provides Read permission in Cloud Big Data, where access is granted.

Additionally, two multiproduct roles apply to all products. Users with multiproduct roles inherit access to future products when those products become RBAC-enabled. The following table describes these roles and their permissions.

Table 3.2. Multiproduct (global) roles and permissions

- admin: Provides Create, Read, Update, and Delete permissions in all products, where access is granted.
- observer: Provides Read permission in all products, where access is granted.

3.2.3. Resolving conflicts between RBAC multiproduct vs. custom (product-specific) roles

The account owner can set roles for both multiproduct and Cloud Big Data scope, and it is important to understand how any potential conflicts among these roles are resolved. When two roles appear to conflict, the role that provides the more extensive permissions takes precedence. Therefore, admin roles take precedence over observer and creator roles, because admin roles provide more permissions.

The following examples show how potential conflicts between user roles in the Control Panel are resolved:

- A user assigned the multiproduct observer and Cloud Big Data admin roles appears in the Control Panel to have only the multiproduct observer role. The user can nevertheless perform product admin functions for Cloud Big Data only, and has the observer role for the rest of the products.
- A user assigned the multiproduct admin and Cloud Big Data observer roles appears in the Control Panel to have only the multiproduct admin role. The user can perform product admin functions for all of the products; the Cloud Big Data observer role is ignored.

3.2.4. RBAC permissions cross-reference to Cloud Big Data API operations

API operations for Cloud Big Data may or may not be available to all roles. To see which roles are permitted to invoke which operations, review the Knowledge Center article.
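The precedence rule above amounts to taking the union of the permissions granted by every assigned role, so the most permissive role wins. A hypothetical helper illustrating this for Cloud Big Data (the role-to-permission map is an assumption drawn from Tables 3.1 and 3.2; this is a sketch, not part of the API):

```python
# Permissions granted per role, per Tables 3.1 and 3.2 (sketch).
PERMS = {
    "admin":            {"create", "read", "update", "delete"},  # multiproduct
    "observer":         {"read"},                                # multiproduct
    "bigdata:admin":    {"create", "read", "update", "delete"},
    "bigdata:creator":  {"create", "read", "update"},
    "bigdata:observer": {"read"},
}

def effective_bigdata_perms(roles):
    """Union of all permissions granted by the assigned roles for Cloud Big Data.

    Because conflicts resolve toward the more extensive role, the union of
    the per-role permission sets gives the effective permissions.
    """
    perms = set()
    for role in roles:
        perms |= PERMS.get(role, set())
    return perms
```

For example, the multiproduct observer plus Cloud Big Data admin combination from the first example above yields full create/read/update/delete access to Cloud Big Data.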

3.3. Service access endpoints

The Cloud Big Data service is a regionalized service. The user of the service is therefore responsible for appropriate replication, caching, and overall maintenance of Cloud Big Data data across regional boundaries to other Cloud Servers. The endpoints to use for your Cloud Big Data API calls are summarized in the table below. To help you decide which regionalized endpoint to use, read the Knowledge Center article about special considerations for choosing a data center at About Regions.

Table 3.3. Regionalized service endpoints

- Chicago (ORD): https://ord.bigdata.api.rackspacecloud.com/v1.0/youraccountid/
- Dallas/Ft. Worth (DFW): https://dfw.bigdata.api.rackspacecloud.com/v1.0/youraccountid/
- Northern Virginia (IAD): https://iad.bigdata.api.rackspacecloud.com/v1.0/youraccountid/
- London (LON): https://lon.bigdata.api.rackspacecloud.com/v1.0/youraccountid/

Replace the youraccountid placeholder with your actual account number, which is returned as part of the authentication service response, after the final '/' in the publicURL field.

3.4. Request and response types

The Cloud Big Data API supports JSON data serialization formats. The request format is specified by using the Content-Type header and is required for operations that have a request body. The response format can be specified in requests either by using the Accept header or by adding a .json extension to the request URI. Note that JSON is the default format for data serialization.

Table 3.4. Response formats

- Format: JSON; Accept header: application/json; Query extension: .json; Default: Yes

3.5. Faults

When an error occurs, the Cloud Big Data service returns a fault object that contains an HTTP error response code that denotes the type of error. In the body of the response, the system returns additional information about the fault. The following table lists possible fault types with their associated error codes and descriptions.

- badrequest (400): The user-provided request contained an error.
- unauthorized (401): The supplied token is not authorized to access the resources. The token is either expired or invalid.
- forbidden (403): Access to the requested resource was denied.
- itemnotfound (404): The back-end services did not find anything matching the request URI.
- conflictingrequest (409): The requested resource cannot currently be operated on.
- overlimit (413): The resource quota was exceeded.
- lavafault (500): An unknown exception occurred.
- serviceunavailable (503): The service is currently unavailable.

3.6. Limits

All accounts, by default, have a preconfigured set of thresholds (or limits) to manage capacity and prevent abuse of the system. The system recognizes rate limits and absolute limits. Rate limits are thresholds that are reset after a certain amount of time passes. Absolute limits are fixed.

3.6.1. Rate limits

Rate limits are specified in both a human-readable wildcard URI and a machine-processable regular expression. The regular expression boundary matcher '^' takes effect after the root URI path. For example, the regular expression ^/v1.0/clusters would match the /v1.0/clusters portion of the following URI: https://dfw.bigdata.api.rackspacecloud.com/v1.0/clusters.

The following table specifies the default rate limits for all GET, POST, PUT, and DELETE API operations for clusters.

Table 3.5. Default rate limits

- GET changes-since, URI */clusters/*, regex ^/v\d+\.\d+/clusters.*: 3 per minute
- POST, URI */clusters/*, regex ^/v\d+\.\d+/clusters.*: 2 per minute
- POST clusters, URI */clusters/*, regex ^/v\d+\.\d+/clusters.*: 50 per day
- PUT, URI */clusters/*, regex ^/v\d+\.\d+/clusters.*: 2 per minute
- DELETE, URI */clusters/*, regex ^/v\d+\.\d+/clusters.*: 5 per minute

Rate limits are applied in order relative to the verb, going from least to most specific. For example, although the threshold for issuing a POST request to /v1.0/* is 2 per minute, you cannot issue a POST request to /v1.0/* more than 50 times within a single day.

If you exceed the thresholds established for your account, a 413 (OverLimit) HTTP response is returned with a Retry-After header to notify the client when it can attempt to try again.

3.6.2. Absolute limits

The following table provides the default values for the absolute limits.
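As an illustration of how the machine-processable patterns in Table 3.5 behave, the sketch below applies the clusters pattern to the path component of a URL, since the '^' anchor matches after the root URI path. The helper name is ours, and the backslashes are restored from the flattened table; this is an illustration, not part of the service.

```python
import re
from urllib.parse import urlsplit

# Machine-processable pattern from Table 3.5, with backslashes restored.
CLUSTERS = re.compile(r"^/v\d+\.\d+/clusters.*")

def is_cluster_call(url):
    """Return True if the URL's path falls under the clusters rate-limit rule."""
    # The '^' anchor applies after the root URI path, so match against
    # the path component, not the full URL.
    return CLUSTERS.match(urlsplit(url).path) is not None
```

For example, the pattern matches https://dfw.bigdata.api.rackspacecloud.com/v1.0/clusters and any sub-path of it, but not a flavors URI.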

Table 3.6. Absolute limits

- Node count: Maximum number of allowed data nodes: 3
- Disk: Maximum disk capacity across all data nodes, in gigabytes (GB): 4500
- RAM: Maximum RAM across all data nodes, in gigabytes (GB): 23040
- VCPUs: Maximum virtual CPUs across all data nodes: 6

3.7. Date and time format

For the display and consumption of date and time values, the Cloud Big Data service uses a date format that complies with ISO 8601. The system time is expressed as UTC.

Example 3.5. Cloud Big Data service date and time format

   yyyy-MM-dd'T'HH:mm:ss.SSSZ

For example, May 19, 2013 at 8:07:08 a.m., UTC-5 would have the following format:

   2013-05-19T08:07:08-0500

The following table describes the date and time format codes.

Table 3.7. Explanation of date and time format codes

- yyyy: Four-digit year
- MM: Two-digit month
- dd: Two-digit day of the month
- T: Separator of the date and time
- HH: Two-digit hour of the day (00-23)
- mm: Two-digit minutes of the hour
- ss: Two-digit seconds of the minute
- SSS: Three-digit milliseconds of the second
- Z: RFC 822 time zone

3.8. Pagination

Pagination provides the ability to limit the size of the returned data in the response body as well as retrieve a specified subset of a large data set. Pagination has two key concepts: limit and marker. limit is the restriction on the maximum number of items for that type that can be returned. marker is the ID of the last item in the previous list returned; the ID is the respective ID for the last cluster, node, or flavor. For example, a query could request the next 10 nodes after the node xyz as follows: ?limit=10&marker=xyz. Items displayed are sorted by ID.
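The limit/marker scheme described above can be drained client-side with a simple loop: request a page, remember the ID of the last item, and ask for the next page after that marker until a short page arrives. In this sketch, fetch_page is a hypothetical stand-in for one of the paginated GET calls; it is not part of the API.

```python
def list_all(fetch_page, limit=25):
    """Collect every item by following the limit/marker pagination scheme.

    `fetch_page(limit, marker)` stands in for a paginated GET such as
    /v1.0/tenant_id/clusters?limit=...&marker=...; here it is any callable
    returning the next batch of items (dicts with an "id" key), sorted by ID.
    """
    items, marker = [], None
    while True:
        page = fetch_page(limit=limit, marker=marker)
        items.extend(page)
        if len(page) < limit:       # a short page means there is no more data
            return items
        marker = page[-1]["id"]     # ID of the last item in the previous list
```

For example, with limit=2 against a seven-node cluster, the loop issues four requests and returns all seven nodes in ID order.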

Pagination applies only to the operations listed in the following table:

Verb  URI                                        Description
GET   /v1.0/tenant_id/clusters                   Lists all clusters for your account.
GET   /v1.0/tenant_id/clusters/clusterId/nodes   Lists all nodes for the specified cluster.
GET   /v1.0/tenant_id/flavors                    Lists all available flavors, including the drive size and the amount of RAM.

The default paging limit for all calls is 25, with a maximum of 200. Requests for more than 200 items result in a 400 error. See the following example of the operation to list paged nodes.

Example 3.6. List nodes paged request: JSON

curl -i -X GET https://dfw.bigdata.api.rackspacecloud.com/v1.0/yourAccountId/clusters/ac111111-2d86-4597-8010-cbe787bbbc41/nodes?limit=2 \
  -H "X-Auth-Token: yourAuthToken" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json"

Notice that the paged request example above sets the limit to 2 (?limit=2), so the response that follows shows 2 nodes:

Example 3.7. List nodes paged response: JSON

{
    "nodes": [
        {
            "id": "000",
            "created": "2012-12-27T10:10:10Z",
            "role": "NAMENODE",
            "name": "NAMENODE-1",
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.3", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.3", "version": 4}
                ]
            },
            "services": [
                {"name": "namenode"},
                {"name": "jobtracker"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.3"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
                }
            ]
        },
        {
            "id": "aaa",
            "role": "GATEWAY",
            "name": "GATEWAY-1",
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.4", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.4", "version": 4}
                ]
            },
            "services": [
                {"name": "pig"},
                {"name": "hive"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.4"},
                {"name": "status", "uri": "http://10.x.x.4"},
                {"name": "hdfs-scp", "uri": "scp://user@168.x.x.4:9022"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/aaa"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/aaa"
                }
            ]
        }
    ]
}

3.9. Data node instances

Cloud Big Data offers the data node instances described in the following table.

Table 3.8. Data node instances

Flavor ID    Name                     vCPU  RAM     Disk
hadoop1-7    Small Hadoop Instance    2     7.5 GB  1.25 TB
hadoop1-15   Medium Hadoop Instance   4     15 GB   2.5 TB
hadoop1-30   Large Hadoop Instance    8     30 GB   5 TB
hadoop1-60   XLarge Hadoop Instance   16    60 GB   10 TB
onmetal-io1  OnMetal IO v1            40    120 GB  3.2 TB

For more information about data node instances, see http://www.rackspace.com/knowledge_center/article/cloud-big-data-platform-provisioning-and-pricing.

3.10. Cluster status

When you send an API request to create, list, or delete a cluster or clusters, the following cluster status values might be returned:

BUILDING     The cluster is being provisioned.
CONFIGURING  The cluster is being configured.
ACTIVE       The cluster is online and available for use.
UPDATING     The cluster is being updated, either through a resize operation or another update operation.
ERROR        The cluster failed to start up.
DELETING     The cluster is being deleted.
DELETED      The cluster is deleted.

The following figure shows the cluster states and the operations that are valid for each one.

Figure 3.1. Cluster states with valid operations

3.11. Node status

When you send an API request to list or get details about a node or nodes, the following node status values might be returned:

BUILDING      Cloud Big Data is waiting for a nova resource to become available.
CONFIGURING   The server resource was acquired from nova and is being configured for the cluster.
ACTIVE        The node is provisioned and part of a cluster.
ERROR         The provisioning failed for the node.
DELETING      A delete was requested but is not yet completed.
DELETED       The node has been deleted, and the nova resource has been freed.
DEACTIVATING  The node is preparing to be removed from the cluster.
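Because BUILDING, CONFIGURING, UPDATING, and DEACTIVATING are transitional states, clients typically poll until a terminal status appears. The following sketch assumes a caller-supplied `fetch_status` function (for example, one wrapping a GET on the cluster or node resource); the helper itself is illustrative, not part of the API:

```python
import time

def wait_for_status(fetch_status, success=("ACTIVE",), failure=("ERROR",),
                    interval=0.0, max_polls=60):
    """Poll fetch_status() until a terminal status from the tables above appears."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in success:
            return status
        if status in failure:
            raise RuntimeError(f"provisioning failed with status {status}")
        time.sleep(interval)
    raise TimeoutError("status did not reach a terminal value in time")

# Simulated status sequence standing in for repeated GET calls:
states = iter(["BUILDING", "CONFIGURING", "CONFIGURING", "ACTIVE"])
print(wait_for_status(lambda: next(states)))  # ACTIVE
```

In production code, set interval to a few seconds so that repeated polls do not exhaust your rate allowance.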

4. API operations

This chapter describes each of the API operations provided by the Cloud Big Data service.

Method  URI                                              Description

Profiles
POST    /v1.0/tenant_id/profile                          Creates a profile or updates the information in an existing profile.
GET     /v1.0/tenant_id/profile                          Returns profile details for the current user.

Clusters
POST    /v1.0/tenant_id/clusters                         Creates a cluster.
GET     /v1.0/tenant_id/clusters                         Lists all clusters for your account.
GET     /v1.0/tenant_id/clusters/clusterId               Shows details for a specified cluster.
DELETE  /v1.0/tenant_id/clusters/clusterId               Deletes a specified cluster.
POST    /v1.0/tenant_id/clusters/clusterId/action        Resizes a specified cluster.

Nodes
GET     /v1.0/tenant_id/clusters/clusterId/nodes         Lists all nodes for a specified cluster.
GET     /v1.0/tenant_id/clusters/clusterId/nodes/nodeId  Shows details for a specified node in a specified cluster.

Flavors
GET     /v1.0/tenant_id/flavors                          Lists all available flavors, including the drive size and amount of RAM.
GET     /v1.0/tenant_id/flavors/flavorId                 Shows details for a specified flavor.
GET     /v1.0/tenant_id/flavors/flavorId/types           Lists the supported cluster types for a specified flavor.

Types
GET     /v1.0/tenant_id/types                            Lists cluster types.
GET     /v1.0/tenant_id/types/typeId                     Shows details for a specified cluster type.
GET     /v1.0/tenant_id/types/typeId/flavors             Lists the supported flavors for a specified cluster type.

Resource limits
GET     /v1.0/tenant_id/limits                           Shows the absolute resource limits, such as remaining node count, available RAM, and remaining disk space, for the user.

4.1. Profiles

This section describes the operations that are supported for profiles.

Note: Your Cloud Big Data profile is different from your cloud account.

Your profile has the following characteristics and requirements:

A profile is the configuration for the administration and login account for the cluster.
Only one profile is allowed for each user or account. Any updates or additions override the existing profile.

Method  URI                      Description
POST    /v1.0/tenant_id/profile  Creates a profile or updates the information in an existing profile.
GET     /v1.0/tenant_id/profile  Returns profile details for the current user.

4.1.1. Create or update profile

Method  URI                      Description
POST    /v1.0/tenant_id/profile  Creates a profile or updates the information in an existing profile.

Cloud Big Data provisions each server in the cluster with the username and password that are part of the profile. You can SSH into the nodes with those credentials. These credentials are required in the request (as shown in the example "Create or update profile: JSON request").

The cloudCredentials.username and cloudCredentials.apiKey values are stored in the cluster configuration so that Hadoop can read or write objects stored in Cloud Files. These credentials are optional. If they are not supplied, the cluster does not have access to Cloud Files, but otherwise operates normally.

Note: You must create your profile before you create a cluster.

The 400 error code might indicate malformed data or unacceptable parameters.

Normal response codes: 200
Error response codes: badRequest (400)

4.1.1.1. Request

This table shows the URI parameters for the create or update profile request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

Example 4.1. Create or update profile: JSON request

{
    "profile": {
        "username": "john.doe",
        "password": "j0hnd03",
        "sshKeys": [
            {
                "name": "t@test",
                "publicKey": "ssh-rsa..."
            }
        ],
        "cloudCredentials": {
            "username": "jdoe",
            "apiKey": "df23gkh34h52gkdgfakgf"
        }
    }
}
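The profile body shown above is an ordinary JSON document sent with the usual authentication headers. This sketch assembles (without sending) such a request with Python's standard library; the `build_profile_request` helper and its arguments are illustrative:

```python
import json
from urllib.request import Request

def build_profile_request(endpoint, tenant_id, token, profile):
    """Assemble, without sending, a POST /v1.0/tenant_id/profile request."""
    body = json.dumps({"profile": profile}).encode("utf-8")
    return Request(
        f"{endpoint}/v1.0/{tenant_id}/profile",
        data=body,
        headers={
            "X-Auth-Token": token,
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_profile_request("https://dfw.bigdata.api.rackspacecloud.com",
                            "1234", "yourAuthToken",
                            {"username": "john.doe", "password": "j0hnd03"})
print(req.get_method(), req.full_url)
```

Passing the prepared request to urllib.request.urlopen (or an equivalent HTTP client) would perform the actual call.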

4.1.1.2. Response

Example 4.2. Create or update profile: JSON response

{
    "profile": {
        "username": "john.doe",
        "userId": "12346",
        "tenantId": "123456",
        "sshKeys": [
            {
                "name": "t@test",
                "publicKey": "ssh-rsa..."
            }
        ],
        "cloudCredentials": {
            "username": "jdoe"
        },
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/123456/profile"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/123456/profile"
            }
        ]
    }
}

4.1.2. View profile information

Method  URI                      Description
GET     /v1.0/tenant_id/profile  Returns profile details for the current user.

Normal response codes: 200

4.1.2.1. Request

This table shows the URI parameters for the view profile information request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

This operation does not accept a request body.

4.1.2.2. Response

Example 4.3. View profile information: JSON response

{
    "profile": {
        "username": "john.doe",
        "user_id": "12346",
        "tenant_id": "123456",
        "sshKeys": [
            {
                "name": "t@test"
            }
        ],
        "cloudCredentials": {},
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/123456/profile"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/123456/profile"
            }
        ]
    }
}

4.2. Clusters

This section describes the operations that are supported for clusters.

Method  URI                                        Description
POST    /v1.0/tenant_id/clusters                   Creates a cluster.
GET     /v1.0/tenant_id/clusters                   Lists all clusters for your account.
GET     /v1.0/tenant_id/clusters/clusterId         Shows details for a specified cluster.
DELETE  /v1.0/tenant_id/clusters/clusterId         Deletes a specified cluster.
POST    /v1.0/tenant_id/clusters/clusterId/action  Resizes a specified cluster.

4.2.1. Create cluster

Method  URI                       Description
POST    /v1.0/tenant_id/clusters  Creates a cluster.

Note: You must create your profile before you create a cluster.

The postInitScript request parameter specifies a URL that downloads a script that runs after the cluster is created. The status of the run is shown in the postInitScriptStatus response parameter. Possible values for postInitScriptStatus are FAILED, PENDING, DELIVERED, RUNNING, SUCCEEDED, and None.

The progress response parameter is calculated based on the number of nodes in the cluster and their progress through configuration. Currently, the calculation is as follows but is subject to change:

BUILDING: progress = 0.5 * configuring_count / len(self.nodes)
CONFIGURING/RESIZING: progress = 0.5 + (0.5 * active_count / len(self.nodes))
ACTIVE: progress = 1.0

The 400 error code might indicate any of the following issues:

The response body is invalid.
You need to create a user profile.
The node count is invalid.
The flavor is invalid.
The data is malformed.

The 413 error code might indicate that the resource limit is exceeded.

Normal response codes: 200
Error response codes: badRequest (400), overLimit (413)

4.2.1.1. Request

This table shows the URI parameters for the create cluster request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

Example 4.4. Create cluster: JSON request

{
    "cluster": {
        "name": "slice",
        "clusterType": "HADOOP_HDP2_1",
        "flavorId": "hadoop1-7",
        "nodeCount": 5,
        "postInitScript": "http://example.com/configure_cluster.sh"
    }
}

4.2.1.2. Response

Example 4.5. Create cluster: JSON response

{
    "cluster": {
        "id": "db478fc1-2d86-4597-8010-cbe787bbbc41",
        "created": "2012-12-27T10:10:10Z",
        "updated": "",
        "name": "slice",
        "clusterType": "HADOOP_HDP2_1",
        "flavorId": "hadoop1-7",
        "nodeCount": 5,
        "postInitScriptStatus": "PENDING",
        "progress": 0.0,
        "status": "BUILDING",
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            }
        ]
    }
}
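The progress formulas above can be written out directly. In this sketch, node_statuses stands in for the per-node status values of the cluster (the function itself is illustrative, and the service notes that the calculation is subject to change):

```python
def cluster_progress(status, node_statuses):
    """Compute the progress value per the formulas above."""
    total = len(node_statuses)
    configuring = sum(1 for s in node_statuses if s == "CONFIGURING")
    active = sum(1 for s in node_statuses if s == "ACTIVE")
    if status == "BUILDING":
        # First half of the bar: nodes that have reached CONFIGURING
        return 0.5 * configuring / total
    if status in ("CONFIGURING", "RESIZING"):
        # Second half: nodes that have reached ACTIVE
        return 0.5 + (0.5 * active / total)
    if status == "ACTIVE":
        return 1.0
    return None

print(cluster_progress("CONFIGURING", ["ACTIVE", "ACTIVE", "CONFIGURING", "BUILDING"]))
# 0.75
```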

4.2.2. List all clusters

Method  URI                       Description
GET     /v1.0/tenant_id/clusters  Lists all clusters for your account.

Normal response codes: 200

4.2.2.1. Request

This table shows the URI parameters for the list all clusters request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

This operation does not accept a request body.

4.2.2.2. Response

Example 4.6. List all clusters: JSON response

{
    "clusters": [
        {
            "id": "db478fc1-2d86-4597-8010-cbe787bbbc41",
            "name": "slice",
            "created": "2012-12-27T10:10:10Z",
            "updated": "2012-12-27T10:15:10Z",
            "clusterType": "HADOOP_HDP2_1",
            "flavorId": "hadoop1-7",
            "nodeCount": 5,
            "postInitScriptStatus": "SUCCEEDED",
            "progress": 1.0,
            "status": "ACTIVE",
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
                }
            ]
        },
        {
            "id": "ac111111-2d86-4597-8010-cbe787bbbc41",
            "name": "real",
            "created": "2012-12-27T10:10:10Z",
            "updated": "2012-12-27T10:15:10Z",
            "clusterType": "HBASE_HDP2_1",
            "flavorId": "hadoop1-60",
            "nodeCount": 20,
            "postInitScriptStatus": null,
            "progress": 1.0,
            "status": "ACTIVE",
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/ac111111-2d86-4597-8010-cbe787bbbc41"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/ac111111-2d86-4597-8010-cbe787bbbc41"
                }
            ]
        }
    ]
}

4.2.3. Show cluster details

Method  URI                                 Description
GET     /v1.0/tenant_id/clusters/clusterId  Shows details for a specified cluster.

Normal response codes: 200
Error response codes: itemNotFound (404)

4.2.3.1. Request

This table shows the URI parameters for the show cluster details request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
clusterId  String  Specifies the cluster ID.

Example 4.7. Show cluster details: JSON request

GET https://dfw.bigdata.api.rackspacecloud.com/v1.0/7654321/clusters/ac111111-2d86-4597-8010-cbe787bbbc41
Accept: application/json
X-Auth-Token: ea85e6ac-baff-4a6c-bf43-848020ea3812
Content-Type: application/json

This operation does not accept a request body.

4.2.3.2. Response

Example 4.8. Show cluster details: JSON response

Status: 200 OK
Date: Mon, 06 Aug 2012 21:54:21 GMT
Content-Type: application/json
Content-Length: 110

{
    "cluster": {
        "id": "db478fc1-2d86-4597-8010-cbe787bbbc41",
        "created": "2012-12-27T10:10:10Z",
        "updated": "2012-12-27T10:20:10Z",
        "name": "slice",
        "clusterType": "HADOOP_HDP2_1",
        "flavorId": "hadoop1-7",
        "nodeCount": 5,
        "postInitScriptStatus": "SUCCEEDED",
        "progress": 1.0,
        "status": "ACTIVE",
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            }
        ]
    }
}

4.2.4. Delete cluster

Method  URI                                 Description
DELETE  /v1.0/tenant_id/clusters/clusterId  Deletes a specified cluster.

The 400 error code might indicate missing or invalid parameters. The 409 error code might indicate an invalid state.

This operation deletes the cluster that is specified by clusterId.

Normal response codes: 204
Error response codes: badRequest (400), itemNotFound (404), conflict (409)

4.2.4.1. Request

This table shows the URI parameters for the delete cluster request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
clusterId  String  Specifies the cluster ID.

Example 4.9. Delete cluster: JSON request

DELETE https://dfw.bigdata.api.rackspacecloud.com/v1.0/7654321/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41
Accept: application/json
X-Auth-Token: ea85e6ac-baff-4a6c-bf43-848020ea3812
Content-Type: application/json

This operation does not accept a request body.

4.2.4.2. Response

Example 4.10. Delete cluster: JSON response

Status: 202 Accepted
Date: Mon, 06 Aug 2012 21:54:21 GMT
Content-Type: application/json

{
    "cluster": {
        "id": "db478fc1-2d86-4597-8010-cbe787bbbc41",
        "created": "2012-12-27T10:10:10Z",
        "updated": "2012-12-27T20:14:10Z",
        "name": "slice",
        "clusterType": "HADOOP_HDP2_1",
        "flavorId": "hadoop1-7",
        "nodeCount": 5,
        "postInitScriptStatus": null,
        "status": "DELETING",
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            }
        ]
    }
}

4.2.5. Resize cluster

Method  URI                                        Description
POST    /v1.0/tenant_id/clusters/clusterId/action  Resizes a specified cluster.

The 400 error code might indicate the presence of unacceptable parameters or malformed data. The 409 error code might indicate an invalid state.

This operation resizes the cluster specified by clusterId.

Normal response codes: 200
Error response codes: badRequest (400), itemNotFound (404), conflict (409)

4.2.5.1. Request

This table shows the URI parameters for the resize cluster request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
clusterId  String  Specifies the cluster ID.

Example 4.11. Resize cluster: JSON request

{
    "resize": {
        "nodeCount": 10
    }
}

4.2.5.2. Response

Example 4.12. Resize cluster: JSON response

{
    "cluster": {
        "id": "db478fc1-2d86-4597-8010-cbe787bbbc41",
        "created": "2012-12-27T10:10:10Z",
        "updated": "2012-12-27T16:20:10Z",
        "name": "slice",
        "clusterType": "HADOOP_HDP2_1",
        "flavorId": "hadoop1-7",
        "nodeCount": 10,
        "postInitScriptStatus": "PENDING",
        "progress": 0.5,
        "status": "UPDATING",
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41"
            }
        ]
    }
}

4.3. Nodes

This section describes operations that are supported for the servers (nodes) that are part of a cluster.

Method  URI                                              Description
GET     /v1.0/tenant_id/clusters/clusterId/nodes         Lists all nodes for a specified cluster.
GET     /v1.0/tenant_id/clusters/clusterId/nodes/nodeId  Shows details for a specified node in a specified cluster.

4.3.1. List cluster nodes

Method  URI                                       Description
GET     /v1.0/tenant_id/clusters/clusterId/nodes  Lists all nodes for a specified cluster.

Valid values for the response body parameter postInitScriptStatus are FAILED, PENDING, DELIVERED, RUNNING, SUCCEEDED, and None. Valid values for the node status are in Section 3.11, "Node status".

Normal response codes: 200
Error response codes: itemNotFound (404)

4.3.1.1. Request

This table shows the URI parameters for the list cluster nodes request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
clusterId  String  Specifies the cluster ID.

Example 4.13. List cluster nodes: JSON request

GET https://dfw.bigdata.api.rackspacecloud.com/v1.0/7654321/clusters/ac111111-2d86-4597-8010-cbe787bbbc41/nodes
Accept: application/json
X-Auth-Token: ea85e6ac-baff-4a6c-bf43-848020ea3812
Content-Type: application/json

This operation does not accept a request body.

4.3.1.2. Response

Example 4.14. List cluster nodes: JSON response

{
    "nodes": [
        {
            "id": "000",
            "created": "2012-12-27T10:10:10Z",
            "role": "NAMENODE",
            "name": "NAMENODE-1",
            "postInitScriptStatus": null,
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.3", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.3", "version": 4}
                ]
            },
            "services": [
                {"name": "namenode"},
                {"name": "jobtracker"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.3"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
                }
            ]
        },
        {
            "id": "aaa",
            "role": "GATEWAY",
            "name": "GATEWAY-1",
            "postInitScriptStatus": null,
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.4", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.4", "version": 4}
                ]
            },
            "services": [
                {"name": "pig"},
                {"name": "hive"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.4"},
                {"name": "status", "uri": "http://10.x.x.4"},
                {"name": "hdfs-scp", "uri": "scp://user@168.x.x.4:9022"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/aaa"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/aaa"
                }
            ]
        },
        {
            "id": "bbb",
            "role": "DATANODE",
            "name": "DATANODE-1",
            "postInitScriptStatus": null,
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.5", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.5", "version": 4}
                ]
            },
            "services": [
                {"name": "datanode"},
                {"name": "tasktracker"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.5"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/bbb"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/bbb"
                }
            ]
        },
        {
            "id": "ccc",
            "role": "DATANODE",
            "name": "DATANODE-2",
            "postInitScriptStatus": null,
            "status": "ACTIVE",
            "addresses": {
                "public": [
                    {"addr": "168.x.x.6", "version": 4}
                ],
                "private": [
                    {"addr": "10.x.x.6", "version": 4}
                ]
            },
            "services": [
                {"name": "datanode"},
                {"name": "tasktracker"},
                {"name": "ssh", "uri": "ssh://user@168.x.x.6"}
            ],
            "links": [
                {
                    "rel": "self",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/ccc"
                },
                {
                    "rel": "bookmark",
                    "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/ccc"
                }
            ]
        }
    ]
}
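A common use of the node listing is to collect the connection URIs (ssh, status, hdfs-scp) that each node's services expose. The following sketch (the helper name is illustrative) extracts them from a parsed response:

```python
def service_endpoints(nodes_response):
    """Map each node name to the URIs of its services that expose one."""
    return {
        node["name"]: {
            svc["name"]: svc["uri"]
            for svc in node.get("services", [])
            if "uri" in svc  # services such as pig or hive list no URI
        }
        for node in nodes_response["nodes"]
    }

# Abbreviated sample mirroring the response above:
sample = {"nodes": [
    {"name": "GATEWAY-1", "services": [
        {"name": "pig"},
        {"name": "ssh", "uri": "ssh://user@168.x.x.4"},
        {"name": "hdfs-scp", "uri": "scp://user@168.x.x.4:9022"},
    ]},
]}
print(service_endpoints(sample)["GATEWAY-1"]["ssh"])  # ssh://user@168.x.x.4
```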

4.3.2. Show node details

Method  URI                                              Description
GET     /v1.0/tenant_id/clusters/clusterId/nodes/nodeId  Shows details for a specified node in a specified cluster.

Valid values for the response body parameter postInitScriptStatus are FAILED, PENDING, DELIVERED, RUNNING, SUCCEEDED, and None. Valid values for the node status are in Section 3.11, "Node status".

Normal response codes: 200
Error response codes: itemNotFound (404)

4.3.2.1. Request

This table shows the URI parameters for the show node details request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
clusterId  String  Specifies the cluster ID.
nodeId     String  Specifies the node ID.

Example 4.15. Show node details: JSON request

GET https://dfw.bigdata.api.rackspacecloud.com/v1.0/7654321/clusters/ac111111-2d86-4597-8010-cbe787bbbc41/nodes/000
Accept: application/json
X-Auth-Token: ea85e6ac-baff-4a6c-bf43-848020ea3812
Content-Type: application/json

This operation does not accept a request body.

4.3.2.2. Response

Example 4.16. Show node details: JSON response

{
    "node": {
        "id": "000",
        "created": "2012-12-27T10:10:10Z",
        "role": "NAMENODE",
        "name": "NAMENODE-1",
        "postInitScriptStatus": null,
        "status": "ACTIVE",
        "addresses": {
            "public": [
                {"addr": "168.x.x.3", "version": 4}
            ],
            "private": [
                {"addr": "10.x.x.3", "version": 4}
            ]
        },
        "services": [
            {"name": "datanode"},
            {"name": "tasktracker"},
            {"name": "ssh", "uri": "ssh://user@168.x.x.3"}
        ],
        "links": [
            {
                "rel": "self",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
            },
            {
                "rel": "bookmark",
                "href": "https://dfw.bigdata.api.rackspacecloud.com/1234/clusters/db478fc1-2d86-4597-8010-cbe787bbbc41/nodes/000"
            }
        ]
    }
}

4.4. Flavors

This section describes operations that are supported for flavors.

Method  URI                                     Description
GET     /v1.0/tenant_id/flavors                 Lists all available flavors, including the drive size and amount of RAM.
GET     /v1.0/tenant_id/flavors/flavorId        Shows details for a specified flavor.
GET     /v1.0/tenant_id/flavors/flavorId/types  Lists the supported cluster types for a specified flavor.

4.4.1. List available flavors

Method  URI                      Description
GET     /v1.0/tenant_id/flavors  Lists all available flavors, including the drive size and amount of RAM.

Normal response codes: 200

4.4.1.1. Request

This table shows the URI parameters for the list available flavors request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

This operation does not accept a request body.

4.4.1.2. Response

Example 4.17. List available flavors: JSON response

{
    "flavors": [
        {
            "disk": 2500,
            "id": "hadoop1-15",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-15", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-15", "rel": "bookmark"}
            ],
            "name": "Medium Hadoop Instance",
            "ram": 15360,
            "vcpus": 4
        },
        {
            "disk": 5000,
            "id": "hadoop1-30",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-30", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-30", "rel": "bookmark"}
            ],
            "name": "Large Hadoop Instance",
            "ram": 30720,
            "vcpus": 8
        },
        {
            "disk": 10000,
            "id": "hadoop1-60",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-60", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-60", "rel": "bookmark"}
            ],
            "name": "XLarge Hadoop Instance",
            "ram": 61440,
            "vcpus": 16
        },
        {
            "disk": 1250,
            "id": "hadoop1-7",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-7", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-7", "rel": "bookmark"}
            ],
            "name": "Small Hadoop Instance",
            "ram": 7680,
            "vcpus": 2
        },
        {
            "disk": 3200,
            "id": "onmetal-io1",
            "links": [
                {"href": "https://iad.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/onmetal-io1", "rel": "self"},
                {"href": "https://iad.bigdata.api.rackspacecloud.com/1234/flavors/onmetal-io1", "rel": "bookmark"}
            ],
            "name": "OnMetal IO v1",
            "ram": 131072,
            "vcpus": 40
        }
    ]
}
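Note that the listing reports ram in megabytes and disk in gigabytes. A client choosing a flavor for a new cluster might filter the listing by minimum capacity, as in this sketch (the helper and its selection rule are illustrative):

```python
def smallest_flavor(flavors, min_ram=0, min_disk=0):
    """Return the ID of the smallest flavor meeting RAM (MB) and disk (GB) minimums."""
    fits = [f for f in flavors if f["ram"] >= min_ram and f["disk"] >= min_disk]
    if not fits:
        return None
    # Prefer the least RAM, then the least disk
    return min(fits, key=lambda f: (f["ram"], f["disk"]))["id"]

# Abbreviated entries from the listing above:
flavors = [
    {"id": "hadoop1-7", "ram": 7680, "disk": 1250},
    {"id": "hadoop1-15", "ram": 15360, "disk": 2500},
    {"id": "hadoop1-30", "ram": 30720, "disk": 5000},
]
print(smallest_flavor(flavors, min_ram=10000))  # hadoop1-15
```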


4.4.2. Show flavor details

Method  URI                               Description
GET     /v1.0/tenant_id/flavors/flavorId  Shows details for a specified flavor.

Normal response codes: 200
Error response codes: itemNotFound (404)

4.4.2.1. Request

This table shows the URI parameters for the show flavor details request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
flavorId   String  Specifies the flavor ID.

This operation does not accept a request body.

4.4.2.2. Response

Example 4.18. Show flavor details: JSON response

{
    "flavor": {
        "disk": 1250,
        "id": "hadoop1-7",
        "links": [
            {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-7", "rel": "self"},
            {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-7", "rel": "bookmark"}
        ],
        "name": "Small Hadoop Instance",
        "ram": 7680,
        "vcpus": 2
    }
}

4.4.3. List supported cluster types for a flavor

Method  URI                                     Description
GET     /v1.0/tenant_id/flavors/flavorId/types  Lists the supported cluster types for a specified flavor.

Normal response codes: 200
Error response codes: itemNotFound (404)

4.4.3.1. Request

This table shows the URI parameters for the list supported cluster types for a flavor request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
flavorId   String  Specifies the flavor ID.

This operation does not accept a request body.

4.4.3.2. Response

Example 4.19. List supported cluster types for a flavor: JSON response

{
    "types": [
        {
            "id": "HADOOP_HDP1_3",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/HADOOP_HDP1_3", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/types/HADOOP_HDP1_3", "rel": "bookmark"}
            ],
            "name": "Hadoop (HDP 1.3)",
            "version": "1.3"
        },
        {
            "id": "HADOOP_HDP2_1",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/HADOOP_HDP2_1", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/types/HADOOP_HDP2_1", "rel": "bookmark"}
            ],
            "name": "Hadoop (HDP 2.1)",
            "version": "2.1"
        }
    ]
}

4.5. Types

This section describes the operations that are supported for cluster types.

Method  URI                                  Description
GET     /v1.0/tenant_id/types                Lists cluster types.
GET     /v1.0/tenant_id/types/typeId         Shows details for a specified cluster type.
GET     /v1.0/tenant_id/types/typeId/flavors Lists the supported flavors for a specified cluster type.

4.5.1. List cluster types

Method  URI                    Description
GET     /v1.0/tenant_id/types  Lists cluster types.

Normal response codes: 200

4.5.1.1. Request

This table shows the URI parameters for the list cluster types request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

This operation does not accept a request body.

4.5.1.2. Response

Example 4.20. List cluster types: JSON response

{
    "types": [
        {
            "id": "HADOOP_HDP1_3",
            "links": [
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/HADOOP_HDP1_3", "rel": "self"},
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/1234/types/HADOOP_HDP1_3", "rel": "bookmark"}
            ],
            "name": "Hadoop (HDP 1.3)",
            "version": "1.3"
        },
        {
            "id": "HADOOP_HDP2_1",
            "links": [
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/HADOOP_HDP2_1", "rel": "self"},
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/1234/types/HADOOP_HDP2_1", "rel": "bookmark"}
            ],
            "name": "Hadoop (HDP 2.1)",
            "version": "2.1"
        },
        {
            "id": "SPARK_HDP2_1",
            "links": [
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/SPARK_HDP2_1", "rel": "self"},
                {"href": "http://dfw.bigdata.api.rackspacecloud.com/1234/types/SPARK_HDP2_1", "rel": "bookmark"}
            ],
            "name": "Spark Technical Preview (HDP 2.1)",
            "version": "2.1"
        }
    ]
}

4.5.2. Show cluster type details

Method  URI                           Description
GET     /v1.0/tenant_id/types/typeId  Shows details for a specified cluster type.

Normal response codes: 200
Error response codes: itemNotFound (404)

4.5.2.1. Request

This table shows the URI parameters for the show cluster type details request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
typeId     String  Specifies the type ID.

This operation does not accept a request body.

4.5.2.2. Response

Example 4.21. Show cluster type details: JSON response

{
    "type": {
        "id": "HADOOP_HDP2_1",
        "links": [
            {"href": "http://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/types/HADOOP_HDP2_1", "rel": "self"},
            {"href": "http://dfw.bigdata.api.rackspacecloud.com/1234/types/HADOOP_HDP2_1", "rel": "bookmark"}
        ],
        "name": "Hadoop (HDP 2.1)",
        "services": [...]
    }
}

4.5.3. List supported flavors for a type

Method  URI                                   Description
GET     /v1.0/tenant_id/types/typeId/flavors  Lists the supported flavors for a specified cluster type.

Normal response codes: 200
Error response codes: itemNotFound (404)

4.5.3.1. Request

This table shows the URI parameters for the list supported flavors for a type request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.
typeId     String  Specifies the type ID.

This operation does not accept a request body.

4.5.3.2. Response

Example 4.22. List supported flavors for a type: JSON response

{
    "flavors": [
        {
            "disk": 2500,
            "id": "hadoop1-15",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-15", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-15", "rel": "bookmark"}
            ],
            "name": "Medium Hadoop Instance",
            "ram": 15360,
            "vcpus": 4
        },
        {
            "disk": 5000,
            "id": "hadoop1-30",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-30", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-30", "rel": "bookmark"}
            ],
            "name": "Large Hadoop Instance",
            "ram": 30720,
            "vcpus": 8
        },
        {
            "disk": 10000,
            "id": "hadoop1-60",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-60", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-60", "rel": "bookmark"}
            ],
            "name": "XLarge Hadoop Instance",
            "ram": 61440,
            "vcpus": 16
        },
        {
            "disk": 1250,
            "id": "hadoop1-7",
            "links": [
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/flavors/hadoop1-7", "rel": "self"},
                {"href": "https://dfw.bigdata.api.rackspacecloud.com/1234/flavors/hadoop1-7", "rel": "bookmark"}
            ],
            "name": "Small Hadoop Instance",
            "ram": 7680,
            "vcpus": 2
        }
    ]
}

4.6. Resource limits

This section describes the operation that is supported for resource limits.

Method  URI                     Description
GET     /v1.0/tenant_id/limits  Shows the absolute resource limits, such as remaining node count, available RAM, and remaining disk space, for the user.

4.6.1. Show resource limits

Method  URI                     Description
GET     /v1.0/tenant_id/limits  Shows the absolute resource limits, such as remaining node count, available RAM, and remaining disk space, for the user.

Normal response codes: 200

4.6.1.1. Request

This table shows the URI parameters for the show resource limits request:

Name       Type    Description
tenant_id  String  The tenant ID in a multi-tenancy cloud.

This operation does not accept a request body.

4.6.1.2. Response

Example 4.23. Show resource limits: JSON response

{
    "limits": {
        "absolute": {
            "disk": {
                "limit": 5120,
                "remaining": 5120
            },
            "nodecount": {
                "limit": 5,
                "remaining": 5
            },
            "ram": {
                "limit": 40960,
                "remaining": 40960
            },
            "vcpus": {
                "limit": 10,
                "remaining": 10
            }
        },
        "links": [
            {"href": "http://dfw.bigdata.api.rackspacecloud.com/v1.0/1234/limits", "rel": "self"},
            {"href": "http://dfw.bigdata.api.rackspacecloud.com/1234/limits", "rel": "bookmark"}
        ]
    }
}
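A client can check a proposed cluster against the remaining limits before issuing a create or resize request and risking a 413 error. This sketch assumes the "absolute" object from the response above and per-node ram (MB), disk (GB), and vcpus figures as in the flavors listing; the helper itself is illustrative:

```python
def cluster_fits(absolute, node_count, flavor):
    """Check whether node_count nodes of the given flavor fit the remaining limits."""
    needed = {
        "nodecount": node_count,
        "ram": node_count * flavor["ram"],      # MB
        "disk": node_count * flavor["disk"],    # GB
        "vcpus": node_count * flavor["vcpus"],
    }
    return all(absolute[key]["remaining"] >= need for key, need in needed.items())

absolute = {
    "disk": {"limit": 5120, "remaining": 5120},
    "nodecount": {"limit": 5, "remaining": 5},
    "ram": {"limit": 40960, "remaining": 40960},
    "vcpus": {"limit": 10, "remaining": 10},
}
small = {"ram": 7680, "disk": 1250, "vcpus": 2}  # hadoop1-7
print(cluster_fits(absolute, 3, small))  # True
```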


Glossary

Cluster
    A group of servers (nodes). In Cloud Big Data, the servers are virtual.

HDFS
    The Apache Hadoop Distributed File System. This is the default file system used in Cloud Big Data.

MapReduce
    A framework for performing calculations on the data in the distributed file system. Map tasks run in parallel with each other. Reduce tasks also run in parallel with each other.

Node
    In a network, a node is a connection point, either a redistribution point or an end point for data transmissions. In general, a node has programmed or engineered capability to recognize and process or forward transmissions to other nodes.

SCP server proxy
    An SCP service that runs on your Hadoop cluster and distributes your files across the cluster.

Service catalog
    The list of services available to you, returned along with your authentication token and an expiration date for that token. All the services in your service catalog should recognize your token as valid until it expires. The catalog listing for each service provides at least one endpoint URL for that service. Other information, such as regions, versions, and tenants, is provided if it is relevant to your access to the service.

Tenant
    A container used to group or isolate resources or identity objects. Depending on the service operator, a tenant could map to a customer, account, organization, or project.