Energy Efficiency: The New Holy Grail of Data Management Systems Research Stavros Harizopoulos HP Labs Palo Alto, CA Mehul A. Shah HP Labs Palo Alto, CA Justin Meza UCLA Los Angeles, CA Parthasarathy Ranganathan HP Labs Palo Alto, CA ABSTRACT Energy costs are quickly rising in large-scale data centers and are soon projected to overtake the cost of hardware. As a result, data center operators have recently started turning to more energy-friendly hardware. Despite the growing body of research in power management techniques, there has been little work to date on energy efficiency from a data management software perspective. In this paper, we argue that hardware-only approaches are only part of the solution, and that data management software will be key in optimizing for energy efficiency. We discuss the problems arising from growing energy use in data centers and the trends that point to an increasing set of opportunities for software-level optimizations. Using two simple experiments, we illustrate the potential of such optimizations, and, motivated by these examples, we discuss general approaches for reducing energy waste. Lastly, we point out existing places within database systems that are promising for energy-efficiency optimizations and urge the data management systems community to shift focus from performance-oriented research to energy-efficient computing. Categories and Subject Descriptors H.2.4 [Database Management]: Systems - relational databases; query processing. H.3.4 [Information Storage and Retrieval]: Systems and Software - performance evaluation (efficiency and effectiveness). General Terms Algorithms, Performance, Design. Keywords Data management systems, database systems, energy efficiency, power management. This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/3.0/).
You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2009. 4th Biennial Conference on Innovative Data Systems Research (CIDR), January 4-7, 2009, Asilomar, California, USA. 1. INTRODUCTION Growing demands for information processing have led to a demand for cheaper, faster, and larger data management systems. At the same time, an important and growing component of the total cost of ownership for these systems is power and cooling [Bar05]. A recent report by the Environmental Protection Agency shows that data center power consumption in the US doubled between 2000 and 2006, and will double again in the next five years [EPA07]. Uncontrolled energy use in data centers also has negative implications for density, scalability, reliability, and the environment. These trends are raising awareness across multiple disciplines of the need to optimize for energy use in data centers. Recent work in computer architecture and power management has called for system designs and provisioning of hardware components that take power consumption into consideration. The goal in these approaches is to design systems that will be highly energy efficient at the peak performance point (maximum utilization) and remain energy efficient as load fluctuates or drops. The notion of energy proportionality [BH07] characterizes exactly that: servers should use no power when not in use and draw power only in proportion to delivered performance or system utilization. Thus, servers would offer constant energy efficiency, i.e., the ratio of performance to power, at all performance levels. In this paper, we argue that energy-friendly and energy-proportional hardware is only part of the answer. Software choices can be particularly effective, especially in database management systems that permit broad choices due to physical data independence and query optimization.
We expect the opportunity for software-level optimizations to be even larger with the recent trend of increased heterogeneity at all levels, from clusters to storage hierarchies and CPUs. We demonstrate the efficacy of software choices in database systems through two simple examples. In the first example, we experiment with a system configured similarly to an audited TPC-H server and show that making the right decision in physical design can improve energy efficiency. The second example uses a relational scan operator as a basis to demonstrate that optimizing for performance is different from optimizing for energy efficiency. Motivated by these examples, we discuss potential approaches for reducing energy waste. We identify three such general approaches, each with an increasing level of complexity: (a) adjust existing system-wide configuration knobs and query optimization parameters to account for energy costs, (b) consolidate resource use in both time and space to facilitate powering down unused
hardware components, and (c) revisit and redesign data structures, algorithms, and policies, from a whole-system perspective, to improve energy efficiency. With these approaches in mind, we examine several areas of database systems that are ripe for energy-efficiency optimizations. These areas range from physical design to storage and buffer management, and from query optimization and query processing to new architectures and special purpose engines. Our aim in this paper is to bring awareness to the database systems community about the important opportunities for research on energy-efficient data management software. We start our discussion of the problem by presenting trends and implications of rising energy use (Section 2), and motivate the potential that database software has for affecting energy efficiency. Section 3 contains our two examples that further highlight the potential for software-based energy optimizations. In Section 4, we describe three general approaches for reducing energy waste, and in Section 5, we discuss promising areas in data management systems for improving energy efficiency. We conclude in Section 6. 2. THE NEED FOR ENERGY EFFICIENCY In this section, we first define and motivate the need for energy efficiency in data management systems (Sections 2.1 and 2.2), and then highlight current findings and trends that will shape the scope of future solutions (Sections 2.3 and 2.4). 2.1 What is Energy Efficiency? Energy is the physical currency used for accomplishing a particular task, e.g., moving a car, lighting a room, or even performing a computation. It can take many forms such as electrical, light, mechanical, nuclear, and so on, and can be converted from one form to another. For computing systems and data centers, energy is delivered as electricity.
Power is the instantaneous rate of energy use, or equivalently, energy used for a task is the product of average power used and the time taken for the task: Energy = AvgPower x Time. The typical units for energy and power are Joules and Watts, respectively, and 1 Joule = 1 Watt x 1 second. For computer systems, we roughly define energy efficiency as the ratio of computing work done per unit energy. It is analogous to the miles per gallon metric for automobiles. This metric varies from application to application since the notion of work done varies. For example, it might be transactions/joule for OLTP systems, and searches/joule for a search engine. Energy efficiency is also equivalent to the ratio of performance, measured as the rate of work done, to power used: EE = WorkDone / Energy = WorkDone / (Power x Time) = Perf / Power. For a fixed amount of work, maximizing energy efficiency is the same as minimizing energy. Thus, unlike performance optimization, we can improve energy efficiency by reducing power, time, or both. 2.2 Energy Cost vs. Data Management Needs Energy use in data centers is a growing problem and a key concern for data center operators and IT executives. A recent study by the Environmental Protection Agency (EPA) shows that 60 billion kWh, or 1.5% of the total US electricity use in 2006, was used to power data centers, and this use is expected to nearly double by 2010 [EPA07]. Koomey also observes a similar trend and estimates $2.7 billion was spent in the US and $7.2 billion was spent worldwide to power and cool servers in 2005 [Koo07]. Moreover, studies show that every 1W used to power servers requires an additional 0.5W to 1W of power for cooling equipment [PBS+03]. Although energy costs and breakdowns vary depending upon the installation, analysts predict that energy costs will eventually outstrip the cost of hardware [Bar05]. Data center operators will, therefore, need to adjust their pricing structures to reflect these rising energy costs.
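As a hedged illustration, the definitions above translate directly into code. The OLTP workload figures below (transaction count, duration, average power) are hypothetical, chosen only to show that work-per-joule and performance-per-watt are the same quantity:

```python
# Illustrative only: computes the energy-efficiency metric
# EE = WorkDone / Energy = Perf / Power. All workload numbers
# below are hypothetical, not measurements from the paper.

def energy_joules(avg_power_watts: float, seconds: float) -> float:
    """Energy = AvgPower x Time."""
    return avg_power_watts * seconds

def energy_efficiency(work_done: float, avg_power_watts: float,
                      seconds: float) -> float:
    """Work done per joule of energy consumed."""
    return work_done / energy_joules(avg_power_watts, seconds)

# Hypothetical OLTP run: 50,000 transactions in 100 s at 300 W average.
txns, secs, watts = 50_000, 100.0, 300.0
ee = energy_efficiency(txns, watts, secs)   # transactions per joule
perf_over_power = (txns / secs) / watts     # the equivalent Perf / Power form
print(f"{ee:.4f} txn/J")                    # → 1.6667 txn/J
```

Note that the two forms agree term by term: dividing the transaction rate (500 txn/s) by the average power (300 W) yields the same 1.67 transactions per joule.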
Besides the cost of electricity, energy use by computing equipment has implications for data center density, reliability, and the environment. Racks in data centers are provisioned to deliver a certain power and cooling capacity. As the power consumed by individual servers increases, many rack slots end up going empty. Even when racks deliver enough power, cooling is often the limitation, since undercooled equipment exhibits higher failure rates. Thus, energy use limits achievable data-center-level scalability. Finally, energy use in data centers is starting to prompt environmental concerns of pollution and excessive load placed on local utilities [PR06]. As a result of these trends, governmental agencies (e.g., EPA, US Congress, Intelligent Energy Europe, TopRunner) are actively seeking to regulate enterprise power. Recently, a new industrial consortium, GreenGrid, has been formed to address energy efficiency in data centers. At the same time, the demand for cheaper, faster, and larger data management systems continues to grow. Some systems in data centers support traditional data management tasks like transaction processing and business intelligence. The business intelligence market is a multi-billion dollar market and is still seeing double-digit growth rates [VMW+08]. With the growth of the internet sector, data centers are also running non-traditional data management workloads such as search, multimedia storage and delivery, web analytics, and more recently, cloud computing. To accommodate their rapidly growing datasets and workloads, internet-sector companies like Google are increasingly concerned with energy use [Bar05]. For example, such companies are starting to build data centers close to electric plants in cold-weather climates [MH06].
Ultimately, however, to tackle the energy problem in data centers while continuing to fuel the demand for data management resources, we will need to shift our focus from optimizing data management systems for pure performance to optimizing for energy efficiency. 2.3 Initial Reactions to Energy Concerns To tackle the energy-related concerns, there has been a plethora of work from the architecture and power management communities. The previous work spans all levels from chips to data centers [Ran09]. At the chip level, designers have considered techniques such as dynamic voltage and frequency scaling (DVFS), clock routing optimizations, low-power logic, asymmetric multi-cores,
and so on. At the platform level, there has been work on reducing power supply inefficiencies as well as power optimizations for the memory hierarchy. For example, researchers have suggested strategies for dynamically turning off DRAM, disk speed control, and disk spin down [ZCT+05]. More recently, there has been an emphasis on cluster-level optimizations: shifting workloads and power budgets by considering power and temperature constraints across multiple domains and the data center as a whole. Finally, to complement these compute-based techniques, there have been improvements in data center cooling. Unfortunately, most of this past work has been application and database agnostic. Recent work on energy-efficiency optimizations for data management workloads has proposed the JouleSort [RSR+07] and SPECpower benchmarks. These benchmarks measure the energy efficiency of entire systems that perform data management tasks. Their workloads, however, are quite specialized. As database system designers, we know that technology inflection points in the past, such as virtual memory, deepening of the cache hierarchy, fast networks for clustering, and so on, have necessitated reconsidering designs in the database domain. Application-agnostic techniques for scaling have rarely helped, and have sometimes hindered, the performance of database systems. 2.4 The Role of Data Management Software To date, there has been little work on energy efficiency from a data management software perspective. One source of the problem lies with the limited dynamic power range and limited power knobs that most hardware offers today. Analyses of TPC-C [PN08], SPECpower, and external sort [Riv08] systems show that most servers offer little power variance from no load to peak use. Thus, for a particular hardware configuration, performance optimizations are the only way to effect energy improvements.
Barroso and Hölzle noticed the same inelasticity in power for Google's servers [BH07], and also found that the CPUs on these servers were mostly between 10-50% utilized. They, therefore, argued for energy-proportional systems: servers that use no power when not used and automatically consume power only in proportion to delivered performance, or in their case, system utilization. Such ideal energy-proportional systems would offer constant energy efficiency at all performance levels rather than the best energy efficiency only at peak performance. Unfortunately, systems or even their components (CPUs, memory, disks, etc.) are hardly energy proportional. Existing components offer only limited controls for trading power for performance. CPUs, for example, offer voltage and frequency scaling, which is a good first step but far from ideal. Memory and disks, whose power contribution can dwarf that of other components in database systems [PN08], offer almost no power control except for sleep states. They are either on (at full performance and power) or off, and the transitions can be expensive. These limited choices in power-performance states prevent data management software from varying power consumption and thereby improving energy efficiency. However, we expect the opportunities for software-level optimization to increase for two reasons. First, there is a push to design components that offer more control over power-performance tradeoffs. For example, we believe it is reasonable to expect that a software module will be able to control which CPU cores in a multicore chip are active at any time. Second, there is increasing hardware heterogeneity at all levels. At the cluster level, data centers already contain a heterogeneous collection of servers that offer different power-performance tradeoffs because of the technology refresh cycle. Recent work has considered using virtual machine migration and turning off servers to effect energy proportionality [TWM+08].
Although promising, this approach ignores the disk subsystem. At the platform level, others have suggested blade system designs with an increasing number of choices, e.g., a remote memory blade, compute blades with components from embedded systems, flash blades, and so on [LRC+08]. Finally, at the chip level, processor manufacturers are starting to release heterogeneous CPUs such as IBM's Cell, AMD's Fusion (CPU/GPGPU), and Intel's Larrabee (GPGPU). These trends suggest that software running in data centers will have many more hardware choices with different power-performance characteristics. Database systems in particular are well equipped to harness these choices due to physical data independence and query optimization. Databases can dynamically choose the hardware based on the workload or consolidate data and processes to effect component-level energy proportionality. Moreover, they can improve energy efficiency even further by choosing among a variety of query processing algorithms that compute the same result but exercise hardware components differently. We explore these and other options for energy-use improvements in the upcoming sections. 3. EXAMPLE OPPORTUNITIES In this section, we describe two examples that show that choices made by database systems can improve energy efficiency independently of improving performance. The first is an experiment that varies the number of disks to affect power consumption for a decision support workload. Using this coarse knob, the system shows a point of diminishing returns for energy efficiency; the most efficient point is not the best-performing one. The second example shows that algorithms designed for energy efficiency do not necessarily result in the best performance. Although these examples are simple, they illustrate the tradeoffs that we need to consider when optimizing data management systems for energy efficiency.
3.1 Example 1: Diminishing Returns Most database systems running complex workloads can be configured and tuned to show a maximum energy-efficiency point at less than peak performance. The intuition behind this is simple. When configuring a system for high performance, additional instances of any one system component, after a certain point, provide decreasing incremental performance benefit, but add constant amounts of power to the power budget. For example, the 7th disk provides less incremental performance benefit than the 6th disk and more than the 8th disk, yet each additional disk contributes the same power. Therefore, in configuring and tuning a system for energy efficiency, one ought to balance system components such that the incremental benefits among all types outweigh the additional power cost. We demonstrate this effect by exploring the energy-efficiency profile of a high-performance database server running a decision support workload (TPC-H). We found that the most effective means of varying power use in our system was repartitioning our database across fewer disks, since the disk subsystem consumed more than 50% of the total system power. The experimental setup is as follows. Although our results are not audited, our system was configured similarly to one running an audited TPC-H benchmark result at the 300GB scale factor [HP08]. The system consisted of an HP ProLiant DL785 server tray with 8 Quad-Core AMD Opteron processors, 64GB of memory, and between 36 and 204 SCSI drives (15K RPM, 73GB) connected by SAS to between 2-13 HP StorageWorks MSA70 disk trays. We ran a commercial database system, also configured similarly to the audited system [HP08], on Microsoft Windows Enterprise Server. We striped the database across all disks in a RAID 5 configuration and used compression to bring the database footprint to about 256GB. Processors were run in the C0 state, and, when idle, the system simply ran the system idle process. Figure 1 shows the time (performance) and energy efficiency of the system as we varied the number of disks for the throughput test. The throughput test issues a mixture of TPC-H queries simultaneously from multiple clients to the system. We see that 66 disks was the point of diminishing returns: the percentage gain in performance no longer outweighed the percentage gain in power, so energy efficiency started to drop. For this system, the most efficient point offers a 14% increase in efficiency for a 45% drop in performance. The point of diminishing returns and the efficiency-performance tradeoff will vary with the workload.
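This diminishing-returns behavior can be sketched with a toy model. The throughput curve and power figures below are hypothetical, invented only to reproduce the shape of the effect, not measurements from our system:

```python
# Toy model (all numbers hypothetical): each added disk contributes a
# constant amount of power but a diminishing performance gain, so
# work-per-joule peaks well before peak performance.

SYSTEM_POWER_W = 500.0   # power of everything except the disks
DISK_POWER_W = 15.0      # each disk adds roughly constant power

def throughput(n_disks: int) -> float:
    """Hypothetical queries/sec; saturates as extra disks stop helping."""
    return 1000.0 * (1.0 - 0.97 ** n_disks)

def efficiency(n_disks: int) -> float:
    """Queries per joule = throughput / total power."""
    return throughput(n_disks) / (SYSTEM_POWER_W + n_disks * DISK_POWER_W)

best_perf = max(range(1, 205), key=throughput)   # 204: more disks, more speed
best_eff = max(range(1, 205), key=efficiency)    # peaks far below 204 disks
print(best_perf, best_eff)
```

With these made-up curves, the best-performing configuration uses all 204 disks while the most energy-efficient one uses only a few dozen, mirroring the 66-disk knee observed in the real experiment.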
Some workloads may be able to use additional resources while others will underutilize them and therefore waste power. This experiment shows that even with a coarse power control, there is opportunity for improvement. Clearly, the overall benefits also depend on the costs associated with creating or maintaining different partitionings of the database. However, as the number of available knobs and configuration choices that affect power increases, the database software will play a larger role in optimizing the system for energy efficiency.
Figure 1. Time (sec) and energy efficiency (1/J x 10^8) vs. number of disks for the TPC-H Throughput Test. The points correspond to 36, 66, 108, and 204 disks.
Figure 2. Scan experiment (relational scan on uncompressed vs. compressed data) using one CPU and three flash disks. Compressed data results in a faster query by trading CPU cycles for disk bandwidth, but overall energy consumption increases (E = 487 J compressed vs. E = 338 J uncompressed).
3.2 Example 2: Algorithm Design In this example, we show that given energy-proportional components, designing algorithms for energy efficiency is not the same as designing algorithms for performance. We show this by considering a scan with and without compression over a database on flash drives. Consider a table stored on disk and a simple selection query that scans the entire table and applies a predicate. We draw our numbers for this example from an implementation of a high-performance column-oriented relational scanner from our previous work [HLA+06]. In particular, we pick a query that projects five out of seven attributes of table ORDERS from TPC-H and examine two different configurations, one where the base table is uncompressed and one where it is compressed. The hardware configuration includes one CPU and three SSD flash disks, which are an order of magnitude more energy efficient than regular hard drives.
We reproduce the CPU and total times in Figure 2. When uncompressed, the query is disk-bound: it takes 10 secs to read all data and 3.2 secs for the CPU to process it. By overlapping disk with CPU time, the total time is 10 secs. When compressed, the query becomes mostly CPU-bound, and takes 5.5 secs to run (of which 5.1 secs are CPU time). Clearly, for that particular configuration, if one were to run this query by itself, one would prefer the compressed version of the table, and would observe a speedup of nearly 2x. With energy efficiency in mind, however, it turns out that the uncompressed table results in a more energy-efficient (but slower) query. The CPU has a power consumption of 90 Watts, while the flash disks together consume only 5 Watts. Therefore, for the uncompressed table, the total energy consumed is (90W x 3.2s + 5W x 10s) = 338 Joules, whereas for the compressed one
it is (90W x 5.1s + 5W x 5.5s) ≈ 487 Joules (assuming that an idle CPU does not consume any power, or, simply, that some other concurrent task is taking up the rest of the CPU cycles). This simple example points out just how counter-intuitive optimizing for energy efficiency can be. From a performance point of view, the compressed table makes sense: it trades 1.9 secs of CPU time for 4.5 secs of disk time, and also overlaps CPU and disk use. However, from an energy efficiency point of view, the compressed table uses much more of the CPU, whose power (90W) is far larger than that of the flash drives (5W). Thus, using the additional power to increase performance is not worth it from an energy perspective. This example shows that both power and performance considerations must be evaluated when designing algorithms for energy efficiency. 4. APPROACHES FOR REDUCING WASTE The previous section provided a brief motivation of how optimizing for energy efficiency can differ significantly from optimizing for performance. The space of possible techniques or optimizations for reducing energy waste is obviously larger than what those two examples suggest. In fact, we expect that most approaches that have been applied to date to improve performance will have a counterpart in the approaches for optimizing for energy efficiency. We highlight this duality between performance and energy optimizations throughout the rest of this section. The first step to deploying an energy-efficient data management platform is to choose hardware components with good performance-per-watt characteristics. This is analogous to choosing high-performance components (server-grade CPUs, high-RPM disks, etc.) when absolute performance is desired. With that as a starting point, we discuss three categories of software-based approaches for reducing wasted energy in data management systems, presented in order of increasing complexity.
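To make this duality concrete, the back-of-the-envelope arithmetic of Example 2 (Section 3.2) can be reproduced directly from the power and time figures quoted there; as in the text, an idle component is assumed to draw no power:

```python
# Reproduces the energy arithmetic of Example 2, using the figures
# quoted in the text: 90 W CPU, 5 W for the three flash disks together,
# and zero power for idle components (the paper's simplifying assumption).

CPU_POWER_W = 90.0
FLASH_POWER_W = 5.0

def scan_energy(cpu_busy_s: float, disk_busy_s: float) -> float:
    """Energy in joules: each component is charged only while busy."""
    return CPU_POWER_W * cpu_busy_s + FLASH_POWER_W * disk_busy_s

uncompressed = scan_energy(cpu_busy_s=3.2, disk_busy_s=10.0)  # 338.0 J
compressed = scan_energy(cpu_busy_s=5.1, disk_busy_s=5.5)     # 486.5 J (~487)

# Compressed is nearly 2x faster (5.5 s vs 10 s elapsed) yet uses
# roughly 44% more energy.
print(uncompressed, compressed)
```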
A more general list of power optimizations can be found in [Ran09]. 4.1 Energy-aware Optimizations Approach: Use existing system-wide knobs and internal query optimization parameters to achieve the most energy-efficient configuration for the underlying hardware. All modern commercial database systems offer a multitude of knobs, from collecting statistics for query optimization to configuring input parameters of physical designers, and from selecting the degree of parallelization to assigning memory to operators or temporary space. The same way many of those knobs have been tuned to date to increase performance, we expect DBAs to use them to improve energy efficiency. The examples in the previous section demonstrate the necessity of understanding and modeling power costs, so that database administrators and developers can incorporate such models into existing tools. Choosing the right algorithms and settings for improving energy efficiency entails different tradeoffs than optimizing for performance, as shown in Section 3.2. Compression techniques, for example, trade CPU cycles for reduced bandwidth requirements (both disk-to-memory and memory-to-CPU). With the focus turned to energy efficiency, tradeoffs like this one will need to be re-examined. We expect that query optimization decisions also have the potential to significantly affect energy consumption by producing significantly different query plans. As an example, consider the hash-join operator, which has been known to outperform nested-loop join on many occasions, but relies on using a large chunk of memory for building and maintaining the hash table. From a power perspective, these are expensive operations and may tip the balance in favor of nested-loop join more often than before. To improve energy efficiency, query optimizers will need power models to estimate energy costs.
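As a sketch of what such a power-aware plan comparison might look like, the toy cost model below contrasts two hypothetical join plans; every time and power figure is invented for illustration, not drawn from any real optimizer:

```python
# Hypothetical energy-aware cost model for the join choice discussed
# above. All per-component time and power figures are invented; a real
# optimizer would derive them from calibrated power models and
# cardinality estimates.

from dataclasses import dataclass

@dataclass
class PlanCost:
    cpu_s: float   # estimated CPU busy time (seconds)
    cpu_w: float   # CPU power while busy (watts)
    mem_s: float   # time extra memory (e.g., a hash table) is held
    mem_w: float   # power attributed to that memory
    io_s: float    # disk busy time
    io_w: float    # disk power while busy

    @property
    def time(self) -> float:
        # Assume perfect overlap: elapsed time is the longest busy component.
        return max(self.cpu_s, self.mem_s, self.io_s)

    @property
    def energy(self) -> float:
        # Energy = sum of (power x busy time) over components.
        return (self.cpu_w * self.cpu_s + self.mem_w * self.mem_s
                + self.io_w * self.io_s)

hash_join = PlanCost(cpu_s=4.0, cpu_w=90, mem_s=4.0, mem_w=30,
                     io_s=3.0, io_w=10)
nested_loop = PlanCost(cpu_s=2.0, cpu_w=90, mem_s=0.0, mem_w=30,
                       io_s=8.0, io_w=10)

fastest = min([hash_join, nested_loop], key=lambda p: p.time)
greenest = min([hash_join, nested_loop], key=lambda p: p.energy)
# With these made-up numbers, the fastest plan (hash join) and the most
# energy-efficient plan (nested-loop join) differ.
print(fastest is hash_join, greenest is nested_loop)
```

The point of the sketch is only that a minimum-time objective and a minimum-energy objective can rank the same two plans differently once per-component power is charged for busy time.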
There has been a lot of work on modeling power, but simple models may suffice, in the same way simple models for device access times work well in practice. In addition to configuring and tuning a system for maximum energy efficiency in a given hardware configuration, the increased heterogeneity of hardware resources in large data centers will force knob settings and query optimization decisions to be made dynamically, depending on runtime conditions. 4.2 Resource Use Consolidation Approach: Shift computations and relocate data to consolidate resource use both in time and space, to facilitate powering down individual hardware components. Whenever system resources are not used or are partially used, there is an opportunity for saving energy by either allowing other concurrent tasks to utilize the otherwise idle resource or by allowing the resource to enter a suspended or reduced power mode. Ideally, this should be automatically handled by the underlying hardware and/or the operating system. For example, if the disk subsystem is periodically accessed, it should automatically enter a sleep mode during periods with no activity. Such functionality is necessary for achieving energy-proportional components. However, as we pointed out in Section 2, current-technology components have limited power states and, furthermore, the switching costs across states can easily exceed the energy savings. While we expect hardware components to improve over time in their ability to consume power in proportion to their usage, we believe that software choices and practices will ultimately improve energy proportionality at the component level. Hardware components will require a certain minimum-length idle period to enter a suspended mode, and the longer that period is, the easier it is to hide the costs of switching between power states. Software mechanisms can help consolidate resource use by moving data and computation across resources.
In that case, the energy savings from consolidation should exceed the energy overhead of such movements. For consolidating resource use across time, previous work on energy-efficient prefetching and caching for mobile computing proposed modifications to the OS to encourage burstiness and increase the length of idle periods [PS04]. A database storage manager could also incorporate similar techniques, especially since certain table scans have highly predictable access patterns. At a higher level, when considering entire systems or collections
of resources, we expect to see workload management policies that encourage identifiable periods of low and high activity, perhaps batching requests at the cost of increased latency. To date, numerous techniques have successfully targeted idle periods to improve the pure performance of data management systems. There, the goal was to move computation and data to increase overlap among occupied resources. As an example, asynchronous I/O is a mechanism that falls under this category. Energy-efficient techniques that aim to consolidate resource use across time can potentially utilize similar mechanisms. Further, we could imagine buffer and storage management policies that move data across memory and disks to consolidate space-shared resources. This consolidation would enable powering down unused hardware at the expense of data movement. We could imagine performing these movements at a coarse granularity, just as load-balancing techniques move database state to improve performance. 4.3 Redesign for Max Energy Efficiency Approach: Redesign software components to (a) minimize energy use, (b) reduce code bloat, and (c) sacrifice certain properties (or allow the system to underperform on certain metrics) to improve energy efficiency. Certain power-oriented tradeoffs in the usage of various resources will not be achievable through existing configuration parameters, and thus we expect certain approaches to energy efficiency to require significant modifications at the software layer. In general, redesigning system components to include faster algorithms and mechanisms improves not only performance but also energy efficiency. However, several algorithms will need to be specifically redesigned for energy use. Consider, for example, the buffer manager: its very purpose and its associated replacement policies are based on avoiding, as much as possible, latency-costly accesses to slower storage. With energy savings in mind, the access costs of memory hierarchy levels are going to be different.
Moreover, keeping a page in RAM will require energy, proportional to the time the page is cached. New caching and replacement policies will be needed, possibly involving a larger number of more diverse memory hierarchy levels. Software engineering practices will also need to be revisited and rethought. Multiple layers of abstraction in the code structure, along with general-purpose component functionality, have led to increases in programmer productivity but have also contributed to significant bloat in the code [MS07]. Useful data is typically stored with significant structural overhead, and simple operations involve the execution of a disproportionately large set of instructions, creating multiple temporary objects along the way. While from a performance point of view it may not be worth addressing code bloat, energy efficiency considerations may reverse that. In recent years, there has been rising interest in database-like systems that do not include the full suite of traditional database features. These new systems were designed around new applications with a different set of tradeoffs. Sacrificing certain properties such as consistency, reliability, availability, or even security can lead to an interesting set of tradeoffs and therefore to opportunities for improving performance. We expect these kinds of tradeoffs and reduced-functionality designs to also apply to energy optimizations. 5. FUTURE DATABASE DIRECTIONS Having discussed some general software approaches to reduce energy waste, in this section we point out areas in data management systems that are promising for energy-efficiency optimizations. 5.1 Data Placement and Query Processing Physical database design. Decisions on how and where data is stored are expected to have a significant impact on database energy use, since initial studies show that more than half the power use is concentrated in the disk subsystem [RSR+07, PN08].
In addition to re-evaluating all the previous tradeoffs of redundant storage and its cost, the new challenge is to exploit a wider choice of physical locations for storing data and to incorporate those choices into the design process. For example, with performance in mind, the physical locations for permanently storing data were restricted to disks (which are uniformly fast) and, more recently, to faster, albeit more expensive, solid state drives. With energy efficiency in mind, we expect to see more choices: different sets of disk arrays that vary in their performance/power characteristics, different types of solid state drives, and remote storage accessible over a network. Furthermore, for read-mostly workloads, increasing redundancy may improve energy efficiency: additional capacity on disks carries no energy cost if disk usage remains the same, and, as the example of Section 3.1 hinted, different partitioning schemes may be optimal for different workloads. Lastly, techniques that reduce disk bandwidth requirements, such as column-oriented storage and compression, will need to be re-evaluated for their ability to reduce overall energy use.

Query optimization and processing algorithms. Just as query optimization and query processing algorithms are crucial to absolute performance, we expect them to play a central role in reducing energy waste. Current query processing algorithms rest on fundamental assumptions regarding the size of available memory, the nature and number of accesses they make to both main memory and secondary storage, and their CPU requirements. Optimizing for energy use will first bring changes to the implementation of the algorithms themselves; most importantly, though, it will change the way the query optimizer estimates costs and chooses a query plan, as we mention in Section 5.2.

5.2 Resource Managers

Query scheduling and memory management.
Due to the complexity of database systems and the wide variety of resources a query needs over its lifetime (along with the complications that arise when multiplexing the execution of several queries), only a limited number of query scheduling policies have worked well in practice. The objective in the past has been to improve query response times (for individual queries or groups of queries) and to maximize the utilization of available system resources. For complex queries, scheduling the various operators within a query and deciding on the resource allocation for each operator has been crucial to the system's overall performance.
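As one illustration of an energy-oriented scheduling policy, the sketch below deliberately batches arriving queries into release windows so that a device sees long idle periods it could power down during, at the cost of per-query latency. The policy and its parameter are our own toy example, not an established algorithm:

```python
def schedule_batched(arrivals, batch_window):
    """Group sorted query arrival times (seconds) into batches released
    together at most batch_window seconds after the batch's first arrival.

    Returns (release_times, added_latency_per_query). A larger window
    lengthens idle periods (more power-down opportunity) but raises the
    latency each query pays for the energy savings.
    """
    releases, latencies = [], []
    batch_start, batch = None, []
    for t in arrivals + [float("inf")]:      # sentinel flushes the last batch
        if batch_start is None or t - batch_start <= batch_window:
            if batch_start is None:
                batch_start = t
            batch.append(t)
        else:
            release = batch_start + batch_window
            releases.append(release)
            latencies += [release - q for q in batch]
            batch_start, batch = t, [t]      # start a new batch
    return releases, latencies
```

For example, arrivals at 0, 1, 2, and 10 seconds with a 3-second window collapse into two releases, leaving the interval between them fully idle for the storage subsystem.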
Again, switching objectives from performance to energy efficiency warrants a thorough re-examination of the work done on scheduling and resource management for database workloads. Techniques that enable and encourage work sharing across queries will become increasingly attractive.

Buffer manager and storage manager. As we mention in Section 4.3, we expect buffer and storage management policies to be significantly revised, both to reflect the energy costs of accessing and storing data and to leverage new energy-efficient levels in the memory hierarchy.

Logging and recovery. While the recovery process itself is unlikely to be a candidate for improving energy efficiency (it happens rarely enough that performance and correctness will remain the focus), a large share of a system's resources, both computational and in the storage hierarchy, is devoted to logging: previous work showed that about 15% of the code executed in an online transaction processing system is related to logging [HAM+08]. Switching optimization objectives and incorporating new technology (e.g., flash memories) will affect the logging mechanisms. For example, it may make sense to increase the batching factor (and hence response time) to avoid frequent commits to stable storage, or to migrate certain data and transactions to operate directly on stable storage and thereby reduce the amount of logging needed.

5.3 New System Architectures

New implementations. Traditional database systems are packed with functionality to appease the mid-range of the database market. In the past, this feature-driven approach has increased system complexity (i.e., bloat) and limited performance and scalability. As a result, we are seeing the development of data management software specialized for particular domains, e.g., data warehousing, streaming, transaction processing, application log processing, and so on.
Similarly, we expect that software complexity may limit the feasibility of energy optimizations. For example, a database vendor may prefer not to implement online data repartitioning because it may affect the usability of numerous external tools, including those distributed by partners. Further, we expect different energy optimizations to apply in different domains; e.g., SSDs are better suited to transactional applications than to warehousing. As a result, we expect the trend toward specialization to continue, and energy reduction techniques to emerge first from more narrowly focused implementations. Many of the recent redesigns for improved performance have done more than separate out functionality: some have also sacrificed other metrics, such as consistency, for improved performance. Similarly, we expect optimizations that sacrifice availability, reliability, or other "-abilities" to further improve energy use.

Co-design. In redesigning data management software, we believe co-design with other disciplines will enable energy efficiency improvements that cannot be attained in isolation. We see two important opportunities for co-design. First, just as data management considerations have influenced the design of hardware (e.g., CPUs and GPGPUs) for improved performance in the past, they should do so in the energy space. Our community should push power-management and hardware architects to develop technologies that address the energy bottlenecks actually found in data management systems; otherwise, we will be left optimizing components that hardly make a difference. Conversely, we will need to anticipate and adapt our algorithms to the multitude of technologies architects develop for the larger market, such as heterogeneous multicores, different types of solid state technologies (flash, phase-change RAM, etc.), multi-speed drives, and so on.
A second opportunity lies in coordinating power management performed at the database system level with the independent power management controllers at other levels of the system. For example, consider a hardware controller that changes voltage and frequency in parallel with a query optimizer that is making decisions based on the current runtime power states. If the two do not communicate and coordinate their choices, they may end up working at cross purposes [RRT+08]. The software needs to ensure an efficient handoff from one controller to another and, ideally, to implement an architecture that communicates information across controllers so as to reach a global optimum. To achieve this, database designers will need broad cooperation from people across a variety of disciplines, from computer architecture to operating systems to control theory.

Designing for Total Cost of Ownership. Although the ratios vary across installations, data center operators recognize management, hardware, and energy costs as the three main components of total cost of ownership (TCO). Since energy costs are rising while hardware costs are dropping in relative terms, we speculate that there will eventually be an opportunity to trade hardware cost for improved energy efficiency. For example, configuring a system for maximum energy efficiency may yield a configuration that does not meet minimum performance criteria. Two potential remedies are to waste energy and buy performance with diminishing returns, or to pay for more hardware (use more resources in a cluster) and parallelize, keeping the same energy efficiency. Over time, we expect the latter to prevail, since energy costs will make up a growing fraction of TCO. In that respect, we speculate that parallelization and system scalability will remain important avenues for maintaining maximum efficiency.

6.
CONCLUSIONS

Energy efficiency is quickly emerging as a critical research topic across several disciplines. In this paper, our intention was to make the database systems community aware of the opportunities and challenges of energy-aware database computing. We argued that static hardware configurations, energy-proportional hardware, and application-agnostic power management techniques are only part of the answer to controlling and reducing the rising energy costs of large data centers; data management software will ultimately play a significant role in optimizing for energy efficiency. Toward this goal, we discussed possible solution approaches as well as areas in database systems that are ripe for energy improvements. Looking ahead, we urge the database systems community to take up this challenge and shift its focus from performance-oriented research to energy-efficient computing.
7. ACKNOWLEDGMENTS

We thank Goetz Graefe for his valuable comments and constructive participation in discussions on energy-efficient software.

8. REFERENCES

[BH07] L. A. Barroso and U. Hölzle. The Case for Energy-Proportional Computing. IEEE Computer 40(12): 33-37, 2007.
[Bar05] L. A. Barroso. The Price of Performance. ACM Queue 3(7): 48-53, 2005.
[EPA07] US Environmental Protection Agency. Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431, 2007.
[HAM+08] S. Harizopoulos, D. Abadi, S. Madden, and M. Stonebraker. OLTP Through the Looking Glass, and What We Found There. In Proc. SIGMOD, 2008.
[HLA+08] S. Harizopoulos, V. Liang, D. Abadi, and S. Madden. Performance Tradeoffs in Read-Optimized Databases. In Proc. VLDB.
[HP08] Hewlett-Packard Company. TPC Benchmark H, Full Disclosure Report, July 2008. Result ID: , results/tpch_result_detail.asp?id=
[Koo07] J. G. Koomey. Estimating Total Power Consumption by Servers in the US and the World. Downloads/svrpwrusecompletefinal.pdf, 2007.
[LRC+08] K. Lim, P. Ranganathan, J. Chang, C. Patel, T. Mudge, and S. Reinhardt. Understanding and Designing New Server Architectures for Emerging Warehouse-Computing Environments. In Proc. ISCA, 2008.
[MH06] J. Markoff and S. Hansell. Hiding in Plain Sight, Google Seeks an Expansion of Power. New York Times, June 14, 2006.
[MS07] N. Mitchell and G. Sevitsky. The Causes of Bloat, The Limits of Health. In Proc. OOPSLA, 2007.
[PS04] A. E. Papathanasiou and M. L. Scott. Energy Efficient Prefetching and Caching. In Proc. USENIX Annual Technical Conference, June 27-July 2, 2004.
[PBS+03] C. D. Patel, C. E. Bash, R. Sharma, and R. Friedrich. Smart Cooling of Data Centers. In ASME InterPack, Maui, Hawaii, 2003.
[PN08] M. Poess and R. O. Nambiar. Energy Cost, The Key Challenge of Today's Data Centers: A Power Consumption Analysis of TPC-C Results. In Proc. VLDB, 2008.
[PR06] C. D. Patel and P. Ranganathan. Enterprise Power and Cooling. ASPLOS Tutorial, Oct. 2006.
[RRT+08] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, and X. Zhu. No Power Struggles: A Unified Multi-level Power Management Architecture for the Data Center. In Proc. ASPLOS, 2008.
[Ran09] P. Ranganathan. A Recipe for Efficiency? Some Principles of Power-aware Computing. Communications of the ACM (to appear), 2009.
[RLI+06] P. Ranganathan, P. Leech, D. Irwin, and J. Chase. Ensemble-level Power Management for Dense Blade Servers. In Proc. ISCA, 2006.
[RSR+07] S. Rivoire, M. A. Shah, P. Ranganathan, and C. Kozyrakis. JouleSort: A Balanced Energy-Efficiency Benchmark. In Proc. SIGMOD, 2007.
[Riv08] S. Rivoire. Models and Metrics for Energy-Efficient Computer Systems. Doctoral dissertation, Stanford University Department of Electrical Engineering, June 2008.
[TWM+08] N. Tolia, Z. Wang, M. Marwah, et al. Zephyr: Unified Power and Cooling Management in Virtualized Data Centers. In Proc. HotPower, 2008.
[VMW+08] D. Vesset, B. McDonough, K. Wilhide, M. Wardley, R. McCullough, and D. Sonnen. Worldwide Business Analytics Software Forecast Update and 2006 Vendor Shares. IDC #208699, Sept. 2008.
[ZCT+05] Q. Zhu et al. Hibernator: Helping Disk Arrays Sleep Through the Winter. In Proc. SOSP, 2005.