SQL Server 2014 New Features / In-Memory Store
Juergen Thomas, Microsoft Corporation
AGENDA
1. SQL Server 2014: what and when
2. SQL Server 2014 In-Memory
3. SQL Server 2014 in IaaS scenarios
SQL Server 2014: what and when
What is SQL Server 2014?
- SQL Server 2012 just shipped, so why a new release again? Microsoft shipped SQL Server 2012 in March 2012
- The plan is to ship SQL Server 2014 around the same time in 2014
- A two-year development cycle to round off some edges and add new functionality
- Three main focus areas:
  - Improving the column store experience and capabilities (already presented)
  - Introduction of the In-Memory store (In-Memory OLTP)
  - Making SQL Server 2014 a better IaaS (Infrastructure as a Service) citizen
SQL Server 2014 SAP Support Plans
- SQL Server 2014 should be supported by the same NetWeaver releases as SQL Server 2012
- Same restriction to Windows Server 2012 and later as for SQL Server 2012 in SAP environments
- The upgrade step is very straightforward, especially with HA frameworks like:
  - Windows Clustering
  - Database Mirroring
  - AlwaysOn
- In-place upgrade is possible
- A higher Basis Support Package level will be required than with SQL Server 2012
SQL Server 2014 In-Memory
In-Memory: what a fuzzy term
- Aren't we already in memory with our DBMS systems?
- We use DBMS systems to store data, which ideally is cached by the DBMS in memory
- Ideally, the servers running the DBMS are configured with enough memory that chances are high to find data in memory
- If there is not enough memory, or the set of data requested changes, we take the hit and read from storage
- But most of the data we retrieve from a DBMS is likely to be in memory the moment we request it
In-Memory: one form of in-memory
- Consequently, some customers use existing hardware and software to optimize the in-memory experience:
  - Take a DBMS server with 1 or 2 TB of RAM
  - Compress database content to the maximum, e.g. with SQL Server page dictionary compression
  - Put database data and log files onto Fusion-io or Violin storage (NAND-based storage)
- Results:
  - Maximized chance to find data in memory
  - If every 500th page or so needs to come from storage, storage latency drops from 2-5 ms to around 300 microseconds
  - Differences in storage latency and throughput between random and sequential reads are wiped out
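The compression step above maps to SQL Server's PAGE compression level, which includes dictionary and prefix compression. A minimal sketch, assuming a hypothetical table `dbo.SalesOrder`:

```sql
-- Estimate the savings first (schema and table name are illustrative).
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'SalesOrder',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- Enable PAGE (dictionary + prefix) compression so that more rows
-- fit into the buffer pool, raising the in-memory hit rate.
ALTER TABLE dbo.SalesOrder
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```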
In-Memory: could I do more with large memory?
- The question remains: if I add 1 TB of memory, could I make processing even faster?
- The problem remains: the structures of our DBMS systems were mostly defined 20 years ago
- Focus at that point in time:
  - Exchange data between storage and memory
  - Make sure the ACID properties are upheld
- As a result, the structures we keep in memory today are a compromise between finding data in memory and exchanging data between memory and storage
- This leads to the question: could I do better if I could disregard exchanging data with storage for persistency reasons?
In-Memory: we think we can use the memory better
- Problems of a traditional DBMS in leveraging memory:
  - Index and data pages need to be kept persisted in block/page form on disk and in memory
  - Block/page organization forces synchronization when accessing such blocks/pages: short-term locking (latching)
  - Contention on these short-term locks introduces non-deterministic behavior
  - B-tree indexes are a great compromise for current DBMSs, but when just searching in memory for a specific row, hash indexes perform far better
- So yes, we think we can do far better if we know that the data is in-memory only and not storage-backed
In-Memory: what is offered by SQL Server 2014
- Big decision to make: build a separate in-memory DBMS, or integrate the capabilities into the existing DBMS engine
- Microsoft decided to integrate the new In-Memory OLTP functionality into the existing RDBMS engine
- Rationale:
  - Customers want protection of their investment
  - Customers want to continue to use their HA/DR mechanisms as before
  - The new functionality should be mostly transparent for applications, administration, and operations
  - The granularity of keeping a whole database in-memory resident seemed too large anyway
In-Memory: what were the design principles for SQL Server 2014?
- Workload to be improved: high-end OLTP-like applications dealing with ticker data or website contexts
- OLAP workload was not in scope at all
- Goals:
  - Improve throughput of such workloads on given hardware by many factors
  - Eliminate sources of non-deterministic behavior
- Conclusions:
  - Data needs to be in-memory resident
  - Short-term locks (latches) and long-term blocking locks need to be eliminated
  - The way requests are executed needs to become radically more efficient
In-Memory: SQL Server 2014 In-Memory building blocks
[Architecture diagram: within SQL Server.exe, a client app connects via TDS to the TDS handler and session management. Existing SQL components (parser, catalog, optimizer, T-SQL query execution, and the buffer pool for tables and indexes with the transaction log and data filegroup) sit alongside the new Hekaton components: the Hekaton compiler, natively compiled stored procedures and schema (a generated .dll), and memory-optimized tables and indexes backed by the memory-optimized table filegroup, connected to the classic engine through a query interop layer.]
In-Memory: SQL Server 2014 In-Memory OLTP
Details:
- Single tables of a database can be moved to an in-memory resident format
- Other tables of the database can remain in normal row- or column-oriented, storage-backed form
- Data residing in In-Memory OLTP is row oriented
- Data is no longer organized in pages, so there is no synchronization via latches
- Customers with applications/schemas suffering from latch contention saw throughput gains of up to a factor of 7
- Optimistic locking/isolation using the SNAPSHOT isolation level avoids blocking locks
- To guarantee the ACID properties, changes are logged in the normal transaction log of the database
- Two different index types (hash and Bw-tree indexes); note that Bw-tree indexes are no reference to the SAP BW product
- Changes in indexes are neither persisted nor logged; indexes are rebuilt after data load in the restart phase
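A minimal sketch of moving a single table into In-Memory OLTP (database, file path, table, and column names are illustrative). A memory-optimized filegroup must exist first:

```sql
-- Add the memory-optimized filegroup the In-Memory OLTP engine requires.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_imoltp CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = N'SalesDB_imoltp_1',
              FILENAME = N'D:\Data\SalesDB_imoltp_1')
    TO FILEGROUP SalesDB_imoltp;
GO

-- A memory-optimized, fully durable table with both index types.
-- In SQL Server 2014 all indexes must be declared inline at CREATE TABLE time.
CREATE TABLE dbo.TickerQuote
(
    QuoteId   BIGINT    NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Price     MONEY     NOT NULL,
    QuoteTime DATETIME2 NOT NULL,
    INDEX ix_QuoteTime NONCLUSTERED (QuoteTime)  -- range (Bw-tree) index
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```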
In-Memory: SQL Server 2014 In-Memory OLTP (continued)
Details:
- To enable fast restarts, the continuous change of data needs to be logged and consolidated:
  - Continuous checkpoint
  - Checkpoint files are consolidated to represent a more recent state of the data in memory
  - A continuous stream of writes by the continuous checkpoint process, as opposed to burst snapshot writing as with competitive products, requires fewer disk resources
- Tremendous reduction of CPU cycles by compiling stored procedures natively:
  - Instead of caching an access plan for a query in a high-level description that gets interpreted, the stored procedure is compiled into C/C++ code and injected much lower in the execution stack
  - This cuts out many CPU-consuming layers
  - Customers running in production or test could increase workload by a factor of 5 or more on reduced hardware
- Possibility to create schema-only tables which are neither logged nor re-populated with data in the restart phase
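The two features above can be sketched as follows; this assumes the illustrative memory-optimized table `dbo.TickerQuote`, and all object names are hypothetical:

```sql
-- A natively compiled stored procedure: compiled to machine code via C,
-- running entirely inside an atomic block at SNAPSHOT isolation.
CREATE PROCEDURE dbo.usp_InsertQuote
    @QuoteId   BIGINT,
    @Price     MONEY,
    @QuoteTime DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    INSERT INTO dbo.TickerQuote (QuoteId, Price, QuoteTime)
    VALUES (@QuoteId, @Price, @QuoteTime);
END;
GO

-- A schema-only table: its contents are neither logged nor recovered
-- at restart, useful for staging or session-state style data.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```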
In-Memory: OLTP migration (non-SAP)
Before migration:
- 18 SQL Servers to handle the overall workload from web farms
- 15,000 batches/sec per server
- High number of latch waits due to the schema and nature of the application logic
With SQL Server In-Memory OLTP:
- 250,000 batches/sec, approximately a 16x gain
- Only one server needed for all web farms (down from 18)
- Reduction in cost: hardware/software, power consumption, datacenter space
- Easier to manage:
  - Less workload for DBAs (only a single server to change)
  - Fewer points of failure to troubleshoot
In-Memory: SQL Server 2014 In-Memory OLTP for SAP
- SAP NetWeaver applications could start and query tables in SQL Server 2014 In-Memory OLTP
- However, a few shortcomings will prevent a specific implementation or support of In-Memory OLTP for SAP NetWeaver:
  - Transaction isolation levels: not offering all isolation levels becomes a problem for the SAP NetWeaver application, especially in the case of concurrent modifiers
  - Operations: some dynamics are missing in some operations for In-Memory OLTP objects
  - Lifecycle management: changes in SQL syntax would require substantial work in lifecycle management tools
- Decision to work with SAP to get the necessary functionality into the next release (SQL Server 2014+)
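The isolation-level restriction surfaces as soon as a memory-optimized table is accessed under the default READ COMMITTED level. A hedged sketch of the two usual workarounds, with illustrative database and table names:

```sql
-- Option 1: hint the individual statement to SNAPSHOT isolation.
SELECT Price
FROM dbo.TickerQuote WITH (SNAPSHOT)
WHERE QuoteId = 42;

-- Option 2: have the database automatically promote READ COMMITTED
-- accesses to memory-optimized tables to SNAPSHOT isolation.
ALTER DATABASE SalesDB
    SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
```

Neither option covers the full isolation-level matrix an application like SAP NetWeaver expects, which is the limitation the slide refers to.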
SQL Server 2014 Infrastructure as a Service
IaaS: SQL Server 2014, what is the goal?
- Azure as an IaaS platform allows running applications in dedicated VMs
- As a customer, you are free to put into the VM whatever you want
- But we want to make SQL Server better integrated into the Azure IaaS platform than any other software
- SQL Server 2014 should leverage Azure functionality more closely than just running in VMs like all other applications
- Areas of better integration:
  - High availability
  - Backup/restore
  - Data file management
IaaS: SQL Server 2014
Enable HA/DR for SQL Server 2014 in Azure IaaS:
- Database Mirroring works out of the box
- AlwaysOn works, even using a listener
  - Requires a Windows cluster configuration without shared disks
  - Requires Active Directory in Azure, either dedicated in Azure or via VPN tunnel from on-premise
- Windows Clustering requiring shared disks is not supported
IaaS: SQL Server 2014
Use SQL Server 2014 running in an Azure VM as a secondary replica:
- No automatic failover possible
- Asynchronous data replication only
- The secondary replica can also be used for read-only purposes, e.g. for reporting
- Azure used as a DR site
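Adding such an Azure-hosted replica is a single DDL statement on the primary; a sketch in which the availability group, server, and endpoint names are all illustrative:

```sql
-- Add a SQL Server instance in an Azure VM as an asynchronous,
-- readable secondary used as the DR site.
ALTER AVAILABILITY GROUP [SapAG]
ADD REPLICA ON N'AZUREVM1'
WITH (
    ENDPOINT_URL      = N'TCP://azurevm1.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   -- async replication only
    FAILOVER_MODE     = MANUAL,                -- no automatic failover
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)  -- reporting workload
);
```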
IaaS: SQL Server 2014
Enable SQL Server to back up directly against Azure Storage:
- The backup can be used to restore on-premise again
- The backup can also be used to restore against a SQL Server running in a VM in Azure
- Use Azure blob storage as cheap media to store backups
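A minimal backup-to-URL sketch; the storage account, container, credential name, and key are placeholders:

```sql
-- Store the storage-account access key as a SQL Server credential.
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
         SECRET   = '<storage account access key>';

-- Back up straight to an Azure blob.
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;

-- The same URL works for RESTORE, whether on-premise or in an Azure VM:
-- RESTORE DATABASE SalesDB
-- FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
-- WITH CREDENTIAL = 'AzureBackupCred';
```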
IaaS: SQL Server 2014, Azure blob data files
- Native support for SQL Server database files stored as Windows Azure blobs
- Especially when running SQL Server within an Azure VM, it is easier to create data and log files
- Increased HA and DR:
  - A set of database files stored in Azure Storage is backed by the Azure Storage SLA
  - Fast disaster recovery using a database attach operation, without the need to restore data
- Maintain on-premise control of security: the TDE key can stay on-premise while the encrypted data resides in Azure Storage
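A sketch of creating a database whose files live directly in Azure blobs; the storage account, container, and SAS token are placeholders:

```sql
-- Grant SQL Server access to the blob container via a shared access
-- signature; the credential name must match the container URL.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token for the container>';

-- Data and log files are created as blobs instead of local disk files.
CREATE DATABASE SalesDB
ON (NAME = SalesDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/SalesDB.mdf')
LOG ON (NAME = SalesDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/SalesDB.ldf');
```

Because the files already sit in Azure Storage, disaster recovery reduces to attaching them from another SQL Server instance rather than restoring a backup.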