Percona XtraBackup: install, usage, tricks




Percona XtraBackup: install, usage, tricks Vadim Tkachenko Percona Inc, co-founder, CTO Vadim@percona.com Alexey Kopytov Percona Inc, Principal Software Engineer Alexey.Kopytov@percona.com

Why do we talk? Vadim Tkachenko Design & architecture of XtraBackup Alexey Kopytov Lead developer of XtraBackup

This talk online PowerPoint http://bit.ly/xb-2012 PDF http://bit.ly/xb-2012-pdf

Do I/you need a backup? Percona XtraBackup: install, usage, tricks

Yes. A story from 2009 http://gnolia.com/ "Ma.gnolia experienced every web service's worst nightmare: data corruption and loss. For Ma.gnolia, this means that the service is offline and members' bookmarks are unavailable, both through the website itself and the API. As I evaluate recovery options, I can't provide a certain timeline or prognosis as to when or to what degree Ma.gnolia or your bookmarks will return; only that this process will take days, not hours." The site was never recovered.

Backups do not give you: more users, more money, increased productivity

Backups are boring Percona XtraBackup: install, usage, tricks

Not having a backup: data loss, time loss, user loss, money loss

Recovery time matters Percona XtraBackup: install, usage, tricks

Regularly test your recovery procedure You may get surprised

Backup copy data Percona XtraBackup: install, usage, tricks

Why can t we use just cp or rsync?

Database Percona XtraBackup: install, usage, tricks

InnoDB updates Percona XtraBackup: install, usage, tricks

Cold copy Percona XtraBackup: install, usage, tricks

Slave backup Percona XtraBackup: install, usage, tricks

Cold slave backup Percona XtraBackup: install, usage, tricks

Slave as backup Percona XtraBackup: install, usage, tricks

It works until the first DROP TABLE or bad UPDATE. Shhhh, it happens.

Snapshots Percona XtraBackup: install, usage, tricks

LVM snapshot Percona XtraBackup: install, usage, tricks

LVM snapshot performance [chart: about 740 IO req/sec with no snapshot vs. about 240 with an active snapshot]

ZFS snapshots: I am told they work fine

ZFS is available only on Solaris and FreeBSD

SAN snapshots Percona XtraBackup: install, usage, tricks

NetApp $$$

R1Soft I do not know much about it

Logical backup Percona XtraBackup: install, usage, tricks

mysqldump Percona XtraBackup: install, usage, tricks

A backup that takes 30 hours can't be a daily backup

mysqldump benefits: easy to use; per-table; per-database

mysqldump problems: intrusive; slow; single-threaded

Slave lag under mysqldump [chart: slave falls up to ~70 time units behind the master while the dump runs]

mydumper Parallel mysqldump

MySQL Enterprise Backup

MEB benefits Similar to XtraBackup

MEB drawbacks Closed source Annual subscription

Not true: "XtraBackup is reverse engineering of MEB"

What is Percona XtraBackup Open-source hot backup tool for MySQL-based servers

InnoDB internals Percona XtraBackup: install, usage, tricks

InnoDB recovery Percona XtraBackup: install, usage, tricks

Backup idea Percona XtraBackup: install, usage, tricks

Backup 2 nd stage Percona XtraBackup: install, usage, tricks

Recovery uses memory Percona XtraBackup: install, usage, tricks

Backup logic is identical to InnoDB recovery logic

Backup logic is a part of the InnoDB source code. No need for reverse engineering.

XtraBackup source code structure Patches to InnoDB C code Innobackupex script Streaming utilities

bzr branch lp:percona-xtrabackup https://launchpad.net/percona-xtrabackup

Build tools build/ directory build.sh build.sh xtradb55 build-rpm.sh xtrabackup.spec build-dpkg.sh

Binary packages Percona XtraBackup: install, usage, tricks

Yum repository rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm yum install percona-xtrabackup

Apt repository deb http://repo.percona.com/apt squeeze main (replace squeeze with your distribution's codename) deb-src http://repo.percona.com/apt squeeze main apt-get update

Binary packages http://www.percona.com/downloads/XtraBackup/XtraBackup-2.0.0/ Binary tar.gz (CentOS-based, may not work on SUSE); deb; RPM (RedHat-based); source

Supported MySQL versions MySQL 5.0 MySQL 5.1 + built-in InnoDB MySQL 5.1 + InnoDB-plugin MySQL 5.5 Percona Server 5.0 Percona Server 5.1 Percona Server 5.5 Percona XtraDB Cluster 5.5

Should-work products: MariaDB (may need a small fix); Drizzle (ships with its own xtrabackup)

Supported engines: InnoDB/XtraDB (hot backup); MyISAM (with read lock); Archive, CSV (with read lock); your favorite exotic engine may work if it supports FLUSH TABLES WITH READ LOCK

Supported platforms Linux RedHat 5; RedHat 6 CentOS, Oracle Linux Debian 6 Ubuntu LTS Any other via compiling source code

Solaris Binaries coming soon

Mac OS X Binaries coming soon

Windows Experimental releases Active Perl

FreeBSD Compile source code

Binaries structure: innobackupex (Perl script); xtrabackup (Percona Server 5.1; MySQL 5.1 + InnoDB plugin); xtrabackup_51 (MySQL 5.0; Percona Server 5.0; MySQL 5.1 + built-in InnoDB); xtrabackup_55 (MySQL 5.5; Percona Server 5.5); tar4ibd (only in XtraBackup 1.6; built in since 2.0); xbstream (only in 2.0)

Basic command innobackupex /data/backup/mysql-data

Innobackupex connects to mysqld: --defaults-file (datadir default assumption /var/lib/mysql!); --user; --password; --host (if you need to connect via a specific IP); --socket

Innobackupex basic output File xtrabackup-out1

Stage 2 Percona XtraBackup: install, usage, tricks

apply command innobackupex --apply-log /data/backup/mysql-data

Use memory innobackupex --apply-log --use-memory=10g /data/backup/mysql-data --use-memory plays the role of innodb_buffer_pool_size for mysqld

Innobackupex apply-log output File xtrabackup-apply-out Apply length: 8 minutes 25 seconds

Innobackupex --apply-log --use-memory output File xtrabackup-apply-use-memory-out Apply length: 3 minutes 36 seconds

Why does xtrabackup prepare run twice? The second run creates the InnoDB log files. Just for convenience.

/data/backup/mysql-data is ready for use

FLUSH TABLES WITH READ LOCK

FTWRL: 1) sets the global read lock (after this step, INSERT/UPDATE/DELETE/REPLACE/ALTER statements cannot run); 2) closes open tables (this step blocks until all previously started statements have finished); 3) sets a flag to block commits

Why? Consider: Copy table1.frm Copy table2.frm Copy table3.frm Copy table4.frm... Meanwhile ALTER TABLE table1 starts... Copy table5.frm Copy table6.frm Copy table7.frm

With FTWRL FLUSH TABLES WITH READ LOCK Copy table1.frm Copy table2.frm Copy table3.frm Copy table4.frm... ALTER TABLE table1 --- LOCKED.. Copy table5.frm Copy table6.frm Copy table7.frm UNLOCK TABLES

The same applies to MyISAM: not a transactional engine, no redo logs

FTWRL problems Percona XtraBackup: install, usage, tricks

Problem 1: "close open tables" blocks until all previously started statements have finished. A long-running SELECT will block FTWRL, which will block other statements. Queries pile up and lock down the server.

Pileup: SELECT 1 mln rows FROM BIG_TABLE blocks FLUSH TABLES WITH READ LOCK, which blocks all following queries: SELECT, UPDATE, SELECT

Solution: shoot long queries
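One way to "shoot" long queries before taking a backup is pt-kill from Percona Toolkit. A hedged sketch: the 60-second threshold and the SELECT-only match are example choices, and the command is only printed here, not executed against a server.

```shell
# Hypothetical pre-backup guard: kill SELECTs running longer than 60 seconds.
# --busy-time, --match-info, --kill and --print are pt-kill options; the
# threshold and filter below are illustrative assumptions, not recommendations.
busy_limit=60
cmd="pt-kill --busy-time ${busy_limit} --match-info '^SELECT' --kill --print"
echo "$cmd"
```

Running this for real requires connection options (--host, --user, etc.) appropriate for your server.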

Problem 2: big MyISAM tables. Write queries are blocked for the whole time the MyISAM data is being copied. Example: you created a 100GB MyISAM table and forgot about it.

--no-lock Percona XtraBackup: install, usage, tricks

--no-lock Do not execute FLUSH TABLES WITH READ LOCK

To remind: Percona XtraBackup: install, usage, tricks

You take responsibility: there are NO ALTER TABLE statements; there are NO INSERT/UPDATE/DELETE statements against MyISAM tables. What about the binary log position? We will cover it.

Directory after backup: MySQL files + xtrabackup_binlog_info; xtrabackup_binlog_pos_innodb (only after --apply-log); xtrabackup_slave_info (when --slave-info is used); xtrabackup_checkpoints; xtrabackup_logfile; xtrabackup_binary; backup-my.cnf

xtrabackup_binlog_info Master binary log position Result of SHOW MASTER STATUS binlog.000001 68212201

xtrabackup_checkpoints Information about InnoDB checkpoints LSN, useful for incremental backup

xtrabackup_logfile Can be very big. The bigger this file, the longer the apply-log process.

xtrabackup_binary Exact xtrabackup binary used for backup This is used for apply-log

backup-my.cnf NOT a backup of my.cnf. Information to start a mini-InnoDB instance during apply-log. Example: [mysqld] datadir=/data/bench/back/test/2012-04-06_13-09-42 innodb_data_home_dir=/data/bench/back/test/2012-04-06_13-09-42 innodb_log_files_in_group=2 innodb_log_file_size=1992294400 innodb_fast_checksum=0 innodb_page_size=16384 innodb_log_block_size=4096

Summary Backups are important Recovery time is important Backup methods for MySQL Percona XtraBackup overview XtraBackup basic usage

Percona XtraBackup tutorial Agenda Minimizing footprint I/O throttling FS cache optimization Parallel file copying Restoring individual tables Partial backups Individual partitions backup

Percona XtraBackup tutorial Minimizing footprint: I/O throttling --throttle=n Limit the number of I/O operations per second, in 1 MB units. [diagram: without throttling, every second is filled with read/read/write/write groups; with xtrabackup --throttle=1, each second is a single read/read/write/write group followed by waiting]
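Since each throttled I/O unit is 1 MB, --throttle=N caps copying at roughly N MB/s. A back-of-envelope estimate of copy time; the 10 GB datadir size and N=4 are hypothetical numbers:

```shell
# Rough copy-time estimate under --throttle=N (assumes ~N MB/s effective rate).
datadir_mb=10240   # hypothetical 10 GB datadir
throttle=4         # hypothetical --throttle value
awk -v s="$datadir_mb" -v n="$throttle" \
    'BEGIN { printf "%.1f minutes\n", s / n / 60 }'
# prints: 42.7 minutes
```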

Percona XtraBackup tutorial Minimizing footprint: I/O throttling Limitation: --throttle only has effect for InnoDB tables; other files are copied by the innobackupex script with cp or rsync. Use streaming backups and the pv utility as a workaround: $ innobackupex --stream=tar /tmp | pv -q -L1m | tar -xf - -C /data/backup The pv utility monitors the progress of data through a pipe; available in most Linux distributions; -L limits the transfer to a specified rate

Percona XtraBackup tutorial Minimizing footprint: FS cache optimizations XtraBackup on Linux: posix_fadvise(POSIX_FADV_DONTNEED) hints to the kernel that the application will not need the specified bytes again; works automatically, no option to enable. Didn't really work in XtraBackup 1.6, fixed in 2.0

Percona XtraBackup tutorial Parallel file copying Data directory (ibdata1, actor.ibd, customer.ibd, film.ibd) --parallel=4 Backup location (ibdata1, actor.ibd, customer.ibd, film.ibd) creates N threads, each thread copying one file at a time; utilizes disk hardware by copying multiple files in parallel; best for SSDs; fewer seeks on HDDs due to more merged requests by the I/O scheduler; YMMV, benchmarking before using is recommended

Percona XtraBackup tutorial Parallel file copying $ innobackupex --parallel=4 --no-timestamp /data/backup... [01] Copying ./ibdata1 to /data/backup/ibdata1 [02] Copying ./sakila/actor.ibd to /data/backup/./sakila/actor.ibd [03] Copying ./sakila/customer.ibd to /data/backup/./sakila/customer.ibd [04] Copying ./sakila/film.ibd to /data/backup/./sakila/film.ibd Works only with multiple InnoDB tablespaces (innodb_file_per_table=1); in XtraBackup 1.6 it could not be used with streaming backups; works with any backup type in XtraBackup 2.0

Percona XtraBackup tutorial Restoring individual tables: export [diagram: full backup of Server A (ibdata, actor.ibd, customer.ibd, film.ibd); customer.ibd is exported from the backup and imported into Server B] Problem: restore individual InnoDB table(s) from a full backup to another server. Use --export to prepare; use the improved table import feature in Percona Server to restore; requires innodb_file_per_table=1

Percona XtraBackup tutorial Restoring individual tables: export Why not just copy the .ibd file? Metadata: InnoDB data dictionary (space ID, index IDs, pointers to root index pages); .ibd page fields (space ID, LSNs, transaction IDs, index ID, etc.). xtrabackup --export dumps index metadata to .exp files on prepare; Percona Server uses .exp files to update both the data dictionary and the .ibd on import

Percona XtraBackup tutorial Restoring individual tables: export $ xtrabackup --prepare --export --innodb-file-per-table=1 --target-dir=/data/backup... xtrabackup: export metadata of table 'sakila/customer' to file `./sakila/customer.exp` (4 indexes) xtrabackup: name=primary, id.low=23, page=3 xtrabackup: name=idx_fk_store_id, id.low=24, page=4 xtrabackup: name=idx_fk_address_id, id.low=25, page=5 xtrabackup: name=idx_last_name, id.low=26, page=6

Percona XtraBackup tutorial Restoring individual tables: import Improved import is only available in Percona Server; can be either the same or a different server instance: (on a different server, to create the .frm) CREATE TABLE customer(...); SET FOREIGN_KEY_CHECKS=0; ALTER TABLE customer DISCARD TABLESPACE; <copy customer.ibd to the database directory> SET GLOBAL innodb_import_table_from_xtrabackup=1; (Percona Server 5.5) or SET GLOBAL innodb_expand_import=1; (Percona Server 5.1) ALTER TABLE customer IMPORT TABLESPACE; SET FOREIGN_KEY_CHECKS=1;

Percona XtraBackup tutorial Restoring individual tables: import Improved table import is only available in Percona Server tables can only be imported to the same server with MySQL (with limitations): there must be no DROP/CREATE/TRUNCATE/ALTER between taking backup and importing the table mysql> ALTER TABLE customer DISCARD TABLESPACE; <copy customer.ibd to the database directory> mysql> ALTER TABLE customer IMPORT TABLESPACE;

Percona XtraBackup tutorial Partial backups backup individual tables/schemas rather than the entire dataset InnoDB tables: require innodb_file_per_table=1 restored in the same way as individual tables from a full backup same limitations with the standard MySQL server (same server, no DDL) no limitations with Percona Server when innodb_import_table_from_xtrabackup is enabled

Percona XtraBackup tutorial Partial backups: selecting what to backup innobackupex: streaming backups: --databases= database1[.table1]..., e.g.: --databases= employees sales.orders local backups: --tables-file=filename, file contains database.table, one per line --include=regexp, e.g.: --include='^database(1 2)\.reports.*' xtrabackup: --tables-file=filename (same syntax as with innobackupex) --tables=regexp (equivalent to --include in innobackupex)

Percona XtraBackup tutorial Partial backups: preparing $ xtrabackup --prepare --export --target-dir=./... 120407 18:04:57 InnoDB: Error: table 'sakila/store' InnoDB: in InnoDB data dictionary has tablespace id 24, InnoDB: but tablespace with that id or name does not exist. It will be removed from data dictionary.... xtrabackup: export option is specified. xtrabackup: export metadata of table 'sakila/customer' to file `./sakila/customer.exp` (4 indexes) xtrabackup: name=primary, id.low=62, page=3 xtrabackup: name=idx_fk_store_id, id.low=63, page=4 xtrabackup: name=idx_fk_address_id, id.low=64, page=5 xtrabackup: name=idx_last_name, id.low=65, page=6...

Percona XtraBackup tutorial Partial backups: restoring Non-InnoDB tables: just copy files to the database directory. InnoDB (MySQL): ALTER TABLE ... DISCARD/IMPORT TABLESPACE; same limitations on import (must be the same server, no ALTER/DROP/TRUNCATE after backup). XtraDB (Percona Server): xtrabackup --export on prepare; innodb_import_table_from_xtrabackup=1; ALTER TABLE ... DISCARD/IMPORT TABLESPACE; no limitations

Percona XtraBackup tutorial Partial backups: individual partitions reports p_2010 p_2011 CREATE TABLE reports (d DATE, t TEXT) PARTITION BY RANGE (YEAR(d)) ( PARTITION p_2010 VALUES LESS THAN (2010), PARTITION p_2011 VALUES LESS THAN (2011), PARTITION p_2012 VALUES LESS THAN (2012), PARTITION p_cur VALUES LESS THAN (MAXVALUE) ); p_2012 p_cur Problem: archive p_2010 first, then purge p_2010

Percona XtraBackup tutorial Partial backups: individual partitions Can I backup individual partitions with XtraBackup? Yes! MyISAM or InnoDB (with multiple tablespaces) partitions are like normal tables with specially formatted names: $ ls -1 /usr/local/mysql/data/test/reports* /usr/local/mysql/data/test/reports#p#p_2010.ibd /usr/local/mysql/data/test/reports#p#p_2011.ibd /usr/local/mysql/data/test/reports#p#p_2012.ibd /usr/local/mysql/data/test/reports#p#p_cur.ibd /usr/local/mysql/data/test/reports.frm /usr/local/mysql/data/test/reports.par

Percona XtraBackup tutorial Partial backups: individual partitions $ xtrabackup --backup --datadir=/usr/local/mysql/data/ --tables='reports#p#p_2010'... [01] Copying./ibdata1 to /data/backup/ibdata1 [01]...done [01] Copying./test/reports#P#p_2010.ibd to /data/backup/./test/reports#p#p_2010.ibd [01]...done [01] Skipping./test/reports#P#p_2011.ibd [01] Skipping./test/reports#P#p_2012.ibd [01] Skipping./test/reports#P#p_cur.ibd

Percona XtraBackup tutorial Partial backups: individual partitions To prepare: $ xtrabackup --prepare --export --innodb_file_per_table --target-dir=./ $ ls -1 test/ reports#p#p_2010.exp reports#p#p_2010.ibd

Percona XtraBackup tutorial Partial backups: individual partitions To restore: no support for attaching partitions in MySQL mysql> CREATE TABLE reports_2010( d DATE, t TEXT); mysql> ALTER TABLE reports_2010 DISCARD TABLESPACE;

Percona XtraBackup tutorial Partial backups: individual partitions To restore (continued): $ cp /data/backup/test/reports#p#p_2010.exp /usr/local/mysql/data/test/reports_2010.exp $ cp /data/backup/test/reports#p#p_2010.ibd /usr/local/mysql/data/test/reports_2010.ibd mysql> SET GLOBAL innodb_import_table_from_xtrabackup=1; mysql> ALTER TABLE reports_2010 IMPORT TABLESPACE; Same procedure to backup/restore MyISAM partitions (except DISCARD/IMPORT TABLESPACE)

Questions? Percona XtraBackup tutorial

End of part 1 Q&A

Part 2 Percona XtraBackup: install, usage, tricks

Streaming Unique to XtraBackup, not available in MEB

Local copy to Local partitions NFS partitions

Unix pipes tar zcvf - /wwwdata | ssh root@192.168.1.201 "cat > /backup/wwwdata.tar.gz" We want: innobackupex | script1 | script2

Basic command innobackupex --stream=tar /tmpdir

Stream output File xtrabackup-stream-out

Tar -i -i is IMPORTANT
> tar --list -f backup.tar
backup-my.cnf
> tar --list -i -f backup.tar
backup-my.cnf ibdata1 ./sbtest/sbtest3.ibd ./sbtest/sbtest2.ibd ./sbtest/sbtest1.ibd ./sbtest/sbtest4.ibd xtrabackup_binlog_info mysql/time_zone_leap_second.myi mysql/ndb_binlog_index.myd mysql/plugin.frm mysql/time_zone_leap_second.myd mysql/time_zone_name.frm mysql/tables_priv.frm mysql/event.myd mysql/time_zone_name.myi mysql/ndb_binlog_index.myi mysql/proxies_priv.myd mysql/proxies_priv.myi mysql/func.myi
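The reason -i matters can be demonstrated locally: a streamed backup is effectively several tar archives concatenated, and plain tar stops at the first end-of-archive (zero-block) marker. A throwaway demo, GNU tar assumed:

```shell
# Simulate the stream layout by concatenating two small tar archives.
demo=$(mktemp -d)
cd "$demo"
echo a > a.txt
echo b > b.txt
tar -cf part1.tar a.txt
tar -cf part2.tar b.txt
cat part1.tar part2.tar > backup.tar

tar --list -f backup.tar       # stops at the zero blocks: lists only a.txt
tar --list -i -f backup.tar    # --ignore-zeros: lists a.txt and b.txt
```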

Usage: compression innobackupex --stream=tar ./ | gzip - > backup.tar.gz innobackupex --stream=tar ./ | qpress -io xtrabackup.tar > backup.tar.qpress

Usage: copy to remote host
innobackupex --stream=tar ./ | ssh vadim@desthost "cat - > /data/vol1/mysqluc/backup.tar"
ssh user@desthost "( nc -l 9999 > /data/backups/backup.tar.qpress & )" && innobackupex --stream=tar ./ | qpress -io backup.tar | nc desthost 9999
ssh user@desthost "( nc -l 9999 | qpress -io backup.tar > /data/backups/backup.tar.qpress & )" && innobackupex --stream=tar ./ | nc desthost 9999

Incremental backup Percona XtraBackup: install, usage, tricks

Full copy Percona XtraBackup: install, usage, tricks

Incremental. Day 1 changes since full

Incremental. Day 2 changes since Incremental

Differential. Changes since Full Percona XtraBackup: install, usage, tricks

To remind: InnoDB logs Percona XtraBackup: install, usage, tricks

Log Sequence Number (LSN)

LSN SHOW ENGINE INNODB STATUS\G --- LOG --- Log sequence number 180661841734

xtrabackup_checkpoints cat xtrabackup_checkpoints backup_type = full-backuped from_lsn = 0 to_lsn = 172573554953 last_lsn = 172728858570
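To chain an incremental off this backup, read to_lsn from the previous backup's xtrabackup_checkpoints and pass it as --incremental-lsn. A minimal sketch; the /tmp path is a throwaway example with the file contents copied from this slide, and the resulting command is printed rather than executed:

```shell
# Recreate the checkpoints file shown above (throwaway path).
cat > /tmp/xtrabackup_checkpoints <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 172573554953
last_lsn = 172728858570
EOF

# The next incremental starts from the previous backup's to_lsn.
next_lsn=$(awk -F' = ' '$1 == "to_lsn" { print $2 }' /tmp/xtrabackup_checkpoints)
echo "innobackupex --incremental --incremental-lsn=$next_lsn /data/backup/inc"
```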

Innobackupex incremental process

Innobackupex incremental: handles incremental changes to InnoDB; does NOT handle MyISAM and other engines (full copy instead)

Incremental apply Percona XtraBackup: install, usage, tricks

Innobackupex commands innobackupex --incremental \ --incremental-basedir=/previous/full/or/incremental \ /data/backup/inc

Innobackupex from LSN innobackupex --incremental \ --incremental-lsn=172573554953 \ /data/backup/inc

Innobackup incremental output File xtrabackup-incremental-out

Incremental apply complications

Ibdata1 structure Percona XtraBackup: install, usage, tricks

To remind: recovery process Percona XtraBackup: install, usage, tricks

This is what really happens Percona XtraBackup: install, usage, tricks

No need for ROLLBACK if we apply incremental later

Innobackupex redo-only Full backup: apply with redo-only innobackupex --apply-log --redo-only --use-memory=10g /data/backup/mysql-data

Innobackupex apply incremental innobackupex --apply-log --use-memory=10g /data/full/backup --incremental-dir=/data/incremental/backup

Innobackupex apply output File xtrabackup-incremental-apply-out

Incremental streaming New in Percona-XtraBackup 2.0

Why not in 1.6? The tar format has limitations: we need to know the file size before sending it to tar. New streaming format: xbstream. xbstream supports compression and incremental backups. Custom format.

Usage: innobackupex --incremental --incremental-lsn=166678320660 --stream=xbstream ./ > incremental.bak
Unpack: xbstream -x < incremental.bak
Incremental directly to remote server: innobackupex --incremental --incremental-lsn=166678320660 --stream=xbstream ./ | ssh vadim@desthost "cat - | xbstream -x -C /data/vol1/mysqluc/"

Incremental now: Percona XtraBackup: install, usage, tricks

Point-In-Time Recovery PITR

Binary logs Percona XtraBackup: install, usage, tricks

XtraBackup has binlog coordinates:

xtrabackup_binlog_info Master binary log position Result of SHOW MASTER STATUS binlog.000001 68212201

xtrabackup_binlog_pos_innodb Master binary log position STORED IN InnoDB (ibdata1) ./binlog.000001 68212201 Available only after --apply-log. Can be used with --no-lock.

In innobackupex output Master binary log position STORED IN InnoDB (ibdata1) [notice (again)] If you use binary log and don't use any hack of group commit, the binary log position seems to be: InnoDB: Last MySQL binlog file position 0 68212201, file name./binlog.000001 Available only after apply-log Can be used with no-lock

mysqlbinlog mysqlbinlog applies binary logs from a position: mysqlbinlog --start-position=68212201 binlog.000001 mysqlbinlog --stop-datetime= to stop at an exact time
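A small sketch of wiring xtrabackup_binlog_info into mysqlbinlog for PITR; paths are throwaway examples, the file contents match the earlier slide, and the command is printed rather than executed:

```shell
# xtrabackup_binlog_info holds "<binlog file> <position>" (tab-separated).
printf 'binlog.000001\t68212201\n' > /tmp/xtrabackup_binlog_info
read -r binlog_file binlog_pos < /tmp/xtrabackup_binlog_info

# Build the replay command from the captured coordinates.
echo "mysqlbinlog --start-position=$binlog_pos $binlog_file"
```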

Slave topics Percona XtraBackup: install, usage, tricks

Slave backup Percona XtraBackup: install, usage, tricks

xtrabackup_slave_info innobackupex --slave-info Master binary log position for the slave Result of SHOW SLAVE STATUS: CHANGE MASTER TO MASTER_LOG_FILE='binlog.000001', MASTER_LOG_POS=68212201

xtrabackup_slave_info Used for PITR when we back up from a slave; slave cloning

Slave cloning Percona XtraBackup: install, usage, tricks

Using streaming innobackupex --stream=tar /tmp --slave-info | ssh user@newslave "tar xfi - -C /DESTDIR" innobackupex --apply-log /DESTDIR Execute CHANGE MASTER from xtrabackup_slave_info

XtraDB Cluster join process. Percona XtraDB Cluster

Rsync for copying Percona XtraBackup: install, usage, tricks

Rsync innobackupex --rsync Two stages: 1. rsync before FLUSH TABLES WITH READ LOCK 2. rsync changes while holding FLUSH TABLES WITH READ LOCK Can't be used with --stream

Rsync output File xtrabackup rsync output

Statistics Percona XtraBackup: install, usage, tricks

Random inserts page splits Percona XtraBackup: install, usage, tricks

Xtrabackup --stats table: tpcc/order_line, index: PRIMARY, space id: 25, root page: 3, zip size: 0 estimated statistics in dictionary: key vals: 32471816, leaf pages: 264267, size pages: 302592 real statistics: level 2 pages: pages=1, data=8406 bytes, data/pages=51% level 1 pages: pages=467, data=4756806 bytes, data/pages=62% leaf pages: recs=35116662, pages=264267, data=2410713870 bytes, data/pages=55%
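The data/pages percentage in the "real statistics" is just data bytes divided by total page bytes. Recomputing it from the leaf-page numbers above, assuming the 16 KB InnoDB page size shown in the stats:

```shell
# Fill factor of the leaf level: data bytes / (pages * 16 KB page size).
# Numbers are taken from the --stats output above.
awk 'BEGIN { printf "%d%%\n", 100 * 2410713870 / (264267 * 16384) }'
# prints: 55%
```

A leaf fill around 55% is the kind of fragmentation the next slide's OPTIMIZE TABLE / rebuild suggestions address.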

Optimize? OPTIMIZE TABLE for Primary Key (data) DROP KEY / CREATE KEY (via Fast Index Creation) for Secondary Keys

Backup schemas Percona XtraBackup: install, usage, tricks

Daily full backups /data/backup/2012-04-01 /data/backup/2012-04-02 /data/backup/2012-04-03 /data/backup/2012-04-04 /data/backup/2012-04-05 As many copies as you need
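The "as many copies as you need" policy can be sketched as a retention script. BACKUP_ROOT here is a throwaway demo path and 5 is an example retention count; `head -n -5` assumes GNU coreutils:

```shell
# Demo: populate dated backup directories, then keep only the 5 newest.
BACKUP_ROOT=$(mktemp -d)
for d in 2012-03-30 2012-03-31 2012-04-01 2012-04-02 2012-04-03 2012-04-04 2012-04-05; do
    mkdir -p "$BACKUP_ROOT/$d"
done

keep=5
ls -1 "$BACKUP_ROOT" | sort | head -n -"$keep" | while read -r old; do
    rm -rf "$BACKUP_ROOT/${old:?}"
done

ls -1 "$BACKUP_ROOT" | sort
```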

Daily full backups + binary logs /data/backup/2012-04-01 /data/backup/2012-04-02 /data/backup/2012-04-03 /data/backup/2012-04-04 /data/backup/2012-04-05 As many copies as you need + backup of binary logs

Daily backup with local copy /data/backup/2012-04-01 /data/backup/2012-04-02 /data/backup/2012-04-03 /data/backup/2012-04-04 /data/backup/2012-04-05 As many copies as you need The latest copy is stored also locally Compressed

Incremental into the mix: for a low rate of changes, ~10%

Schema Full /data/backup/2012-04-01 Incremental /data/backup/2012-04-02 Incremental /data/backup/2012-04-03 Incremental /data/backup/2012-04-04 Full /data/backup/2012-04-05
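The rotation above can be sketched as a small scheduling helper. This assumes the slide's 4-day cycle (full, then three incrementals, then full again); plan() is a hypothetical name for illustration, not a Percona tool:

```python
from datetime import date, timedelta

def plan(start, days, full_every=4):
    """Sketch of the rotation: a full backup every `full_every` days,
    incrementals in between (each chain based on the last full)."""
    out = []
    for i in range(days):
        d = start + timedelta(days=i)
        kind = "full" if i % full_every == 0 else "incremental"
        out.append((f"/data/backup/{d.isoformat()}", kind))
    return out

for path, kind in plan(date(2012, 4, 1), 5):
    print(path, kind)  # 04-01 full, 04-02..04 incremental, 04-05 full
```

The real job still has to record which full backup each incremental is based on, since --apply-log must replay the chain in order.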

We do not provide scripts: DIY, or use third-party tools

Third-party tools Zmanda XtraBackup manager

Resources Facebook hybrid incremental backup http://www.facebook.com/note.php?note_id=10150098033318920

Thank you! Questions? vadim@percona.com

Percona XtraBackup tutorial. New features in XtraBackup 2.0: streaming incremental backups, parallel compression, xbstream, LRU dump backups. Future releases: true incremental backups, log archiving, compact backups, merged innobackupex + xtrabackup

Streaming incremental backups. Problem: send an incremental backup to a remote host: innobackupex --stream=tar --incremental ... | ssh ... didn't work in XtraBackup 1.6. innobackupex used external utilities to generate TAR streams and didn't invoke the xtrabackup binary, but the xtrabackup binary must be used for incremental backups to scan data files and generate deltas. In XtraBackup 2.0 the xtrabackup binary can produce TAR or XBSTREAM streams on its own

Compression: disk space is often a problem; compression with external utilities has some serious limitations; built-in parallel compression in XtraBackup 2.0

Compression with external utilities. Local backups: create a local uncompressed backup first, then gzip the files: you must have sufficient disk space on the same machine, and data is read and written twice. Streaming backups: innobackupex --stream=tar ./ | gzip - > /data/backup.tar.gz or innobackupex --stream=tar ./ | gzip - | ssh user@host "cat - > /data/backup.tar.gz". gzip is single-threaded; pigz (parallel gzip) can do parallel compression, but decompression is still single-threaded, and you have to uncompress the entire .tar.gz even to restore a single table

Compression: XtraBackup 2.0. New --compress option in both innobackupex and xtrabackup. QuickLZ compression algorithm: http://www.quicklz.com/ "the world's fastest compression library, reaching 308 Mbyte/s per core"; combines excellent speed with decent compression (8x in tests); more algorithms (gzip, bzip2) will be added later. qpress archive format (the native QuickLZ file format): each data file becomes a single-file .qp archive, so there is no need to uncompress the entire backup to restore a single table as with .tar.gz

Compression: XtraBackup 2.0 is parallel! --compress-threads=N can be used together with parallel file copying: xtrabackup --backup --parallel=4 --compress --compress-threads=8 (diagram: data directory files ibdata1, actor.ibd, customer.ibd, film.ibd feeding 4 I/O threads and 8 compression threads)
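The idea of a pool of compression workers fed by file readers can be illustrated with a toy Python sketch; zlib stands in for QuickLZ here, and compress_files() is our own illustration, not XtraBackup code:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_files(files, threads=8):
    """Toy version of --compress --compress-threads=N: compress several
    in-memory 'data files' concurrently with a worker pool."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return dict(zip(files, pool.map(zlib.compress, files.values())))

files = {"ibdata1": b"a" * 100_000, "actor.ibd": b"b" * 50_000}
compressed = compress_files(files)
# each file round-trips independently, as with per-file .qp archives
assert all(zlib.decompress(v) == files[k] for k, v in compressed.items())
```

Because each file is compressed independently, a single table can be decompressed on its own, which is the same property that makes per-file .qp archives convenient for partial restores.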

Compression: XtraBackup 2.0. qpress file archiver: with --compress every file in the backup is a single-file qpress archive (e.g. customer.ibd.qp); available from http://www.quicklz.com/; multi-threaded decompression: qpress -d -TN customer.ibd.qp ./; support for gzip/bzip2 to be added later

xbstream: new streaming format in XB 2.0, in addition to --stream=tar. Problems with traditional archive formats (tar, cpio, ar): simultaneous compression and streaming is impossible. The .tar file format (header, actor.ibd.gz, header, customer.ibd.gz, header, film.ibd.gz, ...) stores file name, file mode, file size, checksum, ... in each header: you have to know the file size before adding a file to the archive => have to write to a temporary file first before streaming
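The "size before data" limitation can be demonstrated with Python's tarfile module: the tar member header must carry the final size, so a dynamically generated stream has to be fully materialized before it can be added:

```python
import io
import tarfile

# A tar member's header stores the file size, so tarfile requires the
# size before any data is written: a dynamically generated stream must
# be spooled out completely first.
data = b"dynamically generated backup bytes"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="actor.ibd.gz")
    info.size = len(data)              # must be known up front
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    restored = tar.extractfile("actor.ibd.gz").read()
assert restored == data
```

For compress-while-streaming, the compressed size is only known at the end, which is exactly the case tar cannot handle without a temporary file.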

xbstream (XtraBackup 2.0). Problems with traditional archive formats: no parallel streaming => can only back up a single file at a time; meant as file archiving utilities, not really suitable for dynamically generated data streams like backups. XtraBackup 1.6 didn't support compression or parallel streaming, so using TAR streams was still possible

xbstream (XtraBackup 2.0). The XBSTREAM format (chunk header, actor.ibd.gz chunk #1, chunk header, film.ibd.gz chunk #1, chunk header, actor.ibd.gz chunk #2, chunk header, film.ibd.gz chunk #2, ...): files are written in chunks => can stream dynamically generated files; supports interleaving chunks from different files => can do parallel file streaming; CRC32 checksums => can be optimized to use CPU instructions
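The chunked, interleaved layout can be mimicked with a toy writer/reader pair. This illustrates the idea (per-file chunks tagged with a name and CRC32), not the real XBSTREAM wire format:

```python
import zlib  # for crc32

def write_chunks(streams, chunk_size=4):
    """Toy XBSTREAM-style writer: interleave fixed-size chunks from
    several files, each chunk tagged with its file name and a CRC32."""
    out = []
    offsets = {name: 0 for name in streams}
    while any(offsets[n] < len(d) for n, d in streams.items()):
        for name, data in streams.items():   # round-robin over files
            off = offsets[name]
            if off < len(data):
                chunk = data[off:off + chunk_size]
                out.append((name, chunk, zlib.crc32(chunk)))
                offsets[name] += len(chunk)
    return out

def read_chunks(chunks):
    """Reassemble files from an interleaved chunk stream."""
    files = {}
    for name, chunk, crc in chunks:
        assert zlib.crc32(chunk) == crc      # verify each chunk's checksum
        files[name] = files.get(name, b"") + chunk
    return files

streams = {"actor.ibd.gz": b"AAAAAAAA", "film.ibd.gz": b"FFFFFF"}
assert read_chunks(write_chunks(streams)) == streams
```

Because each chunk carries its own name and length, a writer can emit chunks as soon as they exist, from several files at once, which is what makes parallel streaming possible.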

xbstream (XtraBackup 2.0). The xbstream utility can be used to extract files from streams produced by xtrabackup. tar-like interface: xbstream -x [-C directory] reads the stream from standard input and extracts files to the current (or specified with -C) directory

xbstream usage example: $ innobackupex --parallel=8 --compress --compress-threads=4 --stream=xbstream ./ | ssh user@host xbstream -x -C /data/backup ... xtrabackup: Starting 8 threads for parallel data files transfer [01] Compressing and streaming ./ibdata1 [02] Compressing and streaming ./sakila/actor.ibd [03] Compressing and streaming ./sakila/address.ibd [04] Compressing and streaming ./sakila/category.ibd [05] Compressing and streaming ./sakila/city.ibd [06] Compressing and streaming ./sakila/country.ibd [07] Compressing and streaming ./sakila/customer.ibd

LRU dump backup (XtraBackup 2.0). LRU dumps in Percona Server: the ids of InnoDB buffer pool pages, from most recently used to least recently used, are saved to ib_lru_dump; warmup time after a restart is reduced by restoring the buffer pool state from ib_lru_dump

LRU dump backup (XtraBackup 2.0). XtraBackup 2.0 discovers ib_lru_dump and backs it up automatically: the buffer pool is in the warm state after restoring from a backup! Make sure to enable buffer pool restore in my.cnf after restoring on a different server: innodb_auto_lru_dump=1 (PS 5.1), innodb_buffer_pool_restore_at_startup=1 (PS 5.5)

True incremental backups (XtraBackup 2.x). Old incremental backups are slow: they scan every page's LSN (e.g. pages with LSNs 100, 1100, 300, 1500, 200, 800 in ibdata1; with incremental LSN = 1000, only the pages at 1100 and 1500 go into ibdata1.delta). 2 solutions in future releases: changed page tracking and log archiving
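The LSN filter can be shown with a toy model: an incremental backup keeps only pages whose LSN is newer than the backup's incremental LSN, and applying that delta to the full backup reproduces the current state. All names here are illustrative, not XtraBackup internals:

```python
def incremental_backup(pages, incremental_lsn):
    """Toy of the pre-2.x scheme: scan every page's LSN and copy only
    pages modified since the last backup (LSN > incremental_lsn).
    The full scan is what makes it slow."""
    return {page_no: (lsn, data)
            for page_no, (lsn, data) in pages.items()
            if lsn > incremental_lsn}

def apply_delta(full, delta):
    """Overlay the delta's pages on the full backup."""
    restored = dict(full)
    restored.update(delta)
    return restored

# page_no -> (page LSN, page contents), LSNs as on the slide
current = {0: (100, b"p0"), 1: (1100, b"p1'"), 2: (300, b"p2"),
           3: (1500, b"p3'"), 4: (200, b"p4"), 5: (800, b"p5")}
full_at_lsn_1000 = {0: (100, b"p0"), 1: (900, b"p1"), 2: (300, b"p2"),
                    3: (700, b"p3"), 4: (200, b"p4"), 5: (800, b"p5")}

delta = incremental_backup(current, incremental_lsn=1000)
assert sorted(delta) == [1, 3]  # only the pages at LSN 1100 and 1500
assert apply_delta(full_at_lsn_1000, delta) == current
```

Changed page tracking and log archiving replace the expensive "scan every page" step with metadata the server records as it writes.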

True incremental backups (XtraBackup 2.x): changed page tracking (new feature in Percona Server). The server follows page updates and writes changed-page bitmaps to ib_modified_log.* files in the datadir (alongside ibdata1, actor.ibd, customer.ibd, film.ibd, ib_logfile0, ib_logfile1); xtrabackup --backup --incremental-lsn reads the bitmaps and copies only the changed pages into deltas (ibdata1.delta, actor.ibd.delta, customer.ibd.delta, film.ibd.delta)

True incremental backups: log archiving (a new feature in Percona Server). Redo log writes from ib_logfile0/ib_logfile1 are copied into archive files in log_archive_dir (ib_log_archive_lsn1, ib_log_archive_lsn2, ib_log_archive_lsn3, ...); xtrabackup --apply-log then rolls the full backup (ibdata1, actor.ibd, customer.ibd, film.ibd) forward using the archived logs

True incremental backups: which one should I use? Old incremental backups: work with a standard MySQL server (w/o Percona Server features). Changed page tracking: less data to store/transfer (only changed page bitmaps), faster recovery, updates the full backup to a certain point in time => more suitable for static backups. Log archiving: more data to store/transfer (all changes / log updates), longer recovery, updates the full backup to an arbitrary point in time => more suitable for a stand-by copy of the main server

Compact backups. Idea: skip certain pages on backup, recreate them on prepare (e.g. of pages 1-6 in ibdata1, only the primary key pages 1, 2, 4 are copied, along with a page map file; secondary key pages and unused pages are skipped)

Compact backups. Backup: copy the primary key pages (1, 2, 4) plus the page map file. On prepare: recreate secondary keys, fix PK page offsets, use the page map to translate page offsets when applying log records, update pointers to root pages in the data dictionary; no unused pages remain

Compact backups. Pros: smaller backups (sometimes a lot!), reclaim unused space (a very old problem, MySQL bug #1341). Cons: --prepare takes much longer

Merging innobackupex and xtrabackup. Problems with innobackupex: legacy code inherited from the InnoDB Hot Backup tool, duplicates features of the xtrabackup binary, non-portable. The plan is to rewrite it in C and merge it with the xtrabackup binary (XtraBackup 2.x/3.0)

XtraBackup 2.x roadmap: changed page tracking, log archiving, compact backups, obsoleting innobackupex

Resources, further reading & feedback. XtraBackup documentation: http://www.percona.com/doc/percona-xtrabackup/ Downloads: http://www.percona.com/software/percona-xtrabackup/downloads/ Google Group: http://groups.google.com/group/percona-discussion #percona IRC channel on Freenode Launchpad project: https://launchpad.net/percona-xtrabackup Bug reports: https://bugs.launchpad.net/percona-xtrabackup

Questions?