UI Backend Database and other shared resources

Plan Develop Test Report

Database Backend Frontend

Customers     200    600   1200   3000   6000   10000
RTUs          400   1200   2400   6000  12000   20000
Devices      1300   3900   7800  19500  39000   65000

A typical use session is about 8 minutes (480 s). We assume a 12-hour peak use period during the day (43,200 s) and that the number of users equals the number of Contacts (3,000).

43,200 s (peak period) / 3,000 users = one new user every 14.4 s
480 s (session) / 14.4 s = 33.3 concurrent users
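The estimate above can be checked with a short calculation (a sketch of the arithmetic for the 3,000-contact scenario):

```python
# Load estimate for the 3,000-contact scenario above.
peak_window_s = 12 * 60 * 60        # 12-hour daily peak = 43,200 s
users = 3000                        # assume one user per Contact
session_s = 8 * 60                  # typical session = 480 s

arrival_interval_s = peak_window_s / users          # a new user every 14.4 s
concurrent_users = session_s / arrival_interval_s   # ~33.3 concurrent users

print(f"new user every {arrival_interval_s:.1f} s, "
      f"{concurrent_users:.1f} concurrent users")
```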

BUT I STILL DON'T UNDERSTAND THE END-USER

[Chart: time for each phase (time_db_setup, time_general_setup, time_data, time_reindexing, time_vacuum, time_statistics), in hours, at 200 to 10,000 Contacts]

[Chart: database size in GB (db_size_simple, db_size_complex) and data rows in millions (rows_data) for each test run]

# setup module: command-line options shared by the test scripts
import sys

if len(sys.argv) <= 2:
    inform = "False"
else:
    inform = sys.argv[2]
test = sys.argv[1]          # number of contacts to generate

cpu_cores = 4
poll_interval = 2.5
years = 1

# convert the inform flag to a real boolean
if inform.lower() == "true":
    inform = True
else:
    inform = False
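The original script reads sys.argv by hand; the same options could be handled with argparse. A Python 3 sketch, not part of the talk's code (the explicit argument list stands in for sys.argv[1:]):

```python
import argparse

parser = argparse.ArgumentParser(description="WRE history generator settings")
parser.add_argument("test", help="number of contacts to generate")
parser.add_argument("inform", nargs="?", default="False",
                    help="notify when finished (true/false)")
args = parser.parse_args(["3000", "true"])  # normally sys.argv[1:]

test = args.test
inform = args.inform.lower() == "true"
cpu_cores = 4
poll_interval = 2.5
years = 1
```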

#!/usr/bin/env python
import time
import setup
import pgdb          # PostgreSQL wrapper
import subprocess

def timestamp():
    """Returns formatted timestamp DD.MM.YYYY HH:MI:SS"""
    lt = time.localtime(time.time())
    return "%02d.%02d.%04d %02d:%02d:%02d" % (lt[2], lt[1], lt[0], lt[3], lt[4], lt[5])

start_overall = time.time()

# drop/create db
ts_create = timestamp()
print "Creating wre_test_base wre_test_%s Database" % setup.test, ts_create
start_create = time.time()
subprocess.call('dropdb wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)
subprocess.call('createdb -E UTF8 -O wre_test -T wre_test wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)

# connect to database
my_db = pgdb.database("host='localhost' port='5432' dbname='wre_test_%s' user='postgres' password='someawesomehash'" % setup.test)
end_create = time.time()
print "Finished Creating wre_test_base wre_test_%s Database ( %s s)" % (setup.test, int(end_create - start_create))
print ""

# generate contacts, etc
ts_setup = timestamp()
print "Setting Up %s RTUs, Contacts, Devices, and Metadata" % setup.test, ts_setup
start_setup = time.time()
subprocess.call('psql -c "SELECT * FROM test_gen_setup(%s);" wre_test_%s' % (setup.test, setup.test), shell=True, stdout=subprocess.PIPE)
end_setup = time.time()
print "Finished Setting Up %s RTUs, Contacts, Devices, and Metadata ( %s s)" % (setup.test, int(end_setup - start_setup))
print ""

# generate historical data
ts_data = timestamp()
print "Begin Generating Historical Data", ts_data
start_data = time.time()

# remove old partitions
subprocess.call('psql -c "SELECT test_remove_partitions();" wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)

# create partitions
subprocess.call('psql -c "SELECT test_gen_history_ddl(%s);" wre_test_%s' % (setup.years, setup.test), shell=True, stdout=subprocess.PIPE)

# disable the "last" upsert trigger while bulk-loading
subprocess.call('psql -c "ALTER TABLE wre_test.my_partitioned_table DISABLE TRIGGER a_insert_my_partitioned_table_last_upsert_trigger;" wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)

# generate data in parallel, one worker per CPU core
processes = []
for x in range(setup.cpu_cores):
    print "  starting process %s" % (x + 1)
    temp = subprocess.Popen('psql -c "SELECT * FROM test_gen_history_by_cpu(%s, %s, %s, %s);" wre_test_%s' % (setup.cpu_cores, x + 1, setup.poll_interval, setup.years, setup.test), shell=True, stdout=subprocess.PIPE)
    processes.append(temp)
for process in processes:
    process.wait()
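Each worker receives the core count and its own 1-based index, so test_gen_history_by_cpu can claim a disjoint share of the rows. One plausible scheme is a modulo split; a sketch with a hypothetical helper (the names and the modulo rule are illustrative, not taken from the talk):

```python
def worker_share(ids, n_workers, worker):
    """IDs handled by 1-based `worker` out of `n_workers` (modulo split)."""
    return [i for idx, i in enumerate(ids) if idx % n_workers == worker - 1]

device_ids = list(range(1, 13))                      # 12 example IDs
shares = [worker_share(device_ids, 4, w) for w in range(1, 5)]
print(shares[0])  # → [1, 5, 9]
```

Every ID lands in exactly one worker's share, so the four psql processes never collide.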

# populate last
subprocess.call('psql -c "INSERT INTO wre_test.my_partitioned_table_last (fk_id, val, updated) SELECT some_id, val, CURRENT_TIMESTAMP FROM t1;" wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)

# re-enable the "last" upsert trigger
subprocess.call('psql -c "ALTER TABLE wre_test.my_partitioned_table ENABLE TRIGGER a_insert_my_partitioned_table_last_upsert_trigger;" wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)

end_data = time.time()
print "Finished Generating Historical Data (", int(end_data - start_data), "s)"
print ""

# reindex
ts_reindex = timestamp()
print "Start Reindexing", ts_reindex
start_reindex = time.time()
my_db.execute_sql("SELECT * FROM test_gen_history_indexes(%s)", setup.years)
db_return = my_db.fetchone()

# create the indexes in parallel, one psql per statement
lines = []
for line in db_return['test_gen_history_indexes']:
    temp = subprocess.Popen('psql -c "%s" wre_test_%s' % (line, setup.test), shell=True, stdout=subprocess.PIPE)
    lines.append(temp)
for index in lines:
    index.wait()

end_reindex = time.time()
print "Finished Reindexing (", int(end_reindex - start_reindex), "s)"
print ""

# vacuum db
ts_vacuum = timestamp()
print "Start Vacuum Analyze", ts_vacuum
start_vacuum = time.time()
subprocess.call('vacuumdb -z wre_test_%s' % setup.test, shell=True, stdout=subprocess.PIPE)
end_vacuum = time.time()
print "Finished Vacuum Analyze (", int(end_vacuum - start_vacuum), "s)"
print ""

# gather statistics
ts_stat = timestamp()
print "Start Generating Statistics", ts_stat
start_stat = time.time()
my_db = pgdb.database("host='localhost' port='5432' dbname='wre_test_%s' user='postgres' password='someawesomehash'" % setup.test)
my_db.execute_sql("SELECT simple, complex FROM dbsize")
db_return = my_db.fetchone()
my_db.execute_sql("SELECT count(*) AS total FROM wre_test.my_partitioned_table")
db_rows = my_db.fetchone()
end_stat = time.time()
print "Finished Generating Statistics (", int(end_stat - start_stat), "s)"
print ""

# done
ts_finish = timestamp()
end_overall = time.time()
print "========== TEST COMPLETE =========="
print "Started:  ", ts_create
print "Finished: ", ts_finish
print "Overall Time:   ", int(end_overall - start_overall), "s"
print "  Database Setup: ", int(end_create - start_create), "s"
print "  General Setup:  ", int(end_setup - start_setup), "s"
print "  Historical Data:", int(end_data - start_data), "s"
print "  Reindexing:     ", int(end_reindex - start_reindex), "s"
print "  Vacuum Analyze: ", int(end_vacuum - start_vacuum), "s"
print "  Statistics:     ", int(end_stat - start_stat), "s"
print "Database Size"
print "  Simple:  ", db_return['simple']
print "  Complex: ", db_return['complex']
print "Data Rows: ", db_rows['total']

# write per-test and cumulative CSV logs (fields match the header line)
f = open('wre_test_%s.log' % setup.test, "w")
f2 = open('wre_tests.log', "a")
f.write("ts_start, ts_finish, time_overall, time_db_setup, time_general_setup, time_data, time_reindexing, time_vacuum, time_statistics, db_size_simple, db_size_complex, rows_data\n")
f.write("%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s\n" % (ts_create, ts_finish, int(end_overall - start_overall), int(end_create - start_create), int(end_setup - start_setup), int(end_data - start_data), int(end_reindex - start_reindex), int(end_vacuum - start_vacuum), int(end_stat - start_stat), db_return['simple'], db_return['complex'], db_rows['total']))
f2.write("%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s\n" % (setup.test, ts_create, ts_finish, int(end_overall - start_overall), int(end_create - start_create), int(end_setup - start_setup), int(end_data - start_data), int(end_reindex - start_reindex), int(end_vacuum - start_vacuum), int(end_stat - start_stat), db_return['simple'], db_return['complex'], db_rows['total']))
f.close()
f2.close()

if setup.inform:
    subprocess.call("echo 'Finished Generating Database for %s Contacts' | /usr/sbin/sendmail 8885551212@vtext.com" % setup.test, shell=True)

# driver script: generate, then drop, each test database in turn
cp wre_tests.log.clean wre_tests.log
python gen_history.py 200
dropdb wre_test_200
python gen_history.py 600
dropdb wre_test_600
python gen_history.py 1200
dropdb wre_test_1200
python gen_history.py 3000
dropdb wre_test_3000
python gen_history.py 6000
dropdb wre_test_6000
python gen_history.py 10000
dropdb wre_test_10000
echo 'Finished WRE historical data test' | /usr/sbin/sendmail 8885551212@vtext.com

# pgfouine: generate HTML reports from the PostgreSQL log
php pgfouine.php -memorylimit 3840 -file pgsql -top 40 -report queries.html=overall,bytype,slowest,n-mosttime,n-mostfrequent,n-slowestaverage -report hourly.html=overall,hourly -report errors.html=overall,n-mostfrequenterrors -format html-with-graphs

# split a very large log into 1,000,000-line chunks
split --lines=1000000 pgsql.log chunk

# run pgfouine over the chunks and collect the per-query CSV output
cat chunk/chunk* | ~/pgfouine/pgfouine-1.2/pgfouine.php -memorylimit 3750 - -report chunk*_queries.csv=csv-query -format text
cat chunk*_queries.csv >> queries.csv

CREATE DATABASE mytest ENCODING 'UTF8';

CREATE TABLE log (
    id integer,
    date timestamp,
    connection_id integer,
    database text,
    "user" text,
    duration float,
    query text
);

COPY log FROM 'queries.csv' WITH CSV;

ALTER TABLE log ADD COLUMN type CHAR(1) DEFAULT 'S';
UPDATE log SET type = 'i' WHERE query ~* '^insert.*$';
UPDATE log SET type = 'd' WHERE query ~* '^delete.*$';
UPDATE log SET type = 'u' WHERE query ~* '^update.*$';
UPDATE log SET type = 'o' WHERE query NOT ILIKE 'select%'
    AND query NOT ILIKE 'insert%'
    AND query NOT ILIKE 'update%'
    AND query NOT ILIKE 'delete%';

CREATE INDEX test_date_idx ON log(date);
CREATE INDEX test_database_idx ON log(database);
CREATE INDEX test_connection_idx ON log(connection_id);
CREATE INDEX test_type_idx ON log(type);
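The UPDATE statements above tag each query by its leading keyword, with 'S' as the default. The same classification can be sketched in Python (a mirror of the SQL logic, not code from the talk):

```python
def query_type(query):
    """One-letter type code matching the log.type column above."""
    q = query.lower()
    if q.startswith("select"):
        return "S"
    if q.startswith("insert"):
        return "i"
    if q.startswith("delete"):
        return "d"
    if q.startswith("update"):
        return "u"
    return "o"       # anything else: BEGIN, COMMIT, SET, ...

print(query_type("SELECT * FROM log"))  # → S
```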

[Chart: queries per interval over the 4-hour sample, 0 to 4,000]

[Chart: connections over the 4-hour sample, 0 to 200]

[Chart: query counts by type (DELETE, OTHER, UPDATE, INSERT, SELECT) over the 4-hour sample, with a running total in millions on the secondary axis]

[Diagram/chart: traffic intercepted between the Backend Socket Process and Backend Process on Server 1; query counts by type (SELECT, INSERT, UPDATE, DELETE, OTHER) for Portal, Monitor, and Web Services, 0 to 7,000,000]

[Pie chart: query mix across SELECT, UPDATE, DELETE, INSERT, OTHER; slices 9,026,849 (49%), 3,227,773 (18%), 2,703,693 (15%), 1,740,579 (9%), 1,708,831 (9%)]
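The percentages follow from the raw counts in the chart; a quick check (counts taken from the chart data, labels left aside):

```python
# Query counts from the pie chart above, largest slice first.
counts = [9026849, 3227773, 2703693, 1740579, 1708831]
total = sum(counts)
shares = [round(100 * c / total) for c in counts]
print(total, shares)  # → 18407725 [49, 18, 15, 9, 9]
```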

Customers     200    600   1200   3000   6000   10000
RTUs          400   1200   2400   6000  12000   20000
Devices      1300   3900   7800  19500  39000   65000
Database     PASS   PASS   PASS   PASS   PASS    FAIL
Backend      PASS   PASS   PASS   PASS   FAIL    FAIL
Frontend     PASS   PASS   PASS   PASS   PASS    FAIL