Jérôme Lepage - CFCamp 2013 Mondadori France
- French press magazine group (#3)
- Subsidiary of Mondadori Italia
- ~30 magazines
- 10.6% market share in 2011, 381.6 million in 2012
- In web development since 1999
- PHP for 14 years, Java for 9 years
- Technical Project Manager at Mondadori France
Reason #1: Tune the platform
- Determine the capacity of your architecture
- Test and find the best-performing configurations
- Find the limits of your platform
Reason #2: Test application code
- Test the application in real action
- Unit tests are sandboxed and not multi-user
- Proactively detect failures or memory leaks
Reason #3: Find and resolve failures
- On an environment identical to production
- Take time to (try to) reproduce the failure(s) (VM clone)
- Analyze the problem, fix it, verify the solution
- You won't (and don't) touch production!
Reason #4: Improve the delivery process
- Test your application before deployment
- If it passes unit and scalability tests, ship it to production
- It costs, but on high-risk applications it's cheaper than a bug
- If you ever need to do #3, you are ready for it
HUMAN TESTS
- You need people (of course, Bozo!)
- Heavy work to prepare the tests
- Heavy work to train the users
- Heavy work to collect feedback from the users
HUMAN TESTS
- Randomness depends entirely on the users
- Closeness to reality (or even to the plan)? Who knows?!
- Hard to run the first test! It has to be short!
- Don't count on a second test: users get bored!
HUMAN TESTS: Conclusions
- High cost in time, energy and money
- Results are not really what we expect
- High risk of ending up in jail for mass murder!
- The only good reason to do it is political!
COMPUTED TESTS
- You need only yourself, and maybe a sysadmin
- Heavy work to prepare the scenarios
- No users to train
- You have assertions to collect and verify results
COMPUTED TESTS
- Randomness depends only on your scenarios
- Closeness to reality depends on you
- Hard to do the first test!
- But you can replay them as often as you want!
COMPUTED TESTS: Conclusions
- Cost (only) YOUR time, YOUR energy
- Results reflect the time you spent on them
- You can stay friends with your colleagues!
- The only reasons not to do it: time or laziness
Comparison:
                              Humans               Computed
  Persons needed              Many                 Close to 1
  Random degree               Medium to high       Low to medium
  Closeness to reality        Depends on testers   Depends on you
  Difficulty to play once     Medium               Medium
  Difficulty to re-play       Higher               Low
  Time period of tests        Short                As you want
  Costs (not only money)      Medium to Life       Low
Oh yeah, baby! Come closer! JMeter: http://jmeter.apache.org/
Deployment options: a standalone JMeter client, remote JMeter servers driving the test server, or client and server on the same machine.
It's a client-server application
- You need to avoid stray traffic from the network
- You need to reduce the impact on your network
- You need to reduce the distance between the server and JMeter
- Use a VM on the same node, or a physical server on the same switch
JMeter client
- WorkBench
- Test Plan / Test Controller
- Build scenarios and tests
- Play the tests
Part #1: Prepare the tests
- You have to know the application
- You should know what users do (or will do)
- Prefer several small tests over one big one
- Plan tests that are relevant to the application
Part #2: Run the tests
- Run them more than once
- Run several scenarios to be sure, and cross-reference the results
- Collect and save all the data you can get
- Write down all your actions and results
Part #3: Interpret the results
- If you don't, you let people do it by themselves
- You need to say more than "It works (or not)!"
- Sloppy analysis will invalidate the time and work you spent
- Months or years later, the context will have been forgotten
Time breakdown: creating scenarios 60%, interpreting results 25%, running tests 15%
Manually, the boring way
- You have to add all the elements yourself
- You have to configure every element: POST/GET data, HTTP headers
With the proxy, the happy way
- You browse the server with your own browser
- JMeter records the data sent (GET/POST)
- JMeter handles the HTTP headers
- JMeter creates a response assertion for each URL
In both cases (manually or by proxy):
- You have to add realism with timers
- Timers simulate the user reading or pausing
- Nobody clicks a million times per minute
- Unless you test a website for people on ecstasy
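The effect of a timer can be sketched in Python. This is a rough model of JMeter's Gaussian Random Timer (a fixed delay plus a normally-distributed deviation); the millisecond values are illustrative, not from the talk:

```python
import random

def think_time(constant_ms: float = 3000, deviation_ms: float = 1000) -> float:
    """Rough model of JMeter's Gaussian Random Timer: a constant
    offset plus a Gaussian deviation, clamped at zero."""
    return max(0.0, constant_ms + random.gauss(0, deviation_ms))

# Each simulated user would sleep think_time() ms between requests,
# instead of hammering the server as fast as the network allows.
delay = think_time()
```

With a deviation of 0 the timer degenerates to a plain Constant Timer.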
Stop talking, and do a live demo!
Test Plan:
- Add a Thread Group
- Add HTTP Request Defaults: configure the server name and port
- Add an HTTP Cookie Manager: check "Clear cookies on each iteration"
WorkBench:
- Add an HTTP Proxy Server
- Check "Add Assertions"
- Select the Recording Controller as the target
- Use URL patterns to filter (or not)
- Add a Listener > View Results Tree
Record your visit:
- Do a first run with 1 user
- Complete the response assertions
- Clean/remove non-relevant links
- Do another run to check
On response assertions:
- Testing the response code is useless and wrong (an error page can still return 200)
- Test for specific text on each page
- Use a sample text
- Use a regular expression
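The idea can be sketched in Python. This mimics what a JMeter Response Assertion does in pattern-matching mode; the response body and pattern below are made-up examples:

```python
import re

def assert_response(body: str, pattern: str) -> bool:
    """Mimic a JMeter Response Assertion: pass only if the
    regular expression is found in the response body."""
    return re.search(pattern, body) is not None

# An error page often still returns HTTP 200, so match
# page-specific text instead of trusting the status code:
ok = assert_response("<h1>Order #4221 confirmed</h1>", r"Order #\d+ confirmed")
```

A failed match marks the sample as failed in the listeners, exactly what a bare status-code check would miss.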
On the Thread Group:
- Number of threads (users?!): the number of independent threads launched
- Ramp-up period, in seconds: the total time over which all threads are started, so the delay between one thread start and the next is ramp-up / threads
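The ramp-up arithmetic can be made concrete with a small sketch (the thread counts are illustrative):

```python
def thread_start_offsets(num_threads: int, ramp_up_s: float) -> list[float]:
    """Start offset in seconds of each thread: JMeter spreads
    thread starts evenly across the ramp-up period."""
    interval = ramp_up_s / num_threads
    return [i * interval for i in range(num_threads)]

# 10 threads with a 100 s ramp-up: one new thread every 10 s,
# the last one starting at 90 s.
offsets = thread_start_offsets(10, 100)
```

A ramp-up that is too short stampedes the server at t=0; too long, and the first threads may finish before the last ones start.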
Aggregate report
Results Tree
Graph Result (if you can understand it)
Why FusionReactor? A HUMAN CAN UNDERSTAND THIS!!!!
Where to look?
- JVM Memory
- CPU
- Database Requests
- DB Request Average
- Request Activity
Get, watch and verify all the data you can.
- Don't base your whole interpretation on a single tool
- Be careful with SNMP monitoring tools (like Cacti/Zabbix)
- A new monitoring tool in beta: Morpheus for Railo
- Add more tests (other paths, other ways)
- Add complexity to the tests (but not too much)
- Write a testing strategy (you may have to justify it)
- Interpret the results
- Clean the previous results after each run
- Restart services before each run
- Test once manually
- Browse as a human during a run to sniff around
- Test each scenario separately before mixing them
- Test with data loaded from a CSV file (e.g. for authentication)
- Connect and authenticate against an LDAP
- Retrieve data from the loaded page
- Follow links from a starting page (not recommended)
- Test Java or the database directly
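The CSV-driven approach can be sketched in Python. This imitates what JMeter's CSV Data Set Config element does: each virtual user consumes the next row of a data file. The file contents and column names here are hypothetical:

```python
import csv
import io

# Hypothetical credentials file, one row per virtual user,
# as a CSV Data Set Config element would read it.
USERS_CSV = "login,password\nalice,s3cret\nbob,hunter2\n"

def load_users(text: str) -> list[dict]:
    """Parse the CSV into per-thread variable sets; in JMeter each
    thread would reference them as ${login} / ${password}."""
    return list(csv.DictReader(io.StringIO(text)))

users = load_users(USERS_CSV)
```

Feeding distinct credentials per thread keeps sessions independent, so caching and per-user state don't skew the results.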
Requests/sec is not user capacity!
- A user spends 30 to 60 seconds per page
- A 20 req/sec capacity => 600 to 1200 concurrent active users
- Inversely: (target users / average time spent per page) = target req/sec
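The arithmetic above, as a small sketch:

```python
def users_capacity(req_per_sec: float, secs_per_page: float) -> float:
    """Concurrent users sustainable: each user issues roughly one
    request every `secs_per_page` seconds of reading time."""
    return req_per_sec * secs_per_page

def target_req_per_sec(target_users: float, secs_per_page: float) -> float:
    """Inverse formula: req/sec needed to serve a user target."""
    return target_users / secs_per_page

low = users_capacity(20, 30)    # 20 req/sec, fast readers -> 600 users
high = users_capacity(20, 60)   # 20 req/sec, slow readers -> 1200 users
```

So a benchmark result in req/sec only becomes a user-capacity claim once you pick a realistic time-per-page.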
Thanks to:
- Michael Hnat from BlueGras
- The FusionReactor / Intergral team
- Aurélien Deleusière from Prisma IT
- Jocelyne Treilly from Mondadori France
Questions? Thanks for your attention!
Twitter: @jlepage_info
Don't leave now, I have one last surprise!