Testing Times: Testing Tactics & Strategies
March 2013 | Version 1.0
Contents

- Executive Summary
- Introduction
- Strategies
  - Single Variable
  - Multi-Variant Testing
- Testing Plan
  - Step 1: Goal
  - Step 2: Develop a theory
  - Step 3: Create the Test
  - Step 4: Segment the List
  - Step 5: Measure and Analyse Results
  - Step 6: Make Changes
  - Keep the tests fair and honest
- Factors to Test
- Conclusion
Executive Summary

It's important to understand that, like most elements of marketing, there is far more science than art to successful email marketing. Testing can help marketers understand audience behaviour and preferences. It can also help identify and solve problems in email campaigns.

Whatever strategy is chosen, testing must be approached as if training for a marathon: the more you test, the better you will become. In short, it is advisable to start with single variable testing; then, once you have found out what works for your email campaign, move through all of the email components until a full understanding has been developed.

Fundamentals of applying a testing plan:

- Test big sample sizes
- Create a 'testing log': document all tests and results in one place
- Rather than trying to test everything at once, test iteratively
- Choose smaller, less complex tests with significantly different variants
- Take the lessons learnt from the first test and quickly follow it up with another
- Repeat tests over time

The remainder of this document covers the different strategies that can be applied, the process of implementing a testing plan and, finally, the associated tests for a range of email components.
Introduction

The definition of testing in the context of email marketing is not complex. It is very simple: testing involves sending differing versions of an email, including a control copy, to see which instance is most effective at improving a desired metric.

With competition in the inbox becoming increasingly fierce, a careful testing strategy gives a significant opportunity to improve metrics and strengthen your brand position. There are of course many more reasons why an organisation carrying out email campaigns should invest time and resources in testing. However, they can all be grouped into:

1. Testing leads to a proven increase in return on investment (ROI).
2. Testing validates what you are currently doing.
3. Testing can help marketers understand audience behaviour.
4. Testing can provide solutions to specific problems and challenges with current email tactics.

Regardless of the testing strategy, single or multi-variable, by focusing on a specific metric within email campaigns, testing has allowed organisations to significantly improve the success of their communications. For example:

- 20% increase in opens by dispatching a few hours earlier
- 35% increase in clicks by changing the image-to-text ratio
- 55% increase in sales by moving the call to action

With email, small changes can make a big difference in results.
Strategies

There are fundamentally two strategies that can be applied to testing in email: one (or single) variable at a time, or multiple variables at once. Each is valid; which to choose depends on the current tactics and results of your email campaigns.

Single Variable

This strategy is based around the systematic testing of individual email parameters. The process would typically start by testing:

- variables of the from name
- subject lines
- super subject lines
- headlines
- hero panels
- calls to action
- images
- layout
- design elements (including responsive design)
- day of send
- time of send

The purpose of this strategy is to test the email lifecycle from deliverability to open, click and finally conversion. A single variable approach is recommended if current email campaigns are performing well or in line with the industry average. Testing one variable over time gives accurate metrics and provides a clearer understanding of how each change performed.
Multi-Variant Testing

This strategy involves changing a number of elements of the email campaign at once, with the objective of significantly increasing overall performance. Multi-variable testing is recommended if current email activity does not return the results needed and metrics are well below the industry average. The major disadvantage of this approach is the lack of understanding as to which of the different variables in the email led to the results achieved.

There is, however, a structured approach to multi-variant testing that allows for a clear understanding of the dynamics operating within the different tests. The methodology is as follows:

- Select at least two elements
- Change only one item in each element, such as the call to action, images, title or copy
- Elements must be independent of each other
- Elements must be significantly different to enable clear lessons to be learnt
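As an illustration of how the methodology above produces test cells, a short script can enumerate every combination of one variant per element. The element names and variant copy below are hypothetical examples, not part of this document's recommendations:

```python
from itertools import product

# Hypothetical elements under test; each element varies exactly one item,
# and the variants within an element are significantly different.
elements = {
    "call_to_action": ["Buy now", "See the benefits"],
    "hero_image": ["product_shot", "lifestyle_shot"],
}

# Every combination of one variant per element becomes one test cell.
test_cells = [dict(zip(elements, combo)) for combo in product(*elements.values())]
for i, cell in enumerate(test_cells, start=1):
    print(f"cell {i}: {cell}")
# 2 elements x 2 variants each = 4 test cells
```

Because the elements are independent, the number of cells grows multiplicatively, which is one reason this document recommends keeping multi-variant tests small.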
Testing Plan

Step 1: Goal

Start with a simple question: what are you trying to achieve? In short, determine a specific goal to accomplish rather than attempting multiple goals. A series of small steps is easier to test and understand. Goals could include:

- Increase open rates
- Increase click-throughs to the website
- Increase downloads or form completions
- Increase purchases/conversions

Step 2: Develop a theory

Use your marketing experience and best practice advice (see below) to create a theory about what aspects may make a difference to your defined goal. Examples could include:

- Clicks: calls to action are not visible or strong enough
- Conversions: landing pages are too generic
- Opens: subject lines do not emphasise the offering

Step 3: Create the Test

Set up the test. This will depend on the defined goal. Use the table below to select one variable at a time to test for the required metric.
If different layouts in the email are being tested, it is important to have clearly distinguishable designs. This will allow a "winner" to be identified, and then an understanding as to why. Without a theory as to why there was a clear winner, it will be very difficult to feed lessons learnt back into the design process and brief. The table also gives an example of content split testing. It is recommended to focus on the key offer or content piece to ensure optimum results for the defined metric.

Step 4: Segment the List

This depends on your existing segmentation strategy. Unless you are testing different types of segmentation, keep groupings and filter criteria constant. Select the segment involved in the test and split it 50/50, or even 33/33/33. Always treat one section of the split as your control group, i.e. what you always do. Then use the others for the different variants of the particular test. It is important to remember that test groups have to be a good size to ensure results are credible.

Step 5: Measure and Analyse Results

Accurately pull together all metrics, from delivery to interaction and conversion rates. Measure and analyse the results to prove or disprove the theory. If applicable, examine the results from previous tests.
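The 50/50 or 33/33/33 split described in Step 4 can be sketched in a few lines. This is a minimal illustration, not a prescribed tool; the function name and group labels are invented for the example:

```python
import random

def split_for_test(recipients, n_variants=1, seed=42):
    """Shuffle a recipient list and deal it into a control group plus
    n_variants equally sized test groups (50/50, 33/33/33, ...)."""
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)   # fixed seed -> reproducible split
    names = ["control"] + [f"variant_{i}" for i in range(1, n_variants + 1)]
    groups = {name: [] for name in names}
    for idx, recipient in enumerate(shuffled):
        groups[names[idx % len(names)]].append(recipient)  # round-robin deal
    return groups

groups = split_for_test([f"user{i}@example.com" for i in range(90)], n_variants=2)
print({name: len(members) for name, members in groups.items()})
# 90 recipients dealt into control / variant_1 / variant_2, 30 each
```

Shuffling before dealing keeps the groups randomly mixed, so any difference in results comes from the variant and not from how the list happened to be ordered.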
Even small percentage differences can mean large gains in response rates and return from the campaign. Remember not to focus only on the traditional catch-all email metrics; review the results to see if:

- Clicks were more focused on a specific area, topic or action
- There was a difference in the spread of clicks
- There were any differences in the types of conversions or website activity between the groups

Myth: more opens mean more clicks. Be careful not to fall into the trap of thinking more opens automatically means more clicks. Research has shown that this is not always the case, which again shows why rigorous testing is so important.

Step 6: Make Changes

All the work involved in testing is wasted unless lessons are learnt and changes are made. Commit to making at least one change after each campaign.

Keep the tests fair and honest

Whatever approach is followed, stringent test procedures are required so results aren't skewed:

- Make clear notes about what you are testing, and always make sure a control group is sent a "standard" version of your email
- Unless testing day or time, all emails must be sent at the same time
- Wait at least two days before deciding on a winner, as early results can be misleading
- Test groups must be of a significant size. There are many tools to calculate statistical accuracy; just type "sample size calculator" into a search engine
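The "significant size" point can be made concrete with the standard two-proportion sample-size formula that those online calculators implement. This is a sketch using only the Python standard library; the open rates in the example are illustrative, not figures from this document:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_control, p_variant, alpha=0.05, power=0.80):
    """Recipients needed in EACH group to detect the difference between two
    response rates with a two-sided test at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = (p_control - p_variant) ** 2
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect)

# Illustrative: detecting a lift from a 20% to a 25% open rate needs
# roughly a thousand recipients per group.
print(sample_size_per_group(0.20, 0.25))
```

Note how quickly the required size grows as the expected difference shrinks: small lifts need large groups, which is why early results on small splits can be misleading.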
Factors to Test

Variable: Subject Lines
Metric: Open Rates
Tactics:

- Copy: use different article headlines from your newsletter
- Copy: test your offer wording, or try leading with your main offer to encourage interest
- Length of subject line: long vs. short
- Personalisation: use the recipient's name, company name, interests and more to increase relevance
- Urgency: offer end date
- Incentive: savings, discounts or reduced delivery costs
- Calls to action: be upfront with what you want the recipient to do by including this in the subject line

Variable: Content
Metric: Click Rates
Tactics:

- Offer:
  - Test varying discount levels: 20% vs. 30% off
  - Test a hard sell ("Buy now") vs. a soft sell (benefits)
  - Test pricing: 15.99 vs. 19.99
  - Urgency: a time limit or incentive for the recipient to act now
  - Test a free offer: a free product with purchase vs. no free product
  - Test free shipping vs. a discount
  - Percentage vs. numerical offer
  - Free vs. reduced
- Content:
  - Long vs. short text length
  - Direct and targeted vs. detailed information sections
  - Previous customers: use testimonials to increase trust and validate the purchase
  - Tone and length of headlines
  - Tone: corporate / edgy / friendly / expert advice
  - Personalisation with subscriber data: test using the information you hold on the recipient vs. non-personalised. This does not just have to include name, company or hard data; it could also take into account behavioural information, such as opens, purchase history etc.

Variable: Design
Metric: Click Rates
Tactics:

- Graphics:
  - Positioning: the positioning and size of main or sub images has a major effect on recipient engagement with the campaign
  - Complexity of design: from text only to very image heavy, or a strong single message
  - Personalised imagery: testing of dynamic images based on recipient data
- Style:
  - Fonts and colours: this depends completely on your brand flexibility
- Layout:
  - Short vs. long, or vertical vs. horizontal content areas
  - Length: full content, or short content with teaser links (e.g. 'read more')
  - Styling: try a completely different content or offer styling for a particular product
Variable: Call to Action
Metric: Conversion / Click Rates
Tactics:

- Text vs. image
- Placement: above 'the fold', in the header, below the image
- Frequency: placement of a secondary link vs. repeating the offer vs. more than one offer in the email. A single call to action can increase clicks by focusing recipients on one thing, but multiple calls to action give the opportunity to cross-sell
- Different offers
- Text tone and length
- Personalised depending on past behaviour
- Different text: urgency / passive / 'Find out' vs. 'Read more'
- Trust factor: risk-free offers

Variable: Day and Time
Metric: Open Rates
Tactics:

Simple changes here are often very powerful. Tests could include:

- Changing weekdays to weekends, or moving away from Friday afternoon dispatches
- Morning vs. afternoon

Variable: Frequency
Metric: Conversion Rates
Tactics:

This is where you should be cautious. The volume of emails you send, as well as the frequency with which campaigns are dispatched, has the potential to affect inbox delivery, and can also be detrimental to overall engagement rates. This can be tested, but ensure your testing is in line with the expectations of your recipients and what you specified when they first subscribed.
Conclusion

Testing has major advantages and is a fundamental part of email marketing. It can provide major insights into existing email campaigns and can have an unrivalled impact on future activities. However, there is always one significant caveat: you always have to ask, "What am I learning from doing this, and what am I trying to achieve?" You have to understand what your next action will be when you see one email outperform another; otherwise testing becomes a luxury, not a tool that gives you a platform for continual improvement.

And finally, remember: "If you're not testing, you're not learning."