A quick Google search of the phrase "email isn’t dead" will produce a vast array of articles and blog posts arguing that email is, in fact, a healthy and vibrant channel. Consumers who have chosen to opt into email are typically the most engaged consumers available for marketing: they have raised their hands to express interest in the brand, they often have already purchased product, and they were willing to exchange personal information for the promise of relevant, ongoing communications about forthcoming products and promotions. However, it has become ever more difficult to cut through the clutter of the inbox and overcome the plethora of multichannel advertising people are exposed to every day. Consumers also expect more in exchange for their attention: they want messages that are timely, relevant, and of personal interest to them.
This leads an email marketing manager to wonder: for a channel that should expect high performance, yet still has to overcome a number of very real challenges, what is the actual ROI of the programs they are managing? Are they increasing the annual value of an opted-in consumer by 10%, 15%, 25%, or even more? Is attrition among emailed consumers two or three times lower than among non-emailed consumers? Are they adding several hundred thousand, millions, or even tens of millions in incremental dollar value to the bottom line each year?
Flashback to the scene in the 1987 movie The Princess Bride where Miracle Max determines that the hero Westley is indeed very much alive, despite others believing he had succumbed to the tortures of the Pit of Despair. It is a cautionary reminder that an actual measure of vigor should be left to the experts:
[video_resize width="100" padding="75" url="https://www.youtube.com/watch?v=xbE8E1ez97M"][/video_resize]
So, aside from having your own personal Miracle Max to determine just exactly how alive an email program is, how are you to make that determination? The answer is quite simple: measurement, specifically controlled measurement. With proper measurement, the vitality of a program can be accurately quantified and then leveraged in order to plan the level of ongoing investment in the channel.
The gold standard for measuring the value of an email program is to take a perfectly matched set of opted-in consumers and purposefully not send them email communications for a period of time. You then measure the difference in value of consumers receiving email vs. those not receiving email. Typically, deriving an incremental annual value for the program would be the marketer’s goal, but if one year is too long to not send emails to a set of consumers, then a rotating holdout panel could be used for shorter periods.
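The arithmetic behind a holdout test is straightforward. As a minimal sketch with hypothetical numbers (the population sizes and dollar values below are invented for illustration), the incremental annual value of the program is simply the per-consumer difference between the mailed and held-out groups:

```python
# Illustrative holdout-test math with hypothetical numbers:
# 100k matched consumers mailed, 100k matched consumers held out for a year.
mailed = {"consumers": 100_000, "total_annual_value": 5_600_000}
holdout = {"consumers": 100_000, "total_annual_value": 4_900_000}

value_per_mailed = mailed["total_annual_value"] / mailed["consumers"]
value_per_holdout = holdout["total_annual_value"] / holdout["consumers"]

# Incremental value attributable to the email program
incremental_per_consumer = value_per_mailed - value_per_holdout
program_lift_pct = incremental_per_consumer / value_per_holdout * 100
total_incremental = incremental_per_consumer * mailed["consumers"]

print(f"Incremental value per consumer: ${incremental_per_consumer:.2f}")
print(f"Program lift: {program_lift_pct:.1f}%")
print(f"Total incremental annual value: ${total_incremental:,.0f}")
```

With a rotating holdout panel, the same calculation is run per rotation period and annualized, so no single consumer goes unmailed for a full year.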
However, it is always quickly noted by some facet of the organization that there is an opportunity cost associated with not sending emails, and the quest for a measurement alternative begins. I see marketers choosing to measure their program in a variety of creative ways, each of which comes with its own set of considerations. Here is a list of the methods that I see most commonly applied and a set of cautionary notes for each:
Method 1: Comparing opted-in consumers with non-opted-in consumers
Considerations: These two populations are fundamentally different. Opted-in consumers are more likely to have had a deeper previous relationship with the brand, including prior purchases. They have been willing to share personal information with the brand in exchange for future communications. They may also skew demographically toward those who are more digitally savvy, which could include younger or higher income consumers. In fact, some of the opted-out consumers may have even chosen at some point to unsubscribe, exhibiting a strong disinclination toward future purchase. Additionally, just because a person is opted-in currently does not mean she is actively being emailed. She may fail the email contactability rules. Lastly, the very act of even having an email address on file can lead to higher value, as that allows for better linking of retail purchases with email address capture at the point of sale.
Method 2: Using non-email openers as a proxy for non-opted-in consumers
Considerations: There are at least two underlying differences between non-email openers and a true holdout group. The first is that if you are using a reasonably long test period, say three or six months, and there is no email engagement, those consumers have let a lot of messages go unopened during that time. A consumer who has let that many messages go unread, even without opting out, has likely disengaged with the brand (a soft opt-out) and is shopping with other retailers by now. In most cases, you could know even without testing that this group will be of lower value. Additionally, in today’s world, it is not safe to assume this group is not at least seeing the subject lines in their inboxes as they clear out daily messages, often from a mobile phone. For a brick-and-mortar retailer, a subject line advertising a weekend sale or 50% off in-store could still drive traffic in some cases, even without triggering the engagement click to see the message details.
Method 3: Comparing opted-in consumers with non-opted-in consumers while applying profile groups
Considerations: This is similar to method 1 above, but this time the marketer is actively controlling for a few factors, such as customer tenure, purchase recency, age and income groups, or region. While this is certainly better than not trying to control for such factors, it cannot control for the fundamental difference in brand interest that may exist between the two groups. Additionally, it runs the risk of not controlling for the most critical factors. For instance, what if a store makes most of its sales through brick-and-mortar locations and also requires email sign-up for a loyalty program (with a default ‘yes’ to opt in at the same time)? In this case, loyalty club status and distance-to-store may be two of the most critical value-driving factors to control for, but the marketer may not have captured them as part of their a priori set of variables, leaving a significant gap in critical similarities across the two groups.
Method 4: Modelled control group
Considerations: To reduce some of the error inherent in method 3, you can improve the comparison by using a model to find similar populations within the opted-in and non-opted-in groups. A wide array of aggregates would need to be created and used as model input variables. Then, during the model build phase, the most important variables would be selected to replace the a priori profile variables. This helps ensure that the proxy group is selected in the best mathematical way possible, as opposed to one person’s guess. One of the largest considerations for this approach is the resource time involved to create and apply the models. If there is not a lot of pre-existing infrastructure to support quick model builds and aggregate creation, then the investment in time and human capital could outweigh the opportunity cost of simply running a true holdout group in the first place.
As the movie progresses, Miracle Max does indeed revive Westley with a miracle pill. He goes on to overpower Prince Humperdinck, win back Princess Buttercup, and ultimately save the day. The moral of the story: with farm boys, just as with email, there can be a lot of fight left, even in something that others may have declared "mostly dead." Furthermore, it’s not nearly the time to be looking for mere loose change. Instead, we should still be striving for and measuring the much larger value that email can bring. Who knows, maybe email will even help with a few R.O.U.S.’s: in this case, instead of defeating Rodents of Unusual Size, we mean creating Returns of Unusual Size! Just be sure that as you determine the size of those returns, you are measuring actual value, understanding the pros and cons of the methodology, and not accidentally creating a fairytale metric.