Overview
Vanity metrics make you look good to others, but often do not help you improve your organization. In this presentation, Mike Burroughs examines the metrics companies commonly adopt, discussing which are genuinely effective and which are just vanity metrics. He also looks at where organizations go wrong with data analysis and proposes ways to improve their processes.
Speaker
Mike Burroughs is the President and CTO of Salesvue, an easy-to-use, Salesforce-native sales engagement platform. Mike has decades of experience in B2B marketing.
Quotes
“However, I’m frequently amazed that when asked why they are collecting a particular metric, people struggle to give a strong response. The quote here by Henry Ward Beecher strikes at the heart of a common failing of many analytics initiatives: they’re based on what is easy to collect, not what is important to collect.”
Key Points
- Metrics are often based on what is easy to collect, not what is important to collect.
- One organization’s vanity metric may be crucial for another.
- Organizations often use the data they collect to prove what they assume is happening instead of using it to find problems.
Transcript
Hello, everybody. I’m Mike Burroughs, President and CTO of Salesvue. Salesvue is a Salesforce-native sales engagement application. It offers customers the ability to structure their sales processes and to collect and analyze their results, all without forcing the user to leave the familiarity of their Salesforce org. At its core, Salesvue, and indeed every sales engagement application, is about structuring an organization’s sales process to foster its consistent execution. We’re always asked to produce reports with specific metrics that our customers believe to be crucial to understanding their success. Sometimes these metrics are driven by business-level KPIs that are tracked throughout the organization, and sometimes they are metrics that have simply been collected forever. Often the teams we work with have a hard time explaining how the metrics they collect correlate to success in their organization. While every organization is different, implementing various processes and collecting various metrics, I’ve been able to make several observations about what seems to work when it comes to measuring your sales activity, regardless of your process or industry.

At first glance, this first observation seems fairly obvious. However, I’m frequently amazed that when asked why they are collecting a particular metric, people struggle to give a strong response. The quote here by Henry Ward Beecher strikes at the heart of a common failing of many analytics initiatives: they’re based on what is easy to collect, not what is important to collect. At Salesvue, we talk a lot about a related concept, vanity metrics. Tableau, a Salesforce product and a commonly used data analytics platform, defines vanity metrics as ones that make you look good to others but do not help you understand your own performance in a way that informs future strategies. At Salesvue, we’ve coined the term “metrics that matter” to capture the opposite of this concept.
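To make that contrast concrete before the report that follows, here is a minimal sketch in Python, with entirely hypothetical reps and numbers (not data from the talk), of raw activity as a vanity metric versus a conversion-based metric that matters:

```python
# Hypothetical per-rep totals: dials made and demos set from those dials.
reps = {
    "Rep A": {"dials": 900, "demos": 9},   # high activity, low conversion
    "Rep B": {"dials": 420, "demos": 21},
    "Rep C": {"dials": 380, "demos": 19},
}

for name, r in reps.items():
    conversion = r["demos"] / r["dials"]  # the metric that matters here
    print(f"{name}: {r['dials']} dials, {conversion:.1%} dial-to-demo conversion")
```

Ranked by dials alone, Rep A looks like the star; ranked by conversion, the picture reverses, which is exactly the pattern the report discussed next shows.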
We are often asked to provide a list of metrics that matter for an organization, and while it would be nice to generate such a list, it is impossible to do so reliably. Your metrics should correlate to your business KPIs. If your primary business KPI is customer outreach, then activity-centric metrics probably make sense. If your primary business KPI is revenue dollars, then activity alone is probably an insufficient measure.

This is a typical report you might get from just about any sales engagement tool. Here we have a team of seven sales reps. Four of them, in the middle, are performing at roughly the same average rate. Two members of the team are somewhat lagging behind, and another appears to be performing at a much higher rate than the rest. You’re probably all thinking at this point that this is a team with one rockstar, four solid contributors, and two who are looking to be put on a plan very soon. But if the primary objective of this organization is to make outbound dials, and the activities in this report are counting those dials, then it’s obvious who the star is. In this case, activity truly is a metric that matters. Anytime the purpose of the engagement is to notify a customer or prospect base of some important fact, with no further call to action desired, activity metrics may be all you need to analyze success. I suspect that some percentage of every company’s activity could be characterized this way. However, in most cases that I see, sales engagement activity is designed to drive a specific response. Perhaps the objective is to drive somebody to view your website, or to request a demo or an appointment; in some cases, closing a sale and generating revenue immediately is the desired outcome. In these cases, analyzing activity levels alone can lead to invalid assumptions.

In that previous report, we saw one rep performing at a very high activity rate. Let’s assume that the objective of this activity was to set a demo. In Salesvue, we generically refer to the desired outcome of the sales engagement process as a conversion. We see in this case that the rep with the high activity level has a significantly lower conversion rate than the rest of the team. Correlating the commonly tracked metric of activity to the key business objective, appointment setting, gives us a much better picture of what we need to be working on. It is important to keep in mind that tracking activity was not really the issue; rather, evaluating activity levels independent of the resulting desired success was where we failed. It’s also important to note that this is not just some made-up scenario that I’ve drawn up; I pulled it from our own experience at Salesvue. In this case, the rep with the high activity level was a new hire, straight from college, working very hard to impress, and for a period of time he did exactly that, until we investigated more deeply what was going on.

Before we get into what we learned about this high-energy new sales rep, I want to present a second observation. In order to understand why this rep was seemingly able to make huge volumes of calls and yet have relatively little conversion success, we needed a way to understand what was actually happening. What I have observed is that structuring activity results so that they can be aggregated and analyzed as a metric is both critical and frequently overlooked. How many of you utilize notes fields to record what was happening during a sales interaction?
While flexible, this unstructured data is nearly impossible to analyze, even with the promise of today’s AI. Tracking something as simple as “I left a voicemail” is challenging when using notes and unstructured responses, because everybody indicates this outcome differently.
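As a sketch of the idea, with hypothetical result codes rather than Salesvue’s actual vocabulary: once every rep records an outcome from one shared, fixed set, counting voicemails (or any other result) becomes a one-line aggregation instead of a text-mining exercise.

```python
from collections import Counter

# Free-text notes: the same outcome written four different ways.
notes = ["left VM", "voicemail left", "lvm", "no answer - left a message"]

# Structured results: every rep must pick from one shared vocabulary.
ALLOWED_RESULTS = {"VOICEMAIL", "NO_ANSWER", "CONVERSATION_POSITIVE",
                   "CONVERSATION_NEGATIVE", "NOT_A_FIT"}

calls = [
    {"rep": "new_hire", "result": "NOT_A_FIT"},
    {"rep": "new_hire", "result": "NOT_A_FIT"},
    {"rep": "veteran",  "result": "CONVERSATION_POSITIVE"},
    {"rep": "veteran",  "result": "VOICEMAIL"},
]
assert all(c["result"] in ALLOWED_RESULTS for c in calls)

# Aggregation across the whole team is now a one-liner.
print(Counter(c["result"] for c in calls))
# Counter({'NOT_A_FIT': 2, 'CONVERSATION_POSITIVE': 1, 'VOICEMAIL': 1})
```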
Diving into the data a little further, we were able to determine that the majority of the phone calls this new sales rep was making were identified with a result of “not a fit.” Since we randomly distribute leads to all of our sales reps, we knew it was highly unlikely that this rep had simply received a batch of bad leads. So we recorded some of his sales calls and talked with him about his sales approach. We realized he was uncomfortable with objection handling; his call volume was high because most of his calls were short. In this case, not only was the sales rep not getting the sale now, he was likely creating a sales situation in which he would be challenged to ever get a sale from that prospect, because he was leaving all of their objections unaddressed. Fortunately, additional coaching and practice allowed us to quickly address the challenge.

The report you’re seeing now is the Salesvue conversation recency report. Because we require the use of structured results for activities, we know not only how many times a prospect or customer has been called, but also whether the call resulted in a conversation and whether that conversation was positive or not. Because every rep in an organization uses the same set of structured results, we can dig deeply into analytics covering not just the activity in a sales process but the outcomes of its individual steps. This report shows when the last meaningful positive interaction with a customer or prospect occurred: critical information for understanding the health of your customer base or pipeline.

My first two observations were about identifying the metrics that matter in your organization: you need to identify meaningful metrics and correlate them to your organization’s measures of success. This is challenging in today’s world, paradoxically, because we have access to so much data. The challenge is picking, from a huge collection of data points, which ones we want to analyze and track. Most organizations have multiple systems in use, and deciding which metrics to use is often made more difficult by needing to determine which of these many systems can be relied on as the system of record for a particular measure. When several different applications are used to implement a sales engagement process, there is the potential for multiple systems collecting and reporting on the same metric, and when this happens, you run the risk of the two systems not agreeing.

Consider the following simple marketing and sales process; I suspect that many of you use something similar. The marketing department uses a marketing automation solution to drive outbound email communication. That system tracks marketing interactions with the prospect until the prospect registers some level of interest in purchasing. At this point, ownership of the customer moves from the marketing automation solution to the sales engagement application, and it is in this transition between systems that we find the potential for confusion. Do we trust the marketing automation solution’s count of leads passed to the sales engagement application, or do we trust the sales engagement application’s count of leads presented by marketing? Ideally, these numbers are always identical, but in practice they frequently differ. When a given metric can be driven by different systems, it’s important that an organization identify the system from which the metric will be collected and then consistently and exclusively rely on that system.
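A minimal sketch of that discipline, with hypothetical systems and counts: surface any disagreement between systems, but report the metric exclusively from the designated system of record.

```python
# Hypothetical lead counts for the same hand-off, pulled from two systems.
counts = {
    "marketing_automation": 214,  # leads marketing says it passed to sales
    "sales_engagement": 203,      # leads sales says it received
}

SYSTEM_OF_RECORD = "sales_engagement"  # chosen once, relied on exclusively

if len(set(counts.values())) > 1:
    # Surface the discrepancy for investigation, but never mix sources.
    print(f"Warning: systems disagree: {counts}")

print(f"Official lead count ({SYSTEM_OF_RECORD}): {counts[SYSTEM_OF_RECORD]}")
```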
So far, we’ve talked about using your business objectives as a driver for your metric selection, finding metrics that truly correlate to those business objectives, and identifying sources for those metrics that inspire confidence. You can now begin to use this information to make process-improvement decisions.
This leads me to my next observation. Too frequently, organizations use the statistics they collect to prove what they assume to be happening instead of using them to identify what is really going on. As you look at your data and start to make decisions about what you need to do to improve, make sure you consider all of the reasons a metric may be out of line. Were there projects happening at the same time that impacted performance? Are there external factors, such as seasonality, that you aren’t aware of? Dig deeply into why a metric is not what you expect before you begin to plan your improvements. Once we’re collecting accurate, meaningful data and approaching it without bias, we can start to envision a process improvement, implement our process changes, and use our metrics over time to determine whether those changes are having a positive effect. This leads me to my last observation: you need to know what truly constitutes improvement.

To tie this together, I’m going to walk through an example that I hear frequently, in one form or another, from our customers and prospects. I’m sure many of you have experienced, or at least heard about, the friction between sales and marketing over the quality of the lead flow: marketing says sales isn’t following up on the leads; sales says the leads are horrible. If we apply the principles outlined here, we can create a report that shows definitively whose vision of reality is correct.

In this example, marketing is going to send out an email announcing a new product being introduced to the market. The email will include an invitation to go to the company’s website, get an overview of the product, and, hopefully, fill out a form to schedule a demonstration. We start our process by determining what our key metric is. There are a lot of options here: we could measure the number of emails sent, the number of website hits, or the amount of time between the sending of an email and when the link was clicked. But what really matters here is the number of demos set. Demo appointments may be a metric that matters, but are they the best we can use? Perhaps it’s more important to track the demos that lead to further interest and a created opportunity. This is where an organization’s perspective comes into play, as arguments could be made for any of these; but since our fictional business is in the business of selling stuff, qualified leads are what we’re interested in, so we need to understand the outcome of each demo. Any analysis must consider the impact on this singular metric. That’s not to say that outbound email counts, website hits, and scheduled demos are not critical. They are, but they’re critical in relation to the important outcome: driving prospects to become qualified sales leads.

The first report we’ll look at is a very marketing-centric report that shows the success of our email campaign from the perspective of the marketing manager: how many people opened the email, and how many came to the website by clicking the link in it. This report actually shows pretty good performance; as a marketing manager, I’d be relatively happy with it and feel confident that I was driving good leads to sales. There’s a lot of activity here, but we really don’t know the quality, because the report doesn’t correlate any of this activity to our downstream results. Combining the marketing results with data taken from the sales engagement platform, all the way through to the results of the demo, allows us to make an informed decision about the quality of the marketing leads and the follow-up from the sales reps.
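Joining the marketing numbers to the downstream sales outcomes turns the campaign report into a funnel whose final stage is the metric that matters. A sketch with made-up counts (illustrative only, not figures from the talk):

```python
# Illustrative funnel: marketing stages joined to downstream sales outcomes.
funnel = [
    ("emails_sent", 10_000),
    ("emails_opened", 2_400),
    ("link_clicks", 600),
    ("demos_set", 60),
    ("qualified_leads", 12),  # the metric that matters for this business
]

top = funnel[0][1]
for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {count} "
          f"({count / prev:.1%} of previous stage, {count / top:.2%} of emails sent)")
```

A marketing-centric report stops at link_clicks; the last two rows are what connect upstream activity to downstream quality.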
We’ve selected our results-oriented metrics based upon real business objectives, and here I’m showing two potential outcomes. On the left, we have a sales rep who has done a good job of attempting to connect with every one of the inbound leads; they’ve attempted to connect with almost two-thirds of them. Yet only three leads have expressed purchasing interest, and it would be possible to use this data to argue that the leads are bad. But in reality, the sales rep is actually talking to very few leads at all, as the quick sketch below makes explicit.
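The conclusion flips depending on the denominator. A quick sketch using the left-hand rep’s figures as described next (28 leads dispatched, 166 still in flight, 3 opportunities):

```python
# Left-hand rep's figures from the report.
dispatched = 28      # leads worked through to a final disposition
in_flight = 166      # leads still being worked
opportunities = 3    # leads that expressed purchasing interest

total_leads = dispatched + in_flight  # 194 inbound leads

naive_rate = opportunities / total_leads     # treats in-flight leads as failures
completed_rate = opportunities / dispatched  # counts only finished engagements

print(f"Naive conversion rate: {naive_rate:.1%}")                     # ~1.5%
print(f"Completed-engagement conversion rate: {completed_rate:.1%}")  # ~10.7%
```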
They’ve really only managed to dispatch 28 of the prospects, leaving 166 still in flight. Three opportunities out of 28 completed engagements is roughly 10%, which is good enough that you’d want your sales rep to continue pursuing these leads; your off-the-cuff observation of poor lead quality is probably unwarranted at this point. On the right, we have a sales rep who is working harder to push each individual to a decision before taking on any new leads. In this case, there are a lot more leads that haven’t purchased because they haven’t been contacted, but the results for those that have look good: about 17% of the leads that were run through the whole process are converting. It isn’t until we dig into the data, looking at truly critical metrics and evaluating all possibilities, that we can arrive at observations that really help us drive our businesses forward. Taking both examples together, a sales manager could turn the numbers into nine opportunities from over 700 leads provided by marketing and claim bad leads with a 1.3% conversion rate. But applying the observations here, we can see that this is a very biased analysis. At best, we don’t have enough information to make a decision about lead quality, and in reality, the marketing manager would have a case that his leads are indeed good. As we drive improvement, whether in how we target in marketing or in how we engage in sales, we now have an unbiased set of metrics we can use to gauge it. How much improvement is required? Well, that’s up to you to determine.

I want to end with a quote from the book Tech-Powered Sales by Justin Michael and Tony Hughes: “You cannot manage what is not measured, yet not everything measured can be managed. Focus on the core activities and important metrics that make a difference and decide which skills to coach.” I’d say, avoid at all costs falling into the trap of pursuing vanity metrics. It seems simple, but in practice it takes a lot of work to stay focused on the metrics that matter. I’m Mike Burroughs, and I hope you’ve enjoyed these observations and this presentation. Thank you very much.