How to Use The Community Roundtable’s Formula to Calculate Your Jive Community’s ROI

Originally published Oct 12, 2016 in JiveWorks.

The Community Roundtable (The CR) is a great resource for anyone working with communities, and their simple-but-helpful ROI calculator provides an accessible way to describe the value of a community in financial terms. Getting the data inputs is not super-easy, but it’s definitely feasible. Here’s how I did it for our Jive community.

There are four inputs required:

  • The number of answered questions per month in your community.
  • The number of successful searches per month in your community (or the number of total searches and a rough estimate of the percentage successful).
  • Your estimated value of an answer.
  • The cost of your community.

This post will focus on how to get the first two inputs for your Jive community. Set up a simple spreadsheet to record the data you collect and make the calculations you need.

Questions Answered

If there’s a way to get this from Jive’s Data Export Service (DES), I’d love to learn how, but I couldn’t find one. You can, however, get it from the Community Manager Reports (CMR). Go to the Questions Answered report in CMR and set the date range to a year. Hover over the chart to see the results on a given date.

Because this report displays cumulative results, you’ll need to subtract to calculate the number of responses. On your spreadsheet, enter the number of questions with responses on the most recent date and on the earliest date. The difference is the number of questions that got answered during the year. (Divide by 12 if you want the number per month.)
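
For example (with made-up numbers): if the chart shows 1,450 questions with responses on the most recent date and 1,090 a year earlier, the community answered 1,450 − 1,090 = 360 questions during the year, or 360 ÷ 12 = 30 per month.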

If you want to be more rigorous, you can use Questions with Helpful Answers, or even those with Correct Answers. I’m not certain my community members apply those labels consistently enough for those counts to be trustworthy, however.

Successful Searches

For search data you’ll need to use the Jive DES portal (if you’ve never accessed it before, look for guidance on that elsewhere in the Advanced Customer Measurement space for Jive customers). After you enter your credentials, set the date range to the past month and filter on Actions, selecting these two actions, which record when someone actually clicks on a search result that has been presented to them (for more explanation of Jive DES search data, see What Does Search Data in Jive DES Actually Mean?):

  • ACTIVITY_SPOTLIGHT_SEARCH
  • ACTIVITY_MAINSEARCH_SEARCH

Click the button to download the CSV file for Activity. Repeat this process for at least the prior two months, so you have three months of data.
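
If you prefer scripting to spreadsheets for the later steps, here’s a minimal Python sketch for combining the monthly downloads and double-checking the action filter. The filenames and the “Action” column name are my assumptions; match them to your actual DES export.

    import pandas as pd

    # Hypothetical filenames for the three monthly DES Activity downloads.
    files = ["activity_month1.csv", "activity_month2.csv", "activity_month3.csv"]

    # The two click-through search actions listed above.
    SEARCH_ACTIONS = {"ACTIVITY_SPOTLIGHT_SEARCH", "ACTIVITY_MAINSEARCH_SEARCH"}

    # Combine the monthly files, keeping only search click-through records.
    # "Action" is an assumed column name; check your CSV's actual headers.
    activity = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
    searches = activity[activity["Action"].isin(SEARCH_ACTIONS)]
    print(len(searches), "search click-through records across three months")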

Manually Evaluate the Search Data

Now you have to apply some judgement. I’ll explain how I did it, but there’s no single right way to decide what a “successful search” is from the data we have to work with. Let’s start with a simplistic definition of “successful search”:

  • the searcher clicked through on a search result and did not immediately (within a few seconds) search for the same thing again.

The CSV file will often have multiple records for the same Actor (searcher) with timestamps that are mere seconds apart (especially with spotlight search). I don’t see how you can call each of those searches successful, since it’s often clear from the search terms entered (yes, you can see that in the CSV file, and that is invaluable) that they were trying multiple times to find one thing.

It gets even trickier once you realize that one user’s records are not always consecutive in the file, because searches by other users can take place during the same time span. User A’s five search records for “Joe Smith” may be separated by User B’s and User C’s search records, even if User A’s searches were just seconds apart.

I soon realized that the Web Session ID was a pretty useful data point, because it stays constant throughout each user’s search session. Of course, during the same web session a user can search for multiple things, so it’s not a direct proxy for a successful search, either.

I decided to go through the first 100 records and manually mark in a new column each “successful search,” which I defined as:

  • There are one or more records for the same searcher in the same Web Session ID, and
  • If there is more than one record, then the search terms in each record are similar enough to indicate the same item is being sought.

So, if there are five records with the same actor and same web session ID and similar search terms in each, I marked that as one search.
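
If you’d rather script this first pass than eyeball it, here’s a minimal Python sketch of those same criteria. It approximates the manual judgement rather than replacing it: the “Actor,” “Web Session ID,” and “Query” column names and the 0.6 similarity cutoff are my assumptions, so adjust them to your export and to your own sense of “similar enough.” It continues from the combined searches DataFrame in the earlier sketch.

    import difflib

    def count_successful_searches(df, similarity_cutoff=0.6):
        """Group records by searcher and web session, then count each run
        of similar queries as one successful search."""
        successful = 0
        # Column names are assumptions; rows are assumed to be in time order.
        for _, group in df.groupby(["Actor", "Web Session ID"], sort=False):
            queries = group["Query"].fillna("").str.lower().tolist()
            distinct = [queries[0]]  # the first query starts a new search
            for q in queries[1:]:
                ratio = difflib.SequenceMatcher(None, distinct[-1], q).ratio()
                if ratio < similarity_cutoff:
                    distinct.append(q)   # dissimilar terms: a new item sought
                else:
                    distinct[-1] = q     # similar terms: same item, one search
            successful += len(distinct)
        return successful

    # Evaluate the first 100 records, as described above.
    print(count_successful_searches(searches.head(100)), "successful searches")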

Tip: Hide all the columns in the file that you aren’t using, so it’s easy to see the timestamp, username, web session ID and search terms.

Estimate the Successful Searches

I chose to be as rigorous as I could in my estimation, but I’m not certain it’s more accurate than a simpler approach would be. I found that 64 of 100 search records were successful by my criteria. I then used Tableau to count the unique Web Session IDs, and there were 53. Therefore, for every Web Session ID there were 64 ÷ 53 ≈ 1.21 successful searches.

A more rigorous approach would evaluate more than 100 records. A simpler one would be to say that for every 100 search records there are 64 successful searches; if you don’t have Tableau or just want to keep things simple, you can stop there. But I went on…

Use Tableau to Count Web Session IDs

I copied and pasted each month’s search data from the CSV downloads into one Excel file. (I downloaded separate monthly files simply to make the downloads go faster, with less chance they would fail; you could download one large file instead.) Then I loaded that file into Tableau and used a custom calculation, COUNTD([Web Session ID]) * 1.21, to count the unique Web Session IDs and multiply them by 1.21. I averaged the result to get a monthly figure. Then I used that data to calculate that 44% of my search records were “successful.”
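
If you don’t have Tableau, the same calculation is a few lines of Python, continuing with the combined searches DataFrame from the first sketch. The “Timestamp” column name is another assumption; use whatever your export calls it.

    import pandas as pd

    # Tag each record with its month, then count distinct Web Session IDs
    # per month and apply the 1.21 successful-searches-per-session ratio.
    searches = searches.assign(
        month=pd.to_datetime(searches["Timestamp"]).dt.to_period("M"))
    monthly = searches.groupby("month")["Web Session ID"].nunique() * 1.21
    print("Estimated successful searches per month:", round(monthly.mean(), 1))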

Use Judgement Again to Estimate Truly Successful Searches

Even though I now had “hard data” (so to speak) on successful searches, I did not think it was actually what The CR’s Community ROI model called for as a “successful search.” I don’t know that clicking on a result and not trying to search again for the same thing really means that someone got the answer they actually needed. That’s too big a stretch for me to make.

So, I decided to assume searchers get the answer they actually need only 50% of the time, bringing my successful search percentage down to 22% (44% × 0.5). That is probably lower than reality, so I feel comfortable defending it.

Estimated Value of an Answer

I’m not going to elaborate much on how to estimate the value of an answer in your community. Rachel Happe does a great job explaining how to do this, but I kept it very simple (how unlike me, I know…). I took the average cost/hour of a full-time employee (FTE) in our company (I got this by asking our finance team) and estimated how much time on average would be saved by finding an answer to a question.

Again, I chose a deliberately low estimate that’s hard to argue is too high: 10 minutes. It seems reasonable to say that, on average, finding an answer to a question in the community saves someone a mere 10 minutes. I’d bet the real figure is higher.
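
To make the arithmetic concrete (with made-up numbers): if a fully loaded FTE costs $75/hour, then an answer that saves 10 minutes is worth $75 × (10 ÷ 60) = $12.50.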

So, I did the math and came up with the value of an answer. It’s important to note this approach does NOT account in any way for the extra value implicit in finding an answer rapidly (such as, “I found what I needed and made my deadline!”), nor does it account for the value to the organization of making a good or informed decision by getting a question answered (such as, “I learned how to avoid a huge costly mistake!”). So, it’s the bare minimum way of estimating the value of an answer: time saved.

Community Program Costs

Costs were relatively easy to compile, although finding all of them took a little digging. I included both external costs of technology and internal costs for the people who support the program.

Calculate the ROI

The CR provides a nifty online calculator, so you can plug the inputs in there and see the results. Of course, I wanted to build my own calculator into a spreadsheet, using the formulas that The CR provided during a member call. Even with very low estimates for the value of an answer and the number of successful searches, our community delivered a solidly positive ROI that I can explain to my stakeholders. Give it a try!
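
I won’t reproduce The CR’s member-call formulas here, but the four inputs suggest the basic shape of the math. Here’s a sketch of that shape in Python; it is not The CR’s exact formula, and every number below is invented for illustration:

    # A sketch of the ROI math implied by the four inputs above.
    # NOT The CR's exact formulas; all numbers are invented placeholders.
    answered_per_month = 30        # from the Questions Answered report
    successful_searches = 500      # per month, from the DES analysis
    value_per_answer = 12.50       # dollars, from the FTE time-saved estimate
    monthly_cost = 4_000.00        # dollars: technology plus staff time

    monthly_value = (answered_per_month + successful_searches) * value_per_answer
    roi = (monthly_value - monthly_cost) / monthly_cost
    print(f"Monthly value: ${monthly_value:,.2f}  ROI: {roi:.0%}")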
