In the previous posting, we explained how to break down the content in an existing digital asset library into four major categories: Onboarding, Prospect Development, Product Knowledge Development, and Sales Skills Improvement programming. Each of these phases of the distributor and customer journey must be inventoried, and the expected outcome each asset is meant to produce must be identified.
As a team begins to use machine learning, the next step is to focus on its Prospect Development content, because that phase of the customer journey has the greatest influence on conversion rates and revenue. These assets provide a machine learner with measurable steps in the customer journey, and the team’s job is to select what to measure at each customer touchpoint.
Focus on the process of moving a prospect from initial awareness into a distributor relationship or to the close of a product sale. For now, set aside the social content used to attract awareness, along with the content used to train and inform distributors. The videos, articles, product sheets, and other materials shared between a distributor and a prospect are the only concern at this step. Don’t be afraid to throw out content that doesn’t fit and to plan the production of new content that may work better.
Consider each content asset as a candidate, as though you were hiring a sales support person to work with distributors to complete the Prospect Development sequence successfully. Is it up to the job? Can it be described completely enough that its “boss,” the machine learner, can understand what it is expected to do?
“Just as you wouldn’t hire a human employee without an understanding of how he or she would fit into your organization, you need to think clearly about how an artificial intelligence application will drive actual business results,” wrote Greg Satell, author of Mapping Innovation: A Playbook for Navigating a Disruptive Age, in the Harvard Business Review this month. The metrics identified at this step in the machine learning onboarding process are the equivalent of a job description for the machine learning platform.
A machine learner will analyze the performance of content assets, the sequence in which they are presented, and the messaging that drives views of the content, comparing the results to the expectations and metrics identified during this exercise by sales and marketing leadership. When the system identifies departures from those expectations, it will seek alternative routes to improved conversion by testing different combinations of content and sales messaging. It sends suggested next steps to the distributor, who can use or modify them (creating more variations for the machine learner to analyze), and measures all the results against the goal of speeding prospects to the close.
What does the machine learner need to know about each asset?
What is the asset about? What is the subject, as well as the keywords, themes, and who or what appears on-screen? Metadata is often missing and may be added in the content management platform. If an asset does not have extensive metadata describing its content, that must be created so that the machine learner is able to test different combinations of assets. For example, if the machine learner knows that a video features a female presenter, it could test that asset with female viewers to see if it converts better. There are myriad combinations of demographic and psychographic factors that can be tested, but only if the asset is thoroughly described in a way the platform can understand and use.
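To make the idea concrete, here is a minimal sketch of the kind of metadata record and targeting test described above. The field names, values, and function are hypothetical illustrations, not drawn from any specific platform:

```python
# Hypothetical metadata record for one content asset.
# Field names and values are illustrative only.
asset = {
    "id": "vid-0042",
    "type": "video",
    "subject": "product introduction",
    "keywords": ["weight management", "nutrition", "testimonial"],
    "on_screen": {"presenter_gender": "female", "format": "interview"},
    "duration_seconds": 180,
}

def matches_audience(asset, viewer):
    """One example of a targeting hypothesis a machine learner might test:
    does the on-screen presenter's profile match the viewer's?"""
    return asset["on_screen"]["presenter_gender"] == viewer.get("gender")

print(matches_audience(asset, {"gender": "female"}))  # True
```

Each additional descriptive field multiplies the demographic and psychographic combinations available for testing, which is why thin metadata limits what the platform can learn.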
Where is the asset positioned in the current selling sequence? A machine learner may also test different sequences of content to understand if existing content can produce better conversion rates.
Is it traditionally the first video shared with a prospect to create interest? Does it depend on any other assets for context, such as a previous video in the sequence? This information is important for preventing the machine learner from rearranging content in a way that doesn’t make sense to the recipient. For example, if your company frequently uses jargon, such as referring to a product by an acronym (e.g., “Comprehensive Weight Management” is spoken of as “CWM”), the acronym should be explained before it is used in other contexts. Telling the machine learner that one asset must precede another prevents the customer confusion that results when information is presented in the wrong order.
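The “must precede” rule can be sketched as a simple constraint check that a sequencing system might run before testing a new ordering. The asset names and precedence pairs below are hypothetical:

```python
# Hypothetical precedence constraints: (earlier_asset, later_asset) pairs.
# A machine learner testing new sequences must respect these.
MUST_PRECEDE = [
    ("cwm-overview", "cwm-deep-dive"),  # the "CWM" acronym is defined first
    ("intro-video", "product-sheet"),
]

def sequence_is_valid(sequence, constraints=MUST_PRECEDE):
    """Return True if every 'must precede' pair appears in order."""
    position = {asset: i for i, asset in enumerate(sequence)}
    for earlier, later in constraints:
        if earlier in position and later in position:
            if position[earlier] > position[later]:
                return False
    return True

print(sequence_is_valid(
    ["intro-video", "cwm-overview", "cwm-deep-dive", "product-sheet"]))  # True
print(sequence_is_valid(["cwm-deep-dive", "cwm-overview"]))  # False
```

A check like this lets the platform explore alternative orderings freely while guaranteeing that context-dependent assets are never shown out of order.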
What is the expected outcome of the customer’s engagement with an asset? Is a video, or a sales action such as making a call or presenting products at a meeting, expected to increase customer interest? Is there a specific call to action associated with an asset, such as a link to send a message to the distributor who shared it? After a distributor makes a presentation, is the expected next step a purchase, a scheduled call, or the sharing and viewing of a specific follow-up asset? Documenting these expectations provides the machine learner with extensive options to test in different sequences. As long as each expectation is documented, your organization has the basis for measuring the response.
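Such a documented expectation amounts to a measurable “job description” for the asset. A minimal sketch, with hypothetical field names and an illustrative target value:

```python
# Hypothetical "job description" for one asset: the outcome it is
# expected to produce and how the response will be measured.
expectations = {
    "intro-video": {
        "call_to_action": "message the distributor",
        "expected_next_step": "schedule a call",
        "metric": "reply rate",
        "target": 0.15,  # illustrative target: 15% of viewers reply
    },
}

def met_target(asset_id, observed_rate, expectations=expectations):
    """Compare an observed response rate against the documented target."""
    return observed_rate >= expectations[asset_id]["target"]

print(met_target("intro-video", 0.18))  # True
```

When the observed rate falls short of the target, that departure is exactly the signal that triggers the machine learner to test alternative combinations.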
What is the expected pace of a complete sales motion? If there is a six-step sequence associated with selling a health product today, for example, are the assets performing satisfactorily as a unit? Is the distributor taking too long to present the steps? Are prospects responding in the expected timeframe? These pace-related signals catalyze machine-generated coaching for the distributor, reminding them to follow up in the optimal timeframe to make a sale.
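One way such a pace signal could be computed is a simple overdue check; the threshold and function below are hypothetical illustrations:

```python
from datetime import date

# Hypothetical expected pace: days allowed between steps in the sequence.
EXPECTED_DAYS_BETWEEN_STEPS = 3

def followup_is_overdue(last_step_date, today,
                        expected=EXPECTED_DAYS_BETWEEN_STEPS):
    """Flag a distributor for coaching when the next step is overdue."""
    return (today - last_step_date).days > expected

print(followup_is_overdue(date(2024, 5, 1), date(2024, 5, 6)))  # True: 5 days elapsed
```

A flag like this is what turns a pace expectation into an automated coaching reminder rather than a missed opportunity.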
In the Gig Economy Group platform, clients can configure specific follow-up questions for the distributor to ask the prospect so that qualitative and quantitative feedback can be captured by humans. Determining whether the prospect is more or less interested after seeing an asset or participating in a meeting often requires the sales rep to interpret statements and signals. This ability to interpret the impact of an asset is the distinct advantage in-person sales provide to marketers. Leverage it by developing follow-up questions that can be turned into metrics.
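Turning a follow-up question into a metric can be as simple as mapping its answer choices to numbers. The answer choices and scores below are hypothetical, not the platform’s actual configuration:

```python
# Hypothetical mapping from a follow-up question's answer choices
# to a numeric interest score the machine learner can analyze.
INTEREST_SCORES = {
    "much more interested": 2,
    "somewhat more interested": 1,
    "no change": 0,
    "less interested": -1,
}

def score_response(answer):
    """Convert a distributor-recorded answer into a quantitative signal."""
    return INTEREST_SCORES.get(answer.strip().lower(), 0)

responses = ["Much more interested", "no change", "less interested"]
avg = sum(score_response(r) for r in responses) / len(responses)
print(round(avg, 2))  # 0.33
```

Aggregated across prospects, a score like this lets qualitative human judgment feed the same testing loop as view counts and click-throughs.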
Attribution can be controversial. It is a mistake to lump social and other content together with your sales assets, because social content often has different goals. However, as a direct selling company captures more information about its distributors and market, opportunities to use assets in a different context will emerge, for example by adding a personal success story normally shared in social channels to a sales sequence. The outcome is a more productive asset library with more applications, which can increase the ROI on every content investment.
With this sales process inventory in place, distributors can be equipped with an evolving selling process that can deliver ongoing improvement in revenue with greater distributor confidence and retention.