To this point in our journey of creating a concept and progressing it toward launch, we are in what I would consider the 'late phase' of concept testing. This is the point where we want to see how the entire 'package' may be received by the marketplace, but where we are still before full launch. I describe the goal in this phase as "dollars and minutes." We want to see, in small scale, how the audience responds to the concept. Ultimately, we are beyond the niceties of stated preference measures and Likert scales and now into real performance.
In this phase of testing, we can further tune the concept and hone messaging, examine areas of underperformance and overperformance, and see if the entire concept package appears to generally deliver to expectation.
The beauty of this approach is that all of this is still happening prelaunch, so in the grand scale of the program, spend and resourcing are still fairly compact. We want to make the best possible case to either fully scale and resource the concept, or otherwise sever it and move on to the next.
The intent of this Lesson is to show you how to "microtest" to understand real concept performance in the market, but in small scale.
By the end of this lesson, you should be able to:
| To Read | Documents and assets as noted/linked in the Lesson (optional) |
|---|---|
| To Do | Case Assignment: Microtesting the Offering |
If you have any questions, please send them to my faculty email, axj153@psu.edu [1]. I will check daily to respond. If your question is one that is relevant to the entire class, I may respond to the entire class rather than individually.
I'm going to be a touch blunt here, so if you're averse, simply read in a gentle, whispering internal voice.
There is a certain belief among organizations and people in the sustainability space that an offering "being interesting" or "creating goodwill" or "building brand equity" or "increasing our sustainability cred" is, in and of itself, the end goal. Perhaps this is OK in your organization. Perhaps it is part of a longer-term strategy. Fair enough. But this is not typically the case, for if there were a strategy in place, one would be met with more than blank stares when asked, "So how is your program progressing toward its goals so far?"
Here's the thing about any program or offering that just kind of happily floats around in this milky-white ether of suspended animation: If they can't show progress toward whatever goals they have set (if they have set any), they will be cut. Be it a management change or a lean year, they will be cut. Chances are, if "sustainability" as defined by these types of organizations is cut, it isn't coming back.
If you see organizations displaying signs of "sustainability as PR"–large organizations with billions of dollars of revenue and $20,000 annual "sustainability" budgets–chances are they are very much functioning in this space. The program is simply floating along: not "succeeding," but not "failing"... and not yet cut.
Because we see sustainability as an opportunity to "do well while doing good," we find it presenting us with meaningful opportunities everywhere, on fronts as diverse as employee retention and new product development. Opportunities are measured.
No matter the strategy, be it a social campaign, new product development, brand building, or any other strategy related to our sustainability offering, it lands on Dollars and Minutes. Here's why: Both of those units require a live human being (i.e., customer) to give us something of value to them: their time and/or their money.
This may all seem very capitalistic/oppressive/harsh, but, in actuality, I think you may find it quite liberating. Why? Because if we are truly about sustainability, we must be about sustainable sustainability. What feeds "sustainable sustainability?" Dollars and Minutes. What shows us that our program, initiative, or offering is indeed making an impact? Dollars and Minutes.
Here are a few examples, in application:
Because we believe in not only sustainability but also sustainable sustainability, we need to get to those Dollars and Minutes. Regardless of if we are talking about a speaker series on organic farming or a new sustainability-centric product, these two measures show how our offering is providing value to the organization ... and the world.
Without these two measures, we simply create offerings without customers and campaigns without audiences... and if you believe sustainability is about inaction, you're probably in the wrong field.
We are going to talk later about a derivative of Beta testing which is a bit different from the form applied by the tech industry (more specifically, the software industry). Nonetheless, I would like to offer a primer on "true" Beta testing.
"Pure" Beta, as applied in tech, tends to be more heavily weighted toward finding bugs and flaws in the product more than garnering live, revealed preference data on how the market receives the offering. Because this desire toward usability and performance testing tends to be the emphasis in the tech industry, their Betas tend to have little to no emphasis on understanding purchase behaviors, instead recruiting participants through either "closed" or "open" Betas.
Closed or "by invitation" Beta tests are as they sound: They are closed to the average Joe, and instead rely on either an outbound invitation from the organization, or for you to apply for consideration for a Beta slot. You will sometimes see this structure implemented when the offering being tested is of a sensitive or confidential nature, when a company wants to hand-pick a group of Beta testers based on past purchases or behaviors, or when they simply want to limit the number of participants.
Open Beta tests allow virtually anyone to participate, perhaps with minimal barriers to ensure the software/product will be used in the proper environment. This testing may allow free and open download of a Beta software version and simply follow up with all users as to their experiences, or in the case of limited physical products, may allow the first 500 or 1000 participants into the Beta before closing it.
As we will see, depending on where we are in our offering development process, we can apply the Beta logic in a wide variety of ways to meet our learning needs at that specific time. For example, if we were early in the offering development and had a prototype we needed early feedback for, we would likely lean toward a closed Beta with customers or organizations with which we are familiar to get some of the prototypes into the field and see how they perform. This would provide us fast feedback without having to recruit new and unproven participants, etc.
As an aside, the below is from one of the most well-executed closed betas I have seen in quite some time. It was for the Steam Machine, a new gaming console from an established content provider.
The specific byproduct I would like to point out here is how Betas, especially closed Betas, can be a fantastic engagement tool for customers and prospects alike.
In the case of the Steam Machine, there were hundreds of thousands, if not millions, of people not among the 300, who were vicariously participating through hourly updates and postings as these mysterious crates began arriving at homes. While I am personally not a gamer, I followed this story in 2013, as it was a fascinating example of what a beautifully deployed Beta can do in a high-engagement group.
If it is any indication, the video below of unboxing the Steam Machine Beta has almost 500k views. (Feel free to scrub through the following 7:23 video to see how the Beta was presented.)
If we are considering Pure Beta as applied by myriad software companies, the following offers a simplified view of the six steps of Beta.
From the Centercode Beta Testing Process [3]:
Before beginning a beta test, the objectives of the project must be defined. It's common for the number of unique goals in a beta test to range from just a few to upwards of 20. Defining these goals in advance ensures that the appropriate number and composition of participants are selected, an adequate amount of time is available, and everyone involved understands what needs to be accomplished.
Beta testing begins with the selection of test candidates. The ideal candidates are those who match the product's target market and whose opinions won't be swayed by a prior relationship with the company. Most private beta tests include anywhere from 10 to 250 participants. However, this number is highly dependent on the complexity of the product, the audience involved, the time available for testing, and the individual goals you'd like to achieve.
Next, products are distributed to beta participants. The focus of a beta test is to understand the customer experience as though they purchased the product themselves. With this in mind, beta is most effective when a complete package including all appropriate materials (software, hardware, manuals, etc.) is sent to participants.
Once your participants begin to use the beta product, feedback needs to be gathered quickly. This feedback comes in many valuable forms including bug reports, general comments, quotes, suggestions, surveys, and testimonials. With good beta management and communication tools, you can get a lot of feedback from test participants.
A beta test provides a wealth of data about your product and how your customers view it. However, that information is useless unless it's effectively evaluated and organized to be manageable. All feedback should be systematically reviewed based on its impact on the product and relevant teams.
While bugs are often the core focus of a beta, other valuable data can also be derived from the test. Marketing and public relations material, customer support data, strategic sales information, and other information can all be collected from an effective beta test.
When a beta test comes to a conclusion, it's important to provide closure to both the project and the beta participants. This means providing feedback to the participants about their issue submissions, updating them on the status of the product, and taking the time to thank and reward them for their effort.
As you can perhaps imagine, having a closed pure Beta with 250 participants as described would provide some extremely useful feedback from users on the software, usability, instructions, and the entire use experience.
What it would NOT help us understand is any revealed preference data on our offering, and if we were to use Beta testing to try to understand market preference, we would have quite a few issues with this methodology:
For those reasons, in our next topic, we will explore a philosophy which has foundations in Beta, but is better suited to our needs in testing the proposition and offering in the market quickly and inexpensively. We may call this philosophy "microtesting."
There tends to be a belief in testing, developing, and releasing new offerings that is based in the practices and norms of decades past instead of what is possible today, and I would like to delve into this a bit.
When you hear about "product launch," it tends to be framed as a time of finality, that when launch happens, that is IT. The button is pushed, the impact will happen shortly thereafter.
Think of the very linguistics of the term "product launch"... there aren't too many occasions when you get a "do over" on things which are "launched." Oddly, we, as a society and a profession, have elected to have the dominant metaphor for selling "innocuous new product #8956" to be the same term we use for missiles and rockets.
Furthermore, piggybacking on the launch metaphor is that product launch is the "big date," used to rally teams and give visibility to programs, as if the organization is launching a man into orbit. Calendars are marked, countdowns are created, lavish lunches from Chipotle are had for all supporting our product astronauts in their epic journey to market.
If you're simply adding that "innocuous new product #8956" alongside your other 8955 products, the launch mindset may work for you. But if you're releasing a sustainability-driven innovation into a new space, and a category your organization has not sold into before, the launch mindset can be exceptionally counterproductive.
As long as so much emphasis is placed on this single, terminal, end-all launch date, it means that much of the testing will have to be based on closed tests, surveys, and other hypotheticals. It makes sense why stated preference methodologies, despite significant flaws and inaccuracies, could become so popular, as 'There is no way we could possibly sell product before launch!'
Consider also that the launch mindset likely served people well in times when the dominant media were newspaper, radio, TV, catalog, and the like. When you were buying airtime and page ads weeks and months in advance, there was a need for definite dates around which to schedule media.
Today though, for all of the emphasis any organization may place on their product launch, what are the chances it means anything to customers? In all the consumeristic love of things, how many product launch dates really make it into our consciousness every year, especially after removing Apple from consideration? Three? Five?
And what are the chances your epic product launch date will have so much pre-release power to find itself launched into the stratosphere of public consciousness on Day One? Perhaps zero?
Imagine a stream of consciousness connecting us all for hours a day. Our thoughts, our feelings, our needs, what we want for lunch, how we will get there, classes from our favorite University, any and every thought happening in this massive whitewater. This is the internet.
So, if we seek to learn what is happening within that torrent of information, we have the ability to do what a marine biologist would do, and that is dipping a small sampling net into different locations, at different times, and with different mesh sizes, and recording what fish happen to appear in the net.
We are not damming the river to capture and inventory every fish. We are not artificially partitioning the river to create a "simulated environment." We are not trying to blindly calculate how many fish are in this specific stretch of this river by applying some obsolete calculation or methodology.
We are simply, silently, and invisibly dipping a net into the water and seeing what actually happens. This is the philosophy of what I call microtesting.
If we are engaging in microtesting, we must set aside the single-shot Product Launch Mindset, as we will be testing propositions and conducting tests online using a variety of tactics. If you go by the strictest definition, these tactics will indeed constitute "releasing" (or more appropriately, "pre-releasing") the offering to a limited number of customers. Microtesting is designed to facilitate small, tightly designed, limited-term tests in the live market from which we can refine the offering. Importantly, it is entirely within your control to limit exactly how many people see the stimulus, exactly what stimulus they see, and when you choose to pause the campaign, and even to screen competitors from seeing the stimuli. Want to test around a geographic area in which you may be building a limited test market? You can do that, as well.
Importantly, if your organization still wants a big Launch for the offering, it certainly can, but you will ensure the Launch is based on live learnings, proven messages, and fact.
In the next Lesson, we will cover some of the tactics of microtesting and how they may be applied at this phase in the innovation process to provide us with live data on virtually anything we seek to learn about the offering.
For the purposes of all of our discussions and to avoid having to address the nuances of multiple ad platforms, "PPC" will refer to Google AdWords, as it controls about 70% of the PPC advertising market.
One of the most useful byproducts of our use of PPC for microtesting is that there is a stunning amount of information and tutorials available for all levels of experience, and essentially anything you need to accomplish. You will quickly see as you microtest that there are many, many PPC consultants and experts out there who do nothing but test and refine campaigns for ecommerce conversion and sales. While our application is a bit different, know that if you use this technique for testing, external resources are ample and easily engaged.
So, please know that there is a pretty significant difference between riding a bike and riding in the Tour de France in regard to the art and science of PPC... but for our purposes, I hope to demonstrate that someone with little experience will be able to set up initial microtesting quite quickly. Please watch the following 3:53 video.
There are myriad short, step-by-step videos [11] on how to get started in setting up a Google AdWords account when you are ready. It takes about three minutes to get started.
While we will be seeing quite a bit of AdWords and we will be mocking up keywords and test designs for this week's assignment, we will not be setting up live AdWords accounts in class.
Here's why:
The AdWords Keyword Tool and other research tools used to be freely available online, until Google required you to create an account to access them. That is usually no problem, but Google no longer allows you to create that account without entering a valid Credit Card. Although you can set the account to not make any charges, I am not comfortable asking you to do so for class.
Happily, for our discussion, we can emulate about 90% of the core function of the Keyword Tool (and more) with SEMRush [12], an excellent package of research tools with fantastic analytics and trend data to help in decision making. Most importantly, it also offers a freely accessible trial. So while it will not provide the direct tap into Google ad pricing and search volumes AdWords would, it is more than ample for our purposes.
I just wanted to be clear about the disconnect of discussing AdWords while using SEMRush for research. We will each set up trial accounts for this week's assignment, but the Pro trial is only active for 24 hours, so you will want to delay setting it up until you are ready to begin your assignment.
For the sake of this example, let's imagine that we have decided to explore one of the more straightforward strategic paths we proposed in Chapter 9, "The Lean Operation." Here is how we defined that path:
In this case, our goal is to understand if we can dip our net into the stream of people interested in and currently searching related topics/keywords to see what our conversion model could look like. In essence, in this microtest, our first step is to see if the market is interested in our most simplified proposition, and part of the beauty of microtesting is that we may have many tests of different executions on the same path and different paths running simultaneously.
If you have ever heard of crafting a 30-second "elevator pitch" to effectively pitch a new offering to a prospect, you could think of what we are creating as closer to an "escalator pitch." Having 30 seconds to lay out our proposition on the web is a luxury we do not have, and we are realistically closer to the time we would have to talk to someone passing us on the down escalator while we were riding the up escalator.
At the highest level, we have to condense the most important "hook" of the strategic path into an ad totaling 130 characters. 35 of those characters are consumed by the URL you are linking to, so, in terms of usable message space, we are looking at a scant 95 characters to depict our proposition.
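The character budget above is easy to enforce mechanically when drafting ad copy. Here is a minimal sketch of such a check; the draft message and display URL are hypothetical examples, not copy from the actual test.

```python
# Check draft ad copy against the character budget described above:
# 130 characters total, 35 reserved for the display URL, leaving
# roughly 95 characters of usable message space.

TOTAL_BUDGET = 130
URL_BUDGET = 35
MESSAGE_BUDGET = TOTAL_BUDGET - URL_BUDGET  # 95 usable characters

def fits_budget(message: str, display_url: str) -> bool:
    """Return True if the draft ad copy fits within both budgets."""
    return len(message) <= MESSAGE_BUDGET and len(display_url) <= URL_BUDGET

# Hypothetical draft for the "Tired of Mowing?" proposition
draft = ("Tired of Mowing? Native Seed X grows slowly, "
         "needs no fertilizer. Mow less, save more.")
print(len(draft), fits_budget(draft, "www.example.com/native-seed"))
```

A check like this is most useful when you are iterating on many message variants at once, as each proposition may spawn several drafts.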
This may sound intimidating, but here are a few elements playing significantly into your favor:
For the sake of testing "The Lean Operation" strategic path, let us suppose we want to test the initial viability of three test propositions.
Test Proposition 1: "Tired of Mowing?" In this test, we will actively pursue people shopping for more conventional lawn supplies and attempt to "intercept" them and gauge interest around the "Grass grows slowly" and "I mow less" concepts in the path.
Test Proposition 2: "Savings/spend calculator" In this test, we will again intercept those searching for more conventional lawn supplies. This time, we will call out how much the average home spends on lawn products annually, and what they can save by converting to Native Seed X. We will personalize the message by creating a calculator that will allow the homeowner to enter some basic inputs and get a realistic savings number. This proposition is centered around "No fertilizer needed" and "I mow less."
Test Proposition 3: "Better seed" As a bit of a control, instead of intercepting those with "conventional lawn" interests, in this proposition, we will attempt to sell the prospects of Native Seed X to those already actively searching for native seed. We could consider this as a bit of a counter-strategy to the other two, as the size of the market actively searching for native lawn seed is likely minuscule as compared to conventional lawn products (we will be able to quantify this in a moment). This proposition is centered around "Native seed."
From here, we would go about writing the actual PPC ads for each of the three test propositions in AdWords. Now, of course, we are not going to be the only advertiser in the space, which is also exactly what we desire for the test: to gauge how our proposition performs not in a lab setting, but in the real world, alongside competition.
The ads themselves are static, and so we must select those keywords which are related to the content of our ads to determine when they will appear. Almost in the sense of the Cognitive Map itself, we want our ads to essentially parallel when someone is searching for information related to the selected path (i.e., staying on our strategy). This, in essence, is what provides the revealed preference testing. We are not performing a mall intercept survey, or asking random groups of people online... we are placing our proposition in front of those who are actively engaging in the topic and who may be actively looking to purchase products with *real* money.
We would select our keywords based on both our learnings through research and tools to help us make informed keyword decisions in regard to quality and traffic, which we will examine in the next topic.
For "Tired of Mowing," our keywords could be centered around high traffic terms we would want to intercept like "lawn fertilizer," and perhaps we would test lower traffic terms like "mow less" or "low growth lawn."
The keywords for "Savings/spend calculator" could also be similar, but could also perhaps extend to "lawn savings" or "fertilizer coupon" to try to appeal to those who already show a desire to spend less on lawn products.
"Better seed" keywords could be more closely related to the seed itself, as this test is for those already searching for native seeds. "Native lawn seed," "North American grasses," and the like would be our keywords here.
A landing page, by definition, is the page someone "lands" on after taking an action. Overwhelmingly, that action is clicking on an ad.
The goal of the landing page is to "continue the thought" of the ad, and to quickly express the proposition and urge the visitor to take the desired action. If the PPC ad itself was the "escalator proposition," the landing page is the "elevator proposition," as we may be designing for 30 seconds of attention as opposed to 6 seconds.
Landing page design and high-level optimization is, in and of itself, a science. There are literally thousands of people who do nothing but shift elements of a webpage around, test colors, and revise messages to gauge how it may change response and purchase behavior. In our case, because we are simply looking for "signs of life" in our propositions and to begin to understand which may rise to the top, we do not need stunning levels of landing page refinement like an Amazon would.
What we do need is a landing page which we believe expresses the proposition, and has a measurable call to action clearly on the page. Whether that call to action is a pre-order, a catalog request, a sample request, or an order of the product itself, we want the prospect to take some "next step." Ideally, the next step is indeed purchasing the product in question, but given that we may be in pre-release, an "email me when this product is available" may be a logical replacement.
The proposition itself may be expressed in video, image, text or a mix of all, or, in the case of a concept like the "Savings/spend calculator," a very simple and straightforward calculator. Again, all we are looking to do is to provide that 30 seconds of proposition and interest to engage the visitor and make them take the next step.
Please watch the following 6:02 video.
In testing the early proposition, chances are that we do not have access to massive IT or design resources, and importantly, we are by no means in a position to need them. At this point, we are trying to find those propositions for the offering that show signs of life so that we may build and refine on them, and, importantly, talk to those early adopters to understand what brought them to the offering in the first place.
While I would love to devote an entire course to microtesting (perhaps someday!), what I would like to do is introduce a few tools which can help those of us in resource-constrained, "start up mentality" positions who need to test propositions. Importantly, while mastery of these tools may come with attention and experience, they may be used effectively by those with limited expertise, and you will also find ample tutorials and resources for many of these platforms.
As mentioned previously, anything having to do with your PPC ad is created within AdWords. From housing and modifying the ads to setting daily budgets and keywords, it's in AdWords. There are many, many beginner tutorial videos on AdWords on YouTube, as well as very specific topic-oriented optimization videos. If you have the will to learn, one can almost guarantee there is a tutorial or resource on AdWords to help.
To provide some idea, Google's own AdWords channel on YouTube [18] hosts some 460 videos as of the time of writing.
SEMRush is effective in compiling data on PPC ads, keywords, and competitors into one dashboard, and is, therefore, a great tool for informing us on potential keywords of interest.
What can be extremely useful in Black and Gray Space innovations is that you have the ability to look up a competitor's website to see what keywords and ads they are currently running, and approximately how much is being spent on those keywords... as well as many other valuable research metrics.
For example, for the intercept strategy I would like to consider for "Tired of Mowing?," I would want to research those companies and websites I would want to intercept like Scott's/Miracle-Gro, Lowe's, Home Depot, and others. This would give me some idea of those lawn care and lawn fertilizer keywords they currently use, as well as the PPC ads using those keywords. We can also approach it from another angle by entering the keyword itself, and SEMRush will show us those companies who are currently using those keywords in their PPC ads.
Here is a 3:47 demo video of what SEMRush can do. We will be using this tool in this week's assignment.
Unbounce is a specific program to allow for the fast design and testing of landing pages, offering 60 or so relatively turnkey templates [21] which you can easily modify and post without having to set up web domains, hosting, or a raft of other IT-related issues. It also has the ability to track the success of your various landing page designs to determine which has been the most successful to date, which is useful as you go through the process of creating multiple landing pages.
Additionally, it has the ability to change the text on your landing page to match the content of the PPC ad on which someone clicked. So, for example, it can change the headline of the landing page on the fly to match the PPC ad headline for Person A, and show a different headline based on a different ad for Person B. This can allow you to test quite a few more messages without having to create a landing page for each and every one.
The following 2:21 video gives you some feel for the interface:
Squarespace is not as purely focused on landing pages as Unbounce; think of Squarespace's strength as giving you the ability to create a larger "landing site," or "microsite" as they are known. Microsites may only be a few pages deep, but they provide deeper information than a landing page does while preventing you from having to send someone to your full website and overwhelming or distracting them. Furthermore, your offering is likely still undergoing testing and therefore isn't yet ready to be included on the full site (which also likely requires IT and other constrained resources).
In our process, microsites can become useful for us for those offerings which have already received a couple rounds of testing and refinement via landing pages, so we have some feel for our strongest propositions and what is "working."
Squarespace also provides templates, but more importantly, a visual drag/drop type interface that just works. To give you some idea, I created this microsite for PIG Difference [24] in a few days, and it uses a modified Squarespace backend.
Please watch the following 13:50 video. You don't need to watch the whole thing (unless you want to), but if you scrub through the timeline you can see a bit of how it works:
After you have deployed the microtest, you will want to not only understand how each proposition PPC ad performs, but also how long the landing page is able to hold visitors, how many signups/purchases you gain from each (referred to as "conversions") and other interesting data which may pop up.
The most efficient way to do this (and again, the method which will provide you with seemingly endless tutorials/resources/help) is to link your Google AdWords with Google Analytics. This is a one-minute task, is handled semi-automatically within something like Unbounce or Squarespace, and allows you to understand the entire picture of how your propositions are performing relative to each other and overall.
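Once AdWords and Analytics are linked, the core comparisons boil down to a few ratios. Here is a minimal sketch of those funnel metrics computed from hypothetical campaign numbers (the impression, click, conversion, and spend figures are invented for illustration, not pulled from a live account).

```python
# Core funnel metrics for comparing propositions after a microtest.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: how well the ad draws searchers in."""
    return clicks / impressions

def conversion_rate(conversions: int, clicks: int) -> float:
    """How well the landing page turns visitors into actions."""
    return conversions / clicks

def cost_per_conversion(spend: float, conversions: int) -> float:
    """What each signup/purchase cost us in ad spend."""
    return spend / conversions

# Hypothetical results for one proposition's ad group
impressions, clicks, conversions, spend = 12000, 360, 18, 275.0
print(f"CTR: {ctr(clicks, impressions):.1%}")                      # 3.0%
print(f"Conversion rate: {conversion_rate(conversions, clicks):.1%}")  # 5.0%
print(f"Cost per conversion: ${cost_per_conversion(spend, conversions):.2f}")
```

Splitting the funnel this way is what lets you diagnose where a proposition breaks down: a strong CTR with a weak conversion rate points at the landing page, while the reverse points at the ad itself.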
Here is a brief video on the most common metrics for AdWords. Please watch the following 3:09 video.
The video below is specifically about the linking of AdWords and Google Analytics, and its value in allowing us to understand more about the path and actions visitors take after clicking an ad. Please watch the following 4:31 video.
Aside from what you will learn from the quantitative side of analytics, you should not underestimate the value of pairing those learnings with the qualitative insights you can gain from talking to early adopters. Whether it is something formalized, such as an online survey sent to visitors who took an action, or simply a phone call a week later to understand their thoughts and expectations, this small step can be invaluable in understanding the story behind the analytics.
It is important for us to remember there are people represented by all of those analytics and metrics, and if they have purchased or signed up for more information, their identity is known. What you may find is that you can fall into a certain "stock ticker" mentality as you sift through all of the analytics, where you believe that all answers can be found in the numbers. Sometimes you may find that an ad did extremely well in bringing people in to the landing page, but the landing page did not "convert" well... or that the landing page did an excellent job of keeping visitors, but few purchased or took action. These are the cases when you would want to take those in the minority and contact them to see if there were obstacles they saw, but were able to overcome.
For example, in the case where visitors are spending an average of 10 minutes on the landing page but not taking action, you can take the small handful of those who did order and talk to them. They may say things like, "I had a really hard time finding the order box, but when I did, I was OK," or "the site was really, really slow," or "the video crashed twice, but worked the third time, and that's why I bought." Any one of those insights will help you clear the analytic fog to understand what may have been the obstacles causing the majority of others to leave.
Believe me that in artistic matters the words hold true: Honesty is the best policy. Better to put a bit more effort into serious study than being stylish to win over the public.
Occasionally, in times of worry, I've longed to be stylish, but on second thoughts I say no–just let me be myself–and express severe, rough, yet true things with rough workmanship. I won't run after the art lovers or dealers, let those who are interested come to me.
In due time we shall reap if we faint not.
- Vincent van Gogh to Theo van Gogh, March 11, 1882 [29]
Up until this point in the process and our time together, we have gone through painstaking amounts of rigor and research to frame opportunities in the sustainability space, performed initial fieldwork to understand the mind space, mapped the thoughts and feelings of customers, created strategic paths, and perhaps done some hypothetical testing.
But it is now, in microtesting, that the rough, true things about the offering and its potential in the market begin to become known. There is only one truth, and that is how the offering performs in the live environment.
It has been a long road to this point, but this is the path of creating an original offering based on insight and understanding, not duplication or fabrication. Make no mistake, if successful, others are likely to copy the offering, but in virtually all cases, they will not have the insights underlying it. It is the insight that allows you to extend the offering and understand it at a deeper level than simply Xeroxing someone else's work. The insight is what allows meaningful, resonant creation.
The offering will continue to be honed and iterated, along with the messaging and other cues. There is no "resolution"... there is no "We're there!" moment when you get to open that bottle of champagne in the back of your filing cabinet.
It is also in this phase where we purposefully avoid marketing gloss, PR, and other forms of publicity. We want to understand how the complete proposition performs by itself, unaided and unclouded by extra marketing. The basic proposition should prove value in and of itself, before we begin to go "pedal down" on marketing and engage agencies.
There is a very specific reason for this: At this point, we are more concerned with understanding "what's in the box" as opposed to "what's on it." Our goal is to understand the core proposition, not what added benefit or buzz our ad agency can create.
This isn't to say that we somehow suppress or undersell the proposition in microtesting, just that we shouldn't cloud it with celebrity endorsements or introductory discounts or flashy gifts.
We must always remember that no matter how promising or disappointing the initial results are, we are incredibly limited in our understanding of the offering in the market.
As microtesting and other learnings progress over the weeks, this will become arguably one of your most difficult decisions–whether to continue to iterate on a proposition, change to a new strategy/path, or abandon the effort altogether.
What makes it all the more difficult is your role in creating sustainability-driven innovation. You are the expert on this offering, you understand its weaknesses and potentialities at many levels, and you are expected within the organization to be its lead advocate.
It is difficult to know when we are advocating, and when we are too personally interwoven with an initiative. I can tell you from experience, this threshold is very difficult to understand without meaningful measurements and a strong partner/contrarian voice. The goal is to partner with someone you can brief on the program, who does not have a business interest in it, and whom you trust. Ultimately, you want to have someone who can provide counsel and frank conversations.
I believe you will also find that it is far easier to move on to a new approach or opportunity when you already have alternatives identified. This is a strength of the approach we have taken, as you will likely have a handful of approaches and opportunities... when you start getting "bad signals" from the current opportunity, it is that much easier to shift gears to the next one.
In either case, the decision to either continue to try to push through the current opportunity or abandon and shift to another is a difficult one, indeed. If you ever need an impartial voice or some detached feedback, always feel free to reach out to me. Consider it a perk.
To refresh ourselves, our objectives for this Lesson are to:
To help cement these concepts in our minds and illuminate the potential of microtesting, this week's assignment will let us use a live version of an excellent tool for identifying keywords while framing a microtesting strategy.
When you are ready:
A free option with similar functionality (but a little less user friendly) is www.keywordspy.com [31] . If you click the "Keyword" radio button below their search box, it will return keyword results, competitors, etc. If you click any of the "View More" links below the primary results, it will ask you to sign up for a "lifetime free trial" by providing your name and email. You can likely get everything you need for the assignment without the trial.
Links
[1] mailto:axj153@psu.edu
[2] https://www.youtube.com/watch?v=AYAe7RdK1Q8
[3] https://www.centercode.com/
[4] https://www.flickr.com/photos/nasamarshall/
[5] http://www.flickr.com/photos/nasamarshall/
[6] https://creativecommons.org/licenses/by-nc/2.0/
[7] https://www.flickr.com/photos/bonnevillepower/6848956681/in/photolist-brdGd2
[8] https://www.flickr.com/photos/bonnevillepower/
[9] https://creativecommons.org/licenses/by-nc-nd/2.0/
[10] https://www.youtube.com/watch?v=05we2g3Edgs
[11] https://www.youtube.com/watch?v=BsLpi86xea4
[12] http://www.semrush.com/
[13] https://www.flickr.com/photos/philc/3283713840/in/photolist-a7GznL-8oMXLd-5dcQ5W-pf2bGC-nrBUui-61aTHS-9WGeLi-a5vMru-a5vHsS-h6RbXD-fp4Uc9-h6PLVy-foPGLp-ea2VwH-9WdYoR-9WbqP8-9WbyWv-ea2WcX-5dcQ5N-dYtoTW-9WdG7g-9WbFqa-fQ7oVT-fQoW6W-fp56Xm
[14] https://www.flickr.com/photos/philc/
[15] https://www.youtube.com/watch?v=KvgnvrFxGiE
[16] https://www.flickr.com/photos/bre/552152780
[17] https://www.flickr.com/photos/bre/
[18] https://www.youtube.com/user/learnwithgoogle
[19] https://www.youtube.com/watch?v=oBfei3HYOpU0
[20] mailto:mail@semrush.com
[21] http://unbounce.com/landing-page-templates/
[22] https://www.youtube.com/watch?v=pE3SfXi2qHs
[23] http://www.salestipaday
[24] http://www.pigdifference.org/
[25] https://www.youtube.com/watch?v=TsZLn7cG6Es
[26] https://www.youtube.com/watch?v=JfPoRGWGs98
[27] https://www.youtube.com/watch?v=8EmXFM1_xEo
[28] https://el.m.wikipedia.org/wiki/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:Van_Gogh_-_Der_Holzhacker_(nach_Millet).jpeg
[29] http://vangoghletters.org/vg/letters/let210/letter.html
[30] http://www.semrush.com
[31] http://www.keywordspy.com