The development and marketing of our AI-based sales acceleration platform launched us into the Generative AI space at a pace we did not foresee. We moved from a traditional consultancy to an AI advisory firm aimed at solving complex business issues. That comes with a few perks, such as a huge amount of time to experiment (read: play around) with new technology.
One recent experiment is worth sharing, and we’re considering making it a reality. Gysho recently completed two market studies, which proved to be a great opportunity to explore how AI can accelerate timelines and improve accuracy.
Using artificial intelligence shortened our market studies from three months to one week and compiled over 3,000 verified sources into a single strategic resource.
I have a fairly long history in marketing and sales, working in anything from frontline positions to leading teams launching new products. Reliable market research has always been crucial to informing effective strategies, but it comes with challenges:
It’s a perfect challenge for our experimental AI practice to solve.
We started, as usual, by defining the goals, structure and process of the studies. We tested what AI could do here and found that human input is crucial to creating a solid foundation.
Where your expertise is needed:
When we prompted AI to do this job, we found the results too generic, inconsistent and sometimes overly complex. However, AI was good at giving us feedback.
Where AI can help:
Currently, AI is not good enough to define a study from scratch, but it can play the role of an assistant, offering ideas and checking drafts.
The data collection phase often takes a lot of time and can be laborious: searching online for reliable sources, then reading and summarising them. This is where AI showed its true potential for market researchers.
We used three AI solutions with mixed results. We asked each of them to:
Microsoft’s Bing AI search was the unexpected leader in the field. Whilst search results are not as rich as Google’s, its ability to stay accurate and communicate shortcomings in available data was impressive.
OpenAI’s ChatGPT had just publicly launched its browsing capability, so we naturally tried this solution too. We found that its summaries were even better than Bing’s, but it was less capable of mining larger datasets and sources, and it often timed out. This has since improved.
Google’s Bard was a big disappointment. We were initially extremely excited about the detailed responses we got and looked at it as the absolute leader of the pack. That was until we performed our validation exercise to safeguard against AI hallucinations.
Artificial intelligence can be prone to hallucinations, where it makes up information to answer a question. Clearly not something you want in a market study.
To safeguard against this, we performed a validation:
Bing and ChatGPT were mostly aligned: we did not find any hallucinations, and both highlighted when specifics in their answers could not be validated.
It turned out that Bard had consistently made things up. Both manual verification and cross-checking with Bing and ChatGPT showed that cited sources did not actually exist. Unsurprisingly, statements made by Bard turned out to be completely inaccurate. It was a major disappointment, and we decided to exclude all data produced by Bard.
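For readers who want to automate part of this safeguard, the sketch below shows a simple first-pass check that cited URLs actually resolve before any manual review. It is an illustrative Python snippet, not the exact validation we ran, and the example URLs are placeholders.

```python
# Hypothetical first-pass check: verify that URLs cited by an AI assistant
# actually resolve before any manual review. Illustrative only.
import requests


def check_sources(urls: list[str], timeout: float = 10.0) -> dict[str, bool]:
    """Return a mapping of URL -> whether it responded with a non-error status."""
    results = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            # Some servers reject HEAD requests; fall back to GET in that case.
            if resp.status_code >= 400:
                resp = requests.get(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results


if __name__ == "__main__":
    cited = ["https://example.com/report-2023", "https://example.com/market-size"]
    for url, ok in check_sources(cited).items():
        print(("OK   " if ok else "DEAD ") + url)
```

A dead link does not prove a hallucination, and a live one does not prove accuracy, so a check like this only narrows down what still needs manual verification.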
With our dataset growing rapidly, we started exploring how we could use AI to derive meaningful insights. Our dataset contained over 140 summaries from 3,000 sources, which made manual analysis increasingly complex.
At this stage we did not want to expose our data and interactions to the outside world, so we built a custom solution within Azure to analyse our dataset. This involved creating a vector database, which allowed us to analyse relationships, context and dependencies. In short, it allowed us to ask questions of our dataset as if we were talking to an expert.
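To illustrate the core idea (a minimal sketch, not our Azure implementation): each summary is turned into a vector, and a question is answered by retrieving the summaries whose vectors are most similar to the question’s. The toy hashing-based embedding below stands in for a real embedding model, and the example summaries are placeholders.

```python
# Minimal sketch of vector search over research summaries (illustrative only).
# A real deployment would use a proper embedding model and a managed vector
# database; the hashing "embedding" below merely stands in for that step.
import numpy as np


def toy_embed(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: hash words into a fixed-size, unit-normalised vector."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def top_k(question: str, summaries: list[str], k: int = 3) -> list[str]:
    """Return the k summaries most similar to the question (cosine similarity)."""
    q = toy_embed(question)
    matrix = np.stack([toy_embed(s) for s in summaries])
    scores = matrix @ q  # vectors are unit-normalised, so dot product = cosine
    best = np.argsort(scores)[::-1][:k]
    return [summaries[i] for i in best]


if __name__ == "__main__":
    summaries = [
        "Summary of source 1: market size estimates for segment A ...",
        "Summary of source 2: competitor landscape and pricing ...",
        "Summary of source 3: regulatory outlook for 2024 ...",
    ]
    for hit in top_k("How are competitors pricing their offerings?", summaries):
        print(hit)
```

In a setup like ours, the retrieved summaries are then passed to a language model as context for the final answer, which is what makes the dataset feel like an expert you can question.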
What we found:
After completing both studies, we evaluated the results and found them to be transformative:
In short, AI proved extremely capable of addressing the challenges we set out to solve. Lead times decreased substantially, whilst datasets were larger and analyses more accurate. Moreover, the value to strategic planning increased thanks to the pragmatic implementation of AI assistants.
The tool we created for data analysis performed beyond our expectations. As we developed research reports and created strategic plans, we found ourselves using it as an assistant that continuously provided input on questions and strategic decisions.
Is this the perfect market research assistant to inform the strategic thinking of entrepreneurs? Moreover, could it form the foundation for more agile business strategies in SMEs, continuously informed by facts?
Together with our partner Wink-IT (Thomas Wink), we develop experimental AI solutions to solve complex business challenges. We’re considering turning this experiment into a reusable market research assistant for others.
Do you want to do more, better and faster market research? Get in touch with us!