I don’t think we’ve shared the story of our first iteration of methinks. My co-founder Philip Yun recounted our founding in VentureBeat last month. But there’s a Lean Startup story in our founding that helps people understand how we got to market and created customer momentum so quickly. I’ll share that story here as context for the news about our big video upgrade arriving later this year.
The methinks culture is about solving big problems in remote research. For a long time, app developers have been limited to simple screen-sharing capability: users could share their mobile screens only while the app being tested was open. This has been the industry standard because all the startups racing to build the best qualitative video research platform use the same or similar video technology vendors. I can’t speak for other companies, but when we started building our prototype of methinks, we used off-the-shelf solutions and then migrated to best-of-breed ones. And when we started to get good feedback, we literally followed the Lean Startup handbook and shipped Minimum Viable Products we were nearly embarrassed by, so we could get more feedback from our early customers in games and media.
All of that worked well. But as we attracted larger customers in auto, consumer electronics, and banking, the engineering overhead of working with standard solutions created limitations. Worse, many off-the-shelf video solutions forced us to pass the limitations of the underlying video technologies on to our users. That didn’t feel right.
As I noted above, in most cases it’s impossible to share a mobile device’s screen when the underlying app is closed, so users are limited to sharing content within the app. This limitation creates friction in research interviews — it puts screen-capture requirements on the research subject, and that’s just awkward. Moreover, for a researcher on the other end of the video chat, watching a livestream in real time with sub-second latency requires Adobe Flash, which will be obsolete in 2020. Conventional livestream technologies can eliminate the Flash requirement, but they come with a huge cost: latency.
methinks lets you interview consumers using the best video chat technology available, with screen sharing, recording, video file sharing, and annotation, so you can capture key consumer insights that impact your whole company. methinks is the best qualitative research solution available.
As soon as we received funding this year, we started building video research capabilities that are super easy to use, even for the non-tech-savvy. Traditional video chat and screen-sharing solutions typically require a certain level of technical expertise. Relying on a third-party vendor had prevented us from further improving the user experience of the methinks platform and from opening methinks to users of all ages and any level of technical expertise. We’re happy to say that this period of development is behind us.
So, to restate the problem: how can we let methinks Thinkers share their mobile device’s screen with just a single tap, regardless of which app they open, wirelessly, while engaging with the researcher in a real-time video chat with only milliseconds of delay?
Solution: We developed our own video chat back-end, built on the Web Real-Time Communication (WebRTC) standard. With our improved video chat back-end, we dramatically reduce the technical requirements of conducting and taking part in remote user research, which in turn opens the door of remote qualitative user research to a much broader audience, with greater insights to be gained from better, less clunky screen sharing.
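methinks hasn’t published its implementation, but as a rough sketch of the approach, a browser-side WebRTC screen share comes down to capturing the display with `getDisplayMedia` and attaching the tracks to an `RTCPeerConnection`. The STUN server and the signaling callback below are illustrative assumptions, not details of the methinks back-end:

```javascript
// Hypothetical browser-side sketch of a WebRTC screen share.
// The STUN server URL and signaling hook are assumptions for illustration.

function createPeerConnection(onIceCandidate) {
  const pc = new RTCPeerConnection({
    // A public STUN server helps peers discover their reachable addresses.
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  // ICE candidates must be relayed to the other peer via your own signaling
  // channel (e.g. a WebSocket to the back-end) — represented here by a callback.
  pc.onicecandidate = (event) => {
    if (event.candidate) onIceCandidate(event.candidate);
  };
  return pc;
}

async function startScreenShare(pc) {
  // getDisplayMedia prompts the user once to pick a screen — effectively
  // the "single tap" experience described above.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream); // send the captured screen to the remote peer
  }
  return stream;
}
```

After this, the usual WebRTC offer/answer exchange (`createOffer`, `setLocalDescription`, and the remote peer’s answer) completes the connection; because media flows peer-to-peer over UDP where possible, end-to-end latency is typically in the tens to hundreds of milliseconds rather than the multi-second delays of conventional livestream protocols.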
Another benefit of building our own video chat back-end is that our apps can easily enable ultra-low-latency livestreams, allowing researchers to observe users’ unmoderated usage of their products, with live intercept features, anywhere with internet access. This low latency is critical to smooth conversations: a natural, clear back-and-forth where no one talks over anyone else or waits for the on-screen visuals to catch up with the conversation.
We’re almost done testing the first deployment of this new back-end, and we expect to deploy it widely for all of our customers by the end of the year.