For almost as long as the Internet has existed, there have been attempts to connect those who have questions with the right answers. More recently, the focus has shifted to identifying the people qualified to provide those answers and ranking them accordingly.
Google toyed with Authorship, which aimed to link individuals to their work on the web, thereby building greater influence in the topics for which they were considered experts.
Bing included articles from “People who know” in the sidebar of search results, linking to authors (mainly journalists) who had written pieces relevant to your query.
Quora, the leading Q&A service on the web, seeks to use its community to modify and fact-check both questions and answers to provide “the definitive answer for everybody forever” – a bold statement.
And now there is Jelly. Again.
After the relaunch of Jelly, some have been asking why it is even called a search engine when it appears to be more of a Q&A service like Quora, albeit one with the intention of being much simpler to use.
Jelly combines technology with humanity, using an element of AI to route questions to relevant topics and, therefore, people who are most likely to provide an answer.
But this is only the beginning.
In an email sent to users, Biz Stone advises that “this is just the beginning of some very ambitious plans” so – though I could be very wrong – I’d like to lay out how I envisage Jelly could grow and develop.
Q&A or Search?
Jelly is currently inefficient: its largely human approach, and the lack of any search beyond asking a question, mean the same queries are made again and again, leaving many to go without a response. As such, the system will need to develop to reduce duplication and provide better answers.
You may be wondering “what’s the point of a search engine without a search function?” but if Jelly is able to achieve what I think it can then you won’t need input beyond asking a question – the system will do the rest for you.
The difference between a search engine and a Q&A service is that the former needs a database to pull its answers from so, to begin with, usage will help to populate that database.
We are currently in the data collection phase: Jelly is learning about who we are, what we know and are interested in, but it will also be learning answers to common questions. More on this later.
Although you are not required to fill out an extensive profile you are advised to select the topics that interest you so that questions can be routed correctly. But this is only half the story.
Rather than a traditional search engine, Jelly is actually building an expertise engine using your answers as a guide. The system learns more about you the more answers you provide but it has a secret weapon in its arsenal – marking answers as helpful.
How “helpful” you are is the only publicly visible metric and, I’d wager, is also going to power aspects of the service going forward. Luckily, it is a metric that cannot easily be gamed, as it is determined by the person who asked the question.
Jelly is designed to connect those asking questions to other human beings who have direct knowledge, experience or opinion rather than returning a sea of blue links which leave users to fend for themselves. It still, however, needs to deal with the inefficiency I mentioned above.
Automation may seem counter to the central idea, but the Jellybot already auto-responds to certain items – mainly those that appear to be test questions or basic queries about the service itself – and even over the past few days I have seen it respond increasingly often. Maybe it’s just that more people are asking questions, or maybe it’s because it is learning.
Jelly may want to enable answers from real people with similar experience or first-hand knowledge, but as the service grows it is likely to become impractical for this to always occur in real time – question asked, question answered. This is where the helpful measure comes into play.
Currently, the AI routes questions to those most likely able to answer them and the system will be learning. Answers which are marked as helpful are, no doubt, weighted to assist the AI in routing but, in future, helpful answers may power further automation. There is no reason that common, consistent answers to frequently asked questions cannot be returned automatically by the AI with a good deal of accuracy.
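To make that concrete, here is a minimal sketch of what helpful-weighted routing might look like. Everything in it – the user data, the scoring, the function names – is my own assumption for illustration, not anything Jelly has published:

```python
# Hypothetical sketch of helpful-weighted question routing.
# Each user follows topics and accumulates a "helpful" count per topic,
# awarded by the people whose questions they answered.
users = {
    "alice": {"topics": {"coffee", "travel"}, "helpful": {"coffee": 12, "travel": 3}},
    "bob":   {"topics": {"coffee"},           "helpful": {"coffee": 2}},
    "carol": {"topics": {"travel"},           "helpful": {"travel": 7}},
}

def route_question(topic, limit=2):
    """Rank the users who follow a topic by how helpful they have been in it."""
    candidates = [
        (user, data["helpful"].get(topic, 0))
        for user, data in users.items()
        if topic in data["topics"]
    ]
    # Most-helpful answerers first; ties broken alphabetically for stability.
    candidates.sort(key=lambda pair: (-pair[1], pair[0]))
    return [user for user, _ in candidates[:limit]]

def mark_helpful(user, topic):
    """The asker's 'helpful' mark feeds straight back into future routing."""
    users[user]["helpful"][topic] = users[user]["helpful"].get(topic, 0) + 1

print(route_question("coffee"))  # ['alice', 'bob']
```

The key design point is the feedback loop: every “helpful” mark from an asker nudges future routing, so the system keeps learning without any explicit training step.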
Common questions still receive relevant, insightful, human replies but those replies are instant, improving the user experience and leaving those answering questions free to tackle the more unique queries.
But not all questions are the same or seek the same type of response – this is why Biz advised that Jelly might be the only search engine that “can say you asked the wrong question” or “give you answers you wanted but didn’t think to ask.” Being helpful is not the only criterion that may be at play.
When providing an answer you can also fill out a “How do you know” box to demonstrate why you are qualified to do so. This might allow Jelly to distinguish between:
– academic knowledge (I have a degree in x)
– eye witness reports (I was there when)
– opinion or conjecture
Consequently, different answer types can carry different weightings, and we could potentially be offered different types based on how we word our questions and the kind of answer we seek.
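As a rough illustration of how a “How do you know” note might translate into those answer types and weightings – the keyword cues, categories and numbers below are entirely my own guesses, not Jelly’s method:

```python
# Hypothetical sketch: classify an answer's "How do you know" note into one
# of the three types above, then weight the answer accordingly.

# Crude keyword cues for each answer type (pure assumption).
CUES = {
    "academic":   ("degree", "phd", "studied", "researcher"),
    "eyewitness": ("i was there", "i saw", "first hand", "first-hand"),
}

# Relative weights; a real system might flip these depending on whether the
# question sounds like "what happened at..." or "why does...".
WEIGHTS = {"academic": 1.5, "eyewitness": 1.3, "opinion": 1.0}

def classify(how_do_you_know):
    """Map a free-text qualification note to an answer type."""
    text = how_do_you_know.lower()
    for answer_type, cues in CUES.items():
        if any(cue in text for cue in cues):
            return answer_type
    return "opinion"  # default: conjecture with no stated qualification

def score(base_helpfulness, how_do_you_know):
    """Combine the helpful metric with the answer-type weighting."""
    return base_helpfulness * WEIGHTS[classify(how_do_you_know)]

print(classify("I have a degree in geology"))     # academic
print(score(10, "I was there when it reopened"))  # 13.0
```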
But what about search?
It’s a search engine so shouldn’t it, like, have search? Well, this is where things can get very interesting if they head in the direction I feel they could.
Jelly is human “powered” search but that is quite a loose term in the current context. Yes, we ask questions and get one or more responses but this still conforms more to the Q&A model – until you start linking questions to historic data.
What if, in addition to automated responses after posting, Jelly were to drastically reduce question duplication by pointing the user to existing responses before they even submit the question to the stream? The AI could scan the question, identify whether it is sufficiently common and immediately suggest a number of answers – all relevant, all written by real people, just not in real time.
If the answers provided do not sufficiently answer the question, or the question is unique enough not to have reliably accurate answers available, it will then get routed to those best suited to answer.
One point of input but multiple potential outcomes.
No need for a secondary search feature.
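The single point of input described above might be sketched like this, assuming a naive word-overlap similarity check – a real system would use something far more sophisticated, and the threshold and data here are invented:

```python
# Hypothetical sketch of the pre-submission flow: check new questions
# against an archive of already-answered ones before routing to humans.

answered = {
    "where can i get good coffee in portland": [
        "Try Heart Roasters.", "Coava is excellent.",
    ],
}

def similarity(a, b):
    """Jaccard overlap of the words in two questions, ignoring punctuation."""
    wa = {w.strip("?.,!") for w in a.lower().split()}
    wb = {w.strip("?.,!") for w in b.lower().split()}
    return len(wa & wb) / len(wa | wb)

def ask(question, threshold=0.6):
    """Return stored answers for a near-duplicate, else route to people."""
    for past, answers in answered.items():
        if similarity(question, past) >= threshold:
            return ("existing", answers)
    return ("routed", [])  # falls through to live human answerers

print(ask("Where can I get good coffee in Portland?"))
```

One entry point, two outcomes: a common question is answered instantly from the archive, while a sufficiently novel one drops through to real people – which is exactly why no separate search box is needed.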
Human powered means being powered by opinion, feeling, emotion and bias – it is not always A → B. Human powered means grey areas and fuzzy logic.
What if one answer isn’t always right? What if we are automatically presented with alternatives, opinions, discussion points? What if the AI could scan questions and (based on their nature) link them, not only to the right answers, but also to the most thought-provoking? What if the AI could cross-link related questions, allowing us to explore a topic more deeply than our limited question would on its own?
Being able to interact with those answering questions by way of comments allows us to set additional context or suggest counterpoints so that data may become almost as important as the answer itself.
I may be getting way ahead of myself here.
Not knowing how advanced the Jelly AI is, I don’t know how much of this is possible or how long it would take to get there. Although Biz says that this is just the start, I also don’t know whether this degree of AI involvement is even on the roadmap or is anathema to Jelly’s whole concept.
Either way, providing we get past the initial shiny new toy stage, I can see great things ahead for Jelly.