In case you’re saying, “Wolfram what?”, here’s a little reading:
http://www.wolframalpha.com/
http://www.bbc.co.uk/blogs/technology/2009/05/does_wolfram_work.html
http://news.bbc.co.uk/1/hi/technology/8052798.stm
http://www.guardian.co.uk/news/blog/2009/may/18/wolfram-review-test-google-search
http://www.theregister.co.uk/2009/05/19/dziuba_wolfram/
http://www.theregister.co.uk/2009/03/17/wolfram_alpha/
http://www.theregister.co.uk/2009/05/18/wolfram_alpha/
OK – I’ll start by announcing a vested interest here. I occasionally write software that attempts to make sense out of straight English questions and phrases, and then by cunning trickery makes the response from the program appear ‘sensible’ as well. So I know something about how to make software appear smarter than it actually is. And I’m afraid that at first glance I regard Wolfram Alpha as over-hyped, under-delivering and pretty much unsure of its position in the world.
But the folks at Wolfram Research score highly for the coverage they’ve managed to get!
WA is described as a Computational Knowledge Engine, rather than a search engine. However, its raison d’être is to answer questions, and any piece of software on the internet that does that is going to be regarded by users as some sort of search engine – and the ‘Gold Standard’ against which all search engines tend to be judged is Google. So, first question…
Is it fair to compare WA and Google?
Not really, and Wolfram himself acknowledges this. WA is regarded by the company as a means of getting information out of the raw data to be found on the Web, and it does this by using what’s called ‘curated’ data – that is, Wolfram’s team manage the sources used for the data and also the presentation of that data. This makes it very good at returning solid factual and mathematically oriented data in a human-readable form.
Whereas Google will return you a list of pages that may be useful, WA will return data structured into a useful-looking page of facts – no links, just the facts, plus a list of the sources used to derive the information. The results displayed are said to be ‘computed’ by Wolfram Research, rather than just listed, as is the case with a search engine.
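To make that distinction concrete, here’s a toy sketch of my own (the fact table, values and function names are invented for illustration – this is not Wolfram’s implementation) of how an answer might be assembled from a hand-checked fact table rather than from a list of links:

```python
# Toy sketch of the 'curated data' idea: answers come from a hand-checked
# table of facts and their sources, not from a ranked list of links.

CURATED_FACTS = {
    # hypothetical, hand-entered values keyed by (entity, property)
    ("united kingdom", "population"): ("61 million", "National statistics office"),
    ("united kingdom", "capital"): ("London", "CIA World Factbook"),
}

def computed_answer(entity, prop):
    """Return a fact plus its source, or admit defeat - no links involved."""
    fact = CURATED_FACTS.get((entity.lower(), prop.lower()))
    if fact is None:
        return "Isn't sure what to do with your input."
    value, source = fact
    return f"{prop.title()} of {entity.title()}: {value} (source: {source})"

print(computed_answer("United Kingdom", "population"))
```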
Is it a dead end?
WA relies on curated data – that is, existing web data is massaged and manipulated into a format that is searchable by the WA algorithms and that can then be presented in a suitable form for review. This is likely to be a relatively labour-intensive process. Let’s see why…
In a perfect world, all web data would carry ‘semantic tagging’ – basically additional information that makes the meaning of a web page explicit. Google, for all its cleverness, doesn’t have any idea about the meaning of web page content – just how well or poorly it’s connected to other web pages and what words and phrases appear within the page. They do apply a bit of ‘secret sauce’ to try to get the results of your search closer to what you really want, assuming you want roughly the same thing as others who’ve searched Google for it. Semantic tagging would allow a suitably written search engine to start building relationships between web pages based on real meaning. Now, you might just see the start of a problem here…
If a machine can’t derive meaning from a web page, then the semantic tagging is going to have to be human-driven. So for such a tool to be useful we need some way of ensuring that as much existing web data as possible gets tagged. Or we start from tomorrow, say that every new page must be tagged, and write off the previous decade of web content. You see the problem.
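To show what semantic tagging buys you, here’s a toy example – the tag format is made up purely for illustration (real schemes include RDF/RDFa and microformats) – where a question can only be answered safely from the tagged version of a page:

```python
# Toy illustration of why explicit semantic tags matter.

plain_page = "Paris has been the capital of France since the 10th century."

# The same content with machine-readable meaning attached (invented format):
tagged_page = {
    "text": "Paris has been the capital of France since the 10th century.",
    "triples": [("France", "capital", "Paris")],  # subject, predicate, object
}

def capital_of(country, page):
    """A program can only answer reliably when the meaning is explicit."""
    if isinstance(page, dict):
        for subj, pred, obj in page["triples"]:
            if subj == country and pred == "capital":
                return obj
    return None  # from plain prose alone, the machine can't safely derive this

print(capital_of("France", tagged_page))  # -> Paris
print(capital_of("France", plain_page))   # -> None
```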
What the WA team have done is take a set of data from the web, massage and standardise it into a format that their software can handle, then front-end the system with a piece of software that makes a good stab at natural language processing to get the meaning of your question out of your phrase. For example, typing in ‘Compare the weather in the UK and USA’ might cause the system to assume that you want comparative weather statistics for those two countries. (BTW – it doesn’t; more on this later.)
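For a flavour of what such a front end has to do – and this is a crude sketch of the general idea, nothing to do with Wolfram’s actual parser, with patterns invented for the example – here’s a pattern-matching toy that tries to pull an intent and some slots out of a question:

```python
import re

# Crude pattern-matching front end: map a question onto (intent, slots).
PATTERNS = [
    (re.compile(r"compare the (\w+) in (?:the )?(.+?) and (?:the )?(.+)", re.I), "compare"),
    (re.compile(r"(\w+) in (?:the )?(.+)", re.I), "lookup"),
]

def parse(question):
    """Return an (intent, slots) pair, or give up."""
    for pattern, intent in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return intent, match.groups()
    return "unknown", ()

print(parse("Compare the weather in the UK and USA"))
# -> ('compare', ('weather', 'UK', 'USA'))
```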
The bottom line here is that the data set has had to be manually created – something that is clearly not possible on a regular basis. And a similar process would have to be carried out to get things semantically tagged. And if we COULD come up with a piece of software that could do the semantic analysis of any piece of text on the web, then neither of these approaches would be needed anyway.
In a way, WA is a clever sleight of hand, but ultimately it’s a dead end that could potentially swallow up a lot of valuable effort.
Is it any good?
The million-dollar question. Back to my ‘Compare the weather in the UK and USA’ question. The reason I picked this was that WA is supposed to have a front end capable of some understanding of the question, and weather data is amongst the curated data set. I got a ‘Wolfram|Alpha isn’t sure what to do with your input.’ response. So I simplified and gave WA ‘Compare rainfall london washington’ – same response. I then went to Google and entered the same search, and at the bottom of page 1 found a link – http://www.skyscrapercity.com/showthread.php?t=349393 – that had the figures of interest. Now, before anyone starts on me, I appreciate that the data that would have been provided by WA would have been checked and so would be accurate. But I deliberately put a question to WA that I expected it to be able to answer if it was living up to the hype.
I then gave WA ‘rainfall london’ as a search and got some general information (not a lot) about London. Giving ‘rainfall london’ to Google, I found links to little graphs coming out of my ears. A similar search on ‘rainfall washington’ gave me similar links to data on Washington rainfall.
WA failed the test, I’m afraid.
Will it get better?
The smartness of any search tool depends upon the data and the algorithms. As we’re relying on curated data here, improvements might come through modifications to the data, but that might require considerable effort. If the algorithms are ‘adaptive’ – i.e. they can learn whether the answers they gave were good or bad – then there might be hope. This would rely on a feedback mechanism from searchers to the software, basically saying ‘Yes’ or ‘No’. If the algorithms have to be hand-crafted, improvement is likely, BUT there is the risk of over-fitting the algorithms to the questions that people have actually asked – not the general space of what MAY be asked.
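As a sketch of what that yes/no feedback loop might look like – entirely hypothetical, the scoring scheme and names are mine and nothing to do with how WA actually works – the idea is simply to keep a running score per question-and-answer pair and prefer answers that searchers endorsed:

```python
from collections import defaultdict

# Hypothetical yes/no feedback loop: nudge a score for each
# (question, answer) pair and rank candidates by accumulated feedback.
scores = defaultdict(int)

def record_feedback(question, answer, useful):
    """Searcher says 'Yes' or 'No'; adjust the score accordingly."""
    scores[(question, answer)] += 1 if useful else -1

def best_answer(question, candidates):
    """Prefer the candidate answer with the best feedback so far."""
    return max(candidates, key=lambda a: scores[(question, a)])

record_feedback("rainfall london", "monthly averages table", True)
record_feedback("rainfall london", "general facts about London", False)
print(best_answer("rainfall london",
                  ["general facts about London", "monthly averages table"]))
```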
And time passes…
As it turned out, this post never moved from ‘Draft’ to ‘Published’ because of that thing called ‘Life’. So, a month or two have passed, and I’ve decided to return to Wolfram Alpha and see what’s changed…
Given the current interest in the band Boyzone, I did a quick search. WA pointed me to a Wiki entry – good – but nothing else. Google pointed me to stacks of stuff. ‘Average rainfall in London’ got me some useful information about rainfall in the last week. OK… back to one of my original questions, ‘Compare rainfall London Washington’ – this time I got the London data with the Washington equivalent on it as well, which is sort of what I wanted. Google was less helpful this time than back when I wrote this piece.
So… am I more impressed? Maybe a little. Do I feel it’s a dead end? Probably, yes, except in very specific areas that might already be served by things like Google and Wiki anyway.
Do I have an alternative solution for the problem?
If I did, do you think I’d blog it here and expose myself to all that criticism? 🙂