Monthly Archives: May 2012

Zuckerberg worth billions – what does it mean?

In a couple of presentations over the past few years I have explored the fact that our brains are not really wired to understand exponential growth and large numbers. This is a product of our evolutionary past: our mental wiring is the result of the survival pressures our ancestors faced millions of years ago, and these concepts were not important in that context.

This is also the reason why, in my day job in business intelligence, I strongly advocate against using logarithmic scales on charts, except with people who work with them regularly and have learned to overcome our innate tendency to misread them (and, in my experience, such people are few and far between).

Interestingly, exponentials also have an effect in the financial world, where IPOs make people billionaires and national debts run into trillions of dollars.

Millions, billions and trillions are exponentials on steroids, because each is not one but three orders of magnitude (i.e. 10 x 10 x 10 = 1,000) greater than its predecessor … and we can slip from one to the other without really appreciating the difference.
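One way to get a feel for those thousand-fold jumps is to convert the numbers into time. A quick sketch in plain Python (the figures are just arithmetic, not from the post):

```python
# Converting millions, billions and trillions into seconds gives a feel
# for how big each thousand-fold jump really is.
SECONDS_PER_DAY = 60 * 60 * 24              # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million, billion, trillion = 10**6, 10**9, 10**12

print(f"a million seconds  is about {million / SECONDS_PER_DAY:.1f} days")     # 11.6 days
print(f"a billion seconds  is about {billion / SECONDS_PER_YEAR:.1f} years")   # 31.7 years
print(f"a trillion seconds is about {trillion / SECONDS_PER_YEAR:,.0f} years") # 31,689 years
```

A million seconds is a holiday; a billion seconds is a career; a trillion seconds is longer than recorded human history.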

In my experience, the only way to get a handle on this is to bring it back to numbers we have some sort of gut feel for.

For example, following Facebook’s IPO last week, and given Mark Zuckerberg’s net worth, what means as much to him as a cup of coffee means to the average 28-year-old? Well …

Mark Zuckerberg’s remaining stake in Facebook, plus the money he raised in the IPO last week, totals about $20 billion.

The average net worth of a 28-year-old in the US is $8,525 (according to the calculator at http://cgi.money.cnn.com/tools/networth_ageincome/index.html).

So a $3.50 latte represents 0.0411% of the average 28-year-old’s net worth.

0.0411% of Mark Zuckerberg’s net worth is just over $8.2 million (or the equivalent of a latte a day for the next 6,427 years)!
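For what it’s worth, the arithmetic above can be checked in a few lines of Python. The inputs are the ones quoted in the post ($3.50 latte, $8,525 average net worth, roughly $20 billion for Zuckerberg); the year is taken as 365 days:

```python
# Back-of-the-envelope check of the latte comparison.
latte = 3.50
avg_net_worth = 8_525
zuck_net_worth = 20_000_000_000

fraction = latte / avg_net_worth              # latte as a share of average wealth
print(f"{fraction:.4%}")                      # 0.0411%

zuck_equivalent = fraction * zuck_net_worth   # the same share of $20bn
print(f"${zuck_equivalent:,.0f}")             # $8,211,144 (just over $8.2 million)

years_of_lattes = int(zuck_equivalent / latte / 365)
print(f"{years_of_lattes:,} years")           # 6,427 years
```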

And even with all that, I am not sure I can get my head round the size of these numbers. If anyone has a better way, I would love to know.


Google is not the nirvana for BI

I often hear people say: “Google is the nirvana for BI”. By this, they usually mean that they believe the ideal BI interface is a simple text box into which you can type a query, e.g. “what was the revenue for sprockets in Asia last quarter”, and get an answer.

This is wrong on so many levels, one of which is highlighted by Google itself …

When Google can infer a meaning from a search request, over and above treating it as a string of keywords, it uses this meaning to return more structured results. This can produce amusing results (depending on your sense of humor) as well as useful ones. For example, try typing any of the following words/phrases into a Google search box and check out the result (which is not always obvious):

askew, do a barrel roll, anagram, binary (or octal or hex), kerning

Or, my personal favorite: recursion

All of these depend on the meaning of the word to give special results. There are many more useful examples, first returning simple results:

age of the earth, 10 dollars in pounds, 10 kilometers in inches, US gallon in UK gallons, seconds in 47 years, time in san francisco, sunset in London, define visualization, eleven times 42, e^(i*pi)

Then, progressively more complex results:

population San Rafael, SAP quote, Arsenal football club, NYY, BA85, movies in Hammersmith London, flights from Kelowna, graph of sin x

And, just for good measure, a couple more fun ones:

number of horns on a unicorn, answer to life the universe and everything

The point is that by understanding more about what the end user wants to see in response to a query, Google is able to give them significantly improved information, albeit with more work on Google’s part (both in understanding the requirements and in the effort of producing the results).

This is a huge lesson for BI.

When we deliver results to end users, the more context and understanding we inject into the results, the more valuable those results are. Self-service (certainly in terms of ad-hoc query-type capabilities) is rarely the answer. In fact, self-service BI is often more likely to be “BI hell” than “BI nirvana”, and BI Apps are increasingly the answer organizations are turning to.

For more details on these and other related topics, take a look at this on-demand webinar.

Zen, Xcelsius and the Art of XWIS

Since the “#AllAccessSAP” Webinar a few weeks ago (which I wrote about here) many people have asked me what I think about “project Zen” and what Antivia’s strategy is in relation to it.

From what I have seen so far, Zen looks like it will be a great new dashboard design/development tool, but for me the most important aspect of Zen is that, in my opinion, it sets exactly the right tone for the future of dashboarding. I guess that is only to be expected given that the more I see of Zen, the more similarity I see between it and our XWIS Advantage product, albeit targeted at different environments (Zen is targeted at HTML5/JavaScript and XWIS Advantage at Xcelsius/SAP Dashboards).

The core similarity between Zen and XWIS Advantage is that they are both based on an OLAP foundation, and OLAP is key to enabling the new style of dashboards (or BI Applications) which we are seeing emerge in the market (for more on BI Apps see my recent blog post here, SCN article here and on-demand webinar here).

The reason that OLAP makes BI Apps easier is that it simplifies the implementation of essential features like drill-down, drill-across, cross-component synchronization, ad-hoc “slice and dice” and many others. Putting these features inside a design environment like Xcelsius suddenly opens up a new way of delivering information, in a dramatic evolution from static “at-a-glance” dashboards to interactive BI Apps.
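To illustrate why an OLAP-style foundation makes features like drill-down and slice-and-dice cheap to implement, here is a minimal, hypothetical sketch in plain Python. The fact table, dimension names and figures are all invented for illustration; a real OLAP engine is of course far more sophisticated:

```python
from collections import defaultdict

# A toy "cube": a fact table of (region, product, quarter, revenue) rows.
facts = [
    ("Asia",   "Sprockets", "Q1", 120),
    ("Asia",   "Sprockets", "Q2", 150),
    ("Asia",   "Widgets",   "Q1",  80),
    ("Europe", "Sprockets", "Q1", 200),
    ("Europe", "Widgets",   "Q2",  90),
]

def rollup(rows, dims):
    """Total revenue over the chosen dimensions (0=region, 1=product, 2=quarter)."""
    totals = defaultdict(int)
    for row in rows:
        totals[tuple(row[d] for d in dims)] += row[3]
    return dict(totals)

# Top-level view: revenue by region.
print(rollup(facts, [0]))                      # {('Asia',): 350, ('Europe',): 290}

# Drill-down: revenue by region and product.
print(rollup(facts, [0, 1]))

# Slice and dice: Asia only, broken down by quarter.
asia = [row for row in facts if row[0] == "Asia"]
print(rollup(asia, [2]))                       # {('Q1',): 200, ('Q2',): 150}
```

The point is that once the data is held in this dimensional shape, every interactive feature (drilling deeper, slicing on a member, synchronizing components) is just a different choice of dimensions and filters over the same cube.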

In practical terms, this means that XWIS customers can implement Zen-like capabilities in Xcelsius today, allowing them to deliver the BI Applications their users are demanding. Additionally, due to the OLAP similarities between the products, should they wish to reuse their XWIS/Xcelsius content in Zen at some point in the future, they should have a considerably easier time doing so than they would with non-XWIS dashboards.

Ultimately, this should be a big win-win-win for Antivia, for SAP and for our joint customers. And, once again, it is a vindication of the Xcelsius SDK and the SAP partner ecosystem ethos.

The Zen SDK will also play a part in the future, allowing us to bring XWIS features into the Zen environment, to make sure that whichever BI App/dashboard tool SAP customers use, they can benefit from the unique capabilities of XWIS.

So the short answers to the questions I posed at the start are:

1) We are excited about Zen and, in particular, about the vision for dashboarding which it embodies.

2) Over the coming months and years we will continue to innovate with XWIS, helping Xcelsius customers deliver better dashboards today and increasing their interoperability with Zen in the future; with the Zen SDK we will also add our unique capabilities into the Zen environment.

For more detail on these topics visit our Zen Today, Zen Tomorrow microsite.

Personal Analytics – A Cautionary Tale for Big Data Projects?

In a recent blog post Stephen Wolfram, creator of Mathematica, author of “A New Kind of Science”, creator of Wolfram|Alpha, and founder and CEO of Wolfram Research, wrote how, over a long period of time, he amassed “probably one of the world’s largest collections of personal data”. In the post, he walks through various analyses he recently performed on this data.

In my opinion the results of these analyses hold an interesting, cautionary tale for people working in the new world of Big Data, where there is a risk of analysing data just because it is there.

My assessment of the various results of Stephen’s analysis would be:

  • 95% obvious – (e.g. “there’s been a progressive increase in my email volume over the years”, “peaks [in email] are often associated with intense early-stage projects, where I am directly interacting with lots of people” )
  • 5% interesting but not useful – (e.g. “7% of all keystrokes are backspaces”, “a large volume of Stephen’s work has been done between midnight and 6am”)

The working-at-night observation might be more interesting if data were analysed across many people to see if there was a correlation between not requiring much sleep and success. Alas, even if this were true it would not be that useful (unless you believe that you can train yourself to require less sleep).

There is one area which might be useful: the analysis of the amount of time Stephen worked on his book “A New Kind of Science”. The data here may help him, in the future, estimate more accurately how long another book would take to write but, ironically, it is unlikely to help him write another book any faster.

One would have hoped that with all this data, collected over so many years and studied by someone with such an analytical mind, there would be some usable insights: ones which could be acted on to make a change for the better … but, alas, it would seem not.

And that is the cautionary tale for the BI world.

Be wary of “analysis for its own sake”, or of those suggesting that expensive Big Data projects “don’t need business requirements” because they are “finding insight that wasn’t known about before”. All too often, these types of projects produce interesting results but no useful information (and by useful I mean actionable, i.e. information which can be used to drive action and therefore hope to change something).

Don’t get me wrong, I am as convinced as everyone else that the analysis of “Big Data” will provide many valuable insights over the coming months and years. I am just concerned that if we plunge headlong into it, without thinking, the amount of time, money and effort wasted will far outweigh the benefits.

As I have said before, with any business intelligence / analytics project, it should always come down to business requirements. Always have an idea of what you are looking for and why. A project to “analyse our web site data to understand if there is a better way of laying out our site to keep people on it for longer” is many, many times more likely to produce a tangible result than a project to “analyse clickstream data to see if there is anything interesting we didn’t know”.

Perhaps I am being too cynical. If I am, I would love to hear any stories about a data-analysis project which produced something truly unexpected that was used to make a significant business impact (apocryphal stories about beer and diapers/nappies need not apply).

For more thoughts on big data, particularly as it affects dashboards, watch my on-demand webinar here.