The cognitive computing company

Developing next generation technologies at the intersection of semantics, machine-learning, artificial life, social networking and other technologies.

Sunday, October 18, 2009

Data Tsunami

The legendary investor Warren Buffett is reported to have said "...it takes all of 20 years to build a reputation and a few minutes to destroy it". The recent turmoil in the markets and the sudden, astonishing collapse of Wall Street "titans" such as Bear Stearns and Lehman Brothers has only underscored this aphorism. As the most clichéd of clichés goes, "hindsight is always 20/20"; however, there is an extremely serious lesson to be learned from the last crisis: the financial meltdown did not occur due to a lack of information. In fact, it has been argued that investors and regulators were so overwhelmed with data that they could not make sense of it. Many question why enforcement was so sorely lacking; while it is easy to second-guess and blame the regulators - under-staffed and under-resourced as they are - we believe the crux of the problem was that "sense-making" from this tsunami of data was woefully inadequate.

At Cognika we are working hard to make information usable and to make sense of the data deluge. We shall shortly be announcing a suite of products that, we hope, addresses many of these issues. We are looking for beta users to test-drive our products; please contact us if this is of interest to you. Making sense of data not only makes sound business sense but is also a moral and ethical responsibility.

To participate, please email us at beta@cognika.com


Tuesday, October 13, 2009

Swimming in a sea of data

A recent NY Times article highlights the issue of data overload. It points out some of the approaches gathering momentum to keep up with the pace of data, given the enormous computational requirements. Modern algorithms are not only enormously complex to design; they are also ravenously processor-hungry and require some clever juggling to run at reasonable cost. However, approaches such as Hadoop (as pointed out in the article), HBase, and Hypertable offer a quantum leap in capabilities.

At Cognika we have been leveraging Hadoop extensively for many of our machine-learning activities. Lately we have developed some image-processing and feature-detection algorithms that would normally require significant investments in hardware. However, we have managed to achieve comparable performance using commodity hardware and squeeze out some impressive results. Please post your experiences and lessons learned; we are happy to share ours.
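For readers new to Hadoop, its core abstraction is the MapReduce pattern: a map phase emits key/value pairs, the framework shuffles them by key, and a reduce phase aggregates each group. Here is a minimal, single-machine sketch of that pattern using word counting as the task; this is purely illustrative and not our image-processing code, which runs the same pattern distributed across a cluster.

```python
# A local simulation of the map -> shuffle -> reduce pattern that Hadoop
# parallelizes across many machines. Word counting is the classic example.
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit one (word, 1) pair per word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle phase: group all values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: combine all values for one key into a single result.
    return (key, sum(values))

def map_reduce(lines):
    mapped = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(mapped).items())

counts = map_reduce(["the data the deluge", "the flood"])
print(counts)  # {'the': 3, 'data': 1, 'deluge': 1, 'flood': 1}
```

Because the map and reduce functions are independent per record and per key, the framework can scale the same logic from one commodity box to hundreds without changing the algorithm itself.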

About Cognika