With the in-memory capabilities of the Hekaton initiative, will SQL Server® 2014 be the tool that brings Big Data into more widespread use in the enterprise? Quite possibly. But however simple it makes getting started, that shouldn’t hide the fact that handling the increased workload involved will still be a matter for specialists.
Having partnered Microsoft at a number of recent events (such as the SQL Server Days in France in December 2013, and the subsequent TechDays in February this year), Bull has noticed the significant buzz being generated around the Hekaton in-memory transaction processing (In-Memory OLTP) engine: one of the key innovations planned for the next version of SQL Server, due out this year.
Thanks to columnstore indexes and in-memory storage, it is now possible to perform complex analytical calculations directly on transactional data, avoiding the kind of pre-processing (sorting, filtering, aggregation…) that greatly increases the complexity of Business Intelligence systems.
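To make this concrete, here is a minimal sketch in SQL Server 2014 T-SQL of an analytical aggregate running directly against an operational table; the table and column names (dbo.Orders, Amount and so on) are illustrative assumptions, not taken from the article.

```sql
-- Hypothetical operational table; all names are illustrative.
CREATE TABLE dbo.Orders (
    OrderId    INT            NOT NULL,
    CustomerId INT            NOT NULL,
    OrderDate  DATE           NOT NULL,
    Amount     DECIMAL(12, 2) NOT NULL
);

-- SQL Server 2014 makes the clustered columnstore index updatable, so
-- the table stays writable while queries scan it in compressed, columnar
-- form. (In 2014, a clustered columnstore must be the table's only index.)
CREATE CLUSTERED COLUMNSTORE INDEX ccix_Orders ON dbo.Orders;

-- An aggregation that would previously have needed a pre-computed summary
-- table can now run directly on the transactional data.
SELECT CustomerId,
       SUM(Amount) AS TotalAmount,
       COUNT(*)    AS OrderCount
FROM   dbo.Orders
GROUP BY CustomerId;
```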
In-Memory by Microsoft
The principle of in-memory computing is certainly nothing new, but it takes on real importance when it becomes part of the Microsoft ecosystem. What is so interesting about the Microsoft solution is its ability to mix data stored on disk with data held in memory, so performance improvements can be targeted at the largest and most heavily used tables.
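As a sketch of how that targeting works in practice, assuming SQL Server 2014 syntax (the database name, file path and table names here are hypothetical): only the hot table is declared memory-optimized, and standard T-SQL can still join it with ordinary disk-based tables.

```sql
-- One-time setup: the database needs a filegroup for memory-optimized data.
-- Database name and file path are illustrative.
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;

-- Only the most heavily used table is moved into memory; the hash index
-- bucket count is sized to the expected number of distinct keys.
CREATE TABLE dbo.ShoppingCart (
    CartId     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    CreatedAt  DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Interpreted T-SQL can mix memory-optimized and disk-based tables in one
-- query, so the rest of the schema does not have to change.
SELECT c.CartId, cust.CustomerName
FROM   dbo.ShoppingCart AS c
JOIN   dbo.Customers    AS cust
       ON cust.CustomerId = c.CustomerId;
```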
When combined upstream with StreamInsight, Microsoft’s complex event processing (CEP) platform, and downstream with Power BI, the family of Excel data analysis add-ons, Hekaton truly, and perhaps for the first time, makes Big Data accessible to the greatest number of end users. Taken together, the usability of Power BI and the power of Hekaton will open up unprecedented opportunities for users to explore event data (logs, conversations, social networks, sensors…). And that, in turn, will unlock new prospects for business innovation.
No need for a Big Bang implementation
Staying in a Microsoft environment keeps implementation costs and timescales down, and users get to grips with the new functionality faster. As a result, launching Big Data pilot projects, and testing and validating new concepts under real-life conditions, is easier and less risky. The freedom to experiment offered by the SQL Server 2014 BI stack allows organizations to approach Big Data gently. There is no need for a Big Bang when you start to use Big Data: instead, there can be a gradual, pragmatic evolution, with a progressive ramping up of the skills of all the stakeholders involved. On the technology side, using well-understood, non-invasive technology means there is no need to rebuild the whole architecture from scratch: you capitalize on what is already in place and can devote more resources to analysis capabilities. And for end users, this kind of approach lets them build up the maturity they need to get a first foot in the door of Big Data.
Looking ahead to industrialization
However, the time will inevitably come when this learning curve runs up against really ‘big’ Big Data. As the number of data items, flows and transactions explodes and the number of users multiplies, performance, reliability and security requirements become critical. In short, it is time for the Business Intelligence project to move into its industrial phase. And bringing in SQL Server 2014, and all its attendant infrastructure, to support this increased workload remains a specialist affair. The organization will need help from a true BI expert, capable of providing end-to-end support along this path of gradual transformation. Bull offers just such support, from proof-of-concept definition to the industrialization of target architectures (appropriately scaled according to usage, data volumes and expected levels of service) in its own service centers. SQL Server 2014 promises to be the perfect tool to help you take your first steps into the world of Big Data; but an industrial partner specializing in performance will still be essential if it is to deliver its full potential.