Organizations should consider multiple aspects of deploying big data analytics. These include the type of analytics to be deployed, how the analytics will be deployed technologically and who must be involved both internally and externally to enable success. Our recent big data analytics benchmark research assesses each of these areas. How an organization views these deployment considerations may depend on the expected benefits of the big data analytics program and the particular business case to be made, which I discussed recently.

According to the research, the most important capability of big data analytics is predictive analytics (64%), but among companies that have deployed big data analytics, descriptive analytic approaches of query and reporting (74%) and data discovery (64%) are more readily available than predictive capabilities (57%). Such statistics may reflect the fact that big data technologies such as Hadoop and their associated distributions have prioritized the ability to run descriptive statistics through standard SQL, which is the most common method for implementing analysis on Hadoop. Cloudera’s Impala, Hortonworks’ Stinger (an extension of Apache Hive), MapR’s Drill, IBM’s Big SQL, Pivotal’s HAWQ and Facebook’s open-source contribution of Presto all focus on accessing data through an SQL paradigm. It is not surprising, then, that the technology research participants use most for big data analytics is business intelligence (75%) and that the most-used analytic methods — pivot tables (46%), classification (39%) and clustering (37%) — are descriptive and exploratory in nature. Similarly, participants said that visualization of big data allows analysts to perform faster analysis (49%), understand context better (48%), perform root-cause analysis (40%) and display multiple result sets (40%), but visualization alone does not provide more advanced analytic capabilities. While various vendors now offer approaches to run advanced analytics on big data, the research shows that in terms of big data, organizational capabilities still revolve around more basic analytic access.
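To make the contrast concrete, below is a minimal Python sketch of the descriptive and exploratory methods the research names (pivot tables, clustering, classification) using pandas and scikit-learn. The data file and column names are hypothetical, not drawn from the research.

```python
# Minimal sketch: the most-used analytic methods cited in the research,
# illustrated with pandas and scikit-learn. The file "transactions.csv"
# and columns (region, channel, revenue, spend, frequency, churned) are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("transactions.csv")

# Pivot table (46%): summarize revenue by region and channel
pivot = pd.pivot_table(df, values="revenue", index="region",
                       columns="channel", aggfunc="sum")

# Clustering (37%): group records by spend and purchase frequency
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(df[["spend", "frequency"]])

# Classification (39%): learn a simple churn classifier from the same features
clf = RandomForestClassifier().fit(df[["spend", "frequency"]], df["churned"])
```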

For companies that are implementing advanced analytic capabilities on big data, there are further analytic process considerations, and many have not yet tackled those. Model building and model deployment should be manageable and timely, involve specialized personnel, and integrate into the broader enterprise architecture. While our research provides an in-depth look at adoption of the different types of in-database analytics, deployment of advanced analytic sandboxes, data mining, model management, integration with business processes and overall model deployment, those topics are beyond the scope of this analysis.

Beyond analytic considerations, a host of technological decisions must be made around big data analytics initiatives. One of these is the degree of customization necessary. As technology advances, customization is giving way to more packaged approaches to big data analytics. According to our research, the majority (54%) of companies that have already implemented big data analytics did custom builds using big data-specific languages and interfaces. Of those that have not yet deployed, the largest group (44%) is likely to purchase a dedicated or packaged application, followed by those planning a custom build (36%). We think this pre- and post-deployment comparison reflects a maturing market.

The move from custom approaches to standardized ones has important implications for the skill sets needed for a big data analytics initiative. In comparing the skills that organizations said they currently have to the skills they need to be successful with big data analytics, it is clear that companies should spend more time building employees’ statistical, mathematical and visualization skills. On the flip side, organizations should make sure their tools can support skill sets they already have, such as use of spreadsheets and SQL. This converges with other findings about training needs, which include applying analytics to business problems (54%), training on big data analytics tools (53%), analytic concepts and techniques (46%) and visualizing big data (41%). The data shows that as approaches become more standardized and the market focus shifts toward them from customized implementations, skill needs are shifting as well. This is not to say that demand is moving away from the data scientist completely: according to our research, organizations that involve cross-functional teams or data scientists in the deployment process are realizing the most significant impact. It is clear that multiple approaches to personnel, departments and current vendors play a role in deployments and that some approaches will be more effective than others.

Cloud computing is another key consideration with respect to deploying analytics systems as well as sandbox modeling and testing environments. For deployment of big data analytics, 27 percent of companies currently use a cloud-based method, while 58 percent said they do not and 16 percent do not know what is used. Not surprisingly, far fewer IT professionals (19%) than business users (40%) said they use cloud-based deployments for big data analytics. The flexibility and capability that cloud resources provide are particularly attractive for sandbox environments and for organizations that lack big data analytics expertise. However, for big data model building, the largest group of organizations (42%) still uses a dedicated internal sandbox environment to build models, while fewer (19%) use a non-dedicated internal sandbox (that is, a container in a data warehouse used to build models) and others use a cloud-based sandbox either as a completely separate physical environment (9%) or as a hybrid approach (9%). From the gap between IT and business responses we infer that business users are sometimes using cloud-based systems to do big data analytics without the knowledge of IT staff. Among organizations that are not using cloud-based systems for big data analytics, security (45%) is the primary reason that they do not.

Perhaps the most important consideration in big data analytics is choosing vendors to partner with to achieve organizational objectives. Given the move from custom technological approaches to more packaged ones and the types of analytics currently being implemented for big data, it is not surprising that a majority of research participants (52%) are looking to their business intelligence systems providers to supply their big data analytics solution. However, significant numbers of companies said they will turn to a specialist analytics provider (35%) or their database provider (34%). When evaluating big data analytics, usability is the most important vendor consideration, but not by as wide a margin as in categories such as business intelligence. A look at criteria rated important or very important by research participants reveals that usability is ranked highest (94%), but functionality (92%) and reliability (90%) follow closely. Among innovative new technologies, collaboration is important (78%) while mobile access (46%) is much less so. Coupled with the finding that improved communication and knowledge sharing is an important benefit of big data analytics, it is clear that organizations are cognizant of the collaborative imperative when choosing a big data analytics product.

Deployment of big data analytics starts with forethought and a well-defined business case that includes the expected benefits I discussed in my previous analysis. Once the outcome-driven framework is established, organizations should consider the types of analytics needed, the enabling technologies and the people and processes necessary for implementation. To learn more about our big data analytics research, download a copy of the executive summary here.

Regards,

Tony Cosentino

VP & Research Director


SAS Institute, a long-established provider of analytics software, showed off its latest technology innovations and product road maps at its recent analyst conference. In a very competitive market, SAS is not standing still, and executives showed progress on the goals introduced at last year’s conference, which I covered. SAS’s Visual Analytics software, integrated with an in-memory analytics engine called LASR, remains the company’s flagship product in its modernized portfolio. CEO Jim Goodnight demonstrated Visual Analytics’ sophisticated integration with statistical capabilities, which the company sees as a differentiator going forward. The product already provides automated charting capabilities, forecasting and scenario analysis, and SAS probably has been doing user-experience testing, since the visual interactivity is better than what I saw last year. SAS has put Visual Analytics on a six-month release cadence, a fast pace but one necessary to keep up with the industry.

Visual discovery alone is becoming an ante in the analytics market, since just about every vendor has some sort of discovery product in its portfolio. For SAS to gain on its competitors, it must make advanced analytic capabilities part of the product. In this regard, Dr. Goodnight demonstrated the software’s visual statistics capabilities, which can switch quickly from visual discovery into regression analysis, running multiple models simultaneously and then optimizing the best one. The statistical product is scheduled for availability in the second half of this year. With the ability to automatically create multiple models and output summary statistics and model parameters, users can create and optimize models in a more timely fashion, so the information can become actionable sooner. In our research on predictive analytics, most participants (68%) cited competitive advantage as a benefit of predictive analytics, and our research also shows that companies able to update their models daily or more often are very satisfied with their predictive analytics tools more often than others are. The ability to create models in an agile and timely manner is valuable for various uses in a range of industries.
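To illustrate the general pattern of fitting several candidate models at once and keeping the best performer, here is a brief Python sketch with scikit-learn. This is not SAS’s implementation; the candidate models, scoring approach and placeholder data are assumptions for illustration only.

```python
# Sketch of the pattern (not SAS's implementation): fit several regression
# models on the same data and keep the one with the best cross-validated score.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import cross_val_score

X, y = np.random.rand(500, 5), np.random.rand(500)  # placeholder data

candidates = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.01),
}
# Compare candidates by mean cross-validated score (R^2 by default)
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)  # refit the winner on all data
print(best_name, scores)
```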

SAS enables high-performance computing in three ways. The first is the more traditional grid approach, which distributes processing across multiple nodes. The second is the in-database approach, which allows SAS to run as a process inside the database. The third is extracting data and running it in memory. The system has the flexibility to run on different large-scale database types such as MPP databases as well as Hadoop infrastructure through Pig and Hive. This is important because for 64 percent of organizations the ability to run predictive analytics on big data is a priority, according to our recently released research on big data analytics. SAS can run via MapReduce or directly access the underlying Hadoop Distributed File System and pull the data into LASR, the SAS in-memory system. SAS works with almost all commercial Hadoop implementations, including Cloudera, Hortonworks, EMC’s Pivotal and IBM’s InfoSphere BigInsights. The ability to put analytical processes into the MapReduce paradigm is compelling because it enables predictive analytics on big data sets in Hadoop, though the immaturity of initiatives such as YARN may relegate such jobs to batch processing for the time being. The flexibility of LASR and the associated portfolio can help organizations overcome the challenge of architectural integration, which is the most widespread technological barrier to predictive analytics (for 55% of participants in that research). Of note is that the SAS approach provides a purely analytical engine; since the algorithms involve no SQL, there is no SQL-related overhead, and processing runs directly on the supporting system’s resources.
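As a rough illustration of what pushing an analytic computation into the MapReduce paradigm looks like, here is a minimal Hadoop Streaming sketch in Python that computes a per-group mean. This is a generic example, not SAS’s embedded procedures, and the input column layout is hypothetical.

```python
#!/usr/bin/env python
# mapper.py -- emit (key, value) pairs so a per-group statistic
# can be computed in parallel. Assumed CSV layout: group,amount
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    fields = line.split(",")
    if len(fields) == 2:
        print(f"{fields[0]}\t{fields[1]}")
```

```python
#!/usr/bin/env python
# reducer.py -- input arrives sorted by key; aggregate values per key
# and emit the mean for each group.
import sys

current_key, total, count = None, 0.0, 0
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    key, value = line.split("\t")
    if key != current_key and current_key is not None:
        print(f"{current_key}\t{total / count}")
        total, count = 0.0, 0
    current_key = key
    total += float(value)
    count += 1
if current_key is not None:
    print(f"{current_key}\t{total / count}")
```

These two scripts would typically be submitted with the Hadoop Streaming jar, passing them as the mapper and reducer arguments; the exact jar path and options vary by distribution.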

As well as innovating with Visual Analytics and Hadoop, SAS has a clear direction in its road map, intending to integrate the data integration and data quality aspects of the portfolio in a single workflow with the Visual Analytics product. Indeed, data preparation is still a key sticking point for organizations. According to our benchmark research on information optimization, analysts’ time is consumed most by data preparation (for 47%) and data quality and consistency (45%). The most valuable task, interpretation of the data, ranks fourth at 33 percent of analytics time. This is a big area of opportunity in the market, as reflected by the flurry of funding for data preparation software companies in the fourth quarter of 2013. For further analysis of SAS’s data management and big data efforts, please read my colleague Mark Smith’s analysis.
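For a sense of where that preparation time goes, here is a minimal pandas sketch of typical preparation and consistency checks; the file name, columns and validity rule are hypothetical.

```python
# Minimal sketch of the preparation and consistency work that consumes
# analysts' time. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("raw_events.csv", parse_dates=["event_date"])

# Data quality: drop exact duplicates and report missing values per column
df = df.drop_duplicates()
missing_report = df.isna().mean()

# Consistency: standardize inconsistently entered categorical labels
df["region"] = df["region"].str.strip().str.lower()

# Simple validity rule: revenue should be non-negative
invalid_rows = df[df["revenue"] < 0]
df = df[df["revenue"] >= 0]
```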

Established relationships with companies like Teradata and a reinvigorated relationship with SAP position SAS to remain at the heart of enterprise analytic architectures. In particular, the co-development effort that allows the SAS predictive analytic workbench to run on top of SAP HANA is promising, though it raises the question of how aggressive SAP will be in advancing its own advanced analytic capabilities on HANA. One area where SAS could learn from SAP is its developer ecosystem. While SAP has thousands of developers building applications for HANA, SAS could do a better job of providing the tools developers need to extend the SAS platform. SAS has been able to prosper with a walled-garden approach, but the breadth and depth of innovation across the technology and analytics industry puts this type of strategy under pressure.

Overall, SAS impressed me with what it has accomplished in the past year and the direction it is heading in. The broad-based development efforts raise a final question of where the company should focus its resources. Based on its progress in the past year, it seems that a lot has gone into visual analytics, visual statistics, LASR and alignment with the Hadoop ecosystem. In 2014, the company will continue horizontal development, but there is a renewed focus on specific analytic solutions as well. At a minimum, the company has good momentum in retail, fraud and risk management, and manufacturing. I’m encouraged by this industry-centric direction because I think that the industry needs to move away from the technology-oriented V’s toward the business-oriented W’s.

For customers already using SAS, the company’s road map is designed to capture market advantage with minimal disruption to existing environments. In particular, focusing on solutions as well as technological depth and breadth is a viable strategy. While it still may make sense for customers to look around at the innovation occurring in analytics, moving to a new system will often incur high switching costs in productivity as well as money. For companies just starting out with visual discovery or predictive analytics, SAS Visual Analytics provides a good point of entry, and SAS has a vision for more advanced analytics down the road.

Regards,

Tony Cosentino

VP and Research Director
