8 Big Data Pain Points and How to Address Them - Part 2
August 03, 2018

Kamesh Pemmaraju
ZeroStack

Companies experience many pain points when they try to deploy and run Big Data applications in their complex environments or on public and private cloud platforms, and there are best practices they can use to address those pain points. Here are five more pain points and the corresponding best practices.

Start with 8 Big Data Pain Points and How to Address Them - Part 1

PAIN POINT 4 – BIG DATA TOOLS EXPLOSION AND DEPLOYMENT COMPLEXITY

In the past decade, technologies such as Hadoop and MapReduce have become common frameworks for speeding up the processing of large datasets by breaking them up into small fragments, processing those fragments across distributed clusters of storage and compute nodes, and then collating the results for consumption. Companies like Cloudera and Hortonworks have addressed many of the challenges associated with scheduling, cluster management, resource and data sharing, and performance tuning of these tools. Typically, such deployments are optimized to run on bare metal or on virtualization platforms like VMware, and therefore tend to remain in their own silo because of the complexity of deploying and operating these environments.
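To make the split-process-collate pattern concrete, here is a minimal single-machine sketch of the MapReduce idea (a toy word count in Python; a real framework such as Hadoop distributes the map and reduce phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

# Toy word count illustrating the MapReduce pattern on one machine.
# A real framework runs the map and reduce tasks across a cluster.

def map_phase(fragment):
    # Map: emit (key, value) pairs for one fragment of the dataset.
    return [(word, 1) for word in fragment.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key across every fragment.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: collate each key's values into a final result.
    return {key: sum(values) for key, values in grouped.items()}

fragments = ["big data is big", "data tools for big data"]
result = reduce_phase(shuffle(chain.from_iterable(map(map_phase, fragments))))
print(result)  # {'big': 3, 'data': 3, 'is': 1, 'tools': 1, 'for': 1}
```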

Modern Big Data use cases, however, need a whole set of additional technologies and tools. You have Docker. You have Kubernetes. You have Spark. You have NoSQL databases such as Cassandra and MongoDB. And when you get into machine learning, you have several more options on top of that.

Deploying Hadoop is complex in its own right, though arguably made relatively easy by companies like Cloudera and Hortonworks. But if you then need to deploy Cassandra or MongoDB, you have to put in the effort to write deployment scripts yourself, and depending on the target platform (bare metal, VMware, Microsoft Hyper-V), you will need to maintain and run multiple versions of them. You then have to figure out how to network the Hadoop cluster with the Cassandra cluster and, of course, inevitably deal with DNS services, load balancers, firewalls, and so on. Add other Big Data tools to be deployed, managed, and integrated, and you will begin to appreciate the challenge (sketched below).
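A simplified Python sketch of this per-platform sprawl; the helper functions are hypothetical stand-ins for the separate scripts a team ends up maintaining (ports 7000 and 9042 are Cassandra's real inter-node and CQL ports):

```python
# Hypothetical sketch of the per-platform script sprawl described above.
# Each stub below stands in for a separate script that must be written,
# tested, and kept in sync; the networking steps are yet more manual work.

def run_baremetal_playbook(app, nodes):
    print(f"[bare metal] installing {app} on {nodes} physical hosts")

def clone_vmware_template(template, nodes):
    print(f"[vmware] cloning {template} into {nodes} VMs")

def provision_hyperv_vms(image, nodes):
    print(f"[hyper-v] provisioning {nodes} VMs from {image}")

def deploy_cassandra(platform, nodes=3):
    if platform == "bare-metal":
        run_baremetal_playbook("cassandra", nodes)
    elif platform == "vmware":
        clone_vmware_template("cassandra-base", nodes)
    elif platform == "hyperv":
        provision_hyperv_vms("cassandra-image", nodes)
    else:
        raise ValueError(f"unsupported platform: {platform}")
    # Cross-cluster plumbing is still a separate, manual concern:
    print("registering DNS, opening ports 7000/9042, attaching load balancer")

deploy_cassandra("vmware")
```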

IT teams should address this challenge with a unifying platform that can not only deploy multiple Big Data tools and platforms from a curated "application and big data catalog," but also virtualize all the underlying infrastructure resources and expose them through an infrastructure-as-code framework with open API access. This greatly simplifies the IT burden of provisioning the underlying infrastructure, and end users can deploy the tools they want and need with a single click and use the APIs to automate their deployment, provisioning, and configuration.
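What that looks like in practice might resemble the following sketch. The endpoint URL, token, and payload schema here are illustrative assumptions, not any specific vendor's API; the point is that one call shape replaces a pile of per-platform scripts:

```python
import json
from urllib import request

# Hypothetical "infrastructure as code" call against a unifying platform's
# open API: one request deploys a curated catalog entry, with networking
# (DNS, load balancing) wired up by the platform rather than by hand.

API = "https://platform.example.com/api/v1/catalog/deploy"  # illustrative
TOKEN = "example-token"  # placeholder credential

def deploy_from_catalog(app, version, cluster_size):
    payload = json.dumps({
        "application": app,          # e.g. "cassandra", "mongodb", "spark"
        "version": version,
        "nodes": cluster_size,
        "network": {"dns": True, "load_balancer": True},
    }).encode()
    req = request.Request(API, data=payload, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Bearer {TOKEN}")
    with request.urlopen(req) as resp:
        return json.load(resp)

# One "click" per tool, same call regardless of the underlying platform:
# deploy_from_catalog("cassandra", "3.11", cluster_size=3)
# deploy_from_catalog("mongodb", "4.0", cluster_size=3)
```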

PAIN POINT 5 – ONE BIG DATA CLUSTER DOESN'T ADDRESS ALL NEEDS

Organizations have diverse Big Data teams, production and R&D portfolios, and sometimes conflicting requirements for performance, data locality, cost, or specialized hardware. A single, standardized data cluster is not going to meet all of those needs. Companies will need to deploy multiple, independent Big Data clusters, possibly with different underlying CPU, memory, and storage footprints. One cluster might be dedicated and fine-tuned for a Hadoop deployment with high local storage IOPS requirements, another may run Spark jobs with more CPU- and memory-bound configurations, and still others, dedicated to machine learning, will need GPU infrastructure. Deploying and managing multiple such diverse clusters places a high operational overhead on the IT team, reducing its ability to respond quickly to Big Data user requests and making it difficult to manage costs and maintain operational efficiency.
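A minimal sketch of these diverse cluster footprints, expressed as declarative profiles (the field names are illustrative, not a specific product's schema):

```python
# Declarative cluster profiles: each Big Data workload gets a footprint
# tuned to its bottleneck (storage IOPS, CPU/memory, or GPUs).

CLUSTER_PROFILES = {
    "hadoop-prod": {
        "vcpus": 16, "memory_gb": 64,
        "storage": {"type": "local-ssd", "iops": "high"},  # HDFS-heavy I/O
    },
    "spark-analytics": {
        "vcpus": 32, "memory_gb": 256,                     # CPU/memory bound
        "storage": {"type": "network", "iops": "moderate"},
    },
    "ml-training": {
        "vcpus": 16, "memory_gb": 128,
        "gpus": 4,                                         # GPU-bound training
        "storage": {"type": "local-nvme", "iops": "high"},
    },
}

for name, profile in CLUSTER_PROFILES.items():
    print(f"{name}: {profile}")
```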

To address this pain point, the IT team should again have a unified orchestration/management platform and be able to set up logical business units that can be assigned to different Big Data teams. This way, each team gets full self-service capability within quota limits imposed by the IT staff, and each team can automatically deploy its own Big Data tools with a few clicks, independently of other teams.
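A toy sketch of the logical-business-unit idea, assuming a hypothetical data model (the class and field names are illustrative): each team deploys on its own, but only within the quota IT has set.

```python
from dataclasses import dataclass

# Hypothetical business units with IT-imposed quotas: teams self-serve
# independently, and the platform enforces the limits.

@dataclass
class BusinessUnit:
    name: str
    vcpu_quota: int
    used_vcpus: int = 0

    def can_deploy(self, vcpus_requested):
        # Self-service is allowed only within the quota set by IT.
        return self.used_vcpus + vcpus_requested <= self.vcpu_quota

teams = [
    BusinessUnit("hadoop-team", vcpu_quota=128),
    BusinessUnit("ml-team", vcpu_quota=64, used_vcpus=32),
]

requested = 48
for team in teams:
    print(f"{team.name} may deploy {requested} vCPUs: {team.can_deploy(requested)}")
```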

PAIN POINT 6 – SKYROCKETING IT OPERATIONS COSTS

Developing, deploying, and operating large-scale enterprise Big Data clusters can get complex, especially when multiple sites, multiple teams, and diverse infrastructure are involved, as we have seen. The operational overhead of these systems is expensive, and much of the work is manual and time-consuming. For example, IT operations teams still need to set up firewalls, load balancers, DNS services, and VPN services, to name a few. They still need to manage infrastructure operations such as physical host maintenance, disk additions/removals/replacements, and physical host additions/removals/replacements. They still need to do capacity planning, and they still need to monitor utilization, allocation, and performance of compute, storage, and networking.

IT teams should look for a solution that addresses this operational overhead through automation and the use of modern SaaS-based management portals that help the teams optimize sizing, perform predictive capacity planning, and implement seamless failure management.
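To illustrate the predictive capacity-planning piece, here is a minimal sketch: fit a linear trend to recent utilization samples and estimate when the cluster hits a threshold. Real management portals do far more; this only shows the idea.

```python
# Minimal predictive capacity planning: least-squares trend over weekly
# utilization samples, extrapolated to the point of exhaustion.

def weeks_until_full(samples, capacity):
    # samples: weekly utilization measurements (same unit as capacity)
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope: growth per week.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking usage: no exhaustion in sight
    return (capacity - samples[-1]) / slope

usage_tb = [40, 44, 47, 52, 55, 60]              # six weekly storage samples
print(weeks_until_full(usage_tb, capacity=100))  # ~10 weeks at this trend
```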

PAIN POINT 7 – CONSISTENT POLICY-DRIVEN SECURITY AND CUSTOMIZATION REQUIREMENTS

Enterprises have policies requiring the use of their own specifically hardened and approved operating system gold images. These images often need security configurations, databases, and other management tools installed before they can be used. Running them in the public cloud may not be allowed, or they may run very slowly there.

The solution is to enable an on-premises data center image store where enterprises can create customized gold images. Using fine-grained RBAC, the IT team can share these images selectively with various development teams around the world based on the local security, regulatory, and performance requirements. The local Kubernetes deployments are then carried out using these gold images to provide the underlying infrastructure to run containers.
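A toy sketch of what fine-grained RBAC over a gold image store could look like; the data model is an illustrative assumption, not a specific product's implementation:

```python
# Hypothetical ACLs on an on-premises gold image store: IT shares hardened
# images selectively, per team and per region, and deployments can only
# pull images the requesting team is entitled to.

GOLD_IMAGES = {
    "ubuntu-18.04-hardened": {"regions": {"us", "eu"}, "teams": {"data-eng", "ml"}},
    "rhel-7-pci-approved":   {"regions": {"us"},       "teams": {"payments"}},
}

def can_use_image(image, team, region):
    acl = GOLD_IMAGES.get(image)
    return bool(acl) and team in acl["teams"] and region in acl["regions"]

# Local Kubernetes deployments are gated on these entitlements:
print(can_use_image("rhel-7-pci-approved", "payments", "us"))  # True
print(can_use_image("rhel-7-pci-approved", "ml", "eu"))        # False
```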

PAIN POINT 8 – DR STRATEGY FOR EDGE COMPUTING AND BIG DATA CLUSTERS

Any critical application, and the data associated with it, needs to be protected from natural disasters regardless of whether or not the application is container-based. None of the existing solutions provides an out-of-the-box disaster recovery feature for critical edge computing clusters or Big Data analytics applications, so customers are left to cobble together their own DR strategy.

As part of a platform's multi-site capabilities, IT teams should be able to perform remote data replication and disaster recovery between geographically separated sites, protecting the persistent data and databases used by these clusters.
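A minimal sketch of the replication loop, assuming hypothetical snapshot and copy helpers standing in for a platform's multi-site API (the site names are illustrative):

```python
import time

# Hypothetical cross-site protection for a cluster's persistent volumes:
# snapshot locally, then replicate each snapshot to a remote DR site.

PRIMARY, DR_SITE = "dc-east", "dc-west"  # geographically separated sites

def snapshot(volume):
    snap_id = f"{volume}-snap-{int(time.time())}"
    print(f"[{PRIMARY}] snapshot {volume} -> {snap_id}")
    return snap_id

def replicate(snap_id):
    print(f"[{PRIMARY} -> {DR_SITE}] replicating {snap_id}")

def protect(volumes):
    # A scheduler would invoke this on a regular interval (e.g. hourly);
    # a single pass is shown here to keep the sketch short.
    for vol in volumes:
        replicate(snapshot(vol))

protect(["cassandra-data", "hdfs-namenode-meta"])
```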

Infrastructure management for Big Data projects can be extremely complex, but with centralized management of virtualized or cloud-based resources, it can be far easier.

Kamesh Pemmaraju is VP of Product at ZeroStack