8 Big Data Pain Points and How to Address Them - Part 1
August 02, 2018

Kamesh Pemmaraju
ZeroStack


The word "Big" in Big Data doesn't even come close to capturing what is happening today in our industry and what is yet to come. The volume, velocity, and variety of data that is being generated has overwhelmed the capabilities of infrastructure and analytics we have today.

We are now experiencing Moore's law for data growth: data is doubling every 18 months. No wonder IDC forecasts that the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes) by 2025 – roughly ten times the data generated in 2016.

To gain useful insights, data scientists typically have to combine data from multiple sources with different volume, variety, and velocity characteristics, which in turn places different demands on processing power, storage and network performance, and latency. Here's a quick look at the different types of Big Data sources:

Unstructured data: Data generated by sources such as social media, log files, and sensors has little inherent structure and hence is generally not amenable to traditional database analysis methods. A wide variety of Big Data tools, techniques, and approaches has emerged in the last few years to ingest and analyze this data – for example, to extract customer sentiment from social media. Newer approaches include natural language processing, news analytics, and unstructured text analysis.

Semi-structured data: Some unstructured data may in fact have some structure to it. Examples include email, call center logs, and IoT data. Some in the industry have coined the term "semi-structured data" to describe these sources, which may require a combination of traditional databases and newer Big Data tools to extract useful insights.

Streaming data: This type of data brings in the dimension of higher velocity and real-time processing constraints. The velocity of data varies widely depending on the type of application: IoT data tends to be small packets of data regularly streamed at low velocity, while 4K video streams stretch the velocity to the highest end of the spectrum.
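To make the variety and velocity points concrete, here is a minimal, self-contained Python sketch of ingesting semi-structured, streaming records and summarizing them as they arrive. The sensor fields and the simulated "stream" are hypothetical and stand in for a real message bus; this is an illustration, not any particular product's pipeline.

```python
# Minimal sketch: parse semi-structured JSON records from a (simulated) stream
# and maintain a rolling aggregate in near real time. All field names are hypothetical.
import json
import random
import time
from collections import deque

def simulated_sensor_stream(n_records=20):
    """Stand-in for a real message bus emitting JSON payloads from IoT devices."""
    for i in range(n_records):
        payload = {
            "device_id": f"sensor-{i % 4}",
            "temperature_c": round(random.uniform(18.0, 32.0), 2),
            "ts": time.time(),
        }
        yield json.dumps(payload)  # semi-structured: JSON text, schema not enforced

def rolling_average(stream, window_size=5):
    """Parse each record and keep a rolling average over the last N readings."""
    window = deque(maxlen=window_size)
    for raw in stream:
        record = json.loads(raw)              # variety: parse semi-structured text
        window.append(record["temperature_c"])
        yield record["device_id"], sum(window) / len(window)

if __name__ == "__main__":
    for device, avg in rolling_average(simulated_sensor_stream()):
        print(f"{device}: rolling avg temperature = {avg:.2f} C")
```

In a production setting, the generator above would be replaced by a consumer reading from an actual streaming source, and the aggregation would run continuously rather than over a fixed batch.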

The alluring promise of these new use cases – and associated emerging technologies and tools – is that they can generate useful insights faster so that companies can take actions to achieve better business outcomes, improve customer experience, and gain significant competitive advantage.

No wonder Big Data projects have ranked among the top ten CIO initiatives for the past decade – almost 70 percent of Fortune 1000 firms rate Big Data as important to their businesses, and over 60 percent already have at least one Big Data project in place.

While data scientists are dealing with the complexity of how to derive value from diverse data sources, IT practitioners need to figure out the most efficient way to deal with the infrastructure requirements of Big Data projects. Traditional bare-metal infrastructure, with its siloed management of servers, storage, and networks, is not flexible enough to tackle the dynamic nature of the new Big Data workloads. This is where cloud-based systems shine. However, many challenges remain to be addressed in the areas of workload scaling, performance and latency, data migration, bandwidth limitations, and application architectures.

Companies experience many pain points when they try to deploy and run Big Data applications in their complex environments or on public or private cloud platforms – and there are best practices they can use to address those pain points.

PAIN POINT 1: LONG COMMUTE FROM STORAGE TO COMPUTE

As data volumes grow from terabytes to petabytes and beyond, transporting that data closer to compute resources for processing and analytics takes longer and longer, impeding the agility of the organization. Public cloud vendors like AWS, whose model is built around centralized data centers, want to get your data into their cloud and go to extreme lengths (see AWS Snowmobile) to get it. Furthermore, data transfer fees are mostly unidirectional, i.e., only data going out of an AWS service is subject to data transfer fees. Not only is this a classic lock-in scenario, but it also goes against other key emerging trends:

Edge computing and artificial intelligence, especially for use cases such as IoT, 5G, image/speech recognition, and blockchain, where processing and data need to be placed closer to each other and/or closer to where the user or device is. Edge computing delivers faster analytics results by keeping data close to processing, while simultaneously reducing the cost of transporting data to the cloud.

Artificial intelligence systems become more effective the more data they are given. In deep learning, for example, the more cases (data) you feed the system, the more it learns and the more accurate its results become. This is a case where you need massively parallel processing (e.g., using GPUs) of large data sets. Big Data analytics and AI can complement each other to improve processing speed and produce more useful and relevant results.

To address the need to bring data and compute resources close together, IT leaders should look for hyper-converged, scale-out solutions that bring together compute, storage, and networking, thus reducing data I/O latency and improving data processing and analytics times. For even better performance, they should look for solutions that can place the computing units (VMs or containers) as close to the physical storage as possible, without losing the manageability of the storage solution and while maintaining multi-tenancy across the cluster. For example, a Hadoop DataNode VM running on the same physical host and accessing local SSDs will deliver the highest performance and faster results overall without impacting workloads running in other tenants.
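As a rough illustration of that data-locality idea, here is a minimal Python sketch of a placement decision that prefers launching compute on a host that already holds the data. The host names, block map, and capacities are hypothetical, and this is a simplified stand-in rather than any vendor's scheduler.

```python
# Minimal placement sketch: given which hosts hold local replicas of a dataset's blocks,
# prefer launching the compute VM/container on a host that already has the data,
# falling back to available capacity otherwise. All names and numbers are hypothetical.

def choose_host(block_replicas, host_free_vcpus, required_vcpus):
    """Pick a host that holds the most blocks locally and still has capacity."""
    candidates = [h for h, free in host_free_vcpus.items() if free >= required_vcpus]
    if not candidates:
        raise RuntimeError("no host has enough free capacity")

    def local_block_count(host):
        return sum(1 for replicas in block_replicas.values() if host in replicas)

    # Prefer data locality first, then the host with the most headroom.
    return max(candidates, key=lambda h: (local_block_count(h), host_free_vcpus[h]))

if __name__ == "__main__":
    block_replicas = {                 # block id -> hosts holding a replica
        "blk-001": {"host-a", "host-b"},
        "blk-002": {"host-a", "host-c"},
        "blk-003": {"host-a", "host-b"},
    }
    host_free_vcpus = {"host-a": 8, "host-b": 4, "host-c": 16}
    print(choose_host(block_replicas, host_free_vcpus, required_vcpus=4))  # -> host-a
```

Real schedulers weigh many more factors (rack awareness, SSD versus spinning disk, tenant isolation), but the core trade-off is the same: keep compute next to the blocks it will read.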

IT leaders can also take advantage of emerging memory technologies such as persistent memory (a new memory technology that sits between DRAM and flash: non-volatile, with low latency and higher capacity than DRAM), NVMe, and faster flash drives. With prices falling rapidly, there seems to be little need for spinning disks in primary storage.

IT administrators should implement a central way to manage all the edge computing sites, with the ability to deploy and manage multiple data processing clusters within those sites. Access rights to each of these environments should be managed through strict BU-level and Project-level RBAC and security controls.

PAIN POINT 2: DISTRIBUTED TEAMS, LOCAL PERFORMANCE NEEDS

For data science development and testing use cases, companies do not build a single huge data processing cluster in a centralized data center for all of their Big Data teams spread around the world. Building such a cluster in one location has disaster recovery (DR) implications, not to mention latency and country-specific data regulation challenges. Typically, companies want to build out separate local/edge clusters based on location, type of application, data locality requirements, and the need for separate development, test, and production environments.

Having a single pane of glass for management becomes crucial in this situation for operational efficiency and for simplifying the deployment and upgrading of these clusters. Strict isolation and role-based access control (RBAC) are also often security requirements.

IT administrators should implement a central way to manage diverse infrastructures in multiple sites, with the ability to deploy and manage multiple data processing clusters within those sites. Access rights to each of these environments should be managed through strict BU-level and Project-level RBAC and security controls.
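One way to picture BU-level and project-level RBAC is the minimal Python sketch below. The roles, business units, projects, and permissions are hypothetical and deliberately simplified; they are not a specific product's access model.

```python
# Minimal sketch of BU- and project-scoped RBAC. Purely illustrative:
# roles, business units, projects, and actions below are hypothetical.

ROLE_PERMISSIONS = {
    "cluster_admin": {"deploy_cluster", "upgrade_cluster", "delete_cluster", "run_job"},
    "data_scientist": {"run_job"},
    "viewer": set(),
}

# user -> (business unit, project) -> role
ASSIGNMENTS = {
    "alice": {("emea", "fraud-analytics"): "cluster_admin"},
    "bob":   {("apac", "churn-model"): "data_scientist"},
}

def is_allowed(user, business_unit, project, action):
    """Allow an action only if the user holds a role in that BU/project that grants it."""
    role = ASSIGNMENTS.get(user, {}).get((business_unit, project))
    return role is not None and action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("alice", "emea", "fraud-analytics", "upgrade_cluster"))  # True
    print(is_allowed("bob", "emea", "fraud-analytics", "run_job"))            # False: wrong BU/project
```

The key design point is that permissions are scoped to a business unit and project pair, so a role granted in one environment confers nothing in another.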

PAIN POINT 3: STUCK ON BARE METAL AND ITS SILO INEFFICIENCIES

Companies still run the majority of their Big Data workloads, particularly Hadoop-based workloads, on bare metal. This is obviously not as scalable, elastic, or flexible as a virtual or cloud platform. Traditional bare metal environments are famous for creating silos, where specialist teams (storage, networking, security) form fiefdoms around their respective functional areas. Silos impede velocity because they lead to operational complexity, inconsistency in the environment, and a lack of automation. Automating across silos turns into an exercise in custom scripts and a lot of "glue and duct tape," which makes maintenance and change management complex, slow, and error-prone.

A virtualized environment for Big Data allows data scientists to create their own Hadoop, Spark, or Cassandra clusters and evaluate their algorithms. These clusters need to be self-service, elastic, and high-performing. IT should be able to control the resource allocation to data scientists and teams using quotas and role-based access control.

Better yet, IT managers should look for an orchestration platform that can deal with both bare metal and virtual environments, so IT can place workloads in the best target environment based on performance and latency requirements.
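To show in the simplest terms what self-service requests with quotas and environment placement might look like, here is a hedged Python sketch. The team names, quota numbers, and latency threshold are assumptions for illustration only, not a specific orchestration platform's API.

```python
# Minimal sketch: validate a self-service cluster request against a team quota,
# then pick a target environment (bare metal vs. virtual) from a simple latency rule.
# All names, numbers, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Quota:
    vcpus: int
    memory_gb: int

TEAM_QUOTAS = {"risk-analytics": Quota(vcpus=64, memory_gb=512)}
TEAM_USAGE  = {"risk-analytics": Quota(vcpus=40, memory_gb=320)}

def request_cluster(team, vcpus, memory_gb, max_latency_ms):
    """Reject requests that exceed the team's quota; otherwise choose a target pool."""
    quota, used = TEAM_QUOTAS[team], TEAM_USAGE[team]
    if used.vcpus + vcpus > quota.vcpus or used.memory_gb + memory_gb > quota.memory_gb:
        raise RuntimeError(f"request exceeds quota for team {team}")
    # Illustrative placement rule: very latency-sensitive clusters go to bare metal,
    # everything else to the virtualized pool where elasticity is easier.
    target = "bare-metal-pool" if max_latency_ms < 1 else "virtual-pool"
    return {"team": team, "vcpus": vcpus, "memory_gb": memory_gb, "target": target}

if __name__ == "__main__":
    print(request_cluster("risk-analytics", vcpus=16, memory_gb=128, max_latency_ms=5))
```

An orchestration platform that spans bare metal and virtual targets would apply this kind of policy automatically, using real telemetry rather than a hard-coded threshold.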

Read 8 Big Data Pain Points and How to Address Them - Part 2, to learn about 5 more big data pain points.

Kamesh Pemmaraju is VP of Product at ZeroStack

The Latest

March 21, 2019

Achieving audit compliance within your IT ecosystem can be an iterative process, and it doesn't have to be compressed into the five days before the audit is due. Following is a four-step process I use to guide clients through the process of preparing for and successfully completing IT audits ...

March 20, 2019

Network performance issues come in all shapes and sizes, and can require vast amounts of time and resources to solve. Here are three examples of painful network performance issues you're likely to encounter this year, and how NPMD solutions can help you overcome them ...

March 19, 2019

"Scale up" versus "scale out" doesn't just apply to hardware investments, it also has an impact on product features. "Scale up" promotes buying the feature set you think you need now, then adding "feature modules" and licenses as you discover additional feature requirements are needed. Often as networks grow in size they also grow in complexity ...

March 18, 2019

Network Packet Brokers play a critical role in gaining visibility into new complex networks. They deliver the packet data and information IT and security teams need to identify problems, recognize security issues, and ensure overall network performance. However, not all Packet Brokers are created equal when it comes to scalability. Simply "scaling up" your network infrastructure at every growth point is a more complex and more expensive endeavor over time. Let's explore three ways the "scale up" approach to infrastructure growth impedes NetOps and security professionals (and the business as a whole) ...

March 15, 2019

Loyal users are the key to your service desk's success. Happy users want to use your services and they recommend your services in the organization. It takes time and effort to exceed user expectations, but doing so means keeping the promises we make to our users and being careful not to do too much without careful consideration for what's best for the organization and users ...

March 14, 2019

What's the difference between user satisfaction and user loyalty? How can you measure whether your users are satisfied and will keep buying from you? How much effort should you make to offer your users the ultimate experience? If you're a service provider, what matters in the end is whether users will keep coming back to you and will stay loyal ...

March 13, 2019

What if I said that a 95% reduction in the amount of IT noise, 99% reduction in ticket volume and 99% L1 resolution rate are not only possible, but that some of the largest, most complex enterprises in the world see these metrics in their environments every day, thanks to Artificial Intelligence (AI) and Machine Learning (ML)? Would you dismiss that as belonging to the realm of science fiction? ...

March 12, 2019
As a consumer, when you order products online, how do you expect them to get delivered? Some key requirements are: the product must arrive on time, well-packed, and ultimately must give you an easy gateway to return it if it is not as per your expectations. All this has been made possible via a single application. But what if this application doesn't function the way you want or cracks down mid-way, or probably leaks off information about you to some potential hackers? Technical uncertainty and digital chaos are the two double-edged swords dangling over this billion-dollar ecommerce market. Can Quality Assurance and Software Testing save application developers from this endless juggle? ...
March 11, 2019

Of those surveyed, 96% of organizations have a digital transformation strategy, with 57% approaching it as an enterprise-wide priority, with a clear emphasis on speed of business, costs, risk, and customer satisfaction, according to IDC’s Aligning IT Strategies and Business Expectations for Digital Transformation Success, sponsored by EasyVista ...

March 08, 2019

One of my ongoing areas of focus is analytics, AIOps, and the intersection with AI and machine learning more broadly. Within this space, sad to say, semantic confusion surrounding just what these terms mean echoes the confusions surrounding ITSM ...