Image Optimization Best Practices for Device Breakpoints - Part 1
October 21, 2019

Ari Weil
Akamai


An effective breakpoint strategy helps deliver sharp, properly sized images, which are some of the most compelling pieces of content on a web page. Lack of such a strategy can lead to jagged images or ones that take too long to render due to excessive size, potentially reducing the overall effectiveness of web pages — and driving down the quality of the user experience.

Creating derivative images at properly determined breakpoints is critical, but it is often challenging for web developers and designers to get right. Generating the right number of variants, at the correct widths and spacings, for users' most important devices strikes a balance between byte savings and cache dilution.

Having too few breakpoints can improve offload, but it also delivers many unused bytes and places an unnecessarily high rendering burden on the device and browser. Having too many has the opposite effect: fewer wasted bytes, but a diluted cache. Finding the right balance takes some time and resource commitment, but the benefits are well worth the exercise.
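To make that balance concrete, here is a minimal sketch of one way to derive breakpoint widths: space the variants so that no step between adjacent widths can waste more than a fixed amount of decode memory. The 4-bytes-per-pixel figure, the waste budget, and the function name breakpointWidths are illustrative assumptions, not values or tooling from this article.

```typescript
// Sketch: derive breakpoint widths so that serving the next-larger variant
// for a given slot never wastes more than a fixed decode-memory budget.
// Assumes roughly 4 bytes of decode memory per pixel (uncompressed RGBA).
const BYTES_PER_PIXEL = 4;

function breakpointWidths(
  minWidth: number,         // smallest variant to generate, in px
  maxWidth: number,         // largest variant to generate, in px
  aspectRatio: number,      // width / height of the image
  wasteBudgetBytes: number  // max decode-memory waste allowed per step
): number[] {
  const widths = [minWidth];
  let w = minWidth;
  while (w < maxWidth) {
    // A variant of width w decodes to roughly
    // w * (w / aspectRatio) * BYTES_PER_PIXEL bytes, so solve for the next
    // width whose decoded size exceeds the current one by the budget.
    const next = Math.sqrt(
      w * w + (wasteBudgetBytes * aspectRatio) / BYTES_PER_PIXEL
    );
    w = Math.min(Math.ceil(next), maxWidth);
    widths.push(w);
  }
  return widths;
}

// Square images from 150 px to 1200 px wide, allowing at most ~230 KB of
// wasted decode memory between adjacent variants.
console.log(breakpointWidths(150, 1200, 1, 230_000));
```

With these inputs the spacing naturally tightens as widths grow, which is the same intuition behind the examples in the next section: the same pixel over-delivery wastes more bytes at larger sizes.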

In this two-part blog, we will explore just how significant image breakpoints are to businesses, and some important device-related factors to consider in image breakpoint decisions, from screen resolution market share and user base to device characteristics and pixel density, so you can deliver an optimally sized web image every time.

The Byte Loss Breakdown

Getting image breakpoints right is especially important at larger image widths. To illustrate, consider an example in which a web page displays two images, each of which the browser must resize down by 50 pixels in both width and height:


As you can see, delivering a 200 X 200 pixel (px) image for a 150 X 150 px use case (50 extra pixels in width and height) requires 70,000 more bytes in memory for the browser to display the image than if the image were delivered at 150 X 150 px.

In the larger example, delivering a 600 X 600 px image for a 550 X 550 px use case (still 50 extra pixels in width and height) wastes 230,000 bytes, roughly 3.3 times more than in the smaller example. This is why it's recommended to have more breakpoints at larger image sizes (in this case, images larger than 700 px in width).
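Both figures follow from the roughly 4 bytes of decode memory a browser needs per pixel of an uncompressed RGBA image; that per-pixel cost is an assumption used here for illustration rather than a number stated above, and the helper below is a hypothetical sketch that simply reproduces the arithmetic.

```typescript
// Sketch: decode-memory waste from delivering an oversized image, assuming
// roughly 4 bytes per pixel for the browser's uncompressed RGBA buffer.
const BYTES_PER_PIXEL = 4;

function wastedDecodeBytes(
  deliveredW: number, deliveredH: number,
  displayedW: number, displayedH: number
): number {
  return (deliveredW * deliveredH - displayedW * displayedH) * BYTES_PER_PIXEL;
}

console.log(wastedDecodeBytes(200, 200, 150, 150)); // 70000
console.log(wastedDecodeBytes(600, 600, 550, 550)); // 230000
```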

Another example involves displaying images on a typical smartphone, in this case the Samsung Galaxy S5, which has a screen resolution of 1080 X 1920 px. Suppose a server sends an image of 1200 X 2000 px that must be resized down to 1080 X 1920 px. This results in 1,305,600 wasted bytes, roughly 1.3 MB, not to mention a significant degradation in the user experience.
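The Galaxy S5 figure comes from the same per-pixel arithmetic, again assuming a 4-byte RGBA decode buffer:

```typescript
// Galaxy S5 example: 1200 x 2000 px delivered, 1080 x 1920 px displayed.
// (1200 * 2000 - 1080 * 1920) extra pixels, at an assumed 4 bytes per pixel:
const wastedBytes = (1200 * 2000 - 1080 * 1920) * 4;
console.log(wastedBytes); // 1305600, roughly 1.3 MB
```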

Read Image Optimization Best Practices for Device Breakpoints - Part 2, covering 4 tips for getting image breakpoints right.

Ari Weil is Global VP of Product and Industry Marketing at Akamai