I/Os Per Second Myths

Terry Critchley

The performance of an application depends on the availability of adequate IT resources, such as CPU, memory, storage and so on.

Storage metrics of interest are:
■ Data capacity
■ Input/output capacity (I/O performance)
■ Durability, space, cooling, cost, ROI and other mainly commercial factors.

We are concerned in this blog with the second item, I/O capability, which is not as simple as saying "my system does X input/output operations per second (IOPs)". First, let us look at some background to input/output. The classical I/O time for a disk access is:

TCPU + TCTL + TSEEK + TWAIT + TACC + TXFR + TCOMP

TCPU = Time to parse and generate the I/O request in the processor

TCTL = Time for the controller to format and issue the request to the HDD, plus the time for the request to reach the HDD

TSEEK = Time to move to the correct track on the HDD (called a SEEK)

TWAIT = Time waiting to reach the required record

(In the case of disk subsystems with set sector capability, the channel disconnects from the particular I/O until the record position is about to be reached on the track, then reconnects to complete the I/O. In the meantime it can do other work. Prior to this feature, the channel remained dedicated to the I/O until the head reached the right position and was only released after the I/O was complete.)

TACC = Time to access the record (SEARCH), which carries an overhead depending on the format of the data (RDBMS, flat file, RAID x and so on)

TXFR = Transfer time of the accessed data to the processor via the controller/channel

TCOMP = Time to complete/post the end of the I/O.

Dividing 1 second by this time gives the I/Os per second (IOPs) rate. Is physical I/O speed all that matters then?
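As a worked example, the component times can be summed and the sum inverted to give a theoretical IOPs ceiling. The millisecond figures below are illustrative assumptions for a 15,000 rpm HDD, not measurements:

```python
# Illustrative component times in milliseconds (assumed values, not measurements)
times_ms = {
    "t_cpu": 0.1,    # parse/generate the I/O request in the processor
    "t_ctl": 0.2,    # controller formats and issues the request
    "t_seek": 3.5,   # average seek on a fast HDD
    "t_wait": 2.0,   # rotational latency: half a revolution at 15,000 rpm
    "t_acc": 0.3,    # locate (search for) the record on the track
    "t_xfr": 0.2,    # transfer the block back via controller/channel
    "t_comp": 0.1,   # complete/post the end of the I/O
}

total_ms = sum(times_ms.values())   # total service time per I/O
iops = 1000.0 / total_ms            # 1 second divided by the I/O time

print(f"Service time: {total_ms:.1f} ms -> {iops:.0f} IOPs ceiling")
```

At these assumed values the service time is 6.4 ms, giving roughly 156 IOPs per arm as an absolute ceiling.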

Records: A record to an application usually means a logical record, for example, the name and address of a client. This can be made up of more than one physical record, which is normally retrieved as a block of a certain size, for example, 2048 bytes. Sometimes, though, a physical record may contain more than one logical record.
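To see why the logical/physical distinction matters, consider a hypothetical blocking calculation: with the 2048-byte block mentioned above and an assumed 100-byte logical record, one physical I/O can deliver many logical records:

```python
block_bytes = 2048      # physical record (block) size, as in the text
logical_bytes = 100     # assumed size of one logical record (hypothetical)

records_per_block = block_bytes // logical_bytes     # blocking factor
logical_reads = 10_000
# Sequential scan: every record in a block is consumed before the next I/O,
# so physical I/Os = ceiling(logical reads / blocking factor)
physical_ios = -(-logical_reads // records_per_block)

print(f"{records_per_block} logical records per block; "
      f"{logical_reads} sequential reads need only {physical_ios} physical I/Os")
```

Under these assumptions, 20 logical records fit per block, so 10,000 sequential logical reads cost only 500 physical I/Os.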

Disk Access: An I/O operation consists of several activities and the list of these depends how far you go back in the chain from data need to fulfillment. This is shown in the I/O time equation above.

Myth 1

This myth is propagated widely in internet articles and is totally erroneous, so beware. The misconception is as follows:

■ if an I/O operation (seek, search, read) takes X milliseconds, then that disk arm is capable of supporting 1000/X I/Os per second (IOPs). Yes it is, if you don't mind a response time of approximately infinity, give or take a few ms, since the arm would be running at 100% utilization.

A sensible approach would be to do this calculation and settle for, say, 40% of this IOPs rate as an average which might be sustained.
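Why 100% utilization is unusable can be sketched with a simple queueing model. Treating the disk arm as an M/M/1 queue (an assumption; real arrival patterns differ) with an assumed 6.4 ms service time, the response time R = S / (1 - utilization) blows up as utilization approaches 1:

```python
# Response time vs utilization for a single disk arm, modelled (as an
# assumption) as an M/M/1 queue: R = S / (1 - rho), S = service time.
service_ms = 6.4                 # assumed per-I/O service time
max_iops = 1000.0 / service_ms   # the "myth" figure: ~156 IOPs

for util in (0.4, 0.8, 0.95, 0.99):
    offered_iops = util * max_iops
    response_ms = service_ms / (1.0 - util)
    print(f"{util:4.0%} busy: {offered_iops:6.1f} IOPs, response {response_ms:6.1f} ms")
```

At 40% busy the response time is still close to the raw service time; at 99% busy it is a hundred times the service time, which is the "approximately infinity" above in practice.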

Myth 2

If we make the allowance above, then a storage subsystem supporting X IOPs will perform better than one supporting 0.8X IOPs. In its raw form, this statement is not true, I'm afraid, since the number of I/Os needed to satisfy an application's request for data depends on other factors, many within the designer's control:

■ the positioning of the physical data and its fragmentation, the former no longer in the control of the programmer, the latter a fact of life, except for the ability to defragment when necessary

■ the type of application (email, query, OLTP etc.) and access mode (random, sequential, read or write intensive)

■ block sizes and other physical characteristics, such as rotational speed (up to 15,000 rpm)

■ the use of memory caching or disk caching, which can eliminate some I/Os

■ the design of the database layout, which is crucial and trees have been sacrificed writing about this topic

■ what RAID level, or other access method, is employed

■ the program's mode of accessing logical records (see below) might be sub-optimal (to be mild about it); does it chain reads/writes, save records or retrieve them again and so on

■ the key and indexing should be optimized to avoid long synonym chains to compose a single record - the shorter the key, the greater the chance of synonyms

■ other factors and storage subsystem parameters

The upshot of this is that very fast I/O performance can be negated by poor design, and often is. If the items above are properly thought through then, and only then, will the system supporting X IOPs outperform the system supporting 0.8X IOPs. These design features assume that any metadata, such as logs, indexes, copies, etc., are not written to the disks containing the application data.
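The caching point in the list above deserves a number: the physical IOPs a subsystem must sustain is the logical I/O rate times the cache miss ratio, so a good cache can dwarf a raw IOPs advantage. A sketch with assumed figures:

```python
# How caching shifts the IOPs requirement (illustrative, assumed figures)
logical_io_rate = 2000     # logical reads per second the application issues
cache_hit_ratio = 0.90     # assumed fraction satisfied from memory/disk cache

# Only cache misses reach the disks as physical I/Os
physical_iops_needed = logical_io_rate * (1.0 - cache_hit_ratio)

print(f"{physical_iops_needed:.0f} physical IOPs needed "
      f"instead of {logical_io_rate}")
```

At a 90% hit ratio, only 200 of the 2000 logical reads per second become physical I/Os: a tenfold reduction that no plausible difference in raw subsystem IOPs would match.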

Dr. Terry Critchley is the author of “High Availability IT Services”, ISBN 9781482255904 (CRC Press).
