Computer Science (CS) and Information Technology (IT): Part 2
Are they one and the same thing?
September 28, 2018

Terry Critchley
Author of "Making It in IT"


Part 1 of this blog dealt briefly with the differences between CS and IT in their applicability to computing needs in today's workplace. That is why I developed a list of topic names representing modern IT, for comparison with practically any CS course curriculum worldwide.

Start with Computer Science (CS) and Information Technology (IT): Part 1

I expanded this list into a Glossary, aimed at introducing topics and indicating where simple further reading can be found, making it much more than a simple expansion of acronyms or one-line descriptions. Thus, I feel it is useful as both a learning tool and a reference document.

View a sample list of Glossary topics at the Glossary link (173 pages):

I have added a small additional selection here, covering more than just the letter "a," to show the Glossary's scope and depth:

bimodal IT

Bimodal IT is the state where organizations need two speeds of IT — often called traditional IT (business as usual) and agile IT (Gartner calls them mode-1 and mode-2). Traditional IT is focused on "doing IT right", with a strong emphasis on efficiency and safety, approval-based governance and price-for-performance.

Agile IT is focused on accelerated development and implementation, supporting prototyping, iterative development, rapid delivery and a continuous, process-based methodology. Its role is to support and add value to business initiatives, generating value and/or revenue earlier than would otherwise be the case.

business continuity planning (BCP)

Sometimes used in place of BCM (business continuity management), although some people may interpret the two terms differently. In essence, both mean "keeping the business show on the road" when disaster strikes the IT setup, and BCP is much broader than IT disaster recovery. It includes, for example, temporary premises for displaced staff, transport arrangements and other non-IT considerations after the event.

Related links:
Business continuity planning
Business continuity planning steps to keep your organization running
High Availability IT Services

converged infrastructure (CI)

"Converged infrastructure, sometimes known as converged architecture, is an approach to data centre management that packages compute, networking, servers, storage and virtualization tools on a prequalified turnkey appliance. Converged systems include a toolkit of management software."

Converged infrastructure is an approach to data centre management that seeks to minimize compatibility issues between servers, storage systems and network devices while also reducing costs for cabling, cooling, power and floor space. A converged infrastructure can be implemented with a CI reference architecture, with standalone appliances or with a software-driven hyper-converged approach.

A CI is the IT equivalent of an off-the-peg suit from a tailor, that is, a ready-made suit rather than one made to measure for a particular person. It is unlikely that the vendor will supply such a ready-made suit with a 50 in. chest and a 22 in. trouser length unless there is big demand for that very odd combination, or he wants to go out of business. Vendor longevity and trustworthiness are required here.

Related links:
Converged Infrastructure

DevOps

DevOps is a term that emerged from the combination of Development and Operations. The role of a DevOps engineer is to automate the operational work in the way a developer would. The idea is to encourage frequent releases to increase quality and get early feedback. See the entries "continuous delivery" and "continuous testing" (q.v.).

DevOps is not a technology but a way of doing things, or methodology, according to Webtorials.

"Hence, according to me [author of the article below; Nilesh Kanawade], the main two objectives of DevOps are increasing the speed and quality of the deliveries."

Related links:
DevOps Management: New Teams, New Processes
What is DevOps?
10 Things Your CIO Should Know About DevOps
DevOps Glossary

encryption algorithms

There are lots of encryption algorithms, and NIST has approved some of them. On the symmetric side: the Advanced Encryption Standard (AES), Triple DES and Skipjack. On the asymmetric side, the approved algorithms are DSA, RSA and ECDSA [NIST, 2013].
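To make the symmetric case concrete, here is a minimal Python sketch using the third-party cryptography package (an assumption; any NIST-validated AES implementation would serve), encrypting and decrypting with AES in its authenticated GCM mode:

    # Minimal sketch: AES-256 in GCM mode via the "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                     # a GCM nonce must never repeat for a given key

    ciphertext = aesgcm.encrypt(nonce, b"payroll data", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"payroll data"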

Related links:
AES
Triple-DES
Triple Data Encryption Algorithm (TDEA)
DSA [digital signature]
RSA
ECDSA
Skipjack
NIST Encryption Algorithm Approval Source

erasure codes

"Erasure coding (EC) is a method of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces and stored across a set of different locations or storage media." [see link below]

One key point about erasure coding is its ability to recover from corrupted storage faster than RAID configurations, especially with very large data sets (databases, etc.) and storage volumes.
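The idea can be seen in a toy Python sketch using simple XOR parity, the single-redundancy case; production systems typically use Reed-Solomon codes so that any m of k + m stored fragments can be lost and the data still recovered:

    # Toy sketch of erasure coding: one XOR parity fragment protects k data fragments.
    def make_parity(fragments):
        """Byte-wise XOR of equal-length fragments."""
        parity = bytes(len(fragments[0]))
        for frag in fragments:
            parity = bytes(a ^ b for a, b in zip(parity, frag))
        return parity

    data = [b"frag", b"ment", b"s..."]   # k = 3 data fragments
    parity = make_parity(data)           # m = 1 redundant fragment, stored elsewhere

    # Lose any one data fragment: XOR of the survivors and the parity rebuilds it.
    recovered = make_parity([data[0], data[2], parity])
    assert recovered == data[1]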

Related links:
Erasure Coding
Erasure Codes for Storage Systems
Erasure Coding vs. RAID

intent-based networking (IBN)

Although not a totally new concept, IBN has been pushed by Cisco as a way forward to functionally effective and cost-effective networking. It is a subset of automated, or even autonomic (q.v.), computing, where intelligence is used to monitor and manage resources.
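The essence is declarative: the operator states what the network should achieve, and software continuously reconciles actual state with that intent. The hypothetical Python sketch below illustrates the reconciliation loop; the device model and intent fields are invented for illustration and stand in for a real controller's APIs:

    # Hypothetical sketch of intent-based reconciliation.
    intent = {"vlan": 42, "segment": "finance"}  # the "what", declared once

    def reconcile(device, intent):
        """Compare desired intent with actual state and converge the device."""
        state = device.setdefault("config", {})  # a real system would query the device
        for key, desired in intent.items():
            if state.get(key) != desired:
                print(f"{device['name']}: correcting {key} -> {desired}")
                state[key] = desired             # stand-in for pushing real configuration

    switches = [{"name": "sw1"}, {"name": "sw2", "config": {"vlan": 7}}]
    for sw in switches:
        reconcile(sw, intent)   # run continuously in a real IBN system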

Related links:
What is intent based networking

There are also articles by Cisco and Gartner, the latter covering risks and other cautionary factors about IBN.
Cisco DNA
FAQ

I hope I have made my "CS is not IT" point without malice and introduced the discerning reader to a source of information on "modern computing," aka "workplace IT." Until this nettle is grasped, bodies will continue to imagine that boosting CS teaching will solve the IT skills shortage. It will not.

The UK Government has recently shelled out £100m to boost UK "digital skills" … to academia, in the form of five universities' CS faculties! I have asked each of the five what their planned topics are and received one reply: they are covering two very specific, very narrow topics, which I believe will do little to solve the problem.

Dr. Terry Critchley is an IT consultant and author who previously worked for IBM, Oracle and Sun Microsystems.