With a focus on GenAI, industry experts offer predictions on how AI will evolve and impact IT and business in 2024. Part 4 covers the challenges of AI.
Start with: 2024 AI Predictions - Part 1
Start with: 2024 AI Predictions - Part 2
Start with: 2024 AI Predictions - Part 3
Go to: predictions about AIOps
Go to: predictions about AI in software development
AI IMPLEMENTATION
Tech leaders today are being challenged to adopt AI to drive further advancements for their organizations. Many are grappling with the question: "Where and how do we deploy AI?" AI will accelerate many things by filling in gaps of information, but it's not a complete substitute for data-driven decision-making or analysis. We must also consider the distinction between the value of generative AI and other tools like machine learning and pattern recognition. Heading into the new year, it will be imperative to take a deeper look at the technology our organizations already have in place and determine whether it is truly scalable with AI. Before diving in fully, we must ask how our businesses can specifically benefit from investing in AI and whether it will provide the right outcomes.
Ian Van Reenen
CTO, 1E
AI BIAS
Bracing for the rise in AI bias before we see better days: In the short term, the rapid development and adoption of AI tools and products leveraging AI services will lead to an increase in biased outputs. Because most AI services scrape the internet to build training data, the internet's inherent biases will propagate into these offerings until the necessary controls can be added. However, this will serve as a catalyst for the industry to establish rigorous ethical guidelines and training interventions, ultimately enhancing the ability of AI tools to be more discerning and impartial. But it's going to get worse before it gets better.
David DeSanto
VP of Product, GitLab
Despite the excitement surrounding AI advancements, there is fear over its use and the potential for bias beyond human oversight, which may lead to distrust or, worse, lawsuits. For example, what if a bank uses AI to decide who gets a loan, and the data the model was trained on discriminates against a specific gender or race? Companies must be prepared to protect customers and stakeholders from bias, which extends to data protection to avoid, for example, misusing personal information in training data. Bias mitigation is easier said than done; bias is not easy to eradicate. Most companies should build upon a well-known foundation model to effectively recognize and root out bias, ensuring they get off to the best start possible. This can simplify and accelerate their projects while avoiding many bias-related pitfalls. Having more foundational and anti-bias models also showcases a company's commitment to the highest standard of care.
Rod Cope
CTO, Perforce Software
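To make the kind of bias check described above concrete, here is a minimal sketch of a disparate-impact test on model decisions, in Python. The loan-decision data, group labels, and the four-fifths threshold are illustrative assumptions, not details taken from any of the predictions in this series.

```python
# Minimal sketch of a pre-deployment bias check on model outputs.
# The loan-approval data below is hypothetical and only illustrates the idea
# of comparing approval rates across groups (a "disparate impact" style check).

from collections import defaultdict

# (group, model_decision) pairs; in practice these would come from a
# held-out evaluation set scored by the candidate model.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    total[group] += 1
    approved[group] += decision

rates = {g: approved[g] / total[g] for g in total}
print("approval rates:", rates)

# Four-fifths rule of thumb: flag any group whose approval rate is less
# than 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"potential disparate impact: {group} rate {rate:.2f} vs best {best:.2f}")
```

A check like this is only a starting point; the point of the sketch is that bias review can be a routine, automated gate rather than an afterthought.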
AI HALLUCINATIONS
AIs can hallucinate. Unless you have the background to recognize that an answer is completely wrong, you're going to believe it. It's only because you have the knowledge that you can evaluate the answers. The assumption that junior people will be able to use the answers, and that AI will provide them with the knowledge and abilities of a senior person, doesn't hold if you can't understand the answer or ask the right question.
Ron Williams
Analyst, Gigaom
BUILDING TRUST
Artificial intelligence has boomed over the past year and become the latest buzzword, even though it is a technology that has been used in business for many years. Free tools like ChatGPT have ushered it into mainstream conversations and turned it into a household term. When it comes to enterprises, however, the downside of tools like ChatGPT is that while they can easily generate content such as company messaging, they aren't able to adequately understand context or produce the nuance and uniqueness that humans can. Sales and marketing are fields largely focused on human psychology, so we must keep that human involvement while we continue to build trust in AI-powered technologies. 2024 will be the year of building trust in AI. Business leaders will need to invest heavily in building trust in AI in addition to the technologies themselves: prioritize ethical decision-making training within the workforce for AI models to emulate, and evaluate data from AI systems to remove biases and validate it in real-world business scenarios.
Stephen Tarleton
CMO, 1E
In the next year, we'll see companies pushing to improve AI attribution. For the technology to be reliable, users will need to know how the systems they're using learned to give them specific answers. Without attribution and acknowledgment of the authenticity and validity of expertise, we won't be able to rely on AI. A source is not reliable just because it's on the internet.
Steve Martinelli
Director of Developer Advocacy and Community, Equinix
TRAINING DATA
Ingress to customer networks is about to become a mission-critical element of the next wave of AI adoption. In 2024, organizations' ability to scale AI tools will no longer be primarily about optimizing algorithms and large language models. The inflection point for AI tech will be the data new AI tools are trained on. Today's AI is trained on large, generic data sets, and we can already see that this isn't enough. The next generation of AI companies will need to solve a secure connectivity problem in order to train their models. Transferring training data out of customer networks is a non-starter because it risks data security and data sovereignty, in addition to its cost and performance implications. AI software will instead need to run in customer networks, next to the data it is working with.
Alan Shreve
Founder and CEO, ngrok
DATA QUALITY
2024 will be the year companies realize that their success with AI is directly correlated to high-quality data. While we've heard about how important data quality is for training AI models, data will be brought into focus for many organizations as they continue to embrace generative AI and yet experience poor results due to useless or badly collected data. This hard truth will reveal that AI-driven outcomes are only as good as the quality of the data available.
Ram Chakravarti
CTO, BMC Software
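As a rough illustration of what bringing data into focus can mean in practice, the sketch below runs a few basic quality checks (empty content, duplicates, stale records) over a toy document set before it would feed a generative AI pipeline. The record fields, sample data, and thresholds are assumptions chosen for the example.

```python
# Minimal sketch of pre-training data quality checks: completeness,
# duplication, and staleness. The records and the one-year threshold are
# illustrative assumptions.

from datetime import datetime, timedelta

records = [
    {"id": 1, "text": "Reset a user password", "updated": "2024-01-10"},
    {"id": 2, "text": "", "updated": "2021-06-01"},                       # empty content
    {"id": 3, "text": "Reset a user password", "updated": "2023-12-05"},  # duplicate text
]

# Empty or whitespace-only documents add nothing to a training set.
empty = [r for r in records if not r["text"].strip()]

# Near-trivial duplicate detection on normalized text.
seen, duplicates = set(), []
for r in records:
    key = r["text"].strip().lower()
    if key and key in seen:
        duplicates.append(r)
    seen.add(key)

# Anything untouched for a year is flagged for review before training.
cutoff = datetime.now() - timedelta(days=365)
stale = [r for r in records if datetime.strptime(r["updated"], "%Y-%m-%d") < cutoff]

print(f"{len(empty)} empty, {len(duplicates)} duplicate, {len(stale)} stale of {len(records)} records")
```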
UNSTRUCTURED DATA
Unstructured data becomes a core enterprise challenge.
Over the last few years, we have seen explosive growth in the semi-structured data world (log files, models, snapshots, artifacts, code), which has, in turn, driven the growth of object storage. In 2024, we'll see an enterprise explosion of truly unstructured data (audio, video, meeting recordings, talks, presentations) as AI applications take flight. This is highly "learnable" content from an AI perspective, and gathering it into the AI data lake will greatly enhance the intelligence capacity of the enterprise as a whole, but it also comes with unique challenges. There are distinct challenges in maintaining performance at tens of petabytes. These generally cannot be solved with traditional SAN/NAS solutions; they require the attributes of a modern, highly performant object store. This is why most AI/ML technologies leverage object stores and why most databases are moving to be object-storage centric.
Anand Babu Periasamy
Co-Founder and CEO, MinIO
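As a small illustration of gathering unstructured assets into an object-store-backed AI data lake, the sketch below uses the MinIO Python SDK (pip install minio) to upload a meeting recording. The endpoint, credentials, bucket name, local file path, and object key are placeholders, not references to any real deployment.

```python
# Minimal sketch of landing unstructured assets (recordings, slides) in an
# object store so they can feed an AI data lake. Endpoint, credentials, and
# paths below are placeholders.

from minio import Minio

client = Minio(
    "object-store.example.com",   # placeholder endpoint
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
    secure=True,
)

bucket = "ai-data-lake"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a meeting recording; keying objects by source and date lets
# downstream training pipelines select slices of the lake.
client.fput_object(
    bucket,
    "meetings/2024/01/all-hands.mp4",   # illustrative object key
    "/tmp/all-hands.mp4",               # illustrative local file
    content_type="video/mp4",
)
```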
AI SPRAWL
Get ready for AI sprawl: if not in 2024, then by 2025, we'll see a number of AI projects that were spun up for a specific purpose sitting idle while consuming a considerable amount of resources. We've seen this in the past with physical servers and with virtual machines, where people don't want to get rid of them. These environments take a lot of effort to get up and running, so people will leave them running until they're ready with a data set.
Steve Martinelli
Director of Developer Advocacy and Community, Equinix
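One way to get ahead of the sprawl described above is a periodic audit of AI environments that hold resources but see no recent use. The sketch below works from a hypothetical inventory and a 30-day idle threshold; both are assumptions for illustration, and real data would come from a cloud or cluster inventory system.

```python
# Minimal sketch of an AI sprawl audit: flag environments that hold GPUs
# but haven't been used recently. Inventory contents and the 30-day
# threshold are hypothetical.

from datetime import datetime, timedelta

inventory = [
    {"name": "churn-poc",   "gpus": 4, "last_used": "2023-08-14"},
    {"name": "support-bot", "gpus": 8, "last_used": "2024-01-09"},
]

cutoff = datetime.now() - timedelta(days=30)
for env in inventory:
    last_used = datetime.strptime(env["last_used"], "%Y-%m-%d")
    if last_used < cutoff:
        print(f"idle candidate: {env['name']} ({env['gpus']} GPUs, last used {env['last_used']})")
```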
HARDWARE SUPPLY CHAIN SHORTAGES
While AI adoption in data centers is still in the early stages, the industry must prepare for potential challenges going into next year. For starters, a projected 1.6 to 2 million H100 GPUs are expected to ship next year, with AMD just announcing 300K to 400K of its latest MI300 GPUs to be delivered in 2024. Neither Nvidia nor AMD can keep up with demand; customers want more. This will only add to the demand pressures that the data center industry is experiencing now, in addition to organic cloud growth. An estimated 80 to 90 percent of these shipments will land domestically in the US, adding 2.5 to 3.0 gigawatts of demand pressure on the market. While the recent OpenAI corporate drama highlighted existential concerns about AI's potential to save or destroy the world, these physical bottlenecks present a nearer-term concern: AI innovation may stall if capacity can't be delivered to keep AI hardware operating.
Tom Traugott
SVP of Strategy, EdgeCore Digital Infrastructure
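As a back-of-the-envelope check on the demand figures above, the sketch below multiplies projected shipments by the US share and an assumed per-GPU facility power. The 1.2 kW per-GPU server figure and the 1.3 to 1.5 PUE range are assumptions, not vendor or EdgeCore numbers; with them, the estimate lands roughly around the 2.5 to 3.0 GW range cited.

```python
# Back-of-the-envelope check on the power demand figures above.
# Per-GPU power and PUE values are assumptions for illustration only.

shipments = (1.6e6, 2.0e6)       # projected H100-class GPUs shipped (from the text)
us_share = (0.80, 0.90)          # share landing in the US (from the text)
server_kw_per_gpu = 1.2          # assumed: ~700 W GPU plus a share of host/network power
pue = (1.3, 1.5)                 # assumed data center overhead factor

low = shipments[0] * us_share[0] * server_kw_per_gpu * pue[0] / 1e6   # kW -> GW
high = shipments[1] * us_share[1] * server_kw_per_gpu * pue[1] / 1e6  # kW -> GW
print(f"estimated added US demand: {low:.1f} to {high:.1f} GW")
```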
VALIDATING AI VENDORS
What we will see in 2024 is a lot of ".ai" added to product and company names. Some may have robust AI capabilities, and some may have AI in name only. Enterprise IT leaders will have to wade through all of these vendor options to find those who are truly leveraging or adopting AI in their platforms. Researching vendor claims and validating functionality will get incrementally more difficult, so we will see a need for greater diligence here.
Tim Flower
VP of DEX, Nexthink
AI MONOPOLY
We're headed toward a generative AI monopoly. Hyperscalers will continue to innovate and own generative AI because they are adopting GenAI startups' business plans and ideas and consolidating quickly. We are heading toward limited choices when it comes to building out our AI stack. As a result, there's going to be quick regulatory pressure from the EU, US Congress, and so on, but it will be too late.
Patrick McFadin
VP of Developer Relations, DataStax
AI GOVERNANCE
We're starting to see the pace of AI governance-focused discussions and guardrails pick up as the innovations, developments and use cases for the technology multiply. It's a promising start to see the recent White House AI executive order creating an early set of safeguards, following a similar course of action from 2022, when the Securing Open Source Software Act was introduced in the Senate and later as a bill in 2023, recognizing the importance of open source software for technology development, national security and the economy. Introducing executive orders and the like sparks meaningful conversations about further standards and policy changes and pushes organizations to consider the repercussions of ignoring such oversight. As more people's eyes focus on the ways AI integrates into their social and professional lives, organizations will be forced to pay attention and get serious about self-policing and funding around future AI projects, or face negative consequences; notably, we can predict that legal action and lawsuits will grow at unprecedented rates in 2024. This will pressure technology vendors and government regulators to put parameters around AI technology and other emerging innovations, like virtual reality. Imagine a future scenario where someone is denied a virtual property loan by AI and can't build a virtual house where they want it. It may seem like a far-out concept, but that reality will be here before we know it. The more we can talk about regulation, the better.
Rod Cope
CTO, Perforce Software
We already began to see this toward the end of 2023, but in 2024, we can expect governments and AI service providers to continue to implement policies regulating the development of AI. The key differentiator will be whether these entities have moved beyond the shock and awe of AI to focus on the benefits. Risk assessment will continue to be part of the equation, as it should with any advancement in technology, but prioritizing innovation in these policies rather than fear will set countries apart. In 2023, we focused on the potential risks of AI. In 2024, it will be essential to focus on the potential opportunities.
Kev Breen
Director of Cyber Threat Research, Immersive Labs
AI governance becomes a C-level imperative, causing CDOs to reach their breaking point: The practice of AI governance will become a C-level imperative as businesses seek to leverage the game-changing opportunities it presents while balancing responsible and compliant use. This challenge is further emphasized by the emergence of generative AI, which adds complexity to the landscape. AI governance is a collective effort, demanding collaboration across functions to address the ethical, legal, social, and operational implications of AI. Nonetheless, for CDOs, the responsibility rests squarely on their shoulders. The impending introduction of new AI regulations adds another layer of complexity, as CDOs grapple with an evolving regulatory landscape that threatens substantial fines for non-compliance, potentially costing millions. This pressure will push certain CDOs to their breaking point. For others, it will underscore the importance of establishing a fully resourced AI governance capability, coupled with C-level oversight. This strategic approach not only addresses immediate challenges but also strengthens the overall case for proactive and well-supported AI governance going forward.
Helena Schwenk
VP, Chief Data & Analytics Office, Exasol
AI ETHICS
The widespread availability of AI technologies has been a milestone moment in the timeline of technology for modern society. This topic will only become more relevant to policy makers, technology developers, and business owners over the next several years. Much like the initial widespread adoption of the internet, we must recognize that AI is here to stay, so our business and government policies must reflect this new frontier. With that in mind, I think ethics in AI must become a top talking point to ensure that AI is implemented in such a way that privacy and intellectual property are protected, and that AI models are developed responsibly and ethically to protect users from risk and abuse.
Melissa Bischoping
Director, Endpoint Security Research Specialist, Tanium
Go to: 2024 AI Predictions - Part 5, the final installment in this series, covering the results AI will deliver.