
Curbing the Generative AI Spam Machine

Kapil Tandon
Perforce

Efficiency. It's the buzzword enterprise leaders everywhere are focused on this year, driven mainly by the promise that generative AI will reduce the time spent on mundane tasks and increase workforce productivity. These promised benefits have driven increased interest in AI investment, and while the technology has the potential to transform the way we work, like anything else in history, it also has its shortcomings.

Chief among those shortcomings is the spam generative AI can produce. The more dependent the workforce becomes on generative AI despite its inaccuracies, the more costly mistakes will be made. Until the technology advances and its accuracy improves, the AI investments organizations are currently making may not pay off after all.

What Does Generative AI Spam Look Like?

When people hear the word spam, they typically think of mass marketing emails or unsolicited sales messages. However, in the case of generative AI in the enterprise, the spam it can produce looks a little different.

One way generative AI will be used is to increase the number of scams distributed through spam bots, which can be leveraged for things like phishing emails. With AI tools available to the average person, anyone, regardless of technical ability, can now use AI to craft human-like messages and research better tactics for cyberattacks.

Typically, one way recipients identify a spam email that may be a phishing scam is through incorrect grammar and spelling, but bad actors can now leverage generative AI chatbots like ChatGPT to make their emails appear more legitimate. These actors can prompt ChatGPT or other large language models (LLMs) to draft emails with clean grammar and correct spelling, making them difficult for consumers to distinguish from legitimate messages.

Another form of spam is the overwhelming amount of data that end users will flood generative AI models with. As more prompts are entered while training data remains outdated or stagnant, systems become more likely to produce copious amounts of incorrect information. With both the machine and the humans using it inputting and outputting massive amounts of irrelevant, inaccurate data, AI becomes less efficient for organizational productivity.

How Will This Impact the Bottom Line?

The rise in spam will coincide with a decrease in the productivity gains decision-makers were predicting AI systems would bring, reducing the value of using these tools. The democratization of AI, while great for education and awareness around the technology, will also result in everyday users accidentally engaging with spam bots or phishing emails more frequently, in turn increasing the number of cyber incidents. With nearly half of cybersecurity leaders predicted to change jobs by 2025 due to elevated levels of stress and burnout, already overworked cyber teams will become even more overburdened as AI-driven cyberattacks rise and talent shortages continue.

With data revealing that 96% of organizations address supply chain security problems on an ad hoc basis, but only half have a formalized DevOps supply chain security strategy in place, AI is expected to only exacerbate software supply chain attacks and expose these gaps in strategy. The increase in security incidents from AI will lead more organizations to eventually adopt infrastructure state management and continuous compliance, which are critical to meeting security standards and addressing growing concerns. However, until these mitigations are in place, overburdened security teams will continue to face challenges with DevOps supply chain security.
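To make the idea of infrastructure state management and continuous compliance concrete, here is a minimal sketch (not any particular vendor's workflow) of a scheduled check that compares a server's observed state against a declared baseline and flags drift. The baseline fields, values, and host name are hypothetical.

```python
# Minimal continuous-compliance sketch: compare observed server state
# against a declared baseline and report drift. The baseline values and
# the fetch_state() stub are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class ServerState:
    tls_min_version: str
    open_ports: frozenset
    os_patch_level: str

# Declared (desired) baseline the fleet must comply with.
BASELINE = ServerState(
    tls_min_version="1.2",
    open_ports=frozenset({22, 443}),
    os_patch_level="2024-05",
)

def fetch_state(host: str) -> ServerState:
    """Stub: in practice this would query the host or an inventory/CMDB API."""
    return ServerState("1.0", frozenset({22, 443, 8080}), "2023-11")

def drift_report(host: str) -> list:
    """Return human-readable compliance violations for a host."""
    actual = fetch_state(host)
    issues = []
    if actual.tls_min_version < BASELINE.tls_min_version:
        issues.append(f"TLS {actual.tls_min_version} below required {BASELINE.tls_min_version}")
    extra_ports = actual.open_ports - BASELINE.open_ports
    if extra_ports:
        issues.append(f"unexpected open ports: {sorted(extra_ports)}")
    if actual.os_patch_level < BASELINE.os_patch_level:
        issues.append(f"patch level {actual.os_patch_level} older than {BASELINE.os_patch_level}")
    return issues

if __name__ == "__main__":
    for host in ["build-01.example.com"]:  # hypothetical host
        for issue in drift_report(host):
            print(f"{host}: NON-COMPLIANT - {issue}")
```

Run on a schedule (from CI, for example), a check like this turns compliance from a periodic audit into a continuous signal, which is the property the paragraph above points to.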

Additionally, as the accuracy of generative AI outputs declines with the age of the model, employees will have to spend more time sifting through garbage outputs to find valuable insights. If employees assume the outputs are accurate and that generative AI is boosting their productivity, they will become overdependent on antiquated models and work off incorrect data. If the generative AI model employees rely on is outdated and requires new training data, it will take away from the promised productivity gains and organizational efficiency. Not only will employees have to sift through responses, but developers will also have to dedicate more time to system maintenance and finding new training data.

Stopping the Generative AI Spam Machine

Even though AI adoption has long been inevitable, organizations must be prepared to deal with the implications of outdated training data when the day comes. To prevent generative AI spam, development teams should focus on continual data upkeep, making incremental additions where applicable to ensure the system is trained on the most up-to-date information. As more of the workforce becomes educated and comfortable with AI, the learning curve will be less steep, and users will begin to understand the difference between a correct and an incorrect output.
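As one illustration of what "continual data upkeep" can look like in practice, the sketch below (hypothetical record fields and a made-up 180-day window, not any specific product's pipeline) filters a corpus to records updated recently and flags the stale remainder for review before the next incremental training or indexing run.

```python
# Sketch of a data-freshness gate for incremental model or index updates.
# The record structure and the 180-day window are illustrative assumptions.

from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=180)  # how old a document may be before it needs review

def split_by_freshness(corpus, now=None):
    """Split records into (fresh, stale) based on their last_updated field."""
    now = now or datetime.now(timezone.utc)
    fresh, stale = [], []
    for record in corpus:
        age = now - record["last_updated"]
        (fresh if age <= MAX_AGE else stale).append(record)
    return fresh, stale

corpus = [
    {"id": "pricing-faq", "last_updated": datetime(2025, 4, 2, tzinfo=timezone.utc)},
    {"id": "2022-policy", "last_updated": datetime(2022, 7, 9, tzinfo=timezone.utc)},
]

fresh, stale = split_by_freshness(corpus)
print("train/index now:", [r["id"] for r in fresh])
print("needs review before next run:", [r["id"] for r in stale])
```

A gate like this does not fix stale data by itself, but it keeps outdated material from silently flowing into the next update, which is the failure mode the paragraph above warns about.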

Similarly, use cases for AI will begin to narrow and become more specific, making the actual benefits easier to realize. Decision-makers will weed out areas where AI is not adding real value and focus efforts on the areas where it can increase operational efficiency. As the technology becomes more advanced, it will be easier to identify where leveraging AI may not be the best fit. AI may not be ready to make higher-level decisions or generate substantial amounts of code, but if organizations focus on the right use cases and ensure data is up to date, this will one day be a real benefit for leaders in all industries.

While individual organizations may not be able to stop cybercriminals from leveraging chatbots and learning from AI, they can provide the right education for employees. Teaching employees other ways to distinguish spam from legitimate requests, as well as general AI education, will help combat the wave of cyberattacks to come. Eventually, regulations will be introduced to handle these issues, especially around AI spam bots on social media, but in the meantime, educating employees is critical for organizational security.
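Since polished grammar is no longer a reliable tell, that training can focus on structural signals instead. The sketch below is a simplified illustration of two such checks (a look-alike sender domain and link text that does not match the link target); the domain allow-list and example message are made up for the example and are nowhere near a complete detector.

```python
# Simplified phishing-indicator checks that do not rely on spelling or grammar.
# The trusted-domain list and the example message are illustrative only.

import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"perforce.com", "example-corp.com"}  # hypothetical allow-list

def suspicious_sender(sender: str) -> bool:
    """Flag senders whose domain is not on the allow-list (e.g. look-alikes)."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain not in TRUSTED_DOMAINS

def link_text_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain than the target URL."""
    shown = re.search(r"([a-z0-9-]+\.[a-z]{2,})", display_text.lower())
    actual = urlparse(href).hostname or ""
    return bool(shown) and shown.group(1) not in actual

message = {
    "from": "it-support@perf0rce-logins.com",      # look-alike domain
    "link_text": "perforce.com/reset-password",
    "link_href": "https://credential-harvest.example.net/reset",
}

flags = []
if suspicious_sender(message["from"]):
    flags.append("sender domain not recognized")
if link_text_mismatch(message["link_text"], message["link_href"]):
    flags.append("link text does not match link target")
print("indicators:", flags or ["none found"])
```

The same two signals translate directly into employee guidance: hover over links to confirm the destination matches the displayed text, and verify the sender's domain rather than trusting how the message reads.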

Don't Fall for the Generative AI Spam Machine Trap

AI holds tremendous potential to upend and improve our daily lives. Whether it is used to brainstorm content or provide quick insights on research topics, the current state of AI does hold benefits for many. However, beyond basic assistance tasks, generative AI is not yet at the point where its promise has been fully realized.

When making investments in AI, be sure to evaluate the areas of the business that may not be equipped to integrate AI into their workflows. When leaders focus on the tried-and-true use cases with proven benefits, they can devote time to innovating for the AI model of tomorrow. Like any new technology in history, AI will bring both good and bad, and it is up to leaders to protect their enterprises and navigate the issues of today.

Kapil Tandon is VP of Product Management at Perforce
