Generative AI: A Double-Edged Sword for Business Cybersecurity

Posted on Sunday, March 31, 2024

Generative artificial intelligence (AI) has emerged as one of the most transformative and disruptive technologies of our time. 

Systems like ChatGPT, DALL-E, Midjourney, and others can produce highly realistic text, images, code, and other content that is often indistinguishable from what a human would create. 

These tools have also democratized hacking and cybercrime tactics such as phishing, and they present an enormous reservoir of information for attackers to exploit. 

Thousands of logins to ChatGPT and other tools are available on the darknet, putting countless individuals at risk of having their AI chats – which might include sensitive or proprietary data – exposed. 

While generative AI offers immense potential to enhance productivity and innovation for businesses across all industries, it also introduces significant new cybersecurity risks that must be understood and mitigated.

The Risks of AI-Powered Social Engineering

One of the most concerning risks of generative AI from a business cybersecurity perspective is its potential to be used for highly sophisticated social engineering attacks.

OpenAI and Microsoft recently found that threat actors worldwide are using AI tools to launch social engineering attacks against people and businesses. 

Cybercriminals can leverage the language generation capabilities of systems like ChatGPT to craft extremely convincing phishing emails, chat messages, and other communications designed to trick employees into:

  • Revealing sensitive information
  • Installing malware
  • Sending payments to fraudulent accounts

A recent study found that phishing emails written by ChatGPT had a higher success rate in fooling recipients than human-written phishing attempts. 

The AI-generated messages used more advanced and psychologically persuasive language that built trust and a sense of legitimacy. 

As generative AI advances, it will become increasingly difficult for employees to distinguish malicious AI-generated communications from authentic business emails and messages.

Data Leakage and Unauthorized Access

Generative AI also poses major risks around data leakage and unauthorized access.

Many businesses are eager to leverage generative AI to increase efficiency and automate tasks. However, in doing so, they may inadvertently expose highly sensitive business data and customer information to the AI models and the companies behind them.

For example, a business might use a generative AI coding assistant to help developers write software faster. In that case, any proprietary code or data shared with that system is potentially at risk of being retained by the model, shared with the AI company, or even exposed in the model’s outputs. Similarly, a generative AI customer support chatbot could ingest and leak sensitive customer information, leading to regulatory issues.

There are already real-world examples of generative AI systems divulging private data they were exposed to during training. 

In one high-profile case, the ChatGPT model was found to be inadvertently leaking personally identifiable information (PII) of some individuals whose data was included in its training set. 

While leading AI companies will likely harden their systems against these types of leaks over time, businesses must be extremely cautious about what data they feed into generative AI.

The Threat of Deepfakes and Synthetic Media

Another leading concern is that generative AI is being used to create highly realistic deepfakes and synthetic media for fraud or reputation attacks against businesses and executives. 

Cybercriminals can leverage AI image and video generation to convincingly spoof the identities of business leaders, allowing them to spread misinformation or con employees and business partners.

For instance, an attacker could use generative AI to create a fake video of a company’s CEO announcing a major (but false) financial issue or legal trouble for the business. 

By the time the company responds, its stock price and public reputation could already be severely damaged. As generative AI makes fake content creation accessible to anyone, businesses will need robust media authentication and incident response capabilities to combat deepfakes.

Malicious Insiders and IP Theft

Beyond external cyber threats, generative AI can also be misused by malicious insiders within a business. 

Rogue employees could potentially use generative AI to rapidly exfiltrate massive amounts of sensitive business data and intellectual property (IP). 

By prompting an AI system to reproduce confidential documents, source code, product designs, and more, insiders can steal IP at an unprecedented speed and scale.

Expanding Attack Surface

Generative AI also risks massively expanding businesses’ attack surface by introducing countless new AI-powered apps and services, each with its own vulnerabilities. 

As just one example, cybercriminals have already begun exploiting popular AI image generation services to quickly create profile pictures for fake social media accounts used in scams and disinformation campaigns. 

Each new AI app a business uses widens the threat landscape.

Mitigating the Risks: Best Practices for Secure Generative AI Adoption

So how can businesses reap the benefits of generative AI while safeguarding against its cybersecurity threats and other risks? 

It starts with establishing clear policies and guidelines around generative AI systems. Businesses should:

  • Restrict access to generative AI to only essential personnel
  • Carefully vet and monitor all AI vendors for robust security and data privacy practices
  • Keep generative AI isolated from internal systems and siloed from sensitive business data
  • Carefully screen outputs from generative AI (e.g., text, code, images) before use
  • Invest in employee training to boost awareness of AI-powered social engineering and fraud techniques
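The data-siloing and output-screening points above can be partially automated. As a minimal sketch (the patterns and placeholder format below are illustrative assumptions, not a complete data loss prevention solution), a business might scrub obviously sensitive strings from prompts before they ever reach an external AI service:

```python
import re

# Illustrative patterns for common sensitive data. A real deployment would
# rely on a dedicated DLP tool with far broader and more reliable coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive-data pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, key sk-abcdef1234567890."
print(redact(prompt))
```

A filter like this would sit between employees (or internal applications) and the AI vendor's API, so that even accidental pastes of customer records never leave the company's systems in raw form.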

In the longer term, businesses should explore generative AI systems that can be self-hosted, reducing dependence on external AI vendors and the potential for data exposure. 

Techniques like federated learning, differential privacy, and model watermarking can help mitigate security and IP theft risks when leveraging generative AI.
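Of these techniques, differential privacy is the most concrete to illustrate: calibrated random noise is added to query results so that the presence or absence of any single record cannot be inferred. A minimal sketch of the standard Laplace mechanism follows (the epsilon value and the customer-count scenario are illustrative assumptions):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private count using the Laplace mechanism.

    A count query has sensitivity 1: adding or removing one record changes
    the result by at most 1. Smaller epsilon means stronger privacy but a
    noisier answer.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. releasing how many customer records mention a given product,
# without revealing whether any one customer's record is present
print(round(private_count(1342, epsilon=0.5)))
```

Statistics released this way stay useful in aggregate while individual records remain protected, which is why the technique pairs well with training or fine-tuning models on business data.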

The Future of Generative AI and Cybersecurity

As generative AI technologies evolve and mature, we can expect to see even more sophisticated cybersecurity threats emerge. For example, researchers have already demonstrated how generative AI can be used to:

  • Craft highly targeted and personalized social engineering attacks by analyzing a victim’s online footprint
  • Generate synthetic training data to help attackers develop more effective malware and exploits
  • Automate the creation of realistic deepfake videos and images for misinformation campaigns and fraud

To stay ahead of these evolving threats, businesses will need to continuously adapt their cybersecurity strategies and invest in cutting-edge defenses. 

This may include adopting AI-powered security tools that can detect and respond to AI-generated threats in real time and participating in industry-wide efforts to develop standards and best practices for secure generative AI use.

Balancing the Benefits and Risks

Despite the significant cybersecurity challenges posed by generative AI, it’s important to recognize the technology’s immense potential to drive business value and innovation. 

From automating content creation and customer support to accelerating R&D and software development, generative AI can help businesses operate more efficiently, creatively, and competitively.

The key is to approach generative AI adoption with a balanced mindset that acknowledges benefits and risks. 

By implementing robust security controls and governance from the outset, businesses can position themselves to reap the rewards of generative AI while minimizing its potential downsides.


Generative AI is a double-edged sword for business cybersecurity. While it offers transformative potential for innovation and productivity, it also introduces new and complex security risks that must be carefully managed. 

As AI technology continues to advance rapidly, businesses must remain vigilant and proactive in their approach to securely harnessing its power.

Businesses can use generative AI without compromising their cybersecurity posture by establishing clear policies, implementing best practices for secure use, and staying abreast of the latest threats and defenses. 

The future belongs to those who can strike this critical balance.

Mustard IT can help you prepare for the future of cybersecurity. Contact us today to learn more about our services.