Generative AI has spawned a series of emerging cyber threats. Hackers have more opportunities to exploit vulnerabilities and can execute malicious activities in more ways.
Fortunately, the opposite is also true: generative AI can strengthen the defensive capabilities of enterprises. In the short term, generative AI will make once burdensome security processes more efficient. By analyzing vast amounts of data and identifying patterns and anomalies, generative AI can detect newly emerging threats in a timely manner.
As malicious actors continuously adopt new attack methods, cybersecurity teams must also keep pace and stay one step ahead.
Generative AI and large language models pose risks. But how can we also turn AI itself into a safeguard for data security? We may well see more AI systems pitted against one another to keep the ecosystem in balance.
Almost all executives surveyed (96%) stated that adopting generative AI could lead to security vulnerabilities in their organizations within the next three years.
Generative AI introduces entirely new risks and threats.
Generative AI provides cyber attackers with a new arsenal. Today’s hackers are no longer just forging emails; they can mimic voices, faces, and even personalities to deceive victims. And this is just the beginning.
As generative AI continues to proliferate over the next six to twelve months, experts expect new forms of intrusion attacks of unprecedented scale, speed, and sophistication, with new threats emerging continuously. Weighing both likelihood and potential impact, large-scale autonomous attacks represent the most significant risk. Surveyed executives, however, expect that hackers spoofing or impersonating trusted users will have the greatest impact on their businesses, followed by the creation of malicious code.
The way organizations implement generative AI may also introduce new risks. In fact, 47% of executives surveyed are concerned that adopting generative AI in operations could trigger new types of attacks targeting their organizations’ own AI models, data, or services.
The average cost of a global data breach is $4.45 million, with the U.S. figure reaching as high as $9.48 million. In this context, many enterprises are increasing their investments to address emerging cybersecurity risks. Executives surveyed indicated that their organizations’ AI cybersecurity budgets for 2023 have increased by 51% compared to 2021. Moreover, they expect this budget to increase by another 43% by 2025.
View generative AI as an urgently needed platform that requires protection.
Cybersecurity leaders are urged to take immediate action to address the risks posed by generative AI rather than taking incremental steps.
Understand the current state of AI risk. Hold board-level meetings that bring together cybersecurity, technology, data, and operations leaders to discuss evolving risks, including which uses of generative AI may expose sensitive data or allow unauthorized access to systems. Make everyone aware of emerging “adversarial” AI, in which introducing nearly imperceptible changes into core data sets can lead to malicious outcomes.
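To make the “adversarial AI” threat concrete, here is a minimal, purely illustrative Python sketch (a toy linear classifier, not any real production model): a small per-feature nudge, chosen against the model’s weights, flips the classification even though the input barely changes.

```python
# Toy linear classifier: the sign of the dot product decides
# "benign" (positive) vs "malicious" (negative).
w = [0.4, -0.3, 0.2]                       # model weights (illustrative)
x = [1.0, 0.5, 0.8]                        # a legitimate input

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1 if v > 0 else -1

score = dot(w, x)                          # ~0.41 -> classified benign

# Adversarial step: nudge each feature slightly against the weight vector.
epsilon = 0.5
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

adv_score = dot(w, x_adv)                  # ~-0.04 -> now classified malicious
```

The perturbation is bounded per feature, yet it crosses the decision boundary, which is exactly why adversarial manipulation of inputs or training data is hard to spot by eye.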
Ensure the entire AI pipeline is secure. Focus on protecting and encrypting the data used to train and tune AI models. Continuously scan for vulnerabilities, malware, and corruption during the model development process. After model deployment, monitor for AI-specific attacks (e.g., data poisoning and model theft).
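One of the safeguards above, scanning for corruption in the pipeline, can be sketched as a deterministic fingerprint of the training data. This is an illustrative example only; real pipelines would use signed manifests or dataset-versioning tooling.

```python
import hashlib

def fingerprint(records):
    """Deterministic SHA-256 digest over an ordered list of training records."""
    h = hashlib.sha256()
    for rec in records:
        h.update(rec.encode("utf-8"))
        h.update(b"\x00")                  # record separator
    return h.hexdigest()

# Snapshot the data when it is first approved for training.
baseline = fingerprint(["user_a,login_ok", "user_b,login_fail"])

# Later in the pipeline: recompute and compare before training starts.
tampered = fingerprint(["user_a,login_ok", "user_b,login_ok"])
assert baseline != tampered                # a one-field change is detected
```

Comparing the recomputed digest against the stored baseline before every training or tuning run turns silent data tampering into a loud, automatable failure.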
Invest in deploying new defense measures specifically designed to protect AI. While existing security controls and expertise can be extended to protect the infrastructure and data supporting AI systems, detecting and preventing adversarial attacks against AI requires entirely new approaches.
Without secure data, achieving trustworthy generative AI is out of the question.
Data is the lifeblood of generative AI. All models rely on data to answer queries and provide insights, which is why training data has become a primary target for cyber attacks.
Hackers still aim to steal data and sell it at a premium, but manipulating data offers a new avenue for illicit profit. If hackers can alter the data driving an organization’s generative AI model, they can influence business decisions through targeted manipulation or misinformation. This evolving threat raises a host of new legal, security, and privacy issues that CEOs must manage across the enterprise.
Executives see the severity of the problem. In adopting generative AI, surveyed executives expect a variety of risks: 84% are concerned that new vulnerabilities could enable widespread or even catastrophic cyberattacks. One-third stated that without entirely new forms of governance, such as comprehensive regulatory frameworks and independent third-party audits, these risks cannot be managed.
Overall, 94% of surveyed executives believe that ensuring the security of AI solutions before deployment is critical. However, only 24% of generative AI projects will incorporate cybersecurity components within the next six months. Additionally, 69% of surveyed executives stated that innovation takes precedence over cybersecurity in deploying generative AI.
This indicates a clear disconnect between the understanding of the need for generative AI cybersecurity and the implementation of cybersecurity measures. To avoid costly and unnecessary consequences, CEOs need to take strong measures to address data cybersecurity and data provenance issues, including investing in data protection measures (such as encryption and anonymization). Data tracking and provenance systems can provide better integrity assurances for the data used by generative AI models.
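As a hedged illustration of the anonymization and provenance measures described above, the sketch below pseudonymizes a direct identifier with a keyed hash and attaches a simple provenance record. The key, dataset name, and field names are hypothetical; production systems would keep the key in a secrets manager and use a dedicated lineage tool.

```python
import hashlib
import hmac
import json

SECRET = b"rotate-me"  # hypothetical key; store in a secrets manager in practice

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "alice@example.com", "event": "password_reset"}
safe = {**record, "email": pseudonymize(record["email"])}

# Minimal provenance entry: where the data came from, what was done to it,
# and a checksum tying the entry to the exact transformed content.
provenance = {
    "source": "auth-logs-2023-q4",         # hypothetical dataset name
    "transform": "pseudonymize:email",
    "checksum": hashlib.sha256(
        json.dumps(safe, sort_keys=True).encode("utf-8")
    ).hexdigest(),
}
```

A keyed hash keeps tokens consistent across records (so joins still work) while the raw identifier never reaches the model’s training data.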
Make trustworthy data the pillar of the organization.
Continuously iterate cybersecurity practices, taking into account the requirements of various generative AI models and data services.
Establish trust and security in AI applications. Prioritize implementing data strategies and controls centered around security, privacy, governance, and compliance. Communicating transparency and accountability is crucial in managing risks to prevent bias, hallucinations, and other issues.
Protect the data that powers AI. Make the Chief Information Security Officer responsible for identifying and classifying sensitive data used in training or tuning. Implement data loss prevention technologies to prevent data leaks through prompts. Implement access policies and controls around machine learning datasets. Expand threat modeling to cover AI-specific threats, such as data poisoning, leakage of sensitive data, and generation of inappropriate content.
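A minimal sketch of the prompt-level data loss prevention mentioned above, assuming a simple pattern-based scanner. Production DLP systems use far richer detectors than these two illustrative regexes.

```python
import re

# Hypothetical, minimal DLP patterns for demonstration only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str):
    """Return the list of sensitive-data types detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

hits = scan_prompt(
    "Summarize the complaint from bob@corp.com about card 4111 1111 1111 1111"
)
# hits -> the prompt should be blocked or redacted before reaching the model
```

Running such a check in the gateway between users and the model gives security teams a single choke point to block, redact, or log sensitive content.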
View cybersecurity as a product and stakeholders as customers. Cybersecurity is vital for ensuring the smooth progress of AI initiatives and driving revenue growth. To ensure the secure use of AI in products, make your team aware of the cybersecurity threats posed by generative AI. Emphasize the value of changing behaviors to improve data and security hygiene.
Generative AI will become a “multiplier” for cybersecurity.
If applied in the field of cybersecurity, generative AI can be a business accelerator. Generative AI can automate repetitive and time-consuming tasks, allowing teams to focus on more complex and strategic security matters. Generative AI can also detect and investigate threats and learn from past incidents to adjust the organization’s response strategies in real-time.
Given the significant benefits, CEOs face pressure to adopt generative AI rapidly and widely. But to keep growth from outpacing its foundations, business executives urgently need to leverage generative AI to enhance resilience. That way, executives can not only mitigate the risks of generative AI but also harness its power to make their organizations stronger.
More than half of surveyed executives (52%) stated that generative AI will help them better allocate resources, capabilities, talent, or skills, and 92% indicated that, once adopted, the technology is more likely to augment than replace their cybersecurity personnel.
These emerging technological tools can help teams reduce complexity and focus on the most important tasks, which is perhaps why 84% of surveyed executives plan to prioritize deploying generative AI cybersecurity solutions over traditional cybersecurity solutions.
Using generative AI in cybersecurity helps achieve a “multiplier effect” across the entire enterprise ecosystem. 84% of surveyed executives stated that open innovation ecosystems are crucial for their organization’s future growth strategy. As business executives seek to establish collaborative relationships that support innovation and growth, most surveyed executives expect generative AI capabilities will influence their organization’s choices of cloud computing (59%) and ecosystem partners across the business (62%) in the next two years.
Realign cybersecurity investments around speed and scale.
Make AI an essential tool for strengthening security defenses. Encourage cybersecurity leaders to embed generative AI and automation into their toolkits to respond to security risks and incidents quickly and at scale. This will significantly enhance productivity and make cybersecurity a driver of business growth.
Accelerate achieving security outcomes with AI. Automate routine tasks that do not require human expertise and judgment. Use generative AI to simplify tasks that involve human-technology collaboration, such as security policy generation, threat hunting, and incident response.
Deploy AI to detect new threats. Update tools and technologies to enable your team to keep pace with attackers in speed, scale, accuracy, and complexity. Use generative AI to more quickly identify patterns and anomalies, enabling teams to detect new threat vectors before they escalate into incidents.
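The pattern-and-anomaly detection described above can be illustrated with a simple statistical baseline: a z-score over hourly failed-login counts. The numbers and threshold are illustrative; real deployments layer far more sophisticated models on top of baselines like this.

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a new observation whose z-score against history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) / stdev > threshold

# Hourly failed-login counts for the past 12 hours (illustrative numbers).
history = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12, 11]

assert not is_anomalous(history, 14)   # normal variation, no alert
assert is_anomalous(history, 60)       # likely credential-stuffing burst
```

The same shape scales up: replace the single counter with per-user or per-endpoint baselines, and feed the flagged events to analysts or an automated response playbook.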
Leverage the power of collaboration. Work with trusted partners to jointly define AI security maturity and implement a comprehensive generative AI strategy to create value across the organization.