Skip to content




Why AI Belongs in Your Crisis Planning Playbook

Featured Replies

Crisis AI

THERE’S a phrase that seems to be everywhere in the business world right now, but it is likely missing from most companies’ crisis management plans: Artificial Intelligence (AI).

Crack open any decent crisis planning playbook, and you’ll find detailed roadmaps for navigating natural disasters, system failures, and traditional cyberattacks. These risks are well understood, and crisis management planners have often seen how other organizations have handled these setbacks or even dealt with them themselves.

Although AI now touches on great swaths of our professional and personal lives, it is still a very young technology. And while most people vaguely understand that AI introduces some new level of risk, these dangers largely have yet to materialize in the sorts of public disasters that make headlines and get business leaders to take notice.

Although no one can predict exactly how AI-related risks will unfold in the years to come, businesses should start incorporating the technology into their crisis management plans now. Bad actors are already using (and misusing) the technology, and some of the vulnerabilities in early AI deployments are starting to reveal themselves. Armed with this knowledge, organizations can prepare for AI-driven incidents before these events cause full-blown crises.

How AI Is Reshaping Cyber Threats

Unfortunately, AI is already making cyber attackers faster and more effective. Attacks that once required ample time, expertise, and manual effort to carry out can now be automated and scaled. The technology is also opening organizations to new attack types meant to leverage the vulnerabilities of AI systems.

Consider phishing attacks - a form of social engineering in which users are tricked into clicking a malicious link, downloading an infected file, or providing sensitive information such as passwords or banking information. With the help of AI, attackers can generate countless highly personalized messages, tailoring their tone, language, and details to specific targets. This makes fraudulent communications more difficult for employees to identify, increasing the likelihood of a successful breach.

At the same time, AI is introducing entirely new categories of risk. Many businesses are deploying the technology for processes such as customer service, which involve troves of sensitive information. Emerging cyber-attacks such as prompt injection, data poisoning, and model manipulation can be used to expose this information, or to manipulate AI outputs in ways that harm the business.

Finally, AI is blurring the line between fact and fiction. With deepfake video or audio messages, attackers have impersonated executives or colleagues, creating the trust needed to convince employees to take potentially disastrous actions.

Bringing a Crisis Planning Lens to AI

Perhaps understandably, many organizations still treat AI as a mostly technical capability aimed at transforming business outcomes. However, leaders must also carefully consider the risks of the technology. Looking at AI through a crisis planning lens means considering it with the same seriousness that teams bring when planning for a potential natural disaster, a system outage, or a data breach that exposes customer payment information.

Crisis management teams must think through how they would respond if an operations or management system were compromised by external AI. For instance: What is the role of legal, public relations, and product teams if a company’s chatbot begins providing users harmful or biased responses? What steps will the organization take if an attacker impersonates the CEO with a deepfake video that leads to a large fraudulent transaction or jeopardizes the company’s reputation? And what happens if a previously unknown vulnerability in an AI tool makes confidential human resources data available to users across the company or, worse, external bad actors?

AI is evolving quickly; crisis plans must be revisited frequently. It’s important that these conversations include cross-functional teams, because that is who will be responding to virtually any crisis involving AI. IT Security teams may be the first to detect an issue, but legal departments, communications professionals, and executive leadership will all likely play critical roles in determining how the organization responds. Aligning these groups ahead of time will avoid delays and confusion when the time comes to act.

Although all the risks surrounding AI may not yet be fully understood, we can say with certainty that the technology will play a role in future high-profile crises. Organizations that wait for an incident to force action will find themselves making critical, on-the-spot decisions under extraordinary pressure. But those that begin integrating AI into their crisis planning now will be able to respond from a position of preparedness rather than panic.

* * *

Leading Forum
Steven B. Goldman is an internationally recognized expert and consultant in Business Resiliency, Crisis Management, Crisis Leadership, and Crisis Communications. He has over 40 years’ experience in the various aspects of these disciplines, including program management, plan development, training, exercises, and response strategies. He is the Director of the program offered through MIT Professional Education. The 2026 sessions run live on campus July 13-17 and online during the last two weeks of October. This comprehensive program provides important knowledge, current assessments, and several case studies on issues that affect you and your organization — regulations and standards, response strategies, cyber security, supply chain, crisis leadership, artificial intelligence, communications, news media, social media, federal/state/local government response, drills and exercises — from the experts involved with these efforts.

* * *

instagram.png TwitterAdLogo.png Follow us on Instagram and X for additional leadership and personal development ideas.

 

Explore More

AI Survival Competing in the Age of AI

View the full article





Important Information

We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue.

Account

Navigation

Search

Search

Configure browser push notifications

Chrome (Android)
  1. Tap the lock icon next to the address bar.
  2. Tap Permissions → Notifications.
  3. Adjust your preference.
Chrome (Desktop)
  1. Click the padlock icon in the address bar.
  2. Select Site settings.
  3. Find Notifications and adjust your preference.