K. Scott Griffith

The Leader's Guide to Managing Risk

16 min read · 3.9 (7 ratings)

Have you ever thought about the dangers you don't see—the ones buried beneath the surface—and how they could derail your plans or safety in an instant?

1. Risk Management Begins with Real-Life Lessons

Life often thrusts us into unexpected moments that redefine our understanding of risk. For K. Scott Griffith, surviving a terrifying plane crash marked the beginning of his journey into the world of safety and reliability. Risk, he realized, isn’t just an obstacle – it’s a reality that requires proactive and thoughtful management.

Drawing from his airline experiences, Griffith developed innovative programs like the Aviation Safety Action Program (ASAP). This encouraged aviation personnel to report safety concerns without fear of reprisal. The result? A stunning 95% drop in fatal airline accidents—a testament to what happens when risks are addressed head-on.

Later, Griffith expanded his expertise to healthcare, where the stakes are equally high. By rethinking how patient care systems work, he helped reduce medical errors while balancing competing priorities like patient expectations and cost. His work emphasized addressing both technical glitches and human lapses, always focusing on consistent outcomes.

Examples

  • Griffith's work with NASA and the FAA to tackle microbursts in aviation inspired new predictive safety technologies.
  • The ASAP program fostered a safer aviation culture by promoting open communication.
  • Innovations in healthcare systems, such as error-tracking programs, have saved lives and improved outcomes.

2. The Iceberg of Hidden Risks

Most dangers hide in plain sight, much like the iceberg that sank the Titanic. Griffith’s "iceberg model" underscores the reality that what you don't see is often far more threatening than what you do.

Take organizations like Facebook and Apple. Facebook faced a reputational crisis over unseen privacy concerns, while Apple battled disruptions in supply chains. These challenges remind us that failing to anticipate hidden risks can cripple even the strongest systems. Similarly, in medicine, gastric ulcers were long attributed to stress until the hidden bacterial culprit was identified, reshaping treatment approaches.

Griffith advises leaders to look beyond surface-level risks. Identifying hidden hazards involves scrutinizing systems for vulnerabilities. Improvements must account for both organizational processes and human decision-making, ensuring that overlooked risks don’t lead to future crises.

Examples

  • Apple's unforeseen supply chain issues during global disruptions highlighted the need to anticipate vulnerabilities.
  • Sports research revealed hidden risks of repeated head injuries, changing safety standards in athletics.
  • The annual development of flu vaccines highlights the complexity of predicting "unseen" viral mutations.

3. Building Reliable Systems in Layers

Systems fail. It’s not a question of if—they will. Griffith explains that reliability doesn’t eliminate failures but builds layered defenses to manage them effectively.

Reliable systems consist of three primary layers: barriers, redundancies, and recoveries. Barriers, like rules or safeguards, prevent initial failures. Redundancies provide additional layers of security, such as backup power supplies. Finally, recoveries mitigate the damage when the first two fail—think of airplane parachutes or system restores on computers.
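
To see how these layers compound, here is a minimal sketch in Python using made-up failure probabilities (not figures from the book): if the layers fail independently, a hazard gets through only when every layer fails at once.

    # Minimal sketch of defense-in-depth arithmetic.
    # Per-layer failure probabilities are illustrative, not from the book.

    def residual_risk(layer_failure_probs):
        """Probability a hazard slips past every layer,
        assuming the layers fail independently."""
        risk = 1.0
        for p in layer_failure_probs:
            risk *= p
        return risk

    layers = [
        0.05,  # barrier breached (rule ignored, safeguard bypassed)
        0.10,  # redundancy also fails (backup doesn't start)
        0.20,  # recovery misses it (damage control arrives too late)
    ]

    print(f"Residual risk: {residual_risk(layers):.4%}")  # 0.1000%

Three imperfect layers, none better than 95% reliable on its own, still stop 99.9% of hazards in this toy model.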

A tragic hospital incident exemplifies the stakes. A nurse turned off a cardiac monitor alarm while troubleshooting another issue. When the patient’s heart stopped, no alarm sounded. Layered solutions like auto-reboot features for monitors could’ve prevented this tragedy, demonstrating how systems must account for errors at every level.
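
An auto-reboot safeguard of the kind described here can be sketched as a silence window that expires on its own. The class name and two-minute window below are assumptions for illustration, not actual device logic.

    import time

    # Sketch of a recovery layer: silencing the alarm is temporary by
    # design, so a single human lapse cannot disable it indefinitely.
    # The 120-second window is an assumption, not a clinical standard.

    class MonitorAlarm:
        SILENCE_WINDOW_S = 120

        def __init__(self):
            self._silenced_at = None

        def silence(self):
            # Nurse silences the alarm while troubleshooting.
            self._silenced_at = time.monotonic()

        def is_armed(self):
            # The alarm re-arms itself once the window expires.
            if self._silenced_at is None:
                return True
            return time.monotonic() - self._silenced_at > self.SILENCE_WINDOW_S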

Examples

  • Physical barriers like airport fences reduce risks but require additional safeguards against human error.
  • Redundant airplane engines ensure flights continue even after one engine failure.
  • Recovery steps, such as emergency braking systems, prevent escalations when other defenses fail.

4. Human Errors: Why Mistakes Happen

Humans are fallible. One moment of distraction or overconfidence can lead to devastating outcomes. To improve reliability, Griffith argues, we must focus on the systems surrounding humans rather than solely blaming individuals.

Cognitive biases like overconfidence or mental shortcuts often contribute to mistakes. For example, a busy doctor missing critical details in a patient chart illustrates how systemic flaws—like time pressure—compound human error. Additionally, automation can lead to complacency, as seen when pilots overly rely on autopilot systems.

Griffith emphasizes strategies like checklists, alarms, and nudging behaviors to guide humans away from errors. By addressing the root causes—such as stress, distractions, or risky habits—humans can perform more reliably within better-designed systems.
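
As a toy illustration of such a guardrail, the sketch below blocks a procedure until every checklist item is confirmed; the items are invented for the example.

    # Toy forcing function: the procedure refuses to start until every
    # checklist item is explicitly confirmed. Items are illustrative.

    CHECKLIST = [
        "patient identity verified",
        "allergies reviewed",
        "consent form signed",
    ]

    def start_procedure(confirmed):
        missing = [item for item in CHECKLIST if item not in confirmed]
        if missing:
            raise RuntimeError(f"Blocked, checklist incomplete: {missing}")
        print("All items confirmed; proceeding.")

    start_procedure({"patient identity verified", "allergies reviewed",
                     "consent form signed"})  # runs only when complete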

Examples

  • Pilots neglecting manual flying protocols during turbulence because of over-reliance on autopilot.
  • Doctors skipping detailed patient histories, raising risks in surgery.
  • Urban designs that reduce driver distractions in heavy traffic, minimizing accidents.

5. Organizations as Living Systems

Organizations are ecosystems where systems, people, and culture constantly interact. Like any ecosystem, instability in one area can destabilize the whole.

Griffith highlights how unstable systems—like inconsistent workplace processes—erode employee performance and team cohesion. Layers of poor training and overloaded schedules can make even the best-intentioned employees falter. Moreover, Griffith emphasizes leadership's role in creating a clear mission while cultivating positive workplace cultures that reinforce reliability.

Culture drives behaviors, for better or worse. Toxic environments can cause negligence, while supportive setups encourage teamwork and safety. By observing these dynamics and aligning human and system efforts, organizations can adapt to changing conditions while maintaining high standards.

Examples

  • NASA's space shuttle disasters reveal the risks of conflicting priorities between safety and pressures like costs or deadlines.
  • Family units adapting to shifts in dynamics—like parenting multiple kids—mirror organizational shifts to external changes.
  • A sports team’s success often hinges on aligning leadership strategy and team morale.

6. Proactive Predictions Save Lives

Reactive thinking only tells you what went wrong after the fact; predictive strategies anticipate what might go wrong before it happens. Griffith provides tools, like probabilistic risk assessments, that identify future hazards and map ways to prevent them.

Consider the 2008 Los Angeles train collision. While human error seemed central, systemic issues like outdated safety controls played an overlooked role. Innovations like positive train control now prevent such collisions by automatically enforcing signals and speed limits.

Predictive modeling echoes weather forecasting: it calculates the probabilities of cascading failures, often by mapping system fault trees. Griffith urges collaboration with real-world operators to ensure preventive measures match operational reality.
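
A fault tree combines component failure probabilities through AND/OR gates. The sketch below shows the basic arithmetic with invented numbers and hypothetical components, not an analysis from the book.

    # Basic fault-tree arithmetic with illustrative probabilities.
    # AND gate: the event needs every input to fail (multiply,
    # assuming independence). OR gate: any one input failing suffices.

    def and_gate(*probs):
        result = 1.0
        for p in probs:
            result *= p
        return result

    def or_gate(*probs):
        none_fail = 1.0
        for p in probs:
            none_fail *= (1.0 - p)
        return 1.0 - none_fail

    # Top event: a train passes a red signal. It takes BOTH a human
    # error AND the automatic control failing on the same run.
    human_error = or_gate(0.001, 0.002)   # distraction OR fatigue
    control_fails = and_gate(0.01, 0.05)  # sensor AND backup both fail
    print(f"P(top event) = {and_gate(human_error, control_fails):.2e}")

Chaining such gates across a whole system is what lets analysts see which single component contributes most to overall risk.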

Examples

  • Weather models predicting hurricanes involve tracing atmospheric changes far from the storm.
  • Train control systems now prevent head-on collisions before engineers make mistakes.
  • Safety plans in utility industries reduce sensory overload for operators, cutting accidents.

7. Predictive Risk Requires Collaboration

Predictive reliability isn’t a solo endeavor; engaging stakeholders strengthens results. Griffith’s work emphasizes team engagement, from frontline workers to leadership, to find practical risk solutions.

When utility company drivers continued having accidents, involving employees in diagnosing systemic issues led to counterintuitive solutions—fewer alerts reduced distractions. Empowering teams creates realistic fixes.

Griffith argues organizations thrive by fostering dialogue between system architects, managers, and hands-on users. Without collaboration, solutions drift toward the theoretical and miss practical opportunities for implementation.

Examples

  • Utility staff input highlighted sensory overload, leading to pragmatic alarm reductions.
  • Metrolink’s capacity to prevent railroad accidents improved via employee-driven interface redesigns.
  • Advances in air traffic coordination tapped into collaborative expertise among controllers.

8. Resilience Is Strength amid Setbacks

Failures aren’t total defeats—they reveal areas to grow and adapt. Resilience defines how swiftly a system rebounds, ready for tomorrow’s challenges.

For example, airline innovations following disasters enhanced safety protocols worldwide. Griffith suggests that proactive changes act like organizational "immune systems," preparing a system to adapt to risks not yet known.

Routines must accommodate disruptions—whether in policymaking, engineering, or personal behavior—to recover swiftly. Resilience isn’t simply bouncing back; it’s bouncing forward stronger.

Examples

  • Airline safety improved dramatically after crashes, as investigations fed adaptive changes back into worldwide protocols.
  • Post-earthquake recovery plans gave communities a second layer of safeguards.
  • Redundancy in software systems keeps services stable even when individual components fail.

9. Future Risks Are Preventable Today

By identifying risks early, organizations can avoid tomorrow's catastrophes. Griffith stresses studying patterns of success for hidden warning signs, shifting the focus from waiting for disaster to preventing it.

Predictive observation is invaluable. Any sphere—climate change, automotive safety, public health—benefits from preemptive interventions rather than post-mortem reflection.

Griffith concludes that society must move beyond outdated, reactive systems: detecting risks early and acting proactively saves lives and keeps infrastructure ready for the future.

Examples

  • Insurers increasingly rely on transparent, behavior-based risk assessments to anticipate catastrophes rather than simply absorbing losses after the fact.

Takeaways

  1. Adopt the "iceberg" perspective—look beyond visible risks and search for hidden dangers that could affect outcomes.
  2. Layer systems with barriers, redundancies, and recoveries so failures are caught before they magnify.
  3. Involve teams and employees thoroughly when designing predictive solutions, ensuring practicality and buy-in.
