AI in Safety: The strategic questions safety executives should be asking
Published: May 11, 2026
Thinking back to our recent event in Melbourne, The Next Step & The Safe Step CEO Aaron Neilson details some key considerations for HSE decision makers and leaders.

A career in safety recruitment gives you a specific view of what separates high-performing safety functions from those that plateau. Capability matters, of course. But the deeper differentiator is almost always whether the safety leadership team has the access, the language, and the confidence to drive strategic decisions at the board and executive level.
AI is now one of those decisions. Our Human-Led, AI-Enabled event in Melbourne in April, with senior leaders from Qantas, Coles, and Accenture, sharpened my view of where the leading edge is, and where most organisations still have ground to cover.
The 'reasonably practicable' question
Mark Lipman made a point that has direct implications for every board in Australia. Given the volume of safety data sitting in organisational systems (incident reports, fatigue records, rostering, operational logs), it is increasingly arguable that applying AI to surface that data faster and more reliably is a 'reasonably practicable' obligation, not simply an efficiency gain.
For safety executives making the case internally, that framing is a useful entry point. For boards carrying safety governance obligations, it deserves a direct conversation.
Where AI adds value in safety
Qantas applies AI across its 130,000 annual safety reports to surface trends, identify patterns, and accelerate the move from signal to action. Trend identification, category clustering, cross-period comparison: these are functions AI handles well, and they improve the quality of safety decision-making at the senior level.
Computer vision in high-risk physical environments offers a similar model. The technology monitors, flags, and records. The experienced safety professional contextualises and acts. That division keeps the system responsive while protecting human judgment at the decision points that matter.
Where precision matters most
Mark cited research showing AI is least reliable under conditions of high urgency and novel risk, which are precisely the scenarios where safety controls are most critical. The implication is straightforward: be deliberate about where AI sits within your safety management system and govern those boundaries explicitly.
Qantas's governance position on employment decisions is a useful model: generative AI will not be used to make or assist in making those decisions, full stop. The parallel for safety is ensuring that the boundaries around AI in safety-critical decision-making are just as clearly stated and consistently applied.
A note on data sovereignty
Worth noting for safety-sensitive organisations: the model Qantas runs over its safety reports is bespoke rather than a commercial LLM. They wanted the model supervised and specific, with no safety data flowing into external training sets. For organisations operating in regulated environments with sensitive incident data, this is a governance decision that warrants deliberate consideration before any deployment.
The capability profile is shifting
As AI takes on more of the data processing and administrative burden in safety functions, the premium shifts toward contextual judgment, stakeholder influence, coaching, and the ability to translate safety insight into executive and board-level decisions. Technical safety skills remain essential, but the safety leaders with the most impact will be those who can operate across both the technical and strategic domains.
Building that capability requires a deliberate workforce strategy, not just a training plan. At The Safe Step, it's the question we're working through with safety leaders across the country: what does the capability profile of a high-performing safety function look like in an AI-augmented environment, and how do you build toward it?
Aaron Neilson is CEO of The Next Group and a founding partner of The Strategic Step Advisory. If you'd like to continue this conversation, reach out directly.