AI in Safety: The strategic questions safety executives should be asking

Aaron Neilson

May 11, 2026

Thinking back to our recent event in Melbourne, The Next Step & The Safe Step CEO Aaron Neilson details some key considerations for HSE decision makers and leaders.

A career in safety recruitment gives you a specific view of what separates high-performing safety functions from those that plateau. Capability matters, of course. But the deeper differentiator is almost always whether the safety leadership team has the access, the language, and the confidence to drive strategic decisions at the board and executive level.
AI is now one of those decisions. Our Human-Led, AI-Enabled event in Melbourne in April, with senior leaders from Qantas, Coles, and Accenture, sharpened my view of where the leading edge is, and where most organisations still have ground to cover.


The 'reasonably practicable' question

Mark Lipman made a point that has direct implications for every board in Australia. When you consider the volume of safety data sitting in organisational systems (like incident reports, fatigue, rostering, operational logs), applying AI to surface that data faster and more reliably is increasingly arguable as a 'reasonably practicable' obligation, not simply an efficiency gain.

"If you're a director thinking about the safety data in your system, it is reasonably practicable to put an AI over that data and surface information much quicker, giving you much more and deeper insight."

- Mark Lipman, Qantas

For safety executives making the case internally, that framing is a useful entry point. For boards carrying safety governance obligations, it deserves a direct conversation.


Where AI adds value in safety

Qantas applies AI to 130,000 annual safety reports to surface trends, identify patterns, and accelerate the move from signal to action. Trend identification, category clustering, cross-period comparison: these are functions AI handles well, and they improve the quality of safety decision-making at the senior level.

Computer vision in high-risk physical environments offers a similar model. The technology monitors, flags, and records. The experienced safety professional contextualises and acts. That division keeps the system responsive while protecting human judgment at the decision points that matter.


Where precision matters most

Mark cited research showing AI is least reliable under conditions of high urgency and novel risk, which are precisely the scenarios where safety controls are most critical. The implication is straightforward: be deliberate about where AI sits within your safety management system and govern those boundaries explicitly.

Qantas's governance position on employment decisions is a useful model: generative AI will not be used to make or assist in making those decisions, full stop. The parallel for safety is ensuring that the boundaries around AI in safety-critical decision-making are just as clearly stated and consistently applied.


A note on data sovereignty

Worth noting for safety-sensitive organisations: Qantas's model over safety reports is bespoke rather than a commercial LLM. They wanted the model supervised and specific, with no safety data flowing into external training sets. For organisations operating in regulated environments with sensitive incident data, this is a governance decision that warrants deliberate consideration before any deployment.


The capability profile is shifting

As AI takes on more of the data processing and administrative burden in safety functions, the premium shifts toward contextual judgment, stakeholder influence, coaching, and the ability to translate safety insight into executive and board-level decisions. Technical safety skills remain essential, but the safety leaders with the most impact will be those who can operate across both the technical and strategic domains.

Building that capability requires a deliberate workforce strategy, not just a training plan. At The Safe Step, it's the question we're working through with safety leaders across the country: what does the capability profile of a high-performing safety function look like in an AI-augmented environment, and how do you build toward it?


Aaron Neilson is CEO of The Next Group and a founding partner of The Strategic Step Advisory. If you'd like to continue this conversation, reach out directly.



