October 2025
I recently had the privilege of contributing to the Nautical Institute (NI) Global Conference 2025 in Istanbul, Türkiye. My presentation explored a question that continues to challenge our industry: does the integration of Artificial Intelligence (AI) in Dynamic Positioning (DP) operations truly reduce human error – or are we creating new dependencies that may erode competence in the long run?
The purpose of sharing these reflections is not only to highlight the tremendous opportunities AI offers, but also to spark meaningful dialogue about the risks, responsibilities, and training imperatives that come with it. The discussions in Istanbul reaffirmed that this is far more than a technical issue. It is also a human, organizational, and ethical challenge that demands new perspectives and proactive solutions.
AI is steadily reshaping the maritime industry, particularly in DP operations. Its promise is compelling: safer seas, fewer human errors, and optimized efficiency. In DP, precision is critical; small deviations can quickly escalate, endangering lives, assets, and the environment.
AI-supported systems bring undeniable advantages:
Faster and more accurate decision-making
Improved stability and positioning efficiency
Reduction of certain categories of human error through automation
Like autopilot in aviation, AI is reassuring: it calculates optimal responses in milliseconds and supports operators in complex, high-pressure environments. As offshore operations grow in scale and complexity, the industry’s appetite for AI is understandable. Yet beneath this optimism lies an uncomfortable paradox.
Automation can prevent mistakes, but it can also dull the very skills needed when systems fail. As operators transition from active controllers to passive supervisors, situational awareness may decline. When failure occurs, confidence and practiced decision-making may not be there when most needed.
Human error remains the leading cause of maritime incidents. Replacing judgment with algorithms risks trading one form of error for another: from misjudgment to unpreparedness.
Parallels are found across industries:
Aviation – over-reliance on autopilot has reduced pilots’ readiness for manual recovery.
Medicine – robotic-assisted surgery still requires surgeons able to intervene when automation falters.
Automotive – self-driving systems can lull drivers into passivity, delaying reactions to unexpected hazards.
Maritime operations are no different. A false sense of security in AI systems could prove as dangerous as the human errors those systems were designed to eliminate.
The goal is not to resist AI, but to reshape how we train. Traditional DP training often focuses on predictable, stable scenarios. The next generation must emphasize resilience, adaptability, and critical thinking under uncertainty.
Key elements include:
In-depth simulation of degraded modes and unstable conditions.
Scenario-based drills that keep cognitive skills sharp when automation fails.
Human-in-the-loop integration, encouraging operators to question AI outputs and remain decision-makers (a minimal sketch of this pattern follows the list).
Human-factors awareness, addressing automation bias and over-trust.
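To make the human-in-the-loop element concrete, here is a minimal Python sketch of the confirmation pattern it implies. Everything in it is an illustrative assumption rather than any real DP system’s API: a hypothetical advisory function proposes a thruster setpoint, but nothing is applied until the operator explicitly accepts it, and a rejected proposal falls back to the operator’s manual setpoint.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Suggestion:
    """An advisory setpoint from the AI, held for operator review."""
    thrust_pct: float
    rationale: str  # shown to the operator so the output can be questioned


def ai_propose(drift_m: float) -> Suggestion:
    # Hypothetical advisory model (illustrative numbers only): scale
    # corrective thrust with the measured drift, capped at full thrust.
    return Suggestion(
        thrust_pct=min(100.0, drift_m * 20.0),
        rationale=f"counteract {drift_m:.1f} m drift off station",
    )


def operator_decision(suggestion: Suggestion, accept: bool,
                      manual_pct: Optional[float] = None) -> float:
    # The operator remains the decision-maker: the AI output is only a
    # proposal, never a command.
    if accept:
        return suggestion.thrust_pct
    if manual_pct is None:
        raise ValueError("a rejected proposal requires a manual setpoint")
    return manual_pct


# Drill usage: the operator questions the proposal and overrides it.
proposal = ai_propose(drift_m=2.5)
print(f"AI proposes {proposal.thrust_pct:.0f}% thrust to {proposal.rationale}")
applied = operator_decision(proposal, accept=False, manual_pct=35.0)
print(f"Applied thrust: {applied:.0f}%")
```

The detail that matters here is not the arithmetic but the shape of the interaction: acceptance is an explicit act, never a default, so the operator’s judgment stays in the loop whether the proposal is taken or overridden.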
Our philosophy is simple: AI must reduce error without reducing competence. Operators should not only understand the system. They must also trust themselves to take control when it truly counts.
Beyond skill and technology, accountability becomes a central question. When AI makes a wrong call, who is responsible: the programmer, the certifier, or the operator? While regulators and companies continue to shape compliance frameworks, removing human accountability cannot be the answer.
If anything, AI heightens the need for competence: operators must stay vigilant, analytical, and ready to intervene. Training, therefore, must go beyond compliance checklists. It must cultivate adaptability, ethical awareness, and the courage to challenge automation when something feels wrong.
AI is not the enemy of maritime safety – it is a powerful ally. But technology alone does not create resilience; people do. True safety lies in the synergy between AI capability and human expertise.
The future of DP is not AI versus human, but AI plus human.
By investing in training that balances innovation with human judgment, we build professionals who are both technically skilled and mentally agile – ready to respond when technology falters.
The paradox of AI in DP is not a reason for hesitation. It is a call to action. By confronting dependency risks openly and keeping humans at the center of operations, we can ensure that innovation fulfills its promise: safer seas, stronger decisions, and more resilient crews.
© Maersk Training