Preventing AI Pitfalls in Financial Decision-Making

Artificial Intelligence (AI) continues to dominate the fintech industry in 2025, with firms seeking innovative ways to integrate it into their operations. As this technology evolves, understanding the risks and challenges surrounding AI in financial decision-making becomes increasingly vital, according to The Fintech Times. While AI’s potential is immense, experts caution against relying on it without proper testing and oversight.
Mohamed Elgendy, co-founder and CEO of AI testing firm Kolena, underscores the danger of AI failures in financial decisions, noting that they can lead to serious consequences, ranging from faulty loan approvals to market-impacting algorithmic trading errors. He advocates for rigorous AI testing, emphasizing that organizations need to develop systematic frameworks to evaluate AI performance across various scenarios. “The solution isn’t to use AI less, but rather to test it more rigorously,” Elgendy says. He stresses that continuous validation, comprehensive testing, and human oversight are essential to avoid failures that could affect customers’ financial outcomes.
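To make the idea of a systematic evaluation framework concrete, here is a minimal sketch of scenario-based testing for a decision model. Everything in it is illustrative: the toy `approve_loan` model, the scenario set, and the thresholds are assumptions for demonstration, not a real framework described by Elgendy or Kolena.

```python
# Hypothetical sketch: scenario-based evaluation of a loan-approval model.
# The model, scenarios, and thresholds are illustrative assumptions only.

def approve_loan(income, debt, credit_score):
    """Toy stand-in for a production approval model."""
    dti = debt / income if income > 0 else float("inf")
    return credit_score >= 640 and dti <= 0.43

# Each scenario pairs inputs with the outcome a human reviewer expects.
SCENARIOS = [
    {"name": "strong applicant", "input": (90_000, 10_000, 750), "expect": True},
    {"name": "thin credit file", "input": (60_000, 5_000, 600), "expect": False},
    {"name": "high debt load",   "input": (50_000, 30_000, 720), "expect": False},
    {"name": "zero income edge", "input": (0, 0, 800),           "expect": False},
]

def run_scenarios(model, scenarios):
    """Compare model output to the expected outcome for every scenario."""
    results = []
    for s in scenarios:
        passed = model(*s["input"]) == s["expect"]
        results.append((s["name"], passed))
    return results

if __name__ == "__main__":
    for name, ok in run_scenarios(approve_loan, SCENARIOS):
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
```

The point of the pattern is that the scenario suite, not the model, encodes the institution’s expectations, so edge cases such as zero income are checked on every model revision rather than discovered in production.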
Adam Ennamli, Chief Risk and Security Officer at General Bank of Canada, shares a similar view, highlighting the existential risks posed by AI failures, such as financial losses, market distrust, and regulatory penalties. “When AI tells you what you want to hear, you tend to ‘forget’ or at least minimise the risks that come with automation dependence,” Ennamli warns. He advises financial institutions to maintain flexibility in automated systems while ensuring proper human oversight, particularly in complex situations where human judgment remains essential.
Satayan Mahajan, CEO of Datalign Advisory, stresses that AI’s vast potential in the financial sector requires equally robust preparations. Drawing on past examples, such as the 2010 Flash Crash and the 2019 Apple credit card algorithm controversy, Mahajan emphasizes the need for responsible AI development. “Failures in the financial industry are expensive and generate low trust with consumers,” he says. In his view, AI’s rapid advancement demands a matching leap in compliance, risk management, and institutional investment.
Michael Gilfix, Chief Product and Engineering Officer at KX, argues that to successfully apply AI in financial decision-making, firms must implement strong monitoring processes. He explains that robust monitoring detects algorithm drift or bias, triggering necessary recalibrations to maintain AI performance. Gilfix also suggests that firms must decide whether they want AI to automate decisions or merely offer recommendations to human decision-makers. This flexibility ensures that AI is a tool within a broader strategy for improving business outcomes.
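One common way to operationalize the drift monitoring Gilfix describes is the Population Stability Index (PSI), which compares the distribution of a live input against its training baseline. The sketch below is an assumption-laden illustration: the bin edges, the sample data, and the 0.2 alert threshold are conventional rule-of-thumb choices, not values taken from the article or from KX.

```python
# Hypothetical sketch: input-drift monitoring with the Population
# Stability Index (PSI). Bin edges, sample data, and the 0.2 alert
# threshold are illustrative assumptions, not values from the article.
import math

def psi(expected, actual, bins):
    """PSI between a baseline sample and a live sample over shared bin edges."""
    def shares(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = max(sum(counts), 1)
        # Floor each share to avoid log(0) on empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Credit scores seen at training time vs. a drifted live sample.
baseline = [620, 650, 680, 700, 720, 740, 760, 780, 700, 690]
live = [540, 560, 580, 590, 600, 610, 620, 630, 580, 570]
edges = [500, 600, 650, 700, 750, 851]

score = psi(baseline, live, edges)
# A common rule of thumb: PSI above 0.2 signals drift worth recalibrating for.
print(f"PSI={score:.2f}, recalibrate={score > 0.2}")
```

A monitor like this runs continuously against production traffic and raises the recalibration flag Gilfix mentions before degraded inputs translate into degraded decisions.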
Finally, Jay Zigmont, PhD, CFP, founder of Childfree Wealth, reflects on the importance of quality assurance in AI. He points out that human advisors make mistakes daily, and it is essential to ensure that AI undergoes the same rigorous scrutiny. “AI is only as good as its programming, training, and quality assurance,” he concludes, raising the question of whether humans would perform better if held to the same quality assurance standards.
While AI offers tremendous opportunities in financial decision-making, experts emphasize that firms must prioritize testing, oversight, and continuous monitoring to mitigate the risks associated with its deployment. The balance between leveraging AI’s capabilities and maintaining human judgment is key to navigating the complexities of AI in finance.