At the Communify Intelligence Experience, Mark Bordeaux of NTT Data highlighted the operational gains possible from agentic and generative AI in wealth management while urging careful, selective deployment to protect client privacy and meet rising regulatory expectations.
Key takeaways
- AI can drive operational efficiency when applied selectively.
- Protecting client data is a precondition for generative AI use.
- Embed regulatory requirements into design and break work into discrete, value-focused steps.
What happened
Speaking at the Communify Intelligence Experience, Mark Bordeaux, Client Executive at NTT Data, discussed how agentic and generative AI are beginning to deliver operational efficiency gains across wealth management workflows. He cautioned that generative models carry specific risks around client data and therefore demand careful, tightly scoped deployment.
Bordeaux offered practical guidance for innovation under tightening regulation, advising firms to understand the impacts of each step in a process and to align design decisions with compliance needs.
Why it matters
Wealth managers face a trade-off between efficiency and risk: AI can automate and streamline tasks, but misuse of generative models can expose sensitive client information and invite regulatory scrutiny. Striking the right balance determines whether AI is value-creating or risk-amplifying.
Embedding regulatory constraints into product and process design, and decomposing projects into discrete, value-added elements, reduces deployment risk and helps firms move decisively rather than being stalled by compliance uncertainty.
What to watch next
Expect firms to prioritise selective pilots that limit client-data exposure, adopt stronger data controls, and conduct process-level impact analysis before wider rollouts. Watch how design-led approaches to compliance, where rules are translated into implementation constraints, affect time-to-market and operational outcomes.
Wealth managers should also track the evolving regulatory backdrop and emerging industry practices for testing and governance, so they can scale AI pragmatically and assertively while keeping client protections intact.