A pre‑market shuffle no one noticed
Just after dawn in London, a tier‑one bank’s treasury bot scans the liquidity position, matches it to day‑ahead cash‑flow forecasts and silently sweeps £210 million from a low‑yield reserve account into the overnight repo market. A risk notice and rationale land in the human treasurer’s inbox minutes later, but the decision, trade execution and journal entry were all performed end‑to‑end by software. Similar scenes are playing out in credit‑card pricing teams in Singapore and fraud‑monitoring squads in New York. The common actor behind them is agentic AI: autonomous, goal‑seeking systems that perceive, reason, act and, crucially, learn inside production environments.
Only eighteen months ago the conversation was still about prompt engineering. Today, the frontier has shifted to whether banks can allow software “colleagues” to run unsupervised for minutes, hours or even days. The answer is already a tentative yes, and the implications (commercial, operational and regulatory) are profound.
From generative to agentic: adding the feedback loop
Generative AI dazzled executives by spinning up text, code and images; yet it still required a human hand on the prompt. Agentic AI bolts a continuous feedback loop onto that capacity. The agent ingests streaming data, evaluates it against a set of objectives and constraints, decides on an action, executes the action via APIs or internal systems and then watches the outcome to refine its future policy. The World Economic Forum has described this closed‑loop autonomy as finance’s next step toward “process self‑governance.” It is not artificial general intelligence; each agent remains narrow. But when hundreds operate in concert, the functional perimeter of the firm begins to change.
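The perceive‑decide‑act‑learn loop described above can be sketched in a few lines. Everything below is a toy illustration: the buffer target, learning rate and the sweep scenario are invented, not drawn from any bank's system.

```python
from dataclasses import dataclass

@dataclass
class SweepAgent:
    """Toy liquidity-sweep agent showing the closed feedback loop:
    perceive a cash balance, decide an action under a constraint,
    then adjust its own policy from the observed outcome."""
    target_buffer: float = 100.0   # minimum cash to hold back (illustrative units)
    learning_rate: float = 0.1     # how quickly the buffer adapts to outcomes

    def decide(self, balance: float) -> float:
        # Action: sweep any cash above the buffer into the overnight market.
        return max(0.0, balance - self.target_buffer)

    def learn(self, shortfall: float) -> None:
        # Outcome feedback: if the day ended short of cash, raise the
        # buffer for next time; a surplus (negative shortfall) lowers it.
        self.target_buffer += self.learning_rate * shortfall

agent = SweepAgent()
swept = agent.decide(balance=250.0)   # sweeps the 150.0 above the buffer
agent.learn(shortfall=20.0)           # day ended 20 short, so the buffer rises
```

The point of the sketch is the last line: unlike a generative model, the agent's next decision is shaped by the consequence of its previous one.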
Why finance?
Financial services deliver the very things agentic systems feed on: dense, structured data; repeatable, rule‑bound workflows; and markets that punish latency. The industry also suffers from margin compression: cost‑to‑income ratios at many global banks still sit uncomfortably above 50 percent, which makes even small efficiency gains valuable. Finally, customers transact around the clock. A human trader may pause for sleep; an agent never does.
Those conditions explain why banks, asset managers and insurers have become early laboratories. The work rarely makes front‑page news, but internal programme names such as Quest IndexGPT, Eliza and the GPT Store are already part of everyday Slack chatter in the institutions that run them.
Quest IndexGPT: a data scientist that never clocks out
J.P. Morgan’s asset‑management arm went live with IndexGPT in May 2024. The tool asks an LLM to generate keywords around an investment theme, for example, “circular economy” or “quantum‑safe cybersecurity,” then pipes the list into a separate NLP engine that trawls filings and news, scores corporate exposure and re‑balances a real index. Human portfolio managers still sign off, but the cognitive lift has shifted to silicon. The bank touts faster time‑to‑market for bespoke thematic baskets and lower running costs for small, long‑tail indices that would have been uneconomic under a fully human process.
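The two‑stage shape of that pipeline (an LLM proposes theme keywords, a separate engine scores exposure and rebalances) can be sketched as below. The keyword list, filings corpus and equal‑weight scheme are invented placeholders, not IndexGPT's actual method.

```python
def propose_keywords(theme: str) -> list[str]:
    # Stand-in for the LLM call; a real system would query a model here.
    canned = {"circular economy": ["recycling", "reuse", "remanufacturing"]}
    return canned.get(theme, [theme])

def score_exposure(filings: dict[str, str], keywords: list[str]) -> dict[str, int]:
    # Crude keyword-count score over each company's filings text.
    return {co: sum(text.lower().count(k) for k in keywords)
            for co, text in filings.items()}

def rebalance(scores: dict[str, int], top_n: int = 2) -> dict[str, float]:
    # Equal-weight the top-scoring names; human sign-off sits after this step.
    picks = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return {co: 1.0 / len(picks) for co in picks}

filings = {"AcmeRecycle": "recycling and reuse leader in recycling",
           "OilCo": "crude oil producer",
           "LoopWorks": "reuse and remanufacturing services"}
weights = rebalance(score_exposure(filings, propose_keywords("circular economy")))
```

Separating the generative step (keywords) from the deterministic one (scoring and weighting) is what keeps the output auditable: only the first stage is probabilistic.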
BBVA’s GPT Store: autonomy through crowdsourcing
Spain’s BBVA took a different route. In late 2024 it rolled out an internal GPT Store where any employee could publish an approved agent and any colleague could reuse it. Within four months the store held roughly 3,000 micro‑agents handling tasks from legal‑query triage to sentiment analysis of call‑centre transcripts. Licence utilisation among the initial user base exceeded 80 percent, a figure that astonished even the bank’s AI leadership team. The lesson: once knowledge‑workers taste autonomy at the task level, adoption can snowball without top‑down mandates.
BNY Mellon’s Project Eliza: custodianship meets ChatGPT
February 2025 brought another milestone when custody giant BNY Mellon announced a partnership with OpenAI to co‑develop Eliza, a proprietary agentic platform set to underpin every product line from securities services to payments. BNY’s chief information officer framed the move bluntly: “We no longer think of AI as a bolt‑on. It is the operating system of the bank.” The firm’s roadmap calls for thousands of self‑service agents, each governed by a central risk office but deployed and iterated by business users.
2025: the “year of the agent”
If these early adopters feel niche, consider the macro trend. Executives polled at the Reuters NEXT summit predict that autonomous agents will dominate the AI agenda in 2025, shifting boardroom metrics from top‑line growth to margin expansion as tasks that once consumed hours of analyst labour collapse to seconds of compute. Venture capital is shifting likewise: deal memos now obsess over agent‑centric architectures rather than foundation‑model bragging rights.
Goldman Sachs analysts go further. In a March 2025 note they argue that tomorrow’s infrastructure super‑cycle (billions in cloud capex) “hinges on AI agents” able to keep data‑centre utilisation above 90 percent and dynamically arbitrage compute across regions. In other words, the financial logic of the cloud itself may soon depend on autonomous software agents.
Regulators step onto the pitch
Supervisors are anything but idle observers. The Monetary Authority of Singapore (MAS) completed a thematic review of AI model‑risk controls in mid‑2024 and published a 28‑page information paper outlining expectations for generative and agentic systems, from data‑lineage tracking to kill‑switch design. A follow‑up circular on cyber‑risks highlighted the prospect of “malicious prompt injection” that could redirect an agent’s objectives without breaching the underlying model, an attack vector far subtler than SQL injection yet potentially as damaging.
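One mitigation for that objective‑redirection attack is to let an agent pursue only objectives whose exact text was pre‑approved and registered by the risk office, so an injected rewrite of the goal fails validation before any action fires. The allow‑list entries and the hash check below are a hypothetical illustration, not language from the MAS circular.

```python
import hashlib

# Hypothetical control: only hash-registered objective strings may run.
# A prompt-injected variant ("...and also wire funds to X") hashes
# differently and is rejected before the agent can act on it.
APPROVED = {hashlib.sha256(o.encode()).hexdigest() for o in [
    "minimise overnight funding cost within liquidity limits",
    "flag card transactions with fraud score above threshold",
]}

def objective_allowed(objective: str) -> bool:
    return hashlib.sha256(objective.encode()).hexdigest() in APPROVED

allowed = objective_allowed("minimise overnight funding cost within liquidity limits")
blocked = objective_allowed("wire all funds to account X")
```

The check is deliberately outside the model: because the attack redirects the objective without breaching the model itself, the defence must also live outside it.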
Across the Channel, the EU’s AI Act put financial applications into its “high‑risk” tier, demanding technical documentation, human oversight and post‑market monitoring. Critics warn the Act’s product‑safety framing will age badly as agents evolve, but for now compliance officers must treat every credit‑scoring or robo‑advice agent as though it were a medical device. The Bank of England’s latest industry survey puts data protection, model explainability and talent shortages at the top of banks’ AI pain points, a hierarchy that neatly mirrors the MAS findings.
Governance moves from slides to source‑code
For years “human‑in‑the‑loop” was the comfort blanket of AI risk frameworks; agents force a harder conversation. The emerging consensus looks like this: the board sets an agent charter linked to enterprise risk appetite; a central AI risk unit validates models, red‑teams behaviours and signs off on every new objective function; immutable logs feed a real‑time dashboard monitored by operations staff authorised to pull the plug. Crucially, the fail‑safe is coded as a circuit‑breaker on specific metrics (market value‑at‑risk spikes, unexplained model drift) rather than a generic panic button. MAS explicitly endorses such role‑based, telemetry‑driven controls in its guidance.
Talent wars: less data scientist, more AI ops engineer
Autonomy also rewrites job descriptions. BBVA doubled its dedicated AI headcount to more than 400 in 2024 and opened “AI Factories” in Mexico and Turkey. The fastest‑growing title in this space is neither quant nor prompt engineer but “AI operations officer”: professionals fluent in Basel III liquidity ratios as well as retrieval‑augmented generation pipelines. These hybrid operatives babysit swarms of agents, write policy tests and negotiate with regulators. Banks that fail to cultivate such talent risk strangling innovation in second‑line approvals; those that succeed gain an engine of perpetual experimentation.
Competitive lines are redrawn
Who is best placed to own the agentic future? The card schemes start with global authentication rails and data on billions of transactions; add decision autonomy and they could morph into de‑facto credit‑decision utilities. Hyperscalers control the foundation‑model stack and sell agent orchestration as a service; the danger for banks is dependency on computational landlords with competing retail ambitions. Incumbent banks retain the balance‑sheet licences and decades of labelled data, but they must move fast, often by taking equity stakes in start‑ups that supply agentic middleware.
The Path Ahead
By year‑end 2025, analysts expect agentic AI to run perhaps five percent of intraday liquidity buffers at global systemically important banks. The first supervisory stress‑tests that explicitly model agent failure channels are pencilled in for 2026. And by 2027, at least one advanced economy may permit autonomous underwriting for retail loans, matched by a new “algorithmic accountability” statute somewhere in Asia. Whether these dates slip or accelerate, the trajectory points in only one direction: deeper machine agency over the financial stack.
“Trust, but verify”
Agentic AI is not just another incremental efficiency play; it is a delegation of decision rights that strikes at the orthodox structure of a bank. The upside is extraordinary: 24‑hour trading desks that never tire; personalised offers generated and risk‑priced in real time; operating ratios that finally look more like fintech than legacy finance. The downside, should governance fail, is systemic: black‑box trades, cascading model drift, concentration risk in a handful of foundation models.
The pragmatic path is already visible in the institutions leading the charge: small, revenue‑bearing pilots like IndexGPT; crowd‑sourced innovation sandboxes like BBVA’s GPT Store; platform‑level commitments like BNY’s Eliza; and regulator‑aligned guard‑rails that make every agent auditable by design. Banks that treat agents as genuine colleagues, complete with job descriptions, performance reviews and the occasional disciplinary memo, will convert the promise of autonomy into sustainable competitive advantage. Those that do not will discover, perhaps abruptly, that trust without verification is simply abdication.
Either way, when the markets open tomorrow morning, an invisible software co‑worker will already have taken the first trade. It is time the rest of us caught up.