Your New AI Financial Analyst (Weekly Roundup)
Faulty financial AI avatars, Grok on Azure, and more.
Good morning! This week’s roundup covers UBS’s AI-extended experts, Microsoft’s offer of LLMs on a “service parity” basis, Microsoft’s NLWeb protocol for bringing structure to an AI-centric internet, and clues to the Republican AI agenda.
Your banking expert is now represented by an LLM avatar. Zurich-based bank UBS has taken a bold step into the world of AI-generated content by transforming its analysts into digital avatars, using OpenAI and Synthesia models to create AI-generated scripts and lifelike videos based on its banking experts. The AI-generated promotional video for the technology claims that these videos take 30-40% less “effort” than going into the studio. There are plenty of extremely powerful use cases for this technology, but customers have generally been resistant to interacting with AI-generated videos (see: the debate in the film industry).
Another small problem with this is that AI is not known to be the best at financial advice (see LLM performance on FinanceBench). LLMs are improving in this field, but research shows them to be biased (recommending less risky investments to women) and overconfident in their answers. Then again, maybe that’s why it’s a perfect fit for the industry(?!)
Source: UBS via the Financial Times

While general-purpose LLMs might fail to accurately answer financial questions (making them ill-suited to client-facing information tools), many analysts already use specialized AI models to identify trends, and top banks have been experimenting with using them to replace entry-level bankers doing grunt work like writing legal documents and making PowerPoints. Rogo and Magnifi use AI to advise customers on investment strategies and automatically move investments to maximize returns. This is one of many examples of AI threatening white-collar jobs, although we maintain that models should be suited to the tasks they’re being used for. But how to tell? Experimentation in a highly regulated industry such as banking is risky, but it should be applauded when evaluated carefully. Post-deployment monitoring of any regulatory sandboxes is vital to ensure that customers are not being taken advantage of by such new technology, or provided with bad or harmful advice. After all, it wasn’t too long ago that Charles Schwab’s robo-advisors were giving customers bad advice to benefit the company’s own bottom line.
Microsoft aims for “service parity”. In a significant shift in the AI landscape, Microsoft announced Monday that it will offer Elon Musk’s xAI models to its cloud customers with “service parity” to OpenAI’s products. The move signals Microsoft’s cooling relationship with OpenAI, despite Microsoft having invested more than $13 billion in the ChatGPT maker since 2019. Microsoft’s Eric Boyd put it plainly: “We don’t have a strong opinion about which model customers use. We [just] want them to use Azure.” This follows a series of moves by Microsoft to include first- and third-party models in its products. It raises a question, though: were — or are — some AI models on Azure not offered to customers with “service parity” relative to OpenAI’s models? And what does that “parity” look like in practice?
Microsoft is increasingly positioning itself as a neutral platform rather than being wedded exclusively to OpenAI, creating competitive pressure that may benefit customers. Microsoft CEO Satya Nadella’s belief that leading models will be “commoditized” is taking concrete form — the company will now rank models on the cloud to help customers choose “top performing” options for specific tasks. With over 1,900 models to choose from on Microsoft’s cloud platform alone, it seems less likely that any one model developer will be the sole winner — and Microsoft is changing strategy accordingly.
Let me speak to your agent.¹ Microsoft has unveiled NLWeb: “designed to simplify the creation of natural language interfaces for websites — making it easy to turn any site into an AI-powered app”. The GitHub repo is up. The goal appears to be a decentralized or “distributed” challenger to OpenAI’s more centralized model of data access and usage: a protocol-centric ecosystem similar to the ones that powered the internet (“HTML for the agentic web”).
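To make the idea concrete, here is a minimal sketch of what querying an NLWeb-enabled site might look like from an agent’s side. The endpoint path (/ask), the query parameter, the placeholder site URL, and the response shape are our illustrative assumptions, not the published spec; see the NLWeb repo for the actual protocol.

```python
# Hypothetical sketch of an agent querying an NLWeb-enabled site.
# The /ask endpoint, "query" parameter, and response shape are
# assumptions for illustration, not NLWeb's documented interface.
import json
import urllib.parse
import urllib.request

SITE = "https://example-publisher.com"  # placeholder NLWeb-enabled site


def ask(question: str) -> list[dict]:
    """Send a natural-language question; return schema.org-typed items."""
    url = f"{SITE}/ask?" + urllib.parse.urlencode({"query": question})
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # NLWeb is built around schema.org, so results should carry
    # familiar types such as Article, Recipe, or Product.
    return payload.get("results", [])


if __name__ == "__main__":
    for item in ask("long reads about AI regulation"):
        print(item.get("@type"), "-", item.get("name"))
```

The point is less the exact call shape than the architecture: if every participating site speaks a common natural-language protocol, any agent can query any site without bespoke integrations, much as any browser can render any HTML page.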
NLWeb lets publishers add chat-style search and agent-friendly interfaces on top of their own content. Every NLWeb instance also doubles as a Model Context Protocol (MCP) server, making site content discoverable and accessible to cooperating agents across the emerging “agentic web.” This comes as OpenAI appears to be testing MCP support for ChatGPT and Microsoft announces “broad first-party support for Model Context Protocol (MCP) across its agent platform and frameworks”. Standard protocols will be needed for new AI agents to communicate effectively with one another, and companies are eager to get in on the ground floor of standard setting. It’s exciting to see support for a more distributed power structure. We hope these protocols can create the foundation for future attribution and compensation mechanisms in the emerging AI-centric internet.

No state-level AI regulation. Tucked away in the unwieldy tax bill currently before Congress is a provision that prohibits any state legislation regulating AI for the next 10 years. The bill is struggling to pass the House, and even if it does, the provision will likely be removed, as it’s illegal to include general policy changes in a tax bill. Whether or not the bill passes, it offers a clue to the Republican Party’s AI agenda, which has so far been largely unclear. Despite donating a record amount of money to Trump’s inauguration fund, Big Tech companies have not yet seen any obvious return on their investment. This provision, however, signals a concession to industry, which understandably fears over-regulation from a patchwork of inconsistent state-level AI rules (“More European than ‘Europe’”, according to John Thornhill). Still, a 10-year federal ban on any state AI regulation seems like the wrong way to reduce regulatory complexity and bring consistency across states.
There has to be a middle ground between total federal power and 50 unique regulatory schemes. One way to develop a common language between states is to make AI concerns a priority for existing federal agencies, which can then infuse their existing regulatory mechanisms with AI considerations, making the treatment of AI-related risks more specific and precise. That may be unrealistic; if so, larger states may end up writing the de facto regulatory language for AI, with all of the pros and cons that such state-led experiments entail.
¹ O’Reilly Media has signed up to NLWeb, but had no hand in writing this week’s roundup. Tim O’Reilly is a co-director of this project.