Hello and welcome to Eye on AI. In this edition…a mega seed round for ex-OpenAI CTO Mira Murati’s new startup…the impact of AI on cognitive skills…and why the effects of AI automation may vary so much across sectors.
Insurance is not considered the most cutting-edge industry. But AI has been making slow, steady inroads in the sector for years. Many companies have begun using computer vision applications that automatically assess damage—whether to cars following a collision or to the roofs of houses after a major storm—to help claims adjusters work more efficiently. Companies are also using machine learning algorithms to detect fraud and build risk models for underwriting. And, of course, like many other industries, insurance companies are using AI to boost productivity in support functions, from chatbots that answer customer queries to AI that helps design marketing materials to AI coding assistants for internal tech teams.
Which insurance companies are doing it best? That’s what the London-based research and analytics firm Evident Insights set out to discover with a new index assessing major insurance firms’ AI prowess. Evident has become known in recent years for its detailed benchmarking of banks’ AI capabilities. But this is the first time the research firm has moved beyond banking to look at another sector.
Like its banking index, Evident’s assessment is based almost entirely on quantitative metrics derived mostly from public sources of information—management statements in financial disclosures, press releases, company websites, social media accounts, patent filings, LinkedIn profiles, and news articles. In all, Evident looked at 76 individual metrics, organized into four “pillars” that the research firm said it believes are critical to deploying AI successfully: talent (which counts for 45% of the overall ranking), innovation (30%), leadership (15%), and transparency of responsible AI activity (10%). It used these to rank the 30 largest North American and European insurers when judged by total premiums underwritten or total assets under management.
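The four pillar weights above combine into a single index score in a straightforward way. Here is a minimal sketch of that weighted-sum arithmetic; the weights come from Evident's stated methodology, but the per-pillar scores below are invented for illustration and the scoring scale is an assumption:

```python
# Pillar weights as reported in Evident's methodology.
WEIGHTS = {"talent": 0.45, "innovation": 0.30, "leadership": 0.15, "transparency": 0.10}

def overall_score(pillar_scores: dict) -> float:
    """Combine per-pillar scores (assumed here to be on a 0-100 scale)
    into one weighted overall index score."""
    return sum(WEIGHTS[p] * pillar_scores[p] for p in WEIGHTS)

# Hypothetical insurer scoring 80/70/60/50 across the four pillars.
example = {"talent": 80.0, "innovation": 70.0, "leadership": 60.0, "transparency": 50.0}
print(round(overall_score(example), 1))  # 71.0
```

Because talent carries nearly half the weight, a firm that hires aggressively can place highly overall even if it publicizes little—which, as the USAA example below shows, cuts both ways.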
Two insurers, Axa and Allianz, emerged as clear leaders in Evident’s assessment. They were the only two to rank in the top five across all four pillars, and they had a substantial lead over third-place insurer USAA.
Even in the age of AI, human capital can be decisive
Alexandra Mousavizadeh, the co-founder and co-CEO of Evident, tells me that the result is surprising, in part because both Axa and Allianz are based in Europe, where large companies have generally been seen as lagging their North American peers in AI adoption. (And in Evident’s banking index, all of the highest ranked firms are North American.) But Mousavizadeh says that she thinks Axa and Allianz have a common corporate cultural trait that may explain their AI dominance. “My theory on this is that it’s embedded in an engineering culture,” she says. “Axa and Allianz have been doing this for a very long time and if you look at their histories, there has been much more of an engineering leadership and engineering mindset.”
Mousavizadeh says that claims and underwriting automation are both big engineering challenges that require large teams of skilled developers and technology experts to make work at scale. “You have got to have more engineers,” she says. “For that last mile of getting a use case into production, you have to have AI product managers, and you have to have AI software engineering.”
Companies that invest most heavily in human AI expertise are most likely to excel at using AI to run their businesses more efficiently, opening up an ever-widening gap between these companies and the AI laggards. (Of course, in Evident’s methodology, it helps if management talks about what it’s doing with AI and publicizes its AI governance policies too. USAA actually ranks first on Evident’s talent pillar, but falls to third place overall because it ranks near the bottom of the pack on both “leadership”—which is mostly about management’s statements about how the company is using AI—and “transparency of responsible AI policies.”)
Show us the money
Still, as in many industries, there seems to be a substantial gap in the insurance sector between AI hype and actual ROI. Of the 30 insurers Evident evaluated, only 12 had disclosed at least one AI use case with “a tangible business outcome.” Just three insurers—Intact Financial, Zurich Insurance Group, and Aviva—had publicly disclosed a monetary return from their AI efforts. That’s pretty poor.
The most transparent of this group was Canada-based Intact Financial, a property and casualty insurer that said publicly in 2024 that it had invested $500 million in technology (that’s all tech, not just AI) across its business, had deployed 500 AI models, and had seen $150 million in benefit so far. In one of its use cases, AI models transcribe calls with speech-to-text and language models then analyze those transcripts to assess how well its human customer service agents handled the up to 20,000 customer calls the company receives daily.
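The two-stage pipeline Intact describes—transcribe first, then score the transcript—can be sketched in a few lines. This is purely illustrative: `transcribe` and `score_transcript` are hypothetical stand-ins for a real speech-to-text model and an LLM prompt, and the rubric check is invented, not Intact's actual system:

```python
def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text model; a real system would call
    a transcription service here."""
    return "Agent: Thanks for calling, how can I help you today?"

def score_transcript(transcript: str) -> dict:
    """Stand-in for a language model that rates agent handling against
    a quality rubric; here we fake it with a single keyword check."""
    greeted = "thanks for calling" in transcript.lower()
    return {"greeting": greeted, "quality": 5 if greeted else 3}

def review_calls(calls: list) -> list:
    # At ~20,000 calls a day, a real deployment would batch this work
    # rather than score calls one at a time.
    return [score_transcript(transcribe(audio)) for audio in calls]

print(review_calls([b"fake-audio-bytes"]))
```

The design point is the decoupling: the transcription stage and the scoring stage can be swapped or upgraded independently, which is one reason this pattern shows up across call-center QA deployments.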
That’s still a cost-savings example—a way of boosting the bottom line—and not one in which a company is using AI to grow its sales or move into new business areas. Evident found that insurers were primarily applying AI this way—attacking the industry’s largest cost centers, namely claims processing, customer service, and underwriting. As the research firm notes: “Revenue-generating AI is yet to appear on our outside-in assessment.”
The story here isn’t just about insurance—it’s about every industry grappling with AI. Executives everywhere are still figuring out which AI investments will pay off, but the early winners share a common thread: they’re not just buying AI tools, they’re building AI teams. They’re hiring engineers, experimenting relentlessly, measuring results—and then expanding the successful use cases everywhere they can. And benchmarking, like the kind Evident is doing, can play a vital role in both informing executives about what seems to be working—and pushing entire industries to adopt AI faster, as well as to being more transparent about how they’re using AI and what policies they have in place around its responsible use. That’s a lesson worth learning, whether you’re insuring cars or building them.
With that, here’s more AI news. And, before we get to the other sections, I want to flag this deep dive article from my colleagues Sharon Goldman and Allie Garfinkle into the background behind Meta’s $14 billion investment into Scale AI and the hiring of Scale co-founder and CEO Alexandr Wang for a major new role at Meta. Their story is a must-read. Check it out here.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year’s theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors and analysts. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, examine how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here and, as loyal Eye on AI readers, I’m able to offer complimentary tickets to the event. Just use the discount code BAI100JeremyK when you check out.
AI IN THE NEWS
Former OpenAI CTO Mira Murati’s AI startup raises record $2 billion seed round. The company, called Thinking Machines Lab, was valued at $10 billion in the funding, which was led by Andreessen Horowitz, with participation from Accel, Conviction Partners, and others, Bloomberg News reported. Meanwhile, The Information reported that Murati’s startup plans to offer AI models that are customized around each business’s key performance metrics. Investors are, according to the tech publication, calling it “[reinforcement learning] for business.” The lab also plans a consumer-focused product, the publication said.
DeepSeek is working for China’s military and intelligence agencies, U.S. official says. That’s according to a Reuters report citing an unnamed senior State Department official. The U.S. government believes Chinese AI startup DeepSeek is supporting China’s military and intelligence operations and has attempted to evade U.S. export controls by using Southeast Asian shell companies to access restricted Nvidia chips, the official said. The official said DeepSeek is also likely sharing user data with Beijing’s surveillance system and has been referenced over 150 times in procurement records from China’s military, though the company denies or has not responded to these claims.
Meta considered acquiring AI video generation company Runway. The negotiations never progressed to a formal offer and have since ended, Bloomberg News reported citing unnamed sources it said were familiar with the discussions. The talks were part of Zuckerberg’s broader effort to recruit top AI talent, including a multibillion-dollar stake in Scale AI and another reported multibillion-dollar deal with prominent AI investors Nat Friedman and Daniel Gross, as well as prior talks about potentially acquiring AI search startup Perplexity.
Federal judge rules that training AI models on copyrighted works can be “fair use,” but only if initial works are obtained legally. In a case involving authors who sued AI company Anthropic for using their copyrighted books to train its AI models, a federal district court in California ruled that Anthropic’s use of copyrighted material for AI training constituted “fair use” so long as Anthropic obtained the books legally in the first place. In this case, Anthropic had downloaded millions of pirated digital copies of books from an online source, as well as having purchased millions more physical books, which it then digitized itself to use for AI training. The judge said that the latter was essentially okay—that Anthropic did not need to license the books it had purchased for AI training—but that the former constituted a potential violation of copyright law. Anthropic was not immediately available to comment on the ruling, which may set a precedent for other closely watched copyright cases against AI companies. You can read more from Reuters here.
EYE ON AI RESEARCH
Your Brain on ChatGPT. One of my biggest concerns about the AI revolution is that if we become over-reliant on the technology, it will impair our own critical thinking and writing abilities. Now a new study from researchers at MIT entitled “Your Brain on ChatGPT” is offering some initial evidence that this fear is not unfounded. The researchers examined students who used ChatGPT to write essays over a period of four months and compared them to two other groups: one that had access to a traditional search engine, but not ChatGPT, and another that had access to neither digital tool.
The students who used ChatGPT struggled to recall what they had produced and reported a low sense of ownership over their work. The researchers also studied the students’ brain activity using an electroencephalogram (or EEG, a non-invasive way of monitoring electrical signals in the brain) and found that those who used ChatGPT had much weaker neural activity, and seemingly less connectivity between different brain regions, than either of the two groups without LLM access. The group without digital tools seemed to show the strongest brain connections, which are thought to be important for memory formation, learning, and creativity. What’s more, they seemed to retain these connections even when later allowed to use a chatbot to help them. You can read the study here.
Some critics pointed out flaws with the not-yet-peer-reviewed study, including its small sample size (54 students in total, with only 18 participating in the final phase), its focus on a single LLM, and its reliance on EEG, which does not allow particularly fine-grained analysis of brain activity. Others have said that the study’s conclusion was not surprising, but that it does not say anything about whether educational processes could be designed that would still allow students to use LLMs without enabling them to outsource all of the hard work of writing and critical thinking to the AI model. A more hybrid approach, these folks argue, might produce a best-of-both-worlds outcome where students get the benefit of AI assistance without damaging their own cognitive skills. Since the study did not examine this kind of approach, we have no real insight into whether that Goldilocks solution is possible.
FORTUNE ON AI
Hinge’s CEO says dating isn’t something people should leave up to AI—but it could coach users along the way —by Beatrice Nolan
New York plans to construct a nuclear-power facility, the first in over 15 years —by Chris Morris
Tesla robotaxi finally launches but hiccups include long wait times, Pokemon-style hunts for the car, and even driving in wrong lane —by Christiaan Hetzner
Leading AI models show up to 96% blackmail rate when their goals or existence is threatened, Anthropic study says —by Beatrice Nolan
AI CALENDAR
July 8-11: AI for Good Global Summit, Geneva
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
What effect will AI have on wages and employment? Traditional economic theory looks at this question by examining what tasks AI can automate, and it assumes that jobs involving similar tasks will see similar effects. On this traditional view, the more of a job’s tasks that are automated, the more both wages and employment will fall.
A new working paper by MIT labor economist David Autor and Neil Thompson, the director of MIT’s Future of Tech research project, published by the National Bureau of Economic Research, challenges these ideas. The paper concludes that the impact of automation on wages and employment depends entirely on the level of expertise required for the non-automatable tasks that remain part of the job.
Autor and Thompson argue that in cases where the tasks left to the human still require a high degree of specialized knowledge, employment will fall but wages will actually rise. In cases where the remaining tasks are relatively non-expert, wages will fall but employment will go up, as the automation democratizes access to that job, leading to an increase in the supply of labor to take on those remaining tasks. (They illustrate this by looking at the example of how computer software has affected accounting clerks vs. inventory stock clerks.) You can read the paper here.