This is contributed content by Samuel Pearton, Chief Marketing Officer at Polyhedra.
AI is advancing at tremendous speed with little consideration for verifiability, leaving individuals and society vulnerable to undetected AI errors. As AI models grow more complex, they face a trust gap that hinders large-scale adoption by users and companies.
To ensure sustainable development, AI companies should adopt verifiable tech like zero-knowledge proofs (ZKPs) or trusted execution environments (TEEs) to balance innovation and verifiability.
AI has a trust issue
Transparency is critical for building trust within the AI industry. Yet the prevalence of black box AI models prevents a transparent understanding of how an algorithm analyzes data and makes decisions. It remains unclear to users and clients how the AI generates a particular output or arrives at its conclusions.
If AI models can’t prove the legitimacy of their results, the AI trust deficit will grow. Evidence of the trust gap between AI agents and humans is mounting: Meta’s open-source model Llama 2 scores just 54 out of 100 on the transparency index from Stanford’s Center for Research on Foundation Models.
Meanwhile, new products from emerging sectors like DeFAI (DeFi combined with AI agents) are prone to mistakes and hallucinations, which can compromise user funds and jeopardize trust. In November 2024, a user convinced an AI agent on Base to send $47K despite it being programmed never to do so. Although part of a game, the incident exposed the flaws of AI agents autonomously handling financial operations.
Such incidents corroborate a 2023 KPMG study, which reported that 61% of people are skeptical about trusting AI systems. To improve transparency, AI firms run internal and external audits, bug bounty programs, and red team exercises to identify potential exploits in their codebase. But this isn’t enough to establish trust in AI logic, because doubts remain about safeguards against malicious prompts and sophisticated attacks.
Doubts about AI model transparency are also common among technical practitioners. A Forrester survey published in Harvard Business Review found that 21% of analysts cited a lack of transparency in AI/ML models, and 25% named lack of trust in AI as a major concern.
Despite these concerns, the American AI industry has embraced rapid innovation with little regard for safety and verifiability. Russell Wald, executive director at the Stanford Institute for Human-Centered Artificial Intelligence, described the industry’s stance: “Safety is not going to be the primary focus, but instead, it’s going to be accelerated innovation and the belief that the technology is an opportunity, and safety equals regulation, regulation equals losing that opportunity.”
In this environment, verifiability becomes crucial when building new AI systems. With transparent, trustworthy operating procedures, ZKP-powered verifiable AI is the safety valve for the industry’s critical trust deficit and safety issues.
Verifiable tech is essential for AI
ZKPs are essential to address the AI industry’s trust deficit.
Zero-knowledge machine learning (zkML) enhances data integrity and security through provable output generation without revealing a model’s inner workings. Further, zkML-powered oracles can feed verifiable data to AI models, ensuring data reliability.
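To make the commit–prove–verify idea concrete, here is a toy Python sketch. It is only an illustration of the flow: the hash "proof" below is forgeable and reveals nothing-hiding, so it has none of the soundness or zero-knowledge properties that a real zkML system (built on SNARK-style proofs) provides. All function names here are illustrative, not part of any real library.

```python
import hashlib
import json

def commit(weights):
    # Publish a hash of the model weights. In a real zkML system this
    # would be a cryptographic commitment baked into the proving circuit.
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def run_model(weights, x):
    # A trivial "model": a dot product of weights and input.
    return sum(w * xi for w, xi in zip(weights, x))

def prove(weights, x):
    # Produce the output plus a transcript hash binding it to the weight
    # commitment. NOTE: a real ZK prover emits a succinct proof that the
    # computation was done correctly, without revealing the weights and
    # without the possibility of forging a proof for a wrong output.
    y = run_model(weights, x)
    transcript = json.dumps({"c": commit(weights), "x": x, "y": y})
    return y, hashlib.sha256(transcript.encode()).hexdigest()

def verify(commitment, x, y, proof):
    # The verifier checks the claimed output against the public
    # commitment and input, never seeing the weights themselves.
    transcript = json.dumps({"c": commitment, "x": x, "y": y})
    return proof == hashlib.sha256(transcript.encode()).hexdigest()

# Usage: the model owner commits once, then anyone can verify outputs.
weights = [2, -1, 3]
c = commit(weights)
y, proof = prove(weights, [1, 2, 3])
print(verify(c, [1, 2, 3], y, proof))       # accepted
print(verify(c, [1, 2, 3], y + 1, proof))   # tampered output rejected
```

The point of the sketch is the division of roles: the prover holds the private weights, while the verifier only needs the public commitment, input, output, and proof, which is exactly the property that lets a client trust a black box model's answer.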
With ZKPs, healthcare companies can use patient data to train verifiable AI models without including additional private, identifiable information. Similarly, financial agencies can deploy ZK-powered AI agents to responsibly handle lending-borrowing operations using verifiable, privacy-enabled credit score data.
Yet despite the accessibility of zkML and other verifiable technologies, the trust deficit within the AI industry remains high. In a poll at The Wall Street Journal’s CIO Network Summit, most of America’s top IT leaders named a lack of reliability as their primary concern about AI.
Developers and engineers should treat verifiability as the default when building AI systems and applications. Leveraging existing verifiable AI technology would address many of these concerns.
Verifiable tech like zkML gives companies visible assurance that an AI model is fit for the job, because its outputs are provable. It also lets users trust AI without having to understand the internal logic of black box models.
zkML is to AI what HTTPS is to the internet. Before HTTPS, internet users had no way to verify who they were communicating with or keep sensitive data secure. Similarly, traditional AI systems demand access to raw data and offer little transparency in return. zkML flips that model. It enables cryptographic proof that an AI model executed correctly, without revealing sensitive model details. Just as HTTPS made the internet safe for banking and commerce, zkML makes AI safe for sensitive, high-stakes applications.
Building confidence among all stakeholders is necessary as the AI industry gears up to contribute $15.7 trillion to the global economy by 2030. ZKPs accelerate AI innovation without jeopardizing user trust and transparency.
Disclaimer: This is a contributor article, a free service allowing blockchain and crypto industry professionals to share their experiences or opinions with AlexaBlockchain’s audience. The content above has not been created or reviewed by the AlexaBlockchain team, and AlexaBlockchain expressly disclaims all warranties, whether express or implied, regarding the accuracy, quality, or reliability of the content. AlexaBlockchain does not guarantee, endorse, or accept responsibility for the content in any manner. This article is not intended to serve as investment advice. Readers are advised to independently verify the accuracy and relevance of any information provided before making any decisions based on the content. To submit an article, please contact us via email.