Open-source AI isn’t the endgame: Bringing AI on-chain is | Opinion

Disclaimer: The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the editorial team at crypto.news.

In January 2025, DeepSeek’s R1 overtook ChatGPT as the most popular free app on the US Apple App Store. Unlike proprietary models such as ChatGPT, DeepSeek is open-source, allowing anyone to access, study, share, and use its code for their own purposes.

This development sparked enthusiasm about transparency in AI, pushing the industry toward greater openness. In February 2025, Anthropic introduced Claude 3.7 Sonnet, a hybrid reasoning model made partially accessible through research previews, further advancing the conversation about accessible AI.

While these advancements drive innovation, they also reveal a misconception: that open-source AI is inherently more secure than closed models.

The potential and the risks

Open-source AI models like DeepSeek’s R1 and Replit’s latest coding agents demonstrate the power of accessible technology. DeepSeek claims to have built its system for just $5.6 million, a fraction of the cost of Meta’s Llama model. Meanwhile, Replit’s Agent, powered by Claude 3.5 Sonnet, enables anyone, even non-programmers, to create software from natural language prompts.

The implications are significant: virtually everyone, including small companies, startups, and independent developers, can now use these robust models to build specialized AI applications at lower cost, at a faster pace, and with greater ease. This could lead to a new AI economy where access to models is key.

However, where open-source excels in accessibility, it also demands increased scrutiny. Free access democratizes innovation, but it also widens exposure to cyber risks: malicious actors could manipulate these models to create malware or exploit vulnerabilities before patches are implemented.

Open-source AI is not without safeguards. It builds on a tradition of transparency that has strengthened technology for years. Historically, engineers relied on “security through obscurity,” concealing system details behind proprietary barriers. That approach proved ineffective as vulnerabilities emerged, often discovered first by malicious actors. Open source changed this paradigm: exposing code to public scrutiny, as with DeepSeek’s R1 or Replit’s Agent, fosters resilience through collaboration. Still, neither open nor closed AI models inherently ensure robust verification.

The ethical implications are equally crucial. Open-source AI, similar to closed models, can reflect biases or produce harmful outputs based on training data. This is not a flaw exclusive to open-source; it is a challenge of accountability. Transparency alone does not eliminate these risks or fully prevent misuse. The difference lies in how open-source encourages collective oversight, a strength that proprietary models often lack, although it still requires mechanisms to ensure integrity.

The necessity for verifiable AI

For open-source AI to earn greater trust, it requires verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or skewing the automated decisions that increasingly shape our world. Accessibility alone is not sufficient; models must also be auditable, tamper-proof, and accountable.

As distributed networks, blockchains can certify that AI models remain unchanged, that their training data stays transparent, and that their outputs can be validated against established benchmarks. Unlike centralized verification, which depends on trusting a single entity, blockchain’s decentralized, cryptographic approach prevents tampering by bad actors. It also shifts control away from third parties, distributing oversight across a network and incentivizing broader participation.

A blockchain-powered verification framework adds layers of security and transparency to open-source AI. Storing models on-chain or using cryptographic fingerprints ensures that modifications are openly tracked, allowing developers and users to verify they are using the intended version.
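
As a minimal sketch of what such fingerprinting could look like in practice, a model’s weight file can be hashed locally and compared against the digest published when the model was released. The registry, the file name, and the reference digest below are hypothetical stand-ins, not any particular project’s implementation:

```python
import hashlib
from pathlib import Path

def fingerprint_model(weights_path: str) -> str:
    """Return the SHA-256 digest of a model weight file, hashed in chunks
    so multi-gigabyte checkpoints never need to fit in memory."""
    digest = hashlib.sha256()
    with Path(weights_path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical reference value: in practice this digest would be read
# from an on-chain registry entry published alongside the model.
EXPECTED_DIGEST = "<digest anchored on-chain at release>"

if fingerprint_model("model.safetensors") != EXPECTED_DIGEST:
    raise RuntimeError("Local weights do not match the published fingerprint")
```

Because only the digest needs to live on-chain, the check stays cheap no matter how large the model itself grows.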

Recording the origins of training data on a blockchain demonstrates that models are sourced from unbiased, high-quality data, reducing the risks of hidden biases or manipulated inputs. Additionally, cryptographic techniques can validate outputs without exposing users’ personal data, striking a balance between privacy and trust as models become more robust.
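
Along the same lines, and purely as an illustrative sketch (the record format below is invented for the example), data provenance can be committed to by hashing each training record into a Merkle tree and anchoring only the root on-chain; any single record can later be shown to belong to the committed dataset without republishing the data itself:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Fold the hashes of all records pairwise into a single root hash."""
    level = [sha256(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Invented example records; a real pipeline would hash documents or files.
records = [b"doc-001|source=public-corpus", b"doc-002|source=licensed-news"]
print(merkle_root(records).hex())  # only this root goes on-chain
```

Validating outputs without exposing users’ personal data is the harder half of the problem, typically approached with zero-knowledge proofs rather than plain hashes.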

Blockchain’s transparent and tamper-resistant nature provides the accountability that open-source AI urgently needs. While AI systems currently rely on user data with minimal protection, blockchain can reward contributors and safeguard their inputs. By incorporating cryptographic proofs and decentralized governance, we can construct an AI ecosystem that is open, secure, and less reliant on centralized entities.

The future of AI relies on trust… on-chain

Open-source AI plays a crucial role, and the AI industry should strive for even greater transparency. However, being open-source is not the ultimate goal.

The future of AI and its impact will be built on trust, not just accessibility. Trust cannot be open-sourced; it must be established, verified, and reinforced at every level of the AI stack. Our industry should focus on the verification layer and the integration of safe AI. Currently, bringing AI on-chain and leveraging blockchain technology is the safest approach to building a more trustworthy future.

David Pinger

David Pinger is the co-founder and CEO of Warden Protocol, a company dedicated to promoting safe AI in web3. Prior to co-founding Warden, he led research and development at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held product, data analytics, and operations roles at Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.