Feb 9th, 2024 | Laobai ABCDE
After more than a year since the release of ChatGPT, discussions about AI+Crypto have once again heated up in the market. AI is seen as one of the most important tracks in the bull market of 2024–2025. Even Vitalik Buterin himself published an article titled “The promise and challenges of crypto + AI applications,” exploring the possible directions for future AI+Crypto exploration.
This article will not make too many subjective judgments, but will simply summarize the entrepreneurial projects combining AI and Crypto observed over the past year from a primary market perspective. It will examine from which perspectives entrepreneurs have entered the market, what achievements have been made so far, and which areas are still being explored.
Throughout 2023, we talked to dozens of AI+Crypto projects, and distinct cycles can be observed among them.
Before the release of ChatGPT at the end of 2022, there were few AI-related blockchain projects in the secondary market; the main ones that come to mind are FET, AGIX, and a few other veteran projects. Similarly, there weren't many AI-related projects to be found in the primary market.
January to May of 2023 could be considered the first concentrated outbreak period for AI projects; after all, the shock brought by ChatGPT was significant. Many old projects in the secondary market pivoted to the AI track, and in the primary market we could discuss AI+Crypto projects almost every week. That said, AI projects in this period were relatively simple: many were thin wrappers around ChatGPT with some blockchain modifications bolted on, with almost no core technological barriers. Our in-house development team could often replicate a project's framework in just a day or two. So although we met with numerous AI projects during this period, we ultimately took no action.
From May to October, the secondary market turned bearish, and interestingly, the number of AI projects in the primary market also dropped significantly. It wasn't until the last month or two of the year that the numbers picked up again and discussions, articles, and the like about AI+Crypto became richer; we once again entered a period of encountering AI projects every week. Compared to the first wave of AI hype half a year earlier, this new batch of projects clearly had a better understanding of the AI track, more concrete commercial scenarios, and tighter AI+Crypto integration. The technological barriers were still not strong, but the overall maturity had taken a step forward. It was only in 2024 that we finally made our first bet on the AI+Crypto track.
In Vitalik's article on "promise and challenges", he offers predictions along several relatively abstract dimensions, categorizing crypto+AI by the role AI plays:
- AI as a player in a game (highest viability)
- AI as an interface to the game (high potential, but with risks)
- AI as the rules of the game (tread very carefully)
- AI as the objective of the game (longer-term but intriguing)
We, on the other hand, will summarize the AI projects currently seen in the primary market from a more specific and direct perspective.
Most AI+Crypto projects are centered around the core of Crypto, which is “technological (or political) decentralization + commercial assetization.”
Regarding decentralization, there isn't much to say; it's all about Web3… Based on what is being assetized, we can roughly divide them into three main tracks:
1. Computing power assetization
2. Model (Agent) assetization
3. Data assetization
Computing power assetization is a relatively dense track, since besides various new projects, many old projects have pivoted into it. For example, on the Cosmos side there's Akash, and on the Solana side there's Nosana; after pivoting, their tokens all surged wildly, which indirectly reflects the market's optimism towards the AI track. Although RNDR primarily focuses on decentralized rendering, it can also serve AI purposes, which is why many classifications include RNDR-like computing-power projects in the AI track.
Computing power assetization can be further subdivided into two directions based on the use of computing power:
One is represented by Gensyn, which is “decentralized computing power used for AI training.”
The other is represented by most pivots and new projects, which is “decentralized computing power used for AI inference.”
In this track, we can observe an interesting phenomenon, or perhaps a chain of disdain:
Traditional AI → Decentralized inference → Decentralized training
The main reason is technical: AI training (especially for large models) involves massive amounts of data, and even more extreme than the data volume is the bandwidth demanded by high-speed communication of that data. In the current era of Transformer large models, training requires a computational matrix composed of large numbers of high-end GPUs (consumer cards like the 4090, or professional AI cards like the H100) plus hundred-gigabit-class interconnects built from NVLink and professional fiber switches. Can you imagine decentralizing this stuff? Hmm…
The demand for computing power and communication bandwidth in AI inference is far less than in AI training. Naturally, the possibility of decentralized implementation is much greater for inference than for training. That’s why most computing power-related projects focus on inference, while training is primarily left to major players like Gensyn and Together, which have raised hundreds of millions in financing. However, from the perspectives of cost-effectiveness and reliability, at least at this stage, centralized computing power for inference is still far superior to decentralized options.
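To make the gap concrete, here is a rough back-of-envelope sketch in Python. Every number is an illustrative assumption (model size, step rate, payload sizes), not a measurement from any project; the point is the orders of magnitude: data-parallel training must synchronize a full gradient copy across workers on every step, while inference only ships prompts in and tokens out.

```python
# Back-of-envelope: communication load of decentralized training vs. inference.
# Every number below is an illustrative assumption, not a measurement.

PARAMS = 70e9         # assumed model size: 70B parameters
BYTES_PER_GRAD = 2    # fp16 gradients
STEPS_PER_SEC = 0.5   # assumed training throughput

# Data-parallel training: each step, every worker exchanges a full gradient
# copy (a ring all-reduce moves roughly 2x the gradient size per worker).
grad_bytes = PARAMS * BYTES_PER_GRAD
train_bw = 2 * grad_bytes * STEPS_PER_SEC  # bytes/second, sustained
print(f"training sync: ~{train_bw / 1e9:.0f} GB/s per worker, nonstop")

# Inference: the weights stay put; a request only moves the prompt in and
# the generated tokens out. Assume ~1,000 tokens each way, ~4 bytes/token.
infer_bytes = 2 * 1000 * 4  # bytes per request
print(f"inference I/O: ~{infer_bytes / 1e3:.0f} KB per request")
```

Roughly 140 GB/s of sustained synchronization traffic per worker versus a few kilobytes per request is exactly why ordinary network connections can plausibly serve inference but not frontier-scale training.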
This explains why those focused on decentralized inference look down on decentralized training ("you simply can't make it happen"), while traditional AI looks down on both: decentralized training as technically unrealistic, and decentralized inference as commercially unreliable.
Some say that when BTC/ETH first came out, having distributed nodes redundantly compute everything also looked "senseless" compared to cloud computing, and yet didn't it succeed in the end? Well, that depends on what future AI training and inference will demand in terms of correctness, immutability, redundancy, and other such dimensions. Purely in terms of performance, reliability, and price, decentralized options currently cannot surpass centralized ones.
Model (Agent) assetization is also a crowded track, and one that is easier to understand than computing power assetization, because one of the best-known applications after ChatGPT's rise is Character.AI. With it, you can seek wisdom from ancient philosophers like Socrates and Confucius, chat casually with celebrities like Elon Musk and Sam Altman, or even flirt with virtual idols like Hatsune Miku and Raiden Shogun. All of this showcases the charm of large language models, and the concept of AI Agents has become deeply ingrained in people's minds through Character.AI.
What if figures like Confucius, Elon Musk, or Raiden Shogun were all NFTs?
Isn’t this AI X Crypto?!
So, rather than calling it model assetization, it’s more apt to say it’s the assetization of Agents built on top of large models. After all, large models themselves cannot be put on the blockchain. It’s more about mapping Agents on top of models into NFTs to create a sense of “model assetization” in the AI X Crypto space.
There are now Agents that can teach you English or even engage in romantic relationships with you, among various other types. Additionally, related projects such as Agent search engines and marketplaces can also be found.
The common issues in this track are, first, the lack of technological barriers: it's basically just the tokenization of Character.AI. Using existing open-source tools and frameworks, our in-house tech wizards can build an Agent that speaks and sounds like a specific character, such as our co-founder BMAN, in a single night. Second, the integration with blockchain is very light: much like GameFi NFTs on Ethereum, the on-chain metadata may be just a URL or a hash, the models/Agents live on cloud servers, and the on-chain transactions merely represent ownership.
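As a minimal sketch of how light that integration is (all names, fields, and the URL here are hypothetical, not from any specific project): the agent itself lives on an off-chain server, and the token's metadata carries only a pointer plus a hash committing to one specific agent configuration.

```python
import hashlib
import json

# The agent itself (base model, prompts, memory) lives on a cloud server.
# Hypothetical off-chain agent configuration:
agent_config = {
    "name": "BMAN-bot",
    "base_model": "some-open-source-llm",
    "system_prompt": "You speak and reason like BMAN...",
}

# Hashing the config at least commits the token to one specific agent;
# without it, the NFT is just a link.
config_hash = hashlib.sha256(
    json.dumps(agent_config, sort_keys=True).encode()
).hexdigest()

# This small JSON is typically all that an ERC721 tokenURI resolves to;
# the on-chain transaction transfers ownership of the pointer, not the model.
token_metadata = {
    "name": "Agent #1",
    "animation_url": "https://agents.example.com/bman",  # off-chain server
    "agent_config_sha256": config_hash,
}
print(json.dumps(token_metadata, indent=2))
```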
The assetization of models/Agents is still one of the main tracks in AI X Crypto in the foreseeable future. We hope to see projects with relatively higher technological barriers and closer integration with blockchain that are more native in the future.
Logically speaking, data assetization is the aspect best suited to AI+Crypto, because traditional AI training mostly relies on data visible on the public internet, that is, public domain traffic data. Such data may account for only around 10–20% of the total, with the majority actually sitting in private domain traffic (including personal data). If that data could be used for training or fine-tuning large models, we would undoubtedly get more professional Agents/Bots in various verticals.
What’s the best slogan of Web3? Read, Write, Own!
Therefore, through AI+Crypto, under the guidance of decentralized incentives, releasing personal and private domain traffic data and assetizing it to provide better and richer “food” for large models sounds like a very logical approach. Indeed, there are several teams deeply involved in this field.
However, the biggest challenge in this track is that data is hard to standardize the way computing power is. In decentralized computing power, your graphics card model translates directly into how much computing power you contribute. The quantity, quality, and purpose of private data, by contrast, are all hard to measure. If decentralized computing power is like ERC20, then assetized decentralized AI training data is more like ERC721, and worse, like APE, Punk, Azuki, and many other collections, each NFT with different traits, all mixed together. Liquidity and market-making are therefore far harder than for ERC20, and projects focusing on AI data assetization currently face significant challenges.
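A toy sketch of the standardization problem, with purely illustrative fields: two identical units of compute are perfect substitutes, while two datasets of the same size generally are not, so there is no shared unit of account to price them against.

```python
from dataclasses import dataclass

@dataclass
class ComputeUnit:        # "ERC20-like": interchangeable
    gpu_model: str
    hours: float

@dataclass
class DataAsset:          # "ERC721-like": every item is distinct
    domain: str           # e.g. medical chat logs vs. gaming telemetry
    num_samples: int
    dedup_ratio: float    # a quality proxy: how much of it is unique
    license_scope: str    # which training uses are permitted

# Two compute units with the same specs are perfect substitutes:
print(ComputeUnit("A100", 1.0) == ComputeUnit("A100", 1.0))  # True

# Two data assets of the same size are not; every axis affects the price:
d1 = DataAsset("medical", 10_000, 0.9, "fine-tuning only")
d2 = DataAsset("gaming", 10_000, 0.3, "unrestricted")
print(d1 == d2)  # False: no common unit to quote a market in
```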
Another aspect worth mentioning in the data track is decentralized labeling. Data assetization operates at the "data collection" step, but collected data needs processing before it can be fed to AI, which is where data labeling comes in. This step is currently mostly centralized and labor-intensive. Using decentralized token incentives to turn this labor into a "label-to-earn" model, or to distribute the work crowdsourcing-style, is also a viable approach. A few teams are currently working in this area.
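As a sketch of one common crowdsourcing design (not any specific project's protocol): several labelers tag the same item, the majority label is accepted, and the agreeing labelers split the token reward.

```python
from collections import Counter

def settle(labels: dict[str, str], reward: float) -> dict[str, float]:
    """labels maps labeler -> submitted label; returns labeler -> payout."""
    majority, _ = Counter(labels.values()).most_common(1)[0]
    winners = [who for who, lab in labels.items() if lab == majority]
    # Majority voters split the reward; dissenters earn nothing, which
    # disincentivizes random or lazy labeling.
    return {who: reward / len(winners) for who in winners}

print(settle({"alice": "cat", "bob": "cat", "carol": "dog"}, reward=3.0))
# -> {'alice': 1.5, 'bob': 1.5}
```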
Let's also briefly discuss, from our perspective, the puzzle pieces currently still missing in this track:
You might ask how a VC like us can come up with certain scenarios before the entrepreneurs in the market. That's because our in-house AI team has 7 experts, 5 of whom hold PhDs in AI. As for the ABCDE team's understanding of Crypto, well, you know…
Finally, although from a primary market perspective AI x Crypto is still very early and immature, that doesn't stop us from being optimistic that in 2024–25 AI x Crypto will become one of the main tracks of this bull market cycle. After all, is there a better combination than the productivity liberated by AI and the production relations liberated by blockchain? :)
ABCDE is a VC focused on leading investment in top Crypto builders. It was co-founded by Huobi co-founder Du Jun and former Internet and Crypto entrepreneur BMAN, both of whom have been in the Crypto industry for more than 10 years. The co-founders of ABCDE have built multi-billion dollar companies in the Crypto industry from the ground up, including listed companies (1611.HK), exchanges (Huobi), SaaS companies (ChainUP.com), media (CoinTime.com), and developer platforms (BeWater.xyz).
Twitter: https://twitter.com/ABCDELabs
Website: www.ABCDE.com
[Disclaimer] Markets carry risk; invest with caution. This article does not constitute investment advice, and readers should consider whether any opinions, views, or conclusions herein fit their particular circumstances. Invest accordingly at your own risk.