Accelerating the adoption of AI in Web3

Part 1: Motivation and challenges

The current AI landscape is dominated by centralized, closed-source, oligopolistic tech giants. A small number of corporations control the highest-performing models, largely because of the extreme centralizing forces at work in model development and inference.

Creating a machine learning model typically involves three main phases: pre-training, fine-tuning, and inference. These phases are crucial for the development of a robust and accurate model that can generalize well to new, unseen data.

The pre-training phase

During the pre-training phase, the model is trained on a large, general dataset. This dataset is not specific to the task the model will eventually perform but is instead designed to help the model learn a wide array of features and patterns. For example, in the case of language models, this might involve learning language structure, grammar, and a broad vocabulary from a massive corpus of text. The goal here is to develop a model that has a good understanding of the underlying structure of the data it will work with, whether that's text, images, or any other form of data. 

Within the pre-training phase, there exist a number of centralizing forces:

1. Data Collection and Sorting - The pre-training phase is critically dependent on aggregating extensive datasets from varied sources, including literature, digital articles, and specialized databases. Industry behemoths, such as Google, have historically capitalized on user-generated content to craft models of unparalleled efficacy—a practice that continues today with entities like Microsoft and OpenAI, which secure top-tier data through exclusive alliances and proprietary platforms. This aggregation of capabilities within a handful of corporations contributes to a pronounced centralization within the AI sector.

Moreover, the reliance on proprietary datasets for model training introduces significant ethical considerations and the risk of perpetuating biases. AI algorithms, by design, derive their operational paradigms from the underlying data, rendering them susceptible to embedding and replicating any inherent biases. This scenario underscores the imperative for meticulous scrutiny and ethical oversight in the development process, ensuring that models reflect equitable and intentional patterns and associations.

2. Resource Requirements - Model performance has been shown to improve predictably, though with diminishing returns, as training data and compute grow, meaning the models trained with the most GPU compute cycles generally perform best. This dynamic introduces a significant centralizing influence in the pre-training phase, driven by the economies of scale and productivity advantages held by major tech and data corporations. The trend is evident in the dominance of industry giants such as OpenAI, Google, Amazon, Microsoft, and Meta, which own and operate the majority of data centers worldwide and have first access to the newest, top-of-the-line hardware from NVIDIA.
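To make the scaling claim concrete: the empirical scaling laws reported by Kaplan et al. (2020) for language models take, approximately, the form of power laws in dataset size D and compute C (the exponents below are their reported values, quoted here as rough magnitudes; D_c and C_c are fitted constants):

```latex
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
\qquad\qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \quad \alpha_C \approx 0.050
```

Because the exponents are small, every constant-factor reduction in loss L requires a large multiple of additional data or compute, which structurally favors whoever can amass the most of both.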

The fine-tuning phase

Once the model has been pre-trained, it undergoes fine-tuning. In this phase, the model is further trained on a smaller, task-specific dataset. The purpose is to adjust the weights and features that the model has learned during pre-training to make it better suited for the specific task at hand. This might involve teaching a language model to understand medical terminology or training an image recognition model to differentiate between different species of birds.

The fine-tuning phase allows the model to specialize and improve its performance on tasks that matter to the end user. This phase, too, is subject to centralizing forces, the most important being closed-source models and the lack of verifiability.

In the fine-tuning stage, the model's parameters are refined and set, shaping its functionality and performance. The prevailing trend towards proprietary AI models, such as OpenAI's GPT series and Google's Gemini, means that the inner workings and parameters of these models are not disclosed. As a result, when users request inferences, they are unable to verify if the responses truly come from the model they believe they are interacting with.

This lack of transparency can disadvantage users, particularly in scenarios where trust and verifiability are paramount. For instance, in healthcare, where AI models might assist in diagnosing diseases or recommending treatments, the inability of practitioners to confirm the source and accuracy of the model's inference could lead to mistrust or even misdiagnosis. If a medical professional cannot ascertain that the AI's recommendation is based on the most reliable and up-to-date model, the consequences could directly impact patient care and outcomes, underscoring the critical need for transparency and accountability in AI deployments.

The inference phase

The inference phase is where the model is actually put to use. At this point, the model has been trained and fine-tuned, and it's ready to make predictions on new data. In the case of an AI model, this could mean answering questions, translating languages, or providing recommendations. This phase is all about applying the trained model to real-world problems and is typically where the value of the model is realized.

Within the inference phase, the factors leading to centralization are:

1. Access - Centralized frontends for AI model access pose a risk by potentially cutting users off from API or inference access. When a few entities control these gateways, they can, at their discretion, deny access to essential AI services for various reasons, including policy changes or disputes. This centralization highlights the need for decentralized approaches to ensure broader, more resilient access to AI technologies, mitigating risks of censorship and access inequity.

2. Resource Requirements - The cost of inference in AI, particularly for tasks requiring extensive computational resources, acts as a centralizing force within the technology sector. High inference costs mean that only large companies with substantial financial resources can afford to run advanced AI models at scale. This financial barrier limits the ability of smaller entities or individual developers to leverage the full potential of cutting-edge AI technologies.

As a result, the landscape becomes increasingly dominated by a few powerful players, stifling competition and innovation. This centralization not only impacts the diversity of AI development but also limits access to AI's benefits to a select group of well-funded organizations, creating a significant imbalance in the technological ecosystem.

Within the centralized AI landscape, several recurring themes emerge, particularly regarding the evolution of Web2 companies. Initially founded as open networks, these entities often shift their focus towards maximizing shareholder value. This transition frequently leads to the closing off of their networks, with algorithms adjusted to discourage out-linking, often contrary to the best interests of their users.

This misalignment between company incentives and user needs typically occurs as the organization matures and secures external funding. In fact, we have already seen this phenomenon with OpenAI, which began as a non-profit with the ambition to democratize AI access, illustrating how such shifts in focus can manifest in the industry. It is easy to blame these individual companies, but we believe this reflects a systemic issue stemming from the centralizing forces within the tech industry, where the concentration of power and resources often leads to misalignments between company incentives and broader user needs.

Possible future - AI 🤝 Web3

Crypto furnishes artificial intelligence with a foundation that allows for seamless, open, and secure exchanges of information and value. Blockchain technology delivers a clear and accountable system for managing transactions and recording data. At the convergence of crypto and AI, numerous opportunities arise where both domains can mutually enhance and benefit from each other's capabilities. 

Incentive Aligned

Decentralized computing holds significant value in the pre-training and fine-tuning phases of model development. Foundation models, known for their extensive need for GPU compute cycles, usually run these processes in centralized data centers. Decentralized Physical Infrastructure Networks (DePIN) can instead offer decentralized, permissionless access to compute. Through crypto-economic incentives, software can autonomously pay for hardware usage without a centralized governing entity. This puts control of the network in the hands of its users, aligning incentives and ensuring both data and model providers are sufficiently compensated for their contributions.
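As a toy illustration of such crypto-economic incentives, the sketch below models a compute escrow in which users pre-pay for GPU time and providers stake collateral that is forfeited on faulty work. All names and numbers (ComputeEscrow, PRICE_PER_UNIT) are hypothetical, and a real deployment would live in a smart contract rather than an in-memory Python object:

```python
# Minimal sketch of a DePIN-style compute escrow, assuming a toy in-memory
# ledger in place of an actual blockchain. All names and values are
# illustrative, not from any specific protocol.

from dataclasses import dataclass, field

PRICE_PER_UNIT = 10          # tokens per unit of GPU time (hypothetical)
SLASH_AMOUNT   = 50          # provider stake forfeited on faulty work

@dataclass
class ComputeEscrow:
    balances: dict = field(default_factory=dict)   # user -> escrowed tokens
    stakes:   dict = field(default_factory=dict)   # provider -> staked tokens

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def stake(self, provider: str, amount: int) -> None:
        self.stakes[provider] = self.stakes.get(provider, 0) + amount

    def settle(self, user: str, provider: str, units: int, ok: bool) -> None:
        """Pay the provider per unit of verified work; slash on failure."""
        if ok:
            cost = units * PRICE_PER_UNIT
            assert self.balances[user] >= cost, "insufficient escrow"
            self.balances[user] -= cost
            self.stakes[provider] = self.stakes.get(provider, 0) + cost
        else:
            self.stakes[provider] -= SLASH_AMOUNT  # misbehavior penalty

escrow = ComputeEscrow()
escrow.deposit("alice", 1_000)
escrow.stake("gpu_provider", 500)
escrow.settle("alice", "gpu_provider", units=3, ok=True)
print(escrow.balances, escrow.stakes)  # {'alice': 970} {'gpu_provider': 530}
```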

Verifiability

The current landscape of artificial intelligence infrastructure predominantly leans towards proprietary models, wherein users are required to place their trust in the inference providers to execute their queries through the designated model and generate a legitimate output. In this context, cryptographic proving systems emerge as a pivotal technology, offering a mechanism to validate model outputs on the blockchain. This process enables a user to submit a query, which the inference provider processes using the agreed-upon model, subsequently producing an output accompanied by a cryptographic proof. This proof serves as verifiable evidence that the query was indeed processed through the specified model.

The primary aim of these initiatives is to offload the burden of heavy computational tasks to off-chain environments, while ensuring that the results can be verified on-chain. This approach not only alleviates the computational load on the blockchain but also introduces a layer of transparency and trustworthiness by providing immutable proof of the accuracy and completion of the computations conducted off-chain.

Incorporating these cryptographic proofs into the AI model validation process addresses several key concerns associated with closed source AI systems. It mitigates the risk of opaque or unverified computations, enhances the integrity of the computational process, and fosters a trust-based relationship between the users and the inference providers. Furthermore, this methodology aligns with the broader movement towards decentralized and transparent systems, echoing the foundational principles of blockchain technology.
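To make the interaction shape concrete, here is a minimal sketch of the query-to-verified-output flow. The "proof" below is just a hash binding the committed model, query, and output together; a hash alone proves nothing about execution, so read it strictly as an interface sketch of what a real proving system (e.g. a zk-SNARK) would slot into:

```python
# Schematic of the query -> output + proof flow. The "proof" is a stand-in:
# a real system replaces it with a cryptographic proof of execution.

import hashlib

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

MODEL_COMMITMENT = h("weights of the agreed-upon model")  # published on-chain

def inference_provider(query: str) -> tuple[str, str]:
    output = f"model answer to: {query}"           # stand-in for inference
    proof = h(MODEL_COMMITMENT, query, output)     # stand-in for a ZK proof
    return output, proof

def onchain_verifier(query: str, output: str, proof: str) -> bool:
    # Check that the proof binds this output to the committed model and query.
    return proof == h(MODEL_COMMITMENT, query, output)

out, prf = inference_provider("What is the capital of France?")
print(onchain_verifier("What is the capital of France?", out, prf))       # True
print(onchain_verifier("What is the capital of France?", "forged", prf))  # False
```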

Composability

One of the primary advantages of decentralized finance and blockchain networks has been the composability they enable. Composability allows for the proliferation of "Money Legos" in DeFi, i.e. combining different protocols and their outputs to create new applications. Although composability introduces new forms of risk into the system, it also simplifies application development, increasing the pace of innovation, and makes for a more convenient user experience.

Similar to how crypto has enabled composability for financial products, it will also create composable AI networks and applications by acting as a permissionless and trustless layer on which AI modules can be built and operate independently, while staying interconnected with other modules to form a network capable of providing various services. Through blockchain network effects and crypto, decentralized AI projects and applications can connect with each other to complete the AI stack.

For example, data pre-processed using Akash or Filecoin can be used to train models on Marlin, Gensyn or Together. Following fine-tuning, these trained models can respond to user queries (inference) through Bittensor. Although this sounds complicated, an end user would only interact with a single front-end, while developers and applications benefit from building on top of the various stacks and applications, as sketched below.
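A minimal sketch of what such composition looks like from a developer's perspective follows; each stub stands in for a decentralized service (storage, training, inference), and all interfaces are hypothetical rather than any network's actual API:

```python
# Illustrative sketch of AI-stack composability: each stage is a stub standing
# in for a decentralized service. Interfaces and the CID are hypothetical.

from typing import Callable

def fetch_dataset(cid: str) -> list[str]:
    """Stand-in for retrieving pre-processed data from decentralized storage."""
    return [f"example-{i} from {cid}" for i in range(3)]

def train_model(data: list[str]) -> Callable[[str], str]:
    """Stand-in for a decentralized training job; returns a 'model'."""
    corpus = " | ".join(data)
    return lambda prompt: f"answer({prompt}) derived from [{corpus[:40]}...]"

def inference_frontend(prompt: str) -> str:
    """The single front-end a user sees; composition happens underneath."""
    model = train_model(fetch_dataset("bafy...exampleCID"))
    return model(prompt)

print(inference_frontend("What is composability?"))
```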

Another important aspect of composability enabled through decentralized networks is data composability. As users become increasingly interested in owning the data they generate, and demand that they are able to carry it around from AI protocol to AI protocol, they will require that their data is not constrained within walled gardens. Decentralized and open source AI applications enable data portability.

Data Protection

Decentralized compute, augmented with external data and privacy solutions, offers users more autonomy over their data, making it a more compelling option than its centralized counterparts. In particular, methods such as Fully Homomorphic Encryption (FHE) allow for computations on encrypted data without needing to decrypt it first.

FHE enables the training of machine learning models on encrypted datasets, maintaining data privacy and security throughout the entire training process. This provides an end-to-end security solution with robust cryptographic assurances, allowing for privacy-preserving model training, particularly in edge networks, and for the development of AI systems that protect user privacy while leveraging advanced AI capabilities.

The role of FHE extends to running large language models securely on encrypted data in cloud environments. This not only safeguards user privacy and sensitive information but also makes it possible to build applications with privacy guarantees built in. As the integration of AI intensifies across sectors, especially sensitive ones like finance, the need for technologies like FHE that protect against potential information leaks becomes paramount.

Auto-upgradeability

AI can be used to maintain, update and auto-upgrade smart contracts based on a set of changes and conditions. For example, AI can be used on the protocol side to adjust risk parameters in response to changes in risk and other market conditions. A commonly cited example is money markets, which currently rely on external organizations or DAO votes to adjust the risk parameters for lending and borrowing against assets. AI agents could streamline these parameter updates instead, a stark improvement over humans and DAOs, which can be slow and inefficient.
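As a hypothetical sketch of such an agent's core logic, the function below tightens a money market's maximum loan-to-value as realized price volatility rises. The rule, threshold, and floor are illustrative only; a production system would route proposals through on-chain checks or a timelock rather than applying them directly:

```python
# Hypothetical sketch of a risk-parameter agent for a money market.
# The update rule and all constants are illustrative, not any protocol's.

import statistics

def proposed_ltv(prices: list[float], base_ltv: float = 0.80) -> float:
    """Lower the max loan-to-value as realized volatility rises."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    vol = statistics.pstdev(returns)
    adjusted = base_ltv - 2.0 * vol        # hypothetical tightening rule
    return max(0.50, round(adjusted, 3))   # never below a 50% floor

calm     = [100, 100.2, 99.9, 100.1, 100.0]
volatile = [100, 104, 97, 103, 96]
print(proposed_ltv(calm))      # stays near 0.80
print(proposed_ltv(volatile))  # tightened LTV
```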

Challenges of decentralized AI

Decentralized AI presents a range of challenges, particularly in balancing the open-source ethos of cryptography against the security concerns of AI, as well as meeting AI's computational requirements. In cryptography, open source is crucial for ensuring security, but in AI, making a model or its training data publicly available increases its vulnerability to adversarial machine learning attacks. This tension poses a significant challenge in developing applications that leverage both technologies.

Furthermore, applications of AI in blockchain, such as AI-driven arbitrage bots, prediction markets, and decision-making mechanisms, raise issues around fairness and manipulation. AI can potentially improve efficiency and decision-making in these areas, but there's a risk of AI not fully grasping the nuances of human-driven market dynamics, leading to unforeseen consequences.

Another area of concern is using AI as an interface in crypto applications. While AI can assist users in navigating the complex world of cryptocurrency, it also poses risks like being susceptible to adversarial inputs or causing users to over-rely on AI for critical decisions. Moreover, integrating AI into the rules of blockchain applications, such as DAOs or smart contracts, is fraught with risks. Adversarial machine learning could exploit AI model weaknesses, resulting in manipulated or incorrect outcomes. Ensuring that AI models are accurate, user-friendly, and secure from manipulation is a significant challenge.

Additionally, the integration of AI with ZK proofs or MPC is computationally intensive and faces further hurdles such as memory limitations and model complexity. The current state of tooling and infrastructure for zero-knowledge machine learning (zkML) is still underdeveloped, and there is a lack of skilled developers in this niche area. These factors contribute to the considerable amount of work required before zkML can be implemented at a scale suitable for consumer products.

Conclusion

Balancing decentralization and trust, especially in cases where AI uses trusted hardware or specific data governance models, remains a critical challenge in maintaining the decentralized ethos of blockchain while ensuring AI system reliability. In the next part of this article, we shall delve deeper into the technologies that can power decentralized AI and how Marlin’s infrastructure plays a crucial role in making it a reality.

Part 2: An overview of implementation techniques

In the previous part of this article, we explored the pitfalls of centralized AI and how Web3 could alleviate them. However, running models on-chain is impossible without paying extremely high gas fees, and attempting to increase the computational power of the underlying blockchain would raise node requirements for validators, reducing decentralization as small, home validators get priced out.

In the following sections, we’ll go through some of the popular tools and techniques that are necessary for the further development of AI with Web3, namely, ZKPs, FHE and TEEs.

ZKP and ZKML

Zero Knowledge Proofs (ZKPs) are particularly important for AI and Web3 as they improve scaling and privacy. They allow computations to be performed off-chain and then verified on-chain (verified compute), which is much more efficient than re-executing computations on all the nodes of a blockchain, thereby alleviating the load on the network and supporting more complex operations. zkML can thus enable AI models to operate in an on-chain environment, ensuring that the output of these off-chain computations is both trustworthy and verified.

Additionally, zkML can verify specific aspects of machine learning processes, such as confirming that a particular model was used for making predictions or that a model was trained on a specific dataset. zkML can also be used to verify computational processes. For example, it allows compute providers to demonstrate, via verifiable proof, that they have processed data using the correct model. This is especially relevant for developers who rely on decentralized compute providers (such as Akash) and want assurance about the accuracy and integrity of the computations.

zkML is also useful for users who need to run models on their data but wish to keep that data private. They can execute the model on their data, generate a proof, and subsequently verify that the correct model was used, without compromising data privacy.
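One reason zkML is computationally demanding is that proving systems work over finite fields, so a model must first be arithmetized: floating-point weights are quantized to fixed-point integers and the forward pass is evaluated with modular arithmetic, as a circuit would. The toy single-layer example below shows just that quantization step; the scale factor and modulus are illustrative, and actual proof generation, the expensive part, is omitted:

```python
# Toy arithmetization of a linear layer, as zkML systems do before proving.
# Field modulus and scale factor are illustrative.

P     = 2**61 - 1   # toy field modulus (a Mersenne prime)
SCALE = 1 << 16     # fixed-point scale factor

def quantize(x: float) -> int:
    # Negative values become large field elements (two's-complement style).
    return int(round(x * SCALE)) % P

def field_linear(weights: list[int], inputs: list[int], bias: int) -> int:
    """y = w . x + b, evaluated entirely in the field (as a circuit would)."""
    acc = bias * SCALE                 # align scales: each w*x carries SCALE^2
    for w, x in zip(weights, inputs):
        acc = (acc + w * x) % P
    return acc

w, x, b = [0.5, -1.25], [2.0, 0.8], 0.1
out = field_linear([quantize(v) for v in w], [quantize(v) for v in x],
                   quantize(b))
# This demo's result is positive, so plain division decodes it correctly.
print(out / SCALE**2)   # ~0.1  (0.5*2.0 - 1.25*0.8 + 0.1)
```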

FHE

As discussed previously, fully homomorphic encryption (FHE) allows computations to be performed directly on encrypted data without first decrypting it. This technology has significant applications in the field of AI, particularly in the context of machine learning and the handling of sensitive data.
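Production FHE schemes such as CKKS or TFHE support arbitrary computation on ciphertexts. As a small runnable illustration of the core idea only, the sketch below implements a toy Paillier cryptosystem, which is additively (not fully) homomorphic, with insecurely small parameters: the sum of two plaintexts is computed without ever decrypting the inputs.

```python
# Toy Paillier scheme: additively homomorphic, NOT fully homomorphic and NOT
# secure at these parameter sizes. Purely a demonstration of computing on
# ciphertexts without decrypting them.

import math, random

p, q = 2357, 2551                      # toy primes: far too small for security
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)         # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(41), encrypt(1)
c_sum = (c1 * c2) % n2                 # homomorphic addition on ciphertexts
print(decrypt(c_sum))                  # 42, computed without decrypting inputs
```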

One of the primary applications of FHE is in the training of machine learning models using encrypted data sets. This approach ensures that the data remains encrypted and secure throughout the entire training process. As a result, FHE provides a comprehensive security solution that maintains data privacy from the beginning to the end of the machine learning pipeline. This is especially crucial in edge networks, where data security and privacy are paramount, and the computational resources are typically more limited compared to centralized data centers.

The utilization of FHE allows for the development of AI systems that preserve user privacy while still leveraging the advanced capabilities of AI. By ensuring that the data remains encrypted during both storage and processing, FHE offers robust cryptographic assurances against unauthorized access and data breaches. This is particularly relevant in scenarios where sensitive information is being processed, such as personal data in healthcare applications or confidential financial records.

FHE extends its utility to the operation of large language models in cloud environments. By enabling these models to process encrypted data, FHE ensures that user privacy and sensitive information are protected. This capability is increasingly important as more AI applications are deployed in cloud environments, where data security is a major concern. The ability to run models securely on encrypted data enhances the applicability of AI in fields that require strict confidentiality, such as legal, healthcare, and finance sectors.

FHE addresses the critical need to protect against potential information leaks and unauthorized access to sensitive data. In sectors where data privacy is not just a preference but a regulatory requirement, FHE provides a way to leverage AI's power without compromising on data security and compliance standards.

TEE

Trusted Execution Environments (TEEs) offer significant advantages for AI training and inference, particularly in terms of security assurances, isolation, and data privacy and protection. Acting as secure enclaves, TEEs provide robust security and integrity for data and computations.

The first major benefit is enhanced security assurance. TEEs are specifically designed to counter vulnerabilities in systems with extensive Trusted Computing Bases (TCBs), which include the OS kernel, device drivers, and libraries. These components, due to their large attack surface, are more susceptible to exploits. TEEs, by offering a secure execution environment, protect critical applications even if the host operating system is compromised, maintaining the integrity and confidentiality of software within the enclave.

Another key advantage is isolation. Within enclaves, code and data are securely stored, and access is restricted solely to the code within the enclave. This design prevents external access, whether from other virtual machines or the hypervisor, and safeguards against physical attacks.

TEEs also provide remote attestation, which makes it possible to verify that software is executing inside a genuine TEE. This feature is crucial for ensuring the authenticity and integrity of the software running within the enclave. It enables the establishment of trust between remote entities and the TEE, assuring that the software and its execution environment are secure and have not been tampered with.
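Schematically, attestation verification reduces to two checks: is the quote signed by genuine hardware, and does the reported measurement match the code we expect? The sketch below simulates the vendor's signature with an HMAC and uses made-up field names; real quotes (SGX, Nitro Enclaves) are signed via vendor-rooted certificate chains:

```python
# Schematic of remote-attestation verification. The vendor signature is
# simulated with an HMAC; field names are illustrative, not a real format.

import hashlib, hmac, json

VENDOR_KEY = b"stand-in for the vendor's attestation key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited enclave binary v1.0").hexdigest()

def make_quote(binary: bytes, report_data: str) -> dict:
    """What the TEE hardware would produce for a loaded enclave."""
    body = {"measurement": hashlib.sha256(binary).hexdigest(),
            "report_data": report_data}
    sig = hmac.new(VENDOR_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**body, "signature": sig}

def verify_quote(quote: dict) -> bool:
    body = {k: quote[k] for k in ("measurement", "report_data")}
    expected_sig = hmac.new(VENDOR_KEY, json.dumps(body, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected_sig, quote["signature"])  # authentic?
            and quote["measurement"] == EXPECTED_MEASUREMENT)      # right code?

quote = make_quote(b"audited enclave binary v1.0", "enclave-pubkey-hash")
print(verify_quote(quote))                                # True
print(verify_quote(make_quote(b"tampered binary", "x")))  # False: wrong code
```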

Lastly, TEEs excel in data protection. The hardware-implemented security properties of a TEE safeguard the confidentiality and integrity of computations. This includes secure provisioning of code and data, such as cryptographic keys, into the enclave. TEEs also establish trusted communication channels for retrieving computing results and outputs, ensuring data remains secure throughout its lifecycle within the enclave. These features make TEEs an ideal environment for training AI and performing AI inference, particularly in applications requiring high levels of security and data integrity.

Marlin Oyster

Marlin Oyster is an open platform for developers to deploy custom computation tasks or services over TEEs. As with Intel's SGX and AWS Nitro Enclaves, Oyster lets developers execute code in isolation, ensuring that neither the host nor any other application on it can compromise the integrity of computations within the TEE. Apart from the computational integrity and confidentiality guarantees offered by TEEs, the Oyster platform provides additional benefits:

1. Uptime: Oyster ensures application availability through a monitoring protocol that penalizes nodes for downtime and reassigns tasks to operational nodes. This mechanism guarantees developers deploying on Oyster continuous app functionality and liveness for their end-users.

2. Serverless: Similar to AWS Lambda, Oyster's serverless framework allows developers to deploy applications without committing to renting a specific node. Developers benefit from cost savings and reduced management overhead by paying only for the runtime of their applications.

3. Networking: Oyster enclaves come pre-equipped with networking capabilities, facilitating secure TLS connections within the enclave. This feature enables the execution of external API queries and the operation of services with exposed endpoints, enhancing application integration with the internet.

4. Relay: Oyster supports offloading computationally intensive tasks to off-chain environments through relay contracts. These smart contracts enable the execution of functions on Oyster, ensuring reliable outcomes and event-based responses, thus optimizing on-chain resource usage; a minimal sketch of this pattern follows.
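The relay pattern itself is simple to sketch: a contract records requests, an off-chain executor (here a plain function standing in for an enclave) computes results, and the contract accepts only results carrying a valid attestation. All names below are illustrative and not Oyster's actual interface:

```python
# Schematic of the relay pattern. All names are illustrative; Oyster's actual
# relay interface may differ. The attestation check is simulated with a hash.

import hashlib

class RelayContract:
    """Stand-in for an on-chain contract that queues jobs and accepts results."""
    def __init__(self):
        self.jobs, self.results = {}, {}

    def request(self, job_id: str, payload: str) -> None:
        self.jobs[job_id] = payload      # would be emitted as an event on-chain

    def fulfill(self, job_id: str, result: str, attestation: str) -> None:
        # Accept only results carrying a valid attestation over the result.
        expected = hashlib.sha256(f"enclave|{result}".encode()).hexdigest()
        assert attestation == expected, "bad attestation"
        self.results[job_id] = result

def offchain_executor(contract: RelayContract) -> None:
    """Stand-in for an enclave picking up jobs and posting attested results."""
    for job_id, payload in list(contract.jobs.items()):
        result = payload.upper()         # the 'heavy' off-chain computation
        att = hashlib.sha256(f"enclave|{result}".encode()).hexdigest()
        contract.fulfill(job_id, result, att)

relay = RelayContract()
relay.request("job-1", "summarize this document")
offchain_executor(relay)
print(relay.results["job-1"])
```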

Benchmarks

In the benchmarking comparison between zkML frameworks and teeML (Oyster), the performance metrics suggest that Oyster operates with greater efficiency. Specifically, the Oyster framework demonstrates significantly lower total computation times across all machine learning models tested.

zkML vs teeML

For the ordinary least squares model on Iris data, the zkML framework (RisQ) required over 32 seconds for proving and verification, whereas Oyster completed the task in just 0.047 seconds. Similarly, for the neural network on the same dataset, zkML (EZKL framework) had a total time of over 212 seconds for 500 inputs, in contrast to Oyster's 0.045 seconds. This substantial difference in processing time indicates that Oyster is vastly more efficient in these instances.

The LeNet models on MNIST data further solidify this observation. EZKL's zkML framework had proving and verification times amounting to 60 seconds, while Oyster required only 0.056 seconds. Even DDKang's zkML framework, which performed better than EZKL with a total time of approximately 3.33 seconds, was still outperformed by Oyster's 0.056 seconds.

Overall, the data reflects that Oyster offers a more efficient solution for machine learning tasks compared to the zkML frameworks tested. Its quicker computation times suggest that for the benchmarks provided, Oyster can handle the same tasks with significantly less processing time, making it preferable in terms of efficiency and speed.

For the widespread adoption of verifiable, decentralized AI, off-chain cryptographic verification systems must evolve beyond executing simple tasks like ordinary least squares calculations. The critical advancement required is the capability to handle more complex workloads, specifically, to efficiently run prompts through popular LLMs. This necessitates improvements in computational power, algorithmic efficiency, and scalability to handle the sophisticated and resource-intensive demands of modern LLMs, thereby enabling more complex and diverse AI applications within a decentralized framework. zkML frameworks are still in their infancy, and at this stage their ability to process such prompts is heavily impaired by the computational cost of generating ZK proofs. It is hoped that ZK proof marketplaces like Kalypso will make zkML more affordable over time.

Although there have yet to be demonstrations of zkML protocols processing prompts for LLMs, it is reasonable to assume that the difference in processing times between Oyster's TEE and these zkML frameworks is at least as significant as in the examples discussed above. Using Marlin's Oyster, benchmark results for various LLMs can be established:

New LLMs on Oyster

GPT2-XL Benchmarks Inside Oyster:

Enclave config: 12cpu 28gb memory (c6a.4xlarge)

Prompt: Ethereum is the community-run technology

Result: 'Ethereum is the community-run technology that enables the internet to function. In the same way that Linux did for computing, Ethereum will empower the internet to function in the future.'

Time taken to generate output: 22.09 sec

Tokens per second: 1.63

Enclave config: 16cpu 30gb memory (c6a.8xlarge)

Prompt: Ethereum is the community-run technology

Result: 'Ethereum is the community-run technology which enables the decentralized application or smart contract to be built. The platform operates with no central authority or CEO. Instead, people working together issue and use smart contracts: apps that run independently on a large number of'

Time taken to generate output: 11.62 sec

Tokens per second: 4.30
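For reference, a minimal script along the following lines can reproduce this kind of measurement outside an enclave, assuming the Hugging Face transformers GPT-2 XL checkpoint; exact figures will naturally differ from the Oyster numbers above:

```python
# Minimal sketch of a token-throughput benchmark, assuming the Hugging Face
# `transformers` GPT-2 XL checkpoint. Measures wall-clock generation time.

import time
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

prompt = "Ethereum is the community-run technology"
inputs = tok(prompt, return_tensors="pt")

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=50, do_sample=True)
elapsed = time.perf_counter() - start

new_tokens = output.shape[1] - inputs["input_ids"].shape[1]
print(tok.decode(output[0], skip_special_tokens=True))
print(f"{elapsed:.2f} s, {new_tokens / elapsed:.2f} tokens/s")
```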

Conclusion

The development and distribution of AI technologies are increasingly dominated by a select group of major corporations, possessing advanced hardware and sophisticated models. This concentration has raised concerns regarding censorship, inherent biases, and the challenge of verifying the integrity and fairness of AI systems. In contrast, the foundational principles of crypto—namely, permissionlessness and resistance to censorship—offer a pathway to democratize access to AI technologies. 

The decentralized and open-source nature of blockchain technology enables a competitive landscape where decentralized AI can rival its centralized counterparts. This is facilitated through mechanisms such as DePINs, cryptographic proofs, and the use of public-private key pairs, which collectively ensure secure, transparent, and equitable AI development and usage. For decentralized AI to achieve its full potential, especially within blockchain ecosystems, it requires a robust infrastructure for off-chain computation. This is critical for processing complex AI tasks efficiently, accurately, and verifiably.

Currently, TEEs emerge as the most viable solution for this requirement. TEEs provide a secure and isolated execution space for code, safeguarding the confidentiality and integrity of the data being processed. This makes them an optimal choice for the off-chain computations necessary for AI applications on the blockchain. As the field evolves, the advancement of technologies like zkML and FHE and the enhancement of TEEs will be crucial for the decentralized AI ecosystem to overcome current limitations. This progress will facilitate a more open, accessible, and secure AI landscape, aligning with the decentralization ethos of the crypto community.

Follow our official social media channels to get the latest updates as and when they come out!

Twitter | Telegram Announcements | Telegram Chat | Discord | Website
