
Case Study: Verida’s use of Marlin to compute on personal data securely in TEEs

As artificial intelligence (AI) and decentralized applications (dApps) continue to evolve, they have the potential to enable users to delegate decision-making and automate tasks by securely leveraging their private data. In the future, AI-powered financial assistants could manage expenses, healthcare agents might monitor health records for early risk detection, and identity services could streamline verification, all while ensuring that users retain control over their sensitive data. For this vision to become a reality, AI agents must be able to access and compute private data securely and verifiably, without exposing it to third parties.

However, this presents key challenges:

  • Control & Privacy: Users want AI agents to act on their behalf in a predictable and trustworthy manner, ensuring they only perform expected actions without the risk of unauthorized or malicious behavior, all while keeping sensitive data private from third parties.
  • Security Risks: Traditional centralized systems store user data in ways that leave it vulnerable to breaches and unauthorized access.
  • Developer Complexity: Decentralized approaches for secure data processing vastly differ from conventional development workflows, making them challenging for developers to adopt. The lack of familiar tooling and the complexity of cryptographic implementations often result in steep learning curves and suboptimal UX, making integration with AI-powered automation far from seamless.

Verida acts as a confidential database storage and compute layer for the self-sovereign data economy, enabling users to store and access private data securely. But storing data alone isn’t enough—users also need a way to enable AI agents to process this data and make decisions on their behalf without exposing it.

To bridge this gap, Verida leverages Trusted Execution Environments (TEEs) through Marlin, allowing AI agents to compute over private data in a secure, verifiable environment. This ensures that users retain complete control over their information while benefiting from AI-driven automation and protecting their sensitive data from unauthorized access.

Enter Marlin TEEs: Enabling Secure Private Compute

Marlin is a decentralized confidential compute marketplace that provides Trusted Execution Environments (TEEs)—hardware-isolated execution areas that ensure data remains private, even during computation.

By leveraging Marlin’s TEEs, Verida achieves three critical breakthroughs:

  1. Secure APIs: Preventing unauthorized access by binding APIs to a DNS-locked TEE enclave.
  2. Secure Data Access: Extracting user data from third-party APIs without exposing credentials.
  3. Private Data Compute: Enabling AI models to process private data securely inside a TEE.

The following sections break down each of these, demonstrating how Verida successfully implemented them.

1. Secure APIs: Trustless API Authentication with TEEs & DNS Locking

Traditional APIs rely on TLS encryption to protect data in transit, but TLS alone doesn’t guarantee that the backend processing the requests is trustworthy. If a malicious actor gains control over the server, they could intercept traffic, alter API responses, or redirect requests to unauthorized backends. Standard TLS certificates are bound only to domains, not to the specific code running on the servers behind them, leaving users vulnerable if the server’s code is compromised or altered.

Verida uses Marlin TEEs and a secure DNS-locked enclave to ensure:

  • The API is only served by a TEE enclave with specific code
  • TLS connections are denied if the enclave changes
  • All requests are verifiably secure against interception
  • No third party, not even Verida or Marlin, can access user requests or data

With Verida, both developers and users can trust that API requests are processed within a secure and unmodified backend, without requiring complex cryptographic setups. Developers can integrate Verida seamlessly, while users can confidently grant access to third-party services, knowing their data remains protected from unauthorized modifications or misuse.

How it Works: Secure APIs

When the API server starts, it exposes an HTTPS endpoint managed by an ACME client running inside a Marlin TEE. To ensure that only the enclave can obtain a valid TLS certificate, CAA accounturi records are used. CAA records act as a lock on certificate issuance, restricting which Certificate Authorities (CAs) are allowed to issue certificates for a domain. With the accounturi extension, this restriction goes further by specifying exactly which ACME account is authorized to request and obtain certificates. Because the ACME account is generated and managed within the enclave, only the enclave can acquire the certificate, preventing unauthorized issuance.
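
As an illustration of what this DNS lock looks like in practice, the sketch below queries a domain’s CAA records and checks for an accounturi restriction using Node’s built-in dns module. The domain name and ACME account URL are hypothetical placeholders, and the snippet is generic tooling rather than anything Verida- or Marlin-specific.

```typescript
// check-caa.ts — minimal sketch: verify that a domain's CAA records restrict
// certificate issuance to a single ACME account (RFC 8657 accounturi).
// The domain and account URL below are hypothetical placeholders.
import { resolveCaa } from "node:dns/promises";

const DOMAIN = "api.example-enclave.network"; // placeholder domain
const EXPECTED_ACCOUNT =
  "https://acme-v02.api.letsencrypt.org/acme/acct/1234567"; // placeholder ACME account URL

async function checkCaaLock(): Promise<void> {
  // A locked zone would contain a record equivalent to:
  //   api.example-enclave.network. IN CAA 0 issue "letsencrypt.org; accounturi=<account URL>"
  const records = await resolveCaa(DOMAIN);

  const locked = records.some((r) =>
    r.issue?.includes(`accounturi=${EXPECTED_ACCOUNT}`)
  );

  console.log(
    locked
      ? "CAA lock present: only the enclave's ACME account can obtain certificates."
      : "No accounturi lock found: issuance is not restricted to the enclave."
  );
}

checkCaaLock().catch(console.error);
```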

The process works as follows:

  • Enclave-Based TLS Certificate Generation:
    • The ACME client inside the enclave generates a public-private key pair and uses it to request a TLS certificate from a Certificate Authority (CA).
    • The private key remains sealed inside the enclave, ensuring it cannot be accessed or extracted externally.
  • Binding the Certificate to the Enclave via DNS Locking:
    • A CAA (Certification Authority Authorization) accounturi record is added to the DNS.
    • This restricts certificate issuance to the ACME account controlled by the enclave.
  • Ensuring End-to-End API Security:
    • When a user or third-party service connects to the API, their client verifies the TLS certificate.
    • Since the certificate is cryptographically bound to the enclave, the connection is guaranteed to be securely terminated within a verifiable TEE.
    • If the enclave’s code is modified, the enclave must be restarted, and the private key (along with the issued TLS certificate) is permanently lost, so any unauthorized modification makes it impossible to establish a valid TLS connection.
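
As a rough sketch of the issuance step described above, the snippet below uses the open-source acme-client Node package to create an ACME account key, generate a certificate key pair and CSR, and request a certificate. It is a simplified stand-in for whatever ACME tooling actually runs inside the enclave: the domain, contact email, and challenge handlers are placeholders, and in the real deployment the keys never leave the TEE.

```typescript
// enclave-acme.ts — minimal sketch of certificate issuance from inside an enclave,
// using the open-source acme-client package (not Verida/Marlin-specific tooling).
import * as acme from "acme-client";

async function issueCertificate(): Promise<void> {
  // The ACME account key never leaves the enclave; CAA accounturi records bind
  // issuance for the domain to this specific account.
  const client = new acme.Client({
    directoryUrl: acme.directory.letsencrypt.production,
    accountKey: await acme.crypto.createPrivateKey(),
  });

  // Key pair and CSR for the API domain (placeholder name).
  const [certificateKey, csr] = await acme.crypto.createCsr({
    commonName: "api.example-enclave.network",
  });

  // client.auto() handles account registration, the ACME challenge, and issuance.
  // The challenge callbacks are placeholders; a real enclave would serve the
  // HTTP-01 token from its own endpoint.
  const certificate = await client.auto({
    csr,
    email: "ops@example-enclave.network", // placeholder contact
    termsOfServiceAgreed: true,
    challengePriority: ["http-01"],
    challengeCreateFn: async (_authz: unknown, _challenge: unknown, _keyAuth: string) => {
      /* expose the challenge token */
    },
    challengeRemoveFn: async (_authz: unknown, _challenge: unknown, _keyAuth: string) => {
      /* clean up the challenge token */
    },
  });

  // certificateKey stays in enclave memory only; if the enclave restarts with
  // different code, both the key and the certificate are lost.
  startTlsServer(certificate, certificateKey);
}

// Placeholder for the enclave's HTTPS server bootstrap (hypothetical).
function startTlsServer(certificatePem: string, privateKey: Buffer): void {
  console.log(`Serving HTTPS with an enclave-held key (${certificatePem.length}-byte cert, ${privateKey.length}-byte key).`);
}

issueCertificate().catch(console.error);
```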

By enforcing TEE-bound certificate issuance, Verida ensures that API requests are always processed within a trusted enclave, preventing MITM (Man-in-the-Middle) attacks, API response manipulation, and credential hijacking.

[Figure: Verida Marlin Secure APIs]

A detailed explanation of the approach is available at https://blog.marlin.org/case-study-oyster-tee-3dns-on-chain-management-of-verifiable-decentralized-frontends

2. Secure Data Access: Credential Protection for External APIs

As a decentralized data layer, Verida enables applications to access user data from third-party services (e.g., social accounts, financial accounts, and healthcare records) in a secure and privacy-preserving manner. Instead of requiring users to manually retrieve and upload their data, Verida allows users to seamlessly grant access while ensuring their credentials remain protected and data retrieval happens within a verifiable, trusted execution environment (TEE).

To ensure that credentials are handled safely and data is accessed only as intended, Verida leverages Marlin TEEs, preventing unauthorized access, misuse, or overreach. Without such safeguards, traditional approaches introduce several risks:

  • Credential Exposure: If credentials are stored insecurely or transmitted incorrectly, they can be stolen and misused to exfiltrate user data, send emails or messages, or create social media posts on the user’s behalf.
  • Over-fetching of Data: APIs often return more information than required, increasing privacy risks.
  • Lack of Usage Control: Once credentials are shared, there’s no way to restrict how they are used, leading to potential misuse.

How It Works: TEE-Based Credential Protection

To mitigate these risks, Verida uses Marlin TEEs to act as a secure execution environment where user credentials are processed without exposing them to the application, developers, or even Verida itself.

The process works as follows:

  • User Provides Credentials Securely:
    • The user completes the sign-in process for a third-party application (e.g., Google, Telegram) to generate their credentials.
    • These credentials are returned to Verida’s data connector service running within a Marlin TEE, which encrypts the credentials and stores them in the user’s private database storage on the Verida network.
  • TEE Handles API Authentication & Data Fetching:
    • Inside the enclave, the credentials are decrypted and used only for their intended purpose.
    • The TEE makes a request to the external API on behalf of the user.
    • Since the execution happens inside a verifiable TEE, there is no way for credentials to be leaked or misused.
    • Fetched user data is encrypted within the TEE and stored in the user’s private database storage on the Verida network.
  • Enforced Data Minimization & Secure Response Handling:
    • Third-party apps can request access to user data; when the user grants access, the app receives an API key.
    • Third-party apps can then make API requests that are processed inside the enclave, ensuring that only the necessary data is returned.
    • Unwanted or excess data is discarded within the TEE, so only the specific, user-consented information is returned to the application.
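
The sketch below outlines this connector flow for a single request. Every helper in it (decryptInEnclave, fetchFromProvider, encryptForUser, storeInVeridaDb) is a hypothetical placeholder rather than a real Verida API; the point is the shape of the flow: credentials are decrypted only inside the enclave, the external API is called from there, the response is trimmed to the user-consented fields, and only encrypted data leaves.

```typescript
// connector-flow.ts — illustrative sketch of the enclave-side data flow.
// Every helper here (decryptInEnclave, fetchFromProvider, encryptForUser,
// storeInVeridaDb) is a hypothetical placeholder, not a real Verida API.

interface ConnectorRequest {
  userId: string;
  provider: "google" | "telegram";
  allowedFields: string[];          // fields the user consented to share
  encryptedCredentials: Uint8Array; // pulled from the user's private storage
}

async function runConnector(req: ConnectorRequest): Promise<void> {
  // 1. Credentials are decrypted only inside the TEE.
  const credentials = await decryptInEnclave(req.encryptedCredentials);

  // 2. The enclave calls the third-party API on the user's behalf.
  const rawResponse = await fetchFromProvider(req.provider, credentials);

  // 3. Data minimization: keep only the fields the user consented to.
  const minimized = Object.fromEntries(
    Object.entries(rawResponse).filter(([key]) => req.allowedFields.includes(key))
  );

  // 4. Results are re-encrypted inside the TEE and written to the user's
  //    private database storage; plaintext never leaves the enclave.
  const ciphertext = await encryptForUser(req.userId, minimized);
  await storeInVeridaDb(req.userId, ciphertext);
}

// --- hypothetical helper stubs (assumed, for illustration only) ---
async function decryptInEnclave(blob: Uint8Array): Promise<string> {
  return new TextDecoder().decode(blob);
}
async function fetchFromProvider(provider: string, creds: string): Promise<Record<string, unknown>> {
  // Stand-in for a real API call made with the decrypted credentials.
  return { email: "user@example.com", contacts: [], adInterests: [] };
}
async function encryptForUser(userId: string, data: unknown): Promise<Uint8Array> {
  return new TextEncoder().encode(JSON.stringify(data));
}
async function storeInVeridaDb(userId: string, blob: Uint8Array): Promise<void> {
  // Stand-in for a write to the user's encrypted database storage.
}
```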

3. Private Data Compute: AI on Private Data Without Exposure

AI agents are becoming essential for automating tasks and making personalized decisions on behalf of users, whether it’s managing financial transactions, monitoring health records, or verifying identities. However, for these agents to be truly useful, they need secure access to user data while ensuring that users retain complete control over what is shared, with whom, and for what purpose.

As a decentralized data layer, Verida ensures that AI agents can access only the data they need to make informed decisions, without exposing raw user information to external parties. By leveraging Oyster TEEs, Verida enables private, verifiable data access that lets AI agents function effectively while preserving user privacy and security.

How It Works: TEE-Based Secure Data Access for AI Agents

[Figure: Verida Marlin Secure Data Access]

  • User Grants AI Agent Access to Their Data
      • Users authorize specific AI agents to access their Verida-stored data under defined conditions.
      • Instead of sending raw data to the AI agent, Verida enables the agent to request only the necessary insights via a secure enclave.
  • TEE Ensures Data Privacy & Selective Access
      • Data is processed inside a Trusted Execution Environment (TEE), ensuring that AI agents retrieve only the specific data they are permitted to access, preventing overreach.
      • The AI agent never sees raw user data; it only receives pre-processed insights or structured outputs necessary for decision-making.
  • Privacy-Preserving AI Decision-Making
      • The AI agent uses these securely retrieved insights to make real-time, automated decisions on behalf of the user, without ever storing or transmitting raw user information externally.
      • This keeps AI-driven automation privacy-preserving, allowing users to benefit from intelligent automation while maintaining complete control over their data.
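
To make this concrete, the sketch below shows what a scoped insight request from an agent might look like. The endpoint URL, scope name, and response shape are hypothetical placeholders, not the actual Verida API; the point is that the agent sends a narrow, user-authorized query to the TEE-backed endpoint and receives only a derived insight, never the underlying records.

```typescript
// agent-query.ts — illustrative sketch of an AI agent requesting a derived
// insight from a TEE-backed endpoint. The URL, scope, and response shape are
// hypothetical placeholders, not the real Verida API.

interface SpendingInsight {
  month: string;
  totalSpend: number;
  overBudget: boolean;
}

async function getSpendingInsight(apiKey: string): Promise<SpendingInsight> {
  // The agent asks a narrow, user-authorized question; raw transactions
  // stay inside the enclave and are never returned.
  const response = await fetch(
    "https://api.example-enclave.network/v1/insights/spending?month=2025-01", // placeholder endpoint
    {
      headers: {
        Authorization: `Bearer ${apiKey}`,                 // API key granted by the user
        "X-Requested-Scope": "finance:spending-summary",   // hypothetical scope name
      },
    }
  );

  if (!response.ok) {
    throw new Error(`Insight request rejected: ${response.status}`);
  }

  // Only the pre-processed, consented summary comes back to the agent.
  return (await response.json()) as SpendingInsight;
}

// Example: the agent decides whether to alert the user, without ever
// having seen individual transactions.
getSpendingInsight("USER_GRANTED_API_KEY").then((insight) => {
  if (insight.overBudget) {
    console.log(`Heads up: ${insight.month} spending reached ${insight.totalSpend}.`);
  }
});
```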

Why This Matters

  • Users Stay in Control: AI agents access only what is necessary, ensuring no over-fetching or data leaks.
  • Privacy-Preserving AI Automation: AI agents make decisions without ever seeing raw user data, preventing unauthorized exposure.
  • Secure Execution via TEEs: Ensures data queries are restricted, and AI agents operate only within user-defined permissions.

By leveraging TEEs and Verida’s decentralized data layer, AI agents can enhance user decision-making and automation without compromising security, privacy, or data ownership.

Developers can get started with the Verida AI Developer Console, the central hub for managing your Verida developer account, monitoring your application usage, and interacting with the Verida API ecosystem.

Agent Kyra is a personal AI assistant powered by Verida AI and Marlin technology, showcasing a new class of private AI applications. Securely connect your personal Gmail, Calendar, YouTube, Telegram, and Spotify accounts to receive private, personal intelligence insights on demand.

Conclusion

Privacy is no longer an afterthought; it’s a fundamental requirement. Verida’s approach demonstrates how decentralized applications can securely interact with private data without compromising usability or security.
