Oyster Serverless is a cutting-edge, high-performance serverless computing platform designed to securely execute JavaScript (JS) and WebAssembly (WASM) code in a highly controlled environment. Built in Rust with the Actix Web framework, Oyster Serverless leverages the power and security of AWS Nitro Enclaves, the Cloudflare workerd runtime, and cgroups to provide strong isolation and protection for the executed code.
This innovative solution addresses the growing demand for secure and efficient serverless code execution in various industries, such as finance, healthcare, and government, where data protection and code integrity are paramount.
The Oyster serverless platform is designed to maximize the benefits of serverless architecture while mitigating security risks associated with running untrusted code. By employing the Rust language and Actix Web framework, Oyster serverless provides strong memory safety guarantees and efficient resource utilization. The use of AWS Nitro Enclaves offers a secure, isolated environment for executing code. Finally, cgroups ensure strict control over memory and CPU usage, preventing unauthorized resource access and safeguarding against potential vulnerabilities.
In summary, the Oyster Serverless platform represents a significant leap forward in secure serverless computing, offering a robust and scalable solution for running JS and WASM code securely and efficiently.
Stakeholders
Operators: Entities that manage and maintain load balancers and serverless instances, ensuring high availability, scalability, and performance.
Users: Individuals or organizations that send requests to the serverless infrastructure, providing input and identifiers for the desired backend code to be executed.
Functional Requirements
Backend Code Management:
Design and implement a system for storing, retrieving, and updating backend code in calldata or a similar storage solution.
Ensure secure and efficient access to the backend code by authorized serverless instances.
Load Balancing:
Develop a load balancing system capable of efficiently distributing user requests among available serverless instances.
Monitor and optimize load balancer performance to ensure low latency and high availability.
Serverless Instance Management:
Deploy and manage EC2 instances that host secure enclaves for executing user-requested code.
Enclave Execution Environment:
Develop and maintain secure enclaves for executing serverless applications in a protected environment.
Ensure strict isolation between enclaves to prevent unauthorized access to data or resources.
Security:
Implement security best practices to protect the serverless infrastructure, user-requested code execution, and data handling.
Restrict each request to a limited amount of memory and CPU to optimize resource usage within the enclave.
Monitoring and Logging:
Set up monitoring and logging systems to track the performance, security, and health of the serverless infrastructure.
Implement alerting mechanisms to notify relevant stakeholders of potential issues.
Non-Functional Requirements
Performance:
Low latency and high throughput for processing user requests.
Efficient use of system resources, such as CPU, memory, and network bandwidth.
Scalability:
Ability to handle a large number of concurrent user requests without compromising performance.
Seamless scaling of serverless instances and worker processes based on demand.
Reliability:
High availability of serverless infrastructure to minimize downtime and ensure consistent user experience.
Fault tolerance and redundancy to protect against infrastructure failures.
Usability:
Clear and concise error messages and logging for easier troubleshooting.
Intuitive APIs and interfaces for interacting with the serverless infrastructure.
Architecture
The Oyster serverless architecture consists of several interconnected components, including a storage server, a load balancer, EC2 enclaves, proxies, and the serverless application itself. The system leverages AWS Nitro Enclaves to securely run the serverless application and the workerd runtime, while utilizing cgroups for resource management. By distributing the load among available serverless enclave instances and using resource constraints, the platform ensures optimal performance and minimizes the impact on the underlying infrastructure. This architecture enables a seamless and secure flow of requests and responses between the user and the serverless platform, ultimately delivering a robust and reliable serverless computing experience.
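To make the request path concrete, the sketch below shows what the serverless application's Actix Web entry point could look like. It is only a sketch: the route path, JSON field names, and response body are assumptions for illustration, not the actual API (in the current setup the code identifier is a transaction hash, as described in the note further below).

```rust
// Illustrative sketch of an Actix Web entry point; the route and field names are
// assumptions, not the real Oyster Serverless API. Assumes the actix-web,
// serde (with the "derive" feature), and serde_json crates.
use actix_web::{post, web, App, HttpResponse, HttpServer, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
struct ServerlessRequest {
    // Identifier for the stored backend code (currently a transaction hash).
    tx_hash: String,
    // Arbitrary user input that is forwarded to the workerd runtime.
    input: serde_json::Value,
}

#[post("/serverless")]
async fn serverless(req: web::Json<ServerlessRequest>) -> impl Responder {
    // A real handler would: fetch the JS code for `tx_hash`, generate a workerd
    // config, start workerd inside a cgroup, forward `input`, and relay the
    // worker's response back to the caller.
    HttpResponse::Ok().body(format!("would execute code for {}", req.tx_hash))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(serverless))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```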
Components
Storage Server: A key-value store that saves uploaded JavaScript code as a value, paired with a uniquely generated key for each upload.
Load Balancer: Handles incoming user requests and distributes the load among available serverless enclave instances based on their current status.
EC2 Enclave: Hosts the serverless application inside a secure enclave.
Caddy Reverse Proxy and IP-to-VSOCK Proxy: Enable communication between the load balancer and the serverless application inside the enclave through a VSOCK socket connection.
VSOCK-to-IP Proxy: Redirects requests inside the enclave to the serverless HTTP application.
Serverless Application: Processes incoming requests, fetches the corresponding JavaScript code, and manages the workerd runtime execution.
Workerd Runtime: Executes the JavaScript code inside the enclave with the allocated resources and constraints (a sketch of its generated configuration follows this list).
cgroups: Resource management feature that enforces memory and CPU usage limits for the workerd runtime.
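The workerd runtime above is driven by a Cap'n Proto (capnp) configuration file that the serverless application generates per request from the JS file name and a free port (see the flow in the next section). The sketch below shows how such a config could be templated in Rust; it is modeled on workerd's published sample configs, and the exact fields and values Oyster Serverless uses may differ.

```rust
/// Render a workerd Cap'n Proto config for a given JS file and port.
/// Sketch only: modeled on workerd's sample configs, not the exact template
/// used by Oyster Serverless.
fn workerd_config(js_file: &str, port: u16) -> String {
    format!(
        r#"using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "main", worker = .mainWorker) ],
  sockets = [ (name = "http", address = "*:{port}", http = (), service = "main") ],
);

const mainWorker :Workerd.Worker = (
  serviceWorkerScript = embed "{js_file}",
  compatibilityDate = "2023-03-07",
);
"#
    )
}

fn main() -> std::io::Result<()> {
    // Write the config next to the downloaded JS file so the `embed` directive
    // can resolve it by relative path; the port is one found free in the enclave.
    std::fs::write("worker-abc123.capnp", workerd_config("worker-abc123.js", 8081))?;
    Ok(())
}
```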
High-level serverless platform architecture diagram
Serverless platform flow
User uploads the JavaScript code to the storage server, which stores it against a unique identifier.
User sends an HTTP request containing the unique identifier for the code along with other inputs to the load balancer in the JSON body.
Load balancer checks the status of all running workerd processes and redirects the request to the EC2 enclave with the fewest of them.
The request is forwarded to the serverless application inside the enclave via Caddy Reverse Proxy and IP-to-VSOCK Proxy.
Inside the enclave, the request is redirected to the serverless HTTP application using a VSOCK-to-IP Proxy.
The serverless application fetches the JavaScript code using the unique identifier and generates a JS file with a unique name.
A free port is found inside the enclave to run the workerd runtime.
A configuration file is generated using the JS file name and the free port.
A free cgroup with memory and CPU usage limits is selected.
The serverless application starts executing the workerd runtime inside the selected cgroup, using the generated capnp configuration file and the downloaded JS file (a sketch of this step follows the flow).
An HTTP request is made to the workerd runtime, including any input provided by the user in the original request.
Once a response is received from the workerd runtime, it is terminated using its PID, and the JS and configuration files are deleted from inside the enclave.
The response from the workerd runtime is forwarded to the load balancer, which marks the request as closed and sends the response back to the user.
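The later steps of this flow (launching workerd inside a cgroup, forwarding the user input, and cleaning up) could look roughly like the sketch below. The cgroup-v2 file layout, the fixed startup delay, and the use of the reqwest crate are assumptions for illustration; the actual implementation may differ, for example by using cgroup v1 tooling or by polling the port until workerd is ready.

```rust
// Sketch of running workerd inside a pre-created cgroup; paths and tooling are
// assumptions. Assumes the reqwest crate with its "blocking" feature.
use std::{error::Error, fs, process::Command, thread, time::Duration};

fn run_workerd_in_cgroup(
    config_path: &str,
    js_path: &str,
    cgroup: &str, // e.g. a cgroup-v2 directory with memory.max / cpu.max already set
    port: u16,
    input: &str,
) -> Result<String, Box<dyn Error>> {
    // Start workerd with the generated capnp configuration.
    let mut child = Command::new("workerd").arg("serve").arg(config_path).spawn()?;

    // Move the process into the selected cgroup so its memory/CPU limits apply.
    fs::write(format!("{cgroup}/cgroup.procs"), child.id().to_string())?;

    // Give workerd a moment to bind its port (a real implementation would poll).
    thread::sleep(Duration::from_millis(300));

    // Forward the user's input to the worker over HTTP and capture the response.
    let response = reqwest::blocking::Client::new()
        .post(format!("http://127.0.0.1:{port}/"))
        .body(input.to_owned())
        .send()?
        .text()?;

    // Terminate the runtime and delete the per-request JS and config files.
    child.kill()?;
    child.wait()?;
    fs::remove_file(config_path)?;
    fs::remove_file(js_path)?;

    Ok(response)
}
```

Note that attaching the PID to the cgroup after spawning leaves a brief window in which the process runs unconstrained; a production implementation would place the process in the cgroup before workerd starts executing user code.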
Note: The storage server for storing the user's JS code is still under development. In the current setup, the code provided by the user for execution is stored in the calldata of a transaction on the Arbitrum testnet. The user provides the transaction hash, along with the other inputs, when making a request to the server. The calldata is decoded using ethers-rs and saved as a JS file inside the enclave. All other steps mentioned in the serverless platform flow remain the same.
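A hedged sketch of how this calldata lookup might be done with ethers-rs is shown below. The RPC endpoint URL and the assumption that the calldata is simply the UTF-8-encoded JS source are illustrative; the real decoding logic may differ.

```rust
// Sketch only: the RPC URL and calldata decoding are illustrative assumptions.
// Assumes the ethers crate.
use ethers::prelude::*;
use std::{error::Error, fs};

async fn fetch_js_from_calldata(tx_hash: TxHash, out_path: &str) -> Result<(), Box<dyn Error>> {
    // Connect to an Arbitrum testnet RPC node (placeholder URL).
    let provider = Provider::<Http>::try_from("https://arbitrum-testnet.example.invalid")?;

    // Look up the transaction whose hash the user supplied in the request.
    let tx = provider
        .get_transaction(tx_hash)
        .await?
        .ok_or("transaction not found")?;

    // Treat the calldata as the JS source; here it is interpreted as raw UTF-8
    // for illustration, while the real implementation decodes it appropriately.
    let js_source = String::from_utf8(tx.input.to_vec())?;
    fs::write(out_path, js_source)?;
    Ok(())
}
```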
Current flow inside the serverless application:
API endpoint
For instructions on how to run the Oyster Serverless application, please refer to the README file in the following GitHub repository: oyster-serverless
This transaction hash contains the JavaScript code that finds the prime factors of a given number: