CoD

Introduction

This document explores the different network architectures that service providers can deploy to participate in the Bacalhau network, following the proposed architectural changes in Bacalhau Orchestration. Each service provider can follow a different architecture depending on their investment in the network and their availability target.

Architectures

[Figure: Bacalhau Orchestration Diagram - Legend]

Single Box

[Figure: Bacalhau Orchestration Diagram - 01 Single Compute Node]

This is the simplest form of participation in the network, where a service provider participates with a single compute node that executes jobs it receives from requester nodes in the network. The node will also host an SQLite database to persist state about the jobs it is executing, both to handle requester node status update queries and to retry executions when the node is bounced.
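
As a rough illustration, here is a minimal sketch of what such a local execution store could look like. The table layout, field names, and the choice of the mattn/go-sqlite3 driver are assumptions for illustration, not Bacalhau's actual schema.

```go
package main

import (
	"database/sql"
	"time"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver choice
)

// Execution is a hypothetical record of a job this compute node is handling.
type Execution struct {
	JobID     string
	State     string // e.g. "bidding", "running", "completed", "failed"
	UpdatedAt time.Time
}

// openStore opens the local SQLite database and creates the executions table.
func openStore(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS executions (
		job_id     TEXT PRIMARY KEY,
		state      TEXT NOT NULL,
		updated_at TIMESTAMP NOT NULL
	)`)
	return db, err
}

// saveExecution upserts the latest known state of a job so that it survives
// a node restart and can answer requester status queries.
func saveExecution(db *sql.DB, e Execution) error {
	_, err := db.Exec(
		`INSERT INTO executions (job_id, state, updated_at) VALUES (?, ?, ?)
		 ON CONFLICT(job_id) DO UPDATE SET state = excluded.state, updated_at = excluded.updated_at`,
		e.JobID, e.State, e.UpdatedAt)
	return err
}
```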

The node will join a p2p network with other service providers so that requester nodes in the network can discover it. Optionally, it can also join the network stats GossipSub topic to periodically publish stats about itself to improve its rank in the network, and to build reputation about requester nodes to help it decide which requests to accept or reject.
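
For illustration, below is a minimal sketch of joining a GossipSub topic with go-libp2p and periodically publishing a stats message. The topic name and the NodeStats fields are assumptions, not the actual Bacalhau protocol.

```go
package main

import (
	"context"
	"encoding/json"
	"time"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// NodeStats is a hypothetical stats payload; the real protocol may differ.
type NodeStats struct {
	NodeID        string `json:"node_id"`
	RunningJobs   int    `json:"running_jobs"`
	CompletedJobs int    `json:"completed_jobs"`
}

// publishStatsLoop joins the stats topic and publishes a snapshot every minute.
func publishStatsLoop(ctx context.Context, stats func() NodeStats) error {
	host, err := libp2p.New()
	if err != nil {
		return err
	}
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		return err
	}
	topic, err := ps.Join("bacalhau-network-stats") // assumed topic name
	if err != nil {
		return err
	}

	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			data, err := json.Marshal(stats())
			if err != nil {
				return err
			}
			if err := topic.Publish(ctx, data); err != nil {
				return err
			}
		}
	}
}
```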

This scheme suits individual users who want to participate in the network with their personal computers and requires the least upfront investment and technical knowledge to join. It is, however, the least available approach and is not effective for building a better reputation.

Load Balanced Multi Boxes

[Figure: Bacalhau Orchestration Diagram - 02 Load Balancer Compute Nodes]

In this architecture, the service provider will have multiple compute node instances behind a load balancer for higher availability and failover of job execution. The compute nodes are treated as a single unit by the rest of the network, and all messages will be signed with the same key.

The state of execution will be stored in an external database so that each compute node can respond to requester node status update requests, and so that stale or failed jobs can be detected and retried on other compute nodes when the original node is no longer available.
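
A sketch of how stale executions might be detected from such a shared database, assuming it is accessed through Go's database/sql, each compute node heartbeats an updated_at column, and the executions table mirrors the hypothetical one from the single-box sketch above:

```go
import (
	"database/sql"
	"time"
)

// findStaleExecutions returns jobs whose owning node has not heartbeaten within
// the given threshold, making them candidates for retry on another node.
// Placeholder syntax ("?") depends on the database driver in use.
func findStaleExecutions(db *sql.DB, threshold time.Duration) ([]string, error) {
	rows, err := db.Query(
		`SELECT job_id FROM executions
		 WHERE state NOT IN ('completed', 'failed')
		   AND updated_at < ?`,
		time.Now().Add(-threshold))
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var jobIDs []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		jobIDs = append(jobIDs, id)
	}
	return jobIDs, rows.Err()
}
```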

Detecting unhealthy nodes and redistributing their pending jobs evenly across the remaining nodes is a challenging problem on its own, and can be handled by a separate component, by a leader-elected compute node, or as a distributed decision using consistent hash rings.
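
As an illustration of the consistent-hash-ring option, here is a minimal sketch (no virtual nodes or replication) that maps a job ID onto one of the currently healthy compute nodes:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"sort"
)

// Ring is a minimal consistent hash ring mapping job IDs to compute nodes.
type Ring struct {
	hashes []uint64          // sorted node hashes
	nodes  map[uint64]string // hash -> node ID
}

func hashKey(key string) uint64 {
	sum := sha256.Sum256([]byte(key))
	return binary.BigEndian.Uint64(sum[:8])
}

// NewRing builds a ring from the currently healthy compute nodes.
func NewRing(nodeIDs []string) *Ring {
	r := &Ring{nodes: make(map[uint64]string)}
	for _, id := range nodeIDs {
		h := hashKey(id)
		r.hashes = append(r.hashes, h)
		r.nodes[h] = id
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Owner returns the node responsible for a job ID: the first node clockwise
// from the job's hash, wrapping around the ring.
func (r *Ring) Owner(jobID string) string {
	if len(r.hashes) == 0 {
		return ""
	}
	h := hashKey(jobID)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.nodes[r.hashes[i]]
}
```

When a node is marked unhealthy, rebuilding the ring without it reassigns only that node's pending jobs, which keeps the redistribution roughly even across the surviving nodes.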

Each compute node will join the p2p network to publish its network stats and build reputation about the other nodes in the network. The network stats protocol would need to support aggregating messages from multiple instances that belong to the same service provider, which is obviously inefficient. The compute nodes will also have to build reputation about others on their own, and newly joined nodes will start from scratch unless we allow state synchronization.

Load Balanced Multi Boxes with External Reputation Tracker

[Figure: Bacalhau Orchestration Diagram - 03 Load Balancer Compute Nodes with Centralized Reputation Tracker]

This approach further improves the separation of concerns by extracting the publishing and consumption of p2p messages into separate components, and it allows the network to scale better by reducing both the number of nodes in the p2p network and the number of messages moving around.

The compute nodes will not join the p2p network; instead, they will consult the Reputation Tracker about the reputation of the requester node to decide whether to accept or reject an incoming CallForBids request. The Reputation Tracker will consume network stats messages from GossipSub, build a view of the network’s reputation, and persist snapshots for failover and recovery.
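
A sketch of how a compute node might consult such a Reputation Tracker over HTTP before bidding; the endpoint path, response shape, and threshold are all assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// reputationResponse is a hypothetical response shape from the tracker.
type reputationResponse struct {
	Score float64 `json:"score"` // e.g. 0.0 (untrusted) .. 1.0 (trusted)
}

// shouldAcceptBid asks the Reputation Tracker for the requester's score and
// accepts the CallForBids only if it clears a configured threshold.
func shouldAcceptBid(trackerURL, requesterID string, minScore float64) (bool, error) {
	resp, err := http.Get(fmt.Sprintf("%s/reputation/%s", trackerURL, requesterID))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("reputation tracker returned %d", resp.StatusCode)
	}

	var rep reputationResponse
	if err := json.NewDecoder(resp.Body).Decode(&rep); err != nil {
		return false, err
	}
	return rep.Score >= minScore, nil
}
```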

In addition, the compute nodes will publish their aggregated stats periodically to a Stats Aggregator service, which will add another round of aggregation before publishing a single stats message to the other peers representing the whole service provider’s stats.
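
That second round of aggregation could be as simple as folding the per-node stats into one provider-level message before it is published, as in this sketch (reusing the hypothetical NodeStats type from the earlier GossipSub sketch; the field names are assumptions):

```go
// ProviderStats is a hypothetical provider-level stats message that the
// Stats Aggregator publishes on behalf of all compute node instances.
type ProviderStats struct {
	ProviderID    string `json:"provider_id"`
	NodeCount     int    `json:"node_count"`
	RunningJobs   int    `json:"running_jobs"`
	CompletedJobs int    `json:"completed_jobs"`
}

// aggregate folds the latest NodeStats reported by each instance into a
// single message representing the whole service provider.
func aggregate(providerID string, perNode []NodeStats) ProviderStats {
	out := ProviderStats{ProviderID: providerID, NodeCount: len(perNode)}
	for _, s := range perNode {
		out.RunningJobs += s.RunningJobs
		out.CompletedJobs += s.CompletedJobs
	}
	return out
}
```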

Service Provider with Requester Node

[Figure: Bacalhau Orchestration Diagram - 04 Service Provider with Requester Node]

In addition to handling computation requests, this service provider can also handle end user job orchestration and submission requests by hosting a set of requester nodes. A service provider can be motivated to host requester nodes if they are incentivized to do so, or if it allows them to favour their own compute nodes to carry out the execution, which should be acceptable as long as they meet the end user’s expectations. Verification can be tricky, though, unless it is handled externally, such as in ‣

Similar to the compute nodes, the requester nodes will consult the Reputation Tracker to identify candidate compute nodes, which can belong to the same service provider or to other service providers in the network. The requester node will communicate directly with the compute nodes to negotiate job execution, and will persist the job info in an external database to handle the client’s control plane queries, such as list and describe.
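
A sketch of how a requester node might rank candidate compute nodes using the tracker’s scores and the job’s required execution engine; the Candidate fields are illustrative assumptions:

```go
package main

import "sort"

// Candidate is a hypothetical view of a compute node as returned by the
// Reputation Tracker, combined with capability information.
type Candidate struct {
	NodeID  string
	Engines []string // e.g. "docker", "wasm"
	Score   float64  // reputation score from the tracker
}

// selectComputeNodes filters candidates that support the required engine and
// returns up to limit of them, highest reputation first.
func selectComputeNodes(candidates []Candidate, requiredEngine string, limit int) []Candidate {
	var eligible []Candidate
	for _, c := range candidates {
		for _, e := range c.Engines {
			if e == requiredEngine {
				eligible = append(eligible, c)
				break
			}
		}
	}
	sort.Slice(eligible, func(i, j int) bool { return eligible[i].Score > eligible[j].Score })
	if len(eligible) > limit {
		eligible = eligible[:limit]
	}
	return eligible
}
```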

The requester node will also publish its stats to the Stats Aggregator service to be propagated to the rest of the network.

Service Provider with Frontend Service

[Figure: Bacalhau Orchestration Diagram - 05 Service Provider with Frontend Service]

This architecture introduces a Frontend service that can handle compute node execution status queries and requester node control plane queries, such as list and describe, directly from the datastores without involving the compute or requester nodes. This approach has the following benefits:

  1. It improves the availability and utilization of the service provider, since most requests, including job negotiation and status queries, will be terminated at the Frontend service, and only the actual job execution and orchestration will be forwarded to the compute and requester nodes respectively.

  2. Even when the compute nodes are browned out, the Frontend service will still be able to handle incoming status queries and reject CallForBids gracefully, which is critical to maintaining a better reputation (see the sketch after this list).

  3. It allows adding placement and routing logic to decide which compute nodes should handle job execution, rather than relying on the load balancer’s round-robin routing. This includes selecting nodes based on installed execution engines, resource utilization, and so on.

  4. The Frontend can trigger job execution retries when a requester node queries for a status update and it turns out that the compute node owning the job is no longer available.
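
Here is a minimal sketch of how the Frontend might terminate status queries at the execution datastore and gracefully reject CallForBids while the compute fleet is browned out; the routes, payloads, and health check are assumptions:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

// ExecutionStore is a hypothetical read interface over the execution datastore.
type ExecutionStore interface {
	GetStatus(jobID string) (string, bool)
}

// Frontend terminates status queries at the datastore and only forwards real
// work to the compute fleet when it is healthy.
type Frontend struct {
	store        ExecutionStore   // lookup of execution status by job ID
	fleetHealthy func() bool      // e.g. backed by compute node health checks
	forward      http.HandlerFunc // proxy to a selected compute node
}

func (f *Frontend) handleStatus(w http.ResponseWriter, r *http.Request) {
	jobID := strings.TrimPrefix(r.URL.Path, "/executions/")
	status, ok := f.store.GetStatus(jobID)
	if !ok {
		http.NotFound(w, r)
		return
	}
	json.NewEncoder(w).Encode(map[string]string{"job_id": jobID, "status": status})
}

func (f *Frontend) handleCallForBids(w http.ResponseWriter, r *http.Request) {
	if !f.fleetHealthy() {
		// Reject gracefully instead of timing out, to protect reputation.
		w.WriteHeader(http.StatusServiceUnavailable)
		json.NewEncoder(w).Encode(map[string]string{"bid": "rejected", "reason": "compute capacity unavailable"})
		return
	}
	f.forward(w, r) // placement/routing logic could pick a specific node here
}

func (f *Frontend) routes() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/executions/", f.handleStatus)
	mux.HandleFunc("/callforbids", f.handleCallForBids)
	return mux
}
```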

Utility Focused Service Provider

A network participant can provide the rest of the network with a specific utility or service other than computation. This can be any of the following:

  • Reputation Tracker: A service provider can join the network only to build and expose a reputation tracker for the other service providers to call, instead of each building and maintaining their own. This can be a paid service or offered for the greater good to help with Bacalhau adoption.

  • Job Explorer: A service provider consuming GossipSub messages only to build a queryable database of all jobs executed in the network. This allows a user to query all of their jobs that were executed across different service providers, and could allow Bacalhau to build a website to query jobs as an improvement to the user experience (see the subscription sketch after this list).

  • Requester Node: A service provider that only orchestrates and negotiates jobs on behalf of the user. This can be for the greater good to help with Bacalhau adoption, or the requester can be incentivized by the user or by taking a portion from the compute nodes.
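
As an illustration of the Job Explorer idea, here is a sketch of consuming messages from a GossipSub topic and handing them to an indexer (for example, one that writes to a queryable database); the topic name and message handling are assumptions:

```go
package main

import (
	"context"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// consumeJobEvents subscribes to an assumed topic and passes every raw
// message to an index function, e.g. one backed by a queryable database.
func consumeJobEvents(ctx context.Context, index func(data []byte) error) error {
	host, err := libp2p.New()
	if err != nil {
		return err
	}
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		return err
	}
	topic, err := ps.Join("bacalhau-job-events") // assumed topic name
	if err != nil {
		return err
	}
	sub, err := topic.Subscribe()
	if err != nil {
		return err
	}
	defer sub.Cancel()

	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return err // context cancelled or subscription closed
		}
		if err := index(msg.Data); err != nil {
			return err
		}
	}
}
```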

Where does this end?!

Obviously, there are different combinations that service providers can choose from to build their service, and surely more options exist than can all be covered here. The main question is: what should the Bacalhau team deliver to enable such flexibility for each service provider?

Bacalhau can improve the isolation and spec definition of the sub-components/services, and provide a more pluggable and modular implementation that allows service providers to run all services together in a single box, or to follow a more microservice-based architecture where each service is deployed on its own fleet of boxes. This is a breakdown of the services identified so far (a rough sketch of these components as interfaces follows the list):

  • Compute Service: Owns the execution of jobs

  • Requester Service: Owns compute node selection and negotiation

  • Result Tracker: Continuously queries compute nodes for job execution status and triggers job verification when computation is complete. This is currently shown as part of the requester node in the diagrams, but it doesn’t have to be, especially since it is more of a cron job.

  • Verifier: Verifies the proposed results. This is currently shown as part of the requester node in the diagrams, but it doesn’t have to be, and it is proposed to be independent and implemented as a smart contract in ‣

  • Reputation Tracker: Consumes messages from the network (or other sources), and exposes APIs to query a service provider’s reputation and rank for a given job

  • Stat Reporter: Collects, aggregates, and publishes the service provider’s stats periodically to the rest of the network

  • Job Datastore: A persistent datastore for jobs orchestrated by the service provider’s requester nodes. The eviction time is expected to be in months.

  • Execution Datastore: A persistent datastore for jobs executed or pending execution by the service provider’s compute nodes. The eviction time is expected to be in hours or days.

  • Frontend Service: The entry point to the other services, which can route incoming requests to the datastores or other enabled services.
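
To make the modularity concrete, here is a rough sketch of how these services could be expressed as Go interfaces, so that a service provider can wire them up in a single process or deploy them separately; all names and signatures are illustrative assumptions, not Bacalhau’s actual API:

```go
package main

import "context"

// Job, ExecutionStatus, and Bid are stand-in types for the real Bacalhau models.
type Job struct{ ID string }
type ExecutionStatus struct{ JobID, State string }
type Bid struct{ JobID, NodeID string }

// ComputeService owns job execution on a compute node.
type ComputeService interface {
	OnCallForBids(ctx context.Context, job Job) (Bid, error)
	Execute(ctx context.Context, job Job) error
	Status(ctx context.Context, jobID string) (ExecutionStatus, error)
}

// RequesterService owns compute node selection and negotiation.
type RequesterService interface {
	Submit(ctx context.Context, job Job) (string, error)
	List(ctx context.Context) ([]Job, error)
	Describe(ctx context.Context, jobID string) (Job, error)
}

// ResultTracker polls compute nodes and triggers verification on completion.
type ResultTracker interface {
	Poll(ctx context.Context) error
}

// Verifier verifies proposed results (possibly externally, e.g. on-chain).
type Verifier interface {
	Verify(ctx context.Context, jobID string) (bool, error)
}

// ReputationTracker answers reputation and ranking queries.
type ReputationTracker interface {
	Score(ctx context.Context, nodeID string) (float64, error)
	Rank(ctx context.Context, job Job) ([]string, error)
}

// StatsReporter collects, aggregates, and publishes provider stats.
type StatsReporter interface {
	Report(ctx context.Context) error
}

// JobStore and ExecutionStore are the two persistent datastores.
type JobStore interface {
	PutJob(ctx context.Context, job Job) error
	GetJob(ctx context.Context, id string) (Job, error)
}
type ExecutionStore interface {
	PutExecution(ctx context.Context, e ExecutionStatus) error
	GetExecution(ctx context.Context, jobID string) (ExecutionStatus, error)
}
```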
