Storage Node / Cluster
A Storage Node (or cluster) stores and distributes IPLD blocks.
- many backends are possible (e.g. filesystem, object store, tape?)
- blocks are accessible via bitswap and the HTTP trustless gateway (see the retrieval sketch after this list)
- advertises available CIDs using bitswap, DHT and IPNI announcements
- exposes a pinning service (users can ask the node to keep certain blocks)
    - via the pinning API (see the pinning sketch after this list)
    - via HTTP upload of CAR archives
    - authenticated by the Auth Service
- exposes a storage attestation API, an HTTP API answering questions such as the following (see the attestation sketch after this list)
    - do you store this CID (and its children)?
    - how long will you keep it?
- (TBD / maybe) follows a remote pinlist (e.g. as in the ORCESTRA pinlist.yaml)
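
Retrieval over the trustless gateway could look roughly like the sketch below. The node address and CID are placeholders, not values defined by this document; only the `application/vnd.ipld.raw` response format follows the trustless gateway specification, and the client is expected to verify the returned bytes against the CID itself.

```python
# Minimal sketch: fetch one raw block from a storage node's trustless gateway.
# GATEWAY and CID are placeholders, not addresses defined by this document.
import requests

GATEWAY = "https://storage-node.example.org"  # assumed node address
CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # example CID

resp = requests.get(
    f"{GATEWAY}/ipfs/{CID}",
    headers={"Accept": "application/vnd.ipld.raw"},  # ask for the raw block bytes
    timeout=30,
)
resp.raise_for_status()
block = resp.content  # trustless: verify these bytes hash back to the CID before use
print(f"fetched {len(block)} bytes for {CID}")
```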
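For the pinning path, a request against the standard IPFS Pinning Service API could look like the following sketch. The service base URL and the bearer token (assumed to be issued by the Auth Service) are placeholders; only the `POST /pins` request shape follows the published spec.

```python
# Minimal sketch: ask the node to pin a CID via the IPFS Pinning Service API.
# PINNING_API and TOKEN are placeholders; the Auth Service is assumed to issue
# the bearer token.
import requests

PINNING_API = "https://storage-node.example.org/pinning"  # assumed base URL
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # example CID

resp = requests.post(
    f"{PINNING_API}/pins",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cid": CID, "name": "example-dataset"},
    timeout=30,
)
resp.raise_for_status()
pin_status = resp.json()
print(pin_status["status"])  # e.g. "queued", "pinning", or "pinned"
```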
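Since the storage attestation API is still to be specified, the following is purely illustrative: the `/attestation/{cid}` path, the `recursive` parameter, and the response fields are hypothetical and only mirror the questions listed above.

```python
# Purely hypothetical sketch of querying the proposed storage attestation API.
# Path, parameters, and response fields are invented for illustration only.
import requests

NODE = "https://storage-node.example.org"  # assumed node address
CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # example CID

resp = requests.get(
    f"{NODE}/attestation/{CID}",
    params={"recursive": "true"},  # hypothetical: also cover child blocks
    timeout=30,
)
resp.raise_for_status()
# A response might contain fields such as:
#   {"stored": true, "recursive": true, "retention_until": "2030-01-01T00:00:00Z"}
print(resp.json())
```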
Additional Services
These services are likely very useful, but it is probably acceptable to operate fewer replicas of them.
- Network Indexer (tracks, indexes & publishes IPNI updates)
- Delegated Routing Service (exposes routing information from the DHT & Network Indexer via HTTP; see the query sketch after this list)
- Standalone HTTP Gateway (maybe less relevant in the longer term, exposes IPFS content via HTTP)
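
As mentioned above, a client could ask the Delegated Routing Service for providers of a CID over HTTP. The sketch below uses the Delegated Routing V1 HTTP API (`GET /routing/v1/providers/{cid}`); the service URL is a placeholder.

```python
# Minimal sketch: look up providers for a CID via a delegated routing service.
# ROUTER is a placeholder address.
import requests

ROUTER = "https://delegated-routing.example.org"  # assumed service address
CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # example CID

resp = requests.get(f"{ROUTER}/routing/v1/providers/{CID}", timeout=30)
resp.raise_for_status()
for provider in resp.json().get("Providers", []):
    print(provider.get("ID"), provider.get("Addrs"))
```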
Work Packages (WPs)
- specify requirements for the above-mentioned services more thoroughly
- develop quota management (or another mechanism that ensures all participants can use the service fairly)
- specify security recommendations for operating the services safely
- implement a storage node as a blueprint for the infrastructure available to project partners
- install & operate such nodes at multiple sites
- install & operate (at least) one network indexer
- install & operate (at least) one delegated routing service
- (TBD / maybe later) develop a system for transparent long-term archiving (e.g. a tape connector)