Request for Startups [FVM edition]

The Filecoin Virtual Machine unlocks boundless possibilities for innovation on the Filecoin network. Here are many ideas for systems, apps, and building blocks we’d like to see built. We are counting on you to capture these opportunities, and turn these ideas into reality!

This RFS uses a star guide to call out the items we most want to see: 🌟 marks startups and tools we want prioritized, and the more 🌟, the higher the priority!

Version: v1.6
Date: 2024-02-02
🌱
Editor notes
Last updated on Feb 2, 2024

We continue to have a lot of interest here, especially after the release of open source models like Llama 2 and Code Llama! We also have partner projects (like Lilypad and Fluence) working on similar tools!

ML Model Storage and Augmentation 🌟🌟🌟

FVM is in a unique spot to contribute to the recent transformer revolution with OpenAI and its competitors. There are centralized catalogues of these models on sites like HuggingFace, which let you interact with the models directly from your browser.

FVM provides the ability to decentralize the storage of model training data and of the models themselves. (See more here.)

Consider how powerful it could be to improve an LLM's weights and biases based on an objective evaluation set (such as HumanEval). With the right incentives, participants could bootstrap from a known open source model, and continue training and fine-tuning to the point that it starts to outperform known hosted models. The future of LLMs can be decentralized, open, and more performant than proprietary solutions.

More broadly, we can also incentivize the ability to augment these models in new and interesting ways. This can happen through rewarding the creation or collection of new training data for these models, new model architectures, and new inference tools for large language models. Each of these developments can allow for data organizations to provide tokens to contributors, testers and dataset curators.
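To make the incentive concrete, here is a minimal Python sketch of one possible reward rule: contributors split a fixed token pool in proportion to the eval-score improvement their fine-tune achieves over the bootstrap model. The pool size, baseline score, and linear split are all hypothetical choices, not a prescribed design.

```python
# Sketch: reward fine-tuning contributors in proportion to measured
# improvement on a fixed evaluation set. All names and numbers here are
# hypothetical -- a real data org would choose its own reward curve.

REWARD_POOL = 10_000          # tokens available for this evaluation round
BASELINE_SCORE = 0.32         # pass@1 of the bootstrap model on the eval set

def reward(contributor_scores: dict[str, float]) -> dict[str, float]:
    """Split the pool across contributors by their score improvement."""
    gains = {c: max(0.0, s - BASELINE_SCORE)
             for c, s in contributor_scores.items()}
    total = sum(gains.values())
    if total == 0:
        return {c: 0.0 for c in contributor_scores}
    return {c: REWARD_POOL * g / total for c, g in gains.items()}

payouts = reward({"alice": 0.40, "bob": 0.36, "carol": 0.30})
# alice gained 0.08, bob 0.04, carol nothing -> pool splits roughly 2:1:0
```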

A truly open solution to large language models will allow them to be used, developed and retrained by the members of the organizations that support them. This creates a compelling future for the transformer revolution — one that places the open source community at the center of future ML advances. It also serves as a useful competitor to the prevailing market approach — that of data collected by large organizations to train proprietary, closed-source models.

🌱
Editor notes
Last updated on May 8, 2023

We are interested in teams that can onboard lots of large video and VR files to the network. We want teams here that can focus on the end user experience with uploading, viewing, and experiencing large video / VR assets. The business development work and marketing needed here is key to making a successful video storage onramp.

Storage Onramps 🌟🌟

Filecoin needs to take advantage of the unused storage capacity the network currently holds. It can do so through onramps serving a variety of storage needs, and through data organizations that incentivize uploading more data onto the network.

Just as NFT.storage paired specialized marketing with a new product for storing NFT metadata on Filecoin and IPFS, we need dapps that act as onramps for new data verticals. FVM can be leveraged to build new incentive schemes for uploading specific kinds of data.

Some examples of vertically specific storage onramps we’d like to see:

video.storage

  • Need integration with Saturn to do subsecond video retrieval
  • Token gating / content policy constraints for certain copyrighted content
  • Can incorporate video processing pipelines via Livepeer

archive.storage

  • Query metadata and find subsets of data that need to be retrieved
  • Decentralized compute to run analyses over this dataset

dev.storage

  • Clear CLI tooling for uploading npm packages, Docker containers, git repositories, apt packages
  • Massive optimizations possible because the structure in this data is well defined and repeated

Stock photo and video archives

  • Create a decentralized version of Shutterstock to incentivize creators to upload photo and video based on client needs
  • Clear, simple pricing based on token-gated paywalls for watermark-free download


Pay-Per-View

Create token-protected pages, as well as token-gated media, to unlock certain movies, music, and content. Payment can happen per view, per impression, or per download. Access can also be sold to a third party at a later date, and royalties can be modeled in this manner.

This can create new incentives for creating content that bypass the legacy middlemen of web2 content (record labels, copyright law and media agents).
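As a toy illustration of that royalty model, the sketch below settles per-view revenue against a fixed split table. The price and split percentages are hypothetical; on FVM this accounting would live inside a token-gating contract.

```python
# Sketch: per-view accounting with a royalty split. The split table and
# price are hypothetical placeholders.

PRICE_PER_VIEW = 0.05  # FIL per view, hypothetical
SPLITS = {"creator": 0.85, "platform": 0.10, "rights_holder": 0.05}

def settle(views: int) -> dict[str, float]:
    """Divide gross revenue for a billing period across the split table."""
    revenue = views * PRICE_PER_VIEW
    return {party: revenue * share for party, share in SPLITS.items()}

payout = settle(1_000)   # 50 FIL gross for the period
```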

🌱
Editor notes
Last updated on Mar 14, 2023

We would LOVE to fund game developers working to build large Minecraft-like worlds on FVM! Our network shines with large in-game assets that are core to the player experience.

Games 🌟🌟🌟

FVM can align incentives between game developers and players. Really high quality games in web3 can be composable and democratic. Communities can decide how virtual worlds evolve, rather than a centralized group of game developers. People that are early participants in the community can vote on characters, scenes and forks.

VR games in particular have a great need for FVM due to the size of the assets involved, and the modifications that are necessary on these assets. Open virtual worlds (like Decentraland) have shown some promise, but Minecraft is really the addictive gameplay we want to emulate on FVM.


Social 🌟🌟🌟

Decentralized social is a key piece of the evolving web3 landscape. FVM is in a unique position to provide the assets, backing, photos, and videos that drive unique data-driven social experiences.

Since this is a wide open space, feel free to be creative here — build unique social worlds that drive unique web3-native engagement models. Focus on more targeted social problems faced in the web3 community. Focus on magic and delight for users. Bring on the weird and wonderful! 💫


Decentralized Science 🌟🌟🌟

FVM can be used to store open access papers and journals. You can store (as proofs) the results of reproducible experiments and provide incentives for scientists to upload new papers over time. This can raise funds for research that is unorthodox or comes from a unique source.

This can enable massive-scale data science on the data collected in these papers. Metadata analysis becomes straightforward because the data collected in a DeSci DAO can be queried and analyzed in one place.

Some ideas for what this can enable:

  • Can the peer review process be made more fair (perhaps through quadratic voting)?
  • How can scientific code be made reproducible? Often these are written as one-off scripts and can be hard to recreate, even for famous publications.

Tracking and Reducing Filecoin Carbon Emissions 🌟

Filecoin Green is committed to building a sustainable network to store data. In order to do this, FVM can be a key player with regenerative finance. This includes incentivizing SPs to apply for gold-standard sustainability claims, promoting lending based on such claims, and assessing offsets based on the wallet / contract that is used to make storage deals.

This brings to light several projects that can be used to further these initiatives:

  • Building on-chain verifiable credentials (such as soulbound tokens, NFTs, or a DID project) to store sustainability-claim metadata about SPs
  • Tokenizing RECs (renewable energy certificates) on FVM
    • Can be bought by SPs on an open marketplace in order to make sustainability claims about their operations
🌱
Editor notes
Last updated on Mar 14, 2023

There are a lot of players in the lending space already due to overwhelming SP need for FIL collateral.

However, there remain useful problems to solve in this space, including: better SP onboarding, SP underwriting for loan activity, and connections to retrieval performance.

Hardware-Collateralized Lending

Storage providers (SPs) have to post collateral (in FIL) to onboard storage capacity to the network, and to accept storage deals. While important for security, the need to pledge collateral creates friction and an immediate barrier that limits SP participation. Furthermore, the mining process requires large capital investments into hardware.

The Filecoin network has a large base of long-term token holders that would like to see the network grow, and are willing to lend their FIL to reputable and growth-oriented SPs. In a pre-FVM world, a number of lending partners have stepped up (Darma, Anchorage, Coinlist) to facilitate this flow of capital. However these partners are not able to service all SPs in the market, nor are they able to work with all token holders (or protocols) that might want to get access to an inflation-indexed form of Filecoin.

We see lending as a core yield lego for the FVM ecosystem. This means we can see it being used for yield aggregation, perpetual storage, liquid staking, and a lot more. Getting lending right is key to getting the FVM ecosystem kickstarted, especially in the current yield-starved macro environment.

Storage providers can borrow collateral from lenders and the smart contract will lock the future income (block rewards) until the storage providers have repaid their loan.

Underwriting matters here, since borrowers can default on their loans.
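The repayment mechanic described above can be simulated in a few lines: interest accrues each epoch while the borrower's block rewards are swept to the lender until the loan clears. The per-epoch rate and reward figures below are hypothetical.

```python
# Sketch: a loan where an SP's future block rewards are locked and swept
# toward repayment until principal plus interest is covered. Rates and
# reward amounts are hypothetical placeholders.

def epochs_to_repay(principal: float, rate_per_epoch: float,
                    reward_per_epoch: float, max_epochs: int = 10_000) -> int:
    """Return the number of epochs until the loan is fully repaid."""
    owed = principal
    for epoch in range(1, max_epochs + 1):
        owed *= 1 + rate_per_epoch       # interest accrues on the balance
        owed -= reward_per_epoch         # block rewards swept to the lender
        if owed <= 0:
            return epoch
    raise ValueError("rewards never outpace interest; loan cannot amortize")

# e.g. 1000 FIL borrowed, ~1 FIL/epoch interest vs 12 FIL/epoch in rewards
n = epochs_to_repay(1000.0, 0.001, 12.0)
```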

Initial thoughts for what this could look like:

  • Permissioned lending pools that have a predefined list of lenders and borrowers (totally permissioned, underwriting done off-chain)
  • One-sided permissionless lending pools (akin to Maple on Ethereum / Solana) that have a predefined list of borrowers, but anybody can lend
  • Semi-permissioned lending pools where anybody can lend, but borrowers can join automatically as long as they meet a defined list of criteria


Perpetual Storage 🌟🌟

We need to enable perpetual storage of data with a certain number of redundancies. This is important because it is a core use case that comes up again and again from our partners and builders.

Perpetual storage (or in the special case, permanent storage) allows clients to automate renewal of their deals in perpetuity. In many cases, clients want to be able to simply specify terms for how data should be stored (e.g. ensure there are always at least 10 copies of my data on the network) — without having to run infrastructure to manage repair or renewal of deals.

Note that because of Filecoin’s proofs, we can create contracts that operate with substantially higher capital efficiency, without sacrificing the security of a dataset.

This tweet thread shares a mental model for how one might create such a contract, along with a strategy for calculating the funds required to indefinitely fund storage via DeFi.
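One simple way to size such a fund is the classic perpetuity formula: hold enough principal that its DeFi yield covers the annual storage bill forever. The sketch below uses hypothetical prices and APY, and ignores yield volatility and renewal gas costs.

```python
# Sketch: size an endowment whose yield perpetually covers storage costs.
# A perpetuity needs principal = annual_cost / yield_rate. All numbers
# here are hypothetical.

def endowment_needed(tib: float, price_per_tib_year: float,
                     apy: float, replicas: int = 10) -> float:
    """Principal (in FIL) whose yield funds storage indefinitely."""
    annual_cost = tib * price_per_tib_year * replicas
    return annual_cost / apy

# 1 TiB kept at 10 replicas, 0.2 FIL/TiB-year, 5% APY -> 40 FIL principal
p = endowment_needed(1.0, 0.2, 0.05)
```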


Programmable Storage Markets 🌟

If you regard the Filecoin storage network as a massive decentralized data warehouse whose state is being constantly proven to the public, you can think of the FVM as a programmable controller for it.

In a traditional cloud-based data center, the strategies and policies defining how data is inserted, placed, distributed, replicated, repaired, etc. are predetermined by the vendor, and users can only configure them in proprietary ways.

With the FVM, devs can envision and create data center logic to orchestrate, aggregate and broker storage capacity and data sitting all over the world in novel ways, giving rise to new storage primitives with their associated economies of scale.

Some ideas include:

  • Storage bounties, where storage providers compete to win deals, bringing the price down for clients.
  • Full-sector auctions, where clients get a discount for purchasing and occupying entire sectors.
  • Volume discounts, where the price is further reduced for purchasing multiple sectors at once.
  • Sector rebates, where the provider refunds the client (who could be a Data DAO!) on a trigger condition, e.g. when they purchase N sectors over a specific period of time.

These can compose like legos with one another to offer richer storage recipes. There is room for many automation solutions, variations and flavors to compete in the market. In the future, derivative markets may appear and interoperability solutions could enable seamless switching between providers.
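Since these mechanisms are meant to compose like legos, one way to model them is as pricing functions folded over a base quote. The discount rates and thresholds below are hypothetical.

```python
# Sketch: compose pricing "legos" (volume discount, full-sector discount)
# as functions applied in sequence to a base quote. Rates are hypothetical.

BASE_PRICE_PER_SECTOR = 10.0  # FIL, hypothetical

def volume_discount(price: float, sectors: int) -> float:
    """10% off when the client buys 10 or more sectors at once."""
    return price * 0.9 if sectors >= 10 else price

def full_sector_discount(price: float, sectors: int) -> float:
    """5% off for purchasing and occupying entire sectors."""
    return price * 0.95

def quote(sectors: int, legos) -> float:
    price = BASE_PRICE_PER_SECTOR * sectors
    for lego in legos:           # each lego transforms the running price
        price = lego(price, sectors)
    return price

q = quote(10, [volume_discount, full_sector_discount])
# 100 FIL base -> 90 after volume discount -> 85.5 after sector discount
```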

🌱
Editor notes
Last updated on December 2, 2023

If you want to run FDT / Estuary / FDI, talk to @alvin @schreck @wings @cake on Filecoin Slack. They can point you to more resources!

Build with the FDT Deal Engine 🌟

If you want to build a centralized aggregator on FVM, you can leverage our FVM aggregator standard (interface & implementation & FRC) and implement it with your very own centralized deal engine.

The good news: we have working code to create exactly that! A phenomenal team has built out a set of open source deal engine repos that you can fork and host, called FDT / Estuary / FDI. See Estuary repo here. Talk to the handles in the banner above if you want to learn more.

All you need to do is maintain this codebase and deploy your own FVM aggregator standard interface. As soon as you hook up the submit and complete functions in that interface, you can use the deal engine to take in requests for aggregation and trustlessly prove, using PoDSI, that you have aggregated them correctly.
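That flow can be mocked in a few lines of Python. The method names below (submit, complete, get_active_deals) are illustrative stand-ins for the FRC interface functions, and the CID and deal id are made up.

```python
# Sketch: a mock of the aggregator flow -- submit a request, complete it
# with a PoDSI-style inclusion proof, then query active deals. Names are
# illustrative, not the FRC's actual signatures.

class MockAggregator:
    def __init__(self):
        self.pending: dict[str, int] = {}      # cid -> request id
        self.deals: dict[str, list[int]] = {}  # cid -> active deal ids
        self._next_id = 0

    def submit(self, cid: str) -> int:
        self._next_id += 1
        self.pending[cid] = self._next_id
        return self._next_id

    def complete(self, cid: str, deal_id: int, proof_ok: bool) -> None:
        if cid not in self.pending:
            raise KeyError("no pending request for this CID")
        if not proof_ok:
            raise ValueError("inclusion proof failed verification")
        self.deals.setdefault(cid, []).append(deal_id)
        del self.pending[cid]

    def get_active_deals(self, cid: str) -> list[int]:
        return self.deals.get(cid, [])

agg = MockAggregator()
agg.submit("bafy-example-cid")
agg.complete("bafy-example-cid", deal_id=42, proof_ok=True)
```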


Liquid Staking

Other chains (such as Ethereum mainnet) enable liquid staking on top of the block rewards paid for staking on a PoS chain. Filecoin can enable the same functionality by distributing the block rewards given to winning SPs each epoch. Teams such as Filmine are already hard at work building solutions here.

Initial thoughts for what this could look like:

  • Build a wrapper around a lending dapp. Here is one approach for how the wrapper could look:
    • Mint tFIL based on FIL deposited in the protocol
    • Generate a yield on FIL by lending it to SPs on a lending platform
    • Allow trading tFIL for FIL – this should maintain a 1:1 correspondence if the yield generated matches expectations
  • A few directional ideas that might be useful for inspiration:
    • Imagine porting ideas from Lido / RocketPool to the Filecoin ecosystem.
    • Note that you’ll need to make decisions about rebasing or value accrual - it might be useful to think about where you imagine the tokenized asset to be used (e.g. as collateral in other protocols).
      • Eigenlayer might have interesting elements to draw upon here
      • If aiming for 1:1 pegs, you may want to think about where you imagine the peg to be maintained (a Curve/Convex system on another chain? Versions of those deployed on FIL?)
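For intuition on the rebasing-vs-value-accrual decision above, here is a sketch of the non-rebasing (value-accrual) variant: tFIL balances stay fixed while lending yield raises the FIL-per-tFIL exchange rate. The pool mechanics and numbers are hypothetical.

```python
# Sketch: a non-rebasing liquid staking token. Yield accrues to the
# exchange rate, not to balances. Numbers are hypothetical.

class StakingPool:
    def __init__(self):
        self.total_fil = 0.0
        self.total_tfil = 0.0

    def rate(self) -> float:
        """FIL redeemable per tFIL (1.0 before any deposits)."""
        return self.total_fil / self.total_tfil if self.total_tfil else 1.0

    def deposit(self, fil: float) -> float:
        """Mint tFIL at the current exchange rate; return amount minted."""
        minted = fil / self.rate()
        self.total_fil += fil
        self.total_tfil += minted
        return minted

    def accrue_yield(self, fil_earned: float) -> None:
        """Lending income raises the rate for every tFIL holder."""
        self.total_fil += fil_earned

pool = StakingPool()
t = pool.deposit(100.0)   # 100 tFIL minted at rate 1.0
pool.accrue_yield(10.0)   # rate rises to 1.1 FIL per tFIL
```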


Reputation and QoS systems 🌟

With close to 4000 active storage providers (SPs) servicing the Filecoin network, it isn’t easy for clients to choose whom to store data with. Different clients may value different properties, and priorities vary across datasets from the same client (e.g. price, latency, bandwidth, availability, etc.). Storage providers could congregate and advertise their Quality of Service (QoS) on a portal, but the public would have to trust their self-reported numbers.

Reputation systems are L2 networks that patrol the Filecoin network, assessing the Quality of Service of storage providers. They perform storage deals on the open market, and capture their observations in a provable log from which reputation scores and metrics are computed and published, in a traceable and auditable form. Pando is a potential building block.

Reputation systems would offer their services via APIs, either on-chain, off-chain, or both, potentially with complex querying capabilities, allowing storage apps to programmatically explore and filter the universe of miners by the exact characteristics they care about, for every storage deal.

One idea to monetize here might be to offer credentialing as a service (i.e. storage providers pay the credentialing service to issue a verified credential on-chain). The service could embed in the metadata of the verified credential all the relevant info used to generate the attestation (making it easily verifiable for any client), allowing the verified credential to be used standalone as shorthand in other protocols (e.g. who might be eligible to participate in an auction to store data).
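A toy version of the scoring step: fold a log of test-deal observations into a single reputation score. The observation fields and weights below are hypothetical; a real system would derive scores from a provable, auditable log.

```python
# Sketch: compute a reputation score from a log of test-deal retrievals.
# Fields and weights are hypothetical placeholders.

def score(observations: list[dict]) -> float:
    """Weighted success rate: failures score 0, slow retrievals half."""
    if not observations:
        return 0.0
    total = 0.0
    for obs in observations:
        if not obs["retrieved"]:
            continue                          # failed retrieval scores 0
        total += 0.5 if obs["latency_ms"] > 1000 else 1.0
    return total / len(observations)

log = [
    {"retrieved": True,  "latency_ms": 120},
    {"retrieved": True,  "latency_ms": 4000},   # slow: half credit
    {"retrieved": False, "latency_ms": 0},
]
s = score(log)   # (1.0 + 0.5 + 0.0) / 3
```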

🌱
Editor notes
Last updated on February 2, 2024

We would love to see teams build here, but have seen limited engagement here so far. Please reach out if you want to learn more and build in this space!

In particular, it may be beneficial to focus on dataDAOs that can create amazing end user experiences. Think about those without an FVM wallet or a private key. What can you build with FVM to enable new dapps and interaction models? Focus on ML, social, gaming, video and VR — as we describe in subsequent sections.

DataDAOs

FVM enables a new kind of data-centric DAO that has heretofore been impossible. DataDAOs are DAOs whose mission revolves around the preservation, curation, augmentation, and promotion of datasets considered valuable by their stakeholders.

Examples include datasets valuable to humanity at large, like large genomic or research databases, historical indexed blockchain and transaction data, rollup data, Wikipedia, NFT collections, metaverse assets, and more. But also datasets valued by narrower collectives, like football video recaps, statistics published by governments and city halls, or industry-specific machine learning datasets.

Because stake in DataDAOs can be tokenized, and the data stored by the DAO (as well as its status) can be cryptographically proven, the value and utility of data can be objectively expressed and transacted within markets. Tokens can then be used to pay for services to be performed on or with the data.

For example, interested parties could harvest tokens in a DAO-driven decentralized compute fabric by analyzing datasets, performing scientific and ML modelling, calculating statistics, and more, all of which are actions that augment the value, worth, and utility of the original data. SPs can get rewarded (akin to datacap) for replicating datasets, and CDNs can get rewarded for distributing data.

Stakeholders could in turn spend tokens to incentivise even more production of raw data, e.g. sensor data collection, generative art, and even human tasks (transcription, translation...), thus resulting in a circular data-centric economy.

All in all, DataDAOs can be used to create a new layer of incentives for datasets, either in conjunction or in lieu of FIL+ rewards.

Initial thoughts for what this could look like:

  • Incentivize storage for the metaverse through FVM. See more here.
  • Create a token-gated paywall for copyrighted datasets
  • Create a DataDAO for rare / dying languages
  • Create a DataDAO as the database of a decentralized artwork database
    • The artwork can include music, 3D models, movies and the spoken word
    • The paywall can be handled by a token-gating mechanism that rewards artists while also giving the artists distribution

👉
As a specific example, consider a wikiDAO. The purpose of the DAO is to curate, store and improve the (English) Wikipedia. Assume that $WIKI was created by the DAO, and that participant A uploads the dataset, B replicates it, and C improves it.

With the aggregator standard (interface & implementation & FRC), you can now incentivize different actions taken by a certain CID (which in this case might map to enwiki8) and send those participants tokens.

  • In order to send $WIKI to A:
    • You can wait on SubmitAggregatorRequest with your CID from the aggregator smart contract. If you are optimistic about aggregation latencies, you can reward A at the time of submission to the aggregator with this event.
    • You can wait until getActiveDeals(cid) returns an array that is nonzero in length. At this point, you know that A has been able to fully upload enwiki8 onto Filecoin.
  • In order to send $WIKI to B:
    • You can call getActiveDeals(cid) in order to confirm the number of active replicas for enwiki8. If B uploads another copy, you can reward them with $WIKI.
  • In order to send $WIKI to C:
    • The DAO would need some machinery to implement voting, and keeping track of candidates for β€œupgraded” enwiki8 datasets. These upgraded datasets would have a modified CID, and would already be uploaded onto Filecoin. The DAO should hold onto $WIKI in escrow for C upon upload of the modified CID.
    • The DAO could vote on the value of the modified CID. The escrow could get lifted (and the funds sent to C) based on a favorable vote.

With these incentives in place (and proper token management), the DAO can now create a better repository of human knowledge — one that in time might be able to surpass Wikipedia.

With the approach described above, you now have a machine that is incentivized to curate, replicate and improve datasets in all kinds of data niches. If your niche strikes a chord, you may find people willing to do the work of participants A, B and C. (Learn more here. See a deeper dive here.)
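The wikiDAO reward logic above can be mocked end to end in Python. The class, reward amounts, and event hooks below are illustrative stand-ins for contract logic reacting to aggregator events (e.g. a nonzero getActiveDeals result), not the actual interface.

```python
# Sketch of the wikiDAO reward flow: A is paid once the CID has an active
# deal, B is paid per extra replica, C's reward sits in escrow until a
# favorable vote. Names and amounts are hypothetical.

class WikiDAO:
    def __init__(self):
        self.replicas = 0
        self.balances: dict[str, int] = {}   # $WIKI balances
        self.escrow: dict[str, int] = {}

    def on_active_deal(self, uploader: str, reward: int = 100) -> None:
        """Called when an active deal for the CID is confirmed on-chain."""
        self.replicas += 1
        # first replica rewards the uploader (A); later ones reward
        # replicators like B
        self.balances[uploader] = self.balances.get(uploader, 0) + reward

    def propose_upgrade(self, author: str, reward: int = 500) -> None:
        self.escrow[author] = reward         # held until the DAO votes

    def vote(self, author: str, approved: bool) -> None:
        reward = self.escrow.pop(author)
        if approved:                          # favorable vote lifts escrow
            self.balances[author] = self.balances.get(author, 0) + reward

dao = WikiDAO()
dao.on_active_deal("A")        # A uploaded the dataset
dao.on_active_deal("B")        # B stored another replica
dao.propose_upgrade("C")       # C submitted an improved version
dao.vote("C", approved=True)
```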

🌱
Editor notes
Last updated on Feb 2, 2024

With Fluence launching onchain later this year, along with the LLM / GenAI revolution, we expect this category to take off in 2024. Expect to see multiple teams building in this space!

Decentralized Compute 🌟🌟

Filecoin and IPFS distribute content-addressed datasets across storage providers around the world to increase data redundancy, availability and resiliency.

Such globally distributed data brings significant cost, availability, and reliability advantages, but can also mean that parts of a single dataset may end up stored geographically far away from one another. Executing computation jobs or data pipelines on globally distributed data is a difficult problem. On the other hand, having to regroup the data in a central location just to apply computation on it would defeat the purpose.

Pushing compute jobs to the edges and coordinating their execution is a brand new possibility with FVM actors. FVM actors can control and broker computational resources, incentivize compute execution, distribute workloads across available storage providers, and prove the validity of a computation's result in order to claim rewards. See projects like Bacalhau for a framework that builds on IPFS but has yet to be integrated with FVM.

Storage providers could enroll in compute networks through an FVM actor. Compute clients would post jobs to the actor. A mechanism would assign jobs to providers, and once executed, the provider would post a proof to claim rewards.
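That enroll / post / assign / prove / claim lifecycle can be sketched as a minimal state machine. Proof verification is stubbed to a boolean, and the round-robin assignment and names are hypothetical; a real actor would verify a proof (e.g. via Lurk) before paying out.

```python
# Sketch: the compute-actor lifecycle from the paragraph above. All
# names and the assignment rule are hypothetical.

class ComputeActor:
    def __init__(self):
        self.providers: list[str] = []
        self.jobs: dict[int, dict] = {}
        self._next = 0

    def enroll(self, provider: str) -> None:
        self.providers.append(provider)

    def post_job(self, spec: str, reward: int) -> int:
        self._next += 1
        self.jobs[self._next] = {"spec": spec, "reward": reward,
                                 "assignee": None, "paid": False}
        return self._next

    def assign(self, job_id: int) -> str:
        # naive round-robin; real mechanisms could auction jobs instead
        assignee = self.providers[job_id % len(self.providers)]
        self.jobs[job_id]["assignee"] = assignee
        return assignee

    def claim(self, job_id: int, proof_ok: bool) -> int:
        """Pay the reward only if the result proof verifies."""
        if not proof_ok:
            return 0
        self.jobs[job_id]["paid"] = True
        return self.jobs[job_id]["reward"]

actor = ComputeActor()
actor.enroll("sp1")
actor.enroll("sp2")
jid = actor.post_job("wordcount over a dataset CID", reward=10)
who = actor.assign(jid)
paid = actor.claim(jid, proof_ok=True)
```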

Many parts of the current computing stack can be decentralized using FVM: virtual machines, compilation, schedulers and monitors.

Different primitives are needed here. Amongst others:

  • Mechanisms to intelligently distribute, route and deliver compute jobs and their results.
  • Mechanisms to catalog, oversee, and monitor resources available globally.
  • Mechanisms to incentivize storage providers and retrieval nodes to participate in decentralized compute fabrics.
  • Mechanisms to prove the correctness of results (e.g. Lurk, or non-deterministic checks, like those available in Bacalhau) and penalize offenders.
  • Mechanisms to optimize execution plans, and react to unmet expectations from compute providers.

🌱
Editor notes
Last updated on Mar 14, 2023

The FVM core team is actively looking for someone to build this with a specific approach in mind. Please post on #fil-builders and tag @Deep Kapur @Raúl Kripalani on the Filecoin Slack if you have a team and interest in learning more!

Trustless FIL+ Notaries 🌟🌟

Today, DataCap is allocated by Notaries. Notaries help add a layer of social trust to verify that clients are authentic (and prevent sybils from malicious actors). However, with smart contracts we can design permissionless notaries that make it economically irrational to try and sybil.

A trustless notary might look like an on-chain auction, where all participants (clients, storage providers) are required to lock some collateral to participate. By running the auction via a smart contract, everyone can verify that the winning bidder(s) emerged from a transparent process. Economic collateral (from both the clients and storage providers) can be used to create a slashing mechanism to disincentivize malicious actors.
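A toy model of the auction-and-slashing mechanics: every bidder locks collateral, losers are refunded, and the winner recovers collateral only by honoring the deal. The first-price rule and amounts are hypothetical choices for illustration.

```python
# Sketch: a sealed-bid auction for DataCap where all bidders lock
# collateral, losers are refunded, and misbehaving winners are slashed.
# Amounts and the first-price rule are hypothetical.

def run_auction(bids: dict[str, float], collateral: float):
    """Return (winner, refunds) for a first-price sealed-bid auction."""
    winner = max(bids, key=bids.get)
    refunds = {b: collateral for b in bids if b != winner}
    return winner, refunds

def settle_winner(collateral: float, behaved: bool) -> float:
    """Winner recovers collateral only if the deal terms were honored."""
    return collateral if behaved else 0.0   # slashed otherwise

winner, refunds = run_auction({"sp1": 5.0, "sp2": 8.0, "sp3": 6.5},
                              collateral=100.0)
returned = settle_winner(100.0, behaved=True)
```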

For simply running the auction, the notary maintainer might collect some portion of fees when deals clear, collect a fee on locked collateral (e.g. a slice of the yield if staked FIL is used as the collateral), or some combination of both.

Note: Trustless notaries (if designed correctly) have the distinct advantage of being permissionless: they can support any number of use cases that might not want humans in the loop (e.g. ETL pipelines that want to automatically store derivative datasets).


KYC and claims attestation

In order to prevent Sybil attacks and keep miners from gaming the protocol, different Filecoin programs (such as Evergreen) would benefit from having KYC systems built on-chain. These can be used by SPs to provably attest to the claims they are making about themselves. Can this happen in a way that is incentivized, decentralized, and made as autonomous as possible? Zero-knowledge proofs may let organizations verify such claims without revealing an SP's identity.

Initial thoughts for what this could look like:

  • Build / Port Notebook over to FVM for SPs


Decentralized Data Aggregator 🌟🌟

Build a decentralized data aggregator. This tool would trustlessly aggregate files off-chain, with a set of miners that are incentivized to store these files. Once data has been aggregated to 32 GB, a smart contract on FVM would negotiate deals with an SP to store the data that was aggregated.

This would make it possible to build a new storage onramp (such as video.storage) without having to build your own centralized servers to store and aggregate data. The founder of the new onramp would only have to focus on the key parts of making a new onramp work (marketing, new compression, tooling and business development) without having to invest in aggregator infrastructure. Effectively, this needs to operate as a layer where people can post files, get a guarantee the data will be aggregated, and trust that the data will eventually make it on chain.

Furthermore, this would provide existing onramps like Estuary, NFT.storage and web3.storage the ability to aggregate user submissions in a trustless, decentralized manner.


Insurance for Storage Providers 🌟

SPs with an established track record would take out insurance policies that protect them from active faults and ensure an active revenue stream in the case of failure. Certain characteristics (such as payment history, length of operation, availability, etc) can be used to craft insurance policies just as they can be used to underwrite loans to SPs.

In exchange for recurrent insurance premium payments, this protocol would check if the SP is in good standing with the Filecoin network. If it isn’t in good standing, the SP could issue a claim that the protocol would review and issue a payout if all coverage criteria were met.

Storage insurance brokers or marketplaces could emerge to offer single points of subscription and management for clients and providers.

In some ways, this is similar to Nexus Mutual for SPs on FVM. While Nexus guards against contract failure, this protocol would help SPs recover from an active fault or termination.
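A minimal sketch of the premium/claim flow described above, with hypothetical amounts: only covered faults pay out, and payouts are capped both by the policy and by what the pool's reserves can bear.

```python
# Sketch: an insurance pool that collects premiums and pays claims only
# for covered faults, capped by policy and reserves. Numbers are
# hypothetical.

class InsurancePool:
    def __init__(self):
        self.reserves = 0.0
        self.covered: dict[str, float] = {}   # SP -> payout cap

    def pay_premium(self, sp: str, premium: float, payout_cap: float):
        self.reserves += premium
        self.covered[sp] = payout_cap

    def claim(self, sp: str, fault_covered: bool, loss: float) -> float:
        if sp not in self.covered or not fault_covered:
            return 0.0
        payout = min(loss, self.covered[sp], self.reserves)
        self.reserves -= payout
        del self.covered[sp]                  # policy consumed by the claim
        return payout

pool = InsurancePool()
pool.pay_premium("sp1", premium=5.0, payout_cap=50.0)
pool.pay_premium("sp2", premium=5.0, payout_cap=50.0)
paid = pool.claim("sp1", fault_covered=True, loss=80.0)  # reserve-capped
```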


Access Control

With FVM, dapps can define their own access control rules for the data they are onboarding and storing. Smart contracts can programmatically govern who can access a dataset, without the contract having access to the data itself. This enables paywalling media content (such as music and movies), public decryption (a timelock where the data becomes visible to the world after a set time period) and many other interesting applications.

One example of this is Medusa (demo here), a threshold encryption network for on-chain access control. Tooling like this on FVM can help developers delegate access rules for encrypted data.
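As a sketch of what such a policy check might look like (independent of any particular threshold-encryption scheme), here is a timelock-plus-token-gate predicate. The thresholds and parameter names are hypothetical.

```python
# Sketch: an access-control predicate combining a timelock with a token
# gate. In a Medusa-style design, a rule like this would be evaluated
# before releasing a decryption share. Thresholds are made up.

def can_decrypt(now: int, unlock_time: int, token_balance: float,
                min_balance: float = 1.0) -> bool:
    """Allow decryption only after the timelock, and only for holders."""
    return now >= unlock_time and token_balance >= min_balance

ok = can_decrypt(now=1_700_000_100, unlock_time=1_700_000_000,
                 token_balance=2.0)
early = can_decrypt(now=100, unlock_time=200, token_balance=2.0)
```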


User-Friendly Names

The ability to assign user-friendly names to actors and identities on-chain is a basic feature in modern blockchain ecosystems. Humans just aren’t great at memorizing hashes or binary data. Some chains offer native and integrated naming systems (e.g. NEAR), while others rely on user-land services (e.g. ENS), keeping the protocol agnostic of them. The Filecoin community has not planned any native solutions, and there are greenfield opportunities to innovate and implement solid naming services in user-land, backed by their own rules and cryptoeconomies.

Desirable features include the ability to assign, reserve, and dispute names for actors and accounts; resolving addresses from names; reverse lookups; the ability to assign names to Code CIDs; access control through naming; composability with other solutions; history tracking; and more.

See this FIP discussion for some ideas.


Blockchain Nuts & Bolts

Some building blocks are required for every blockchain to succeed. They make sure there is enough liquidity on-chain and (eventually) enough movement of funds from other chains. While these don’t leverage fundamental programmable storage primitives, they are nonetheless important for the functioning and composability of the ecosystem.

🌱
Editor notes
Last updated on May 8, 2023

With the deployment of Sushiswap, the space for DEXes on FVM is getting more saturated. However there is still room to innovate here, especially with a CLOB model similar to Serum.

DEXes & Exchanges

Users on FVM need to be able to exchange FIL for other tokens issued on-chain. This may be a DEX (as simple as a fork of Uniswap or Sushi on the EVM), or involve building a decentralized order book, similar to Serum on Solana.

🌱
Editor notes
Last updated on Feb 2, 2024

Tons of room to innovate here. We are missing CDPs such as AAVE or Compound. Talk to JV from Ansa if you want to learn more!

Overcollateralized Lending 🌟

Users on FVM need to be able to deposit a certain amount of FIL into a protocol and withdraw another token against their collateral. If the price of FIL relative to this token falls, the position can eventually be liquidated. (This could be an AAVE or Compound fork.)
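The core mechanic is a health factor: the position stays safe while the collateral's value covers the debt times a liquidation threshold. The threshold and prices below are hypothetical.

```python
# Sketch: the health check behind an overcollateralized loan -- FIL
# deposited, another token borrowed, liquidation once collateral value
# falls below the threshold. The 150% ratio is hypothetical.

LIQUIDATION_THRESHOLD = 1.5    # collateral must stay >= 150% of debt

def health_factor(fil_collateral: float, fil_price: float,
                  debt_value: float) -> float:
    """> 1.0 means safe; < 1.0 means the position can be liquidated."""
    return (fil_collateral * fil_price) / (debt_value * LIQUIDATION_THRESHOLD)

def liquidatable(fil_collateral: float, fil_price: float,
                 debt_value: float) -> bool:
    return health_factor(fil_collateral, fil_price, debt_value) < 1.0

# 100 FIL at $5 backing $200 of debt: 250% collateralization, safe
hf = health_factor(100, 5.0, 200.0)
# a price drop to $2.50 pushes collateralization to 125%, below 150%
bad = liquidatable(100, 2.5, 200.0)
```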

Token Bridges

While not something immediately on the roadmap, bridges are needed from EVM chains, Move chains and Cosmos chains in order to bring wrapped tokens from other ecosystems into the fold. With the current launch, we are more focused internally, since the value proposition of Filecoin is unique enough that it does not need to bootstrap TVL from other chains. However, in the long run, we expect FVM to be part of a broader family of blockchains.

Brownie points if you can build bridges that reward users for moving wrapped tokens into FVM (perhaps through a bridge token or another reward scheme).

General purpose cross-chain bridges

Storage is a universal need. Smart contracts sitting on other chains benefit from accessing the storage capabilities of the Filecoin network. Similarly, Filecoin actors should be able to interact with code on other chains, or generate proofs of Filecoin state or events that can be understood by agents sitting on other chains.

Building a suite of FVM actors that can process cryptographic primitives and data structures of other chains through IPLD (InterPlanetary Linked Data) enables cross-chain web3 storage utilization. For example, NFT registries could prohibit transactions unless it can be proven that the underlying asset is stored on Filecoin, or rollup contracts could verify that the aggregator has stored transaction data in Filecoin.


Oracles

Price Oracles

In order to understand the price of a token or dataset on Filecoin, we need teams to operate and run oracles. These will be necessary for overcollateralized lending, bridges, insurance, and some order book exchanges, depending on implementation. While not a pressing need for launch, price oracles will be necessary in the long run, as more P1 / P2 use cases come to the fore.

🌱
Editor notes
Last updated on June 20, 2023

If you want to look at CNL efforts on retrievability, have a look at this repo and the related light paper. It describes a similar concept that can be extended and built on FVM.

Retrievability Oracles 🌟

Retrievability oracles are consortia that allow a storage provider to commit to a maximum retrieval price for a client. The basic mechanism is as follows:

  • When striking a deal with a client, a storage provider can commit to retrieval terms.
  • In doing so, the storage provider locks collateral with the retrievability oracle.
  • In normal operation, the client and the storage provider continue to store and retrieve data as normal.
  • In the event the storage provider refuses to serve data (against whatever terms previously agreed), the client can appeal to the retrievability oracle who can request the data from the storage provider.
    • If the storage provider serves the data to the oracle, the data is forwarded to the client.
    • If the storage provider doesn’t serve the data, the storage provider is slashed.

For running the retrieval oracles, the consortium may collect fees (either from storage providers for using the service, fees for accepting different forms of collateral, yield from staked collateral, or perhaps upon retrieval of data on behalf of the client).
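The appeal flow in the bullets above can be mocked as follows. The collateral amount and return messages are hypothetical; a real oracle would verify the served bytes against the deal's CID before forwarding or slashing.

```python
# Sketch: the retrievability-oracle appeal flow -- collateral is locked
# at deal time, appealed retrievals are relayed, refusal is slashed.
# Amounts and messages are hypothetical.

class RetrievabilityOracle:
    def __init__(self):
        self.collateral: dict[str, float] = {}

    def commit(self, sp: str, amount: float) -> None:
        """SP locks collateral when committing to retrieval terms."""
        self.collateral[sp] = amount

    def appeal(self, sp: str, served_to_oracle: bool) -> str:
        """Client appeals; the oracle requests the data from the SP."""
        if served_to_oracle:
            return "data forwarded to client"
        slashed = self.collateral.pop(sp, 0.0)
        return f"SP slashed for {slashed}"

oracle = RetrievabilityOracle()
oracle.commit("sp1", 25.0)
outcome = oracle.appeal("sp1", served_to_oracle=False)
```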