Cointime

ABCDE: A Deep Dive into ZK Coprocessor and Its Future

Validated Venture

With the surge in popularity of the co-processor concept in recent months, this new ZK use case has been attracting increasing attention.

However, we have found that the co-processor concept is still unfamiliar to most people. Its precise positioning, what a co-processor is and what it is not, remains somewhat unclear, and there has yet to be a systematic comparison of the technical solutions in the co-processor race. This article aims to give the market and users a clearer understanding of the co-processor track.

What is a Co-Processor, and What is It Not?

If you were asked to explain a co-processor to a non-technical or developer audience in just one sentence, how would you describe it?

Dr. Dong Mo’s statement is likely very close to the standard answer — a co-processor, in simple terms, is about “empowering smart contracts with the capabilities of Dune Analytics.”

How can we break down this statement?

Imagine a scenario where you are using Dune — you want to provide liquidity on Uniswap V3 to earn some transaction fees. So, you open Dune, find the recent transaction volumes for various pairs on Uniswap, the APR of fees over the last 7 days, the fluctuation range of mainstream pairs, and so on.

Or perhaps, during the popularity of StepN, you start trading sneakers, unsure of when to exit. In this case, you monitor StepN’s data on Dune daily — daily transaction volume, new user count, floor price of sneakers — planning to exit quickly once you notice a slowdown in growth or a downward trend.

Of course, it’s not just you keeping an eye on this data; the development teams of Uniswap and StepN are likely doing the same.

This data is meaningful — it not only helps in assessing changes in trends but also allows for various strategies, much like the “big data” approach commonly used by internet giants.

For example, based on the style and price of shoes users frequently buy and sell, recommending similar shoes.

Or based on the duration users hold Genesis shoes, introducing a “user loyalty rewards program,” offering loyal users more airdrops or benefits.

Or based on the TVL or transaction volume contributed by LPs or traders on Uniswap, launching a CEX-style VIP program that offers traders fee discounts or LPs increased fee shares.

Now, here comes the problem — when internet giants play with big data and AI, it’s essentially a black box. They can manipulate it however they want, users can’t see it, and they don’t care.

But in the Web3 space, transparency and the ethos of decentralization are our natural political correctness — we reject black boxes! So, when you want to implement the scenarios mentioned earlier, you face a dilemma.

Either you use centralized means: manually collect and compute the indexed data with Dune in the backend, then deploy and use it.

Or you write a set of smart contracts to automatically fetch this data on-chain, perform calculations, and deploy it automatically.

The former puts you in a “politically incorrect” trust issue.

The latter racks up astronomical on-chain gas fees, which your wallet (as a project) can't bear.

This is where the co-processor comes in, combining the two approaches above while using technical means to let the "manual backend" step "prove its own innocence." In other words, ZK technology is used to "self-prove the innocence" of the off-chain "indexing + computation" part, and the result is then fed to the smart contract. This resolves the trust issue, and the massive gas fees disappear. Perfect!

Why is it called a "co-processor"? The term comes from the history of Web 2.0 and the introduction of the GPU. The GPU was introduced as separate computing hardware, independent of the CPU, because its architecture could handle computations the CPU fundamentally struggled with, such as large-scale parallel repetitive calculations and graphics computation. It is thanks to this "co-processor" architecture that we have today's spectacular CG movies, games, AI models, and so on; the co-processor architecture was a leap forward in computing system design. Now, various co-processor teams aim to bring this architecture to Web 3.0. Here, the blockchain acts as the CPU of Web3, and whether L1 or L2, it is inherently unsuited to tasks involving "heavy data" and "complex computational logic." Introducing a blockchain co-processor to handle such computations greatly expands the possibilities of blockchain applications.

So, summarizing what the co-processor does, it boils down to two things:

  1. Fetching data from the blockchain and proving through ZK that the data is genuine, without any adulteration.
  2. Performing the corresponding calculations based on the acquired data and proving through ZK that the calculated results are also genuine, without any adulteration. The calculated results can then be called by the smart contract with “low-cost + trustless.”
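The two steps above can be sketched in code. What follows is a conceptual sketch only, not any team's actual API: every name in it is invented for illustration, and a plain hash commitment stands in for a real ZK proof (a real co-processor would emit a succinct proof that a verifier contract checks cheaply on-chain).

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Proof:
    """Stand-in for a ZK proof: here just a hash commitment over a claim."""
    claim: dict
    commitment: str

def _commit(claim: dict) -> str:
    # A real coprocessor emits a succinct ZK proof here; a hash
    # commitment only illustrates the shape of the data flow.
    return hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()

def fetch_and_prove(chain_state: dict, key: str) -> Proof:
    """Step 1: fetch data from the chain and 'prove' it is genuine."""
    claim = {"key": key, "value": chain_state[key]}
    return Proof(claim, _commit(claim))

def compute_and_prove(data_proof: Proof, fn) -> Proof:
    """Step 2: compute over the proven data and 'prove' the result."""
    result = fn(data_proof.claim["value"])
    claim = {"input": data_proof.commitment, "result": result}
    return Proof(claim, _commit(claim))

def verify(proof: Proof) -> bool:
    """What the on-chain verifier contract would check, cheaply."""
    return proof.commitment == _commit(proof.claim)

# Example: prove a pool's TVL history, then prove its 7-day average.
chain = {"pool:tvl_history": [100, 110, 120, 130, 120, 110, 110]}
p1 = fetch_and_prove(chain, "pool:tvl_history")
p2 = compute_and_prove(p1, lambda tvl: sum(tvl) / len(tvl))
assert verify(p1) and verify(p2)
```

The point of the shape: the smart contract only ever runs `verify`, never the indexing or the computation itself.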

Recently, a concept gaining traction at Starkware is Storage Proof, also known as State Proof. It mainly covers step 1, and is represented by Herodotus, Lagrange, and many other ZK-based cross-chain bridge technologies. The co-processor essentially completes step 1 and adds step 2: performing trustless computation after extracting trustless data.

So, to put it more precisely in technical terms, the co-processor should be considered a superset of Storage Proof/State Proof and a subset of Verifiable Computation.

One important note is that the coprocessor is not a Rollup.

Technically, a Rollup's ZK proofs are similar to step 2 above, while step 1, "fetching data," is implemented directly by a Sequencer. Even with a decentralized Sequencer, data is obtained through some competition or consensus mechanism, not through the ZK form of a Storage Proof. More importantly, beyond the computation layer, a ZK Rollup must also implement a storage layer similar to an L1 blockchain. This storage is permanent, whereas a ZK Coprocessor is "stateless": after completing a computation, it doesn't need to retain any state.

In terms of application scenarios, the co-processor can be seen as a service-oriented plugin for all Layer1/Layer2, while Rollup is a separate execution layer that helps with scaling the settlement layer.

Why insist on using ZK? Can’t we use OP?

After reading the above, you might have a question: does a co-processor really have to be implemented with zero-knowledge proofs (ZK)? It sounds like "The Graph with ZK added," and we don't seem to harbor much suspicion about The Graph's results.

That impression exists because ordinary use of The Graph usually doesn't directly involve real money. These indexes serve off-chain services, and what you see on the frontend, such as transaction volume and transaction history, can be provided by any of numerous data indexing providers such as The Graph, Alchemy, Zettablock, etc. However, this data cannot be fed back into a smart contract, because doing so would add extra trust in the indexing service. Once data is linked with real money, especially large Total Value Locked (TVL), that extra trust becomes critical. Imagine a friend asking to borrow $100; you might readily agree. Now imagine they asked to borrow $10,000, or even $100,000.

But on the other hand, do all scenarios related to co-processors really have to be done using ZK? After all, in Rollup, we have two technological paths: Optimistic Rollup (OP) and ZK Rollup. The recent trend of ZKML also introduces the concept of OPML, suggesting that for co-processors, there might be an OP branch, like OP-Coprocessor.

And indeed, there is. However, for now we will keep the specific details confidential; we will release more detailed information soon.

A Comparison of Common Co-Processor Technical Solutions in the Market

1. Brevis:

Brevis’s architecture consists of three components: zkFabric, zkQueryNet, and zkAggregatorRollup. Below is an architectural diagram of Brevis:

zkFabric: Collects block headers from all connected blockchains and generates a Zero-Knowledge (ZK) proof validating the effectiveness of these block headers.

Through zkFabric, Brevis has achieved a co-processor that is interoperable across multiple chains, meaning it enables one blockchain to access any historical data from another blockchain.

zkQueryNet: An open marketplace of ZK query engines that accepts data queries from dApps and processes them against the block headers verified by zkFabric, generating ZK query proofs. These engines range from highly specialized functions to general-purpose query languages, meeting the needs of different applications.

zkAggregatorRollup: A ZK rollup blockchain acting as an aggregation and storage layer for zkFabric and zkQueryNet. It verifies proofs from both components, stores verified data, and submits the zk-verified state root to all connected blockchains.

As for zkFabric, ensuring the security of the part responsible for generating proofs for block headers is crucial. The architecture diagram for zkFabric is shown below:

zkFabric relies on zero-knowledge proofs in its light clients to generate proofs, ensuring complete trustlessness without depending on any external verification entity. Its security derives entirely from the underlying blockchains and mathematically reliable proofs.

The zkFabric Prover Network implements circuits for the light client protocol of each blockchain, generating proofs of block header validity. Provers can leverage accelerators such as GPU, FPGA, and ASIC to minimize proof time and costs.

zkFabric relies on the security assumptions of the underlying blockchains and the underlying cryptographic protocols. However, for zkFabric to remain effective, at least one honest relay is needed to synchronize the correct fork. zkFabric therefore adopts a decentralized relay network, instead of a single relay, to optimize its effectiveness. This relay network can leverage existing structures, such as the State Guardian Network in the Celer network.

  • Prover Allocation: The prover network is a decentralized Zero-Knowledge Proof (ZKP) prover network that requires selecting a prover for each proof generation task and paying fees to these provers.
  • Current Deployments: Currently deployed as examples and proofs of concept for various blockchains, including Ethereum PoS, Cosmos Tendermint, and BNB Chain.
  • Brevis is currently collaborating on Uniswap V4 hooks.

Uniswap V4 hooks are programmable plugins for custom pool design. Hooks significantly enhance the customizability of Uniswap pools, addressing a gap relative to centralized exchanges (CEXs): pools have lacked the effective data-processing capability needed to build features that rely on extensive user transaction data, such as volume-based loyalty programs.

With the assistance of Brevis, the hook addresses these challenges. The hook can now read from the complete historical chain data of users or LPs and run customizable calculations in a fully trustless manner.
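As a hypothetical illustration of the kind of logic such a hook could run once a co-processor has trustlessly proven a trader's historical volume: the tier table, thresholds, and function names below are invented for this sketch and do not come from the Brevis or Uniswap APIs.

```python
# Hypothetical fee-tier logic a Uniswap V4 hook could apply once a
# coprocessor has proven a trader's historical volume trustlessly.
# Tiers and thresholds are invented for illustration.
FEE_TIERS = [
    (1_000_000, 0.0010),  # >= $1M proven 30-day volume -> 0.10% swap fee
    (100_000, 0.0020),    # >= $100k                    -> 0.20%
    (0, 0.0030),          # everyone else               -> 0.30%
]

def fee_for(proven_volume_usd: float) -> float:
    """Return the swap fee for a trader whose volume was proven off-chain."""
    for threshold, fee in FEE_TIERS:
        if proven_volume_usd >= threshold:
            return fee
    return FEE_TIERS[-1][1]

assert fee_for(2_500_000) == 0.0010
assert fee_for(150_000) == 0.0020
assert fee_for(5_000) == 0.0030
```

The hook itself stays tiny; all the heavy lifting (indexing the trader's history and proving the volume figure) happens off-chain in the co-processor.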

2. Herodotus:

Herodotus is a powerful data access middleware that provides smart contracts with the ability to synchronously access current and historical on-chain data across Ethereum layers:

- L1 states from L2s

- L2 states from both L1s and other L2s

- L3/App-Chain states to L2s and L1s

Herodotus introduces the concept of storage proofs, a fusion of inclusion proofs (confirming the existence of data) and computation proofs (verifying the execution of multi-step workflows) to prove the validity of one or more elements in large datasets (such as the entire Ethereum blockchain or rollup).

The core of a blockchain is a database in which data is cryptographically protected using structures such as Merkle trees and Merkle Patricia tries. What makes these structures unique is that, once data has been securely committed to them, they can produce evidence confirming the data is included.

The use of Merkle trees and Merkle Patricia tries underpins the security of the Ethereum blockchain. Because data is hashed at each level of the tree, it is nearly impossible to alter data without detection: any change to a data point requires changing the corresponding hashes all the way up to the root hash, which is publicly visible in the block headers. This fundamental property gives the blockchain a high level of data integrity and immutability.

Furthermore, these trees allow efficient data verification through inclusion proofs. For example, when verifying the inclusion of a transaction or the state of a contract, there’s no need to search the entire Ethereum blockchain — only the relevant paths within the associated Merkle trees need to be verified.
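A minimal sketch of such an inclusion proof, using sha256 from the Python standard library as a stand-in for Ethereum's Keccak-256 and a plain binary Merkle tree rather than a Merkle Patricia trie:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Ethereum uses Keccak-256; sha256 stands in here (stdlib only).
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool marks 'sibling is on the right'."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, right in proof:
        node = h(node + sibling) if right else h(sibling + node)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
proof = inclusion_proof(txs, 2)
assert verify_inclusion(b"tx2", proof, root)       # tx2 is in the block
assert not verify_inclusion(b"txX", proof, root)   # a forged leaf fails
```

Note the proof size: verifying one leaf out of five touches only three sibling hashes, which is exactly why verifiers never need the whole chain.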

The storage proof defined by Herodotus is a fusion of the following:

1. Inclusion Proofs: These confirm the existence of specific data in cryptographic data structures (such as Merkle trees or Merkle Patricia tries), ensuring the relevant data indeed exists in the dataset.

2. Computation Proof: These proofs verify the execution of multi-step workflows, proving the validity of one or more elements in a broad dataset, such as the entire Ethereum blockchain or a rollup. In addition to indicating the existence of data, they also verify the transformations or operations applied to that data.

3. Zero-Knowledge Proofs: These reduce the amount of data a smart contract needs to interact with, enabling smart contracts to confirm the validity of a claim without processing all the underlying data.

Workflow:

1. Obtain Block Hash:

Every piece of data on the blockchain belongs to a specific block. The block hash serves as the unique identifier for that block, summarizing all its contents through the block header. In the workflow of the storage proof, the first essential step is to determine and validate the block hash of the block containing the data of interest.

2. Obtain Block Header:

Once the relevant block hash is obtained, the next step is to access the block header. To do this, the provided block header is hashed, and the result is compared with the block hash obtained in the previous step.

There are two ways to obtain the block hash:

  • Using the BLOCKHASH opcode for retrieval.
  • Querying the Block Hash Accumulator for the hash of historically verified blocks.

This step ensures that the block header being processed is genuine. After completing this step, the smart contract can access any values within the block header.
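The hash-and-compare check above can be sketched as follows. This is a toy sketch: it concatenates made-up header fields and hashes with sha256, whereas Ethereum RLP-encodes the real header and hashes it with Keccak-256.

```python
import hashlib

def header_hash(header_fields: list[bytes]) -> bytes:
    # Ethereum hashes the RLP encoding of the header with Keccak-256;
    # this sketch concatenates fields and uses sha256, both stand-ins.
    return hashlib.sha256(b"".join(header_fields)).digest()

def verify_header(header_fields: list[bytes], expected_block_hash: bytes) -> bool:
    """Hash the provided header and compare it against the block hash
    obtained in the previous step (via BLOCKHASH or the accumulator)."""
    return header_hash(header_fields) == expected_block_hash

# A toy header: parent hash, state root, receipts root, transactions root.
header = [b"parent", b"state_root", b"receipts_root", b"txs_root"]
block_hash = header_hash(header)  # as if fetched in step 1
assert verify_header(header, block_hash)
assert not verify_header([b"tampered"] + header[1:], block_hash)
```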

3. Determine the Desired Roots (Optional)

With the block header in hand, we can delve into its contents, particularly:

- stateRoot: The cryptographic digest of the entire blockchain state when the block occurred.

- receiptsRoot: The cryptographic digest of all transaction outcomes (receipts) in the block.

- transactionsRoot: The cryptographic digest of all transactions that occurred in the block.

These roots can be decoded, enabling the verification of whether the block contains specific accounts, receipts, or transactions.

4. Verify Data Based on the Selected Roots (Optional)

With the chosen roots, and given Ethereum's use of the Merkle Patricia Trie structure, we can use Merkle inclusion proofs to verify the existence of data in the tree. The verification steps vary depending on the type of data and its depth within the block.

Currently supported networks:

- From Ethereum to Starknet

- From Ethereum Goerli* to Starknet Goerli*

- From Ethereum Goerli* to zkSync Era Goerli*

3. Axiom:

Axiom provides a way for developers to query block headers, accounts, or storage values from the entire history of Ethereum. Axiom introduces a new cryptographic approach: all results returned by Axiom are verified on-chain through zero-knowledge proofs, allowing smart contracts to use them without additional trust assumptions.

Axiom recently released Halo2-repl, a browser-based REPL for Halo2 written in JavaScript. It lets developers write ZK circuits using standard JavaScript, without learning a new language like Rust, installing proving libraries, or dealing with dependencies.

Axiom consists of two main technical components:

  • AxiomV1: a cache of the Ethereum blockchain starting from genesis.
  • AxiomV1Query: the smart contract that executes queries against AxiomV1.

Workflow

1. Caching block hashes in AxiomV1:

The AxiomV1 smart contract caches Ethereum block hashes in two forms since the genesis block:

First, it caches the Keccak Merkle roots of batches of 1024 consecutive block hashes. These Merkle roots are updated through ZK proofs verifying that each block header hash either belongs to the most recent 256 blocks directly accessible to the EVM or already exists in the AxiomV1 cache, forming a chain of commitments.

Second, Axiom stores these Merkle roots in a Merkle Mountain Range, starting from the genesis block. The Merkle Mountain Range is built on-chain and is updated with the Keccak Merkle roots cached in the first part.
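The two caching structures can be sketched together: batches of block hashes collapse to Keccak Merkle roots, and those roots accumulate in a Merkle Mountain Range. This is a simplified sketch, not Axiom's actual circuits: sha256 stands in for Keccak-256, and the MMR logic is generic.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()  # stand-in for Keccak-256

def merkle_root(nodes: list[bytes]) -> bytes:
    """Merkle root of a power-of-two list of hashes (1024 in AxiomV1's case)."""
    while len(nodes) > 1:
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def mmr_append(peaks: list[tuple[int, bytes]], leaf: bytes) -> list[tuple[int, bytes]]:
    """Append a leaf to a Merkle Mountain Range, merging equal-height peaks."""
    peaks = peaks + [(0, leaf)]
    while len(peaks) >= 2 and peaks[-1][0] == peaks[-2][0]:
        (height, right), (_, left) = peaks.pop(), peaks.pop()
        peaks.append((height + 1, h(left + right)))
    return peaks

# Cache three batches of 1024 (toy) block hashes: each batch collapses
# to one Merkle root, and the batch roots accumulate in the MMR.
peaks: list[tuple[int, bytes]] = []
for batch in range(3):
    block_hashes = [h(n.to_bytes(8, "big"))
                    for n in range(batch * 1024, (batch + 1) * 1024)]
    peaks = mmr_append(peaks, merkle_root(block_hashes))
assert len(peaks) == 2  # three batch roots -> peaks of heights 1 and 0
```

The MMR shape is what makes appending cheap on-chain: adding a new batch root only touches the peaks, never the interior of already-committed trees.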

2. Fulfilling queries in AxiomV1Query:

The AxiomV1Query smart contract is used for batch queries, enabling trustless access to arbitrary data from historical Ethereum block headers, accounts, and account storage. Queries can be submitted on-chain and are fulfilled on-chain via ZK proofs checked against the block hashes cached by AxiomV1.

These ZK proofs check whether the relevant on-chain data is directly in the block header or in the account or storage Trie of the block by verifying inclusion (or non-inclusion) proofs of the Merkle-Patricia Trie.

4. Nexus

Nexus aims to build a universal platform for verifiable cloud computing using zero-knowledge proofs. It is currently machine-architecture-agnostic, supporting RISC-V, WebAssembly, and EVM. Nexus uses the SuperNova proof system; the team has measured proof-generation memory requirements at 6GB and aims to optimize this further so that proofs can eventually be generated on ordinary user devices.

To be precise, the architecture is divided into two parts:

  • Nexus Zero: A decentralized verifiable cloud computing network supported by zero-knowledge proofs and a universal zkVM.
  • Nexus: A decentralized verifiable cloud computing network powered by multi-party computation, state machine replication, and a universal WASM virtual machine.

Nexus and Nexus Zero applications can be written in traditional programming languages, currently supporting Rust with plans to include more languages in the future.

Nexus applications run on a decentralized cloud computing network, which is essentially a universally connected "serverless blockchain" directly linked to Ethereum. Nexus applications do not inherit Ethereum's security; in exchange, they gain greater computational capabilities (compute, storage, and event-driven I/O) thanks to the smaller network scale. Nexus applications run on a dedicated cloud that reaches internal consensus and provides "proofs" (not true proofs, but verifiable computations) through Ethereum-verifiable internal global threshold signatures.

Nexus Zero applications do inherit Ethereum’s security as they are general-purpose programs with zero-knowledge proofs that can be verified on the BN-254 elliptic curve.

As Nexus can run any deterministic WASM binary in a replicated environment, it is expected to serve as a source of validity, decentralization, and fault tolerance for proof-generating applications, including zk-rollup sequencers, optimistic rollup sequencers, and other verifiers, such as Nexus Zero’s zkVM itself.
