
ABCDE:A Deep Dive into ZK Coprocessor and Its Future


With the co-processor concept surging in popularity over the past few months, this new ZK use case has been attracting increasing attention.

However, we have found that co-processors are still unfamiliar to most people, and their precise positioning (what they are and what they are not) remains unclear. Nor has there been a systematic comparison of the technical solutions in the co-processor space. This article aims to give the market and users a clearer picture of the co-processor track.

What is a Co-Processor, and What is It Not?

If you were asked to explain a co-processor to a non-technical or developer audience in just one sentence, how would you describe it?

Dr. Dong Mo’s statement is likely very close to the standard answer — a co-processor, in simple terms, is about “empowering smart contracts with the capabilities of Dune Analytics.”

How can we break down this statement?

Imagine a scenario where you are using Dune: you want to provide liquidity on Uniswap V3 to earn some transaction fees. So you open Dune and look up the recent trading volumes of various pairs on Uniswap, the fee APR over the last 7 days, the price ranges of mainstream pairs, and so on.

Or perhaps, during the popularity of StepN, you start trading sneakers, unsure of when to exit. In this case, you monitor StepN’s data on Dune daily — daily transaction volume, new user count, floor price of sneakers — planning to exit quickly once you notice a slowdown in growth or a downward trend.

Of course, it’s not just you keeping an eye on this data; the development teams of Uniswap and StepN are likely doing the same.

This data is meaningful — it not only helps in assessing changes in trends but also allows for various strategies, much like the “big data” approach commonly used by internet giants.

For example, based on the style and price of shoes users frequently buy and sell, recommending similar shoes.

Or based on the duration users hold Genesis shoes, introducing a “user loyalty rewards program,” offering loyal users more airdrops or benefits.

Or based on the TVL or trading volume contributed by LPs or traders on Uniswap, launching a VIP program similar to a CEX's, giving traders fee reductions or LPs a larger share of fees as benefits.

Now, here comes the problem — when internet giants play with big data and AI, it’s essentially a black box. They can manipulate it however they want, users can’t see it, and they don’t care.

But in the Web3 space, transparency and the ethos of decentralization are our natural political correctness — we reject black boxes! So, when you want to implement the scenarios mentioned earlier, you face a dilemma.

Either you implement it through centralized means: manually collect and compute the indexed data with Dune in the backend, then deploy the implementation.

Or you write a set of smart contracts to automatically fetch this data on-chain, perform the calculations, and deploy the result automatically.

The former lands you in a "politically incorrect" trust problem.

The latter generates astronomical gas fees on-chain, which your (project's) wallet cannot bear.

This is where the co-processor comes in, combining the two approaches above: the "manual backend" step proves its own honesty through technical means. In other words, ZK technology is used to "self-prove the innocence" of the off-chain "indexing + computation" part, and the result is then fed to the smart contract. The trust issue is resolved and the massive gas fees disappear. Perfect!

Why is it called a "co-processor"? The term comes from the history of Web2 computing, specifically the introduction of the GPU. The GPU was introduced as a separate piece of computing hardware, independent of the CPU, because its architecture could handle workloads that were fundamentally difficult for the CPU, such as large-scale parallel repetitive calculations and graphics computation. It is thanks to this "co-processor" architecture that we have today's spectacular CG films, games, AI models, and so on; the co-processor architecture was, in essence, a leap forward in computing system architecture.

Various co-processor teams now aim to bring this architecture into Web3. Here, the blockchain plays the role of Web3's CPU: whether L1 or L2, it is inherently unsuited to tasks involving "heavy data" and "complex computational logic." Introducing a blockchain co-processor to handle such computations greatly expands the possibilities of blockchain applications.

So, summarizing what the co-processor does, it boils down to two things:

  1. Fetching data from the blockchain and proving through ZK that the data is genuine, without any adulteration.
  2. Performing the corresponding calculations on the fetched data and proving through ZK that the results are likewise genuine, without any adulteration. The results can then be consumed by smart contracts in a "low-cost + trustless" way.
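To make these two steps concrete, below is a minimal Python sketch of what a coprocessor request and response might look like from a dApp's perspective. All of the names here (CoprocessorQuery, ProofResult, submit_query) are hypothetical illustrations, not any project's actual API.

```python
from dataclasses import dataclass

# Hypothetical shapes only -- no real coprocessor exposes exactly this API.
# They simply mirror the two steps described above.

@dataclass
class CoprocessorQuery:
    chain_id: int        # which chain to read historical data from
    block_range: tuple   # (start_block, end_block) to index
    target: str          # contract or account whose data is needed
    computation: str     # identifier of the circuit / program to run on that data

@dataclass
class ProofResult:
    result: bytes        # e.g. ABI-encoded 30-day trading volume
    proof: bytes         # ZK proof covering both the data fetch and the computation

def submit_query(query: CoprocessorQuery) -> ProofResult:
    """Step 1: index the requested on-chain data off-chain.
    Step 2: run the computation and produce a ZK proof over both steps.
    The on-chain side only verifies `proof` before trusting `result`."""
    raise NotImplementedError("performed off-chain by the coprocessor network")
```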

Recently, a concept called Storage Proof (also known as State Proof) has been gaining traction at Starkware. It mainly covers step 1 and is represented by Herodotus, Lagrange, and many other ZK-based cross-chain bridge technologies. The co-processor is essentially step 1 plus step 2: after extracting data trustlessly, it also performs trustless computation on that data.

So, to put it in more precise technical terms, the co-processor should be considered a superset of Storage Proof / State Proof and a subset of Verifiable Computation.

One important note is that the coprocessor is not a Rollup.

Technically, Rollup’s ZK proofs are similar to step 2 mentioned above. The process of “fetching data” in step 1 is directly implemented through a Sequencer. Even in the case of a decentralized Sequencer, it’s done through some form of competition or consensus mechanism, not the ZK form of Storage Proof.More importantly, in addition to the computation layer, ZK Rollup needs to implement a storage layer similar to L1 blockchain. This storage is permanent, whereas ZK Coprocessor is “stateless”; after completing the computation, it doesn’t need to retain all states.

In terms of application scenarios, the co-processor can be seen as a service-oriented plugin for all Layer1/Layer2, while Rollup is a separate execution layer that helps with scaling the settlement layer.

Why insist on using ZK? Can’t we use OP?

After reading the above, you might wonder: does a co-processor really have to be implemented with zero-knowledge proofs (ZK)? It sounds like "The Graph with ZK added," and we don't seem to harbor much doubt about The Graph's results.

It’s said like that because, in ordinary use of Graph, it’s usually not directly related to real money. These indexes serve off-chain services, and what you see on the frontend user interface, such as transaction volume and history, can be provided by various data indexing providers like Graph, Alchemy, Zettablock, etc. However, this data cannot be pushed back into the smart contract because doing so would add extra trust to this indexing service. When data is linked with real money, especially with large Total Value Locked (TVL), this additional trust becomes crucial. Imagine a friend asking to borrow $100 — you might readily agree. Now, imagine if they asked to borrow $10,000, or even $100,000?

But on the other hand, do all scenarios related to co-processors really have to be done using ZK? After all, in Rollup, we have two technological paths: Optimistic Rollup (OP) and ZK Rollup. The recent trend of ZKML also introduces the concept of OPML, suggesting that for co-processors, there might be an OP branch, like OP-Coprocessor.

And indeed, there is. However, for now we will keep the specifics confidential. We will release more detailed information soon.

A Comparison of Common Co-Processor Technical Solutions in the Market

1. Brevis:

Brevis’s architecture consists of three components: zkFabric, zkQueryNet, and zkAggregatorRollup. Below is an architectural diagram of Brevis:

zkFabric: Collects block headers from all connected blockchains and generates zero-knowledge (ZK) proofs attesting to the validity of those block headers.

Through zkFabric, Brevis has achieved a co-processor that is interoperable across multiple chains, meaning it enables one blockchain to access any historical data from another blockchain.

zkQueryNet: An open ZK query engine marketplace that accepts data queries from dApps and processes them. The query engines use the block headers verified by zkFabric to handle these queries and generate ZK query proofs. They offer both highly specialized functionality and a general query language to meet various application requirements.

zkAggregatorRollup: A ZK rollup blockchain acting as an aggregation and storage layer for zkFabric and zkQueryNet. It verifies proofs from both components, stores verified data, and submits the zk-verified state root to all connected blockchains.

For zkFabric, the security of the component responsible for generating proofs for block headers is crucial. The zkFabric architecture is shown below:

zkFabric relies on zero-knowledge proofs (ZKP) in its light client to generate proofs, making it fully trustless without depending on any external verification entity. Its security derives entirely from the underlying blockchains and mathematically reliable proofs.

The zkFabric Prover Network implements circuits for the light client protocol of each blockchain, generating proofs of block header validity. Provers can leverage accelerators such as GPU, FPGA, and ASIC to minimize proof time and costs.

zkFabric relies on the security assumptions of the underlying blockchains and the underlying cryptographic protocols. However, to ensure zkFabric remains effective, at least one honest relayer is needed to synchronize the correct fork. Therefore, zkFabric uses a decentralized relayer network rather than a single relayer, optimizing for this effectiveness. The relayer network can leverage existing structures, such as the State Guardian Network in Celer.

  • Prover Allocation: The prover network is a decentralized Zero-Knowledge Proof (ZKP) prover network that requires selecting a prover for each proof generation task and paying fees to these provers.
  • Current deployments: Currently deployed as examples and proofs of concept for various blockchains, including Ethereum PoS, Cosmos Tendermint, and BNB Chain.
  • Brevis is currently collaborating with Uniswap on V4 hooks.

Uniswap V4 hooks are programmable plugins for customizing pool design. Hooks greatly enhance the customizability of Uniswap pools, but compared with centralized exchanges (CEXs), pools still lack the effective data-processing capability needed to build features that rely on extensive user transaction data, such as volume-based loyalty programs.

With Brevis, hooks overcome this limitation: they can now read a user's or LP's complete historical on-chain data and run customizable computations in a fully trustless way.
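As a toy illustration of the kind of logic such a hook could apply once historical data can be trusted, here is a sketch of a volume-based fee tier. The thresholds are invented, and a real hook would be a Solidity contract consuming a proof verified on-chain; this sketch only shows the fee logic itself.

```python
# Hypothetical fee-tier logic a Uniswap V4 hook could apply once a trader's
# historical volume has been proven by a coprocessor. The thresholds are invented,
# and a real hook would be a Solidity contract consuming an on-chain-verified proof.

BASE_FEE_BPS = 30  # 0.30% swap fee, in basis points

def discounted_fee_bps(proved_30d_volume_usd: float) -> int:
    """Swap fee (in bps) for a trader whose 30-day volume has been
    attested by a ZK proof the contract has already verified."""
    if proved_30d_volume_usd >= 10_000_000:
        return 10   # "VIP" tier
    if proved_30d_volume_usd >= 1_000_000:
        return 20
    return BASE_FEE_BPS

# Example: a trader with $2.5M of proved 30-day volume pays 20 bps instead of 30.
assert discounted_fee_bps(2_500_000) == 20
```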

2. Herodotus:

Herodotus is a powerful data access middleware that provides smart contracts with the ability to synchronously access current and historical on-chain data across Ethereum layers:

- L1 states from L2s

- L2 states from both L1s and other L2s

- L3/App-Chain states to L2s and L1s

Herodotus introduces the concept of storage proofs, a fusion of inclusion proofs (confirming the existence of data) and computation proofs (verifying the execution of multi-step workflows) to prove the validity of one or more elements in large datasets (such as the entire Ethereum blockchain or rollup).

The core of the blockchain is a database where data is cryptographically protected using data structures like Merkle trees and Merkle Patricia trees. The uniqueness of these data structures lies in their ability to generate evidence confirming that the data is included in the structure once it has been securely committed to them.

The use of Merkle trees and Merkle Patricia trees strengthens the security of the Ethereum blockchain. Because data is hashed at every level of the tree, it is nearly impossible to alter the data without detection: any change to a data point requires changing the corresponding hashes all the way up to the root hash, which is publicly visible in the block header. This fundamental feature gives the blockchain a high level of data integrity and immutability.

Furthermore, these trees allow efficient data verification through inclusion proofs. For example, when verifying the inclusion of a transaction or the state of a contract, there’s no need to search the entire Ethereum blockchain — only the relevant paths within the associated Merkle trees need to be verified.
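To make inclusion proofs concrete, here is a minimal, self-contained sketch of building and verifying a Merkle proof. It uses SHA-256 and a plain binary Merkle tree as stand-ins; Ethereum itself uses keccak256 and the more involved Merkle Patricia Trie, so treat this only as the underlying idea.

```python
import hashlib

# Minimal binary-Merkle-tree inclusion proof, using SHA-256 as a stand-in hash.
# Ethereum's state uses keccak256 and a Merkle Patricia Trie, so this shows only
# the core idea, not the real verification algorithm.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Collect the sibling hashes along the path from leaf `index` to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append(level[sibling])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify_inclusion(b"tx-c", 2, proof, root)        # included
assert not verify_inclusion(b"tx-x", 2, proof, root)    # not included
```

Note that the proof contains only one sibling hash per tree level, which is why verification never needs to touch the rest of the dataset.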

The storage proof defined by Herodotus is a fusion of the following:

1. Inclusion Proof: These proofs confirm the existence of specific data within cryptographic data structures (such as Merkle trees or Merkle Patricia trees), ensuring that the relevant data indeed exists in the dataset.

2. Computation Proof: These proofs verify the execution of multi-step workflows, proving the validity of one or more elements in a broad dataset, such as the entire Ethereum blockchain or a rollup. In addition to indicating the existence of data, they also verify the transformations or operations applied to that data.

3. Zero-Knowledge Proof: Simplifies the amount of data smart contracts need to interact with. Zero-knowledge proofs enable smart contracts to confirm the validity of claims without processing all the underlying data.

Workflow:

1. Obtain Block Hash:

Every piece of data on the blockchain belongs to a specific block. The block hash serves as the unique identifier for that block, summarizing all its contents through the block header. In the workflow of the storage proof, the first essential step is to determine and validate the block hash of the block containing the data of interest.

2. Obtain Block Header:

Once the relevant block hash is obtained, the next step is to access the block header. To do this, the header associated with that block hash is hashed, and the resulting hash is compared against the block hash obtained in the previous step:

There are two ways to obtain the block hash:

  • Using the BLOCKHASH opcode for retrieval.
  • Querying the Block Hash Accumulator for the hash of historically verified blocks.

This step ensures that the block header being processed is genuine. After completing this step, the smart contract can access any values within the block header.
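A minimal sketch of the check in this step, assuming the RLP-encoded block header bytes have already been fetched from some provider (how they are obtained is outside the scope of the sketch). Web3.keccak is web3.py's keccak256 helper; everything else is illustrative.

```python
from web3 import Web3

def verify_block_header(header_rlp: bytes, expected_block_hash: bytes) -> bool:
    """Hash the provided RLP-encoded block header and compare it against the
    block hash obtained in step 1 (via the BLOCKHASH opcode or a block hash
    accumulator). If they match, every field inside the header can be trusted."""
    return Web3.keccak(header_rlp) == expected_block_hash
```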

3. Determine the Desired Roots (Optional)

With the block header in hand, we can delve into its contents, particularly:

- stateRoot: The cryptographic digest of the entire blockchain state when the block occurred.

- receiptsRoot: The cryptographic digest of all transaction outcomes (receipts) in the block.

- transactionsRoot: The cryptographic digest of all transactions that occurred in the block.

These roots can be decoded, enabling the verification of whether the block contains specific accounts, receipts, or transactions.
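For reference, these three roots are plainly visible in any block header. Here is a quick web3.py sketch that reads them; the RPC endpoint and block number are placeholders.

```python
from web3 import Web3

# Placeholder RPC endpoint -- substitute a real provider URL.
w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.invalid"))

block = w3.eth.get_block(19_000_000)    # any historical block number
print(block["stateRoot"].hex())         # digest of the entire world state at this block
print(block["receiptsRoot"].hex())      # digest of all transaction receipts in the block
print(block["transactionsRoot"].hex())  # digest of all transactions in the block
```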

4. Verify Data Based on the Selected Roots (Optional)

With the chosen roots, and given Ethereum's use of the Merkle Patricia Trie structure, we can use Merkle inclusion proofs to verify that data exists in the tree. The verification steps vary depending on the data and on its depth within the block.
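In practice, the raw material for such a Merkle-Patricia inclusion proof can be fetched from an archive node with the standard eth_getProof RPC (exposed in web3.py as get_proof) and then verified against the block's stateRoot. The address, slot, and endpoint below are placeholders.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.invalid"))  # placeholder URL

# eth_getProof returns the Merkle-Patricia proof nodes for an account and,
# optionally, for specific storage slots, at a given block.
proof = w3.eth.get_proof(
    "0x0000000000000000000000000000000000000000",  # placeholder account address
    [0],                                            # storage slot(s) of interest
    19_000_000,                                     # block to prove against
)

account_proof_nodes = proof["accountProof"]   # path from the block's stateRoot to the account leaf
storage_proof_nodes = proof["storageProof"]   # per-slot paths from the account's storage root
# Verifying these nodes against the header's stateRoot is exactly the inclusion check
# described above; Herodotus wraps that verification inside a ZK proof.
```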

Currently supported networks:

- From Ethereum to Starknet

- From Ethereum Goerli* to Starknet Goerli*

- From Ethereum Goerli* to zkSync Era Goerli*

3. Axiom:

Axiom provides a way for developers to query block headers, accounts, or storage values from the entire history of Ethereum. Axiom introduces a new cryptographic approach: all results it returns are verified on-chain through zero-knowledge proofs, so smart contracts can use them without additional trust assumptions.

Axiom recently released Halo2-repl, a browser-based REPL for Halo2 written in JavaScript. It lets developers write ZK circuits in standard JavaScript without learning a new language such as Rust, installing proving libraries, or dealing with dependencies.

Axiom consists of two main technical components:

  • AxiomV1: A cache of the Ethereum blockchain, starting from the genesis block.
  • AxiomV1Query: The smart contract that fulfills queries against AxiomV1.

Workflow

(1) Caching block hashes in AxiomV1:

The AxiomV1 smart contract caches Ethereum block hashes in two forms since the genesis block:

First, it caches the Keccak Merkle roots of batches of 1024 consecutive block hashes. These Merkle roots are updated through ZK proofs verifying that the block headers form a hash chain ending either at one of the most recent 256 block hashes directly accessible to the EVM or at a block hash that already exists in the AxiomV1 cache.

Second, Axiom stores these Merkle roots in a Merkle Mountain Range, starting from the genesis block. The Merkle Mountain Range is built on-chain and updated using the first part of the cache, the Keccak Merkle roots.
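A simplified sketch of the first cache layer: committing to a batch of 1024 block hashes with a Keccak Merkle root. Axiom's real circuit performs this (together with the header-chain check) inside a ZK proof; the dummy hashes and helper below are purely illustrative.

```python
from web3 import Web3

BATCH_SIZE = 1024  # AxiomV1 commits to block hashes in batches of 1024

def keccak_merkle_root(block_hashes: list) -> bytes:
    """Keccak Merkle root over a power-of-two batch of block hashes. This mirrors
    the commitment cached on-chain; the real circuit additionally proves that the
    hashes form a valid parent-hash chain."""
    assert len(block_hashes) == BATCH_SIZE
    level = list(block_hashes)
    while len(level) > 1:
        level = [bytes(Web3.keccak(level[i] + level[i + 1]))
                 for i in range(0, len(level), 2)]
    return level[0]

# Dummy stand-ins for 1024 consecutive block hashes, purely for illustration.
dummy_hashes = [bytes(Web3.keccak(i.to_bytes(32, "big"))) for i in range(BATCH_SIZE)]
print(keccak_merkle_root(dummy_hashes).hex())
```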

(2) Fulfilling queries in AxiomV1Query:

The AxiomV1Query smart contract is used for batch queries, enabling trustless access to arbitrary historical Ethereum block headers, accounts, and account storage. Queries can be made on-chain and are fulfilled on-chain via ZK proofs checked against the block hashes cached by AxiomV1.

These ZK proofs check whether the relevant on-chain data lies directly in the block header or in the block's account or storage trie, by verifying inclusion (or non-inclusion) proofs of the Merkle Patricia Trie.

4. Nexus

Nexus aims to build a universal platform for verifiable cloud computing using zero-knowledge proofs. It is currently machine-architecture-agnostic, supporting RISC-V, WebAssembly, and EVM. Nexus uses the Supernova proof system; the team has measured proof-generation memory requirements at around 6GB and aims to optimize further so that ordinary user devices can generate proofs in the future.

To be precise, the architecture is divided into two parts:

  • Nexus Zero: A decentralized verifiable cloud computing network supported by zero-knowledge proofs and a universal zkVM.
  • Nexus: A decentralized verifiable cloud computing network powered by multi-party computation, state machine replication, and a universal WASM virtual machine.

Nexus and Nexus Zero applications can be written in traditional programming languages, currently supporting Rust with plans to include more languages in the future.

Nexus applications run within a decentralized cloud computing network, essentially a universal "serverless blockchain" connected directly to Ethereum. Nexus applications do not inherit Ethereum's security; in exchange, thanks to the smaller network scale, they gain greater computational capability (compute, storage, and event-driven I/O). Nexus applications run on a dedicated cloud that reaches internal consensus and provides "proofs" (not true proofs, but verifiable computations) through global threshold signatures verifiable inside Ethereum.

Nexus Zero applications, by contrast, do inherit Ethereum's security, because they are general-purpose programs accompanied by zero-knowledge proofs that can be verified over the BN-254 elliptic curve.

As Nexus can run any deterministic WASM binary in a replicated environment, it is expected to serve as a source of validity, decentralization, and fault tolerance for proof-generating applications, including zk-rollup sequencers, optimistic rollup sequencers, and other verifiers, such as Nexus Zero’s zkVM itself.
