Make relayers subsidise deposit gas fees in exchange for the right to relay withdrawals

Hi! I have a suggestion on how to reduce deposit gas costs on L1 Ethereum:

  1. When a user (Alice) deposits, their leaf is stored on-chain in a queue. No hashing is required, and the deposit Merkle root is not updated.
  2. Anyone (Bob) can merge a queue of multiple leaves to update the merkle root.
  3. In exchange for merging the leaves, Bob receives the right to relay withdrawals for a certain time period.
    • This right-to-relay may have a half-life: for the first hour, only Bob may relay transactions; for the second hour, Bob has the right to relay every other transaction; for the third hour, every 4th transaction; and so on.
    • The withdrawal fees that Bob earns cover the gas he spent to merge the leaves.
    • Bob may grief the system by refusing to relay withdrawals. That’s OK, as his right to relay transactions decays over time and any relayer may then resume withdrawals, though some careful design is needed to strike the right balance.
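The decaying exclusivity described above could look something like this. A Python sketch; the function name and the idea of indexing withdrawals are my own assumptions, not part of the proposal:

```python
def may_relay(hours_since_merge: int, withdrawal_index: int, is_merger: bool) -> bool:
    """Decide whether a relayer may relay a given withdrawal.

    Hypothetical half-life schedule from the proposal: in hour h after
    the merge, the merger holds the exclusive right to every 2**h-th
    withdrawal (hour 0: all of them, hour 1: every 2nd, hour 2: every
    4th, and so on).
    """
    stride = 2 ** hours_since_merge
    reserved_for_merger = withdrawal_index % stride == 0
    # The merger may always relay; anyone else may relay any slot that
    # is no longer reserved for the merger.
    return is_merger or not reserved_for_merger
```

Under this rule the merger’s exclusivity halves each hour, so a griefing merger only delays, rather than blocks, withdrawals.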

An implementation of a Merkle tree which queues leaves rather than updating the root per insertion is here: maci/AccQueue.sol at v1 · appliedzkp/maci · GitHub
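For intuition, here is a minimal Python sketch of the queue-then-merge idea. This is not the AccQueue code: sha256 stands in for the on-chain hash, the class and method names are mine, and it assumes at least one queued leaf before merging:

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    # Stand-in for the on-chain hash function.
    return hashlib.sha256(a + b).digest()

class QueuedTree:
    """Sketch: deposits only append to a queue (no hashing);
    a later merge() hashes the queued leaves and updates the root."""

    def __init__(self):
        self.queue = []
        self.root = None

    def enqueue(self, leaf: bytes):
        # Cheap deposit path: store the leaf, do no hash work.
        self.queue.append(leaf)

    def merge(self):
        # The relayer pays for all the hashing in one batch.
        level = self.queue
        while len(level) > 1:
            level = [h(level[i], level[i + 1] if i + 1 < len(level) else level[i])
                     for i in range(0, len(level), 2)]
        self.root = level[0]
        self.queue = []
        return self.root
```

A real contract would additionally fold the subroot into the main tree’s root; the point of the sketch is only that `enqueue` does no hashing at all.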

The benefits of this approach are:

  1. Lower deposit gas costs for users, so Tornado Cash can remain accessible for small amounts.
  2. The overall gas spent is lower. Let’s say the tree depth is 20 and there are 32 deposits. If each deposit updates the root, the gas spent is 20n * 32 = 640n, where n is the cost of one hash. But if a relayer merges the 32 leaves on behalf of the users, the gas spent is 31n + 15n = 46n: you need 31 hashes to merge 32 leaves into a subroot, and another 15 to update the main Merkle root. (Apologies in advance for any off-by-one errors in this calculation.)
  3. No changes are needed to the zk-snark circuits.
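The hash-count arithmetic in benefit 2 can be checked with a small sketch. Function names are mine, and the counts treat n as the cost of a single hash, with the deposit count assumed to be a power of two:

```python
def per_deposit_hashes(depth: int, deposits: int) -> int:
    # Each insertion recomputes one hash per tree level.
    return depth * deposits

def batched_hashes(depth: int, deposits: int) -> int:
    # deposits - 1 hashes fold the leaves into one subroot of depth
    # log2(deposits); depth - log2(deposits) more hashes climb from
    # that subroot to the main root.
    subtree_depth = deposits.bit_length() - 1  # assumes a power of two
    return (deposits - 1) + (depth - subtree_depth)

# Example from the post: depth 20, 32 deposits.
# per-deposit: 20 * 32 = 640 hashes; batched: 31 + 15 = 46 hashes.
```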

The drawbacks are:

  1. A new deposit pool is necessary as the technique won’t work with the existing pools. If gas costs remain high though, the benefits of creating a new pool may outweigh the costs of a new anonymity set. For instance, if the high gas costs mean that nobody wants to use the 0.1 ETH pool, but would be willing to use a cheaper 0.1 ETH pool, then a new pool would be useful.
  2. Relayers may DoS the system if a lot of relayers have the sole right to relay withdrawals but do nothing. There may be ways to work around this, such as creating time windows where anyone can relay withdrawals (e.g. the first 10 minutes of every hour), so as to limit the impact of malicious relayers.
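The time-window workaround in drawback 2 could be expressed as a simple predicate. A sketch only; the parameter names and the 10-minutes-per-hour defaults are taken from the example above, not from any implementation:

```python
def anyone_may_relay(timestamp: int,
                     window_seconds: int = 600,
                     period_seconds: int = 3600) -> bool:
    """Hypothetical anti-DoS rule: during the first `window_seconds`
    of every `period_seconds` (e.g. the first 10 minutes of each
    hour), any relayer may process withdrawals, regardless of who
    holds the exclusive relay right."""
    return timestamp % period_seconds < window_seconds
```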

Another way to reward relayers is to distribute token rewards.


So you propose to batch deposits. How much gas does this save? I’m not sure how much it would save, since data costs are linear.

I also see more problems:

  • Currently, users are free to bypass relayers and call the withdraw function themselves. Is that still possible in this design?

  • Deposits would not be immediately withdrawable. That looks like bad UX.

I think the user will save a large proportion of gas, as they only need to store their leaf. Could you elaborate on what you mean by linear data costs?

Users won’t be able to withdraw immediately. They’ll have to wait until someone merges the leaves to update the root; only then may they call the withdraw function themselves. This definitely changes the UX for the worse, though it may be worth the tradeoff, e.g. if a user is happy to wait x days for a deposit and assumes that someone will merge the leaves. Fundamentally, this design assumes that there will be enough demand for cheap deposits to incentivise at least one relayer to merge the leaves so that they may earn more withdrawal fees.

There is another approach that saves approximately the same amount of gas, but doesn’t require a queue and inserts deposits immediately, at the cost of adding N (Merkle tree height) snark inputs (which in turn can be avoided by supplying a single snark argument, the hash of all inputs, at the cost of a larger constraint count).

It works like this: imagine that instead of a single N-level tree we have N trees with 0…N levels. When inserting a new leaf, we try to insert it into the smallest tree (of depth 0); if that tree is already full, we merge it instead and propagate the result to the next, higher-depth tree. On average an insertion should consume around 1-2 hashes and 2-3 SSTOREs. I’ll probably need to publish a proper post on that topic.
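If I understand the scheme correctly, it behaves like incrementing a binary counter over a forest of full subtrees. A rough Python sketch, under my own naming, with sha256 standing in for the on-chain hash and no handling of the forest overflowing its top slot:

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    # Stand-in for the on-chain hash function.
    return hashlib.sha256(a + b).digest()

class MerkleForest:
    """Sketch of the N-trees idea: slot k holds the root of a full
    depth-k subtree, or None. Inserting a leaf works like a binary
    increment: while the current slot is occupied, merge and carry
    the combined root into the next slot."""

    def __init__(self, depth: int):
        self.slots = [None] * (depth + 1)

    def insert(self, leaf: bytes):
        node, k = leaf, 0
        # Amortised cost is small: most insertions stop at a low slot,
        # which is where the 1-2 hashes / 2-3 SSTOREs estimate comes from.
        while self.slots[k] is not None:
            node = h(self.slots[k], node)
            self.slots[k] = None
            k += 1
        self.slots[k] = node
```

After four insertions, for example, slots 0 and 1 are empty and slot 2 holds the root of a full depth-2 subtree over all four leaves.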

Another way to decrease the cost of inserting a new leaf into the tree is to prove the insertion with a snark, which should cost around 300k gas on-chain, roughly a 4x improvement.

The problem with new approaches is that the pool is already deployed, and I think the rational thing to do is to integrate many such improvements into a new protocol version, rather than redeploying pools after each incremental change.