Cross-input signature aggregation – I’ll focus on full aggregation using Schnorr, as in DahLIAS [0] – is often described as making large collaborative transactions smaller. See e.g. “What block space savings would we get for coinjoins (and payjoins) if we had cross input signature aggregation?” The implication then is that they’re cheaper, given the same fee per vbyte.
They achieve this by having fewer signatures in the witness: just one per transaction instead of one (or more, with OP_CHECKSIGADD) per input.
But at the same time Taproot introduced a way to constrain the number of signature check operations in tapscript [1]:
Sigops limit: The sigops in tapscripts do not count towards the block-wide limit of 80000 (weighted). Instead, there is a per-script sigops budget. The budget equals 50 + the total serialized size in bytes of the transaction input’s witness (including the CompactSize prefix). Executing a signature opcode (OP_CHECKSIG, OP_CHECKSIGVERIFY, or OP_CHECKSIGADD) with a non-empty signature decrements the budget by 50. If that brings the budget below zero, the script fails immediately. Signature opcodes with unknown public key type and non-empty signature are also counted.
One of the footnotes points out that:
The weight per sigop factor 50 corresponds to the ratio of BIP141 block limits: 4 mega weight units divided by 80,000 sigops.
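In code, the budget rule quoted above looks roughly like this (a simplified model for illustration; actual nodes track the budget inside the script interpreter, not as a standalone function):

```python
# Simplified model of BIP342's per-script sigop budget.

SIGOP_COST = 50   # budget decrement per signature check with a non-empty signature
BASE_BUDGET = 50  # constant term in the budget formula

def initial_budget(witness_serialized_size: int) -> int:
    """Budget = 50 + serialized size of the input's witness (incl. CompactSize prefix)."""
    return BASE_BUDGET + witness_serialized_size

def check_sigops(witness_serialized_size: int, nonempty_sig_checks: int) -> bool:
    """Return True if the script stays within its sigop budget."""
    budget = initial_budget(witness_serialized_size)
    for _ in range(nonempty_sig_checks):
        budget -= SIGOP_COST
        if budget < 0:
            return False  # script fails immediately
    return True
```

With a 64-byte Schnorr signature on the stack (65 bytes serialized with its length prefix), each signature brings more budget than its own check consumes, so ordinary scripts never hit the limit.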
So what do we do now?
If we continue to decrement the budget for each signature check opcode, then, because the signature is no longer in the input’s witness, wallets would have to pad the witness with 49 bytes. That would make it barely cheaper than just putting a 64-byte signature there.
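The arithmetic behind the 49-byte figure can be sketched as follows (an illustration under the assumption that an aggregated input carries an empty signature stack element, which still has a 1-byte length prefix; how such inputs would actually be encoded is an open design question):

```python
# Rough arithmetic for the padding observation (illustrative assumptions only).

SIGOP_COST = 50    # budget consumed by a signature check with a non-empty signature
SCHNORR_SIG = 64   # bytes of a BIP340 signature
LEN_PREFIX = 1     # length prefix of a witness stack element

# Today, a signature contributes 65 bytes of budget while costing 50.
budget_per_sig_today = SCHNORR_SIG + LEN_PREFIX

# With aggregation, an empty placeholder contributes only its 1-byte prefix,
# so 49 bytes of padding are needed to cover one 50-unit sigop.
padding_needed = SIGOP_COST - LEN_PREFIX

# Net witness-size saving per aggregated signature under this scheme.
savings = budget_per_sig_today - (padding_needed + LEN_PREFIX)
```

Under these assumptions only 15 witness bytes are saved per signature, which is the sense in which it is “barely cheaper” than keeping the 64-byte signature.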
Alternatively, we could modify the accounting, since we need a new tapscript version anyway – Will cross-input signature aggregation need a new output type?
One way is to have the budget only decrease by 50 at the last aggregated signature (or in whichever transaction field the actual signature ends up).
But the validation cost of each additional signature may not be zero; it may instead decrease (perhaps exponentially) with each signature, so you’d need a more complicated accounting scheme. You could also still end up having to pad the input (or resort to annex magic). Though at least you’d have fee savings.
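Purely as an illustration of what such a decreasing-cost accounting scheme could look like, here is a sketch where the k-th aggregated signature check costs half of the previous one, down to a floor (the halving schedule and the floor are made up for illustration, not proposed anywhere):

```python
# Hypothetical decreasing-cost accounting for aggregated signature checks.
# The halving schedule below is illustrative only.

def aggregated_sigop_cost(k: int, first_cost: int = 50, floor: int = 1) -> int:
    """Cost of the k-th (1-indexed) aggregated signature check: halves each time."""
    return max(first_cost >> (k - 1), floor)

def total_cost(num_sigs: int) -> int:
    """Total budget consumed by num_sigs aggregated signature checks."""
    return sum(aggregated_sigop_cost(k) for k in range(1, num_sigs + 1))
```

Under such a schedule three aggregated checks would consume 50 + 25 + 12 = 87 units instead of 150, so far less padding would be needed, at the price of a more complicated consensus rule.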
The end result of the latter approach is having more signature check op codes per block, but the total signature check compute resources wouldn’t go up. Is that the idea?
[0] https://gnusha.org/pi/bitcoindev/[email protected]/
[1] https://github.com/bitcoin/bips/blob/master/bip-0342.mediawiki#resource-limits