06:33:31 I posted a proposal issue on GitHub which would reduce the amount of data needed to verify PoW by 37%: https://github.com/monero-project/monero/issues/9147
07:28:13 but why?
07:28:35 the 32-byte merkle root hash is not stored anywhere, it's computed as needed
07:29:02 and the block hashing blob is small enough already to fit in a single network packet (even if it's json-encoded)
16:48:06 If you're not up to date with the network, you might need thousands and thousands of block header hashing blobs, which is when it would make a difference
16:48:34 If you're already up to date with the network, then yeah, a 28-byte difference doesn't mean much
16:49:31 what difference? these 32 bytes are not stored or transferred, they're computed from transaction hashes
16:49:34 Or if you're a node trying to settle a deep alternative chain, saving a third on header downloads for the alternative chain would make a difference
16:51:02 Right now they're not stored or transferred, we send the whole block body, which I should've clarified in my post, but these blobs are the smallest amount of information we actually need to transfer to validate PoW.
16:52:39 If there was a way to retrieve block header hashing blobs from the nodes, we wouldn't have to send the block to verify PoW and we wouldn't have to fork or anything. It's just that no one has gone to the effort of actually writing that code
16:59:01 the block hashing blob is 76-77 bytes, I don't see the point in optimizing it more. Reducing the merkle hash size from 32 bytes down to 4 raises cryptographic questions about whether it's safe to do. Also, you're trying to optimize code that doesn't even exist yet.
17:00:07 If you really wanted to reduce it, you'd do something like keccak(hashing_blob) and then use the 32-byte output as an input to RandomX
17:00:18 that'd reduce it from 76 to 32 bytes and be relatively safe
17:01:09 the first step of RandomX is actually a Blake2b hash (512-bit), so it already reduces the size to 64 bytes
17:03:18 But I don't see a way to reduce it a lot - you still need to know that this hashing blob comes from this specific block, so it must at least contain prev_id (+32 bytes)
17:04:51 How does the first Blake2b step help us, since we would still have to run RandomX anyway (unlike the intermediate hash, where we can sanity check by only running Blake2b)?
17:06:12 I thought about this, but it then has the trade-off of requiring an extra level of indirection before you can know the block IDs, whereas compressing the merkle tree hashes doesn't
17:09:42 Reducing the merkle hash size down to 4/5 bytes does raise cryptographic questions about whether it's safe to do, but only for the very top block. Every subsequent block header cryptographically binds the previous block's contents with a full 256-bit hash
17:13:19 The only case where it would be useful is when transferring many hashing blobs?
17:13:51 Then I'd just make it part of the RPC call specification that prev_id is skipped in all but the very first blob (because prev_id can be calculated from the previous blob)
17:13:56 here, 32 bytes saved right away
17:14:06 and no need to complicate things everywhere
17:17:27 what else can you save, hmm
17:17:46 block height (4 bytes), for example
17:18:04 so 36 bytes per blob saved without much effort
17:19:29 Block height already isn't needed for the header hashing blobs, but you might be onto something with recomputing the previous IDs
17:20:07 I was wrong, block height isn't stored in the hashing blob...
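A minimal sketch of the "skip prev_id in all but the first blob" idea, in Python. Assumptions: the receiver gets the first hashing blob in full and every later blob with its 32-byte prev_id omitted; block IDs are roughly Keccak-256 over the length-prefixed hashing blob (the authoritative rule, including the hard-coded exception at height 202612, is calculate_block_hash in cryptonote_format_utils.cpp, so check that before relying on this); keccak256 uses pycryptodome as a stand-in for cn_fast_hash, and expand_blobs and the other helper names are illustrative, not existing APIs.

```python
# Sketch: rebuild full header hashing blobs when all but the first omit prev_id.
from Crypto.Hash import keccak  # pycryptodome; cn_fast_hash is legacy Keccak-256


def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()


def write_varint(n: int) -> bytes:
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)


def read_varint(buf: bytes, pos: int) -> tuple[int, int]:
    value, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, pos
        shift += 7


def block_id(hashing_blob: bytes) -> bytes:
    # Simplification: keccak over the length-prefixed hashing blob.
    # The exact consensus rule (including the height-202612 exception) lives in
    # calculate_block_hash() in cryptonote_format_utils.cpp.
    return keccak256(write_varint(len(hashing_blob)) + hashing_blob)


def prev_id_offset(blob: bytes) -> int:
    # Header layout: varint major_version, varint minor_version,
    # varint timestamp, then the 32-byte prev_id.
    pos = 0
    for _ in range(3):
        _, pos = read_varint(blob, pos)
    return pos


def expand_blobs(first_full_blob: bytes, stripped_blobs: list[bytes]) -> list[bytes]:
    """Splice each blob's prev_id back in as the ID of the blob before it."""
    full = [first_full_blob]
    for stripped in stripped_blobs:
        off = prev_id_offset(stripped)
        full.append(stripped[:off] + block_id(full[-1]) + stripped[off:])
    return full
```

A side effect of the loop is that it computes the block ID of every blob in the range, which is what the caller would want to compare against its own chain anyway.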
17:20:18 Since that's how block IDs are calculated anyways (except for height 202612)
17:20:42 but timestamps are still in the blob, so you could send a timestamp delta instead of the actual timestamp, 3-4 bytes saved
17:22:15 We would need a new varint format that can store negative values, but yeah, that could definitely save some space
17:22:44 I have one if it helps.
17:24:10 Yeah! Do you have a link?
17:25:40 The major block version must be equal to what it *should* be according to hardforks.cpp, so we can exclude that field
17:28:20 https://git.townforge.net/townforge/townforge/commit/b188f4636d546c4e10f7d2bd7456311334ae5d2b
17:28:48 Probably better to look at the latest source though, in case it got patched up since; I don't recall.
17:29:26 Look for VARINT_FIELD_SIGNED
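For reference, one common way to build a varint that can store negative values is ZigZag mapping (as used by Protocol Buffers) on top of the usual 7-bit varint; whether Townforge's VARINT_FIELD_SIGNED uses this exact mapping should be checked against the linked source. A Python sketch with illustrative function names:

```python
# Sketch: signed varint via ZigZag mapping (0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4).

def encode_signed_varint(n: int) -> bytes:
    u = (n << 1) if n >= 0 else ((-n) << 1) - 1  # ZigZag: sign goes into the low bit
    out = bytearray()
    while u >= 0x80:
        out.append((u & 0x7F) | 0x80)
        u >>= 7
    out.append(u)
    return bytes(out)


def decode_signed_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    u, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        u |= (b & 0x7F) << shift
        if not b & 0x80:
            break
        shift += 7
    n = (u >> 1) if (u & 1) == 0 else -((u + 1) >> 1)
    return n, pos


# A typical ~2 minute delta fits in 2 bytes instead of the ~5-byte varint for an
# absolute Unix timestamp; out-of-order timestamps give small negative deltas.
assert encode_signed_varint(120) == bytes([0xF0, 0x01])
assert decode_signed_varint(encode_signed_varint(-75))[0] == -75
```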