00:27:42 Would it really add that much of an increased burden? My thinking was that by halving both the tail emission and the block difficulty, it would simultaneously cut confirmation times in half while keeping mining rewards the same relative to work done. Outside of increased storage requirements, is there another bottleneck I'm not taking into consideration, like networking?
00:32:02 Also, before someone points out that cutting the tail emission could harm trust in the stability of the network: it could instead be a rephrasing of the way payouts are (sorta) determined.
00:32:02 Instead of defining mining payouts directly as "n XMR per block mined", it could be n XMR per "hour", with an hour being 30 blocks (the current mining payout would be 12 XMR per hour, btw, and halving both the tail emission and block difficulty would result in the same mining payout).
01:04:58 preland: aside from the fact that this isn't the room for this, your premise is incorrect: you don't cut the difficulty in half; you cut the target block time in half, and the resulting difficulty will then be halved (assuming the nethash stays constant). Halving the base reward would indeed keep things consistent from a financial perspective.
01:05:57 In fact, if you read up on Monero's history, you'll see that a long time ago the devs did the exact opposite: they doubled the target block time and the base reward.
22:02:50 xFFFC0000: This may be useful for creating many wallets and transactions: https://github.com/ACK-J/Monero-Dataset-Pipeline
22:03:00 by xmrack
22:20:43 Wrote this one today: https://github.com/0xFFFC0000/monero-perf
22:22:06 The idea is to have two nodes running on the machine. One is the main one, connected to mainnet and synced. The other is connected only to the first one. You run whatever operation you want (pop_blocks, etc.) on the second one and measure it.
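The two-node setup described above can be sketched as a helper that assembles the monerod invocations. This is only an illustration of the idea, not part of monero-perf: `--add-exclusive-node`, `--p2p-bind-port`, `--rpc-bind-port`, `--no-igd`, `--data-dir`, and `--detach` are standard monerod flags, but the ports and data directories here are made up and should be adjusted before actually launching anything.

```python
def two_node_commands(base_dir="/tmp/bench"):
    """Build the command lines for the local two-node benchmark setup.

    Illustrative sketch: node 1 is the main node synced against mainnet
    on the default ports; node 2 peers exclusively with node 1 over
    loopback, so benchmark operations run against it see no WAN latency.
    """
    # Node 1: main node, default p2p/RPC ports (18080/18081).
    main = [
        "monerod",
        f"--data-dir={base_dir}/node1",
        "--detach",
    ]
    # Node 2: connects only to node 1 via --add-exclusive-node.
    bench = [
        "monerod",
        f"--data-dir={base_dir}/node2",
        "--p2p-bind-port=28080",
        "--rpc-bind-port=28081",
        "--add-exclusive-node=127.0.0.1:18080",
        "--no-igd",
        "--detach",
    ]
    return main, bench

main_cmd, bench_cmd = two_node_commands()
print(" ".join(bench_cmd))
```

Operations such as pop_blocks would then be issued against node 2's RPC port (28081 in this sketch) and timed.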
22:23:17 Since the connection between the second node and the first is local, network latency is removed from the equation. Run it many times to calculate the mean/median, etc., and since it is on mainnet, all the data is as real as it gets.
22:24:00 Very useful for RPC/sync-related benchmarking.
22:27:07 The one rucknium mentioned is much bigger; mine is a script for benchmarking while developing.
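The run-many-times, mean/median approach can be sketched as a small timing harness. The `run_once` callable is a placeholder of my own: in the setup described above it would issue the operation under test (e.g. an RPC call such as pop_blocks) against the locally peered second node.

```python
import statistics
import time

def time_operation(run_once, trials=10):
    """Time `run_once` repeatedly and summarize the samples.

    With the second node peered only over loopback, the samples measure
    the operation itself rather than network latency.
    """
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        run_once()
        samples.append(time.perf_counter() - start)
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "min": min(samples),
        "max": max(samples),
    }

# Stand-in workload for demonstration only.
stats = time_operation(lambda: sum(range(10_000)), trials=5)
print(stats)
```

Reporting the median alongside the mean helps when a few trials are skewed by unrelated disk or CPU activity.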