04:20:50 Yes, you're right, these should be updated. I can write the tests so that they use the master-derived values
04:21:53 @boog900:monero.social: Here is a more concrete sketch of how a PQ turnstile could work, including key image composition: https://gist.github.com/jeffro256/146bfd5306ea3a8a2a0ea4d660cd2243
08:36:05 does that one cuprate db striping thing really save 65% of disk space?
08:36:19 can't be for the blockchain data, right?
10:50:21 @monerobull:matrix.org: Hmm, it's not saving 65%, it's saving 35% compared to our old DB. It's about 25% smaller than monerod. I'll ask hinto to update the post making that clear.
10:51:01 ok but is it still for the entire data or just monerod itself?
10:51:02 So a cuprate db is 195GB, for a real number
10:51:46 compared to monerod's 260?
10:51:47 that's pretty good
10:51:57 Yeah
10:52:04 wow, nice work
10:54:30 There is no compression as well, it's all in how the data is stored. LMDB uses a btree, we just append the data directly in a file. > <@monerobull:matrix.org> can't be for the blockchain data, right?
10:54:59 For most tables; for some we still use LMDB
10:56:50 That should make it less likely to corrupt as well, right?
11:06:12 Because we have 2 databases we no longer have fully atomic updates, as you can commit a tx on one and then crash before committing on the other. So in this sense it could be worse.
11:06:12 We will have handling of this on startup to make sure the 2 DBs are in sync though.
11:06:12 Both LMDB and the tapes support atomic updates, so as long as you have the right setting it should be unlikely your DB gets corrupted.
11:33:17 neat
11:34:12 corruption is pretty rare, most reports come from people syncing on a raspberry pi (where it can take weeks to sync)
13:11:22 << AI trigger warning >> i'll put this in the lounge because it's ai slop: https://github.com/Gingeropolous/blocksizejavasim/tree/main , https://gingeropolous.github.io/blocksizejavasim/ . AI port of @spackle:monero.social 's https://github.com/spackle-xmr/Dynamic_Block_Demo/blob/main/Dynamic_Blocksize_econ_draft.py . WARNING: haven't manually reviewed the code to see that it matches or makes sense.
14:03:49 That's genuinely awesome to see, and way more accessible than the standalone python scripts. This could be a really helpful tool for getting people to understand the scaling. I would try looking things over right now, but today (and the next few days) are stuffed with plans.
14:04:57 Thanks for trying this; I'll see about looking it over when I get the chance.
15:36:17 sech1: what would you call the operation now done under program_loop_store_hard_aes.inc, akin to existing ones (hashAes1Rx4 / fillAes1Rx4 / fillAes4Rx4 / hashAndFillAes1Rx4)?
15:37:27 mergeAesXXX?
15:37:43 4Rx4 probably
15:39:53 good enough for now
16:04:43 ~65% of monerod's size, I updated the post to make it clearer > <@monerobull:matrix.org> does that one cuprate db striping thing really save 65% of disk space?
16:31:09 well, running 5 million blocks on that javascript is taking 10 minutes and counting...
16:31:35 Just run 2000 blocks with high fees and max flood
16:31:53 i wanna see the long term median adjust
16:32:11 So 200k blocks?
16:32:27 Will allow you to see at least some of it
16:32:55 sech1: Implemented V2 (already had commitments) + sample testcases for V2 as well https://git.gammaspectra.live/P2Pool/go-randomx/commit/6065a45778bf12784e060d5a69a97e00c217d172
16:32:57 Also, the fees used, what are they? would be nice to have the fee tiers as options
16:33:10 I have checked these against the V2 RandomX PR and they all match :)
16:33:44 nice
16:33:46 if you find it useful, tests.cpp https://privatebin.net/?c2f6614a8edab505#HMLzmc1y62rZhCSmU8vpQ5YQFzJyhhrAcGdoyfi42JFP
16:34:24 all using V2 (but ofc must test both so this is useful only for cross checking)
17:20:02 @gingeropolous: 50000 blocks are needed for the long term median to change
17:20:24 2000 is not enough
18:52:49 well it's still running :(
21:15:25 Select large simulation mode to speed it up, with some loss of precision
21:18:20 Should not be so much less precise that you would notice a difference looking at graphs, but it does fudge things a bit.
21:19:22 But it will be much faster. Orders of magnitude faster for long simulations that build large blocks.
21:45:32 5 million is a lot, especially flooding. Try 500 thousand
22:17:55 5 million is more blocks than exist...
22:18:20 263k per year
22:57:39 well, it was still running, so i stopped it and am now trying 500k
23:02:04 i don't think this is working right. if the short term median is 30e6 at block 100k, then the long term median can't be 537k at the same spot.
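For context on the median discussion above, here is a minimal Python sketch (not from the chat, and not spackle's Dynamic_Block_Demo script) of why a short-term and a long-term block-weight median move on very different timescales under a flood. The constants (100-block short window, 100,000-block long window, 300 kB floor, 1.7x cap on long-term weight growth) are assumptions loosely based on Monero's rules; the real consensus code also ties the short-term limit back to the long-term median, which this sketch deliberately omits, so the short-term numbers here grow unrealistically fast.

```python
# Sketch only: assumed constants and simplified rules, not consensus code.
from collections import deque
from statistics import median

def simulate(flood_blocks, short_window=100, long_window=100_000, floor=300_000):
    """Flood the chain with max-size blocks and track both medians."""
    # Pre-fill both windows with small "historical" blocks so the flood
    # starts against an established long-term median, as on the real chain.
    short_w = deque([floor] * short_window, maxlen=short_window)
    long_w = deque([floor] * long_window, maxlen=long_window)
    samples = []
    for h in range(flood_blocks):
        short_med = max(floor, median(short_w))
        long_med = max(floor, median(long_w))
        weight = 2 * short_med                      # flooder fills every block to the limit
        short_w.append(weight)
        long_w.append(min(weight, 1.7 * long_med))  # long-term weight growth is capped
        samples.append((h, short_med, long_med))
    return samples

if __name__ == "__main__":
    # Scaled-down windows (10 / 1,000 instead of 100 / 100,000) so the run
    # finishes instantly; the shape is the same. The long-term median only
    # starts rising after roughly half its window is filled with capped
    # flood weights, which is the "~50k blocks" point made in the chat.
    for h, s, l in simulate(3_000, short_window=10, long_window=1_000)[::250]:
        print(f"block {h:>5}: short-term median {s:>12,.0f}  long-term median {l:>12,.0f}")
```

With the real window sizes, the same shape plays out over ~100 blocks for the short-term median and ~50,000+ blocks for the long-term one, which is why a 2,000-block run never moves the long-term median while a 100k-block flood should.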