-
br-m
<jeffro256> Yes, you're right, these should be updated. I can write the tests so that they use the master-derived values
-
br-m
<jeffro256> @boog900:monero.social: Here is a more concrete sketch of how a PQ turnstile could work, including key image composition:
gist.github.com/jeffro256/146bfd5306ea3a8a2a0ea4d660cd2243
-
br-m
<monerobull:matrix.org> does that one cuprate db striping thing really save 65% of disk space?
-
br-m
<monerobull:matrix.org> can't be for the blockchain data, right?
-
br-m
<boog900> @monerobull:matrix.org: Hmm, it's not saving 65%, it's saving 35% compared to our old DB. It's about 25% smaller than monerod. I'll ask hinto to update the post to make that clear.
-
br-m
<monerobull:matrix.org> ok but is that still for the entire data or just monerod itself?
-
br-m
<boog900> So, for a real number: a cuprate db is 195GB
-
br-m
<monerobull:matrix.org> compared to monerod's 260?
-
br-m
<monerobull:matrix.org> that's pretty good
-
br-m
<boog900> Yeah
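(Checking those figures as a worked example; the old-DB size is not stated above, it is only implied by the "saving 35%" remark:)

    cuprate = 195                    # GB, stated above
    monerod = 260                    # GB, stated above
    print(cuprate / monerod)         # ~0.75, i.e. ~25% smaller than monerod
    old_db = cuprate / (1 - 0.35)    # "saving 35%, compared to our old DB"
    print(old_db)                    # ~300 GB implied for the old cuprate DB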
-
br-m
<monerobull:matrix.org> wow, nice work
-
br-m
<boog900> There's no compression either; it's all in how the data is stored. LMDB uses a B-tree; we just append the data directly to a file. > <@monerobull:matrix.org> can't be for the blockchain data, right?
-
br-m
<boog900> For most tables; for some we still use LMDB.
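A minimal sketch of the append-only idea (hypothetical names, not Cuprate's actual code): because chain data only ever grows and records can be fixed-width, record i can live at byte offset i * RECORD_SIZE in a flat file, so a lookup is a seek rather than a B-tree descent and there is no per-key page overhead.

    import os

    RECORD_SIZE = 32  # hypothetical fixed record width in bytes

    class Tape:
        """Append-only flat-file table: record i sits at offset i * RECORD_SIZE."""
        def __init__(self, path):
            self.f = open(path, "a+b")  # append mode: writes always go to the end

        def append(self, record: bytes) -> int:
            assert len(record) == RECORD_SIZE
            self.f.seek(0, os.SEEK_END)
            index = self.f.tell() // RECORD_SIZE  # next record's index
            self.f.write(record)
            return index

        def get(self, index: int) -> bytes:
            self.f.seek(index * RECORD_SIZE)
            return self.f.read(RECORD_SIZE)

        def sync(self):
            self.f.flush()
            os.fsync(self.f.fileno())  # durability point for crash safety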
-
br-m
<monerobull:matrix.org> That should make it less likely to corrupt as well right?
-
br-m
<boog900> Because we have 2 databases, we no longer have fully atomic updates: you can commit a tx on one and then crash before committing on the other. So in this sense it could be worse.
-
br-m
<boog900> We will have handling for this on startup to make sure the 2 DBs are in sync, though.
-
br-m
<boog900> Both LMDB and the tapes support atomic updates, so as long as you have the right settings it should be unlikely your DB gets corrupted.
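A sketch of what that startup handling could look like (hypothetical, not Cuprate's actual logic): each store remembers the height of its last committed block, and whichever store is ahead gets rolled back to the common height, repairing a crash that landed between the two commits.

    def reconcile_on_startup(lmdb_height, tape_height, rollback_lmdb, rollback_tapes):
        """Bring both stores back to a common committed height.

        A crash between the two commits leaves one store at most one
        transaction ahead; truncating the leader restores consistency.
        """
        common = min(lmdb_height, tape_height)
        if lmdb_height > common:
            rollback_lmdb(common)    # drop LMDB entries above `common`
        if tape_height > common:
            rollback_tapes(common)   # truncate tape files past `common`
        return common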
-
br-m
<monerobull:matrix.org> neat
-
br-m
<monerobull:matrix.org> corruption is pretty rare; most reports come from people syncing on a Raspberry Pi (where it can take weeks to sync)
-
br-m
<gingeropolous> << AI trigger warning >> i'll put this in the lounge because it's AI slop:
github.com/Gingeropolous/blocksizejavasim/tree/main ,
gingeropolous.github.io/blocksizejavasim . AI port of @spackle:monero.social's
github.com/spackle-xmr/Dynamic_Bloc…ain/Dynamic_Blocksize_econ_draft.py . WARNING: I haven't manually reviewed the code to see that it matches or makes sense.
-
br-m
<spackle> That's genuinely awesome to see, and way more accessible than the standalone Python scripts. This could be a really helpful tool for getting people to understand the scaling. I would try looking things over right now, but today (and the next few days) are stuffed with plans.
-
br-m
<spackle> Thanks for trying this; I'll see about looking it over when I get the chance.
-
DataHoarder
sech1: what would you call the operation now done under program_loop_store_hard_aes.inc, akin to existing ones (hashAes1Rx4 / fillAes1Rx4 / fillAes4Rx4 / hashAndFillAes1Rx4)?
-
DataHoarder
mergeAesXXX?
-
sech1
4Rx4 probably
-
DataHoarder
good enough for now
-
br-m
<hinto> ~65% of monerod's size; I updated the post to make it clearer > <@monerobull:matrix.org> does that one cuprate db striping thing really save 65% of disk space?
-
br-m
<gingeropolous> well, running 5 million blocks in that JavaScript is taking 10 minutes and counting...
-
br-m
<ofrnxmr:xmr.mx> Just run 2000 blocks with high fees and max flood
-
br-m
<gingeropolous> i wanna see the long term median adjust
-
br-m
<ofrnxmr:xmr.mx> So 200k blocks?
-
br-m
<ofrnxmr:xmr.mx> Will allow you to see at least some of it
-
DataHoarder
sech1: Implemented V2 (already had commitments) + sample testcases for V2 as well
git.gammaspectra.live/P2Pool/go-ran…45778bf12784e060d5a69a97e00c217d172
-
br-m
<ofrnxmr:xmr.mx> Also, the fees used, what are they? Would be nice to have the fee tiers as options
-
DataHoarder
I have checked these against the V2 RandomX PR and they all match :)
-
sech1
nice
-
DataHoarder
all using V2 (but ofc must test both so this is useful only for cross checking)
-
br-m
<articmine> @gingeropolous: 50000 blocks are needed for the long term median to change
-
br-m
<articmine> 2000 is not enough
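A scaled-down demo of why roughly half the window is needed (assuming the long term median is a rolling median over the last 100,000 block weights; Monero's actual rule also clamps each block's long-term weight, but the half-window intuition is the same):

    from statistics import median

    WINDOW = 1_000      # stand-in for the real 100,000-block window
    BASELINE = 300_000  # 300 kB penalty-free block weight
    weights = [BASELINE] * WINDOW

    for n in range(1, WINDOW + 1):
        weights.append(2 * BASELINE)  # flood with double-size blocks
        weights.pop(0)
        if median(weights) > BASELINE:
            print(f"median moved after {n} of {WINDOW} blocks")  # n == WINDOW // 2
            break

Scaled to the real window, that is the ~50,000 blocks ArticMine mentions.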
-
br-m
<gingeropolous> well it's still running :(
-
Guest3
Select large simulation mode to speed it up, with some loss of precision
-
Guest3
Should not be so much less precise that you would notice a difference looking at graphs, but it does fudge things a bit.
-
Guest3
But it will be much faster. Orders of magnitude faster for long simulations that build large blocks.
-
Guest3
5 million is a lot, especially flooding. Try 500 thousand
-
br-m
<ofrnxmr:xmr.mx> 5 million is more blocks than exist...
-
br-m
<ofrnxmr:xmr.mx> 263k per year
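The arithmetic behind that, given Monero's 2-minute target block time:

    blocks_per_day = 24 * 60 // 2           # 720
    blocks_per_year = blocks_per_day * 365  # 262,800, the "263k per year"
    print(5_000_000 / blocks_per_year)      # ~19 years' worth of blocks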
-
br-m
<gingeropolous> well it was still running. so i stopped it and am now trying 500k
-
br-m
<gingeropolous> i don't think this is working right. if the short term median is 30e6 at block 100k, then the long term median can't be 537k at the same spot.
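A quick consistency check of those two numbers, assuming Monero's rule that the short term median is capped at 50x the (effective) long term median, which is my reading of the dynamic-blocksize rules:

    short_term = 30e6   # reported short term median at block 100k
    long_term = 537e3   # reported long term median at the same height
    print(short_term / long_term)        # ~55.9
    print(short_term <= 50 * long_term)  # False: exceeds the 50x cap

So under that assumption the two values really are mutually impossible, supporting gingeropolous' suspicion.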