03:39:06 there is a way, via recursive zero-knowledge proofs (and using a virtual machine for your blockchain that can be proved in such a way). what it does is whenever there is a state transition (block N -> block N+1), you can create a small, constant-sized proof. this small piece of data mathematically proves that this transition AND all previous transitions have not violated the blockchain's rules. with a setup like this, you can take the proof at a given block height (even the most recent one), plus the state of the chain at that height (this would be the larger chunk of the data), and verify that that state is correct. this way you could full-sync a chain to the tip by downloading just those from the peers, no need for the history of the re-execution of it.
03:39:07 the implementation is much more complex and AFAIK relies on cryptographic hardness assumptions that are less tried-and-tested than what Monero relies on. there is a live network called Mina that implemented it and is exploring how to make it more practical.
05:26:18 *or the re-execution of it
05:52:35 jeffro256 Rucknium Following up on https://github.com/monero-project/monero/pull/9023, if my work doesn't over-select yet always selects per the decoy, I'm fine, right?
05:55:36 And then regarding over-selection, as needed for privacy, am I fine if, when I select 99 decoys per gamma, I always select `[0..15]` as the ones to use? If one errors, I'd move to use `[16]`?
10:59:00 https://matrix.monero.social/_matrix/media/v1/download/monero.social/BrcJAsSkjbtgbQjezXfKroeS
10:59:01 The blocks between 3102598 and 3102606 were over 400 kB in size (the largest Monero blocks ever) and all contained just 5 to 6 146-input transactions paying about 0.0081 XMR per tx. Total of ~0.025 XMR ($3.66) in fees per block.
11:00:06 https://matrix.monero.social/_matrix/media/v1/download/monero.social/hIkGymnXRGMbUhSVanbuypHS
11:01:24 Someone wanted their consolidation to go through quickly
11:31:40 grafik.png
11:31:43 Could Binance withdrawals have something to do with this?
11:41:43 Yes: if you select exactly as many decoys as you will use to sign (even taking into account unlock times etc.), then you don't have to worry about this effect
11:42:56 Also yes: if you're going to over-select, then you should try adding them in the original selection order
11:43:35 That's what the code does in that PR: it over-selects but retains the original picking order
11:44:46 AFAIK, selecting the exact number of decoys will fail some of the time because you will sometimes select a locked output. `monerod` doesn't give the wallet info about locked outputs in the first selection step. So how does the Rust implementation of the Monero wallet do it?
11:47:45 You *could* select the exact number of outputs if you downloaded information about lockedness at the same time as the output distribution, but AFAIK the daemon doesn't have an RPC endpoint for that. You could also do that while refreshing, though
11:49:32 Cache it in the wallet? Yes, but your restore height would have to be set to the block height of the first RingCT tx to be sure you didn't miss any locked outputs.
11:51:53 Or, if you assume that the chain won't reorg 6 years deep, you could include a small static data file that lists non-standard lock times in the past
11:52:04 Kinda like the checkpoints file
16:49:02 oh, thank you for that. yeah I was gonna ask while reading it if this isn't what Mina is doing, but you answer it at the end
16:50:23 that seems pretty damn cool, if there are no major downsides. do you know of any?
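For reference, a minimal Rust sketch of the over-selection ordering discussed in the 11:42–11:44 messages above: pick more candidates than needed, but consume them in the order they were originally selected, sliding forward past any that turn out to be unusable. The `Candidate` type and `pick_in_original_order` helper are hypothetical illustrations, not the actual code in PR 9023 or wallet2.

```rust
// Hypothetical sketch of over-selection with retained picking order;
// this is NOT the actual PR 9023 / wallet2 implementation.

/// A decoy candidate: its global output index plus whether it turned out to be
/// usable (e.g. unlocked) once its data was fetched from the daemon.
struct Candidate {
    global_index: u64,
    usable: bool,
}

/// Take `needed` usable decoys from over-selected `candidates`, preserving the
/// order in which they were originally picked (e.g. by the gamma picker).
/// Returns `None` if too many candidates turned out to be unusable.
fn pick_in_original_order(candidates: &[Candidate], needed: usize) -> Option<Vec<u64>> {
    let ring: Vec<u64> = candidates
        .iter()
        .filter(|c| c.usable)   // skip locked/unusable outputs...
        .take(needed)           // ...and slide forward to later candidates instead
        .map(|c| c.global_index)
        .collect();
    (ring.len() == needed).then_some(ring)
}

fn main() {
    // Over-select 20 candidates for a ring needing 15 decoys; candidate 3 is locked.
    let candidates: Vec<Candidate> = (0u64..20)
        .map(|i| Candidate { global_index: 1_000 + i, usable: i != 3 })
        .collect();
    let ring = pick_in_original_order(&candidates, 15).expect("enough usable candidates");
    // Prints candidates 0, 1, 2 and then 4..=15: the failed pick is replaced by
    // the next over-selected candidate, never by re-sampling out of order.
    println!("{ring:?}");
}
```

Selecting exactly as many decoys as will be used (the 11:41:43 case) is the degenerate version of this where `candidates.len() == needed`, so any single unusable candidate makes the selection fail.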
17:01:59 aleenor: AFAIK, Mina has an interesting trick, but nodes would still have to keep most of the tx data, like output public keys and amounts, assuming safe, realistic use. Without it, wallets have to store that data themselves. A corrupted local wallet file would lose its coins forever. No way to recover with a seed phrase. And wallets would have to stay connected to the internet to receive coins. Isn't that correct? Nodes could probably delete some signature data, like Monero's pruned blockchain mode.
17:09:52 Couldn't a middle ground exist where a zk proof is generated every 300 GB of state? That would leave a reasonable amount of time for people to sync, get their inputs, back up their local wallet file, and be prepared for the next state. Though I assume generating a recursive zk proof over 300 GB of data might be expensive for consumer-grade hardware.
18:42:15 >corrupted local wallet file would lose its coins forever
18:42:21 this just doesn't fly
18:43:15 chain size is not an issue, and every proposed "fix" so far has terrible trade-offs
18:44:06 Thorchain migrated tokens from one network to the other with 3 years' notice
18:45:11 and yet I know a guy that lost 11 million USD when the day came
18:45:37 as well as countless others who "left crypto during the bear market"
18:47:25 forcing people to move coins or else lose them, in order to save a few hundred GB of data in times when storage is mega cheap, is not something you should do
20:14:20 Is there a way to see if any outputs from the first version of Monero (before the v2 hardfork) are still unspent (or if we can establish a method to check how many transactions they show up in, or if they have been upgraded to a newer output type after the fork)?
20:15:48 For example, transactions with a ring size of 1 are known spends, right? So those can already be crossed out
20:18:07 And building from there, we might be able to get to a point where we can say something like: "We are 99.995% sure that all v1 outputs have been spent, and we consider that a reasonably acceptable risk"
20:19:14 And maybe even estimate the Monero value of the "unsure" outputs, to get an idea of the potential impact/damage
20:20:37 (And also see if any outputs from that time have definitely not been spent, ever)
20:22:43 Probably the Dulmage-Mendelsohn decomposition is best for this: https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=39
20:23:59 And for the 🦀s, this implementation is written in Rust!
20:24:19 But the amount of data that could be deleted is probably small.
20:33:21 If I'm reading page 19 correctly (of the published paper), there are ~18.8M pre-RingCT rings, of which ~12.2M have 0 mixins, i.e. ring size 1, so they are known spends
20:34:12 v1 outputs have to be spent in rings with outputs that all have the same amount.
20:34:13 So you could count the number of v1 outputs with a certain amount, then count the number of tx inputs whose rings use that amount; the difference between the two is the number of unspent outputs of that amount. Repeat for every output amount to get the total value of all unspent v1 outputs.
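A small Rust sketch of the counting argument in the 20:34:13 message, assuming the per-amount tallies have already been collected by scanning the chain; the maps and the function name are made up for illustration, not an existing RPC or library call.

```rust
use std::collections::HashMap;

/// Every pre-RingCT input spends exactly one real output of its ring's
/// cleartext amount, so per denomination:
///     unspent(A) = outputs_created(A) - inputs_with_ring_amount(A).
/// Returns (total unspent output count, total unspent value in atomic units).
fn unspent_v1_totals(
    outputs_created: &HashMap<u64, u64>,  // amount -> number of v1 outputs created
    inputs_by_amount: &HashMap<u64, u64>, // amount -> number of inputs whose ring uses that amount
) -> (u64, u128) {
    let mut unspent_count = 0u64;
    let mut unspent_value = 0u128;
    for (&amount, &created) in outputs_created {
        let spent = inputs_by_amount.get(&amount).copied().unwrap_or(0);
        let unspent = created.saturating_sub(spent);
        unspent_count += unspent;
        unspent_value += unspent as u128 * amount as u128;
    }
    (unspent_count, unspent_value)
}

fn main() {
    // Toy example: 10 outputs of 0.5 XMR created, 7 inputs spending that denomination.
    let created = HashMap::from([(500_000_000_000u64, 10u64)]);
    let spent = HashMap::from([(500_000_000_000u64, 7u64)]);
    assert_eq!(unspent_v1_totals(&created, &spent), (3, 1_500_000_000_000));
}
```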
20:35:22 The second-biggest contributor is ring size 3 (2 mixins), with 1.779M/2.941M (60%) traced outputs, according to the paper
20:35:48 Then there's ring size 4 and size 2
20:37:31 Overall, the paper claims ~86.9% traced outputs
20:40:40 Now, I don't know how many GB worth of data that actually corresponds to, but it still sounds like a significant amount
20:42:23 So if we actually verified that "all outputs from blocks 1-N have been generated and spent before that block", then, theoretically, we could move the genesis block to N?
20:43:00 Where N is the first block where we see an output that we're not sure has been spent
20:43:51 (Or something along those lines, anyway)
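And a tiny Rust sketch of that last idea, under the assumption that a traceability analysis (e.g. the Dulmage-Mendelsohn result linked above) has produced a known-spent flag for every output; the helper name and the (height, flag) representation are hypothetical.

```rust
/// Given (creation height, known-spent) pairs for every output, return the
/// first block height that created an output we cannot show to be spent.
/// Everything strictly before that height would be a candidate for pruning
/// under the "move the genesis/checkpoint to N" idea.
fn first_uncertain_block(outputs: &[(u64, bool)]) -> Option<u64> {
    outputs
        .iter()
        .filter(|&&(_, known_spent)| !known_spent)
        .map(|&(height, _)| height)
        .min()
}

fn main() {
    // Toy data: outputs created at heights 10 and 20 are provably spent, 30 is not.
    let outputs = [(10u64, true), (20, true), (30, false)];
    assert_eq!(first_uncertain_block(&outputs), Some(30));
}
```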