00:53:18 I have done some more thinking about rolling DNS checkpoints and am now less favorable on them again. Not that my opinion matters much haha. I do admit that rolling DNS checkpoints are a good pragmatic solution to "mitigate" deep re-orgs until a more comprehensive solution to selfish mining is ready. Here are some of my concer [... too long, see https://mrelay.p2pool.observer/e/sZbXq7MKSHlCTXct ]
01:02:17 1) the keys are individual and they are DNSSEC, that's also where tevador's 2/3rds +1 (supermajority two-thirds rule) comes in
01:02:55 these subdomains would be kept separate from the main domains and can be removed/rotated as needed
01:04:15 4) DNS server DDoS: as in, a major ISP-level DDoS across distributed servers (and any one of these that can be queried works).
01:04:16 If the records don't agree or are obsolete, well, the miners always follow the "longest" chain
01:04:48 there's no need for any extra code in place, that's the default behavior. the checkpointing just pins a specific block id at a given height
01:05:22 it doesn't make the blocks appear or manually move the nodes forward, the nodes are always moving forward. it sets the "rear"
01:05:28 Perhaps the biggest issue with the rolling DNS checkpoint idea is that it creates an environment where the likelihood of prolonged chain splits is higher (than with the pure longest chain rule). Although it is fairly unlikely that this would be an issue in practice, if the balance of hashrate following checkpoints was say about 60% and 4 [... too long, see https://mrelay.p2pool.observer/e/7OGDrLMKWFNXOUpC ]
01:05:50 and as for > If the checkpointing idea is released and adopted, can we be sure that it won't be used longer than necessary when other solutions are ready, or that it won't be used for other purposes?
01:06:14 that's up to the monero community to address (plus the previously discussed writing about what will be done with them)
01:07:01 the other alternative, as it is now, before any long-term hardfork, is that people can do worse on the chain and get invalidated decoys, with the already discussed issues
01:07:30 How many different servers / signing keys would there be? Also, what does DNSSEC mean? And tevador's proposal means that a minimum of a 2/3rds vote is needed for a checkpoint to be valid? > 1) the keys are individual and they are DNSSEC, that's also where tevador's 2/3rds +1 (supermajority two-thirds rule) comes in
01:07:45 and remember - checkpoints are not a one-way door. If deploying them ends up causing more issues than expected due to network effects, they can just stop being issued, or be removed.
01:07:59 The suggestion is to increase from 4 -> 7
01:08:06 and 50%+1 to 2/3rds +1
01:08:17 right now 3/4 need to agree to have a checkpoint be valid
01:08:46 if more are added and that threshold is set, it'd be 5/7 that need to agree
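A rough sketch of the supermajority rule described above (illustrative Python; the function names and record layout are hypothetical, not monerod code):

```python
from collections import Counter

def quorum_threshold(num_domains: int) -> int:
    # "2/3rds + 1" supermajority: 3 of 4 today, 5 of 7 under the proposal
    return (num_domains * 2) // 3 + 1

def agreed_block_id(records: dict[str, str], num_domains: int) -> str | None:
    """records maps checkpoint domain -> block id reported at one height.
    Returns the block id if a supermajority agrees, else None, in which
    case nodes simply keep the default longest-chain behavior."""
    if not records:
        return None
    block_id, votes = Counter(records.values()).most_common(1)[0]
    return block_id if votes >= quorum_threshold(num_domains) else None
```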
01:09:21 each checkpointing subdomain has a set of signing keys
01:09:24 https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions
01:09:49 it authenticates DNS records
01:10:11 these records can be signed offline, then pushed to secondary DNS servers that can serve, but not sign, new records
01:11:03 As an example, I wrote a simple DNS + DNSSEC server that serves exactly the TXT records needed, signed, and allows DNS zone transfers so secondary DNS servers can then serve them https://git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
01:11:20 https://dnsviz.net/d/checkpoints.gammaspectra.live/dnssec/?rr=all&a=all&ds=all&doe=on&ta=.&tk=
01:11:43 This automatically replicates right now across Hurricane Electric's DNS secondaries and 1984.hosting's DNS secondaries
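For a sense of what a client-side query could look like, a sketch using dnspython against the example domain above. The "height:block id" TXT layout follows the format monerod already uses for DNS checkpoints; DNSSEC validation is left to the resolver here for brevity:

```python
import dns.resolver  # pip install dnspython

def fetch_checkpoints(domain: str = "checkpoints.gammaspectra.live") -> dict[int, str]:
    """Collect <height>:<block id> TXT records from one checkpoint domain."""
    checkpoints: dict[int, str] = {}
    for rr in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rr.strings).decode()
        height, _, block_id = txt.partition(":")
        if height.isdigit() and block_id:
            checkpoints[int(height)] = block_id
    return checkpoints
```

A client would repeat this across all checkpoint domains and keep only the (height, id) pairs that clear the supermajority threshold.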
01:12:14 And how deep are the proposed checkpoints again? I am still of the opinion that the closer to 10 we are the better, as chain splits are more likely if the depth is too close to the tip. Especially if Qubic and CFB go maniac mode and want to cause chaos. I think it is already the case that CFB is hesitant to reorg deeper than 10, so that seems like the most reasonable number
01:13:50 my suggestion is a depth of two "from tip", to account for around 5 previous records missing due to TTL
01:14:01 If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that, causing chain splits. I mean, he could still do that with a 10-block checkpoint, but it is harder, and I think even he has limits to how much chaos he wants to create
01:14:10 that'd get it around the 10 mark with a sliding window of 10 checkpoints
01:14:22 > If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that causing chain splits.
01:14:28 I don't follow that point haha
01:14:32 if he can do 5 he can do 10
01:14:44 or 20, like, on demand
01:14:59 DNS checkpoints aren't instant
01:15:28 assume they can get to good clients within a few seconds, but many users' DNS servers will lag behind by around 5-7 minutes
01:16:06 you need to take this into account and set the checkpoints so that they will be there when the clients receive them, in a workable way, for the purpose needed
01:16:27 if you set a checkpoint at 9 and the client receives it when we are 20 deep, it holds no weight except to make more splits
01:17:05 the closer to the tip, the more it eliminates the ability to make splits, but you also want to leave the tip to behave naturally via highest work
01:18:04 Ok, I think I better understand now. You're saying that there are lags in people receiving all the checkpointed info, so in practice a depth-2 checkpoint only enforces something like a 10-block finality
01:19:00 you want to have a margin of error, accounting for the fact that clients need 5/7 of the DNS domains to match on some of the records
01:19:08 DataHoarder: Wouldn't it be easier for splits to occur though? As CFB just needs to reorg one block deeper than the checkpoint. But I might misunderstand the degree of incremental difficulty for him to reorg one block deeper
01:19:25 that is a reorg, which reorgs back
01:19:33 the point is to make 10+ infeasible
01:19:44 to prevent transaction invalidation and double spends
01:20:10 you can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner, even if lagging behind
01:20:36 the 10-block range is within the confirmation window, more than that and that's where the issues appear @fr33_yourself
01:20:38 I agree that to the extent that most mining hash follows the checkpoints, it would make 10+ double spends and reorgs infeasible
01:21:13 depth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
01:21:34 but extending this to 10+ is not something that is desired, and that's the point of the checkpoints
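The chain selection behavior being described, pinning the "rear" while the tip still behaves naturally via highest work, could be sketched like this (a hypothetical simplification, not monerod's actual data structures):

```python
def contradicts(chain: dict[int, str], checkpoints: dict[int, str]) -> bool:
    """chain and checkpoints both map height -> block id."""
    return any(chain.get(h, bid) != bid for h, bid in checkpoints.items())

def pick_chain(current, candidate, checkpoints):
    """current and candidate are (cumulative_work, {height: block_id}) tuples."""
    if contradicts(candidate[1], checkpoints):
        return current  # checkpoint pins the "rear": a reorg below it is refused
    # near the tip, behavior stays natural: highest cumulative work wins
    return candidate if candidate[0] > current[0] else current
```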
01:22:49 Correct. The "honest" mining pool admins need to be able to receive the data quickly from the signing servers. > you can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner even if lagging behind
01:23:08 from measurements it's like 15s latency when you query things properly :)
01:23:58 depending on the DNS setup as well (and we'd want some variety), that can add some as well
01:24:42 some DNS servers from ISPs can and will enforce a 5m TTL
01:29:00 > depth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
01:29:00 Yes, but isn't it a possibility that Qubic could do a fairly deep reorg (via selfish mining) after checkpoints are enabled? And in this hypothetical scenario they could persist mining and building on their deep reorg chain, possibly with some honest miners (if they don't follow the checkpointed chain, but just follow stan [... too long, see https://mrelay.p2pool.observer/e/hvfZrLMKZmFrOEhp ]
01:30:33 In the current setup with selfish mining, the reorg passes through, which is disruptive, but after Qubic's reorgs are "released" the network remains on a single chain.
01:31:30 see, that'd invalidate some transactions but then the chain would come back
01:31:40 whereas in the current situation they'd never come back
01:31:56 and it'd allow double spending, and invalidation as well
01:32:45 if their point is "profit", their coins there would be useless
01:33:12 How would the checkpointing situation allow the chain to "come back"? As long as there are two meaningfully sized groups of hashrate building on different chains, this would cause a currency split if one of the two groups doesn't defect in a short period of time. > see that'd invalidate some transactions but then the chain would come back
01:33:38 > As long as there are two meaningfully sized groups of hashrate building on different chains then this would cause a currency split if one of two groups doesn't defect in a short period of time.
01:33:38 The one that has the monetary majority
01:34:24 this includes hashpower and some specific merchants
01:35:32 > if their point is "profit" their coins there would be useless
01:35:32 Maybe, but isn't this contingent on exchanges, merchants and the majority of the ecosystem configuring their nodes such that they follow the checkpoints (not just the miners following the checkpoints, but even exchanges etc)? Because otherwise, if some exchanges' nodes or merchants' nodes are configured to simply follow the longest chain [... too long, see https://mrelay.p2pool.observer/e/ovDxrLMKWm1SSHEt ]
01:35:56 note DNS checkpoints were originally released to explicitly address that situation of a "split" due to consensus issues coming from a bug
01:36:18 nodes would opt in as needed
01:36:29 Yes, I agree that after the "currency split", the more valuable currency and its accompanying chain will be followed / mined > The one that has the monetary majority
01:36:31 also - note that the current attacker being qubic does not mine 24/7
01:38:18 also read up https://github.com/monero-project/monero/issues/10064
01:38:23 DataHoarder: What does this point mean? Why is this relevant to our current discussion? Ohhhh, I see what you mean. It would be difficult for them to persistently continue building on their "naughty reorg chain" because they only mine in marathons. So in the event of a chain split, their chain would die unless CFB overrules their current decentralized AI B.S. and starts mining Monero full time
01:38:38 this is where the 2/3rds was mentioned in the comments
01:38:56 correct, fr33_yourself.
01:39:27 checkpoints also prevent a generally covert attacker from implementing one-off attacks
01:40:26 You mostly mean merchants that do meaningful transaction volume, exchanges, and the proportion of hashpower distributed between the two competing split chains. Whichever has "more" of those aspects would end up winning the split. Pretty much like what happened with BTC and BCH > this includes hashpower and some specific merchants
01:41:03 Except for one side they know which is the canonical chain
01:41:25 DataHoarder: Yep, because if they can't persistently build on their reorg chain, then it just gets orphaned off. It's like a game of stamina.
01:41:39 and the other would temporarily be elsewhere, then flip. it not being permanent matters, plus users on the wrong side can still opt in to checkpoints or at least see the warnings
01:42:04 it is more valuable to do short selfish attacks or literally just mine
01:42:23 they are orphaning others atm, but their implementation is not strictly giving higher profits
01:44:37 DataHoarder: How would it still be valuable to do short selfish mining attacks if the checkpoint depth is 2 blocks? Wouldn't that make short reorgs impossible, unless they are like exactly 1 or 2 blocks deep? Or do you mean that even with a checkpoint depth of 2, Qubic could still possibly pass a 3-block-deep reorg due to latency?
01:45:12 depth of 2 is: if blocks 100, 101, 102 exist, you are checkpointing 100
01:45:27 they could still make their own version of 101, 102, and 103 and publish these
01:45:44 ending up with 2-1 orphaned blocks depending on how many they do
01:46:02 they could also do 102, 103 and orphan 102 only
01:46:23 say clients have an older one
01:46:33 a reorg could happen there, but it's still within the conf window
01:46:40 then as blocks get built it reorgs back
01:46:52 all of that can happen within the 10-block interval and it's all ok
01:47:43 the point being that blocks coming out of the 10-conf window should be well checkpointed, by overlapping height/time intervals, to account for lucky chances or network delays on DNS records
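Putting numbers on the depth-2 example above (a trivial sketch; `checkpoint_height` is a hypothetical helper, not an existing function):

```python
def checkpoint_height(tip_height: int, depth: int = 2) -> int:
    return tip_height - depth

# Blocks 100, 101, 102 exist; depth 2 pins block 100.
assert checkpoint_height(102) == 100
# An attacker can still publish its own 101', 102', 103' (orphaning two blocks)
# or 102', 103' (orphaning one) -- all within the 10-confirmation window -- but
# on checkpoint-following nodes it can never replace height 100 or anything below.
```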
01:48:44 I see where you guys are coming from and you are making good points. It is a powerful practical tool, but I guess the philosophical angle is still a bit murky, since rolling DNS checkpoints still introduce a centralized point of trust. But in practice perhaps it isn't such a big deal, because all honest miners and network [... too long, see https://mrelay.p2pool.observer/e/oZmirbMKX2hSdmdT ]
01:49:06 what was mentioned in the previous MRL meeting as well was to start building up documentation around the purpose itself, and what activation window we are talking about for the bandaid
01:49:20 > the philosophical angle is still a bit murky since the rolling dns checkpoints still introduce a centralized point of trust
01:49:26 yeah, read up on https://github.com/monero-project/monero/issues/10064 - that's a major point there
01:50:04 Roger, I will read it now. I appreciate your points and explanations.
01:51:16 good testing has been done on testnet to find pain points and issues to bring up, plus others that need fixing on monero to get a better system for it, but these are not consensus-breaking changes (they are just in the DNS checkpoints subsystem)
02:02:29 The most important point is that rolling DNS checkpointing is a temporary bandaid (and can be switched off) until a more permanent solution is ready. I lost track: will PoP be implemented alongside it?
02:07:55 yeah. they must never become a permanent solution on their own. their purpose is for emergency situations as originally deployed, for consensus split resolution, so this is stretching usage a bit already
02:08:35 lightly adapting them to fit this better as well, but when other solutions come in, they should be gone quickly
02:14:57 @fr33_yourself: At a depth of 3 or more
02:17:04 For 10 years, the network has essentially never disagreed / reorged beyond 3 blocks. Any discrepancy that large is likely to be a dishonest attempt to gain an unfair advantage (selfish mining, or otherwise mining while disconnected from the rest of the network)
02:17:39 the checkpoints should follow the chain after any honest reorgs
02:25:30 DNS checkpointing only really helps against deep reorgs (9+), but it doesn't address selfish mining. Many miners, myself included, are noticing reduced rewards; it feels like our work is being undercut, and over time that could make small/medium-scale mining harder to sustain.
02:25:30 I'm curious: aside from DNS checkpointing, are there any other solutions currently being discussed to mitigate selfish mining on Monero?
02:27:52 > it doesn't address selfish mining
02:27:53 if applied at those depths, it reduces their ability to do long chains and "wait" for the chain to catch up. it limits them to short lucky chains, and their work resets every couple of heights
02:28:11 they can't count on monero getting unlucky and them being lucky anymore
02:28:47 so the risk that their chain gets orphaned is way higher, which makes it less profitable for them to use that strategy as optimally as they have been doing (or at all, really)
02:29:06 but yeah, it is still viable, just with reduced orphan depths
02:29:55 note they are also getting quite orphaned by monero itself https://qubic-snooper.p2pool.observer/tree/ :)
02:41:57 @rucknium:monero.social can you edit https://github.com/monero-project/meta/issues/1263#issuecomment-3255876843 to have the logs in a "code" block via ``````? I think github is trying to sanitize html tags so only users with invalid html names get shown over the bridge :D
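A toy Monte Carlo of the "work resets" effect described above (my own illustrative model, not from the thread: a withholding miner's private chain dies once the public chain's checkpoint passes the fork point):

```python
import random

def withholder_ever_leads(q: float, depth: int, horizon: int = 10_000) -> bool:
    """q = attacker hashrate share; depth = checkpoint depth from tip."""
    lead, public = 0, 0  # attacker lead over, and blocks built on, the public chain
    for _ in range(horizon):
        if random.random() < q:
            lead += 1
            if lead > 0:
                return True       # ahead: can release and orphan the public blocks
        else:
            lead -= 1
            public += 1
            if public > depth:
                return False      # fork point is now checkpointed: private chain dead
    return False

for depth in (2, 10, 10**6):      # 10**6 ~ no checkpoints at all
    wins = sum(withholder_ever_leads(0.3, depth) for _ in range(50_000))
    print(f"depth {depth}: {wins / 50_000:.3f}")
```

Even in this crude model, the deeper the allowed reorg, the more often a 30% withholder eventually gets ahead; capping the depth caps the "wait to get lucky" strategy.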
20:16:27 > <@fr33_yourself> Rucknium is also looking into Proportional Reward Splitting, which is even more secure than "work shares" (fruit-chains) by themselves. But if my understanding is correct, Proportional Reward Splitting with work shares is problematic for Monero in practice because of RandomX. And we should definitely keep [... too long, see https://mrelay.p2pool.observer/e/5d76zLMKcV9sUWhS ]
20:16:27 Just to be clear, PRS is the paper we wrote about workshares.
20:25:17 > <@rucknium> @mr_x_tcv:matrix.org: If Workshares are the same as Proportional Reward Splitting (PRS), described in Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting" https://arxiv.org/abs/2503.10185 , then I don't think it limits re-org depth at all. PRS would increase block verification time to [... too long, see https://mrelay.p2pool.observer/e/youbzbMKdWxNZ1FZ ]
20:25:17 This is the paper that the Quai team wrote on workshares. In terms of block verification time in real-time execution, the workshares come in throughout the interblock interval and their work can be leisurely validated and then put into a valid workshare cache. When the block comes in, it can contain the valid shares, but th [... too long, see https://mrelay.p2pool.observer/e/youbzbMKdWxNZ1FZ ]
20:28:08 > <@rucknium> I also think that PRS may actually be worse for solo miners because they won't be able to place most of their work shares in a block within the reward window (most of their hashes will be out of date). I think that's how it works. Tell me if I am mistaken. The paper doesn't discuss this solo mining problem.
20:28:08 Everyone is incentivized to include shares to make their block the heaviest and most likely to win in the case of a re-org. Since the reward is paid out proportionally to the shares, not divided amongst shares per block, there is only positive incentive to include, thus the conclusion in PRS that this is in Nash Equilibrium. It [... too long, see https://mrelay.p2pool.observer/e/ucClzbMKbTRmNTdo ]
20:29:38 It's not free at all. It would have a significant impact on sync times. Like 120 hours of CPU time to sync 1 year of blocks instead of 1 hour of CPU time.
20:30:11 ^ Numbers only include PoW verification.
20:33:17 > <@vtnerd> As stated on Twitter, the PRS crowd nearly has me convinced that it's better than tevador's proposal, but no one has a good response to the increased sync time
20:33:17 All of the resources used here are inconsequential to any real execution parameter on almost any node. For example, if we look at the validation of a RandomX hash, it should take say 5ms. A typical block takes 2 seconds to validate. If we did 100 shares per block, the validation time would go from 2 seconds to 2.5 seconds, an increase of 25%.
20:33:17 If you wanted to be lighter on resources, you could do as few as 10 shares per block, which would only increase the validation time by 50ms, or 2.5%. This would still get most of the benefit. Optimality, depending on desired node and sync properties, lies somewhere between 10-100 workshares per block.
20:34:31 tevador: This is not true. Most of the time in sync is spent in block verification, not in RandomX hash verification. A block is about 2 seconds, whereas a hash is 5ms.
20:35:06 If you had 10 workshares per block you would increase sync time by 2.5%, 30 shares per block 7.5%, and so on.
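The overhead arithmetic from the last few messages, spelled out (using the speakers' own figures; these are estimates from the discussion, not measurements made here):

```python
def sync_overhead(block_verify_s: float, hash_verify_s: float, shares_per_block: int) -> float:
    """Fractional increase in verification time from checking workshare PoW."""
    return shares_per_block * hash_verify_s / block_verify_s

# 2 s/block and 5 ms/hash, as quoted above:
print(sync_overhead(2.0, 0.005, 100))  # 0.25  -> 2.0 s becomes 2.5 s (+25%)
print(sync_overhead(2.0, 0.005, 10))   # 0.025 -> +50 ms (+2.5%)
```

Note the result is very sensitive to the assumed per-block and per-hash times, which is exactly what the next messages dispute.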
20:35:40 in light mode, on slow devices without hw aes or hw float, that can be different
20:36:24 > <@rucknium> Aumayr et al. (2025) does not directly analyze how many shares would have to be put in each block to make their protocol work. They analyze in pieces: 1) Assume hashpower estimation is done with zero error, then how effective is PRS against selfish mining? 2) How much estimation error do you get at different numbers of workshares per block?
20:36:24 This is part of the simulations. I probably can provide you the source code.
20:37:16 1) RandomX in light mode is more like 15 ms with a fast CPU. 2) The average block verifies much faster than 2 seconds. It's probably closer to 0.5 seconds (<0.1 for empty blocks). So block verification would go from 0.5 s to 2.5 s.
20:37:32 @kiltsonfire:matrix.org: Probably not gaining anything meaningful by having light nodes, which aren't mining, validate anything but the block hash itself. Certainly at depth.
20:38:11 tevador: depends on the number of shares. You can get most of the benefit with as few as 10 shares.
20:39:02 in depth, blocks could be verified using a mix of https://github.com/tevador/RandomX/pull/265 + the main hash; then only closer to the tip would need further verification, specifically for light nodes
20:39:24 DataHoarder: yes.
20:39:30 10 shares means an effective block time of 12 seconds, so you don't really need shares, it could just be blocks. Ethereum has a block time of 12 seconds.
20:39:52 oddly p2pool-like parameters :)
20:42:44 tevador: Ok, so let's move to 12-second blocks. But even if you do, you will still be vulnerable to a 30% attacker, probably more so given the natural uncle rate of fast block times. Shares help to increase attacker resilience regardless of block time. The real limit is how much resources you want to allocate to it, i.e. bandwidth [... too long, see https://mrelay.p2pool.observer/e/i__azbMKeTVLWlFo ]
20:44:17 DataHoarder: That is one way to think about this. You are ameliorating selfish mining and long-range re-orgs, and decentralizing pools on-chain.
20:45:23 @kiltsonfire:matrix.org: Posting the simulation code would be great. Giving it an open source license would be even better. Thanks. > <@kiltsonfire:matrix.org> This is part of the simulations. I probably can provide you the source code.
20:45:38 Thanks for coming here to discuss, too :)
20:50:18 RS with a block time of 12 seconds has the same selfish mining resistance as PRS with a workshare time of 12 seconds.
20:50:28 > <@vtnerd> My original interpretation of the paper was that you would have an x-block window to post your own shares. This wouldn't help solo mining at all.
20:50:28 What PRS is fundamentally saying is that fruitchains never get finality but are "perfectly fair". If you want to have practical finality, you have to compromise on fairness. The way to do it is to have a finite time to inclusion of shares, but the number of blocks in which you allow inclusion is related to the likelihood that an [... too long, see https://mrelay.p2pool.observer/e/wqb3zbMKN0J0d3dZ ]
20:56:24 tevador: This is not true, because shares are independent events. They do not create state transitions, nor do they independently carry weight. They only carry weight when included in a block. By having a block with say 10 workshares, the likelihood of a 30% attacker being able to get lucky for a block is around 3%, while they wi [... too long, see https://mrelay.p2pool.observer/e/6YiNzrMKam9YaVBZ ]
21:00:49 @rucknium: Simulation code for PRS: https://github.com/commonprefix/proportional-reward-splitting-MDP
21:01:04 No, you misunderstood PRS. An attacker with 30% still has a 30% chance of mining a block even with 1 workshare per second. The difference is in the reward distribution.
21:02:23 tevador: You are correct, but they won't be able to withhold profitably, nor will they be able to continue to extend a heavier chain past 1 block frequently.
21:02:41 A workshare is exactly the same as a block without transactions in RS.
21:03:00 RS uses uncles in place of overlapping workshares.
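The proportional payout that RS and PRS share, reduced to its core (a simplified sketch; the real proposals differ in inclusion windows, weights, and whether uncles or workshares are counted):

```python
def split_reward(block_reward: float, shares_by_miner: dict[str, int]) -> dict[str, float]:
    """Pay each miner pro-rata for the shares/uncles included,
    instead of winner-take-all for the block finder."""
    total = sum(shares_by_miner.values())
    return {miner: block_reward * n / total for miner, n in shares_by_miner.items()}

# A 0.6 XMR block that includes 10 shares: 6 from the finder, 3 and 1 from others.
print(split_reward(0.6, {"finder": 6, "miner_b": 3, "miner_c": 1}))
# {'finder': 0.36, 'miner_b': 0.18, 'miner_c': 0.06}
```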
21:03:41 @kiltsonfire:matrix.org: Thank you!
21:03:45 Therefore RS with 12s/block is the same as PRS with 12s/share, except RS will confirm transactions faster.
21:05:18 tevador: Clarify the abbreviation RS for me.
21:05:48 RS = Reward Splitting - see https://github.com/monero-project/research-lab/issues/144
21:06:00 Proposal 2
21:07:09 I proposed a block time of 60 seconds. 12 seconds might also be possible. 1s/sample is definitely too much for RandomX, 12s/sample MIGHT be acceptable, 60s/sample is definitely acceptable.
21:11:54 tevador: Specifically referencing RS in #144, we explored the idea of exponential decay of blocks or shares when doing PRS. Any non-equal weighting of blocks or shares makes selfish mining worse, not better. I was initially in this camp. It has been almost a year since we did the work, so I do not remember why, but it definitely was the case.
21:17:41 There are basically 2 orthogonal proposals in #144: 1. Publish or Perish, which makes selfish reorgs harder to achieve (by giving higher weight to "in-time" blocks), and 2. RS, which splits block rewards more fairly based on uncle blocks.
21:17:53 PoP is sort of like a decay.
21:18:09 Late blocks get 0, in-time blocks get 1.
21:18:14 I believe it had to do with the attacker withholding their shares and including everyone else's shares, which gives them a weight advantage over the honest chain, one the honest chain cannot recover from, because the shares of the dishonest actor, when broadcast as part of their block, are then not as heavy for the honest chain.
21:21:04 We definitely explored all variations of these ideas and concluded the following: 1) All shares need to be paid the same within the inclusion window, 2) the weight of the block or share must be proportional to the payment of the block or share, 3) Rewards have to vary per block based on the total included shares, not s [... too long, see https://mrelay.p2pool.observer/e/tJ7nzrMKbGZvS2Rk ]
21:21:39 Those are the 3 conclusions which, I think, no matter the implementation of blocks, uncles or shares, have to be true to eliminate selfish mining and achieve fairness.
21:23:06 tevador: My only issue with PoP is that I think it is gameable for a non-economically-significant amount.
21:23:35 Monero would have some problems with variable block rewards. We basically need a fixed base reward per block to make the fee scaling work.
21:24:35 In which sense is PoP gameable?
21:24:40 tevador: can you elaborate?
21:24:58 tevador: I can set up many nodes and lie about the time that I saw a block.
21:26:22 Found this explanation: https://monero.stackexchange.com/questions/2531/how-does-the-dynamic-fee-calculation-work
21:26:46 My node doesn't care when your node claims to have seen a block.
21:27:49 In PoP, nodes don't share information about when they saw a block. They share the block itself, and each node keeps its own stats.
21:28:20 So the only gameable thing is when you publish a block.
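A minimal reading of the PoP weighting as described in the last few messages (my own sketch; the actual proposal in research-lab #144 defines the timing rule more carefully, and the 20 s cutoff is an arbitrary placeholder):

```python
CUTOFF_S = 20.0  # placeholder; the real in-time window is set by the proposal

def pop_weight(local_receive_time: float, local_expected_time: float) -> int:
    # Each node judges lateness purely from its OWN clock and receive log,
    # so other nodes lying about when they saw a block has no effect here.
    late = (local_receive_time - local_expected_time) > CUTOFF_S
    return 0 if late else 1  # "Late blocks get 0, in-time blocks get 1."
```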
21:30:22 @kiltsonfire:matrix.org: In this case you would just set R0 = k * shares.
21:31:29 R0 needs to be a constant.
21:33:07 Like this: https://github.com/monero-project/monero/blob/master/src/cryptonote_config.h#L56
21:34:07 You could probably make it work with the fee scaling, but it would be even more complicated than the current fee scaling design... let's say by a factor of 2 😬
21:36:06 @kiltsonfire:matrix.org: Sorry, I didn't catch it: which of the authors of the paper are you? Or you can maintain an anonymity set of 5 😉
22:04:21 Very exciting to see Dr. K in here working with Monero devs! From Monero Talk to the MRL!
22:33:54 tevador: can it be constant in expectation?
22:35:17 for example, you could say 1 share = 1 XMR. 1 block is expected to have 10 shares, which means each block = 10 XMR on average. But you could have a 5 XMR block and a 15 XMR block for any given block.
22:35:32 just play numbers of course
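The "constant in expectation" example in numbers (a toy sketch of the trade-off being raised: the mean block reward stays fixed while individual blocks vary with the share count; the binomial share model is my own assumption):

```python
import random

XMR_PER_SHARE = 1.0
EXPECTED_SHARES = 10          # so the mean block reward is 10 XMR

def block_reward() -> float:
    # crude sample of how many shares land in one block (mean = EXPECTED_SHARES)
    shares = sum(random.random() < 0.5 for _ in range(2 * EXPECTED_SHARES))
    return XMR_PER_SHARE * shares

rewards = [block_reward() for _ in range(100_000)]
print(sum(rewards) / len(rewards))  # ~10 XMR on average...
print(min(rewards), max(rewards))   # ...but single blocks can pay e.g. 5 or 15
```

Whether Monero's fee scaling can tolerate that per-block variance, rather than a strictly constant R0, is exactly the open question in the exchange above.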