-
br-m<fr33_yourself> I have done some more thinking about rolling DNS checkpoints and am now less favorable on them again. Not that my opinion matters much haha. I do admit that rolling DNS checkpoints are a good pragmatic solution to "mitigate" deep re-orgs until a more comprehensive solution to selfish mining is ready. Here are some of my concer [... too long, see mrelay.p2pool.observer/e/sZbXq7MKSHlCTXct ]
-
DataHoarder1) the keys are individual and the records are DNSSEC-signed; that's also where tevador's 2/3rds + 1 rule (supermajority two-thirds) comes in
-
DataHoarderthese subdomains would be kept separate from the main domains which can be removed/rotated as needed
-
DataHoarder4) DNS server DDoS as in a major ISP-scale DDoS would have to hit a set of distributed servers (and any one of them that can still be queried works).
-
DataHoarderIf the records don't agree or are obsolete, well, the miners always follow the "longest" chain
-
DataHoarderthere's no need for any extra code in place, that's the default behavior. the checkpointing just pins a specific height to a given id
-
DataHoarderit doesn't make the blocks appear or manually move the nodes forward, the nodes are always moving forward. it sets the "rear"
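As a minimal illustration of what "pinning a height to an id" looks like in data terms, a sketch assuming the `<height>:<block hash>` TXT layout used by monerod's existing DNS checkpoint records (the height and hash below are made-up placeholders):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Checkpoint pins one height to one block id; nothing else.
type Checkpoint struct {
	Height uint64
	Hash   string
}

// parseCheckpointTXT parses a "<height>:<64-hex-char block hash>" TXT record.
func parseCheckpointTXT(txt string) (Checkpoint, error) {
	parts := strings.SplitN(txt, ":", 2)
	if len(parts) != 2 || len(parts[1]) != 64 {
		return Checkpoint{}, fmt.Errorf("malformed checkpoint record: %q", txt)
	}
	height, err := strconv.ParseUint(parts[0], 10, 64)
	if err != nil {
		return Checkpoint{}, err
	}
	return Checkpoint{Height: height, Hash: parts[1]}, nil
}

func main() {
	// Placeholder record: height 3200000 pinned to a dummy hash.
	cp, err := parseCheckpointTXT("3200000:" + strings.Repeat("ab", 32))
	if err != nil {
		panic(err)
	}
	fmt.Printf("height %d pinned to block %s\n", cp.Height, cp.Hash)
}
```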
-
br-m<fr33_yourself> Perhaps the biggest issue with the rolling DNS checkpoint idea is that it creates an environment where the likelihood is higher (than pure longest chain rule) of prolonged chain splits. Although it is fairly unlikely that this would be an issue in practice, if the balance of hashrate following checkpoints was say about 60% and 4 [... too long, see mrelay.p2pool.observer/e/7OGDrLMKWFNXOUpC ]
-
DataHoarderand as for > If the checkpointing idea is released and adopted, can we be sure that it won't be used longer than necessary when other solutions are ready or that it won't be used for other purposes?
-
DataHoarderthat's up to the monero community to address (plus the previously discussed writing about what will be done with them)
-
DataHoarderthe other alternative, as it is now before any long-term hardfork, is that people can keep doing worse things on the chain and you get invalidated decoys, with the already discussed issues
-
br-m<fr33_yourself> How many different servers / signing keys would there be? Also what does DNSSEC mean? And tevador's proposal means that a minimum of 2/3rds vote is needed for a checkpoint to be valid? > <DataHoarder> 1) the keys are individual and they are DNSSEC, that's also where tevador's 2/3rds +1 % (supermajority two-thirds rule)
-
DataHoarderand remember - checkpoints are not a one-way door. If deploying them ends up causing more issues than expected due to network effects, they can simply stop being issued, or be removed.
-
DataHoarderThe suggestion is to increase from 4 -> 7
-
DataHoarderand 50%+1 to 2/3rd +1
-
DataHoarderright now 3/4 need to agree to have a checkpoint valid
-
DataHoarderif more are added and threshold is set, it'd be 5/7 that need to agree
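For the arithmetic behind those two figures, a small sketch (assuming the thresholds are computed as floor(n/2)+1 and floor(2n/3)+1 respectively, which matches the 3/4 and 5/7 numbers above):

```go
package main

import "fmt"

// requiredAgreement returns how many checkpoint domains must agree under a
// "two-thirds plus one" supermajority rule: floor(2n/3) + 1.
func requiredAgreement(domains int) int {
	return (2*domains)/3 + 1
}

func main() {
	// Current rule: 4 domains at 50%+1 -> 4/2 + 1 = 3 of 4 must agree.
	fmt.Printf("4 domains, 50%%+1:   %d of 4\n", 4/2+1)
	// Proposed rule: 7 domains at 2/3rds+1 -> floor(14/3) + 1 = 5 of 7 must agree.
	fmt.Printf("7 domains, 2/3rds+1: %d of 7\n", requiredAgreement(7))
}
```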
-
DataHoardereach checkpointing subdomain has a set of signing keys
-
DataHoarderDNSSEC is Domain Name System Security Extensions
-
DataHoarderit authenticates DNS records
-
DataHoarderthese records can be signed offline, then pushed to DNS secondary servers that can serve but not sign new records
-
DataHoarderAs an example I wrote a simple DNS + DNSSEC server that serves exactly the TXT records needed, signed, and allows DNS zone transfers so secondary DNS servers can then serve them: git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
-
DataHoarderThis automatically replicates right now across Hurricane Electric's DNS secondaries and 1984.hosting DNS secondaries
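A minimal sketch of how an opted-in client could fetch those checkpoint TXT records with the DNSSEC DO bit set, using the miekg/dns Go library; the domain name and resolver below are placeholders, not the real checkpointing subdomains, and full RRSIG validation is left to the resolver/client:

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Ask for the TXT records of a (placeholder) checkpointing subdomain,
	// with EDNS0 and the DO bit set so DNSSEC records come back too.
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("checkpoints.example.org"), dns.TypeTXT)
	m.SetEdns0(4096, true)

	c := new(dns.Client)
	in, _, err := c.Exchange(m, "9.9.9.9:53")
	if err != nil {
		panic(err)
	}
	for _, rr := range in.Answer {
		if txt, ok := rr.(*dns.TXT); ok {
			// Each record is expected to be a "<height>:<block hash>" pin.
			fmt.Println(txt.Txt)
		}
	}
}
```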
-
br-m<fr33_yourself> And how deep are the proposed checkpoints again? I still am of the opinion that the closer to 10 we are the better, as chain splits are more likely if the depth is too close to the tip. Especially if Qubic and CFB go maniac mode and want to cause chaos. I think it is already the case that CFB is hesitant to reorg deeper than 10, so that seems like the most reasonable number
-
DataHoardermy suggestion is depth of two "from tip", to account for around 5 previous records missing, due to TTL
-
br-m<fr33_yourself> If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that causing chain splits. I mean he could do that still with 10 block checkpoint, but it is harder and I think even he has limits to how much chaos he wants to create
-
DataHoarderthat'd get it around the 10 mark with a sliding window of 10 checkpoints
-
DataHoarder> If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that causing chain splits.
-
br-m<fr33_yourself> I don't follow that point haha
-
DataHoarderif he can do 5 he can do 10
-
DataHoarderor 20, like, on demand
-
DataHoarderDNS checkpoints aren't instant
-
DataHoarderassume they can get to good clients within a few seconds, but many users' DNS servers will lag behind by around 5-7 minutes
-
DataHoarderyou need to take this into account and set the checkpoints so that they are still useful when the clients receive them, in a workable way, for the purpose needed
-
DataHoarderif you set a checkpoint at depth 9 and the client receives it when we are 20 deep, it holds no weight except to make more splits
-
DataHoarderthe closer to the tip, the more it eliminates the ability to make splits, but you also want to leave the tip to behave naturally via highest work
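As a rough back-of-the-envelope for how DNS lag turns a shallow checkpoint into an effectively deeper one, assuming Monero's ~2 minute target block time and the lag figures mentioned in this discussion (illustrative numbers only):

```go
package main

import "fmt"

func main() {
	const blockTimeSec = 120.0 // Monero target block time
	const checkpointDepth = 2  // blocks behind the tip when the record is published

	// Fast client (~15s), then the 5 and 7 minute lags typical of cached ISP resolvers.
	for _, lagSec := range []float64{15, 300, 420} {
		blocksElapsed := lagSec / blockTimeSec
		effectiveDepth := checkpointDepth + blocksElapsed
		fmt.Printf("DNS lag %4.0fs -> newest checkpoint the client sees is ~%.1f blocks deep\n",
			lagSec, effectiveDepth)
	}
	// With a sliding window of ~10 checkpoint records, even a lagging client
	// still holds pins inside the 10-block confirmation window.
}
```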
-
br-m<fr33_yourself> Ok, I think I better understand now. You're saying that there are lags in people receiving all the checkpointed info so in practice a 2 depth checkpoint only enforces like a 10 block finality or something like that
-
DataHoarderyou want to have a margin of error to account for the fact that clients would need 5/7 of the DNS domains matching on some of the records
-
br-m<fr33_yourself> DataHoarder: Wouldn't it be easier for splits to occur though? As CFB just needs to reorg one block deeper than the checkpoint. But I might misunderstand the degree of incremental difficulty for him to reorg one block deeper
-
DataHoarderthat is a reorg, which reorgs back
-
DataHoarderthe point is to make 10+ infeasible
-
DataHoarderto prevent transaction invalidation and double spend
-
DataHoarderyou can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner even if lagging behind
-
DataHoarderthe 10-block range is within the confirmation window; go beyond that and that's where the issues appear @fr33_yourself
-
br-m<fr33_yourself> I agree that to the extent that most mining hash follows the checkpoints then it would make 10+ double spend and reorgs infeasible
-
DataHoarderdepth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
-
DataHoarderbut extending this to 10+ is not something that is desired, and preventing it is the point of the checkpoints
-
br-m<fr33_yourself> Correct. The "honest" mining pool admins need to be able to receive the data quickly from the signing servers. > <DataHoarder> you can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner even if lagging behind
-
DataHoarderfrom measurements it's like 15s latency when you query things properly :)
-
DataHoarderdepending on the DNS setup as well (and we'd want some variety) that can add more latency
-
DataHoardersome DNS servers from ISPs can and will enforce 5m TTL
-
br-m<fr33_yourself> > <DataHoarder> depth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
-
br-m<fr33_yourself> Yes, but isn't it a possibility that Qubic could do a fairly deep reorg (via selfish mining) after checkpoints are enabled? And in this hypothetical scenario they could persist mining and building on their deep reorg chain, and possibly with some honest miners (if they don't follow the checkpointed chain, but just follow stan [... too long, see mrelay.p2pool.observer/e/hvfZrLMKZmFrOEhp ]
-
br-m<fr33_yourself> In the current setup with selfish mining the reorg passes through, which is disruptive, but after Qubic's reorgs are "released" the network remains on a single chain.
-
DataHoardersee that'd invalidate some transactions but then the chain would come back
-
DataHoarderwhere the current situation is that they'd never come back
-
DataHoarderand it'd allow double spending, invalidation as well
-
DataHoarderif their point is "profit" their coins there would be useless
-
br-m<fr33_yourself> How would the checkpointing situation allow the chain to "come back"? As long as there are two meaningfully sized groups of hashrate building on different chains then this would cause a currency split if one of the two groups doesn't defect in a short period of time. > <DataHoarder> see that'd invalidate some transactions but then the chain would come back
-
DataHoarder> As long as there are two meaningfully sized groups of hashrate building on different chains then this would cause a currency split if one of the two groups doesn't defect in a short period of time.
-
DataHoarderThe one that has the monetary majority
-
DataHoarderthis includes hashpower and some specific merchants
-
br-m<fr33_yourself> > <DataHoarder> if their point is "profit" their coins there would be useless
-
br-m<fr33_yourself> Maybe, but isn't this contingent on exchanges, merchants and the majority of the ecosystem configuring their nodes such that they follow the checkpoints (not just the miners following the checkpoints, but even exchanges etc). Because otherwise if some exchanges' nodes or merchants' nodes are configured to simply follow longest chain [... too long, see mrelay.p2pool.observer/e/ovDxrLMKWm1SSHEt ]
-
DataHoardernote DNS checkpoints were originally released to explicitly address that situation of a "split" due to consensus issues coming from a bug
-
DataHoardernodes would opt-in as needed
-
br-m<fr33_yourself> Yes, I agree that after the "currency split", the more valuable currency and its accompanying chain will be followed / mined > <DataHoarder> The one that has the monetary majority
-
DataHoarderalso - note that the current attacker being qubic does not mine 24/7
-
DataHoarderalso read up monero-project/monero #10064
-
br-m<fr33_yourself> DataHoarder: What does this point mean? Why is this relevant to our current discussion? Ohhhh I see what you mean. It would be difficult for them to persistently continue building on their "naughty reorg chain" because they only mine in marathons. So in the event of a chainsplit their chain would die unless CFB overrules their current decentralized AI B.S. and starts mining Monero full time
-
DataHoarderthis is where the 2/3rds was mentioned in the comments
-
DataHoardercorrect, fr33_yourself.
-
DataHoardercheckpoints also prevent a generally covert attacker from implementing one-off attacks
-
br-m<fr33_yourself> You mostly mean merchants that do meaningful transaction volume, exchanges, and proportion of hashpower distributed between the two competing split chains. Whichever has "more" of those aspects would end up winning the split. Pretty much like what happened with BTC and BCH > <DataHoarder> this includes hashpower and some specific merchants
-
DataHoarderExcept here one side knows which is the canonical chain
-
br-m<fr33_yourself> DataHoarder: Yep, because if they can't persistently build on their reorg chain, then it just gets orphaned off. It's like a game of stamina.
-
DataHoarderand the other would temporarily be elsewhere, then flip. it not being permanent matters, plus users on the wrong side can still opt-in to checkpoints or at least see the warnings
-
DataHoarderit is more valuable to do short selfish attacks or literally just mine
-
DataHoarderthey are orphaning others atm but their implementation is not strictly giving higher profits
-
br-m<fr33_yourself> DataHoarder: How would it still be valuable to do short selfish mining attacks if the checkpoint depth is 2 blocks? Wouldn't that make short reorgs impossible? Unless it is like exactly 1 or 2 blocks deep. Or you mean even with checkpointed depth of 2, Qubic could still possibly pass a 3 block depth reorg due to latency?
-
DataHoarderdepth of 2 is: if blocks 100, 101, 102 exist, you are checkpointing 100
-
DataHoarderthey could still make their own version of 101, 102, and 103 and publish these
-
DataHoarderending up with 2-1 orphaned blocks depending on how many they do
-
DataHoarderthey could also do 102, 103 and orphan 102 only
-
DataHoardersay clients have an older one
-
DataHoarderreorg could happen there, but it's still within conf window
-
DataHoarderthen as blocks get built it reorgs back
-
DataHoarderall of that can happen within the 10-block interval and it's all ok
-
DataHoarderthe point being that blocks coming out of the 10-conf window should be well checkpointed by overlapping height/time intervals to account for lucky chances or network delays on DNS records
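A toy sketch of the depth-2 example above (illustrative heights only): with blocks 100, 101, 102 on the chain and 100 checkpointed, everything above the checkpointed height can still be replaced by a competing chain, but nothing at or below it:

```go
package main

import "fmt"

func main() {
	tip := uint64(102)          // current chain tip
	const depth = 2             // checkpoint depth "from tip"
	checkpointed := tip - depth // height 100 is pinned

	for h := checkpointed; h <= tip+1; h++ {
		replaceable := h > checkpointed
		fmt.Printf("height %d: replaceable by a competing chain? %v\n", h, replaceable)
	}
	// Output: 100 is locked; 101, 102 (and a new 103) can still be reorged,
	// all within the 10-block confirmation window.
}
```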
-
br-m<fr33_yourself> I see where you guys are coming from and you are making good points. It is a powerful practical tool, but I guess the philosophical angle is still a bit murky since the rolling dns checkpoints still introduce a centralized point of trust. But in practice perhaps it isn't such a big deal because all honest miners and network [... too long, see mrelay.p2pool.observer/e/oZmirbMKX2hSdmdT ]
-
DataHoarderwhat was mentioned at the previous MRL meeting as well was to start building up documentation around the purpose itself and what activation window we are talking about for the bandaid
-
DataHoarder> the philosophical angle is still a bit murky since the rolling dns checkpoints still introduce a centralized point of trust
-
DataHoarderyeah, read on monero-project/monero #10064 and that's a major point
-
br-m<fr33_yourself> Roger, I will read it now. I appreciate your points and explanations.
-
DataHoardergood testing has been done on testnet to find pain points and issues to bring up, plus other things that need fixing in monero to get a better system for it, but these are not consensus-breaking changes (they are just in the DNS checkpoints subsystem)
-
br-m<privacyx> The most important point is that rolling dns checkpointing is a temporary bandaid (and can be switched off) until a more permanent solution is ready. I lost track - will PoP be implemented alongside it?
-
DataHoarderyeah. they must never become a permanent solution on their own. their purpose is for emergency situations as originally deployed, for consensus split resolution, so this is stretching usage a bit already
-
DataHoarderlightly adapting them to fit this better as well, but when other solutions come in, they should be gone quick
-
br-m<ofrnxmr> @fr33_yourself: At a depth of 3 or more
-
br-m<ofrnxmr> For 10 years, the network has essentially never disagreed / reorged beyond 3 blocks. Any discrepancy that large is likely to be a dishonest attempt to gain an unfair advantage (selfish mining, or otherwise mining while disconnected from the rest of the network)
-
br-m<ofrnxmr> the checkpoints should be following the chain post-any honest reorgs
-
br-m<privacyx> DNS checkpointing only really helps against deep reorgs (9+), but it doesn't address selfish mining. Many miners, myself included, are noticing reduced rewards; it feels like our work is being undercut, and over time that could make small/medium-scale mining harder to sustain.
-
br-m<privacyx> I’m curious: aside from DNS checkpointing, are there any other solutions currently being discussed to mitigate selfish mining on Monero?
-
DataHoarder> it doesn’t address selfish mining
-
DataHoarderif applied on those depths, it reduces their ability to do long chains and "wait" for the chain to catch up. it limits them to short lucky chains, and their work resets every couple of heights
-
DataHoarderthey can't count on monero getting unlucky and them being lucky anymore
-
DataHoarderso the risk their chain gets orphaned is way higher, which makes it less profitable for them to use that strategy as "optimally" (or not really, for them) as they have been doing
-
DataHoarderbut yeah, it is still viable but with reduced orphan depths
-
DataHoardernote they also are getting quite orphaned by monero itself: qubic-snooper.p2pool.observer/tree :)
-
DataHoarder@rucknium:monero.social can you edit monero-project/meta #1263#issuecomment-3255876843 to have the logs in a "code" block via ```<logs>```? I think github is trying to sanitize html tags so only users with invalid html names get shown over the bridge :D