02:54:25 huh, so they found 2 blocks within 30 seconds, published those to the net, and kept mining privately, then it took 2 mins for the next block, then 1 for the next...
03:01:25 i mean, with their pattern, publish or perish might thwart most of their selfish mining efforts
03:03:02 cause it looks like they are getting away with selfish mining even when they aren't lucky. even their blocks that are 2 mins apart, they orphan a block
03:04:43 3502765, 3502766, 3502767
03:05:27 i think we really need to distill what we're dealing with and how it can be addressed
03:08:39 like, the fact that the issue is still called "mining pool centralization" during MRL meetings. This really has nothing to do with mining pool centralization IMO. If there weren't a mining pool associated with this, maybe they couldn't pull it off. My point is you don't need a pool, you just need to command massive amounts of hashrate.
03:15:27 presumably they do need money to fund this, and from the last 1k blocks they've earned 168 xmr, $50k USD, which they've achieved due to selfish mining. If they were publishing their blocks like a normal peer they would have less
03:20:01 dang, a month of this and they've earned 1.5 mill when it should be closer to 750k or something
03:21:51 and meanwhile the honest players are getting boned
03:22:25 which could eventually lead to attrition
03:25:09 like even the super magical PoS finality layer wouldn't fix this unless the finality layer is 2 blocks deep
03:27:18 for some reason i have a hunch that PoP with dynamic k would be the ticket
03:40:13 i wonder if you could mix in peer voting / consensus to modify the k as well. like if you see the same altchain proposed from 8 peers, then you accept k deep to override PoP and just use longest chain. If you see the altchain proposed from just 1 peer, you accept n*k deep.
03:41:38 because getting the same altchain from multiple peers means that it got to them under their d parameter
03:41:57 well no.... no.
03:41:59 hrm
03:43:57 and yeah could just spoof. need some of that PoW for relay sweetness
14:03:58 Rolling checkpoints are now working on testnet with the moneropulse domains: testpoints.moneropulse.se testpoints.moneropulse.org testpoints.moneropulse.net testpoints.moneropulse.ch testpoints.moneropulse.de testpoints.moneropulse.fr
14:05:04 Huge thank you to DataHoarder, ofrnxmr, and binaryFate for getting this off the ground. Big step toward mainnet deployment.
14:06:06 for people that note that is six and not seven domains: the last domain is pending activation
14:09:19 You can check the records at the Linux terminal with dig -t txt testpoints.moneropulse.net +dnssec
14:10:06 is there anything we can do to help testing? like mining on testnet or just running a node?
14:11:05 if you do not have a valid DNSSEC resolver locally, you can test via $ dig +dnssec +multi testpoints.moneropulse.ch TXT @1.1.1.1
14:11:24 DNS_PUBLIC=tcp://1.1.1.1 if so for monerod
14:11:59 pumpxmr: Running a node on testnet would help, yes. You need to compile this branch because checkpoints are disabled on testnet in the Monero release version: https://github.com/nahuhh/monero/tree/dns_testpoints
14:12:47 Then you would run it with ./monerod --testnet --enforce-dns-checkpointing
14:13:06 that above branch also matches https://github.com/monero-project/monero/pull/10075
14:13:12 set_log checkpoints:TRACE input into the monerod terminal would show you what the node is doing with checkpointing.
14:13:29 DataHoarder: 100% match?
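[Editor's note: the TXT records on those testpoints subdomains carry the checkpoints themselves. Below is a minimal sketch of how a node could turn such records into (height, hash) pairs, assuming the usual MoneroPulse "<height>:<block hash>" record layout; the function name and validation details are illustrative, not the actual checkpoints.cpp code.]

```cpp
// Sketch only: parse DNS TXT checkpoint records of the assumed form
// "<height>:<block hash>" into a height -> hash map. The real parsing
// lives in Monero's checkpoints code; names here are illustrative.
#include <cstdint>
#include <map>
#include <sstream>
#include <string>
#include <vector>

std::map<uint64_t, std::string> parse_checkpoint_records(const std::vector<std::string>& txt_records)
{
  std::map<uint64_t, std::string> checkpoints;
  for (const std::string& record : txt_records)
  {
    const std::size_t sep = record.find(':');
    if (sep == std::string::npos)
      continue;                                // malformed record, skip it
    std::istringstream height_ss(record.substr(0, sep));
    uint64_t height = 0;
    if (!(height_ss >> height))
      continue;                                // height is not a number
    const std::string hash = record.substr(sep + 1);
    if (hash.size() != 64)
      continue;                                // expect a 64-char hex block id
    checkpoints[height] = hash;                // later records at the same height win
  }
  return checkpoints;
}
```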
14:14:12 it is pulling from that branch, yes
14:27:41 rucknium: ok i will give it a try. it's been some time since i last compiled monero
14:29:02 @rucknium: It has extra changes to enable checkpoints on testnet. Mainnet has broken logic in some places, where it always uses mainnet checkpoints, and other places where it disables checkpoints if not testnet
14:29:35 I plan to open a second PR to allow checkpoints on testnet/stagenet, but dont want those changes to block 10075
14:32:14 git clone https://github.com/nahuhh/monero.git
14:32:14 cd monero
14:32:14 git checkout dns_testpoints
14:32:16 The dns_testpoints branch also checks every 60s instead of 120s as planned for mainnet. This dns_testpoints-public branch (https://github.com/nahuhh/monero/tree/dns_testpoints-public) has the 120s updates (same as 10075)
14:32:18 pumpxmr: ^
14:32:40 Then one of these instructions should work, depending on your OS: https://github.com/moneroexamples/monero-compilation/blob/master/README.md
14:32:46 sorry. yes dns_testpoints is different
14:32:52 I read dns_checkpoints
14:34:02 For users who want to help test on testnet, use the dns_testpoints-public branch
14:34:15 ok
14:34:25 https://github.com/nahuhh/monero/tree/dns_testpoints-public
14:35:03 $ git clone --branch dns_testpoints-public https://github.com/nahuhh/monero.git
14:36:31 testnet blockchain takes a while to download and verify. If you want a shortcut, you can download this file into .bitmonero/testnet : http://185.141.216.147/data.mdb (http only)
14:36:45 It's a snapshot about a month old
14:37:52 thx
14:48:37 pumpxmr: Sorry, you have to put data.mdb in ~/.bitmonero/testnet/lmdb/
14:48:37 One directory deeper.
15:02:47 yes, figured as much, thanks
15:34:51 Just attempted to split the net
15:34:59 Lets see if it heals
15:35:50 one of my 2 checkpointed nodes that was behind, rolled back to the checkpoint successfully
15:36:06 i'm still compiling
15:41:21 rolled back but doesnt want to pull in the new blocks now :s
15:42:08 @ofrnxmr:xmr.mx: Does your branch use the looser alt block propagation rules?
15:43:46 https://mrelay.p2pool.observer/p/3ceS4bYKMkFHaFZr/1.txt (code snippet, 6 lines)
15:44:06 @rucknium: Wdym?
15:44:31 IIRC, DataHoarder was working on that
15:45:02 I don't have P2P rules, just RPC ones
15:45:17 I worked on ways to broadcast these but not within monero
15:45:22 I don't think my uncheckpointed node saw the reorg attempt: https://testnetnode1.moneroconsensus.info/
15:46:30 checkpointed one never saw the alt :)
15:46:39 I was able to reorg one of my checkpointed nodes - one had latest checkpoint as 88, other had 91. I reorged at 91 so that one of the nodes accepted it and the other blocked it.
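[Editor's note: the rollback behaviour being tested in the split-net experiment above, sketched as code. This is a simplified illustration of what an enforcing node is expected to do when a newly received DNS checkpoint conflicts with its current chain; the interface and function names are hypothetical, not monerod's actual API.]

```cpp
// Sketch, not daemon code: when a new DNS checkpoint arrives, an
// enforcing node compares it with the block it already has at that
// height and rolls back if they conflict. All names are illustrative.
#include <cstdint>
#include <string>

struct chain_view
{
  virtual uint64_t top_height() const = 0;                      // height of the current tip
  virtual std::string block_id_at(uint64_t height) const = 0;   // block hash at a given height
  virtual void pop_blocks_above(uint64_t height) = 0;           // drop all blocks above this height
  virtual ~chain_view() = default;
};

// Returns true if the chain already agrees with the checkpoint,
// false if it had to roll back (and now must resync from peers).
bool enforce_checkpoint(chain_view& chain, uint64_t cp_height, const std::string& cp_hash)
{
  if (chain.top_height() < cp_height)
    return true;                        // checkpoint is ahead of us, nothing to verify yet
  if (chain.block_id_at(cp_height) == cp_hash)
    return true;                        // already on the checkpointed chain
  // Conflict: our block at cp_height is not the checkpointed one.
  // Roll back below the checkpoint and wait for the honest chain.
  chain.pop_blocks_above(cp_height - 1);
  return false;
}
```

[In the test above, this rollback half worked; the failure discussed in the rest of the log is the step after it, re-accepting the previously orphaned (now checkpointed) chain.]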
15:46:39 iirc the height was ~95 at the time
15:46:55 @ofrnxmr:xmr.mx: This error is from one of my checkpointed nodes
15:47:29 also useful for anyone, set_log checkpoints:TRACE
15:47:37 that logs whenever checkpoints are checked
15:48:10 DataHoarder: This is good but misses the errors when checkpoints fail
15:48:34 I set the lvl to see why this node is stuck
15:49:28 > 170.17.141.244:28080 494bbc04de8c8e1b synchronizing 180 2837199 0 kB/s, 1 blocks / 0.000141 MB queued
15:49:58 I think there might be 2 chains going - the real tip is 207
15:50:58 Yeah, this bug needs solving > <@ofrnxmr:xmr.mx> https://mrelay.p2pool.observer/p/3ceS4bYKMkFHaFZr/1.txt (code snippet, 6 lines)
15:52:01 testnet version of blocks btw https://testnet-blocks.p2pool.observer/
15:52:16 tip is https://testnet-blocks.p2pool.observer/block/dcfc4ba04d966abf510f91537df74e7df8fea03d106f7f4915043d86ee260fee
15:52:53 checkpoints at https://testnet-blocks.p2pool.observer/block/3b7904954dc11bf7f38422fb761b4decac391910439ecdad3d9318f3510fdaaf
15:53:19 you are getting this one https://testnet-blocks.p2pool.observer/block/5a5c32a390d6f53dd9d1d1552fb53a3223ab7d66be0eec73c829ae93ec3e56b9
15:53:26 @ofrnxmr:xmr.mx: i think the node pops back to the checkpointed height, but does not reinstate the old block
15:53:40 So my 91 is on the wrong chain
15:54:31 My node is on height 91 - it was reorged at 96
15:54:50 Then popped back to 91, now seems unable to proceed
16:03:12 Maybe some relation with "Blockchain: fix temp fails causing alt blocks to be permanently invalid" https://github.com/monero-project/monero/pull/9395
16:08:26 I think L2090-2130 of blockchain.cpp might not account for an orphaned chain becoming the main chain
16:10:04 Having taken a brief look at the code, I was fearing that. Does it permanently mark an orphan as invalid? Not very sane.
16:10:19 seems like it
16:10:34 If i unban my local nodes, they are rebanned immediately
16:11:29 On #9395, iamamyth said "I agree with the premise that an invalid block cache doesn't make much sense"
16:14:03 https://mrelay.p2pool.observer/p/17eB4rYKaW1iYkFf/1.txt (code snippet, 27 lines)
16:15:30 If that's true, even if it's fixed in the new version, it would be a big problem for nodes that don't upgrade because they wouldn't accept the honest orphaned chain becoming the valid chain again after it wins against the adversary.
16:16:51 But didn't we run a test where the attacking chain was overpowered, and everything worked OK?
16:17:08 It may just occur when blocks are popped due to failing a checkpoint
16:17:54 alternatively, checkpoints could be posted to a new set of subdomains
16:18:08 that are looked for by fixed nodes
16:18:36 while the old checkpoints endpoint gets checkpoints at depth to prevent multi-day reorgs
16:27:45 Old checkpoints only update hourly
16:28:16 old versions*
16:29:15 They probably shouldnt be enforcing on an old version, and even if they are, the chance of a mismatch is lower because their checkpoints only update once an hour
16:31:00 @rucknium: yeah, but not if an enforcing node was reorged to below the latest checkpoint, then had to recover once receiving the checkpoint (which is after seeing the checkpointed chain as an alt)
16:33:19 @ofrnxmr:xmr.mx I mean if older versions are seen as problematic
16:35:54 I dont think they would be. For 59.9 mins/hr, they only know old checkpoints, and are essentially the same as a node that isnt enforcing.
If there is a reorg that forces them to roll back beyond the latest checkpoint, they wont ever receive a checkpoint that conflicts with the alt chain
16:36:11 They only receive / store 1 checkpoint per hr
16:37:07 If they got the whole list, that might be an issue, but the reorg block N-N has no conflicting checkpoints when they pull the checkpoints an hour later
16:38:00 They only receive / store 1 checkpoint per hr
16:38:05 they could receive a very recent one
16:38:17 and trigger the same condition
16:38:20 (Or if the reorg happens in a short window before they pull the checkpoints)
16:40:33 DataHoarder: Node receives checkpoint 100
16:40:33 node is reorged from block 101-> 109
16:40:33 np. No conflict
16:40:33 node is reorged 97-101[... more lines follow, see https://mrelay.p2pool.observer/e/nL_i4rYKMmZnWDJ4 ]
16:41:37 there is no code difference from your branch to an older checking node right? verification wise
16:41:49 so the same exact situation could have happened
16:42:22 DataHoarder: Right
16:42:31 so they can't switch to any alt chain?
16:43:06 like, they'd get there/pop and can't receive new
16:45:01 I doubt many mainnet nodes today use --enforce-dns-checkpointing
16:45:34 even if they do, we can issue deeper checkpoints there instead
16:46:50 but the first important part, // FIXME: is it even possible for a checkpoint to show up not on the main chain?
16:46:58 we answered the question yes :)
16:47:24 DataHoarder: i think this exacerbates the issue
16:47:31 especially on a consensus issue
16:47:47 hmm ofrnxmr:xmr.mx?
16:47:55 deeper checkpoints, say a day old?
16:48:25 or this piece of code
16:48:35 i dont think day-old checkpoints would have any real world effect?
16:48:55 1906 if (parent_in_alt || parent_in_main) is the condition that fails / causes the fallthrough afaict
16:49:02 multi-day reorgs for any would-be covert attacker currently
16:50:55 enforcing nodes store all of the checkpoints, so they already have old ones if the node has been online
16:51:32 yes. I mean for nodes that would have this issue
16:51:49 existing checkpoints.* get old at depth only
16:51:53 not new at tip
16:52:10 We need to solve this issue as-is anyway
16:52:12 a fastpoints.* gets the fast/fixed ones, or checkpointsv2
16:52:24 which would be queried by nodes without the issue
16:52:31 (aka, new release)
16:53:11 or even don't change anything on existing subdomains, if there is a chance to trigger the issue
16:53:36 Or fix the issue
16:53:38 Lol
16:53:44 AND fix the issue
16:53:53 but old nodes won't automagically update
16:53:56 This isnt intended behavior imo, just an edge case
16:54:01 someone might have it enforced
16:54:12 maybe they'd notice then :)
16:55:04 enforcing the checkpoints on old nodes would rarely hit the issue, and restarting the node fixes it
16:55:20 can it trigger?
16:55:31 if affirmative we should treat the old ones with care
16:55:50 it should fail-safe
16:56:06 getting a node stuck is not safe
16:56:17 I dont think we need to treat bugs with care - users should update
16:56:22 yes, they should update, and probably rare to be enforcing and not updating
16:56:41 They are using a buggy feature atm
16:56:53 Why not set this issue aside for now and get the correct behavior in the current patch?
16:56:54 but given the solution for us is so trivial (issue on fastpoints instead of checkpoints as example)
16:57:00 why not do that instead
16:57:26 yeah, that code I need to make a flowchart for
16:57:46 it's half checkpoints half consensus
16:58:44 > parent_in_alt || parent_in_main
16:59:29 block_exists in LMDB
16:59:34 is that only for main blocks?
17:00:01 seems to be a different bucket, yes
17:00:44 so this is just checking if parent exists, we know the incoming block is alt
17:01:20 @ofrnxmr:xmr.mx in your logs, can you see which checkpoint height:id was enforced?
17:02:58 @rucknium: it's a flag that's recommended in p2pool setup
17:03:32 (it is now recommended, yes, but not in the past)
17:03:45 I assume whoever changed it now will update it in the near future
17:03:50 didn't know it didn't use to be recommended :)
17:04:12 in fact --disable was recommended because DNS queries were not async in the past
17:04:47 The docs say "It is probably a good idea to set enforcing for unattended nodes." https://docs.getmonero.org/interacting/monerod-reference/#server
17:05:00 But few people probably read the docs that deeply.
17:05:13 I remember reading that line :)
17:05:24 I changed my setup due to blocking DNS queries in the past
17:05:30 @helene:unredacted.org: Recently
17:05:31 few people read the docs*
17:05:36 ftfy :P
17:05:59 91 was the last one > @ofrnxmr:xmr.mx in your logs, can you see which checkpoint height:id was enforced?
17:06:08 Not from logs, but because i was watching
17:06:34 so https://testnet-blocks.p2pool.observer/block/52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c ?
17:07:08 effectively 2837191:52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c checkpoint
17:07:38 Yes
17:07:56 thanks, I'll find what happens on all the edges
17:09:15 BH: 2837191, TH: 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c, DIFF: 210792, CUM_DIFF: 1127320617056, HR: 1756 H/s
17:09:24 Diff shows this though
17:10:20 that is some BIG if statement
17:10:43 IKR smh
17:10:48 that is 90
17:10:56 https://testnet-blocks.p2pool.observer/block/6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c
17:11:25 some of the reported numbers are +1
17:11:34 yeah, it popped back to that, but wont add 91 now (because its marked as alt?)
17:11:38 zero-indexed is truth :D
17:11:41 yeah, lemme see
17:11:45 Marked as orphaned*
17:12:01 you are at 2837190. 2837191 has 52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c as checkpoint
17:12:09 you received 52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c at some point as well (?)
17:12:21 Yes
17:12:38 my node was on main tip of like height 196 or so
17:13:07 The 94 checkpoint came in on node 2, and before node 1 noticed it, i released my reorg from node3
17:13:10 it fails to find parent on new block
17:13:18 so db fails to find block 2837190
17:13:24 id 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c
17:13:37 that is the only condition that message would be emitted
17:14:00 so it rolled to 2837190 but ALSO removed it from db
17:14:07 but it still has kept it as tip???
17:14:17 bool parent_in_alt = m_db->get_alt_block(b.prev_id, &prev_data, NULL);
17:14:17 bool parent_in_main = m_db->block_exists(b.prev_id);
17:14:20 https://mrelay.p2pool.observer/p/4Z3e47YKaHZweDd2/1.txt (code snippet, 5 lines)
17:14:29 it can't find 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c on either of those two db calls
17:15:22 cursed, that calls /getinfo internally???
17:15:39 No idea, probably not get_info
17:15:49 and then getblockheadersrange
17:15:56 then prints
17:16:43 yeah this all uses RPC
17:17:00 yes, but idk what calls it's making
17:17:11 Note: it only passes the 88 checkpoint now, not the 91
17:17:14 can you do RPC get_block on the hash, then on the height?
17:18:00 curl http://127.0.0.1:18081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block","params":{"hash":"6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c"}}' -H 'Content-Type: application/json'
17:18:02 afaik
17:18:07 well 28081
17:21:02 Rpc blocks
17:21:19 Conn reset by peer
17:21:31 Oh
17:21:36 Its because its banned 127.0.0.1 :D lmao
17:23:52 so? what do you get?
17:25:01 it gives a valid response
17:25:34 https://mrelay.p2pool.observer/p/trCH5LYKVGtUc3ZU/1.txt (code snippet, 32 lines)
17:26:39 get_block has different logic than block_exists
17:26:40 one sec
17:26:59 trying to get you a way to trigger this edge alternatively to confirm it's that
17:32:09 ugh, it's a binary request
17:39:10 @ofrnxmr:xmr.mx: curl http://127.0.0.1:28081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block_template","params":{"wallet_address":"9wuSwB1qbWpah9b1xgsB14Xmp5wg3pMitW15WtQjAv6US4wBv5HYqQ2LhR4rwUAsK6S3ZHkmCtfw8cPrCaf21Sdx7hFkgTz"}}' -H 'Content-Type: application/json'
17:39:23 that should call that function internally
17:40:30 actually
17:40:39 need an undocumented parameter (again)
17:41:29 curl http://127.0.0.1:28081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block_template","params":{"prev_block":"6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c","wallet_address":"9wuSwB1qbWpah9b1xgsB14Xmp5wg3pMitW15WtQjAv6US4wBv5HYqQ2LhR4rwUAsK6S3ZHkmCtfw8cPrCaf21Sdx7hFkgTz"}}' -H 'Content-Type: application/json'
17:41:47 that will trigger a specific edge in get_block_template that calls m_db.block_exists
17:41:49 <0xfffc> very strange! > get_block has different logic than block_exists
17:41:57 pop_blocks probably
17:42:21 Just trying to 100% verify this via triggering it with the known failing hash
17:46:55 @0xfffc: it should be the same, though, from a quick look. as get_block does return get_block_blob_from_height(get_block_height(h));
17:48:43 I'm also curious about "main blockchain wrong height"
17:49:18 > CHECK_AND_ASSERT_MES(m_db->height() > alt_chain.front().height, false, "main blockchain wrong height");
17:49:31 so the alt_chain is greater or equal to m_db height
17:49:51 happens at the same time as "Block recognized as orphaned and rejected"
17:52:42 from what I see first pass on handle_alternative_block gets into the check, hits build_alt_chain, then triggers the assert
17:53:06 second pass doesn't go into if (parent_in_alt || parent_in_main)
17:53:12 and triggers Block recognized as orphaned and rejected
17:55:09 my testnet node is synced now: Height: 2837251/2837251 (100.0%) on testnet, not mining, net hash 1.79 kH/s, v16. Also set log lvl to TRACE
17:55:09 need to brb
18:02:34 https://mrelay.p2pool.observer/p/reqO5bYKNW03MWdH/1.txt (code snippet, 16 lines)
18:04:10 pumpxmr: Are you seeing the log messages about checkpoints? Should occur every 5 minutes.
18:10:54 not really, i used `set_log TRACE` or should it be something else?
18:11:31 oh i see `set_log checkpoints:TRACE` mentioned
18:12:45 Every 2 mins *
18:16:18 `2025-09-19 18:15:43.541 I CHECKPOINT PASSED FOR HEIGHT 2837259 `
18:16:34 Ah, right. Every 2 minutes. I will try to write out the protocol specification.
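[Editor's note: for readers following the code path being reverse-engineered above, here is a rough sketch of the branch in question. The two parent lookups mirror the blockchain.cpp snippet quoted at 17:14; the stub types, function names, and surrounding structure are simplified illustrations, not the actual source.]

```cpp
// Sketch of the suspected fallthrough in handle_alternative_block.
// Only the two db lookups are taken from the quoted source; everything
// else is simplified for illustration.
#include <string>

struct block_t { std::string prev_id; };      // illustrative stand-in for cryptonote::block

struct block_db                               // illustrative stand-in for the blockchain DB
{
  bool get_alt_block(const std::string& id) const;   // lookup in the alt-block bucket
  bool block_exists(const std::string& id) const;    // lookup in the main-chain bucket
};

bool build_and_maybe_switch(const block_t& b);        // normal alt-chain handling (stub)

bool handle_alternative_block_sketch(const block_db& db, const block_t& b)
{
  const bool parent_in_alt  = db.get_alt_block(b.prev_id);  // is the parent a known alt block?
  const bool parent_in_main = db.block_exists(b.prev_id);   // is the parent on the main chain?

  if (parent_in_alt || parent_in_main)
  {
    // Normal path: build the alt chain, check it against the checkpoints,
    // and switch to it if it wins ("first pass" in the log, which hit the
    // "main blockchain wrong height" assert).
    return build_and_maybe_switch(b);
  }

  // "Second pass" seen in the testnet logs: after the checkpoint-triggered
  // rollback, the popped parent (2837190 here) is found in neither bucket,
  // so the block that is actually on the checkpointed chain gets
  // "Block recognized as orphaned and rejected" and the node stays stuck.
  return false;
}
```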
18:16:55 ye somehow the second pass fails, weird ofrnxmr
18:17:08 you don't happen to have a debugger attached on this? :D
18:21:29 DataHoarder: I dont
18:22:36 Relatively simple to reproduce. Run 3 nodes. 1 attacking. 1 checkpointing, launched on an odd minute. 1 checkpointing, launched on an even minute
18:23:26 have the attacking node use add-exclusive-node to check-node1 and check-node2
18:24:11 When one of the checkpoints would reject your reorg, but the other node hasnt received it yet, release your reorg
18:26:15 Result: 1 node reorgs, the other rejects it.
18:26:15 The reorged node will soon after receive a checkpoint that matches the orphaned chain, and the node will pop blocks, but will not reinstate or sync the prev-orphaned chain
18:26:48 @jack_ma_blabla:matrix.org i cant open your dm btw
19:42:49 how are these domains set up? are the moneropulse domains pointing to another domain? or is there a direct pipe to the moneropulse DNS entry?
19:43:59 for some reason in my head i imagine a useful setup would be for whoever is responsible for creating the checkpoints to do it using a non-moneropulse domain, and then the owner of the moneropulse domain can just use whatever DNS copy is available... like CNAME or whatever
19:44:21 that could be done as well, the other option is DNS delegation
19:44:38 (so the other person can issue records on the server they control directly, instead of CNAME)
19:44:44 no need to have another domain
19:44:50 right so whats the setup now?
19:44:50 DNS delegation has been tested
19:45:12 right now records are being issued directly to subdomains, just to find breakage, on testpoints
19:45:19 breakage was found :)
19:45:30 gotcha
19:46:51 yeah, in my head having a split makes sense. I dunno what others think. But community members / pool operators could run their own checkpointing DNS thinger, and then monero core could just redirect the moneropulse domains to whatever they see fit / is working.
19:53:19 note the setup can change over time with improvements while the long term solution is discussed/developed
19:53:28 this is not the final setup, nor is it intended to be
21:42:18 <321bob321> https://blog.cloudflare.com/you-dont-need-quantum-hardware/
21:42:18 <321bob321> "You don’t need quantum hardware for post-quantum security"
23:42:04 oh dang. i can't speak in lab anymore?
23:44:41 @gingeropolous: bruh, because chat went offtopic, you can continue here
23:45:00 The entire room was muted.
23:45:07 wowzers
23:46:22 i look forward to your research dude. can't say i'll understand it, but i'll give it my best.
23:47:45 i still think more attention needs to be given to the fact that a finality layer won't address selfish mining
23:49:14 it'll lessen the damage selfish mining can do, but it'll still be there, stealing honest hashrate and forcing finality layer intervention .... ?
23:50:14 and i haven't read through the ccs recently but methinks selfish mining mitigation isn't in scope
23:53:51 Complaints and feedback about moderation in #MRL can be directed here
23:53:58 <---
23:54:05 ^