-
br-m
<gingeropolous> huh, so they found 2 blocks within 30 seconds, published those to the net, and kept mining privately, then it took 2 mins for the next block, then 1 for the next...
-
br-m
<gingeropolous> i mean, with their pattern, publish or perish might thwart most of their selfish mining efforts
-
br-m
<gingeropolous> cause it looks like they are getting away with selfish mining even when they aren't lucky. even their blocks that are 2 mins apart, they orphan a block
-
br-m
<gingeropolous> 3502765, 3502766, 3502767
-
br-m
<gingeropolous> i think we really need to distill what we're dealing with and how it can be addressed
-
br-m
<gingeropolous> like, the fact that the issue is still called "mining pool centralization" during MRL meetings. This really has nothing to do with mining pool centralization IMO. If there weren't a mining pool associated with this, maybe they couldn't pull it off. My point is you don't need a pool, you just need to command massive amounts of hashrate.
-
br-m
<gingeropolous> presumably they do need money to fund this, and from the last 1k blocks they've earned 168 xmr, $50k USD, which they've achieved due to selfish mining. If they were publishing their blocks like a normal peer they would have less
-
br-m
<gingeropolous> dang, a month of this they've earned 1.5 mill when it should be closer to 750k or something
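A rough back-of-envelope on those figures (a sketch using only the numbers quoted above; the ~$298/XMR price is implied by them, not a market quote):

```python
# Back-of-envelope check of the figures quoted in chat (168 XMR / ~$50k over
# the last 1000 blocks).  These are rough chat numbers, not measured values.
xmr_per_1k_blocks = 168
usd_for_those = 50_000
implied_price = usd_for_those / xmr_per_1k_blocks   # ~298 USD/XMR

# ~21600 blocks per month at Monero's 2-minute block target
blocks_per_month = 30 * 24 * 60 // 2
monthly_xmr = xmr_per_1k_blocks * blocks_per_month / 1000
monthly_usd = monthly_xmr * implied_price           # 1,080,000 USD
```

That lands at roughly $1.1M/month, the same order of magnitude as the "1.5 mill" eyeballed above.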
-
br-m
<gingeropolous> and meanwhile the honest players are getting boned
-
br-m
<gingeropolous> which could eventually lead to attrition
-
br-m
<gingeropolous> like even the super magical PoS finality layer wouldn't fix this unless the finality layer is 2 blocks deep
-
br-m
<gingeropolous> for some reason i have a hunch that PoP with dynamic k would be the ticket
-
br-m
<gingeropolous> i wonder if you could mix in peer voting / consensus to modify the k as well. like if you see the same altchain proposed from 8 peers, then you accept k deep to override PoP and just use longest chain. If you see the altchain proposed from just 1 peer, you accept n*k deep.
-
br-m
<gingeropolous> because getting the same altchain from multiple peers means that it got to them under their d parameter
-
br-m
<gingeropolous> well no.... no.
-
br-m
<gingeropolous> hrm
-
br-m
<gingeropolous> and yeah could just spoof. need some of that PoW for relay sweetness
-
br-m
<rucknium> Rolling checkpoints are now working on testnet with the moneropulse domains: testpoints.moneropulse.se testpoints.moneropulse.org testpoints.moneropulse.net testpoints.moneropulse.ch testpoints.moneropulse.de testpoints.moneropulse.fr
-
br-m
<rucknium> Huge thank you to DataHoarder, ofrnxmr, and binaryFate for getting this off the ground. Big step toward mainnet deployment.
-
DataHoarder
for people who note that is six and not seven domains: the last domain is pending activation
-
br-m
<rucknium> You can check the records at the Linux terminal with dig -t txt testpoints.moneropulse.net +dnssec
-
pumpxmr
is there anything we can do to help testing? like mining on testnet or just running a node?
-
DataHoarder
if you do not have a valid DNSSEC resolver locally, you can test via $ dig +dnssec +multi testpoints.moneropulse.ch TXT @1.1.1.1
-
DataHoarder
DNS_PUBLIC=tcp://1.1.1.1 if so for monerod
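For anyone scripting against those records: the moneropulse checkpoint TXT records are, as far as I can tell, strings of the form `<height>:<block hash>` (inferred from how monerod parses them — verify against a live dig query). A minimal parsing sketch, using a checkpoint hash quoted later in this log:

```python
# Parse a Monero DNS-checkpoint TXT record of the assumed form
# "<height>:<64-char hex block hash>".
def parse_checkpoint_record(record: str) -> tuple[int, str]:
    height_str, block_hash = record.split(":", 1)
    if len(block_hash) != 64 or set(block_hash) - set("0123456789abcdef"):
        raise ValueError(f"not a 64-char hex hash: {block_hash!r}")
    return int(height_str), block_hash

# Sample record taken from this log (testnet checkpoint at 2837191)
height, block_hash = parse_checkpoint_record(
    "2837191:52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c")
```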
-
br-m
<rucknium> pumpxmr: Running a node on testnet would help, yes. You need to compile this branch because checkpoints are disabled on testnet in the Monero release version:
github.com/nahuhh/monero/tree/dns_testpoints
-
br-m
<rucknium> Then you would run it with ./monerod --testnet --enforce-dns-checkpointing
-
DataHoarder
that above branch also matches
monero-project/monero #10075
-
br-m
<rucknium> set_log checkpoints:TRACE input into the monerod terminal would show you what the node is doing with checkpointing.
-
br-m
<rucknium> DataHoarder: 100% match?
-
DataHoarder
it is pulling from that branch, yes
-
pumpxmr
rucknium: ok i will give it a try. it's been some time since i last compiled monero
-
br-m
<ofrnxmr> @rucknium: It has extra changes to enable checkpoints on testnet. Mainnet has broken logic in some places, where it always uses mainnet checkpoints, and other places where it disables checkpoints if not testnet
-
br-m
<ofrnxmr> I plan to open a second PR to allow checkpoints on testnet/stagenet, but dont want those changes to block 10075
-
br-m
<rucknium> cd monero
-
br-m
<rucknium> git checkout dns_testpoints
-
br-m
<ofrnxmr> The dns_testpoints branch also checks every 60s instead of 120 as planned for mainnet. this dns_testpoints-public branch (
github.com/nahuhh/monero/tree/dns_testpoints-public) has the 120s updates (same as 10075)
-
br-m
<rucknium> pumpxmr: ^
-
br-m
<rucknium> Then one of these instructions should work, depending on your OS:
github.com/moneroexamples/monero-compilation/blob/master/README.md
-
DataHoarder
sorry. yes dns_testpoints is different
-
DataHoarder
I read dns_checkpoints
-
br-m
<ofrnxmr> For users who want to help test on testnet, use the dns_testpoints-public branch
-
pumpxmr
ok
-
DataHoarder
$ git clone --branch dns_testpoints-public
github.com/nahuhh/monero.git
-
br-m
<rucknium> testnet blockchain takes a while to download and verify. If you want a shortcut, you can download this file into .bitmonero/testnet :
185.141.216.147/data.mdb (http only)
-
br-m
<rucknium> It's a snapshot about a month old
-
pumpxmr
thx
-
br-m
<rucknium> pumpxmr: Sorry, you have to put data.mdb in ~/.bitmonero/testnet/lmdb/
-
br-m
<rucknium> One directory deeper.
-
pumpxmr
yes, figured as much, thanks
-
br-m
<ofrnxmr:xmr.mx> Just attempted to split the net
-
br-m
<ofrnxmr:xmr.mx> Lets see if it heals
-
br-m
<ofrnxmr:xmr.mx> one of my 2 checkpointed nodes that was behind, rolled back to the checkpoint successfully
-
pumpxmr
i'm still compiling
-
br-m
<ofrnxmr:xmr.mx> rolled back but doesnt want to pull in the new blocks now :s
-
br-m
<rucknium> @ofrnxmr:xmr.mx: Does your branch use the looser alt block propagation rules?
-
br-m
<ofrnxmr:xmr.mx> @rucknium: Wdym?
-
br-m
<rucknium> IIRC, DataHoarder was working on that
-
DataHoarder
I don't have P2P rules, just RPC ones
-
DataHoarder
I worked on ways to broadcast these but not within monero
-
br-m
<rucknium> I don't think my uncheckpointed node saw the reorg attempt:
testnetnode1.moneroconsensus.info
-
DataHoarder
checkpointed one never saw the alt :)
-
br-m
<ofrnxmr:xmr.mx> I was able to reorg one of my checkpointed nodes - one had latest checkpoint as 88, other had 91. I reorged at 91 so that one of the nodes accepted it and the other blocked it.
-
br-m
<ofrnxmr:xmr.mx> iirc the height was ~95 at the time
-
br-m
<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: This error is from one of my checkpointed nodes
-
DataHoarder
also useful for anyone, set_log checkpoints:TRACE
-
DataHoarder
that logs whenever checkpoints are checked
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: This is good but misses the errors when checkpoints fail
-
br-m
<ofrnxmr:xmr.mx> I set the log lvl to see why this node is stuck
-
br-m
<ofrnxmr:xmr.mx> > 170.17.141.244:28080 494bbc04de8c8e1b synchronizing 180 2837199 0 kB/s, 1 blocks / 0.000141 MB queued
-
br-m
<ofrnxmr:xmr.mx> I think there might be 2 chains going - the real tip is 207
-
br-m
<ofrnxmr:xmr.mx> Yeah, this bug needs solving > <@ofrnxmr:xmr.mx>
mrelay.p2pool.observer/p/3ceS4bYKMkFHaFZr/1.txt (code snippet, 6 lines)
-
DataHoarder
testnet version of blocks btw
testnet-blocks.p2pool.observer
-
br-m
<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: i think the node pops back to the checkpointed height, but does not reinstate the old block
-
br-m
<ofrnxmr:xmr.mx> So my 91 is on the wrong chain
-
br-m
<ofrnxmr:xmr.mx> My node is on height 91 - it was reorged at 96
-
br-m
<ofrnxmr:xmr.mx> Then popped back to 91, now seems unable to proceed
-
br-m
<rucknium> Maybe some relation with "Blockchain: fix temp fails causing alt blocks to be permanently invalid"
monero-project/monero #9395
-
br-m
<ofrnxmr:xmr.mx> I think L2090-2130 of blockchain.cpp might not account for an orphaned chain becoming the main chain
-
br-m
<rucknium> Having taken a brief look at the code, I was fearing that. Does it permanently mark an orphan as invalid? Not very sane.
-
br-m
<ofrnxmr:xmr.mx> seems like it
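The behavior being described can be sketched as a toy model (not Monero code; names and heights are illustrative): a node that permanently caches a popped/orphaned block as invalid can never re-adopt that chain, even after a checkpoint proves it was the right one.

```python
# Toy model of the failure mode: popping blocks marks them permanently
# invalid, so the honest chain cannot be reinstated afterwards.
class ToyNode:
    def __init__(self, chain):
        self.chain = list(chain)     # list of block ids, index = height
        self.invalid = set()         # permanent invalid-block cache

    def pop_to(self, height):
        # Pop blocks above `height`; the buggy behavior caches them as invalid.
        for blk in self.chain[height + 1:]:
            self.invalid.add(blk)
        del self.chain[height + 1:]

    def try_add(self, blk):
        if blk in self.invalid:
            return False             # stuck: rejects its own old chain
        self.chain.append(blk)
        return True

node = ToyNode(["b90", "b91", "b92"])
node.pop_to(0)            # checkpoint mismatch: pop back to b90
ok = node.try_add("b91")  # honest chain re-offers b91 -> rejected, node stuck
```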
-
br-m
<ofrnxmr:xmr.mx> If i unban my local nodes, they are rebanned immediately
-
br-m
<rucknium> On #9395, iamamyth said "I agree with the premise that an invalid block cache doesn't make much sense"
-
br-m
<rucknium> If that's true, even if it's fixed in the new version, it would be a big problem for nodes that don't upgrade because they wouldn't accept the honest orphaned chain becoming the valid chain again after it wins against the adversary.
-
br-m
<rucknium> But didn't we run a test where the attacking chain was overpowered, and everything worked OK?
-
br-m
<rucknium> It may just occur when blocks are popped due to failing a checkpoint
-
DataHoarder
alternatively, checkpoints could be posted to a new set on subdomains
-
DataHoarder
that are looked for by fixed nodes
-
DataHoarder
while old checkpoints endpoint gets checkpoints at depth to prevent multi-day reorgs
-
br-m
<ofrnxmr:xmr.mx> Old checkpoints only update hourly
-
br-m
<ofrnxmr:xmr.mx> old versions*
-
br-m
<ofrnxmr:xmr.mx> They probably shouldnt be enforcing on an old version, and even if they are, the chance of a mismatch is lower because their checkpoints only update once an hour
-
br-m
<ofrnxmr:xmr.mx> @rucknium: yeah, but not if an enforcing chain was reorged to below the latest checkpoint, then had to recover once receiving the checkpoint (which is after seeing the checkpointed chain as an alt)
-
DataHoarder
@ofrnxmr:xmr.mx I mean if older versions are seen as problematic
-
br-m
<ofrnxmr:xmr.mx> I dont think they would be. For 59.9mins/hr, they only know old checkpoints, and are essentially the same as a node that isnt enforcing. If there is a reorg that forces them to rollback beyond the latest checkpoint, they wont ever receive a checkpoint that conflicts with the alt chain
-
br-m
<ofrnxmr:xmr.mx> They only receive / store 1 checkpoint per hr
-
br-m
<ofrnxmr:xmr.mx> If they got the whole list, that might be an issue but the reorg block N-N has no conflicting checkpoints when they pull the checkpoints an hour later
-
DataHoarder
They only receive / store 1 checkpoint per hr
-
DataHoarder
they could receive a very recent one
-
DataHoarder
and trigger the same condition
-
br-m
<ofrnxmr:xmr.mx> (Or if the reorg happens in a short window before they pull the checkpoints)
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: Node receives checkpoint 100
-
br-m
<ofrnxmr:xmr.mx> node is reorged from block 101-> 109
-
br-m
<ofrnxmr:xmr.mx> np. No conflict
-
br-m
<ofrnxmr:xmr.mx> node is reorged 97-101[... more lines follow, see
mrelay.p2pool.observer/e/nL_i4rYKMmZnWDJ4 ]
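The condition being argued over reduces to a one-liner (a sketch of the reasoning, not monerod logic): a stored checkpoint only conflicts with a reorg if the reorg reaches at or below the checkpointed height.

```python
# Sketch of the conflict window for a node that stores one checkpoint per
# hour: only reorgs that rewrite the checkpointed block itself conflict.
def reorg_conflicts(checkpoint_height: int, reorg_start: int) -> bool:
    """True if replacing blocks from `reorg_start` upward would contradict
    a stored checkpoint at `checkpoint_height`."""
    return reorg_start <= checkpoint_height

# The two cases from the messages above, with checkpoint 100 stored:
no_conflict = reorg_conflicts(100, 101)  # reorg 101->109: checkpoint intact
conflict = reorg_conflicts(100, 97)      # reorg from 97: checkpoint rewritten
```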
-
DataHoarder
there is no code difference from your branch to an older checking node right? verification wise
-
DataHoarder
so the same exact situation could have happened
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: Right
-
DataHoarder
so they can't switch to any alt chain?
-
DataHoarder
like, they'd get there/pop and can't receive new
-
br-m
<rucknium> I doubt many mainnet nodes today use --enforce-dns-checkpointing
-
DataHoarder
even if they do, we can issue deeper checkpoints there instead
-
DataHoarder
but the first important part, // FIXME: is it even possible for a checkpoint to show up not on the main chain?
-
DataHoarder
we answered the question yes :)
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: i think this exacerbates the issue
-
DataHoarder
specially on a consensus issue
-
DataHoarder
hmm ofrnxmr:xmr.mx?
-
DataHoarder
deeper checkpoints, say a day old?
-
DataHoarder
or this piece of code
-
br-m
<ofrnxmr:xmr.mx> i dont think day-old checkpoints would have any real world effect?
-
br-m
<ofrnxmr:xmr.mx> 1906 if (parent_in_alt || parent_in_main) is the condition that fails / causes the fallthrough afaict
-
DataHoarder
multi-day reorgs for any would-be covert attacker currently
-
br-m
<ofrnxmr:xmr.mx> enforcing nodes store all of the checkpoints, so they already have old ones if the node has been online
-
DataHoarder
yes. I mean for nodes that would have this issue
-
DataHoarder
existing checkpoints.* get old at deep only
-
DataHoarder
not new at tip
-
br-m
<ofrnxmr:xmr.mx> We need to solve this issue as-is anyway
-
DataHoarder
a fastpoints.* gets the fast/fixed ones, or checkpointsv2
-
DataHoarder
which would be queried by nodes without the issue
-
DataHoarder
(aka, new release)
-
DataHoarder
or even don't change anything on existing subdomains, if there is a chance to trigger the issue
-
br-m
<ofrnxmr:xmr.mx> Or fix the issue
-
br-m
<ofrnxmr:xmr.mx> Lol
-
DataHoarder
AND fix the issue
-
DataHoarder
but old nodes won't automagically update
-
br-m
<ofrnxmr:xmr.mx> This isnt intended behavior imo, just an edge case
-
DataHoarder
someone might have it enforced
-
DataHoarder
maybe they'd notice then :)
-
br-m
<ofrnxmr:xmr.mx> enforcing the checkpoints on old nodes would be rare to hit the issue, and restarting the node fixes it
-
DataHoarder
can it trigger?
-
DataHoarder
if affirmative we should treat the old ones with care
-
DataHoarder
it should fail-safe
-
DataHoarder
getting a node stuck is not safe
-
br-m
<ofrnxmr:xmr.mx> I dont think we need to treat bugs with care - users should update
-
DataHoarder
yes, they should update, and probably rare to be enforcing and not updating
-
br-m
<ofrnxmr:xmr.mx> They are using a buggy feature atm
-
br-m
<rucknium> Why not set this issue aside for now and get the correct behavior in the current patch?
-
DataHoarder
but given the solution for us is so trivial (issue on fastpoints instead of checkpoints as example)
-
DataHoarder
why not do that instead
-
DataHoarder
yeah, that code I need to make a flowchart for
-
DataHoarder
it's half checkpoints half consensus
-
DataHoarder
> parent_in_alt || parent_in_main
-
DataHoarder
block_exists in LMDB
-
DataHoarder
is that only for main blocks?
-
DataHoarder
seems to be a different bucket, yes
-
DataHoarder
so this is just checking if parent exists, we know the incoming block is alt
-
DataHoarder
@ofrnxmr:xmr.mx on your logs, can you see which checkpoint height:id was enforced?
-
br-m
<helene:unredacted.org> @rucknium: it's a flag that's recommended on p2pool setup
-
DataHoarder
(it is now recommended, yes, but not in the past)
-
DataHoarder
I assume whoever changed it will update it in the near future
-
br-m
<helene:unredacted.org> didn't know it didn't use to be recommended :)
-
DataHoarder
in fact --disable was recommended because DNS queries were not async in the past
-
br-m
<rucknium> The docs say "It is probably a good idea to set enforcing for unattended nodes."
docs.getmonero.org/interacting/monerod-reference/#server
-
br-m
<rucknium> But few people probably read the docs that deeply.
-
DataHoarder
I remember reading that line :)
-
DataHoarder
I changed my setup due to blocking DNS queries in the past
-
br-m
<ofrnxmr:xmr.mx> @helene:unredacted.org: Recently
-
br-m
<helene:unredacted.org> few people read the docs*
-
br-m
<helene:unredacted.org> ftfy :P
-
br-m
<ofrnxmr:xmr.mx> 91 was the last one > <DataHoarder> @ofrnxmr:xmr.mx on your logs, can you see which checkpoint height:id was enforced?
-
br-m
<ofrnxmr:xmr.mx> Not from logs, but because i was watching
-
DataHoarder
effectively 2837191:52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c checkpoint
-
br-m
<ofrnxmr:xmr.mx> Yes
-
DataHoarder
thanks, I'll find what happens on all the edges
-
br-m
<ofrnxmr:xmr.mx> BH: 2837191, TH: 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c, DIFF: 210792, CUM_DIFF: 1127320617056, HR: 1756 H/s
-
br-m
<ofrnxmr:xmr.mx> Diff shows this though
-
DataHoarder
that is some BIG if statement
-
br-m
<ofrnxmr:xmr.mx> IKR smh
-
DataHoarder
that is 90
-
DataHoarder
some of the reported numbers are +1
-
br-m
<ofrnxmr:xmr.mx> yeah, it popped back to that, but wont add 91 now (because its marked as alt?)
-
DataHoarder
zero-indexed is truth :D
-
DataHoarder
yeah, lemme see
-
br-m
<ofrnxmr:xmr.mx> Marked as orphaned*
-
DataHoarder
you are at 2837190. 2837191 has 52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c as checkpoint
-
DataHoarder
you received 52b0a4a448f5c27c153bce1de8ad83928c57fa50dfb0f8ddf2972b841cf0cd1c at some point as well (?)
-
br-m
<ofrnxmr:xmr.mx> Yes
-
br-m
<ofrnxmr:xmr.mx> my node was on main tip of like height 196 or so
-
br-m
<ofrnxmr:xmr.mx> The 94 checkpoint came in on node 2, and before node 1 noticed it, i released my reorg from node3
-
DataHoarder
it fails to find parent on new block
-
DataHoarder
so db fails to find block 2837190
-
DataHoarder
id 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c
-
DataHoarder
that is the only condition that message would be emitted
-
DataHoarder
so it rolled to 2837190 but ALSO removed it from db
-
DataHoarder
but it still has kept it as tip???
-
DataHoarder
bool parent_in_alt = m_db->get_alt_block(b.prev_id, &prev_data, NULL);
-
DataHoarder
bool parent_in_main = m_db->block_exists(b.prev_id);
-
DataHoarder
it can't find 6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c on either of those two db calls
-
DataHoarder
cursed, that calls /getinfo internally???
-
br-m
<ofrnxmr:xmr.mx> No idea, probably not get_info
-
DataHoarder
and then getblockheadersrange
-
DataHoarder
then prints
-
DataHoarder
yeah this all uses RPC
-
br-m
<ofrnxmr:xmr.mx> yes, but idk what calls its making
-
br-m
<ofrnxmr:xmr.mx> Note: it only passes the 88 checkpoint now, not the 91
-
DataHoarder
can you do RPC get_block on the hash, then on the height?
-
DataHoarder
curl
127.0.0.1:18081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block","params":{"hash":"6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c"}}' -H 'Content-Type: application/json'
-
DataHoarder
afaik
-
DataHoarder
well 28081
-
br-m
<ofrnxmr:xmr.mx> Rpc blocks
-
br-m
<ofrnxmr:xmr.mx> Conn reset by peer
-
br-m
<ofrnxmr:xmr.mx> Oh
-
br-m
<ofrnxmr:xmr.mx> Its because its banned 127.0.0.1 :D lmao
-
DataHoarder
so? what do you get?
-
br-m
<ofrnxmr:xmr.mx> it gives a valid response
-
DataHoarder
get_block has different logic than block_exists
-
DataHoarder
one sec
-
DataHoarder
trying to get you a way to trigger this edge alternatively to confirm it's that
-
DataHoarder
ugh, it's a binary request
-
br-m
<datahoarder> @ofrnxmr:xmr.mx: curl
127.0.0.1:28081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block_template","params":{"wallet_address":"9wuSwB1qbWpah9b1xgsB14Xmp5wg3pMitW15WtQjAv6US4wBv5HYqQ2LhR4rwUAsK6S3ZHkmCtfw8cPrCaf21Sdx7hFkgTz"}}' -H 'Content-Type: application/json'
-
DataHoarder
that should call that function internally
-
DataHoarder
actually
-
DataHoarder
need an undocumented parameter (again)
-
br-m
<datahoarder> curl
127.0.0.1:28081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_block_template","params":{"prev_block":"6a74590e12bf9674609b09b57eaa9be0ed701d4ae28ad0d6407731d6d6fb168c","wallet_address":"9wuSwB1qbWpah9b1xgsB14Xmp5wg3pMitW15WtQjAv6US4wBv5HYqQ2LhR4rwUAsK6S3ZHkmCtfw8cPrCaf21Sdx7hFkgTz"}}' -H 'Content-Type: application/json'
-
br-m
<datahoarder> that will trigger a specific edge in get_block_template that calls m_db.block_exists
-
br-m
<0xfffc> very strange! > <DataHoarder> get_block has different logic that block_exists
-
br-m
<datahoarder> pop_blocks probably
-
br-m
<datahoarder> Just trying to 100% verify this via triggering it with the known failing hash
-
br-m
<datahoarder> @0xfffc: it should be the same, though, from a quick look. as get_block does return get_block_blob_from_height(get_block_height(h));
-
br-m
<datahoarder> I'm also curious about main blockchain wrong height
-
DataHoarder
> CHECK_AND_ASSERT_MES(m_db->height() > alt_chain.front().height, false, "main blockchain wrong height");
-
DataHoarder
so the alt_chain is greater or equal to m_db height
-
DataHoarder
happens at the same time as "Block recognized as orphaned and rejected"
-
DataHoarder
from what I see first pass on handle_alternative_block gets into the check, hits build_alt_chain, then triggers assert
-
DataHoarder
second pass doesn't go into if (parent_in_alt || parent_in_main)
-
DataHoarder
and triggers Block recognized as orphaned and rejected
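That two-pass failure can be sketched as a toy reconstruction (not Monero code; the heights are the ones from this test, and the control flow is simplified from the description above):

```python
# Toy reconstruction of the two passes through handle_alternative_block():
# pass 1 finds the parent, builds the alt chain, then trips the height
# sanity check; pass 2 no longer finds the parent and rejects as orphaned.
def handle_alt_block(db_height, parent_in_alt, parent_in_main,
                     alt_front_height):
    if parent_in_alt or parent_in_main:
        # build_alt_chain succeeded; the CHECK_AND_ASSERT then requires the
        # main chain to be strictly taller than the alt chain's fork point.
        if not (db_height > alt_front_height):
            return "assert: main blockchain wrong height"
        return "alt block handled"
    return "Block recognized as orphaned and rejected"

# Pass 1: parent still known, but the chain was popped to the fork height,
# so db_height == alt_front_height and the assert fires.
first = handle_alt_block(db_height=2837191, parent_in_alt=True,
                         parent_in_main=False, alt_front_height=2837191)
# Pass 2: the popped parent is gone from both DB lookups.
second = handle_alt_block(db_height=2837191, parent_in_alt=False,
                          parent_in_main=False, alt_front_height=2837191)
```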
-
pumpxmr
my testnet node is synced now: Height: 2837251/2837251 (100.0%) on testnet, not mining, net hash 1.79 kH/s, v16. Also set log lvl to TRACE
-
DataHoarder
need to brb
-
br-m
<rucknium> pumpxmr: Are you seeing the log messages about checkpoints? Should occur every 5 minutes.
-
pumpxmr
not really, i used `set_log TRACE` or should it be something else?
-
pumpxmr
oh i see `set_log checkpoints:TRACE` mentioned
-
br-m
<ofrnxmr:xmr.mx> Every 2 mins *
-
pumpxmr
`2025-09-19 18:15:43.541 I CHECKPOINT PASSED FOR HEIGHT 2837259 <f5fa29b2d80f6563c980f51692ecf3a0a4afe057179e389392284ae0504e2a02>`
-
br-m
<rucknium> Ah, right. Every 2 minutes. I will try to write out the protocol specification.
-
DataHoarder
ye somehow the second pass fails, weird ofrnxmr
-
DataHoarder
you don't happen to have a debugger attached on this? :D
-
br-m
<ofrnxmr> DataHoarder: I dont
-
br-m
<ofrnxmr> Relatively simple to reproduce. Run 3 nodes. 1 attacking. 1 checkpointing, launched on odd-minute. 1 checkpointing, launched on even-minute
-
br-m
<ofrnxmr> have the attacking node use add-exclusive-node to check-node1 and check-node2
-
br-m
<ofrnxmr> When one of the checkpoints would reject your reorg, but the other node hasnt received it yet, release your reorg
-
br-m
<ofrnxmr:xmr.mx> Result: 1 node reorgs, the other rejects it.
-
br-m
<ofrnxmr:xmr.mx> The reorged node will soon after receive a checkpoint that matched the orphaned chain, and the node will pop blocks, but will not reinstate or sync the prev-orphaned chain
-
br-m
<ofrnxmr:xmr.mx> @jack_ma_blabla:matrix.org i cant open your dm btw
-
br-m
<gingeropolous> how are these domains setup? are the moneropulse pointing to another domain? or is there a direct pipe to the moneropulse DNS entry?
-
br-m
<gingeropolous> for some reason in my head i imagine a useful setup would be for whoever is responsible for creating the checkpoints to do it using a non-moneropulse domain, and then the owner of the moneropulse can just use whatever DNS copy is available... like CNAME or whatever
-
DataHoarder
that could be done as well, the other options is DNS delegation
-
DataHoarder
(so other person can issue records on the server they control directly, instead of CNAME)
-
DataHoarder
no need to have another domain
-
br-m
<gingeropolous> right so whats the setup now?
-
DataHoarder
DNS delegation has been tested
-
DataHoarder
right now records are being issued directly to subdomains, just to find breakage, on testpoints
-
DataHoarder
breakage was found :)
-
br-m
<gingeropolous> gotcha
-
br-m
<gingeropolous> yeah, in my head having a split makes sense. I dunno what others think. But community members / pool operators could run their own checkpointing DNS thinger, and then monero core could just redirect the moneropulse to whatever they see fit / is working.
-
DataHoarder
note the setup can change over time with improvements while long term solution is discussed/developed
-
DataHoarder
this is not the final setup, nor is it intended to be
-
br-m
<321bob321> "You don’t need quantum hardware for post-quantum security"
-
br-m
<gingeropolous> oh dang. i can't speak in lab anymore?
-
br-m
<basses:matrix.org> @gingeropolous: bruh, because chat went offtopic, you can continue here
-
br-m
<kayabanerve:matrix.org> The entire room was muted.
-
gingeropolous
wowzers
-
br-m
<gingeropolous> i look forward to your research dude. can't say i'll understand it, but i'll give it my best.
-
br-m
<gingeropolous> i still think more attention needs to be given to the fact that a finality layer won't address selfish mining
-
br-m
<gingeropolous> it'll lessen the damage selfish mining can do, but it'll still be there, stealing honest hashrate and forcing finality layer intervention .... ?
-
br-m
<gingeropolous> and i haven't read through the ccs recently but methinks selfish mining mitigation isn't in scope
-
br-m
<rucknium> Complaints and feedback about moderation in #MRL can be directed here
-
br-m
<rucknium> <---
-
br-m
<rucknium> ^