09:46:09 I'm about to publish my findings from sweeping through the various selfish strategies: which ones are more profitable, which ones are more disruptive. However, I want to pause and ask about the wisdom of this, given that Qubic monitors this chat. 09:47:38 It'd be reasonable to wait for the bandaid that is DNS checkpoints ... which is still blocked by an issue within monero code 09:49:17 Okay. If anyone is interested, I can send the results privately. 13:29:54 Why not tie it to the transaction? I'm thinking of something like view tags for wallets, a fast sort of pre-check to see if the TX is potentially valid. Preferably one that takes longer to generate than verify, in this case > <@articmine> Since the POW is not tied to the TX 13:30:04 seems more..... idk, natural, to me 15:30:41 MRL meeting in this room in 1.5 hours. 17:00:45 Meeting time! https://github.com/monero-project/meta/issues/1278 17:00:52 1. Greetings 17:01:00 Hi 17:01:11 hello 17:01:49 Hello 17:02:20 waves 17:02:57 2. Updates. What is everyone working on? 17:03:21 @jeffro256:monero.social: ping 17:03:34 me: mostly fcmp++/carrot alpha stressnet bug squashing / investigating 17:03:55 I've posted an implementation proposal for PoWER: https://github.com/monero-project/research-lab/issues/133#issuecomment-3377869740 17:04:07 me: Helping get stressnet stressed. Squashed bugs in my R package to spam transactions on stressnet (https://github.com/Rucknium/xmrspammer). At least two other people are using it to spam. Keeping https://stressnetnode1.moneronet.info/ and https://stressnetnode2.moneronet.info/ collecting and displaying node performance data. Largest block so far was 10 MB in size AFAIK. 17:05:21 Me: still testing carrot integration in lws. The balance key, including spend tracking, has been tested as working, but the subaddress+carrot balance key testing and integration is still ongoing.
Shouldn't be too bad 17:05:21 me: updated own p2pool/monero libraries to support carrot/fcmp++ stressnet and provided some feedback on specific areas where p2pool might require changes (and changes were made, many thanks!). Made a light notes document for anyone making changes on mining-related projects 17:05:21 https://git.gammaspectra.live/P2Pool/consensus/src/branch/fcmp/monero/address/carrot/STRESSNET.md 17:06:00 Howdy 17:06:37 @vtnerd:monero.social: Any thoughts about MyMonero shutting down? 17:06:45 Me: working on the parameters for the sanity median. 17:06:45 Also researching uneconomical transaction attacks 17:08:04 <0xfffc> Hi everyone. Sorry for being late. 17:08:41 We've been wondering what they would do with carrot changes, the upgrade was going to be painful. I feel a little responsible for forging ahead with lwsf instead of upgrading their stack - otoh upgrading lwsf for carrot will be much easier because it hooks into the monero codebase, unlike mymonero 17:09:12 <0xfffc> Me: I was fighting with a syncing issue I had in stressnet. Will be involved this week on stressnet. 17:09:35 There's at least one lwsf-based wallet in progress, but unclear whether it gets released, dtc 17:10:07 3. Carrot follow-up audit (https://gist.github.com/jeffro256/12f4fcc001058dd1f3fd7e81d6476deb). 17:10:16 Anything about this item? 17:10:39 No, I think it can be closed now. 17:11:14 4. Proof-of-Work-Enabled Relay ("PoWER") (https://github.com/monero-project/research-lab/issues/133). 17:12:44 Some things left in my proposal are picking the target difficulty and penalties for misbehaving nodes 17:13:24 My understanding is that PoWER was to be applied per connection 17:13:29 I also thought about only including a nonce in the challenge instead of including a recent block hash 17:13:35 Public RPC should expose it as an option as well 17:14:06 @articmine:monero.social: it is 17:14:08 @hinto: yeah I would suggest that 17:14:16 Then only do it for large transactions?
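A minimal sketch of the PoWER idea discussed in this agenda item: gate large transactions on public RPC behind a proof-of-work check. The constant names mirror the proposal linked above; the SHA-256 hash, the difficulty encoding, and the threshold logic here are placeholder assumptions (the real design would likely use a memory-hard hash and different parameters).

```python
import hashlib

# Names taken from the proposal discussion; values/hash are illustrative only.
POWER_INPUT_THRESHOLD = 8       # txs with more inputs than this need PoW on public RPC
POWER_DIFFICULTY = 1024         # placeholder target difficulty

def power_required(is_public_rpc: bool, num_inputs: int) -> bool:
    """PoWER is mandatory for large txs submitted over a public RPC interface."""
    return is_public_rpc and num_inputs > POWER_INPUT_THRESHOLD

def check_pow(challenge: bytes, solution: bytes, difficulty: int) -> bool:
    """Accept if H(challenge || solution) clears the difficulty target."""
    digest = hashlib.sha256(challenge + solution).digest()
    return int.from_bytes(digest, "big") < (1 << 256) // difficulty

def solve_pow(challenge: bytes, difficulty: int) -> bytes:
    """Brute-force a solution; a wallet would do this before submitting a big tx."""
    counter = 0
    while True:
        solution = counter.to_bytes(8, "big")
        if check_pow(challenge, solution, difficulty):
            return solution
        counter += 1
```

The asymmetry the proposal relies on: `solve_pow` costs ~`difficulty` hash evaluations, while `check_pow` costs one.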
17:15:08 I would also make the nonce per connection 17:15:10 If it's a public API and the transaction's input count is greater than POWER_INPUT_THRESHOLD (set to 8), then PoWER is mandatory in the proposal 17:15:12 @jberman: I was referring to the above 17:16:22 "If it's a public API and the transaction's input count is greater than POWER_INPUT_THRESHOLD (set to 8), then PoWER is mandatory in the proposal" -> this sgtm, the spec makes it a little unclear in the Interfaces section 17:17:04 I think just a nonce, no hash, also sounds reasonable 17:17:36 without a recent-ish hash these can be made well in advance, right? 17:17:56 or a nonce per connection that requires these to be calculated then 17:18:05 The point of the hash is to prevent building up PoW, although with a large enough nonce (64 bits?) it should be okay? 17:18:08 each hop would require that nonce for any large tx 17:18:24 @boog900: If you don't do this you allow multiple connections with 1 PoW solution 17:18:38 DataHoarder: Just ROC? 17:18:48 RPC 17:18:53 @hinto:monero.social: the nonce is the challenge right? 17:19:05 POWER_NONCE_WINDOW=60, POWER_NONCE_LEEWAY=1 -> as in, if a nonce is 100s old, it is accepted? but not if 130s? 17:19:11 yes 17:19:23 RPC to node, this node then does PoW connections to other nodes... how do other nodes broadcast their txs? more PoW? 17:19:36 they have to pay the price for someone making big txs -> they might as well drop all big txs 17:20:00 @jberman:monero.social: yes, the previous could be accepted as leeway 17:20:03 especially with dandelion++, where they spread it 17:20:34 The big TX price is paid by the wallet if it is RPC 17:20:39 I don't think we need a LEEWAY or a timeout on a nonce if per connection 17:20:58 if nonces can be reused per TX, that works.
I'm talking about having it be per-connection 17:21:08 @hinto: If the nonce has a max validity window of 120s, I don't see how a block hash is better at preventing building up PoW unless I'm missing something 17:21:10 Correction Pubic RPC 17:21:19 Public 17:21:36 but you can also submit txs via P2P, right? 17:22:22 @boog900: you can't build it up if you need to connect for your challenge first, unless I am missing something? 17:22:25 @jberman: It must be a recent block hash, whereas the nonce is just an integer that could be pre-computed, although this might be fine too since I think it will be infeasible to do so 17:22:32 @boog900: if you allow lazy proofs on relay after making the initial connection, it seems to make sense 17:23:36 @hinto: gotcha, might as well go for 128 bits of randomness imo 17:24:11 One issue I see with a fixed nonce per daemon per time-frame: it is just as easy to open 1000 connections as it is 1. As soon as the PoW for that nonce for that timeframe is used, it can be reused for the other 999 connections. 17:24:12 @boog900: The leeway is for the brief interim when the daemon refreshes its nonce, similar to how OTP systems allow the previous code 17:24:32 there is no need to refresh a nonce if it is per connection tho 17:24:50 all connections have different nonces 17:26:19 hmm okay... that does make the RPC/ZMQ more complicated to implement; the current proposal is stateless on the daemon side 17:26:43 Also: timing it around a moving window is clunky. The daemon should generate a symmetric secret on startup. Then have an endpoint where it issues a new nonce every single request, returning a MAC-ed message with a challenge, timestamp, deadline, and difficulty target. If you can submit PoW for that specific challenge by the dead [...
too long, see https://mrelay.p2pool.observer/e/-PHu8rwKeGNNcHZ1 ] 17:26:47 we can always have different behavior for P2P/RPC PoWER 17:27:50 Hello, sorry for being late, I received a time-sensitive email right as the meeting started and was replying to it. I have nothing I've worked on to publicly state yet other than more traditional development, and the hope that we'll have some paperwork re: our Generalized Bulletproofs further touched up soon. 17:28:34 but we are always going to need some sort of state to verify each connection has independent PoW 17:29:06 you can make the randomness be tied to a local counter + incoming address details 17:29:22 Yes, to avoid say 2000 connections using the same POW 17:29:27 ip/port/other side id + local incrementing counter 17:29:35 BS company 17:29:56 Blockchain surveillance 17:30:10 then you take a hash of this, that's the nonce, so you don't keep generating duplicates 17:30:47 but you then need a counter which is state, right? 17:30:49 or purely randomly generating it would work as well, using both sides' chosen entropy to agree on a nonce 17:31:41 boog900: as a global counter to ensure it's not repeated, not per connection 17:31:45 @jeffro256: would exposing that endpoint be an easy DoS vector? 17:32:55 Shouldn't be. AES is pretty fast 17:32:56 I'm not sure how much impl complexity is okay for the wallet/vendor side; it was kept simple 17:32:58 anything other than deterministically calculating (hash this entropy) the nonce via connection parameters (and global randomness) would require state to be kept around, indeed 17:34:04 DataHoarder: how would you keep track of what number you gave out? 17:34:43 like if I keep responding to the PoW with the same number for the counter, how would you tell it's been used before?
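The scheme DataHoarder sketches above — derive the nonce from the connection's address details plus global randomness and a TOTP-style time counter, so no per-connection state is needed — could look roughly like this. Function names and SHA-256 are my placeholders, not the proposal's; `POWER_NONCE_WINDOW`/`POWER_NONCE_LEEWAY` are the constants quoted in the discussion.

```python
import hashlib
import time

POWER_NONCE_WINDOW = 60   # seconds per counter step (value from the discussion)
POWER_NONCE_LEEWAY = 1    # also accept the previous step, like TOTP

def connection_nonce(local_addr, remote_addr, global_secret, counter):
    """Derive a per-connection nonce: H(local || remote || counter || secret).
    Same TCP 4-tuple and same time step => same nonce; any change => new nonce,
    so one PoW solution cannot be reused across distinct connections."""
    material = f"{local_addr}|{remote_addr}|{counter}".encode()
    return hashlib.sha256(material + global_secret).digest()[:16]

def current_counter(now=None):
    """Time counter that increments every POWER_NONCE_WINDOW seconds."""
    now = time.time() if now is None else now
    return int(now // POWER_NONCE_WINDOW)

def valid_nonces(local_addr, remote_addr, global_secret, now=None):
    """Nonces the daemon would accept right now: current step plus leeway."""
    c = current_counter(now)
    return [connection_nonce(local_addr, remote_addr, global_secret, c - i)
            for i in range(POWER_NONCE_LEEWAY + 1)]
```

The daemon only stores one global secret; everything else is recomputed on demand, which is the statelessness being debated above.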
17:35:42 you know the incoming connection addr/port (and target port), you know your global randomness (which may be changed depending on time, the counter), and getting the current nonce is H(connection || randomness || counter) 17:36:14 you'd need to connect on the same address/port with the same origin IP and port, in which case the connection is not allowed 17:36:33 (if peer id is kept, that is additional state) 17:36:57 no, what if I send a bad tx, then I can reconnect and send another tx with the same PoW 17:37:17 aha. but that would ban the peer, right? 17:37:20 bad tx 17:37:39 a new nonce per unique connection does prevent PoW reuse; what would qualify as a "unique" connection? 17:38:12 I have thought about this too, although couldn't an attacker send multiple transactions under a single IP? 17:38:35 TCP connections are identified by the local ip/port + remote ip/port 17:38:43 DataHoarder: true, I don't know how RPC would handle that but yeah 17:38:47 the key part is involving the ip/port 17:39:01 not just ip, as part of the nonce given out 17:39:14 @hinto: No disconnect 17:40:05 @hinto: for P2P, txs are verified one at a time 17:40:07 usually this source port is chosen randomly, so each connection on the same ip would get different PoW nonces. In the case they reuse the same ip/port, this connection is not a valid TCP connection so it'd be refused 17:40:14 this is assuming TCP is used. 17:40:36 in the case of Tor nodes, the ip/port combo is the local tor ip + random source port 17:42:36 so to clarify, say you are running monero P2P or RPC on TCP 1.2.3.4:100, and you get an incoming connection from 7.8.9.1:4567 .
H(1.2.3.4:100 || 7.8.9.1:4567 || global random || global time counter) gives you the current connection nonce, like TOTP 17:42:58 no other TCP connection can be concurrently active on the 1.2.3.4:100, 7.8.9.1:4567 pair 17:43:44 if your node exposes multiple IP addresses or ports they could connect to the second ip/port, but that would also change the nonce 17:44:16 the global time counter is increased regularly, like in TOTP > POWER_NONCE_WINDOW=60, POWER_NONCE_LEEWAY=1 17:44:30 TOTP is 30, +-1 17:44:48 Is it possible here to get around a temporary ban after the first bad TX? I am not sure. 17:44:51 so it's quite similar with those parameters, allowing previous+current 17:45:08 They could connect from a different ip, but that'd end up having to solve more PoW 17:45:13 I'm not sure if this solves the complexity for hinto 17:45:23 for p2p I think this is more complex 17:45:29 but for RPC 🤷 17:45:59 If the scope keeps increasing, I think it would be a much simpler impl if it was PoW per TX 17:46:06 For RPC, if the connection is closed, it'd pick a different source port on the next connection. if the connection is kept open (keep-alive, or http/2) that'd allow multiple requests in 17:46:10 rather than just generating a 16 byte value on connection and attaching it with the other connection details and using that for PoW 17:47:04 PoWER for bad node behavior in general can actually be very useful 17:47:17 No, just a bad TX 17:47:23 👀 17:47:30 challenge/response dependent on network protocol I think is overly complex 17:48:34 a single endpoint, plus a little state per connection, I think is an acceptable level of complexity 17:48:46 they are currently all TCP :) 17:49:19 I say trigger PoWER after: 17:49:19 1) A significant time out 17:49:19 2) Specific bad node behavior 17:50:02 it could still be generated like this, or be state; I think how to generate this randomness can be left for out-of-meeting or later discussion, no?
as long as we can guarantee for further topics that it is "unique" within relevant time periods 17:50:05 Or above 17:50:34 For p2p we can pass in the handshake different values for difficulty depending on what the connecting node wants to do 17:51:06 i.e. tx_relay_difficulty, minimum_difficulty etc 17:51:21 but I would leave this for a later discussion 17:51:27 Yes 17:51:46 okay, I'll update the proposal further, I think we can move on 17:51:59 ty hinto 17:52:24 5. Transaction volume scaling parameters after FCMP hard fork (https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-07.pdf). Revisit FCMP++ transaction weight function (https://github.com/seraphis-migration/monero/issues/44). 17:53:58 @rucknium:monero.social: what was that huge block size on stressnet again? 17:55:04 Almost 10MB 17:55:52 http://stressgguj7ugyxtqe7czeoelobeb3cnyhltooueuae2t3avd5ynepid.onion/block/2850011 17:56:04 ^ Tor browser required 17:56:13 10MB blocks in <5 days of heavy stressing 17:56:30 Mostly 1in/16out because of the spam pattern at the time. 17:56:56 Penalty was almost completely used since I set those txs to highest fee. 17:56:59 My one comment here is that the penalty is quadratic with transaction weight. This leads to quadratic transaction fees with weight. 17:56:59 This being said, the penalty does not have to be quadratic with number of inputs, size, verification time etc 17:58:03 What happened with the 10 MB blocks? 17:58:47 On stressnet 18:00:37 Just saying on my end, a 10MB block seems kind of high to me in less than a week 18:02:15 I still haven't had the chance to go through the latest proposal. But I'll be curious to see what max growth would look like under the latest 18:02:43 @jberman: One actually needs a ~20x factor to accommodate holiday shopping.
This is the experience with VISA 18:03:30 At least before the advent of widespread delivery with a ~ week time factor 18:03:56 Delivery apps are like in-store purchases here 18:04:32 You can grow the block limit rapidly by paying high fees, which has happened on stressnet. When fees are set to their minimum, blocks grow very slowly. 18:04:37 @jberman: Tbf, last time I checked, the sum total of XMR in mempool fees was ~5.9 XMR, ~9.8x higher than block subsidy 18:05:08 Normal fees are ~3% of block subsidy 18:05:25 So what is stopping stressnet is a lack of "spam" 18:06:05 To pay the fees 18:06:44 @rucknium: That is by design 18:09:12 It also comes into play when one reaches the long term median cap 18:09:32 On the short term median 18:12:04 AFAIK, on stressnet there have been three fee strategies. By default, my large-volume spam pays the min fee (tier 1). That level of fees can keep the txpool full and still fill blocks to their penalty-free limit so that the progress toward larger block sizes is not lost. @ofrnxmr:monero.social 's spam pays tier 3 fees I think. [... too long, see https://mrelay.p2pool.observer/e/0_CU9LwKZHhhcmJI ] 18:13:17 AFAIK, you would not get to 10MB blocks without paying very high fees. The 10MB block had a fee total of 10.8 XMR because most or all txs paid the tier 4 fee. 18:13:24 Are you limited by the amount of stressnet XMR? 18:14:01 @articmine:monero.social: It's manageable. Next stressnet, we should have a plan to systematically distribute XMR for spamming purposes. 18:14:05 I can mine larger blocks at a loss of profit by burning up base reward :) 18:14:35 I have a lot of stressnet XMR because I had a lot of main-testnet XMR. And of course it transferred to the hard-forked stressnet.
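ArticMine's point above — the penalty is quadratic with transaction weight, so growing blocks beyond the median costs fees quadratically — can be illustrated with a simplified sketch of the Monero-style penalty. The formula shape (base reward times the squared excess over the median, blocks capped at 2x the median) is my recollection of the mainnet rule, stated here as an assumption rather than the exact consensus code.

```python
def block_penalty(base_reward, block_weight, median_weight):
    """Simplified Monero-style penalty: quadratic in the excess over the median.
    At 2x the median (the cap) the entire base reward is forfeited."""
    if block_weight <= median_weight:
        return 0.0
    excess = (block_weight - median_weight) / median_weight
    if excess > 1.0:
        raise ValueError("blocks larger than 2x the median are invalid")
    return base_reward * excess ** 2

def fees_needed_to_expand(base_reward, block_weight, median_weight):
    """A rational miner only mines the oversized block if fees cover the penalty."""
    return block_penalty(base_reward, block_weight, median_weight)
```

This is why minimum-fee spam grows blocks very slowly while high-fee spam (as on stressnet) can push the limit up fast: the marginal fee needed rises quadratically with the excess weight.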
18:14:39 @rucknium: Lower the difficulty on stressnet 18:15:01 kico made a faucet, too 18:15:10 This may be the issue here 18:15:17 We are still in the scaling parameters agenda item 18:15:25 surely you only need to pay as much XMR as the block reward spread across all txs to cause the block to increase? 18:15:55 *cause the block to be the max size 18:15:57 Actually to max it out one needs 4x 18:16:10 The block reward 18:16:12 https://faucet.xmr.pt/ 18:16:30 next time just 1000x reward :') 18:16:32 @articmine: ah ok 18:18:00 Should we move on to the stressnet agenda item? 18:18:42 Realistically we should be looking at BSV max transaction rates to test this out 18:19:00 Sure 18:19:10 6. FCMP alpha stressnet (https://monero.town/post/6763165). 18:19:32 Several bugs have been identified already. Some fixed. 18:20:47 Will be pushing a fix today for the edge case you identified of a 2-block reorg immediately after wallet creation, continuing investigation into connection issues that mostly seem to revolve around brushing up against byte size limits 18:21:19 I'm currently looking into the higher-than-expected memory usage 18:22:12 https://coingeek.com/152-million-transactions-in-a-day-another-new-record-for-bsv/ 18:22:25 The FCMP transaction CPU verification efficiency has exceeded my expectations :) 18:23:22 @rucknium: Still on a single thread? 18:23:31 @articmine:monero.social: Higher tx volume than now is possible, but nodes are already having connectivity issues, so it may not be useful to push it beyond where it is now. 18:24:37 What kind of bandwidth per node? 18:24:48 Could be wrong here, but it seems like blocks the past day or so have been smaller than before. Some problems w/byte size limits trigger around 4mb, but it seems like people are still having connectivity issues the past day even though blocks haven't been that big? 18:24:49 Yes, still single-threaded AFAIK. But not too much worse than last year's RingCT stressnet, byte-for-byte.
Of course, FCMP txs are larger than RingCT, so tx-for-tx it is much worse. 18:25:10 @jberman:monero.social: txpool is larger. I think that's the problem. 18:25:44 How is the TX pool stored? 18:25:49 Yesterday, block size ran up and then the txpool was exhausted. That sends the block limit lower once the 100-block trailing median falls. 18:26:50 @articmine: We still have the mainnet tx relay that sends the whole tx to every peer. Bandwidth is a lot. 18:27:43 So then what are the limitations? 18:28:13 bugs 18:29:08 I'd like to briefly highlight the txpool self-limiting. It is visible on Rucknium's monitor node. 18:29:35 And preventing different threads from blocking each other. I want to get cuprate on the next stressnet for a RingCT stress to compare. 18:29:47 @spackle: that one has been on my list to tackle for a bit 18:31:20 I think for the next FCMP stressnet, the new tx weight and scaling parameters need to be implemented, and PoWER. 18:32:10 @0xfffc:monero.social and @boog900:monero.social 's tx relay efficiency implementation could be good to test in the stressnet, too. 18:32:42 Worth mentioning, the issues we're seeing now for the most part don't seem to stem from FCMP++ (except that obvious reorg bug) 18:32:49 @rucknium: I expect this to have a significant impact 18:33:13 There was no complete network chaos and netsplits like at the beginning of last year's stressnet. The main problems that caused that have been fixed AFAIK. 18:33:42 @jberman:monero.social: Good point to mention :) 18:33:46 tx weight and scaling 100%, but I don't think PoWER is a requirement. PoWER is mostly a DDoS repellent, not expected to have an impact on node perf > <@rucknium> I think for the next FCMP stressnet, the new tx weight and scaling parameters need to be implemented, and PoWER. 18:35:46 More about stressnet? Stressnet discussion happens in #monero-stressnet:monero.social on Matrix and ##monero-stressnet on Libera IRC.
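The "block limit falls once the 100-block trailing median falls" behavior mentioned above can be sketched as follows. The 300 kB floor and the single-median rule are illustrative assumptions, heavily simplified from the actual short/long-term median rules on mainnet.

```python
from statistics import median

def penalty_free_limit(recent_weights, floor=300_000, window=100):
    """Simplified: the penalty-free block size tracks the median of the last
    `window` block weights, with a fixed minimum floor (300 kB assumed here).
    When spam stops and the txpool empties, blocks shrink, the trailing median
    falls, and the limit ratchets back down."""
    trailing = list(recent_weights)[-window:]
    return max(floor, int(median(trailing)))
```

This is why stressnet participants care about keeping blocks filled to the penalty-free limit even with min-fee spam: progress toward larger blocks is lost as soon as the median window fills with small blocks.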
18:36:05 I would not use PoWER on stressnet, at least for now 18:37:09 7. Mining pool centralization: Temporary rolling DNS checkpoints (https://github.com/monero-project/monero/issues/10064), Share or Perish (https://github.com/monero-project/research-lab/issues/146), and Lucky transactions (https://github.com/monero-project/research-lab/issues/145). 18:38:46 Qubic did some selfish mining recently. Any more comments, DataHoarder? 18:39:15 No more comments, other than they seem to have gotten more hashrate recently (1.8 GH/s), but then it also went down 18:39:20 they stopped selfish mining now. 18:40:28 The DNS checkpoints implementation is stalled because of an edge-case bug that is being tracked down. AFAIK, @0xfffc:monero.social was working on it. 18:40:29 I can't debug C++, so I cannot help there. 18:41:46 I don't know how to push it toward completion. I think other devs either don't want to get distracted from their current work to try to finish implementing rolling DNS checkpoints, or don't fully agree with the approach. Or they agree but don't want to get blamed if it's deployed and something goes wrong 😉 18:43:08 Or are maybe like tevador and agree with the approach, but only want to code in clean pure C instead of getting into dirty C++ 😛 18:45:32 Once stressnet can "coast", I hope to look more closely at Share or Perish, especially trying to implement a Markov Decision Process to analyze it. 18:46:42 Any more discussion on this agenda item? 18:48:04 I have preliminary results for a selfish mining simulation under realistic network modeling and difficulty adjustment, sweeping through a range of permutations (strategies) they could use. 18:48:16 20:40:29 I can't debug C++, so I cannot help there. 18:48:16 I tried to reproduce, but I don't have a proper repro case 18:50:11 Ready to publish, but I'm not sure if it's the best idea to give Qubic free analysis on which strategies are best for network disruption.
It might not be a real problem, but I thought I'd at least ask the question before publishing 18:51:15 @bawdyanarchist:matrix.org: How does it compare to results already published in the literature? Is the difference that you consider difficulty adjustment, but MDP papers do not? 18:53:06 In terms of profitability, the simulation corroborates the Eyal-Sirer (classic) selfish mining MDP, but diverged from the stubborn mining publication. The simulation models realistic network delays and difficulty adjustment. Gamma, for example, is an output of the sim, not an input. Honest forks are created by modeled latency, as opposed to heuristics. 18:54:10 Is this the first research where gamma is endogenous? 18:54:10 As far as I'm aware, yes. 18:54:37 (Endogenous means that the variable is determined inside the system, not a variable of the system that is assumed to have a particular value.) 18:56:13 what is this held back by now? https://github.com/monero-project/monero/pull/10075#discussion_r2407941460 saw this comment regarding monotonic time vs wall clock > <@rucknium> The DNS checkpoints implementation is stalled because of an edge-case bug that is being tracked down. AFAIK, @0xfffc:monero.social was working on it. 18:56:20 When running with purely honest hashpower, and ping set at 70ms, I'm replicating Monero's natural fork rate, at about 0.24%. This is running with 9 total pools, mirroring the hashrate distribution we normally see. 18:56:20 And yes, gamma is a derivative of the simulation, a byproduct of stochastic latency 18:56:52 Maybe share it with me and DataHoarder. We (especially DataHoarder) can evaluate the risk of Qubic getting useful info from the research. Others could help, too. 18:57:57 @spirobel:kernal.eu: A specific sequence causes nodes that enforce checkpoints to fail to sync new blocks. 18:58:03 Yes, I think it would be good for at least one other person to take a look privately first. The report is ready to go, and not terribly long. 18:58:29 I can take a look 👍.
Qubic already implemented other countermeasures from previous public discussions, so they should be delayed (a reasonable time, not forever) until the bandaid is there 18:59:55 Sounds good to me. Thank you, @bawdyanarchist:matrix.org , for working on this! It should be interesting. 19:02:06 Any more discussion? 19:03:31 @rucknium: so a second PR is necessary to address this part? quoting ofrnxmr from the PR description here: Note: there is an issue unrelated to the PR that can leave nodes in a bad state 19:03:31 Node is reorged 19:03:31 Node receives a checkpoint that references the now-orphaned chain [... more lines follow, see https://mrelay.p2pool.observer/e/kpfP9bwKa185UE53 ] 19:05:36 @spirobel:kernal.eu: Yes, I think that is it. It would be great if you wanted to help :D 19:05:36 @ofrnxmr:monero.social , DataHoarder, and I could get the sequence running again on testnet 19:05:57 in a bit. I tried attacking myself as well 19:07:00 @rucknium:monero.social: i am busy with wallet code. Not going to promise anything :D 19:07:37 We can end the meeting here. Thanks everyone. 19:13:11 the logic of handling checkpoints should be separated from handling reorgs because of the longest chain rule. It seems like part of the struggle is that those two consensus rules are intermingled. 19:14:39 This should also be an interesting topic for kaya, as the same situation would apply to the finality layer 19:15:47 Thanks 19:17:43 my understanding is (correct me if I am wrong) that a finality layer would be a more decentralized version of checkpointing. So practically it's important to figure out the logic of this codepath and how to untangle it. 19:20:53 <0xfffc> https://github.com/monero-project/monero/pull/9933#issuecomment-3382892067 19:42:15 @spirobel:kernal.eu: Yeah. There are 2 different codepaths that handle the reorg. The first goes through the conditions on L2090 of blockchain.cpp, and I believe the first is_a_checkpoint should be triggering here.
But it always evaluates as false 19:44:17 The latter (I don't remember where it is) causes the chain to roll back to a block before the checkpoint (instead of switching to the alt chain that has the checkpoint). After which the node blocks a) incoming blocks that don't match the checkpoint, b) incoming blocks that do match the checkpoint but are orphaned, and c) newly mined blocks, because they don't match the checkpoint 19:44:57 Forcing is_a_checkpoint = true seems to solve the issue. So that's as far as I understand what's going on 20:09:39 @ofrnxmr: the logic really shouldn't be in this function, as these two things are not related. The longest chain rule is normal operation, while checkpointing is above that. It is a separate rule, so why should it be handled in the same function? 20:13:08 also we are in big trouble btw if the 6 dns domains give conflicting checkpoints. Are there measures in place to prevent this and assure there is consensus among the checkpointing nodes? 20:18:02 The checkpointing script serves the same checkpoints to all domains. The only problem can happen with DNS record latency. If a supermajority of records received by a node that enforces the checkpoints don't match, then the node just doesn't bind the checkpoint. 20:25:53 We also have to separate here between nodes that miners operate and nodes that users rely on. Checkpointing is a tool for honest hashrate to coordinate and form the longer chain, even if a malicious party with a large amount of hashrate tries to disrupt the network. 20:30:54 Another aspect is that currently the blocking behavior is intermingled with this as well. We have 3 topics that need to get untangled: the two consensus rules and the blocking behavior. 20:55:39 @spirobel:kernal.eu: They have to be 2/3+1 agreeing before the node accepts them 20:55:46 So, in practice, 5/7 have to match 20:55:48 22:13:08 also we are in big trouble btw if the 6 dns domains give conflicting checkpoints.
Are there measures in place to prevent this and assure there is consensus among the checkpointing nodes? 20:56:12 they will never set checkpoints that do not have the previous checkpoint as part of their chain 22:43:47 @rucknium:monero.social: DataHoarder I DM'd you with the link to a private github repo with the report. 23:28:36 You probably need to DM me @bawdyanarchist:matrix.org 23:28:42 This user is on IRC 23:29:09 So send to the Monero.social one (funny how the bridge makes that work so well)
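The 2/3+1 supermajority rule for DNS checkpoint records described above (in practice, 5 of 7 fetched records must match, or the node simply doesn't bind the checkpoint) could be sketched like this. This is an illustration of the stated rule only; the real logic lives in the daemon's DNS checkpoint code.

```python
from collections import Counter

def accept_checkpoint(records):
    """Accept a DNS checkpoint only if a 2/3+1 supermajority of the fetched
    records agree on the same value; otherwise return None (don't bind).
    `records` is a list of checkpoint strings, one per DNS domain queried."""
    if not records:
        return None
    value, count = Counter(records).most_common(1)[0]
    needed = (2 * len(records)) // 3 + 1   # e.g. 5 when 7 records are fetched
    return value if count >= needed else None
```

With 7 records, a conflicting minority of up to 2 (from DNS propagation latency, say) is tolerated; anything less than 5 matching records means no checkpoint is enforced, which fails safe.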