15:02:02 MRL meeting in this room in two hours.
15:33:53 matrix.org downtime seems to still be ongoing and they are taking the slow path
15:34:29 https://bsky.app/profile/matrix.org/post/3lxuslbzjuc2t anyone on matrix.org having issues can temporarily join IRC :)
17:00:33 Meeting time! https://github.com/monero-project/meta/issues/1263
17:00:37 1. Greetings
17:00:46 Hi
17:01:05 Hello
17:01:15 hi
17:01:30 waves
17:02:39 hi
17:02:56 Hello
17:02:59 <0xfffc> Hi everyone
17:04:24 hi connecting from IRC
17:04:30 We are probably missing people from the matrix.org Matrix server. Logging into the Libera IRC network still works.
17:04:38 2. Updates. What is everyone working on?
17:05:30 me: primarily bugs in lws and lwsf. sadly several were reported at nearly the same time, and I’ve been going through them
17:06:18 me: Testing rolling DNS checkpoints and created this issue about it: https://github.com/monero-project/monero/issues/10064 . Reading papers about selfish mining. Productionizing transaction spamming code for FCMP alpha stressnet.
17:06:46 me: For fun implemented a DNS + DNSSEC server that can serve a single DNS checkpointing domain directly, via nameserver delegation https://git.gammaspectra.live/P2Pool/monero-highway while holding its own keys. Supports Ed25519/ECDSA and other keys
17:07:09 @jeffro256:monero.social: Ping
17:07:27 me: sped up popping blocks/reorg handling in the wallet to handle trimming the fcmp++ tree quickly, fixed a bug in the wallet's tree builder path member reference counter, benchmarked tx and membership proof verification using kayaba's latest
17:07:31 Howdy
17:07:57 I have been working on a Monte Carlo simulation of Publish-Or-Perish
17:08:52 3. https://gist.github.com/jeffro256/12f4fcc001058dd1f3fd7e81d6476deb.
17:09:23 me: lots of review, documenting and refining HW wallet support for FCMP++, and refactoring in preparation for a key image generator hash function change proposed by @kayabanerve:monero.social
17:10:29 I have updated the fee calculations in https://github.com/seraphis-migration/monero/issues/44 with an increase in the penalty free zone, ZM, from 1000000 bytes to 2000000 bytes, an increase in the reference transaction size from 10000 bytes to 20000 bytes, and an increase in fees by 4x by eliminating the low fee in the implementation. Then we can have a flat fee structure with fee proportional to weight up to 128 inputs.
17:11:00 @diego:cypherstack.com: reached out to me with a first draft of the follow-up audit, we're discussing it now. Thanks Cypherstack! Not too much to report until after that's done tho
17:12:01 Haven't heard back in a couple days, I wonder if it might have to do with the Matrix federation security issues
17:15:03 4. https://github.com/monero-project/research-lab/issues/142.
17:15:22 4. https://github.com/monero-project/research-lab/issues/142.
17:17:08 Do markdown links not appear properly on the IRC side, or is it just the monerologs.net parsing that erases the text?
17:17:40 So far no problems for me with today's links
17:18:42 Anything to say about hash-to-point now?
17:19:01 In some cases it is lag. I am actually on IRC at the same time
17:19:54 See my comment above
17:20:18 I am feeling some lag on the Matrix side. Maybe because matrix.org is coming back online?
17:21:11 5. https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-07.pdf.
https://github.com/seraphis-migration/monero/issues/44
17:21:59 19:17:08 Do markdown links not appear properly on the IRC side, or is it just the monerologs.net parsing that erases the text?
17:21:59 there is a markdown parser, but it removes the description as I could not find a proper format for it. Let's discuss alternate formats for that after the meeting
17:22:43 Ok I will post them as regular, separate links for now.
17:22:56 The current agenda item is "Transaction volume scaling parameters after FCMP hard fork."
17:23:13 As I mentioned before I am recommending both an increase in the minimum penalty free zone and an increase in fees to address these issues.
17:24:02 this is also after reviewing the comments in the last MRL meeting regarding the use of 4 in transactions
17:25:14 Personally, I think that the penalty free zone shouldn't be increased too much over 2x the max tx size + a coinbase tx size. A 6x increase is pretty aggressive when the average FCMP++ transaction isn't going to be 6x the size of a 16-member CLSAG tx. What's the primary thrust of why it would be increased so much?
17:25:16 My initial read of this latest proposal is that fees are determined entirely by overall tx byte size, and unaffected by tx verif time or membership proof size/verification (aside from the effect of the memb proof on the overall tx size)
17:25:54 Yes this is correct
17:26:09 A larger penalty free zone means a malicious miner can spam the blockchain more.
17:26:30 ^^^
17:26:32 Yes but there is also an increase in fees
17:26:46 Fees don't affect the miner.
17:26:52 not if colluding with a miner
17:27:03 A penalty free zone 2x the max tx size was tried in 2017 and it did not work
17:27:14 The fees were just too high
17:27:49 Only for max size txs, though, right?
17:27:57 Assuming other traffic
17:28:03 in 2017 it was all txs
17:29:51 We had a 2in 2 out tx at 135000 bytes and a penalty free zone of 60000 bytes
17:30:06 2017 was before bulletproofs. I feel like the change to bulletproofs (a huge drop in transaction size) lowered the fees more than the increase in the minimum penalty-free zone
17:30:10 But I'm just spitballing
17:30:18 Even after the increase to 300000 bytes fees were still very high
17:31:02 It was only by not reducing the penalty free zone after bulletproofs that we got reasonable fees
17:31:27 I'm a bit biased because I'm also of the opinion that the fees should be higher than they currently are
17:31:40 If the tx volume is sufficient, the penalty free zone will adjust. A smaller minimum value is more spam resistant.
17:32:02 I am actually proposing both a fee increase and an increase in the penalty free zone
17:33:11 The current fee structure is very well received by the users. A massive increase in fees is something I cannot support
17:33:35 > Even after the increase to 300000 bytes fees were still very high
17:33:35 So you agree that increasing the penalty-free zone didn't do much to lower small-tx fees?
17:34:11 It did not go far enough in 2017
17:34:12 So if you also want to raise fees, then why are we raising the penalty free zone?
17:34:50 We needed a penalty free zone in the 1000000 to 2000000 bytes range back then
17:35:55 We need both
17:36:09 raise fees and the penalty free zone
17:36:14 why do we need it
17:36:22 What would be the verification time for a 2MB block?
17:37:52 @ofrnxmr:monero.social: Insight about that ^ with FCMP private testnet experiments?
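For context, a minimal sketch of the back-of-the-envelope arithmetic used to answer this just below, assuming a 1-in/2-out FCMP++ tx of roughly 6261 bytes that takes about 30 ms to verify on a single thread (the figures quoted in the next few messages); these are working assumptions from the discussion, not final benchmarks:

```python
# Rough sequential (non-batched, single-threaded) verification estimate for a block
# filled with identical small transactions. TX_SIZE_BYTES and TX_VERIFY_MS are
# assumptions taken from figures quoted in this discussion, not final benchmarks.

TX_SIZE_BYTES = 6261   # assumed size of a 1-in/2-out FCMP++ tx
TX_VERIFY_MS = 30      # assumed per-tx verification time, membership proof dominated

def block_verify_seconds(block_bytes: int) -> float:
    """Estimated CPU seconds to verify a block of the given size, one tx at a time."""
    txs_per_block = block_bytes / TX_SIZE_BYTES
    return txs_per_block * TX_VERIFY_MS / 1000.0

for penalty_free_zone in (300_000, 1_000_000, 2_000_000):
    print(f"{penalty_free_zone / 1e6:.1f} MB block: "
          f"~{block_verify_seconds(penalty_free_zone):.1f} s sequential")
# 2.0 MB works out to ~9.6 s, the figure given below; batching and multi-threading
# (also discussed below) would reduce the wall-clock time substantially.
```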
17:38:10 ~12.5x that of a 128 input tx
17:38:28 we have to be realistic here
17:39:14 If all 1-in,2-out, then 2MB*(1 tx / 6261 B)*(30 ms / tx) ~ 9600 ms of CPU time
17:39:32 For one, we can use parallel processing on the block, unlike single large txs
17:39:39 (in response to tevador's question)
17:40:00 IMO 1 MB blocks are already too big
17:40:29 which was the original proposal
17:40:40 I have said for a long time that we are going to need parallel processing to address verification time
17:42:06 you can parallel process at every level theoretically, even within verifying a single 128-in tx. but finding the sweet spot of where to apply the parallel verif to maximize CPU utilization is another q
17:42:35 @jeffro256: Also that 9.6s figure for 2MB of FCMP++ transactions didn't include BP+ range proof verification. It is a small, but certainly noticeable, part of the verification time
17:42:39 right now the implementation batch verifies all FCMP++ proofs synchronously
17:43:38 so verification time of a large block should be somewhat faster than the time to verify each proof in the block individually, ignoring parallelism at any level
17:44:33 are you excluding any parallelization implementations at the OS level?
17:44:37 This is true, I am not considering batching. That's if verifying independently
17:45:13 But we already do multi-threaded verification in monerod
17:46:18 fcmp++ verification is currently batched, and not multi-threaded
17:47:22 On a CPU with say 64 threads this can make a major difference
17:48:11 yes
17:49:11 I will limit this agenda item to 30 minutes of discussion. That means ending at 17:52
17:49:15 I think it could be easily updated to batch in groups of some max size, and do parallel batch verification within each block
17:49:28 Jeffro, that must be 9600ms if first time seeing the txs?
17:49:44 @ofrnxmr:monero.social: yes for full verification
17:50:21 Ok, because if I already have the txs, a 15mb block takes about 5 seconds
17:50:27 During block propagation, assuming the tx is already in the pool, the FCMP would have already been verified, so it won't take that long when verifying an incoming block
17:51:24 5 seconds from notified to "synced". So not all verification time, some of that is bandwidth
17:51:41 I don't think I would support going above 600 KB with the minimum penalty free zone. That's still ~80 tx per block. Min tx fee for an 8000 byte tx would be ~0.0001 XMR, about $0.03.
17:52:14 If my calculations are correct.
17:52:20 I cannot support anything below 1000000 bytes
17:54:01 I don't find a fee of $0.03 to be excessive. Might even be too low IMO.
17:54:01 Let's continue the agenda:
17:54:05 6. FCMP alpha stressnet planning https://github.com/seraphis-migration/monero/issues/53#issuecomment-3053493262
17:54:45 There has been movement in https://github.com/seraphis-migration/monero/pull/81 , the last thing to be merged before alpha stressnet
17:56:18 In PR 81, I modified some of wallet2's reorg handling logic to:
17:56:18 -- always request blocks starting from 1 higher than the current known tip
17:56:18 -- if reorg detected, then request 3 blocks back
17:56:18 -- if reorg still detected, then request 100 blocks back (100 is the default max reorg depth)
17:57:13 @jeffro256:monero.social highlighted how this incurs extra cost for daemons to handle reorgs (e.g.
in a 10 block reorg, wallets will end up requesting 100 blocks back from tip to handle the reorg)
17:58:05 I'm planning to revert to behavior that does not incur extra cost to handle such a reorg, hopefully will complete that today
17:59:32 I honestly don't think that 81 should be a blocker personally
18:00:11 ofrn reported various refresh issues similar to issue #45 that the PR solved for him
18:00:26 Okay fair
18:00:52 Which part of the PR actually fixed the reorg issue?
18:01:06 it was an issue with reorgs right?
18:03:07 yep, the reason I widened the PR's scope to remove init hash download was because that part touches on similar areas that would need changing. So I figured better to kill 2 birds with 1 stone and not need to make more changes separately
18:03:39 @jeffro256: There were numerous issues
18:04:01 One of them was that the wallet was broken if you had a non-0 restore height
18:04:27 I dm'd you another one a moment ago and will send another in a few mins
18:05:04 IMHO, with stressnet, keeping wallets working without manual fixes will be important.
18:05:10 Starting with my scripts that I used to spam last year's stressnet, I have written "easy-to-use" functions that can create an arbitrary number of wallets and monero-wallet-rpc instances. Wallets need to be generated programmatically because 1in/2out tx creation times seem to go from 0.1 seconds to 5 seconds with current FCMP [... too long, see https://mrelay.p2pool.observer/e/-auB0rEKcmJKZXYx ]
18:06:03 On last year's stressnet, I had 3-4 wallets spamming at a time IIRC. I could manually fix them when they encountered problems, but dozens of wallets would be harder.
18:06:11 Lots of wallets or slow blocktimes
18:06:32 I'm testing 15 subaccounts right now - I think subaccounts might be broken
18:06:56 @ofrnxmr: On FCMP?
18:07:04 Yeah
18:07:18 My spamming functions don't work without accounts.
18:07:36 I was promised accounts work 😢
18:08:43 Did you limit subaddress lookahead? I don't know if that could help anything.
18:09:16 They should work AFAIK and are planned to work but plz lmk if they don't
18:09:24 check dm
18:10:35 More discussion of stressnet planning can happen in #monero-stressnet:monero.social (##monero-stressnet on IRC I think).
18:11:16 I am wondering if the spamming code should be published or not. That discussion can happen at another time or asynchronously.
18:11:31 Any other major things about FCMP alpha stressnet to discuss?
18:12:23 nothing from me
18:12:40 7. Mining pool centralization: Temporary rolling DNS checkpoints https://github.com/monero-project/monero/issues/10064 and Publish or Perish https://github.com/monero-project/research-lab/issues/144
18:13:11 Do we want to talk about DNS checkpoints and PoP together, or separately, and in which order?
18:13:19 bug found: when checkpointed chain reverts a reorg, the shared-ring-db gets borked
18:14:23 Does that happen with every re-org, or just ones involving checkpointing?
18:14:32 just checkpointing
18:14:47 DNS checkpoints can work by themselves, so I think it can be discussed separately. They can prevent deep reorgs that are bad for UX. But they can't stop selfish mining and might even help the selfish miner in some cases.
18:15:40 Note that the network connectivity of the Qubic adversary is quite poor. They are losing almost every block propagation race. DataHoarder: is that correct?
18:16:01 Only if it's a race.
usually it's not a race and they are ahead
18:16:21 I think they are losing because they have to broadcast their txs with the blocks
18:16:23 They take an extra penalty for unknown txs having to get verified
18:16:23 But maybe they could improve their network connectivity in the future.
18:16:36 Plus sometimes their blocks or changes are delayed 8-20 seconds
18:16:37 If they were doing just empty blocks, it would be faster
18:16:41 What might help would be to connect the checkpointing node directly to a few large honest pool nodes.
18:17:03 DNS checkpoints help the selfish miner if they can push their blocks to honest miners faster/earlier.
18:17:06 AFAIK
18:17:32 p2pool is testing a few changes to broadcast non-p2pool blocks across its network
18:17:38 which submits to monerod directly
18:17:43 It's enough if they push their blocks faster to the checkpointing node.
18:17:57 so that makes all blocks spread faster, alt or not
18:17:59 tevador: Right. Maintaining connectivity and a common view of the network's blocks is important.
18:18:26 Regarding PoP, a basic Monte Carlo sim is set up (honest miner case, with fork-resolution-policy, the soft fork proposal, no vesting / reward-splitting). The selfish miner is implemented as well but lacks "proficiency" in my model. It does not perform well / is based on a simple heuristic currently.
18:19:19 I am testing a "connector" network where they share the view of multiple monerod nodes, local or remote, and share available information across (for checkpointing purposes). It needs more work to show as a proof of concept, but has more uses besides checkpointing (good for pools to broadcast blocks and other information to each other quickly)
18:19:37 i noticed (probably obvious) that the fork probability (even in the honest case only) depends on the window and the block frequency. so D=5s on 1 min blocks gives more forks than on 2 min blocks. up to 3.5%
18:19:47 couldn't qubic just join p2pool?
18:20:11 I wrote some thoughts about the heart of selfish mining last meeting, but didn't post because they were related to changing the hashpower sampling rate. tevador has some good points against increasing the sampling rate. But I think it's useful to think about common network view:
18:20:30 Speaking of Monte Carlo, I released my first rev of a Monero sim. Still a WIP, but normal honest miner behavior appears to be working, the structure is solid, and it's at a point that adding new strategies is fully pluggable.
18:20:30 https://github.com/BawdyAnarchist/Monero-Simulator
18:20:42 Guest28: in regards to what? No need to even join p2pool, it's just to speed up block transmission across the network. any monerod from a user using p2pool will have its blocks also broadcasted
18:20:56 In "Lay Down the Common Metrics: Evaluating Proof-of-Work Consensus Protocols' Security", the authors have a section "What Goes Wrong: Information Asymmetry" that gets to the heart of the matter. I think it explains why an attacker with minority hashpower can gain an advantage over honest miners (and honest merchants). A minor [... too long, see https://mrelay.p2pool.observer/e/s5C70rEKOXlsQzRM ]
18:21:11 I have developed my own analogy.
18:21:11 It's a gambling analogy, of course.
18:21:24 In blackjack, "the house always wins" because the ruleset creates biased odds...or does it? https://en.wikipedia.org/wiki/Card_counting can help a blackjack player get an advantage over the casino.
If, by chance, the cards remaining in the dealing deck are high-numbered cards, then the player is temporarily at an advantage. In [... too long, see https://mrelay.p2pool.observer/e/wOa80rEKM0FsNEVB ]
18:21:37 A selfish miner acts in the same way. It knows the blocks produced by itself and the honest miners. When it is ahead of the honest miners because of randomly being luckier than its hashrate would normally allow, it "bets big" by withholding blocks. I think this analogy can help us understand why minority-hashpower selfish miners can get an advantage and maybe how they could be defeated.
18:22:36 Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting" https://arxiv.org/abs/2503.10185 is like re-shuffling the deck often, which decreases the card-counter's edge. But that costs RandomX hash verification time. Too much, probably. And it would require a hard fork.
18:22:38 @rucknium:monero.social: The "Optimal Reward Allocation" team said that they closely followed the MDP implementation from "Laying Down the Common Metrics" (cited the same MATLAB source you found).
18:22:38 > In our case, we only adapted the "reward splitting" MDP to follow our reward sharing variant ("proportional reward splitting" - PRS), so it is mostly the same as this repo's implementation.
18:22:38 https://github.com/commonprefix/proportional-reward-splitting-MDP
18:23:08 in that same analogy, a checkpoint would be a "deck change" @rucknium where any new information from the old deck is useless unless it was already abused?
18:23:16 @bawdyanarchist:matrix.org: Awesome. Thank you!
18:23:56 The DNS checkpointing issue didn't receive many comments. So I guess we can go ahead with it? Are any changes in monerod needed? It can stay opt-in since it's a soft fork.
18:24:21 DataHoarder: I think it would be similar. Here are other card counting countermeasures: https://en.wikipedia.org/wiki/Card_counting#Countermeasures
18:24:24 "20:13:19 bug found: when checkpointed chain reverts a reorg, the shared-ring-db gets borked"
18:24:24 ^ that'd need addressing
18:24:42 the thing about opt-in is the solo miners joining the selfish-mining, etc
18:24:45 also, alt blocks even close to a tip do not get shared across peers unless they are longer, or force-flushed
18:25:04 @vtnerd:monero.social has done some "procedure smoothing" coding on monerod.
18:25:22 if we want the network to switch, at least some of these should get broadcast across the network and not just kept locally
18:25:23 If the majority of the network hashrate opts in, everyone will eventually end up on the checkpointed chain.
18:25:30 DataHoarder has been thinking of ways to get alt blocks propagated more reliably.
18:25:40 there’s also whether we have one source publishing checkpoints or multiple (as they could disagree and cause some headaches)
18:25:44 the problem is for the majority of the network to even get those blocks in the first place, if it's a race
18:26:00 Tevador - need changes to frequency of checking and reduction of bantime. Patches for that are ready
18:26:06 we can "manage" for public nodes. additionally RPC has some issues with old blocks (which Qubic encountered and blamed pools instead!)
18:26:32 tevador you're correct, it's just that having people on the "wrong chain" temporarily hurts my head a bit
18:26:39 I guess it's no different than a reorg though
18:26:43 I have some of my own local WIP patches to address this, but a proper solution might be nice.
Can talk a bit after discussion
18:26:43 I think @kayabanerve:matrix.org's idea for an RPC method to checkpoint blocks is a good idea for the checkpointing nodes. You want them to stand on a block without the round-trip latency of querying DNS.
18:27:11 ^ local temporary checkpoint that doesn't survive restarts?
18:27:14 https://mrelay.p2pool.observer/m/monero.social/FvzPJgCrjLhCvYkHEvibiqMv.svg (test_run2.svg)
18:27:16 call it, block pinning?
18:27:22 is this received on the IRC side?
18:27:30 yes, nice svg
18:28:15 my other thought with banning is whether we create a semi-permanent netsplit, the banning side would need to keep retrying those nodes for a bit
18:28:18 Some group of nodes tell a checkpointing script that they all (or a {super} majority) have some block and wish to "finalize" it. Then the checkpointing script sends a finalize command to all those nodes and updates the TXT DNS record.
18:28:21 it's the simulation, with a view for each miner (lane), including their branches (which they would have to maintain up to k, with the soft fork proposal)
18:28:39 I'm not sure if the current retry algorithm is sufficient for that potential netsplit case
18:29:04 I guess the fact that the checkpointed chain won't be relayed if qubic's chain is longer might be an issue.
18:29:08 the last lane is the selfish one
18:29:16 @rucknium I had that "finalize" step on majority in my test tool but it didn't involve getting these back to monerod somehow, I should incorporate that in my simulations
18:29:42 indeed tevador. There are workarounds *now* but a .patch release that allows broadcasts would help a lot
18:30:01 even just a couple of nodes doing so would unclog most of the network if we want these altchains
18:30:12 @venture:monero.social: Did you publish your code yet for it? Can you share the link?
18:30:17 we can reach public nodes directly, but not hidden or outbound-only nodes
18:30:32 My basic checkpointing script on testnet just assumes that the single checkpointing node will get the DNS checkpoint instruction soon after it's posted, but there are gaps there with DNS caching and having 4 domains.
18:30:33 I think it would be quite safe to relay alt blocks up to a certain depth even if the chain is shorter.
18:30:41 @bawdyanarchist:matrix.org: i will. but no, it's not yet on github
18:31:09 an alternative would be querying a local DNS server that the checkpointing script runs, @rucknium, so it has the most recently fetched value faster
18:31:26 I wonder what the reason for not relaying alt blocks that clear the difficulty threshold is? If they clear difficulty, DDoS risk is low.
18:31:49 tevador: something like max desired reorg depth, or up to the last randomx epoch; those were my two "sane" min/max bounds for that distance
18:32:29 Exactly, the DoS risk is nonexistent if the PoW is checked first.
18:33:01 By the way, on testnet we were able to produce empirical evidence that a 10-block re-org can invalidate txs: https://libera.monerologs.net/monero-research-lounge/20250903#c579433-c579443
18:33:14 I'm looking at nodes out there with open rpc and over time checking their /get_alt_blocks_hashes output too, fetching the full blocks, then relaying these blocks around
18:33:49 I'm not 100% sure if the 7 invalidated txs were actually mined in a block or were just waiting in txpool. (If in txpool, they would have been included in the next block).
18:34:04 Thanks for that, Rucknium!
Nice to have it documented
18:34:15 The fact that transactions get stuck in the mempool for a week is much worse than just being invalidated.
18:34:50 DataHoarder: Ah I saw someone sent my node a deep alt block and was a bit confused :D
18:35:14 @ofrnxmr:monero.social was able to spend those outputs later because he cleared his node's txpool. I think another node mined it, not his. But @ofrnxmr:monero.social can confirm
18:35:15 to make it more obvious, the transactions with the key image stay in mempool so other new transactions can't replace this invalid one
18:35:21 just in case the old chain comes back
18:35:33 And clear some things in the wallet cache I think.
18:35:44 Ordinary users would find it difficult to clear everything.
18:36:05 @boog900: it might not have been me, snatching some of qubic's lost chains is tricky so these are the ones I'm focused on finding
18:36:20 and it'd stay in other nodes' mempools
18:36:39 so it'd block transmission or broadcasts, or mining
18:37:40 In a scenario where the community and other agents are reluctant to enable DNS checkpoints, a grim trigger strategy can be followed https://en.wikipedia.org/wiki/Grim_trigger
18:38:27 Set up infrastructure for DNS checkpoints, but don't post checkpointed blocks to the DNS records unless Qubic re-orgs more than 9 blocks (or they facilitate a short-chain double-spend tx). If they do, issue checkpointed blocks indefinitely.
18:38:58 Rolling DNS checkpoints reduce Monero's decentralization, so it isn't ideal for the Monero protocol, either.
18:39:14 I don't see much opposition to it at the moment.
18:39:32 it's understood it's a bandaid until measures can be implemented in the longer term
18:40:09 The Core Team would need to agree since they manage the DNS records. You need most big mining pools to agree and enable checkpoint enforcing (and probably a new monerod release with procedure smoothing).
18:40:49 It's in the mining pools' interest to enable checkpointing enforcement if they are mostly all in it together.
18:40:53 the cadence changes and improvements can be done ahead of time, as they also smooth over any future need of it even if it's not for qubic
18:41:28 (broadcasts, reorg ring db fix, check intervals, etc.) before agreeing to issuing them or not
18:41:55 that way it's ready, instead of lagging for all that to be available. monero users can sometimes delay updates quite a bit
18:41:59 If there are no objections here, then timeline, tasks, and personnel should be decided.
18:42:13 i.e. who does what, and when.
18:43:14 DataHoarder has a good understanding of the DNS issues involved. And the block propagation issues. He could lead on that. @vtnerd:monero.social could help with additional smoothing changes to monerod.
18:43:51 Tracking issues can be opened first.
18:44:07 The Core Team would need to reach a consensus on this. @articmine:monero.social could lead on that if desired.
18:44:32 Mining pools need to be contacted and their internal decision processes need to be worked through, and their decision communicated.
18:44:44 i will bring it up with the core team
18:45:09 ArticMine: Thank you!
18:45:50 From previous experience, hashvault, moneroocean, supportxmr, and nanopool seemed quick to implement changes to their mining procedures when contacted: https://rucknium.me/posts/monero-transactions-60-seconds-faster/
18:46:06 I have a list of issues that we encountered while monitoring and operating monerod in a different capacity, and data to back the reasoning behind these fixes.
as long as scope is limited @rucknium I can take care of that part
18:46:41 DataHoarder: Fantastic. Thank you!
18:47:36 Try to pass improvements to @ofrnxmr:monero.social and myself, who are performing checkpointing experiments on testnet.
18:47:36 hashvault, moneroocean, supportxmr, and nanopool are together >50%, enough for a soft fork.
18:49:01 Who will craft the message and contact mining pools? Last time, at least @ofrnxmr:monero.social and @ack-j:matrix.org helped contact mining pools.
18:49:20 Plowsof also helped
18:49:47 "ofrnxmr was able to spend those outputs later because he cleared his node's txpool. I think another node mined it, not his. But ofrnxmr can confirm" correct
18:50:09 By last time, I mean the block template updating config fix: https://rucknium.me/posts/monero-transactions-60-seconds-faster/
18:50:23 selsta would be the person to coordinate a new point release of monerod. He has said he is standing by to assist on that.
18:51:07 What else would need to be done?
18:52:43 address the MoneroPulse page if it's decided to change, so people are informed on their opt-in decision if wanted
18:52:45 If anyone thinks of something else that can be done and/or wants to take on a task, say it later in this room and/or on the GitHub issue: https://github.com/monero-project/monero/issues/10064
18:54:12 DataHoarder: Good idea. That could be folded in with crafting a blog post on getmonero.org and broadcasting info to Core's email list so that the message is consistent.
18:55:04 i wanted to ask, do we have numbers on orphan rate pre-qubic? that way the propagation time can be inferred from the Poisson distribution
18:55:27 @rucknium: I may take on these communication tasks if no one else wants to.
18:56:46 @venture we have historical p2pool logs and what miners were mining when they found their shares, every 10 seconds. sometimes you see half the miners mining on an orphan block. unknown if we have other longer-term data besides monero nodes keeping alternate blocks pre-qubic
18:57:19 @venture:monero.social: Did you see my info about empirical block propagation times in issue 10064?
18:58:15 There is also an analysis of logs in a research-lab issue by @chaser:monero.social IIRC
19:00:02 ah nice. will check it out. no, wasn't aware of 10064
19:00:19 Next: Publish or Perish: https://github.com/monero-project/research-lab/issues/144
19:00:55 ah shit. Thought it was already on the agenda. my svg is related. but work in progress :)
19:02:48 I have been considering the methodology of these papers.
19:04:27 Most of these papers use a Markov Decision Process (MDP) to decide the adversary's optimal behavior.
19:04:45 But there are two limitations to MDP. They can only model stationary processes. MDP is a subset of Stochastic Dynamic Programming (SDP), which can also model non-stationary processes.
19:05:01 my guess is they implement a miner from scratch (without PoW), and have a global event queue that calls on_mine based on a Poisson distribution and thinning by shares.
19:05:01 That queue gets emptied at each t and calls on_deliver
19:05:08 MDP also can only model a single agent's decision. The honest miners are basically on autopilot and inert.
19:05:26 and once that was set up, they did the policy MDP..
19:05:44 A process is stationary if its statistical/probability properties do not depend on time.
19:06:03 @rucknium: yes, autopilot, i.e. always broadcasts on_mine
19:06:37 Mining is a Poisson process, which is memoryless. So far, so good. But difficulty adjustment is not memoryless.
And a selfish miner alters difficulty adjustment.
19:07:23 yeah.. diff adjustment is often not considered in these papers. only the "Selfish mining re-examined" one does
19:07:28 Grunspan & Pérez-Marco (2019) "On profitability of selfish mining" https://arxiv.org/abs/1805.08281 give a critique of modeling selfish mining in an MDP framework on this ^ basis.
19:08:15 I don't know how much this critique matters in the current scenario.
19:09:03 Related, I wonder if the objective function in the PoP's MDP could be easily modified from relative revenue to "propaganda value", or something, since Qubic seems to care a lot about that.
19:10:24 unfortunately they didn't publish their solved MDP I think
19:12:06 On the single-player limitation to MDP, I have only seen two-player modeling in Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting" https://arxiv.org/abs/2503.10185
19:12:09 They say their protocol has a Nash equilibrium for some parameter values. Maybe other papers do game theory modeling. I haven't read them all in this area.
19:12:11 Aumayr et al. (2025) also does MDP as a complement to their game theory modeling.
19:12:19 @venture:monero.social: Here was the orphan analysis by @chaser:monero.social https://github.com/monero-project/research-lab/issues/102#issuecomment-1577827259
19:13:16 Did Negy, Rizun, & Sirer (2020) "Selfish mining re-examined" use MDP?
19:15:19 My big-picture impression of PoP is that it bites most fiercely in its k parameter. The other changes seem to have the biggest effect for ties/races/near-ties. For a very strong adversary, k rules.
19:15:38 @rucknium: around 15 reorgs per month, that's insanely low, good propagation. man qubic seems well connected, sniping in between with their blocks
19:17:09 k is a mirror image of the rule in Bitcoin Cash (for example) that won't re-org to a chain when the re-org is more than n blocks deep. The k rule is that a node will not re-org to a chain that is less than k deep. k = infinity means that both the n and k rules are in effect, basically. That is what I understand.
19:18:52 It is possible that Negy, Rizun, & Sirer (2020) "Selfish mining re-examined" don't use MDP because they agree with the critique of Grunspan & Pérez-Marco (2019), but say they don't go far enough.
19:20:09 > In case of two competing chains of equal weight, the PoP paper calls for a random selection to be made, but I would suggest to use a deterministic tie breaking rule (e.g. with hashing).
19:20:09 @tevador: how would that work?
19:20:15 rucknium: with k=∞ you can still reorg, just not to a selfish chain that will have a weight of zero
19:20:55 i think diff adjustment is not elastic enough in monero for a selfish-miner.. def. worth looking into, but "static" is also valuable and should help short-term selfish-mine mitigation
19:21:04 antilt: I've been thinking about this and it seems that the random selection they propose has a reason. The attacker can't know in advance if they will win a race.
19:22:04 With deterministic selection, you plug in both chains into a hash function and that will tell you which chain to select. The attacker can do this check before deciding to publish its chain.
19:22:19 yeah. that's a downside. but uniform splits the hashrate in half
19:22:50 but even with deterministic selection, he can't get the uncle in...
19:24:18 "About 80 percent of blocks arrived at all five Monero nodes within a one-second interval."
https://rucknium.me/posts/monero-pool-transaction-delay/
19:24:18 In early 2023
19:24:21 The magic of fluffy blocks
19:24:58 Stochastic dynamic programming could relax the stationarity assumption. In theory I could try that since stochastic dynamic programming is used in economics a lot, but I don't know how difficult it would be in this case.
19:25:10 With deterministic selection, the attacker will only release a block if it wins the selection. So a probability of 1.0 to win. With random selection, the chance is (1.0+α)/2, which is < 1.0.
19:26:48 probability of 1 omits the cases where he lost hashrate effort and never broadcasts (that's not visible, but still there)
19:27:11 tevador with k=3 this case would happen quite often. How do checkpoints fit in here?
19:27:15 If he loses the selection, he mines a secret block N+2 with the honest uncle.
19:27:58 Doesn't the attacker have to waste more hashpower to get a winning block with deterministic selection? Throw a whole block away if it is losing?
19:28:36 tevador: i need to get my head around this memoryless thing. he would start over at this point.. but maybe that has no downside..
19:29:24 He wouldn't throw it away, he'd continue mining N+2, so his chain will have an equal weight if N+2 is released in time using the honest N+1 uncle. So the overall chance of winning is 75% for the 2 attempts.
19:31:16 i hope i can share some simulation numbers soon, with the 2 variants, uniform, deterministic
19:31:16 but what about changing the second rule, to only go for the highest weight, if it is >= 2
19:32:31 ^ Then you would have longer honest chain splits.
19:33:56 All this will probably need to be simulated first.
19:34:02 We can end the meeting here. Feel free to continue discussing. Thanks everyone.
19:34:31 antilt: the idea was to use k=∞ with checkpoints.
19:34:54 k=3 makes more sense without checkpoints
19:38:35 I will respond to the fee and minimum penalty free zone issue in the GitHub issue
19:42:01 tevador: ping
19:42:04 thx. an attacker would need <<51% and a custom strategy iiuc
19:45:14 Rucknium next p2pool release will fast propagate all Monero blocks, so we can expect <1 second propagation times once enough p2pool miners update. My tests show propagation delay <3 ms per hop, compared to Monero's 400+ ms
19:48:46 relevant code https://github.com/SChernykh/p2pool/commit/50634e5e79292dcd16ac4fb45593525bd6287b4d :)
19:49:22 I touched on it lightly sech1 as well :)
19:50:02 so if pools want faster propagation, all they have to do is run *a* p2pool instance, no need to mine on it, connected to one of their monerod nodes
19:50:56 p2pool should receive blocks via ZMQ and broadcast them to other peers who don't have them; in turn, if these have other p2pool instances (like mini/nano) they would also broadcast it to most participants
19:51:14 p2pool will also submit_block to that monerod with new incoming blocks
19:51:15 sech1: Great :)
19:51:15 According to my network construction simulations, all reachable nodes should get a message within 4 hops: https://github.com/monero-project/monero/pull/9939#discussion_r2285992169
19:51:28 so it's a two-way bridge for faster blocks
19:52:19 Add unreachable nodes and you would get a max of 5 total since all unreachable nodes must have an outbound connection to a reachable node (unreachable nodes cannot connect to each other).
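As a rough illustration of the orphan-rate-from-propagation-delay inference mentioned earlier, a minimal single-delay Poisson sketch; the delays, block times, and block counts below are illustrative assumptions, and the per-miner simulations discussed in this meeting model far more detail (fork resolution, per-node views, selfish strategies):

```python
import math

def orphan_rate(delay_s: float, block_time_s: float) -> float:
    """P(a competing block is found within the propagation window), simple Poisson model."""
    return 1.0 - math.exp(-delay_s / block_time_s)

def implied_delay(rate: float, block_time_s: float) -> float:
    """Invert the model: effective propagation delay implied by an observed orphan rate."""
    return -block_time_s * math.log(1.0 - rate)

# Illustrative numbers only (assumptions, not measurements):
print(orphan_rate(0.4, 120))            # ~0.3%: one ~400 ms monerod hop, 2-minute blocks
print(orphan_rate(5.0, 60))             # ~8%: a 5 s delay on 1-minute blocks
print(implied_delay(15 / 21_600, 120))  # ~0.08 s if ~15 orphans over ~21600 blocks/month
```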
19:52:34 Yeah, most important is that major Monero pools start running p2pool nodes connected to their block-producing nodes, via ZMQ
19:53:07 But even without it, this change will speed up the network
19:56:29 I expect pools' effective hashrate to get a >1% free boost :) Just because everyone will get new blocks faster
23:01:23 put my current simulator online here https://github.com/venture-life/mining-simulator
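As a complement to the Monte Carlo simulators linked above, a minimal honest-miners-only fork-rate sketch: exponential block intervals, a fixed propagation delay, no difficulty adjustment, no fork-resolution policy, and no selfish strategy; the pool shares and delays are illustrative assumptions only:

```python
import random

def fork_rate(hashrate_shares, block_time_s=120.0, delay_s=1.0, n_blocks=100_000, seed=1):
    """Fraction of block heights where a second miner finds a competing block before
    the first block could propagate, i.e. a natural fork among honest miners only."""
    rng = random.Random(seed)
    total = sum(hashrate_shares)
    forks = 0
    for _ in range(n_blocks):
        # Each miner's time to find the next block: exponential, rate = share / block_time.
        finds = sorted(rng.expovariate(h / total / block_time_s) for h in hashrate_shares)
        if finds[1] - finds[0] < delay_s:  # runner-up finds a competing block during propagation
            forks += 1
    return forks / n_blocks

# Illustrative: five equal pools, 2-minute blocks, ~1 s vs ~5 s propagation delay.
print(fork_rate([20, 20, 20, 20, 20], delay_s=1.0))
print(fork_rate([20, 20, 20, 20, 20], delay_s=5.0))
```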