01:34:45 when is the next meeting? 01:35:34 Check github meta issues 01:35:47 Same time every week 01:35:56 If i knew, id tell you 01:36:36 https://github.com/monero-project/meta/issues/1256 01:36:40 https://github.com/monero-project/meta/issues/1261 01:36:51 https://github.com/monero-project/meta/issues/1261 01:36:53 Yea 12:30:03 Is the meeting ok to attend for all? 12:30:31 Absolutely 15:02:49 MRL meeting in this room in two hours. 17:00:50 Meeting time! https://github.com/monero-project/meta/issues/1261 17:00:51 1) Greetings 17:01:00 Hi 17:01:11 Hello 17:01:17 Rare hi from me 17:01:24 Hi 17:01:26 *waves* 17:02:54 2) Updates. What is everyone working on? 17:03:11 Ping jeffro256 17:03:45 Howdy 17:03:50 thanks for the ping 17:04:29 me: Reading papers about selfish mining countermeasures and decentralized consensus protocols. Helping test rolling DNS checkpoints on testnet. Making fixes to moneroconsensus.info and moneronet.info 17:05:04 me: working on special build for dns checkpoint testing, and looking at carrot+lws stuff 17:05:15 me: fixed a couple bugs testing forking from current testnet (in the migration and in scanning), cleaned up the FFI (removed unwraps/asserts, used int error returns, de-duplicated some code with macros, clippy+fmt), implemented consolidated paths in the RPC for getting paths in the tree by global output id, reorganized curve trees db logic into a saner structure in prep for PR(s) to the main monero repo, tested kayaba's latest prove/verify optimizations (good results!!) 17:05:44 me: researching selfish mining countermeasures (Hi) 17:05:51 Repeating from NWLB: good news for the stressnet: kayabanerve implemented linear prove() times, dropping 128-input tx construction down from 5m30s to ~1m!! Huge 17:05:56 me: initiated communication about Carrot follow-up review by Cypherstack, updated FCMP++ benchmark tool for all the latest FCMP++ changes (github.com/jeffro256/clsag_vs_fcmppp_bench/), working on supporting high input counts in benchmark tool, working on patches in Monero core repo, re-reviewing the de-dup PR by rbrunner7, reviewed several PRs in seraphis-migration, did a write-up for an upcoming vuln disclosure, helped prepare v0.18.4.2 release. The de-dup PR does a great job at naturally excluding nodes from the known spy node list, which is great! https://github.com/monero-project/monero/pull/9939/#issuecomment-3228779989 17:05:59 I updated my comments on the FCMP++ weight function. I am also working on a full write-up on Monero's scaling generally. 17:06:01 I am also working on the block signing remix 17:06:47 monero-oxide migration, optimizations and improvements. 17:09:34 3) [Carrot follow-up audit](https://gist.github.com/jeffro256/12f4fcc001058dd1f3fd7e81d6476deb). 17:11:00 In communication w/ Cypherstack right now, clearing up scope and such b4 a quote. Is there anything about the scope that people here think should be changed? 17:11:12 Or object generally? 17:11:41 Or any questions? 17:11:56 It LGTM 17:13:31 Same here - as far as I understand such stuff ... 17:14:21 One broad point about the scope we could try to tackle now is adding security arguments for a quantum-resistant migration. 17:15:15 The amount commitment openings should be quantum resistant; the openings for address spend pubkeys I'm less sure about. Going down that path would surely expand the scope significantly. We could also try to tackle that later
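For readers following the quantum-resistance point above, a brief reminder of the objects involved, written as a hedged sketch using the standard Monero/Carrot amount commitment (the exact Carrot key derivations live in the Carrot spec and are not restated here):

```latex
% Standard Monero amount commitment: independent generators G, H,
% amount a, blinding factor (mask) k.
\[
  C \;=\; k\,G \;+\; a\,H, \qquad \text{opening of } C \;=\; (k,\, a).
\]
% Pedersen binding rests on the discrete logarithm problem, so a quantum
% adversary can always find *some* other opening of C. The question raised
% above is whether Carrot's deterministic, hash-based derivation of k (from
% the sender-receiver shared secret and, per the rationale later in the
% meeting, the amount itself) still lets honest openings be argued secure
% in a post-quantum migration.
```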
17:15:31 Or we might deem it unnecessary 17:15:52 I would lean toward NACK on that for now since we have more auditing to get through to take fcmp++ to mainnet 17:15:54 and Carrot 17:16:37 I would generally agree, but the option is there 17:17:12 Are the reasons for the protocol tweaks specified somewhere? 17:18:13 Yes, but it's scattered about in different git commits. I should write down the rationales in one place 17:18:42 Yes, I think it should be part of the scope (to verify that the goal of each change is achieved). 17:19:05 I haven't reviewed this and have no comment other than deferral to jeffro 17:21:53 Any more discussion of this item? 17:22:30 @tevador: 17:22:31 By adding the amount to the derivation of the amount blinding factor, we add ~32 bits of security to Carrot-style openings of the amount commits, which potentially allows quantum-resistant migrations in the future. 17:22:33 By removing the cofactor clear, our X25519 ECDH is A) faster, and B) easier to tweak a standards conformant library to get correct results. 17:22:35 By removing K_s from anchor_sp, for accounts where there are two or more main addresses (i.e. hybrid key hierarchies), you would have to calculate-and-check anchor_sp that many times (which is not secure against collisions), or disable special enotes to yourself. 17:22:37 By removing K^j_v from d_e, once s_sr (the X25519 ECDH exchange in normal enotes) is derived, you no longer need the private view-incoming key or subaddress table to scan normal enotes, which simplifies the scanning code. 17:23:38 OK, I think we can move onto the next item. 17:24:18 Generally agree with tevador's idea that above rationale makes sense to include in scope 17:24:41 4) [Discussion: Replace Monero's hash-to-point function with the FCMP++ Upgrade](https://github.com/monero-project/research-lab/issues/142). 17:26:30 We rely on the hash to point to be a collision resistant hash function to resolve the burning bug. The current HtP isn't a good CRH. We can add a new HtP in ~10 lines. jeffro256 volunteered to update the codebase so all CARROT outputs, already a new type, define their key image generator via a HtP. The FCMP++ upgrade makes it trivial to update the HtP. I've discussed this as a potential change for over a year. I just finally did the work to review our existing HtP and confirm it _should_ be replaced. 17:26:31 I wrote my opinion on github. I don't think that the change is worth it. The security benefit is questionable. 17:27:14 tevador has prior advocated against the change due to the marginal increase in security. I'll note that while there's an explicit marginal benefit, the hash function also has implicit bias I'm not sure has ever been formally bounded. 17:28:07 Even the unbiased version I'm proposing is bounded to 10/sqrt(q), so ~123 bits of security to my understanding. That raises the question of how much worse the current version is. 17:28:43 As to whether the security is worth it, I leave that to people smarter than me. However, if there is a valid argument for the increase in security, I am willing to update all my relevant code to handle that change. 17:29:50 We can also make more than 10 lines of edits (100 lines?) to become standards compliant, as we *almost* are compliant with what became the standard, and optimize the resulting function by ~20-40% (but this isn't a performance critical fn after FCMP++). The main benefit is anyone who _uses_ Monero _without_ using the Monero C++ would be able to write a key image generator function without writing bespoke elliptic curve arithmetic (due to any impl of the standard being relevant).
17:30:21 If we define the security of our protocol as the security of the weakest component, I wouldn't be surprised if this HtP was _technically_ it. 17:30:30 It will be far more than 10 lines of code. Just the fact that RCT and FCMP transactions will coexist for some time will need to be coded for. 17:30:42 my opinion, ^ I have been bitten by the bespoke ec math for hash to point specifically, otherwise I was able to make everything else using generic libraries 17:30:44 From my view: the benefit seems pretty small, and the change seems pretty small as well. It adds some complexity to manage as tevador notes 17:30:54 The unbiased hash to point function is ~10 lines tevador 17:31:20 The _call sites_ will be more, hence why the old RingCT code remains as-is and we use the separation _already introduced for CARROT outputs_ for the new HtP. 17:31:34 We already have to delineate old outputs from CARROT outputs. cc jeffro256 17:31:43 FWIW in tree building we do already have an indicator determined by whether or not an output is Carrot or not (if Carrot, then consensus already made sure the output does not have torsion) 17:32:09 So it wouldn't be such a major change to introduce in there 17:32:13 Even if it is just a few bits or just half a bit, we shouldn't sacrifice them for naught. The moment we realize any cryptography, we realize losses in our security. Our job should always be to minimize those. 17:32:43 Else, these will add up and add up, and we'll end up with a protocol whose practical bound is lower than desired. 17:32:46 On the flip side, if someone wants to write a Monero wallet that can deal with older txs pre-FCMP++, I figure they would still need the old hash to point implemented too 17:32:47 Since the biased function is based on Elligator, its security might have been already studied. Could be worth looking into it more. 17:33:16 It has, and the recommendation for an unbiased version when you need a CRH is from that research. 17:33:35 Also, the 10 / sqrt(q) bound for calling it twice and summing the result is from an Elligator author. 17:34:02 You wrote above: "the hash function also has implicit bias I'm not sure has ever been formally bounded" 17:34:41 It depends on the exact curve parameters due to the reduction from the order of the field the curve is defined over to the order of the curve. 17:35:15 You'd need research specific to Curve25519, which may scrape by due to how small the primes are after the leading bit? 17:35:19 FWIW, the Carrot libs working with Carrot outputs already don't use much of the higher-level key image helper functions that the current code uses, because those are meant for univariate key images and the API would have to be modified 17:36:13 But that research has been largely forsaken AFAICT because there's other reasons for bias leading people to simply invoke it twice for an unbiased variant, and formalize the bias on that instead 17:36:42 It's pretty likely that the bias is exactly 1 bit since it covers half of the curve. 17:36:53 But you're right, I may have missed a prior formalized bound for single-invocation Elligator over Curve25519 17:37:39 tevador: I'm pretty sure some resulting points appear more frequently due to the difference in orders. 17:38:58 It's possible.
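As a hedged sketch of the "invoke it twice and sum the result" construction referenced above (with E the Elligator 2 map to the curve, and h_1, h_2 independent hashes to the base field; the existing Monero Hp instead uses a single, biased invocation):

```latex
% Unbiased hash-to-point, per the construction discussed above.
\[
  H_p'(m) \;=\; E\!\big(h_1(m)\big) \;+\; E\!\big(h_2(m)\big).
\]
% A single Elligator invocation only covers about half of the curve, hence the
% roughly 1-bit bias mentioned above; summing two independent invocations is
% the approach the later standardization work settled on for an output that is
% statistically close to uniform, with the 10/sqrt(q) distance bound cited in
% the discussion (about 123 bits for Curve25519's group order q).
```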
17:39:15 It's part of the commentary from Hamburg when the IRTF standardized hash to points. 17:39:47 That's the 'implicit, unformalized' bias I'm gesturing at. 17:40:09 But again, Curve25519 moduli are favorable _and_ the torsion clear may also be beneficial? 17:40:34 It's just such a mess; we can either research it or fix it, and it's the perfect time to fix it. 17:40:37 I think we should investigate and also model worst case attacks. I think even if you can find a collision in Hp, I don't think it implies you can burn an output. 17:40:50 See jberman, jeffro256 noting we have the perfect spots to deal with this. 17:41:11 It does, because if you have the outgoing view key: 17:41:35 You can create two outputs with the same discrete logarithm for their key image, and then you solely need a key image generator collision to perform a burn. 17:42:34 For RCT it does not imply burning an output. I'm not very familiar with Carrot. 17:43:02 It's an FCMP++ comment, not a CARROT comment. You're right for RingCT. 17:43:36 If FCMP is more vulnerable than RCT, the update can make more sense. 17:43:36 FCMP++ allows users to publish what _were_ RingCT private spend keys and are _now_ FCMP++ outgoing view keys for _newly created wallets_. 17:44:05 The only reason that's safe is because the key image generator is a CRH binding to the outgoing view key _and_ the private spend key. 17:44:43 See the difficulties Seraphis had with the burning bug, leading Seraphis to define the outgoing view key as a point _and_ a scalar whose ratio formed the linking tag. 17:44:50 ^ I don't think this was mentioned in your proposal on github. That's a pretty strong point. 17:45:17 I did say we require it to be a CRH to stop the burning bug 😅 17:45:32 But I'm sorry I didn't provide sufficient context/background on that 17:45:35 Although it might be that if people scan CARROT enotes honestly, they can always avoid a burn even if the hash-to-point is not collision resistant, since it would always be the case that x != x' for some O = xG + yT and O' = x'G + y'T if O != O'. 17:47:03 I don't believe so since they only have to be able to scan one of them, not both. 17:48:53 Yes, and the other one is guaranteed to have the same one-time address EC point. Assuming the receiver is scanning honestly (and thus recomputes the one-time address as a function of their spend pubkey), only the receiver should know the discrete logarithm of that point, so the other can't spend it 17:49:27 IMO, this has sufficient feasibility and acknowledgement to move forward, at least within the scope of this meeting, which is 50 minutes in over only a few topics. 17:49:29 jeffro256: No, they aren't. 17:49:40 They're guaranteed to have the same key image discrete logarithm and the same key image generator. 17:49:58 The whole idea is two different one-time addresses can share a key image if the CRH property of the HtP is broken. 17:50:42 They're allowed to have distinct `y` values. 17:51:30 Discussion can continue in the GitHub issue and next meeting. Thanks. 17:51:37 5) MRL rooms moderation. 17:51:47 SyntheticBird: You suggested this item. 17:51:56 Hi 17:51:59 Yes 17:52:01 Now, is CARROT sufficient to convert the problem from finding a collision to second preimage, which would remain secure? Maybe. Even if so, we shouldn't place that bound on CARROT. 17:52:25 Too much spam & trolling in lounge, not enough kicks or mutes 17:52:54 A member here has sent nothing of value, insulted me for days, and arguably even made a threat.
17:53:38 I have a few complaints regarding the current moderation of the research channel. It comes as no surprise, I think, that the quality of discussion has degraded during the recent Qubic events. What would before be reserved for #monero is now disrupting monero lounge, and rarely monero lab. I would like to discuss whether there is a possibility of adding moderators in these zones. The discussions are not constructive, the arguments repetitive, and they often turn into shilling fights whenever a random or a bot comes in to stir the pot. 17:53:53 more moderation would be welcome 17:53:59 If an increase in moderators is not justified, then I would ask the moderators to be more strict 17:54:02 Conversations should move according to the procedure: #MRL (heaven) -> #MRL-Lounge (purgatory) -> #Monero-Beef (hell). 17:54:04 I think that CARROT converts the problem of finding the same x to also finding a collision on the hash-to-scalar, but I'll think about it some more. Though I do agree that it's a bit sketchy for CARROT to have that bound 17:55:06 I dont even think beef is necessary for a lot of this, not that lounge is purgatory. some of these topics arent "off topic" but are time sinks and distractions 17:55:13 Yeah ngl, interestingband case is kind of annoying. He has a beef with the people with permissions, who either ignore him or maybe do not want to appear to be abusing mod powers. But he has passed the point of being constructive 17:55:13 I'm not sure how moderation can work with the relays, at least from the IRC side. But I agree that this room should be more strictly moderated. 17:55:30 "Stir the pot" and "producing more heat than light" should be avoided in all MRL rooms, observed voluntarily on the part of participants, if possible. 17:55:51 some pretty lazy attempts at manipulation/subversion happening in lounge on regular basis since qubic 17:56:24 Rucknium: I totally agree, and in fact no moderator intervenes when this is obviously the case. 17:56:38 even to the point of harassing devs just to waste time 17:56:40 I have been trying to sync up the new bridge to deploy it soon, probably starting with specific monero rooms first. It can make moderation from the Matrix side easier, as each user on IRC will appear distinct 17:56:42 A scientific tone is appreciated. 17:56:44 And worse, long term members (..and mods) are dragged into it and make it worse 17:57:12 It's not just a matter of bans but also of telling people to stop the false discussion, because it's disrupting others who want to introduce another useful topic 17:57:22 Mapping of bans/kicks on each side is a feature that it has, although not widely tested (and the relay on the irc side would need at least half-op rights) 17:57:29 I think lounge should be related to work, and people working 17:57:35 Not AMA 17:57:47 This is in response to your question about relays, tevador 17:57:55 DataHoarder: thanks 17:58:26 I think that any kind of hyperbole in that channel seems detrimental 17:58:28 If the monero.social Matrix gods allow, this should finish syncing up today (it's up to the 25th of July now, it has been syncing up from 2024 events) 17:59:09 DataHoarder has been MVP recently 🎉 17:59:16 DataHoarder, very excited to see the new bridge operational 17:59:20 (Not minimum viable product, but most valuable player) 17:59:37 I was MVP 20+ years ago :) 17:59:50 Can i be the lowercase mvp? 17:59:52 People find their way in these channels and usually get asked to move elsewhere for specific discussions, but some just ... continue
17:59:54 I think the first thing we could try to get a rough consensus on is whether new moderators are necessary 18:00:10 after that, a mute or enforcement could be necessary to prevent channel spam 18:00:13 Any more to say on this item? 18:00:23 [@diego:cypherstack.com](https://matrix.to/#/@diego:cypherstack.com) cc 18:01:11 Rucknium: i think i'm good, the discussion can happen later. 18:01:46 isn't Rucknium mod here and in lounge yet? 18:01:48 keeping out Qubic sock puppets and concern trolls IMHO is paramount, otherwise valuable discussions and valuable contributors will be turned away from participation. the current admins seem to be MIA. I wouldn't mind having more of them. 18:02:05 No 18:02:18 I'm not mod of any room. I was mod of -beef 😢 18:02:34 Butcherium 18:02:45 You are 😆 18:03:12 I prefer not to have mod powers because mod powers involve you in controversy unnecessarily, but if it's for the collective good, then I can get mod powers. 18:03:27 The collective good has spoken, it seems. 18:03:41 you moderate meetings here, lounge can be handled by banhammer 18:03:52 lounge can be handled by ruck 18:03:57 [Note to IRC side: I am admin now in Matrix MRL room] 18:04:03 I'd volunteer, but yknow how that goes 18:04:07 if he wishes so tho 18:04:27 6) [Transaction volume scaling parameters after FCMP hard fork](https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-07.pdf). 18:05:00 Lel 18:05:04 articmine updated comment today 18:05:20 Rucknium: I prefer turkey 18:05:39 I'm still not understanding why the disincentive should be so strong for 128-in versus 16x 8-in 18:06:16 chain bloat 18:06:21 My thought on this is that we can separate the weight that is due to proofs from the rest of the transaction weight and then apply a function so that after the quadratic penalty the fee tracks verification time more closely 18:06:33 chain bloat is easier with lower input txs 18:06:50 https://github.com/seraphis-migration/monero/issues/44#issuecomment-3228882784 18:06:56 mea culpa i thought the opposite 18:07:32 1 and 2 input txs are the longest verification and highest size 18:07:34 if one wants to maximize spam with mined transactions, the smallest weight is optimal 18:07:57 since this is where the penalty is lowest 18:08:48 So there is a case to counterbalance this by favoring large transactions 18:09:38 I'm not sure I fully follow, Artic: are you proposing to update the original proposal with new formulas? 18:10:27 2c: If fees are set linearly per-input, higher input txs end up paying more per byte than lower input txs 18:10:32 If we want the fees to be close to linear with verification time then we will have to apply a weight formula to favor large transactions 18:11:13 So if this is the wish of the community then yes 18:12:03 My main concern here is that the transactions be economic to mine 18:12:11 linear would mean that it is roughly comparable cost to construct 16x 8-in's compared to 1x 128-in, not to make one more favorable than the other 18:12:32 Yes 18:12:38 I think we should set an 8-in/4-out limit and not _have_ large transactions. 18:12:41 (although I think there is a stronger argument for 16x 8-in's to cost more) 18:13:14 This needs a square root weight formula on the proof part of the weight 18:13:22 Alternatively, a linear weight sounds fine without favoring either side. If anything, I'd call for large TXs to pay notably more than smaller TXs to provide an economic incentive for people to move to small TXs, as we may require in the future.
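A minimal sketch of the two weight-shaping options being discussed, assuming hypothetical byte counts and constants (none of these numbers come from the scaling proposal; the point is only how the two shapes differ):

```python
# Option A: weight tracks raw tx size (the "linear, simplest to justify" view).
def weight_linear(tx_bytes: int) -> int:
    return tx_bytes

# Option B: split the proof bytes out of the weight and apply a square-root
# term to them, so that after the quadratic block penalty the fee tracks
# verification time more closely. 'scale' is a placeholder constant a real
# proposal would calibrate against a reference transaction.
def weight_sqrt_proof(non_proof_bytes: int, proof_bytes: int, scale: float = 50.0) -> int:
    return non_proof_bytes + int(round(scale * proof_bytes ** 0.5))

if __name__ == "__main__":
    # Toy comparison with placeholder sizes for a small tx and a many-input tx.
    for label, rest, proof in (("small tx", 2_000, 3_000), ("128-in tx", 4_000, 60_000)):
        print(label, weight_linear(rest + proof), weight_sqrt_proof(rest, proof))
```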
18:13:51 My point is we can tune the weight formula to whatever is desired 18:14:00 P2Pool coinbase outputs :) 18:14:07 Why should my wallet bloat the chain by splitting my tx into 16 instead of just sending the damn thing 18:14:26 ofrnxmr: Transaction uniformity. 18:14:32 those tend to be plenty, and small, which with this change would make using p2pool more expensive than a centralized pool due to even higher fees 18:14:45 And again, this may become a hard requirement in the future. We want people to get prepared with solutions now. 18:15:06 I have been told no to 8/4 and I think you all are horrible people who hate privacy and don't understand anything /s 18:15:20 me but unironically 18:15:22 Tx uniformity would be a fixed 1 or 2 input tx, not 8 4 2 1 18:15:36 So i propose we separate the proof parts of the total weight from the rest of the transaction weight, then I can provide options for consideration 18:15:42 On a legitimate note, if we have an end goal of uniformity, we will decrease from 128 in. I understand we aren't doing so now. We should still nudge people towards fewer and fewer inputs per TX. 18:15:51 8 in/4 out is a valid uniformity target ofrnxmr 18:15:58 Dummy inputs/dummy outputs 18:16:28 > I intend to break out membership proof size and verify time into 2 additional columns 18:16:32 I can do this today 18:16:56 my input count on ringct has privacy issues (cospends). What issues for fcmp? 18:17:05 4/3 is a minimum established by CARROT. 18:17:58 ..afaict, uniformity doesnt add privacy w/o knowledge linking the inputs together 18:18:39 And with fcmp, it's my understanding that the history of the inputs isn't linkable to their origins 18:18:53 I (non-sarcastically) agree reducing the number of in/out combinations is a legitimate and viable way toward tx uniformity, and would nudge people toward that even without universal dummies and IVC 18:19:22 ofrnxmr: More inputs signifies you're an exchange or service provider. 18:19:30 no it doesnt 18:19:46 It's a long-term goal for all TXs to be indistinguishable, which would include fixed input/output count. 18:19:47 Number of inputs in a FCMP tx probably only gives information to the tx recipient, not an external blockchain observer. 18:19:56 ofrnxmr: it still betrays a lower bound on how many enotes you own in that wallet 18:20:15 With statistical likelihood? Yes. A new user who just joined the network won't have 128 inputs on day one. 18:20:27 I can have 128 x 10$ inputs and be spending $1000 18:20:33 It leaks how many TXs the creator of the TX was involved with. 18:20:44 A very important point. If we have minimums with 4in then the reference transaction weight will have to be changed 18:21:00 and very likely the minimum penalty free zone 18:21:27 So I do want to encourage Monero down that path, but obviously, we can't jump down to the end of 8/4. 18:21:49 So an economic incentive for fewer inputs may be a good first step. 18:22:15 i think this is a non-issue, and disagree that there should be incentive to bloat the chain for pseudo privacy 18:22:49 Its not an economic incentive for node runners to store more data for less money 18:23:04 In other words if the minimum transaction weight is going to be say between 10000 bytes and 20000 bytes then ZM will need to increase to 2000000 bytes 18:23:09 Also, we're limiting inputs to 128 with FCMP++, and I'd support limiting to 64 in the future as well. 18:23:15 Taking steps. 18:23:17 ArticMine: that 4/3 is a minimum bound for a maximum limit rule. Txs can still have 1-3 ins
18:23:21 Is dummy input support planned for FCMP? Proposed here: https://github.com/monero-project/research-lab/issues/96#issuecomment-2104091836 18:23:46 Not planned. 18:23:49 not at the moment no 18:23:51 ... but will they be the same weight as 4in? 18:23:59 But it'd work and wouldn't be the most difficult, nor the most costly :) You did good tevador 18:24:04 Although I think it's really more like 3/2, but it depends on how you count it 18:24:35 ArticMine: depends on how we end up defining weight. Byte size? No. Should it be the same weight? Almost certainly not 18:25:11 TL;DR IMO, long-term goal of uniformity. TX cost should respect byte size _and_ verification cost, as the Bulletproof clawback does. We can step towards uniformity by additionally penalizing large TXs so that it's cheaper to do small TXs instead. 18:25:38 Low input txs*** 18:25:42 We don't have to agree on uniformity as a goal now. We can start just by discussing if weight should be linear to size and time, or just size, or what 18:25:49 Those are larger per input, so not "small" 18:25:52 I can sidetrack us with a penalty on large TXs later 18:26:03 Back-tracking a bit, my opinion on weights assuming we maintain the current plan to support up to 128 inputs: I think a formula with linearly increasing weight will probably be simplest and easiest to justify. But will review artic's updates to the proposal, and will break out membership proof size and verification times 18:26:30 What i do need is typical transaction weights for the bulk of the transactions, currently 2in 18:26:43 if these are replaced by 3in or 4in 18:26:53 What would be the size of 4/3? 18:27:10 that is what I need to know 18:27:18 This is smooth brained, but again: charging a fixed amount _per input_ ends up more costly _per byte_ for higher input txs, with no obvious penalty to the user 18:27:29 @kayaba 18:27:50 Here is a table with all FCMP++/Carrot tx sizes and verification times, for all input and output combinations: https://github.com/seraphis-migration/monero/issues/44#issuecomment-3150754862 18:27:54 my understanding is between 10000 and 20000 bytes 18:28:06 for 4in 18:28:38 The table says 10462 18:28:49 I'd be fine with byte size + fixed amount per input to be respective of verification time. That doesn't encourage making smaller TXs though. Larger TXs are still favored due to their smaller overall byte size. 18:29:10 4in are still below 13000 bytes 18:29:15 Byte size for Carrot 4-in is 10240-12216 18:29:27 But I'd be fine with it. I'm not trying to force in my policies on uniformity today. I've conceded it's a long-term goal at best. I thought an economic penalty may be a decent first step. 18:29:36 A (nearly) fixed tx size would remove a lot of the issues with fee scaling. 18:29:47 So most likely changing TR to 15000 bytes and ZM to 1500000 bytes would work 18:30:07 it is a very simple change 18:30:11 on my part 18:30:41 Miners get paid more per byte but less per verification time. Users dont see any difference 18:30:56 Is ZM the penalty free minimum? I don't know how I feel about increasing that... 18:31:24 Miners get paid per byte of weight 18:31:42 Yes XM is the penalty free minimum 18:32:04 ZM 18:33:05 My question is: do we have consensus for 4in as a standard 18:33:40 If so i can implement the scaling changes 18:33:44 No 18:33:53 Miners ostensibly don't care about transaction verification time unless it affects the speed of their block propagation. Ideally, each transaction in their mined block is already in honest nodes' mempools
18:34:49 I don't think 4-in should be a standard IMO, IIRC <1% of current Monero txs are 4-in 18:35:45 jberman @jberman:monero.social: has asked that I submit the optimized Field25519 implementation, which impacts verification time, via a patch (not waiting for upstream to merge) and to get numbers accurate to the final target. I'll try to do so shortly after this. 18:36:11 TL;DR Numbers are still variable and may change by ~20% by tomorrow on this point alone. 18:37:03 jeffro256: 3 isn't a power of 2, and we can't do 2 in unless in < out :C 18:37:43 fair point 18:37:48 So if I understand correctly 4in is the minimum? 18:37:53 A 3-in is effectively entirely as costly as 4-in, on proof size and verification time. It saves the 32 bytes in the key image store, and ~1kb for the signatures + the commitments we can't assume are zero (but still pay the verification cost on). 18:38:27 Long-term goal: fewer-input TXs. 18:38:29 IMO, for now: TX weight linearly respective to size and time, potentially penalizing larger TXs in a way encouraging smaller TXs instead. 18:38:47 Instead of forcing people to write better code, we can have them save money if they write better code. 18:39:14 at the cost of my ssd storage .. and bandwidth 18:39:24 ArticMine: 4 is the minimum value we should set the *maximum limit* rule to, not the minimum input count. In other words, we should never add a rule which limits the input count to less than 4. But txs with <4 ins can exist 18:40:00 And will exist, probably continuing to be the majority of txs like it is today 18:40:19 As a miner, i earn more if i store less data = logic is backwords 18:40:23 Backwards* 18:40:46 In other other words, 4-in transactions need to be able to exist 18:41:12 I understand that tx with less than 4 can exist but they will have the same or similar weight as 4 in. If so then I must change the scaling parameters 18:41:17 Penalizing large in with higher fees means miners are incentivized to prefer large in txs 18:41:43 No they should not have a similar weight as a 4-in IMO 18:42:12 I think easiest to justify / achieve consensus on would be: linear increasing weights, clamping to higher powers of 2 to incentivize powers of 2 18:42:35 (Though Monero TXs are so cheap right now that may not matter 😅) 18:43:11 ofrnxmr: that requires users make large input TXs for miners to have that option, and users won't be incentivized to. 18:43:20 I agree with jberman. Now the question is: linear with respect to the input count itself, or linear with respect to the proof size of that many inputs? 18:43:23 I don't think miners are a relevant part of the puzzle here. 18:43:36 If there's a congested mempool, the existing fee market handles all. 18:43:56 not if i make my own block templates .. 18:43:58 IMHO, it could be a good idea to pin tx fee levels to network hashpower. Hashpower follows the purchasing power of 1 XMR closely. Or, closely enough. 18:44:03 What matters is the weight of the bulk of the transactions, currently 2in or less. So are the proposed FCMP++ 2in weights going to hold? 18:44:15 thats pinning to usd 18:44:19 I would say linear to TX size and linear to inputs. Two separate terms in the equation jeffro256 18:44:21 Model space and time 18:44:32 That could end all the discussions of "what if XMR purchasing power increases suddenly" for good. 18:44:35 ofrnxmr: Even with your own block template, you can't make higher paying TXs appear out of thin air.
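A minimal sketch of the "linear weights, clamped to the next power of two" option mentioned above, with a placeholder per-input weight constant (illustrative only, not a number anyone proposed):

```python
def next_power_of_two(n: int) -> int:
    # Smallest power of two >= n, for n >= 1.
    return 1 << (n - 1).bit_length()

def tx_weight(non_proof_bytes: int, n_inputs: int, per_input_weight: int = 1_000) -> int:
    # Linear in the clamped input count: a 5-in tx is weighted like an 8-in tx,
    # so there is no fee advantage to awkward input counts just under a power of two.
    return non_proof_bytes + next_power_of_two(n_inputs) * per_input_weight

if __name__ == "__main__":
    for n in (1, 2, 3, 4, 5, 8, 9, 16):
        print(f"{n:>3} inputs -> weight {tx_weight(2_000, n)}")
```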
18:44:50 It's pinning to purchasing power of a CPU. 18:44:54 kayabanerve: Not necessarily. What we want to avoid is people adding transactions into the mempool which miners aren't incentivized to mine, so they stick around. This can happen when the penalty free zone is saturated 18:45:31 Correct 18:45:48 and/or a kilowatt hour of electricity. 18:45:50 Yes you can. Mine empty blocks until ppl raise their fees 18:46:24 You need a mining cartel to do that 18:46:37 Let's continue the agenda 18:46:44 We need to use node relay to block uneconomic transactions 18:46:51 See "Monopoly without a monopolist" paper 18:46:53 ofrnxmr: that attack always exists 18:46:56 7) [FCMP alpha stressnet planning](https://github.com/seraphis-migration/monero/issues/53#issuecomment-3053493262). 18:47:21 i think it would be nice to have kayaba's tx creation speedups in for stressnet 18:47:24 I have to attend another meeting 18:47:30 re: alpha stressnet, just 1 more blocker PR needed 18:47:38 ty artic 18:47:43 ofrnxmr: Did you try much with the Monero Research Computing Cluster? 18:48:28 I don't think it needs to be a blocker, we can merge it in after launch/people can run it if they want even without a merge 18:48:33 No, i synced up a couple nodes but havent started spamming from there yet 18:48:42 A very "tidy" setup would be to have a docker container or something with a node, a monero-wallet-rpc, the spam script, and a unique wallet in each. 18:48:50 I took a deeper dive into 0xfffc's 9494 PR, and I'm satisfied with it. I'm currently reviewing PR #81 in seraphis-migration. I think we could start planning a launch date 18:48:58 The Monero FCMP++ branch has a PR from me moving from j-berman/fcmp-plus-plus (a fork of kayabaNerve/fcmp-plus-plus) to monero-oxide/monero-oxide#fcmp++ which now hosts the FCMP++ libraries. 18:49:38 Junior in MRC has plenty of storage for lots of stressnet nodes, but Senior doesn't. Maybe some storage could be moved there. 18:49:43 I would propose launch date 7 days after merging #81 18:49:45 Except w/o docker 18:49:47 This could be done, but IDK if deterministic builds for GUIX and Rust are ready yet, so users would have to just trust a single person 18:50:23 The monero-oxide/monero-oxide#fcmp++ code has most of the optimizations I've made recently. I also proposed a faster verifier, but it's on a branch and will not be included as it'd need to be audited (or at least heavily reviewed) to be merged. 18:50:25 I don't like docker either. Any way to do dockerless docker? :P 18:50:28 It's 15-20% faster for 128-input TXs though. 18:50:46 (8% for 64, and rather negligible after) 18:51:25 Last time on stressnet the spamming monero-wallet-rpc instances could not stay connected to nodes easily. So I did one node per rpc instance IIRC. 18:51:29 monero-oxide/monero-oxide#fcmp++ does target Rust 1.69 for the relevant libraries, as we need for cross-compilation and Guix. cc tobtoht 18:51:45 Thats due to the 100mb txpool 18:51:47 Restricted-rpc works 18:52:03 I'll also try to submit the patch for the Field25519 arithmetic tonight for jberman's benches, but I see no reason we can't also include it in stressnet. 18:52:56 jberman's proposal "I would propose launch date 7 days after merging #81" sounds fine to me. 18:53:55 I plan to hop back on reviewing #81 when I'm done with reviewing the peer dedup PR 18:54:39 jeffro256: I removed the spy nodes agenda item for this meeting. Should it be put back in, or can it be skipped today?
18:55:01 I noticed your new comments on the peer subnet deduplication PR 18:55:19 Just the one thing about the dns blocklist 18:55:21 Any more discussion on alpha stressnet planning? 18:55:44 Consensus to update the blocklist to remove a couple old ones and add active subnet(s)? 18:56:04 "Any more discussion on alpha stressnet planning?" -> nothing from me 18:56:11 ofrnxmr: This one? https://github.com/monero-project/meta/issues/1242 18:56:18 Yeah 18:56:26 It can be skipped IMO 18:56:58 Maybe more thumbs can be upped on https://github.com/monero-project/meta/issues/1242 18:57:15 8) CCS proposal: [kayabaNerve Finality Layer Book](https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/604). 18:58:34 What are the advantages and disadvantages of a finality layer compared to rolling N-block (e.g. 10) checkpoints, where nodes refuse to re-org to a new alt-chain deeper than N blocks? 18:59:21 The fact one can have formally-proven safety in an asynchronous model and one immediately fails even in the synchronous model? 19:00:10 I still dont really know a good, decentralized, fair and secure way of selecting the validators. 19:00:11 If Qubic achieves a 10-block reorg (and IIRC they've demonstrated multiple 9-block reorgs, suggesting they could), it'd immediately netsplit off all new nodes. 19:00:16 (any nodes not online at the time of reorg) 19:00:18 Are you aware of any scholarship on N-block rolling checkpoints? 19:00:36 An easy fix is to have deterministic tiebreaking at N block re-orgs. 19:00:52 Deterministic tiebreaking has been studied in many papers. 19:00:55 Until the 'didn't reorg' group achieved a chain with more work, if they ever did, but then the nodes which followed Qubic's chain won't reorganize back. 19:01:01 (Instead of first-seen rules) 19:01:17 theyd have to maintain the longest chain, else the offline nodes will eventually be reorged onto the "honest" chain 19:01:21 We're at a point where we have to replace 10 with 20, if we were to discuss not reorganizing past N IMO. 19:01:30 (cc Rucknium for actual statistics on this) 19:01:47 And all of this, IMO, implies PoW has been shown to be insufficient for the Monero network as it stands. 19:02:08 While there are proposed improvements, I'm not convinced they do any better than marginal. 19:02:11 to get back to the real chain they would need to do a reorg bigger than 10 blocks, right? 19:02:13 Are you referring to my analysis here? https://github.com/monero-project/research-lab/issues/102#issuecomment-2402750881 19:02:15 A proper finality layer would also presumably remove the 10-block lock. 19:02:15 A deterministic tie break won't fix chain splits. The attacker will instead publish an 11-block reorg + 1 block on top of the honest chain simultaneously. 19:02:22 oh the offline nodes 19:02:37 ofrnxmr: No because they won't reorg off Qubic's due to the same max reorg rule. 19:02:41 Depends on how long the lead was maintained 19:03:14 Rucknium: Probably. The question would be, since Qubic exists, with the attacks they've demonstrated, what reorg is now sufficiently improbable? 19:03:49 And is there one with their proximity to 51%, even if it's just occasionally ~30-40% over an 8 hour period? 19:04:14 tevador: So attack the honest chain and the attacking chain simultaneously? Yes, I suppose that could work in the attacker's favor to cause a netsplit. 19:04:17 Do we wish to move to a (8 * 60 / 2)-block lock, recommended confirmations, and max reorg depth? 19:05:10 Anything with a max reorg depth is not PoW anymore.
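A toy model of the netsplit concern with a "refuse reorgs deeper than N" rule, under heavy simplifying assumptions (chains are lists of labels, the candidate chain is always the heavier one, and difficulty/timing are ignored; only the stickiness of the rule is illustrated):

```python
MAX_REORG = 10  # hypothetical cap on reorg depth

def reorg_depth(current: list, candidate: list) -> int:
    # Number of tip blocks a node must discard to adopt the candidate chain.
    common = 0
    for a, b in zip(current, candidate):
        if a != b:
            break
        common += 1
    return len(current) - common

def try_adopt(current: list, candidate: list) -> list:
    # Adopt the (heavier) candidate only if the required reorg is shallow enough.
    return current if reorg_depth(current, candidate) > MAX_REORG else candidate

honest = [f"H{i}" for i in range(100)]
# An attacker privately forks at height 88 and publishes a 12-block-deep reorg.
attacker = honest[:88] + [f"A{i}" for i in range(14)]

online_node = try_adopt(honest, attacker)   # rejects: depth 12 > 10
fresh_node = attacker                       # synced later, saw the attacker chain first
honest += ["H100", "H101", "H102"]          # honest chain keeps growing
fresh_node = try_adopt(fresh_node, honest)  # also rejects: depth 14 > 10

print(online_node[-1], fresh_node[-1])      # the two groups never converge
```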
19:05:11 kayabanerve: How much of a factor do you think the inclusion of time as an independent variable in consensus is in a finality layer? Dont most PoS systems use time slots? 19:05:21 isnt the main reason 10+ block reorgs are an issue, due to invalidating decoys 19:05:36 A proper finality layer solves this. A finality layer can replace the 10-block lock. The risk of a finality layer is the finality layer stalling, performing censorship (though that'd enable a social slash), or equivocating. I believe all of those can be made less problematic than PoW today. 19:05:59 According to what I'm reading by Roughgarden, the less synchronous you become, the more permissioned the validation must be. 19:06:01 tevador: indeed 19:06:05 *much* less 19:06:06 > 21:00:11 If Qubic achieves a 10-block reorg (and IIRC they've demonstrated multiple 9-block reorgs, suggesting they could), it'd immediately netsplit off all new nodes. 19:06:12 They have been able to get +14 ahead 19:06:31 And the amount of discussion DNS checkpoints have gotten, which would be centralized, alone justifies my position IMO. 19:06:31 they released when monero got 9 deep 19:06:51 @tevador we already (effectively) have a max reorg depth of a few thousand blocks 19:07:04 BawdyAnarchist: My proposal is asynchronous BFT which doesn't have bounds on time. 19:07:28 > they've demonstrated multiple 9-block reorgs 19:07:29 @kayabanerve They also had the capacity to do 16 blocks and decided not to, according to DataHoarder's research 19:08:29 ofrnxmr: technicality 19:08:37 kayabanerve: What do you want to accomplish with this agenda item? General countermeasures to a malicious mining pool is next. 19:09:28 Can I have my CCS merged and can the community agree PoW isn't a fundamental tenet of Monero, security and decentralization is? 😅 19:09:51 Pow is a fundamental tenet of monero IMO 19:09:57 Can we not use comma splices? :P 19:10:14 kayabanerve: "A proper finality layer solves this..." comment: the finality layer would offer a strong economic incentive against it being stalled/being coopted/censorious. it's not a high-probability event. equivocations (slashing stake for provable misbehavior) can be quasi-automated within the consensus rules. 19:10:25 1) CCS merged - maybe, 2) There will likely be stiff opposition to PoS. 19:10:31 At least consensus on no-comma-splices. 19:10:39 We can't say "we declare pos" and move to a finality layer overnight. I want the idea to be fairly treated. I believe there's more than sufficient justification to move forward with research and a proposal. 19:11:24 I think there is a case for the CCS to be merged. If people donate the required amount, the research should be done. 19:11:35 chaser @chaser:monero.social: It solves it within its bounds. A max reorg depth is only a solution within the bound 'no miner can attempt more than a 10 block reorg'. 19:11:53 That's such a disastrous bound, which has already failed, that any solution premised on it is a failure. 19:12:17 But you're right that a finality layer is only a solution to a bounded problem (n = 3f + 1). 19:13:04 Finality layer doesn't solve short-chain selfish mining, does it? And it doesn't solve empty block attacks, but you hinted it might. Or how would it? 19:13:05 Sorry. In a fully synchronous network, you can assume no nodes are offline and all nodes receive the honest blockchain before any alt is published. 19:13:59 Is that an amendment to your statement about N-block rolling checkpoints?
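For the "(n = 3f + 1)" remark above, the classical BFT bound it refers to, stated as a hedged note rather than anything specific to the proposed design:

```latex
% With n validators, asynchronous (or partially synchronous) BFT consensus can
% guarantee both safety and liveness only while the number of Byzantine
% validators f satisfies
\[
  f \;\le\; \left\lfloor \tfrac{n-1}{3} \right\rfloor
  \qquad\Longleftrightarrow\qquad n \;\ge\; 3f + 1 .
\]
% A finality layer thus trades PoW's "no bound on reorg depth, but assume an
% honest hashrate majority" model for "no reorgs past finality, but assume
% fewer than one third of validators are faulty".
```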
19:14:01 It can if it finalizes the honest chain before the selfishly mined chain is published, preventing re-orging to it 19:14:05 It doesn't solve a miner producing empty blocks. 19:14:12 kayabanerve: to clarify, I added the comment because I think the majority of the opposition to the finality layer is rooted in people not having an understanding of proof-of-stake consensus mechanics and nomenclature 19:14:35 A sufficiently low latency finality layer, on an optimal network, would mean 1% of hash power earns 1% of blocks (again, in an ideal world). 19:14:49 Depending on the speed of finalization, selfish mining might still be possible. 19:14:57 Then you need to explain it. Lots of terms have not been explained. I mean, I would prefer if terms were explained. 19:15:42 Or offer a reference, which kayabanerve declined to provide when I requested :P 19:15:54 Rucknium: Yes. An N-block rolling checkpoint works if you have no offline nodes and propagation is instant. 19:16:02 Who here has 100% uptime on their node? 19:16:07 Does everyone? 19:16:28 I do 19:16:38 I have triple digit uptime it's not a myth guy 19:16:42 Sorry, what reference did I decline to provide? 19:16:56 I have 100% uptime 19:17:14 Are you talking about the reference of my proposed book? I believe I offered to provide that as soon as my CCS is merged and I actually do the work :p 19:17:19 You can check the empirical uptime in the last month of many nodes here: https://xmr.ditatompel.com/remote-nodes 19:17:30 the propagation being instant is the bigger problem than uptime 19:17:46 I agree there's confusion and a lack of understanding. That's why my CCS exists. 19:18:03 Rucknium: a finality layer could serve as a foundation for force-inclusion of transactions in blocks, which would actually prevent empty-block attacks. 19:18:07 I can live with offline nodes potentially following the bad chain although it's not something I like 19:18:11 Also because I'm tired of explaining things in lounge for the umpteenth time. 19:18:19 I can't live with nodes online being split be a reorg right at the boundary 19:18:33 by* 19:20:43 I think a high-hashpower attacker can even get around the force-inclusion of txs by broadcasting high-fee txs and mining them itself. 19:21:17 Hence why a comment I left is: in order to prevent censorship, we have to burn TX fees (at least partially) 😅 19:21:19 Rucknium: also, a finality layer, where a block is finalized, I believe treats any length of reorg the same, short or long. 19:21:26 This is something I've already noted. 19:22:17 wouldnt this just be offset by the block reward 19:22:19 Rucknium, kayabanerve: that's true. 19:22:46 We already "burn" some block reward 19:24:16 According to ArticMine, the penalty already is an effective burn. 19:25:24 Aha, here is what I was referring to: https://libera.monerologs.net/monero-research-lounge/20250813#c557210 19:25:25 > kayabanerve: Any recommended introductory texts on blockchain finality layers? 19:26:17 More comments on this item? 19:26:23 burning more of the fees in exchange for making empty-block attacks less feasible sounds like a good trade. 19:27:05 AFAIK, fee burning would reduce the security budget, so there is a tradeoff. 19:27:19 I'm not sure I agree, you can still fill blocks up quite high without hitting the penalty. With FCMP it will go up even more. 19:27:49 just increase fees to cover the drop in proportion going to the miner?
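For the "penalty already is an effective burn" point above, the existing dynamic-block-size penalty, stated from the standard Monero rules rather than from this meeting:

```latex
% Base reward B, block weight W, median weight M of recent blocks
% (floored at the penalty-free-zone minimum, the ZM value discussed earlier).
% A block with M < W <= 2M only earns
\[
  B' \;=\; B \left( 1 - \Big( \tfrac{W}{M} - 1 \Big)^{2} \right),
\]
% and the quadratic penalty B - B' is simply never minted, which is why it
% behaves like a burn; blocks with W <= M pay no penalty.
```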
19:28:18 boog900: Do you have some elasticity estimates 👀 19:28:26 I think fees could probably do with increasing anyway 19:28:34 The elasticity of tx volume with respect to tx fee 19:29:08 9) PoW mining pool centralization. [Monero Consensus Status](https://moneroconsensus.info/). [Bolstering PoW to be Resistant to 51% Attacks, Censorship, Selfish Mining, and Rented Hashpower](https://github.com/monero-project/research-lab/issues/136). [Mining protocol changes to combat pool centralization](https://github.com/monero-project/research-lab/issues/98). 19:29:11 Raising tx fees is the best way to disincentivize empty blocks. 19:29:35 My contribution to this agenda item: https://github.com/monero-project/research-lab/issues/144 19:29:51 We have a new... right, a new issue from tevador on Publish or Perish 19:30:30 Based on my research, this would be the most efficient PoW-only solution. 19:30:33 A later paper by the same authors suggested PoP wasn't very good. 19:30:57 One of the papers you cite, "Laying Down the Common Metrics" 19:31:26 My referenced paper [6] measured the performance of PoP and it performed the best out of all soft forking solutions. 19:31:39 The profitability threshold for a selfish miner, according to that later paper, is .25 19:32:09 Table II 19:32:18 Zhang & Preneel (2019) "Lay down the common metrics: Evaluating proof-of-work consensus protocols' security." https://doi.org/10.1109/SP.2019.00086 19:32:30 Profitability threshold is only part of the story. 19:32:58 I looked closely at proportional reward splitting. It does a lot better than the others, but requires a hard fork and faster PoW verification, as you note. 19:33:07 Rucknium: Relative performance depended a lot on the assumed γ value (percent of honest HP mining on the attacker's chain). PoP outperformed NC up to 0.4 19:34:29 See Fig. 2, PoP performs well. Outperformed only by NC with gamma = 0, which is unrealistic. 19:34:33 You mean when gamma is zero, Nakamoto Consensus outperforms PoP up to 0.4, right? 19:35:33 ^ Correct. 19:35:45 But gamma is never zero. 19:35:59 Well, I have a paper that says selfish mining can be defeated up to selfish mining hashpower = 0.5 19:36:06 Not sure how reliable this paper is 19:36:26 Ghoreishi & Meybodi (2024) "New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata" https://arxiv.org/abs/2307.00529 19:36:42 Also doesn't require a hard fork. Only change in miner strategy. 19:37:22 Let me post my short thoughts: 19:37:23 Ghoreishi & Meybodi (2024) "New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata" https://arxiv.org/abs/2307.00529 seems to suggest an automatic way for honest miners to build on honest blocks when a selfish miner is active. It says that it can completely eliminate the selfish miner's profit advantage when the selfish miner has less than 50% of hashpower (this claim makes me skeptical). Part of the procedure uses block time stamps, which _we_ assume are actually spoofable, within some limits. I don't know how well their system works without honest time stamps. They cite a paper from 2014 about non-spoofable block time stamps, but I haven't looked at that paper. 19:37:33 I'll read it, but I'm skeptical. 19:37:56 I read that 2014 paper. 19:38:09 The non-spoofable timestamp paper is... right: [4] One Weird Trick to Stop Selfish Miners: Fresh Bitcoins, A Solution for the Honest Miner. https://eprint.iacr.org/2014/007
19:38:11 It's very very impractical. 19:38:59 It needs a trusted authority which generates unspoofable timestamps. 19:40:12 PoP, on the other hand, only needs to measure the relative arrival of competing blocks, so it has zero requirements on clock synchronization or timestamp accuracy. 19:41:59 I'll read the PoP paper. 19:42:18 I'm working on a nodejs simulation to compare various honest/selfish strategies. Can add an arbitrary number of pools, set their strategies and HP, and it also attempts to simulate probabilistic network latency with tunable input constants. 19:42:48 ^ cool, I need to simulate some proposals 19:43:12 It would be nice to double check the numbers from the PoP paper. 19:43:23 I think I'm about 80% of the way to an alpha release. Hoping to post a first rev before next week. 19:43:24 BawdyAnarchist: Try to look at the Markov Decision Process (MDP) methodology of "Lay down the common metrics" and Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting" https://arxiv.org/abs/2503.10185 19:44:20 It seems like MDP can require lots of CPU and RAM, but MRL has that if needed. 19:45:27 In addition to the papers already mentioned I read (half of) Lewis-Pye & Roughgarden (2024) "Permissionless Consensus" https://arxiv.org/abs/2304.14701 19:45:55 Mostly these are impossibility results at a high level. I am also getting some terms precisely defined. 19:47:33 tevador: Is a higher rate of hashpower sampling completely infeasible? I wonder how much CPU time will be spent on a typical block's FCMP verification compared to one RandomX PoW 19:48:30 Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting" is supposed to get the profitability threshold to 0.38, but it does require more PoW hashes per block. 19:50:27 BawdyAnarchist: AFAIK, the papers with MDP methodology did not publish their code. Maybe try to email the authors. Maybe there is off-the-shelf code for MDP that can be adapted to the scenarios. 19:51:48 I'll see what I can find. 19:52:33 More comments on this agenda item? 19:52:38 My proposal has a block time of 60 s, so it already doubles the PoW cost. 19:53:59 The second one, "Hard-forking proposal" 19:55:54 Yes. 19:56:23 We can end the meeting here. Thanks everyone. 19:56:30 PoW is more sensitive to performance than tx verification. For example, SPV-style wallets only verify PoW. 19:57:45 So I don't think a PoW cost of >1 second per block would be great. That would become 10 seconds on a raspberry pi. 21:40:27 I have a concern with the proposed finality layer or POS switch largely surrounding coin distribution. Monero's emission curve was extremely aggressive and IMHO a blemish on the protocol as it favored the few early adopters too greatly. This combined with early mining software being sabotaged to reduce efficiency leads one to wonder what the distribution of coins actually looks like. There's nothing we can do about that now, but switching to POS knowing this seems insane to me. I currently support tevador's proposals to stay with POW, increase fees and research creative solutions to selfish mining. 21:40:29 I support kayabanerve's CCS proposal but I'd like these concerns addressed in the book. 21:47:38 FWIW it is an apparently common misconception. The bad miner only caused a skewed distribution vs hash rate for the first few weeks (assuming the CN people mined it). No extra monero was generated. So AFAIK the high bound on this is very small by now.
21:48:33 So using it to claim PoS would be insane is ill informed at best. There are other much more persuasive arguments to be made. 21:57:32 and 11 years since that happened, plenty of time for them to sell 21:58:05 sorry, didn't realize the channel I was in 22:01:51 Thanks moo, good to know. I wasn’t around back then and have only heard stories. 22:08:09 My first point still stands 22:16:24 The distribution of the coins being completely unknown _is_ a security worry, IMHO. AFAIK, the only thing that can really be known is that X coins haven't been spent since RingCT was introduced ( jeffro256 produced the numbers on that IIRC) and any of the few public valid reserve proofs that exist. Even public view keys aren't very reliable because they don't show outbound spends. 22:16:51 A transparent blockchain would give hints about who owns certain piles of coins. 22:17:33 ~9% of funds in pre-RingCT outputs have not yet been spent 22:18:28 So like 70% of the supply? :P 22:18:48 The remaining enotes have been spent, but that doesn't necessarily mean that they've changed hands. They could have just been churned into RingCT enotes owned by the same people 22:20:44 Do you know how many xmr = 9% of pre-ringct? 22:22:16 For a point of comparison (BTC and BCH spent outputs 2017 - 2022), see my analysis: https://rucknium.me/posts/pre-fork-btc-bch-spending/ 22:30:05 People that wanted increased privacy did indeed send to themselves 23:38:34 I've got a chat bot people can play with too. 23:38:35 Invite @kimi:chaodynami.com and use !request to get whitelisted if you're interested in trying it. My home server doesn't support room versions under 3 or over 11, though, and the bot doesn't actually support e2ee 23:41:06 Dick Veney: Please move to #monero-offtopic:monero.social