09:20:52 Don't have much more time to gather more data (can adjust / tune up this and make charts later) but some interim sweep data from mini. The rest is being generated at the moment https://git.gammaspectra.live/WeebDataHoarder/p2pool-sweep-analysis/src/branch/master/mini.md 09:21:10 We know the total value swept, and fee; data is available as CSV on https://git.gammaspectra.live/WeebDataHoarder/p2pool-sweep-analysis/src/branch/master/data (for main/mini) 09:21:58 picked only sweeps with >= 8 inputs and at least 80% of owned decoys by the same miner across all input rings 09:23:41 FCMP++ estimate is currently fee * 4, without taking into account what fee level the miner picked (assuming they keep the same behavior). If input count > 128, I add an extra fudge, but should get better numbers for the constant cost of splitting a sweep later on 09:26:29 for people sweeping even 10-20 inputs their fees go from 2-3% to 10-15% of the mined value 09:28:02 I'll post the full output of the program later today, my RPC full node is quite slow :) 14:39:53 Yes, fees are in my latest proposal > Have we gotten final values for fees for given byte size for FCMP++? Otherwise I'll assume ~4x byte size and just list that, no "new fee", just the old one 14:42:06 The increase in the minimum penalty free zone is from 300000 to 1000000 bytes. 14:43:04 This deals with most of the transaction weight increase 14:53:27 That's for block sizing mainly, right? I guess it affects the default priority transactions are sent at 14:53:58 I'll update the code after, it's still running :) 15:00:53 MRL meeting in this room in one hour. 15:00:54 In two hours I mean 17:00:23 Meeting time! https://github.com/monero-project/meta/issues/1296 17:00:39 1. Greetings 17:01:18 hello 17:01:40 waves 17:03:11 2. Updates. What is everyone working on?
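[Editor's sketch] The interim estimator described above (fee * 4, plus an extra fudge beyond 128 inputs) could be sketched as follows. This is illustrative Python, not the actual analysis code; the 1.1 fudge factor, function names, and parameters are assumptions.

```python
# Illustrative sketch of the interim FCMP++ sweep fee estimate described
# in the discussion above. The 1.1 "fudge" value is hypothetical; the
# real constant cost of splitting a sweep was still to be measured.
FCMP_FEE_MULTIPLIER = 4   # assume ~4x byte size at the same picked fee level
SPLIT_INPUT_LIMIT = 128   # inputs per tx before a sweep must be split

def estimate_fcmp_sweep_fee(current_fee: float, input_count: int) -> float:
    """Project a sweep's fee under FCMP++ from its current observed fee."""
    estimate = current_fee * FCMP_FEE_MULTIPLIER
    if input_count > SPLIT_INPUT_LIMIT:
        # Extra fudge for the constant cost of splitting into several txs.
        estimate *= 1.1  # hypothetical value
    return estimate

def fee_percent_of_swept(fee: float, swept_value: float) -> float:
    """Fee as a percentage of the total value swept."""
    return 100.0 * fee / swept_value
```

Under these assumptions, a sweep paying 2.5% of its value today projects to 10% under FCMP++, consistent with the 2-3% to 10-15% range quoted for 10-20 input sweeps.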
17:03:21 P2Pool Sweep scan just finished (see mini https://git.gammaspectra.live/WeebDataHoarder/p2pool-sweep-analysis/src/branch/master/mini.md and main on that repo) 17:03:40 ^ working on getting relevant numbers out of that 17:04:46 testing fixes for fcmp++ ooms w/ berman and mayhem 17:05:46 Submitted PRs to mitigate OOMs caused by FCMP++ verification on the stressnet, submitted a PR to fix locking behavior in monerod + a couple other daemon fix PRs upstream from stressnet, reviewed @0xfffc:monero.social 's tx relay v2 + dynamic span + runaway span fix, continuing with some OOM testing and getting GUI binaries prepared hopefully for alpha stressnet v1.5 17:05:56 me: Packet analysis using Kopyciok, Y., Schmid, S., & Victor, F. 2025. "Friend or Foe? Identifying Anomalous Peers in Monero's P2P Network" as an alternative to the peer ID spoofing. So far, the ban list nodes are highly correlated with some of the anomalous packet categories, but none of them have zero percent false positive [... too long, see https://mrelay.p2pool.observer/e/wIqdlMgKbFowdGFy ] 17:06:28 Hi - forgot about the time change - working on subaddr lookahead client side. Ran into a db issue (again) so investigating that first 17:08:27 I think I pinged everyone on GitHub when I forgot to scrub the "@" in front of usernames in the logs 😅 . I'll remember to do that in the future or scrub it with the formatting script. 17:08:52 And I will now deliberately ping @jeffro256:monero.social 17:09:11 3. Coinbase Consolidation Tx Type (https://github.com/monero-project/research-lab/issues/108). 17:10:16 Fun topic. It may affect other miners too, but a fee increase for P2Pool under FCMP++ might make it unprofitable compared to centralized options, even when P2Pool main, for example, reduces outputs with a smaller PPLNS window size 17:10:29 The main reasons to do this are to have reasonable consolidation fees for p2pool miners and to reduce blockchain bloat from p2pool consolidations.
17:11:14 How would this reduce bloat post-fcmp? 17:11:15 From recent crunched data using identified sweeps on-chain (see https://git.gammaspectra.live/WeebDataHoarder/p2pool-sweep-analysis/src/branch/master/mini.md towards the end) it's not as bad for recent outputs on Mini 17:11:42 how 17:11:45 it's still increasing fees from 1-2.5% of swept value to 4-10% 17:12:20 Not to attack p2pool, but I will try to take stock of p2pool: The p2pool share of hashpower has stayed at 5-10 percent for years. p2pool also did little to prevent the Qubic episode. 17:12:26 p2pool has an alternative option, which is to maintain long term state in its sidechain of who has contributed what hashrate across epochs, and increase the min payout 17:12:37 These sweeps had a different reason to allow consolidation, which is that they were easily identifiable regardless. Under FCMP++ that is no longer the case, so the topic is mainly an efficiency issue 17:13:14 as jberman said, the alternative is to keep long term state in p2pool (no longer an ephemeral sidechain) 17:13:44 p2pool's share has stayed low, likely due to the complex setup to mine, even when good programs and GUI integration exist 17:14:46 effectively losing 5-10% of mined value when sweeping as-is at the moment would kill p2pool 17:15:24 long term state would be needed, as people already abandon p2pool while waiting for a share (increasing the payout minimum = increasing the work to get a share at the moment) 17:15:56 So you would set a minimum payout for p2pool if the sidechain had long-term state? Or payout for long time windows, regardless of how much a miner earned in the period? 17:16:49 I'd be keen for the second option; as to make it handle payouts without wallets, the only way it has to pay is ...
finding blocks 17:17:37 Allowing a balance to accumulate past PPLNS windows can open up specific griefing attacks as well 17:18:07 The other option considered, which doesn't require much except from miners, is that they might optionally decide to mine their own sweep transactions in their own found blocks 17:18:27 And do so for other miners, when block space allows for it (empty-ish blocks) 17:18:32 DataHoarder: Such as pool switching to get an advantage? 17:19:12 Such as below the min relay fee? 17:19:20 rucknium: depending how and when miners can choose to get paid out, especially as there's not much other than the pubkeys as a way to identify a miner, they can cause early payouts, or orphan out someone else's share more efficiently 17:19:42 yeah. an example given is zero-fee txs, but these would be kept local/shared out of band 17:19:51 Then other nodes would not have the slow-to-verify txs and block propagation would be slower. That could increase the p2pool orphan rate 17:20:09 It would, yes 17:20:49 Miner outputs are already special (exposed amounts, generally agreed to only being able to have main addresses mine on them, especially under carrot) 17:22:07 Allowing an optional one-time, one-way sweep for miners to efficiently aggregate these might be relevant. Or not. That's why I asked for the topic to be brought up 17:23:16 Even 9 inputs totaling 0.01 XMR end up with a 1.2% fee on the current system, and end up with 5% assuming 4x at the same picked fee level 17:23:42 I wasn't aware of these other ideas to reduce consolidation costs, especially the long-term sidechain state idea. 17:23:51 can you expand on that?
I imagine current p2pool also has a similar griefing vector > Rucknium: depending how and when miners can choose to get paid out, especially as there's not much other than the pubkeys as a way to identify a miner, they can cause early payouts, or orphan out someone else's share more efficiently 17:24:18 jberman: currently on p2pool you get paid, even if the share ends up orphaned 17:25:00 why not keep track of orphans long term as well? 17:25:07 a block found is a block found. state tracking systems further than that that are account-less (aka, don't ask the user to have additional keys to sign requests) end up with those problems 17:25:41 Long term state tracking would need to get explored, regardless. I don't think we have a good level of detail on it currently, nor has sech1 pondered on it 17:26:19 it generally seems like a solvable problem to me 17:26:33 (not easily solvable, but solvable) 17:26:56 "effectively losing 5-10% of mined value when sweeping as is at the moment would kill p2pool" this is purely an issue with the size of the payouts. 17:27:20 It's 10% because payouts are only 10x the size of the fee 17:27:28 it is definitely an issue of payout size, but the only way that p2pool can pay out without an intermediary is via wallets, yes 17:27:52 P2Pool Main has a dynamic PPLNS window and as such it mainly stays at around 3-5% under the FCMP++ increase 17:28:10 Smart contracts on the side chain? :D 17:28:11 centralized pool payouts are like 100x the size of p2pool's.
I don't see why p2pool needs payouts that are minuscule 17:28:14 though small miners don't mine there 17:28:51 centralized pool payouts can already do the zero fee tx (or even normal txs) and include them in their own blocks only, anyhow 17:29:10 they don't need to though 17:29:16 They just have high payouts 17:30:26 they also have an account-based database that can just keep track of the state 17:30:35 the pool takes a % of the %, it's in their own economic interests to charge tx fees even on their own withdrawals 17:30:39 knowledge of addr = you can click and payout. 17:30:59 except on p2pool they are all public, and the only way it can pay out is by making blocks 17:31:12 long term state tracking and fixed min payouts can be gamed by large hashrate miners. It was what I wanted initially for p2pool, but I couldn't figure out how to make it fair for small miners 17:31:45 the problem is p2pool can only pay out in 0.6 XMR chunks - it can't pay more or less per block 17:32:08 it could be larger, depending on tx fees at the moment 17:32:20 it also can't choose not to pay out 17:32:27 it will keep finding blocks 17:32:32 no matter which logic is used, big hashrate miners can game it and switch their mining wallets as soon as they get paid out more than they would get under fair distribution 17:33:23 because if a small hashrate miner doesn't get paid in a block (despite having shares in the PPLNS window), that small piece of the block reward will go to someone else 17:33:50 and someone else can "run away with the money" (change the mining wallet address) 17:34:17 you give a larger proportion of the reward to the smaller miners once they reach min payout. large hashrate miners aren't necessarily incentivized to switch because then p2pool breaks down and isn't usable. it's similar as to why a larger hash rate miner doesn't immediately switch to a centralized pool once they receive a shar [... 
too long, see https://mrelay.p2pool.observer/e/2faElcgKb09NYmcy ] 17:35:36 I tried to simulate "p2pool with memory and min payout". Even when all miners mine fairly, small hashrate miners got underpaid. I did it in 2021 before releasing p2pool (the idea was to have the normal min payout and long term tracking). 17:37:11 min payout logic works and works well when there is a pool wallet. But when there is a strict 0.6 XMR that needs to be paid out with each block - it leads to problems and some miners get overpaid while some others get underpaid 17:37:40 on a "trusted" setup (you expect participants to not disappear and sign when needed) mining to multisig, then allowing multisig payouts when intended can work, but not for trustless p2pool 17:37:44 you have to give a larger proportion of future payouts to the smaller miners who miss prior blocks 17:38:00 Of course my simulations were not very rigorous and "scientific". Maybe something for MRL researchers to look at 17:38:38 p2pool can find multiple blocks within a single window, ya? 17:38:43 yes 17:40:10 p2pool would also need to handle deep reorgs as part of payouts if we need to track "paid out" state across time 17:40:20 jberman I tried several different variants of how to fix the disproportion in payouts. I couldn't find working logic 17:41:09 Then this looks like a path worth a closer look, sech1, to get it workable? 17:41:38 I do think it may be worth a revisit 17:42:25 It's definitely better to produce only as few outputs as needed; alternatively, if found unworkable, a combination of both + maybe making our own out-of-band tx inclusion system might be relevant for P2Pool to allow cheaper sweeps under the existing setup 17:42:56 There was one "working" variant. It's to have a hard cutoff. For example, 0.001 XMR min payout - so no more than 600 payouts per block.
PPLNS window can be 6000 shares, and only miners with 10+ shares get paid with each block 17:43:12 It cuts off small hashrate miners completely, but everyone else gets paid fairly 17:43:49 Yeah, the closer to solo mining you are the better it is :) 17:43:56 So if your hashrate is < 1/600 of pool hashrate, you don't get paid at all and have to switch to a smaller pool 17:44:39 ^ which ends with streaks of several weeks without blocks found and people posting "day XX of no payouts on nano" on reddit and hopping away from p2pool 17:44:50 they can get shares there, but no found blocks to pay out with :) 17:45:29 There are actually a few papers about p2pool with smart contracts. 17:45:34 1/600 = 420 kh/s for p2pool-main at the moment, and 31 kh/s for p2pool-mini 17:46:21 and 4 kh/s for p2pool-nano 17:46:31 so it doesn't really work for 1 kh/s miners 17:46:49 even if they get lucky, luck averages out when you need to have 10+ shares in the PPLNS window to get paid 17:47:09 the issue sech1 raised as well: say a pool gets mined by many small miners and a couple big miners. Big miners get payouts, are done with it, leave. Now small ones opt to get paid out. But effectively they would still only get dust, yet need to find full blocks (where does the rest of the XMR go? :D) 17:47:39 tbh sech1 if we can have long term windows, mini/nano would not be relevant 17:48:00 long term windows means p2pool needs to sync much more and use much more memory 17:48:10 at that level shares could come faster 17:48:41 indeed, that'd also require syncing this data when miners come in 17:49:41 my long-term mini block db (with compact compressed blocks, one full block every 32) is ~28 GiB for its lifetime 17:49:46 18 GiB for main 17:50:11 nano is 1.1 GiB for just a couple of months.
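[Editor's sketch] The hard-cutoff arithmetic above can be checked with a few lines of Python. The pool hashrates are back-derived from the quoted figures (420 kH/s, 31 kH/s, 4 kH/s) and are only illustrative assumptions.

```python
# Sketch of the hard-cutoff variant discussed above: with a 0.001 XMR min
# payout and a ~0.6 XMR block reward, at most 600 outputs fit per block,
# so miners below 1/600 of the pool hashrate never get paid at all.
BLOCK_REWARD_XMR = 0.6
MIN_PAYOUT_XMR = 0.001

def min_viable_hashrate(pool_hashrate_hs: float) -> float:
    """Smallest hashrate that still earns a payout under the hard cutoff."""
    max_payouts = round(BLOCK_REWARD_XMR / MIN_PAYOUT_XMR)  # 600 per block
    return pool_hashrate_hs / max_payouts

# Illustrative pool sizes consistent with the figures quoted above:
print(min_viable_hashrate(252e6))   # p2pool-main ~252 MH/s -> 420 kH/s
print(min_viable_hashrate(18.6e6))  # p2pool-mini ~18.6 MH/s -> 31 kH/s
print(min_viable_hashrate(2.4e6))   # p2pool-nano ~2.4 MH/s -> 4 kH/s
```

This makes the conclusion in the discussion concrete: a 1 kH/s miner sits well below the cutoff even on nano.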
17:51:55 if "balance" is considered as long term state, I don't know if something similar to utreexo would be applicable here, by moving the needs to the miner side 17:52:10 that'd make p2pool mining more complex, though. 17:53:09 Any more about this topic for now? Some things to consider and investigate. 17:53:21 I think looking into long-term state, sech1, would be relevant before concluding this topic ^ but it can be kept for another meeting later on (or closed here if the consensus is no coinbase sweeps, nothing for p2pool) 17:53:56 > long term windows means p2pool needs to sync much more and use much more memory 17:53:56 more sync yes, more memory not necessarily, but storage yes. I think the main goal here is offloading work from the Monero chain onto p2pool for this 17:53:56 I wrote the message before yours rucknium, I mean "concluding" as closing the issue. I don't think we need to discuss further today 17:56:19 4. Transaction volume scaling parameters after FCMP hard fork (https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-11.pdf). Revisit FCMP++ transaction weight function (https://github.com/seraphis-migration/monero/issues/44). 17:57:14 I opened a PR to set weight approximately equal to byte size here: https://github.com/seraphis-migration/monero/pull/232 17:57:30 I uploaded the proposed weights spreadsheet 17:58:08 haven't had the chance to dig into @articmine:monero.social 's latest scaling proposal 17:59:34 With respect to the scaling side, the change is to cap the block, rather than the short term median, to 16x ML 18:00:45 As for the weights: the x-in/2-out size follows a good linear regression against the number of inputs 18:01:26 One can actually calculate the weight with a 2-parameter linear equation 18:02:37 Then have an add-on for additional outputs 18:03:47 The one question I see is: do we want to adjust at all for powers of 2 and verification time jumps?
18:03:55 I am free either way 18:04:34 The adjusted weights do that for up to 8 in 18:04:53 The raw weights do not 18:05:49 Both sets of weights follow the principle of roughly equal to size in the PR 18:06:56 As for fees: the proposal is flat like now, with a 2-in/2-out low fee of about 47 micro XMR 18:07:20 Very close to the current fees 18:07:42 Any questions at this time? 18:10:40 5. Simple Market-Based Monero Fee Proposal (https://github.com/monero-project/research-lab/issues/152). 18:11:14 @articmine:monero.social suggested that this agenda item be split from the previous one. 18:11:30 Thank you 18:14:44 Any discussion of this? 18:15:27 6. FCMP alpha stressnet (https://monero.town/post/6763165). 18:16:23 repeating my update from before: Submitted PRs to mitigate OOMs caused by FCMP++ verification on the stressnet, submitted a PR to fix locking behavior in monerod + a couple other daemon fix PRs upstream from stressnet, reviewed 0xfffc 's tx relay v2 + dynamic span + runaway span fix, continuing with some OOM testing and getting GUI binaries prepared hopefully for alpha stressnet v1.5 18:17:25 I'm hoping v1.5 will be the last release of alpha stressnet, and we can move to beta stressnet next. I'd say the biggest TODO for beta is settling on scaling 18:19:46 Does the 16x cap work? 18:19:47 When a tx has a high number of inputs, FCMPs need to consume more RAM than rings, because each ring is a separate proof, but an FCMP is one single proof for all inputs. Right? 18:20:19 @articmine: need to review the 16x cap in totality with the proposal 18:22:29 Sorry for missing the meeting thus far. My update is just what I've done due to being pinged by jberman re: memory use. 18:22:29 I am against adding transparent TXs for reduced fees for any usecase. 18:22:29 FCMPs do need to simultaneously represent the state of all inputs within an FCMP when verifying, causing higher RAM use. I identified an oversight which brought us down 33%.
jberman identified a problem area that I did the math on and identified could be hundreds of MB. I then wrote a patch to attempt fixing it, which was bad [... too long, see https://mrelay.p2pool.observer/e/m6e1lsgKM19GZGs2 ] 18:23:50 tbc, the latter change is an additional 66% reduction in RAM usage in exchange for ~3x slower verification 18:24:35 It is not being suggested or merged at this time. It was solely an experiment for one specific area jberman identified, and I estimated was likely a problem. 18:24:41 Is that because the CPU is working on raw computations, or is the higher time because of RAM allocations? 18:24:44 What amounts of RAM are we talking about? 18:25:09 800mb vs 1.2gb 18:25:14 But with a nontrivial amount of work (4-20 hours), we could possibly implement the reduced memory without such computational overhead. 18:25:34 initial RAM 1.2GB, after the first change to the oversight 800mb, after this last change that isn't usable because it 3x's verification time 266mb 18:25:38 Well, for the experiment, 266 MB vs 800 MB 18:26:15 And that's just one specific spot which jberman identified as a top contender. As I've said prior, we could spend weeks going through the entire codebase for such improvements 18:27:02 It needs to be an ongoing project 18:27:24 Judging by the ASNs of a lot of reachable nodes, a high share of reachable nodes are probably on VPSes, which usually have low RAM. 18:27:37 @rucknium:monero.social: The experiment was to transform a vector of sparse vectors into a single sparse vector. The issue is, you can no longer index into the outer vector for the elements of a specific vector. You have to iterate the entire sparse vector and filter, as it's no longer organized. 18:27:43 @rucknium: I'm syncing on a 2gb ram system rn 18:28:16 @ofrnxmr:monero.social: Stressnet? 18:28:21 Yes 18:28:29 @mayhem69:matrix.org still no OOM? 18:28:52 800mb change + runaway spans + dynamic span + malloc env variable.
Single core 18:29:04 We could use a HashMap, instead of a non-sparse vector, so we don't pay the cost of empty vectors (16 bytes for a nullptr and a length of 0, but we have thousands of them), but then we have the overhead of the HashMap. That's why it'd be some amount of hours to go through and decide the best way to represent this vector. 18:29:39 The best way is probably as done in the experiment, but with a single-pass filter, instead of the current linear filter, but that'd require rewriting how it's called and the actual verifier code. 18:30:16 @jberman: Mayhem had an OOM w/o the malloc env var, but added it. Not sure if OK now 18:30:31 Hence why it'd be some time to resolve 18:31:43 @jberman: I got an OOM on what seems like FCMP verification, but I was running without MALLOC_ARENA_MAX=1 on that run and didn't have log-level 2 set. I've restarted and it seems to be fine so far for the last 8-10 hours. Still syncing and watching and will provide updates on if it OOM's again 18:32:16 Nice, thank you :) 18:33:12 Thanks everyone for your work on this. 18:34:32 I'll note that this not having runaway memory doesn't mean it won't require more memory, and that this may be fine albeit with a new min mem requirement 18:36:10 Are you using a specific linear algebra crate for FCMP? 18:37:07 what's linear algebra 18:37:22 /s :p 18:38:22 The Generalized Bulletproof crate has its own representation for linear combinations to suit its needs for use in cryptography. I don't believe there's any off-the-shelf library which would be usable. 18:38:34 The provided impl is sane, using sparse representations to keep memory usage low. The non-sparse vector of sparse vectors was for a parameter which should've been small. 18:39:13 It just isn't small here, it's ~1000, and then every input creates ~400 expressions, and most of them reference something with a _large_ index due to the layout of the circuit.
18:40:00 So we had 128*1000*4000 * 16 bytes to denote an empty vector at the first 1000 positions before the final sparse vector at the end for terms for the 1000th entry. 18:40:56 Anything more about stressnet? 18:41:44 nothing on my end 18:42:43 Noting a starting index, instead of padding with empty vectors, doesn't solve the case where both the first commitment and the 1000th commitment are referenced, but it'd reasonably solve the case where solely the 1000th commitment is referenced, which should be _most_ instances of the problem optimized by the experiment. 18:42:52 inb4 the 4-20 hours of work I mentioned 18:43:32 7. Mining pool centralization: Temporary rolling DNS checkpoints (https://github.com/monero-project/monero/issues/10064), Share or Perish (https://github.com/monero-project/research-lab/issues/146), and Lucky transactions (https://github.com/monero-project/research-lab/issues/145). 18:46:34 Nothing on this for now. 18:46:35 No updates since last time on the topic of Monero patches 18:46:49 We can end the meeting here. Thanks everyone. 18:46:50 Qubic is having issues, a broken centralized XMR task server, and some spamming 18:46:58 Though still alive. 18:47:48 Thanks for having the coinbase topic in! If any of you have relevant papers/suggestions you can bring these up in #monero-research-lounge or #p2pool-log (they should both be bridged) 18:50:24 Well Qubic has breached 0.000001 USD. We will see if they breach their all-time low on coinmarketcap
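[Editor's sketch] The vector-of-sparse-vectors experiment discussed in the stressnet section above can be illustrated as follows. This is a simplified Python analogy, not the actual Rust code in the Generalized Bulletproof crate: padding the outer vector with thousands of empty inner vectors costs memory per empty entry, while flattening to one sparse vector with explicit outer indices stores only the non-empty terms but turns O(1) outer-index lookup into a linear filter over everything.

```python
# Simplified illustration (not the actual crate code) of the trade-off
# discussed above: a dense outer vector of sparse inner rows pays for
# every empty row, while a single flattened sparse vector stores only
# the non-empty terms but loses O(1) indexing into the outer vector.

def flatten(vec_of_sparse):
    """Flatten a list of sparse rows [(col, val), ...] into one sparse
    list of (row, col, val) triples, dropping empty rows entirely."""
    flat = []
    for row, terms in enumerate(vec_of_sparse):
        for col, val in terms:
            flat.append((row, col, val))
    return flat

def terms_for_row(flat, row):
    """Recover one row's terms: a linear filter over the whole flat
    vector (the discussion above reported ~3x slower verification for
    the analogous filter in the real code)."""
    return [(col, val) for (r, col, val) in flat if r == row]

# 1000 mostly-empty rows: the dense outer vector stores 1000 entries,
# the flattened form stores only the single non-empty row's terms.
dense = [[] for _ in range(1000)]
dense[999] = [(7, 42), (400, 13)]
flat = flatten(dense)
print(len(flat))                 # 2 stored terms instead of 1000 rows
print(terms_for_row(flat, 999))  # [(7, 42), (400, 13)]
```

The "noting a starting index" idea mentioned above corresponds to keeping the dense outer layout but skipping the leading empty rows, which preserves direct indexing for the common case where only high-index commitments are referenced.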