-
DataHoarder
I don't have much more time to gather data (I can adjust/tune this and make charts later), but here is some interim sweep data from mini. The rest is being generated at the moment
git.gammaspectra.live/WeebDataHoard…-analysis/src/branch/master/mini.md
-
DataHoarder
We know the total value swept and the fee; the data is available as CSV at
git.gammaspectra.live/WeebDataHoard…eep-analysis/src/branch/master/data (for main/mini)
-
DataHoarder
I picked only sweeps with >= 8 inputs and at least 80% of decoys owned by the same miner across all input rings
-
DataHoarder
The FCMP++ estimate is currently fee * 4, without taking into account which fee level the miner picked (assuming they keep the same behavior). If the input count is > 128, I add an extra fudge factor, but I should get better numbers later for the constant cost of splitting a sweep
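-
[A minimal sketch of this estimate, for concreteness. Only the 4x multiplier and the >128-input fudge come from the message above; the names, the split constant, and the fudge shape are illustrative assumptions.]
```rust
// A sketch of the estimate described above. The 4x multiplier and the
// >128-input split come from the message; the constants and the fudge
// shape are illustrative placeholders.
const FCMP_FEE_MULTIPLIER: u64 = 4;
const MAX_INPUTS_PER_TX: u64 = 128;
const SPLIT_OVERHEAD: u64 = 0; // placeholder: constant cost of splitting a sweep (TBD)

fn estimate_fcmp_fee(observed_fee: u64, input_count: u64) -> u64 {
    // Baseline: assume the miner keeps the same fee level, 4x the byte size.
    let base = observed_fee * FCMP_FEE_MULTIPLIER;
    if input_count > MAX_INPUTS_PER_TX {
        // Extra fudge for sweeps that have to be split into multiple txs.
        let extra_txs = input_count.div_ceil(MAX_INPUTS_PER_TX) - 1;
        base + extra_txs * SPLIT_OVERHEAD
    } else {
        base
    }
}

fn main() {
    // 9-input sweep that paid 0.00012 XMR (120_000_000 atomic units):
    // the estimate is simply 4x, i.e. 0.00048 XMR.
    assert_eq!(estimate_fcmp_fee(120_000_000, 9), 480_000_000);
}
```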
-
DataHoarder
for people sweeping even 10-20 inputs, their fees go from 2-3% to 10-15% of the mined value
-
DataHoarder
I'll post the full output of the program later today, my RPC full node is quite slow :)
-
br-m
> <DataHoarder> Have we gotten final values for fees for a given byte size for FCMP++? Otherwise I'll assume ~4x byte size and just list that, no "new fee", just the old one
<articmine> Yes, fees are in my latest proposal
-
br-m
<articmine> The increase in the minimum penalty-free zone is from 300000 to 1000000 bytes
-
br-m
<articmine> This deals with most of the transaction weight increase
-
DataHoarder
That's for block sizing mainly, right? I guess it affects the default priority that transactions are sent at
-
DataHoarder
I'll update the code after, it's still running :)
-
br-m
<rucknium> MRL meeting in this room in one hour.
-
br-m
<rucknium> In two hours I mean
-
br-m
<rucknium> Meeting time!
monero-project/meta #1296
-
br-m
<rucknium> 1. Greetings
-
DataHoarder
hello
-
br-m
<jberman> waves
-
br-m
<rucknium> 2. Updates. What is everyone working on?
-
DataHoarder
^ working on getting relevant numbers out of that (the sweep data posted above)
-
br-m
<ofrnxmr> testing fixes for fcmp++ ooms w/ berman and mayhem
-
br-m
<jberman> Submitted PRs to mitigate OOMs caused by FCMP++ verification on the stressnet, submitted a PR to fix locking behavior in monerod + a couple of other daemon fix PRs upstream from stressnet, reviewed @0xfffc:monero.social's tx relay v2 + dynamic span + runaway span fix, continuing with some OOM testing and getting GUI binaries prepared, hopefully for alpha stressnet v1.5
-
br-m
<rucknium> me: Packet analysis using Kopyciok, Y., Schmid, S., & Victor, F. 2025. "Friend or Foe? Identifying Anomalous Peers in Monero's P2P Network" as an alternative to the peer ID spoofing approach. So far, the ban list nodes are highly correlated with some of the anomalous packet categories, but none of them have a zero percent false positive [... too long, see
mrelay.p2pool.observer/e/wIqdlMgKbFowdGFy ]
-
br-m
<vtnerd> Hi - forgot about time change - working on subaddr lookahead client side. Ran into a db issue (again) so investigating that first
-
br-m
<rucknium> I think I pinged everyone on GitHub when I forgot to scrub the "@" in front of usernames in the logs 😅 . I'll remember to do that in the future or scrub it with the formatting script.
-
br-m
<rucknium> And I will now deliberately ping @jeffro256:monero.social
-
br-m
<rucknium> 3. Coinbase Consolidation Tx Type (
monero-project/research-lab #108).
-
DataHoarder
Fun topic. It may affect other miners too, but a fee increase for P2Pool under FCMP++ might make it unprofitable compared to centralized options, even when P2Pool main, for example, reduces outputs with a smaller PPLNS window size
-
br-m
<rucknium> The main reasons to do this are to have reasonable consolidation fees for p2pool miners and to reduce blockchain bloat from p2pool consolidations.
-
br-m
<ofrnxmr> How would this reduce bloat post-fcmp?
-
DataHoarder
From recently crunched data using identified sweeps on-chain (see
git.gammaspectra.live/WeebDataHoard…-analysis/src/branch/master/mini.md towards the end) it's not as bad for recent outputs on Mini
-
br-m
<ofrnxmr> how
-
DataHoarder
it still increases the fee, relative to swept value, from 1-2.5% to 4-10%
-
br-m
<rucknium> Not to attack p2pool, but I will try to take stock of p2pool: The p2pool share of hashpower has stayed at 5-10 percent for years. p2pool also did little to prevent the Qubic episode.
-
br-m
<jberman> p2pool has an alternative option which is to maintain long term state in its sidechain of who has contributed what hashrate across epochs, and increase min payout
-
DataHoarder
These sweeps had a different reason to allow consolidation, which is that they were easily identifiable regardless. Under FCMP++ that is no longer the case, so the topic is mainly an efficiency issue
-
DataHoarder
as jberman said, the alternative is to keep long term state in p2pool (no longer an ephemeral sidechain)
-
DataHoarder
p2pool's share has stayed low, partly due to the complex setup to mine, even though good programs and GUI integration exist
-
DataHoarder
effectively losing 5-10% of mined value when sweeping, as things stand at the moment, would kill p2pool
-
DataHoarder
long term state would be needed, as people already abandon p2pool when waiting for a share (increasing the payout minimum = increasing the work to get a share at the moment)
-
br-m
<rucknium> So you would set a minimum payout for p2pool if the sidechain had long-term state? Or payout for long time windows, regardless of how much a miner earned in the period?
-
DataHoarder
I'd be keen on the second option; to make it handle payouts without wallets, the only way it has to pay is ... finding blocks
-
DataHoarder
Allowing a balance to accumulate past PPLNS windows can open up specific griefing attacks as well
-
DataHoarder
The other option considered, which doesn't require much except from miners, is that they might optionally decide to mine their own sweep transactions in their own found blocks
-
DataHoarder
And do so for other miners, when block space allows for it (empty-ish blocks)
-
br-m
<rucknium> DataHoarder: Such as pool switching to get an advantage?
-
br-m
<rucknium> Such as below the min relay fee?
-
DataHoarder
rucknium: depending on how and when miners can choose to get paid out, especially as there's not much other than the pubkeys as a way to identify a miner, they can cause early payouts, or orphan out someone else's share more efficiently
-
DataHoarder
yeah. an example given is zero-fee txs, but these would be kept local/shared out of band
-
br-m
<rucknium> Then other nodes would not have the slow-to-verify txs and block propagation would be slower. That could increase the p2pool orphan rate
-
DataHoarder
It would, yes
-
DataHoarder
Miner outputs are already special (exposed amounts; it's generally agreed that only main addresses should be able to mine to them, especially under Carrot)
-
DataHoarder
Allowing an optional, single-time, one-way sweep for miners to efficiently aggregate these might be relevant. Or not. That's why I asked for the topic to be brought up
-
DataHoarder
Even 9 inputs totaling 0.01 XMR end up with a 1.2% fee on the current system, and with 5% assuming 4x at the same picked fee level
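-
[Worked check: 1.2% of 0.01 XMR is 0.00012 XMR; at the same picked fee level, 4 × 0.00012 XMR = 0.00048 XMR, i.e. ~4.8% ≈ 5% of the swept value.]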
-
br-m
<rucknium> I wasn't aware of these other ideas to reduce consolidation costs, especially the long-term sidechain state idea.
-
br-m
> <DataHoarder> rucknium: depending on how and when miners can choose to get paid out, especially as there's not much other than the pubkeys as a way to identify a miner, they can cause early payouts, or orphan out someone else's share more efficiently
<jberman> can you expand on that? I imagine current p2pool also has a similar griefing vector
-
DataHoarder
jberman: currently on p2pool you get paid, even if the share ends up orphaned
-
br-m
<jberman> why not keep track of orphans long term as well?
-
DataHoarder
a block found is a block found. State tracking systems beyond that which are account-less (aka, don't ask the user to have additional keys to sign requests) end up with those problems
-
DataHoarder
Long term state tracking would need to get explored, regardless. I don't think we have a good level of detail on it currently, or whether sech1 has pondered it
-
br-m
<jberman> it generally seems like a solvable problem to me
-
br-m
<jberman> (not easily solvable, but solvable)
-
br-m
<ofrnxmr> "effectively losing 5-10% of mined value when sweeping as is at the moment would kill p2pool" this is purely an issue with the size of the payouts.
-
br-m
<ofrnxmr> Its 10% because payouts are only 10x the size of the fee
-
DataHoarder
it is definitely an issue of payout size, but the only way p2pool can pay out without an intermediary is via wallets, yes
-
DataHoarder
P2Pool Main has a dynamic PPLNS window, and as such it mostly stays at around 3-5% under the FCMP++ increase
-
br-m
<rucknium> Smart contracts on the side chain? :D
-
br-m
<ofrnxmr> centralized pool payouts are like 100x the size of p2pool. I dont see why p2pool needs payouts that are miniscule
-
DataHoarder
though small miners don't mine there
-
DataHoarder
centralized pools can already do the zero-fee tx (or even normal txs) and include them in their own blocks only, anyhow
-
br-m
<ofrnxmr> they dont need to though
-
br-m
<ofrnxmr> They just have high payouts
-
DataHoarder
they also have an account-based database that can just keep track of the state
-
br-m
<ofrnxmr> the pool takes a % of the %, its in their own economic interests to charge tx fees even on their own withdrawals
-
DataHoarder
knowledge of addr = you can click and pay out.
-
DataHoarder
except on p2pool they are all public, and the only way it can pay out is by making blocks
-
sech1
long term state tracking and fixed min payouts can be gamed by large hashrate miners. It was what I wanted initially for p2pool, but I couldn't figure out how to make it fair for small miners
-
sech1
the problem is p2pool can only pay out in 0.6 XMR chunks - it can't pay more or less per block
-
DataHoarder
it could be larger, depending on tx fees at the moment
-
DataHoarder
it also can't choose not to pay out
-
DataHoarder
it will keep finding blocks
-
sech1
no matter which logic is used, big hashrate miners can game it and switch their mining wallets as soon as they get paid out more than they would get under fair distribution
-
sech1
because if a small hashrate miner doesn't get paid in a block (despite having shares in PPLNS window), that small piece of the block reward will go to someone else
-
sech1
and someone else can "run away with the money" (change the mining wallet address)
-
br-m
<jberman> you give a larger proportion of the reward to the smaller miners once they reach min payout. large hashrate miners aren't necessarily incentivized to switch because then p2pool breaks down and isn't usable. it's similar as to why a larger hash rate miner doesn't immediately switch to a centralized pool once they receive a shar [... too long, see
mrelay.p2pool.observer/e/2faElcgKb09NYmcy ]
-
sech1
I tried to simulate "p2pool with memory and min payout". Even when all miners mined fairly, small hashrate miners got underpaid. I did it in 2021 before releasing p2pool (the idea was to have the normal min payout and long term tracking).
-
sech1
min payout logic works, and works well, when there is a pool wallet. But when there is a strict 0.6 XMR that needs to be paid out with each block, it leads to problems: some miners get overpaid while others get underpaid
-
DataHoarder
on a "trusted" setup (where you expect participants not to disappear and to sign when needed), mining to a multisig and then allowing multisig payouts when intended can work, but not for trustless p2pool
-
br-m
<jberman> you have to give a larger proportion of future payouts to the smaller miners who miss prior blocks
-
sech1
Of course my simulations were not very rigorous and "scientific". Maybe something for MRL researchers to look at
-
br-m
<jberman> p2pool can find multiple blocks within a single window, ya?
-
sech1
yes
-
DataHoarder
p2pool would also need to handle deep reorgs as part of payouts if we need to track "paid out" state across time
-
sech1
jberman: I tried several different variants of how to fix the disproportion in payouts. I couldn't find working logic
-
DataHoarder
Then this looks like the path to take a closer look at, sech1, to get it workable?
-
br-m
<jberman> I do think it may be worth a revisit
-
DataHoarder
It's definitely better to produce only as few outputs as needed. Alternatively, if that's found unworkable, a combination of both, plus maybe making our own out-of-band tx inclusion system, might be relevant for P2Pool to allow cheaper sweeps under the existing setup
-
sech1
There was one "working" variant. It's to have a hard cutoff. For example, 0.001 XMR min payout - so no more than 600 payouts per block. PPLNS window can be 6000 shares, and only miners with 10+ shares get paid with each block
-
sech1
It cuts off small hashrate miners completely, but everyone else gets paid fairly
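-
[A minimal sketch of the hard-cutoff variant sech1 describes, using the example numbers from the message (6000-share window, 10-share cutoff, so a 0.6 XMR reward yields the 0.001 XMR min payout); the types and bookkeeping are illustrative.]
```rust
use std::collections::HashMap;

const MIN_SHARES: u64 = 10; // hard cutoff: miners below this get nothing

/// Split `block_reward` (atomic units) among the miners behind the shares in
/// the PPLNS window, paying only miners with at least MIN_SHARES shares.
/// With a 6000-share window and a 0.6 XMR reward, 10 shares is 0.001 XMR.
fn hard_cutoff_payouts(window: &[String], block_reward: u64) -> HashMap<String, u64> {
    let mut shares: HashMap<String, u64> = HashMap::new();
    for miner in window {
        *shares.entry(miner.clone()).or_insert(0) += 1;
    }
    // Drop miners below the cutoff; the survivors' shares re-normalize, so the
    // whole reward is still paid out, just not to small hashrate miners.
    shares.retain(|_, s| *s >= MIN_SHARES);
    let total: u64 = shares.values().sum();
    if total == 0 {
        return HashMap::new();
    }
    shares
        .into_iter()
        .map(|(miner, s)| {
            let amount = (block_reward as u128 * s as u128 / total as u128) as u64;
            (miner, amount)
        })
        .collect()
}
```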
-
DataHoarder
Yeah, the closer to solo mining you are the better it is :)
-
sech1
So if your hashrate is < 1/600 of pool hashrate, you don't get paid at all and have to switch to a smaller pool
-
DataHoarder
^ which ends with streaks of several weeks without blocks found, people posting "day XX of no payouts on nano" on reddit, and people hopping away from p2pool
-
DataHoarder
they can get shares there, but no found blocks to pay out with :)
-
br-m
<rucknium> There are actually a few papers about p2pool with smart contracts.
-
sech1
1/600 = 420 kh/s for p2pool-main at the moment, and 31 kh/s for p2pool-mini
-
sech1
and 4 kh/s for p2pool-nano
-
sech1
so it doesn't really work for 1 kh/s miners
-
sech1
even if they get lucky, luck averages out when you need to have 10+ shares in PPLNS window to get paid
-
DataHoarder
the issue sech1 raised as well: say a pool gets mined by many small miners and a couple of big miners. Big miners get payouts, are done with it, leave. Now small ones opt to get paid out. But effectively they still only get dust, yet need to find full blocks (where does the rest of the XMR go? :D)
-
DataHoarder
tbh sech1 if we can have long term windows, mini/nano would not be relevant
-
sech1
long term windows means p2pool needs to sync much more and use much more memory
-
DataHoarder
at that level shares could come faster
-
DataHoarder
indeed, that'd also require syncing this data when miners come in
-
DataHoarder
my long-term mini block db (with compact compressed blocks, one full block every 32) is ~28 GiB for its lifetime
-
DataHoarder
18 GiB for main
-
DataHoarder
nano is 1.1 GiB for just a couple of months.
-
DataHoarder
if "balance" is considered as long term state, I don't know if something similar to utreexo would be applicable here, by moving the storage needs to the miner side
-
DataHoarder
that'd make p2pool mining more complex, though.
-
br-m
<rucknium> Any more about this topic for now? Some things to consider and investigate.
-
DataHoarder
I think looking into long-term state, sech1, would be relevant before concluding this topic ^ but it can be kept for another meeting later on (or closed here if the consensus is no coinbase sweeps, nothing for p2pool)
-
br-m
<jberman> > long term windows means p2pool needs to sync much more and use much more memory
-
br-m
<jberman> more sync yes, more memory not necessarily but storage yes. I think the main goal here is offloading work from the Monero chain onto p2pool for this
-
DataHoarder
I wrote the message before yours, rucknium; I mean "concluding" as closing the issue. I don't think we need to discuss further today
-
br-m
<rucknium> 4. Transaction volume scaling parameters after FCMP hard fork (
github.com/ArticMine/Monero-Documen…lob/master/MoneroScaling2025-11.pdf). Revisit FCMP++ transaction weight function (
seraphis-migration/monero #44).
-
br-m
<jberman> I opened a PR to set weight approximately equal to byte size here:
seraphis-migration/monero #232
-
br-m
<articmine> I uploaded the proposed weights spreadsheet
-
br-m
<jberman> haven't had the chance to dig into @articmine:monero.social 's latest scaling proposal
-
br-m
<articmine> With respect to the scaling side, the change is to cap the block, rather than the short-term median, at 16x ML
-
br-m
<articmine> As for the weights: the x-in/2-out sizes follow a good linear regression against the number of inputs
-
br-m
<articmine> One can actually calculate the weight with a 2 parameter linear equation
-
br-m
<articmine> Then have an add-on for additional outputs
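-
[The shape of the weight function being described; A, B, C are placeholders for the regression coefficients in the uploaded spreadsheet, not actual proposed values.]
```rust
// Illustrative only: weight as a linear function of the input count, fit on
// x-in/2-out sizes, plus an add-on per output beyond the two assumed by the fit.
const A: u64 = 0; // constant term (placeholder)
const B: u64 = 0; // per-input slope (placeholder)
const C: u64 = 0; // per-additional-output add-on (placeholder)

fn tx_weight(inputs: u64, outputs: u64) -> u64 {
    A + B * inputs + C * outputs.saturating_sub(2)
}
```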
-
br-m
<articmine> The one question I see is do we want to adjust at all for powers of 2 and verification time jumps?
-
br-m
<articmine> I am fine either way
-
br-m
<articmine> The adjusted weights do that for up to 8 in
-
br-m
<articmine> The raw weights do not
-
br-m
<articmine> Both sets of weights follow the principle of weight roughly equal to size in the PR
-
br-m
<articmine> As for fees: the proposal is flat, like now, with a 2-in/2-out low fee of about 47 micro XMR
-
br-m
<articmine> Very close to the current fees
-
br-m
<articmine> Any questions at this time?
-
br-m
<rucknium> 5. Simple Market-Based Monero Fee Proposal (
monero-project/research-lab #152).
-
br-m
<rucknium> @articmine:monero.social suggested that this agenda item be split from the previous one.
-
br-m
<articmine> Thank you
-
br-m
<rucknium> Any discussion of this?
-
br-m
<rucknium> 6. FCMP alpha stressnet (
monero.town/post/6763165).
-
br-m
<jberman> repeating my update from before: Submitted PRs to mitigate OOMs caused by FCMP++ verification on the stressnet, submitted a PR to fix locking behavior in monerod + a couple of other daemon fix PRs upstream from stressnet, reviewed 0xfffc's tx relay v2 + dynamic span + runaway span fix, continuing with some OOM testing and getting GUI binaries prepared, hopefully for alpha stressnet v1.5
-
br-m
<jberman> I'm hoping v1.5 will be the last release of alpha stressnet, and we can move to beta stressnet next. I'd say the biggest TODO for beta is settling on scaling
-
br-m
<articmine> Does the 16x cap work?
-
br-m
<rucknium> When a tx has a high number of inputs, FCMPs need to consume more RAM than rings because each ring is a separate proof, but an FCMP is one single proof for all inputs. Right?
-
br-m
<jberman> @articmine: need to review the 16x cap in totality with the proposal
-
br-m
<kayabanerve:matrix.org> Sorry for missing the meeting thus far. My update is just what I've done due to being pinged by jberman re: memory use.
-
br-m
<kayabanerve:matrix.org> I am against adding transparent TXs for reduced fees for any usecase.
-
br-m
<kayabanerve:matrix.org> FCMPs do need to simultaneously represent the state of all inputs within an FCMP when verifying, causing higher RAM use. I identified an oversight which brought us down 33%. jberman identified a problem area that I did the math on and identified could be hundreds of MB. I then wrote a patch to attempt fixing it, which was bad [... too long, see
mrelay.p2pool.observer/e/m6e1lsgKM19GZGs2 ]
-
br-m
<jberman> tbc, the latter change is an additional 66% reduction in RAM usage in exchange for ~3x slower verification
-
br-m
<kayabanerve:matrix.org> It is not being suggested or merged at this time. It was solely an experiment for one specific area jberman identified, and I estimated was likely a problem.
-
br-m
<rucknium> Is that because the CPU is working on raw computations or is the higher time because of RAM allocations?
-
br-m
<articmine> What amounts of RAM are we talking about?
-
br-m
<ofrnxmr> 800 MB vs 1.2 GB
-
br-m
<kayabanerve:matrix.org> But with a nontrivial amount of work (4-20 hours), we could possibly implement the reduced memory without such computational overhead.
-
br-m
<jberman> initial RAM 1.2 GB; after the first change to the oversight, 800 MB; after this last change, which isn't usable because it 3x's verification time, 266 MB
-
br-m
<kayabanerve:matrix.org> Well, for the experiment, 266 MB vs 800 MB
-
br-m
<kayabanerve:matrix.org> And that's just one specific spot which jberman identified as a top contender. As I've said prior, we could spend weeks going through the entire codebase for such improvements
-
br-m
<articmine> It needs to be an ongoing project
-
br-m
<rucknium> Judging by the ASNs of a lot of reachable nodes, a high share of reachable nodes are probably on VPSes, which usually have low RAM.
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: The experiment was to transform a vector of sparse vectors into a single sparse vector. The issue is, you can no longer index into the outer vector for the elements for a specific vector. You have to iterate the entire sparse vector and filter as it's no longer organized.
-
br-m
<ofrnxmr> @rucknium: I'm syncing on a 2 GB RAM system rn
-
br-m
<rucknium> @ofrnxmr:monero.social: Stressnet?
-
br-m
<ofrnxmr> Yes
-
br-m
<jberman> @mayhem69:matrix.org still no OOM?
-
br-m
<ofrnxmr> 800 MB change + runaway spans + dynamic span + malloc env variable. Single core
-
br-m
<kayabanerve:matrix.org> We could use a HashMap, instead of a non-sparse vector, so we don't pay the cost of empty vectors (16 bytes for a nullptr and a length of 0, but we have thousands of them), but then we have the overhead of the HashMap. That's why it'd be some amount of hours to go through and decide the best way to represent this vector.
-
br-m
<kayabanerve:matrix.org> The best way is probably as done in the experiment, but with a single-pass filter instead of the current linear filter; that'd require rewriting how it's called and the actual verifier code.
-
br-m
<ofrnxmr> @jberman: Mayhem had an OOM w/o malloc env var, but added it. Not sure if OK now
-
br-m
<kayabanerve:matrix.org> Hence why it'd be some time to resolve
-
br-m
<mayhem69:matrix.org> @jberman: I got an OOM on what seems like FCMP verification, but I was running without MALLOC_ARENA_MAX=1 on that run and didn't have log-level 2 set. I've restarted and it seems to be fine so far for the last 8-10 hours. Still syncing and watching, and will provide updates if it OOMs again
-
br-m
<jberman> Nice, thank you :)
-
br-m
<rucknium> Thanks everyone for your work on this.
-
br-m
<kayabanerve:matrix.org> I'll note that this not having runaway memory doesn't mean it won't require more memory, and that this may be fine, albeit with a new minimum memory requirement
-
br-m
<rucknium> Are you using a specific linear algebra crate for FCMP?
-
br-m
<kayabanerve:matrix.org> what's linear algebra
-
br-m
<kayabanerve:matrix.org> /s :p
-
br-m
<kayabanerve:matrix.org> The Generalized Bulletproof crate has its own representation for linear combinations to its needs for use in cryptography. I don't believe there's any off-the-shelf library which would be usable.
-
br-m
<kayabanerve:matrix.org> The provided impl is sane, using sparse representations to keep memory usage low. The non-sparse vector of sparse vectors was for a parameter which should've been small.
-
br-m
<kayabanerve:matrix.org> It just isn't small here, it's ~1000, and then every input creates ~400 expressions, and most of them reference something with a _large_ index due to the layout of the circuit.
-
br-m
<kayabanerve:matrix.org> So we had 128*1000*400 * 16 bytes to denote empty vectors at the first 1000 positions, before the final sparse vector at the end with the terms for the 1000th entry.
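-
[A toy sketch of the two layouts being contrasted; the real types live in the Generalized Bulletproof crate and hold scalars, not u64.]
```rust
// Before: a non-sparse outer vector indexed by commitment. Most inner vectors
// are empty, but each empty vector still costs a Vec header, and there are
// hundreds of thousands of them per proof.
type Terms = Vec<(usize, u64)>;  // sparse (term index, coefficient) pairs
type PerCommitment = Vec<Terms>; // outer index = commitment number

// The experiment: one flat sparse vector tagged with the commitment index.
// Empty commitments cost nothing, but finding one commitment's terms now
// means filtering the whole vector instead of indexing into it.
type Flat = Vec<(usize, usize, u64)>; // (commitment, term index, coefficient)

fn terms_for(flat: &Flat, commitment: usize) -> impl Iterator<Item = (usize, u64)> + '_ {
    flat.iter()
        .filter(move |(c, _, _)| *c == commitment)
        .map(|(_, i, v)| (*i, *v))
}
```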
-
br-m
<rucknium> Anything more about stressnet?
-
br-m
<jberman> nothing on my end
-
br-m
<kayabanerve:matrix.org> Noting a starting index, instead of padding with empty vectors, doesn't solve the case where the first commitment and the 1000th commitment are referenced, but it'd reasonably solve the case where solely the 1000th commitment is referenced, which should be _most_ instances of the problem optimized by the experiment.
-
br-m
<kayabanerve:matrix.org> inb4 the 4-20 hours of work I mentioned
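-
[A sketch of the starting-index idea: keep the indexable outer vector but make entries before `start` implicitly empty; as described above, it helps when only late commitments are referenced, not when both early and late ones are.]
```rust
// Offset representation: a reference to only the 1000th commitment stores one
// inner vector instead of ~1000 empty ones; referencing both the 1st and the
// 1000th still stores the whole span, which is why it only solves most cases.
struct OffsetVec {
    start: usize,                  // first commitment with any terms
    inner: Vec<Vec<(usize, u64)>>, // terms for commitments start, start+1, ...
}

impl OffsetVec {
    fn terms(&self, commitment: usize) -> &[(usize, u64)] {
        commitment
            .checked_sub(self.start)
            .and_then(|i| self.inner.get(i))
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }
}
```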
-
br-m
<rucknium> 7. Mining pool centralization: Temporary rolling DNS checkpoints (
monero-project/monero #10064), Share or Perish (
monero-project/research-lab #146), and Lucky transactions (
monero-project/research-lab #145).
-
br-m
<rucknium> Nothing on this for now.
-
DataHoarder
No updates since last time on the topic of Monero patches
-
br-m
<rucknium> We can end the meeting here. Thanks everyone.
-
DataHoarder
Qubic is having issues: a broken centralized XMR task server, and some spamming
-
DataHoarder
Though still alive.
-
DataHoarder
Thanks for having the coinbase topic in! If any of you have relevant papers/suggestions you can bring these up in #monero-research-lounge or #p2pool-log (they should both be bridged)
-
br-m
<articmine> Well, Qubic has breached 0.000001 USD. We will see if they breach their all-time low on coinmarketcap