-
nioc
anon: full-time on daemon and fcmp is now fully funded!
ccs.getmonero.org/proposals/anon-10302025.html
-
br-m
<gingeropolous> i figured out github pages
fountain5405.github.io/adaptiveblocksizesim
-
br-m
<gingeropolous> guh it bothers me that you click the button and nothing happens. dang bots
-
br-m
<gingeropolous> i mean, it starts, but there's no interface response blah
-
br-m
<monero.arbo:matrix.org> 25 seconds implies that it would take 3 months to sync a year's worth of blockchain data > <@ofrnxmr:xmr.mx> For IBD, 25s for 32 MB blocks isn't disastrous, and while fully synced, it should still be much faster due to fluffyblocks (you already have the txs, in most cases)
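[A quick check of that arithmetic, as a sketch assuming Monero's 2-minute block target:]

```python
# 25 s of processing per block, one block every 2 minutes, for a year of blocks:
SECONDS_PER_BLOCK_SYNC = 25
BLOCKS_PER_YEAR = 365 * 24 * 60 // 2   # ~262,800 blocks at 2 min/block
days = SECONDS_PER_BLOCK_SYNC * BLOCKS_PER_YEAR / 86_400
print(days)  # ~76 days, i.e. roughly the "3 months" quoted above
```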
-
br-m
<kayabanerve:matrix.org> If 32 MB is potentially being considered as extreme, I'm happy it's my suggestion as a limit. Imagine if it was even worse 😱😱😱
-
br-m
<monero.arbo:matrix.org> I don't know that it's extreme for a hard cap but when I see people saying "25 seconds for a block isn't disastrous" I feel like I gotta add some context (:
-
br-m
<kayabanerve:matrix.org> We do currently have a hard cap of what, 100 MB? Where current nodes technically won't sync such blocks (except maybe if transactions are segregated out via certain methods of relaying?)
-
br-m
<kayabanerve:matrix.org> I support evolving the P2P protocol and improving our effective bandwidth. I don't support allowing blocks in excess of our bandwidth.
-
br-m
<monero.arbo:matrix.org> damn, they think 20 threads is an average CPU > <@rucknium> @monero.arbo:matrix.org: janowitz says
monero-project/meta #1303#issuecomment-3592432820
-
br-m
<monero.arbo:matrix.org> somebody remind me, we get better light wallets under Carrot, yes?
-
br-m
<monerobull:matrix.org> yeah
-
br-m
<monerobull:matrix.org> bcs of the custom viewkeys
-
br-m
<monerobull:matrix.org> that allow pre-sorting
-
br-m
<kayabanerve:matrix.org> ... I think I only have four threads on my computer
-
br-m
<monero.arbo:matrix.org> so right now we encourage people generally to run their own node, because light wallets give up privacy to the operator and because remote nodes can't really be trusted, plus privacy issues with them too
-
br-m
<monero.arbo:matrix.org> so under Carrot, do we want to think about changing that recommendation at all? it could give us more room to push the envelope on nodes if we know that people with lower end hardware can safely run light wallets under Carrot
-
br-m
<monero.arbo:matrix.org> @kayabanerve:matrix.org: damn bro somebody buy this guy a computer from the general fund
-
br-m
<monero.arbo:matrix.org> shit I'd mail you an old laptop
-
br-m
<kayabanerve:matrix.org> I had more! But then my motherboard broke and I had a spare, but only of the basement tier
-
br-m
<monero.arbo:matrix.org> compiling for testing sounds like a real chore on something like that
-
br-m
<kayabanerve:matrix.org> Eh, building rocksdb takes like 40 minutes and is my biggest issue when I clear a cache/update a base and can't use a prebuilt instance of it
-
br-m
<kayabanerve:matrix.org> You just have to get really good at working on the next thing while the prior thing's tests build/run
-
br-m
<kayabanerve:matrix.org> It's also a somewhat artificial constraint as I _could_ buy a spare motherboard of the original, higher-end chip I had. I just haven't bothered as it was a slow PC before and is just a slower PC now. That doesn't mean it isn't functional.
-
br-m
<kayabanerve:matrix.org> Also, it's silly to keep buying nicer parts when I keep having reliability issues. I really just have to move to a different manufacturer... Always one more thing to do.
-
br-m
<monero.arbo:matrix.org> damn I feel so gluttonous with my fat CPU now
-
br-m
<monero.arbo:matrix.org> I pretend to justify it by mining
-
br-m
<kayabanerve:matrix.org> Well, you need twenty threads to run a Monero node
-
br-m
<kayabanerve:matrix.org> How's the tier one fiber install coming along?
-
br-m
<kayabanerve:matrix.org> Does not every average house have such?
-
br-m
<monero.arbo:matrix.org> I actually do have gigabit luckily enough but unfortunately my server only has 16 threads so it seems I will have to replace it
-
br-m
<kayabanerve:matrix.org> Lol
-
br-m
<monero.arbo:matrix.org> it's currently enough to run Bitcoin, Litecoin, Electrum, and Monero servers but alas
-
br-m
<kayabanerve:matrix.org> Scalability requires we target average hardware, not your bespoke weak ones
-
br-m
<kayabanerve:matrix.org> What next, the Pi 1?
-
br-m
<articmine> So we stop people from using Monero in order to run nodes on 20-year-old computers with 2-4 GB of RAM and small HDDs.
-
br-m
<articmine> No
-
br-m
<kayabanerve:matrix.org> I don't believe we should support the Pi 1 but the comment that the average PC has twenty threads was absurd
-
br-m
<kayabanerve:matrix.org> We do need to ensure Monero is accessible with minimal trust assumptions, or none at all
-
br-m
<kayabanerve:matrix.org> That includes not requiring home-server-class hardware to run a node
-
br-m
<articmine> @kayabanerve:matrix.org: To a very small number of people paying 100 USD fees?
-
br-m
<articmine> That is not accessible.
-
br-m
<articmine> In any case I know your position and rationale for your proposal
-
br-m
<kayabanerve:matrix.org> I don't believe 32 MB blocks, effecting twice the current Bitcoin network throughput even with your adjustment for relative transaction sizes, will cause 100 USD fees
-
br-m
<kayabanerve:matrix.org> If it does, even if we allowed a larger block size, we'd need fundamental improvements to the node itself for it to not break
-
br-m
<sgp_> Again the only way to truly guarantee fees won't be $100 is with unlimited block capacity. Even if we allow crazy high scaling, there's still a possibility for high fees, especially in the short term
-
br-m
<articmine> @sgp_: This is correct
-
br-m
<sgp_> Monero does not exist in a vacuum. If fees on Monero become high temporarily, people will consider using Bitcoin, Litecoin, Ethereum, Solana, etc. Yeah it doesn't have the privacy but people will use what saves them money, and that's fine. Their transactions are more efficient than Monero transactions so they should cost less
-
br-m
<articmine> Which is why Monero has an adaptive blocksize
-
br-m
<sgp_> No I mean literally unlimited at all times
-
br-m
<articmine> @sgp_: Or people are using Monero instead of Bitcoin
-
br-m
<sgp_> Monero does not and should not allow instantaneous unlimited capacity
-
br-m
<articmine> @sgp_: I don't have the time for endless arguments. I know the small blocker position and that is all I need.
-
br-m
<sgp_> The fact you classify anything other than 100x annual scaling as "small blocker" means your position and this conversation are doomed
-
br-m
<ofrnxmr> Closer to 30 MB atm > <@kayabanerve:matrix.org> We do currently have a hard cap of what, 100 MB? Where current nodes technically won't sync such blocks (except maybe if transactions are segregated out via certain methods of relaying?)
-
br-m
<ofrnxmr> (serialization)
-
br-m
<ofrnxmr> People syncing can test this on stressnet by popping some blocks, flushing the txpool, and increasing the block sync size, i.e. --batch-max-weight=50
-
br-m
<ofrnxmr> Or we can just spam to see if we can get blocks large enough that you can't bootstrap a node
-
br-m
<datahoarder> Less about compute tbh, but privacy-specific networks like Tor have limited bandwidth. Dropping these due to large sync sizes (without aggregated proof sync) would directly affect the usage of Monero in such private environments. > <@articmine> So we stop people from using Monero in order to run nodes on 20-year-old computers with 2-4 GB of RAM and small HDDs.
-
br-m
<rucknium> IMHO, monerod shouldn't be required to support Tor usage. @boog900:monero.social and I argued against it in our MoneroKon talk. You're right that its bandwidth is too limited. And it has been DDoSed many times.
-
br-m
<sgp_> I agree. Nodes completely behind Tor should not be the max limit unfortunately. It's a nice-to-have, not a requirement
-
DataHoarder
Then the minimum realistic supported network should be listed somewhere. Wired connections are getting faster, but a lot of the world is also going mobile. If we consider just doing remote sync (so pruned txs) in mobile conditions, that's not as bad
-
DataHoarder
also seems data caps are back on the menu, how's the data around that for the US? hard caps vs soft caps (slowdown after)
-
DataHoarder
100 Mbit/s was/is a good target as a minimum connection. That's ~8s to grab a 100 MiB block
-
DataHoarder
100 MiB block sent over 2 minutes is 7 Mbit/s
-
br-m
<elongated:matrix.org> If tx fees are 100 USD, fiat would have become irrelevant > <@articmine> To a very small number of people paying 100 USD fees?
-
DataHoarder
let's say a reasonable target is 10s to download tx data for the block. that'd be 83 Mbit/s
-
DataHoarder
100 Mbit down -> 10 Mbit up under cable, but maybe let's just consider semi-symmetric (half as much up as down) in these cases
-
DataHoarder
32 MiB looks reasonable, 100 MiB painful (you'd catch up very slowly, but catch up nonetheless)
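[The arithmetic behind those figures, as a minimal sketch:]

```python
MIB = 1024 * 1024

def mbit_per_s(nbytes: float, seconds: float) -> float:
    return nbytes * 8 / seconds / 1e6   # bytes over a time window -> megabits/second

print(mbit_per_s(100 * MIB, 8))     # ~105 Mbit/s: a 100 MiB block in ~8 s
print(mbit_per_s(100 * MIB, 120))   # ~7 Mbit/s:   same block spread over the full 2 min
print(mbit_per_s(100 * MIB, 10))    # ~84 Mbit/s:  the 10 s download target
print(mbit_per_s(32 * MIB, 10))     # ~27 Mbit/s:  a 32 MiB block in 10 s
```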
-
br-m
<sgp_> 100 Mbps recommended seems reasonable enough to me. I wouldn't want to flood that entire bandwidth consistently though lol
-
br-m
<sgp_> Recommended, not strictly required ofc
-
br-m
<ofrnxmr:xmr.mx> Upload bandwidth matters more than download
-
br-m
<ofrnxmr:xmr.mx> You can only download as fast as one can upload it to you
-
br-m
<ofrnxmr:xmr.mx> (this is referring to IBD or secret blocks). Syncing at tip shouldn't be too rough considering fluffy blocks, and downloading txs "1 at a time" means that you're potentially only downloading ~100 kB at a time
-
DataHoarder
Tip syncing should be fine as long as it's not chugging that continuously, that's why I took into account download time (to ensure you can catch up reasonably)
-
br-m
<ofrnxmr:xmr.mx> Like 100 KiB/s should be fine at tip imo
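[For scale, a rough worst-case tip figure under the 32 MiB limit being discussed, assumptions in the comments:]

```python
# Worst-case tip bandwidth if every tx in a block had to be fetched fresh
# (with fluffy blocks you normally already have most txs from relay).
# Assumes completely full 32 MiB blocks every 2 minutes: an extreme case.
MIB = 1024 * 1024
print(32 * MIB / 120 / 1024, "KiB/s sustained, worst case")  # ~273 KiB/s
```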
-
br-m
<sgp_> Yeah I just mean in general, telling people that the software is "best enjoyed" with a reasonable connection lol
-
br-m
<ofrnxmr:xmr.mx> Yeah. Any catch-up sync is bound to be problematic if upload speeds are low
-
br-m
<ofrnxmr:xmr.mx> Monerod will actually ban peers if it takes over N seconds to send the batch
-
br-m
<kayabanerve:matrix.org> I have high-latency high-bandwidth internet via a series of trained carrier pigeons with microSD cards and a friend at a public library 12.7 miles northwest. Will that still qualify if it has an average of N Mbit/s even if it's really 1 TB three times an hour?
-
br-m
<kayabanerve:matrix.org> ofrnxmr's commentary seems to frown upon my exceptionally private lifestyle :(
-
br-m
<kayabanerve:matrix.org> *I solely had the dumb joke, I don't have an immediate contribution on target bps other than to say Monero isn't just for the northern hemisphere.
-
DataHoarder
Think of the sites that just get several passes per day from slow satellites
-
br-m
<spackle> Having uncapped block size scaling practically guarantees that there is some point in the future where the network will break from scaling. That said, if the network is not ossified (and the known break is far enough in the future to adapt to) it will be able to adapt.
-
br-m
<spackle> Setting a fixed block size maximum assumes that the network will not ossify before a future hard fork. If this assumption is wrong, then Monero will join Bitcoin in failing to scale.
-
br-m
<spackle> Keeping uncapped dynamic scaling assumes the network has not ossified AND that the break from scaling is far enough in the future to be dealt with. If this is wrong, then Monero can break from uncapped scaling.
-
br-m
<spackle> If there is agreement that the network will not ossify and that the break from scaling is known to be far away, then we can set the break to be far enough away to deal with.
-
br-m
<datahoarder> @jeffro256:monero.social: is it possible under Carrot to not bind to amount as part of generating the one time pub in coinbase outputs? That allows multisig participants to pre-sign a possible output ahead of time instead of having to either sign with zero transactions (or predefined block targets, as we can't know which t [... too long, see
mrelay.p2pool.observer/e/34GEqs4KWXlud2NZ ]
-
br-m
<datahoarder> I'm looking into the output aggregation, and whether the aggregation groups can spontaneously sign possible payouts for N blocks ahead of time, without remaking this every 5-10s as new txs become available; that would be vastly more efficient
-
br-m
<datahoarder> It's also the speed bottleneck for p2pool for deriving outputs (needs recalculation every 2-5s) but that's an entirely different topic, as it can be done locally. Spontaneous multisig groups would need to have presigned fallback txs ahead of time
-
br-m
<spackle> Even very slow uncapped block growth is preferable to a hard cap in my opinion. Sure, we can add a safety median that will make it take years to go over 32 MB if we must; but to remove Monero's ability to scale in the far future is a severe change in design philosophy. In my opinion it would be a massive mistake.
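[For intuition on the "safety median" point, a toy simulation under stated assumptions; this is not Monero's actual rule set, which layers a slow long-term median on top of the short-term one:]

```python
# Toy model (NOT Monero's real consensus rules) of a bare adaptive limit:
# each block may weigh up to 2x the median of the last 100 block weights,
# and a greedy miner always fills blocks to that limit.
from collections import deque
from statistics import median

window = deque([300_000] * 100, maxlen=100)  # start near today's ~300 kB blocks
blocks = 0
while 2 * median(window) < 32_000_000:       # grow until the 32 MB mark
    window.append(2 * median(window))        # miner fills the block to the cap
    blocks += 1

# In this toy, the short-term median alone reaches 32 MB in under a day of
# blocks; a slower long-term median is what stretches that to years.
print(f"{blocks} blocks = {blocks * 2 / 60:.0f} hours at 2 min/block")
```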
-
br-m
<datahoarder> Not even talking about having subaddresses on miner outputs, I assume that will not be possible
-
br-m
<datahoarder> @spackle: There's currently a hard cap reachable if attackers decide to malleate txs specifically, the 100 MiB limit
-
br-m
<jeffro256> Not possible unless you want to partially re-introduce the burning bug AFAIK
-
br-m
<datahoarder> You could bind to fixed amounts regardless and still have the burning bug, no?
-
br-m
<datahoarder> Like split 0.6 into 0.3 and 0.3 with same randomness
-
br-m
<jeffro256> You have to wait 120 blocks to spend coinbase anyway. Why is multisig signing every 5-10 seconds?
-
br-m
<datahoarder> Specifically on miner outputs
-
br-m
<datahoarder> @jeffro256: Other miners would be mining there. They need to know they can take the fallback payout even in the case that one of the participants is non-cooperative
-
br-m
<jeffro256> @datahoarder: I'm not sure I know what you mean here.
-
br-m
<datahoarder> These are groups of 16 formed semi-randomly from current P2Pool miners; they decide on an address via multisig and send rewards there instead of to individual outputs
-
br-m
<datahoarder> However, if a block is found and that output has not been pre-signed (chaining) for a future tx build, one of the members can disappear and the output becomes unspendable/lost
-
br-m
<datahoarder> By the time miners are mining (before finding the block) they need to have the output presigned, and the output depends on tx fees, which change constantly as you include new txs
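[A purely hypothetical sketch of that timing constraint; every name below is invented for illustration, not actual P2Pool code:]

```python
# Illustrative sketch of the presign-before-mine flow being described.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    def sign(self, tx: str) -> str:
        return f"sig({self.name},{tx})"  # stand-in for an N-of-N multisig share

def presign_fallback(members: list[Member], coinbase_output: str) -> list[str]:
    # The group must presign the fallback spend *before* anyone mines on the
    # template: the coinbase output changes whenever tx fees or the sidechain
    # state change, so this round repeats every few seconds.
    fallback_tx = f"spend({coinbase_output})"
    return [m.sign(fallback_tx) for m in members]

group = [Member(f"miner{i}") for i in range(16)]
sigs = presign_fallback(group, "group_payout_output")
# Only with all 16 signatures in hand is it safe to mine this template: if a
# member disappears afterwards, the presigned fallback keeps funds spendable.
assert len(sigs) == len(group)
```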
-
br-m
<datahoarder> @jeffro256: The burning bug is still doable within miner outputs, within the same block. As the output index is not in the context, they can split the reward evenly (same amount) and have the same anchor
-
br-m
<jeffro256> Consensus rules for Carrot transactions enforce that one-time addresses within single transactions are unique, so that avenue isn't possible
-
br-m
<datahoarder> As such, not committing to amount would also make that viable? Given there is no specific amount commitment, and input context includes block height
-
br-m
<datahoarder> I'll come back in a few minutes - in front of the code and computer this time
-
br-m
<jeffro256> It allows a burning bug where chain data integrity isn't a given (e.g. HW devices, offline signing, etc). The input context can be faked, but the amount commitment cannot without also failing to make a valid SA/L.
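[In standard Pedersen-commitment notation, a sketch of that binding property; this is not the full Carrot derivation:]

```latex
% Amount commitment: C_a = k_a G + a H. Pedersen commitments are binding:
% C_a cannot be opened to two different amounts, so an offline signer that
% recomputes C_a from the claimed (k_a, a) catches a lied-about amount even
% when the input context (e.g. block height) is forged.
\[
  C_a = k_a G + a H
\]
```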
-
br-m
<ofrnxmr> Isn't it 60 > <@jeffro256> You have to wait 120 blocks to spend coinbase anyway. Why is multisig signing every 5-10 seconds?
-
br-m
<jeffro256> You're right, was confusing blocks and minutes in my head
-
br-m
<datahoarder> It is, still in the future so didn't want to be pedantic :)
-
br-m
<ofrnxmr> @jeffro256: I think we should lengthen it 🙃
-
br-m
<datahoarder> @jeffro256: Right. So they could sign for an output they can't verify amounts for, and this whole set of data needs to be encoded in the one-time address. Given new p2pool sidechain blocks also come regularly and can change the reward regardless, it's not even just about including txs, but regular syncs
-
br-m
<datahoarder> Not having that would explicitly allow Coinbase txs to be impacted (but not others) in these signing contexts; not good
-
br-m
<datahoarder> I wonder how bad doing a broadcast multisig agreement (at least this is less complex, as participants are semi-public and just sign their temporary keys) every 5-10 seconds within the p2pool network would be
-
br-m
<datahoarder> At least they need some shares to be able to produce these, so there's embedded PoW in this concept to prevent spam
-
br-m
<datahoarder> I totally forgot, I am stupid. These are fully custom transactions, so we can effectively commit to something different, as long as we have the presigned txs on the other side. We also ensure these presigned txs are aggregated again or go to chain, so these outputs should not stay unspent for long (so they will never have to prove derivation to a turnstile-style PQ transition)
-
br-m
<datahoarder> all these multisig-produced txs are always within the context of p2pool and verified by all other members (not just the N/N multisig group); payments get out to users later using normal derivations
-
br-m
<datahoarder> That means we'd open ourselves to "the input context can be faked", but these members would always have full context verification, and they'd have to do this only on the coinbase tx (not any further aggregation)
-
DataHoarder
Specifically, that'd involve changing C_a = k_a G + a H to a known amount or value ahead of time, with everything else left unchanged, or have this commit to a specific p2pool multisig context here
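[One possible reading of that, in the same notation; illustrative only, not a worked-out design:]

```latex
% Fix the amount a* ahead of time so C = k G + a* H is computable (and
% presignable) before the final template exists, or replace the amount with
% a value derived from a P2Pool-specific multisig context:
\[
  C = k G + a^{*} H
  \qquad \text{or} \qquad
  C = k G + \mathcal{H}(\text{p2pool context})\, H
\]
```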