01:03:25 anon: full-time on daemon and fcmp is now fully funded! https://ccs.getmonero.org/proposals/anon-10302025.html
04:16:09 i figured out github pages https://fountain5405.github.io/adaptiveblocksizesim/
04:20:04 guh it bothers me that you click the button and nothing happens. dang bots
04:20:28 i mean, it starts, but there's no interface response blah
11:23:57 25 seconds implies that it would take 3 months to sync a year's worth of blockchain data > <@ofrnxmr:xmr.mx> For IBD, 25s for 32mb blocks isn't disastrous, and while fully synced, it should still be much faster due to fluffyblocks (you already have the txs, in most cases)
11:29:59 If 32 MB is potentially being considered as extreme, I'm happy it's my suggestion as a limit. Imagine if it was even worse 😱😱😱
11:32:24 I don't know that it's extreme for a hard cap but when I see people saying "25 seconds for a block isn't disastrous" I feel like I gotta add some context (:
11:36:13 We do currently have a hard cap of what, 100 MB? Where current nodes technically won't sync such blocks (except maybe if transactions are segregated out via certain methods of relaying?)
11:36:49 I support evolving the P2P protocol and improving our effective bandwidth. I don't support allowing blocks in excess of our bandwidth.
11:43:50 damn, they think 20 threads is an average CPU > <@rucknium> @monero.arbo:matrix.org: janowitz says https://github.com/monero-project/meta/issues/1303#issuecomment-3592432820
11:49:10 somebody remind me, we get better light wallets under carrot, yes?
11:55:55 yeah
11:55:58 bcs of the custom viewkeys
11:56:04 that allow pre-sorting
11:57:34 ... I think I only have four threads on my computer
11:57:37 so right now we encourage people generally to run their own node, because light wallets give up privacy to the operator and because remote nodes can't really be trusted, plus privacy issues with them too
11:57:37 so under carrot, do we want to think about changing that recommendation at all? it could give us more room to push the envelope on nodes if we know that people with lower-end hardware can safely run light wallets under carrot
11:57:59 @kayabanerve:matrix.org: damn bro somebody buy this guy a computer from the general fund
11:58:29 shit I'd mail you an old laptop
12:00:36 I had more! But then that motherboard broke and I had a spare, but only of the basement tier
12:01:13 compiling for testing sounds like a real chore on something like that
13:07:17 Eh, building rocksdb takes like 40 minutes and is my biggest issue when I clear a cache/update a base and can't use a prebuilt instance of it
13:07:37 You just have to get really good at working on the next thing while the prior thing's tests build/run
13:16:44 It's also a somewhat artificial constraint as I _could_ buy a spare motherboard of the original, higher-end chip I had. I just haven't bothered as it was a slow PC before and is just a slower PC now. That doesn't mean it isn't functional.
13:18:46 Also, it's silly to keep buying nicer parts when I keep having reliability issues. I really just have to move to a different manufacturer... Always one more thing to do.
13:19:00 damn I feel so gluttonous with my fat CPU now
13:19:36 I pretend to justify it by mining
13:21:54 Well, you need twenty threads to run a Monero node
13:22:29 How's the tier one fiber install coming along?
13:22:46 Does not every average house have such?
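[For reference on the 11:23:57 estimate: a quick back-of-the-envelope sketch, assuming Monero's ~2-minute block target (~720 blocks/day) and the quoted 25 s per 32 MB block. Both constants are assumptions taken from the chat, not measurements.]

```python
# Rough check of the "3 months to sync a year's worth" estimate,
# assuming ~2-minute blocks (~720/day) and 25 s to process each
# 32 MB block. Both figures are assumptions quoted from the chat.
SECONDS_PER_BLOCK = 25
BLOCKS_PER_YEAR = 720 * 365        # ~262,800 blocks

sync_seconds = SECONDS_PER_BLOCK * BLOCKS_PER_YEAR
print(f"{sync_seconds / 86_400:.0f} days")   # ~76 days, i.e. roughly 2.5-3 months
```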
13:24:06 I actually do have gigabit luckily enough but unfortunately my server only has 16 threads so it seems I will have to replace it
13:24:22 Lol
13:24:44 it's currently enough to run Bitcoin, Litecoin, Electrum, and Monero servers but alas
13:26:04 Scalability requires we target average hardware, not your bespoke weak ones
13:26:14 What next, the Pi 1?
14:12:57 So we stop people from using Monero in order to run nodes on 20 year old computers with 2-4 GB of RAM and small HDDs.
14:12:57 No
14:13:38 I don't believe we should support the Pi 1 but the comment that the average PC has twenty threads was absurd
14:14:05 We do need to ensure Monero is accessible with minimal or no trust assumptions
14:14:29 That includes not requiring home-server-class hardware to run a node
14:16:46 @kayabanerve:matrix.org: To a very small number of people paying 100 USD fees?
14:16:46 That is not accessible.
14:17:32 In any case I know your position and rationale for your proposal
14:19:59 I don't believe 32 MB blocks, giving twice the current Bitcoin network throughput even with your adjustment for relative transaction sizes, will cause 100 USD fees
14:20:46 If it does, even if we allowed a larger block size, we'd need fundamental improvements to the node itself for it to not break
14:27:36 Again, the only way to truly guarantee fees won't be $100 is with unlimited block capacity. Even if we allow crazy high scaling, there's still a possibility for high fees, especially in the short term
14:28:07 @sgp_: This is correct
14:28:45 Monero does not exist in a vacuum. If fees on Monero become high temporarily, people will consider using Bitcoin, Litecoin, Ethereum, Solana, etc. Yeah, it doesn't have the privacy, but people will use what saves them money, and that's fine. Their transactions are more efficient than Monero transactions so they should cost less
14:28:54 Which is why Monero has an adaptive blocksize
14:29:11 No I mean literally unlimited at all times
14:29:47 @sgp_: Or people are using Monero instead of Bitcoin
14:31:19 Monero does not and should not allow instantaneous unlimited capacity
14:31:30 @sgp_: I don't have the time for endless arguments. I know the small blocker position and that is all I need.
14:32:12 The fact you classify anything other than 100x annual scaling as "small blocker" means your position and this conversation are doomed
15:20:41 Closer to 30mb atm > <@kayabanerve:matrix.org> We do currently have a hard cap of what, 100 MB? Where current nodes technically won't sync such blocks (except maybe if transactions are segregated out via certain methods of relaying?)
15:23:05 (serialization)
15:25:23 People syncing can test this on stressnet by popping some blocks, flushing the txpool, and increasing the block sync size, i.e. --batch-max-weight=50
15:25:59 Or we can just spam to see if we can get blocks large enough where you can't bootstrap a node
16:29:52 Less about compute tbh, but privacy-specific networks like Tor have limited bandwidth. Dropping these due to large sync sizes (without aggregated proof sync) would directly affect the usage of Monero in such private environments. > <@articmine> So we stop people from using Monero in order to run nodes on 20 year old computers with 2-4 GB of RAM and small HDDs.
16:46:26 IMHO, monerod shouldn't be required to support Tor usage. @boog900:monero.social and I argued against it in our MoneroKon talk. You're right that its bandwidth is too limited. And it has been DDoSed many times.
16:52:24 I agree. Nodes completely behind Tor should not be the max limit unfortunately. It's a nice-to-have, not a requirement
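[On the 14:28:54 point: Monero's "adaptive blocksize" works by penalizing the block reward quadratically once a block's weight exceeds the recent median. A minimal sketch of that penalty, omitting details like the long-term median and the minimum penalty-free weight:]

```python
# Minimal sketch of Monero's block reward penalty, which is what
# "adaptive blocksize" refers to above. Simplified: the long-term
# median and the minimum penalty-free weight are ignored here.
def block_reward(base_reward: float, block_weight: int, median_weight: int) -> float:
    if block_weight <= median_weight:
        return base_reward                   # no penalty at or below the median
    if block_weight > 2 * median_weight:
        raise ValueError("invalid block: weight exceeds 2x the median")
    excess = (block_weight - median_weight) / median_weight
    return base_reward * (1 - excess ** 2)   # quadratic penalty, up to 100%

# A block 50% over the median forfeits 25% of the base reward:
print(block_reward(0.6, 450_000, 300_000))   # 0.45
```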
16:54:03 Then the minimum realistic supported network connection should be listed somewhere. Wired connections are getting faster, but also a lot of the world is going mobile. If we consider just doing remote sync (so pruned txs) in mobile conditions, that's not as bad
16:55:15 also seems data caps are back on the menu, how's the data around that for the US? hard caps vs soft caps (slowdown after)
16:56:52 100 Mbit/s was/is a good target as a minimum connection. That's 8s to grab a 100 MiB block
16:57:33 100 MiB block sent over 2 minutes is 7 Mbit/s
16:57:54 If tx fees are 100 USD, fiat would have become irrelevant > <@articmine> To a very small number of people paying 100 USD fees?
16:58:35 let's say a reasonable target is 10s to download tx data for the block. that'd be 83 Mbit/s
17:00:06 100 Mbit/s down often means only 10 Mbit/s up on cable, but maybe let's just consider semi-symmetric (upload half of download) in these cases
17:01:02 32 MiB looks reasonable, 100 MiB painful (you'd catch up very slowly, but catch up nonetheless)
19:44:34 100 Mbps recommended seems reasonable enough to me. I wouldn't want to flood that entire bandwidth consistently though lol
19:44:51 Recommended, not strictly required ofc
19:45:34 Upload bandwidth matters more than download
19:45:50 You can only download as fast as one can upload it to you
19:47:16 (this is referring to IBD or secret blocks). Syncing at tip shouldn't be too rough considering fluffy blocks, and downloading txs "1 at a time" means that you're potentially only downloading ~100kb at a time
19:49:51 Tip syncing should be fine as long as it's not chugging that continuously, that's why I took into account download time (to ensure you can catch up reasonably)
19:55:50 Like 100KiB/s should be fine at tip imo
20:15:31 Yeah I just mean in general, telling people that the software is "best enjoyed" with a reasonable connection lol
20:16:34 Yeah. Any catch-up sync is bound to be problematic if upload speeds are low
20:16:46 Monerod will actually ban peers if it takes over N seconds to send the batch
20:18:43 I have high-latency high-bandwidth internet via a series of trained carrier pigeons with microSD cards and a friend at a public library 12.7 miles northwest. Will that still qualify if it has an average of N Mbit/s even if it's really 1TB three times an hour?
20:19:05 ofrnxmr's commentary seems to frown upon my exceptionally private lifestyle :(
20:20:07 *I solely had the dumb joke, I don't have an immediate contribution on target bps other than to say Monero isn't just for the northern hemisphere.
20:31:22 Think of the sites that just get several passes per day from slow satellites
21:07:05 Having uncapped block size scaling practically guarantees that there is some point in the future where the network will break from scaling. That said, if the network is not ossified (and the known break is far enough in the future to adapt to), it will be able to adapt.
21:08:07 Setting a fixed block size maximum assumes that the network will not ossify before a future hard fork. If this assumption is wrong, then Monero will join Bitcoin in failing to scale.
21:08:50 Keeping uncapped dynamic scaling assumes the network has not ossified AND that the break from scaling is far enough in the future to be dealt with. If this is wrong, then Monero can break from uncapped scaling.
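[The bandwidth figures in the 16:56:52-16:58:35 messages check out; a small sketch of the arithmetic for a 100 MiB block:]

```python
# The bandwidth arithmetic from the 16:56-16:58 messages above,
# for a 100 MiB block.
BLOCK_BITS = 100 * 1024 * 1024 * 8           # 100 MiB in bits

def download_seconds(link_mbit_s: float) -> float:
    """Seconds to fetch one block over a link of the given speed."""
    return BLOCK_BITS / (link_mbit_s * 1_000_000)

def required_mbit_s(target_seconds: float) -> float:
    """Link speed needed to fetch one block within the target time."""
    return BLOCK_BITS / (target_seconds * 1_000_000)

print(download_seconds(100))    # ~8.4 s on a 100 Mbit/s link
print(required_mbit_s(120))     # ~7 Mbit/s to keep pace with 2-minute blocks
print(required_mbit_s(10))      # ~84 Mbit/s (the "83" above) for a 10 s target
```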
21:10:09 If there is agreement that the network will not ossify and that the break from scaling is known to be far away, then we can set the break to be far enough away to deal with.
21:11:35 @jeffro256:monero.social: is it possible under carrot to not bind to amount as part of generating the one time pub in coinbase outputs? That allows multisig participants to pre-sign a possible output ahead of time instead of having to either sign with zero transactions (or predefined block targets, as we can't know which t [... too long, see https://mrelay.p2pool.observer/e/34GEqs4KWXlud2NZ ]
21:12:39 I'm looking into the output aggregation, and if the aggregation groups can spontaneously sign possible payouts for N blocks ahead of time, without remaking this every 5-10s as new txs become available, that will be vastly more efficient
21:13:55 It's also the speed bottleneck for p2pool for deriving outputs (needs recalculation every 2-5s) but that's an entirely different topic, as it can be done locally. Spontaneous multisig groups would need to have presigned fallback txs ahead of time
21:14:34 Even very slow uncapped block growth is preferable to a hard cap in my opinion. Sure, we can add a safety median that will make it take years to go over 32MB if we must; but to remove Monero's ability to scale in the far future is a severe change in design philosophy. In my opinion it would be a massive mistake.
21:15:49 Not even talking about having subaddresses on miner outputs, I assume that will not be possible
21:17:37 @spackle: There's currently a hard cap reachable if attackers decide to malleate txs specifically, the 100 MiB limit
21:18:41 Not possible unless you want to partially re-introduce the burning bug AFAIK
21:19:28 You could bind to fixed amounts regardless and still have the burning bug, no?
21:19:43 Like split 0.6 into 0.3 and 0.3 with same randomness
21:19:48 You have to wait 120 blocks to spend coinbase anyway. Why is multisig signing every 5-10 seconds?
21:19:51 Specifically on miner outputs
21:20:21 @jeffro256: Other miners would be mining there. They need to know they can take the fallback payout even in the case that one of the participants is non-cooperative
21:20:55 @datahoarder: I'm not sure I know what you mean here.
21:22:37 These are groups of 16 formed semi-randomly from current P2Pool miners; they decide on an address via multisig and rewards are sent there instead of to individual outputs
21:22:38 However, if a block is found and that output has not been pre-signed (chaining) for a future tx build, one of the members can disappear and the output becomes unspendable/lost
21:23:39 By the time miners are mining (before finding the block) they need to have the output presigned, and the output depends on tx fees, which change constantly as you include new txs
21:26:54 @jeffro256: Burning bug is still doable within miner outputs, within the same block. As the output index is not in the context, they can split the reward evenly, same amount, and have the same anchor
21:27:57 Consensus rules for Carrot transactions enforce that one-time addresses within single transactions are unique, so that avenue isn't possible
21:30:19 As such, not committing to amount would also make that viable? Given there is no specific amount commitment, and input context includes block height
21:30:35 I'll come back in a few minutes - in front of the code and computer this time
21:37:48 It allows a burning bug where chain data integrity isn't a given (e.g. HW devices, offline signing, etc). The input context can be faked, but the amount commitment cannot without also failing to make a valid SA/L.
21:42:45 Isn't it 60 > <@jeffro256> You have to wait 120 blocks to spend coinbase anyway. Why is multisig signing every 5-10 seconds?
21:59:45 You're right, was confusing blocks and minutes in my head
22:00:27 It is, still in the future so didn't want to be pedantic :)
22:01:53 @jeffro256: I think we should lengthen it 🙃
22:04:37 @jeffro256: Right. So they could sign for an output they can't verify amounts for, and this whole set of data needs to be encoded in the one time address. Given new p2pool sidechain blocks also arrive regularly and can change the reward regardless, it's not even about just including txs, but regular syncs
22:06:52 Not having that would explicitly allow coinbase txs to be impacted (but not others) in these signing contexts, not good
22:09:42 I wonder how bad doing a broadcast multisig agreement (at least this is less complex, as participants are semi-public and just sign their temporary keys) every 5-10 seconds within the p2pool network would be
22:10:09 They need at least some shares to be able to produce these, so there's embedded PoW in this concept to prevent spam
22:25:54 I totally forgot, I am stupid. These are fully custom transactions so we can effectively commit to something different, as long as we have the presigned txs on the other side. We also ensure these presigned txs are aggregated again or go to chain, so these outputs should not stay unspent for long (so they will never have to prove derivation to a turnstile-style PQ transition)
22:27:09 all these multisig-produced txs are always within the context of p2pool and verified by all other members (not just the N/N multisig group); payments get out to users later using normal derivations
22:28:49 That means that we'd open ourselves to "the input context can be faked", but these members would always have full context verification, and they'd have to do this only on the coinbase tx (not any further aggregation)
22:36:00 Specifically, that'd involve changing C_a = k_a G + a H so that the amount a is a known value ahead of time; everything else would be left unchanged, or have this commit to a specific p2pool multisig context here
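[For readers following the 22:36:00 message: C_a = k_a G + a H is a Pedersen commitment, with k_a the blinding mask and a the amount. A toy sketch of its hiding/binding behaviour, written multiplicatively mod a small prime purely for readability; Monero actually uses Ed25519 curve points, and these toy parameters are insecure, illustration only.]

```python
# Toy Pedersen commitment illustrating C_a = k_a*G + a*H from 22:36:00.
# Written multiplicatively mod a prime for readability; Monero uses
# Ed25519 curve points, and these toy parameters are NOT secure.
P = 2**61 - 1    # toy prime modulus (hypothetical parameter)
G = 3            # toy stand-in for the generator G
H = 7            # toy stand-in for the second generator H

def commit(mask: int, amount: int) -> int:
    """C = G^mask * H^amount mod P (additive notation: mask*G + amount*H)."""
    return (pow(G, mask, P) * pow(H, amount, P)) % P

# Hiding: the same amount under different masks gives unrelated commitments.
print(commit(123456789, 42) != commit(987654321, 42))   # True
# Binding: with the mask fixed, the amount cannot change without changing C,
# which is why faking the amount also breaks the SA/L proof mentioned above.
print(commit(123456789, 42) != commit(123456789, 43))   # True
```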