01:10:17 I think at some point, soon, we need to move on without ArcticMine because these kinds of suggestions are wasting everyone's time > <@articmine> 1 GB blocks is 100 Mbps bandwidth. This is appropriate for a hard cap
01:10:33 it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
01:36:46 We have all the way until FCMP and Carrot are upstreamed to decide on this detail
01:37:00 So I disagree with "soon", as it's not really blocking anything
01:38:31 It's not a simple matter of whose opinion prevails, but what facts are brought to the table to support the decisions
01:40:02 What most people agree on is that the hardware and software need to be capable of supporting whatever we go with, and I think there is majority consensus on hard forking to increase scaling should there be breakthroughs in hard/soft limits
05:27:48 <0xfffc> https://news.ycombinator.com/item?id=46072786 time to run the FCMP paper through DeepSeek Math v2 (Math Olympiad gold level) too.
09:11:49 If this is the attitude then leaving the scaling parameters as they are is the simplest and best solution > <@monero.arbo:matrix.org> I think at some point, soon, we need to move on without ArcticMine because these kinds of suggestions are wasting everyone's time
09:17:53 A cap at 1 GB blocks scaled at 1.5x per year is BELOW all of the suggestions I have seen regarding the long term median.
09:17:53 This cap is in conjunction with my existing proposal, with the lower of the two block sizes controlling
09:19:15 By the way, a soft fork with a much lower cap is also part of what I am looking at.
09:24:25 Can someone please explain to me what is so unreasonable about a bandwidth requirement of 100 Mbps when fibre residential connections at 1 Gbps are readily available?
09:24:25 We are talking about a sanity check here, not what the market will likely demand
14:36:38 First, it is not up to the miners. Did you read the latest proposal?
14:37:54 The history of Bitcoin proves my point. There is simply no guarantee and no reason is needed
14:40:08 I have been around cryptocurrency since 2011.
14:40:50 In your view, why hasn't Bitcoin increased its block size?
14:41:01 @articmine: is this the one? https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-11-02.pdf
14:41:21 @articmine: There was a reason back then, to avoid spam
14:42:34 @lm:matrix.baermail.fr: Do you really believe that? I have a bridge to sell you. Will accept XMR
14:43:13 The real question is why was it not changed?
14:43:24 Reason
14:43:44 @articmine: The issue with Bitcoin is that they don't accept change; the Monero community does
14:44:35 There has been a lot of change in Bitcoin
14:45:24 If the community doesn't like the idea of big blocks we will just have a HF to remove them? I don't think preemptively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
14:46:59 actually it would just be a soft fork to remove it, you wouldn't even need full consensus
14:47:17 @boog900: Big blocks have been possible in Monero since its launch
14:47:51 completely avoided my comment again
14:48:07 it's so hard to talk to you
14:48:40 @boog900: you don't even need a soft fork. Someone able to produce 51/100 blocks can reduce it by producing small block templates
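The 51/100 mechanism can be made concrete. A minimal sketch, assuming the simplified rule that the short-term limit tracks the median of the last 100 block weights, floored at the 300 kB penalty-free zone (the real consensus rules also involve a long-term median and the penalty formula):

```rust
// Simplified sketch (not Monero's actual consensus code) of why >50% of blocks can
// pin the dynamic block size: the short-term limit follows the median of the last
// 100 block weights, floored at the 300 kB penalty-free zone.
const PENALTY_FREE_ZONE: u64 = 300_000; // bytes
const STM_WINDOW: usize = 100;          // short-term median window, in blocks

fn effective_median(mut recent_weights: Vec<u64>) -> u64 {
    recent_weights.sort_unstable();
    let median = recent_weights[recent_weights.len() / 2];
    median.max(PENALTY_FREE_ZONE)
}

fn main() {
    // Organic growth: the last 100 blocks are ~2 MB each.
    let organic = vec![2_000_000u64; STM_WINDOW];
    assert_eq!(effective_median(organic), 2_000_000);

    // A miner producing 51 of the last 100 blocks publishes near-empty templates:
    // the median collapses back to the penalty-free zone and stays there for as
    // long as they keep majority block production.
    let mut attacked = vec![2_000_000u64; 49];
    attacked.extend(std::iter::repeat(1_000u64).take(51));
    assert_eq!(effective_median(attacked), PENALTY_FREE_ZONE);
}
```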
14:49:43 @ofrnxmr:xmr.mx: a soft fork requires 50% of hash power, but yeah, you are completely correct, it just needs 50% of blocks, which is 33% with selfish mining IIRC
14:50:23 @ofrnxmr:xmr.mx: One can do this by miner voting, but that is not enough.
14:50:52 no, I mean, you can disrupt the medians by producing artificially small blocks
14:51:35 Like Qubic, who didn't include normal txs. Each time they produced 51/100 blocks, they effectively reset the short-term scaling
14:53:04 @ofrnxmr:xmr.mx: Yes, if over 50% mine below a certain threshold, the medians will not move beyond that threshold, so one has a cap
14:53:14 That is my point
14:54:29 @ofrnxmr:xmr.mx: They could not reset a median that had not moved
14:55:20 let's say blocks had grown to 2 MB. Qubic mining 51/100 blocks would have reset it back to 300 kB
14:55:52 That is correct
14:56:17 So a community that is against scaling will be able to prevent scaling anyway
14:56:22 ... and they can keep it there
14:56:41 @boog900: Correct, no HF needed
14:56:41 ^ > <@boog900> If the community doesn't like the idea of big blocks we will just have a HF to remove them? I don't think preemptively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
14:57:27 No need for a HF. Just don't mine them
14:57:56 over 51%
14:58:30 Furthermore it is completely reversible
14:58:47 So the argument that we might not be able to remove the slow growth if it is added is not a great argument IMO.
14:59:44 Do you trust the community?
15:00:34 @articmine: I trust the community but not miners that hop around, not understanding that Monero doesn't have ASICs, just for $$$. If someone gives 2x $$$ they will do anything
15:00:38 I trust that whatever happens in 2 years the community can change what they like anyway.
15:01:12 Who is the community?
15:02:35 @articmine: The good question is: does the mining hash power represent the community well?
15:02:56 In the end miners are deciding, devs are only proposing.
15:03:39 @articmine: The combined decision processes of devs, miners, node operators.
15:03:42 Etc
15:04:05 I think the biggest issue with large blocks is medium-to-long-term storage (N GiB / day), IBD (unable to sync blocks over ~30 MB, and definitely over 100 MB), and wallet sync. The latter is probably the biggest issue for p2p cash. High tx throughput will, on its own, be bottlenecked by bandwidth, verification, txpool limits, and again... wallet sync, as wallets also need to parse the txpool
15:05:49 Instead of hashpower deciding it should be the economic majority deciding then. Time to open PoS discussions again so PoS can decide blocks. Then someone can just pay $$$ to get their option chosen instead of going via the Qubic $$$ method
15:06:23 @lm:matrix.baermail.fr: Miners can only decide on so much without majority hash rate; we can change the algorithm so it doesn't adjust so quickly, for example
15:06:45 @boog900: I wanted the STM to be like... 720 blocks
15:07:05 @ofrnxmr:xmr.mx: Passing block headers around via Tor is already slow, and that just includes txids; now imagine passing 1 GB blocks per Tor node multiple ways. At that point we'd better fund Tor itself as it'd be most of the traffic
15:07:34 @boog900: How does this help with a failure point at 100 MB?
15:08:07 Artic, read the room. No one wants massive blocks in a year. Your proposal doesn't make sense. Even kayaba's hardcoded value (which can be changed, I don't buy that just this one change will be permanent) is way easier to support.
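As a rough cross-check of the bandwidth and "N GiB / day" storage points above: with the 2-minute block target there are 720 blocks per day, so full blocks at a given cap imply the sustained relay rates and daily chain growth below. A back-of-envelope sketch, block data only, ignoring relay overhead and pruning:

```rust
// Back-of-envelope: minimum sustained block-relay rate and daily chain growth for a
// given maximum block size, assuming full blocks at the 2-minute target.
fn main() {
    const BLOCK_TIME_S: f64 = 120.0;
    const BLOCKS_PER_DAY: f64 = 86_400.0 / BLOCK_TIME_S; // 720

    for (label, block_bytes) in [
        ("300 kB (penalty-free zone)", 300_000.0_f64),
        ("32 MB", 32_000_000.0),
        ("100 MB", 100_000_000.0),
        ("1 GB", 1_000_000_000.0),
    ] {
        let mbit_s = block_bytes * 8.0 / BLOCK_TIME_S / 1e6; // sustained relay rate
        let gb_day = block_bytes * BLOCKS_PER_DAY / 1e9;     // chain growth per day
        println!("{label:>27}: ~{mbit_s:5.1} Mbit/s sustained, ~{gb_day:6.1} GB/day");
    }
}
```

A 1 GB cap works out to roughly 67 Mbit/s of sustained block data alone (consistent with quoting ~100 Mbps as the requirement, with headroom) and about 720 GB of chain growth per day; 32 MB is roughly 2 Mbit/s and 23 GB per day.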
15:08:12 You might have a 1 Gbit connection but then use Tor. Is Tor something that is to be supported by Monero in the future? Then a cap does make sense
15:08:55 You will then have at most 2-10 Mbit/s when communicating across Tor peers
15:09:04 @articmine: I wasn't talking about that, that was a direct reply to their point
15:09:19 That's 14 minutes to sync 1 GiB
15:09:36 ~80 seconds for 100 MiB blocks
15:09:59 25 s for 32 MiB blocks
15:10:16 Usually, depending on guard nodes and other conditions, this will be way slower
15:11:08 ofc, this gets spread over time as it's transaction data, but they would still start falling behind
15:11:14 For IBD, 25 s for 32 MB blocks isn't disastrous, and while fully synced it should still be much faster due to fluffy blocks (you already have the txs, in most cases)
15:11:46 @ofrnxmr:xmr.mx: and that's spread over time (tx pool)
15:12:03 not instant when the block is received, but we should account for the bad case of all txs being in the block only
15:12:20 Like when Marathon or Qubic doesn't share their txs
15:12:34 at 2 Mbit/s it's ~2 min or so
15:13:17 @datahoarder: This is not bandwidth
15:13:26 @ofrnxmr:xmr.mx: This is obviously an attack vector: large blocks, with an entity that broadcasts a large block that had private txs
15:13:37 @articmine: This is Tor. See above messages.
15:13:52 On Tor that bandwidth ends up shared across a few guard connections
15:14:12 if not only one
15:15:24 Do we have to sync over Tor?
15:16:03 Do we want to support Tor for end users or operators, who might be in areas where they have to use it? Will the community accept removing Tor as an option?
15:16:22 Yes but how
15:17:49 I have seen discussion over the years where both Tor and clearnet are used
15:18:19 That's relaying txs only. But more important is the ability of a user to hide their Monero usage
15:18:38 which, if you end up using clearnet, you can't, even if using centralized VPNs
15:19:00 ... and run a full node
15:20:59 There was discussion at the last MRL meeting about research into methods to aggregate individual proofs, place the aggregate in the block, and be able to throw away the individual tx proofs. That'd allow semi-pruned block and tx sync (where you only sync pre-aggregated blocks) for way lower bandwidth
15:21:26 That'd support bigger blocks (still weighted on initial tx size) that are easier to sync for limited nodes
15:21:59 that's in research, and would require a hardfork if it is indeed possible. So that's when raising this limit higher could be considered
15:23:30 Yes but all of this requires a fixed limit?
15:24:38 1. penalty free zone = kB = 300, 500, 750, 1000
15:24:38 2. STM = blocks = 100, 720
15:24:38 3. max ST block size = MB = 16, 32
15:24:38 4. max LTM block growth = multiplier = 1.2, 1.7, 2, Nielsen's law, Moore's law
15:24:39 5. 300 or 500 [... more lines follow, see https://mrelay.p2pool.observer/e/0f_z9s0KanFROFdx ]
15:24:51 We also have a fixed block time, which this takes into account. So indeed it has to be bounded at the moment; I don't know if 32 MiB is the limit to pick here. That's already 24 GiB per day
15:25:11 @ofrnxmr:xmr.mx: 5 6 7 8 are supposed to be my choices for 1 2 3 4. Matrix changed the numbers on me
15:25:36 Let's go with the assumption that while Tor bandwidth is limited, AI has not consumed all chip production, so you can still grab SSDs and HDDs. FCMP++ afaik should make HDD syncing better, right?
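The sync times quoted above follow directly from the 2-10 Mbit/s Tor figure. A small sketch reproducing them, ignoring protocol overhead and circuit variance:

```rust
// Reproduces the rough per-block sync times quoted above for Tor-like bandwidth.
fn sync_seconds(bytes: u64, mbit_per_s: f64) -> f64 {
    (bytes as f64 * 8.0) / (mbit_per_s * 1_000_000.0)
}

fn main() {
    const MIB: u64 = 1024 * 1024;
    const GIB: u64 = 1024 * MIB;

    // ~10 Mbit/s is the optimistic end of the 2-10 Mbit/s range quoted above.
    println!("1 GiB   @ 10 Mbit/s: {:>5.0} s (~14 min)", sync_seconds(GIB, 10.0));
    println!("100 MiB @ 10 Mbit/s: {:>5.0} s (~80 s)", sync_seconds(100 * MIB, 10.0));
    println!("32 MiB  @ 10 Mbit/s: {:>5.0} s (~25 s)", sync_seconds(32 * MIB, 10.0));
    println!("32 MiB  @  2 Mbit/s: {:>5.0} s (~2 min)", sync_seconds(32 * MIB, 2.0));
}
```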
15:26:14 I think ginger synced on an HDD
15:26:26 If so, I'd say yes
15:27:02 So high weight spread over a long time is less of a problem than the instant peak/max weight for the block that needs to sync right there and then, or you fall behind. And if most of the network is falling behind, you start opening issues like alt blocks being more common. Which causes longer sync times as well :)
15:27:47 @ofrnxmr:xmr.mx: At worst a mixed SSD cache + HDD archive can be used, I guess, if SSDs become limited due to AI bullshit.
15:28:22 HDD syncing is weird, even fast syncing is slow when it is not doing any ring lookups
15:28:48 @boog900: For FCMP as well?
15:29:03 @gingeropolous:monero.social did you sync FCMP on an HDD?
15:29:21 Well FCMP wouldn't change anything that would change that
15:29:42 ^ no ring lookups, just key image lookups + layers, right?
15:30:09 Cuprate is the same FWIW
15:30:41 First HDD synchronization is just inflicting pain
15:30:47 If I had to guess I would guess LMDB is the bottleneck on an HDD
15:30:59 Also TXID lookups IIRC. The daemon explicitly checks if a transaction ID is already in the chain before inserting
15:31:20 What I mean overall: the critical end points for what is an acceptable size are, first, what the software can actually manage (stressnet has shown some issues there, plus existing limits); second, what is a reasonable time to sync the txs making up the blocks over a supported limited connection. If Tor is a supported connection, then [... too long, see https://mrelay.p2pool.observer/e/_MaM980KRlp0MnJ1 ]
15:31:24 @boog900: It is. But why HDD?
15:31:35 Which shouldn't be needed with key image lookups AFAIK, but it does it anyways
15:32:22 @articmine: Because LMDB uses copy-on-write B-trees and reuses previous pages, which leads to pages being spread out across the database. Again, just a guess
15:32:23 @jeffro256: That can be implemented as a mixed mode, I guess, caches on SSD + HDD for archival bulk data
15:32:46 @boog900: On an SSD the read performance makes this fine
15:33:09 I mean why use an HDD over an SSD?
15:33:20 Oh. Cost?
15:33:45 SSDs are increasing in cost, higher capacity is becoming harder due to AI sucking in all chip production.
15:33:54 It might be temporary, it might last 4 years.
15:34:11 So if HDD syncing (for bulk archival) is fine, then that's not a worry
15:34:23 People can have a small SSD + a big HDD for full nodes.
15:35:29 No wonder it is taking forever to sync
15:36:55 Cuprate has a split database, so it would be interesting to test with the tapes on an HDD and LMDB on an SSD
15:37:02 I'll try it later
15:37:05 I have to disclose my conflict of interest
15:37:05 I own a part of a company that sells devices to run Monero nodes on SSD
15:37:49 @boog900: What about having the tapes on tapes? I have some LTO-8 here locally! Just slightly bad seek times...
15:38:43 @datahoarder: Go for broke with punch cards
15:40:06 @datahoarder: The interface is generic, so I mean if you can fit it to the abstraction you can give it a go lol
15:40:25 I still remember the 2 MB limit on the university mainframe in 1979
15:41:50 @boog900: https://github.com/Cuprate/Tapes/blob/638a528635524fc9eb6e945b8def399c660d856f/src/memory.rs#L22
15:42:30 @articmine: I'd hardly argue that Nodo is relevant here. They aren't particularly fast. The SSDs are fast, but the processors aren't. Migrating the mainnet DB to FCMP takes 26 hrs
15:43:09 @boog900: looks like an mmap? So it can write anywhere or just append to it (resize + write to the end)
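On the "tapes on an HDD, LMDB on an SSD" idea: the linked Cuprate/Tapes file exposes a generic backend (currently a memory-mapped file or plain in-memory bytes). A hypothetical sketch of what such an append-only backend abstraction looks like; the trait name and shape here are illustrative, not the actual API:

```rust
// Hypothetical sketch of a generic append-only "tape" backend; not the actual
// Cuprate/Tapes API. The point: bulk block data only needs sequential appends and
// offset reads, an access pattern HDDs handle well, while random-access indexes
// (e.g. LMDB) can stay on an SSD.
use std::io::Result;

trait TapeBackend {
    /// Append bytes and return the offset they were written at.
    fn append(&mut self, bytes: &[u8]) -> Result<u64>;
    /// Read `buf.len()` bytes starting at `offset`.
    fn read_at(&self, offset: u64, buf: &mut [u8]) -> Result<()>;
}

/// In-memory backend (the real crate also ships a memory-mapped file backend).
struct MemTape(Vec<u8>);

impl TapeBackend for MemTape {
    fn append(&mut self, bytes: &[u8]) -> Result<u64> {
        let offset = self.0.len() as u64;
        self.0.extend_from_slice(bytes);
        Ok(offset)
    }
    fn read_at(&self, offset: u64, buf: &mut [u8]) -> Result<()> {
        let start = offset as usize;
        buf.copy_from_slice(&self.0[start..start + buf.len()]);
        Ok(())
    }
}

fn main() -> Result<()> {
    let mut tape = MemTape(Vec::new());
    let offset = tape.append(b"block blob")?;
    let mut buf = [0u8; 10];
    tape.read_at(offset, &mut buf)?;
    assert_eq!(&buf, b"block blob");
    Ok(())
}
```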
15:50:02 @datahoarder: Yeah, the 2 currently supported backends are a memory-mapped file or just in-memory bytes
15:51:19 Anyway, running Monero on HDDs is not an argument for a hard cap hard fork. We need more than that.
15:51:31 @articmine: I was saying the opposite...
15:52:06 SSD
15:52:17 That storage shouldn't be the cap; we seem to be fine. The end points are in the previous message
15:52:23 > <@datahoarder> What I mean overall: the critical end points for what is an acceptable size are, first, what the software can actually manage (stressnet has shown some issues there, plus existing limits); second, what is a reasonable time to sync the txs making up the blocks over a supported limited connection. If Tor is a suppo [... too long, see https://mrelay.p2pool.observer/e/3tHZ980KZngyM19k ]
15:52:23 ^ this one
15:53:06 It's about what the software can actually handle, and then sync speed/time (ignoring storage backend, just P2P stuff like Tor)
15:56:45 Has anyone tried up-to-date hardware?
15:58:14 Yeah. I have been syncing and working on stressnet with an AMD 9900X3D + various backing NVMe + 128 GB RAM
15:58:59 What kind of issues?
15:59:10 and it was suffering there with the big blocks. But I have to assume not everyone has my specs; if sync only, it should be fine on an HDD and a lesser CPU/RAM, at least for what I handled
15:59:45 How was it suffering?
15:59:55 The people who develop the code have raised these issues here; they aren't just FCMP++ specific. So I won't repeat again what they have said here a couple of times. It feels like we are going in circles.
16:00:30 There are serious software issues, I know.
16:00:49 I am talking about the hardware
16:00:51 If the people actually developing the code aren't listened to, who has the authority to say that the piece of software is ready for 32, 64, 100 MB or 1 GiB blocks?
16:01:10 I had issues with the software. Not hardware.
16:01:45 @datahoarder: Thank you
16:02:09 This is what all the "software isn't ready" talk is about :)
16:02:28 Besides the verification time of blocks being quite bad for mining (20+ seconds at times before Monero moves to the tip, and as such miners switch to the new template)
16:02:49 But that is AFAIK workable in different ways
16:02:59 It's my conflict of interest, P2Pool
16:03:02 So the primary issue is software
16:03:43 I was running clearnet. I should test Tor for funsies there. I run some nodes on mainnet on Tor + P2Pool seed nodes on Tor, that's why I brought up that issue as well
16:04:01 Knowing Tor's limitations, which aren't really going away much, though Tor capacity has increased
16:09:26 More things that start breaking as block sizes go up currently
16:10:17 As also said, there's research that could allow aggregate proofs for blocks (so txs would sync pruned yet still be fully verifiable as a set), which could ease the limits needed for constrained P2P.
16:11:58 Ofc, that'd require a hardfork, which then could have any limits on block size (which, regardless of the number, are needed for current technical reasons, as unless a hardfork is had people can perfectly use older Monero releases)
16:12:17 Let us consider the 100 MB issue. I see two options
16:12:17 1) Fix the software problem
16:12:17 2) Place a HF hard cap below 100 MB [... more lines follow, see https://mrelay.p2pool.observer/e/5b2i-M0KWnE0RXRR ]
16:14:06 It's not even 100 MB, the issues start becoming quite bad well below it.
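On the 20+ second verification delays mentioned above: a crude estimate of the mining impact, assuming Poisson block arrivals at the 120 s target, so the chance that a competing block appears during a delay t is roughly 1 - exp(-t/120):

```rust
// Crude orphan-risk estimate: assuming Poisson block arrivals at a 120 s target,
// probability that another block is found during a verification/propagation delay.
fn main() {
    let target_s = 120.0_f64;
    for delay_s in [2.0, 10.0, 20.0, 30.0] {
        let p = 1.0 - (-delay_s / target_s).exp();
        println!("{delay_s:>4.0} s delay -> ~{:.1}% chance of a competing block", p * 100.0);
    }
}
```

At a 20 s delay that is roughly a 15% chance of a competing block, which is why slow tip verification makes alt blocks more common and hurts miners until they switch to the new template.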
16:14:09 Arguing over the rates of growth of scaling parameters will get us nowhere
16:14:37 @datahoarder: I know, that is an example
16:14:42 1. takes time, and as said, "unless a hardfork is had people can perfectly use older Monero releases"
16:15:00 it should be worked on. when the implementations are ready, bump the number up.
16:20:38 What I really like about this proposal is that it forces the issue https://github.com/monero-project/research-lab/issues/154
16:21:49 It actually may be necessary, that is the sad part.
16:22:33 ... but it will be controversial.
16:23:05 ... and not just here in this room
16:23:57 In any case I just don't have the time for this.
19:32:40 @ofrnxmr:xmr.mx: yeah the dell580s is HDD
23:31:40 @monero.arbo:matrix.org: janowitz says https://github.com/monero-project/meta/issues/1303#issuecomment-3592432820 > <@monero.arbo:matrix.org> it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
23:31:40 > I am one of the few being fully with ArticMine.
23:39:00 @rucknium: Yeah, those numbers listed work well for clearnet. For usage in more restricted applications it'd end up behind Tor, with way more limited bandwidth. The storage part is still true, aside from the current 2x-4x price increase (quoted data is 2020-2023) https://www.pcgamer.com/hardware/memory/keep-up-to-date-with-the-pc-memory-and-ssd-supply-crisis-as-we-track-prices-and-the-latest-news/
23:39:51 Note that's for chips, not end products, whose prices will rise slowly over years (or maybe not drop as much). It's still reasonable, even when considering HDDs