-
br-m
<monero.arbo:matrix.org> I think at some point, soon, we need to move on without ArticMine because these kinds of suggestions are wasting everyone's time > <@articmine> 1 GB blocks is 100 Mbps bandwidth. This is appropriate for a hard cap
-
br-m
<monero.arbo:matrix.org> it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
-
br-m
<ofrnxmr> We have all the way until fcmp and carrot are upstreamed to decide on this detail
-
br-m
<ofrnxmr> So I disagree with "soon", as it's not really blocking anything
-
br-m
<ofrnxmr> It's not a simple matter of whose opinion prevails, but what facts are brought to the table to support the decisions
-
br-m
<ofrnxmr> What most people agree on is that the hardware and software need to be capable of supporting whatever we go with, and I think there is majority consensus on hard forking to increase scaling should there be breakthroughs in hard/soft limits
-
br-m
<0xfffc>
news.ycombinator.com/item?id=46072786 time to run the fcmp paper through deepseek math v2 (Math Olympiad gold level) too.
-
br-m
<articmine> If this is the attitude then leaving the scaling parameters as they are is the simplest and best solution > <@monero.arbo:matrix.org> I think at some point, soon, we need to move on without ArticMine because these kinds of suggestions are wasting everyone's time
-
br-m
<articmine> A cap at 1 GB blocks scaled at 1.5x per year is BELOW all of the suggestions I have seen regarding the long term median.
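For scale, a 1.5x-per-year trajectory looks roughly like this (a minimal sketch, assuming growth starts from the current 300 kB penalty-free zone; that starting point, and the interaction with the existing medians, are assumptions, not part of the proposal text):

```python
# Sketch only: years for a cap growing 1.5x per year to reach various sizes,
# assuming it starts from the current 300 kB penalty-free zone (an assumption).
import math

start = 300e3      # bytes, assumed starting value
growth = 1.5       # multiplier per year

for target in (32e6, 100e6, 1e9):
    years = math.log(target / start) / math.log(growth)
    print(f"{target / 1e6:>6.0f} MB reached after ~{years:.1f} years")
# ->   32 MB ~11.5 years, 100 MB ~14.3 years, 1000 MB ~20.0 years
```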
-
br-m
<articmine> This cap is in conjunction with my existing proposal, with the lower of the two block sizes controlling
-
br-m
<articmine> By the way, a soft fork with a much lower cap is also part of what I am looking at.
-
br-m
<articmine> Can someone please explain to me what is so unreasonable about a bandwidth requirement of 100 Mbps when fibre residential connections at 1 Gbps are readily available?
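A rough back-of-the-envelope check of the rate implied by 1 GB blocks (a sketch only; the 8-peer relay fan-out below is an assumption, and fluffy blocks reduce block relay cost when peers already have the txs):

```python
# Sketch: sustained rate implied by 1 GB blocks every 2 minutes.
# The 8-peer relay fan-out is an assumption, not a protocol constant.
block_bytes = 1e9      # 1 GB block
block_time = 120       # seconds between blocks
fanout = 8             # assumed number of peers a node relays full data to

download_mbps = block_bytes * 8 / block_time / 1e6
print(f"download only: ~{download_mbps:.0f} Mbps")                        # ~67 Mbps
print(f"with relay x{fanout}: ~{download_mbps * (1 + fanout):.0f} Mbps")  # ~600 Mbps
```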
-
br-m
<articmine> We are talking about a sanity check here, not what the market will likely demand
-
br-m
<articmine> First, it is not up to the miners. Did you read the latest proposal?
-
br-m
<articmine> The history of Bitcoin proves my point. There is simply no guarantee and no reason is needed
-
br-m
<articmine> I have been around cryptocurrency since 2011.
-
br-m
<boog900> In your view why hasn't bitcoin increased its block size?
-
br-m
<lm:matrix.baermail.fr> @articmine: There was a reason back then, to avoid spam
-
br-m
<articmine> @lm:matrix.baermail.fr: Do you really believe that? I have a bridge to sell you. Will accept XMR
-
br-m
<articmine> The real question is: why was it not changed?
-
br-m
<articmine> Reason
-
br-m
<lm:matrix.baermail.fr> @articmine: The issue with Bitcoin is that they don't accept change; the Monero community does
-
br-m
<articmine> There has been a lot of change in Bitcoin
-
br-m
<boog900> If the community doesn't like the idea of big blocks we will just have a HF to remove them. I don't think pre-emptively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
-
br-m
<boog900> Actually it would just be a soft fork to remove it, you wouldn't even need full consensus
-
br-m
<articmine> @boog900: Big blocks have been possible in Monero since its launch
-
br-m
<boog900> completely avoided my comment again
-
br-m
<boog900> It's so hard to talk to you
-
br-m
<ofrnxmr:xmr.mx> @boog900: you don't even need a soft fork. Someone able to produce 51/100 blocks can reduce it by producing small block templates
-
br-m
<boog900> @ofrnxmr:xmr.mx: soft fork requires 50% of hash power, but yeah you are completely correct it just needs 50% of blocks, which is 33% with selfish mining IIRC
-
br-m
<articmine> @ofrnxmr:xmr.mx: One can do this by miner voting, but that is not enough.
-
br-m
<ofrnxmr:xmr.mx> no, I mean, you can disrupt the medians by producing artificially small blocks
-
br-m
<ofrnxmr:xmr.mx> Like Qubic, who didn't include normal txs. Each time they produced 51/100 blocks, they effectively reset the short term scaling
-
br-m
<articmine> @ofrnxmr:xmr.mx: Yes if over 50% mines below a certain threshold. The medians will not move beyond that threshold so one has a cap
-
br-m
<articmine> That is my point
-
br-m
<articmine> @ofrnxmr:xmr.mx: They could not reset a median that had not moved
-
br-m
<ofrnxmr:xmr.mx> Let's say blocks had grown to 2 MB. Qubic mining 51/100 blocks would have reset it back to 300 kB
-
br-m
<articmine> That is correct
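The reset effect described here falls straight out of the short-term median: with 51 of the last 100 blocks at the 300 kB floor, the median is 300 kB no matter how large the other 49 blocks are. A minimal illustration (the 100-block window and 300 kB floor match the current short-term rules; the 2 MB figure is just ofrnxmr's example):

```python
# Minimal illustration of the "median reset": 51 of the last 100 blocks
# mined at the 300 kB floor pin the short-term median back to 300 kB.
from statistics import median

organic = [2_000_000] * 49     # 49 blocks that had organically grown to ~2 MB
minimal = [300_000] * 51       # 51 minimum-size blocks (e.g. Qubic-style)

window = organic + minimal     # the last 100 blocks
print(median(window))          # 300000 -> short-term scaling is reset
```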
-
br-m
<boog900> So a community that is against scaling will be able to prevent scaling anyway
-
br-m
<articmine> ... and they can keep it there
-
br-m
<articmine> @boog900: Correct, no HF needed
-
br-m
<boog900> ^ > <@boog900> If the community doesn't like the idea of big blocks we will just have a HF to remove them. I don't think pre-emptively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
-
br-m
<articmine> No need for a HF. Just don't mine them
-
br-m
<articmine> over 51%
-
br-m
<articmine> Furthermore it is completely reversible
-
br-m
<boog900> So the argument that we might not be able to remove the slow growth if it is added is not a great argument IMO.
-
br-m
<articmine> Do you trust the community?
-
br-m
<datahoarder> @articmine: I trust the community, but not miners that hop around just for $$$, not understanding why Monero doesn't have ASICs. If someone gives 2x $$$ they will do anything
-
br-m
<boog900> I trust that whatever happens in 2 years the community can change what they like anyway.
-
br-m
<articmine> Who is the community?
-
br-m
<lm:matrix.baermail.fr> @articmine: The good question is whether the mining hash power represents the community well.
-
br-m
<lm:matrix.baermail.fr> In the end miners are deciding, devs are only proposing.
-
br-m
<boog900> @articmine: The combined decision processes of devs, miners, node operators.
-
br-m
<boog900> Etc
-
br-m
<ofrnxmr:xmr.mx> I think the biggest issue with large blocks is medium-long term storage (N GiB / day), IBD (unable to sync blocks over ~30 MB, and definitely over 100 MB), and wallet sync. The latter is probably the biggest issue for p2p cash. High tx throughput will, on its own, be bottlenecked by bandwidth, verification, txpool limits, and again.. wallet sync, as wallets also need to parse the txpool
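Rough numbers behind the "N GiB / day" and wallet-sync concern (a sketch; the block sizes are just values floated in this thread, and it assumes blocks stay full at that size):

```python
# Sketch: data volume a node stores and a wallet must scan per day/year,
# assuming full blocks of the given size and the 2-minute block time.
BLOCKS_PER_DAY = 24 * 60 // 2      # 720 blocks/day

for mib in (1, 16, 32, 100):
    per_day_gib = mib * BLOCKS_PER_DAY / 1024
    per_year_tib = per_day_gib * 365 / 1024
    print(f"{mib:>4} MiB blocks: ~{per_day_gib:5.1f} GiB/day, ~{per_year_tib:4.1f} TiB/year")
# ->  32 MiB blocks: ~22.5 GiB/day, ~8.0 TiB/year
```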
-
br-m
<datahoarder> Instead of hashpower deciding it should be the economic majority deciding then. Time to open PoS discussions again so PoS can decide blocks. Then someone can just pay $$$ to get their option chosen instead of going via the Qubic $$$ method
-
br-m
<boog900> @lm:matrix.baermail.fr: Miners can only decide on so much without majority hash rate, we can change the algorithm so it doesn't adjust so quickly for example
-
br-m
<ofrnxmr:xmr.mx> @boog900: I wanted the STM to be like... 720 blocks
-
br-m
<datahoarder> @ofrnxmr:xmr.mx: Passing block headers around via Tor is already slow, and that just includes txids; now imagine passing 1 GB blocks per Tor node multiple ways. At that point we'd better fund Tor itself, as it'd be most of the traffic
-
br-m
<articmine> @boog900: How does this help with a failure point at 100 MB?
-
br-m
<sgp_> Artic read the room. No one wants massive blocks in a year. Your proposal doesn't make sense. Even kayaba's hardcoded value (which can be changed, I don't buy that just this one change will be permanent) is way easier to support.
-
br-m
<datahoarder> You might have a 1 Gbit connection but then use Tor. Is Tor something that is to be supported by Monero in the future? Then a cap does make sense
-
br-m
<datahoarder> You will then have at most 2-10 Mbit/s when communicating across Tor peers
-
br-m
<boog900> @articmine: I wasn't talking about that, that was a direct reply to their point
-
br-m
<datahoarder> That's 14 minutes to sync 1 GiB
-
br-m
<datahoarder> ~80 seconds for 100 MiB blocks
-
br-m
<datahoarder> 25s for 32 MiB blocks
-
br-m
<datahoarder> Usually depending on guard nodes and other conditions this will be way slower
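Those sync-time figures follow directly from the 2-10 Mbit/s estimate; a quick check at the ~10 Mbit/s upper end (real circuit throughput varies widely, so these are optimistic):

```python
# Sketch: time to pull a block's worth of data over a ~10 Mbit/s Tor circuit
# (upper end of the 2-10 Mbit/s range above; real circuits are often slower).
MIB = 1024 * 1024
rate_bps = 10e6    # ~10 Mbit/s

for label, size_bytes in (("1 GiB", 1024 * MIB), ("100 MiB", 100 * MIB), ("32 MiB", 32 * MIB)):
    seconds = size_bytes * 8 / rate_bps
    print(f"{label:>8}: ~{seconds:.0f} s (~{seconds / 60:.1f} min)")
# -> ~859 s (14.3 min), ~84 s (1.4 min), ~27 s (0.4 min)
```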
-
br-m
<datahoarder> ofc, this gets spread over time as it's transaction data, but they would still start falling behind
-
br-m
<ofrnxmr:xmr.mx> For IBD, 25s for 32 MB blocks isn't disastrous, and while fully synced, it should still be much faster due to fluffy blocks (you already have the txs, in most cases)
-
br-m
<datahoarder> @ofrnxmr:xmr.mx: and that's spread over time (tx pool)
-
br-m
<datahoarder> not instant when the block is received, but should account for that bad case of all txs being in the block only
-
br-m
<ofrnxmr:xmr.mx> Like when Marathon or Qubic doesn't share their txs
-
br-m
<datahoarder> at 2 Mbit/s it's ~2m or so
-
br-m
<articmine> @datahoarder: This is not bandwidth
-
br-m
<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: This is obviously an attack vector: large blocks, with an entity that broadcasts a large block full of txs it kept private
-
br-m
<datahoarder> @articmine: This is tor. see above messages.
-
br-m
<datahoarder> On tor that bandwidth ends up shared across a few guard connections
-
br-m
<datahoarder> if not only one
-
br-m
<articmine> Do we have to sync over Tor?
-
br-m
<datahoarder> Do we want to support Tor for end users or operators, who might be in areas where they have to use it? Will the community accept removing Tor as an option?
-
br-m
<articmine> Yes, but how?
-
br-m
<articmine> I have seen discussion over the years where both Tor and clear net are used
-
br-m
<datahoarder> That's relaying txs only. But more important is the ability of a user to hide their Monero usage
-
br-m
<datahoarder> which if you end up using clearnet, you can't, even if using centralized VPNs
-
br-m
<articmine> ... and run a full node
-
br-m
<datahoarder> There was discussion in the last MRL meeting of research into methods to aggregate individual proofs, then place that in the block, and be able to throw away individual tx proofs. That'd allow semi-pruned block and tx sync (where you only sync pre-aggregated blocks) for way lower bandwidth size
-
br-m
<datahoarder> That'd support bigger blocks (weighted on initial tx size) but be easier to sync for limited nodes
-
br-m
<datahoarder> that's in research, and would require a hardfork if it is indeed possible. So that's when raising this limit higher could be considered
-
br-m
<articmine> Yes but all of this requires a fixed limit?
-
br-m
<ofrnxmr:xmr.mx> 1. penalty free zone = KB = 300, 500, 750, 1000
-
br-m
<ofrnxmr:xmr.mx> 2. STM = blocks = 100, 720
-
br-m
<ofrnxmr:xmr.mx> 3. max ST block size = MB = 16, 32
-
br-m
<ofrnxmr:xmr.mx> 4. max LTM block growth = multiplier = 1.2, 1.7, 2, Nielsen's law, Moore's law
-
br-m
<ofrnxmr:xmr.mx> 5. 300 or 500[... more lines follow, see
mrelay.p2pool.observer/e/0f_z9s0KanFROFdx ]
-
br-m
<datahoarder> We also have a fixed block time, which this takes into account. So indeed it has to be bounded at the moment, I don't know if 32 MiB is the limit to pick here. That's already 24 GiB per day
-
br-m
<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: 5 6 7 8 are supposed to be my choices for 1 2 3 4. Matrix changed the numbers on me
-
br-m
<datahoarder> Let's go with the assumption that while Tor bandwidth is limited, AI has not consumed all chip production, so you can still grab SSDs and HDDs. FCMP++ afaik should make HDD syncing better, right?
-
br-m
<ofrnxmr:xmr.mx> I think ginger synced on an HDD
-
br-m
<ofrnxmr:xmr.mx> If so, id say yes
-
br-m
<datahoarder> So high weight spread over a long time is less of a problem than the instant peak/max weight for the block that needs to sync right there and then, or you fall behind. And if most of the network is falling behind, you start opening issues like alt blocks being more common. Which increases sync times as well :)
-
br-m
<datahoarder> @ofrnxmr:xmr.mx: At worst a mixed SSD cache + HDD archive can be used, I guess, if SSDs become limited due to AI bullshit.
-
br-m
<boog900> HDD syncing is weird, even fast syncing is slow when it is not doing any ring lookups
-
br-m
<ofrnxmr:xmr.mx> @boog900: For fcmp as well?
-
br-m
<ofrnxmr:xmr.mx> @gingeropolous:monero.social did you sync fcmp on hdd?
-
br-m
<boog900> Well FCMP wouldn't change anything that would change that
-
br-m
<datahoarder> ^ no ring lookups, just key image lookups + layers right?
-
br-m
<boog900> Cuprate is the same FWIW
-
br-m
<articmine> First, HDD synchronization is just inflicting pain
-
br-m
<boog900> If I had to guess I would guess LMDB is the bottleneck on a HDD
-
br-m
<jeffro256> Also TXID lookups IIRC. The daemon explicitly checks if a transaction ID is already in the chain before inserting
-
br-m
<datahoarder> What I mean overall is: the end critical points for what is an acceptable size are, first, what the software can actually manage (stressnet has shown some issues there, plus existing limits); second, what is a reasonable time to sync the txs making up the blocks over a supported limited connection. If Tor is a supported connection, then [... too long, see
mrelay.p2pool.observer/e/_MaM980KRlp0MnJ1 ]
-
br-m
<articmine> @boog900: It is. But why HDD?
-
br-m
<jeffro256> Which shouldn't be needed with key image lookups AFAIK but it does it anyways
-
br-m
<boog900> @articmine: Because LMDB uses copy-on-write B-trees and reuses previous pages. Which leads to pages being spread out across the database. Again just a guess
-
br-m
<datahoarder> @jeffro256: That can be implemented as a mixed mode, I guess, caches in SSD + HDD for archival bulk data
-
br-m
<boog900> @boog900: On an SSD the read performance makes this fine
-
br-m
<articmine> I mean why use an HDD over SSD?
-
br-m
<boog900> Oh. Cost?
-
br-m
<datahoarder> SSDs are increasing in cost, higher capacity is becoming harder due to AI sucking in all chip production.
-
br-m
<datahoarder> It might be temporary, it might last 4 years.
-
br-m
<datahoarder> So if HDD syncing (for bulk archival) is fine, then that's not a worry
-
br-m
<datahoarder> People can have small SSD + big HDD for full nodes.
-
br-m
<articmine> No wonder it is taking forever to sync
-
br-m
<boog900> Cuprate has a split database so it would be interesting to test with the tapes on a HDD and have LMDB on an SSD
-
br-m
<boog900> I'll try it later
-
br-m
<articmine> I have to disclose my conflict of interest
-
br-m
<articmine> I own a part of a company that sells devices to run Monero nodes on SSD
-
br-m
<datahoarder> @boog900: What about having the tapes on tapes? I have some LTO-8 here locally! Just slightly bad seek times...
-
br-m
<articmine> @datahoarder: Go for broke with punch cards
-
br-m
<boog900> @datahoarder: The interface is generic so I mean if you can fit it to the abstraction you can give it a go lol
-
br-m
<articmine> I still remember the 2MB limit on the University mainframe in 1979
-
br-m
<ofrnxmr> @articmine: I'd hardly argue that nodo is relevant here. They aren't particularly fast. The SSDs are fast, but the processors aren't. Migrating the mainnet db to fcmp takes 26 hrs
-
br-m
<datahoarder> @boog900: looks like an mmap? so it can write anywhere or just append to it (resize + write to the end)
-
br-m
<boog900> @datahoarder: Yeah the 2 currently supported backends are a memory mapped file or just in memory bytes
-
br-m
<articmine> Anyway running Monero on HDDs is not an argument for a hard cap hard fork. We need more than that.
-
br-m
<datahoarder> @articmine: I was saying the opposite...
-
br-m
<articmine> SSD
-
br-m
<datahoarder> That storage shouldn't be the cap, we seem to be fine. The end points are in the previous message
-
br-m
<datahoarder> > <@datahoarder> What I mean overall is: the end critical points for what is an acceptable size are, first, what the software can actually manage (stressnet has shown some issues there, plus existing limits); second, what is a reasonable time to sync the txs making up the blocks over a supported limited connection. If Tor is a suppo [... too long, see
mrelay.p2pool.observer/e/3tHZ980KZngyM19k ]
-
br-m
<datahoarder> ^ this one
-
br-m
<datahoarder> It's about what the software can actually handle, and then sync speed/time (ignoring storage backend, just P2P stuff like Tor)
-
br-m
<articmine> Has anyone tried up-to-date hardware?
-
br-m
<datahoarder> Yeah. I have been syncing and working on stressnet with an AMD 9900X3D + various backing NVMe + 128GB ram
-
br-m
<articmine> What kind of issues?
-
br-m
<datahoarder> and it was suffering there with the big blocks. But I have to assume not everyone has my specs; for sync only it should be fine on HDD and lesser CPU/RAM, at least for what I handled
-
br-m
<articmine> How was it suffering?
-
br-m
<datahoarder> The people who develop the code have raised these issues here, they aren't just FCMP++ specific. So I won't repeat again what they have said here a couple of times. It feels we are going in circles.
-
br-m
<articmine> There are serious software issues, I know.
-
br-m
<articmine> I am talking about the hardware
-
br-m
<datahoarder> If the people actually developing the code aren't listened to, who has the authority to say that the piece of software is ready for 32, 64, 100 MB or 1 GiB blocks?
-
br-m
<datahoarder> I had issues with the software. Not hardware.
-
br-m
<articmine> @datahoarder: Thank you
-
br-m
<datahoarder> This is what all the talk of "software isn't ready" is about :)
-
br-m
<datahoarder> Besides the verification time of blocks being quite bad for mining (20+ seconds at times before Monero moves to the tip, and as such miners switch to a new template)
-
br-m
<datahoarder> But that is AFAIK workable in different ways
-
br-m
<datahoarder> It's my conflict of interest, P2Pool
-
br-m
<articmine> So the primary issue is software
-
br-m
<datahoarder> I was running clearnet. I should test Tor for funsies there. I run some nodes on mainnet on Tor + P2Pool seed nodes on Tor, that's why I brought up that issue as well
-
br-m
<datahoarder> Knowing Tor limitations that aren't really going away much, though Tor capacity increased
-
br-m
<datahoarder> More things that start breaking as block sizes go up currently
-
br-m
<datahoarder> As also said, there's research that could allow aggregate proofs for blocks (so txs would sync pruned yet still fully verifiable as a set) that can make the limits for constrained P2P be lesser.
-
br-m
<datahoarder> Ofc, that'd require a hardfork, which then could have any limits on block size (which, regardless of number, are needed for current technical reasons, as unless a hardfork happens people can keep using older Monero releases)
-
br-m
<articmine> Let us consider the 100 MB issue. I see two options
-
br-m
<articmine> 1) Fix the software problem
-
br-m
<articmine> 2) Place a HF hard cap below 100 MB[... more lines follow, see
mrelay.p2pool.observer/e/5b2i-M0KWnE0RXRR ]
-
br-m
<datahoarder> It's not even 100 MB, the issues start becoming quite bad well below it.
-
br-m
<articmine> Arguing over the rates of growth of scaling parameters will get us nowhere
-
br-m
<articmine> @datahoarder: I know that is an example
-
br-m
<datahoarder> 1. takes time and, as said, "unless a hardfork happens people can keep using older Monero releases"
-
br-m
<datahoarder> It should be worked on. When the implementations are ready, bump the number up.
-
br-m
<articmine> What I really like about this proposal is that it forces the issue
monero-project/research-lab #154
-
br-m
<articmine> It actually may be necessary, that is the sad part.
-
br-m
<articmine> ... but it will be controversial.
-
br-m
<articmine> ...and not just here in this room
-
br-m
<articmine> In any case I just don't have the time for this.
-
br-m
<gingeropolous> @ofrnxmr:xmr.mx: yeah the dell580s is HDD
-
br-m
<rucknium> @monero.arbo:matrix.org: janowitz says
monero-project/meta #1303#issuecomment-3592432820 > <@monero.arbo:matrix.org> it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
-
br-m
<rucknium> > I am one of the few being fully with ArticMine.
-
br-m
<datahoarder> @rucknium: Yeah, those numbers listed work well for clearnet. For usage in more restricted applications it'd end up behind Tor, with way more limited bandwidth. Storage part is still true, besides the current 2x-4x increase (quoted data is 2020-2023)
pcgamer.com/hardware/memory/keep-up…we-track-prices-and-the-latest-news
-
br-m
<datahoarder> Note that's for chips, not end products, which will end up slowly rising over years (or maybe not dropping as much). It's still reasonable, even if considering HDDs