-
br-m<reject-all:matrix.org> In regards to this BS stuff, Is monero moving towards nodes only on data centers and high-end infrastructure?
-
br-m<elongated:matrix.org> @reject-all:matrix.org: If it’s heavily spammed yes
-
br-m<reject-all:matrix.org> If users are unable to run a node (without access to data centers, high-end equipment) does this threaten the decentralization (and therefore the censorship resistance) of Monero?
-
br-m<ofrnxmr:xmr.mx> @reject-all:matrix.org: Obviously
-
br-m<ofrnxmr:xmr.mx> Or (less rude) by definition, yes.
-
br-m<reject-all:matrix.org> So you would say it's a requirement for the 'common' user with consumer hardware and typical bandwidth limitations to be capable of running a full node, and that otherwise Monero won't be decentralized/permissionless?
-
br-m<reject-all:matrix.org> Is this sufficiently being taken into account with regards to BS/scaling?
-
br-m<reject-all:matrix.org> @ofrnxmr:xmr.mx
-
br-m<ofrnxmr:xmr.mx> i wouldnt say a requirement, but a preference
-
br-m<ofrnxmr:xmr.mx> Currently monero is running quite well on stressnet with 10+ mb blocks, including a couple of old quad-core hdd systems and a single-core 2gb-ram vm
-
br-m<ofrnxmr:xmr.mx> As far as bandwidth, no. I don't think monero aims to support limited bandwidth, though we are working to reduce bandwidth by upwards of 70% from current
-
br-m<ircmouse:matrix.org> New Monero research paper just dropped! "Inside Qubic’s Selfish Mining Campaign on Monero: Evidence, Tactics, and Limits"
-
br-m<ircmouse:matrix.org> Link: arxiv.org/pdf/2512.01437
-
br-m<ircmouse:matrix.org> Credit to @ack-j:matrix.org for pointing it out in the MRL channel. Didn't see it posted here so wanted to share.
-
br-m<datahoarder> ^ I commented about this paper in MRL channel. TL;DR, they had limited/not granular data, estimated similar numbers as we did empirically from granular data
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: How much storage do these “old” systems need to have? To be future proof for 3-4yrs
-
br-m<reject-all:matrix.org> @ofrnxmr:xmr.mx: Interesting, I'll try and get setup with stressnet on my PC.
-
br-m<reject-all:matrix.org> But I do find something a bit unclear:
-
br-m<reject-all:matrix.org> Users able to run full nodes without data centers/high-end equipment is by definition decentralization/censorship resistance.[... more lines follow, see mrelay.p2pool.observer/e/5fvFs88KeDRoR29V ]
-
br-m<ofrnxmr> Depends on what you define as "common user" and "consumer hardware" and "typical bandwidth"
-
br-m<datahoarder> @reject-all:matrix.org: there's a future with aggregated proofs that would allow a mixed version of pruned/full/archival. One where it downloads pruned txs, but each block has an aggregated proof that fully verifies the transactions. Archival nodes would keep the per-transaction full proofs, but they aren't needed for these lighter full verification nodes.
-
br-m<datahoarder> Storage and bandwidth requirements for these would be vastly lower
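[editor's note: an illustrative sketch of that mixed pruned/full/archival model; the type names here are hypothetical assumptions, not Monero's actual data structures]

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the pruned-with-aggregated-proof model described
// above; names are illustrative, not Monero's API.
struct PrunedTx {
    std::vector<uint8_t> body;  // outputs, key images, fee; no per-tx proof
};

struct LightFullBlock {
    std::vector<PrunedTx> txs;       // pruned transaction bodies
    std::vector<uint8_t> agg_proof;  // one aggregated proof verifying all txs
};
// Archival nodes would additionally keep each transaction's full proof;
// the lighter "full verification" nodes above would not need it.
```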
-
br-m<ofrnxmr> tevador's proposal seems to intend to keep up with "consumer hardware" advancements
-
br-m<ofrnxmr> For full/archival nodes
-
br-m<ofrnxmr:xmr.mx> @elongated:matrix.org: future proof with 10mb blocks = 3tb for year 1 :)
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Isn’t the consensus limit 100mb?
-
br-m<elongated:matrix.org> Just assuming some agency has its life mission to spam xmr 😅
-
br-m<elongated:matrix.org> 30tb/yr? With 100mb limit
-
<DataHoarder> making it highly centralized due to storage/compute/bandwidth costs -> then strike the central locations :)
-
br-m<ofrnxmr:xmr.mx> @elongated:matrix.org: 100mb is the packet size limit, wont be hit for 6yrs under tevador's or articmines proposals
-
br-m<ofrnxmr:xmr.mx> Yeah, ppl yelling fud about 90mb limit dont realize that 90mb is 65gb per day
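[editor's note: those figures follow from Monero's 2-minute block target, i.e. about 720 blocks/day]

\[
\begin{aligned}
10\ \mathrm{MB} \times 720\ \mathrm{blocks/day} &\approx 7.2\ \mathrm{GB/day} \approx 2.6\ \mathrm{TB/year} \\
90\ \mathrm{MB} \times 720 &\approx 64.8\ \mathrm{GB/day} \\
100\ \mathrm{MB} \times 720 \times 365 &\approx 26\ \mathrm{TB/year}
\end{aligned}
\]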
-
<DataHoarder> in chain data* the limit can strike due to other factors
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Thx to artic fans
-
br-m<ofrnxmr:xmr.mx> i dont even think artic fans, just ppl who are claiming that im "breaking a promise" "for no reason"
-
br-m<ofrnxmr:xmr.mx> Pointing to getmonero's retarded faq as proof that monero blocks are currently "unlimited"
-
<DataHoarder> People don't realize we use size_t and not arbitrary precision integers for packet and block sizes
-
<DataHoarder> can't even go past uint64_t block sizes!
-
br-m<ofrnxmr:xmr.mx> getmonero.org/get-started/faq/#anchor-block-limit
-
br-m<ofrnxmr:xmr.mx> > No, Monero does not have a hard block size limit. Instead, the block size can increase or decrease over time based on demand. It is capped at a certain growth rate to prevent outrageous growth (scalability).
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Needs to be fixed
-
<DataHoarder> technically there is a cap even if we send packets less bad
-
<DataHoarder> if the block header itself reaches the packet limit :)
-
<DataHoarder> 100 MiB block headers would be ... interesting
-
<DataHoarder> just about 3 million tx hashes
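[editor's note: that estimate is just the 100 MiB limit divided by 32-byte tx hashes]

\[
\left\lfloor \frac{100 \times 2^{20}}{32} \right\rfloor = 3{,}276{,}800 \approx 3.3\ \text{million tx hashes}
\]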
-
br-m<ofrnxmr:xmr.mx> WHY ARE YOU BREAKING MONERO'S PROMISE! DATAHOARDER IS A FED WHO IS TRYING TO HIJACK MONERO
-
<DataHoarder> 8 EiB (2^63 bytes) block size :)
-
br-m<ofrnxmr:xmr.mx> has anyone looked to see if zano has the packet limit?
-
<DataHoarder> damn, you can no longer address the storage of the world in a single uint64
-
br-m<rbrunner7> After reading this, and the Twitter thread it links to, I fear we could be near a total breakdown of any sensible discussion about block sizes. Out go technical arguments and sound logical reasoning, and emotions totally rule the day: old.reddit.com/r/Monero/comments/1p…ill_be_the_only_contender_for_sound
-
br-m<rbrunner7> (There is currently a very detailed response to my comment from ArticMine caught in some filter, waiting for release.)
-
<DataHoarder> > This is all really strange considering that the current average block size is 100 kB and it's not possible to up it up to anything really big in a fast manner.
-
<DataHoarder> stressnet disagrees
-
br-m<rbrunner7> Say, how much would it cost to produce a valid 50 MB block, mine it, and bring the network down with it? Can't be more than a few thousand dollars, I would guess? If I was rich I would be tempted to do that as an attempt to bring people to their senses.
-
<DataHoarder> yeah the temporary part (like, it doesn't even have to make it to the hardfork if fixed before) has been totally lost to the wind
-
<DataHoarder> if it's a miner, rbrunner7, effectively "free"
-
<DataHoarder> even better if they do 51%
-
<DataHoarder> they can pad their own blocks with txs to grow the median for free
-
br-m<rbrunner7> Ah, yes, of course, because you get your expenses back :)
-
<DataHoarder> without majority hashrate you still need to spam, but you can get some better efficiency if you are already a mining pool
-
<DataHoarder> pad what you can and spam the rest
-
br-m<rbrunner7> Maybe we can win over M5M400?
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: They are safe with 0.01 zano tx fees
-
<DataHoarder> funnily qubic was padding their blocks with withheld txs
-
br-m<ofrnxmr:xmr.mx> zano = 100mb (same code as monero) github.com/hyle-team/zano/blob/mast…%2Finclude%2Fnet%2Flevin_base.h#L90
-
br-m<ofrnxmr:xmr.mx> And 50mb p2p github.com/hyle-team/zano/blob/mast…rency_core%2Fcurrency_config.h#L141
-
<DataHoarder> ... but the max number of txs they could mine was 20.
-
br-m<rbrunner7> No, seriously, I think there are people right now that can only return to their senses quickly by hitting them on the head with a hammer.
-
<DataHoarder> so literally qubic had set a hardcoded limit on how many transactions could be included
-
<DataHoarder> zano 50mb limit!!!!
-
<DataHoarder> actually, we also do have a 50mb packet size
-
<DataHoarder> and 100mb for levin
-
br-m<rbrunner7> Yeah, but anyway not a contender for "sound money", so ...
-
<DataHoarder> MAX_RPC_CONTENT_LENGTH = 1048576 // 1 MB
-
<DataHoarder> DEFAULT_RPC_SOFT_LIMIT_SIZE 25 * 1024 * 1024 // 25 MiB
-
br-m<ofrnxmr:xmr.mx> monero has that same 50mb line
-
<DataHoarder> so maybe we are in a worse place than we thought :)
-
br-m<rbrunner7> Don't think you can get away with "unlimited logical block size, with a limit of 100 MB for individual block parts". See the word "limit" in there? That will be enough for people to freak out :)
-
<DataHoarder> block parts = txs
-
<DataHoarder> which is already bounded
-
br-m<ofrnxmr:xmr.mx> Rbrunner, you didnt read getmonero.org? Blocks are unlimited
-
<DataHoarder> #define CRYPTONOTE_MAX_TX_SIZE 1000000
-
<DataHoarder> oh also
-
<DataHoarder> #define CRYPTONOTE_MAX_TX_PER_BLOCK 0x10000000
-
<DataHoarder> ^ also size limit
-
br-m<ofrnxmr:xmr.mx> oh thats racist
-
br-m<rbrunner7> I am a stealth "small blocker", what do you expect.
-
<DataHoarder> that is 2^28
-
br-m<ofrnxmr:xmr.mx> you have to change that to infinity
-
<DataHoarder> 1000000 * 2^28 bytes to TiB = 244 TiB blocks
-
<DataHoarder> yet another limit
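[editor's note: checking that multiplication, assuming both inherited constants bind at once]

\[
10^{6}\ \mathrm{bytes/tx} \times 2^{28}\ \mathrm{txs} \approx 2.68 \times 10^{14}\ \mathrm{bytes} \approx 244\ \mathrm{TiB}
\]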
-
br-m<rbrunner7> Well, maybe we could live with that limit if we drop blocktime down to 1 second.
-
br-m<rbrunner7> More transactions that way.
-
<DataHoarder> bring it down enough that speed of light and distance start making 10+ blocks orphanable, so all miners need to coexist in the same server rack
-
br-m<kayabanerve:matrix.org> If we had asynchronous consensus, blocks could be produced per throughput, not an arbitrary time interval.
-
br-m<rbrunner7> Note to self: If people around me throw reason and logic overboard and act almost purely on emotion, it doesn't help if I do likewise as my reaction to this happening.
-
<sech1> With the current monerod limitations, miners will start limiting block sizes way before 100 MB
-
<sech1> I mean performance limitations
-
<sech1> Qubic even mined empty blocks for a while to be "more efficient"
-
<sech1> P2Pool has a packet size limit of 128 KB which limits it to max 4k transactions per block
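[editor's note: the 4k figure is the same 32-byte-hash arithmetic applied to P2Pool's packet limit]

\[
\frac{128 \times 1024}{32} = 4096\ \text{tx hashes per block}
\]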
-
br-m<gingeropolous> we need to build a fab
-
br-m<sgp_> lazy developers need to do their job reddit.com/r/Monero/comments/1peug7…o_developers_are_on_track_to_add_an
-
br-m<sgp_> shame on you all for prioritizing fcmp++. We all know that will lead to way worse privacy than simply allowing big blocks. shame!
-
br-m<rbrunner7> A sad day for Monero. I can hear Monero's enemies rejoice.
-
br-m<sgp_> This scaling death cult was a sleeping issue all along unfortunately. These network vulnerabilities finally being challenged is a step in the right direction
-
br-m<rbrunner7> I would like to see LMDB manage a multi-terabyte blockchain file. Would be an interesting exercise.
-
br-m<syntheticbird> @rbrunner7: LMDB2: electric boogaloo when
-
br-m<boog900> I am happy to see some push back on reddit, starting a propaganda war is stupid.
-
br-m<ofrnxmr:xmr.mx> so.. wen serialization limit fixes? 7999 8867 9433
-
br-m<boog900> cuprate has already done it :p
-
br-m<ofrnxmr:xmr.mx> Since those limit blocks to ~30mb
-
br-m<ofrnxmr:xmr.mx> @boog900: Right, but we're fussing about 100mb genesis when we have a 30mb limit added in 2020
-
br-m<ofrnxmr:xmr.mx> Thats been fixed since like 2021
-
br-m<boog900> I am really surprised it has taken so long for 9433
-
br-m<boog900> like the others I kinda get taking a while to review and whatever, but that should be a simple change.
-
br-m<ofrnxmr:xmr.mx> Considering 9433 is just a stop-gap/bandaid, im also surprised that it hasnt yet been reviewed/merged
-
<DataHoarder> untested, removed the never used txin/txout values irc.gammaspectra.live/b11e6d8f7bdf2…and-serialization-code-for-tx.patch
-
<DataHoarder> mainnet node just works :')
-
<DataHoarder> > 6 files changed, 5 insertions(+), 292 deletions(-)
-
br-m<ofrnxmr:xmr.mx> Has anyone compared 7999 and 8867 to see which actually performs better?
-
br-m<boog900> Proposed some different scaling parameters: seraphis-migration/monero #44#issuecomment-3617687600
-
br-m<ofrnxmr:xmr.mx> 8867 has, aiui, started to be merged in pieces, but 7999 is the smaller pr and (again, aiui) has demonstrated much improved performance
-
br-m<gingeropolous> so these things could address the 90MB block limit. And have been sitting in PR limbo since 2021
-
br-m<gingeropolous> so it'll soon be 5 years that these fixes have sat there.
-
br-m<ofrnxmr:xmr.mx> @gingeropolous: The 30mb limit
-
br-m<ofrnxmr:xmr.mx> The 90/100mb limit is unaddressed
-
br-m<ofrnxmr:xmr.mx> Theres also a 50mb p2p packet limit, also inherited
-
br-m<boog900> @ofrnxmr:xmr.mx: I am working on a proposal to change how blocks are synced. Hopefully fixing this and adding a couple nice features.
-
br-m<boog900> @ofrnxmr:xmr.mx: Also I checked and I can't see where this is enforced.
-
br-m<ofrnxmr:xmr.mx> Fluffy blocks during ibd w/ split messages?
-
br-m<boog900> I mean we could reuse the messages but that wouldn't be ideal IMO. But if you mean just the gist of fluffy blocks then yes.
-
br-m<ofrnxmr:xmr.mx> download all block headers first, then add the txs 🧠
-
br-m<boog900> If you are taking the mick, then I don't see why. I want to add more to it than just that, for example adding support for not always sending the miner tx in a broadcast and not disconnecting if the block has enough PoW but is invalid, plus some more. I would prefer us to get all these changes in at once as it's a good time to do it.
-
br-m<boog900> Having a spec we can discuss before I just put some code in Cuprate is the better way to do this.
-
<nioc> I will comment here as I don't have a github account, the comment "The spam attacks we have had in Monero were stopped by the short term median."
-
<nioc> 1) so we can distinguish spam :)
-
<nioc> 2) I thought the reason this wasn't successful is that the blocks did not grow at the expected rate, that there was a bug that kept fees too low to expand the blocks
-
<nioc> I am thinking of the most recent episode, am I remembering this correctly?
-
br-m<rucknium> nioc: Mostly the spam was using minimum fee. If the real users had auto-adjusted their fee to the next level, their txs would not have been delayed. I don't think the auto-adjust would have increased block size much because the vast majority of txs were the low-fee spam. More info: github.com/Rucknium/misc-research/b…d/pdf/monero-black-marble-flood.pdf
-
<nioc> yeah I thought the low-fee spam was low fee due to incorrect auto-adjust
-
<nioc> vague memories
-
br-m<321bob321> CRS
-
<plowsof> for nioc 2) monero-project/monero #9219
-
br-m<tigerix:matrix.org> I believe in the good will of the people in this community with the block size limit.
-
br-m<tigerix:matrix.org> Satoshi also introduced a block size limit with good will, for safety reasons. This turned out to be the nail in the coffin for Bitcoin as money.
-
br-m<tigerix:matrix.org> This shouldn't be done, because temporary things usually stay the way they are. That's just life experience![... more lines follow, see mrelay.p2pool.observer/e/zuK5zc8KX19ob2VN ]
-
<DataHoarder> it's already in the code and introduced. we are trying to remove it.
-
<DataHoarder> it came with cryptonote.
-
br-m<tigerix:matrix.org> Zcash has a block size limit of 2 MB and thus will never be more than private gold. Monero can be more than that!
-
br-m<redsh4de:matrix.org> To be clear, the blocksize will still be dynamic. The limit is not arbitrary like 1MB or 2MB, it is literally under what would break the Monero network with the current code if it gets there
-
br-m<redsh4de:matrix.org> things start breaking at 32MB already
-
br-m<redsh4de:matrix.org> the cap is 3x that
-
br-m<tigerix:matrix.org> It sounds reasonable, but isn't this a nice problem to have?
-
br-m<tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
br-m<redsh4de:matrix.org> @tigerix:matrix.org: It is not a nice problem to have if it renders the network unusable. What good is an unlimited block size if the nodes can't sync those blocks?
-
br-m<redsh4de:matrix.org> Plan is to set a temporary cap on growth which would not be reached within 6 years. During that time it should be enough to resolve the underlying technical debt and fix the serialization issues with the C++ daemon that prevent us from safely scaling. After that, the cap can be forked away, because nobody wants it to be t [... too long, see mrelay.p2pool.observer/e/4KOIzs8KLXBfUV9k ]
-
br-m<redsh4de:matrix.org> On Bitcoin, the 1MB block size limit was set to avoid spam. The 90MB block growth cap here is to ensure Monero doesn't literally die by bigger blocks than the reference client can chew if it gets that much activity
-
br-m<articmine> The 90MB cap doesn't do anything that is not addressed in my proposal, unless the 100 MB bug is not fixed in six (6) years
-
br-m<redsh4de:matrix.org> Yes, it is imperative to fix the 100MB bug asap
-
br-m<tigerix:matrix.org> If Monero gets used more and more, we see it way before the worst case scenario happens.
-
br-m<tigerix:matrix.org> To be fair, currently there is no Blockstream in Monero luckily, who wants to make money by offering custodial services. But we never know which state actor is trying to steer Monero in the wrong direction.
-
br-m<articmine> What in reality is going on here is that people are arguing for this cap in order to avoid dealing with this bug during the next 6 years
-
br-m<articmine> @tigerix:matrix.org: There does exist a conflict of interest with strong links to US Law enforcement
-
<DataHoarder> 21:36:03 <br-m> <tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
<DataHoarder> that's what we are trying to do: not have to rush it. then it gets forked away (or is even removed before the next hardfork if we fix the issues)
-
br-m<articmine> Way worse than Blockstream
-
br-m<redsh4de:matrix.org> @articmine: Not setting the cap would be a "gun to your head" to get it fixed within 6 years, yes. Unironically can be a motivating factor
-
<DataHoarder> like. it can be exploited today even worse with existing scaling
-
<DataHoarder> the packet size cap is 50 MiB, levin deserialization 100 MiB ... and as listed last night there are other fixed caps already inherited from cryptonote
-
br-m<articmine> With existing scaling one needs about 5 months
-
br-m<articmine> In fairness to cryptonote. In 2013 they were looking at over 2000 TPS. That was less than VISA back then
-
br-m<articmine> The TX size was like ~500 bytes
-
br-m<articmine> It was still a bad idea back then
-
br-m<gingeropolous> i mean call me crazy pants, but I think 6 million transactions of FCMP is better privacy than 900 gajillion transactions of ringsize 16. FCMP gets in, then it's optimize and fix all the things, like the PRs that have been sitting since 2021 that are kinda related i think
-
br-m<articmine> Actually no
-
br-m<gingeropolous> i really think we're missing the forest for the trees
-
br-m<articmine> Especially with quantum computers
-
br-m<gingeropolous> well thats a whole other kerfuffle
-
br-m<redsh4de:matrix.org> @gingeropolous: anon / perfect-daemon had made PRs that upgrade serialization/etc, right? Maybe we'll see something of that sort from him now again after his CCS got funded
-
<DataHoarder> 21:57:29 <br-m> <articmine> Especially with quantum computers
-
<DataHoarder> ^ especially. FCMP++/Carrot includes specific changes to improve PQ
-
<DataHoarder> the current addressing scheme does not.
-
br-m<articmine> @gingeropolous: It is actually a valid research topic
-
br-m<boog900> @articmine: are you saying RingCT is better than FCMP for QCs?
-
br-m<datahoarder> ^ > <@datahoarder> The conflict of interest here is delaying FCMP++ due to scaling issues which would already cover the part of breaking surveillance for rings, so that must be prioritized. Adding sanity scaling parameters/adjustments so that can exist happily with current implementations can speed the process of deploying this in an agreeable way and stopping BS
-
br-m<articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
-
<DataHoarder> goal shifting now, FCMP++ deals with BS, but suddenly, that's irrelevant
-
br-m<articmine> One needs to hide the public keys. This is in the Carrot specification
-
br-m<boog900> yes, currently you can break it without even the public keys. It's even worse.
-
<DataHoarder> the carrot specification was changed recently :)
-
<DataHoarder> I implemented it
-
br-m<articmine> DataHoarder: It is not irrelevant, but it is not a complete panacea
-
br-m<articmine> DataHoarder: Do you still need to hide the public keys to have forward secrecy?
-
<DataHoarder> given current BS and you placing that much importance in it, I'd say FCMP++ completely neuters them except in specific future cases, which we built protection/fallbacks for (PQ turnstile test being one)
-
br-m<boog900> @articmine: even if you did, this is a step up.
-
br-m<articmine> It is a yes or no question.
-
<DataHoarder> internal sends (change) are protected, even if they know all your public keys, ArticMine. non-internal sends are protected, given that all public keys are not shared.
-
<DataHoarder> if they know explicitly your target address (not any) they can do quantum stuff there to learn amounts
-
<DataHoarder> learning a different subaddress is not possible
-
<DataHoarder> also - jeffro256/carrot #6
-
br-m<articmine> ... but some public keys are available to a BS adversary
-
br-m<boog900> why are we talking about this at all?
-
<DataHoarder> goal shifting boog900
-
br-m<boog900> 100%
-
<DataHoarder> articmine: you have 0,2, BS has 0,3, they learn nothing
-
<DataHoarder> if they have 0,2, they learn amounts.
-
<DataHoarder> in quantum
-
br-m<articmine> What does the sender have?
-
<DataHoarder> if the exchange sends, they have 0,2
-
<DataHoarder> that is what BS has
-
<DataHoarder> but then you receive with 0,3. they get nothing
-
br-m<articmine> So then BS has 0,2 for some of the public keys
-
<DataHoarder> 0,2 IS the public key
-
<DataHoarder> it's not shared with 0,3 or 1,2
-
<DataHoarder> that's why they are derived with proper methods
-
<DataHoarder> (I mean 0,2 as account/index)
-
<DataHoarder> basically. exchange sends you money at address A (0,2).
-
<DataHoarder> They can break it! (but they already have the info). They can later break, using quantum, outputs received specifically with A (0,2)
-
<DataHoarder> you receive using B (0,3). this is not broken, this is an entirely new set of public keys
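[editor's note: for context on why the (account, index) pairs above share no key material, the standard (pre-Carrot) subaddress derivation, as commonly described, computes from the private view key $a$ and public spend key $B$]

\[
m_{i,j} = H_s(\texttt{SubAddr} \parallel a \parallel i \parallel j), \qquad
D_{i,j} = B + m_{i,j}G, \qquad
C_{i,j} = a\,D_{i,j}
\]

so (0,2) and (0,3) yield unrelated public key pairs $(D_{i,j}, C_{i,j})$, and learning one reveals nothing about another without $a$.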
-
br-m<articmine> Yes but all the current outputs were received with 0,2
-
<DataHoarder> what do you mean
-
<DataHoarder> so they already know the details of the outputs?
-
<DataHoarder> then why do they need to learn them
-
<DataHoarder> there isn't carrot deployed yet
-
<DataHoarder> that's why it's important to have it
-
br-m<articmine> The existing outputs if the public keys are known
-
br-m<articmine> The address
-
<DataHoarder> there aren't existing carrot outputs
-
br-m<articmine> Correct, but if they are not transferred after FCMP++ they are still vulnerable
-
<DataHoarder> they are vulnerable regardless. it's not carrot outputs
-
<DataHoarder> so yes, migrating to FCMP/Carrot is important
-
<DataHoarder> without carrot you don't even need pubkeys > 22:01:24 <br-m> <boog900> yes, currently you can break it without even the public keys. It's even worse.
-
br-m<articmine> DataHoarder: You have to
-
<DataHoarder> why are non-carrot outputs even considered. they are broken under a quantum adversary directly
-
<DataHoarder> when you bring it up, migrating would be a factor. I'm answering > 22:00:07 <br-m> <articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
-
br-m<articmine> Because BS relies on correlations between outputs. So some broken, some not broken
-
br-m<articmine> They are not isolated from each other
-
br-m<articmine> The worst part of all of this is it doesn't even have to work. All the government has to do is to convince a judge and not a professional mathematician that it works
-
br-m<articmine> Then they can convict an innocent person
-
<DataHoarder> so back to the hypothetical that regardless of what we deploy, even sending everything to burn addresses, a judge can convict you
-
<DataHoarder> so it doesn't matter what we do. close the chain right?
-
br-m<articmine> My point is that we need multiple layers. Not just one protection
-
br-m<articmine> ... and yes sheer volume can and should be part of the equation
-
<DataHoarder> so let's deploy these layers no? especially the ones that cover PQ and ring surveillance
-
br-m<articmine> I am not against FCMP++ What I am against is a fanatical push to keep the existing chain as small as possible
-
<DataHoarder> I don't think it's fanatical, nor a push to keep it small, but to allow it to grow safely without exploding and causing chain splits that require emergency changes
-
br-m<articmine> To give an example. Many of the devs are concerned about a growth rate of 2 and propose growth rates between 1.2 and 1.7. I come up with an effective growth rate of 1.085 and they ask for more and more drastic restrictions
-
<DataHoarder> so bring people or gather devs to make it work now, instead of bringing people to bicker about a 90 MiB permanent size forever, which was not discussed at all. We can remove it before the next hard fork, let's do so, but otherwise a bomb is left planted (which is already there)
-
br-m<boog900> way to misrepresent it
-
br-m<boog900> disgusting
-
br-m<boog900> trying to win the propaganda war again
-
br-m<boog900> I have said again and again my position, here it is: seraphis-migration/monero #44#issuecomment-3617687600
-
br-m<boog900> your 1.085 increases no matter what
-
br-m<articmine> Your position is 1.2.
-
br-m<boog900> not exactly equivalent
-
br-m<articmine> I am offering 1.085
-
br-m<boog900> oh my days
-
br-m<articmine> @boog900: Over time it is
-
br-m<boog900> if my proposal was really more, you would like it more, the more dangerous the better right?
-
br-m<articmine> @boog900: No I am arguing for short to medium term flexibility
-
br-m<articmine> Long term no more than 1.5x per year
-
br-m<articmine> I originally had a long term sanity median of 1000000 blocks
-
br-m<articmine> With a growth rate of 2
-
br-m<articmine> I actually believe that Tevador's proposal is way better for a sanity cap
-
br-m<boog900> @articmine: where? not in the proposal I am looking at
-
br-m<articmine> I have given multiple talks with the long term sanity median of 1000000 bytes
-
br-m<boog900> ah, so this one proposal in the past that wasn't the one you wanted for FCMP?
-
br-m<boog900> like come on
-
br-m<articmine> The last was at MoneroKon 2025
-
br-m<boog900> I wont be talking about this with you anymore, we gone round in circles enough over the past couple weeks.
-
br-m<articmine> I even discussed this there with Jeffro256 who told me it was unnecessary. That is why I took it out
-
br-m<articmine> Of course this was for FCMP
-
br-m<articmine> @boog900: Then don't.
-
br-m<articmine> I know what is really going on here. It has nothing to do with scaling parameters
-
<DataHoarder> we have pointed at the specific code that would break already. bring people, or let the existing people fix it without sending hordes in misinterpreted social messages
-
<DataHoarder> there isn't a consensus for 90 MiB anymore, besides the 5m where there was an abstain from you. so, why all of this?
-
<DataHoarder> the same limit exists on all other cryptonote derivations, too
-
<DataHoarder> in the end it doesn't matter if the technical limit is in or not. BS will exploit it and kill the network :)
-
<DataHoarder> or well, have some emergency deployment. wouldn't that be fun
-
<sech1> I think it's more of a philosophical question. Any software has its limits. Even if Monero declares "unlimited" block size, there's always a physical limit of what the network can handle. Dev team's responsibility is to ensure that this limit is always bigger than the real world usage at any time, but setting a fail-safe (hard cap for the known value of the limit) is perfectly normal, assuming that this hard cap gets increased with every node optimization (every new software release)
-
br-m<articmine> I ABSTAIN; that does not mean I support it. When I see posts on r/BTC on this, it tells me that this 90 MB limit is very controversial outside of the MRL
-
<DataHoarder> we couldn't do an emergency release for dns checkpoints either because of existing technical debt, too. it's not a first
-
<DataHoarder> ofc, abstaining can still mean you are against. usually it means you let the rest of the consensus move forward, not that you instead move to try to misrepresent it elsewhere
-
br-m<articmine> I am not misrepresenting this
-
<DataHoarder> not what I have seen in reddit comments, unless that's an impostor; if so they have done a great job.
-
br-m<articmine> I was actually shocked by the reaction to me including Tevador's proposal into mine
-
<DataHoarder> as sech1 said "assuming that this hard cap gets increased with every node optimization (every new software release)" < I think that's the point of the technical cap.
-
<sech1> So I'm against making this a consensus rule (fixed max block size). Rather make it a constant that can be changed in a point release, and ensure that scaling rules don't let the network reach the cap quickly
-
<sech1> so the team has the time to react if network load changes
-
<sech1> quickly -> in less than 2-3 years
-
br-m<articmine> sech1: Honestly this does not work
-
<sech1> I disagree
-
br-m<articmine> It actually broke Bitcoin in 2013
-
<sech1> Make scaling rules work such that 90 MB can't be reached in less than 3 years
-
<sech1> If blocks start to grow, team has 3 years to react and optimize the node, and increase the hard cap
-
br-m<articmine> sech1: My proposal means it cannot be reached for over 6 years
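[editor's note: illustrative time-to-cap arithmetic; the starting median and growth factors below are assumptions, not either proposal's exact parameters. With current median $s_0$, yearly growth factor $r$, and cap $S$]

\[
t = \frac{\ln(S/s_0)}{\ln r}\ \text{years}; \qquad
s_0 = 300\ \mathrm{kB},\ S = 90\ \mathrm{MB}:\quad
r = 2 \Rightarrow t \approx 8.2, \qquad
r = 1.5 \Rightarrow t \approx 14
\]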
-
<DataHoarder> they can feed these blocks via other means, not chain growth
-
br-m<articmine> Yet this is not enough for some people
-
<DataHoarder> it will get deserialized
-
<sech1> yes, 90 MB blocks can be crafted and sent via RPC, or even P2P as a new top chain candidate. Nodes will have to process them
-
<sech1> which is why a limit is needed, but not as part of consensus rules
-
br-m<articmine> DataHoarder: How?
-
<sech1> it's a technical limitation, not a consensus rule
-
<sech1> you have to choose between node crashing or node just refusing such blocks
-
<sech1> either way, there is a hard cap (implicit in the first case)
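[editor's note: a minimal sketch of the fail-safe sech1 describes, with a hypothetical constant and function, not monerod's actual code]

```cpp
#include <cstddef>

// Hypothetical fail-safe: a node-level sanity cap that is NOT a consensus
// rule, just a constant bumped in point releases as the node gets faster.
constexpr std::size_t SANITY_MAX_BLOCK_SIZE = 90ull * 1000 * 1000;  // 90 MB

// Refuse (rather than crash on) oversized incoming blocks; a later release
// with a higher constant would accept them after node optimizations.
bool accept_incoming_block(std::size_t block_size) {
    if (block_size > SANITY_MAX_BLOCK_SIZE)
        return false;  // reject before attempting full deserialization
    return true;       // hand off to normal consensus validation
}
```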
-
br-m<articmine> If this can be done outside of consensus rules then I change my vote from ABSTAIN to YES > <sech1> which is why a limit is needed, but not as part of consensus rules
-
br-m<articmine> On the 90 MB cap
-
<DataHoarder> > <sech1> which is why a limit is needed, but not as part of consensus rules
-
<DataHoarder> ^ semi-consensus, if 90 MiB blocks cannot be broadcast then nodes can fall behind if the network is fed txs in specific ways
-
<DataHoarder> if it could be done in point releases, that'd be nice.
-
<sech1> if it gets to the point when we have 90+ MB blocks and some nodes can't sync, these nodes have to update, right?
-
<sech1> Because the fixed version will be available at this point
-
<sech1> Remember about the 3+ years lead time due to scaling rules
-
br-m<articmine> So a node relay rule. No problem here
-
<DataHoarder> unless those are mining nodes and they make a longer chain, sech1
-
<DataHoarder> the tx node relay rule gets skipped for found blocks with those txs, as an example
-
<sech1> Then it's just miner consensus, not a problem
-
<sech1> I think pools will self-regulate when blocks get big
-
<sech1> They won't allow their nodes to become too slow
-
br-m<articmine> DataHoarder: A longer chain that crashes
-
<DataHoarder> maybe it can be brought to the table next MRL with more details
-
<sech1> so they'll limit blocks to a few MB or whatever value their servers can handle
-
<DataHoarder> that longer chain has less block size, so no it doesn't, ArticMine
-
<DataHoarder> that's why they made it longer
-
<DataHoarder> but then - the existing limit is already there :')
-
br-m<articmine> DataHoarder: This did not work for Bitmain in 2018
-
<DataHoarder> though a well placed limit is consistent instead of having secondary pieces throw exceptions or err
-
br-m<articmine> That is history
-
<DataHoarder> people moved from MineXMR but they went to Qubic
-
<DataHoarder> :)
-
br-m<articmine> Yes, but nodes refusing to relay blocks over 90 MB because they crash is very difficult to fight.
-
br-m<articmine> Then there is my proposal, which blocks over 90 MB for over 6 years
-
br-m<syntheticbird> DataHoarder: if people = cfb and its llm bots then yeah surely
-
br-m<articmine> In consensus
-
br-m<articmine> The way to harden a node relay rule on this is to set the node relay cap at 45 MB. Then miners will need over 51% to override the nodes.
-
br-m<articmine> So I will support a node relay rule at 45 MB
-
br-m<diego:cypherstack.com> Anything that delays FCMP is bad imo
-
br-m<diego:cypherstack.com> Once FCMP is ready, we need to get it live. Monero's privacy is currently porous.
-
br-m<diego:cypherstack.com> I have no such links, and I only care about the proposal that gets us to FCMP++ the fastest. > <@articmine> There does exist a conflict of interest with strong links to US Law enforcement
-
br-m<diego:cypherstack.com> And I would say anyone who says anything other than FCMP++ as an absolute priority is the one with suspect intentions, given how substandard Monero's current privacy protocol is in comparison to other serious privacy tech.
-
br-m<diego:cypherstack.com> Though that's potentially an inflammatory argument that looks too much at people, so I say it very lightly.
-
br-m<diego:cypherstack.com> I know I am in no way the decision maker anywhere, but I want FCMP to launch Q2 2026
-
br-m<diego:cypherstack.com> And I have been burning my crypto boy candles at both ends to get it there.
-
br-m<diego:cypherstack.com> FCMP first, scaling immediately after if need be.
-
br-m<diego:cypherstack.com> It's not an indefinitely pushed discussion. It is just a very very VERY distant second to get FCMPs out. Once out, it can be first on the agenda.
-
br-m<diego:cypherstack.com> One more elaboration if I may, the cryptographers working for me are also concerned about Monero scaling. We would be among the first to insist on and contribute to further scaling discussions after FCMPs goes live.
-
br-m<rucknium> @diego:cypherstack.com: If I can prod you a bit, your position is also not an enlightened one. "Set scaling discussion aside" on its face means keep the current scaling algorithm. Many people think the current scaling algorithm allows large blocks too quickly. This is the "anchoring" problem in negotiations. It also shows how [... too long, see mrelay.p2pool.observer/e/w5rH0c8KTHNFdy14 ]
-
br-m<diego:cypherstack.com> I am "fix it immediately after fcmp" not "fix it later"
-
br-m<articmine> One can actually do FCMP++ with the current scaling untouched
-
br-m<diego:cypherstack.com> Later is nebulous. "Immediately after fcmp" is the expectation of a hard fork within the next year, if not sooner, after FCMP to implement scaling solutions.
-
br-m<articmine> This does actually work
-
br-m<diego:cypherstack.com> @articmine: This was my understanding, yes.
-
br-m<diego:cypherstack.com> I hate to break it to everyone, but we don't have a massive flood of people just waiting for FCMP before they do all of their txs which will bring us right to the brink right away.
-
br-m<diego:cypherstack.com> We have time. Not infinite time. And not enough time to sit on our laurels, but a bit of time. A year's worth at least.
-
br-m<diego:cypherstack.com> (yes "a year" is pulled out of my butt)