-
br-m<reject-all:matrix.org> Regarding this BS stuff: is Monero moving towards nodes only in data centers and on high-end infrastructure?
-
br-m<elongated:matrix.org> @reject-all:matrix.org: If it’s heavily spammed yes
-
br-m<reject-all:matrix.org> If users are unable to run a node (without access to data centers, high-end equipment) does this threaten the decentralization (and therefore the censorship resistance) of Monero?
-
br-m<ofrnxmr:xmr.mx> @reject-all:matrix.org: Obviously
-
br-m<ofrnxmr:xmr.mx> Or (less rude) by definition, yes.
-
br-m<reject-all:matrix.org> So you would say it's a requirement for the 'common' user with consumer hardware and typical bandwidth limitations to be capable of running a full node, and that otherwise Monero won't be decentralized/permissionless?
-
br-m<reject-all:matrix.org> Is this sufficiently being taken into account with regards to BS/scaling?
-
br-m<reject-all:matrix.org> @ofrnxmr:xmr.mx
-
br-m<ofrnxmr:xmr.mx> i wouldnt say a requirement, but a preference
-
br-m<ofrnxmr:xmr.mx> Currently monero is running quite well on stressnet with 10+ mb blocks, including a couple of old quad-core HDD systems and a single-core 2gb-RAM vm
-
br-m<ofrnxmr:xmr.mx> As far as bandwidth, no. I don't think monero aims to support limited bandwidth, though we are working to reduce bandwidth by upwards of 70% from current
-
br-m<ircmouse:matrix.org> New Monero research paper just dropped! "Inside Qubic’s Selfish Mining Campaign on
-
br-m<ircmouse:matrix.org> Monero: Evidence, Tactics, and Limits"
-
br-m<ircmouse:matrix.org> Link: arxiv.org/pdf/2512.01437 (arxiv.org/pdf/2512.01437)
-
br-m<ircmouse:matrix.org> Credit to @ack-j:matrix.org for pointing it out in the MRL channel. Didn't see it posted here so wanted to share.
-
br-m<datahoarder> ^ I commented about this paper in MRL channel. TL;DR, they had limited/not granular data, estimated similar numbers as we did empirically from granular data
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: How much storage do these “old” systems need to have, to be future-proof for 3-4 yrs?
-
br-m<reject-all:matrix.org> @ofrnxmr:xmr.mx: Interesting, I'll try and get setup with stressnet on my PC.
-
br-m<reject-all:matrix.org> But I do find something a bit unclear:
-
br-m<reject-all:matrix.org> Users able to run full nodes without data centers/high-end equipment is by definition decentralization/censorship resistance.[... more lines follow, see mrelay.p2pool.observer/e/5fvFs88KeDRoR29V ]
-
br-m<ofrnxmr> Depends what you define as "common user" and "consumer hardware" and "typical bandwidth"
-
br-m<datahoarder> @reject-all:matrix.org: there's a future with aggregated proofs that would allow a mixed version of pruned/full/archival. One where it downloads pruned txs, but each block has an aggregated proof that fully verifies the transactions. Archival nodes would keep the per-transaction full proofs, but they aren't needed for these lighter full verification nodes.
-
br-m<datahoarder> Storage and bandwidth requirements for these would be vastly lower
-
br-m<ofrnxmr> tevador's proposal seems to intend to keep up with "consumer hardware" advancements
-
br-m<ofrnxmr> For full/archival nodes
-
br-m<ofrnxmr:xmr.mx> @elongated:matrix.org: future proof with 10mb blocks = 3tb for year 1 :)
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Isn’t the consensus 100mb limit ?
-
br-m<elongated:matrix.org> Just assuming some agency has its life mission to spam xmr 😅
-
br-m<elongated:matrix.org> 30tb/yr ? With 100mb limit
-
DataHoardermaking it highly centralized due to storage/compute/bandwidth costs -> then strike the central locations :)
-
br-m<ofrnxmr:xmr.mx> @elongated:matrix.org: 100mb is the packet size limit, wont be hit for 6yrs under tevador's or articmines proposals
-
br-m<ofrnxmr:xmr.mx> Yeah, ppl yelling fud about 90mb limit dont realize that 90mb is 65gb per day
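These storage figures follow from Monero's two-minute block target; a quick back-of-envelope sketch (assuming permanently full blocks and decimal MB/GB/TB, so the numbers are upper bounds, not forecasts):

```python
# Back-of-envelope chain growth at Monero's 2-minute block target.
# Assumes every block is completely full; real usage is far lower.
BLOCK_TIME_SEC = 120
BLOCKS_PER_DAY = 24 * 3600 // BLOCK_TIME_SEC  # 720 blocks/day

def daily_gb(block_mb):
    """GB of chain data per day at a given full-block size in MB."""
    return block_mb * BLOCKS_PER_DAY / 1000

def yearly_tb(block_mb):
    """TB of chain data per year at a given full-block size in MB."""
    return daily_gb(block_mb) * 365 / 1000

print(f"90 MB blocks: {daily_gb(90):.1f} GB/day")    # ~64.8 GB/day
print(f"10 MB blocks: {yearly_tb(10):.2f} TB/year")  # ~2.63 TB/year
```

This matches both numbers quoted in the chat: ~65 GB/day at 90 MB blocks, and ~3 TB for a year of 10 MB blocks.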
-
DataHoarderin chain data* the limit can strike due to other factors
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Thx to artic fans
-
br-m<ofrnxmr:xmr.mx> i dont even think artic fans, just ppl who are claiming that im "breaking a promise" "for no reason"
-
br-m<ofrnxmr:xmr.mx> Pointing to getmonero's retarded faq as proof that monero blocks are currently "unlimited"
-
DataHoarderPeople don't realize we use size_t and not arbitrary precision integers for packed and block sizes
-
DataHoardercan't even go past uint64_t block sizes!
-
br-m<ofrnxmr:xmr.mx> getmonero.org/get-started/faq/#anchor-block-limit
-
br-m<ofrnxmr:xmr.mx> > No, Monero does not have a hard block size limit. Instead, the block size can increase or decrease over time based on demand. It is capped at a certain growth rate to prevent outrageous growth (scalability).
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: Needs to be fixed
-
DataHoardertechnically there is a cap even if we send packets less bad
-
DataHoarderif block header itself reaches packet limit :)
-
DataHoarder100 MiB block headers would be ... interesting
-
DataHoarderjust about 3 million tx hashes
-
br-m<ofrnxmr:xmr.mx> WHY ARE YOU BREAKING MONERO'S PROMISE! DATAHOARDER IS A FED WHO IS TRYING TO HIJACK MONERO
-
DataHoarder1 exabyte (2^63) block size :)
-
br-m<ofrnxmr:xmr.mx> has anyone looked to see if zano has the packet limit?
-
DataHoarderdamn, you can no longer address the storage of the world in a single uint64
-
br-m<rbrunner7> After reading this, and the Twitter thread it links to, I fear we could be near a total breakdown of any sensible discussion about block sizes. Out go technical arguments and sound logical reasoning, and emotions totally rule the day: old.reddit.com/r/Monero/comments/1p…ill_be_the_only_contender_for_sound
-
br-m<rbrunner7> (There is currently a very detailed response to my comment from ArticMine caught in some filter, waiting for release.)
-
DataHoarder> This is all really strange considering that the current average block size is 100 kB and it's not possible to up it up to anything really big in a fast manner.
-
DataHoarderstressnet disagrees
-
br-m<rbrunner7> Say, how much would it cost to produce a valid 50 MB block, mine it, and bring the network down with it? Can't be more than a few thousand dollars, I would guess? If I was rich I would be tempted to do that as an attempt to bring people to their senses.
-
DataHoarderyeah the temporary part (like, not even has to make it to the hardfork if fixed before) has been totally lost to the wind
-
DataHoarderif it's a miner, rbrunner7, effectively "free"
-
DataHoardereven better if they do 51%
-
DataHoarderthey can pad their own blocks with txs to grow the median for free
-
br-m<rbrunner7> Ah, yes, of course, because you get your expenses back :)
-
DataHoarderwithout majority hashrate you still need to spam, but you can get some better efficiency if you are already a mining pool
-
DataHoarderpad what you can and spam the rest
-
br-m<rbrunner7> Maybe we can win over M5M400?
-
br-m<elongated:matrix.org> @ofrnxmr:xmr.mx: They are safe with 0.01 zano tx fees
-
DataHoarderfunnily qubic was padding their blocks with withheld txs
-
br-m<ofrnxmr:xmr.mx> zano = 100mb (same code as monero) github.com/hyle-team/zano/blob/mast…%2Finclude%2Fnet%2Flevin_base.h#L90
-
br-m<ofrnxmr:xmr.mx> And 50mb p2p github.com/hyle-team/zano/blob/mast…rency_core%2Fcurrency_config.h#L141
-
DataHoarder... but the max number of txs they could mine was, 20.
-
br-m<rbrunner7> No, seriously, I think there are people right now that can only return to their senses quickly by hitting them on the head with a hammer.
-
DataHoarderso literally qubic had set a hardcoded limit into how many transactions could be included
-
DataHoarderzano 50mb limit!!!!
-
DataHoarderactually, we also do have 50mb packet size
-
DataHoarderand 100mb for levin
-
br-m<rbrunner7> Yeah, but anyway not a contender for "sound money", so ...
-
DataHoarderMAX_RPC_CONTENT_LENGTH = 1048576 // 1 MB
-
DataHoarderDEFAULT_RPC_SOFT_LIMIT_SIZE 25 * 1024 * 1024 // 25 MiB
-
br-m<ofrnxmr:xmr.mx> monero has that same 50mb line
-
DataHoarderso maybe we are in a worse place than we thought :)
-
br-m<rbrunner7> Don't think you can get away with "unlimited logical block size, with a limit of 100 MB for individual block parts". See the word "limit" in there? That will be enough for people to freak out :)
-
DataHoarderblock parts = txs
-
DataHoarderwhich is already bounded
-
br-m<ofrnxmr:xmr.mx> Rbrunner, you didnt read getmonero.org? Blocks are unlimited
-
DataHoarder#define CRYPTONOTE_MAX_TX_SIZE 1000000
-
DataHoarderoh also
-
DataHoarder#define CRYPTONOTE_MAX_TX_PER_BLOCK 0x10000000
-
DataHoarder^ also size limit
-
br-m<ofrnxmr:xmr.mx> oh thats racist
-
br-m<rbrunner7> I am stealth "small blocker", what do you expect.
-
DataHoarderthat is 2^28
-
br-m<ofrnxmr:xmr.mx> you have to change that to infinity
-
DataHoarder1000000 * 2^28 bytes to TiB = 244 TiB blocks
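The 244 TiB figure can be checked by multiplying out the two CryptoNote constants quoted above (a sketch, using binary units where 1 TiB = 2^40 bytes):

```python
# Theoretical maximum block payload implied by the two constants
# quoted from the CryptoNote code above.
CRYPTONOTE_MAX_TX_SIZE = 1000000          # bytes per transaction
CRYPTONOTE_MAX_TX_PER_BLOCK = 0x10000000  # 2**28 transactions per block

max_block_bytes = CRYPTONOTE_MAX_TX_SIZE * CRYPTONOTE_MAX_TX_PER_BLOCK
print(f"{max_block_bytes / 2**40:.2f} TiB")  # ~244.14 TiB
```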
-
DataHoarderyet another limit
-
br-m<rbrunner7> Well, maybe we could live with that limit if we drop blocktime down to 1 second.
-
br-m<rbrunner7> More transactions that way.
-
DataHoarderbring it down enough that speed of light and distance starts making 10+ blocks orphanable so all miners need to coexist the same server rack
-
br-m<kayabanerve:matrix.org> If we had asynchronous consensus, blocks could be produced per throughput, not an arbitrary time interval.
-
br-m<rbrunner7> Note to self: If people around me throw reason and logic overboard and act almost purely on emotion, it doesn't help if I do likewise as my reaction to this happening.
-
sech1With the current monerod limitations, miners will start limiting block sizes way before 100 MB
-
sech1I mean performance limitations
-
sech1Qubic even mined empty blocks for a while to be "more efficient"
-
sech1P2Pool has a packet size limit of 128 KB which limits it to max 4k transactions per block
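The "4k transactions" figure is just the packet budget divided by the 32-byte transaction hash size (a sketch; it reads 128 KB as 128 KiB and ignores the non-hash fields that also have to fit in the packet, so it's an upper bound):

```python
# Upper bound on tx hashes that fit in a 128 KiB P2Pool packet.
PACKET_LIMIT = 128 * 1024  # bytes (assuming KiB)
TX_HASH_SIZE = 32          # bytes per transaction hash
print(PACKET_LIMIT // TX_HASH_SIZE)  # 4096
```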
-
br-m<gingeropolous> we need to build a fab
-
br-m<sgp_> lazy developers need to do their job reddit.com/r/Monero/comments/1peug7…o_developers_are_on_track_to_add_an
-
br-m<sgp_> shame on you all for prioritizing fcmp++. We all know that will lead to way worse privacy than simply allowing big blocks. shame!
-
br-m<rbrunner7> A sad day for Monero. I can hear Monero's enemies rejoice.
-
br-m<sgp_> This scaling death cult was a sleeping issue all along unfortunately. These network vulnerabilities finally being challenged is a step in the right direction
-
br-m<rbrunner7> I would like to see LMDB manage a multi-terabyte blockchain file. Would be an interesting exercise.
-
br-m<syntheticbird> @rbrunner7: LMDB2: electric boogaloo when
-
br-m<boog900> I am happy to see some push back on reddit, starting a propaganda war is stupid.
-
br-m<ofrnxmr:xmr.mx> so.. wen serialization limit fixes? 7999 8867 9433
-
br-m<boog900> cuprate has already done it :p
-
br-m<ofrnxmr:xmr.mx> Since those limit blocks to ~30mb
-
br-m<ofrnxmr:xmr.mx> @boog900: Right, but we're fussing about 100mb genesis when we have a 30mb limit added in 2020
-
br-m<ofrnxmr:xmr.mx> Thats been fixed since like 2021
-
br-m<boog900> I am really surprised it has taken so long for 9433
-
br-m<boog900> like the others I kinda get taking a while to review and whatever, but that should be a simple change.
-
br-m<ofrnxmr:xmr.mx> Considering 9433 is just a stop-gap/bandaid, im also surprised that it hasnt yet been reviewed/merged
-
DataHoarderuntested, removed the never used txin/txout values irc.gammaspectra.live/b11e6d8f7bdf2…and-serialization-code-for-tx.patch
-
DataHoardermainnet node just works :')
-
DataHoarder> 6 files changed, 5 insertions(+), 292 deletions(-)
-
br-m<ofrnxmr:xmr.mx> Has anyone compared 7999 and 8867 to see which actually performs better?
-
br-m<boog900> Proposed some different scaling parameters: seraphis-migration/monero #44#issuecomment-3617687600
-
br-m<ofrnxmr:xmr.mx> 8867 has, aiui, started to be merged in pieces, but 7999 is the smaller pr and (again, aiui) has demonstrated much improved performance
-
br-m<gingeropolous> so these things could address the 90MB block limit. And have been sitting in PR limbo since 2021
-
br-m<gingeropolous> so it'll soon be 5 years that these fixes have sat there.
-
br-m<ofrnxmr:xmr.mx> @gingeropolous: The 30mb limit
-
br-m<ofrnxmr:xmr.mx> The 90/100mb limit is unaddressed
-
br-m<ofrnxmr:xmr.mx> Theres also a 50mb p2p packet limit, also inherited
-
br-m<boog900> @ofrnxmr:xmr.mx: I am working on a proposal to change how blocks are synced. Hopefully fixing this and adding a couple nice features.
-
br-m<boog900> @ofrnxmr:xmr.mx: Also I checked and I can't see where this is enforced.
-
br-m<ofrnxmr:xmr.mx> Fluffy blocks during ibd w/ split messages
-
br-m<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: ?*
-
br-m<boog900> I mean we could reuse the messages but that wouldn't be ideal IMO. But if you just mean just the gist of fluffy blocks then yes.
-
br-m<ofrnxmr:xmr.mx> download all block headers first, then add the txs 🧠
-
br-m<boog900> If you are taking the mick, then I don't see why. I want to add more to it than just that, for example adding support for not always sending the miner tx in a broadcast and not disconnecting if the block has enough PoW but is invalid, plus some more. I would prefer us get all these changes in at once as its a good time to do it.
-
br-m<boog900> Having a spec we can discuss before I just put some code in Cuprate is the better way to do this.
-
niocI will comment here as I don't have a github account, the comment "The spam attacks we have had in Monero were stopped by the short term median."
-
nioc1) so we can distinguish spam :)
-
nioc2) I thought why this wasn't successful is that the blocks did not grow at the expected rate, that there was a bug that kept fees too low to expand the blocks
-
niocI am thinking of the most recent episode, am I remembering this correctly?
-
br-m<rucknium> nioc: Mostly the spam was using minimum fee. If the real users had auto-adjusted their fee to the next level, their txs would not have been delayed. I don't think the auto-adjust would have increased block size much because the vast majority of txs were the low-fee spam. More info: github.com/Rucknium/misc-research/b…d/pdf/monero-black-marble-flood.pdf
-
niocyeah I thought the low-fee spam was low fee due to incorrect auto-adjust
-
niocvague memories
-
br-m<321bob321> CRS
-
plowsoffor nioc 2) monero-project/monero #9219
-
br-m<tigerix:matrix.org> I believe in the good will of the people in this community with the Blocksize limit.
-
br-m<tigerix:matrix.org> Satoshi also introduced a Blocksize limit with good will for safety reasons. This turned out to be the nail in the coffin for Bitcoin as money.
-
br-m<tigerix:matrix.org> This shouldn't be done, because temporary things usually stay the way they are. That's just life experience![... more lines follow, see mrelay.p2pool.observer/e/zuK5zc8KX19ob2VN ]
-
DataHoarderit's already in the code and introduced. we are trying to remove it.
-
DataHoarderit came with cryptonote.
-
br-m<tigerix:matrix.org> Zcash has a Blocksize limit of 2 MB and thus will never be more than private gold. Monero can be more than that!
-
br-m<redsh4de:matrix.org> To be clear, the blocksize will still be dynamic. The limit is not arbitrary like 1MB or 2MB, it is literally under what would break the Monero network with the current code if it gets there
-
br-m<redsh4de:matrix.org> things start breaking at 32MB already
-
br-m<redsh4de:matrix.org> the cap is 3x that
-
br-m<tigerix:matrix.org> It sounds reasonable, but isn't this a nice problem to have?
-
br-m<tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
br-m<redsh4de:matrix.org> @tigerix:matrix.org: It is not a nice problem to have if it renders the network unusable. What good is an unlimited block size if the nodes can't sync those blocks?
-
br-m<redsh4de:matrix.org> Plan is to set a temporary cap on growth which would not be reached within 6 years. During that time it should be enough to resolve the underlying technical debt and fix the serialization issues with the C++ daemon that prevent us from safely scaling. After that, the cap can be forked away, because nobody wants it to be t [... too long, see mrelay.p2pool.observer/e/4KOIzs8KLXBfUV9k ]
-
br-m<redsh4de:matrix.org> On Bitcoin, the 1MB block size limit was set to avoid spam. The 90MB block growth cap here is to ensure Monero doesn't literally die by bigger blocks than the reference client can chew if it gets that much activity
-
br-m<articmine> The 90MB cap doesn't do anything that is not addressed in my proposal, unless the 100 MB bug is not fixed within six (6) years
-
br-m<redsh4de:matrix.org> Yes, it is imperative to fix the 100MB bug asap
-
br-m<tigerix:matrix.org> If Monero gets used more and more, we'll see it well before the worst-case scenario happens.
-
br-m<tigerix:matrix.org> To be fair, currently there is no Blockstream in Monero luckily, who wants to make money by offering custodial services. But we never know which state actor is trying to stear Monero in the wrong direction.
-
br-m<articmine> What in reality is going on here is that people are arguing for this cap in order to avoid dealing with this bug during the next 6 years
-
br-m<articmine> @tigerix:matrix.org: There does exist a conflict of interest with strong links to US Law enforcement
-
DataHoarder21:36:03 <br-m> <tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
DataHoarderthat's what we are trying to do: not have to rush it. Then it gets forked away (or even removed before the next hardfork if we fix the issues)
-
br-m<articmine> Way worse than Blockstream
-
br-m<redsh4de:matrix.org> @articmine: Not setting the cap would be a "gun to your head" to get it fixed within 6 years, yes. Unironically can be a motivating factor
-
DataHoarderlike. it can be exploited today even worse with existing scaling
-
DataHoarderthe packet size cap is 50 MiB, levin deserialization 100 MiB ... and as listed last night there are other fixed caps existing already inherited from cryptonote
-
br-m<articmine> With existing scaling one needs about 5 months
-
br-m<articmine> In fairness to cryptonote. In 2013 they were looking at over 2000 TPS. That was less than VISA back then
-
br-m<articmine> The TX size was like ~500 bytes
-
br-m<articmine> It was still a bad idea back then
-
br-m<gingeropolous> i mean call me crazy pants, but I think 6 million transactions of FCMP is better privacy than 900 gajillion transactions of ringsize 16. FCMP gets in, then its optimize and fix all the things, like the PRs that have been sitting since 2021 that are kinda related i think
-
br-m<articmine> Actually no
-
br-m<gingeropolous> i really think we're missing the forest for the trees
-
br-m<articmine> Especially with quantum computers
-
br-m<gingeropolous> well thats a whole other kerfuffle
-
br-m<redsh4de:matrix.org> @gingeropolous: anon / perfect-daemon had made PRs that upgrade serialization/etc, right? Maybe we'll see something of that sort from him now again after his CCS got funded
-
DataHoarder21:57:29 <br-m> <articmine> Especially with quantum computers
-
DataHoarder^ especially. FCMP++/Carrot includes specific changes to improve PQ
-
DataHoardercurrent addressing scheme does not.
-
br-m<articmine> @gingeropolous: It is actually a valid research topic
-
br-m<boog900> @articmine: are you saying RingCT is better than FCMP for QCs?
-
br-m<datahoarder> ^ > <@datahoarder> The conflict of interest here is delaying FCMP++ due to scaling issues which would already cover the part of breaking surveillance for rings, so that must be prioritized. Adding sanity scaling parameters/adjustments so that can exist happily with current implementations can speed the process of deploying this in an agreeable way and stopping BS
-
br-m<articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
-
DataHoardergoal shifting now, FCMP++ deals with BS, but suddenly, that's irrelevant
-
br-m<articmine> One needs to hide the public keys. This is in the Carrot specification
-
br-m<boog900> yes, currently you can break it without even the public keys. Its even worse.
-
DataHoarderthe carrot specification was changed recently :)
-
DataHoarderI implemented it
-
br-m<articmine> DataHoarder: It is not irrelevant, but it is not a complete panacea
-
br-m<articmine> DataHoarder: Do you still need to hide the public keys to have forward secrecy?
-
DataHoardergiven current BS and you placing that much importance in it, I'd say FCMP++ completely neuters them except in specific future cases, which we built protection/fallbacks for (PQ turnstile test being one)
-
br-m<boog900> @articmine: even if you did this is a step up.
-
br-m<articmine> It is a yes or no question.
-
DataHoarderinternal sends (change) are protected, even if they know all your public keys ArticMine. non-internal sends are protected, given that all public keys are not shared.
-
DataHoarderif they know explicitly your target address (not any) they can do quantum stuff there to learn amounts
-
DataHoarderlearning a different subaddress is not
-
DataHoarderalso - jeffro256/carrot #6
-
br-m<articmine> ... but some pubic are available to a BS adversary
-
br-m<articmine> Public keys
-
br-m<boog900> why are we talking about this at all?
-
DataHoardergoal shifting boog900
-
br-m<boog900> 100%
-
DataHoarderarticmine: you have 0,2, BS has 0,3, they learn nothing
-
DataHoarderthey have 0,2, they learn amounts.
-
DataHoarderin quantum
-
br-m<articmine> What does the sender have?
-
DataHoarderif exchange sends, they have 0,2
-
DataHoarderthat is what BS has
-
DataHoarderbut then you receive with 0,3. they get nothing
-
br-m<articmine> So then BS has 0,2 for some of the public keys
-
DataHoarder0,2 IS the public key
-
DataHoarderit's not shared with 0,3 or 1,2
-
DataHoarderthat's why they are derived with proper methods
-
DataHoarder(I mean 0,2 as account/index)
-
DataHoarderbasically. exchange sends you money at address A (0,2).
-
DataHoarderThey can break it! (but they already have the info). They can later use quantum to break outputs received specifically with A (0,2)
-
DataHoarderyou receive using B (0,3). this is not broken, this is an entirely new set of public keys
-
br-m<articmine> Yes but all the current outputs were received with 0,2
-
DataHoarderwhat do you mean
-
DataHoarderso they already know the details of the outputs?
-
DataHoarderthen why do they need to learn them
-
DataHoarderthere isn't carrot deployed yet
-
DataHoarderthat's why it's important to have it
-
br-m<articmine> The existing outputs if the public keys are known
-
br-m<articmine> The address
-
DataHoarderthere aren't carrot existing outputs
-
br-m<articmine> Correct, but if they are not transferred after FCMP++ they are still vulnerable
-
DataHoarderthey are vulnerable regardless. it's not carrot outputs
-
DataHoarderso yes, migrating to FCMP/Carrot is important
-
DataHoarderwithout carrot you don't even need pubkeys > 22:01:24 <br-m> <boog900> yes, currently you can break it without even the public keys. Its even worse.
-
br-m<articmine> DataHoarder: You have to
-
DataHoarderwhy are non-carrot outputs even considered. they are broken under a quantum adversary directly
-
DataHoarderwhen you bring it up. migrating would be a factor. I'm answering > 22:00:07 <br-m> <articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
-
br-m<articmine> Because BS relies on correlations between outputs. So some are broken, some not broken
-
br-m<articmine> They are not isolated from each other
-
br-m<articmine> The worst part of all of this is it doesn't even have to work. All the government has to do is to convince a judge and not a professional mathematician that it works
-
br-m<articmine> Then they can convict an innocent person
-
DataHoarderso back to hypothetical that regardless what we deploy even sending everything to burn addresses, a judge can convict you
-
DataHoarderso doesn't matter what we do. close the chain right?
-
br-m<articmine> My point is that we need multiple layers. Not just one protection
-
br-m<articmine> ... and yes sheer volume can and should be part of the equation
-
DataHoarderso let's deploy these layers no? specially the ones that cover PQ and ring surveillance
-
br-m<articmine> I am not against FCMP++. What I am against is a fanatical push to keep the existing chain as small as possible
-
DataHoarderI don't think it's fanatical, nor a push to keep it small, but to allow it to grow safely without exploding and causing chain splits that require emergency changes
-
br-m<articmine> To give an example. Many of the devs are concerned about a growth rate of 2 and propose growth rates between 1.2 and 1.7. I come up with an effective growth rate of 1.085 and they ask for more and more drastic restrictions
-
DataHoarderso bring people or gather devs to make it work now, instead of bringing people to bicker about a permanent 90 MiB size forever, which was not what was discussed at all. We can remove it before the next hard fork; let's do so, but otherwise a bomb is left planted (which is already there)
-
br-m<boog900> way to misrepresent it
-
br-m<boog900> disgusting
-
br-m<boog900> trying to win the propaganda war again
-
br-m<boog900> I have said again and again my position, here it is: seraphis-migration/monero #44#issuecomment-3617687600
-
br-m<boog900> your 1.085 increases no matter what
-
br-m<articmine> Your position is 1.2.
-
br-m<boog900> not exactly equivalent
-
br-m<articmine> I am offering 1.085
-
br-m<boog900> oh my days
-
br-m<articmine> @boog900: Over time it is
-
br-m<boog900> if my proposal was really more, you would like it more, the more dangerous the better right?
-
br-m<articmine> @boog900: No, I am arguing for short- to medium-term flexibility
-
br-m<articmine> Long term no more than 1.5x per year
-
br-m<articmine> I originally had a long term sanity median of 1000000 blocks
-
br-m<articmine> With a growth rate of 2
-
br-m<articmine> I actually believe that Tevador's proposal is way better for a sanity cap
-
br-m<boog900> @articmine: where? not in the proposal I am looking at
-
br-m<articmine> I have given multiple talks with the long term sanity median of 1000000 bytes
-
br-m<boog900> ah, so this one proposal in the past that wasn't the one you wanted for FCMP?
-
br-m<boog900> like come on
-
br-m<articmine> The last was at MonerKon 2025
-
br-m<boog900> I wont be talking about this with you anymore, we gone round in circles enough over the past couple weeks.
-
br-m<articmine> MoneroKon
-
br-m<articmine> I even discussed this there with Jeffro256 who told me it was unnecessary. That is why I took it out
-
br-m<articmine> Of course this was for FCMP
-
br-m<articmine> @boog900: Then don't.
-
br-m<articmine> I know what is really going on here. It has nothing to do with scaling parameters
-
DataHoarderwe have pointed at the specific code that would break already. bring people, or let the existing people fix it, without sending in hordes riled up by misinterpreted social messages
-
DataHoarderthere isn't a consensus for 90 MiB anymore, besides the 5m where there was an abstain from you. so, why all of this?
-
DataHoardersame limit exists on all other cryptonote derivations, too
-
DataHoarderin the end it doesn't matter if the technical limit is in or not. BS will exploit it and kill the network :)
-
DataHoarderor well, have some emergency deployment. wouldn't that be fun
-
sech1I think it's more of a philosophical question. Any software has its limits. Even if Monero declares "unlimited" block size, there's always a physical limit of what the network can handle. Dev team's responsibility is to ensure that this limit is always bigger than the real world usage at any time, but setting a fail-safe (hard cap for the known
-
sech1value of the limit) is perfectly normal, assuming that this hard cap gets increased with every node optimization (every new software release)
-
br-m<articmine> I ABSTAINED; that does not mean I support it. When I see posts on r/BTC about this, it tells me that this 90 MB limit is very controversial outside of the MRL
-
DataHoarderwe couldn't do an emergency release for dns checkpoints either because of existing technical debt, too. it's not a first
-
DataHoarderofc, you abstaining can still mean you are against. usually it means that you let the rest of the consensus move forward, not that you then try to misrepresent it elsewhere
-
br-m<articmine> I am not misrepresenting this
12 seconds ago