-
br-m
<reject-all:matrix.org> In regards to this BS stuff: is Monero moving towards nodes only on data centers and high-end infrastructure?
-
br-m
<elongated:matrix.org> @reject-all:matrix.org: If it's heavily spammed, yes
-
br-m
<reject-all:matrix.org> If users are unable to run a node (without access to data centers, high-end equipment) does this threaten the decentralization (and therefore the censorship resistance) of Monero?
-
br-m
<ofrnxmr:xmr.mx> @reject-all:matrix.org: Obviously
-
br-m
<ofrnxmr:xmr.mx> Or (less rude) by definition, yes.
-
br-m
<reject-all:matrix.org> So you would say it's a requirement for the 'common' user with consumer hardware and typical bandwidth limitations to be capable of running a full node, and that otherwise Monero won't be decentralized/permissionless?
-
br-m
<reject-all:matrix.org> Is this sufficiently being taken into account with regards to BS/scaling?
-
br-m
<reject-all:matrix.org> @ofrnxmr:xmr.mx
-
br-m
<ofrnxmr:xmr.mx> I wouldn't say a requirement, but a preference
-
br-m
<ofrnxmr:xmr.mx> Currently Monero is running quite well on stressnet with 10+ MB blocks, including on a couple of old quad-core HDD systems and a single-core 2 GB RAM VM
-
br-m
<ofrnxmr:xmr.mx> As far as bandwidth goes, no. I don't think Monero aims to support limited bandwidth, though we are working to reduce bandwidth usage by upwards of 70% from current levels
-
br-m
<ircmouse:matrix.org> New Monero research paper just dropped! "Inside Qubic's Selfish Mining Campaign on Monero: Evidence, Tactics, and Limits"
-
br-m
<ircmouse:matrix.org> Credit to @ack-j:matrix.org for pointing it out in the MRL channel. Didn't see it posted here so wanted to share.
-
br-m
<datahoarder> ^ I commented about this paper in the MRL channel. TL;DR: they had limited, non-granular data, and estimated numbers similar to the ones we derived empirically from granular data
-
br-m
<elongated:matrix.org> @ofrnxmr:xmr.mx: How much storage do these "old" systems need to have, to be future-proof for 3-4 yrs?
-
br-m
<reject-all:matrix.org> @ofrnxmr:xmr.mx: Interesting, I'll try to get set up with stressnet on my PC.
-
br-m
<reject-all:matrix.org> But I do find something a bit unclear:
-
br-m
<reject-all:matrix.org> Users able to run full nodes without data centers/high-end equipment is by definition decentralization/censorship resistance.[... more lines follow, see
mrelay.p2pool.observer/e/5fvFs88KeDRoR29V ]
-
br-m
<ofrnxmr> Depends what you define as "common user" and "consumer hardware" and "typical bandwidth"
-
br-m
<datahoarder> @reject-all:matrix.org: there's a future with aggregated proofs that would allow a mixed version of pruned/full/archival. One where it downloads pruned txs, but each block has an aggregated proof that fully verifies the transactions. Archival nodes would keep the per-transaction full proofs, but they aren't needed for these lighter full verification nodes.
-
br-m
<datahoarder> Storage and bandwidth requirements for these would be vastly lower
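-
[note]
For scale: Monero's existing pruning already lets a node drop 7/8 of the prunable data (signatures/proofs) while keeping a 1/8 stripe to serve peers. The aggregated-proof idea sketched above goes further: a node would never download per-transaction proofs at all, only one aggregate proof per block, while still fully verifying every transaction.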
-
br-m
<ofrnxmr> tevador's proposal seems to intend to keep up with "consumer hardware" advancements
-
br-m
<ofrnxmr> For full/archival nodes
-
br-m
<ofrnxmr:xmr.mx> @elongated:matrix.org: future-proof with 10 MB blocks = ~3 TB for year 1 :)
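-
[note]
Back-of-the-envelope check on that figure (assuming 2-minute blocks, so ~720 per day): 10 MB × 720 × 365 ≈ 2.6 TB of raw block data in year one, so ~3 TB including database overhead is the right ballpark.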
-
br-m
<elongated:matrix.org> @ofrnxmr:xmr.mx: Isn't the consensus limit 100 MB?
-
br-m
<elongated:matrix.org> Just assuming some agency has made it its life mission to spam XMR 😅
-
br-m
<elongated:matrix.org> ~30 TB/yr? With the 100 MB limit
-
DataHoarder
making it highly centralized due to storage/compute/bandwidth costs -> then strike the central locations :)
-
br-m
<ofrnxmr:xmr.mx> @elongated:matrix.org: 100 MB is the packet size limit; it won't be hit for 6 yrs under tevador's or ArticMine's proposals
-
br-m
<ofrnxmr:xmr.mx> Yeah, people yelling FUD about the 90 MB limit don't realize that 90 MB blocks mean ~65 GB per day
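-
[note]
Same arithmetic for the cap: 90 MB × 720 blocks/day ≈ 65 GB of chain growth per day, or roughly 23 TB per year if the limit were actually sustained.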
-
DataHoarder
in chain data* the limit can strike due to other factors
-
br-m
<elongated:matrix.org> @ofrnxmr:xmr.mx: Thx to ArticMine fans
-
br-m
<ofrnxmr:xmr.mx> I don't even think ArticMine fans, just people who are claiming that I'm "breaking a promise" "for no reason"
-
br-m
<ofrnxmr:xmr.mx> Pointing to getmonero's retarded FAQ as proof that Monero blocks are currently "unlimited"
-
DataHoarder
People don't realize we use size_t and not arbitrary-precision integers for packet and block sizes
-
DataHoarder
can't even go past uint64_t block sizes!
-
br-m
<ofrnxmr:xmr.mx> > No, Monero does not have a hard block size limit. Instead, the block size can increase or decrease over time based on demand. It is capped at a certain growth rate to prevent outrageous growth (scalability).
-
br-m
<elongated:matrix.org> @ofrnxmr:xmr.mx: Needs to be fixed
-
DataHoarder
technically there is a cap even if we make packet sending less bad
-
DataHoarder
if block header itself reaches packet limit :)
-
DataHoarder
100 MiB block headers would be ... interesting
-
DataHoarder
just about 3 million tx hashes
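-
[note]
Rough check, assuming 32-byte tx hashes: 100 MiB / 32 B = 104,857,600 / 32 = 3,276,800, so ~3.3 million hashes fit in such a header.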
-
br-m
<ofrnxmr:xmr.mx> WHY ARE YOU BREAKING MONERO'S PROMISE! DATAHOARDER IS A FED WHO IS TRYING TO HIJACK MONERO
-
DataHoarder
8 EiB (2^63 bytes) block size :)
-
br-m
<ofrnxmr:xmr.mx> has anyone looked to see if Zano has the packet limit?
-
DataHoarder
damn, you can no longer address the storage of the world in a single uint64
-
br-m
<rbrunner7> After reading this, and the Twitter thread it links to, I fear we could be near a total breakdown of any sensible discussion about block sizes. Out go technical arguments and sound logical reasoning, and emotions totally rule the day:
old.reddit.com/r/Monero/comments/1p…ill_be_the_only_contender_for_sound
-
br-m
<rbrunner7> (There is currently a very detailed response to my comment from ArticMine caught in some filter, waiting for release.)
-
DataHoarder
> This is all really strange considering that the current average block size is 100 kB and it's not possible to up it up to anything really big in a fast manner.
-
DataHoarder
stressnet disagrees
-
br-m
<rbrunner7> Say, how much would it cost to produce a valid 50 MB block, mine it, and bring the network down with it? Can't be more than a few thousand dollars, I would guess? If I was rich I would be tempted to do that as an attempt to bring people to their senses.
-
DataHoarder
yeah, the temporary part (like, it doesn't even have to make it to the hardfork if fixed before) has been totally lost to the wind
-
DataHoarder
if it's a miner, rbrunner7, effectively "free"
-
DataHoarder
even better if they do 51%
-
DataHoarder
they can pad their own blocks with txs to grow the median for free
-
br-m
<rbrunner7> Ah, yes, of course, because you get your expenses back :)
-
DataHoarder
without majority hashrate you still need to spam, but you can get some better efficiency if you are already a mining pool
-
DataHoarder
pad what you can and spam the rest
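-
[note]
Why padding is nearly free for a majority miner, sketched with the classic CryptoNote penalty rule (current Monero uses block weights and two medians, but the shape is similar): a block of size S above the median M forfeits Penalty = BaseReward × (S/M - 1)^2, valid for M < S <= 2M. Fees in a self-mined block flow back to the miner, so the only real cost of growing the median with padded blocks is that reward penalty.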
-
br-m
<rbrunner7> Maybe we can win over M5M400?
-
br-m
<elongated:matrix.org> @ofrnxmr:xmr.mx: They are safe with 0.01 ZANO tx fees
-
DataHoarder
funnily enough, Qubic was padding their blocks with withheld txs
-
DataHoarder
... but the max number of txs they could mine was... 20.
-
br-m
<rbrunner7> No, seriously, I think there are people right now that can only return to their senses quickly by hitting them on the head with a hammer.
-
DataHoarder
so literally Qubic had set a hardcoded limit on how many transactions could be included
-
DataHoarder
Zano 50 MB limit!!!!
-
DataHoarder
actually, we also do have a 50 MB packet size
-
DataHoarder
and 100 MB for levin
-
br-m
<rbrunner7> Yeah, but anyway not a contender for "sound money", so ...
-
DataHoarder
MAX_RPC_CONTENT_LENGTH = 1048576 // 1 MB
-
DataHoarder
DEFAULT_RPC_SOFT_LIMIT_SIZE 25 * 1024 * 1024 // 25 MiB
-
br-m
<ofrnxmr:xmr.mx> Monero has that same 50 MB line
-
DataHoarder
so maybe we are in a worse place than we thought :)
-
br-m
<rbrunner7> Don't think you can get away with "unlimited logical block size, with a limit of 100 MB for individual block parts". See the word "limit" in there? That will be enough for people to freak out :)
-
DataHoarder
block parts = txs
-
DataHoarder
which is already bounded
-
br-m
<ofrnxmr:xmr.mx> rbrunner, you didn't read getmonero.org? Blocks are unlimited
-
DataHoarder
#define CRYPTONOTE_MAX_TX_SIZE 1000000
-
DataHoarder
oh also
-
DataHoarder
#define CRYPTONOTE_MAX_TX_PER_BLOCK 0x10000000
-
DataHoarder
^ also size limit
-
br-m
<ofrnxmr:xmr.mx> oh that's racist
-
br-m
<rbrunner7> I am a stealth "small blocker", what do you expect.
-
DataHoarder
that is 2^28
-
br-m
<ofrnxmr:xmr.mx> you have to change that to infinity
-
DataHoarder
1000000 * 2^28 bytes to TiB = 244 TiB blocks
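-
[note]
Checking that: 1,000,000 bytes × 2^28 txs ≈ 2.68 × 10^14 bytes; divided by 2^40 bytes/TiB that is ≈ 244 TiB, so the figure holds.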
-
DataHoarder
yet another limit
-
br-m
<rbrunner7> Well, maybe we could live with that limit if we drop blocktime down to 1 second.
-
br-m
<rbrunner7> More transactions that way.
-
DataHoarder
bring it down enough that speed of light and distance start making 10+ blocks orphanable, so all miners need to coexist in the same server rack
-
br-m
<kayabanerve:matrix.org> If we had asynchronous consensus, blocks could be produced per throughput, not an arbitrary time interval.
-
br-m
<rbrunner7> Note to self: If people around me throw reason and logic overboard and act almost purely on emotion, it doesn't help if I do likewise as my reaction to this happening.
-
sech1
With the current monerod limitations, miners will start limiting block sizes way before 100 MB
-
sech1
I mean performance limitations
-
sech1
Qubic even mined empty blocks for a while to be "more efficient"
-
sech1
P2Pool has a packet size limit of 128 KB which limits it to max 4k transactions per block
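-
[note]
That follows directly from the hash size (32-byte tx hashes): 128 KiB / 32 B = 131,072 / 32 = 4,096 transactions per announced block.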
-
br-m
<gingeropolous> we need to build a fab
-
br-m
<sgp_> shame on you all for prioritizing fcmp++. We all know that will lead to way worse privacy than simply allowing big blocks. shame!
-
br-m
<rbrunner7> A sad day for Monero. I can hear Monero's enemies rejoice.
-
br-m
<sgp_> This scaling death cult was a sleeping issue all along unfortunately. These network vulnerabilities finally being challenged is a step in the right direction
-
br-m
<rbrunner7> I would like to see LMDB manage a multi-terabyte blockchain file. Would be an interesting exercise.
-
br-m
<syntheticbird> @rbrunner7: LMDB2: electric boogaloo when
-
br-m
<boog900> I am happy to see some pushback on reddit; starting a propaganda war is stupid.
-
br-m
<ofrnxmr:xmr.mx> so.. wen serialization limit fixes? 7999 8867 9433
-
br-m
<boog900> cuprate has already done it :p
-
br-m
<ofrnxmr:xmr.mx> Since those limit blocks to ~30 MB
-
br-m
<ofrnxmr:xmr.mx> @boog900: Right, but we're fussing about the 100 MB limit from genesis when we have a 30 MB limit added in 2020
-
br-m
<ofrnxmr:xmr.mx> That's been fixed since, like, 2021
-
br-m
<boog900> I am really surprised it has taken so long for 9433
-
br-m
<boog900> like the others I kinda get taking a while to review and whatever, but that should be a simple change.
-
br-m
<ofrnxmr:xmr.mx> Considering 9433 is just a stop-gap/bandaid, I'm also surprised that it hasn't yet been reviewed/merged
-
DataHoarder
mainnet node just works :')
-
DataHoarder
> 6 files changed, 5 insertions(+), 292 deletions(-)
-
br-m
<ofrnxmr:xmr.mx> Has anyone compared 7999 and 8867 to see which actually performs better?
-
br-m
<boog900> Proposed some different scaling parameters:
seraphis-migration/monero #44#issuecomment-3617687600
-
br-m
<ofrnxmr:xmr.mx> 8867 has, AIUI, started to be merged in pieces, but 7999 is the smaller PR and (again, AIUI) has demonstrated much-improved performance
-
br-m
<gingeropolous> so these things could address the 90MB block limit. And have been sitting in PR limbo since 2021
-
br-m
<gingeropolous> so it'll soon be 5 years that these fixes have sat there.
-
br-m
<ofrnxmr:xmr.mx> @gingeropolous: The 30 MB limit
-
br-m
<ofrnxmr:xmr.mx> The 90/100 MB limit is unaddressed
-
br-m
<ofrnxmr:xmr.mx> There's also a 50 MB P2P packet limit, also inherited
-
br-m
<boog900> @ofrnxmr:xmr.mx: I am working on a proposal to change how blocks are synced. Hopefully fixing this and adding a couple nice features.
-
br-m
<boog900> @ofrnxmr:xmr.mx: Also I checked and I can't see where this is enforced.
-
br-m
<ofrnxmr:xmr.mx> Fluffy blocks during IBD w/ split messages?
-
br-m
<boog900> I mean, we could reuse the messages but that wouldn't be ideal IMO. But if you just mean the gist of fluffy blocks, then yes.
-
br-m
<ofrnxmr:xmr.mx> download all block headers first, then add the txs 🧠
-
br-m
<boog900> If you are taking the mick, then I don't see why. I want to add more to it than just that, for example adding support for not always sending the miner tx in a broadcast, and not disconnecting if the block has enough PoW but is invalid, plus some more. I would prefer we get all these changes in at once as it's a good time to do it.
-
br-m
<boog900> Having a spec we can discuss before I just put some code in Cuprate is the better way to do this.
-
nioc
I will comment here as I don't have a GitHub account. Regarding the comment "The spam attacks we have had in Monero were stopped by the short term median":
-
nioc
1) so we can distinguish spam :)
-
nioc
2) I thought the reason this wasn't successful is that the blocks did not grow at the expected rate, that there was a bug that kept fees too low to expand the blocks
-
nioc
I am thinking of the most recent episode, am I remembering this correctly?
-
br-m
<rucknium> nioc: Mostly the spam was using minimum fee. If the real users had auto-adjusted their fee to the next level, their txs would not have been delayed. I don't think the auto-adjust would have increased block size much because the vast majority of txs were the low-fee spam. More info:
github.com/Rucknium/misc-research/b…d/pdf/monero-black-marble-flood.pdf
-
nioc
yeah, I thought the low-fee spam was low fee due to incorrect auto-adjust
-
nioc
vague memories
-
br-m
<321bob321> CRS
-
br-m
<tigerix:matrix.org> I believe in the good will of the people in this community regarding the block size limit.
-
br-m
<tigerix:matrix.org> Satoshi also introduced a block size limit with good will, for safety reasons. This turned out to be the nail in the coffin for Bitcoin as money.
-
br-m
<tigerix:matrix.org> This shouldn't be done, because temporary things usually stay the way they are. That's just life experience![... more lines follow, see
mrelay.p2pool.observer/e/zuK5zc8KX19ob2VN ]
-
DataHoarder
it's already in the code and introduced. we are trying to remove it.
-
DataHoarder
it came with cryptonote.
-
br-m
<tigerix:matrix.org> Zcash has a block size limit of 2 MB and thus will never be more than private gold. Monero can be more than that!
-
br-m
<redsh4de:matrix.org> To be clear, the block size will still be dynamic. The limit is not arbitrary like 1 MB or 2 MB; it is literally just under what would break the Monero network with the current code if it gets there
-
br-m
<redsh4de:matrix.org> things start breaking at 32 MB already
-
br-m
<redsh4de:matrix.org> the cap is 3x that
-
br-m
<tigerix:matrix.org> It sounds reasonable, but isn't this a nice problem to have?
-
br-m
<tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
br-m
<redsh4de:matrix.org> @tigerix:matrix.org: It is not a nice problem to have if it renders the network unusable. What good is an unlimited block size if the nodes can't sync those blocks?
-
br-m
<redsh4de:matrix.org> The plan is to set a temporary cap on growth which would not be reached within 6 years. That time should be enough to resolve the underlying technical debt and fix the serialization issues with the C++ daemon that prevent us from safely scaling. After that, the cap can be forked away, because nobody wants it to be t [... too long, see
mrelay.p2pool.observer/e/4KOIzs8KLXBfUV9k ]
-
br-m
<redsh4de:matrix.org> On Bitcoin, the 1MB block size limit was set to avoid spam. The 90MB block growth cap here is to ensure Monero doesn't literally die by bigger blocks than the reference client can chew if it gets that much activity
-
br-m
<articmine> The 90 MB cap doesn't do anything that is not addressed in my proposal, unless the 100 MB bug is not fixed within six (6) years
-
br-m
<redsh4de:matrix.org> Yes, it is imperative to fix the 100MB bug asap
-
br-m
<tigerix:matrix.org> If Monero gets used more and more, we will see it way before the worst-case scenario happens.
-
br-m
<tigerix:matrix.org> To be fair, currently there is luckily no Blockstream in Monero that wants to make money by offering custodial services. But we never know which state actor is trying to steer Monero in the wrong direction.
-
br-m
<articmine> What in reality is going on here is that people are arguing for this cap in order to avoid dealing with this bug during the next 6 years
-
br-m
<articmine> @tigerix:matrix.org: There does exist a conflict of interest with strong links to US law enforcement
-
DataHoarder
21:36:03 <br-m> <tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
-
DataHoarder
that's what we are trying to do: not have to rush it. Then it gets forked away (or even removed before the next hardfork if we fix the issues)
-
br-m
<articmine> Way worse than Blockstream
-
br-m
<redsh4de:matrix.org> @articmine: Not setting the cap would be a "gun to your head" to get it fixed within 6 years, yes. Unironically can be a motivating factor
-
DataHoarder
like. it can be exploited today even worse with existing scaling
-
DataHoarder
the packet size cap is 50 MiB, levin deserialization 100 MiB ... and as listed last night there are other fixed caps existing already inherited from cryptonote
-
br-m
<articmine> With existing scaling one needs about 5 months
-
br-m
<articmine> In fairness to cryptonote: in 2013 they were looking at over 2000 TPS. That was less than VISA back then
-
br-m
<articmine> The TX size was like ~500 bytes
-
br-m
<articmine> It was still a bad idea back then
-
br-m
<gingeropolous> I mean, call me crazy pants, but I think 6 million transactions of FCMP is better privacy than 900 gajillion transactions of ring size 16. FCMP gets in, then it's optimize and fix all the things, like the PRs that have been sitting since 2021 that are kinda related, I think
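-
[note]
The arithmetic behind that comparison: with ring size 16, each input hides among only 16 candidate outputs, and those small sets can be whittled down further by intersection analysis across transactions. With FCMP++ membership proofs, the candidate set per input is effectively the entire on-chain output set (tens of millions of outputs), so far fewer transactions still yield a vastly larger anonymity set.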
-
br-m
<articmine> Actually no
-
br-m
<gingeropolous> i really think we're missing the forest for the trees
-
br-m
<articmine> Especially with quantum computers
-
br-m
<gingeropolous> well thats a whole other kerfuffle
-
br-m
<redsh4de:matrix.org> @gingeropolous: anon / perfect-daemon had made PRs that upgrade serialization/etc, right? Maybe we'll see something of that sort from him now again after his CCS got funded
-
DataHoarder
21:57:29 <br-m> <articmine> Especially with quantum computers
-
DataHoarder
^ especially. FCMP++/Carrot includes specific changes to improve PQ
-
DataHoarder
current addressing scheme does not.
-
br-m
<articmine> @gingeropolous: It is actually a valid research topic
-
br-m
<boog900> @articmine: are you saying RingCT is better than FCMP for QCs?
-
br-m
<datahoarder> ^ > <@datahoarder> The conflict of interest here is delaying FCMP++ due to scaling issues which would already cover the part of breaking surveillance for rings, so that must be prioritized. Adding sanity scaling parameters/adjustments so that can exist happily with current implementations can speed the process of deploying this in an agreeable way and stopping BS
-
br-m
<articmine> I am saying that with a QC, forward privacy can be broken by combining BS with QC
-
DataHoarder
goal shifting now, FCMP++ deals with BS, but suddenly, that's irrelevant
-
br-m
<articmine> One needs to hide the public keys. This is in the Carrot specification
-
br-m
<boog900> yes, currently you can break it without even the public keys. It's even worse.
-
DataHoarder
the carrot specification was changed recently :)
-
DataHoarder
I implemented it
-
br-m
<articmine> DataHoarder: It is not irrelevant, but it is not a complete panacea
-
br-m
<articmine> DataHoarder: Do you still need to hide the public keys to have forward secrecy?
-
DataHoarder
given current BS and you placing that much importance in it, I'd say FCMP++ completely neuters them except in specific future cases, which we built protection/fallbacks for (PQ turnstile test being one)
-
br-m
<boog900> @articmine: even if you did, this is a step up.
-
br-m
<articmine> It is a yes or no question.
-
DataHoarder
internal sends (change) are protected, even if they know all your public keys, ArticMine. non-internal sends are protected, as long as all the public keys are not shared.
-
DataHoarder
if they know explicitly your target address (not any) they can do quantum stuff there to learn amounts
-
DataHoarder
learning a different subaddress is not
-
br-m
<articmine> ... but some public keys are available to a BS adversary
-
br-m
<boog900> why are we talking about this at all?
-
DataHoarder
goal shifting boog900
-
br-m
<boog900> 100%
-
DataHoarder
articmine: you have 0,2, BS has 0,3, they learn nothing
-
DataHoarder
they have 0,2, they learn amounts.
-
DataHoarder
in quantum
-
br-m
<articmine> What does the sender have?
-
DataHoarder
if exchange sends, they have 0,2
-
DataHoarder
that is what BS has
-
DataHoarder
but then you receive with 0,3. they get nothing
-
br-m
<articmine> So then BS has 0,2 for some of the public keys
-
DataHoarder
0,2 IS the public key
-
DataHoarder
it's not shared with 0,3 or 1,2
-
DataHoarder
that's why they are derived with proper methods
-
DataHoarder
(I mean 0,2 as account/index)
-
DataHoarder
basically: exchange sends you money at address A (0,2).
-
DataHoarder
They can break it! (but they already have the info). They can later break, using quantum, outputs received specifically with A (0,2)
-
DataHoarder
you receive using B (0,3). this is not broken, this is an entirely new set of public keys
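-
[note]
Why (0,3) is an entirely new set of public keys, as a rough sketch of the published subaddress derivation (not specific to this chat): for account/index (i,j) != (0,0), the wallet computes m = H_s("SubAddr" || a || i || j), then spend key D = B + m*G and view key C = a*D, where a is the private view key and B the public spend key. Knowing the (0,2) keys reveals nothing about the (0,3) keys without a, because the mask m changes unpredictably with the index.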
-
br-m
<articmine> Yes, but all the current outputs were received with 0,2
-
DataHoarder
what do you mean
-
DataHoarder
so they already know the details of the outputs?
-
DataHoarder
then why do they need to learn them
-
DataHoarder
there isn't Carrot deployed yet
-
DataHoarder
that's why it's important to have it
-
br-m
<articmine> The existing outputs if the public keys are known
-
br-m
<articmine> The address
-
DataHoarder
there aren't existing Carrot outputs
-
br-m
<articmine> Correct, but if they are not transferred after FCMP++ they are still vulnerable
-
DataHoarder
they are vulnerable regardless; they're not Carrot outputs
-
DataHoarder
so yes, migrating to FCMP/Carrot is important
-
DataHoarder
without Carrot you don't even need pubkeys > 22:01:24 <br-m> <boog900> yes, currently you can break it without even the public keys. It's even worse.
-
br-m
<articmine> DataHoarder: You have to
-
DataHoarder
why are non-Carrot outputs even considered? they are broken under a quantum adversary directly
-
DataHoarder
when you bring it up. migrating would be a factor. I'm answering > 22:00:07 <br-m> <articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
-
br-m
<articmine> Because BS relies on correlations between outputs. So some broken, some not broken
-
br-m
<articmine> They are not isolated from each other
-
br-m
<articmine> The worst part of all of this is it doesn't even have to work. All the government has to do is convince a judge, not a professional mathematician, that it works
-
br-m
<articmine> Then they can convict an innocent person
-
DataHoarder
so back to the hypothetical that regardless of what we deploy, even sending everything to burn addresses, a judge can convict you
-
DataHoarder
so it doesn't matter what we do. close the chain, right?
-
br-m
<articmine> My point is that we need multiple layers. Not just one protection
-
br-m
<articmine> ... and yes sheer volume can and should be part of the equation
-
DataHoarder
so let's deploy these layers, no? especially the ones that cover PQ and ring surveillance
-
br-m
<articmine> I am not against FCMP++. What I am against is a fanatical push to keep the existing chain as small as possible
-
DataHoarder
I don't think it's a fanatical push to keep it small, but to allow it to grow safely without exploding and causing chain splits that require emergency changes
-
br-m
<articmine> To give an example: many of the devs are concerned about a growth rate of 2 and propose growth rates between 1.2 and 1.7. I come up with an effective growth rate of 1.085 and they ask for more and more drastic restrictions
-
DataHoarder
so bring people or gather devs to make it work now, instead of bringing people to bicker about a permanent 90 MiB size forever, which was not discussed at all. We can remove it before the next hard fork; let's do so, but otherwise a bomb is left planted (which is already there)
-
br-m
<boog900> way to misrepresent it
-
br-m
<boog900> disgusting
-
br-m
<boog900> trying to win the propaganda war again
-
br-m
<boog900> I have said my position again and again, here it is:
seraphis-migration/monero #44#issuecomment-3617687600
-
br-m
<boog900> your 1.085 increases no matter what
-
br-m
<articmine> Your position is 1.2.
-
br-m
<boog900> not exactly equivalent
-
br-m
<articmine> I am offering 1.085
-
br-m
<boog900> oh my days
-
br-m
<articmine> @boog900: Over time it is
-
br-m
<boog900> if my proposal was really more, you would like it more, the more dangerous the better, right?
-
br-m
<articmine> @boog900: No, I am arguing for short- to medium-term flexibility
-
br-m
<articmine> Long term no more than 1.5x per year
-
br-m
<articmine> I originally had a long term sanity median of 1000000 blocks
-
br-m
<articmine> With a growth rate of 2
-
br-m
<articmine> I actually believe that Tevador's proposal is way better for a sanity cap
-
br-m
<boog900> @articmine: where? not in the proposal I am looking at
-
br-m
<articmine> I have given multiple talks with the long term sanity median of 1000000 bytes
-
br-m
<boog900> ah, so this one proposal in the past that wasn't the one you wanted for FCMP?
-
br-m
<boog900> like come on
-
br-m
<articmine> The last was at MoneroKon 2025
-
br-m
<boog900> I won't be talking about this with you anymore; we've gone round in circles enough over the past couple weeks.
-
br-m
<articmine> I even discussed this there with Jeffro256 who told me it was unnecessary. That is why I took it out
-
br-m
<articmine> Of course this was for FCMP
-
br-m
<articmine> @boog900: Then don't.
-
br-m
<articmine> I know what is really going on here. It has nothing to do with scaling parameters
-
DataHoarder
we have pointed at the specific code that would break already. bring people, or let the existing people fix it, without sending in hordes over misinterpreted social messages
-
DataHoarder
there isn't a consensus for 90 MiB anymore, besides the 5m where there was an abstain from you. so, why all of this?
-
DataHoarder
same limit exists on all other cryptonote derivations, too
-
DataHoarder
in the end it doesn't matter if the technical limit is in or not. BS will exploit it and kill the network :)
-
DataHoarder
or well, have some emergency deployment. wouldn't that be fun
-
sech1
I think it's more of a philosophical question. Any software has its limits. Even if Monero declares "unlimited" block size, there's always a physical limit of what the network can handle. The dev team's responsibility is to ensure that this limit is always bigger than the real-world usage at any time, but setting a fail-safe (hard cap at the known value of the limit) is perfectly normal, assuming that this hard cap gets increased with every node optimization (every new software release)
-
br-m
<articmine> I ABSTAINED; that does not mean I support it. When I see posts on r/BTC about this, it tells me that this 90 MB limit is very controversial outside of the MRL
-
DataHoarder
we couldn't do an emergency release for DNS checkpoints either, because of existing technical debt. it's not a first
-
DataHoarder
ofc, you abstaining can still mean you are against. usually it means that you let the rest of the consensus move forward, not that you instead try to misrepresent it elsewhere
-
br-m
<articmine> I am not misrepresenting this
-
DataHoarder
not what I have seen in reddit comments, unless that's an impostor; if so, they have done a great job.
-
br-m
<articmine> I was actually shocked by the reaction to me including Tevador's proposal in mine
-
DataHoarder
as sech1 said "assuming that this hard cap gets increased with every node optimization (every new software release)" < I think that's the point of the technical cap.
-
sech1
So I'm against making this a consensus rule (fixed max block size). Rather make it a constant that can be changed in a point release, and ensure that scaling rules don't let the network reach the cap quickly
-
sech1
so the team has the time to react if network load changes
-
sech1
quickly -> in less than 2-3 years
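-
[note]
A minimal sketch of what sech1 describes, with invented identifiers (this is not actual monerod code): the cap lives in a plain constant on the relay path, so raising it is a point-release change rather than a consensus change.

// Hypothetical illustration only; names are made up.
#include <cstdint>

// Not a consensus rule: bumping this needs only a point release.
constexpr uint64_t BLOCK_SANITY_CAP_BYTES = 90ull * 1000 * 1000; // ~90 MB

// Relay-path check: oversized blocks are dropped, not marked invalid,
// so a later release with a higher cap can still accept and sync them.
inline bool should_relay_block(uint64_t block_weight_bytes)
{
    return block_weight_bytes <= BLOCK_SANITY_CAP_BYTES;
}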
-
br-m
<articmine> sech1: Honestly this does not work
-
sech1
I disagree
-
br-m
<articmine> It actually broke Bitcoin in 2013
-
sech1
Make scaling rules work such that 90 MB can't be reached in less than 3 years
-
sech1
If blocks start to grow, the team has 3 years to react, optimize the node, and increase the hard cap
-
br-m
<articmine> sech1: My proposal means it cannot be reached for over 6 years
-
DataHoarder
they can feed these blocks via other means, not chain growth
-
br-m
<articmine> Yet this is not enough for some people
-
DataHoarder
it will get deserialized
-
sech1
yes, 90 MB blocks can be crafted and sent via RPC, or even P2P as a new top chain candidate. Nodes will have to process them
-
sech1
which is why a limit is needed, but not as part of consensus rules
-
br-m
<articmine> DataHoarder: How?
-
sech1
it's a technical limitation, not a consensus rule
-
sech1
you have to choose between the node crashing or the node just refusing such blocks
-
sech1
either way, there is a hard cap (implicit in the first case)
-
br-m
<articmine> If this can be done outside of consensus rules then I change my vote from ABSTAIN to YES > <sech1> which is why a limit is needed, but not as part of consensus rules
-
br-m
<articmine> On the 90 MB cap
-
DataHoarder
> <sech1> which is why a limit is needed, but not as part of consensus rules
-
DataHoarder
^ semi-consensus: if 90 MiB blocks cannot be broadcast, then nodes can fall behind if the network is fed txs in specific ways
-
DataHoarder
if it could be done in point releases, that'd be nice.
-
sech1
if it gets to the point when we have 90+ MB blocks and some nodes can't sync, these nodes have to update, right?
-
sech1
Because the fixed version will be available at this point
-
sech1
Remember the 3+ years of lead time due to scaling rules
-
br-m
<articmine> So a node relay rule. No problem here
-
DataHoarder
unless those are mining nodes and they make a longer chain, sech1
-
DataHoarder
the tx node relay rule gets skipped for found blocks with those txs, for example
-
sech1
Then it's just miner consensus, not a problem
-
sech1
I think pools will self-regulate when blocks get big
-
sech1
They won't allow their nodes to become too slow
-
br-m
<articmine> DataHoarder: A longer chain that crashes
-
DataHoarder
maybe it can be brought to the table at the next MRL meeting with more details
-
sech1
so they'll limit blocks to a few MB or whatever value their servers can handle
-
DataHoarder
that longer chain has smaller blocks, so no, it doesn't, ArticMine
-
DataHoarder
that's why they made it longer
-
DataHoarder
but then - the existing limit is already there :')
-
br-m
<articmine> DataHoarder: This did not work for Bitmain in 2018
-
DataHoarder
though a well-placed limit is consistent, instead of having secondary pieces throw exceptions or error out
-
br-m
<articmine> That is history
-
DataHoarder
people moved from MineXMR but they went to Qubic
-
DataHoarder
:)
-
br-m
<articmine> Yes, but nodes refusing to relay blocks over 90 MB because they crash is very difficult to fight.
-
br-m
<articmine> Then there is my proposal, which prevents blocks over 90 MB for over 6 years
-
br-m
<syntheticbird> DataHoarder: if people = cfb and its LLM bots, then yeah, surely
-
br-m
<articmine> In consensus
-
br-m
<articmine> The way to harden a node relay rule on this is to set the node relay cap at 45 MB. Then miners will need over 51% to override the nodes.
-
br-m
<articmine> So I will support a node relay rule at 45 MB
-
br-m
<diego:cypherstack.com> Anything that delays FCMP is bad IMO
-
br-m
<diego:cypherstack.com> Once FCMP is ready, we need to get it live. Monero's privacy is currently porous.
-
br-m
<diego:cypherstack.com> I have no such links, and I only care about the proposal that gets us to FCMP++ the fastest. > <@articmine> There does exist a conflict of interest with strong links to US Law enforcement
-
br-m
<diego:cypherstack.com> And I would say anyone who pushes anything other than FCMP++ as an absolute priority is the one with suspect intentions, given how substandard Monero's current privacy protocol is in comparison to other serious privacy tech.
-
br-m
<diego:cypherstack.com> Though that's potentially an inflammatory argument that looks too much at people, so I say it very lightly.
-
br-m
<diego:cypherstack.com> I know I am in no way the decision maker anywhere, but I want an FCMP launch in Q2 2026
-
br-m
<diego:cypherstack.com> And I have been burning my crypto boy candles at both ends to get it there.
-
br-m
<diego:cypherstack.com> FCMP first, scaling immediately after if need be.
-
br-m
<diego:cypherstack.com> It's not an indefinitely pushed discussion. It is just a very very VERY distant second to get FCMPs out. Once out, it can be first on the agenda.
-
br-m
<diego:cypherstack.com> One more elaboration, if I may: the cryptographers working for me are also concerned about Monero scaling. We would be among the first to insist on and contribute to further scaling discussions after FCMP goes live.
-
br-m
<rucknium> @diego:cypherstack.com: If I can prod you a bit, your position is also not an enlightened one. "Set scaling discussion aside" on its face means keep the current scaling algorithm. Many people think the current scaling algorithm allows large blocks too quickly. This is the "anchoring" problem in negotiations. It also shows how [... too long, see
mrelay.p2pool.observer/e/w5rH0c8KTHNFdy14 ]
-
br-m
<diego:cypherstack.com> I am "fix it immediately after fcmp" not "fix it later"
-
br-m
<articmine> One can actually do FCMP++ with the current scaling untouched
-
br-m
<diego:cypherstack.com> Later is nebulous. "Immediately after FCMP" means the expectation of a hard fork within the next year, if not sooner, after FCMP to implement scaling solutions.
-
br-m
<articmine> This does actually work
-
br-m
<diego:cypherstack.com> @articmine: This was my understanding, yes.
-
br-m
<diego:cypherstack.com> I hate to break it to everyone, but we don't have a massive flood of people just waiting for FCMP before they do all of their txs, which would bring us right to the brink right away.
-
br-m
<diego:cypherstack.com> We have time. Not infinite time. And not enough time to rest on our laurels, but a bit of time. A year's worth at least.
-
br-m
<diego:cypherstack.com> (yes "a year" is pulled out of my butt)
-
br-m
<diego:cypherstack.com> Since my 4 crypto boys have been picking apart Monero and FCMP non-stop for the past year, it has become very clear to me that nothing is remotely as important as getting FCMP++ live. And if the network won't blow up in a year (it won't), I don't think we can afford delays.
-
br-m
<diego:cypherstack.com> I'm preaching to the choir here, but you all know privacy is an arms race, and RingCT might as well be 1950s tech at this point with how fast the space moves
-
br-m
<articmine> @diego:cypherstack.com: 1950s bandwidth did not support centralized ledgers such as VISA at even a fraction of what Monero currently does in transactions per second
-
br-m
<diego:cypherstack.com> @articmine: I've attended every one of your C3 talks. I know. :P
-
br-m
<diego:cypherstack.com> And it was hyperbole anyways. My point is, the arms race moves fast, and Monero hasn't taken a meaningful step forward since RingCT.
-
br-m
<articmine> ... now, supporting FCMP with the current scaling is a piece of cake compared to that
-
br-m
<diego:cypherstack.com> And raising the ring size barely counts
-
DataHoarder
ring size 1024 ought to be enough
-
br-m
<articmine> This assumes that the US Government and Chainalysis can fend off the legal counter attack in the courts. If they fail we could see a sudden flood of transactions on chain > <@diego:cypherstack.com> I hate to break it to everyone, but we don't have a massive flood of people just waiting for FCMP before they do all of their txs which will bring us right to the brink right away.
-
br-m
<articmine> This is an example of why I am so opposed to a lower-than-2x growth rate for the long term median.
-
br-m
<articmine> By the way if they fail, I am seriously considering adding fuel to the fire by pursuing legal action in the EU against the delisting of Monero from centralized exchanges.
-
br-m
<articmine> By the way this is orthogonal to the proposed 90 MB cap in the consensus rules.