-
br-m
<321bob321> USA stopped dialup soo
-
br-m
<0xfffc> > <@sgp_> to be fair we are nerds who seek out good internet (selection bias). my dumb apartment provides only 40up/40down by default
<0xfffc> This ^ is average for most people. Of course participants in this discussion aren't going to be representative of average users.
-
br-m
<ammortel> Hello, I learned through xenu's video that you want to limit the block size? That is scary. Isn't it possible to improve the daemon? To make it resilient to bigger blocks? That would be the right answer to a scaling problem. Limiting Monero sounds a lot like a defeat.
-
br-m
<rbrunner7> Well, as far as I understand, the discussion is to set an absolute upper limit that is like 100 times or so higher than the current average blocksize. "Unlimited block size" can't really be unlimited, you can't have 1 GB blocks ...
-
br-m
<rbrunner7> IMHO it would be intellectually dishonest to bury our heads in the sand and pretend that we don't need any limit whatsoever, just to "feel good"
-
br-m
<rbrunner7> And I feel it is a bit funny that nobody does any rough calculations of what those monster block sizes would mean for blockchain size growth. Could be we would have a multi-terabyte blockchain in no time. "Unlimited" block size, but impossible for 95% of all potential users to run their own node, isn't a good trade-off, if you ask me.
-
br-m
<ammortel> From what I understand, Monero already has an upper block size limit? It just grows as demand grows. That is good. Maybe slow it down past a critical point, where you fear a system breakdown?
-
br-m
<321bob321> normally interwebz connections are asymmetric
-
DataHoarder
ammortel: On the discussion of changing that limit so it scales better for FCMP++: there is an absolute technical limit that can cause a chain split and breakage, depending on how blocks are fed to clients, the 100 MiB packet limit
-
DataHoarder
stressnet has shown things start breaking before that, anyhow, but that's the existing limit we inherited. The suggested technical limit was 90 MiB, to prevent that from being exploited (now or in x years) until that has been refactored (at which point we'd hardfork out that limit)
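For context, a minimal sketch of where that inherited limit bites; the constant matches epee's levin default as far as I know, but the check itself is illustrative, not monerod's actual code path:

#include <cstdint>

// Illustrative sketch only. LEVIN_DEFAULT_MAX_PACKET_SIZE is the inherited
// ~100 MB levin framing cap; a block relayed as one packet above it is
// simply dropped, so a mined-but-undeliverable block can split the network.
constexpr uint64_t LEVIN_DEFAULT_MAX_PACKET_SIZE = 100000000;

bool packet_size_ok(uint64_t announced_size)
{
  return announced_size <= LEVIN_DEFAULT_MAX_PACKET_SIZE;
}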
-
DataHoarder
though the discussion is more nuanced as you can see from the amount of messages on MRL room + lounge :)
-
br-m
<ofrnxmr> 90mb blocks = 65gb of chain growth per day, fwiw. It's also not a permanent limit and should be removed as soon as the sw and hw can handle it
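The arithmetic behind that figure, assuming Monero's 120-second target block time: 86400 s/day ÷ 120 s/block = 720 blocks/day, and 720 × 90 MB = 64,800 MB ≈ 65 GB of worst-case chain growth per day.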
-
br-m
<untraceable> What does it take to get to this 90mb limit? How much time?
-
DataHoarder
on current parameters, not much (stressnet example):
> 15:44:32 <rucknium:monero.social> We hit 25MB blocks in less than 24 hours of dedicated spamming.
-
DataHoarder
but it's limited by current tools we have to spam efficiently
-
br-m
<rucknium> ^ This costs a lot in fees. Blocks full of tier-3 fees.
-
DataHoarder
yeah. it wouldn't be economical for an attacker
-
br-m
<rucknium> 2+ XMR in fees per block.
-
DataHoarder
but past events and Qubic have shown that doesn't need to be the case if the point is to be malicious
-
br-m
<ammortel> ok so the solution is to make Monero pump in price. Just wait a couple of months and this attack will be considered too expensive
-
br-m
<ofrnxmr> DataHoarder: Qubic etc can mine their own txs in 51/100 blocks and pay nothing
-
br-m
<ofrnxmr> The other 49 blocks can be empty
-
DataHoarder
yeah, any miner-initiated growth is free
-
DataHoarder
even if they fail, they lose nothing (just pad their blocks with their own txs)
-
DataHoarder
ammortel: miners can do it for free, or can have other factors. blocks at high heights can also be fed to RPC or similar endpoints
-
br-m
<sgp_> I think the risk of blocks being filled with spam is understated. Yes, a miner can fill in blocks without paying fees. But also, a miner could be paid almost nothing to fill in the blocks with someone else's transactions, since the minimum fee is only relay enforced, not consensus.
-
br-m
<sgp_> So it's essentially fee-free, even for non-miners in relatively simple scenarios, to fill blocks up. The only actual fee that comes into play is the block reward penalty, which even impacts miners.
-
br-m
<sgp_> Begging miners to not accept payment to include transactions in their mined blocks is a fundamentally unsafe assumption, imho. There's no measurable incremental cost to the miner for accepting payment on these.[... more lines follow, see
mrelay.p2pool.observer/e/-8Sem88KTFlrblJZ ]
-
br-m
<sgp_> Would the large hashrate miners decide it's not worth it to accept e.g. $1 of payment to fill the rest of the block? Maybe in practice they would refuse? But I think that's a bad assumption to rely on
-
br-m
<sgp_> ^ fill the rest of the block up until the penalty would be imposed
-
br-m
<ammortel> I have the feeling we are talking about things theoretically possible but practically non-existent
-
br-m
<gingeropolous> what, like an 18-block re-org?
-
br-m
<ammortel> Yeah, we all got scared of Qubic but in the end it was a scam. I doubt miners can get paid more long-term with their scheme. And if they try that again, it just takes us building a competing scamcoin like theirs to ruin their efforts and attract a good portion of the greedy miners in their place
-
br-m
<sgp_> outscam the scammers
-
br-m
<sgp_> in any case I just want to raise awareness that we should probably assume for security planning purposes that blocks can be filled up to the point the block reward penalty would start to be imposed for almost nothing. Whether that's $0.01, $1, or $100 in practice, who knows for sure. The block reward is a real, strong penalty
-
br-m
<ofrnxmr> isn't the penalty only imposed based on fees that replace it?
-
br-m
<sgp_> wdym
-
br-m
<ammortel> I don't get the sentence "the block reward is a real, strong penalty"
-
br-m
<elongated:matrix.org> Can’t we have consensus on fees?
-
DataHoarder
it being a scam doesn't discount the damage they did
-
DataHoarder
and they did this while losing money
-
DataHoarder
and could have pulled off this ^ attack if they didn't actually have bad tech
-
DataHoarder
(you can trigger this via RPC already, too)
-
DataHoarder
it's practically existent, and more so once we strictly define scaling that gets to those numbers (instead of it happening due to inherited code)
-
sech1
Miners get their income from block reward + fees, so if they do spam transactions, yes they get fees back, but they lose on the block reward
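For the penalty being referred to here: the CryptoNote rule scales the base reward down quadratically once block weight exceeds the recent median. A minimal sketch, with hypothetical names and details like the minimum-median clamp omitted:

#include <cstdint>

// For block weight B against recent-median weight M:
//   B <= M       -> full base reward
//   M < B <= 2M  -> reward * (1 - ((B - M) / M)^2)
//   B > 2M       -> the block is invalid outright
uint64_t penalized_reward(uint64_t base_reward, uint64_t B, uint64_t M)
{
  if (B <= M) return base_reward;
  if (B > 2 * M) return 0; // such a block wouldn't be accepted at all
  const double excess = double(B - M) / double(M);
  return uint64_t(base_reward * (1.0 - excess * excess));
}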
-
br-m
<monero.arbo:matrix.org> as to why BCH can do 256 MB blocks and we can't: it's as I've been saying, the compute cost of XMR transactions
<monero.arbo:matrix.org> > <sech1> Even my 5G mobile internet is 930 Mbit down/91 Mbit up. It's not 1995 anymore.
-
br-m
<monero.arbo:matrix.org> as far as comments like this, I would implore everyone to remember that not all Monero users live in modern countries with widespread 5G, fiber, etc
-
DataHoarder
^ it'd be amazing if we could have "stream proving" to have a half-full/pruned node and allow those txs to be smaller while being proven
-
DataHoarder
"Chain proving", I am interested on any details that come out of that for FCMP++ for the p2pool aggregation, though not strictly necessary
-
br-m
<rucknium> @monero.arbo:matrix.org: The 90MB block size cap was suggested to prevent going over Monero's 100MB p2p packet size limit, which doesn't directly involve cryptographic computation.
-
br-m
<rucknium> It's not a physical limit. It's arbitrary, set 11 years ago. Just need some good programmers to go in there and fix it.
-
br-m
<gingeropolous> is it a fix that's needed or just removal
-
br-m
<rucknium> I don't think anyone knows what, if anything, would go wrong if the packet size limit was lifted. Correct me if I'm wrong. I doubt the original Cryptonote developers put a lot of analysis into that limit. Even if they did, the analysis is out of date by 11 years of technological progress.
-
br-m
<boog900> it's a nice way of preventing someone from sending you 10 TB of epee binary data. I think the best solution is going to be changing how we sync blocks so we can break blocks into multiple packets, so the 100 MB packet limit will still be there.
-
sech1
this ^
-
sech1
no need to invent anything, we can take some ideas from torrents
-
DataHoarder
Especially when the individual parts are transactions
-
DataHoarder
So it's already explicitly like torrents, we have the piece hashes. The issue here is that it's desired to not allow receiving arbitrary txs unless within the context of receiving a block (which can have arbitrary txs), unless they are already in your mempool
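One hypothetical shape for that torrent-style sync, sketched only to make the idea concrete (all type and field names are invented here):

#include <array>
#include <cstdint>
#include <vector>

// Announce the header plus the ordered tx hash list ("piece hashes"),
// then the receiver requests only the txs it lacks, each request and
// response fitting well under the existing packet limit.
struct block_announce
{
  std::vector<uint8_t> header_blob;                // serialized header + miner tx
  std::vector<std::array<uint8_t, 32>> tx_hashes;  // one hash per tx, in order
};

struct tx_chunk_request
{
  std::array<uint8_t, 32> block_id;  // txs only served in this block's context
  std::vector<uint64_t> tx_indices;  // the pieces the requester still needs
};

Tying requests to a block_id reflects the constraint above: arbitrary txs aren't served outside the context of a block (or the mempool).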
-
br-m
<syntheticbird> @boog900: The day is 4th December 2025. War is arriving, anonymity is slowly but surely being taken away from the average individual. And yet we're dealing with epee limitations.
-
br-m
<syntheticbird> once upon a time, madlads were taking a big room at an office building or university, playing video games, eating pizza and talking for MONTHS without seeing a glimmer of sunshine. That's how we got TCP and X11. Maybe we should do the same
-
br-m
<syntheticbird> joke aside, maybe we should start listing all epee limitations somewhere and think of a major overhaul of the protocol
-
DataHoarder
1. obscure implementation
-
sech1
If we do a full rewrite, I'd go with google protobuf or something similar + re-design protocol messages to never have this "100 MB" limit problem
-
DataHoarder
protobuf is maybe too malleable (as most specific changes we do are versioned), but something in that area would be nice
-
DataHoarder
also along the same lines:
flatbuffers.dev
-
DataHoarder
"Access to serialized data without parsing/unpacking" :)
-
br-m
<syntheticbird> proposal
-
br-m
<syntheticbird> instead of using packets
-
br-m
<syntheticbird> we do remote memory access between nodes
-
br-m
<syntheticbird> much faster
-
DataHoarder
RDMA
-
br-m
<syntheticbird> ye
-
DataHoarder
oh no. it uses network packets!!!
-
br-m
<syntheticbird> i meant serialized data
-
br-m
<syntheticbird> you get the idea
-
br-m
<syntheticbird> I'm sure RDMA is what epee envisioned to be
-
DataHoarder
along the same line of thought: maybe a Transaction v3 is in the cards. remove all the unused accumulated fields
-
DataHoarder
as checking the version doesn't even mean you'll be able to decode it, given the ins/outs change and have a custom, non-prefixed format, so you can't skip them
-
DataHoarder
using tx version as the combo of "ringct ver + range proof ver + serialization ver" all in one might be interesting.
-
DataHoarder
so v3 = coinbase, v4 = FCMP++ + Bulletproofs+ (and no rings, just images)
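A sketch of what that version-as-format mapping could look like; the v3/v4 assignments are just the suggestion above, the rest is hypothetical:

#include <cstdint>

// Hypothetical: the version byte alone fully determines the wire layout,
// folding together ringct ver + range proof ver + serialization ver.
enum class tx_version : uint8_t
{
  v1_pre_ringct = 1, // existing
  v2_ringct     = 2, // existing; proof type inferred from other fields
  v3_coinbase   = 3, // suggested: coinbase-only layout
  v4_fcmp_bpp   = 4, // suggested: FCMP++ + Bulletproofs+, key images only
};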
-
DataHoarder
that could also finally let us bring the tx_extra-specific fields (ephemeral pub) into the output field, and leave extra even smaller for minimal usage
-
DataHoarder
FCMP++ + carrot already changes both input format and output format (Carrot)
-
DataHoarder
I recently implemented decoding/encoding of all transaction versions + ringct + range proofs, and it's a joy to have the encoding of specific fields also change
-
DataHoarder
uint32 -> maybe it can be varint -> actually let's make it byte
-
br-m
<kayabanerve:matrix.org> @boog900:monero.social: Have we considered splitting how we handle blocks _and_ lowering the packet limit _and_ reworking the RPC as @jberman:monero.social: told me they theorized some time ago _and_ upgrading wallets accordingly _and_ finally replacing the epee impl if not epee itself?
-
br-m
<kayabanerve:matrix.org> DataHoarder: a v3 is far less needed when you model transactions as tagged unions like -oxide does. There'd still be notable benefits however, mainly how we'd replace an enum per output with an enum of the entire transaction.
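In C++ terms, the tagged-union modeling described here is roughly the following (purely illustrative; -oxide is Rust, and these struct names are invented):

#include <variant>

// One tag for the entire transaction instead of an enum per output;
// decoding dispatches once on the top-level alternative.
struct tx_v1        { /* pre-RingCT fields */ };
struct tx_v2_ringct { /* RingCT fields */ };
struct tx_fcmp_bpp  { /* FCMP++ + BP+ fields, key images only */ };

using transaction = std::variant<tx_v1, tx_v2_ringct, tx_fcmp_bpp>;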
-
DataHoarder
yeah, unions are fine
-
DataHoarder
it still wastes bytes, especially in the inputs
-
DataHoarder
or outputs ... carrot could have ended up with a pub per out, except you can reuse it in a specific case
-
DataHoarder
carrot also took an existing type for tx outputs
-
DataHoarder
it was unused, not valid, but it was defined nonetheless
-
DataHoarder
so now you need to be aware of the time/context you decode the tx for different meanings
-
DataHoarder
(type 0 was txout_to_script before, now it's txout_to_carrot_v1)
-
DataHoarder
> typedef boost::variant<txout_to_script, txout_to_scripthash, txout_to_key, txout_to_tagged_key> txout_target_v;
-
DataHoarder
to
-
DataHoarder
> typedef boost::variant<txout_to_carrot_v1, txout_to_scripthash, txout_to_key, txout_to_tagged_key> txout_target_v;
-
DataHoarder
so it still has unused ones :)
-
DataHoarder
same for vin, defined inputs for deserialization but marked invalid when checking later
-
DataHoarder
we could get rid of all these permanently for higher version txs
-
DataHoarder
(typedef boost::variant<txin_gen, txin_to_script, txin_to_scripthash, txin_to_key> txin_v;)
-
DataHoarder
only gen and to_key are used
-
DataHoarder
gen could be specific to coinbase transactions if they're split out, or kept as v2
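What that cleanup could look like for higher-version txs, purely as a sketch (typedef names invented; the component types are the cryptonote_basic ones quoted above):

typedef boost::variant<txin_gen, txin_to_key> txin_v3; // gen only in coinbase
typedef boost::variant<txout_to_key, txout_to_tagged_key, txout_to_carrot_v1> txout_target_v3;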
-
br-m
<kayabanerve:matrix.org> @jeffro256:monero.social: Why did we take an existing byte's definition? I'm fine saying that byte wasn't in the protocol to begin with, I'm just curious why not continue incrementing from where we left off.
-
DataHoarder
now that we are considering cleaning up code tech debt: whatever net serialization does, it has to serialize these tx fields, and it deserializes THEN validates/forbids types or values
-
DataHoarder
yeah that reuse was fun to find, I assumed it'd increase when I implemented mine :)
-
br-m
<jeffro256> I like removing dead code, it tickles my brain
-
DataHoarder
oh, you also killed half of txin_to_scripthash because it used that
-
DataHoarder
why not all of them?
-
br-m
<jeffro256> I could do that, I just didn't want to include it in the FCMP++ branch because it can also be done outside in its own PR
-
br-m
<jeffro256> Also, to keep the Boost variant serialization backwards-compatible, you can't shift the positions of the types of the variant's type list, since it more or less uses the which() value as a prefix in the data to dispatch deserialization
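To make the constraint concrete, a comment-level sketch of why the type list's order is load-bearing:

// boost variant serialization writes the zero-based which() index as a
// tag, then dispatches on it to decode the payload:
//
//   typedef boost::variant<txout_to_carrot_v1, txout_to_scripthash,
//                          txout_to_key, txout_to_tagged_key> txout_target_v;
//   // serialized form: [tag = which()][payload...]
//
// Replacing the never-valid type at slot 0 in place keeps every other
// type's tag stable; deleting slot 0 outright would shift txout_to_key
// and friends down by one and break decoding of all existing data.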
-
br-m
<datahoarder> The following tx decodes properly on current monerod:
-
br-m
<datahoarder> curl --verbose
127.0.0.1:18081/send_raw_transaction -d '{"tx_as_hex":"020008006611e0bb6d7975f431011d1857fb3cd8b935befa4a66feb178f5547abff103079c8ad8b3c1f9f1c10504873c71bd00d2ff828745d194c45749da4ce123f484de962dd43c80bb6246b3a5e1f3b4757ea2dcca9d9291f5be4705f1d04a34b500c444793d0aa1c3c790829a45fb563c09c32220c90d8b4fa52725 [... too long, see
mrelay.p2pool.observer/e/zNXioc8Ka2ctM2lw ]
-
br-m
<datahoarder> [... more lines follow, see
mrelay.p2pool.observer/e/zNXioc8Ka2ctM2lw ]
-
br-m
<datahoarder> it's a serialization/deserialization path that we currently handle but that is never used/handled in the rest of the code. it should be entirely ripped out. Maybe more of a #monero-dev:monero.social discussion anyhow
-
br-m
<datahoarder> (then keep a bogus "invalid" type that errors when serializing/deserializing directly, instead of being empty)
-
br-m
<datahoarder> this could be used to identify monero node versions (who has upgraded or not) via P2P entirely
-
br-m
<datahoarder> Here's one that decodes on both, still using unused types:
-
br-m
<datahoarder> 020008003f11f7990ee84560886669fd9e5136c027e379b6bd15400a2b2601c8fe74a3f9fadfecbc9894c3b649059703a7a7380063fb2205d6528b14d004e5b78f0853cec657bc7d9e4166349dbb3c9df7f2082ff29af7b1c3abe2c97b03fd714e005b283929ad3272fb5807f72455c9ff4412c542b67fc5fb3cc8fe54120c207f15a99ff5a9faf7fc9844070f0d7a79a2f7fd007a4af36741daa9d77fd76242837b0fe [... too long, see
mrelay.p2pool.observer/e/wvHwoc8KeTdjcjVR ]
-
DataHoarder
you can even send mixed input/output types on the wire. they only get checked later, and critically, after decoding
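Spelled out as pseudocode comments (illustrative only), the pattern is:

// Current shape: decode accepts any variant tag, validation rejects later.
//   tx = decode(blob);   // succeeds even for "invalid" in/out types
//   validate(tx);        // only here do forbidden types get refused
// Ripping the dead types out of the variant for v3+ would make decode
// itself the point of rejection.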
-
br-m
<33monero:matrix.org> I think we have gotten too complacent
-
br-m
<33monero:matrix.org> we have to remember it's really us vs the world (in an oversimplified way). we are not noobies, we know every crypto is transparent, except Monero. think like the State would: up Zcash (backdoor) and attack Monero from the inside
-
br-m
<33monero:matrix.org> putting a cap is 100% a Fed idea, and what a Fed would do; everything else is theatrics. I don't think capping Monero is the way, at its very essence it's a limitation
-
br-m
<33monero:matrix.org> who really thinks there aren't feds in here, and that they haven't infiltrated development
-
br-m
<datahoarder> The cap already exists. We are trying to remove it and make the current one not blow up.
-
br-m
<datahoarder> The cap existed originally in serialization. If it's about hypothetical intrusions, maybe #monero-research-lounge:monero.social is a better place for recently joined accounts
-
br-m
<kayabanerve:matrix.org> I think suggesting the network should be at risk of fundamental instability and non-functioning is what a fed would want
-
br-m
<33monero:matrix.org> nice attempt to discredit; being a member of the Monero community is not dictated by one server
-
br-m
<kayabanerve:matrix.org> This isn't 'large blocks bad'. It's 'the network literally doesn't work with anything this large'.
-
br-m
<33monero:matrix.org> alr a red flag from u
-
br-m
<33monero:matrix.org> @kayabanerve:matrix.org: Not yet at least
-
br-m
<kayabanerve:matrix.org> If you want to upgrade the various aspects of the network to work with such large blocks, PRs welcome.
-
br-m
<datahoarder> What I am saying is that we are discussing solutions here. Hypotheticals around what a cap means or Fed interests are better for #monero-research-lounge:monero.social
-
br-m
<kayabanerve:matrix.org> As that'd presumably take months to even a year or two, and existing developers are focusing on existing tasks towards that goal, this is the immediate solution which keeps us safe until then.
-
br-m
<kayabanerve:matrix.org> So PR the solution now or don't stand in the way of the solution possible now, which just turns a fundamental break of the network into a reduced-capacity functioning, would be my blunt/rude way to phrase it.
-
br-m
<33monero:matrix.org> if Monero cracks in the inside, do not say no one pointed out the signs
-
br-m
<datahoarder> Back to the tx serialization, maybe we can remove all the forever unused cruft on v2 txs
-
br-m
<kayabanerve:matrix.org> DataHoarder: PRs welcome
-
br-m
<kayabanerve:matrix.org> :p
-
br-m
<datahoarder> yep I will
-
br-m
<datahoarder> after other PRs I need to open :P
-
br-m
<kayabanerve:matrix.org> Story of all our lives
-
br-m
<datahoarder> also, even better: v1/v2 txs are already deserialized in the same class/object, so there's no split at the "root" (v1/v2/...), only later, which ends up with various nested checks or looking at which inputs/outputs we have in order to decode fields differently
-
br-m
<datahoarder> a cleaner v3/v4 etc. could have better-defined bounds for decoding/wire, and if that were the future format, we could ensure that at least "there" things wouldn't blow up, along with fluffy blocks
-
br-m
<datahoarder> older tx versions/blocks (before the hardfork, so there wouldn't be any new ones after it) could indeed keep this sanity limit, given the per-tx size is already limited by CRYPTONOTE_MAX_TX_SIZE = 1000000 in code
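A minimal sketch of that version-gated sanity limit, assuming the existing CRYPTONOTE_MAX_TX_SIZE constant and an invented helper name:

#include <cstddef>
#include <cstdint>

constexpr size_t CRYPTONOTE_MAX_TX_SIZE = 1000000; // existing in-code cap

// Hypothetical: legacy formats keep the inherited cap; newer formats
// rely on explicitly defined decode bounds in the wire format itself.
bool tx_blob_size_ok(uint8_t tx_version, size_t blob_size)
{
  if (tx_version <= 2)
    return blob_size <= CRYPTONOTE_MAX_TX_SIZE; // pre-hardfork formats
  return true; // v3+: bounded by the cleaner format's own limits
}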