-
jberman[m]
In theory, when constructing a tx, a wallet could also include the entire contents of each ring member's associated tx, and a pruned node could theoretically verify this tx only using stored tx hashes from across the chain and key images that it has previously verified? Not that this is necessarily a good idea, but just curious
-
UkoeHB
a pruned node has to store all outputs, so it just needs to verify all outputs referenced by a tx are in their partial ledger
-
UkoeHB
adding in the tx hash of referenced outputs doesn't add much afaik
-
jberman[m]
When you say outputs you mean output IDs, pub keys, and commitments?
-
UkoeHB
yes
-
jberman[m]
Ah and unlock times. Got it. The thing the hash idea would save is just needing to store the stuff that is included as part of the tx hash
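-
For context, the check being described can be sketched as follows. This is not monerod code and all names are hypothetical; the point is only that a pruned node keeps every output's pub key, commitment, and unlock time, keyed by a global index, and validates a tx by confirming each referenced ring member exists and is unlocked:

```python
# Sketch (hypothetical names, not monerod code): a pruned node keeps every
# output ever created -- keyed by global output index -- even though it
# discards most other tx data, and checks ring members against that table.
from dataclasses import dataclass

@dataclass(frozen=True)
class OutputRecord:
    pubkey: bytes      # one-time output public key
    commitment: bytes  # Pedersen commitment to the amount
    unlock_time: int   # height/time before which the output can't be spent

class PrunedLedger:
    def __init__(self):
        self.outputs: dict[int, OutputRecord] = {}  # global index -> output

    def add_output(self, index: int, rec: OutputRecord) -> None:
        self.outputs[index] = rec

    def ring_members_known(self, ring: list[int], height: int) -> bool:
        """All referenced outputs must exist in the partial ledger and be unlocked."""
        return all(
            i in self.outputs and self.outputs[i].unlock_time <= height
            for i in ring
        )

ledger = PrunedLedger()
ledger.add_output(0, OutputRecord(b"\x01", b"\x02", unlock_time=0))
ledger.add_output(1, OutputRecord(b"\x03", b"\x04", unlock_time=0))
print(ledger.ring_members_known([0, 1], height=100))  # True
print(ledger.ring_members_known([0, 7], height=100))  # False: index 7 unknown
```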
-
mj-xmr[m]
MRL Meeting at 17 UTC today here. Agenda:
monero-project/meta #657
-
UkoeHB
meeting 0.5hrs
-
UkoeHB
1. greetings
-
UkoeHB
hello
-
ArticMine
Hi
-
selsta
hi
-
rbrunner
Hi there
-
netrik182
hi
-
jberman[m]
hello
-
Rucknium[m]
Hi
-
carrington[m]
Salutations
-
mj-xmr[m]
Buenos días.
-
UkoeHB
Today we should focus on fee changes for the upcoming hardfork (and also look into the future). I summarized two concerns here:
monero-project/research-lab #70#issuecomment-1024964432
-
UkoeHB
It sounds like ArticMine agreed to reduce the long term scaling factor from 1.4x -> 2x to 1.4x -> 1.7x (this is maximum growth over 69 days).
-
UkoeHB
reduce the upcoming change*
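-
To make the numbers concrete: a growth factor applied per 69-day window compounds, so the worst-case annualized growth is the factor raised to 365/69. A sketch of that arithmetic (illustrative only; it assumes sustained maximal growth every window, not the actual penalty dynamics):

```python
# Worst-case annualized growth if the long-term median grows by `factor`
# every 69-day window, compounding at the maximum rate the whole year.
def annualized(factor: float, window_days: float = 69, year_days: float = 365) -> float:
    return factor ** (year_days / window_days)

for f in (1.4, 1.7, 2.0):
    print(f"{f}x per 69d  ->  {annualized(f):.1f}x per year")
# 1.4x -> ~5.9x/yr; 1.7x -> ~16.6x/yr; 2.0x -> ~39.1x/yr
```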
-
ArticMine
sgp value of 1.7 as a compromise
-
ArticMine
Then we can find consensus for the subsequent HF
-
UkoeHB
Yes I think that is fine.
-
carrington[m]
Overall I think 1.7 is a decent compromise for the growth rate, but I think the sanity check hardcap on blocksize should be lowered to something with a basis in reality (like jberman calculated)
-
carrington[m]
Also fees should be higher in general
-
ArticMine
<carrington[m]> Overall I think 1.7 is a decent compromise for the growth rate, but I think the sanity check hardcap on blocksize should be lowered to something with a basis in reality (like jberman calculated) <--- there are ways of dealing with this without hard coding obsolescence into consensus
-
UkoeHB
ArticMine: what is your response to the stability concern I raised?
-
ArticMine
There are many ways to deal with this, including effectively pricing the upload bandwidth of nodes. This does not require consensus
-
jberman[m]
There is a point at which you can't retroactively solve for a chain that has grown too large to verify and sync on commodity hardware, that imo is the only way to lead to obsolescence
-
Rucknium[m]
Do we still lack an estimate for the cost of a deliberate maximal blockchain-bloating spam incident?
-
carrington[m]
Yeah long term nodes "catching up" wouldn't be a problem if we had some kind of disposable history, but for now considering there is already a hardcoded sanity check I think tweaking it isn't a permanent problem
-
UkoeHB
yes it is lacking (I could do it, but I am busy with seraphis... any takers?)
-
jberman[m]
I can take it on if there are no other takers, seems to be priority #1 for the fork at this point + I'm looking deeper into the fee changes at this point now anyway
-
ArticMine
Spam costing is something I am prepared to do but not in a rush before the next HF
-
ArticMine
This is the point of the compromise
-
UkoeHB
ArticMine: yes, there are ways to improve the performance of nodes. However, those methods aren't a 'solution', they are only a 'bandaid' to the basic problem. The basic problem is unbounded block size growth _cannot_ be supported by casual users (at some point only server centers can handle the load).
-
carrington[m]
What do you mean by "pricing the upload speed of nodes"?
-
ArticMine
Prioritizing transactions for relay based upon the fee
-
rbrunner
Aka fee market?
-
ArticMine
while keeping the number of broadcast nodes constant
-
ArticMine
For low bandwidth nodes this effectively creates a fee market
-
UkoeHB
What does a low bandwidth node do if it gets a huge valid block from a mining pool?
-
UkoeHB
It just falls behind and oh well?
-
ArticMine
Keep in mind that many internet connections can have a 30x difference in upload and download bandwidth
-
ArticMine
and a Monero node easily needs 12x more upload than download bandwidth
-
jberman[m]
even if you assume 0 cost bandwidth (aka infinite mb/s upload and download), the time to verify + storage requirements to run a pruned node that verifies the chain would eventually get too large
-
ArticMine
^ I have serious doubts with that. Batch verification is something that can be run in parallel
-
LyzaL
time for blocks to propagate increases orphans no? 5-10 sec block upload time seems not great even if other hardware reqs are there - how long does it take a block to fully propagate in that situation?
-
carrington[m]
I think verification is the issue more than bandwidth?
-
UkoeHB
ArticMine: this is as much a theoretical problem as it is a numerical one.
-
jberman[m]
the time to verify is divided by 8, the number of threads on my machine. parallelism is accounted for
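-
A back-of-envelope version of this point: parallelism divides verification time by the thread count, but total sync time still grows linearly with chain size. All numbers here are hypothetical placeholders, not measurements:

```python
# Illustrative only: total verification time for a full sync scales with
# tx count even when batch verification is spread across all threads.
def sync_days(n_txs: float, ms_per_tx: float, threads: int) -> float:
    return n_txs * ms_per_tx / threads / 1000 / 86400

# e.g. a hypothetical 1 billion txs at 2 ms each on 8 threads:
print(f"{sync_days(1e9, 2.0, 8):.1f} days")  # ~2.9 days; 10x the txs -> ~29 days
```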
-
ArticMine
There are solutions here that do not involve hard caps
-
Rucknium[m]
I can review jberman 's work and/or work with him on spam cost estimation.
-
UkoeHB
LyzaL: that's a good point, since dandelion++ increases propagation time
-
ArticMine
I just do not see the argument to run verification on a single thread if there are many txs
-
rbrunner
I wonder how high we would like to see fees for extreme scenarios to feel safe. Even with 10s of thousands of USD you could still fear a dedicated enemy with deep pockets.
-
ArticMine
If the orphan rate increases then miners will require higher fees. There is existing research on this for Bitcoin / Bitcoin Cash
-
jberman[m]
the numbers don't assume verification is running on a single thread
-
ArticMine
^ It is critical to clarify this
-
carrington[m]
On one hand, solutions against a big bang attack in issue 70 already depend on the ability of the community to react within the timeframe of the long term window
-
ArticMine
UkoeHB was very clear in the simulation on this
-
UkoeHB
it is only critical when you are defining a hard-coded number... it is irrelevant for my theoretical objections which are being ignored
-
jberman[m]
orphan rates would increase as a consequence of larger blocks
-
carrington[m]
So in some sense we are choosing between baking in obsolescence and baking in centralized upgrades
-
UkoeHB
what do you mean react? iirc those comments did not factor into ArticMine and my analyses
-
rbrunner
Isn't "obsolescence" a bit harsh for a hard upper limit on the number of transactions?
-
LyzaL
personally I favor a conservative growth rate, having a fee market for a few months during a period of big growth isn't the end of the world, but mass centralization is
-
carrington[m]
rbrunner: my opinion is that spam attacks should cost the same as 51% attacks for equivalent security guarantees
-
LyzaL
also L2 seems possible at some point
-
ArticMine
It is not. Bitcoin is a prime example
-
ArticMine
Bandwidth has increased 200x since the Bitcoin genesis block
-
ArticMine
while people keep up the debate on the blocksize
-
rbrunner
So why not have limits that go up more or less together with technology improving?
-
UkoeHB
Bitcoin does not have huge expensive blocks...
-
ArticMine
I posted on the original BCT thread that was started in 2010 on increasing the blocksize
-
ArticMine
<rbrunner> So why not have limits that go up more or less together with technology improving? <--- Bingo That is what I want to work on for the next HF
-
ArticMine
Which is why 1.7 is a reasonable compromise
-
carrington[m]
For my comment about "reacting" look at Artic's comments in 70 about "recent network attacks"
-
LyzaL
to my eyes even 1.4 looks like a massive growth rate
-
ArticMine
Over the long term yes, over a 2 - 5 month period no
-
jberman[m]
If we are relying on the fact that we can adapt and react and change the protocol in the event of unforeseen circumstances, why not err on the side of keeping it at the more conservative growth rate it is at now and react in the direction of allowing for more growth, on the chance that we do not find agreement and the long term gets away from us?
-
rbrunner
Because maybe that feels a little like defeat :)
-
ArticMine
Because one can control growth in many ways
-
ArticMine
if there is an issue
-
rbrunner
More a psychological than a technical problem ...
-
ArticMine
^ I agree and Bitcoin is the prime example
-
carrington[m]
makes sense to me jberman , a conservative growth rate is nothing like locking things at fixed caps for a decade
-
ArticMine
We have a compromise. I am not going back to ask for 2, but I will not support less than 1.7. 1.7 has been on the table for over a year
-
ArticMine
This is the critical psychological problem
-
merope
Perhaps a dumb question: why does the growth have to be exponential? Why not linear?
-
LyzaL
or logarithmic
-
UkoeHB
pretty sure adoption change is proportional to existing adoption
-
ArticMine
in the initial stages yes
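-
To illustrate merope's question: the current rule caps the long-term median multiplicatively per window, which compounds into an exponential envelope, whereas an additive cap would give a linear one. A toy comparison with illustrative numbers:

```python
# Why a multiplicative per-window cap is exponential: compare the envelope
# of "median may grow 1.7x per window" vs "median may grow by a fixed
# increment per window". Starting size and increment are illustrative.
start = 300_000  # bytes, illustrative starting median
mult, add = [start], [start]
for _ in range(5):  # five 69-day windows, roughly a year
    mult.append(mult[-1] * 1.7)        # multiplicative cap -> exponential
    add.append(add[-1] + 0.7 * start)  # additive cap -> linear
print([f"{m/1e6:.2f} MB" for m in mult])  # ends near 4.26 MB
print([f"{a/1e6:.2f} MB" for a in add])   # ends at 1.35 MB
```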
-
rbrunner
"will not support less than 1.7" How would that look in light of a possible finding of surprisingly low costs for spam attacks - even after the fee rises?
-
rbrunner
Just hypothetically, we don't know yet after all
-
ArticMine
because this has been hashed to death
-
Rucknium[m]
I think it is a good idea to read a few messages in the thread of the first suggestion to increase the bitcoin block size.
-
Rucknium[m]
"If we upgrade now, we don't have to convince as much people later if the bitcoin economy continues to grow."
-
Rucknium[m]
merope: That's sort of my thinking as well. The functional forms that are being chosen are sort of forcing us into a space that maybe we don't want to be. But if we changed the functional forms, then we would have to re-work many things.
-
ArticMine
My views in that thread are still valid
-
ArticMine
It is the reason I gave up on Bitcoin in 2015
-
ArticMine
and the rest is history
-
rbrunner
Yeah, but come on, even with 1.1 instead of 1.7 or whatever we are much better than Bitcoin. Is that even a fair comparison?
-
ArticMine
Yes it is
-
ArticMine
look at the history of Bitcoin
-
ArticMine
1.7 was on the table for a year
-
carrington[m]
It is not a fair comparison, especially seeing as Monero is in general still being upgraded regularly
-
ArticMine
So is Bitcoin
-
rbrunner
I feel you, but how can this help us now to come to a "loose consensus" and go forward with the HF?
-
ArticMine
I thought we had consensus at 1.7 until this morning
-
rbrunner
We seem to sit at something like a stalemate now, if you want to be brutally honest
-
ErCiccione
Don't we have to do 2 hard forks anyway? Maybe better to be conservative for this one and, if needed, increase in the next one? Does that make sense?
-
ArticMine
No
-
rbrunner
Well, 7 hours ago jberman found out even 1.4 can go to 4.5 TB per year, worst case ...
-
ErCiccione
I mean the second hard fork for Seraphis
-
ArticMine
There was critical work that was done over a two year period
-
ErCiccione
but if it doesn't make sense I'll just shut up 🙂
-
UkoeHB
ErCiccione: seraphis may be 3 hardforks in the future, if it takes long enough. The next hardfork after this one is likely to be very small (if we have one).
-
UkoeHB
ArticMine: it might help if you publish your numerical research for us to examine. Your presentation of 'we need 2x' was only backed up by a couple paragraphs.
-
ArticMine
I am not convinced it will help
-
ArticMine
I got zero response to my comments on 70 until the very last minute
-
ArticMine
Take a look at the date of the post on issue 70
-
ArticMine
There was ample time to ask questions etc
-
carrington[m]
The maximally bloated scenario is unlikely IMO because of the way the penalty works. A "medium speed" growth rate will probably provide more useful numbers for a given "maximum annual growth"
-
UkoeHB
ok sure, but we are talking about it now... better late than never
-
ArticMine
In order to kill the entire proposal
-
UkoeHB
I don't personally have a big stake in the number chosen, but it would be nice from an engineering PoV to understand the argument with more precision. Right now there feels a lot of vagueness
-
rbrunner
Seems to me that's almost a given, because you can easily work with quite different assumptions about growth, behaviour of market, of miners, of users ...
-
rbrunner
Which then may lead to a different "best" growth rate
-
ArticMine
One can argue against any number by taking edge cases
-
rbrunner
Realistically, if we go now with 1.7 and something bad comes upon us, we can probably emergency-HF in a month or even less. With this in sight, we should not allow a stalemate ruin our nice HF. IMHO.
-
selsta
I would go with 1.7, we will still have enough time to do further research on fees in the future
-
ArticMine
^ I agree
-
ArticMine
This is the point of the compromise
-
jberman[m]
Ok
-
selsta
also mooo already updated the PR so we have to go with it now :D
-
UkoeHB
lol
-
rbrunner
lol
-
rbrunner
Best argument today :)
-
mj-xmr[m]
^ Convinced
-
rbrunner
Solid engineering
-
» moneromooo typoed to 2.7 instead of 1.7 so I guess we're going with that.
-
ArticMine
lol
-
rbrunner
So maybe this time it's not our famous "loose consensus" but only a very loose one, but the compromise can go through - barely?
-
carrington[m]
If we are at "roughly consensus" on the 1.7 number, does anyone have any insight into this concern of mine:
-
carrington[m]
do we know for sure that miner implementations are actually set up to behave "rationally" once we are in the dynamic blocksize regime? i.e. does software like XMRig account for the penalty that will be applied correctly and allow itself to build blocks larger than 300kb under optimal fee environments, or does it naively just keep adding highest-fee transactions beyond the 300kb limit? It seems to me that if the mining software is not set up to actually figure this stuff out, the whole dynamic blocksize system will not kick in smoothly.
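-
For reference, the penalty being discussed (as described in Zero to Monero) deducts base_reward * (B/M - 1)^2 from the miner's reward for a block of weight B above the median M; blocks above 2M are invalid. A sketch of a "rational" template builder under that rule, with hypothetical fee and tx-weight numbers:

```python
# Sketch of the block-reward penalty: a rational template builder keeps
# adding a tx only while its fee exceeds the marginal penalty it causes.
# base/median/tx numbers below are hypothetical, chosen for illustration.
def penalty(base_reward: float, block_weight: int, median: int) -> float:
    if block_weight <= median:
        return 0.0
    assert block_weight <= 2 * median, "blocks above 2*median are invalid"
    return base_reward * ((block_weight / median - 1) ** 2)

def marginal_penalty(base_reward, weight, median, tx_weight):
    return (penalty(base_reward, weight + tx_weight, median)
            - penalty(base_reward, weight, median))

base, median = 0.6, 300_000   # ~tail-emission reward, 300 kB minimum median
tx_w, tx_fee = 2_000, 0.00062  # hypothetical tx weight (bytes) and fee (XMR)
w = median
while w + tx_w <= 2 * median and marginal_penalty(base, w, median, tx_w) < tx_fee:
    w += tx_w
print(f"rational template grows to ~{w/1000:.0f} kB against a 300 kB median")
```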
-
ArticMine
The dynamic blocksize has kicked in before
-
ArticMine
after the RingCT fork in 2017
-
rbrunner
Don't know, can't imagine miners would put up with this for long before they revolt
-
moneromooo
xmrig doesn't have to care about this.
-
UkoeHB
Ok we are at the end of the meeting. There seems general consensus to allow 1.7 into the fork. I hope/expect by summer there is a stronger and more precise understanding about scaling, stability, and spam costs around block sizes and block growth. This way we can have compelling arguments about scaling factors and the presence/absence of a hard upper limit.
-
merope
Xmrig does not know anything about transactions nor fees, only hashing block templates
-
rbrunner
Well spoken, UkoeHB.
-
merope
Fees and txes are the responsibility of nodes (or whoever generates the block template to be mined)
-
carrington[m]
OK I guess I mean pool operators (or whatever algorithms they use)
-
UkoeHB
thanks for attending everyone
-
ArticMine
Thanks
-
rbrunner
+1
-
UkoeHB
carrington[m]: I think if there is a bug, it would get fixed realll fast when miners notice their profit margin fall.
-
UkoeHB
bug or failure to take the dynamic size into account*
-
carrington[m]
I suppose if that's the worst case scenario it is not something to worry about. Unless miners are taken offline in huge numbers while they patch their transaction selection algorithms
-
merope
It's not the miners that would have to do the patching, but pool ops
-
merope
And it wouldn't take very long (individually)
-
carrington[m]
but pools would be offline I guess? And miners would still stop mining?
-
merope
Also, I'm pretty sure the pool software (as well as monerod?) already take penalties into account when generating the block template
-
carrington[m]
If that is the case then all is well
-
ArticMine
The average blocksize went over 300000 bytes in 2017 and over the 60000 byte limit in early 2017
-
jberman[m]
UkoeHB: do you think we've sufficiently explored the curve25519 idea for view tags aka does it seem ready to you?
-
UkoeHB
tevador's results suggest we need to record the tx pubkey (output/enote pubkey) as a curve25519 point. I'm leaning toward pushing that kind of thing to Seraphis, and leaving your PR as-is.
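-
For context, the view-tag optimization under discussion stores one byte per output, derived from the sender-receiver shared secret, so a scanning wallet can skip the expensive full derivation for roughly 255/256 of outputs that are not its own. A sketch, with SHA3 standing in for Monero's Keccak, the key derivation stubbed out as raw bytes, and the output index encoded as a single byte (a varint in practice):

```python
# Sketch of view-tag scanning: view_tag = H("view_tag" || derivation ||
# output_index)[0]. The wallet computes the candidate tag first and only
# runs the expensive derivation on a one-byte match. SHA3 stands in for
# Keccak; "derivation" is a stub byte string.
import hashlib

def view_tag(derivation: bytes, output_index: int) -> int:
    data = b"view_tag" + derivation + output_index.to_bytes(1, "little")
    return hashlib.sha3_256(data).digest()[0]

def scan(outputs, my_derivation: bytes):
    """Return positions worth fully deriving; mismatched tags are skipped."""
    return [i for i, (idx, tag) in enumerate(outputs)
            if view_tag(my_derivation, idx) == tag]

mine = view_tag(b"shared-secret", 0)
outputs = [(0, mine), (0, (mine + 1) % 256)]  # one match, one mismatch
print(scan(outputs, b"shared-secret"))  # only position 0 needs full derivation
```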
-
UkoeHB
Adding curve25519 as a dependency for the upcoming hardfork is a big ask I think.
-
jberman[m]
Seems reasonable to me. knaccc might have thoughts there, think they will be happy to know it's looking like it'll be part of a future upgrade
-
knaccc
yeah it does look like a big ask for the small performance gain
-
knaccc
it also may affect our ability to use txpubkeys in clever ways in the future, outside of the output pubkey ecdh
-
knaccc
so an active imagination would be required to determine if it could be useful to have ed25519 txpubkeys
-
UkoeHB
we do have another clever use in jamtis, but it relies on another ecdh exchange
-
UkoeHB
so there is no penalty from curve25519
-
knaccc
i remember thinking of schemes where it would be useful to reveal sG, where s is the tx private key
-
knaccc
since sD is declared as the txpubkey, and since we know d, we can calculate d^-1 * sD to get sG
-
knaccc
and then sG acts as a shared secret between sender and receiver
-
knaccc
for which only the sender knows the private key
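-
knaccc's algebra can be checked with plain scalar arithmetic mod the ed25519 group order, representing each point by its discrete log base G (so this is just the arithmetic, not real curve code):

```python
# Check of the derivation above: with txpubkey R = sD and D = dG, the
# receiver computes d^-1 * R = d^-1 * s*d * G = sG. Points are represented
# by their discrete logs base G, so "sG" is just the scalar s. Not real
# crypto -- only the algebra.
import secrets

l = 2**252 + 27742317777372353535851937790883648493  # ed25519 subgroup order

d = secrets.randbelow(l - 1) + 1   # receiver's private key, D = dG
s = secrets.randbelow(l - 1) + 1   # tx private key
sD = (s * d) % l                   # published txpubkey (as a discrete log)
d_inv = pow(d, -1, l)              # l is prime, so d is invertible
sG = (d_inv * sD) % l              # receiver recovers sG
assert sG == s                     # both parties now share the point sG
print("receiver recovered sG")
```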
-
UkoeHB
yep we use the secret `sG` (as you call it) in jamtis
-
knaccc
oh nice
-
UkoeHB
We bake it into the amount blinding factor and encoding factor, to add an extra permission tier to the wallet key structure.
-
wernervasquez[m]
So, with current subaddresses, what information is needed to decode the amount received? And what information is needed to check that it matches the commitment?
-
UkoeHB
Just the view key
-
UkoeHB
Oh yeah, baking in `sG` also solves the Janus problem without any extra tx bytes.
-
wernervasquez[m]
UkoeHB: what, precisely, is XORed against the amount? And what is hashed into the blinding factor?
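-
For reference, in current RingCT with deterministic masks the blinding factor is the hash-to-scalar of "commitment_mask" || shared secret, and the 8-byte amount is XORed with the first 8 bytes of H("amount" || shared secret). A sketch, with SHA3 standing in for Monero's Keccak and the per-output shared secret stubbed:

```python
# Sketch of current RingCT amount encoding with deterministic masks:
# mask = Hs("commitment_mask" || shared_secret), and the 8-byte amount is
# XORed with the first 8 bytes of H("amount" || shared_secret). SHA3
# stands in for Keccak; the shared secret is a stub byte string.
import hashlib

L = 2**252 + 27742317777372353535851937790883648493  # curve group order

def commitment_mask(shared_secret: bytes) -> int:
    h = hashlib.sha3_256(b"commitment_mask" + shared_secret).digest()
    return int.from_bytes(h, "little") % L  # hash-to-scalar

def encode_amount(amount: int, shared_secret: bytes) -> bytes:
    pad = hashlib.sha3_256(b"amount" + shared_secret).digest()[:8]
    return bytes(x ^ y for x, y in zip(amount.to_bytes(8, "little"), pad))

def decode_amount(enc: bytes, shared_secret: bytes) -> int:
    pad = hashlib.sha3_256(b"amount" + shared_secret).digest()[:8]
    return int.from_bytes(bytes(x ^ y for x, y in zip(enc, pad)), "little")

secret = b"per-output shared secret (stub)"
assert decode_amount(encode_amount(123_456, secret), secret) == 123_456
print("round-trip ok; mask =", hex(commitment_mask(secret))[:18], "...")
```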
-
UkoeHB
-
wernervasquez[m]
Thanks, I had misread something in there earlier. But a fresh read fixed it.