00:36:38 In theory, when constructing a tx, a wallet could also include the entire contents of each ring member's associated tx, and a pruned node could then verify this tx using only stored tx hashes from across the chain and key images that it has previously verified? Not that this is necessarily a good idea, but just curious
00:45:45 a pruned node has to store all outputs, so it just needs to verify all outputs referenced by a tx are in their partial ledger
00:46:01 adding in the tx hash of referenced outputs doesn't add much afaik
00:52:03 When you say outputs you mean output IDs, pub keys, and commitments?
00:54:54 yes
00:58:41 Ah, and unlock times. Got it. The thing the hash idea would save is just needing to store the stuff that is included as part of the tx hash
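A minimal sketch of the membership check described above, with hypothetical names (not monerod's actual data structures): a pruned node keeps every output's pubkey, commitment, and unlock time, plus the key images it has seen, and validates a tx's references against that partial ledger.

```python
# Hypothetical sketch of a pruned node's reference check (illustrative
# names, not monerod's actual structures).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class StoredOutput:
    pubkey: bytes      # one-time output public key
    commitment: bytes  # Pedersen commitment to the amount
    unlock_time: int   # height before which the output cannot be spent

class PrunedLedger:
    def __init__(self) -> None:
        self.outputs: Dict[int, StoredOutput] = {}  # global output index -> data
        self.key_images: set = set()                # key images already spent

    def check_tx_references(self, ring_member_indices: List[int],
                            key_image: bytes, height: int) -> bool:
        """True if the key image is unspent and every referenced ring member
        is a known, unlocked output in the partial ledger."""
        if key_image in self.key_images:
            return False  # double spend
        for idx in ring_member_indices:
            out = self.outputs.get(idx)
            if out is None or out.unlock_time > height:
                return False
        return True
```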
10:34:50 MRL Meeting at 17 UTC today here. Agenda: https://github.com/monero-project/meta/issues/657
16:25:58 meeting 0.5hrs
17:00:44 meeting time: https://github.com/monero-project/meta/issues/657
17:00:45 1. greetings
17:00:45 hello
17:00:49 Hi
17:00:55 hi
17:00:58 Hi there
17:01:55 hi
17:02:09 hello
17:02:12 Hi
17:02:37 Salutations
17:02:44 Good morning.
17:03:42 Today we should focus on fee changes for the upcoming hardfork (and also look into the future). I summarized two concerns here: https://github.com/monero-project/research-lab/issues/70#issuecomment-1024964432
17:05:01 It sounds like ArticMine agreed to reduce the upcoming change in the long term scaling factor from 1.4x -> 2x to 1.4x -> 1.7x (this is maximum growth over 69 days).
17:05:17 Yes that is correct https://github.com/monero-project/research-lab/issues/70#issuecomment-1025334284
17:05:46 sgp's value of 1.7 as a compromise
17:06:48 Then we can find consensus for the subsequent HF
17:07:06 Yes I think that is fine.
17:07:32 Overall I think 1.7 is a decent compromise for the growth rate, but I think the sanity check hardcap on blocksize should be lowered to something with a basis in reality (like jberman calculated)
17:07:56 Also fees should be higher in general
17:09:00 Overall I think 1.7 is a decent compromise for the growth rate, but I think the sanity check hardcap on blocksize should be lowered to something with a basis in reality (like jberman calculated) <--- there are ways of dealing with this without hard coding obsolescence into consensus
17:09:18 ArticMine: what is your response to the stability concern I raised?
17:11:07 There are many ways to deal with this, including effectively pricing the upload bandwidth of nodes. This does not require consensus
17:11:33 There is a point at which you can't retroactively solve for a chain that has grown too large to verify and sync on commodity hardware; that imo is the only way to end up with obsolescence
17:11:35 Do we still lack an estimate for the cost of a deliberate maximal blockchain-bloating spam incident?
17:12:06 Yeah, long term nodes "catching up" wouldn't be a problem if we had some kind of disposable history, but for now, considering there is already a hardcoded sanity check, I think tweaking it isn't a permanent problem
17:12:11 yes it is lacking (I could do it, but I am busy with seraphis... any takers?)
17:13:39 I can take it on if there are no other takers, seems to be priority #1 for the fork at this point + I'm looking deeper into the fee changes now anyway
17:14:07 Spam costing is something I am prepared to do, but not in a rush before the next HF
17:14:18 This is the point of the compromise
17:14:36 ArticMine: yes, there are ways to improve the performance of nodes. However, those methods aren't a 'solution', they are only a 'bandaid' for the basic problem. The basic problem is that unbounded block size growth _cannot_ be supported by casual users (at some point only data centers can handle the load).
17:14:48 What do you mean by "pricing the upload bandwidth of nodes"?
17:15:21 Prioritizing transactions for relay based upon the fee
17:15:43 Aka fee market?
17:15:44 while keeping the number of broadcast nodes constant
17:16:14 For low bandwidth nodes this effectively creates a fee market
17:16:43 What does a low bandwidth node do if it gets a huge valid block from a mining pool?
17:16:51 It just falls behind and oh well?
17:16:56 Keep in mind that many internet connections can have a 30x difference between upload and download bandwidth
17:17:26 and a Monero node easily needs 12x more upload than download bandwidth
17:17:39 Bandwidth isn't the issue in my calculations: https://github.com/monero-project/research-lab/issues/70#issuecomment-1027806393
17:18:32 even if you assume 0 cost bandwidth (aka infinite mb/s upload and download), the time to verify + storage requirements to run a pruned node that verifies the chain would eventually get too large
17:18:51 ^ I have serious doubts about that. Batch verification is something that can be run in parallel
17:18:53 time for blocks to propagate increases orphans no? 5-10 sec block upload time seems not great even if other hardware reqs are there - how long does it take a block to fully propagate in that situation?
17:19:20 I think verification is the issue more than bandwidth?
17:19:21 ArticMine: this is as much a theoretical problem as it is a numerical one.
17:19:22 the time to verify is divided by 8, the number of threads on my machine. parallelism is accounted for
17:19:24 There are solutions here that do not involve hard caps
17:19:42 I can review jberman's work and/or work with him on spam cost estimation.
17:20:47 LyzaL: that's a good point, since dandelion++ increases propagation time
17:20:51 I just do not see the argument to run verification on a single thread if there are many txs
17:21:38 I wonder how high we would like to see fees in extreme scenarios to feel safe. Even with 10s of thousands of USD you could still fear a dedicated enemy with deep pockets.
17:21:42 If the orphan rate increases then miners will require higher fees. There is existing research on this for Bitcoin / Bitcoin Cash
17:21:45 the numbers don't assume verification is running on a single thread
17:22:07 ^ It is critical to clarify this
17:22:49 On one hand, solutions against a big bang attack in issue 70 already depend on the ability of the community to react within the timeframe of the long term window
17:22:50 UkoeHB was very clear in the simulation on this
17:22:52 it is only critical when you are defining a hard-coded number... it is irrelevant for my theoretical objections, which are being ignored
17:23:37 orphan rates would increase as a consequence of larger blocks
17:23:55 So in some sense we are choosing between baking in obsolescence and baking in centralized upgrades
17:24:22 what do you mean react? iirc those comments did not factor into ArticMine's and my analyses
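On the propagation/orphan point above: a common first-order model (brought in here as an assumption; it was not cited in the meeting) says that if a block takes t seconds to reach the network while competing blocks arrive as a Poisson process with mean interval T, the chance of a competing block appearing during propagation is about 1 - e^(-t/T).

```python
# First-order orphan-rate estimate: P(orphan) ~ 1 - exp(-t/T), where t is
# propagation time and T the mean block interval (120 s for Monero).
import math

T = 120.0  # Monero target block time, seconds

for t in (1, 5, 10):  # the 5-10 s upload times mentioned above, plus 1 s
    print(f"propagation {t:>2} s -> orphan probability ~{1 - math.exp(-t / T):.1%}")
# propagation  1 s -> orphan probability ~0.8%
# propagation  5 s -> orphan probability ~4.1%
# propagation 10 s -> orphan probability ~8.0%
```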
17:24:36 Isn't "obsolescence" a bit harsh for a hard upper limit on the number of transactions?
17:24:37 personally I favor a conservative growth rate; having a fee market for a few months during a period of big growth isn't the end of the world, but mass centralization is
17:24:50 rbrunner: my opinion is that spam attacks should cost the same as 51% attacks for equivalent security guarantees
17:24:59 also L2 seems possible at some point
17:25:06 It is not. Bitcoin is a prime example
17:25:24 Bandwidth has increased 200x since the Bitcoin genesis block
17:25:37 while people keep up the debate on the blocksize
17:26:10 So why not have limits that go up more or less together with technology improving?
17:26:12 Bitcoin does not have huge expensive blocks...
17:26:33 I posted on the original BCT thread that was started in 2010 on increasing the blocksize
17:27:04 So why not have limits that go up more or less together with technology improving? <--- Bingo. That is what I want to work on for the next HF
17:27:34 Which is why 1.7 is a reasonable compromise
17:27:44 For my comment about "reacting" look at Artic's comments in 70 about "recent network attacks"
17:28:16 to my eyes even 1.4 looks like a massive growth rate
17:29:15 Over the long term yes, over a 2-5 month period no
17:31:32 If we are relying on the fact that we can adapt and react and change the protocol in the event of unforeseen circumstances, why not err on the side of keeping it at the more conservative growth rate it is at now, and react in the direction of allowing for more growth, on the chance that we do not find agreement and the long term gets away from us?
17:32:21 Because maybe that feels a little like defeat :)
17:32:40 Because one can control growth in many ways
17:32:49 if there is an issue
17:32:56 More a psychological than a technical problem ...
17:33:31 ^ I agree, and Bitcoin is the prime example
17:34:34 makes sense to me jberman, a conservative growth rate is nothing like locking things at fixed caps for a decade
17:35:37 We have a compromise. I am not going back to ask for 2, but I will not support less than 1.7. 1.7 has been on the table for over a year
17:36:07 This is the critical psychological problem
17:36:24 Perhaps a dumb question: why does the growth have to be exponential? Why not linear?
17:37:04 or logarithmic
17:37:40 pretty sure adoption change is proportional to existing adoption
17:37:58 in the initial stages yes
17:41:17 "will not support less than 1.7" How could that look in the light of a possible result of surprisingly low costs - even after fees rise - for spam attacks?
17:41:36 Just hypothetically, we don't know yet after all
17:41:52 because this has been hashed to death
17:42:00 I think it is a good idea to read a few messages in the thread of the first suggestion to increase the bitcoin block size.
17:42:00 https://bitcointalk.org/index.php?topic=1347.msg15366#msg15366
17:42:00 "If we upgrade now, we don't have to convince as much people later if the bitcoin economy continues to grow."
17:42:27 merope: That's sort of my thinking as well. The functional forms that are being chosen are sort of forcing us into a space that maybe we don't want to be in. But if we changed the functional forms, then we would have to re-work many things.
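"Adoption change is proportional to existing adoption" is exactly the differential equation dA/dt = k*A, whose solution is exponential, A(t) = A0*e^(kt); linear growth corresponds to a constant dA/dt. A toy Euler simulation (illustrative parameters only) shows how far the two diverge:

```python
# Toy comparison of "growth proportional to current adoption" (exponential)
# vs constant absolute growth (linear). Parameters are illustrative only.
A0, k, years, dt = 1.0, 0.5, 10, 0.01

A_exp, A_lin, t = A0, A0, 0.0
while t < years:
    A_exp += k * A_exp * dt  # dA/dt = k*A  -> exponential
    A_lin += k * A0 * dt     # dA/dt = k*A0 -> linear
    t += dt

print(f"after {years} years: exponential ~{A_exp:.0f}x, linear ~{A_lin:.1f}x")
# ~147x vs ~6.0x (the exact exponential value is e^(0.5*10) ~ 148)
```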
17:43:01 My views in that thread are still valid
17:43:19 It is the reason I gave up on Bitcoin in 2015
17:43:44 and the rest is history
17:44:00 Yeah, but come on, even with 1.1 instead of 1.7 or whatever, we are much better than Bitcoin. Is that even a fair comparison?
17:44:41 Yes it is
17:44:50 look at the history of Bitcoin
17:45:37 1.7 was on the table for a year
17:45:57 It is not a fair comparison, especially seeing as Monero is in general still being upgraded regularly
17:46:12 So is Bitcoin
17:46:12 I feel you, but how can this help us now to come to a "loose consensus" and go forward with the HF?
17:46:39 I thought we had consensus at 1.7 until this morning
17:46:49 We seem to sit at something like a stalemate now, if you want to be brutally honest
17:47:14 Don't we have to do 2 hard forks anyway? Maybe better to be conservative for this one and, if needed, increase in the next one? Does that make sense?
17:47:34 No
17:47:56 Well, 7 hours ago jberman found out even 1.4 can go to 4.5 TB per year, worst case ...
17:48:03 i mean the second hard fork for seraphis
17:48:09 There was critical work that was done over a two year period
17:48:18 but if it doesn't make sense I'll just shut up 🙂
17:48:37 ErCiccione: seraphis may be 3 hardforks in the future, if it takes long enough. The next hardfork after this one is likely to be very small (if we have one).
17:49:34 ArticMine: it might help if you publish your numerical research for us to examine. Your presentation of 'we need 2x' was only backed up by a couple paragraphs.
17:50:02 I am not convinced it will be of any help
17:50:20 I got zero response to my comments on 70 until the very last minute
17:51:04 Take a look at the date of the post on issue 70
17:51:17 There was ample time to ask questions etc
17:51:45 The maximally bloated scenario is unlikely IMO because of the way the penalty works. A "medium speed" growth rate will probably provide more useful numbers for a given "maximum annual growth"
17:51:53 ok sure, but we are talking about it now... better late than never
17:53:17 In order to kill the entire proposal
17:53:18 I don't personally have a big stake in the number chosen, but it would be nice from an engineering PoV to understand the argument with more precision. Right now there is a lot of vagueness
17:54:23 Seems to me that's almost a given, because you can easily work with quite different assumptions about growth, behaviour of the market, of miners, of users ...
17:54:44 Which then may lead to a different "best" growth rate
17:55:09 One can argue against any number by taking edge cases
17:56:13 Realistically, if we go now with 1.7 and something bad comes upon us, we can probably emergency-HF in a month or even less. With this in sight, we should not let a stalemate ruin our nice HF. IMHO.
17:56:15 I would go with 1.7, we will still have enough time to do further research on fees in the future
17:56:52 ^ I agree
17:57:57 This is the point of the compromise
17:58:39 Ok
17:59:19 also mooo already updated the PR so we have to go with it now :D
17:59:30 lol
17:59:33 lol
17:59:42 Best argument today :)
17:59:52 ^ Convinced
17:59:57 Solid engineering
18:00:08 * moneromooo typoed 2.7 instead of 1.7 so I guess we're going with that.
18:00:26 lol
18:01:54 So maybe this time it's not our famous "loose consensus" but only a very loose one, but the compromise can go through - barely?
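For reference, here is what the per-window factors debated above compound to over a year, taking the quoted ~69-day window at face value and assuming the worst case where the cap is hit every window:

```python
# Annualized worst-case growth for the long-term scaling factors discussed:
# a factor f per ~69-day window compounds to f^(365/69) per year.
windows_per_year = 365 / 69  # ~5.3

for f in (1.4, 1.7, 2.0):
    print(f"{f}x per window -> ~{f ** windows_per_year:.0f}x per year")
# 1.4x -> ~6x per year, 1.7x -> ~17x per year, 2.0x -> ~39x per year
```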
18:02:04 If we are at "rough consensus" on the 1.7 number, does anyone have any insight into this concern of mine:
18:02:04 do we know for sure that miner implementations are actually set up to behave "rationally" once we are in the dynamic blocksize regime? i.e. does software like XMRig account for the penalty that will be applied correctly and allow itself to build blocks larger than 300kb under optimal fee environments, or does it naively just keep adding highest-fee transactions beyond the 300kb limit? It seems to me that if the mining
18:02:04 software is not set up to actually figure this stuff out, the whole dynamic blocksize system will not kick in smoothly.
18:03:02 The dynamic blocksize has kicked in before
18:03:42 after the RingCT fork in 2017
18:03:44 Don't know, can't imagine miners would put up with this for long before they revolt
18:04:01 xmrig doesn't have to care about this.
18:04:01 Ok we are at the end of the meeting. There seems to be general consensus to allow 1.7 into the fork. I hope/expect that by summer there is a stronger and more precise understanding of scaling, stability, and spam costs around block sizes and block growth. This way we can have compelling arguments about scaling factors and the presence/absence of a hard upper limit.
18:04:11 Xmrig does not know anything about transactions or fees, only hashing block templates
18:04:50 Well spoken, UkoeHB.
18:04:54 Fees and txes are the responsibility of nodes (or whoever generates the block template to be mined)
18:05:09 OK I guess I mean pool operators (or whatever algorithms they use)
18:05:17 thanks for attending everyone
18:05:43 Thanks
18:05:50 +1
18:07:58 carrington[m]: I think if there is a bug, it would get fixed real fast when miners notice their profit margin fall.
18:08:47 bug or failure to take the dynamic size into account*
18:09:50 I suppose if that's the worst case scenario it is not something to worry about. Unless miners are taken offline in huge numbers while they patch their transaction selection algorithms
18:10:27 It's not the miners that would have to do the patching, but pool ops
18:10:53 And it wouldn't take very long (individually)
18:11:22 but pools would be offline I guess? And miners would still stop mining?
18:11:40 Also, I'm pretty sure the pool software (as well as monerod?) already takes penalties into account when generating the block template
18:11:51 merope: ^
18:12:30 If that is the case then all is well
18:13:59 The average blocksize went over 300000 bytes in 2017 and over the 60000 byte limit in early 2017
18:14:04 https://bitinfocharts.com/comparison/size-xmr.html#log&alltime
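For context on the penalty question above: Monero's block reward penalty (as documented, e.g., in Zero to Monero) is quadratic in how far the block weight exceeds the median, so a rational template builder compares each tx's fee against the marginal penalty it would cause. A sketch with illustrative numbers only:

```python
# Sketch of Monero's documented block reward penalty: for block weight B
# above the penalty-free median M (and at most 2M),
#   penalty = base_reward * (B/M - 1)^2.
# A rational template builder adds a tx only if its fee beats the marginal
# penalty. BASE_REWARD and the example tx are illustrative only.

def penalty(base_reward: float, block_weight: int, median: int) -> float:
    """Reward forfeited for exceeding the median (blocks above 2*median are invalid)."""
    if block_weight <= median:
        return 0.0
    assert block_weight <= 2 * median
    return base_reward * (block_weight / median - 1) ** 2

BASE_REWARD = 0.6   # XMR, illustrative
MEDIAN = 300_000    # bytes: the penalty-free zone discussed above

def worth_including(tx_fee: float, tx_weight: int, current_weight: int) -> bool:
    marginal = (penalty(BASE_REWARD, current_weight + tx_weight, MEDIAN)
                - penalty(BASE_REWARD, current_weight, MEDIAN))
    return tx_fee > marginal

# A 3000-byte tx pushing the block from 300 kB to 303 kB forfeits
# 0.6 * (0.01)^2 = 0.00006 XMR of reward, so any fee above that pays.
print(worth_including(0.0001, 3_000, 300_000))  # True
```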
18:14:04 UkoeHB: do you think we've sufficiently explored the curve25519 idea for view tags, aka does it seem ready to you?
18:15:29 tevador's results suggest we need to record the tx pubkey (output/enote pubkey) as a curve25519 point. I'm leaning toward pushing that kind of thing to Seraphis, and leaving your PR as-is.
18:15:59 Adding curve25519 as a dependency for the upcoming hardfork is a big ask I think.
18:18:08 Seems reasonable to me. knaccc might have thoughts there; think they will be happy to know it's looking like it'll be part of a future upgrade
18:19:41 yeah it does look like a big ask for the small performance gain
18:20:58 it also may affect our ability to use txpubkeys in clever ways in the future, outside of the output pubkey ecdh
18:21:42 so an active imagination would be required to determine if it could be useful to have ed25519 txpubkeys
18:26:13 we do have another clever use in jamtis, but it relies on another ecdh exchange
18:26:27 so there is no penalty from curve25519
18:27:39 i remember thinking of schemes where it would be useful to reveal sG, where s is the tx private key
18:30:06 since sD is declared as the txpubkey, and since we know d, we can calculate d^-1 * sD to get sG
18:30:20 and then sG acts as a shared secret between sender and receiver
18:30:31 for which only the sender knows the private key
18:35:25 yep, we use the secret `sG` (as you call it) in jamtis
18:42:34 oh nice
18:44:28 We bake it into the amount blinding factor and encoding factor, to add an extra permission tier to the wallet key structure.
19:10:50 So, with current subaddresses, what information is needed to decode the amount received? And what information is needed to check that it matches the commitment?
19:11:45 Just the view key
19:14:29 Oh yeah, baking in `sG` also solves the Janus problem without any extra tx bytes.
19:15:02 UkoeHB: what, precisely, is XORed against the amount? And what is hashed into the blinding factor?
19:15:37 https://web.getmonero.org/library/Zero-to-Monero-2-0-0.pdf check ch5
19:32:44 Thanks, I had misread something in there earlier. But a fresh read fixed it.
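The algebra in the exchange above (d^-1 * sD = sG) can be sanity-checked with plain scalar arithmetic mod the ed25519 group order, representing each point X*G by its discrete log X; this is a toy check of the identity, not wallet code:

```python
# Check that d^-1 * (sD) = sG when D = d*G: in discrete-log form,
# d^-1 * (s*d) = s (mod l). Requires Python 3.8+ for pow(x, -1, m).
import secrets

l = 2**252 + 27742317777372353535851937790883648493  # ed25519 group order

s = 1 + secrets.randbelow(l - 1)  # tx private key; the declared txpubkey is sD
d = 1 + secrets.randbelow(l - 1)  # receiver's private key for D = d*G

sD = (s * d) % l               # discrete log of the declared txpubkey sD
sG = (pow(d, -1, l) * sD) % l  # receiver computes d^-1 * sD

assert sG == s  # the shared secret sG, whose private key s only the sender knows
print("d^-1 * sD == sG:", sG == s)
```

As for the 19:15:02 question, ch. 5 of Zero to Monero has the details; roughly, the 8-byte amount is XORed with a hash of the per-output shared secret, and the commitment's blinding factor is a different hash of the same secret.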