01:47:07 My opinion regarding the idea of limiting maximum inputs to 8 is that it could be bad for merchants when they want to consolidate. I read what kayaba wrote up on github and the conversation here. I'm not so sure that having a function in the wallet that automatically consolidates will be good enough. Because even if you have this, it will still be troublesome and slow (20 min wait time for each consolidation) to consolidate and sweep their received coins. Monero has the best privacy at the moment IMO, and sure, FCMP++ gives even greater privacy, but if the coin is very troublesome to accept as a medium of exchange (impossible to spend or consolidate many received inputs into an output) then the privacy gains are all for naught. If merchants decide it's too much of a pain to deal with then that is no bueno.
02:11:34 fr33_yourself: It's 20 minutes for each layer of consolidations, not 20 minutes for each consolidation.
02:12:15 I'll also repeat how only 3% of TXs would be affected per Rucknium's recent analysis of the last five years, and how merchants already have to consider this as the current MAX_INPUTS is high, but not high enough that it's unreachable.
02:13:01 How many inputs per 20 minutes or consolidation layer? I thought you said only 8 per 20 min. What is the upper bound on inputs per 20 min that can be consolidated?
02:13:14 There's a logarithmic amount of layers.
02:13:52 If you have 1 million outputs, you can consolidate them in (log_MAX_INPUTS(1 million) - 1) * 20 minutes.
02:14:08 I saw your comment. I'm just thinking about how merchants that receive a few hundred TXs per day can consolidate when they want to move funds. How would consolidation and sweeping in a reasonable timeframe work post-FCMP++ hardfork?
02:14:21 So for MAX_INPUTS = 8, 2 hours.
02:14:36 (for an absurd amount of inputs)
02:14:39 1m -> 125k -> 16k -> 2k -> 250 -> 32 -> 4
02:14:57 Ok, I'm not sure I fully understand the logarithmic nature of the math, but I'll take your word for it.
02:15:06 It's that final line I just sent.
02:15:42 1m outputs is consolidated into 125k outputs, each consolidation transaction only having 8 inputs.
02:16:01 Then the 125k outputs are consolidated into just ~16k outputs, each consolidation transaction only having 8 inputs.
02:16:09 Let's say someone has received 7,000 separate transactions (inputs) into one wallet in a week. At the end of the week they want to sweep those coins into another wallet. What would this process look like after FCMP++?
02:16:33 So if you have 50 outputs, it's literally just 20 minutes. You make 7 transactions (7 * 8 = 56, which is more than 50) and then end up with 7 outputs 20 minutes later.
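A minimal sketch of the layer math above, assuming each consolidation TX merges up to MAX_INPUTS outputs into one and each layer waits out one ~20-minute lock; the helper name and constants are illustrative, not wallet code:

```python
import math

LOCK_MINUTES = 20  # approximate wait before newly created outputs can be spent again

def consolidation_schedule(n_outputs: int, max_inputs: int) -> list[int]:
    """Output counts remaining after each consolidation layer."""
    counts = [n_outputs]
    while counts[-1] > max_inputs:
        # Each consolidation TX merges up to max_inputs outputs into one output.
        counts.append(math.ceil(counts[-1] / max_inputs))
    return counts

schedule = consolidation_schedule(1_000_000, 8)
print(schedule)
# [1000000, 125000, 15625, 1954, 245, 31, 4] -- the 1m -> 125k -> 16k -> 2k -> 250 -> 32 -> 4 chain
print((len(schedule) - 1) * LOCK_MINUTES, "minutes of lock time before the final 4 outputs are spendable")
# 120 minutes, i.e. the ~2 hours quoted for MAX_INPUTS = 8
```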
02:18:11 They make ~900 transactions aggregating 7000 outputs into 900 outputs. Then they make ~115 transactions aggregating 900 outputs into 115. Then 15 aggregating 115 into 15. Then they finally sweep two outputs in two TXs to the other wallets. It'd take one hour.
02:19:25 Under the current MAX_INPUTS, which I cited as 220 in practice yet ofrnxmr says is 120 (which I believe is correct), it'd not be 7000 -> 900 -> 115 -> 15 -> 2 but 7000 -> 70 -> 1. It'd be only 20 minutes, but this would still need to be programmed and considered.
02:19:46 And I guess they would write some sort of software to do that? Kinda seems pretty difficult manually performing all of those transactions though.
02:20:01 Yes, as they _already_ do.
02:20:31 How do they do that? Just curious.
02:20:37 Sorry, I am a bit of a n00b.
02:20:43 Eh. You can flood your destination wallet with 70 outputs (per the prior example). It'll just need to do its own consolidation after the second week.
02:21:00 I'm saying they already have to.
02:21:01 I can't comment on if/how literal services do.
02:21:13 I'm noting that today, in Monero, you cannot spend 7k outputs in one TX and already need to consolidate as I describe here.
02:21:19 How do you know that they already have to navigate this challenge?
02:21:32 Because there's already a MAX_INPUTS in practice?
02:21:51 I'm not introducing this. I'm just proposing adding an explicit, lower value than the current implicit one.
02:21:53 What's the practical max input again?
02:21:59 120 AFAIK
02:22:29 So for your example of 7000 outputs a week, yes, those have to be consolidated if you want to spend your entire wallet balance at once.
02:22:33 Do you have any guesses how merchants currently handle it? I mean, a 120 input max per TX is still a lot more than 8 though.
02:23:12 So if they are manually performing the transactions by hand in the command line or GUI wallet, then it probably wouldn't be too atrocious currently.
02:23:14 No idea. I am not privy to the inner workings of centralized exchanges re: a privacy coin, which makes trying to reverse-engineer this on-chain near impossible.
02:24:01 Centralized exchanges are the primary affected group IMO. They're the group I'd see with the most volume who also regularly needs to make transfers out.
02:24:23 Monero already has some code to handle the case where your outputs don't fit within MAX_INPUTS. `transfer_split`.
02:24:59 I also noted how transaction chaining allows a wallet running in the background, without the private keys, to handle consolidation (if a user comes online once to initialize it).
02:25:40 Yes, I saw that. And it all sounds nice in theory. I just wonder if it will really get implemented that quickly after a FCMP++ hardfork.
02:27:06 So from a UX perspective, this should still be resolvable. More users will have to leave their wallet open in the background, and there will be higher latency for anyone wanting to spend a large value of their wallet funded from a bunch of small inputs, but that's about it. I'll also point out how UTXO consolidation is a frequent topic within the Bitcoin community. This is something exchanges and users already face over there, and Monero already faces (even if we don't discuss it as much).
02:27:51 Agreed, wallets need to take the step after. Wallets needing to do that isn't enough of a problem to justify the fundamental DoS issues with keeping MAX_INPUTS at 120.
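The 7,000-output example above can be worked through with the same simple layer-by-layer model; this is a sketch only (the chat uses rounder estimates, and `consolidation_schedule` is the illustrative helper from the earlier snippet, not a wallet2 API):

```python
import math

def consolidation_schedule(n_outputs, max_inputs):
    # Same illustrative helper as the earlier sketch.
    counts = [n_outputs]
    while counts[-1] > max_inputs:
        counts.append(math.ceil(counts[-1] / max_inputs))
    return counts

for max_inputs in (8, 120):
    schedule = consolidation_schedule(7_000, max_inputs)
    txs_per_layer = [math.ceil(n / max_inputs) for n in schedule[:-1]]
    print(max_inputs, schedule, txs_per_layer)
# MAX_INPUTS=8:   schedule [7000, 875, 110, 14, 2], TXs per layer [875, 110, 14, 2],
#                 roughly the ~900 -> ~115 -> 15 -> 2 chain described above.
# MAX_INPUTS=120: schedule [7000, 59], TXs per layer [59], i.e. the 7000 -> 70 -> 1 chain
#                 (one consolidation layer, then everything fits in a final sweep).
```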
02:28:32 If we did one FCMP per proof, minimizing the computational base cost and allowing as much batch verification as possible, we'd only support 16 inputs under the current implicit limit FWIW.
02:28:34 Thanks for taking the time to address some of my concerns. If it's really not that big of a deal for merchants or exchanges that are receiving tons of transactions, then my concerns are unwarranted. I just worry about kicking those types of people off the network by making it super hard to use. Monero's use-value depends on online merchants and vendors being able to accept lots of fairly small "chunk" inputs and to move them in a reasonable timeframe with minimal headache.
02:29:01 I have mixed feelings. I completely agree with you in that Monero has to be usable.
02:29:16 I also just believe there are risks to the protocol itself without this, and we can't risk protocol stability for usability.
02:29:48 I do agree this will be another annoyance. I fear the risks otherwise would be far more annoying. If TXs did take 2 seconds to validate, free RPCs would disappear or require you to spend tens of seconds on PoW to justify even listening to your RPC request.
02:29:49 What's the DoS problem with max inputs at 120? I didn't catch this.
02:30:14 Right now? None, it's just a large TX. Under FCMPs++? 120 * 15ms = 1.8s verification time.
02:30:39 (assuming a linear growth in proof time from 1 input to 120 inputs)
02:30:48 What are the implications of 2 second verification time per TX?
02:30:57 I also do believe the usability concerns here *should be mitigatable at the wallet level* and that's where *they should be mitigated*.
02:31:06 That it requires more bandwidth to keep up to date with the current block height?
02:31:17 No, we'd presumably lose free RPCs.
02:31:19 Or is the problem with the initial blockchain sync verification?
02:32:07 If I can trigger your server to spend two seconds verifying a transaction it takes me 20ms to upload, I can DoS you for ages.
02:32:24 It's a massive asymmetry which allows trivially occupying any free Monero nodes.
02:32:50 Ahhhh I see. So that is what you mean by DoS.
02:33:24 I believe it would lead to paid RPC access in some form, either explicitly monetary, via PoW (the whole credit system we currently have with calls to deprecate), or via anti-spam PoW (which will be tens of seconds on mobile devices to publish any transaction).
02:33:25 So the game theory would be that people would remove their publicly available nodes, or nodes like Cake Wallet's would get DoS'd.
02:33:46 That's not even to discuss the risk to the P2P layer itself, where any node can send you a TX which takes multiple seconds to validate.
02:34:05 Yeah, the P2P layer seems like a bigger deal.
02:34:20 I mean, power users should be running their own nodes anyways.
02:34:21 It's just an outrageously easy burden to trigger on any publicly accessible node and will likely lead to nodes not being publicly accessible.
02:34:41 Yeah, such a large computation time has a burden in a lot of places...
02:34:42 It's great for there to be free RPC, but if that went away I don't know how much it would affect Monero's core user base.
02:35:02 Also, I may be somewhat biased as I wrote the code to handle this years ago and don't need to make any adjustments.
02:35:34 But also, I wrote this code years ago because it was necessary years ago. My proposal to reduce MAX_INPUTS doesn't change its necessity.
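A back-of-the-envelope sketch of the asymmetry described above; the 15 ms/input and ~20 ms upload figures are the handwaved estimates from this conversation, not benchmarks:

```python
MS_PER_INPUT = 15  # handwaved FCMP++ verification cost per input (estimate from the chat, not a benchmark)
UPLOAD_MS = 20     # rough time for an attacker to upload one such TX to a node

for max_inputs in (8, 120):
    verify_ms = max_inputs * MS_PER_INPUT
    print(f"MAX_INPUTS={max_inputs}: ~{verify_ms} ms to verify, ~{verify_ms / UPLOAD_MS:.0f}x CPU-time amplification")
# MAX_INPUTS=8:   ~120 ms to verify,  ~6x amplification per request
# MAX_INPUTS=120: ~1800 ms to verify, ~90x amplification per request (the 1.8s DoS concern above)
```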
02:36:02 That's a recent proposal.
02:36:09 What are the implications of 2 second verification time for the peer-to-peer layer? Also for bandwidth and staying up to date? And for initial blockchain sync?
02:36:22 But that also means I've accepted this, been ready for this, and have had years to cope with this. Y'all are only just facing it :p
02:36:59 If the network got DoSed, any node publishing/relaying a TX would need to include an associated PoW to justify it, most likely? I'm not a P2P net person so I can't comment how this would ideally be solved.
02:37:21 We're likely back to whoever makes the TX spending tens of seconds on an anti-spam PoW *after signing it and proving for it* to justify it.
02:37:44 But it's such an outrageous burden it isn't 'how do we cope with it', it's 'let's ban that by reducing MAX_INPUTS'.
02:38:22 Setting MAX_INPUTS to 8 actually shouldn't hurt the computational burden of the network. Outlier TXs pay a base cost. Non-outlier TXs share a base cost.
02:38:42 I mean, I'm not a high volume merchant, so it's not the end of the world for me personally. My main concerns about FCMP++ are related to bloat in TX size, sync time, and the issue we are discussing about limiting max inputs. I guess making sure the code is sound and no inflation during the hard fork are also important points of focus. I'm kinda neutral on the FCMP++ upgrade. I understand the benefits, but the costs are harder to get my head around.
02:38:45 Forcing more TXs to not be outliers, by limiting how far they can be outliers (MAX_INPUTS=8, not MAX_INPUTS=120), means more TXs share a computational cost.
02:39:25 Also, the cost to verify the Bulletproof for 8 inputs vs 120 inputs is largely linear? It's technically `O(n / log n)` due to the multiexp but I've been handwaving that out.
02:39:38 This seems like a nightmare to implement, so I can see why it would be desirable to keep verification time per TX low.
02:39:52 So verifying 15 8-input TXs vs 1 120-input TX *independently, not in a batch* should be largely the same.
02:40:29 The one consideration is that the 15 8-input TXs produce 15 outputs and the 1 120-input TX produces 1 output. That means we do pay a logarithmic cost for the inevitable consolidation of those 15 outputs.
02:40:57 So it goes from a giant O(n / log n) cost -> an incremented O(n * log n) cost.
02:41:09 I'm following.
02:41:26 It's generally always better to handle things incrementally than at-once, even if the at-once cost is cheaper overall.
02:41:45 So what is the difference in projected verification time with max inputs = 8 versus max inputs = 120 post-FCMP++ hardfork?
02:41:48 (and we do still have a `/ log n` when verifying a one-input proof, it's just less notable as it matters more when n is large)
02:42:28 That was my handwaved estimate, though I've not prior benched 120 inputs (solely the one-input case).
02:42:39 15ms an input.
02:43:05 So with a limit of 8 inputs, each TX can only cause ~120ms of burden.
02:44:55 The code is sufficient that someone can run the benchmark for the 120-input case though, I just find that not productive and haven't bothered.
02:46:11 ofrnxmr: https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/wallet/wallet2.cpp#L11076-L11085
02:48:01 The full reward zone is 300000. That, div 2, minus 600 (coinbase blob reserved size), is the 149400 value I quoted.
02:48:07 Ok, so what again are the exact issues that having the much lower verification time (15 x 8 = 120ms) solves?
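A trivial check of the weight-limit arithmetic quoted above; the constant names here are illustrative, the actual wallet2 logic is at the lines linked in the chat:

```python
FULL_REWARD_ZONE = 300_000          # bytes; the full reward zone cited above
COINBASE_BLOB_RESERVED_SIZE = 600   # bytes reserved for the coinbase blob

# Effective per-TX weight cap as reasoned above: half the zone, minus the coinbase reserve.
max_tx_weight = FULL_REWARD_ZONE // 2 - COINBASE_BLOB_RESERVED_SIZE
print(max_tx_weight)  # 149400
```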
02:48:24 Are you sure there's a further 66% applied, ofrnxmr?
02:48:56 What is the verification time currently for an 8-input transaction? Just out of curiosity's sake.
02:49:00 fr33_yourself: It prevents being able to get an arbitrary node to spend 2s, a massive amount of CPU time, off a P2P/RPC request, as I said.
02:49:04 versus post FCMP++
02:49:36 No idea. I referred to the rationale for MAX_OUTPUTS, which this discussion follows, not the current premise with MAX_INPUTS.
02:50:02 I'm returning to my work, so please refer to my post or do your own benchmarking for more information.
02:50:23 max_outputs isn't so worrisome for me personally; I defer to the judgment of you and the others. I was curious about max_inputs only as that has an impact on large merchants.
02:50:41 ofrnxmr: https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/wallet/wallet2.cpp#L9597
02:51:59 I really think wallet2 uses the 149,400 value, per my re-review of the code I prior reviewed when working on monero-wallet, despite my historical belief it was 100kB and your active belief it is. Would appreciate you citing the further 67% rule or acknowledging we were wrong about 100kB (or someone else with wallet2 experience chiming in; this probably is better for monero-dev though...)
03:24:58 all the better!