00:27:56 A probably-stupid question: why not make the fee algorithm account for any wasted efficiency on the input side? If someone were to, for example, make a 5-in transaction, then the fee would be the same or nearly the same as an 8-in transaction.
00:28:39 We already have a clawback for outputs. That doesn't change that it's still DoSy, and we want to move towards more limited input/output counts.
00:29:12 The clawback also isn't perfect. It can't really be, due to the semantics of batch verification.
00:29:48 Also, the block size limit is exactly that: a size limit. If you could make comparatively small TXs with very large computational cost (as is possible here), it'd cause issues in that regard.
00:31:27 well, then the fee could ramp up more aggressively, no? As long as a DoS attack is no cheaper, it's not the network's problem if a user chooses to pay high fees
00:31:50 err, user/wallet
00:32:48 the point is that if someone wants to consolidate many inputs simultaneously and is willing to pay for the privilege, it should be allowed
00:34:15 Again, see the block size limit issue.
00:34:27 We'd have to redefine block limits to have both a size bound and a computational bound.
00:36:23 If we maintain the lack of an input limit, a malicious adversary can create a bomb of _200,000 scalar multiplications_.
00:36:35 That's all within the transaction size limit.
00:37:07 A 16-output transaction's range proof is just 2048 scalar multiplications.
00:38:45 I'm not saying we can't fit so many scalar multiplications into the block time. I'm unsure whether all existing nodes we want to maintain support for can fit so many scalar multiplications *into the time we need to check a block within*. We can't take a full two minutes to check a block; we need a result within a fraction of that time.
00:39:04 fair enough, but isn't there already a weight calculation (not directly related to size in bytes) to limit blocks full of output spam?
00:39:07 could replicate that
00:39:22 Also, if there are 2 16-output transactions in a block, it doesn't force 4096 scalar multiplications. It's still just 2048. They overlap.
00:39:24 though again I'm not sure on that, ask ArticMine lol
00:40:11 and I'm not necessarily advocating no input limit, just a higher one
00:40:41 *They do have their own growth factor, yet it's very marginal in comparison: 100% base with +1% per proof. The base overlaps.
00:41:38 Again, we want to set an input limit for privacy reasons, and accumulation can still be done with logarithmic depth.
00:43:01 An input limit of 8 only delays accumulation by 20 minutes compared to an input limit of 64.
00:44:51 So we're discussing a DoS risk, redefining how block limits are evaluated, new fee policies, and moving away from the goal of standardized input counts, to save 20 minutes of time in an infrequent use case *which still has the delays even with higher input limits*.
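For concreteness, a minimal sketch (Python) of the accumulation arithmetic behind the 00:41:38 and 00:43:01 messages. It assumes Monero's 10-block unlock time (~20 minutes at 2-minute blocks) and a simple consolidation tree in which every round after the first must wait for the previous round's outputs to unlock; the function names are illustrative, not from any codebase.

```python
UNLOCK_MINUTES = 20  # assumption: 10-block unlock at ~2-minute blocks

def consolidation_rounds(n_outputs: int, max_inputs: int) -> int:
    """Transaction rounds needed to sweep n_outputs into one output when
    each tx may spend at most max_inputs inputs (logarithmic depth)."""
    rounds = 0
    while n_outputs > 1:
        n_outputs = (n_outputs + max_inputs - 1) // max_inputs  # ceil division
        rounds += 1
    return rounds

def consolidation_delay(n_outputs: int, max_inputs: int) -> int:
    """Minutes spent waiting on unlocks: the first round spends outputs
    that are already unlocked; every later round waits one unlock period."""
    return max(consolidation_rounds(n_outputs, max_inputs) - 1, 0) * UNLOCK_MINUTES

# Sweeping 64 outputs: one round at a limit of 64, two rounds at a limit
# of 8, i.e. one extra unlock wait, the quoted 20-minute delay.
print(consolidation_delay(64, 8) - consolidation_delay(64, 64))  # 20
```

Under the same model, 4096 outputs at MAX_INPUTS=8 take 4 rounds and three unlock waits (~an hour), consistent with the 17:06:27 estimate later in the log.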
04:24:13 Fair enough, thanks for the explanation
12:53:55 Anyone here interested in writing a brief update on FCMP++ and related audits/projects (e.g. CARROT)? It would be beneficial to have, as we can share it across Reddit, Twitter, etc. to inform the community as well as (potential) donors
12:59:53 dEBRYUNE related to my monero post?
13:00:07 reddit post*
13:35:34 A general thought I had earlier, but reinforced by your post, yes :-P
15:03:11 https://github.com/monero-project/research-lab/issues/100#issuecomment-2435545727
15:15:15 I wrote up notes on Curve Forests. I previously posted initial thoughts here in this room. It saves computational cost for nodes if we have a low enough MAX_INPUTS (a 2-input TX has 16 points for the second input we'd remove, saving an average of 5.33 points per TX alone, as 2-input TXs represent 33% of the chain; those pay off most of the second tree themselves), and this scheme saves space on the blockchain. It does add some additional IO costs re: building additional trees.
15:15:30 The main question is just whether wallets can afford the trees.
15:20:24 It does argue MAX_INPUTS=4, which isn't quite as aggressive as I'd like to be right now.
15:20:45 (if not 2)
15:23:38 By next meeting I'll try to get an estimate of how disruptive in/out limits could be to user experience, i.e. see how many additional txs it would take for historical txs to effect their intended consolidation and fan-out.
16:59:33 Rucknium: 128 inputs has a 128 * 384 base and then 128 * ~30 per-proof elements.
16:59:33 4 inputs has a 4 * 384 base and then 4 * ~30 per-proof elements.
16:59:35 This means if one 128-input TX exists in a block with only 4-input TXs, they overlap in the base of 4 * 384, yet the single 128-input TX incurs an additional 124 * 384 elements *effectively per-proof*.
16:59:37 Even if you had to make 1000 more transactions due to the reduced fan-in degree, it's still less computationally expensive for the blockchain.
17:00:48 That's the rough sketch of the math; apologies if it's a bit unclear. I just want to note that with common sizes, we pay a base cost per block and only ~30 per input. With abnormal sizes, the cost per input goes up to an insane degree.
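As a rough model of the element counts quoted at 16:59:33 through 17:00:48: a sketch under the assumption, per 16:59:35, that the 384-per-input base is shared across proofs in a batch (paid once, at the size of the largest proof), while the ~30-per-input elements are paid by every proof. The names are illustrative.

```python
BASE_PER_INPUT = 384    # shared base elements per input (quoted figure)
UNIQUE_PER_INPUT = 30   # approximate per-proof elements per input (quoted figure)

def batch_element_count(input_counts: list[int]) -> int:
    """Rough size of the batched verification for one block: the base
    overlaps across proofs, so it is paid once at the size of the largest
    proof; the ~30-per-input elements are paid by every proof."""
    if not input_counts:
        return 0
    shared = max(input_counts) * BASE_PER_INPUT
    unique = sum(n * UNIQUE_PER_INPUT for n in input_counts)
    return shared + unique

# One hundred 4-input txs: the base is paid once.
print(batch_element_count([4] * 100))          # 4*384 + 100*4*30 = 13536
# Add a single 128-input tx: the shared base balloons by 124 * 384.
print(batch_element_count([4] * 100 + [128]))  # 128*384 + 528*30 = 64992
```

The jump from ~13.5k to ~65k elements when a single 128-input tx joins a block of 4-input txs is the "insane degree" referred to above.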
17:03:03 Practically, I think this is fine, but input-heavy users (looking at you, p2pool miners) should have the number of enotes they receive optimized
17:05:14 Which I know is outside the scope of this specific design discussion. For Rucknium's analysis, you can see an uptick of txs with 146 inputs, nearly all of which will relate to p2pool consolidation
17:06:27 They can still have a tree to fan in with reasonable time complexity (logarithmic). It just requires running a wallet to actively do the accumulation, or having the wallet open for an hour prior to their actual spend (enough time to accumulate 4096 outputs if MAX_INPUTS=8).
17:11:14 It's more expensive for nodes, but how much less expensive for users? How do you value annoyance?
17:12:04 Maybe this could be considered for the FCMP hard fork: "Coinbase Consolidation Tx Type" https://github.com/monero-project/research-lab/issues/108
17:18:13 Well, anyway, you can set the price of the computational cost equal to the externality imposed on nodes and see if users are willing to pay that. That's how you get revealed preference for avoiding annoyance.
17:21:32 IMHO, in the next hard fork, at least the required fees should be set by the number of inputs and outputs (and any extra data in tx_extra) instead of size in bytes. It is really annoying for wallet developers to match the wallet2 fees because they are too precise and dependent on exact tx size. Right now txs have a lot of variable-length integers that make tx size unpredictable. FCMP won't have the ring offsets, which are the biggest source of the variable-length integers, but maybe the FCMP txs will still have some variable-length integers or other things that impact exact tx size.
17:25:12 Fee discretization without a change in how fees are charged will push high-byte txs to the back of the line whenever there is txpool congestion, I think. I'm not sure yet what the exact effects of fee discretization would be, but at least it would require charging by number of inputs and outputs.
17:33:38 IIRC, fees are variable-length integers now. That introduces a type of recursion into fee estimation if fees are based on bytes.
18:23:12 Rucknium: just calculate 9 fees for all possible lengths and select the one whose length is equal to its predicted length.
18:28:19 No, you must do it recursively so that recursion is used more in the universe :P
18:28:19 kayabanerve: Can you comment on whether FCMP++ txs will have many, or any, variable-length-integer components?
18:32:04 Rucknium: You're forgetting the length of tx_extra in your fee suggestion.
18:32:31 No, I mentioned it
18:32:38 Parenthetically
18:32:48 Oh sorry, you're right.
18:33:59 The reference block for the tree root may or may not be implemented as a varint.
18:35:53 A later tree root may have a larger proof if the tree grew in layers. We shouldn't change the fee in such a case, tbh.
23:29:41 https://www.getmonero.org/resources/research-lab/
23:29:41 "Published Papers" are directly published by the MRL team, right?
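Returning to the varint-fee recursion raised at 17:33:38: the "calculate 9 fees" suggestion at 18:23:12 amounts to a fixed-point search over the possible encoded lengths of the fee field. A minimal sketch, assuming a LEB128-style varint and byte-proportional fees; fee_per_byte and size_without_fee are illustrative parameters, not wallet2's actual API.

```python
def varint_len(n: int) -> int:
    """Byte length of n as a 7-bits-per-byte varint (Monero style)."""
    length = 1
    while n >= 0x80:
        n >>= 7
        length += 1
    return length

def consistent_fee(fee_per_byte: int, size_without_fee: int) -> int:
    """Try each of the 9 possible fee-field lengths and keep the fee whose
    own encoded length matches the tx size it was computed from."""
    for fee_len in range(1, 10):
        fee = fee_per_byte * (size_without_fee + fee_len)
        if varint_len(fee) == fee_len:
            return fee
    raise ValueError("no self-consistent fee length")

# Illustrative numbers only: a 1500-byte tx body at 8000 per byte settles
# on a 4-byte fee field.
print(consistent_fee(8000, 1500))  # 12032000, whose varint encoding is 4 bytes
```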