01:59:49 @Rucknium can we put preliminary Carrot audit discussion on the agenda for tomorrow please? I don't expect any decisions this week, but just a general discussion of where we're headed with that 14:03:07 jeffro256: Sure. What is the current link for the Carrot document(s)? 14:51:54 I think it's this one https://github.com/jeffro256/carrot/blob/master/carrot.md 14:58:32 MRL meeting in this room in one hour. 14:58:38 In two hours I mean 17:00:28 Meeting time! https://github.com/monero-project/meta/issues/1070 17:00:32 howdy 17:00:34 1) Greetings 17:00:58 hello 17:00:59 Hi 17:01:05 Hello 17:01:53 *waves* 17:03:55 hi 17:04:02 2) Updates. What is everyone working on? 17:04:11 👋 17:05:44 Got stuck with some LWS stuff, but back on hackerone stuff 17:05:54 me: done getting first proposals back for Carrot audit quotes. DM me if you want to see the proposals and/or my comparison. Also just adding finishing touches and preparing for an implementation PR 17:05:55 me: Some double spend probability analysis for the N blocks lock discussion. Some analysis of issues in the Chainalysis video. Finishing up analysis of node tx relay logs for black marble source detection (preview: it was made very difficult after a code fix in 2019). 17:07:00 me: fcmp++, implemented trimming the tree on reorg/pop blocks, implemented multithreaded tree building, moving to the key image migration next 17:07:42 3) Stress testing monerod. https://github.com/monero-project/monero/issues/9348 17:08:12 AFAIK there are no new updates about stressnet 17:09:45 4) Research Pre-Seraphis Full-Chain Membership Proofs. Reviews for Carrot. https://github.com/jeffro256/carrot/blob/master/carrot.md 17:10:39 I'll raise the question of a consensus rule vs output index binding. 17:10:46 jeffro256: did you form an opinion on which side you prefer? 17:11:52 <0​xfffc:monero.social> ( apologies for being late. 
Hi everyone ) 17:12:03 I think I prefer the consensus rule again since implementing output index binding causes an extra round for collaborative protocols anyways, *and* adds the round to normal wallet workflows. 17:13:08 Sorry, how does output index binding add a round to the normal wallet workflow? 17:13:23 As long as this rule is well documented, I don't see it being an issue. And anyways, it should be best practice for collaborative protocols to commit-and-reveal their transaction components anyways to prevent any accidental interdependence 17:14:11 The amount commitment doesn't need to bind to the input context. If we share the amount commitment with the key images, there's no complexities to the flow there. 17:14:28 Unless I'm missing something, sorry if I am. 17:14:36 You have to do your amount commitment derivations for all enotes first, then sort them within the transaction and assign `output_index`, and then complete the enotes. Whereas without output index binding, all enotes are completely derived in parallel and only sorted at the end 17:15:15 Ah, you are talking about internal wallet "workflow" 17:16:40 I'll drop it even though I truly hate the idea of using consensus rules to solve the burning bug. 17:17:24 The amount commitment binding to the input context isn't the complication, it's the `output_index` being dependent on a component inside the enote that complicates things for wallet code 17:18:33 I did the whole rewrite of Carrot for binding to `output_index` instead of relying on the consensus rule, and it was pretty hairy I gotta say 17:18:56 Can I bully you by pointing out there's a random chance TXs will fail naturally if we use a consensus rule due to the minimized amount of entropy 17:20:14 ... Hm. I wonder what the exact odds of that are. I'm unsure it's as high as 2**64 because there's a pool of n outputs. It may actually be something we as humans would naturally stumble onto...
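The "exact odds" being wondered about here are a standard birthday bound. A minimal sketch in stdlib Python, assuming 128 bits of fresh entropy per output and a 16-output transaction as discussed in the meeting; the helper name is illustrative:

```python
from fractions import Fraction

def entropy_collision_probability(n_outputs: int, entropy_bits: int = 128) -> Fraction:
    """Exact birthday-bound probability that at least two of n_outputs
    independently sampled entropy values (entropy_bits each) collide."""
    space = 2 ** entropy_bits
    p_distinct = Fraction(1)
    for i in range(n_outputs):
        p_distinct *= Fraction(space - i, space)
    return 1 - p_distinct

# For a 16-out tx with 128-bit entropy, the chance is roughly
# n*(n-1)/2 / 2**128 -- on the order of 10**-37.
print(float(entropy_collision_probability(16)))
```

Under these assumptions an in-wallet uniqueness check would essentially never trigger, though it remains cheap insurance.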
17:20:20 You can bully me lol but what do you mean about the minimized amount of entropy? 17:20:43 We're only using 16 bytes for entropy during the derivations? 17:21:48 If two outputs happen to have the same entropy, they'll trigger this consensus rule and cause a failure? You need to derive entropy and do a uniqueness check prior to doing the full derivations? 17:22:40 Is 16 max outputs enough for that effect to be noticeable (assuming we sent all to the same address anyway)? I mean I guess it's technically lower than 128 bits so.... 17:23:00 Because I think at best this is a 1/2**64 chance, but the fact it's any 2 of the n outputs in a transaction may reduce that further to the point yes, we do actually need code for that. 17:23:47 I assume the check that the entropy is unique before deriving is simple enough to implement tbf. 17:27:39 I don't think it's as low as 2**64 since the result space isn't 128 bits, it's 256 bits 17:28:12 ... except it only has 16 bytes of entropy? 17:28:40 I'm looking to collide the preimage, not the hash. 17:31:14 Can we move on to whatever other topics we have on the agenda today by at least acknowledging checking the entropy is unique is a viable solution if this is a problem? We can discuss statistics/implementation/how that complexity compares to other complexity later? 17:31:40 Yes and 2**64 is the expected value assuming you get as many samples as you want, no? The real probability for a preimage collision in a 16-out tx (assuming your machine's entropy is good) is 1/(2**128-1) * 1/(2**128-2) * 1/(2**128-3) * ... * (1/2**128 - 15) 17:31:42 jeffro256: You wanted to talk about Carrot reviews, right? 17:32:24 Oops formatting. Yeah, we can move on. I will look into it for sure 17:33:09 https://github.com/jeffro256/carrot/blob/master/carrot.md 17:33:50 So I solicited quotes from different auditing firms for the Carrot specification.
For all of them, I requested that the general specification be reviewed to find any glaring vulnerabilities, but more concretely, to create security proofs for the security properties in Section 9 (except for Janus resistance). My first question: is this the best way to go about scoping the Carrot audit in terms of what the community needs? 17:34:23 Oh gosh, you may be right this isn't what I was thinking yet is properly defined as a multi-target collision where each new entropy is checked against all existing entropy. That'd be 2**k / n (where n is the amount of already sampled entropy). Sorry if I did botch that. 17:36:48 Your definition for auditing seems reasonable to me *though I haven't reviewed every line item in that section*. 17:37:36 Why the exception for Janus resistance? 17:37:52 If that can be explained in some simple terms :) 17:38:38 I ask the same question but without the "if" 👀 17:39:27 I got some feedback that Janus attack resistance might be slightly harder to prove, which might raise costs 17:40:06 I asked for a less formal review of Janus resistance, but that can certainly be upgraded if desired 17:40:48 Well, lately donors were extremely generous. At least some of them, it seems. Maybe if we do all that effort, spending some more might still be worth it 17:40:52 It does take up a good chunk in the number of steps in the enote scan process, so it might be worth reviewing more formally just because of that complexity 17:42:43 Okay I will inquire into that and report on it when that information becomes available 17:43:00 Second note: The most expensive firm which responded had a quote 5x higher than the cheapest firm. On the one hand, they valued their man-hours at about a 2.5x higher rate than the cheapest firm, so they were likely going to be more expensive regardless. However, they also estimated the number of man-hours required to be 2x that of the cheapest firm.
One of these firms is probably misguided on the effort required, or they understood the scope to be of different depths. At any rate, I need to sync with them on that to see where the man-hour discrepancies lie. 17:43:16 It's easy to ask that other people's money get spent, but I would feel better if all components there get equal treatment 17:43:41 Can we get two quotes? With/without? 17:44:19 Yes, I will ask 17:44:42 At what point does the marginal utility of a formal security proof get outweighed by its cost? 17:44:44 such a difference makes me want to know the firms. I know it's been decided to be withheld, but still. 17:46:11 Depends on the cost :D get us numbers jeffro 17:46:23 Janus only affects privacy, right? Just so I understand right 17:46:38 I'll personally pay at least an extra $10 for this so we can set a floor there 17:46:49 lmao 17:47:00 AFAIK FCMP++ is spending a little below expected budget for the research side, so it's probably worth it. 17:47:03 Janus is where someone has two public addresses and confirms they are held by the same entity 17:47:29 *puts 10 USD in my willingness-to-pay privacy calculator* 17:47:33 Yeah Janus affects off-chain privacy: the ability to correlate two Monero addresses to the same user. The attack needs to be actively started by sending funds 17:48:08 It is not a theft risk nor a counterfeiting risk. 17:48:10 The legitimate worst case is linking an anonymous profile to a doxxed one. 17:49:19 AFAIK Seraphis-Jamtis was supposed to eliminate the Janus attack. So it would be good if Carrot does too 17:49:22 Nope 17:49:25 Nor a DoS 17:49:47 I thought one of the variants eliminated it? 17:49:49 This is the same technique as JAMTIS AFAIK. 17:50:10 Oh. You were responding to my previous message 17:50:13 The "Nope" was to it not being a theft risk, sorry for the confusion. 17:51:10 Obviously, jeffro256 to confirm, yet an unproven Carrot presumably is as good at stopping Janus as an unproven Seraphis JAMTIS?
Same guts on this matter? 17:51:15 Yeah btw Carrot should have feature parity with Jamtis except for 1) subaddress lookahead tables are still required 2) no fancy probabilistic light wallet servers, and 3) the key exchange is *slightly* slower 17:51:55 It doesn't use the same technique, but they both should have cryptographic strength at blocking Janus attacks AFAIK 17:52:26 Oh. My bad, sorry. 17:53:50 Jamtis does a third Diffie-Hellman key exchange and binds to that in the amount commitment, while Carrot basically does an HMAC and stuffs it in the space where Jamtis address tags would be 17:55:41 I think MRL wants to get quotes for both with/without Janus. 17:56:20 What should the upper limit of our budget be for Carrot in general? 40K, 50K, 60K USD? For transparency, the highest quote I received was for 100K USD, which I think is likely too high for this work in the depth that we need it 17:57:10 This is with *less formal* Janus review, but it's still defined to be in-scope 17:57:59 20K (the cheapest offer) definitely sounds too low, so 40K minimum? 17:58:31 How many offers did you receive so far? 17:58:57 Are we including proof review, not including proof review, or not doing proof review? 17:59:00 Can you DM me all the groups you reached out to thus far? 18:00:03 One entity needs to write the proofs and another has to review, right? This would only be for writing them, right? 18:00:37 rbrunner: 4 18:01:08 Ok, might be enough to learn about "reasonable" regarding amounts by comparing them 18:01:35 We could do a review of the written proofs. I wonder how much value that would bring given that Monero addressing schemes are already relatively well understood 18:02:13 A review of the implementation code is definitely more important in my opinion 18:03:35 We have two more agenda items. jeffro256, do you have everything you need until the next meeting? 18:05:18 I think so yes.
I told the firms that we would have a discussion where the representatives could pop into the meeting and discuss pros/cons of their proposal. Would next week be a good time for that? 18:05:40 That sounds great. 18:06:11 5) 10 block lock discussion https://github.com/monero-project/research-lab/issues/102 https://github.com/AaronFeickert/pup-monero-lock/releases/tag/final 18:06:40 kayabanerve at last meeting suggested that the N block lock should be set so that the mining pool with the highest hashpower share (currently about 30%) should have a 1% or less probability of success of re-orging the chain N blocks deep through a double-spend attack. 18:06:48 If you thought users liked the 10 block lock, they're going to love 18:07:00 the 20 block lock 18:07:15 I used Theorem 1 of Grunspan & Perez-Marco (2018) "Double spend races" to produce this table: https://gist.github.com/Rucknium/da1e57b1864aca477dfa3b4e02e86e26 18:08:10 The formula assumes that the adversary keeps mining on the attacking chain for an infinitely long period of time if they don't succeed after N blocks. That's usually not economically rational. 18:08:29 Grunspan & Perez-Marco (2021) "On Profitability of Nakamoto Double Spend" considers scenarios when the attacker breaks off the attack after falling behind the honest chain. I plugged in a few numbers. The results don't change much with this economic rationality formula because of the parameters we're working with. The attacker already accepts a 99% probability of failure. 18:09:45 If a 20 block lock is considered too long, then you can change the assumptions. Lower the hashpower share of the attacker or increase the acceptable attack success probability. 18:10:22 Or just say that only benign re-orgs will be considered for the N block lock analysis 18:10:34 The cell with 0.86739 is the one, right, row 20, column 0.3 18:10:58 Sorry, what are the cells? 18:11:02 Right 18:11:03 nice work Rucknium, I'll read it with more active attention after the meeting.
I want to say that looking at current hash rates of pools is an insufficient metric IMHO. the economic feasibility of a related attack depends on the depth N. if N is lowered, more hash power may come online from the sidelines to carry out an attack, because they are no longer priced out. 18:11:10 The table in that gist 18:11:11 Probability of attack success 18:11:13 I understand the row and column definition. I'm unclear what the cells are. 18:11:24 Percentages of success? 18:12:06 Yeah, Rucknium does not fail to surprise :) 18:12:07 essentially, a dark forest scenario. 18:13:41 But so far what this does *not* hint at IMHO is that 10 is already overly cautious 18:13:45 chaser: I agree. However, 30% is already very high for a hidden adversary. Just 20% more and the attacker can execute a malicious re-org for any confirmation wait time. 18:13:54 10% of hash power over 5 years makes a 10-block reorg likely, if I'm interpreting this correctly? 18:14:46 I'd consider that secure and not call for further raising the lock 18:15:07 Years? The sequential numbers down the rows are the number of blocks that an attacker could re-org in a single attack attempt 18:17:27 100/c, where c is the cell value, for the amount of attempts. Then I said 20 minutes per attempt as the goal is a 10-block reorg (which is 20 minutes of time). 18:18:07 That value, for a 10% attacker, is roughly 5 years. 18:18:32 If you allow the attacker to attack over and over again, the necessary block lock to prevent all attacks would be huge 18:18:37 *I'm perfectly aware that's not the proper formula, I just wanted an offhand estimate of how long an adversary with 10% would need before they stumble onto a successful reorg. 18:18:43 Is that the probability of attack success that a single, given block at the top of the chain will get reorged by the largest malicious mining party in some finite timeframe? 18:19:17 Rucknium: that would be new-entrant hash power, so essentially +100% on top of what we have now.
I'm honestly not sure if that's low or high, but also consider that Monero is non-ASIC, so there is a lot of hardware out there that can be repurposed to support a reorg attack. 18:19:25 I'm not asking about preventing all attacks. I'm curious how long it takes before they stumble onto it. I can't reasonably argue an adversary would pay for 10% of the hash power for *years* just to perform a DoS here. 18:20:24 But there may be value to an adversary in having a large amount of hash power for days, or weeks, to perform even just a DoS. 18:20:45 jeffro256: The first row is just probability of re-orging one block with a single attack. The model assumes that the attacker gets a head start of one block. The attacker is basically constantly mining until they get that one-block head start 18:22:25 If people are wondering "How secure is PoW, really?", then you can read Budish, E. (2022). "The Economic Limits of Bitcoin and Anonymous, Decentralized Trust on the Blockchain." https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=101 18:22:27 What is a single attack "attempt"? How long does it take for this attacker to give up on a failed attack? 18:22:31 Budish recently released an updated draft a few months ago 18:22:51 Maybe after much back and forth we will find out, in a few weeks, that - surprise - 10 blocks are just *perfect* 18:23:12 jeffro256: In this model, the attack continues forever. If the attacker "loses" the race to the N blocks, he/she can still win later because of random block arrivals 18:23:50 In the second paper Grunspan & Perez-Marco (2021). "On Profitability of Nakamoto Double Spend." 18:23:50 they consider the attacker breaking off the attack if he/she loses the first part of the race 18:24:18 But anyway, I put some numbers in that Grunspan & Perez-Marco (2021) formula and didn't see much difference 18:24:22 So if they 'start a new attempt' to reroll a 1% chance, it's of no difference to continuing the existing attack?
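The table's closed form, plus the "100/c attempts × 20 minutes" back-of-envelope above, can be sketched in stdlib Python. This assumes Theorem 1 of Grunspan & Perez-Marco (2018) gives P(z) = I_{4pq}(z, 1/2), the regularized incomplete beta function, evaluated here by simple midpoint integration rather than a special-functions library:

```python
import math

def attack_success_probability(q: float, z: int, steps: int = 100_000) -> float:
    """P(z) = I_{4pq}(z, 1/2): probability that an attacker with hashpower
    share q eventually wins a double-spend race against z confirmations,
    assuming the attacker mines forever (Grunspan & Perez-Marco 2018, Thm 1)."""
    p = 1.0 - q
    x = 4.0 * p * q
    beta = math.gamma(z) * math.gamma(0.5) / math.gamma(z + 0.5)  # B(z, 1/2)
    h = x / steps
    # Midpoint rule for the incomplete beta integral over [0, 4pq].
    total = sum(((i + 0.5) * h) ** (z - 1) * (1.0 - (i + 0.5) * h) ** -0.5
                for i in range(steps))
    return total * h / beta

# Back-of-envelope from the discussion: one attempt at a 10-block reorg
# lasts ~20 minutes, and a q = 0.10 attacker needs ~(1/P) geometric retries.
P = attack_success_probability(0.10, 10)
years = 20.0 / P / (60 * 24 * 365)  # roughly five years, as estimated above
```

With q = 0.3 and z = 20 this reproduces the ~0.87% cell discussed for the gist's table.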
18:24:58 I'm not surprised by that statement, but the numbers in the table don't mentally click for me to be in line with that statement 18:25:32 "it's of no difference to continuing the existing attack" What do you mean? 18:27:19 The traditional 6 block confirmation time for bitcoin comes from the slightly incorrect original formula from Satoshi where the attacker has 10% hashpower share and less than 0.1% probability of success. See Table 1 of Grunspan & Perez-Marco (2018) https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=192 18:28:04 Interesting 18:29:16 I want to get to the Chainalysis video. Maybe we can digest this info later, discuss the video now 18:29:42 If I have 10% of the hash power and want to do a 10-block reorg, what period of time do I need to maintain 10% of the hash power before I successfully pull off such an unlikely event? 18:29:55 That's the question I've been trying to get to, but sure, we can circle back later 18:30:22 Ok I can try to answer for you later. It's not difficult to compute because the attempts are independent 18:30:55 6) Chainalysis capabilities video 18:31:59 Sometimes I read papers about attacks that seem a little contrived. I think "Is anyone listening? Does anyone care?" Now I know that Chainalysis is listening and caring. 18:32:37 In other words, this gives us a lower bound on the resources they have put into attacking the privacy of Monero users. 18:34:23 I watched it and I wasn't too surprised. the edge they have will be mostly curbed by FCMP++. on top of that the main assets for them are weaknesses in transfer-layer privacy, which they formed by running swarms of their own nodes. 18:34:35 Some interesting things: There are a few abbreviated variables attached to transactions that the presenter doesn't explain. Any ideas what those could be? 18:35:10 I felt that the recent further look into D++ is a worthy direction.
18:35:34 A lot of government-provided IP address information was used in the analysis. Makes https://github.com/monero-project/monero/pull/8996 a bit more important than I originally anticipated 18:35:45 chaser: agreed 18:35:55 Rucknium I can't recall the abbreviations, could you give a time stamp? 18:36:37 They have the time difference between when one of their nodes observed the txs relayed for the first and second time. If they don't have network topology info, the first-spy estimator is best. I wonder if they are trying a topology-based estimator. It may be possible to take the time delay data they display, put it into a complex statistical model, and get an estimate for the number of malicious nodes they are running. 18:37:14 chaser: About 19:00 18:37:44 "Transaction features box" 18:38:48 Wallet trees would mean a passively malicious node doesn't learn any additional info on the outputs spent. 18:39:09 10/+ is probably decoy count? 18:39:18 IMHO, fee uniformity should be a near-term research priority. Fees were at the top of their tx distinguishability list. 18:39:42 chaser: The `K,E` part 18:39:45 The fundamental technique can be done with outputs today, it'd just use a lot more wallet storage. 18:40:40 I was just about to say this. discretized fees, here we go again! 18:41:25 Well, cough, with Seraphis we would get these, if I remember correctly ... 18:41:37 I _think_ it's the details of the extra field 18:42:13 IMHO, at a minimum it makes sense to charge fees based on the number of inputs/outputs and any extra tx_extra info instead of the exact number of bytes. 18:42:17 K being a public key, E being an encrypted payment ID and AK being additional public keys 18:42:40 rbrunner: we can do discretized fees in RingCT if we just restrict the `txnFee` field to only so many values by validator rule 18:42:48 It's really hard right now to even confirm that a wallet has standard fees since txs have variable-length integers that make tx sizes slightly different.
18:42:50 boog900 I think so too 18:43:15 boog900: yeah I remember them mentioning "key order" as a feature which, yeah, might be what this is 18:43:38 Yes but that shouldn't be fingerprintable Rucknium 18:43:57 IIRC a long time ago one could tell which transactions were cold signed because they had 2 tx pubkey fields in `tx_extra` instead of 1 18:44:00 The Monero wallet deals with that in a way. As long as everyone doing custom fee code matches that way, it's not distinguishable 18:44:31 There are so many wallets that don't do that 18:44:35 I just want to clarify AFAICT, this is to force alt wallets in line, not to resolve fundamental issues 18:44:39 Extra field presence/ordering has been a topic for a while, someone has a data set 18:44:42 And I have no function that I can input a Monero tx in and get the standard wallet2 fee 18:45:00 Heard. I do support this work TBC. 18:45:19 The "problem" with discretized fees is that it doesn't fix nonstandard fees that are very far from "standard", which is a lot of them 18:45:22 Explicitly giving each output its own key in a structured position? Sounds great 18:45:38 A lot of wallets aren't even trying 18:46:22 Reference: https://github.com/Rucknium/misc-research/tree/main/Monero-Nonstandard-Fees 18:46:37 So we need....price control! :P 18:46:39 Bit shilly, yet getting exactly in line with wallet2 was a couple weeks of work for monero-serai. I fully understand how nontrivial it is 18:46:43 kayabanerve isn't "force alt wallets in line" the way to solve the fundamental issue? 18:46:52 (shout out to jberman who actually did it) 18:47:18 The price controls issue is what makes this hard. And the interaction with the dynamic block size, miner fee penalties, etc 18:48:23 Yes chaser. My comment was this isn't wallet2 that is fingerprinting users. It's alt wallets which are. Users can use wallet2 without worrying their personal running of the software will fingerprint them across TXs.
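As a concrete illustration of "restrict the `txnFee` field to only so many values by validator rule", here is one possible quantization (round the computed fee up to one significant decimal digit). The function names and the specific scheme are inventions for illustration, not a proposal:

```python
def discretize_fee(raw_fee: int, sig_digits: int = 1) -> int:
    """Round a computed fee *up* to the nearest value with sig_digits
    significant decimal digits, so every wallet lands on a small shared
    set of possible fee values. Illustrative scheme only."""
    if raw_fee <= 0:
        raise ValueError("fee must be positive")
    quantum = 10 ** max(len(str(raw_fee)) - sig_digits, 0)
    return -(-raw_fee // quantum) * quantum  # ceiling division

def is_valid_fee(fee: int, sig_digits: int = 1) -> bool:
    """Hypothetical validator-side rule: the fee in the tx must already
    be one of the allowed discrete values."""
    return fee > 0 and discretize_fee(fee, sig_digits) == fee
```

With one significant digit, only nine fee values exist per decimal order of magnitude, so a nonstandard fee algorithm mostly collapses into the same buckets as everyone else's.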
18:49:13 A nice thing about discretized fees is it fixes an issue with an EAE attack that is even possible with FCMP++: If a user spends the whole balance, then the only difference that Eve sees is the tx fee, which are different for many txs. 18:50:00 That isn't to downplay the issue. that's to not have people concerned that it's a protocol failure (rather than a shortcoming in handling lazy wallet devs) 18:50:26 All wallet devs are lazy until proven otherwise :P 18:50:55 I proved otherwise :((( 18:50:57 kayabanerve got it. 18:51:08 I mean, they take the shortest path to get something working, usually 18:51:38 kayabanerve, not til you publish monero-wallet to crates.io :^) 18:51:57 More topics on the video? 18:52:01 I'm very much on board with making as much of the fee function part of consensus as possible. 18:52:53 (kayaba: jk just release me from needing to use serai as a submodule, señor) 18:52:57 well, I have one but may be out of scope for the meeting 18:53:04 rucknium: re the video, would it be helpful to scrape more information from the presentation? 18:53:17 the information from the spreadsheets shown, that is 18:53:32 Sneurlax: Cargo.toml a git revision? 18:53:33 This nice post by Stnby and Siren suggests that Chainalysis may have taken advantage of old DNS configs to "hijack" "trusted" remote nodes: https://www.digilol.net/blog/chainanalysis-malicious-xmr.html 18:53:39 what would cryptography that conceals in/out numbers look like? in/out arity was another factor they used. 18:54:02 Every TX would be n/n 18:54:26 With tons of fake stuff for simple txs? 18:54:44 sneurlax: Yes, especially the relay timing info in `ms`. Later I could try to do something with that to estimate their number of spy nodes.
I have been reading so many gossip protocol papers lately :D 18:54:56 I hope something else we thought would be mathematically impossible will see the light of day 18:54:58 Zero-value ins/outs for padding 18:55:07 chaser: You can bring it up 18:55:37 (Rucknium: this is it, arity) 18:55:56 tevador had an idea for discretized arity, I'll dig it up 18:55:59 Oh, I forgot I had to make another two tables 18:56:27 Tabulation of Monero transaction inputs and outputs https://gist.github.com/Rucknium/d2c02f51a2d9f103a28caa8f51be7dbf 18:56:56 With dummy inputs, every single transaction could be a 2/2, and owners of funds can still split/consolidate funds to/from `N` TXOs in `O(log(N))` time 18:57:15 The most important info is how many txs have 3+ inputs. At that point, consolidation heuristics might help adversaries narrow down which ring member is the real spend. 18:57:55 jeffro256: you're a horrible person for not at least giving us 4/4. 18:59:01 About 7 percent of txs have 3 or more inputs. So the Chainalysis method of collecting info about who owns which outputs, then analyzing many-input txs, would usually only be applicable to about 7% of txs as an upper bound. 18:59:59 Maybe you could try that consolidation analysis with txs with only 2 inputs. I don't know. 19:00:24 Actually the probability hasn't been formally analyzed 19:00:52 lol every tx being 256/16 should cover most usecases, mempool handling code be damned 19:01:32 Wouldn't it be applicable to all txs since those 7% could (maybe, big assumption) be eliminated as decoys?
19:02:11 For the false positive rate of analyzing tx uniformity defects in single-ring transactions, I developed an exact formula in https://github.com/Rucknium/misc-research/blob/main/Monero-Fungibility-Defect-Classifier/pdf/classify-real-spend-with-fungibility-defects.pdf 19:02:13 Dummy transaction inputs (tevador): https://github.com/monero-project/research-lab/issues/96 19:02:14 increasing uniformity of number of inputs/outputs (my generalized take): https://github.com/monero-project/research-lab/issues/114 19:03:42 The formula is equation 12, a little complicated already 19:04:14 IMHO restricting ins to 2^n (n>0) and outs to constant 2 could go a long way 19:04:41 jeffro256: Do you mean with a chain-reaction analysis? 19:05:44 ^ my gut instinct exactly 19:05:45 I can't endorse 2/2 compared to 4/4, personally. 19:06:43 2/2 will be hours of delay and requires perfect precision w.r.t. output usage planning. It also will hit wallet UX. 19:07:04 IMHO, there is not a developed theory about why diverse in/outs are inherently bad, except that allowing many inputs can help an adversary perform the consolidation analysis when ring size is finite. Nonstandard fees are inherently bad because the wallet produces them _every time_, so an adversary can link the txs easier. a wallet won't produce the same in/outs every time 19:08:16 *hours of delay at scale. 256 outputs take 8 hops or 2.66h as of right now. It's 1.33h with 4 which is still a massive hit compared to as many inputs as fit. 19:09:42 Rucknium I agree that fee uniformity is a more pressing issue 19:10:04 What comes to mind is that a tx with many inputs or many outputs is more likely to belong to a service. A miner or merchant consolidating txs with many inputs. An exchange sending txs out to many users in a single batch would use many outputs.
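The hop arithmetic above (8 hops / ~2.66 h to consolidate 256 outputs at 2 inputs per tx, halving to ~1.33 h at 4 inputs) is easy to reproduce. A small sketch, assuming each hop must wait out the ~20-minute 10-block lock; the helper names are illustrative:

```python
def consolidation_hops(n_outputs: int, max_inputs_per_tx: int) -> int:
    """Sequential tx 'hops' needed to consolidate n_outputs into one TXO
    when each tx may spend at most max_inputs_per_tx inputs: every hop
    divides the outstanding TXO count by max_inputs_per_tx (rounding up)."""
    hops = 0
    remaining = n_outputs
    while remaining > 1:
        remaining = -(-remaining // max_inputs_per_tx)  # ceiling division
        hops += 1
    return hops

def consolidation_hours(n_outputs: int, max_inputs_per_tx: int,
                        lock_minutes: float = 20.0) -> float:
    """Wall-clock time if every hop must wait out the ~20-minute
    (10-block) lock before its output becomes spendable again."""
    return consolidation_hops(n_outputs, max_inputs_per_tx) * lock_minutes / 60.0

# 2-in txs: 8 hops (~2.67 h); 4-in txs: 4 hops (~1.33 h) for 256 TXOs
print(consolidation_hours(256, 2), consolidation_hours(256, 4))
```

The same counts apply to splitting one TXO into N, which is the `O(log(N))` claim made above.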
19:10:53 well, these guys love giving visits to services 19:12:09 If Monero requires 2/2 for all txs, then no one will have to worry about the privacy of those services since they won't exist. (this is a joke) 19:12:19 lol 19:12:43 this behavior could be eliminated by restricting just outs to 2, with dummy if needed. the practicality issues are heavier with restricting ins 19:14:02 We are at two hours. Marathon meeting. Thanks all for attending and working so hard to improve Monero. If you didn't see the video, Chainalysis praises Monero developers for their hard work lol 19:14:49 let services send individual tx's. a service can afford forethought in output planning 19:14:57 yeah, thank you all 19:15:17 The meeting can end here. Feel free to continue discussing issues. 19:16:25 Thanks, everyone! 19:32:03 updated draft, clocking an impressive 71 pages: 19:32:04 Budish: Trust at Scale: The Economic Limits of Cryptocurrencies and Blockchains (July 2024) 19:32:06 https://ericbudish.org/wp-content/uploads/2024/07/Trust-at-Scale-The-Economic-Limits-of-Cryptocurrencies-July-2024.pdf 19:32:08 a lot of complementary material here: https://ericbudish.org/publication/the-economic-limits-of-bitcoin-and-the-blockchain/