16:56:43 Meeting in a bit more than 1 hour 18:00:01 Meeting time. Hello! https://github.com/monero-project/meta/issues/1265 18:01:38 hey 18:02:16 hi 18:02:46 *waves* 18:02:50 Howdy 18:03:16 What are the reports from last week? 18:03:40 I worked jeffro256's review comments into my PR, but today he found one more little thing to polish :) https://github.com/monero-project/monero/pull/9939 18:03:51 same procedure as every week 18:03:53 just one thing worth mentioning, I didn't see a good reason why the CLI `set_daemon` command had no option for daemon login and proxy, so I added both options ([commit](https://github.com/monero-project/monero/commit/78e22f7eb526cc9bcf45dceb5f4005aebd6ec7e4)) 18:04:07 Almost exclusively worked on PR 81, the PR to refactor wallet2 refresh and fix fcmp++ reorg handling (and last blocker for stressnet). I worked on using the existing `short_chain_history` approach to request blocks with some modifications (the goal with `short_chain_history` is to minimize trips to the daemon / data downloaded when handling reorgs). I detailed the modifications and rationale in the PR description, and in a follow-up comment. It's a bit complicated. I think it would be solid to get consensus on the PR's approach https://github.com/seraphis-migration/monero/pull/81 18:05:14 So pouring all this work into `wallet2` still is in the interest of working well for FCMP++, right? 18:05:53 Reviewing @j-berman's PR #81 today, have been working on a multisig fix upstream for the last few days. A wallet2 reorg handling issue with multisig exchange messages has been plaguing Haveno ever since Qubic started all those reorgs 18:06:01 I'd say yes.
The seraphis refresh mechanism needs months of work to get it to production ready 18:06:56 I see 18:07:18 Oh, multisig data exchange *and* reorgs, sounds like fun :) 18:08:08 I dimly remember thinking while implementing the MMS years back "Well, there won't be any reorgs while this code here runs, hopefully" 18:08:57 And for a long time that was almost a given. Not anymore right now, of course. 18:10:20 Is there an ETA for the first FCMP++ based network starting to run? 18:10:30 With the nodes of a handful of testers 18:10:32 7 days after we get PR 81 in 18:10:43 Ok :) 18:11:14 Looks a bit scary from the description alone already, not sure it's within my reach to review. Will try to have a look. 18:11:41 "Modification 1" is the main thing I'm hoping to get consensus on 18:12:13 I can try to explain it a little more here with shorter context 18:12:41 First a primer on `short_chain_history` and requesting blocks using the `block_ids` param in getblocks.bin 18:13:56 The idea is to include a short, ordered list of hashes that the wallet has already seen. The daemon will then traverse this list, starting from the highest block included in the request (i.e.
most recent), and upon encountering a block id that is in the main chain, the daemon will then return a contiguous set of blocks starting from that block id 18:14:43 If there was a reorg, then the daemon will include contiguous blocks starting from the *first* block included in the request that is *still* in the main chain 18:15:21 This way the wallet will know if there was a reorg based on the daemon's response 18:16:27 *end context on `short_chain_history`/`block_ids`* 18:17:56 Now on to this PR: I modified the core approach in wallet2 to make sure that the `short_chain_history` in every request to getblocks.bin *always* reflects the wallet's known view of contiguous blocks, making sure that the list of `block_ids` **always** includes the oldest block that the wallet can handle reorgs back to 18:19:38 ALSO importantly, the request will always include the genesis block in it as well, so if the daemon serves blocks contiguous from genesis, the wallet will know it cannot handle the reorg 18:19:56 This way the daemon response will **always** serve the blocks that the wallet can handle reorging, OR the wallet will be able to tell that it cannot handle the reorg 18:20:55 (another piece of context and why there is a max reorg depth that the wallet cannot handle reorging below: the FCMP++ tree that the wallet is building locally is "pruned" so that it doesn't take GBs of space. It has a max reorg depth so that the wallet can prune the tree effectively) 18:22:22 So that is the necessary context to understand how the PR aims to maximize the benefit of short_chain_history 18:22:28 Following this so far? 18:23:01 wallet2 doesn't include the block ID for max reorg depth already? 18:23:03 What if `max-reorg-depth` on a wallet is set very deep? Will the list of `block_id`s fit inside an RPC message? 18:23:27 Just checked... ig not lol 18:23:30 with "wallet cannot handle reorgs", you mean without `rescan_bc`? 18:23:38 That's definitely an improvement 18:24:00 "Following this so far?"
Not yet, I would say, lacking basic knowledge of how the wallet syncs in detail so far, but I think your context written here will be valuable 18:24:21 The short chain history uses exponential backoff, so it will only be logarithmically bigger 18:24:42 The short_chain_history works like this: start with 10 most recent blocks, then go step-wise *2. So first 10, then 12th, then 16th, then 24th, etc. So going back to the start of max reorg depth is log 2. Quick someone do the maths 18:24:56 right it doesn't 18:26:14 Ok jeffro is following, so I think I will just continue and I think it will eventually click for all at some point 18:26:24 Just curious: You write in the PR description "Correctly handles deep reorgs". Where is the limit of the old i.e. current code? 18:26:27 Makes sense. So you get more of the set of block ids fitting inside of a message, at the cost of more messages. 18:26:53 m_max_reorg_depth 18:27:31 Old code (as in code on release branch) I don't think is broken though AFAIK. It's fcmp++-stage fcmp++ integration that this is fixing tbc 18:27:59 Ah, now I understand. 18:29:07 So continuing rn on the core thing I want to get consensus on: right now, wallet2 fetches the **next set of blocks** asynchronously in parallel to synchronously processing the **previous** set of blocks 18:29:18 process_parsed_blocks is where the block processing is done 18:29:32 If I understand correctly, `rescan_bc` might not be usable for soft resets after FCMP++ since we need to download all outputs since the starting refresh point to build the FCMP tree 18:29:38 and that is where wallet2 syncs m_blockchain, the local data structure keeping track of which blocks wallet2 has synced 18:30:20 sorry, yes, this is what I meant 18:30:34 will double check rescan_bc soft after this 18:32:39 Continuing from here.
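The exponential backoff pattern described above ("first 10, then 12th, then 16th, then 24th") can be sketched as follows. This is an illustrative reconstruction of the described pattern only, not wallet2's actual `get_short_chain_history` code; the function name and exact details are assumptions.

```python
def short_chain_history(chain_len: int) -> list[int]:
    """Block heights to send in a getblocks.bin request, newest first:
    the ~10 most recent blocks, then exponentially spaced older blocks
    (offsets back from the tip of 12, 16, 24, 40, ...), always ending
    with the genesis block so the daemon can always find a match."""
    heights, multiplier, back = [], 1, 1
    while back < chain_len:
        heights.append(chain_len - back)
        if len(heights) >= 10:
            multiplier *= 2   # exponential backoff past the first 10
        back += multiplier
    if not heights or heights[-1] != 0:
        heights.append(0)     # genesis is always included
    return heights
```

For a million-block chain this yields roughly 30 heights, confirming the "only logarithmically bigger" point: the list fits comfortably in one RPC message even for a very deep max reorg depth.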
Since process_parsed_blocks internally handles syncing m_blockchain (including handling reorgs), that presents a synchronicity conflict: how to make sure we're fetching the next set of contiguous blocks in parallel without needing to wait for `process_parsed_blocks` on the prev set of blocks to complete, using the expected short_chain_history that reflects the latest chain state 18:34:12 I solved this by maintaining a second struct `hashchain cur_chain` that I read in the control thread executing the request for the next set of blocks in parallel 18:34:46 That struct is distinct from m_blockchain, but I duplicate the reorg/sync logic to make sure that cur_chain handles reorgs the exact same way 18:35:25 This way I maintain wallet2's approach to fetch next set of blocks in parallel to processing the prev parsed blocks 18:36:17 I wrote about an alternative relatively simple approach here that would present another change to wallet2's refresh structure, but would avoid the duplicated logic of reorg handling: https://github.com/seraphis-migration/monero/pull/81#issuecomment-3263948108 18:36:34 And that decision is the thing I'm hoping to get consensus on 18:36:58 I don't think it needs consensus in this meeting, but hopefully that context helps to reach consensus faster 18:37:26 And helps explain the PR's approach 18:37:47 👋 18:38:50 rescan_bc soft is still good, it clears m_transfers, m_blockchain, and m_tree_cache. It just doesn't clear things like notes and labels 18:38:54 Apologies for being late. My only update is that the fcmp++-stage branch merged the latest fcmp++ code and I'm very happy about that. 18:38:59 "I don't think it needs consensus in this meeting" That would be asking for a bit much ....
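The control loop just described can be sketched in miniature. This is an illustrative Python stand-in, assuming simplified stubs: `Wallet`, `refresh_step`, and the list-based chains are hypothetical, not wallet2's real API, and real reorg handling is reduced to a plain extend.

```python
import copy

class Wallet:
    """Minimal stand-in for wallet2 state (illustrative only)."""
    def __init__(self):
        self.m_blockchain = ["genesis"]

    def process_parsed_blocks(self, blocks, fail=False):
        # wallet2 also handles reorgs in here; this stub just extends
        # m_blockchain, or raises to simulate a processing error.
        if fail:
            raise RuntimeError("processing error")
        self.m_blockchain.extend(blocks)

def refresh_step(wallet, cur_chain, parsed_blocks, fail=False):
    """One iteration of the control loop: sync the locally scoped
    cur_chain first (so the next parallel getblocks.bin request can
    build its short_chain_history from it), then process the previous
    batch; on error, reset cur_chain back to m_blockchain."""
    cur_chain.extend(parsed_blocks)  # duplicated sync/reorg logic
    # ... the next block request would be issued here, in parallel,
    #     using a short_chain_history derived from cur_chain ...
    try:
        wallet.process_parsed_blocks(parsed_blocks, fail=fail)
    except RuntimeError:
        # wasted request, but no inconsistent local wallet state:
        cur_chain[:] = copy.deepcopy(wallet.m_blockchain)
```

After a successful step, `cur_chain` and `m_blockchain` agree; after a failed step, `cur_chain` is rolled back to `m_blockchain` so the next loop continues from a consistent view.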
18:39:45 (faster prove time for large inputs will be in stressnet) 18:40:01 Nice 18:40:53 Sorry that explanation above ended up longer-winded than I had hoped 18:41:06 Ok, thanks for all the background info, probably helps understanding a lot 18:41:56 Well, I think that's very often the most valuable info: general approach, intent, and overall structure of code 18:43:37 *thumbs up* 18:43:49 Ok, if that's it about that PR, do we have anything else to discuss today? 18:43:55 Wouldn't you still need to have a "copy" of the hashchain in this proposed approach to revert to in case there was an error in block processing? 18:44:08 Nah it was good info 18:45:37 This is a good point to raise and is one of the complications to deal with as a result of duplicated logic 18:45:59 I initially missed it in my first pass, but this should solve that issue: https://github.com/seraphis-migration/monero/pull/81#issuecomment-3263948108 18:46:56 Resetting that locally scoped cur_chain to the state of m_blockchain in the event of an error should make sure the next loop continues from where it's supposed to 18:47:24 "error in block processing" also includes interruptions of the communication with the daemon? 18:47:25 That's the proposed method I was referring to. So you would be syncing hashchain updates before making the next request to the daemon. Doesn't that mean you need to copy & mutate the hashchain before the processing in case the processing goes wrong? 18:47:57 Like I think you have to have it either way, no?
18:48:06 yep, and also CLI errors upon encountering a receive and user needs to input their password 18:48:17 Unless you add new logic to rollback changes made before processing 18:50:08 So first in the control thread, it rolls back the locally scoped cur_chain as needed for next loop 18:50:21 Then next loop makes the request for that next set of blocks in parallel 18:50:43 Then it continues processing prev parsed blocks on top of m_blockchain (which has not been updated yet for the rollback) 18:51:32 process_parsed_blocks will then *also* handle rolling back m_blockchain internally (using duplicated reorg handling logic) 18:51:52 Is this the current state of #81? 18:52:00 The last 4 messages? 18:52:14 if process_parsed_blocks errors, then the control thread should hit that catch statement, and cur_chain should reset back to m_blockchain 18:52:34 And in effect, this will have been a wasted request for the next set of blocks, but no local state change should happen to the wallet 18:52:41 yes 18:53:53 Alright, discussion between people in the know about PR #81 can continue of course, but allow me to close the meeting proper. Thanks everybody for attending, read you again next week! 18:54:03 Wait I had one other thing to raise! 18:54:06 Thanks everybody 18:54:09 jk 18:54:20 actually a couple sorry 18:55:04 Sorry, didn't suppose that ... 18:55:39 Just going to mention in here, I'm pretty ok with ArticMine 's latest fee proposal to scale fee by tx byte size alone without considering verification time, considering tx size as a whole actually *does* increase a significant amount as n inputs increases beyond just membership proofs size 18:55:48 pinging kayabanerve 18:56:07 this kayabanerve 18:56:39 the figures: https://github.com/seraphis-migration/monero/issues/44#issuecomment-3150754862 18:57:24 I still support a fee by verification time too, to the point it's never worse re: fees to make multiple small TXs vs one large TX. 
18:57:29 But I'll concede if I'm in the minority 18:57:32 The discussion in the last MRL meeting gave me the impression that consensus about the new fees is still somewhere in the distance ... 18:58:36 I myself don't have an opinion there, don't know enough about fees unfortunately 18:59:13 the latest MRL meeting had a lot of discussion on block weight penalty, which is another discussion. I do think we are closer to a finalized weight calculation 18:59:46 We may see in 2 days, hopefully 19:00:54 I think the latter point still holds using artic's current proposed figures, but will double check it 19:01:25 It shouldn't, as the inputs are linear but the proof sizes favor large TXs. 19:01:27 oh wait, I misread you as flipped 19:01:41 Unless it's superlinear to proof size still. 19:01:48 Then ack, I stated my criteria and can be happy however it's satisfied 👍 19:01:56 you're saying you would prefer that users not have the incentive to make one large tx, versus multiple txs? 19:02:17 Coreecr 19:02:21 *Correct 19:03:40 But verifying larger should always be faster than smaller too? So you're making 2 points there, A) that you would still support verification time, and separately B) that you would not prefer an incentive to make larger txs 19:04:32 > But verifying larger should always be faster than smaller too 19:04:33 In that 128-in fcmp verify should be slightly faster than 16x 8-in 19:05:52 It seems like these 2 points are in conflict 19:06:36 Not if we weight verification time superlinearly as to achieve an incentive for many lower-verification-time TXs. 
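The superlinear-weighting idea above can be illustrated numerically. The exponent here is made up purely for illustration (it is not ArticMine's proposal nor any actual proposed parameter); the point is only that any exponent above 1 removes the fee incentive to consolidate into one large transaction.

```python
def verification_weight(n_inputs: int, alpha: float = 1.5) -> float:
    # Illustrative only: any alpha > 1 makes per-tx verification
    # weight superlinear in input count, so splitting a large tx into
    # several smaller txs is never penalized fee-wise.
    return n_inputs ** alpha

one_large = verification_weight(128)       # one 128-input tx
many_small = 16 * verification_weight(8)   # sixteen 8-input txs
# one_large > many_small: the single large tx carries more weight,
# while with alpha == 1 (purely linear) the two would be equal.
```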
19:07:47 I won't complain if they're at least equal but would argue a 128-input TX should pay twice as much as 19 8-input TXs :C 19:08:50 I would also argue 128-input TXs shouldn't exist and point out you wanted booting a Monero node to require sampling ~200k points which would take around a minute D: 19:09:41 But that's not the community's vibe at this time :p So instead I made it so loading those 200k points takes ~40x less time. 19:09:43 https://github.com/seraphis-migration/monero/issues/44#issuecomment-3187685572 19:10:04 16x 8-in is almost twice as large as 1x 128-in 19:10:20 bandwidth is likely to be the main limiting factor to scaling long term 19:10:34 And should be. The input limit will be the limiting factor to scaling if we ever introduce transaction uniformity, if every TX must be 100 KB due to the inputs. 19:12:13 From transaction uniformity to IVC (which could compress all inputs to just one), the input limit is an incredibly important factor. 19:12:29 I think it should be lower. It'll be 128. I think people shouldn't be encouraged to make TXs with 128 inputs. 19:12:55 *IVC would presumably also have a fixed input limit. 19:13:45 The reality is the current hf and what's planned for it in particular, and I think the goal should be to optimize for that, not a future hf which may have distinct tech/research to achieve the goal of uniform txs 19:14:28 Except people keep bitching about the idea of reducing inputs and this would gently nudge them to build better wallet software themselves, reducing friction in any inevitable future where Monero development doesn't die off. 19:15:09 At some point, Monero will either be technically stagnant or will adopt a fixed-input limit (even if solely internally considered and not exposed to the user) for privacy or scaling, I'd bet on it. 19:15:13 I wonder if, without proper broadcasting of this rule, people making very large TXs will ever do the math and be deterred from making large TXs, or they'll just go "hmmm...
that's a really big fee, oh well..." 19:15:17 I mean, I'd bet like $100, I'm not a millionaire. 19:15:45 jeffro256: p2pool is probably a large enough force that someone will build an optimized p2pool aggregator 19:16:04 It doesn't require users be smarter. It requires a few devs be smarter with the systems users use. 19:16:58 But aggregators do technically break the current UX of using Monero, so the users at least have to be aware of why it is being done 19:17:29 Any part of Monero 'breaks' the UX of Monero, fight me /s 19:18:10 Aggregators already exist. This does aggravate them, but in a softer way than what I originally pushed for, in hopes they'll adjust and it'll flow downstream so it never comes to the shove I wanted. 19:19:02 The fact users may not have noticed aggregators exist, or sweeping may be necessary, is solely due to the conditions which may or may not always exist. This narrows the chance they live in blissful ignorance, sure, but it sets a transition path and encourages uniformity now. 19:21:06 I still think your line of reasoning is giving too much weight to how you imagine a future hf would be designed to accommodate tx uniformity 19:21:32 I understand I'm disagreed with and I'm not a dictator here 19:21:41 Feel free to move on :p 19:21:55 I think a future hf that achieves tx uniformity in some solidly designed /agreed upon way is fine to then "shove" toward tx uniformity at that point 19:22:25 rather than push for it in next hf 19:23:28 the goal is still rough consensus on weights for this current hf. 
sounds like you still wouldn't agree with weights scaled by byte size alone and would NACK what's currently on the table 19:23:38 which is an important NACK to take into account 19:24:43 For what it's worth, my gut feeling is also to get FCMP++ out of the door in any reasonable, acceptable form, see how that works, and then adjust and improve starting from that 19:25:10 It's already a giant step, a bit problematic to make it bigger still 19:26:03 I think byte size has strong justification and has the major benefit of being simple 19:28:56 I'm content with closing discussion on this issue for now :) 19:29:08 at least within this meeting that's still quasi ongoing 19:29:34 I did want to raise one other thing, but I'll hold off on it 19:29:55 So next week for that "one more thing"? 19:30:13 or I'll just say it now and whoever wants can discuss 19:30:16 1 sec 19:30:46 It will at least make it into the meeting log :) 19:31:47 I raised a point in 81 that is unrelated to 81, that I will probably make a separate issue out of: https://github.com/seraphis-migration/monero/pull/81#issuecomment-3197430438 19:32:17 jeffro256 and ofrnxmr and I then had a discussion on it that is interesting 19:34:49 my TL;DR: if a pool tx's FCMP++ uses a tree root that will no longer be part of the main chain e.g. daemon user pops 40 blocks or there is a >= 10 block reorg, then said tx will no longer validate (unless of course, the tree becomes the main chain tree again), and as such I propose we kick the tx from the pool as a step in a better direction from current (the tx remaining in the pool and preventing a user from spending the inputs that tx included) 19:35:47 TL;DR if the tx is absolutely hopeless kick it?
19:35:55 yes 19:36:03 jeffro argues that since the tx could theoretically become valid again, it may make sense to keep the tx around in the pool 19:36:18 Sounds good, with a watertight predicate "Is hopeless" of course 19:36:19 (I think is his argument, may be putting words in his mouth sorry jeffro) 19:37:36 Yeah, saw something of that argument. I think that may have merit, but not with a full week of expiry. I would guess if after 1 day there is no switch back of the chain it's over. 19:38:02 We could keep the tx but drop it if another tx comes in that double spends it but is valid? 19:38:18 or we could just keep both in the pool 19:38:32 why drop either :) 19:39:21 Uh, tx id must be key in tons of related structures 19:39:30 I think the latter is arguably the optimal long term solution, but more involved of a change that I don't exactly think is worth it for v1 19:39:38 Keeping both might not be trivial 19:40:08 tbf this is an issue that needs fixing now not with FCMP 19:40:21 and it doesn't need to be a HF/consensus change 19:40:34 that's fair too 19:40:41 Alright, *now* I declare the meeting proper to be over. Thanks again, until next week. 19:41:04 I would only be okay with keeping both if the signable transaction hash is the same. 
But perhaps as a v1, I guess it makes sense to drop the transaction under a >=10 block reorg and allow a new transaction, rather than keeping that input completely locked for 3 days (current mempool expiry period) 19:41:33 thanks everyone, until next time 19:42:05 the problem with keeping the tx around (without other modifications) is that a deep reorg by a malicious actor could be even more problematic for honest users than it needs to be (honest users get their txs stuck and can't spend the inputs) 19:42:33 I think dropping txs in the pool after a reorg is fine as only txs in the pool before the reorg would be affected if the main chain comes back 19:43:19 another contextual element: blocks that are reorged also have their txs placed back in the pool 19:43:44 and I think it's a better step for v1 to drop those 2 if they can't be included in what is at the time the main chain 19:43:51 those too* lol 19:45:06 better step relative to the current behavior of just keeping txs in the pool until expiry without changing logic to accommodate newly valid txs
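The v1 pool behavior proposed in this discussion could be sketched like this. All names and structures here are hypothetical stand-ins for illustration, not monerod's actual txpool code.

```python
def prune_pool_after_reorg(pool_txs, main_chain_tree_roots):
    """Drop any pool tx whose referenced fcmp++ tree root is no longer
    on the main chain: such a tx cannot validate (unless that tree
    becomes the main chain tree again), and keeping it would leave its
    inputs locked until mempool expiry (~3 days)."""
    kept = [tx for tx in pool_txs
            if tx["tree_root"] in main_chain_tree_roots]
    evicted = [tx for tx in pool_txs
               if tx["tree_root"] not in main_chain_tree_roots]
    return kept, evicted
```

Evicting (rather than keeping both the stale tx and a replacement, or waiting out expiry) is the simpler v1 step: the stale tx's inputs become spendable again immediately after the reorg.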