00:44:13 jeffro256[m]: it's probably fair to say that, do you have some particular situation in mind for this question?
00:47:29 I was trying to brainstorm ways to take information from the blockchain to help order the tx pool events between restarts consistently that didn't involve storing extra information
00:48:05 For https://github.com/monero-project/monero/pull/8076
00:49:17 Right now wall-time is used, which can lead to arbitrarily large inconsistencies with tx ordering if the clock is changed manually or just during normal syncs
00:56:01 between restarts of the daemon? wouldn't you just reload the tx pool in whatever state it was when you saved it, which would be implicitly ordered?
00:59:50 Yes, but the wallet stores a value called "m_pool_info_query_time" which the daemon returns, and the PR allows the wallet to incrementally synchronize the pool while simultaneously downloading blocks
01:00:23 That value "m_pool_info_query_time" has to be monotonically increasing, otherwise the wallet will get an incomplete view of the pool
01:03:53 do you need consistency between interactions with different daemons? otherwise you could give each tx pool event an id from a monotonic counter, and queries would indicate the highest counter they know about
01:08:07 that would give you a way to track txs no longer in the pool: return a list of counter ids <= the query's top id for txs still in the pool
01:09:13 * that would give you a way to detect when txs are no longer in the pool: return a list of pool ids <= the query's top id for txs still in the pool
01:43:29 > do you need consistency between interactions with different daemons? otherwise you could give each tx pool event an id from a monotonic counter, and queries would indicate the highest counter they know about
01:44:15 Yes, that's exactly my thought; the problem is now consistency between restarts.
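The monotonic-counter scheme sketched in the messages above could look roughly like this (all names hypothetical, not actual monero code): every pool addition gets an id from a counter that only moves forward, and an incremental query returns both the events past the client's known top id and the ids still present, so the client can detect removals by diffing.

```cpp
// Sketch of a monotonic event counter for tx pool queries (hypothetical
// names; not the wallet2 / PR 8076 API). Each added tx gets a strictly
// increasing id; a query carries the highest id the client has seen.
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct PoolView {
    std::vector<std::pair<uint64_t, std::string>> new_txs; // events after the client's top id
    std::vector<uint64_t> still_in_pool; // ids <= top id still present; any id the
                                         // client knows that is missing here was removed
    uint64_t top_id;                     // client remembers this for the next query
};

class TxPoolTracker {
    uint64_t next_id_ = 1;                 // monotonic event counter
    std::map<uint64_t, std::string> pool_; // id -> tx hash, ordered by arrival
public:
    uint64_t add_tx(const std::string& tx_hash) {
        const uint64_t id = next_id_++;
        pool_.emplace(id, tx_hash);
        return id;
    }
    void remove_tx(uint64_t id) { pool_.erase(id); }

    // Incremental query: everything the client hasn't seen yet, plus the ids
    // it may already know about that are still in the pool.
    PoolView query(uint64_t client_top_id) const {
        PoolView v;
        v.top_id = next_id_ - 1;
        for (const auto& [id, hash] : pool_) {
            if (id > client_top_id)
                v.new_txs.emplace_back(id, hash);
            else
                v.still_in_pool.push_back(id);
        }
        return v;
    }
};
```

Unlike wall-clock timestamps, the counter cannot move backwards within one daemon run, so the ordering problem is reduced to persistence across restarts, which is exactly where the conversation picks up next.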
It doesn't need to be consistent across daemons, just consistent for one daemon
01:44:31 I could just store the counter but I am trying to avoid it
01:44:47 For simplicity
01:45:17 after restarts you should just re-do the pool scanning
01:45:23 for simplicity :)
01:46:13 restarts should be quite rare
01:46:54 Yes, but how would the wallet know when to do the rescanning? They could know for sure if the counter went down, I guess, but the counter might not go down
01:47:46 one solution would be a session id
01:47:48 I was thinking a session ID as well as the counter
01:47:54 jinx
01:47:57 lmao
01:48:20 Great minds think alike
01:49:49 That'll be a little extra review work, but I think it'll be worth it
11:28:01 jeffro256[m]: I vaguely remember that I thought through the case "daemon restart"
11:28:32 I would predict the daemon sees that the info about the pool content does not go as far back as the client wants to get it
11:28:58 And will return "Sorry, I could not give back incremental results, here have the whole pool"
11:30:22 And then the client switches the return value processing approach from "incremental" to "whole pool" as a reaction
11:31:11 I would also suspect that over the course of a year we amassed enough tests to detect if you could throw everything off
11:31:52 by simply restarting the connected daemon ... or maybe I misunderstand the situation you investigate?
13:28:21 Restarting causing inconsistency isn't necessarily a problem as-is with wall-time ordering, but wall-time ordering is susceptible to arbitrarily large inconsistencies due to system time adjustments
13:33:06 Right now, as the PR stands, you can have the following situation: 1) Wallet asks daemon for daemon time and daemon returns time A. 2) Daemon system time jumps backwards, either from manual adjustment or regular synchronization, to time B (B < A). 3) Daemon then adds transaction T to the pool at time C (B < C < A). 4) Wallet then asks for incremental refresh since A.
13:33:07 5) Since C < A, the daemon does not send transaction T, and so the wallet refresh is in an inconsistent state
13:35:28 These scenarios are eliminated if we use a steady clock (e.g. std::chrono::steady_clock or m_cookie), but then we have to worry about monotonicity across restarts
13:42:56 Or at least, you have to worry about notifying the wallet that it needs to fetch the whole pool this time before continuing incremental refresh
17:10:16 Well, I guess you are right, but doesn't that self-correct after only one call?
17:10:49 I mean, alright, the first call right after the daemon had its time travel can omit transactions that it should return
17:11:13 But then it returns its new time to the client, which will use that next time, and everything should be alright already?
17:11:47 By the way, just to not lose perspective, we are merely talking about *pool* transactions
17:12:13 After 2 minutes of missing transactions, with the arrival of the next block, chances are good everything is alright
17:12:31 for the simple fact that the missed transactions went into that block and reach the client that way
17:14:00 It's anyway mildly worrying to me how much effort goes into this, where merely the processing of pool transactions gets improved, nothing else
17:14:34 If you really like, you could see the whole pool transaction handling as "nice to have" - the wallet would work perfectly without it ...
21:09:27 Looks like the cli wallet doesn't have the option to skip the exchange_multisig_info stage for 2/2 wallets.
21:10:04 Alex|LocalMonero: pass in the output of the make_multisig call plus set the force update flag to true
21:10:14 the output of the local call*
21:11:40 UkoeHB: You mean the `prepare_multisig` command? The `make_multisig` command just throws an error that the wallet is already multisig.
21:11:51 Actually, prepare_multisig throws the same error.
21:12:08 I mean the output you get when you call it the first time
21:13:28 UkoeHB: There doesn't seem to be a force update flag in the CLI
21:13:44 well it's not a user-safe option
21:13:53 so only RPC has it
21:15:15 Perhaps its enablement should be added as a set flag? Or 2/2 wallets should simply not ask for the post-kex round.
21:19:25 Like I said before, the default should be as strict as possible. It may be feasible to have a 'fast N-of-N setup' method.
23:06:00 Output blackballing hasn't been useful for over 4 years: https://old.reddit.com/r/Monero/comments/9rml9j/generating_and_importing_an_output_spent/
23:06:01 Not sure if there ever was a follow-up that takes public pool data into account, but I can't imagine it making much of a difference considering the significant ring size increase since then.
23:06:08 Is there any reason to keep this around?
23:06:25 Removing it would eliminate ~2k lines of code, shave a couple MB off the release archives, and be one less thing to worry about during the seraphis transition.
23:18:40 can somebody explain to me how a remote hidden service is chosen when I use --tx-proxy?
23:18:56 is it random each time the node restarts, or does it stick to the same node?
23:19:39 how does it get the list of available nodes?
23:21:36 tobtoht[m]1: can you make a GitHub issue for that?
23:22:03 Also Alex|LocalMonero, make an issue for the N-of-N fast setup
23:23:00 monerodtxproxy: #monero:monero.social
23:23:27 i would like to be pointed to code if possible
23:23:43 UkoeHB: yes
23:23:55 so that i can modify the source to ensure on restart it does not connect to the same node
23:24:16 i don't know if it has sticky behavior
23:24:27 tobtoht: Could the blackballing code be used to exclude coinbase outputs from decoy selection?
23:50:19 Rucknium: No, it would be much easier to have the daemon mark coinbase outputs as such in the get_outs call during tx construction, rather than keeping (and somehow updating) a list of coinbase outputs in the shared ringdb.
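Returning to the earlier tx pool thread, the "session id plus counter" restart handling that both participants converged on could be sketched on the wallet side like this (all names hypothetical, unrelated to the actual wallet2 / PR 8076 code): the daemon tags each incremental response with a session id chosen at startup, and a session change or an explicit full-pool response tells the wallet to discard its incremental state.

```cpp
// Wallet-side sketch of restart detection via session id + counter
// (hypothetical types; not actual monero code). A new session id means the
// daemon restarted, so the counter ordering cannot be trusted any longer.
#include <cstdint>
#include <string>
#include <vector>

struct PoolResponse {
    uint64_t session_id;  // random value chosen once per daemon run
    uint64_t top_counter; // highest pool event id covered by this response
    bool full_pool;       // daemon could not answer incrementally ("here, have the whole pool")
    std::vector<std::string> txs;
};

class WalletPoolState {
    uint64_t session_id_ = 0;
    uint64_t top_counter_ = 0;
    std::vector<std::string> pool_;
public:
    // Returns true when the response forced a full resync (daemon restarted,
    // or its history no longer reaches back to our counter).
    bool process(const PoolResponse& r) {
        const bool resync = r.full_pool || r.session_id != session_id_;
        if (resync)
            pool_.clear(); // incremental state is no longer trustworthy
        for (const auto& tx : r.txs)
            pool_.push_back(tx);
        session_id_ = r.session_id;
        top_counter_ = r.top_counter; // echoed back with the next query
        return resync;
    }
    uint64_t top_counter() const { return top_counter_; }
    size_t pool_size() const { return pool_.size(); }
};
```

Because the daemon never persists the counter, the session id is what makes the wallet's "counter might not go down after a restart" concern harmless: even if a restarted daemon's fresh counter happens to exceed the wallet's stored one, the mismatched session id still triggers the full-pool fallback.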