00:25:24 gingeropolous: My opinion of 9130 after analyzing it: "Very small differences in the decoy selection algorithm would not be statistically distinguishable, especially for small ring sizes. Therefore, now we would not have a problem with having two slightly different decoy selection algorithms (the current code and the modification from this PR) being used by two different wallet2 versions by sets of people who did and did not update to this version." https://github.com/monero-project/monero/pull/9023#issuecomment-1802593848
02:27:42 oh roight roight. i do remember reading that
08:47:02 hello everyone, so I am not sure how many people generally use remote nodes (probably most) over slow connections or Tor (probably not most), but there is this really annoying delay every time a transaction is to be sent, and it seems to be related to the fetching of the decoy outputs from the server. on a slow connection, this can take 20-40 seconds or more.
08:47:31 is there some way the wallet could cache most of the decoys ahead of time, and only get the ones it really needs at that time from the server?
08:55:46 yes, there is a way, it's called a local node
08:56:33 decoys can be taken from _any_ past transaction on the blockchain, so you'll need to "cache" the whole blockchain => local node
09:07:58 well, remember that the biggest part of the problem is an unreliable, slow connection, so syncing a local node is perhaps not the best option here. besides, there are other considerations like battery usage and disk space usage .. i already have a few nodes running elsewhere with good connections etc, but all it takes is being on a slow connection and that enormous delay appears.. it's tolerable and all, but I am wondering if there isn't a better way.
09:07:59 i don't know exactly how the algorithm works, but would it not be possible to preemptively cache some decoys ahead of time, and then at transaction time fetch only the few recent decoy outputs that the algorithm picks in real time?
09:13:49 But decoys are selected randomly, going down from the current block height. Any such cache will become outdated quickly if it doesn't store all transactions
09:13:59 quickly = in 2 minutes on average
09:20:42 if you have a few nodes, can't you run a hot wallet together with one of them?
09:28:21 oh, two minutes is not very long indeed.. yeah, I could run a hot wallet on one of them, I guess, but my workflow in Qubes works very differently: I have a VM for the hot wallet which is firewalled and can only talk to my remote monerods
09:28:46 so when ring sizes go to 100+ this problem will only get worse, huh?
09:30:58 i'm just daydreaming now, but what if the remote node constructed the ring signature instead (obviously it would have to be trusted, which is the case here for example)? does that even make sense? sorry if it's a stupid question, i'm really not so familiar with the internals here .. just trying to come up with ideas, because i have seen many friends complaining that "monero is slow" and this is what they mean (they are also using said crappy internet connections)
09:51:38 a node constructing ring signatures is effectively the same as running the wallet there
09:58:24 in a way, it still wouldn't be able to spend funds from the remote wallet, right
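For context on why any decoy cache goes stale so quickly: decoy selection is biased heavily toward recent outputs. The sketch below is a simplified illustration of that bias, not the actual wallet2 code; the gamma parameters are roughly the ones used for decoy ages, while the average output rate and total output count are made-up numbers for demonstration.

```cpp
// Simplified illustration of recency-biased decoy selection: draw an output
// *age* from a gamma distribution, then map the age back to a global output
// index. Most picks land very close to the chain tip, which is why a cache
// that isn't refreshed every couple of minutes misses the indices the wallet
// actually wants.
#include <cmath>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    std::mt19937_64 rng{std::random_device{}()};

    // Approximate shape/scale used for decoy ages; the gamma variate is
    // interpreted as ln(age in seconds).
    std::gamma_distribution<double> gamma(19.28, 1.0 / 1.61);

    const double seconds_per_output = 15.0;   // assumed average output rate
    const uint64_t num_outputs = 100'000'000; // assumed total number of outputs

    std::vector<uint64_t> picks;
    for (int i = 0; i < 15; ++i)
    {
        const double age_seconds = std::exp(gamma(rng));
        const uint64_t outputs_back =
            static_cast<uint64_t>(age_seconds / seconds_per_output);
        if (outputs_back >= num_outputs)
            continue; // older than the chain: a real implementation would redraw
        picks.push_back(num_outputs - 1 - outputs_back); // index counted from the tip
    }

    for (uint64_t idx : picks)
        std::cout << idx << '\n'; // indices cluster near the most recent outputs
}
```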
13:41:24 hrm. self compiled master won't run on my seed node
17:18:03 I'll add to the convo about getting decoys from the node: it has felt a *little* silly to me at times that wallets download all this transaction data from the nodes when scanning for outputs and simply throw it away. I understand not everyone would want to keep it, but an option for the wallet to cache data it gets from the node could be helpful for some people. even with a local node, it would make syncing multiple wallets on the same device faster.
17:19:14 the main downside I can think of is it could worsen the issue of malicious nodes handing out bad data if that data gets cached. on the other hand, if you cache the data, you can potentially notice when there's a mismatch after switching nodes
17:21:57 getmonero.dev is now alive as a replacement for monerodocs.org. GitHub repo: https://github.com/MAGICGrants/getmonero.dev
17:22:29 hell, every now and then I notice transaction construction going a bit slow connected to a node on my own LAN. or, even more so, users who run a node from a VPS-type setup: it's their node, but they're typically going to be remotely connected and potentially downloading a lot of repeat data
17:22:37 sgp_ that's dope
17:24:26 IIRC jeffro256 had a proposal to cache the historical distribution of outputs. AFAIK, that's the first round of communication with the node (the `get_output_distribution` RPC call). I think it is about 3 MB of data. Then the wallet needs to pick the output indices of the decoys (and remember the index of the real spend(s) it is using).
17:25:30 IIRC, the second round of communication is when the wallet asks for the output public keys for the output indices it selected. Probably caching all output public keys would be too much data for a wallet cache.
17:26:38 Since decoys are preferentially selected from more recent outputs, you could age out the public keys from the cache after a period of time. say only keep stuff from the last month, or whatever number makes sense given the output selection algorithm
17:28:21 but I guess the second round isn't as much data? since a whole transaction is only ~2 kB
17:28:29 For my wallet, get_output_distribution was definitely the slowest part. I had a plan to make a cache for it, since 99% of it remains the same between each invocation (you just need to check for new blocks or reorgs), however I got busy with other things. If someone wants to work on that, I'd be grateful and would be willing to help out, but I've got a lot on my plate rn
17:28:39 I think the bigger issue is probably overloaded nodes with slow seek/response times
17:29:23 jeffro256 very interesting!
17:33:32 3 MB is a lot to send on every tx construction, even for an unloaded node, if your wallet connection isn't that great
17:51:23 I do this in Feather. I can clean up the patch and submit it upstream when I have some time.
17:52:05 Amazing! Ping me and I will gladly review when upstreamed
19:55:54 yeah I was thinking like, when it asks for the output public keys, the (I'm assuming) multiple database calls required to provide that info
19:57:25 I think that can lead to latency in that part, even though the amount of data is small. as I said, I've noticed slow TX construction on my LAN where bandwidth is plentiful. I'm guessing at times when the disk is busier. not that it's a big deal to me personally, we're talking about a few seconds, I'm just saying
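A rough sketch of the incremental `get_output_distribution` cache idea described above: keep the cumulative per-block output counts locally, re-request only the blocks above the cached height, and re-request a few extra blocks so a reorg can be detected and healed. The struct name and the `fetch` hook are hypothetical stand-ins, not the wallet2 or Feather implementation.

```cpp
// Incremental cache of the output distribution: only the recent tail of the
// per-block cumulative counts is re-fetched on each refresh.
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

struct OutputDistributionCache
{
    uint64_t start_height = 0;        // height that cumulative[0] refers to
    std::vector<uint64_t> cumulative; // cumulative output count per block

    // `fetch(from_height)` stands in for a get_output_distribution request
    // covering [from_height, chain tip]; it is a placeholder hook, not a real
    // daemon API. `reorg_depth` is how far back we re-request to heal reorgs.
    void refresh(const std::function<std::vector<uint64_t>(uint64_t)> &fetch,
                 uint64_t reorg_depth = 10)
    {
        uint64_t from = start_height + cumulative.size();
        from = (from > start_height + reorg_depth) ? from - reorg_depth : start_height;

        const std::vector<uint64_t> fresh = fetch(from);

        // Drop the cached tail we are about to re-learn (it may have reorged)...
        cumulative.resize(from - start_height);
        // ...then append the freshly fetched counts up to the tip.
        cumulative.insert(cumulative.end(), fresh.begin(), fresh.end());
    }
};

int main()
{
    OutputDistributionCache cache;

    // Fake "node" for demonstration: a 101-block chain, 2 outputs per block.
    const auto fake_fetch = [](uint64_t from_height) {
        std::vector<uint64_t> counts;
        for (uint64_t h = from_height; h <= 100; ++h)
            counts.push_back((h + 1) * 2);
        return counts;
    };

    cache.refresh(fake_fetch); // first call downloads the full distribution
    cache.refresh(fake_fetch); // later calls only re-request the recent tail
    std::cout << cache.cumulative.size() << " blocks cached\n"; // "101 blocks cached"
}
```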
20:20:37 Yeah, the concurrency model in the `Blockchain` class tends to lead to poor multiple-reader performance since it doesn't make a distinction between read threads and write threads, so RPC requests can take a long time on public nodes or nodes with a lot of read requests being done concurrently
20:21:35 0xFFFC is working on a PR rn to fix this though, so hopefully the situation will improve soon
21:29:07 yes indeed, that is what happens here. and probably in most parts of the world also
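To illustrate the read/write distinction being discussed: with a single exclusive lock every RPC reader queues behind every other request, whereas a reader-writer lock such as `std::shared_mutex` lets read-only requests run in parallel and only blocks them while a writer (e.g. adding a block) holds the lock. A minimal sketch under those assumptions, not the actual `Blockchain` class or the pending PR.

```cpp
// Reader-writer locking sketch: many concurrent readers, exclusive writers.
#include <cstdint>
#include <mutex>
#include <shared_mutex>
#include <vector>

class ChainState
{
public:
    // Read path: many RPC threads can hold the shared lock at the same time.
    uint64_t get_output_count(uint64_t height) const
    {
        std::shared_lock<std::shared_mutex> lock(m_mutex);
        return height < m_cumulative_outputs.size() ? m_cumulative_outputs[height] : 0;
    }

    // Write path: adding a block takes the lock exclusively and blocks readers.
    void add_block(uint64_t new_outputs)
    {
        std::unique_lock<std::shared_mutex> lock(m_mutex);
        const uint64_t prev = m_cumulative_outputs.empty() ? 0 : m_cumulative_outputs.back();
        m_cumulative_outputs.push_back(prev + new_outputs);
    }

private:
    mutable std::shared_mutex m_mutex;
    std::vector<uint64_t> m_cumulative_outputs;
};

int main()
{
    ChainState chain;
    chain.add_block(20);
    chain.add_block(15);
    return chain.get_output_count(1) == 35 ? 0 : 1;
}
```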