10:23:51 vtnerd: Do you have an opinion on https://github.com/monero-project/monero/issues/10118? I know you've spent time with epee's encoder and may also benefit from this feature.
11:40:39 huh... batch JSON-RPC, hmmmm
11:41:05 how would that interact with restricted limits?
12:05:27 I think it's better to introduce new RPCs for the use cases where people would need batch requests. Batch requests would probably fetch much more data than needed in each case, and put more load on the server
12:15:09 an example is https://docs.getmonero.org/rpc-library/monerod-rpc/#get_block
12:15:51 * which, undocumented, can take hashes/heights
12:15:56 * https://docs.getmonero.org/rpc-library/monerod-rpc/#get_block_header_by_hash
12:16:02 which has hash or hashes
12:16:12 to select one or multiple blocks at once
12:16:37 hashes was not documented, but I was using it in my library until it got documented recently
12:16:52 quite certain other methods have similar batch requests
12:53:27 DataHoarder: `get_block` can already take multiple items?
12:53:43 * get_block_header_by_hash
12:53:45 That solves why I was asking for batch requests.
12:53:50 lemme see get_block
12:54:05 You first specified `get_block`
12:54:10 bad copy paste
12:54:13 that's why *
12:54:28 sech1: I mean, we're claiming to offer a JSON-RPC 2.0 server when we don't :/ We also still have a response size limit.
12:54:39 it can only take a single hash or height
12:55:16 it's not even JSON 😭
12:55:30 block headers can be fetched by a list of hashes (and it returns a list)
12:55:52 Because the parser won't decode some valid JSON, because we sometimes inline weird binary blobs, or for yet more mysterious reasons, boog900?
12:56:04 DataHoarder: I need the transaction hashes but not the transactions.
12:56:31 that's on get_block indeed
12:56:53 get_block doesn't contain the transactions, just the hashes
12:57:05 the binary blobs; atm it's a custom format that _can_ be used as JSON most of the time
12:57:07 the block header only has the miner tx id + tx count :(
12:57:20 though you can probably reuse the same HTTP/TCP connection to send multiple queries, each after the previous one finishes
12:57:35 although it is planned to be fixed at some point
12:57:49 get_block.bin tbh does it for you here
12:57:54 fun fact: `"--"` is a valid number for the monerod JSON parser (it's zero)
12:59:05 > 14:53:45 That solves why I was asking for batch requests.
12:59:05 so if requesting a list of JSON-RPC requests, these are processed sequentially, right?
12:59:31 It'd be the same as not closing the TCP/HTTP connection and issuing a couple in sequence; ofc, you take the roundtrip latency
12:59:59 A JSON-RPC 2.0 batch request allows handling them in an unordered fashion, with unordered responses.
13:00:39 The simplest implementation for Monero would be to accept the entire request, write '[' as a response, then handle them sequentially, writing each individual response and the necessary commas, before finally writing `]`
13:00:52 alternatively, do you just want the *transaction ids*?
13:01:06 and tie them to block height
13:01:17 I want the block.
13:01:30 The block is the block header and the list of transactions within the block.
13:01:56 If you tell me I have a way to fetch the transaction IDs alone, I'm curious, but it alone is not a sufficient solution for me.
13:01:56 yeah, I was mostly referring to > 14:56:04 DataHoarder: I need the transaction hashes but not the transactions.
13:02:06 for that second ... https://docs.getmonero.org/rpc-library/monerod-rpc/#get_outs and request every 2 global output indices
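For reference, DataHoarder's `/get_outs` trick as a minimal sketch, assuming a local monerod at 127.0.0.1:18081 and RingCT-era (amount 0) outputs; the node address and the index range are placeholders, not part of the discussion above:

```python
import json
import urllib.request

NODE = "http://127.0.0.1:18081"  # assumed local monerod; adjust as needed

def get_outs(indices, get_txid=True):
    """Query /get_outs for a set of global RingCT output indices (amount 0)."""
    body = json.dumps({
        "outputs": [{"amount": 0, "index": i} for i in indices],
        "get_txid": get_txid,
    }).encode()
    req = urllib.request.Request(NODE + "/get_outs", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The trick from the chat: every non-coinbase tx has at least two outputs, so
# sampling every second global output index reaches every tx at least once
# (with some duplicates); coinbase txids come from block headers instead.
outs = get_outs(range(0, 1000, 2))
txids = {o["txid"] for o in outs["outs"]}
```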
13:02:25 ...
13:02:27 you'll hit some dupes, but given that at minimum you need 2 outputs + the coinbase id :)
13:02:34 and it allows multiple!
13:02:38 I hate how reasinable that is in the current environment.
13:02:40 (and gets txid)
13:02:46 *reasonable
13:03:03 you get the coinbase id from block headers
13:03:07 which also allow ranges
13:03:18 Right, but that will just be the outputs, not even the transactions.
13:03:19 via https://docs.getmonero.org/rpc-library/monerod-rpc/#get_o_indexesbin
13:03:40 that gets you the indices of each coinbase output... then get_outs for the missing indices :D
13:03:55 each output gives the txid, not the transaction, yeah
13:04:01 + height
13:04:48 get_blocks_by_height.bin is probably the most reasonable method here
13:04:52 but it's not JSON-RPC
13:05:01 Yes, I have two separate goals:
13:05:03 1) Efficiently fetch blocks, defined as the block header and list of transactions within it
13:05:05 2) Efficiently fetch 'scannable blocks' (block, transactions, output indexes)
13:05:27 The issue with `get_blocks.bin` is it doesn't return the `prunable_hash`. `get_blocks_by_height.bin` doesn't support pruning at all and won't work with a pruned node.
13:05:38 https://github.com/monero-project/monero/issues/10121 :D
13:05:50 return prunable_hash?
13:05:54 for the transaction?
13:05:57 So I have to use `get_blocks.bin`, then re-request all transactions for their prunable hashes (pointless)
13:06:10 get_block also just returns the in-wire tx ids
13:06:16 which don't have the prunable hash afaik?
13:06:22 OR I have to call `get_block` (JSON-RPC) and then `/get_transactions` (JSON but not JSON-RPC)
13:06:55 yeah, the only way that I know to get the transactions' other hashes (prune-related stuff) is via get_transactions
13:07:26 Yes, but I can use `/get_blocks.bin` and `/get_transactions` for the prunable hashes, downloading every transaction twice, or `get_block` and `/get_transactions`, downloading every transaction once BUT suffering from non-batch fetching of the blocks.
13:07:35 get_block returns the "block", which doesn't include transactions, just a list of them, so it seems you always need to hit get_transactions
13:07:47 does get_blocks.bin also return txs here?
13:08:08 `get_blocks.bin` includes all the transactions, yet sets `prunable_hash` to zeroes
13:08:09 aha! then that's special!
13:08:15 yeah it's the "complete entry"
13:08:19 https://github.com/monero-project/monero/issues/10120
13:08:32 Hence the discussion of `/get_blocks.bin` vs `get_block` + `/get_transactions`
13:08:33 nicely timed issue :)
13:08:46 complete*
13:08:47 *prunable hash may or may not be included
13:09:15 whatever the person who wrote that believes to be complete :)
13:09:36 It is, EXCEPT the RPC explicitly sets it to zero.
13:09:49 The DB code fetches it without issue AFAICT.
13:09:53 As for the timing of the issues, I've been ranting to boog900 about my rewrite of monero-oxide's RPC for a few days now.
13:10:41 it's been a real journey
13:10:44 It turns out writing a Monero RPC client is extremely difficult/impossible to do performantly unless you don't verify pruned transactions and shove yourself into the `get_blocks.bin` API, which is really a blockchain synchronization manager, not a way to fetch blocks?
13:11:13 If you request two historical blocks with monero-oxide's RPC, we will download 100 MB because Monero will give us a 100 MB response.
13:11:21 ^ aye on that. For everything that I do with large sets of blocks, I just end up with a local block id -> get_block cache
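The `get_block` + `/get_transactions` combination discussed above, as a minimal sketch assuming a local monerod; per the discussion, `/get_transactions` is the endpoint that returns `prunable_hash`, which is why the second request is needed at all:

```python
import json
import urllib.request

NODE = "http://127.0.0.1:18081"  # assumed local monerod; adjust as needed

def post(path, body):
    req = urllib.request.Request(NODE + path, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def fetch_block_with_prunable_hashes(height):
    # JSON-RPC get_block: returns the header and the list of tx hashes,
    # but not the transactions themselves.
    block = post("/json_rpc", {"jsonrpc": "2.0", "id": "0",
                               "method": "get_block",
                               "params": {"height": height}})["result"]
    hashes = block.get("tx_hashes", [])
    if not hashes:
        return block, []
    # Plain-HTTP /get_transactions with prune=true: pruned tx blobs plus
    # prune-related hashes, including prunable_hash.
    txs = post("/get_transactions", {"txs_hashes": hashes, "prune": True})["txs"]
    return block, txs

block, txs = fetch_block_with_prunable_hashes(3_000_000)
for tx in txs:
    print(tx["tx_hash"], tx.get("prunable_hash"))
```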
13:11:47 https://github.com/monero-project/monero/issues/9901 is the issue for that
13:12:03 usually my problem is getting block ids efficiently, as they depend on each other
13:12:08 JSON (too slow) -> `get_blocks_by_height.bin` (no pruning) -> `get_blocks.bin` (no prunable hash) -> now 🥳
13:12:17 how about we take you three, lock you up in a room for a week so that you can come up with a new core RPC layout?
13:12:28 (unironically)
13:12:29 It's been merged for almost a year; it just still hasn't been included in a release. Finally in the next one, however.
13:12:31 boog900: And we're back to JSON.
13:12:49 why not get_blocks_by_height.bin? that allows specific heights
13:12:56 you need to handle the case that heights change
13:13:04 Because it requires a full node and I want to support pruned nodes.
13:13:08 but it allows an efficient request of desired heights
13:14:11 right, that doesn't run properly with pruning
13:14:34 I think the lack of JSON-RPC batching (even in unrestricted mode) is just a "fix" for everything else that is wrong
13:15:04 Actually, using `get_blocks_by_height.bin` may be optimal, boog900. If we have a full node, then we get the full transactions, which is needed as `get_blocks.bin` omits `prunable_hash`. If we don't have a full node, they'll be silently omitted and we won't download them twice. We can then use `/gettransactions` for the pruned transactions.
13:15:10 > how about we take you three, lock you up in a room for a week so that you can come up with a new core RPC layout?
13:15:10 just use gRPC and make everyone cry (and extreme bloat)
13:15:40 But it's only optimal so long as the current behavior is codified into its API.
13:15:43 just make sure to handle cases where the block id you fetched is different than expected :)
13:15:50 that's a load-bearing bad design
13:16:14 SyntheticBird: are you sure that you don't just wanna see three people locked up in a room?
13:16:23 https://xkcd.com/1172/
13:16:26 full nodes will be returning txs unpruned, which isn't great, but yeah, it is probably best for now
13:16:50 oh, now that we are complaining about RPC: send_raw_transaction sends a transaction. It has a parameter "do_not_relay"; if set to true, it prevents relaying
13:16:57 ... unless the node is bootstrapping
13:16:58 But again, only for as long as `get_blocks_by_height.bin` silently omits pruned TXs.
13:17:06 then it instantly sends it out regardless of do_not_relay
13:17:50 someone needs to rewrite the RPC in Rust
13:18:08 /s
13:19:11 but with only the exact same RPC behavior.
13:19:34 No, someone needs to rewrite the RPC in Rust but with the exact same mindset as the developer of epee
13:19:46 A whole new undocumented codebase with its own collection of bugs
13:20:06 and easter eggs
13:20:07 I'm trying to decode non-coinbase transactions now ... yeah I feel that
13:20:18 gRPC is terrible, JSON is slow and unstructured... clearly, the solution would be XML-RPC
13:20:39 nah, an IRC network that clients connect to
13:20:45 distributed across channels
13:20:51 Discord bot RPC when?
13:20:55 What would be nice is an automated version of the RPC in binary. Currently it's a manual dupe.
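For context on the send_raw_transaction complaint above, a minimal sketch of the call with `do_not_relay` set, assuming a local monerod; the tx hex is a placeholder, not a real transaction:

```python
import json
import urllib.request

NODE = "http://127.0.0.1:18081"  # assumed local monerod; adjust as needed

# Placeholder: the full serialized transaction as hex (hypothetical value).
tx_blob = "0200010200..."

body = json.dumps({"tx_as_hex": tx_blob, "do_not_relay": True}).encode()
req = urllib.request.Request(NODE + "/send_raw_transaction", data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

# do_not_relay=true is supposed to keep the tx local; the quirk reported in
# the chat is that a node running with a bootstrap daemon relays it anyway.
print(result["status"], result.get("not_relayed"))
```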
13:21:11 Discord bot RPC over Firebase
13:21:22 epic
13:21:30 like, decoding that for what I wanted, the outputs, was so convoluted
13:21:36 the unrestricted endpoints would only be accessible through Twitch chat
13:21:38 I heavily considered parsing it *backwards*
13:21:44 and taking just the outputs out lol
13:21:57 Twitch chat is IRC
13:23:06 fair
13:23:17 Maybe I should implement a struct-annotation epee marshaler/unmarshaler for my Go stuff ...
13:23:41 I wrote one for SWF files, including bitmaps/dynamic checks; it can't be THAT much worse
13:25:45 :D https://crates.io/crates/monero-epee/0.1.0 > The best specification is available here.
13:25:51 (the link 404s)
13:33:50 i feel the problem with the RPC API and co. is that it's designed for usage where you shouldn't be trusting the node, because it may be malicious
13:34:17 I think the issue is there was no design
13:34:28 just bodge after bodge
13:35:14 i mean, that's also part of it
13:41:46 Hélène: The monero-oxide interface API clearly documents its behavior, and we do provide sanity/consistency checks over the data from the RPC.
13:42:12 Additionally, we provide (but don't implement) an API which allows you to select decoys entirely via a local database, without using a Monero daemon.
13:43:07 I've done my best to be very thorough and clear in what the inputs and outputs are
13:46:40 Is there anyone who is asking for xmr and onion developers?
13:46:50 😁
13:48:50 well, it makes sense that its design would be reasonable, considering you have examples of real consumers (Serai), whereas monerod didn't and just copied Bitcoin Core :P
14:26:49 Eh. The designs aren't directly related.
14:27:57 Serai can use a trusted full node if it wants
14:43:00 DataHoarder: Ugh, did I get the link wrong? It should be to jeffro256's Rust epee lib, which has a markdown file detailing epee
14:43:15 commit probably gone?
14:43:24 links to https://github.com/jeffro256/serde_epee/tree/cbebe75475fb2c6073f7b2e058c88ceb2531de17PORTABLE_STORAGE.md
14:43:36 oh
14:43:41 missing slash :)
14:43:54 FWIW, the code in that crate is now https://github.com/monero-oxide/monero-oxide/pull/90
14:43:55 yeah, added the slash, works now
14:44:24 It's been further improved. It now supports full traversal of the object while maintaining a lack of allocation.
14:44:53 I want to do the same for JSON as well but boog900 won't let me >:(
14:45:11 - wants to build a monero node
14:45:11 - doesn't want to rewrite basic functionality from scratch
14:45:13 Make it make sense.
14:45:14 3 KiB stack for the intermediate state?
14:45:27 Now 1 KB.
14:45:59 It was only 3 KB when I supported non-single-pass traversal, allowing you to enter _and exit_ objects.
14:46:07 Now you can only enter and restart.
14:46:22 yeah, to reduce heap allocs I went as far as removing some io.Reader interfaces in Go stuff (or bringing their sha3 hasher out of tree to prevent it from using interfaces)
14:46:53 now I can do the full p2pool consensus / verification on the stack
14:47:30 It's a neat lib, even if just for the circlejerk of not allocating while offering a key-value API.
15:26:04 I'm not sure pipelining would help lws much, but making zmq msgpack _finally_ would. Or else lws could switch from zmq to http binary I guess
15:27:33 The problem with the array would be that the responses over http would have to be sent in one shot. That array mode is probably better for a custom TCP protocol instead of http 1.1
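The "write '[', then each response and the necessary commas, then ']'" idea from 13:00:39 could look roughly like this on the server side; a toy sketch, not monerod's code, with a stand-in handler in place of the real JSON-RPC dispatch:

```python
import json
from typing import Callable, Iterable, Iterator

def stream_batch(requests: Iterable[dict],
                 handle: Callable[[dict], dict]) -> Iterator[bytes]:
    """Yield chunks of a JSON array of responses, one request at a time.

    The whole batch is parsed up front, but each response goes out as soon
    as it is produced instead of buffering the entire array.
    """
    yield b"["
    first = True
    for req in requests:
        if not first:
            yield b","
        first = False
        yield json.dumps(handle(req)).encode()
    yield b"]"

# Toy handler standing in for the real JSON-RPC dispatch.
def handle(req: dict) -> dict:
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": {"ok": True}}

batch = [{"jsonrpc": "2.0", "id": i, "method": "get_block_count"} for i in range(3)]
print(b"".join(stream_batch(batch, handle)).decode())
```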
15:28:08 hi
15:28:36 the CI tests of my DoT PR seem to fail, specifically the ones that resolve stuff over DNS
15:28:48 well, Quad9's regular DNS server
15:29:03 but i ran the unit tests on my computer, and they all passed
15:36:26 I mean, is there a real distinction between a bunch of small objects and one large object? The RPC server puts both in memory before finally sending it down the wire, vtnerd
16:14:10 I'm not sure how that relates to my comments. The pipelining thing should work better with a custom TCP protocol because you can send responses as they are completed
16:19:42 You said the problem would be. I'm saying I'm unsure it's a problem, as it wouldn't be _worse_, right?
16:19:49 Or did you mean the problem preventing it from benefiting lws?
16:32:04 Cindy: don't hardcode Quad9, you might have better luck with FDN's DNS resolver (also available over DoT) or Wikimedia's
16:32:46 the CI hardcodes Quad9
16:32:50 over regular DNS
16:32:56 that's what i'm talking about, and it's failing my PR
17:20:40 Cindy: we can change it to 1.1.1.1
17:20:58 you can do it in your PR, but ideally as a separate commit
17:22:40 i'll change the DNS_PUBLIC env var in the CI workflow to 1.1.1.1
17:22:50 and see if that passes or fails the DNS resolver tests
17:38:08 also btw, is there a global setting in monerod or the wallet programs to store the option to use DoT?
17:38:25 i don't want to use the environment variables because they seem way less flexible
17:43:21 monerod does not really have global settings
17:43:32 you can use a config file
17:44:17 i just mean arguments
17:44:21 command arguments
18:36:41 jeffro256: please also open 10123 against release
18:47:19 [@jeffro256:monero.social](https://matrix.to/#/@jeffro256:monero.social) i caught that a few weeks ago, but could only get it to segfault on one device. I typed up the issue but never submitted it. Glad to see the segfault could be reproduced
18:50:21 i switched to 1.1.1.1 (over TCP) and the CI still failed
18:51:03 i don't actually know why it can resolve domains on my machine, but not on the CI
18:51:14 .merges
18:51:14 Merge queue empty
19:30:42 @selsta: the bug fixed in 10123 isn't present in the release branch because 8703 wasn't PR'ed to the release branch
19:32:40 Adding multiple requests wouldn't be worse, but may have minimal gains when used over http 1.1.
20:23:43 Fair. For me, it's about minimizing latency by making one trip, not one hundred, without making multiple connections.
22:16:12 it turns out that the TLS CA certificate bundle that was hardcoded in the code
22:16:33 the path to the bundle (/etc/ssl/cert.pem) didn't exist in the CI's filesystem
22:16:42 so unbound couldn't initialize properly, which caused the CI to fail
22:16:58 so i had to switch it out for the path given by OpenSSL's X509_get_default_cert_file
22:21:59 > the path to the bundle (/etc/ssl/cert.pem) didn't exist in the CI's filesystem
22:22:01 won't this also not exist on, say, Windows?
22:38:41 true
22:38:51 the hardcoded path was a dumb mistake
22:38:55 i should have used OpenSSL's functions
22:39:02 and not guessed for ONE platform
23:07:55 If you've got that much data to transfer, typically it's better to overlap requests with local processing instead of pipelining. That's the typical case for monero - request the next set of blocks while doing work on the current set. There may be other cases where pipelining is more beneficial, but those seem to be less prevalent
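The overlap pattern described at 23:07:55 (download the next batch while working on the current one) in a minimal Python sketch; `fetch` and `process` are stand-ins with sleeps in place of real RPC calls and real verification work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(batch):
    """Stand-in for an RPC call that downloads one batch of blocks."""
    time.sleep(0.2)          # simulated network round trip
    return [f"block {h}" for h in batch]

def process(blocks):
    """Stand-in for local work (verification, scanning, ...)."""
    time.sleep(0.2)

batches = [range(i, i + 100) for i in range(0, 1000, 100)]

# While batch N is being processed, batch N+1 is already in flight.
with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(fetch, batches[0])
    for nxt in batches[1:]:
        blocks = pending.result()          # wait for the in-flight download
        pending = pool.submit(fetch, nxt)  # start the next one immediately
        process(blocks)                    # work while it downloads
    process(pending.result())
```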
23:09:06 If http 1.1 allowed real pipelining, then this changes because you could request multiple blocks (but you kind of need them in order, so it's still tricky)
23:49:58 While an app SHOULD avoid observing this latency by pipelining, that doesn't change my point that this amount of latency is unnecessary and I can do better by not having to make sequential HTTP requests.
23:51:52 But I heard you're unconvinced and uninterested :)
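For reference, the wire format a JSON-RPC 2.0 batch would use if monerod accepted it (it currently does not; that's what issue 10118 asks for): one POST carrying an array of request objects, with responses matched back by `id` and allowed to arrive in any order. A sketch assuming a local node and the standard `get_block_header_by_height` method:

```python
import json
import urllib.request

NODE = "http://127.0.0.1:18081"  # assumed local monerod; adjust as needed

# One HTTP round trip for one hundred headers, per the JSON-RPC 2.0 spec.
batch = [
    {"jsonrpc": "2.0", "id": i, "method": "get_block_header_by_height",
     "params": {"height": h}}
    for i, h in enumerate(range(3_000_000, 3_000_100))
]

req = urllib.request.Request(NODE + "/json_rpc", data=json.dumps(batch).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    # The spec allows unordered responses, so index them by id.
    responses = {r["id"]: r for r in json.loads(resp.read())}
```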