-
m-relay
<kayabanerve:matrix.org> vtnerd: Do you have an opinion on
monero-project/monero #10118? I know you've spent time with epee's encoder and may also benefit from this feature.
-
DataHoarder
huh... batch JSON-RPC, hmmmm
-
DataHoarder
how would that interact with restricted limits?
-
sech1
I think it's better to introduce new RPCs for the use cases where people would need batch requests. Batch requests would probably fetch much more data than needed in each case, and put more load on the server
-
DataHoarder
-
DataHoarder
* which undocumented can take hashes/heights
-
DataHoarder
-
DataHoarder
which has hash or hashes
-
DataHoarder
to select one or multiple blocks at once
-
DataHoarder
hashes was not documented but I was using it in my library, until it got documented recently
-
DataHoarder
quite certain other methods have similar batch requests
-
m-relay
<kayabanerve:matrix.org> DataHoarder: `get_block` can already take multiple items?
-
DataHoarder
* get_block_header_by_hash
-
m-relay
<kayabanerve:matrix.org> That solves why I was asking for batch requests.
-
DataHoarder
lemme see get_block
-
m-relay
<kayabanerve:matrix.org> You first specified `get_block`
-
DataHoarder
bad copy paste
-
DataHoarder
that's why *
-
m-relay
<kayabanerve:matrix.org> sech1: I mean, we're claiming to offer a JSON-RPC 2.0 server when we don't :/ We also still have a response size limit.
-
DataHoarder
it can only get a single hash or height
-
m-relay
<boog900:monero.social> it's not even JSON 😭
-
DataHoarder
block headers can be fetched by hashes list (and returns a list)
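For reference, a minimal Go sketch of that batched header lookup: the get_block_header_by_hash JSON-RPC method with the (only recently documented) `hashes` list. Field names follow the public docs; the hashes and the localhost port are placeholders.

```go
// Sketch of a batched header fetch via get_block_header_by_hash's "hashes" list.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	req := map[string]any{
		"jsonrpc": "2.0",
		"id":      "0",
		"method":  "get_block_header_by_hash",
		"params": map[string]any{
			// Several block hashes in one call instead of one request per header.
			"hashes": []string{
				"<block hash 1>", // replace with real 64-hex-char block hashes
				"<block hash 2>",
			},
		},
	}
	body, _ := json.Marshal(req)
	resp, err := http.Post("http://127.0.0.1:18081/json_rpc", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Result struct {
			BlockHeaders []struct {
				Height      uint64 `json:"height"`
				Hash        string `json:"hash"`
				MinerTxHash string `json:"miner_tx_hash"`
				NumTxes     uint64 `json:"num_txes"`
			} `json:"block_headers"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	for _, h := range out.Result.BlockHeaders {
		fmt.Println(h.Height, h.Hash, h.MinerTxHash, h.NumTxes)
	}
}
```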
-
m-relay
<kayabanerve:matrix.org> Because the parser won't decode some valid JSON, because we sometimes inline weird binary blobs, or for yet more mysterious reasons boog900?
-
m-relay
<kayabanerve:matrix.org> DataHoarder: I need the transaction hashes but not the transactions.
-
DataHoarder
that's on get_block indeed
-
DataHoarder
get_block doesn't contain the transactions, just hashes
-
m-relay
<boog900:monero.social> the binary blobs, atm it's a custom format that _can_ be used as JSON most of the time
-
DataHoarder
block header only has miner tx id + tx count :(
-
DataHoarder
though you can probably reuse the same HTTP/TCP connection to send the multiple queries after each finishes
-
m-relay
<boog900:monero.social> although it is planned to be fixed at some point
-
DataHoarder
get_block.bin tbh does it for you here
-
m-relay
<syntheticbird:monero.social> fun fact: `"--"` is a valid number for monerod's JSON parser (it's zero)
-
DataHoarder
14:53:45 <m-relay> <kayabanerve:matrix.org> That solves why I was asking for batch requests.
-
DataHoarder
so if requesting a list of JSON-RPC requests, these are processed sequentially right?
-
DataHoarder
It'd be the same as not closing the TCP/HTTP connection and issuing a couple in sequence; ofc, you take the roundtrip latency
-
m-relay
<kayabanerve:matrix.org> The JSON-RPC 2.0 request allows handling them in an unordered fashion with unordered responses.
-
m-relay
<kayabanerve:matrix.org> The simplest implementation for Monero would be to accept the entire request, write `[` as a response, then handle them sequentially, writing each individual response and the necessary commas, before finally writing `]`
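A Go sketch of that shape (not monerod's actual code, which is C++/epee): decode the whole JSON-RPC 2.0 batch, then stream `[`, each response as it completes, the separating commas, and a closing `]`. The per-method dispatch stub is hypothetical.

```go
// Streaming batch handler sketch: responses are written as they are produced,
// so the full batch never has to be assembled as one contiguous object.
package main

import (
	"encoding/json"
	"net/http"
)

type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      json.RawMessage `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
}

// handleOne is a stand-in for the existing per-method dispatch.
func handleOne(req rpcRequest) any {
	return map[string]any{"jsonrpc": "2.0", "id": req.ID, "result": map[string]any{"status": "OK"}}
}

func batchHandler(w http.ResponseWriter, r *http.Request) {
	// Accept the entire request up front, as described above.
	var batch []rpcRequest
	if err := json.NewDecoder(r.Body).Decode(&batch); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte("["))
	enc := json.NewEncoder(w)
	for i, req := range batch {
		if i > 0 {
			w.Write([]byte(","))
		}
		// Each response is handled sequentially and flushed into the array.
		enc.Encode(handleOne(req))
	}
	w.Write([]byte("]"))
}

func main() {
	http.HandleFunc("/json_rpc", batchHandler)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```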
-
DataHoarder
alternatively do you just want the *transaction ids* ?
-
DataHoarder
and tie to block height
-
m-relay
<kayabanerve:matrix.org> I want the block.
-
m-relay
<kayabanerve:matrix.org> The block is the block header and the list of transactions within the block.
-
m-relay
<kayabanerve:matrix.org> If you tell me I have a way to fetch the transaction IDs alone, I'm curious, but it alone is not a sufficient solution for me.
-
DataHoarder
yeah, I was mostly referring to > 14:56:04 <m-relay> <kayabanerve:matrix.org> DataHoarder: I need the transaction hashes but not the transactions.
-
DataHoarder
for that second ...
docs.getmonero.org/rpc-library/monerod-rpc/#get_outs and request every 2 global output indices
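Roughly, that trick as a Go sketch against the (non-JSON-RPC) /get_outs endpoint with `get_txid` enabled; the index range is a placeholder you'd derive from the chain, and field names follow the linked docs.

```go
// Sample every 2nd global output index and map it back to a txid via /get_outs.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type outRef struct {
	Amount uint64 `json:"amount"` // 0 for RingCT outputs
	Index  uint64 `json:"index"`  // global output index
}

func main() {
	var outputs []outRef
	for idx := uint64(100000000); idx < 100000100; idx += 2 { // placeholder range
		outputs = append(outputs, outRef{Amount: 0, Index: idx})
	}
	body, _ := json.Marshal(map[string]any{"outputs": outputs, "get_txid": true})
	resp, err := http.Post("http://127.0.0.1:18081/get_outs", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Outs []struct {
			Height uint64 `json:"height"`
			Txid   string `json:"txid"`
		} `json:"outs"`
		Status string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	seen := map[string]bool{}
	for _, o := range out.Outs {
		if !seen[o.Txid] { // stepping by 2 still yields the occasional duplicate txid
			seen[o.Txid] = true
			fmt.Println(o.Height, o.Txid)
		}
	}
}
```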
-
m-relay
<kayabanerve:matrix.org> ...
-
DataHoarder
you'll hit some dupes but given at minimum you need 2 outputs + coinbase id :)
-
DataHoarder
and allows multiple!
-
m-relay
<kayabanerve:matrix.org> I hate how reasinable that is in the current environment.
-
DataHoarder
(and gets txid)
-
m-relay
<kayabanerve:matrix.org> *reasonable
-
DataHoarder
you get coinbase id on block headers
-
DataHoarder
which also allow ranges
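For completeness, a compact sketch of the range variant, get_block_headers_range, which returns the coinbase (miner) tx hash and tx count per height; the heights are placeholders and field names follow the public docs.

```go
// Fetch a range of block headers, including each block's coinbase tx hash.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"jsonrpc": "2.0", "id": "0", "method": "get_block_headers_range",
		"params": map[string]uint64{"start_height": 3000000, "end_height": 3000009},
	})
	resp, err := http.Post("http://127.0.0.1:18081/json_rpc", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Result struct {
			Headers []struct {
				Height      uint64 `json:"height"`
				MinerTxHash string `json:"miner_tx_hash"`
				NumTxes     uint64 `json:"num_txes"`
			} `json:"headers"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	for _, h := range out.Result.Headers {
		fmt.Println(h.Height, h.MinerTxHash, h.NumTxes)
	}
}
```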
-
m-relay
<kayabanerve:matrix.org> Right, but that will just be the outputs, not even the transactions.
-
DataHoarder
-
DataHoarder
that gets you the indices of each coinbase's outputs... then get_outs for the missing indices :D
-
DataHoarder
each output says the txid, not the transaction yeah
-
DataHoarder
+ height
-
DataHoarder
get_blocks_by_height.bin is probably the most reasonable method here
-
DataHoarder
but it's not JSON-RPC
-
m-relay
<kayabanerve:matrix.org> Yes, I have two separate goals:
-
m-relay
<kayabanerve:matrix.org> 1) Efficiently fetch blocks, defined as the block header and list of transactions within it
-
m-relay
<kayabanerve:matrix.org> 2) Efficiently fetch 'scannable blocks' (block, transactions, output indexes)
-
m-relay
<kayabanerve:matrix.org> The issue with `get_blocks.bin` is it doesn't return the `prunable_hash`. `get_blocks_by_height.bin` doesn't support pruning at all and won't work with a pruned node.
-
m-relay
<boog900:monero.social>
monero-project/monero #10121 :D
-
DataHoarder
return prunable_hash?
-
DataHoarder
for the transaction?
-
m-relay
<kayabanerve:matrix.org> So I have to use `get_blocks.bin`, then re-request all transactions for their prunable hashes (pointless)
-
DataHoarder
get_block also just returns the in-wire tx ids
-
DataHoarder
which don't have the prunable hash afaik?
-
m-relay
<kayabanerve:matrix.org> OR I have to call `get_block` (JSON-RPC) and then `/get_transactions` (JSON but not JSON-RPC)
-
DataHoarder
yeah, the only way that I know to get a transaction's other hashes (prune-related stuff) is via get_transactions
-
m-relay
<kayabanerve:matrix.org> Yes, but I can use `/get_blocks.bin` and `/get_transactions` for the prunable hashes, downloading every transaction twice, or `get_block` and `/get_transactions`, downloading every transaction once BUT suffering from non-batch fetching of the blocks.
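A minimal sketch of the /get_transactions half of that trade-off, requesting the split form to read `prunable_hash`; the tx hash is a placeholder, and exactly which of `prune`/`split` populates `prunable_hash` is worth verifying against your daemon version.

```go
// Fetch pruned transaction data plus prunable_hash via /get_transactions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"txs_hashes": []string{"<tx hash>"}, // replace with real tx hashes
		"split":      true,                  // ask for pruned + prunable parts separately
	})
	resp, err := http.Post("http://127.0.0.1:18081/get_transactions", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Txs []struct {
			TxHash       string `json:"tx_hash"`
			PrunedAsHex  string `json:"pruned_as_hex"`
			PrunableHash string `json:"prunable_hash"`
			BlockHeight  uint64 `json:"block_height"`
		} `json:"txs"`
		Status string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	for _, tx := range out.Txs {
		fmt.Println(tx.TxHash, tx.PrunableHash, tx.BlockHeight)
	}
}
```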
-
DataHoarder
get_block returns the "block" which doesn't include transactions, just a list of them. so it seems you always need to hit get_transactions
-
DataHoarder
does get_blocks.bin also return txs here?
-
m-relay
<kayabanerve:matrix.org> `get_blocks.bin` includes all the transactions, yet sets `prunable_hash` to zeroes
-
DataHoarder
aha! then that's special!
-
DataHoarder
yeah it's the "complete entry"
-
m-relay
<kayabanerve:matrix.org> Hence why the discussion is `/get_blocks.bin` vs `get_block` + `/get_transactions`
-
DataHoarder
nicely timed issue :)
-
m-relay
<kayabanerve:matrix.org> complete*
-
m-relay
<kayabanerve:matrix.org> *prunable hash may or may not be included
-
DataHoarder
whatever the person who wrote that believes to be complete :)
-
m-relay
<kayabanerve:matrix.org> It is EXCEPT the RPC explicitly sets it to zero.
-
m-relay
<kayabanerve:matrix.org> The DB code fetches it without issue AFAICT.
-
m-relay
<kayabanerve:matrix.org> As for the timing of the issues, I've been ranting to boog900 about my rewrite of monero-oxide's RPC for a few days now.
-
m-relay
<boog900:monero.social> it's been a real journey
-
m-relay
<kayabanerve:matrix.org> It turns out writing a Monero RPC client is extremely difficult/impossible to do performantly unless you don't verify pruned transactions and shove yourself into the `get_blocks.bin` API, which is really a blockchain synchronization manager, not a way to fetch blocks?
-
m-relay
<kayabanerve:matrix.org> If you request two historical blocks with monero-oxide's RPC, we will download 100 MB because Monero will give us a 100 MB response.
-
DataHoarder
^ aye on that. for everything that I do with large sets of blocks I just end up with a local block id -> get_block cache
-
m-relay
<kayabanerve:matrix.org>
monero-project/monero #9901 is the issue for that
-
DataHoarder
usually my problem is getting block ids efficiently as they depend on each other
-
m-relay
<boog900:monero.social> JSON (too slow) -> `get_blocks_by_height.bin` (no pruning) -> `get_blocks.bin` (no prunable hash) -> now 🥳
-
m-relay
<syntheticbird:monero.social> how about we take you three, lock you up in a room for a week so that you can come up with a new core RPC layout?
-
m-relay
<syntheticbird:monero.social> (unironically)
-
m-relay
<kayabanerve:matrix.org> It's been merged for almost a year, it just still hasn't been included in a release. It'll finally be in the next one, however
-
m-relay
<kayabanerve:matrix.org> boog900: And we're back to JSON.
-
DataHoarder
why not get_blocks_by_height.bin? that allows specific heights
-
DataHoarder
you need to handle the case that heights change
-
m-relay
<kayabanerve:matrix.org> Because it requires a full node and I want to support pruned nodes.
-
DataHoarder
but allows an efficient request of desired heights
-
DataHoarder
right, that doesn't run properly with pruning
-
DataHoarder
I think the lack of JSON-RPC batching (even in unrestricted mode) is just a "fix" for everything else that is wrong
-
m-relay
<kayabanerve:matrix.org> Actually, using `get_blocks_by_height.bin` may be optimal boog900. If we have a full node, then we get the full transactions which is needed as `get_blocks.bin` omits `prunable_hash`. If we don't have a full node, they'll be silently omitted and we won't download them twice. We can then use `/gettransactions` for pruned transactions.
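A sketch of that strategy only, with hypothetical helpers standing in for the epee-encoded endpoints (Go's stdlib can't speak the .bin format, so the stubs below just mark where those calls would go).

```go
// Strategy sketch: prefer get_blocks_by_height.bin, fall back to /gettransactions
// for whatever a pruned node silently omitted.
package main

// blockEntry is a hypothetical, simplified view of one entry returned by
// /get_blocks_by_height.bin: the serialized block plus whatever tx blobs the node still has.
type blockEntry struct {
	Block []byte
	Txs   [][]byte
}

// Hypothetical wrappers; real implementations need an epee (portable storage) codec.
func getBlocksByHeightBin(heights []uint64) ([]blockEntry, error) { panic("stub") }
func txHashesOf(block []byte) []string                            { panic("stub") }
func getTransactionsPruned(hashes []string) error                 { panic("stub") }

func fetchBlocks(heights []uint64) error {
	entries, err := getBlocksByHeightBin(heights)
	if err != nil {
		return err
	}
	var missing []string
	for _, e := range entries {
		hashes := txHashesOf(e.Block)
		if len(e.Txs) == len(hashes) {
			// Full node: complete transactions came back, so prunable hashes
			// can be computed locally and nothing gets downloaded twice.
			continue
		}
		// Pruned node: pruned transactions were silently omitted, so fetch their
		// pruned form (and prunable_hash) exactly once via /gettransactions.
		missing = append(missing, hashes...)
	}
	if len(missing) > 0 {
		return getTransactionsPruned(missing)
	}
	return nil
}

func main() {}
```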
-
DataHoarder
<syntheticbird:monero.social> how about we take you three, lock you up in a room for a week so that you can come up with a new core RPC layout?
-
DataHoarder
just use gRPC and make everyone cry (and extreme bloat)
-
m-relay
<kayabanerve:matrix.org> But it's only optimal so long as the current behavior is codified into its API.
-
DataHoarder
just make sure to handle cases where the block id you fetched is different than expected :)
-
m-relay
<kayabanerve:matrix.org> that's a load bearing bad design
-
m-relay
<helene:unredacted.org> SyntheticBird: are you sure that you don't just wanna see three people locked up in a room?
-
m-relay
<kayabanerve:matrix.org>
xkcd.com/1172
-
m-relay
<boog900:monero.social> full nodes will be returning txs unpruned which isn't great but yeah it is probably best for now
-
DataHoarder
oh now that we are complaining about RPC, send_raw_transaction sends a transaction. It has a parameter "do_not_relay". If set to true it prevents relaying
-
DataHoarder
... unless the node is bootstrapping
-
m-relay
<kayabanerve:matrix.org> But again, only for as long as `get_blocks_by_height.bin` silently omits pruned TXs.
-
DataHoarder
then it instantly sends it out regardless of whether do_not_relay is set
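For reference, the call in question as a minimal sketch: the (non-JSON-RPC) /send_raw_transaction endpoint with `do_not_relay`. Per the caveat above, the flag may not be honored when the node is running off a bootstrap daemon, so don't rely on it. The hex blob is a placeholder.

```go
// Submit a serialized transaction with do_not_relay set.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"tx_as_hex":    "<serialized tx, hex>", // replace with a real signed transaction
		"do_not_relay": true,                   // "don't broadcast" -- but see the bootstrap caveat above
	})
	resp, err := http.Post("http://127.0.0.1:18081/send_raw_transaction", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Status string `json:"status"`
		Reason string `json:"reason"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Status, out.Reason)
}
```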
-
m-relay
<boog900:monero.social> someone needs to rewrite the RPC in Rust
-
m-relay
<boog900:monero.social> /s
-
DataHoarder
but only with the exact same RPC behavior.
-
m-relay
<kayabanerve:matrix.org> No, someone needs to rewrite the RPC in Rust but with the exact same mindset as the developer of epee
-
m-relay
<kayabanerve:matrix.org> A whole new undocumented codebase with its own collection of bugs
-
m-relay
<syntheticbird:monero.social> and easter eggs
-
DataHoarder
I'm trying to decode non-coinbase transactions now ... yeah I feel that
-
m-relay
<helene:unredacted.org> gRPC is terrible, JSON is slow and unstructured... clearly, the solution would be XML-RPC
-
DataHoarder
nah, an IRC network where clients connect to
-
DataHoarder
distributed in channels
-
m-relay
<syntheticbird:monero.social> discord bots rpc when?
-
moneromoooo
What would be nice is an automated version of RPC in binary. Currently it's a manual dupe.
-
m-relay
<helene:unredacted.org> discord bot RPC over firebase
-
m-relay
<syntheticbird:monero.social> epic
-
DataHoarder
like decoding that for what I wanted, the outputs, was so convoluted
-
m-relay
<syntheticbird:monero.social> the unrestricted endpoints would be only accessible through twitch chat
-
DataHoarder
I heavily considered parsing it *backwards*
-
DataHoarder
and taking just the outputs out lol
-
DataHoarder
twitch chat is IRC
-
m-relay
<syntheticbird:monero.social> fair
-
DataHoarder
Maybe I should implement a struct annotation epee marshaler/unmarshaler for my go stuff ...
-
DataHoarder
I wrote one for SWF files including bitmaps/dynamic checks, it can't be THAT much worse
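As a rough idea of what that could look like in Go (the `epee:"..."` tag and the field walker are hypothetical; the part that would actually emit epee portable-storage bytes is omitted entirely):

```go
// Skeleton of a struct-tag driven epee marshaler: walk the tagged fields via
// reflection; a real marshaler would write section headers and typed entries
// where this sketch only prints.
package main

import (
	"fmt"
	"reflect"
)

type getBlocksRequest struct {
	BlockIDs    [][32]byte `epee:"block_ids"`
	StartHeight uint64     `epee:"start_height"`
	Prune       bool       `epee:"prune"`
}

func walkFields(v any) {
	t := reflect.TypeOf(v)
	rv := reflect.ValueOf(v)
	for i := 0; i < t.NumField(); i++ {
		name, ok := t.Field(i).Tag.Lookup("epee")
		if !ok {
			continue
		}
		fmt.Printf("%s (%s) = %v\n", name, t.Field(i).Type, rv.Field(i).Interface())
	}
}

func main() {
	walkFields(getBlocksRequest{StartHeight: 3000000, Prune: true})
}
```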
-
DataHoarder
:D
crates.io/crates/monero-epee/0.1.0 > The best specification is available here.
-
DataHoarder
(the link 404s)
-
m-relay
<helene:unredacted.org> i feel the problem with the RPC api and co. is that it's designed for usages where you shouldn't be trusting the node, because it may be malicious
-
m-relay
<boog900:monero.social> I think the issue is there was no design
-
m-relay
<boog900:monero.social> just bodge after bodge
-
m-relay
<helene:unredacted.org> i mean, that's also part of it
-
m-relay
<kayabanerve:matrix.org> Hélène: The monero-oxide interface API clearly documents its behavior, and we do provide sanity/consistency checks over the data from the RPC.
-
m-relay
<kayabanerve:matrix.org> Additionally, we provide (but don't implement) an API which allows you to select decoys entirely via a local database, without using a Monero daemon.
-
m-relay
<kayabanerve:matrix.org> I've done my best to be very thorough and clear in what the inputs and outputs are
-
m-relay
<peteryangtime:matrix.org> Is there anyone who is asking for xmr and onion developers?
-
m-relay
<peteryangtime:matrix.org> 😁
-
m-relay
<helene:unredacted.org> well, it makes sense that its design would be reasonable, considering you have examples of real consumers (Serai), whereas monerod didn't and just copied bitcoin core :P
-
m-relay
<kayabanerve:matrix.org> Eh. The designs aren't directly related.
-
m-relay
<kayabanerve:matrix.org> Serai can use a trusted full node if it wants
-
m-relay
<kayabanerve:matrix.org> DataHoarder: Ugh, did I get the link wrong? It should be to jeffro256's Rust epee lib which has a markdown file detailing epee
-
DataHoarder
commit probably gone?
-
DataHoarder
-
DataHoarder
oh
-
DataHoarder
missing slash :)
-
m-relay
<kayabanerve:matrix.org> FWIW, the code in that crate there is now
monero-oxide/monero-oxide #90
-
DataHoarder
yeah, added slash, works now
-
m-relay
<kayabanerve:matrix.org> It's been further improved. It now supports full traversal of the object while still never allocating.
-
m-relay
<kayabanerve:matrix.org> I want to do the same for JSON as well but boog900 won't let me >:(
-
m-relay
<kayabanerve:matrix.org> - wants to build a monero node
-
m-relay
<kayabanerve:matrix.org> - doesn't want to rewrite basic functionality from scratch
-
m-relay
<kayabanerve:matrix.org> Make it make sense.
-
DataHoarder
3 KiB stack for the intermediate state?
-
m-relay
<kayabanerve:matrix.org> Now 1 KB.
-
m-relay
<kayabanerve:matrix.org> It was only 3 KB when I supported non-single-pass traversal, allowing you to enter _and exit_ objects.
-
m-relay
<kayabanerve:matrix.org> Now you can only enter and restart.
-
DataHoarder
yeah, to reduce heap allocs I went as far as removing some io.Reader interfaces in my Go stuff (or bringing their sha3 hasher out of tree to prevent it from using interfaces)
-
DataHoarder
now I can do the full p2pool consensus / verification on stack
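A small, compiler-dependent illustration of that point: passing a value through an interface to a non-inlined function typically forces it to escape to the heap, while the concrete-typed version stays on the stack. Exact counts depend on the Go toolchain, but current gc compilers usually print 1 then 0 here.

```go
// Interface vs concrete parameter: measure allocations per call with AllocsPerRun.
package main

import (
	"fmt"
	"io"
	"testing"
)

type counter struct{ n int }

func (c *counter) Write(p []byte) (int, error) { c.n += len(p); return len(p), nil }

//go:noinline
func writeIface(w io.Writer, p []byte) { w.Write(p) }

//go:noinline
func writeConcrete(c *counter, p []byte) { c.Write(p) }

func main() {
	p := []byte("p2pool share")
	fmt.Println(testing.AllocsPerRun(1000, func() {
		var c counter
		writeIface(&c, p) // &c typically escapes: the callee only sees an io.Writer
	}))
	fmt.Println(testing.AllocsPerRun(1000, func() {
		var c counter
		writeConcrete(&c, p) // stays on the stack: typically no allocation
	}))
}
```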
-
m-relay
<kayabanerve:matrix.org> It's a neat lib, even if just for the circlejerk of not allocating while offering a key-value API.
-
m-relay
<vtnerd:monero.social> I'm not sure pipelining would help lws much, but making zmq msgpack _finally_ would. Or else lws could switch from zmq to http binary I guess
-
m-relay
<vtnerd:monero.social> The problem with the array would be the responses over http would have to be sent in one shot. That array mode is probably better for a custom TCP protocol instead of http 1.1
-
Cindy
hi
-
Cindy
the CI tests of my DoT PR seem to fail, specifically the ones that resolve stuff over DNS
-
Cindy
well, Quad9's regular DNS server
-
Cindy
but i tested the unit tests on my computer, and they all passed
-
m-relay
<kayabanerve:matrix.org> I mean, is there a real distinction between a bunch of small objects and one large object? The RPC server puts both in memory before finally sending it down the wire vtnerd
-
m-relay
<vtnerd:monero.social> I’m not sure how that relates to my comments. The pipelining thing should work better with a custom tcp protocol because you can send responses as they are completed
-
m-relay
<kayabanerve:matrix.org> You said the problem would be. I'm saying I'm unsure it's a problem as it wouldn't be _worse_, right?
-
m-relay
<kayabanerve:matrix.org> Or did you mean the problem preventing it from benefiting lws?
-
m-relay
<helene:unredacted.org> Cindy: don't hardcode quad9, you might have better luck with FDN's DNS resolver (also available over DoT) or Wikimedia's
-
Cindy
the CI hardcodes quad9
-
Cindy
over regular DNS
-
Cindy
that's what i'm talking about, and it's failing my PR
-
selsta
Cindy: we can change it to 1.1.1.1
-
selsta
you can do it in your PR, but ideally as a separate commit
-
Cindy
i'll change the DNS_PUBLIC env var in the CI workflow to 1.1.1.1
-
Cindy
and see if that passes or fails the DNS resolver tests
-
Cindy
also btw, is there a global setting in monerod or the wallet programs to store the option to use DoT?
-
Cindy
i don't want to use the environment variables because it seems way less flexible
-
selsta
monerod does not really have global settings
-
selsta
you can use a config file
-
Cindy
i just mean arguments
-
Cindy
command arguments
-
selsta
jeffro256: please also open 10123 against release
-
m-relay
<ofrnxmr:xmr.mx> [@jeffro256:monero.social](https://matrix.to/#/@jeffro256:monero.social) i caught that a few weeks ago, but could only get it to segfault on one device. I typed up the issue but never submitted it. Glad to see the segfault could be reproduced
-
Cindy
i switched to 1.1.1.1 (over TCP) and the CI still failed
-
Cindy
i don't actually know why it can resolve domains in my machine, but not on the CI
-
tobtoht_
.merges
-
xmr-pr
Merge queue empty
-
m-relay
<jeffro256:monero.social> @selsta: the bug fixed in 10123 isn't present in the release branch because 8703 wasn't PR'ed to the release branch
-
m-relay
<vtnerd:monero.social> Adding multiple requests wouldn't be worse, but may have minimal gains when used over http 1.1.
-
m-relay
<kayabanerve:matrix.org> Fair. For me, it's about minimizing latency by making one trip, not one hundred, without making multiple connections.
-
Cindy
it turns out the problem was the TLS CA certificate bundle that was hardcoded in the code
-
Cindy
the path to the bundle (/etc/ssl/cert.pem) didn't exist in the CI's filesystem
-
Cindy
so unbound couldn't initialize properly and caused the CI to fail
-
Cindy
so i had to switch it out with a path given by OpenSSL's X509_get_default_cert_file
-
m-relay
<ofrnxmr:xmr.mx> <Cindy> the path to the bundle (/etc/ssl/cert.pem) didn't exist in the CI's filesystem
-
m-relay
<ofrnxmr:xmr.mx> won't this also not exist on, say, Windows?
-
Cindy
true
-
Cindy
the hardcoded path was a dumb mistake
-
Cindy
i should have used OpenSSL's functions
-
Cindy
and not guess for ONE platform
-
m-relay
<vtnerd:monero.social> If you've got that much data to transfer, typically it's better to overlap requests with local processing instead of pipelining. That's the typical case for monero - request next set of blocks while doing work on current set. There may be other cases where pipelining is more beneficial, but those seem to be less prevalent
-
m-relay
<vtnerd:monero.social> If http 1.1 allowed real pipelining, then this changes because you could request multiple blocks (but kind of need them in order, so it's still tricky)
-
m-relay
<kayabanerve:matrix.org> While an app SHOULD avoid observing this latency by pipelining like that, it doesn't change the fact that this amount of latency is unnecessary and I can do better by not having to make sequential HTTP requests.
-
m-relay
<kayabanerve:matrix.org> But heard you're unconvinced and uninterested :)