-
br-m
<drinksomemilk:matrix.org> Do workshares increase the advantage a miner with low network latency has over miners with higher network latency?
-
br-m
<antilt:we2.ee> better reward distribution is nice to have (we have p2pool), but the issue is a non-rational attacker with too much burst hash rate. Mid term someone may rent 51% for longer periods and dev resources should go where needed most.
-
DataHoarder
List of Qubic blocks up to this recent epoch 177
irc.gammaspectra.live/2990a5684cf1c5a5/qubic-blocks-epoch177.csv Total reported: 11209 blocks
-
DataHoarder
Added: block status (CHAIN or ORPHAN), orphan blocks, and their block headers for coinbase verification. These blocks are found by scanning open monero instances for their alt blocks and saving them, to gather any late blocks that other monero nodes may have received.
-
br-m
<monero.arbo:matrix.org> DataHoarder: The Qubic docs I'm looking at say an epoch is seven days but that must be outdated.... how long is an epoch?
-
DataHoarder
one week, correct
-
DataHoarder
the list of blocks is up to and including epoch 177
-
br-m
<monero.arbo:matrix.org> oh okay, was about to say a week only has 5040 blocks
-
br-m
<monero.arbo:matrix.org> thx lol
-
DataHoarder
if you want to filter blocks only made with their epoch 177 key use 42Vt47oLyRT7C1Ch3BbapKFZgs5Hip5m3RrRVT2dbjDT8NWs76gc77NgfZvzXpZnPYGgVZFf79T5TSKWSjFxYWk4A77WGa6
-
DataHoarder
Some orphan blocks for previous epochs keep appearing on some remote monero nodes, so this list may grow over time
-
DataHoarder
orphan block spelunking :)
-
DataHoarder
if you want a specific one for a single epoch I can generate it, or filter by the address for that epoch
-
br-m
<monero.arbo:matrix.org> I was just curious what percent of blocks they were producing over a longer timespan like a week
-
DataHoarder
1460 blocks mined (including orphans) on 177
-
DataHoarder
-
DataHoarder
ruck collated this data, numbers are different as qubic sometimes doesn't roll keys instantly
-
br-m
<monero.arbo:matrix.org> nice ty for the link
-
DataHoarder
the corresponding section on
monero-project/monero #10064 has epoch 176
-
br-m
<rucknium> @monero.arbo:matrix.org: My analysis includes orphaned blocks. That will be more blocks than on the main chain. I am trying to get a rough estimate of hashpower share. Qubic orphaning honest blocks removes them from the main chain. Hashpower share estimates based only on main chain data would be inaccurate
-
DataHoarder
what time range are you using for the table?
-
DataHoarder
as I get 1460 blocks mined including orphans, 1127 without (only main blocks)
-
DataHoarder
important: the range being Wed 12:00 to 12:00 UTC :)
-
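rucknium's point about orphan-aware estimates can be illustrated with the epoch-177 numbers from the chat (1460 Qubic blocks including orphans, 1127 on the main chain, a 5040-block week). Honest orphans are unknown here, so this is only a rough lower-bound sketch, not the actual methodology:

```python
# Rough hashpower-share arithmetic for epoch 177, using numbers from the chat.
# Honest (non-Qubic) orphans are unknown, so total_attempts undercounts work.

qubic_all = 1460          # Qubic blocks incl. orphans
qubic_main = 1127         # Qubic blocks on the main chain
week_main_blocks = 5040   # 7 days * 720 blocks/day

qubic_orphans = qubic_all - qubic_main              # 333
total_attempts = week_main_blocks + qubic_orphans   # ignores honest orphans

share_naive = qubic_main / week_main_blocks         # main-chain-only estimate
share_orphan_aware = qubic_all / total_attempts     # counts orphaned work too

assert qubic_orphans == 333
assert round(share_naive, 3) == 0.224
assert round(share_orphan_aware, 3) == 0.272
```

The gap between the two estimates is exactly why main-chain-only data is misleading once an attacker orphans honest blocks.
-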
br-m
<rucknium> DataHoarder: I merge it with alt chain data from my node. I think some are lost in that step.
-
DataHoarder
ah yeah, that'd be correct. some of these orphan blocks were on very obscure remote nodes and pushing these around the network is not trivial
-
br-m
<gingeropolous> so how is it that fluffy blocks slows down qubics blocks?
-
br-m
<gingeropolous> and is there any way to kick it up a notch
-
br-m
<ofrnxmr> Qubics blocks have txs in them that have never been seen before
-
br-m
<gingeropolous> ah so its just verification time?
-
br-m
<ofrnxmr> So they have to broadcast blocks + txs at the same time, and receiving nodes have to verify the txs at the same time they receive the block
-
br-m
<ofrnxmr> With fluffy blocks, the node has already verified the txs (this is my understanding)
-
br-m
<gingeropolous> gotcha gotcha. so just mechanics. there's no designed property of the fluffy block system for a node to delay the broadcast of a block with unknown txs
-
br-m
<ofrnxmr> > A block is made up of a header and transactions. Fluffy Blocks only contain a header, a list of transaction indices, and any transactions that the node receiving the block may be missing. This saves bandwidth because nodes might already know about most or all of the transactions in the block and they don't need to be sent them again.
-
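The fluffy-block mechanics quoted above can be sketched roughly; all names here are illustrative, not monerod's actual structures:

```python
# Illustrative sketch of the fluffy-block idea: the relay sends the header
# plus tx indices, and only the tx bodies the peer is actually missing.

def make_fluffy_block(header: bytes, block_txids: list, peer_pool: set):
    """Return (header, tx indices, missing tx bodies) for one peer."""
    missing = [txid for txid in block_txids if txid not in peer_pool]
    indices = list(range(len(block_txids)))
    return header, indices, missing

block_txids = ["aa", "bb", "cc"]

# A peer that already verified every tx receives no tx bodies at all:
_, _, missing = make_fluffy_block(b"hdr", block_txids, {"aa", "bb", "cc"})
assert missing == []

# Qubic-style blocks full of never-seen txs degrade to a full block,
# and the receiver must verify everything at block-arrival time:
_, _, missing = make_fluffy_block(b"hdr", block_txids, set())
assert missing == ["aa", "bb", "cc"]
```
-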
br-m
<gingeropolous> im sure this has been thought of or discussed before, but could a node intentionally slow down the relay of a block that contains unknown txs? Like, if a node has txpool of 40 txs, and a block comes in that has 0/40 transactions ... the node could delay block propagation by n seconds or something. if it has 10/40 transactions, then 0.9*n, etc
-
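gingeropolous's proposal above, sketched as one possible linear scaling (the 0.9*n example in the chat hints at a gentler curve); `n` is a hypothetical tuning knob, not an existing monerod option:

```python
# Hypothetical relay-delay rule: scale the block-relay delay by the share
# of the block's txs that were NOT already in the local txpool.

def relay_delay(block_txids, pool_txids, n=2.0):
    """Delay in seconds; n is the maximum delay for a fully-unknown block."""
    if not block_txids:            # empty block: nothing to compare against
        return 0.0
    known = sum(1 for t in block_txids if t in pool_txids)
    unknown_share = 1 - known / len(block_txids)
    return n * unknown_share

pool = set(range(40))
assert relay_delay(list(range(40)), pool) == 0.0         # 40/40 known: no delay
assert relay_delay(list(range(100, 140)), pool) == 2.0   # 0/40 known: full delay
# 10/40 known -> 0.75 * n with this linear curve:
assert abs(relay_delay(list(range(10)) + list(range(100, 130)), pool) - 1.5) < 1e-9
```
-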
br-m
<rucknium> Wouldn't that potentially punish including any txs, because miners cannot be sure that each tx has actually properly propagated through nodes' txpools?
-
br-m
<gingeropolous> i think it would depend on tx propagation. and the extreme case of 0/total seems like it would only come from selfish mining
-
br-m
<gingeropolous> though qubic could just publish empty blocks.... and working in a delay empty blocks thing probably opens up more nonsense
-
br-m
<gingeropolous> though an empty block would be 0/total ...
-
DataHoarder
they could broadcast all but the tip
-
DataHoarder
as monero builds up
-
br-m
<gingeropolous> what do you mean.. broadcast blocks? or txs?
-
DataHoarder
blocks and txs
-
br-m
<ofrnxmr:xmr.mx> It also doesnt work in reverse
-
br-m
<ofrnxmr:xmr.mx> Meaning.. how do we know which node's tx pool was the correct one?
-
br-m
<ofrnxmr:xmr.mx> What if the "real" txpool was empty, but qubics txpool had 10txs?
-
br-m
<gingeropolous> then it'd still be 0/total match
-
br-m
<ofrnxmr:xmr.mx> Or vice versa, if the real txpool had 10txs, but my node hasnt seen them yet? delaying the block is essentially delaying the tx relay
-
br-m
<ofrnxmr:xmr.mx> Which doesnt make sense
-
br-m
<gingeropolous> i guess we'd need to gather data from the network
-
br-m
<gingeropolous> but id imagine that most honest nodes see some > 90% of txs in their txpool before they see a block come in that references them
-
br-m
<ofrnxmr:xmr.mx> an example that really annoys me, is that the txpool logic is pretty ugly atm
-
br-m
<ofrnxmr:xmr.mx> I have 4 or 5 nodes running an fcmp testnet. I, repeatedly, have a mismatch between txpools
-
br-m
<gingeropolous> otherwise fluffyblocks wouldn't be magical
-
br-m
<gingeropolous> perhaps run a normal testnet and see if its fcmp specific ?
-
br-m
<ofrnxmr:xmr.mx> its not fcmp specific
-
br-m
<ofrnxmr:xmr.mx> its tx relay logic
-
br-m
<ofrnxmr:xmr.mx> The txpool was 330mb. All nodes except for my mining node had the full 330mb. The mining node decided to pull in like 100txs at a time, and wait like 5mins before requesting more
-
br-m
<ofrnxmr:xmr.mx> The blocks i was mining MISSED a lot of txs, due to no fault of the miner
-
br-m
<gingeropolous> so something else to optimize
-
br-m
<ofrnxmr:xmr.mx> Just because the stupid node doesnt request the full contents of the tx pool,
-
br-m
<ofrnxmr:xmr.mx> Hopefully 0xfffc's txrelayv2 pr fixes this
-
br-m
<gingeropolous> yeah whats the status of that
-
br-m
<ofrnxmr:xmr.mx> updated yesterday
-
br-m
-
br-m
<rucknium> IIRC, it was common to have varying txpool sizes between stressnet nodes last year.
-
br-m
<rucknium> Nodes have a backstop mechanism to re-broadcast txs occasionally if they haven't been mined. In theory, that should get you the same txs in each txpool eventually.
-
br-m
<rucknium> I have occasional wait timers on my spammer to allow txs to propagate. Maybe the RPC connection/activity stops tx broadcast.
-
br-m
<ofrnxmr:xmr.mx> My mining node uses monerod as the miner, and only queries rpc to check the txpool every couple mins
-
br-m
<gingeropolous> oh jeez
-
br-m
<ofrnxmr:xmr.mx> @rucknium: My spammer runs flat out. And in my scenario above, i hit 325mb txpool BECAUSE the mining node wasnt getting the txs, causing me to run out of inputs
-
br-m
<ofrnxmr:xmr.mx> so the non-mining nodes and spammer were idle for over an hour, while i waited for the mining nodes txpool to catch up
-
br-m
<ofrnxmr:xmr.mx> i even restarted the node after a while, thinking it was "stuck". After which, it downloaded like 500mb and uploaded 900 😭
-
br-m
<ofrnxmr:xmr.mx> For a 325mb txpool, it used over 1.5gb to receive it
-
br-m
<ofrnxmr:xmr.mx> Received 586058072 bytes (558.91 MB) in 103435 packets in 54.8 minutes, average 173.96 kB/s = 0.53% of the limit of 32.00 MB/s
-
br-m
<ofrnxmr:xmr.mx> Sent 1199412459 bytes (1.12 GB) in 41065 packets in 54.8 minutes, average 356.02 kB/s = 4.35% of the limit of 8.00 MB/s
-
br-m
<ofrnxmr:xmr.mx> Nvm, that was for the last 60(!!)mb of txs!
-
br-m
<ofrnxmr:xmr.mx> sent 1.12gb to nodes that already had the txs, and downloaded 560mb trying to pull in 60mb. Insanely inefficient
-
br-m
<gingeropolous> hrmmm... monero on shadow with 90 agents running is taking up 30 GB ram .... 1k agents might be a tall order
-
br-m
<ofrnxmr:xmr.mx> monerod and lmdb both need ram, and wallets as well
-
br-m
<ofrnxmr:xmr.mx> I dont know what you mean by agents, but i assume each agent is running a node?
-
DataHoarder
23:22:19 <br-m> <ofrnxmr> i think we should update proactively (as opposed to reactively), maybe rolling blocks 4-10? Updating at every new block
-
DataHoarder
proactively yeah, from some depth from tip as they come
-
br-m
<ofrnxmr:xmr.mx> Yea
-
DataHoarder
you probably want to do 4-10 but keep some before (so nodes with reaaally sluggish DNS still get some old ones)
-
DataHoarder
and maybe do a pin of the last randomx epoch
-
DataHoarder
so that's for deep reorg verification
-
br-m
<ofrnxmr:xmr.mx> the issue to watch for is how expensive the parsing of the records is
-
DataHoarder
it's integer parsing + hex
-
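The "integer parsing + hex" step can be sketched, assuming a MoneroPulse-style `<height>:<hash>` TXT record; the exact wire format should be verified against monerod's sources:

```python
# Sketch of checkpoint TXT record parsing, assuming "<height>:<hash>" records.
# Unparsable records are skipped silently (as the chat notes monerod does).

def parse_checkpoint(record: str):
    height_s, _, hash_s = record.partition(":")
    if not height_s.isdigit() or len(hash_s) != 64:
        return None                      # silently skip unparsable records
    try:
        bytes.fromhex(hash_s)            # hash must be valid hex
    except ValueError:
        return None
    return int(height_s), hash_s.lower()

good = "3456000:" + "ab" * 32
assert parse_checkpoint(good) == (3456000, "ab" * 32)
assert parse_checkpoint("_example.org") is None  # marker record, skipped
```

As DataHoarder says, this is cheap compared to the DNSSEC signature verification (ed25519 or ECDSA plus hashing and base64 decoding) that precedes it.
-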
DataHoarder
I publish a copy of checkpoints on $ dig +dnssec +multi checkpoints.gammaspectra.live TXT
-
DataHoarder
for my own testing
-
br-m
<ofrnxmr:xmr.mx> i mean, were contacting 7+ servers, and comparing their results. I figure more records is more costly, (but maybe inconsequential )
-
DataHoarder
the DNSSEC signature verification, ed25519, is more expensive :)
-
DataHoarder
specially as that's also hashing and doing base64 decode
-
DataHoarder
or ecdsa depending on algo
-
br-m
<ofrnxmr:xmr.mx> so its more a matter of how many servers vs how many records?
-
DataHoarder
keeping it to ~10 "active" checkpoints is a sane number I was testing with
-
DataHoarder
+2 for deep reorg
-
DataHoarder
and any other that want to be set for old purposes like current records have
-
br-m
<ofrnxmr:xmr.mx> i wonder if dns checkpoints help with node sync
-
DataHoarder
technically you would never have to do parsing of old records if you know you have them, but it's probably slower to check these in string form than to parse
-
DataHoarder
static checkpoints do, these get added on the same call but without the pinned difficulty
-
br-m
<ofrnxmr:xmr.mx> @ofrnxmr:xmr.mx: (Shower thoughts)
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: Yeah, i know the static ones do. Not sure abt dns though
-
DataHoarder
they get added on the same "call" minus difficulty, handled internally
-
DataHoarder
I was looking at that code before MRL meeting
-
DataHoarder
for > Checkpoint at given height already exists, and hash for new checkpoint was different!
-
DataHoarder
oh yes.
-
DataHoarder
they definitely do
-
DataHoarder
they also trigger all the other events like bootstrap nodes must be also on the max height checkpoint
-
DataHoarder
and wallet2::fast_refresh
-
br-m
<ofrnxmr:xmr.mx> There was an issue where:
-
br-m
<ofrnxmr:xmr.mx> The checkpointing node was used to create the checkpoint, but was reorged before it checked the dns checkpoint, and then created a new checkpoint that conflicted with the old one
-
DataHoarder
also wallet2::trim_hashchain
-
br-m
<ofrnxmr:xmr.mx> This caused the node to be incapable of creating blocks, since the checkpoints conflicted
-
DataHoarder
that has to be checked by the system issuing checkpoints itself
-
DataHoarder
that the new checkpoint IS in a chain that the previous checkpoint was part of
-
DataHoarder
walk the tree, effectively
-
DataHoarder
you also can take multiple points and aggregate the data (not for issuing checkpoints/voting, but for supplementing blocks to feed back to proper monerod)
-
DataHoarder
if you see alts on other areas, pass them around so everyone has the same available information
-
br-m
<ofrnxmr:xmr.mx> maybe checkpointing nodes should reject reorgs deeper than the checkpoint height altogether, instead of relying on dns themselves
-
DataHoarder
yeah, but they need to be aware of the checkpoints
-
DataHoarder
local RPC to query/verify and set checkpoints on local node on privileged RPC can be very useful
-
DataHoarder
that way you can set, and verify they are active and current chain is on them
-
DataHoarder
it needs to fail safe and be checked in depth; if there is any risk, not issuing anything or bailing out is safer than continuing
-
DataHoarder
even if monero verifies it's on a chain on checkpoints, the system that places the checkpoints should also walk the tree and verify
-
DataHoarder
must*
-
br-m
<ofrnxmr:xmr.mx> hmmm. On second thought, if your node is reorged and invalidates the checkpoints that your dns just set, the BFT should fix your node (since your node would get updated from the consensus of other nodes), and then your dns should update your records to match the consensus
-
br-m
<ofrnxmr:xmr.mx> If you lock in your own checkpoints immediately, that is incorrectly assuming that your checkpoints will be a part of the 2/3+1
-
DataHoarder
less so locking them immediately
-
DataHoarder
but before issuing the next checkpoint after first one was confirmed across the other actors
-
br-m
<ofrnxmr:xmr.mx> We're only checking dns every 300seconds, but updating dns every 120 :/
-
DataHoarder
but that's for general consumption
-
br-m
<ofrnxmr:xmr.mx> So checkpointing nodes should check more often then?
-
DataHoarder
records have TTL set of 300 but they can update faster by querying source of them gossiping with each other directly
-
br-m
<ofrnxmr:xmr.mx> i could see rucks updating every few seconds
-
br-m
<ofrnxmr:xmr.mx> I dont think the records ttl was limited to 300
-
DataHoarder
TTL is time it can be cached but effectively the check can be quite often
-
DataHoarder
specially if you have a recursive resolver that queries the source
-
DataHoarder
DNS is a distribution method but alternate means can be done specially if we pin pubkeys to subdomains :)
-
br-m
<ofrnxmr:xmr.mx> i think the biggest hurdle with dns checkpoints isnt the monerod side of things, but coordinating the updating of the checkpoints to ensure that we dont have a random bag of checkpoints all with different tips
-
br-m
<ofrnxmr:xmr.mx> Like checkpointA had block 305 304 303
-
br-m
<ofrnxmr:xmr.mx> Checkpoint b has 306 305 304
-
br-m
<ofrnxmr:xmr.mx> Checkpoint c has 304 303 302
-
br-m
<ofrnxmr:xmr.mx> Etc
-
br-m
<ofrnxmr:xmr.mx> They all have 304 in common, but the total records dont match, so i think monerod rejects them if they dont match entirely
-
DataHoarder
that's why these could gossip together what checkpoint they'd set
-
DataHoarder
lemme check
-
br-m
<ofrnxmr:xmr.mx> Or set checkpoints at fixed blocks? Like 0 and 5?
-
br-m
<monerobull:matrix.org> always baffles me how the chain tip can be different across stuff
-
br-m
<monerobull:matrix.org> not talking about block propagation
-
br-m
<monerobull:matrix.org> but how sometimes monero gui might show the current block +1 vs xmrchain.net
-
DataHoarder
sometimes they say depth vs height
-
DataHoarder
00:09:28 <br-m> <ofrnxmr:xmr.mx> They all have 304 in common, but the total records dont match, so i think monerod rejects them if they dont match entirely
-
DataHoarder
I am reading this code but what a wild way to check
-
DataHoarder
but if so we actually have a way forward as well
-
DataHoarder
if we want to change that behavior, we can ensure old code always mismatches :)
-
DataHoarder
best way to understand the code is reimplement it :)
-
br-m
<ofrnxmr:xmr.mx> I guess it makes sense if only checking every hour, since all dns should be updated by the time the hour is up
-
DataHoarder
it could race condition but next one would be fixed
-
DataHoarder
the purpose was to publish maybe one every couple of years or so IF a bug was triggered
-
nioc
<monerobull:matrix.org> but how sometimes monero gui might show the current block +1 vs xmrchain.net <<>> daemon shows the block currently being worked on while a block explorer shows the last block created
-
nioc
not sure what the GUI is showing as I never use it
-
br-m
<monerobull:matrix.org> i guess that makes a little bit of sense
-
br-m
<monerobull:matrix.org> hm looks like this is always the case
-
br-m
<monerobull:matrix.org> guess i only noticed it sometimes
-
DataHoarder
ofrnxmr: full record needs to match on current implementation
-
DataHoarder
if higher cadence is necessary, recommendation would be to change the code so it returns the matching records
-
DataHoarder
checking single TXT entry wise (in the ordered set already) instead of checking the set as a whole
-
DataHoarder
this also allows the checkpoints to be used in a way that does not conflict with old implementations
-
DataHoarder
by including a specific bogus record on all checkpointers which is unique to it, this record will never match, but also not match as a set for old ones
-
DataHoarder
it doesn't even need to be a valid checkpoint entry, adding like "_<domain name>" which won't be parsable (but is skipped silently) would be ok
-
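The per-record matching idea can be sketched against the 304/305/306 example from earlier in the chat; the threshold and names are illustrative, and monerod's current rule compares whole record sets instead:

```python
# Sketch: count votes per individual checkpoint record across domains,
# instead of requiring every domain's full record set to be identical.
from collections import Counter

def per_record_majority(domain_sets, threshold):
    votes = Counter()
    for records in domain_sets:
        for rec in set(records):   # one vote per domain per record
            votes[rec] += 1
    return {rec for rec, n in votes.items() if n >= threshold}

# No two sets are identical, so set-wise comparison accepts nothing,
# but record 304 clears a 3-of-3 vote:
a = {"305", "304", "303"}
b = {"306", "305", "304"}
c = {"304", "303", "302"}
assert per_record_majority([a, b, c], 3) == {"304"}
assert not (a == b == c)   # whole-set matching would reject all of these
```

A unique unparsable marker record per checkpointer, as suggested above, would guarantee the set-wise comparison in old code always fails while per-record code simply skips it.
-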
br-m
<ofrnxmr:xmr.mx> I think entire records matching is probably preferable, which would ensure that nodes dont roll back to arbitrary heights, and start building ontop of different tips
-
DataHoarder
then for that TTL of 5m might be too low
-
br-m
<ofrnxmr:xmr.mx> Yeah, i think updating checkpoints when a block hits a *0 or *5 block (every 10min) might be better
-
DataHoarder
as by chance a node asking can get 2-3 different values across the nodes. and if it's 5/7, that'd end up with only a factor of 2 for normal circumstances assuming all domains work
-
DataHoarder
if you checkpoint a 0, by the time some nodes check and all records are valid you could be many depths in
-
br-m
<ofrnxmr:xmr.mx> edit: update when the tip ends in 0 or 5
-
DataHoarder
also - if the records keep changing they may never be valid if recorded set wise
-
DataHoarder
basically if two checkpointers are down or misbehave, the rest need to match perfectly and have no leeway for DNS delays
-
DataHoarder
(in 2/3rds 7 domain setup)
-
br-m
<ofrnxmr:xmr.mx> if we are updating the checkpoints at 12:00 and 12:10, users check at 12:03 and 12:08, they should be getting matching checkpoints for one of those checks, (the times are just to illustrate 10min updates and 5min checks)
-
DataHoarder
they check at 12:03. but their upstream DNS might have only checked at 11:50
-
br-m
<ofrnxmr:xmr.mx> 2/3+1 in a 7 donain setup is 6 matching
-
DataHoarder
they get served this old record while ISP verifies again
-
DataHoarder
it's 5 matching
-
DataHoarder
5/7
-
br-m
<ofrnxmr:xmr.mx> So only 1 can be down
-
br-m
<ofrnxmr:xmr.mx> 5/7 is less than 2/3+1
-
br-m
<ofrnxmr:xmr.mx> Its 6 matching
-
br-m
<ofrnxmr:xmr.mx> 5.66 to be exact, rounded to 6
-
DataHoarder
7 * 0.66 -> round up or to next no?
-
br-m
<ofrnxmr:xmr.mx> +1
-
DataHoarder
if exact
-
br-m
<ofrnxmr:xmr.mx> 2/3+1
-
DataHoarder
and then also round up?
-
br-m
<ofrnxmr:xmr.mx> Yeah
-
br-m
<ofrnxmr:xmr.mx> 2/3 = 4.66 +1 = 5.66 -> rounded = 6
-
DataHoarder
in supermajority terms or network design the +1 usually means rounding it "up"
-
DataHoarder
so I was confused, hmm
-
DataHoarder
yeah that won't work with normal DNS times, we can ensure core ones do it right but for other people wanting to enable these they will effectively be mismatching
-
DataHoarder
if kept to one record + record at depth that could work. but then you have the timing/race options, so you can't have good rolling ones
-
nioc
tevador didn't correct it when 5/7 was mentioned
-
br-m
<ofrnxmr:xmr.mx> i could be wrong about how the int rounds vs truncated
-
br-m
<ofrnxmr:xmr.mx> It might be truncated to 4 + 1 instead of rounded from 4.66->5
-
DataHoarder
it'd surprise me if it wasn't 5/7, but if it's 6/7 basically we have the same error margin as now (one down) plus DNS shenanigans
-
br-m
<ofrnxmr:xmr.mx> Tevador also said 5, so i'd assume that it truncated instead of rounds (and that my 6 is wrong)
-
br-m
<ofrnxmr:xmr.mx> I dont know C++ that well
-
DataHoarder
back to > 00:36:25 <br-m> <ofrnxmr:xmr.mx> if we are updating the checkpoints at 12:00 and 12:10, users check at 12:03 and 12:08, they should be getting matching checkpoints for one of those checks, (the times are just to illustrate 10min updates and 5min checks)
-
DataHoarder
the TTL is a hint of how you'd like your record cached
-
DataHoarder
some ISPs might say fuck you and do 1h or 1d
-
br-m
<ofrnxmr:xmr.mx> Yeah, isp or resolver?
-
DataHoarder
when they get the record is effectively random, as some might drop from cache early
-
DataHoarder
ISP "DNS Resolver" or forwarder
-
DataHoarder
local ones also have cache
-
DataHoarder
local system DNS also has cache
-
DataHoarder
turtles all the way down but DNS fun
-
br-m
<ofrnxmr:xmr.mx> Yeah but we can also instruct pools to run monerod with DNS_PUBLIC=tcp://8.8.8.8 etc?
-
DataHoarder
moneropulse page addresses this by instructing users to use a specific DNS server when querying if their ISPs one is not great
-
DataHoarder
yes, exactly
-
DataHoarder
pools preferably would run a local DNS recursive resolver with almost no cache specifically for monerod
-
br-m
<ofrnxmr:xmr.mx> In my experience, a lot of ISPs dont respect DNSSEC
-
DataHoarder
indeed
-
DataHoarder
-
DataHoarder
you want the closest to a DNS Recursive Resolver
-
br-m
<ofrnxmr:xmr.mx> Then some, like 9.9.9.9, have ruckniums blacklisted lolz
-
br-m
<ofrnxmr:xmr.mx> I think the question of whether pools would run their own dns resolvers should be brought up to the pool ops and not expected
-
DataHoarder
for that purpose the checkpoints domains should use a variety of country codes/root domains that are good, and not all using same moneropulse entry
-
DataHoarder
variations of this can be good specifically due to these blocklists
-
DataHoarder
if a gossip network is to be established and records shared via this as an alternative way (including signatures) that could be used by pools to get records faster
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: Then theres no reason for dns :P
-
DataHoarder
DNS is for the general purpose anyone can enable this
-
br-m
<ofrnxmr:xmr.mx> (Aside for how to inject the checkpoints into the daemon)
-
DataHoarder
DNS records can be pushed you know :)
-
DataHoarder
zone transfers also exist, and NOTIFY, and secondary dns
-
br-m
<ofrnxmr:xmr.mx> I dont know anything about dns 🫡
-
DataHoarder
for example - here's a full copy of the zone file for checkpoints subdomain $ dig +dnssec +multi checkpoints.gammaspectra.live AXFR @ns1-checkpoints.gammaspectra.live
-
DataHoarder
with that any DNS secondary server can serve the entire domain, but they can't sign records
-
br-m
<ofrnxmr:xmr.mx> Says "transfer failed"
-
DataHoarder
add +tcp to flags
-
br-m
<ofrnxmr:xmr.mx> Same
-
DataHoarder
make sure to use that specific server and that your system doesn't override it :)
-
br-m
-
br-m
<ofrnxmr:xmr.mx> Oh. My sistem is very likely overriding (tor)
-
br-m
<ofrnxmr:xmr.mx> System
-
DataHoarder
another one $ dig +tcp +dnssec +multi testpoints.gammaspectra.live AXFR @ns1-testpoints.gammaspectra.live
-
DataHoarder
that's how my secondary DNS servers get updates, they get notified via DNS as well
-
DataHoarder
DNS has eventual consistency
-
DataHoarder
best way to put it
-
DataHoarder
when you are querying 7 different systems each with eventual consistency
-
DataHoarder
but you are updating them continuously AND expect them to match exactly to each other
-
br-m
<ofrnxmr:xmr.mx> I'm not totally against shipping only-moneropulse domains as an immediate resolution
-
DataHoarder
that will consistently not match :)
-
br-m
<ofrnxmr:xmr.mx> In short, it is the least complicated and fastest to deploy
-
DataHoarder
yep
-
DataHoarder
I think real life testing is deserved across a variety of setups to get proper data
-
DataHoarder
specially about the eventual consistency given all record sets have to match entirely, not individual items from the set (or up to specific heights)
-
br-m
<ofrnxmr:xmr.mx> yeah. The "have to match entirely" is probably why (on testnet) it helped to move to just 1 record
-
br-m
<ofrnxmr:xmr.mx> As with multiple records, being updated at every block, it would frequently mismatch at the check
-
br-m
<ofrnxmr:xmr.mx> Allowing for windows where i was able to reorg the checkpointing/checkpointed nodes
-
br-m
<basses:matrix.org> Is there some summary of the whole DNS stuff added to monero? like what it does and the premise?
-
br-m
<basses:matrix.org> I have some questions too.
-
br-m
<basses:matrix.org> On what basis is the choice of TLDs is based on (.ch, .fr etc)[... more lines follow, see
mrelay.p2pool.observer/e/ipT2-7MKdUxXbHBU ]
-
br-m
<ravfx:xmr.mx> For DNSSEC, the registrar and the DNS server MUST be a different entity
-
br-m
<ravfx:xmr.mx> Else the registrar can just change the DNS and DNSSEC will still be valid
-
br-m
<basses:matrix.org> Also
-
br-m
<basses:matrix.org> for how long will they be used? what are the risks if someone stops paying and it's taken over?
-
br-m
<basses:matrix.org> Will they be removed in the future (if alternative solution has been found)?
-
DataHoarder
delegation can be used as well - per subdomain. but indeed, the recommendation I have is that registrars are used to register but not as dns servers
-
DataHoarder
this is a bandaid and temporary solution. read on
monero-project/monero #10064
-
DataHoarder
it has not been decided to use them yet, but to clean up all around it to make way for any positive or negative decision
-
DataHoarder
DNS stuff is not explicitly being added to monero. it already exists as MoneroPulse and is still opt in
-
DataHoarder
-
br-m
<basses:matrix.org> Thanks DataHoarder, I will read it fully, but for now I scrolled to the important parts for me, which there were not enough thoughts and discussions about (choice of TLDs, registrars, DNS providers, risks of takeover, who controls them (the old ones and new ones)).
-
br-m
<basses:matrix.org> Will the new solution also be mandatory (to mitigate current attacks), not opt-in as it has been?
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: any updates we deploy to the codebase dont affect the network. Lets say some entity decided to roll back 50 blocks, we could "undo" that by contacting mining pools and having them enable checkpoints on their nodes
-
DataHoarder
I think the current consensus is for them to continue opt-in
-
br-m
<ofrnxmr:xmr.mx> The tlds have been owned for many years
-
br-m
<ofrnxmr:xmr.mx> They are the same tlds used for various monero things, such as dns blocklists
-
DataHoarder
it would be recommended for specifically miners and merchants to opt-in, but that is still their decision
-
br-m
<ofrnxmr:xmr.mx> they are owned by core
-
br-m
<ofrnxmr:xmr.mx> so we arent "choosing" tlds, but as fluffy noted on 10064, they already exist
-
DataHoarder
all the current and new ones in that PR are owned by core, no diversification has been done. it was recently touched on in the MRL meeting a few hours ago
-
br-m
<ofrnxmr:xmr.mx> 4 of them are already in the codebase
-
br-m
<basses:matrix.org> which is not always great with incidents Ruck mentioned from the link DH posted?
-
br-m
<ofrnxmr:xmr.mx> Of course its not, BUT they are opt in
-
br-m
<ofrnxmr:xmr.mx> If something nefarious were to happen, miners and exchanges need only un-opt-in and restart their nodes
-
br-m
<basses:matrix.org> hmmm ok, thanks all. But I would look into decentralization of choice of providers but if it is maintained by "core" then I guess nothing...
-
br-m
<basses:matrix.org> however Binaryfate seems to be taking suggestions
monero-project/monero #10064#issuecomment-3259592231
-
br-m
<basses:matrix.org> so im a bit hopeful!
-
br-m
<ofrnxmr:xmr.mx> binaryfate will run whatever setup we ask him to
-
DataHoarder
we can make a checklist for getting the setup refreshed
-
DataHoarder
besides code changes, operational changes
-
br-m
<basses:matrix.org> that would be great!
-
br-m
<ofrnxmr:xmr.mx> The pr for code changes is up (10075)
-
br-m
<ofrnxmr:xmr.mx> Between 10064 and 10075, any feedback is appreciated
-
DataHoarder
(sigh I hate to keep linking this project now) I have made a short list on DNSSEC and other deployment notes for a delegated subdomain
git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
-
br-m
<ofrnxmr:xmr.mx> i may also like to disable checkpoint checking by default. Not sure how others feel about that
-
br-m
<ofrnxmr:xmr.mx> --enable-dns-checkpoints is currently ON by default. I dont like that, never did
-
DataHoarder
ofrnxmr: does #10075 implement the desired math? :D
-
br-m
<ofrnxmr:xmr.mx> It notifies users if there are discrepancies, but its also calling home unnecessarily
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: the 2/3+1? Yeah
-
DataHoarder
5/7 or 6/7 :)
-
DataHoarder
integer division rounds down
-
br-m
<ofrnxmr:xmr.mx> probably 5/7
-
br-m
<ofrnxmr:xmr.mx> DataHoarder: I think it truncates, not even rounding
-
DataHoarder
well yes
-
DataHoarder
not nearest, just, down
-
DataHoarder
if interpreted in floating calls
-
nioc
<ofrnxmr:xmr.mx> --enable-dns-checkpoints is currently ON by default. I dont like that, never did <<>> I thought that it wasn't as it was suggested to use --enforce-dns-checkpointing if one wished to
-
DataHoarder
floor((7 * 2) / 3) + 1
-
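The two readings of "2/3+1" for 7 domains debated above, checked numerically:

```python
# Truncating integer division vs rounding the fractional result.
N = 7

trunc = (2 * N) // 3 + 1        # 14 // 3 = 4, +1 = 5 (C++-style integer math)
rounded = round(2 * N / 3 + 1)  # 4.67 + 1 = 5.67, rounds to 6

assert trunc == 5    # matches tevador's and DataHoarder's 5/7 reading
assert rounded == 6  # matches ofrnxmr's earlier 6/7 calculation
```

So the answer depends entirely on whether the division truncates (as C++ integer division does) or the fractional value is rounded.
-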
br-m
<ofrnxmr:xmr.mx> enforce and enable are different flags
-
DataHoarder
yep :)
-
nioc
ah
-
DataHoarder
enable just screams in logs
-
br-m
<ofrnxmr:xmr.mx> --enable-dns-checkpoints --disable-dns-checkpoints and --enforce-dns-checkpointing
-
DataHoarder
note by default this uses the ISP/system DNS which won't show the direct ip of the requester
-
nioc
cause I tried to use enable today by mistake and daemon gave me a list of possible commands :D