09:59:15 Do workshares increase the advantage a miner with low network latency has over miners with higher network latency?
11:45:42 better reward distribution is nice to have (we have p2pool), but the issue is a non-rational attacker with too much burst hash rate. Mid term someone may rent 51% for longer periods and dev resources should go where needed most.
12:50:14 List of Qubic blocks up to this recent epoch 177 https://irc.gammaspectra.live/2990a5684cf1c5a5/qubic-blocks-epoch177.csv Total reported: 11209 blocks
12:50:14 Added: block status (CHAIN or ORPHAN), orphan blocks, and their block headers for coinbase verification. These blocks are found by scanning open monero instances for their alt blocks and saving them, to gather any late blocks that could have been received by other monero nodes.
15:17:35 DataHoarder: The Qubic docs I'm looking at say an epoch is seven days but that must be outdated.... how long is an epoch?
15:17:53 one week, correct
15:18:02 the list of blocks is up to and including epoch 177
15:18:28 oh okay, was about to say a week only has 5040 blocks
15:18:36 thx lol
15:18:38 if you want to filter blocks only made with their epoch 177 key use 42Vt47oLyRT7C1Ch3BbapKFZgs5Hip5m3RrRVT2dbjDT8NWs76gc77NgfZvzXpZnPYGgVZFf79T5TSKWSjFxYWk4A77WGa6
15:19:03 Some orphan blocks for previous epochs keep appearing on some remote monero nodes, so this list may grow over time
15:19:23 orphan block spelunking :)
15:19:44 if you want a specific one for a single epoch I can generate it, or filter by the address for that epoch
15:21:03 I was just curious what percent of blocks they were producing over a longer timespan like a week
15:21:06 1460 blocks mined (including orphans) on 177
15:21:12 see https://gist.github.com/Rucknium/fb9a02fbd89c8d93e0d9f48fbc470e05
15:21:36 ruck collated this data, numbers are different as qubic sometimes doesn't roll keys instantly
15:21:44 nice ty for the link
15:22:08 the corresponding section on https://github.com/monero-project/monero/issues/10064 has epoch 176
15:32:01 @monero.arbo:matrix.org: My analysis includes orphaned blocks. That will be more blocks than on the main chain. I am trying to get a rough estimate of hashpower share. Qubic orphaning honest blocks removes them from the main chain. Hashpower share estimates based only on main chain data would be inaccurate
15:35:11 what time range are you using for the table?
15:35:51 as I get 1460 blocks mined including orphans, 1127 without (only main blocks)
15:36:40 important it being Wed 12 to 12 UTC :)
15:39:07 DataHoarder: I merge it with alt chain data from my node. I think some are lost in that step.
15:39:55 ah yeah, that'd be correct. some of these orphan blocks were on very obscure remote nodes and pushing these around the network is not trivial
19:56:20 so how is it that fluffy blocks slow down Qubic's blocks?
19:56:50 and is there any way to kick it up a notch
19:57:48 Qubic's blocks have txs in them that have never been seen before
19:58:27 ah so its just verification time?
19:58:43 So they have to broadcast blocks + txs at the same time, and receiving nodes have to verify the txs at the same time they receive the block
19:59:13 With fluffy blocks, the node has already verified the txs (this is my understanding)
20:00:01 gotcha gotcha. so just mechanics. there's no designed property of the fluffy block system for a node to delay the broadcast of a block with unknown txs
20:02:01 > A block is made up of a header and transactions.
> Fluffy Blocks only contain a header, a list of transaction indices, and any transactions that the node receiving the block may be missing. This saves bandwidth because nodes might already know about most or all of the transactions in the block and they don't need to be sent them again.
20:04:50 im sure this has been thought of or discussed before, but could a node intentionally slow down the relay of a block that contains unknown txs? Like, if a node has a txpool of 40 txs, and a block comes in that has 0/40 transactions ... the node could delay block propagation by n seconds or something. if it has 10/40 transactions, then 0.9*n, etc
20:06:23 Wouldn't that potentially punish including any txs, because miners cannot be sure that each tx has actually properly propagated through nodes' txpools?
20:08:04 i think it would depend on tx propagation. and the extreme case of 0/total seems like it would only come from selfish mining
20:10:15 though qubic could just publish empty blocks.... and working in a delay empty blocks thing probably opens up more nonsense
20:15:49 though an empty block would be 0/total ...
20:16:11 they could broadcast all but the tip
20:16:22 as monero builds up
20:16:55 what do you mean.. broadcast blocks? or txs?
20:17:05 blocks and txs
20:17:24 It also doesnt work in reverse
20:17:50 Meaning.. how do we know which node's tx pool was the correct one?
20:18:09 What if the "real" txpool was empty, but qubic's txpool had 10txs?
20:18:45 then it'd still be 0/total match
20:18:59 Or vice versa, if the real txpool had 10txs, but my node hasnt seen them yet? delaying the block is essentially delaying the tx relay
20:19:00 Which doesnt make sense
20:19:19 i guess we'd need to gather data from the network
20:19:43 but id imagine that most honest nodes see some > 90% of txs in their txpool before they see a block come in that references them
20:19:44 an example that really annoys me, is that the txpool logic is pretty ugly atm
20:20:11 I have 4 or 5 nodes running an fcmp testnet. I, repeatedly, have a mismatch between txpools
20:20:11 otherwise fluffyblocks wouldn't be magical
20:20:44 perhaps run a normal testnet and see if its fcmp specific ?
20:20:59 its not fcmp specific
20:21:07 its tx relay logic
20:22:06 The txpool was 330mb. All nodes except for my mining node had the full 330mb. The mining node decided to pull in like 100txs at a time, and wait like 5mins before requesting more
20:22:29 The blocks i was mining MISSED a lot of txs, due to no fault of the miner
20:22:59 so something else to optimize
20:23:04 Just because the stupid node doesnt request the full contents of the tx pool,
20:23:04 Hopefully 0xfffc's txrelayv2 pr fixes this
20:23:21 yeah whats the status of that
20:23:32 updated yesterday
20:24:04 https://github.com/monero-project/monero/pull/9933
20:30:05 IIRC, it was common to have varying txpool sizes between stressnet nodes last year.
20:30:59 Nodes have a backstop mechanism to re-broadcast txs occasionally if they haven't been mined. In theory, that should get you the same txs in each txpool eventually.
20:31:38 I have occasional wait timers on my spammer to allow txs to propagate. Maybe the RPC connection/activity stops tx broadcast.
20:43:06 My mining node uses monerod as the miner, and only queries rpc to check the txpool every couple mins
20:43:38 oh jeez
20:45:56 @rucknium: My spammer runs flat out.
And in my scenario above, i hit 325mb txpool BECAUSE the mining node wasnt getting the txs, causing me to run out of inputs
20:46:31 so the non-mining nodes and spammer were idle for over an hour, while i waited for the mining node's txpool to catch up
20:47:22 i even restarted the node after a while, thinking it was "stuck". After which, it downloaded like 500mb and uploaded 900 😭
20:47:38 For a 325mb txpool, it used over 1.5gb to receive it
20:48:00 Received 586058072 bytes (558.91 MB) in 103435 packets in 54.8 minutes, average 173.96 kB/s = 0.53% of the limit of 32.00 MB/s
20:48:00 Sent 1199412459 bytes (1.12 GB) in 41065 packets in 54.8 minutes, average 356.02 kB/s = 4.35% of the limit of 8.00 MB/s
20:48:46 Nvm, that was for the last 60(!!)mb of txs!
20:49:30 sent 1.12gb to nodes that already had the txs, and downloaded 560mb trying to pull in 60mb. Insanely inefficient
20:55:46 hrmmm... monero on shadow with 90 agents running is taking up 30 GB ram .... 1k agents might be a tall order
21:01:32 monerod and lmdb both need ram, and wallets as well
21:02:29 I dont know what you mean by agents, but i assume each agent is running a node?
21:35:23 23:22:19 i think we should update proactively (as opposed to reactively), maybe rolling blocks 4-10? Updating at every new block
21:35:45 proactively yeah, from some depth from tip as they come
21:36:10 Yea
21:36:31 you probably want to do 4-10 but keep some before (so nodes with reaaally sluggish DNS still get some old ones)
21:36:42 and maybe do a pin of the last randomx epoch
21:37:01 so that's for deep reorg verification
21:37:09 the issue to watch for is how expensive the parsing of the records is
21:37:28 it's integer parsing + hex
21:38:21 I publish a copy of checkpoints on $ dig +dnssec +multi checkpoints.gammaspectra.live TXT
21:38:23 for my own testing
21:38:28 i mean, we're contacting 7+ servers, and comparing their results. I figure more records is more costly, (but maybe inconsequential )
21:38:46 the DNSSEC signature verification, ed25519, is more expensive :)
21:39:01 specially as that's also hashing and doing base64 decode
21:39:09 or ecdsa depending on algo
21:39:11 so its more a matter of how many servers vs how many records?
21:39:42 keeping it to ~10 "active" checkpoints is a sane number I was testing with
21:39:46 +2 for deep reorg
21:40:01 and any other that want to be set for old purposes like current records have
21:40:38 i wonder if dns checkpoints help with node sync
21:40:38 technically you would never have to do parsing of old records if you know you have them, but it's probably slower to check these in string form than to parse
21:41:00 static checkpoints do, these get added on the same call but without the pinned difficulty
21:41:11 @ofrnxmr:xmr.mx: (Shower thoughts)
21:41:37 DataHoarder: Yeah, i know the static ones do. Not sure abt dns though
21:41:53 they get added on the same "call" minus difficulty, handled internally
21:42:02 I was looking at that code before MRL meeting
21:42:10 for > Checkpoint at given height already exists, and hash for new checkpoint was different!
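The records being compared here are, as noted above, just integer parsing plus hex. A rough illustration in C++, assuming each TXT entry carries a height and a block hash separated by a colon (this is a sketch for the discussion, not monerod's checkpoint code):

    // Minimal sketch: parse one checkpoint TXT record of the assumed form
    // "<height>:<block hash hex>" - the "integer parsing + hex" step mentioned above.
    // Illustration only, not monerod's implementation.
    #include <cstdint>
    #include <iostream>
    #include <optional>
    #include <string>

    struct Checkpoint {
        uint64_t height;
        std::string hash_hex; // 64 hex chars = 32-byte block hash
    };

    std::optional<Checkpoint> parse_record(const std::string& rec)
    {
        const auto sep = rec.find(':');
        if (sep == std::string::npos)
            return std::nullopt;                 // not parsable, skip silently
        Checkpoint cp;
        try {
            cp.height = std::stoull(rec.substr(0, sep));
        } catch (...) {
            return std::nullopt;
        }
        cp.hash_hex = rec.substr(sep + 1);
        if (cp.hash_hex.size() != 64 ||
            cp.hash_hex.find_first_not_of("0123456789abcdef") != std::string::npos)
            return std::nullopt;                 // hash must be 32 bytes of lowercase hex
        return cp;
    }

    int main()
    {
        // hypothetical record; a real one carries an actual block hash
        const auto cp = parse_record("3464900:" + std::string(64, 'a'));
        if (cp)
            std::cout << "checkpoint at height " << cp->height << '\n';
    }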
21:43:32 oh yes.
21:43:36 they definitely do
21:43:58 they also trigger all the other events, like bootstrap nodes must also be on the max height checkpoint
21:44:11 and wallet2::fast_refresh
21:44:22 There was an issue where:
21:44:23 The checkpointing node was used to create the checkpoint, but was reorged before it checked the dns checkpoint, and then created a new checkpoint that conflicted with the old one
21:44:37 also wallet2::trim_hashchain
21:44:49 This caused the node to be incapable of creating blocks, since the checkpoints conflicted
21:45:24 that has to be checked by the system issuing checkpoints itself
21:45:37 that the new checkpoint IS in a chain that the previous checkpoint was part of
21:46:10 walk the tree, effectively
21:46:57 you also can take multiple points and aggregate the data (not for issuing checkpoints/voting, but for supplementing blocks to feed back to proper monerod)
21:47:18 if you see alts on other areas, pass them around so everyone has the same available information
21:49:07 maybe checkpointing nodes should reject reorgs deeper than the checkpoint height altogether, instead of relying on dns themselves
21:53:13 yeah, but they need to be aware of the checkpoints
21:53:43 local RPC to query/verify and set checkpoints on the local node via privileged RPC can be very useful
21:54:09 that way you can set, and verify they are active and the current chain is on them
21:55:08 it needs to fail safe and be checked in depth, if there is any risk not issuing anything or bailing out is safer than continuing
21:55:35 even if monero verifies it's on a chain with the checkpoints, the system that places the checkpoints should also walk the tree and verify
21:55:46 must*
21:58:07 hmmm. On second thought, if your node is reorged and invalidates the checkpoints that your dns just set, the BFT should fix your node (since your node would get updated from the consensus of other nodes), and then your dns should update your records to match the consensus
21:58:49 If you lock in your own checkpoints immediately, that is incorrectly assuming that your checkpoints will be a part of the 2/3+1
22:00:07 less so locking them immediately
22:00:22 but before issuing the next checkpoint after the first one was confirmed across the other actors
22:00:47 We're only checking dns every 300 seconds, but updating dns every 120 :/
22:01:01 but that's for general consumption
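A sketch of the "walk the tree" check described above: before publishing, the issuing system confirms that the candidate checkpoint descends from the last one it issued, and fails safe (publishes nothing) when ancestry is unknown. The in-memory header lookup is a toy stand-in for querying the local node; this is an illustration, not existing monerod or checkpointer code:

    // Verify a candidate checkpoint descends from the previously issued one by
    // walking parent hashes, failing safe if ancestry cannot be established.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    struct Header { std::string hash, prev_hash; uint64_t height; };

    // toy "node": known headers indexed by hash (would be an RPC call in practice)
    std::map<std::string, Header> g_headers;

    std::optional<Header> get_header(const std::string& hash)
    {
        const auto it = g_headers.find(hash);
        if (it == g_headers.end()) return std::nullopt;
        return it->second;
    }

    bool descends_from(Header candidate, const Header& last_checkpoint)
    {
        while (candidate.height > last_checkpoint.height) {
            const auto parent = get_header(candidate.prev_hash);
            if (!parent) return false;   // unknown ancestry: fail safe, do not publish
            candidate = *parent;
        }
        return candidate.hash == last_checkpoint.hash;
    }

    int main()
    {
        // tiny hypothetical chain a -> b -> c
        g_headers["a"] = {"a", "", 100};
        g_headers["b"] = {"b", "a", 101};
        g_headers["c"] = {"c", "b", 102};
        std::cout << descends_from(g_headers["c"], g_headers["a"]) << '\n'; // 1: safe to publish
    }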
22:01:17 So checkpointing nodes should check more often then?
22:01:35 records have a TTL set of 300 but they can update faster by querying the source of them or gossiping with each other directly
22:02:06 i could see rucks updating every few seconds
22:02:14 I dont think the records ttl was limited to 300
22:02:28 TTL is the time it can be cached but effectively the check can be quite often
22:02:38 specially if you have a recursive resolver that queries the source
22:07:48 DNS is a distribution method but alternate means can be done specially if we pin pubkeys to subdomains :)
22:07:56 i think the biggest hurdle with dns checkpoints isnt the monerod side of things, but coordinating the updating of the checkpoints to ensure that we dont have a random bag of checkpoints all with different tips
22:08:32 Like checkpointA had block 305 304 303
22:08:32 Checkpoint b has 306 305 304
22:08:32 Checkpoint c has 304 303 302
22:08:32 Etc
22:09:28 They all have 304 in common, but the total records dont match, so i think monerod rejects them if they dont match entirely
22:09:54 that's why these could gossip together what checkpoint they'd set
22:10:09 lemme check
22:10:22 Or set checkpoints at fixed blocks? Like 0 and 5?
22:16:45 always baffles me how the chain tip can be different across stuff
22:17:07 not talking about block propagation
22:17:38 but how sometimes monero gui might show the current block +1 vs xmrchain.net
22:18:08 sometimes they say depth vs height
22:18:33 > 00:09:28 They all have 304 in common, but the total records dont match, so i think monerod rejects them if they dont match entirely
22:18:41 I am reading this code but what a wild way to check
22:18:51 but if so we actually have a way forward as well
22:19:15 if we want to change that behavior, we can ensure old code always mismatches :)
22:20:32 best way to understand the code is to reimplement it :)
22:21:00 I guess it makes sense if only checking every hour, since all dns should be updated by the time the hour is up
22:22:58 it could race condition but the next one would be fixed
22:23:15 the purpose was to publish maybe one every couple of years or so IF a bug was triggered
22:25:53 but how sometimes monero gui might show the current block +1 vs xmrchain.net <<>> daemon shows the block currently being worked on while a block explorer shows the last block created
22:26:13 not sure what the GUI is showing as I never use it
22:27:07 i guess that makes a little bit of sense
22:27:44 hm looks like this is always the case
22:27:59 guess i only noticed it sometimes
22:29:24 ofrnxmr: full record needs to match on the current implementation
22:29:58 if higher cadence is necessary, the recommendation would be to change the code so it returns the matching records
22:30:26 checking single TXT entry wise (in the ordered set already) instead of checking the set as a whole
22:30:41 this also allows the checkpoints to be used in a way that does not conflict with old implementations
22:31:26 by including a specific bogus record on all checkpointers which is unique to it, this record will never match, but also not match as a set for old ones
22:32:38 it doesn't even need to be a valid checkpoint entry, adding like "_" which won't be parsable (but is skipped silently) would be ok
22:32:49 I think entire records matching is probably preferable, which would ensure that nodes dont roll back to arbitrary heights, and start building on top of different tips
22:33:09 then for that a TTL of 5m might be too low
22:33:55 Yeah, i think updating checkpoints when a block hits a *0 or *5 block (every 10min) might be better
22:33:56 as by chance a node asking can get 2-3 different values across the nodes. and if it's 5/7, that'd end up with only a factor of 2 for normal circumstances assuming all domains work
22:34:40 if you checkpoint a 0, by the time some nodes check and all records are valid you could be many depths in
22:34:41 edit: update when the tip ends in 0 or 5
22:34:58 also - if the records keep changing they may never be valid if recorded set wise
22:36:03 basically if two checkpointers are down or misbehave, the rest need to match perfectly and have no leeway for DNS delays
22:36:17 (in 2/3rds 7 domain setup)
22:36:25 if we are updating the checkpoints at 12:00 and 12:10, users check at 12:03 and 12:08, they should be getting matching checkpoints for one of those checks, (the times are just to illustrate 10min updates and 5min checks)
22:36:47 they check at 12:03. but their upstream DNS might have only checked at 11:50
22:36:57 2/3+1 in a 7 domain setup is 6 matching
22:37:03 they get served this old record while the ISP verifies again
22:37:25 it's 5 matching
22:37:29 5/7
22:37:34 So only 1 can be down
22:37:39 5/7 is less than 2/3+1
22:37:48 Its 6 matching
22:38:01 5.66 to be exact, rounded to 6
22:38:18 7 * 0.66 -> round up or to next no.?
22:38:28 +1
22:38:29 if exact
22:38:36 2/3+1
22:38:45 and then also round up?
22:38:50 Yeah
22:39:04 2/3 = 4.66 +1 = 5.66 -> rounded = 6
22:39:54 in supermajority terms or network design the +1 usually means rounding it "up"
22:40:01 so I was confused, hmm
22:41:18 yeah that won't work with normal DNS times, we can ensure core ones do it right but for other people wanting to enable these they will effectively be mismatching
22:42:06 if kept to one record + a record at depth that could work. but then you have the timing/race options, so you can't have good rolling ones
22:43:25 tevador didn't correct it when 5/7 was mentioned
22:43:31 i could be wrong about how the int rounds vs truncates
22:44:00 It might be truncated to 4 + 1 instead of rounded from 4.66->5
22:44:19 it'd surprise me if it wasn't 5/7, but if it's 6/7 basically we have the same error margin as now (one down) plus DNS shenanigans
22:44:54 Tevador also said 5, so i'd assume that it truncates instead of rounds (and that my 6 is wrong)
22:45:07 I dont know c++ great
22:45:45 back to > 00:36:25 if we are updating the checkpoints at 12:00 and 12:10, users check at 12:03 and 12:08, they should be getting matching checkpoints for one of those checks, (the times are just to illustrate 10min updates and 5min checks)
22:46:02 the TTL is a hint of how you'd like your record cached
22:46:09 some ISPs might say fuck you and do 1h or 1d
22:46:23 Yeah, isp or resolver?
22:46:24 when they get the record is effectively random, as some might drop from cache early
22:46:33 ISP "DNS Resolver" or forwarder
22:46:37 local ones also have cache
22:46:46 local system DNS also has cache
22:46:55 turtles all the way down but DNS fun
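On the 5-vs-6 question above: if the threshold is computed with plain integer arithmetic, as the later messages conclude, the division truncates, so a 7 domain setup needs 5 matching answers. A quick sketch assuming the expression is simply num_domains * 2 / 3 + 1 (an illustration, not necessarily the exact code in the PR):

    // 2/3+1 threshold for 7 domains with C++ integer arithmetic:
    // 7 * 2 = 14, 14 / 3 = 4 (truncated, not rounded), 4 + 1 = 5.
    #include <cstddef>
    #include <iostream>

    int main()
    {
        const std::size_t num_domains = 7;
        const std::size_t threshold = num_domains * 2 / 3 + 1;
        std::cout << threshold << '\n'; // prints 5, i.e. 5 of 7 record sets must agree
    }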
22:47:17 Yeah but we can also instruct pools to run monerod with DNS_PUBLIC=tcp://8.8.8.8 etc?
22:47:18 moneropulse page addresses this by instructing users to use a specific DNS server when querying if their ISP's one is not great
22:47:22 yes, exactly
22:47:51 pools preferably would run a local DNS recursive resolver with almost no cache specifically for monerod
22:48:03 In my experience, a lot of ISPs dont respect DNSSEC
22:48:12 indeed
22:48:14 https://www.cloudflare.com/learning/dns/dns-server-types/
22:48:39 you want the closest to a DNS Recursive Resolver
22:48:46 Then some, like 9.9.9.9, have rucknium's blacklisted lolz
22:49:31 I think the question of whether pools would run their own dns resolvers should be brought up to the pool ops and not expected
22:49:41 for that purpose the checkpoints domains should use a variety of country codes/root domains that are good, and not all use the same moneropulse entry
22:49:57 variations of this can be good specifically due to these blocklists
22:50:39 if a gossip network is to be established and records shared via this as an alternative way (including signatures) that could be used by pools to get records faster
22:51:08 DataHoarder: Then theres no reason for dns :P
22:51:21 DNS is for the general purpose, anyone can enable this
22:51:37 (Aside for how to inject the checkpoints into the daemon)
22:51:38 DNS records can be pushed you know :)
22:51:57 zone transfers also exist, and NOTIFY, and secondary dns
22:52:27 I dont know anything about dns 🫡
22:53:18 for example - here's a full copy of the zone file for the checkpoints subdomain $ dig +dnssec +multi checkpoints.gammaspectra.live AXFR @ns1-checkpoints.gammaspectra.live
22:53:38 with that any DNS secondary server can serve the entire domain, but they can't sign records
22:53:53 Says "transfer failed"
22:54:16 add +tcp to flags
22:54:30 Same
22:54:41 make sure to use that specific server and that your system doesn't override it :)
22:54:43 https://mrelay.p2pool.observer/p/z-HX-rMKUmNJbUxJ/1.txt (code snippet, 5 lines)
22:55:04 Oh. My sistem is very likely overriding (tor)
22:55:06 System
22:55:42 another one $ dig +tcp +dnssec +multi testpoints.gammaspectra.live AXFR @ns1-testpoints.gammaspectra.live
22:56:11 that's how my secondary DNS servers get updates, they get notified via DNS as well
22:56:56 DNS has eventual consistency
22:57:01 best way to put it
22:57:14 when you are querying 7 different systems each with eventual consistency
22:57:31 but you are updating them continuously AND expect them to match exactly to each other
22:57:43 I'm not totally against shipping only-moneropulse domains as an immediate resolution
22:57:44 that will consistently not match :)
22:58:01 In short, it is the least complicated and fastest to deploy
22:58:24 yep
22:58:26 I think real life testing is deserved across a variety of setups to get proper data
22:59:07 specially about the eventual consistency given all record sets have to match entirely, not individual items from the set (or up to specific heights)
23:00:19 yeah. The "have to match entirely" is probably why (on testnet) it helped to move to just 1 record
23:00:50 As with multiple records, being updated at every block, it would frequently mismatch at the check
23:01:32 Allowing for windows where i was able to reorg the checkpointing/checkpointed nodes
23:38:00 Is there some summary of the whole DNS stuff added to monero? like what it does and the premise?
23:38:00 I have some questions too.
23:38:00 On what basis is the choice of TLDs based on (.ch, .fr etc) [... more lines follow, see https://mrelay.p2pool.observer/e/ipT2-7MKdUxXbHBU ]
23:39:04 For DNSSEC, the registrar and the DNS server MUST be a different entity
23:39:04 Else the registrar can just change the DNS and DNSSEC will still be valid
23:40:41 Also
23:40:41 for how long will they be used? what are the risks if payment stops and it's taken over?
23:40:41 Will they be removed in the future (if an alternative solution has been found)?
23:40:49 delegation can be used as well - per subdomain. but indeed, the recommendation I have is that registrars are used to register but not as dns servers
23:41:08 this is a bandaid and temporary solution. read on https://github.com/monero-project/monero/issues/10064
23:41:34 it has not been decided to use them yet, but to clean up all around it to make way for any positive or negative decision
23:42:54 DNS stuff is not explicitly being added to monero. it already exists as MoneroPulse and is still opt in
23:43:05 see https://docs.getmonero.org/infrastructure/monero-pulse/
23:47:34 Thanks DataHoarder, I will read it fully, but for now I scrolled to the important parts for me, which there was not enough thought and discussion about (choice of TLDs, registrars, DNS providers, risks of takeover, who controls them (the old ones and new ones)).
23:47:34 Will the new solution also be mandatory (to mitigate current attacks), not opt in as it has been?
23:47:56 DataHoarder: any updates we deploy to the codebase dont affect the network. Lets say some entity decided to roll back 50 blocks, we could "undo" that by contacting mining pools and having them enable checkpoints on their nodes
23:48:02 I think the current consensus is for them to continue opt-in
23:48:19 The tlds have been owned for many years
23:48:43 They are the same tlds used for various monero things, such as dns blocklists
23:48:52 it would be recommended for specifically miners and merchants to opt-in, but that is still their decision
23:48:57 they are owned by core
23:49:39 so we arent "choosing" tlds, but as fluffy noted on 10064, they already exist
23:49:39 all the current and new ones in that PR are owned by core, no diversification has been done. it was recently touched on in the MRL meeting a few hours ago
23:49:46 4 of them are already in the codebase
23:50:12 which is not always great with the incidents Ruck mentioned from the link DH posted?
23:50:28 Of course its not, BUT they are opt in
23:50:58 If something nefarious were to happen, miners and exchanges need only un-opt-in and restart their nodes
23:52:20 hmmm ok, thanks all. But I would look into decentralization of choice of providers but if it is maintained by "core" then I guess nothing...
23:53:44 however Binaryfate seems to be taking suggestions https://github.com/monero-project/monero/issues/10064#issuecomment-3259592231
23:53:59 so im a bit hopeful!
23:54:06 binaryfate will run whatever setup we ask him to
23:54:29 we can make a checklist for getting the setup refreshed
23:54:39 besides code changes, operational changes
23:54:57 that would be great!
23:55:02 The pr for code changes is up (10075)
23:55:16 Between 10064 and 10075, any feedback is appreciated
23:55:41 (sigh I hate to keep linking this project now) I have made a short list on DNSSEC and other deployment notes for a delegated subdomain https://git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
23:55:54 i may also like to disable checkpoint checking by default.
Not sure how others feel about that
23:56:12 --enable-dns-checkpoints is currently ON by default. I dont like that, never did
23:56:25 ofrnxmr: does #10075 implement the desired math? :D
23:56:28 It notifies users if there are discrepancies, but its also calling home unnecessarily
23:56:38 DataHoarder: the 2/3+1? Yeah
23:56:46 5/7 or 6/7 :)
23:56:56 integer division rounds down
23:56:58 probably 5/7
23:57:21 DataHoarder: I think it truncates, not even rounding
23:57:28 well yes
23:57:39 not nearest, just, down
23:57:48 if interpreted in floating calls
23:57:50 --enable-dns-checkpoints is currently ON by default. I dont like that, never did <<>> I thought that it wasn't as it was suggested to use --enforce-dns-checkpointing if one wished to
23:58:01 floor((7 * 2) / 3) + 1
23:58:13 enforce and enable are different flags
23:58:17 yep :)
23:58:28 ah
23:58:44 enable just screams in logs
23:58:46 --enable-dns-checkpoints --disable-dns-checkpoints and --enforce-dns-checkpointing
23:59:10 note by default this uses the ISP/system DNS which won't show the direct ip of the requester
23:59:24 cause I tried to use enable today by mistake and the daemon gave me a list of possible commands :D
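Tying the two threads together, here is a toy version of the kind of check being discussed: each domain returns a full TXT record set, a set only counts if it matches another domain's set exactly, and a result is accepted once at least 2/3+1 of the domains agree. This is a sketch of the behaviour described in the log, with made-up record values, not the code in #10075:

    // Toy vote over per-domain record sets: a set counts only if it is identical
    // to another domain's set, and the winner needs >= num_domains*2/3+1 copies.
    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using RecordSet = std::vector<std::string>;

    bool pick_checkpoints(std::vector<RecordSet> per_domain, RecordSet& out)
    {
        const std::size_t threshold = per_domain.size() * 2 / 3 + 1; // 5 for 7 domains
        std::map<RecordSet, std::size_t> votes;
        for (auto& set : per_domain) {
            std::sort(set.begin(), set.end());   // order-insensitive comparison
            ++votes[set];
        }
        for (const auto& [set, count] : votes)
            if (count >= threshold) { out = set; return true; }
        return false;                            // no supermajority: use nothing
    }

    int main()
    {
        // 7 hypothetical domains; 5 agree, 1 serves stale records, 1 is unreachable
        RecordSet good  = {"3464900:aaaa...", "3464905:bbbb..."};
        RecordSet stale = {"3464895:cccc...", "3464900:aaaa..."};
        std::vector<RecordSet> answers = {good, good, good, good, good, stale, {}};
        RecordSet agreed;
        std::cout << (pick_checkpoints(answers, agreed) ? "accepted" : "rejected") << '\n';
    }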