02:00:16 @rucknium:monero.social, p2pool api now also returns orphan blocks. monero-blocks had an update to support this new format (and mark blocks as orphan), but the old one keeps working, it just labels blocks as valid
04:58:46 I have added direct proof verification (from Monero coinbase verification) to https://qubic-snooper.p2pool.observer/tree/ ; this currently works with either disclosed pool view keys or disclosed tx private keys for each coinbase tx. The proof is shown and can be clicked to get a full proof in an explorer.
04:58:46 Current pools: P2Pool (all public), MoneroOcean, Qubic (delayed one week). If any other pool wants to disclose their view keys they can be added; otherwise the matching is done via their public APIs and taken at face value (but not verified)
06:51:26 deployed onto blocks.p2pool.observer, also shows when proofs fail. https://irc.gammaspectra.live/45ebcfb298833c14/image.png Qubic does not release view keys for the current week, and as such all their blocks will fail verification unless they provide the view keys. Their previous blocks (from the last epoch) pass fine!
06:51:26 https://irc.gammaspectra.live/564f3b491897662a/image.png
06:53:11 Source code is also available at https://git.gammaspectra.live/P2Pool/blocks.p2pool.observer
15:40:46 Full plot is at https://blocks.p2pool.observer/fullplot.svg without cutouts
21:48:54 What's the bottleneck on resolving the Qubic situation? Funding needed?
21:48:54 (sry for crosspost)
21:59:44 @longtermwhale:matrix.org: Currently research bandwidth and productive discussion between research members for a viable solution.
22:12:26 my apologies @longtermwhale:matrix.org, to be precise: multiple long-term solutions have been proposed, but they all have tradeoffs or unquantified caveats. Short-term, however, DNS-based checkpointing is currently being tested.
22:58:25 @longtermwhale:matrix.org anyway, there is a PR open for the DNS checkpoints
22:58:53 @longtermwhale:matrix.org: I personally put a lot of time into trying to attract researchers to work on Monero research topics when I was on the MAGIC Monero Fund committee (2022-2024). I had mixed success.
22:59:07 The main things to discuss are
22:59:07 1. Depth to checkpoint
22:59:07 2. How many checkpoints to save
22:59:07 3. How often / what trigger to push new checkpoints [... more lines follow, see https://mrelay.p2pool.observer/e/7-aazbQKQTlEdWxw ]
23:00:06 5. Contacting mining pools after releasing the new version, to ensure they've updated and are successfully receiving the checkpoints
23:00:30 That's a 5
23:05:11 Researchers are very specialized, and there is a lot of inertia. I will try to explain: a researcher's first question is whether a research question fits into their research agenda. Lots of funding can help encourage someone to look at a research question, but funding is not the only term in the equation, let's say.
23:06:49 It takes a lot of effort to get to the research frontier of a research question. You want to do a very good, thorough job. You don't want to misunderstand the system you're trying to study or fail to read papers about the research topic.
23:08:20 So, the question needs to fit into your research agenda (now), or at least the question is in a territory that you want to explore and that you think will lead to lots of fruitful research.
23:09:24 soooo... more monero stickers in universities
23:09:43 With a fast-paced research area like blockchains, the research questions can change while you are trying to work on them. I mean, a research question can become irrelevant/obsolete.
23:10:58 Which is something that happened with some research on churning we were helping to move forward with the University of Zurich: by the time things would have been ready to start funding, FCMP was in a late stage, which will make churning an irrelevant research question.
23:12:07 I think more funding can be helpful, but not in short bursts. It is hard to maneuver when you don't know how much fuel you have.
23:14:17 With the MAGIC Monero Fund, we funded vtnerd for 3 months of work. I think it took a month and a half to get it funded when MAGIC was the fundraising agent. When it is the CCS, vtnerd is usually funded in a week or two. The slow funding of vtnerd made me worry that we couldn't fundraise for less-known researchers when we would need to.
23:16:01 I think frankly @diego:cypherstack.com at Cypher Stack does a pretty good job of handling these challenges. He can bring new and "old" researchers together. Of course, Cypher Stack does not work exclusively on Monero.
23:17:15 Programmers are another beast, and I don't have much useful to say about the challenges there with respect to having more high-skilled programmers work on the Monero protocol.
23:19:12 Cypher Stack has its own centralized decision structure and economic self-interest in acquiring and "deploying" researchers. I think that's why they can be successful. MAGIC and the CCS aren't like that. Well, MAGIC is somewhat centralized, but there is no economic self-interest for the MAGIC committee members to get contracts, etc.
23:19:37 @rucknium: I'll probably have Josh work more on Monero directly. One of my devs. Sneurlax.
23:20:36 Inb4 Cypher Stack is the Blockstream of Monero
23:21:26 @diego:cypherstack.com: blockstream as in Blockstream with a capital B?
I heard they weren't very appreciated
23:21:28 Here is my little manifesto from years ago: "The Monero Project should actively recruit technical talent from universities" https://reddit.com/r/Monero/comments/pkg3d6/the_monero_project_should_actively_recruit/
23:21:41 My efforts fell short 😢
23:24:07 @syntheticbird: Blockstream hired core devs on Bitcoin to work on Bitcoin. One of the debacles there was because they said Blockstream accumulated power to change Bitcoin
23:24:21 When many of the devs were just looking to continue working on Bitcoin but not starve.
23:24:37 And no alternatives were offered other than that they should work on it for free for the love of the game.
23:29:56 Thanks for the kind words @rucknium:monero.social:
23:33:41 https://rucknium.me/donate/
23:33:49 I only shill
23:37:33 ofrnxmr: the big point still: 5/7 without matching records individually (instead of matching the whole record set) will not work with that new size if continuous updates are expected
23:38:35 something that could be a compromise there is to match record-wise, up to the highest that all agree properly OR that existed previously in the node (which allows rolling)
23:38:52 that way higher ones only come into effect once enough records have all agreed, without "holes"
23:41:24 DataHoarder: Right, that's points 1 and 2
23:41:35 nioc: i think every major core contributor who has been with us for more than 3 years should get a 5k direct donation, where is @plowsof:matrix.org for coordination 😇
23:41:45 not only.
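(Editor's sketch, not code from the chat.) The record-wise compromise described above — accept each checkpoint height individually once enough domains agree on its hash, keep heights the node already trusted, and stop at the first unresolved height so no "holes" appear — could look roughly like this. The function name, data shapes, and the 5-of-7 threshold are illustrative assumptions, not the actual monerod implementation:

```python
from collections import Counter

def merge_checkpoints(per_domain, previously_accepted, threshold=5):
    """Record-wise matching sketch.
    per_domain: list of {height: block_hash} dicts, one per DNS domain.
    previously_accepted: checkpoints this node already trusts.
    A height is accepted when >= threshold domains agree on its hash,
    or when it was accepted before (allows rolling updates); iteration
    stops at the first disagreement so no "holes" appear."""
    accepted = dict(previously_accepted)
    for height in sorted({h for recs in per_domain for h in recs}):
        votes = Counter(r[height] for r in per_domain if height in r)
        best_hash, count = votes.most_common(1)[0]
        if count >= threshold:
            accepted[height] = best_hash   # enough domains agree
        elif height not in previously_accepted:
            break                          # unresolved: stop, leave no hole
    return accepted

# 5 domains already serve the new checkpoint at height 110, 2 are stale:
fresh = {100: "aaa", 110: "bbb"}
stale = {100: "aaa"}
print(merge_checkpoints([fresh] * 5 + [stale] * 2, {90: "zzz"}))
# → {90: 'zzz', 100: 'aaa', 110: 'bbb'}
```

With only 4 of 7 domains updated, height 110 would simply not be accepted yet; it rolls in once a fifth domain catches up, which is the point of the compromise.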
this is about the way we verify DNS records
23:42:11 (the technical part within monerod) and not the part of how we select them
23:42:32 even if we just expose one, DNS latencies on subpar setups will cause them to mismatch most of the time
23:42:47 with 3/4 that was "doable" due to the random chance that the entire record set matches
23:43:24 since you are a longtermwhale I think that you should act on your idea
23:43:32 To have matching records, at the time of the checks (5 minutes), the values of 5/7 have to match
23:43:35 it's a nice thought
23:43:47 and that's, as said, assuming all domains work fine - if two are down/unreachable, all the others have to match
23:44:16 yea, but if we're starting with moneropulse-only, they'll likely all be up
23:44:17 remember: a slight time offset (something cached for longer, tiered caches) already moves you into the previous window. DNS replication is not instant
23:44:18 nioc: that's why i am here
23:45:04 ok, to illustrate with an example: the state of 1-7 is all consistent. a new checkpoint is pushed [T=0], record TTL is 5m
23:45:26 @ofrnxmr:xmr.mx: It's more of an issue if we have to deal with independent updating of records
23:45:36 T+1 client1 checks: 1-2 have the new state, 3-7 don't
23:46:11 T+2 client2 checks: 1-5 have the new state, 6-7 don't (valid)
23:46:18 we push new records
23:46:24 [T=3]
23:47:18 T+4 client1 checks: 1-2 have the newest state, 3-5 have the "new" state, 6-7 have the older state. mismatch again
23:47:41 the difference is that client1 at T+4 is using an ISP resolver, and the ISP is using an upstream DNS with a tiered cache
23:47:50 this can repeat until by random chance they all match
23:48:01 client2 uses a recursive resolver with minor caching
23:48:05 I think that's only an issue if TTL and caching is less than or equal to the check time
23:48:14 remember.
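(Editor's sketch, not code from the chat.) The T=0…T+4 walkthrough above can be condensed into a tiny simulation of the set-wise rule being criticized: enough domains must serve byte-identical full record sets, so once a second push lands while caches are staggered, three generations are in flight at once and no 5 of 7 sets agree. Domain states and thresholds below are invented for illustration:

```python
from collections import Counter

def set_matches(observed_sets, required=5):
    """Set-wise rule sketch: at least `required` domains must serve
    an identical full record set (the 5/7 check discussed above)."""
    counts = Counter(frozenset(s.items()) for s in observed_sets)
    return counts.most_common(1)[0][1] >= required

OLD   = {100: "aaa"}                          # state before T=0
NEW   = {100: "aaa", 110: "bbb"}              # pushed at T=0
NEWER = {100: "aaa", 110: "bbb", 120: "ccc"}  # pushed at T=3

# T+1: only domains 1-2 updated; the 5 stale-but-identical sets
# still satisfy 5/7 (clients just agree on the OLD state)
t1 = [NEW] * 2 + [OLD] * 5
# T+4: three generations in flight (1-2 newest, 3-5 "new", 6-7 old);
# the largest identical group is only 3, so the check fails
t4 = [NEWER] * 2 + [NEW] * 3 + [OLD] * 2

print(set_matches(t1), set_matches(t4))  # → True False
```

This is why continuous updates break set-wise matching: each push restarts the race, and the check only passes again when caches happen to converge.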
they don't respect TTL
23:48:17 TTL is a hint
23:48:30 DNS providers and OSes and everything in between will do garbage with it
23:48:48 it's a system with eventual consistency
23:48:56 Again why we'd be recommending the pools use specific DNS providers
23:49:02 but given we check records as a set, no single set will be consistent
23:49:15 yeah, and minor to no local caching. Recursive resolver setup
23:50:07 but still - it's DNS. It can fail in fun ways, which is also why we had the 5/7 (or 3/4) before, and that's assuming domains being taken out is not even incorporated
23:50:25 I am using a recursive resolver and docker domains keep resolving to bogus IPs
23:50:35 why? because TTL
23:51:43 and the local system caches differently, of course, then requests go to my recursive resolver, which needs minor caching, and one of the different sets is outdated but gets served + refreshed in the background
23:51:45 before it was half + 1, 3/4 because there were only 4 in the codebase, but essentially the rule was "more than half"
23:51:56 before, we also didn't want to update them continuously
23:52:17 eventual consistency works for that: you set new ones and wait for the rules to deploy
23:52:21 DataHoarder: And they were only checked once per hour
23:52:27 whether it takes 5m or 8h, don't care, eventual consistency
23:52:44 Which is why i think updating them every 10 mins and checking every 5 should work
23:52:47 but - our new records will be continuously updated and active
23:53:06 Assuming local DNS caching is irrelevant
23:53:09 that is not how DNS is expected to work, setting the records != the client asking and seeing that record
23:53:34 the timer on each domain can be different - causing the issue where domains end up desynced from each other
23:54:08 Why would they be different if they are all updated at the same time, from the same account, on the same provider?
23:54:18 provider has regional DNS
23:54:26 these records get pushed async, not atomically
23:54:34 they spread slowly across their systems
23:54:47 even by the time we push a new one, some of their systems might not have updated
23:54:56 usually it can be faster
23:55:01 Well i think we should just test it
23:55:21 Get BF to add a new subdomain like testpoints.moneropulse.xx
23:55:24 yes, we need to gather more data that proves all of this
23:55:31 testpoints exists :D
23:55:33 but it's empty
23:55:40 And we can check on our various machines to see if they are updating in sync
23:55:50 basically, make a set of records that updates each minute with a known token
23:55:56 for that specific time
23:56:03 or a counter, that works too
23:56:27 note that in a desired setup they wouldn't all be on the same provider on the same account - as that's a single point of failure
23:56:36 Just need to run whatever script we plan on running, with whatever params we plan on using
23:56:55 And check them locally
23:57:14 you cannot issue them every N minutes either
23:57:24 DataHoarder: We're not looking for long-term solutions here
23:57:25 monero heights might go faster or slower than that
23:57:36 We're looking for immediate rollout to prevent the current malicious activity
23:57:39 yes, but the short-term solution should consistently work
23:57:58 DataHoarder: Yes, i said every *5 and *0 block, which is roughly every 10 mins
23:59:58 also - let's log the MDEBUG
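(Editor's sketch, not code from the chat.) One minimal way to evaluate the counter-token test proposed above: publish a TXT record whose value is an incrementing counter, have each machine note what its resolver returns at the same moment, and compare. The actual DNS query (e.g. against a testpoints subdomain) is deliberately left out; the resolver names and tokens below are invented for illustration:

```python
def propagation_report(observations):
    """observations: {resolver_name: counter_token_seen}.
    Returns (in_sync, laggards), where laggards are resolvers
    serving anything older than the newest counter observed."""
    newest = max(observations.values())
    laggards = sorted(r for r, t in observations.items() if t < newest)
    return (not laggards, laggards)

# One round of observations collected from different machines:
sample = {
    "local-recursive": 42,      # fresh recursive resolver
    "isp-tiered-cache": 41,     # one update behind (tiered cache)
    "provider-region-eu": 42,
    "provider-region-us": 42,
}
print(propagation_report(sample))  # → (False, ['isp-tiered-cache'])
```

Running this over many rounds and resolver setups would give exactly the data being asked for: how often, and by how much, real-world resolvers lag the published record.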