00:00:00 > MDEBUG("Found " << (good_record == record_count.end() ? 0 : good_record->second) << "/" << dns_urls.size() << " matching records from " << num_valid_records << " valid records"); 00:00:06 and also > LOG_PRINT_L0("WARNING: no majority of DNS TXT records matched (only " << good_record->second << "/" << dns_urls.size() << ")"); 00:00:15 these are important to see how the distribution hits nodes 00:01:14 i was referring to simply using a curl loop to check them 00:01:36 but we can also deploy them on the real testnet 00:01:45 fair 00:01:50 curl / dig etc. 00:02:04 you can run dig locally and query different sets / ISPs 00:02:17 but make sure it's not being captured by anything local and sent elsewhere :) 00:02:47 Most of the issue is solved by directly using a DNS recursive resolver for monero nodes 00:03:17 unbound/bind etc., we could suggest a setup or parameters for relevant operators 00:05:06 Also fun Pi-Hole setup :) https://docs.pi-hole.net/guides/dns/unbound/ 00:05:11 Yeah 00:05:58 fun page :) https://unbound.docs.nlnetlabs.nl/en/latest/topics/core/serve-stale.html 00:06:24 I can take a dig on the specific minimal config for unbound and make a post 00:06:43 at least that covers the client side, servers can still take time deploying records 00:11:17 Then my recommendation if all records in the set must be matching is to set the TTL at most half of the expected update frequency 00:11:33 And if 1m is respected, to that 00:13:13 Otherwise 2-3m would be the maximum for this usage - 5m was the recommendation with the expectation that recordset would be matched record wise (not as a set) 09:15:08 @longtermwhale:matrix.org: funding is not the bottleneck. The bottleneck is a lack of willingness to accept that the paradigm of proof of work has its limits in that the hardware cant be slashed and be reused for subsequent attacks. The paradigm of proof of stake has trouble explaining what the amount of stake actually does f [... 
too long, see https://mrelay.p2pool.observer/e/wdzq3rQKX2dXZ3hD ] 09:17:37 I would bet money that there is zero chance that giving money to "university people" will solve this issue. University people are good at what Thomas Kuhn calls "normal science" but that is not what is needed to solve this issue. 09:23:08 @spirobel:kernal.eu: why not half pow and half pos alternating blocks? 09:28:10 I really fail to see how proof of stake helps in the current situation; and I do agree with the argument that authorities seizing DNMs could easily run an attack on Monero as it would be highly beneficial to them to break it 09:28:55 (I do not know enough about finality layers and I consider them a separate thing to proof of stake, so this is not a comment on finality layers) 09:31:08 For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it? 09:33:03 Since p2pool shares are incentivised to be very up-to-date with the network, this could be a good compromise for those who do not wish to rely on the centralised DNS servers 12:00:39 @spirobel:kernal.eu: Interesting, you didn't get any reasonable responses. I can say the same. Consider who the likely whales are, including the overall amount of Monero seized from DNMs and held in custody. Factor in 75% market acceptance, and you might arrive at a ballpark 2.1M Monero to date. That might as well be a 67% majority right from the start. 12:00:39 Then look up what Chainanalysis can achieve with a PoS majority within the scope of the finality layer. For one, even with encrypted stakes Zano style, they will be able to compute an anonymous rich list by combining the known total staked amount with proportions inferred from block signing frequencies, ranking commitments by [... too long, see https://mrelay.p2pool.observer/e/7OjI47QKV1ktTG1V ] 12:05:27 "PoW security can be rectified at the protocol level, rather than relying on external factors and market conditions." 
<<>> how? 12:15:18 By doing the actual work, rather than endlessly talking about it and relying on current research papers, which are clearly insufficient. There's a vast difference between research and coding/implementation. The frontier happens at the latter step. Several PoW-only proposals have been made, all valid, all tackling the issue in their own way, to a degree. 13:40:37 I think Monero has usually focused on evidence led development, with careful consideration of options and potential impacts on privacy, decentralisation and security. Having said that, we have hurtled forward with Moneropulse, so perhaps all is game. 13:56:34 Yes, hard to do otherwise, in a decentralized, consensus-driven development environment. I don't mean to criticise that, although the fine point remains. Research tends to be cautious and rigid, and discoveries are best found at the coding step. Take the work shares, for example. QUAI has 120K GPU miners within its first year, [... too long, see https://mrelay.p2pool.observer/e/26nx5rQKZ2JpSThi ] 14:12:18 @radanne:matrix.org: Where exactly are you getting your information? 14:12:50 AFAIK, QUAI is based on a very formal research paper that was developed and released before QUAI mainnet. 14:13:04 Not through coding. 14:13:19 And it's been live for about 6 months. That's not battle-tested. 14:13:55 Any difference in hashpower would be due to security budget: Purchasing power of the daily block reward. 14:15:33 All of the countermeasures to selfish mining I have seen (except ones that require trusted timestamps) at best reduce selfish mining rewards by about one half. Is that enough to make a difference against Qubic, who does not even seem to care much about profit, but cares about propaganda value? 14:16:50 The paper is Aumayr et al. 
(2025) "Optimal Reward Allocation via Proportional Splitting" https://arxiv.org/abs/2503.10185 14:17:54 Except for the game theory part, they use a methodology to evaluate selfish mining that has been developed in papers for about a decade: Markov Decision Process. 14:21:56 > <@rucknium> @radanne:matrix.org: Where exactly are you getting your information? 14:21:56 Did you watch the MoneroTalk podcast with Dr K? That's where I got the info from. It was exceptionally well explained. I didn't dig into any research. He also followed up on X, reiterating the ws don't need to hit the block, but can happen in memory (agreed to my best understandng) so to me, it makes the bloat & propagation concern invalid. 14:22:47 No, but I read the paper 14:23:07 I may watch the podcast 14:24:53 > <@rucknium> And it's been live for about 6 months. That's not battle-tested. 14:24:53 Do we rely on our own expertize or do we rely on research of others before it could possibly hit Monero? I also have solution in mind, implemented & deployed, which I have all the reasons to believe it should be great for Monero, but since there are no research papers, only user/dev level definitions (not that kind of depth you guys seem to need) I don't even dare to bring it up. 14:25:22 @rucknium: Please do, it was awesome. 14:26:30 The Proportional Splitting paper calls for something like 6000 PoW shares per block. Completely infeasible with RandomX. 14:28:49 Hmm Dr K said 20 per miner should be enough, but he likes to do 100. In memory, it doesn't hit the block. He works on a PR too, so hopefully once delivered it becomes clearer. 14:32:50 @radanne:matrix.org: I think it's good practice for MRL to vet others' solutions instead of deploying them without fully understanding them. An "informal" solution would be OK, but it should be scrutinized with formal methods, which would take time. I am trying to understand the application of MDP to selfish mining problems. I [... 
too long, see https://mrelay.p2pool.observer/e/x5P257QKaER4amkz ] 14:33:52 MDP just takes some protocol and tries to compute the best possible strategy that a selfish miner could use against it. 14:34:28 By "best" I mean most profitable for the selfish miner. But in theory you could substitute other objectives I think. 14:40:15 @rucknium: I agree, not just good, prudent. But mind you, the guy isn't just willing to do a proposal. He's got all the code, and he's adapting the proposal/code, intending to identify the differences for adaptation. So all I'm saying is that the arguments I've heard against WS didn't seem to stand, and his ways should be given some serious consideration. 14:41:58 I think PRS (workshares) could be barely worth it, after you factor in implementation complexity, new consensus risk, initial blockchain sync time increase, etc. 14:42:12 ^ This is a positive statement in favor of PRS 14:43:19 I was excited when I first read the paper, but I also wondered about RandomX hash verification impacts. tevador, who is much more knowledgeable than me in this area, gave his opinion about that. 14:45:07 Regarding the 'informal' proposal, good to hear. May take some time, but I'll try to do a write up, as technical as I can, and pop in the MRL group. If it sparks enough interest and will worth consideration, the dev may be willing to chat one to one and fill the gaps. It's a time chain, the model is Proof-of-Time. 14:46:47 His opinion was: 14:46:53 > After reviewing the available literature about selfish mining, I first want to disqualify all mitigation strategies that rely on more frequent hashrate sampling. This is the Fruitchain and related proposals [2,3]. The main idea is to submit proof of work "fruits" or "shares" with a much lower difficulty than what is required [... 
too long, see https://mrelay.p2pool.observer/e/us-p6LQKaUh4Vlhx ] 14:47:00 https://github.com/monero-project/research-lab/issues/144 14:47:51 Citation [3] is the Proportional Rewards splitting (work shares) paper. 14:48:09 yea seen that one, on which basis I said, if you do the workshare in memory instead bloating the block, the concern doesn't apply. The paper doesnt cover this. 14:49:40 How would nodes syncing the blockchain from genesis verify that the miners' rewards don't violate the protocol rules, without having that info in the block? 14:55:40 The rewards won't be excluded, they need to be recorded. Plain coinbase txs for 10k miners m/o will 'bloat' about 13GB per year. Paid every block. That's nothingburger. If that's too much then consider a 1/2 or 1/4 loterry-ness factor, really, no need to pay eveyone every block. But not what I refer to or Tevador refers to (I [... too long, see https://mrelay.p2pool.observer/e/_-DJ6LQKUDM5RFRp ] 15:00:54 I think Karakostas was suggesting that, for a node that is actively running, the hashes do not have to be verified at the time that the block was received. They can be verified shortly after, so block propagation does not slow down. But they would still have to be verified by a new node, which would be slow. FCMP will slow dow [... too long, see https://mrelay.p2pool.observer/e/l_zc6LQKYlE4Wnox ] 15:01:18 By default, Monero nodes do not even do full block verification on initial sync. 15:02:26 "Monero doesn't support actual full nodes" https://github.com/monero-project/monero/issues/8836 15:03:32 You can turn this off by disabling fast sync manually, but it turns a 2-day sync into a weeklong sync for a powerful machine. I've done it. 15:04:27 But cuprate has sped this up. @boog900:monero.social or @syntheticbird:monero.social do you have a current estimate for full-verification sync from genesis of cuprate? 15:05:39 radanne: the PRS paper doesn't support the claim that 20-100 shares per block are sufficient. 
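The share-count dispute above can be sanity-checked with a back-of-envelope model: if shares are independent low-difficulty PoW samples, the count per block behaves roughly like a Poisson variable, so the relative error of a hashrate estimate scales as 1/sqrt(k). The 1/sqrt(k) model is an editorial assumption for illustration, not taken from the PRS paper:

```python
import math

def rel_stderr(shares_per_block: int) -> float:
    """Approximate relative standard error of a hashrate estimate
    built from k independent PoW share samples (Poisson model)."""
    return 1.0 / math.sqrt(shares_per_block)

for k in (20, 100, 1000, 6000):
    print(f"{k:>5} shares -> ~{rel_stderr(k):.1%} relative error")
```

Under this crude model, roughly 1100 shares are needed for 3% accuracy, which is at least consistent with the "thousands of shares" and "6000 shares per block" figures discussed in this log, and with 20-100 shares giving only 10-22% accuracy.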
15:06:35 > <@rucknium> I think Karakostas was suggesting that, for a node that is actively running, the hashes do not have to be verified at the time that the block was received. They can be verified shortly after, so block propagation does not slow down. But they would still have to be verified by a new node, which would be slow. [... too long, see https://mrelay.p2pool.observer/e/7N_x6LQKMkpMcHhF ]
15:06:35 Still don't quite get it: why not just use the work shares to compute the rewards, do the rewards (which are just cash txs, which I would not call 'bloat'/delay), and discard the work shares? Why are hash records needed in a block for validation by a new node when the rewards are already recorded in real time?
15:07:22 The paper assumes the hashrate estimation is accurate to within 3%, which in practice requires thousands of shares per block.
15:07:23 because the entire point of a blockchain is for users to be able to verify history they weren't there for
15:07:44 tevador: The paper doesn't, but unless Dr K is lying, then it's just a matter of a paper.
15:07:47 radanne: new nodes must be able to verify that the block reward is correct.
15:08:41 single emission record maybe?
15:09:20 If the block reward is based on shares, then you need to include and verify all shares to trust the blockchain.
15:09:26 then the splits don't need the excess of validation, only the sum of them?
15:09:52 The correctness of the split must also be verifiable.
15:10:35 I don't think he's lying. I think he might be speculating. You have to re-run the analysis with fewer work shares. I don't know how much effort that would require. I am now digging into the MDP code that many of these types of papers use. Once I understand it better, I may be able to alter the assumptions and see if a low number of work shares is viable.
15:11:02 tevador: yes, but then how different is that really from standard txs that need to be validated? Are you saying that to validate the rewards, you need to store all work shares (e.g. 100 per miner) in the block?
15:11:47 The subproblem I am working on is to get the stats of the depth of blockchain re-orgs that occur with each of the selfish mining countermeasures that use the MDP analysis.
15:12:22 Yes, you need to store and verify all work shares.
15:12:49 Since that info isn't described in detail in any of the papers, AFAIK. I think you can get it from the MDP output, but it's not easy.
15:14:08 tevador: Well, can't argue with that. It'd be best for Dr K to clarify; he seems confident he knows a way, since he implemented it.
15:15:18 With RandomX, even 120 shares per block would be too much. We could probably do 10 shares per block, but that's pretty far from an accurate hashrate estimation.
15:16:44 tevador: What are your thoughts about switching to Equi-X, since it's RandomX-like, but is much faster to verify?
15:17:25 Doesn't reduce the storage requirement, of course.
15:17:33 EquiX was not designed to be ASIC resistant.
15:18:19 It's intended for DoS protection, where you can swap out the algorithm easily if an ASIC appears.
15:18:46 However, AFAIK, there are coins that already use it.
15:26:12 > <@rucknium> But cuprate has sped this up. @boog900:monero.social or @syntheticbird:monero.social do you have a current estimate for full-verification sync from genesis of cuprate?
15:26:12 we managed to do sub 24 hours on an 8-core, 25GB machine.
15:26:17 iirc
15:45:38 @syntheticbird: I did one recently, it was over 12 hours, but I don't remember exactly
15:47:20 Monero has a PR that speeds up RingCT verification by 40% iirc. I don't remember now if that was with fast block sync or not
15:50:40 With this: https://github.com/Cuprate/cuprate/pull/535 I was able to download the whole chain on my home PC in 30 mins
15:50:57 Just need to add code to write it to the db
15:54:03 > whole chain
15:54:06 > 30 mins
15:55:57 > Home PC
15:57:22 I don't think monerod is truly not verifying anything in fast-sync mode. It shouldn't take 10 seconds to write 20MB if it was just download -> save
15:57:25 With a 25Gb WAN, a 64-core EPYC and 3 U.2 SSDs in RAID 0, maybe fast-sync under 1 minute
15:57:36 is it a home PC with a 9950X3D? :D
15:58:43 monerod is very much verifying in fast-sync mode (which is the default, if I remember right?)
15:58:58 Downloading 250GB should take 30 mins if downloading at 140MiB/s
15:59:23 It's not 250GB fwiw
15:59:30 it's 230
15:59:41 Just for the sake of shilling, I'm gonna say it's thanks to fast-syncing with blake3 instead of keccak
15:59:43 Idk about cuprate's, but monerod is 230
16:00:13 do we know exactly how much it is without db overhead?
16:00:23 It's more like 170
16:00:35 that's big overhead huh
16:00:39 Maybe I am wrong, but IIRC, monerod's fast sync verifies PoW, but not txs.
16:00:41 @ofrnxmr:xmr.mx: Nah, that's the db, not the raw block and tx blobs
16:01:06 @rucknium: It doesn't do PoW iirc
16:01:54 It must be verifying something
16:02:14 @ofrnxmr:xmr.mx: perf is your best friend
16:02:24 fire up a fast-sync and profile
16:02:34 you'll find very quickly what monerod is spending time on
16:02:58 It's verifying the spaghetti that's in the code 🍝
16:03:07 <17lifers:mikuplushfarm.ovh> https://mrelay.p2pool.observer/m/mikuplushfarm.ovh/4zZymtyu9MvFU72jfWZU0eObR84s1lrT.png (1757779382879.png) > is it a home PC with a 9950X3D? :D
16:03:46 @boog900:monero.social: You must be right
16:03:46 > Sync up most of the way by using embedded, "known" block hashes. Pass 1 to turn on and 0 to turn off. This is on (1) by default. Normally, for every block, the full node must calculate the block hash to verify the miner's proof of work. Because the RandomX PoW used in Monero is very expensive (even for verification), monerod offe [... too long, see https://mrelay.p2pool.observer/e/jJTD6rQKNU1XUERr ]
16:03:47 https://docs.getmonero.org/interacting/monerod-reference/#performance
16:04:17 The docs aren't gospel :P
16:04:28 That's true, too
16:04:35 Could very well be wrong or implemented incorrectly
16:04:41 Or both
16:05:06 @ofrnxmr:xmr.mx: It does the cheap checks, I think. The interaction with the db doesn't help
16:05:30 Cuprate uses a cache for all db reads when fast syncing
16:05:52 So not actually touching the db for reads at all
16:07:08 It could just be poor peer selection when downloading
16:07:21 It could be a lot of things, not just verification
16:07:48 > <@boog900> With this: https://github.com/Cuprate/cuprate/pull/535 I was able to download the whole chain on my home PC in 30 mins
16:07:48 That's what this PR solves, it removes the need to select good peers
16:08:14 Slow peers will be slow but won't slow fast peers
16:12:30 > Yes, you need to store and verify all workshares.
16:12:30 tevador: Still trying to wrap my wits around this. What's the snag computing the individual hashrate in real time, having the nodes reach consensus before it's recorded, then making a record of it against a single derived hash? Then you have coinbase records that can be validated, without bloating the block. Why would the c [... too long, see https://mrelay.p2pool.observer/e/w5Dj6rQKQ2oyWFBv ]
16:16:00 What would stop miners from paying themselves more than they deserve? You could not prove that they broke the rules, because you could not verify the work shares.
16:17:12 That's an interesting question, but how different is that from 'what stops miners from paying themselves' now?
16:18:56 Surely that same consensus before the reward is sent can be applied no matter if you do 1 coinbase tx or 10k of them?
16:19:03 A single RandomX hash of the block that proves the miner exceeded the difficulty requirement at that point in the blockchain history. It entitles them to 0.6 XMR per block, plus the transaction fees of all the transactions in the block.
16:25:08 @rucknium: Doesn't the network still have to agree on the winner?
16:26:40 @radanne:matrix.org: Welcome to the paradigm of alternative chains
16:27:02 indeed, nodes can disagree, but in the long term they manage to rectify it
16:29:15 Maybe you need a proof of time. So the nodes could agree before they act.
16:34:37 > <@boog900> It could just be poor peer selection when downloading
16:34:37 Nah, I can get many GBs of cache downloaded, "waiting" to be synced
16:34:52 Definitely not a network issue
16:35:18 * struggle *
16:35:34 > "Hold on, don't say it"
16:35:35 ...
16:35:36 SKILL ISSUE?
16:36:51 Monerod's? Yes
16:36:56 ALT ROOT DEPTH +8 block(s)
16:37:02 qubic found 8 blocks in 3 minutes now
16:37:34 @ofrnxmr:xmr.mx: Waiting for what? A previous block?
16:38:02 That's what slowed cuprate down: we would have a full cache of blocks with 1 holding us up
16:38:35 so for checkpoints to effectively counter these, you really need a quite low TTL, especially if only one record is published and such quick blocks are found
16:39:15 they have found 10 blocks in 5m now
16:40:03 I know that we're adapting what was already in the node. But truly, the DNS protocol is a bad choice for this
16:45:04 reorg, 8 blocks, they reorged when they had a buffer of 2
16:45:14 It's sure not the best 'choice'. It's just the quickest, interim choice. As long as it helps to mitigate the issue, any reasons why not (centralisation aside)?
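The majority check quoted at the very top of this log (the "no majority of DNS TXT records matched" warning) is what makes low TTLs matter here: monerod compares the record sets returned by several DNS URLs and only acts on a set that a strict majority agree on. A rough sketch of that logic (the record format and the strict-majority threshold are assumptions based on the quoted log lines, not a transcription of monerod's code):

```python
from collections import Counter

def majority_records(results):
    """Each element of `results` is the record set returned by one
    DNS URL. Accept a set only if a strict majority of URLs agree,
    mirroring the 'good_record->second / dns_urls.size()' check
    quoted at the top of the log."""
    counts = Counter(tuple(sorted(r)) for r in results)
    best, n = counts.most_common(1)[0]
    return best if n * 2 > len(results) else None

good = ("3000000:deadbeef",)
assert majority_records([good, good, ("tampered",)]) == good
# No majority -> the warning path, and the records are ignored:
assert majority_records([good, ("x",), ("y",)]) is None
```

With a high TTL, resolvers can keep serving a stale record set long enough that the "majority" seen by nodes lags several fast selfish-mined blocks behind, which is the concern raised above.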
16:48:11 a 10-block reorg, ouch
16:48:32 8 blocks, could have done 9, 10 heights (they need +1 to reorg)
16:49:18 syntheticbird: i asked a few things about other possibilities for checkpoints (if they're possible at all), but i don't think anyone saw them
16:49:31 they're probably not possible though, i presume
16:52:17 If it is possible with DNS, there is no reason any other protocol can't be used
16:53:07 I'm not saying we shouldn't use DNS
16:53:36 but it isn't well adapted to this
16:55:34 let me resend what i had sent earlier:
16:55:36 For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it?
16:55:36 Since p2pool shares are incentivised to be very up-to-date with the network, this could be a good compromise for those who do not wish to rely on the centralised DNS servers
16:58:18 > 18:55:36 For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it?
16:58:29 no different than monero pointing to a good dns server
16:59:01 well, it's just that i don't trust DNS very much, and DNS is often broken by ISPs
17:01:08 but if p2pool share data can be as easily faked as on DNS by e.g. a malicious ISP or registrar, then i guess it would make no difference
17:01:17 DataHoarder, as helene said, ISPs' CGNAT can completely fuck up DNS in some countries. Impossible to have up-to-date checkpointing without a fully fledged recursive dns resolver. There is also, obviously, the TTL parameter that can cause issues, as some dns servers will gladly enforce some minimum latency beyond the authoritative choice.
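The "minimal unbound configuration" mentioned in this log might look something like the fragment below. This is an editorial sketch using real unbound.conf options, but the specific values are suggestions only, not the vetted config the participant said they were writing:

```
# /etc/unbound/unbound.conf.d/monero-checkpoints.conf -- sketch only
server:
    # Validate DNSSEC against the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # Honor the authoritative TTL; do not inflate short checkpoint TTLs.
    cache-min-ttl: 0
    # Optionally keep serving expired answers briefly if the
    # authoritative server is unreachable (see the serve-stale page
    # linked earlier in this log).
    serve-expired: yes
    serve-expired-ttl: 60
```

Running this locally sidesteps ISP resolvers that enforce minimum TTLs or strip DNSSEC, which is the client-side half of the problem discussed above.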
17:01:32 I have been mentioning this myself :)
17:01:40 see back logs overnight, several days in a row
17:02:18 I am writing a minimal unbound configuration for this
17:02:40 i miss forums, they were better for reading what i missed than irc logs :D
17:02:42 otherwise, I wrote a DNS server that does this, but "just bundling" a dnssec recursive DNS resolver in p2pool is no small feat
17:03:13 ah, no, i don't mean a full DNS resolver or anything; just enough for p2pool to provide a checkpoint source-of-truth to monerod
17:03:32 i said DNS so it doesn't require many changes to monerod, but it doesn't have to be DNS at all
17:04:49 (also, even running a full recursive resolver with e.g. unbound is sometimes not enough; some ISPs will forcefully strip out DNSSEC on any DNS packet going through, and other fun things like that)
17:05:37 > 19:03:13 ah, no, i don't mean a full DNS resolver or anything; just enough for p2pool to provide a checkpoint source-of-truth to monerod
17:05:50 for that you'd need to ... recursively resolve DNS
17:05:55 then at most pin DNSSEC keys
17:06:38 DoH could be the way there, then p2pool or something else could expose it
17:06:50 but monerod already supports that, and you can use other VPN dns resolvers that way
17:06:59 I think helene's proposition is to have the p2pool chain communicate the checkpoints and make monerod able to fetch these over the p2pool p2p network
17:07:09 no DNS involved
17:07:19 you just need a p2pool peer
17:08:05 so how do you know the records are legit?
17:08:11 AFAIK, with the default setup, nodes connected to p2pool miners will willingly re-org to the attacking chain.
17:08:13 signed, i presume
17:08:16 you need to verify the signature.
17:08:21 that key is in DNS records
17:08:26 which are signed by the root domain
17:08:30 which comes from DNS records
17:08:38 all the way to the root DNSSEC keys
17:08:59 the signature key in the dns record isn't meant to be changed every 2 min though, right? unlike checkpoint records
17:09:03 if checkpoint domains run on servers that are not their own, they can't even pin to a single key
17:09:17 say they run directly on CF or another provider
17:09:53 If you run stuff like https://git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
17:09:58 you CAN pin to a key
17:10:02 I think we're confused
17:10:05 or your own bind server etc.
17:11:17 why are you still talking about dns?
17:11:36 i must have missed something
17:14:29 so basically: an alternate distribution method on p2pool to serve checkpoints
17:14:41 with the additional 5/7 existing setup, pinned keys
17:14:52 they could broadcast the same way donation messages do now
17:15:01 syntheticbird: yes, that is my proposition, but i have no clue if it's even viable in practice :)
17:15:25 however, syntheticbird, they NEED to expose valid DNS records
17:15:29 for monerod to use them
17:15:34 that is why we are talking about that
17:16:04 monerod enforces these checks. however, I don't think it enforces DNSSEC verification on its own, only that the server says "I have verified them"
17:16:09 I'll dig in the code
17:16:44 if DNS is a mess because it would enforce DNSSEC on the monerod side, then it's probably not worth doing over DNS for this proposition
17:18:13 i guess i can reformulate it another way: would it be viable to use a p2pool node as a source-of-truth for checkpointing on monerod? (no matter how that would be communicated to monerod)
17:18:50 that'd need an api with the monero RPC that was mentioned
17:19:03 ofc, it'd need to be restricted RPC
17:19:11 remote monero nodes wouldn't be affected
17:19:20 it would definitely need to be something only the monero node admin can do, yes
17:19:24 pi-hole can also set this up https://docs.pi-hole.net/guides/dns/unbound/ :D
17:19:43 but there is nothing at the moment. it'd require new code + a release of monero
17:19:56 but would it be viable for anyone to do so with how p2pool works, in theory?
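For context on the records being discussed: as I understand the MoneroPulse scheme, each checkpoint TXT record carries a height and a block hash in a "height:hash" layout. Treat that format as an assumption (monerod's checkpoints code is authoritative); the per-record validation would then be roughly:

```python
def parse_checkpoint(txt):
    """Parse a 'height:hash' checkpoint TXT record.

    The 'height:hash' layout is an assumption based on the MoneroPulse
    discussion above; check monerod's checkpoint-parsing code for the
    authoritative format."""
    height_s, block_hash = txt.split(":", 1)
    height = int(height_s)
    if len(block_hash) != 64 or any(c not in "0123456789abcdef" for c in block_hash):
        raise ValueError("expected a 64-char lowercase hex block hash")
    return height, block_hash

h, bh = parse_checkpoint("3000000:" + "ab" * 32)
assert h == 3000000 and len(bh) == 64
```

The point made above is that these records change every couple of minutes, while the DNSSEC key signing them is meant to be long-lived, which is what makes key pinning (as in monero-highway) possible even when the records themselves churn.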
17:20:22 it'd require new protocol changes in p2pool and a new release, as done for previous changes
17:20:29 but that's doable.
17:20:40 IF we are going that way, alternate methods to broadcast: backdated monero txs :)
17:20:48 viewkeys to verify them
17:21:02 uses the normal monero broadcast
17:21:06 :D
17:21:16 opens another can of worms ofc
17:24:54 from an implementation perspective, it would be much faster to just make a new P2P command in monerod and hardcode the signature key
17:25:04 yeah, i'm opening the can of worms right away, sorry
17:26:04 well, the tx way has old nodes also helping
17:26:20 a new command might be prone to new nodes being attacked/split
17:34:44 my thought was that by using p2pool shares as a source for checkpointing, there would be a way to see how much miners have worked on specific blocks with good network knowledge (because p2pool is very well connected)
17:35:06 hence, if many new blocks pop up all of a sudden, and they're not at all on p2pool shares, it would be a good indicator of selfish mining
17:35:29 but then again, i am likely missing something
17:37:08 I wouldn't recommend using p2pool for such proof of work shenanigans, except as a distribution network
17:37:33 you don't even need to post shares, just p2p messages work
17:37:48 the share format is fixed. that would require a hardfork of p2pool
17:38:11 don't shares currently contain information on what the tip of the chain at mining time is?
17:38:27 yes, but p2pool itself can also reorg
17:38:44 https://p2pool.observer/api/block_by_id/532237975e8da177f4793441a7b6168d55f1a3feafd5eb55aeb2cb82b72cf3dc/full
17:38:57 the only "flexible" part of the p2pool share format is the extra_buffer part
17:39:11 which is 4x 32-bit,
17:39:27 and miner addresses
17:39:36 plus the extra nonce (~32 bit)
17:40:13 it would be relying on a weaker side-chain, indeed :/
17:40:41 p2pool, as of the latest version, is broadcasting monero blocks as well as p2pool ones across p2p
17:40:52 which improves monero block broadcast latency
17:41:24 some pools are running p2pool within their systems without mining to it, as they receive other blocks faster and can send their blocks faster
17:43:17 doesn't p2pool require having the chain tip in order to confirm a share being proposed for the chain tip in question?
17:43:32 p2pool allows different monero ids
17:43:46 if it's more than 10 heights away, it gets rejected by peers
17:44:08 but otherwise it's all allowed, as different tips can exist on different points of view across monero
17:44:26 it's a local chain tip, in the end
17:44:37 and it doesn't require the p2pool nodes to be aware of the blocks of those tips?
17:45:24 they may not be aware if they are alt blocks
17:45:46 p2pool is not centralized, so you can't fixate on one specific monero height
17:46:18 monero also doesn't share alt blocks around (and the RPC doesn't accept them easily), so you can perfectly be on an alt block and any of them could win
17:46:39 yes; i guess my question is more "can a p2pool share be accepted if it depends on a tip that p2pool can't know about?"
17:46:44 same as monero itself, different parts of the network can be on different sides, even if tips are public
17:46:53 they can.
17:47:04 otherwise it'd split the network at regular intervals
17:47:06 i see, thanks!
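The selfish-mining indicator proposed above (a burst of monero blocks that never appeared as tips in p2pool shares) could be sketched as below. All names and the threshold are hypothetical, and, as the replies point out, p2pool can itself reorg and share tips are only local views, so this would be an indicator, never a proof:

```python
def suspicious_burst(new_block_ids, share_tip_ids, threshold=4):
    """Flag a run of blocks that recent p2pool shares never mined on.

    Hypothetical heuristic from the discussion above: p2pool is well
    connected, so a long run of blocks absent from recent share tips
    hints at a privately mined (selfish) chain being released at once."""
    unseen = [b for b in new_block_ids if b not in share_tip_ids]
    return len(unseen) >= threshold

recent_share_tips = {"b1", "b2", "b3"}
assert not suspicious_burst(["b2", "b3"], recent_share_tips)
assert suspicious_burst(["x1", "x2", "x3", "x4", "x5"], recent_share_tips)
```

Note this only consumes p2pool data; per the recommendation above, p2pool would serve at most as a distribution network, not as a proof-of-work oracle.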
17:47:19 whenever a normal pool just finds blocks at the same time
17:47:25 or a local monero node, even
17:48:12 you also can't even be sure all txids are there
17:50:55 talking about p2pool, a good p2pool chain of blocks https://irc.gammaspectra.live/89167bd7f78335f7/image.png
17:53:18 nice :D
17:57:06 also, I wonder if the get block template RPC could also return the tx key used in the miner tx, hmm
17:57:17 that could be used for fun proofs
17:57:26 right now it's used, then discarded