-
DataHoarder
> MDEBUG("Found " << (good_record == record_count.end() ? 0 : good_record->second) << "/" << dns_urls.size() << " matching records from " << num_valid_records << " valid records");
-
DataHoarder
and also > LOG_PRINT_L0("WARNING: no majority of DNS TXT records matched (only " << good_record->second << "/" << dns_urls.size() << ")");
-
DataHoarder
these are important to see how the distribution hits nodes
-
br-m
<ofrnxmr:xmr.mx> i was referring to simply using a curl loop to check them
-
br-m
<ofrnxmr:xmr.mx> but we can also deploy them on the real testnet
-
DataHoarder
fair
-
DataHoarder
curl / dig etc.
-
DataHoarder
you can run dig locally and query different sets / ISPs
-
DataHoarder
but make sure it's not being captured by anything local and sent elsewhere :)
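(A hedged example of that kind of spot check; checkpoints.moneropulse.org is, if I have the name right, one of the stock monerod checkpoint domains, and the resolver IPs are just two arbitrary public resolvers to compare against:)
    # ask two different resolvers for the checkpoint TXT set, requesting DNSSEC data
    dig +dnssec TXT checkpoints.moneropulse.org @1.1.1.1
    dig +dnssec TXT checkpoints.moneropulse.org @9.9.9.9
    # or bypass caches and walk down from the root to the authoritative servers
    dig +trace TXT checkpoints.moneropulse.org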
-
DataHoarder
Most of the issue is solved by directly using a DNS recursive resolver for monero nodes
-
DataHoarder
unbound/bind etc., we could suggest a setup or parameters for relevant operators
-
br-m
<ofrnxmr:xmr.mx> Yeah
-
DataHoarder
I can take a dig on the specific minimal config for unbound and make a post
-
DataHoarder
at least that covers the client side, servers can still take time deploying records
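(A minimal sketch of what that client-side unbound setup could look like; the trust-anchor path and cache caps are illustrative guesses, not the config being written here:)
    # /etc/unbound/unbound.conf -- local validating recursive resolver (sketch)
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
        # validate DNSSEC against the root trust anchor
        auto-trust-anchor-file: "/var/lib/unbound/root.key"
        # don't let local caching stretch short checkpoint TTLs
        cache-min-ttl: 0
        cache-max-ttl: 60
monerod could then be pointed at 127.0.0.1 via the DNS_PUBLIC environment variable (DNS_PUBLIC=tcp://127.0.0.1, syntax as I recall it from monerod's docs, worth double-checking).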
-
DataHoarder
Then my recommendation if all records in the set must be matching is to set the TTL at most half of the expected update frequency
-
DataHoarder
And if a 1m TTL is respected by resolvers, set it to that
-
DataHoarder
Otherwise 2-3m would be the maximum for this usage; 5m was the recommendation with the expectation that the recordset would be matched record-wise (not as a set)
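(On the authoritative side, that TTL recommendation is just the TTL field on the published TXT records; a sketch with placeholder names and payload:)
    ; zone snippet (sketch): 60s TTL, recordset refreshed at least every 2 minutes
    checkpoints.example.org. 60 IN TXT "<height>:<block hash>"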
-
br-m
<spirobel:kernal.eu> @longtermwhale:matrix.org: funding is not the bottleneck. The bottleneck is a lack of willingness to accept that the paradigm of proof of work has its limits in that the hardware can't be slashed and can be reused for subsequent attacks. The paradigm of proof of stake has trouble explaining what the amount of stake actually does f [... too long, see
mrelay.p2pool.observer/e/wdzq3rQKX2dXZ3hD ]
-
br-m
<spirobel:kernal.eu> I would bet money that there is zero chance that giving money to "university people" will solve this issue. University people are good at what Thomas Kuhn calls "normal science" but that is not what is needed to solve this issue.
-
br-m
<noname-user0:matrix.org> @spirobel:kernal.eu: why not half pow and half pos alternating blocks?
-
helene
I really fail to see how proof of stake helps in the current situation; and I do agree with the argument that authorities seizing DNMs could easily run an attack on Monero as it would be highly beneficial to them to break it
-
helene
(I do not know enough about finality layers and I consider them a separate thing to proof of stake, so this is not a comment on finality layers)
-
helene
For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it?
-
helene
Since p2pool shares are incentivised to be very up-to-date with the network, this could be a good compromise for those who do not wish to rely on the centralised DNS servers
-
br-m
<radanne:matrix.org> @spirobel:kernal.eu: Interesting, you didn't get any reasonable responses. I can say the same. Consider who the likely whales are, including the overall amount of Monero seized from DNMs and held in custody. Factor in 75% market acceptance, and you might arrive at a ballpark 2.1M Monero to date. That might as well be a 67% majority right from the start.
-
br-m
<radanne:matrix.org> Then look up what Chainanalysis can achieve with a PoS majority within the scope of the finality layer. For one, even with encrypted stakes Zano style, they will be able to compute an anonymous rich list by combining the known total staked amount with proportions inferred from block signing frequencies, ranking commitments by [... too long, see
mrelay.p2pool.observer/e/7OjI47QKV1ktTG1V ]
-
nioc
"PoW security can be rectified at the protocol level, rather than relying on external factors and market conditions." <<>> how?
-
br-m
<radanne:matrix.org> By doing the actual work, rather than endlessly talking about it and relying on current research papers, which are clearly insufficient. There's a vast difference between research and coding/implementation. The frontier happens at the latter step. Several PoW-only proposals have been made, all valid, all tackling the issue in their own way, to a degree.
-
midipoet
I think Monero has usually focused on evidence-led development, with careful consideration of options and potential impacts on privacy, decentralisation and security. Having said that, we have hurtled forward with MoneroPulse, so perhaps all is fair game.
-
br-m
<radanne:matrix.org> Yes, hard to do otherwise, in a decentralized, consensus-driven development environment. I don't mean to criticise that, although the fine point remains. Research tends to be cautious and rigid, and discoveries are best found at the coding step. Take the work shares, for example. QUAI has 120K GPU miners within its first year, [... too long, see
mrelay.p2pool.observer/e/26nx5rQKZ2JpSThi ]
-
br-m
<rucknium> @radanne:matrix.org: Where exactly are you getting your information?
-
br-m
<rucknium> AFAIK, QUAI is based on a very formal research paper that was developed and released before QUAI mainnet.
-
br-m
<rucknium> Not through coding.
-
br-m
<rucknium> And it's been live for about 6 months. That's not battle-tested.
-
br-m
<rucknium> Any difference in hashpower would be due to security budget: Purchasing power of the daily block reward.
-
br-m
<rucknium> All of the countermeasures to selfish mining I have seen (except ones that require trusted timestamps) at best reduce selfish mining rewards by about one half. Is that enough to make a difference against Qubic, who does not even seem to care much about profit, but cares about propaganda value?
-
br-m
<rucknium> The paper is Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting"
arxiv.org/abs/2503.10185
-
br-m
<rucknium> Except for the game theory part, they use a methodology to evaluate selfish mining that has been developed in papers for about a decade: Markov Decision Process.
-
br-m
<radanne:matrix.org> > <@rucknium> @radanne:matrix.org: Where exactly are you getting your information?
-
br-m
<radanne:matrix.org> Did you watch the MoneroTalk podcast with Dr K? That's where I got the info from. It was exceptionally well explained. I didn't dig into any research. He also followed up on X, reiterating that the ws don't need to hit the block, but can happen in memory (agreed, to my best understanding), so to me it makes the bloat & propagation concern invalid.
-
br-m
<rucknium> No, but I read the paper
-
br-m
<rucknium> I may watch the podcast
-
br-m
<radanne:matrix.org> > <@rucknium> And it's been live for about 6 months. That's not battle-tested.
-
br-m
<radanne:matrix.org> Do we rely on our own expertise or do we rely on the research of others before it could possibly hit Monero? I also have a solution in mind, implemented & deployed, which I have all the reasons to believe should be great for Monero, but since there are no research papers, only user/dev level definitions (not the kind of depth you guys seem to need), I don't even dare to bring it up.
-
br-m
<radanne:matrix.org> @rucknium: Please do, it was awesome.
-
tevador
The Proportional Splitting paper calls for something like 6000 PoW shares per block. Completely infeasible with RandomX.
-
br-m
<radanne:matrix.org> Hmm, Dr K said 20 per miner should be enough, but he likes to do 100. In memory, it doesn't hit the block. He's working on a PR too, so hopefully once it's delivered it becomes clearer.
-
br-m
<rucknium> @radanne:matrix.org: I think it's good practice for MRL to vet others' solutions instead of deploying them without fully understanding them. An "informal" solution would be OK, but it should be scrutinized with formal methods, which would take time. I am trying to understand the application of MDP to selfish mining problems. I [... too long, see
mrelay.p2pool.observer/e/x5P257QKaER4amkz ]
-
br-m
<rucknium> MDP just takes some protocol and tries to compute the best possible strategy that a selfish miner could use against it.
-
br-m
<rucknium> By "best" I mean most profitable for the selfish miner. But in theory you could substitute other objectives I think.
-
br-m
<radanne:matrix.org> @rucknium: I agree, not just good, prudent. But mind you, the guy isn't just willing to write a proposal. He's got all the code, and he's adapting the proposal/code, intending to identify the differences needed for adaptation. So all I'm saying is that the arguments I've heard against WS didn't seem to stand, and his approach should be given some serious consideration.
-
br-m
<rucknium> I think PRS (workshares) could be barely worth it, after you factor in implementation complexity, new consensus risk, initial blockchain sync time increase, etc.
-
br-m
<rucknium> ^ This is a positive statement in favor of PRS
-
br-m
<rucknium> I was excited when I first read the paper, but I also wondered about RandomX hash verification impacts. tevador, who is much more knowledgeable than me in this area, gave his opinion about that.
-
br-m
<radanne:matrix.org> Regarding the 'informal' proposal, good to hear. It may take some time, but I'll try to do a write-up, as technical as I can, and pop into the MRL group. If it sparks enough interest and proves worth considering, the dev may be willing to chat one to one and fill the gaps. It's a time chain; the model is Proof-of-Time.
-
br-m
<rucknium> His opinion was:
-
br-m
<rucknium> > After reviewing the available literature about selfish mining, I first want to disqualify all mitigation strategies that rely on more frequent hashrate sampling. This is the Fruitchain and related proposals [2,3]. The main idea is to submit proof of work "fruits" or "shares" with a much lower difficulty than what is required [... too long, see
mrelay.p2pool.observer/e/us-p6LQKaUh4Vlhx ]
-
br-m
<rucknium> Citation [3] is the Proportional Rewards splitting (work shares) paper.
-
br-m
<radanne:matrix.org> yea, seen that one, on which basis I said: if you do the workshare in memory instead of bloating the block, the concern doesn't apply. The paper doesn't cover this.
-
br-m
<rucknium> How would nodes syncing the blockchain from genesis verify that the miners' rewards don't violate the protocol rules, without having that info in the block?
-
br-m
<radanne:matrix.org> The rewards won't be excluded, they need to be recorded. Plain coinbase txs for 10k miners m/o will 'bloat' about 13GB per year. Paid every block. That's a nothingburger. If that's too much then consider a 1/2 or 1/4 lottery-ness factor; really, no need to pay everyone every block. But not what I refer to or Tevador refers to (I [... too long, see
mrelay.p2pool.observer/e/_-DJ6LQKUDM5RFRp ]
-
br-m
<rucknium> I think Karakostas was suggesting that, for a node that is actively running, the hashes do not have to be verified at the time that the block was received. They can be verified shortly after, so block propagation does not slow down. But they would still have to be verified by a new node, which would be slow. FCMP will slow dow [... too long, see
mrelay.p2pool.observer/e/l_zc6LQKYlE4Wnox ]
-
br-m
<rucknium> By default, Monero nodes do not even do full block verification on initial sync.
-
br-m
<rucknium> "Monero doesn't support actual full nodes"
monero-project/monero #8836
-
br-m
<rucknium> You can turn this off by disabling fast sync manually, but it turns a 2-day sync into a weeklong sync for a powerful machine. I've done it.
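(For reference, the switch meant here is monerod's fast block sync option, per the help text quoted further down in the log; a full-verification sync would be started roughly like this:)
    # verify every block from genesis instead of trusting the embedded known block hashes
    monerod --fast-block-sync 0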
-
br-m
<rucknium> But cuprate has sped this up. @boog900:monero.social or @syntheticbird:monero.social do you have a current estimate for full-verification sync from genesis of cuprate?
-
tevador
radanne: the PRS paper doesn't support the claim that 20-100 shares per block are sufficient.
-
br-m
<radanne:matrix.org> > <@rucknium> I think Karakostas was suggesting that, for a node that is actively running, the hashes do not have to be verified at the time that the block was received. They can be verified shortly after, so block propagation does not slow down. But they would still have to be verified by a new node, which would be slow. [... too long, see
mrelay.p2pool.observer/e/7N_x6LQKMkpMcHhF ]
-
br-m
<radanne:matrix.org> Still don't quite get it: why not just use the ws to compute the rewards, do the rewards (which are just cash txs, which I would not call 'bloat'/delay), and discard the ws? Why are hash records needed in a block for validation by a new node when the rewards are already recorded in real time?
-
tevador
The paper assumes the hashrate estimation is accurate to within 3%, which in practice requires thousands of shares per block.
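(A rough sanity check of that figure, under my own assumption that share arrivals are approximately Poisson, so the relative error of an n-share hashrate estimate scales like 1/sqrt(n):)
    1/\sqrt{n} \le 0.03 \;\Longrightarrow\; n \ge 1/0.03^{2} \approx 1100 \text{ shares per block}
Under the same assumption, 20-100 shares per block corresponds to roughly 22%-10% hashrate noise, nowhere near 3%.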
-
br-m
<monero.arbo:matrix.org> because the entire point of a blockchain is for users to be able to verify history they weren't there for
-
br-m
<radanne:matrix.org> tevador: The paper doesn't, but unless Dr K is lying then it's just a matter of a paper.
-
tevador
radanne: new nodes must be able to verify that the block reward is correct.
-
br-m
<radanne:matrix.org> single emission record maybe?
-
tevador
If block reward is based on shares, then you need to include and verify all shares to trust the blockchain.
-
br-m
<radanne:matrix.org> then the individual splits don't need the extra validation, only the sum of them?
-
tevador
The correctness of the split must also be verifiable.
-
br-m
<rucknium> I don't think he's lying. I think he might be speculating. You have to re-run the analysis with fewer work shares. I don't know how much effort that would require. I am now digging into the MDP code that many of these types of papers use. Once I understand it better, I may be able to alter the assumptions and see if a low number of work shares is viable.
-
br-m
<radanne:matrix.org> tevador: yes, but then how different is that really from standard txs that need to be validated? Are you saying to validate the rewards, you need to store all ws (eg 100 per miner) in the block?
-
br-m
<rucknium> The subproblem I am working on is to get the stats on the depth of blockchain re-orgs that occur with each of the selfish mining countermeasures that use the MDP analysis.
-
tevador
Yes, you need to store and verify all workshares.
-
br-m
<rucknium> Since that info isn't described in detail in any of the papers AFAIK. I think you can get it from the MDP output, but it's not easy.
-
br-m
<radanne:matrix.org> tevador: Well, can't argue with that. It'd be best for Dr K to clarify; he seems confident he knows a way, since he implemented it.
-
tevador
With RandomX, even 120 shares per block would be too much. We could probably do 10 shares per block, but that's pretty far from an accurate hashrate estimation.
-
br-m
<rucknium> tevador: What are your thoughts about switching to Equi-X, since it's RandomX-like, but is much faster to verify?
-
br-m
<rucknium> Doesn't reduce the storage requirement, of course.
-
tevador
EquiX was not designed to be ASIC resistant.
-
tevador
It's intended for DoS protection, where you can swap out the algorithm easily if an ASIC appears.
-
tevador
However, AFAIK, there are coins that already use it.
-
br-m
<syntheticbird> we managed to do sub-24 hours on an 8-core, 25GB machine. > <@rucknium> But cuprate has sped this up. @boog900:monero.social or @syntheticbird:monero.social do you have a current estimate for full-verification sync from genesis of cuprate?
-
br-m
<syntheticbird> iirc
-
br-m
<boog900> @syntheticbird: I did one recently; it was over 12 hours, but I don't remember exactly
-
br-m
<ofrnxmr:xmr.mx> Monero has a PR that speeds up RingCT verification by 40% iirc. I don't remember now if that was with fast block sync or not
-
br-m
<boog900> With this:
Cuprate/cuprate #535 I was able to download the whole chain on my home PC in 30 mins
-
br-m
<boog900> Just need to add code to write it to the db
-
br-m
<syntheticbird> >whole chain
-
br-m
<syntheticbird> >30 mins
-
br-m
<boog900> > Home PC
-
br-m
<ofrnxmr:xmr.mx> I don't think monerod is truly not verifying anything in fast-sync mode. It shouldn't take 10 seconds to write 20MB if it was just download -> save
-
br-m
<syntheticbird> With a 25Gb WAN, a 64-core EPYC and 3 U.2 SSDs in RAID 0, maybe fast-sync under 1 minute
-
helene
is it a home PC with a 9950X3D? :D
-
helene
monerod is very much verifying in fast-sync mode (which is the default, if i remember right?)
-
br-m
<ofrnxmr:xmr.mx> Downloading 250GB should take 30 mins if downloading at 140MiB/s
-
br-m
<boog900> It's not 250 GB fwiw
-
br-m
<ofrnxmr:xmr.mx> it's 230
-
br-m
<syntheticbird> Just for the sake of shilling i'm gonna say it's thanks to fast-syncing with blake3 instead of keccak
-
br-m
<ofrnxmr:xmr.mx> Idk about cuprate's, but monerod is 230
-
helene
do we know exactly how much it is without db overhead?
-
br-m
<boog900> It's more like 170
-
helene
that's big overhead huh
-
br-m
<rucknium> Maybe I am wrong, but IIRC, monerod's fast sync verifies PoW, but not txs.
-
br-m
<boog900> @ofrnxmr:xmr.mx: Nah that's the db not the raw block and tx blobs
-
br-m
<boog900> @rucknium: It doesn't do pow iirc
-
br-m
<ofrnxmr:xmr.mx> It must be verifying something
-
br-m
<syntheticbird> @ofrnxmr:xmr.mx: perf is your best friend
-
br-m
<syntheticbird> fire up a fast-sync and profile
-
br-m
<syntheticbird> you'll find very quickly what monerod is spending time on
-
br-m
<rucknium> It's verifying the spaghetti that's in the code 🍝
-
br-m
<17lifers:mikuplushfarm.ovh>
mrelay.p2pool.observer/m/mikuplushf…zZymtyu9MvFU72jfWZU0eObR84s1lrT.png (1757779382879.png) > <helene> is it a home PC with a 9950X3D? :D
-
br-m
<rucknium> @boog900:monero.social: You must be right
-
br-m
<rucknium> > Sync up most of the way by using embedded, "known" block hashes. Pass 1 to turn on and 0 to turn off. This is on (1) by default. Normally, for every block the full node must calculate the block hash to verify miner's proof of work. Because the RandomX PoW used in Monero is very expensive (even for verification), monerod offe [... too long, see
mrelay.p2pool.observer/e/jJTD6rQKNU1XUERr ]
-
br-m
<ofrnxmr> The docs aren't gospel :P
-
br-m
<rucknium> That's true, too
-
br-m
<ofrnxmr> Could very well be wrong or implemented incorrectly
-
br-m
<ofrnxmr> Or both
-
br-m
<boog900> @ofrnxmr:xmr.mx: It does the cheap checks. I think the interaction with the db doesn't help
-
br-m
<boog900> Cuprate uses a cache for all db reads when fast syncing
-
br-m
<boog900> So not actually touching the db for reads at all
-
br-m
<boog900> It could just be poor peer selection when downloading
-
br-m
<boog900> It could be a lot of things not just verification
-
br-m
<boog900> That's what this PR solves: it removes the need to select good peers > <@boog900> With this:
Cuprate/cuprate #535 I was able to download the whole chain on my home PC in 30 mins
-
br-m
<boog900> Slow peers will be slow but won't slow fast peers
-
br-m
<radanne:matrix.org> > <tevador> Yes, you need to store and verify all workshares.
-
br-m
<radanne:matrix.org> tevador: Still trying to wrap my wits around this. What's the snag in computing the individual hashrate in real time, having the nodes reach consensus before it's recorded, then making a record of it against a single derived hash, so you then have coinbase records that can be validated, without bloating the block? Why would the c [... too long, see
mrelay.p2pool.observer/e/w5Dj6rQKQ2oyWFBv ]
-
br-m
<rucknium> What would stop miners from paying themselves more than they deserve? You could not prove that they broke the rules because you could not verify the work shares.
-
br-m
<radanne:matrix.org> That's an interesting question, but how different is that from 'what stops miners from paying themselves' now?
-
br-m
<radanne:matrix.org> Surely that same consensus before the reward is sent, can be applied no matter if you do 1 coinbase tx or 10k of them?
-
br-m
<rucknium> A single RandomX hash of the block that proves the miner exceeded the difficulty requirement at that point in the blockchain history. It entitles them to 0.6 XMR per block, plus transaction fees of all the transactions in the block.
-
br-m
<radanne:matrix.org> @rucknium: Doesn't the network still have to agree on the winner?
-
br-m
<syntheticbird> @radanne:matrix.org: Welcome to the paradigm of alternative chains
-
br-m
<syntheticbird> indeed nodes can disagree, but in the long term they manage to rectify it
-
br-m
<radanne:matrix.org> Maybe you need a proof of time. So the nodes could agree before they act.
-
br-m
<ofrnxmr:xmr.mx> Nah, i can get many GBs of cache downloaded "waiting" to be synced > <@boog900> It could just be poor peer selection when downloading
-
br-m
<ofrnxmr:xmr.mx> Definitely not a network issue
-
br-m
<syntheticbird> * struggle *
-
br-m
<syntheticbird> >"Hold on don't say it"
-
br-m
<syntheticbird> ...
-
br-m
<syntheticbird> SKILL ISSUE?
-
br-m
<ofrnxmr:xmr.mx> Monerod's? Yes
-
DataHoarder
ALT ROOT DEPTH +8 block(s)
-
DataHoarder
qubic found 8 blocks in 3 minutes now
-
br-m
<boog900> @ofrnxmr:xmr.mx: Waiting for what? A previous block?
-
br-m
<boog900> As that's what slowed cuprate down: we would have a full cache of blocks with 1 holding us up
-
DataHoarder
so for checkpoints to effectively issue these you really need a quite low TTL, especially if only one record is published and such quick blocks are found
-
DataHoarder
they have found 10 blocks in 5m now
-
br-m
<syntheticbird> I know that we're adapting what was already in the node. But truly DNS protocol for this is a bad choice
-
DataHoarder
reorg, 8 blocks, they reorg when they had a buffer of 2
-
br-m
<radanne:matrix.org> It's sure not the best 'choice'. It's just the quickest, interim choice. As long as it helps to mitigate the issue, any reasons why not (centralisation aside)?
-
helene
a 10 block reorg, ouch
-
DataHoarder
8 blocks, could have done 9, 10 heights (they need +1 to reorg)
-
helene
syntheticbird: i asked a few things about other possibilities for checkpoints (if they're possible at all), but i don't think anyone saw them
-
helene
they're probably not possible though i presume
-
br-m
<syntheticbird> If it is possible with DNS, there is no reason any other protocol couldn't be used
-
br-m
<syntheticbird> I'm not saying we shouldn't use DNS
-
br-m
<syntheticbird> but this isn't well suited
-
helene
let me resend what i had sent earlier:
-
helene
For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it?
-
helene
Since p2pool shares are incentivised to be very up-to-date with the network, this could be a good compromise for those who do not wish to rely on the centralised DNS servers
-
DataHoarder
18:55:36 <helene> For DNS checkpointing: would it be possible to make p2pool serve a DNS checkpointing server, and to configure monerod to use it?
-
DataHoarder
no different than monero pointing to a good dns server
-
helene
well, it's just that i don't trust DNS very much and DNS is often broken by ISPs
-
helene
but if p2pool share data can be as easily faked as on DNS by e.g. a malicious ISP or registrar, then i guess it would make no difference
-
br-m
<syntheticbird> DataHoarder, as helene said, ISPs' CGNAT can completely fuck up DNS in some countries. Impossible to have up-to-date checkpointing without a fully fledged recursive DNS resolver. There is also obviously the TTL parameter that can cause issues, as some DNS servers will gladly enforce a minimum TTL beyond the authoritative choice.
-
DataHoarder
I have been mentioning this myself :)
-
DataHoarder
see back logs overnight several days in a row
-
DataHoarder
I am writing a minimal unbound configuration for this
-
helene
i miss forums, they were better for reading what i missed than irc lods :D
-
helene
logs*
-
DataHoarder
otherwise I wrote a DNS server that does this, but "just bundling" a dnssec recursive DNS resolver in p2pool is no small feat
-
helene
ah, no, i don't mean a full DNS resolver or anything; just enough for p2pool to provide a checkpoint source-of-truth to monerod
-
helene
i said DNS so it doesn't require many changes to monerod, but it doesn't have to be DNS at all
-
helene
(also, even running a full recursive resolver with e.g. unbound is sometimes not enough, some ISPs will forcefully strip out DNSSEC on any DNS packet going through, and other fun things like that)
-
DataHoarder
19:03:13 <helene> ah, no, i don't mean a full DNS resolver or anything; just enough for p2pool to provide a checkpoint source-of-truth to monerod
-
DataHoarder
for that you'd need to ... recursively resolve DNS
-
DataHoarder
then at most pin DNSSEC keys
-
DataHoarder
DoH could be the way there, then p2pool or something else expose it
-
DataHoarder
but monerod already supports that, and you can use other (e.g. VPN) DNS resolvers that way
-
br-m
<syntheticbird> I think helene's proposition is to have the p2pool chain communicate the checkpoints and monerod be able to fetch these over the p2pool p2p network
-
br-m
<syntheticbird> no DNS involved
-
br-m
<syntheticbird> you just need a p2pool peer
-
DataHoarder
so how do you know the records are legit?
-
br-m
<rucknium> AFAIK, with the default setup, nodes connected to p2pool miners will willingly re-org to the attacking chain.
-
br-m
<syntheticbird> signed i presume
-
DataHoarder
you need to verify the signature.
-
DataHoarder
that key is in DNS records
-
DataHoarder
which are signed by root domain
-
DataHoarder
which comes from DNS records
-
DataHoarder
all the way to root DNSSEC keys
-
br-m
<syntheticbird> the signature key in the dns record isn't meant to be changed every 2 min tho right? unlike checkpoint records
-
DataHoarder
if checkpoint domains don't run on their own servers they can't even pin to a single key
-
DataHoarder
say they run directly on CF or other provider
-
DataHoarder
you CAN pin to a key
-
br-m
<syntheticbird> I think we're confused
-
DataHoarder
or your own bind server etc.
-
br-m
<syntheticbird> why are you still talking about dns?
-
br-m
<syntheticbird> i must have missed something
-
DataHoarder
so basically. alternate distribution method on p2pool to serve checkpoints
-
DataHoarder
with the additional 5/7 existing setup, pinned keys
-
DataHoarder
they could broadcast the same way donation messages do now
-
helene
syntheticbird: yes, that is my proposition, but i have no clue if it's even viable in practice :)
-
DataHoarder
however, syntheticbird, they NEED to expose valid DNS records
-
DataHoarder
for monerod to use them
-
DataHoarder
that is why we are talking about that
-
DataHoarder
monerod enforces these checks. however, I don't think they enforce DNSSEC verification on their own, only that the server says "I have verified them"
-
DataHoarder
I'll dig in the code
-
helene
if DNS is a mess because it would enforce DNSSEC on the monerod side, then it's probably not worth doing over DNS for this proposition
-
helene
i guess i can reformulate it another way: would it be viable to use a p2pool node as a source-of-truth for checkpointing on monerod? (no matter how that would be communicated to monerod)
-
DataHoarder
that'd need an API in monero RPC, which was mentioned
-
DataHoarder
ofc, it'd need to be restricted RPC
-
DataHoarder
remote monero nodes wouldn't be affected
-
helene
it would definitely need to be something only the monero node admin can do, yes
-
DataHoarder
but there is nothing at the moment. it'd require new code + release of monero
-
helene
but would it be viable for anyone to do so with how p2pool works, in theory?
-
DataHoarder
it'd require new protocol changes in p2pool and new release, as done for previous changes
-
DataHoarder
but that's doable.
-
DataHoarder
IF we are going that way, alternate methods to broadcast. backdated monero txs :)
-
DataHoarder
viewkeys to verify them
-
DataHoarder
uses normal monero broadcast
-
DataHoarder
:D
-
DataHoarder
opens another can of worms ofc
-
br-m
<syntheticbird> from an implementation perspective it would be much faster to just make a new P2P command in monerod and hardcode the signature key
-
br-m
<syntheticbird> yeah i'm opening the can of worms right away sorry
-
DataHoarder
well, the tx way also has old nodes helping
-
DataHoarder
new command might be prone to new nodes being attacked/split
-
helene
my thought was that by using p2pool shares as a source for checkpointing, there would be a way to see how much miners have worked on specific blocks with good network knowledge (because p2pool is very well connected)
-
helene
hence, if many new blocks pop up all of a sudden, and they're not at all on p2pool shares, it would be a good indicator of selfish mining
-
helene
but then again, i am likely to be missing something
-
DataHoarder
I wouldn't recommend using p2pool for such proof of work shenanigans except as a distribution network
-
DataHoarder
you don't even need to post shares, just p2p messages work
-
DataHoarder
share format is fixed. that would require a hardfork of p2pool
-
helene
don't shares currently contain information on what the tip of the chain at mining time is?
-
DataHoarder
yes, but p2pool itself can also reorg
-
DataHoarder
the only "flexible" part of p2pool share format is the extra_buffer part
-
DataHoarder
which is 4x 32-bit,
-
DataHoarder
and miner addresses
-
DataHoarder
plus extra nonce (~32 bit)
-
helene
it would be relying on a weaker side-chain, indeed :/
-
DataHoarder
p2pool as of latest version is broadcasting monero blocks as well as p2pool ones across p2p
-
DataHoarder
which improves monero block broadcast latency
-
DataHoarder
some pools are running p2pool within their systems without mining to it, as they receive other blocks faster and can send their blocks faster
-
helene
doesn't p2pool require having the chain tip in order to confirm a share being proposed for the chain tip in question?
-
DataHoarder
p2pool allows different monero ids
-
DataHoarder
if it's more than 10 heights away it gets rejected by peers
-
DataHoarder
but otherwise it's all allowed as different tips can exist on different points of view across monero
-
DataHoarder
it's a local chain tip, in the end
-
helene
and it doesn't require the p2pool nodes to be aware of the blocks of those tips?
-
DataHoarder
they may not be aware if they are alt blocks
-
DataHoarder
p2pool is not centralized so you can't fixate on one specific monero height
-
DataHoarder
monero also doesn't share alt blocks around (and RPC doesn't accept them easily) so you can perfectly be on an alt block and any of them could win
-
helene
yes; i guess my question is more "can a p2pool share be accepted if it depends on a tip that p2pool can't know about?"
-
DataHoarder
same as monero itself, different parts of the network can be on different sides, even if tips are public
-
DataHoarder
they can.
-
DataHoarder
otherwise it'd split the network at regular intervals
-
helene
i see, thanks!
-
DataHoarder
whenever a normal pool just finds blocks at the same time
-
DataHoarder
or local monero node even
-
DataHoarder
you also can't even be sure all txids are there
-
DataHoarder
talking about p2pool, good p2pool chain of blocks
irc.gammaspectra.live/89167bd7f78335f7/image.png
-
helene
nice :D
-
DataHoarder
also I wonder if the get block template RPC could also return the tx key used in miner tx, hmm
-
DataHoarder
that could be used for fun proofs
-
DataHoarder
right now it's used then discarded