-
br-m
<fr33_yourself> I have done some more thinking about rolling DNS checkpoints and am now less favorable on them again. Not that my opinion matters much haha. I do admit that rolling DNS checkpoints are a good pragmatic solution to "mitigate" deep re-orgs until a more comprehensive solution to selfish mining is ready. Here are some of my concer [... too long, see
mrelay.p2pool.observer/e/sZbXq7MKSHlCTXct ]
-
DataHoarder
1) the keys are individual and they are DNSSEC-signed, that's also where tevador's 2/3rds +1 (supermajority two-thirds rule) comes in
-
DataHoarder
these subdomains would be kept separate from the main domains which can be removed/rotated as needed
-
DataHoarder
4) DNS server DDoS: as in, a major ISP-level DDoS across distributed servers (and any one of them that can be queried works).
-
DataHoarder
If the records don't agree or are obsolete, well, the miners always follow the "longest" chain
-
DataHoarder
there's no need for any extra code in place, that's the default behavior. the checkpointing just pins a specific height to a given id
-
DataHoarder
it doesn't make the blocks appear or manually move the nodes forward, the nodes are always moving forward. it sets the "rear"
-
br-m
<fr33_yourself> Perhaps the biggest issue with the rolling DNS checkpoint idea, is that it creates an environment where the likelihood is higher (than pure longest chain rule) of prolonged chain splits. Although it is fairly unlikely that this would be an issue in practice, if the balance of hashrate following checkpoints was say about 60% and 4 [... too long, see
mrelay.p2pool.observer/e/7OGDrLMKWFNXOUpC ]
-
DataHoarder
and as for > If the checkpointing idea is released and adopted, can we be sure that it won't be used longer than necessary when other solutions are ready or that it won't be used for other purposes?
-
DataHoarder
that's up to the monero community to address (plus the previously discussed write-up about what will be done with them)
-
DataHoarder
the other alternative, as things stand before any long-term hardfork, is that attackers can do worse on the chain and cause invalidated decoys, with the already discussed issues
-
br-m
<fr33_yourself> How many different servers / signing keys would there be? Also what does DNSSEC mean? And tevador's proposal means that a minimum of 2/3rds vote is needed for a checkpoint to be valid? > <DataHoarder> 1) the keys are individual and they are DNSSEC, that's also where tevador's 2/3rds +1 % (supermajority two-thirds rule)
-
DataHoarder
and remember - checkpoints are not a one-way door. If deploying them causes more issues than expected due to network effects, they can simply stop being issued, or be removed.
-
DataHoarder
The suggestion is to increase from 4 -> 7
-
DataHoarder
and 50%+1 to 2/3rd +1
-
DataHoarder
right now 3 of 4 need to agree for a checkpoint to be valid
-
DataHoarder
if more are added and the threshold is changed, it'd be 5 of 7 that need to agree
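The threshold arithmetic above (3 of 4 today, 5 of 7 under the proposed 2/3 + 1 rule) can be sketched as follows; the record format and function name are illustrative, not the actual monerod implementation:

```python
# Minimal sketch of the supermajority rule for DNS checkpoints.
# A record is modeled as a (height, block_hash) tuple; the real TXT
# record format differs.

def checkpoint_agreed(records, total_domains=7):
    """Return the (height, hash) record served by at least 2/3 + 1 of
    all checkpointing domains, or None if no supermajority exists."""
    threshold = (2 * total_domains) // 3 + 1  # 5 of 7, 3 of 4
    counts = {}
    for rec in records:
        counts[rec] = counts.get(rec, 0) + 1
    for rec, n in counts.items():
        if n >= threshold:
            return rec
    return None  # no supermajority: fall back to highest-work chain

# 5 of 7 domains agree -> checkpoint is valid
agreeing = [(3_300_000, "abc123")] * 5 + [(3_299_990, "def456")] * 2
assert checkpoint_agreed(agreeing) == (3_300_000, "abc123")

# only 4 of 7 agree -> no checkpoint, default longest-chain behavior
split = [(3_300_000, "abc123")] * 4 + [(3_299_990, "def456")] * 3
assert checkpoint_agreed(split) is None
```

This matches the fallback described earlier in the log: when records disagree or are stale, miners simply keep following the highest-work chain.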
-
DataHoarder
each checkpointing subdomain has a set of signing keys
-
DataHoarder
DNSSEC = Domain Name System Security Extensions
-
DataHoarder
it authenticates DNS records
-
DataHoarder
these records can be signed offline, then pushed to DNS secondary servers that can serve but not sign new records
-
DataHoarder
As an example I wrote a simple DNS + DNSSEC server that serves exactly the TXT records needed, signed, and allows DNS Zone transfers for secondary DNS servers to then provide
git.gammaspectra.live/P2Pool/monero-highway#cmd-dns-checkpoints
-
DataHoarder
This automatically replicates right now across Hurricane Electric's DNS secondaries and 1984.hosting DNS secondaries
-
br-m
<fr33_yourself> And how deep are the proposed checkpoints again? I still am of the opinion that the closer to 10 we are the better, as chain splits are more likely if the depth is too close to the tip. Especially if Qubic and CFB go maniac mode and want to cause chaos. I think it is already the case that CFB is hesitant to reorg deeper than 10, so that seems like the most reasonable number
-
DataHoarder
my suggestion is depth of two "from tip", to account for around 5 previous records missing, due to TTL
-
br-m
<fr33_yourself> If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that causing chain splits. I mean he could do that still with 10 block checkpoint, but it is harder and I think even he has limits to how much chaos he wants to create
-
DataHoarder
that'd get it around the 10 mark with a sliding window of 10 checkpoints
-
DataHoarder
> If checkpoints are less than 5 blocks deep from the tip, then CFB could try to reorg deeper than that causing chain splits.
-
br-m
<fr33_yourself> I don't follow that point haha
-
DataHoarder
if he can do 5 he can do 10
-
DataHoarder
or 20, like, on demand
-
DataHoarder
DNS checkpoints aren't instant
-
DataHoarder
assume they can get to good clients within a few seconds, but many users' DNS servers will lag behind by around 5-7 minutes
-
DataHoarder
you need to take this into account and set the checkpoints so that they will still be in effect when the clients receive them, in a workable way, for the purpose needed
-
DataHoarder
if you set a checkpoint at 9 and the client receives it when we are 20 deep, they hold no weight except to make more splits
-
DataHoarder
the closer to the tip, the more it eliminates the ability to make splits, but you also want to leave the tip to behave naturally via highest work
-
br-m
<fr33_yourself> Ok, I think I better understand now. You're saying that there are lags in people receiving all the checkpointed info so in practice a 2 depth checkpoint only enforces like a 10 block finality or something like that
-
DataHoarder
you want to have a margin of error, to account for the fact that clients would need 5 of 7 of the DNS domains matching some of the records
-
br-m
<fr33_yourself> DataHoarder: Wouldn't it be easier for splits to occur though? As CFB just needs to reorg one block deeper than the checkpoint. But I might misunderstand the degree of incremental difficulty for him to reorg one block deeper
-
DataHoarder
that is a reorg, which reorgs back
-
DataHoarder
the point is to make 10+ infeasible
-
DataHoarder
to prevent transaction invalidation and double spend
-
DataHoarder
you can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner even if lagging behind
-
DataHoarder
the 10-block range is within the confirmation window; beyond that is where the issues appear @fr33_yourself
-
br-m
<fr33_yourself> I agree that to the extent that most mining hash follows the checkpoints then it would make 10+ double spend and reorgs infeasible
-
DataHoarder
depth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
-
DataHoarder
but extending this to 10+ is not something that is desired, and preventing that is the point of the checkpoints
-
br-m
<fr33_yourself> Correct. The "honest" mining pool admins need to be able to receive the data quickly from the signing servers. > <DataHoarder> you can ensure the core players have a good setup, but everyone else that opts in also needs to receive good data in a timely manner even if lagging behind
-
DataHoarder
from measurements it's like 15s latency when you query things properly :)
-
DataHoarder
depending on DNS setup as well (and we'd want some variety) that can add some as well
-
DataHoarder
some DNS servers from ISPs can and will enforce 5m TTL
-
br-m
<fr33_yourself> > <DataHoarder> depth set to two, qubic tries to reorg below that, but it'd make no difference if it was a checkpoint or not. they can already do that
-
br-m
<fr33_yourself> Yes, but isn't it a possibility that Qubic could do a fairly deep reorg (via selfish mining) after checkpoints are enabled. And in this hypothetical scenario they could persist mining and building on their deep reorg chain, and possibly with some honest miners (if they don't follow the check pointed chain, but just follow stan [... too long, see
mrelay.p2pool.observer/e/hvfZrLMKZmFrOEhp ]
-
br-m
<fr33_yourself> In the current setup with selfish mining the reorg passes through, which is disruptive, but after Qubic's reorgs are "released" the network remains on a single chain.
-
DataHoarder
see that'd invalidate some transactions but then the chain would come back
-
DataHoarder
where the current situation is that they'd never come back
-
DataHoarder
and it'd allow double spending, invalidation as well
-
DataHoarder
if their point is "profit" their coins there would be useless
-
br-m
<fr33_yourself> How would the checkpointing situation allow the chain to "come back"? As long as there are two meaningfully sized groups of hashrate building on different chains then this would cause a currency split if one of two groups doesn't defect in a short period of time. > <DataHoarder> see that'd invalidate some transactions but then the chain would come back
-
DataHoarder
> As long as there are two meaningfully sized groups of hashrate building on different chains then this would cause a currency split if one of two groups doesn't defect in a shortperiod of time.
-
DataHoarder
The one that has the monetary majority
-
DataHoarder
this includes hashpower and some specific merchants
-
br-m
<fr33_yourself> > <DataHoarder> if their point is "profit" their coins there would be useless
-
br-m
<fr33_yourself> Maybe, but isn't this contingent on exchanges, merchants and majority of the ecosystem configuring their nodes such that they follow the checkpoints (not just the miners following the checkpoints, but even exchanges etc). Because otherwise if some exchanges nodes or merchants nodes are configured to simply follow longest chain [... too long, see
mrelay.p2pool.observer/e/ovDxrLMKWm1SSHEt ]
-
DataHoarder
note DNS checkpoints were originally released to explicitly address that situation of a "split" due to consensus issues coming from a bug
-
DataHoarder
nodes would opt-in as needed
-
br-m
<fr33_yourself> Yes, I agree that after the "currency split", the more valuable currency and its accompanying chain will be followed / mined > <DataHoarder> The one that has the monetary majority
-
DataHoarder
also - note that the current attacker being qubic does not mine 24/7
-
br-m
<fr33_yourself> DataHoarder: What does this point mean? Why is this relevant to our current discussion? Ohhhh I see what you mean. It would be difficult for them to persistently continue building on their "naughty reorg chain" because they only mine in marathons. So in the event of a chainsplit their chain would die unless CFB overrules their current decentralized AI B.S. and starts mining Monero full time
-
DataHoarder
this is where the 2/3rds was mentioned in the comments
-
DataHoarder
correct, fr33_yourself.
-
DataHoarder
checkpoints also prevent a generally covert attacker from implementing one-off attacks
-
br-m
<fr33_yourself> You mostly mean merchants that do meaningful transaction volume, exchanges, and proportion of hashpower distributed between the two competing split chains. Whichever has "more" of those aspects would end up winning the split. Pretty much like what happened with BTC and BCH > <DataHoarder> this includes hashpower and some specific merchants
-
DataHoarder
Except for one side they know which is the canonical chain
-
br-m
<fr33_yourself> DataHoarder: Yep, because if they can't persistently build on their reorg chain, then it just gets orphaned off. It's like a game of stamina.
-
DataHoarder
and the other would temporarily be elsewhere, then flip. it not being permanent matters, plus users on the wrong side can still opt-in to checkpoints or at least see the warnings
-
DataHoarder
it is more valuable to do short selfish attacks or literally just mine
-
DataHoarder
they are orphaning others atm but their implementation is not strictly giving higher profits
-
br-m
<fr33_yourself> DataHoarder: How would it still be valuable to do short selfish mining attacks if the checkpoint depth is 2 blocks? Wouldn't that make short reorgs impossible? Unless it is like exactly 1 or 2 blocks deep. Or you mean even with checkpointed depth of 2, Qubic could still possibly pass a 3 block depth reorg due to latency?
-
DataHoarder
depth of 2 is, if blocks 100 101 102 exist, you are checkpointing 100
-
DataHoarder
they could still make their own version of 101, 102, and 103 and publish these
-
DataHoarder
ending up with 2 or 1 orphaned blocks, depending on how many they publish
-
DataHoarder
they could also do 102, 103 and orphan 102 only
-
DataHoarder
say clients have an older one
-
DataHoarder
reorg could happen there, but it's still within conf window
-
DataHoarder
then as blocks get built it reorgs back
-
DataHoarder
all of that can happen within the 10-block interval and it's all ok
-
DataHoarder
the point being that blocks coming out of the 10-conf window should be well checkpointed by overlapping height/time intervals to account for lucky chances or network delays on DNS records
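The depth and window numbers in this exchange (depth 2 from the tip, a sliding window of around 10 checkpoints, Monero's 10-confirmation window) can be sketched numerically; the function names and exact window mechanics here are illustrative assumptions, not the deployed logic:

```python
# Sketch of the depth-2 rolling checkpoint described above.
# With blocks ...100, 101, 102 and tip = 102, depth 2 checkpoints
# height 100. A sliding window of 10 checkpoints then pins heights
# tip-2 down to tip-11, so clients whose DNS lags by a few records
# still hold a checkpoint roughly 10 blocks deep.

def checkpoint_heights(tip_height, depth=2, window=10):
    """Heights pinned by the current sliding window of checkpoints."""
    newest = tip_height - depth
    return list(range(newest, newest - window, -1))

def reorg_blocked(tip_height, reorg_depth, held_checkpoints):
    """A reorg replacing the last `reorg_depth` blocks is rejected if
    it would rewrite any height the client holds a checkpoint for."""
    first_replaced = tip_height - reorg_depth + 1
    return any(h >= first_replaced for h in held_checkpoints)

tip = 102
heights = checkpoint_heights(tip)
assert heights[0] == 100 and heights[-1] == 91

# a 2-block reorg (heights 101-102) touches no checkpoint: the tip
# still behaves naturally via highest work
assert not reorg_blocked(tip, 2, heights)
# a 5-block reorg would rewrite checkpointed heights 98-100: rejected
assert reorg_blocked(tip, 5, heights)
```

This matches the point made above: short reorgs near the tip remain possible and resolve on their own within the confirmation window, while 10+ deep reorgs become infeasible for opted-in nodes.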
-
br-m
<fr33_yourself> I see where you guys are coming from and you are making good points. It is a powerful practical tool, but I guess the philosophical angle is still a bit murky since the rolling dns checkpoints still introduce a centralized point of trust. But in practice it perhaps it isn't such a big deal because all honest miners and network [... too long, see
mrelay.p2pool.observer/e/oZmirbMKX2hSdmdT ]
-
DataHoarder
what was mentioned at the previous MRL meeting as well was to start building up documentation around the purpose itself and what activation window we are talking about for the bandaid
-
DataHoarder
> the philosophical angle is still a bit murky since the rolling dns checkpoints still introduce a centralized point of trust
-
DataHoarder
yeah, read on
monero-project/monero #10064 and that's a major point
-
br-m
<fr33_yourself> Roger, I will read it now. I appreciate your points and explanations.
-
DataHoarder
good testing has been done on testnet to find pain points and issues to bring that up, and others that need fixing on monero to get a better system for it, but these are not consensus breaking changes (they are just in the DNS checkpoints subsystem)
-
br-m
<privacyx> The most important point is rolling dns checkpointing is temporary bandaid (and can be switched off) until more permanent solution is ready, i lost track if PoP will be implemented alongside it?
-
DataHoarder
yeah. they must never become a permanent solution on their own. their purpose is for emergency situations as originally deployed, for consensus split resolution, so this is stretching usage a bit already
-
DataHoarder
lightly adapting them to fit this better as well, but when other solutions come in, they should be gone quick
-
br-m
<ofrnxmr> @fr33_yourself: At a depth of 3 or more
-
br-m
<ofrnxmr> For 10 years, the network has essentially never disagreed / reorged beyond 3 blocks. Any discrepancy that large is likely to be a dishonest attempt to gain an unfair advantage (selfish mining or otherwise mining while disconnected from the rest of the network)
-
br-m
<ofrnxmr> the checkpoints should be following the chain post-any honest reorgs
-
br-m
<privacyx> DNS checkpointing only really helps against deep reorgs (9+), but it doesn’t address selfish mining. Many miners, myself included, are noticing reduced rewards; it feels like our work is being undercut, and over time that could make small/medium-scale mining harder to sustain.
-
br-m
<privacyx> I’m curious: aside from DNS checkpointing, are there any other solutions currently being discussed to mitigate selfish mining on Monero?
-
DataHoarder
> it doesn’t address selfish mining
-
DataHoarder
if applied on those depths, it reduces their ability to do long chains and "wait" for the chain to catch up. it limits them to short lucky chains, and their work resets every couple of heights
-
DataHoarder
they can't count on monero getting unlucky and them being lucky anymore
-
DataHoarder
so the risk that their chain gets orphaned is way higher, which makes it less profitable for them to use that strategy as optimally as they have been doing (or makes it not really optimal for them at all)
-
DataHoarder
but yeah, it is still viable but with reduced orphan depths
-
DataHoarder
note they also are getting quite orphaned by monero itself
qubic-snooper.p2pool.observer/tree :)
-
DataHoarder
@rucknium:monero.social can you edit
monero-project/meta #1263#issuecomment-3255876843 to have the logs in a "code" block via ```<logs>```? I think github is trying to sanitize html tags so only users with invalid html names get shown over the bridge :D
-
br-m
<kiltsonfire:matrix.org> > <@fr33_yourself> Rucknium is also looking into Proportional reward splitting which is even more secure than "work shares" (fruit-chains) by themselves. But if my understanding is correct Proportional reward splitting with work shares is problematic for Monero in practice because of RandomX. And we should definitely keep [... too long, see
mrelay.p2pool.observer/e/5d76zLMKcV9sUWhS ]
-
br-m
<kiltsonfire:matrix.org> Just to be clear PRS is the paper we wrote about workshares.
-
br-m
<kiltsonfire:matrix.org> > <@rucknium> @mr_x_tcv:matrix.org: If Workshares are the same as Proportional Rewards Splitting (PRS), described in Aumayr et al. (2025) "Optimal Reward Allocation via Proportional Splitting"
arxiv.org/abs/2503.10185 , then I don't think it limits re-org depth at all. PRS would increase block verification time to [... too long, see
mrelay.p2pool.observer/e/youbzbMKdWxNZ1FZ ]
-
br-m
<kiltsonfire:matrix.org> This is the paper that the Quai team wrote on workshares. In terms of block verification time in real-time execution, the workshares come in throughout the interblock interval and their work can be leisurely validated and then put into a valid workshare cache. When the block comes in, it can contain on the valid shares, but th [... too long, see
mrelay.p2pool.observer/e/youbzbMKdWxNZ1FZ ]
-
br-m
<kiltsonfire:matrix.org> > <@rucknium> I also think that PRS may actually be worse for solo miners because they wont be able to place most of their work shares in a block within the reward window (most of their hashes will be out of date). I think that's how it works. Tell me if I am mistaken. The paper doesn't discuss this solo mining problem.
-
br-m
<kiltsonfire:matrix.org> Everyone is incentivized to include shares to make their block the heaviest and most likely to win in the case of a re-org. Since the reward is paid out proportional to the shares, not divided amongst shares per block, there is only positive incentive to include, thus the conclusion in PRS that this is in Nash Equilibrium. It [... too long, see
mrelay.p2pool.observer/e/ucClzbMKbTRmNTdo ]
-
tevador
It's not free at all. It would have a significant impact on sync times. Like 120 hours of CPU time to sync 1 year of blocks instead of 1 hour of CPU time.
-
tevador
^ Numbers only include PoW verification.
-
br-m
<kiltsonfire:matrix.org> > <@vtnerd> As stated on Twitter, the prs crowd nearly has me convinced that it's better than tevadors proposal, but no one has a good response to the increased sync time
-
br-m
<kiltsonfire:matrix.org> All of the resources used here are inconsequential to any real execution parameter on almost any node. For example, the validation of a RandomX hash should take say 5ms. A typical block takes 2 seconds to validate. If we did 100 shares per block, the validation time would go from 2 seconds to 2.5 seconds, an increase of 25%.
-
br-m
<kiltsonfire:matrix.org> If you wanted to be lighter on the resources you could do as few as 10 shares per block which would only increase the validation time by 50ms or 2.5%. This would still get most of the benefit. Optimality, depending on desired node and sync properties lies somewhere between 10-100 workshares per block.
-
br-m
<kiltsonfire:matrix.org> tevador: This is not true. Most of the time in sync is spent in block verification, not in RandomX hash verification. A block is about 2 seconds, whereas a hash is 5ms.
-
br-m
<kiltsonfire:matrix.org> If you had 10 workshares per block you would increase sync time 2.5%. 30 shares per block 7.5% and so on.
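The overhead percentages quoted here follow directly from kiltsonfire's stated assumptions (5 ms per RandomX hash verification, 2 s per block); tevador's reply uses different assumptions (~15 ms light-mode hashes, blocks closer to 0.5 s), which is why the two sides reach such different totals. A worked version of kiltsonfire's numbers:

```python
# Per-block verification overhead of workshares, using kiltsonfire's
# assumptions from this thread. These are discussion figures, not
# measurements from monerod.
HASH_MS = 5.0       # assumed RandomX hash verification time
BLOCK_MS = 2000.0   # assumed full block verification time
BLOCK_TIME_S = 120  # Monero's target block time

def overhead_pct(shares_per_block):
    """Extra verification time from shares, as % of block verification."""
    return 100.0 * shares_per_block * HASH_MS / BLOCK_MS

assert overhead_pct(10) == 2.5    # 10 shares -> +50 ms, +2.5%
assert overhead_pct(30) == 7.5    # 30 shares -> +150 ms, +7.5%
assert overhead_pct(100) == 25.0  # 100 shares -> +500 ms, +25%

# tevador's later point: 10 shares per 120 s block means one share
# every 12 s on average, i.e. an effective 12 s "block" time.
assert BLOCK_TIME_S / 10 == 12
```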
-
DataHoarder
in light mode, on slow devices without hw aes or hw float that can be different
-
br-m
<kiltsonfire:matrix.org> > <@rucknium> Aumayr et al. (2025) does not directly analyze how many shares would have to be put in each block to make their protocol work. They analyze in pieces: 1) Assume hashpower estimation is done with zero error, then how effective is PRS against selfish mining? 2) How much estimation error do you get at different numbers of workshares per block?
-
br-m
<kiltsonfire:matrix.org> This is part of the simulations. I probably can provide you the source code.
-
tevador
1) RandomX in light mode is more like 15 ms with a fast CPU. 2) Average block verifies much faster than 2 seconds. It's probably closer to 0.5 seconds (<0.1 for empty blocks). So block verification would go from 0.5 s to 2.5 s.
-
br-m
<kiltsonfire:matrix.org> @kiltsonfire:matrix.org: Probably not gaining anything meaningful by having lightnodes, which aren't mining, validate anything but the blockhash itself. Certainly at depth.
-
br-m
<kiltsonfire:matrix.org> tevador: depends on the number of shares. You can get most of the benefit with as little as 10 shares.
-
DataHoarder
at depth, verification could use a mix of
tevador/RandomX #265 + the main hash; then only blocks closer to the tip would need full verification, specifically for light nodes
-
br-m
<kiltsonfire:matrix.org> DataHoarder: yes.
-
tevador
10 shares means an effective block time of 12 seconds, so you don't really need shares, it could just be blocks. Ethereum has a block time of 12 seconds.
-
DataHoarder
oddly p2pool like parameters :)
-
br-m
<kiltsonfire:matrix.org> tevador: Ok so lets move to 12 second blocks. But even if you do you will still be vulnerable to a 30% attacker, probably more so given the natural uncle rate of fast block times. Shares help to increase attacker resilience regardless of block time. The real limit is how much resources you want to allocate to it, ie bandwidth [... too long, see
mrelay.p2pool.observer/e/i__azbMKeTVLWlFo ]
-
br-m
<kiltsonfire:matrix.org> DataHoarder: That is one way to think about this. You are ameliorating selfish mining, long range re-orgs and decentralizing pools on-chain.
-
br-m
<rucknium> @kiltsonfire:matrix.org: Posting the simulation code would be great. Giving it an open source license would be even better. Thanks. > <@kiltsonfire:matrix.org> This is part of the simulations. I probably can provide you the source code.
-
br-m
<rucknium> Thanks for coming here to discuss, too :)
-
tevador
RS with a block time of 12 seconds has the same selfish mining resistance as PRS with a workshare time of 12 seconds.
-
br-m
<kiltsonfire:matrix.org> > <@vtnerd> My original interpretation of the paper was that you would have an x block window to post your own shares. This wouldn't help solo mining at all.
-
br-m
<kiltsonfire:matrix.org> What PRS is fundamentally saying is that fruitchains never gets finality but is "perfectly fair". If you want to have practical finality, you have to compromise on fairness. The way to do it is have a finite time to inclusion of shares, but the number of blocks in which you allow inclusion is related to the likelihood that an [... too long, see
mrelay.p2pool.observer/e/wqb3zbMKN0J0d3dZ ]
-
br-m
<kiltsonfire:matrix.org> tevador: This is not true, because shares are independent events. They do not create state transitions nor do they independently carry weight. They only carry weight when included in a block. By having a block with say 10 workshares the likelihood of a 30% attacker being able to get lucky for a block is around 3% while they wi [... too long, see
mrelay.p2pool.observer/e/6YiNzrMKam9YaVBZ ]
-
br-m
<kiltsonfire:matrix.org> @rucknium: Simulation code for PRS:
github.com/commonprefix/proportional-reward-splitting-MDP
-
tevador
No, you misunderstood PRS. An attacker with 30% still has a 30% chance of mining a block even with 1 workshare per second. The difference is in the reward distribution.
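tevador's point, that PRS changes how the reward is distributed rather than who wins blocks, can be illustrated with a toy proportional split; the flat reward and names here are made up for illustration:

```python
# Toy illustration of the PRS reward idea referenced above: whoever
# mines the block, the reward is split in proportion to the included
# workshares. A 30% miner still wins ~30% of blocks, but withholding
# does not let them capture more than their share of the reward.

def split_reward(reward, shares):
    """shares: dict of miner -> number of included workshares."""
    total = sum(shares.values())
    return {m: reward * n / total for m, n in shares.items()}

# the attacker found this block, but holds only 3 of the 10
# included shares, so they receive 30% of the reward
payout = split_reward(10.0, {"attacker": 3, "honest": 7})
assert payout["attacker"] == 3.0
assert payout["honest"] == 7.0
```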
-
br-m
<kiltsonfire:matrix.org> tevador: You are correct, but they won't be able to withhold profitably, nor will they be able to continue to extend a heavier chain past 1 block frequently.
-
tevador
A workshare is exactly the same as a block without transactions in RS.
-
tevador
RS uses uncles in place of overlapping workshares.
-
br-m
<rucknium> @kiltsonfire:matrix.org: Thank you!
-
tevador
Therefore RS with 12s/block is the same as PRS with 12s/share, except RS will confirm transactions faster.
-
br-m
<kiltsonfire:matrix.org> tevador: Clarify the abbreviation RS for me.
-
tevador
Proposal 2
-
tevador
I proposed a block time of 60 seconds. 12 seconds might also be possible. 1s/sample is definitely too much for RandomX, 12s/sample MIGHT be acceptable, 60s/sample is definitely acceptable.
-
br-m
<kiltsonfire:matrix.org> tevador: Specifically referencing RS in #141, we explored the idea of exponential decay of blocks or shares when doing PRS. Any non-equal weighting of blocks or shares leads to making selfish mining worse, not better. I was initially in this camp. It has been almost a year since we did the work, so I do not remember why, but it definitely was the case.
-
tevador
There are basically 2 orthogonal proposals in #144: 1. Publish or Perish, which makes selfish reorgs harder to achieve (by giving higher weight to "in-time" blocks) and 2. is RS, which splits block rewards more fairly based on uncle blocks.
-
tevador
PoP is sort of like a decay.
-
tevador
Late blocks get 0, in-time blocks get 1.
-
br-m
<kiltsonfire:matrix.org> I believe it had to do with the attacker withholding their shares and including everyone else's shares, which gives them a weight advantage over the honest chain, one the honest chain cannot recover from because the dishonest actor's shares, when broadcast as part of their block, are then not as heavy for the honest chain.
-
br-m
<kiltsonfire:matrix.org> We definitely explored all variations of these ideas and concluded the following: 1) All shares need to be paid the same within the inclusion window 2) the weight of the block or share has to be proportional to the payment of the block or share. 3) Rewards have to vary per block based on the total included shares, not s [... too long, see
mrelay.p2pool.observer/e/tJ7nzrMKbGZvS2Rk ]
-
br-m
<kiltsonfire:matrix.org> Those are the 3 conclusions, which I think no matter the implementation of blocks, uncles or shares, has to be true to eliminate selfish mining and achieve fairness.
-
br-m
<kiltsonfire:matrix.org> tevador: My only issue with PoP is that I think it is gameable for a non-economically significant amount.
-
tevador
Monero would have some problems with variable block rewards. We basically need a fixed base reward per block to make the fee scaling work.
-
tevador
In which sense is PoP gameable?
-
br-m
<kiltsonfire:matrix.org> tevador: can you elaborate?
-
br-m
<kiltsonfire:matrix.org> tevador: I can set up many nodes and lie about the time that I saw a block.
-
tevador
My node doesn't care when your node claims to have seen a block.
-
tevador
In PoP, nodes don't share information when they saw a block. They share the block itself and each node keeps its own stats.
-
tevador
So the only gameable thing is when you publish a block.
-
br-m
<kiltsonfire:matrix.org> @kiltsonfire:matrix.org: In this case you would just set R0 = k * shares.
-
tevador
R0 needs to be a constant.
-
br-m
<rucknium> You could probably make it work with the fee scaling, but it would be even more complicated than the current fee scaling design...let's say by a factor of 2 😬
-
br-m
<rucknium> @kiltsonfire:matrix.org: Sorry, I didn't catch it: Which of the authors of the paper are you? Or you can maintain an anonymity set of 5 😉
-
br-m
<chowbungaman:matrix.org> Very exciting to see Dr.K in here working with Monero devs! From Monero Talk to the MRL!
-
br-m
<kiltsonfire:matrix.org> tevador: can it be constant in expectation?
-
br-m
<kiltsonfire:matrix.org> for example you could say 1 share = 1 XMR. 1 block is expected to have 10 shares which means each block = 10 XMR on average. But you could have a 5 XMR block and a 15 XMR block for any given block.
-
br-m
<kiltsonfire:matrix.org> just toy numbers of course