-
br-m
<fr33_yourself> I currently believe that tevador's proposals to mitigate selfish mining are the most promising. That being said, would it be possible for an attacker to spam the network with invalid workshares (that require verification), creating a DoS issue?
-
br-m
<venture> is my assumption correct that the current MAX_TX_EXTRA_SIZE=1060 limits the number of work-shares that can be included to 13?
-
br-m
<gp1688:matrix.org> I did not read monero-project/research-lab #146#issuecomment-3344168035 thoroughly, but I wonder whether a persistent network lag >6s would mean that no one can produce the block?
-
br-m
<venture> No. Exceeding d (the network propagation delay) is not relevant for new blocks that extend the tip
-
br-m
<venture> > <@venture> is my assumption correct that the current MAX_TX_EXTRA_SIZE=1060 limits the number of work-shares that can be included to 13?
-
br-m
<venture> As per the last comment on GH, the number is rather 24-25 workshares, since not all fields need to be serialized and some can be inferred from the block it is contained in (e.g., prev_id)
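A minimal sketch of the arithmetic behind these counts, assuming illustrative per-field byte sizes (the actual workshare serialization is not pinned down here):

```python
# Back-of-the-envelope workshare counts under MAX_TX_EXTRA_SIZE=1060.
# All per-field sizes below are assumptions for illustration only.
FULL_SHARE = 32 + 32 + 8 + 4   # prev_id + merkle root + timestamp + nonce (assumed)
SLIM_SHARE = 32 + 8 + 4        # prev_id dropped: inferable from the containing block
LIMIT = 1060

print(LIMIT // FULL_SHARE)     # 13 full shares fit
print(LIMIT // SLIM_SHARE)     # 24 slimmed-down shares fit
```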
-
br-m
<venture> > <@fr33_yourself> I currently believe that tevador's proposals to mitigate selfish mining are the most promising. That being said, would it be possible for an attacker to spam the network with invalid workshares (that require verification), creating a DoS issue?
-
br-m
<venture> I don't think this is an issue. It's one hash with the provided nonce, then checking the result against a target of diff(x)/w
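A minimal sketch of why verifying a claimed workshare is cheap: one hash plus one comparison. SHA-256 stands in for RandomX, and the target = 2^256/difficulty convention, the function name, and the 4-byte nonce are assumptions for illustration:

```python
import hashlib

def verify_workshare(hashing_blob: bytes, nonce: int,
                     block_difficulty: int, w: int) -> bool:
    # A workshare only needs to meet diff(x)/w, i.e. a target that is
    # w times easier than the block's own target.
    share_target = (2**256 // block_difficulty) * w
    h = hashlib.sha256(hashing_blob + nonce.to_bytes(4, "little")).digest()
    return int.from_bytes(h, "big") < share_target

# Rejecting a bogus workshare costs the verifier exactly one hash,
# which bounds the damage that spamming invalid workshares can do.
```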
-
br-m
<rucknium> > <@venture> is my assumption correct that the current MAX_TX_EXTRA_SIZE=1060 limits the number of work-shares that can be included to 13?
<rucknium> @venture:monero.social: No, AFAIK. Max tx_extra size is for user txs relayed through the network. It isn't even a consensus rule for user txs. Coinbase txs aren't subject to it, AFAIK.
-
sech1
The 1060-byte limit is for the mempool
-
sech1
coinbase transactions are a different thing, they don't have this limit
-
DataHoarder
> Coinbase txs aren't subject to it, AFAIK.
-
DataHoarder
They have no effective limit, which makes decoding them from the wild quite fun buffer-wise :D
-
DataHoarder
besides max p2p size
-
DataHoarder
some of the fields in tx extra have a commonly agreed encoding... but really it could be anything
-
DataHoarder
also, workshares could effectively be committed into the merge-mine tag (or another field) without the full data within, and that data could be transferred via alternative means, which allows not retaining it in the future
-
DataHoarder
that'd make it compatible with most pool-backed systems out there that have merge-mining support for Tari or others
-
br-m
<venture> DataHoarder: I'm not familiar with the merge-mine tag, but would this change things regarding space requirements?
-
DataHoarder
basically that tag is the merkle root of a merkle tree containing data from various chains
-
DataHoarder
the size is constant for however much data you have
-
DataHoarder
however, this data + merkle proof (a couple of single hashes, not PoW related) need to be presented on a side channel
-
DataHoarder
the merkle proof allows verifying that block included this data
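A generic commit-and-verify sketch of that idea; this is not the exact construction Monero's merge-mine tag uses, just the shape of it, with SHA-256 as a stand-in hash and `merkle_root`/`verify_proof` as illustrative names:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in hash function

def merkle_root(leaves: list[bytes]) -> bytes:
    # Constant-size commitment to any number of leaves (aux chain data).
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])       # duplicate the odd leaf out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    # proof: (sibling hash, sibling-is-on-the-right) pairs -- a couple of
    # plain hashes, no PoW involved.
    acc = h(leaf)
    for sibling, sibling_is_right in proof:
        acc = h(acc + sibling) if sibling_is_right else h(sibling + acc)
    return acc == root
```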
-
br-m
<venture> and the data of the merkle root is stored where?
-
DataHoarder
it's verified by looking at the merkle root on the coinbase transaction
-
DataHoarder
the storage itself is a side channel. you can have this be monero p2p, stored locally, sent via carrier pigeon, or "attached" to blocks for nodes that support it
-
DataHoarder
but the block is valid "as is" for nodes that don't know about workshares
-
DataHoarder
(or merge mined chains as example)
-
DataHoarder
storage requirements don't change much, but with this you can make them not permanent
-
DataHoarder
you can opt to keep all data, or drop old workshare data, or forget it once it's no longer needed
-
br-m
<venture> ah okay this makes sense
-
DataHoarder
the only data that is kept on-chain is the hash tag in the coinbase transaction
-
DataHoarder
this is also mentioned on the tracking GH issue
-
DataHoarder
(As a bonus adapting a system that already merge mines to this is somewhat easier)
-
br-m
<venture> I was somewhat hoping that the merge-mine tag would allow for paying shares, which is what I proposed on the GH issue 😅
-
DataHoarder
the merge-mine tag is used, for example, to prove that you are mining on Monero, with p2pool and Tari as sidechains
-
DataHoarder
for example, this block mined a Tari block and a P2Pool share as well as the Monero block:
blocks.p2pool.observer/block/8aa3ad…6466709f9868db82f9346c7ee207ca14473
-
DataHoarder
the Tari data is a "hash" of their own blob, which gets published elsewhere to build their block
-
DataHoarder
actually, heh, you could probably design this workshare system as basically a p2pool-like sidechain
-
DataHoarder
for sharing of workshares and other assorted data
-
br-m
<venture> DataHoarder: yes, the concept seems pretty similar to pool mining
-
DataHoarder
yeah, but that's how you could fling the data across, either separately or over monero p2p :)
-
DataHoarder
and if using mm tags ... the sidechain would be the Monero block + workshares
-
DataHoarder
the merkle root commits to the workshares you are mining alongside (plus any other merge-mined sidechains)
-
br-m
<venture> > <DataHoarder> actually, heh, you could probably design this workshare system as basically a p2pool-like sidechain
<venture> with w=16 it might actually replace the big p2pool
-
DataHoarder
if the idea is to have the majority of pools adopt it, it's not so much replacing p2pool, just that you end up with a larger sidechain
-
DataHoarder
after all, the point of p2pool is to share payouts in smaller slices, not to produce workshares (that's a side effect)
-
DataHoarder
it weights shares by the difficulty at which they were found, then aggregates payouts :)
-
br-m
<venture> ah shit.. i keep forgetting that they are not paid. never mind :)
-
DataHoarder
not far off, but critically workshares can only be used within similar heights with less leeway, and anything past a few heights is not relevant
-
br-m
<venture> yes, 267 MH/s for the current p2pool vs 312 MH/s (5 GH/s / 16)
-
DataHoarder
effectively the "hashrate" would be the same, wouldn't it?
-
DataHoarder
as pools would basically be merge mining and publishing workshares
-
DataHoarder
hashrate of p2pool sidechain ~= hashrate of p2pool in main chain
-
br-m
<venture> yes, it wouldn't change. it's not even merge-mining, more like normal mining with 2 thresholds (it will be the same hashing blob): the first threshold for being a share, the second for being a block
-
DataHoarder
yes, that's merge mining
-
DataHoarder
same hashing blob, different thresholds
-
DataHoarder
you send out hashing blob + aux data (in this case workshares)
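A sketch of those two thresholds, assuming PoW hashes are compared as integers against targets; the names are illustrative:

```python
def classify(pow_hash: int, block_target: int, share_target: int) -> str:
    # Same hashing blob, two thresholds. share_target is w times larger
    # (easier) than block_target, so a full block also counts as a share.
    if pow_hash < block_target:
        return "block"       # meets full difficulty: publish the block
    if pow_hash < share_target:
        return "workshare"   # meets diff/w only: send out as aux data
    return "nothing"
```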
-
br-m
<venture> ah okay :)
-
DataHoarder
it's not directly comparable (as you want to closely tie workshares to the specific Monero block id / prev_id / height) but here's a nice overview + the API P2Pool follows, for example for Tari and others:
github.com/SChernykh/p2pool/blob/master/docs/MERGE_MINING.MD
-
DataHoarder
merge_mining_get_aux_block sets difficulty, merge_mining_submit_solution sends the blob / proof / template
-
DataHoarder
note I am not saying that it has to follow this or that it is this, but you can have a tightly adapted method along the same lines that more or less works with common merge-mining methods
-
br-m
<venture> thanks. yes, it has some differences, but definitely closely related
-
br-m
<venture> was there ever a re-org or selfish mining happening on p2pool btw? it already has uncles, but rewards the miner that includes the uncle, similar to Ethereum I think (which was sub-optimal w.r.t. preventing selfish mining)
-
DataHoarder
there have been reorgs, but its allowed uncle depth of 3 means it can reorg more often
-
DataHoarder
some p2pool sidechains have different %
-
DataHoarder
otherwise, there was one big miner with basically borked networking that kept mining behind
-
DataHoarder
and releasing these at various intervals
-
DataHoarder
and some big miner that bumped mini's difficulty past main's (more hashrate), though sech1 has more details there
-
DataHoarder
at each miner entity you can see an uncle shares counter
p2pool.observer/miners
-
DataHoarder
some bad miners have even had orphaned shares (they were mining behind or out of sync)
-
DataHoarder
I can't remember exactly, but one big miner recently had one of their shares orphaned out, even though that share also found a Monero block :)
-
DataHoarder
this is different from a share that mined a monero block but was orphaned from monero itself
-
br-m
<venture> DataHoarder: wow that's wild :D
-
DataHoarder
bad miners, sometimes with clocks offset by ±15m
-
br-m
<venture> but that orphaning didn't affect the immediate payout, right?
-
DataHoarder
or some with worse offsets, but those get blocked at share verification time
-
br-m
<venture> from mining the monero block itself
-
DataHoarder
no, that's just p2pool itself
-
DataHoarder
it just doesn't count for the miner weight
-
DataHoarder
(reward share % = miner weight / total weight)
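A sketch of that aggregation, as hypothetical PPLNS-style weighting along the lines described above:

```python
def payout_fractions(shares: list[tuple[str, int]]) -> dict[str, float]:
    # shares: (miner id, sidechain difficulty when the share was found).
    # Each share is weighted by that difficulty, and
    # reward share % = miner weight / total weight.
    weights: dict[str, int] = {}
    for miner, diff in shares:
        weights[miner] = weights.get(miner, 0) + diff
    total = sum(weights.values())
    return {miner: w / total for miner, w in weights.items()}
```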
-
br-m
<venture> yes I see. still funny.
-
DataHoarder
it's funny as this miner had shares before, and one after within uncle distance
-
DataHoarder
so they COULD have included it as one of their own uncles
-
DataHoarder
but they did not ^ (I actually wonder why, sech1; maybe they had different p2pool instances and the share didn't propagate well?)
-
sech1
The same instance would have included that share
-
DataHoarder
yeah, as they have it locally
-
sech1
So they probably run multiple instances
-
DataHoarder
their clock was out of whack, lol
-
DataHoarder
they mined it 5m late
-
DataHoarder
so probably some system stuck behind
-
DataHoarder
if we assume the clock is right, it was mined late
-
br-m
<venture> well, if it's all due to an offset clock, it's not intentional at all..
-
br-m
<venture> Is the SoP proposal somewhat less affected by clock sync? I thought it only needs local time and doesn't need to be in sync with everyone else's clock?
-
DataHoarder
if it had been mined in time, it'd have been uncled
-
DataHoarder
afaik SoP only cares about block heights and ids
-
DataHoarder
and a monotonic local clock to handle the time decay
-
br-m
<venture> DataHoarder: that's my understanding as well