-
UkoeHB
Wooh! Today I ran the first performance test of my Seraphis PoC :). The early early result says ~the same or a bit slower verification compared to Triptych, with better size scaling on inputs (not surprising, but cool to see the numbers pop up).
-
gingeropolous
HUZZZAH!
-
gingeropolous
UkoeHB, mind if i tweet so we can get internet points? cause mArKeTiNg
-
UkoeHB
go for it
-
gingeropolous
oh and obvi UkoeHB, you are welcome to get access to some of my rigs if having 64 threads and 256gb of ram could help the iterative process. waiting for computers is ... meh
-
UkoeHB
Yeah that will be super helpful once I have all the variants ready to test! There are a couple more I need to write up, but they should be fairly easy.
-
fluffypony
-
fluffypony
new look on TLU if anyone is interested
-
jberman[m]
gingeropolous: could I jump into a rig and run a test? I want to run a bunch of iterations of the decoy selection unit tests with this code (j-berman/monero 1ed3dfe) and see how often it fails :)
-
carrington[m]
Is it true to say that you can't have a fully deterministic, protocol verified decoy selection system because of the desired ability to delay publishing txns (or txn chaining)?
-
carrington[m]
Seems you can't have a rule such as "transactions in this block must have X decoys from each of Y bins" if the defined bins change between txn construction and inclusion in a block
-
moneromooo
Not necessarily. I'd expect you publish a base height with your tx.
-
moneromooo
It provides a bound to when the tx was created though.
-
moneromooo
But it should be always needed in any case I think.
-
moneromooo
There could be some quantization, but too much and you bias against the very recent chain outputs.
-
carrington[m]
What would "base height" mean in this scenario? Do you mean you define the txn creation height inside the txn, and the decoy selection defines the bins based off that height?
-
moneromooo
Some height after which you cannot select outputs.
-
carrington[m]
I'm not sure what privacy implications leaking txn creation time would have, seems minimal
-
moneromooo
i.e., for a trivial algorithm that picks equiprobably: if you have N outputs on the chain, the algorithm might pick rand() % N. You want N to be known to the verifier.
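moneromooo's trivial example can be sketched concretely. This is a hypothetical illustration only (the function and its parameters are invented, not a proposal): the tx commits to a base height, both wallet and verifier derive N (the chain's output count at that height), and selection is a deterministic function of a seed published with the tx, so the verifier can recompute the ring:

```python
import hashlib

def select_decoys(seed: bytes, n_outputs: int, ring_size: int) -> list:
    """Equiprobable, deterministic selection over the first n_outputs
    outputs (the chain's output count at the claimed base height).
    Wallet and verifier both run this with the same inputs, so a
    mismatched ring fails verification. Purely illustrative."""
    picks = []
    counter = 0
    while len(picks) < ring_size:
        h = hashlib.sha256(seed + counter.to_bytes(8, "little")).digest()
        idx = int.from_bytes(h, "little") % n_outputs
        if idx not in picks:          # no duplicate ring members
            picks.append(idx)
        counter += 1
    return sorted(picks)
```

The verifier rejects unless the ring equals `select_decoys(seed, N, ring_size)` with N fixed by the claimed base height, which is exactly why N (and thus the base height) must be known to the verifier.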
-
carrington[m]
Seems to me that if you supplied an untrue txn creation height, the verifier would generate bins which are misaligned with the bins used during txn generation. This could result in failed verification.
-
carrington[m]
It also seems that any quantization/fuzzing of the creation height would require an increase in the 10 block "ignored zone"
-
moneromooo
Obviously you generate your tx with the height you claim.
-
carrington[m]
So you would have to provide the true txn generation height, which is more than "provides a bound to when the tx was created"
-
gingeropolous
jberman[m], yeah. hit me up in dm
-
UkoeHB
reminder: mrl meeting in 1.75hr (monero-project/meta #621)
-
Rucknium[m]
<carrington[m]> "Is it true to say that you can't..." <- What do you mean by "deterministic" here?
-
isthmus
I would apply a statistical test to [x - min(ages) for x in ages]
-
isthmus
Rather than the absolute
-
isthmus
So the distribution has to be correct relative to the first ring member, not relative to the block height
-
isthmus
Otherwise even getting stuck in the mempool too long could bork things
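A minimal sketch of isthmus's offset idea (the function name is invented for illustration): normalizing ages against the newest ring member makes any constant shift, such as time stuck in the mempool, cancel out.

```python
def offset_ages(ages):
    """Normalize ring-member ages relative to the newest (minimum-age)
    member: test the distribution of [x - min(ages)] rather than the
    absolute ages, so a mempool delay that shifts every member equally
    drops out of the statistic."""
    m = min(ages)
    return [x - m for x in ages]

# Illustrative check: a 25-block mempool delay shifts all absolute ages
# but leaves the offset representation unchanged.
ages = [12, 40, 310, 2900]
delayed = [a + 25 for a in ages]
assert offset_ages(ages) == offset_ages(delayed)
```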
-
Rucknium[m]
isthmus: I think that might be OK, but it adds another layer to the probability theory here. We need not just the distribution of the decoy selection algorithm f_D(x), but the _distribution of a distribution_ that is based on a probabilistic draw from f_D(x). It's feasible to work that out, but it takes research and coding time.
-
isthmus
It's just an offset
-
isthmus
And I think it's fundamentally necessary
-
isthmus
Whether a transaction was generated correctly needs to be something that can be evaluated independently of when it got included in a block, right?
-
Reuben[m]
Do we know how much increasing ring size would mitigate current issues even if decoy picking algo wasn't changed ?
-
Reuben[m]
Of course ideally both but was wondering how increases in ring size esp in terms of Seraphis/Spark would also make Monero less vulnerable to these type of issues given that it's been an ongoing issue
-
Rucknium[m]
isthmus: It's a random offset, though, if I understand your idea correctly.
-
isthmus
I'll sketch it up later and see if it makes sense
-
isthmus
Hey @Rucknium[m]
-
isthmus
@Reuben[m] 👋
-
Reuben[m]
Heyy good to cya :)
-
Rucknium[m]
Reuben[m]: Yes, we know. I will quote my comments in my CCS proposal
-
Rucknium[m]
-
Rucknium[m]
"Increasing the ring size is part of Monero's long-term development roadmap. However, I have produced evidence that the statistical vulnerability would still remain with larger ring sizes. Raising the ring size from 11 to, say, 17 would barely dent the potency of my attack."
-
Rucknium[m]
"Raising the ring size to 256 would mitigate the attack to a substantial degree, but user privacy would still be at significant risk. In other words, we cannot get ourselves out of this problem by simply raising the ring size."
-
Reuben[m]
Interesting thanks
-
UkoeHB
-
UkoeHB
1. greetings
-
UkoeHB
hi
-
isthmus
Salutations
-
rbrunner
hello
-
jberman[m]
hiya
-
Rucknium[m]
Hello
-
h4sh3d
Hi
-
carrington[m]
I guess I didn't mean completely deterministic. Just that the distribution amongst bins would be deterministic
-
carrington[m]
Hello
-
SerHack
Hi
-
UkoeHB
Today I'd like to nail down ring size for next hard fork, since it is... a blocker for next hard fork. I like 15 (less than 50% rise in input size/verification costs), or 17 (aesthetics: prime number).
-
dartian[m]
Salutations
-
UkoeHB
This is agenda item 2.
-
carrington[m]
Regarding ringsize increase, I think 16 is the way to go unless binning is implemented in the next upgrade in which case it should be 22
-
UkoeHB
Reasoning?
-
jberman[m]
In the last dev meeting we discussed going with a number that would smooth the binning implementation, so there's no remainder in a bin
-
carrington[m]
There was discussion about binning with two outputs from each bin, so 22 gives you a bin for each current output
-
jberman[m]
22 was tossed as ideal since it maintains the current gamma selection + yields the benefit of binning
-
carrington[m]
A number with many factors basically was said to be good for binning options
-
jberman[m]
when you have 11 bins + 2 members per bin
-
UkoeHB
Ok, jberman[m] can you update us on status of binning proposal?
-
carrington[m]
I have not yet had time to look over the binning PoC
-
Halver[m]
hello
-
jberman[m]
I found a flaw yesterday in my proposed wallet-side binning algorithm that is fixable: basically trying to randomly select an output from a block in the final step for each output can cause a situation where an output is guaranteed to be a decoy
-
isthmus
Good catch
-
jberman[m]
Working on the fix
-
jberman[m]
the reason it randomly selects an output from a block in the final step is to mitigate a miner re-ordering a block to their advantage (e.g. ordering a bunch of outputs into a bin)
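The mitigation jberman[m] describes can be sketched as follows. This is a hypothetical illustration (function and parameter names are invented): once a bin's block is chosen, the member is drawn uniformly among that block's outputs, so a miner gains nothing by reordering outputs within the block.

```python
import hashlib

def pick_output_in_block(block_first_idx: int, n_outputs_in_block: int,
                         entropy: bytes) -> int:
    """Final binning step as described above: given a selected block,
    choose one of its outputs uniformly (here deterministically from
    wallet-local entropy). Because every position in the block is
    equally likely, a miner cannot bias selection by ordering a bunch
    of chosen outputs into particular positions."""
    h = hashlib.sha256(entropy).digest()
    offset = int.from_bytes(h, "little") % n_outputs_in_block
    return block_first_idx + offset
```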
-
carrington[m]
Articmine also stated that if ringsize goes beyond 24 with CLSAG, there would need to be some changes to the dynamic block size system
-
Rucknium[m]
From a decision making point of view, isn't it the case that the exact state of the binning PoC isn't too important, as long as we believe that there are not fatal flaws with binning as a concept?
-
UkoeHB
Yes that's true, my brain is still warming up on this subject.
-
Rucknium[m]
One scenario: We go to an even number of ring size in the next hard fork. Then, later, binning is released in a new wallet release -- it requires no hardfork.
-
rbrunner
Log of the dev meeting in question is here, btw: monero-project/meta #614#issuecomment-933010683
-
isthmus
There aren't many even primes :/
-
gingeropolous
hrm, but doesn't that also get into the whole enforcing of ring member selection thing.
-
gingeropolous
u could end up with wallets that bin and those that don't
-
UkoeHB
We have heard from: UkoeHB, carrington[m], jberman[m] about ring sizes. Does anyone else have things they want to say? sgp_ ?
-
gingeropolous
ringsize a bajillion
-
jberman[m]
A switch to binning in the wallet is guaranteed to give rings a clearly different distribution from older rings, so it would be nice to have it in place for a hardfork to try and get as many people as possible using the updated algorithm, though it's not strictly necessary
-
sgp_
hi. nothing changes from me; I still recommend 16-17 absent binning
-
Rucknium[m]
gingeropolous: Yes, that would almost 100% happen -- or at least there would be a lag as other wallet implementations adopt it.
-
UkoeHB
To be clear: an increase from 11 to 22 ring size will double per-input size/verification costs.
-
selsta
16-17 seems good to me too, I would rather not go too high due to verification time
-
Rucknium[m]
My not-completely-thought-out suggestion is ring size 16 since it would allow 2-output bins later.
-
rbrunner
Well, if there is a later :)
-
wfaressuissia
Hello
-
sgp_
with binning would be 8x2, seems reasonably acceptable but a little low
-
rbrunner
Or better said, another hardfork between the coming one and Seraphis ...
-
carrington[m]
Whether 8 2-ouput bins is better than 4 4-output bins, I'm not so sure
-
Rucknium[m]
sgp_: I agree with "seems low". I would want a substantial overhaul of the decoy selection algorithm to be in place before or at the same time as 8x2 binning.
-
gingeropolous
i like 22 mainly because its the largest number that seems acceptable at this time, and allows for simple binning that seems acceptable by all
-
atomfried[m]
with BP+ how much bigger would a 16-17 ring tx be compared to a current one?
-
sgp_
22 is quite large but indeed has no downside compared to current with binning turned on
-
Rucknium[m]
gingeropolous: If 22 is feasible, that would be nice. It seems to be outside of the range that people have typically thrown around, however.
-
sgp_
the highest I realistically want to go is 18
-
selsta
vtnerd wanted to add ASM speedup for syncing
-
gingeropolous
it's literally been 3 years since ringsize 11 was enforced. tech has advanced since then to accommodate.
-
carrington[m]
Doesn't BP+ lower txn size by a flat 96 bytes?
-
selsta
with that added, it might allow for a bit higher ring sizes, depending on the speedup
-
sgp_
yeah tech hasn't advanced 100% though :p
-
UkoeHB
Let's look at just 16 vs 22.
-
UkoeHB
16: conservative, avoid 2xing size/verification costs per-input
-
UkoeHB
22: optimistic about binning
-
gingeropolous
22: also puts the gas on the pedal for seraphis
-
isthmus
(and 22 = more statistical noise to lower the power of heuristic analyses intended to deanonymize ring signatures)
-
gingeropolous
wait i messed up that metaphor
-
UkoeHB
lol
-
isthmus
haha
-
jberman[m]
<carrington[m]> "Whether 8 2-ouput bins is better..." <- the Moser paper recommends bin sizes of 2, but need to review the math a bit deeper. their reasoning is basically that bins of size 2 provide mitigation for the worst of any potential problems with pure gamma selection, while still allowing for many gamma-selected outputs in the ring
-
UkoeHB
In my view, the pattern/precedent of conservative design choices in Monero favors 16 over 22.
-
carrington[m]
So... 18 is the happy medium? Also allows bin sizes of 3
-
gingeropolous
ah ha, there's the compromise. 16 now, but include in code that ringsize grows by 2 every year for some n years
-
sgp_
just to ground everyone in reality, most of the targeted heuristics don't care about ringsize 11 v 22. It doesn't make those twice as hard
-
sgp_
I'm by no means a "small ringsize-r", but going for 22 only makes sense to me if we want to do 11x2 binning
-
carrington[m]
Automatic ringsize increase sounds unnecessarily complicated
-
gingeropolous
re: sgp, does that include binning bonuses?
-
carrington[m]
As presumably the decoy selection would need to change at the same time
-
rbrunner
And one would really hope that a next-gen protocols is not several years out
-
sgp_
gingeropolous: probably, if you assume 2+ poisioned outputs
-
gingeropolous
i just dunno if the heuristics have been done against binning
-
UkoeHB
Once again, I feel us going in circles on this subject.
-
sgp_
we're getting off topic, but that's not the relevant test at play here for 2+ poisoned outputs
-
gingeropolous
yep. im fine with 16,17,22. anything >11 is fine
-
rbrunner
The dev meeting participants seemed to gravitate towards 16, so ...
-
carrington[m]
Maybe we file this question under "awaiting more binning research"
-
sgp_
I'm good with 16
-
gingeropolous
nah, one of the goals is to nail down a number
-
gingeropolous
no more can kicking
-
h4sh3d
seems like no one is against 16 so...
-
UkoeHB
Imo if binning were to be enforced by consensus, then the case for 22 would be stronger.
-
sgp_
11 -> 16 is the largest increase ever (not by % obv)
-
UkoeHB
Ok how about tentatively 16 ring size for next hf. Any objections? We can revisit this again if anyone wants to before go-live.
-
isthmus
👍
-
gingeropolous
^
-
sgp_
no objections
-
wfaressuissia
"Once again, I feel us going in circles on this subject." right, especially considering undelivered triptych with 100+
-
jberman[m]
none from me
-
isthmus
Well, except the fact that it's not prime :- P
-
gingeropolous
^^
-
» isthmus is superstitious
-
UkoeHB
isthmus: at least it's a power of 2!
-
h4sh3d
power of 2 and prime would be good :p
-
rbrunner
16 will probably also make possible nice displays in block explorers and such
-
UkoeHB
Let's move on to other agenda items. This part is open-ended.
-
rbrunner
I have a question regarding Seraphis that may be of common interest and also came up recently on Reddit:
-
UkoeHB
I'll just add my update for the log: yesterday I ran the first performance tests for one design variant of my Seraphis PoC. As expected, it is ~similar in verification cost to Triptych, with better size scaling on inputs.
-
rbrunner
If I understand correctly Seraphis has the potential to offer better view-only wallets, even in more than 1 variant
-
rbrunner
The Seraphis draft whitepaper seems to speak about "design choices" regarding this
-
rbrunner
Does this mean we can freely choose among several possibilities, as a community, and the "loose consensus" can get implemented?
-
UkoeHB
Yes the consensus can be implemented
-
isthmus
Sounds good to me
-
UkoeHB
Technically, addressing schemes are all 'conventions', not enforced by consensus.
-
rbrunner
Do addresses change in length depending on the choice of the "power" of view-only wallets?
-
UkoeHB
Yes
-
UkoeHB
Some variants require 3 key addresses
-
rbrunner
That must result in a new record length for all coins :)
-
isthmus
I'm intrigued. I think that the limits on our current view key system really lower the practical utility
-
h4sh3d
Does 2 key addresses remain compatible?
-
UkoeHB
-
UkoeHB
h4sh3d: no there is no variant that remains compatible
-
rbrunner
Very interesting. Is that from a text that also explains what a "tier" is?
-
UkoeHB
no it's just a summary; I just call a 'tier' a different level of authority
-
isthmus
I like Janus A
-
rbrunner
Depending on how many private keys one holds?
-
isthmus
I like Janus A
-
UkoeHB
rbrunner: more or less
-
isthmus
oops sorry for the dupe, connectivity issues
-
rbrunner
I see
-
sgp_
what does Janus mean if it's in the table under a tier?
-
UkoeHB
you can check section 4.6.2 in the seraphis draft paper
-
UkoeHB
sgp_: you can detect janus attacks with that tier
-
isthmus
I think there's value in having view received and view spent separately
-
isthmus
In the case of a charity that publishes their view key, it might be better to just share view-received, since it's undesirable to publicly reveal a large number of spends
-
isthmus
But for a company running audits on internal wallets, having view spend is desirable
-
UkoeHB
isthmus: the problem is you can usually detect spends by looking at change outputs
-
rbrunner
"Plain B" looks interesting because it does not yet seem to require a third key
-
UkoeHB
so it is of questionable utility to separate the capabilities
-
sgp_
Janus A seems the best then. Janus B would be cool for minimizing info known to lightweight wallets, but not being able to observe Janus is a downside
-
UkoeHB
However, if ring size is expanded to 'all the outputs', then the distinction can be useful.
-
isthmus
Well, my accounting department isn't going to want to heuristically infer spends from change outputs :- P
-
isthmus
But yea, that is a bummer for the charity use case
-
sgp_
even so, the distinction is useful from an auditability perspective
-
sgp_
using key images to confirm guesses is terrible UX
-
isthmus
Yea, with Monero's current setup, the ONLY way for accounting to get a full view of wallet activity is if they have access to sensitive keys
-
UkoeHB
To be clear: a tier 2 wallet has all the capabilities of a tier 1 wallet
-
isthmus
Whereas with e.g. Janus A, they could have a full dashboard to monitor all wallet activities, without ever needing access to a sensitive key
-
isthmus
Which is sweet
-
isthmus
(from a secure systems design perspective)
-
carrington[m]
Couldn't change outputs be sent to a different subaddress? That way, you couldn't infer the full flow of funds with a "view incoming" key for a specific address
-
carrington[m]
Maybe a bit convoluted, and possibly completely wrong
-
isthmus
🤔
-
moneromooo
Apologies if this was said, I just popped in, but why does plain B have three distinct tiers with only two keys ?
-
UkoeHB
isthmus: you get the same thing with all the other variants except Plain A
-
isthmus
Yea
-
rbrunner
Aren't keys address-independent?
-
isthmus
They're all improvements over Plain A
-
UkoeHB
moneromooo: there are actually 3 private keys in Plain B, but only 2 public keys in the address
-
rbrunner
Sounds quite clever
-
moneromooo
Thanks
-
rbrunner
So it seems we can start discussion and let any future hardfork to Seraphis approach in a relaxed way, because the design can be finalized quite quickly
-
UkoeHB
I suppose so
-
jberman[m]
"Janus" types wouldn't leave an additional 16 bytes of data on chain, but require larger addresses in order to protect against Janus?
-
UkoeHB
jberman[m]: correct
-
UkoeHB
Even if you use 16 bytes to mitigate janus, the janus address variants still need 3 keys to get the more versatile permissions
-
sgp_
how long we talking
-
isthmus
It would be a kind of cool UI feature if I could export / import the view keys as mnemonic word lists
-
jberman[m]
got it
-
UkoeHB
sgp_: until what?
-
sgp_
long in size, 50% longer?
-
UkoeHB
yes the Janus variants would be 50% longer
-
UkoeHB
One last minute question: is anyone working on Drijvers attack mitigation? wfaressuissia ?
-
carrington[m]
Longer addresses are better than more data on chain
-
h4sh3d
UkoeHB: I had a look at your technical note. Was surprised to not see reference to MuSig2 work, is it intentional?
-
wfaressuissia
yes
-
UkoeHB
h4sh3d: I did not read that paper
-
UkoeHB
Since I guess the other papers are sufficient
-
sgp_
carrington: agree 100%, quite a minor downside
-
h4sh3d
worth having a quick look at 1.3 Concurrent Work then, just to get an idea: eprint.iacr.org/2020/1261.pdf
-
sethsimmons
Users just copy-paste and verify first and last few characters, so longer addresses don't really matter in any way I can think of, and Monero's addresses are already dauntingly long lol
-
h4sh3d
yes FROST covers the same
-
UkoeHB
wfaressuissia: great thank you! :)
-
UkoeHB
Ok we are at the end of the meeting. Let's do another meeting same time next week. Thanks for attending everyone.
-
carrington[m]
Worth noting for the logs: Haveno are offering a $2500 bounty for fixing the Wagner/drijvers attack
-
isthmus
Productive meeting
-
isthmus
I’ve made some progress in a little side project inspired by the transaction volume excess I could share too, but I forgot to add to the agenda
-
isthmus
Should I brain dump now or save it for next week's episode? :- P
-
carrington[m]
Rucknium are we any closer to reviewers deciding if your hackerone disclosure should be published? Haven't seen mention in a while
-
gingeropolous
i guess u should save it for next week isthmus
-
carrington[m]
Isthmus I can add that to the top of the next agenda, if you give me a title to hype it up.
-
isthmus
Actually, let me double check if I can even come next week
-
isthmus
This time often overlaps with engineering meetings that I need to be at
-
carrington[m]
Logs are posted
-
isthmus
Next week is iffy for me. I'll whip up an abstract now while it's fresh on my mind, and then we can tentatively pencil it in for next week
-
isthmus
I’ve been continuing to dig into the July 2021 txn flood. The anomaly has been a fantastic case study for ring signature deanonymization due to high volume and extreme transaction homogeneity.
-
isthmus
Since doing recursive searches over ring signatures is grossly inefficient, i.e. O(R^H) for ring size R and number of hops H, I’ve been working on efficient encodings for analyses in O(# Txns) by working from genesis to head (which is presumably how any adversary with basic CS skills would approach it)
-
isthmus
The data features may take a few days or weeks to build, but you only need to do it once, and then you can read in basically O(1).
-
isthmus
One cool application is marking all of the outputs upstream of a given ring signature, which you can think of like an N-1 length bitstring attached to the Nth output, where a 1 at the jth index indicates that output J was a parent of output N.
-
isthmus
(You can also imagine this as a triangular matrix with ((# outputs)^2)/2 nonzero entries)
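A minimal sketch of that ancestry structure, using Python sets in place of packed bitstrings (names are illustrative; `edges` maps each output index to the outputs referenced by the rings that funded it, and a real build would stream genesis-to-head over packed bitsets):

```python
def build_ancestry(edges: dict, n_outputs: int) -> list:
    """For each output n, compute the set of upstream output indices:
    the '1' positions of the bitstring described above. Processing in
    index order (genesis to head) works because every referenced
    output j satisfies j < n."""
    anc = [set() for _ in range(n_outputs)]
    for n in range(n_outputs):
        for j in edges.get(n, []):
            anc[n].add(j)        # direct parent
            anc[n] |= anc[j]     # inherit the parent's ancestors
    return anc
```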
-
isthmus
It’s a windowable method, for example our first application is identifying which (pre-flood) outputs were used to set up the transaction volume excess this summer. Since we can narrow our focus to a few months leading up to the anomaly, each tag is only a few kB, so we don’t need much computational power or disk space to pull it off.
-
isthmus
It’s also surprisingly viable to apply this to the entire blockchain if you have a bit of patience for the first build. Back-of-the-envelope estimation is that all edges in the Monero transaction tree could be naively encoded into this matrix / bitstring formalization in just over 100 TB (which is very small from a big-data industry perspective, where good data engineers are expected to sling around PBs of data efficiently.)
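The ~100 TB figure is consistent with a triangular matrix over an output count in the tens of millions. The sketch below assumes a round N of 40 million purely for illustration (the log does not state the actual on-chain output count):

```python
# Reproducing the back-of-the-envelope scale: a triangular ancestry
# matrix over N outputs has ~N^2/2 nonzero entries, i.e. N^2/2 bits
# when naively encoded. N is an assumed round number, not a measured
# on-chain output count.
N = 40_000_000
bits = N * N // 2
terabytes = bits / 8 / 1e12        # bits -> bytes -> terabytes
assert terabytes == 100.0          # "just over 100 TB" territory
```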
-
isthmus
There are methods for encoding this more efficiently (e.g. succinct posets: arxiv.org/abs/1204.1957), but honestly since the naively encoded data set is only ~100 TB it probably wouldn't be worth the effort of implementing all the fancy math.
-
isthmus
With the data in this shape, output recombination analysis becomes trivial, so we’ll be able to answer conclusively whether the transactions in the anomaly had two >0 outputs (amount + change) or only one valued output and one dummy (0-value) output.
-
isthmus
The reason it becomes so simple is this: the way that we built the matrix means that it’s already sorted (both in terms of rows and columns). So now we can test hypotheses extremely quickly by slicing out columns then simply working top to bottom until you encounter the first [1,1]
-
isthmus
For example, if a transaction created the xth and x+1th outputs, we 1) simply pull out xth and x+1th columns, then 2) throw out all the 0s before the xth index, then 3) only search as far as we need to to find the first [1,1].
-
isthmus
It’s extremely efficient, which is pretty cool, and it doesn’t matter if there are 3 or 30 or 300 hops between when the second output gets folded in downstream of the first one.
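The column-slicing search described above can be sketched against any per-output ancestor-set structure (the hypothetical `anc` list here stands in for the sorted matrix columns):

```python
def first_recombination(anc: list, x: int):
    """Scan 'columns' x and x+1 top to bottom for the first [1,1]:
    the first output whose ancestry contains both output x and output
    x+1. Returns None if the pair never recombines. Because the matrix
    is built in index order, no output before x+2 can contain both."""
    for n in range(x + 2, len(anc)):
        if x in anc[n] and (x + 1) in anc[n]:
            return n
    return None
```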
-
isthmus
For a given output, let delta be the height difference between output creation and the first recombination, with delta(x,x+1)=inf (or nan) if never recombined. The interesting thing is not how often recombinations occur, but how quickly.
-
isthmus
There will always be spurious recombinations due to decoy selection, but when we look at the distribution of deltas, the baseline distribution (in a sense, the control case) will have a longer expectation value than what we observe during the anomaly if both outputs were valued.
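One simple way to summarize such a delta distribution, assuming deltas are computed as described (inf for pairs that never recombine; the function name is invented for illustration):

```python
import math
from statistics import mean

def mean_finite_delta(deltas):
    """Summarize a distribution of recombination deltas (inf = the
    pair never recombined) by the mean over pairs that did recombine.
    Per the reasoning above, if the anomaly's outputs were both
    valued, this statistic should sit well below the decoy-only
    baseline's value."""
    finite = [d for d in deltas if math.isfinite(d)]
    return mean(finite) if finite else math.inf
```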
-
isthmus
It’ll take some time to code up the tags and build the data features, but I expect that our next report(s) on the flood will be able to deanonymize most of the ring signatures, conclusively answer whether the anomaly was producing dummy outputs or change outputs, and hopefully identify which transactions _before_ the anomaly created the thousands of ‘mise en place’ outputs that were consumed when it began.
-
jberman[m]
Checking my understanding: you're starting with suspect outputs created in a set of suspect tx's, then seeing when they're first used in a ring later in the chain. If you find that a large swathe of the suspect outputs are first used in a ring later on much sooner than what one would expect from the decoy selection algorithm, then you can guess those suspect outputs were more likely to be spent in those rings?
-
isthmus
Bingo
-
isthmus
In terms of guessing true spends, we actually have 3 heuristics to combine:
-
isthmus
1) timing, usually 10-15 blocks old
-
isthmus
2) only interested in ring members that have the same fingerprint: 2 outputs, unlock_time = 0, fee matching core wallet, tx_extra length 44 bytes, etc
-
isthmus
3) above linking analysis
-
isthmus
Oh and 4) throw out fresh off the coinbase
-
isthmus
I think that even without #3 the other heuristics will knock it down to 1-2 plausible members per ring
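The four heuristics can be combined into a hypothetical filter (the field names are invented for illustration; the 10-15 block window, two-output/zero-unlock/44-byte-tx_extra fingerprint, and coinbase exclusion come from the list above):

```python
def plausible_members(ring: list, tip_height: int) -> list:
    """Keep only ring members passing the heuristics listed above.
    Each member is a dict with illustrative field names; a real
    implementation would read these from parsed chain data."""
    out = []
    for m in ring:
        age = tip_height - m["height"]
        if not (10 <= age <= 15):            # 1) timing window
            continue
        if not (m["n_outputs"] == 2          # 2) flood fingerprint
                and m["unlock_time"] == 0
                and m["fee_is_default"]
                and m["tx_extra_len"] == 44):
            continue
        if m["is_coinbase"]:                 # 4) fresh off the coinbase
            continue
        out.append(m)
    return out
```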
-
UkoeHB
wfaressuissia: it seems MuSig2 has a marginally more efficient spec for bi-nonce signing compared to FROST/SpeedyMuSig (eprint.iacr.org/2020/1261 section 4.1 'Second round signing'); they use a single nonce aggregation coefficient `b` to reduce the cost of combining all nonces (thanks h4sh3d for mentioning this paper)
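The single-coefficient trick can be illustrated in a toy group (this sketches the algebra only: integers mod a prime stand in for an elliptic curve, and nothing here is a secure implementation):

```python
import hashlib

P = 2**127 - 1          # toy prime modulus (NOT cryptographically meaningful)
G = 3                   # toy generator

def H(*parts) -> int:
    """Toy hash-to-scalar for the aggregation coefficient."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def combine_nonces(pub_nonces, msg):
    """Bi-nonce aggregation in the MuSig2 style, simplified: each
    signer contributes two nonce commitments; the per-slot aggregates
    R1, R2 are combined with a single coefficient b = H(R1, R2, msg)
    into one effective nonce R = R1 * R2^b (written multiplicatively
    in this toy group)."""
    R1 = R2 = 1
    for (n1, n2) in pub_nonces:
        R1 = (R1 * n1) % P
        R2 = (R2 * n2) % P
    b = H(R1, R2, msg)
    return (R1 * pow(R2, b, P)) % P, b
```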
-
isthmus
Note also that even for normal wallets without recombination, #2 and #4 apply to change output chains
-
jberman[m]
Very neat, looking forward to hearing more :) A challenge with applying it in the general case seems to be that you need a large initial set of suspect tx's, and trying to guess at initial sets seems difficult. But if you start with a large set of initial suspect tx's (and expect quick spends), you have a lot to go off of
-
isthmus
I was thinking that for analyzing the volume anomaly, we can probably just look at the ~500,000 inputs leading up to the start of the excess volume
-
isthmus
(a few weeks)
-
isthmus
That leaves us with few kB scale tags which will be easy to manipulate without much memory or computational power
-
isthmus
And if we don't find the origin outputs in that window, we can just rerun it another 500k indices further back
-
Rucknium[m]
<carrington[m]> "Rucknium are we any closer to..." <- I am aware that some reviewers have been slowed by unrelated circumstances. 2.5 weeks ago I agreed to not discuss OSPEAD and related issues in detail publicly for the time being since, well, there was quite a bit of controversy.
-
Rucknium[m]
However, I think it is reasonable at this point to "break the silence" about some details of where we are in the process, especially since it has been over a month since I submitted to HackerOne.
-
Rucknium[m]
Some reviewers have not identified themselves publicly, so some of what I say will be vague.
-
Rucknium[m]
Current status on my end of things:
-
Rucknium[m]
1) A biostatistician within the Monero community has written a "review" of my submission. Overall, I feel that it is a very positive review. In essence, it found no fundamental flaws with my attack nor my proposed solution to it, according to my interpretation of the review.
-
Rucknium[m]
I caution that the biostatistician specifically stated that his opinion should not be considered a go/no go judgement, however.
-
Rucknium[m]
Here's the thing about peer review: Generally an arbiter would have a review and a reply together. I also note that the biostatistician said he reviewed my HackerOne submission as if it were a scientific publication. Reviewing it that way implies a high level of scientific rigor.
-
Rucknium[m]
I didn't write my submission in the style of a scientific publication, so I am somewhat uncomfortable sharing the review without my reply, since the context is lacking. I wrote it to be a...HackerOne submission, for the purpose of getting broad feedback from the Vulnerability Response Process team about what could be shared and what cannot in composing my CCS.
-
Rucknium[m]
So what has been occupying my Monero time is two activities:
-
Rucknium[m]
1) Writing an extensive reply to one of the biostatistician comments about the description of OSPEAD being too vague to comment on at a technical statistical level. I agree it was vague. I specifically state in my submission at one point:
-
Rucknium[m]
"I have the mathematical definitions of these [ideas] worked out in my head, but they are not written here."
-
Rucknium[m]
So I wrote out a description in words of a key part of OSPEAD in about 2 pages in my HackerOne submission. This is Section 7 of my submission.
-
Rucknium[m]
I have recently written a fairly precise treatment of what I intend to do (in that key part of OSPEAD) as, essentially, a response to the comment about it being vague. It's about 10 pages of fairly technical mathematics. The presentation is much more technical than anything contained within my HackerOne submission. Let's call this Document A.
-
Rucknium[m]
I have given Document A to the biostatistician, isthmus, and jberman. However, there are more reviewers who would probably want to see it.
-
Rucknium[m]
Importantly, I believe that it may be safe to publicly release a slightly modified version of Document A so as to more clearly explain to the community what I intend to do. The overall thrust of Document A is not sensitive and would, I think, not be useful to a Monero adversary. I am not certain on this point, however.
-
Rucknium[m]
Well, I said "what has been occupying my Monero time is two activities", but let's disaggregate further:
-
Rucknium[m]
2) I am finishing a detailed technical critique of Section 6.1 "Fitted Mixin Sampling Distribution" of Moser et al. (2018). This was basically requested in the biostatistician's review.
-
Rucknium[m]
3) General replies to the biostatistician's other comments. Those replies don't really require any further research work.
-
Rucknium[m]
4) Some new results that extend some parts of my HackerOne submission. These results are useful in judging current risks to user privacy.
-
Rucknium[m]
5) A fifth thing.
-
Rucknium[m]
Once I finish with 1-5, which will hopefully be in the next few days, I will send it to all current "reviewers" of my HackerOne submission. At that point, I will halt technical work on the decoy selection algorithm until the funding situation is clearer.
-
Rucknium[m]
Yes, 1-5 also includes the original review by the statistician, along with my replies to his comments. In total, it would be about 30 pages I think.
-
Rucknium[m]
^ This presents an unfortunate or maybe interesting possibility that I am producing research faster than it can be reviewed by the people who want to review it, since my original HackerOne submission was 28 pages, and 1-5 will be more technical than my HackerOne submission.
-
Rucknium[m]
Most of the work to produce 1-5 was envisioned to have occurred under the plan laid out in my original CCS proposal, so it's not wasted effort or anything.
-
Rucknium[m]
carrington: Ok, done with update :)
-
Rucknium[m]
And I updated the BCH community on the expected timeline for my BCH&XMR work:
-
Rucknium[m]