-
plowsof11
The author of Bp++ has just got back in contact, stating that they want to get a new draft ready before the paid review starts. CypherStack are aware and are of course fine with holding off. Liam feels we should wait and get a "review of the security proofs, which are not in the current eprint version," and they've uncovered "issues with the current eprint version". I will follow up and get an estimate of his time to complete this new draft (now that communication lines are open).
-
plowsof11
If MRL now wants to wait for the new paper (to get these mysterious security proofs) then this would change the scope of the review and increase the price (unknown how much until the new draft is looked at)
-
UkoeHB
plowsof11: thanks, if Liam says it's best to wait then we should wait
-
-
monerobull[m]
Ok so we can't stop arb data storage
-
monerobull[m]
we want a little bit of tx_extra to prevent the more harmful form of data storage, which is using outputs
-
blankpage[m]
I would guess that the mempool isn't clearing because the larger relative size of the morbs causes a sorting problem, meaning something is left behind more often when a block is formed
-
monerobull[m]
you can still chain a lot of transactions with a smol tx_extra
-
monerobull[m]
what is the only "real" reason to use more than the limited tx_extra and instead use more than 1 tx? nfts
-
monerobull[m]
or data storage
-
monerobull[m]
so that just needs to be as unattractive as possible
-
hyc
I don't see how we can prevent chaining txns
-
monerobull[m]
for now it looks like we won against asics by forking around a bit and settling on randomx
-
monerobull[m]
hyc: we don't
-
monerobull[m]
we just need to make nfts themselves as unattractive as possible
-
monerobull[m]
aka break the transferring of them as often as it takes
-
hyc
steganography in outputs, afaik, is only feasible because there are so many spare bits in rangeproofs/bulletproofs to begin with
-
monerobull[m]
nobody will flood the chain with massive nft "collections" if they don't have the prospect of profiting from selling the collection to others
-
blankpage[m]
The solution is deterministic decoy selection
-
hyc
why can't we eliminate "spare" bits from outputs, thus leaving nowhere for steg to operate?
-
blankpage[m]
Because they "transfer" the morbs by making a custom decoy selection of burnt outputs
-
blankpage[m]
I thought steg could live anywhere by definition, not just in spare bits? Like you make a zero-amount output and hide it in the nonsense receiving address
-
hyc
not necessarily. I don't believe so anyway.
-
hyc
ASN.1 and Unicode had similar problems - there are multiple possible bit encodings for the same character
-
hyc
so additional rules were added to e.g. ASN.1 DER and UTF-8 to require that only the shortest possible encoding is used, and any others are ruled invalid.
-
hyc
if we define a set of canonical encoding rules, then toggling spare bits of valid inputs will make them invalid.
-
hyc
steg only "works" because there are bits in the stream that are too insignifcant to affect the output when they're toggled.
-
hyc
tighten up the encoding rules, and that goes away.
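As a rough analogy for the canonical-encoding rules hyc is describing (illustration only, using UTF-8 rather than Monero's serialization): strict UTF-8 decoders already reject non-shortest "overlong" encodings, so there is no spare bit-pattern freedom left to hide data in.

```python
# Analogy only: the canonical-encoding principle described above, shown with
# UTF-8 rather than Monero serialization. 0xC0 0xAF is an "overlong"
# (non-canonical) encoding of '/', and strict decoders refuse it.
canonical = b"/"            # shortest (canonical) encoding of '/'
overlong = b"\xc0\xaf"      # same character, non-canonical encoding

print(canonical.decode("utf-8"))   # '/' decodes fine

try:
    overlong.decode("utf-8")       # must be rejected by a strict decoder
except UnicodeDecodeError as err:
    print("rejected non-canonical encoding:", err)
```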
-
sech1
After seeing an argument that arb data can be stored in outputs (32 bytes per output), I'm more inclined towards keeping tx_extra as an optional, encrypted 256-byte chunk of data
-
sech1
and make it pay a higher fee, but not higher than "output storage" would need
-
sech1
hyc you can store 32 bytes per output by overwriting the stealth address of the receiver
-
sech1
so you can generate 16 outputs of 1 piconero each
-
sech1
or even 0 XMR each
-
sech1
16*32 = 512 arbitrary bytes per transaction
-
sech1
well, one of the outputs must be real
-
sech1
or there will be no way to "prove NFT ownership"
-
sech1
so 480 bytes per transaction
-
hyc
almost 1 whole disk sector ;)
-
sech1
and if the fee is something like 0.0000001 per byte (5x the usual), it will cost 0.000048 XMR to store 480 bytes, or ~0.1 XMR per MB
-
sech1
no, it will cost more because it will be 1 in/16 out transaction
-
sech1
so if X = fee for the 256-byte tx_extra field, A = fee for a 1in/2out regular tx, and B = fee for a 1in/16out regular tx, then X = B - A is the maximum fee we can charge for this field
-
sech1
yes, my quick calculations show we can charge 5x more per byte for this 256-byte field
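A small sketch of the arithmetic above (the 5x per-byte fee, the 16-output layout, and the X = B - A bound are assumptions taken from this discussion, not protocol constants):

```python
# Sketch of the fee arithmetic discussed above; the numbers are the
# assumptions from this conversation, not protocol constants.
FEE_PER_BYTE = 0.0000001           # XMR per byte, assumed ~5x the usual rate
USABLE_BYTES_PER_TX = 15 * 32      # 16 outputs, one must stay real -> 480 bytes

cost_per_tx = USABLE_BYTES_PER_TX * FEE_PER_BYTE
cost_per_mb = FEE_PER_BYTE * 1_000_000

print(cost_per_tx)   # 4.8e-05 XMR for 480 bytes
print(cost_per_mb)   # ~0.1 XMR per MB (ignoring the extra weight of 16 outputs)

# Upper bound on the fee X for a 256-byte tx_extra field: it must not exceed
# what the "output storage" workaround already costs, i.e. X <= B - A, where
# A is the fee of a 1in/2out tx and B the fee of a 1in/16out tx.
def max_extra_fee(fee_1in2out: float, fee_1in16out: float) -> float:
    return fee_1in16out - fee_1in2out
```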
-
hyc
isn't tx_extra currently used in merge mining?
-
DataHoarder
that's in coinbase
-
sech1
yes, but no one is talking about removing it from coinbase
-
hyc
ok. presumably then this 5x should not apply to coinbase txn either
-
sech1
coinbase tx doesn't pay fee at all
-
sech1
it "pays" the fee by occupying block space and not letting other transactions into the block, if mempool is full
-
hyc
yeah. sorry, been up all night, a bit fuzzy headed now
-
sech1
maybe it will be better to always have encrypted tx_extra, for better transaction uniformity
-
monerobull[m]
but can we realistically enforce that
-
sech1
in a hardfork, yes
-
sech1
I'm talking about post-Seraphis
-
sech1
limit it to 1060 bytes before the hardfork, and make it an encrypted, mandatory 256-byte field after the hardfork
-
monerobull[m]
i thought detecting if something is encrypted or not is hard
-
sech1
there are statistical tests for random sequences
-
blankpage[m]
They are flawed. Lots of false positives
-
sech1
we can crank up their sensitivity - even if tests fail for 1% of real encrypted messages, the wallet can just try a different encryption key to pass the test
-
sech1
1% extra work for the wallet, but it will detect unencrypted data quite easily
-
sech1
so false positives (<1%) are even desired
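A minimal sketch of that retry idea (the randomness test and the XOR "cipher" here are placeholders, not a proposed wallet API):

```python
# Minimal sketch of the idea above: on the rare false positive the wallet
# simply re-encrypts under a fresh key until the randomness test passes.
# Both the test and the one-time-pad-style "cipher" are placeholders.
import os

def passes_randomness_test(blob: bytes) -> bool:
    # stand-in for whatever cheap statistical test nodes would enforce
    return len(set(blob)) > len(blob) // 2

def encrypt_extra(plaintext: bytes) -> tuple[bytes, bytes]:
    while True:
        key = os.urandom(len(plaintext))           # fresh keystream each attempt
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        if passes_randomness_test(ciphertext):     # at most ~1% of attempts retry
            return ciphertext, key
```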
-
monerobull[m]
i assume the tests require very few resources for nodes right
-
sech1
we can pick only those tests which require little CPU time, there are plenty to choose from
-
blankpage[m]
An NFT protocol could simply XOR the jpeg with some other part of the transaction, making it trivial to decrypt the blob which otherwise passes the statistical test
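For concreteness, the workaround blankpage describes is just this (illustrative sketch; the "tx pubkey" keystream is a stand-in for any public per-transaction value):

```python
# Illustration of the workaround above: XOR a low-entropy payload with some
# public per-transaction bytes (here a stand-in "tx pubkey"). The stored blob
# looks random, yet anyone following the NFT protocol can decode it.
import hashlib

payload = b"\x00" * 16 + b"JPEG HEADER..."              # low-entropy data
tx_pubkey = hashlib.sha256(b"public tx field").digest()
keystream = (tx_pubkey * (len(payload) // 32 + 1))[:len(payload)]

blob = bytes(p ^ k for p, k in zip(payload, keystream))    # high entropy on chain
recovered = bytes(b ^ k for b, k in zip(blob, keystream))  # trivial to reverse
assert recovered == payload
```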
-
sech1
in other words, NFT protocol can just publish their encryption keys
-
blankpage[m]
Yes
-
monerobull[m]
true, encryption would only help on the very surface
-
sech1
It's up to them; we just want transaction uniformity at the blockchain level, without needing off-chain data to decode it
-
blankpage[m]
So statistical tests for encryption are simply a waste of time
-
sech1
not a waste if all default wallet implementations use encryption
-
monerobull[m]
they would help against people storing arb data outside of public nft protocols
-
sech1
because if we make the "default" way (encryption) the easiest way, everyone will use it
-
DataHoarder
we lost every matrix user
-
Alex|LocalMonero
Arbitrary data storage will always be limited by the path of least resistance. The true cost of storing X arbitrary bytes will be however much you permit cheap storage (with tx_extra) plus steg. The more expensive arbitrary data storage is, the less of it will be used. The true price limit will always be steg, so the way to limit arbitrary data storage is to remove all the paths of least resistance and enforce stricter validation on steg, as hyc suggested.
-
fr33_yourself[m]
<monerobull[m]> "we want a little bit of tx_extra..." <- keeping tx_extra in a restrained form does not *prevent* an individual from using outputs for data storage, but it may *disincentivize* use of outputs for that purpose
-
fr33_yourself[m]
<blankpage[m]> "That cost analysis also applies..." <- would tightening the dynamic block algorithm help mitigate a flood attack?
-
UkoeHB
fr33_yourself[m]: the algorithm is already designed to do that
-
fr33_yourself[m]
alrighty thanks :) I figured there were probably tweaks made after the 2021 flood
-
fr33_yourself[m]
I agree with Alex and HYC's ideas, but those solutions sound pretty hard to get right and implement, so perhaps they are likely to be longer term solutions as opposed to actions that can be deployed quickly
-
Alex|LocalMonero
There's no output spam at the moment.
-
plowsof11
UkoeHB: the bp++ author has set a target of April 14th to complete the new draft
-
UkoeHB
Sweet
-
jtgrassie
I don't see a way to prevent steg (when used in pub keys) because we have to allow any valid pub key (which we know can easily be brute-forced in just a few bits of any 32-byte key).
-
jtgrassie
We certainly can (and cheaply) do a statistical test on tx_extra.
-
jtgrassie
The simplest is an entropy test (github.com/jtgrassie/cent) whereby we enforce entropy of something like ~7 bits per byte.
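A minimal sketch of such a bits-per-byte check (this is the standard Shannon formula, not the cent implementation itself); byte values that never occur contribute nothing to the sum, so there is no log(0) issue:

```python
# Minimal sketch of a bits-per-byte Shannon entropy check, in the spirit of
# the test described above (not the cent implementation itself).
import math
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    n = len(data)
    # absent byte values have count 0 and simply don't appear in the sum
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.0) -> bool:
    return bits_per_byte(data) >= threshold
```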
-
jtgrassie
Sure, there will be several files/types which will pass, but that's ok as many unencrypted ones will fail.
-
Rucknium[m]
jtgrassie: Does an entropy test have a p-value?
-
jtgrassie
With an entropy test we just define the level of bits per byte we deem distributed enough. Any valid encryption should be approaching 8.
-
Rucknium[m]
My skepticism is that you don't set a false positive rate with an ad hoc test like this
-
Rucknium[m]
To say nothing of the false negative rate
-
jtgrassie
Most encryption algos have high entropy on output; that's a goal. We can even suggest which algos people should use to ensure they pass.
-
jtgrassie
Remember, our goal here is to reduce unencrypted data, and high entropy does that.
-
jtgrassie
But if something unencrypted but random passes our test, fine.
-
Rucknium[m]
If there is a better test, should we use it?
-
jtgrassie
There are other distribution tests (chi-sq comes to mind); we should use the fastest
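A sketch of a chi-squared uniformity check; since 255 bytes is a small sample for 256 byte-value bins, this illustration bins by nibble instead (that adaptation is mine, not something proposed here):

```python
# Sketch of a chi-squared uniformity check. 255 bytes is too small a sample
# for 256 byte-value bins, so this illustration counts nibbles (16 bins).
from collections import Counter

def chi_sq_nibbles(data: bytes) -> float:
    nibbles = [b >> 4 for b in data] + [b & 0x0F for b in data]
    expected = len(nibbles) / 16
    counts = Counter(nibbles)
    return sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(16))

# For 15 degrees of freedom, a statistic below ~25 corresponds to p > 0.05,
# i.e. consistent with a uniform (encrypted-looking) byte stream.
def looks_uniform(data: bytes, critical: float = 25.0) -> bool:
    return chi_sq_nibbles(data) <= critical
```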
-
Rucknium[m]
The desired outcomes should be determined. What are the false positive and false negative rates we want? And the computational speed?
-
jtgrassie
Indeed
-
Rucknium[m]
This is Shannon Entropy, right?
-
jtgrassie
yes
-
UkoeHB
any bit distribution test can be trivially avoided
-
UkoeHB
it is pointless to play whack-a-mole with people who don't want privacy; all we can do is design to support the people who do
-
Dimitri[m]
Hi all. My name is Dimitri. I am a researcher in elliptic-curve cryptography. I have read the Monero documentation. That is why I asked the following question on the site crypto.stackexchange.com/questions/…-a-set-for-an-elliptic-curve-e-over
-
Dimitri[m]
Let me know please if you have any comments on my question.
-
Dimitri[m]
Your hash function E(Fq) -> E(Fq) on the curve Ed25519 raises many questions on the Internet. I think that it is necessary to conduct research on this topic. I am ready to do that.
-
Rucknium[m]
Shannon Entropy of tx_extra of 07c818d91365ea0959dcae1fc45c47477e4ea0b7be3bf5df7368c8dea01d1e21: 7.947646
-
Rucknium[m]
Shannon Entropy of tx_extra of 1bef5d92e56076a80c4b72089617f92698edfce77cae4ce2ad998761759943d0: 7.969315
-
jtgrassie
UkoeHB: "trivially avoided" yes, encrypt and publish the encryption key, but our goal would be a quick test to prevent easily readable data.
-
Rucknium[m]
The former tx is not a Mordinal. The latter is a Mordinal. Are my calculations incorrect?
-
UkoeHB
jtgrassie: much more trivial than that, just XOR in some fixed string or even another part of the tx
-
UkoeHB
Dimitri[m]: there is another mapping called Elligator2 that has similar properties
-
Dimitri[m]
<UkoeHB> "Dimitri:
web.getmonero...." <- I took a look earlier at these sources. The first is not published. The second is obsolete.
-
UkoeHB
Obsolete?
-
Rucknium[m]
^ I may have messed up the labels above. One moment....
-
UkoeHB
Obsolete as in, there are better functions now? In order to change the hash-to-point function we'd need a major hard fork of a similar magnitude to Seraphis.
-
jtgrassie
Rucknium[m]: those are only over 32 bytes (and your results are off)
-
jtgrassie
We would be testing over 255 bytes (not 32)
-
Dimitri[m]
UkoeHB: Yes, there are faster hash functions to elliptic curves. Nevertheless, your approach is still safe of course.
-
jtgrassie
07c818.. has 4.8125 (over its 32 bytes) and 1bef.. has 4.9375. But a test over only 32 bytes is not the same as over 255. The more bytes, the better the test.
-
Rucknium[m]
I think I misidentified the txs. I am using the JSON response, which gives tx_extra as 0-255 integers
-
jtgrassie
I also misunderstood your msg. I thought you posted the actual tx_extra, not a txid!
-
Rucknium[m]
Technically, how do you deal with a "sample" that is missing one or more of the possible 0-255 integers when calculating entropy? Just putting it into the formula gives NaN.
-
jtgrassie
You enforce 255 bytes
-
Rucknium[m]
I accidentally chose two Mordinals 😅
-
Rucknium[m]
The sample probability of the "missing" integers/bytes is 0, and 0 * log(0) evaluates to NaN
-
jtgrassie
It's also pointless doing tests on the existing blockchain as we use tx_extra for other purposes. Tests have to be independent
-
jeffro256[m]
It isn't necessarily completely pointless if you strip away the "regular" bits of info from tx_extra and test the entropy of what's left
-
jeffro256[m]
Presumably in the future, if a randomness test were to be enforced, it wouldn't happen over needed bits of information like encrypted payment IDs and additional pubkeys
-
jeffro256[m]
That would leave the majority of transactions with a tx_extra entropy of effectively 0
-
Rucknium[m]
At a minimum, those things should have a high probability of passing the test. That's an example of encrypted content or public keys that should pass the test.
-
jeffro256[m]
True
-
jeffro256[m]
I'm not super familiar, but aren't there tests that measure entropy changes over parts of the string, sort of like a local entropy? We wouldn't want someone to "hide" low entropy by putting it next to truly random bits
-
Rucknium[m]
I made that point about a month ago: more powerful tests would consider the sequence of bytes instead of just the set of bytes.
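A sketch of that sliding-window ("local entropy") idea: require every window to clear the threshold, so a low-entropy payload can't hide next to truly random padding (window size and threshold here are illustrative choices, not proposals):

```python
# Sketch of the "local entropy" idea discussed above: slide a window over the
# field and require every window to pass, so low-entropy data can't hide next
# to random padding. Window size and threshold are illustrative only.
import math
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def passes_everywhere(data: bytes, window: int = 64,
                      threshold: float = 5.5, step: int = 16) -> bool:
    if len(data) <= window:
        return bits_per_byte(data) >= threshold
    return all(bits_per_byte(data[i:i + window]) >= threshold
               for i in range(0, len(data) - window + 1, step))
```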
-
jtgrassie
Yes, and other tests should be considered vs just using an entropy test.
-
jtgrassie
Many images would pass an entropy test of say 7 bits per byte (over 255 bytes), but many would also fail. But the output of most enc algos would almost always be higher than 7.
-
jtgrassie
I agree we need to define and test, but we should remember our goal is not to enforce encryption, just to have a simple, fast test to pluck out the low-hanging fruit.
-
jtgrassie
("low hanging fruit" being obviously not encrypted)
-
BawdyAnarchist[m
Yes I thought the idea was that anyone designing to use tx_extra would get too many dropped txns unless they default encrypt. Which solves maybe 99% of the problem, even if you can individually find ways of passing non-random data in tx_extra.
-
xmrack[m]
I pinged the authors of the constant-size range proof paper, as a reminder, and they confirmed they will be at tomorrow's meeting. Looking online I found another constant-size range proof paper from 2022 called Cuproof. UkoeHB kayabanerve Have you read this before?
moneroresearch.info/index.php?actio…OURCEVIEW_CORE&id=179&browserTabID=
-
kayabanerve[m]
I believe Cuproof has a prior discussion
-
xmrack[m]
The name sounded familiar, but I don't think I looked at it until now
-
kayabanerve[m]
Trusted setup RSA group
-
xmrack[m]
ew, nevermind
-
kayabanerve[m]
what's wrong with a lil' rsa in our protocol
-
kayabanerve[m]
*I continue to be horrified by the continued usage of RSA, even if I understand its practicality.
-
jtgrassie
BawdyAnarchist[m]: "Which solves maybe 99% of the problem..." <- Exactly this