03:57:20 Author of BP++ has just got back in contact, stating that they want to get a new draft ready before the paid review starts. CypherStack are aware and of course are fine with holding off. Liam feels we should wait and get a "review of the security proofs, which are not in the current eprint version", and they've uncovered "issues with the current eprint version". I will follow up and get an estimate of his time to complete this new
03:57:20 draft (now that communication lines are open).
03:57:20 If MRL now wants to wait for the new paper (to get these mysterious security proofs) then this would change the scope of the review and increase the price (unknown how much until the new draft is looked at)
04:11:41 plowsof11: thanks, if Liam says it's best to wait then we should wait
08:08:21 * monerobull[m] uploaded an image: (59KiB) < https://libera.ems.host/_matrix/media/v3/download/matrix.org/JAtOwHKUGWpxfZjmadiWQHoW/grafik.png >
08:10:55 Ok so we can't stop arb data storage
08:11:10 we want a little bit of tx_extra to prevent the more harmful way of data storage that is using outputs
08:11:37 I would guess that the mempool isn't clearing because the larger relative size of the morbs causes a sorting problem, meaning something is being left behind more often when a block is formed
08:12:09 you can still chain a lot of transactions with a smol tx_extra
08:12:14 what is the only "real" reason to use more than the limited tx_extra and instead use more than 1 tx? nfts
08:12:18 or data storage
08:12:35 so that just needs to be as unattractive as possible
08:13:02 I don't see how we can prevent chaining txns
08:13:07 for now it looks like we won against asics with forking around a bit and settling with randomx
08:13:13 hyc: we don't
08:13:35 we just need to make nfts themselves as unattractive as possible
08:14:10 aka break the transferring of them as often as it takes
08:14:32 steganography in outputs, afaik, is only feasible because there are so many spare bits in rangeproofs/bulletproofs to begin with
08:14:52 nobody will flood the chain with massive nft "collections" if they don't have the prospect of profiting off selling the collection to others
08:15:23 The solution is deterministic decoy selection
08:15:58 why can't we eliminate "spare" bits from outputs, thus leaving nowhere for steg to operate?
08:16:06 Because they "transfer" the morbs by making a custom decoy selection of burnt outputs
08:17:39 I thought steg could live anywhere by definition, not just in spare bits? Like you make a zero-amount output and hide it in the nonsense receiving address
08:18:47 not necessarily. I don't believe so anyway.
08:19:15 ASN.1 and Unicode had similar problems - there are multiple possible bit encodings for the same character
08:19:58 so additional rules were added to e.g. ASN.1 DER and UTF-8 to require that only the shortest possible encoding is used, and any others are ruled invalid.
08:20:58 if we define a set of canonical encoding rules, then toggling spare bits of valid inputs will make them invalid.
08:22:54 steg only "works" because there are bits in the stream that are too insignificant to affect the output when they're toggled.
08:23:19 tighten up the encoding rules, and that goes away.
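As a rough illustration of the canonical-encoding idea above (a sketch only, not Monero consensus code): require that every parsed value re-encodes to exactly the bytes it was parsed from, the same way ASN.1 DER and UTF-8 mandate shortest-form encodings. The varint format here is a hypothetical stand-in for whichever serialized field would be constrained.

```python
# Sketch: reject non-canonical varints, analogous to the "shortest possible
# encoding" rules in ASN.1 DER / UTF-8. A value must re-encode to exactly the
# bytes it was parsed from, leaving no redundant bit patterns for steg.

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0x00))
        if not n:
            return bytes(out)

def decode_varint(data: bytes) -> int:
    n, shift = 0, 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
    return n

def is_canonical(data: bytes) -> bool:
    # Canonical iff re-encoding the decoded value reproduces the input bytes,
    # i.e. no superfluous continuation bytes were used.
    return encode_varint(decode_varint(data)) == data

assert is_canonical(bytes([0x00]))
assert not is_canonical(bytes([0x80, 0x00]))  # same value 0, longer encoding
```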
08:32:38 After seeing an argument that arb data can be stored in outputs (32 bytes per output), I'm more inclined to save tx_extra as an optional and encrypted 256-byte chunk of data
08:32:58 and make it pay a higher fee, but not higher than "output storage" would need
08:33:55 hyc you can store 32 bytes per output by overwriting the stealth address of the receiver
08:34:07 so you can generate 16 outputs of 1 piconero each
08:34:12 or even 0 XMR each
08:34:25 16*32 = 512 arbitrary bytes per transaction
08:34:52 well, one of the outputs must be real
08:35:01 or there will be no way to "prove NFT ownership"
08:35:13 so 480 bytes per transaction
08:35:48 almost 1 whole disk sector ;)
08:37:00 and if the fee is something like 0.0000001 per byte (5x the usual), it will cost 0.000048 XMR to store 480 bytes, or ~0.1 XMR per MB
08:37:58 no, it will cost more because it will be a 1in/16out transaction
08:39:20 so if X = fee for the 256-byte tx_extra field, A = fee for a 1in/2out regular tx, B = fee for a 1in/16out regular tx, then X = B-A is the maximum fee we can charge for this field
08:40:46 yes, my quick calculations show we can charge 5x more per byte for this 256-byte field
08:41:29 isn't tx_extra currently used in merge mining?
08:41:40 that's in coinbase
08:41:47 yes, but no one is talking about removing it from coinbase
08:42:28 ok. presumably then this 5x should not apply to coinbase txns either
08:42:54 coinbase tx doesn't pay fee at all
08:43:22 it "pays" the fee by occupying block space and not letting other transactions into the block, if the mempool is full
08:43:23 yeah. sorry, been up all night, a bit fuzzy headed now
08:45:09 maybe it will be better to always have encrypted tx_extra, for better transaction uniformity
08:45:44 but can we realistically enforce that
08:45:50 in a hardfork, yes
08:46:00 I'm talking about post-Seraphis
08:46:34 limit it to 1060 bytes before the hardfork, make it an encrypted and mandatory 256-byte field after the hardfork
08:46:51 i thought detecting if something is encrypted or not is hard
08:47:05 there are statistical tests for random sequences
08:47:42 They are flawed. Lots of false positives
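To make the fee arithmetic from the 08:37-08:40 exchange concrete, here is a minimal sketch of the X = B - A bound and the 0.0000001 XMR/byte figure. Function names and the per-byte rate are illustrative placeholders from the discussion, not proposed network parameters.

```python
# Back-of-the-envelope sketch of the fee reasoning above.

FEE_PER_BYTE = 0.0000001  # XMR/byte, the hypothetical "5x the usual" rate

def tx_extra_storage_cost(arb_bytes: int) -> float:
    # Naive per-byte cost of stuffing data into a fee-charged tx_extra field:
    # 480 bytes -> 0.000048 XMR, ~0.1 XMR per MB.
    return arb_bytes * FEE_PER_BYTE

def max_tx_extra_fee(fee_1in_2out: float, fee_1in_16out: float) -> float:
    # X = B - A: the most the 256-byte tx_extra field can be charged before
    # "output storage" (a 1in/16out tx writing 32 bytes into each fake output,
    # 15 usable outputs = 480 bytes) becomes the cheaper path again.
    return fee_1in_16out - fee_1in_2out

print(tx_extra_storage_cost(480))        # 4.8e-05 XMR for one 15-output payload
print(tx_extra_storage_cost(1_000_000))  # ~0.1 XMR per MB
```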
08:47:43 we can crank up their sensitivity - even if tests fail for 1% of real encrypted messages, the wallet can just try a different encryption key to pass the test
08:48:05 1% extra work for the wallet, but it will detect unencrypted data quite easily
08:48:40 so false positives (<1%) are even desired
08:49:43 i assume the tests require really little resources for nodes, right
08:50:16 we can pick only those tests which require little CPU time, there are plenty to choose from
08:50:38 https://github.com/blep/TestU01
08:51:05 An NFT protocol could simply XOR the jpeg with some other part of the transaction, making it trivial to decrypt the blob which otherwise passes the statistical test
08:51:50 in other words, NFT protocol can just publish their encryption keys
08:52:07 Yes
08:52:07 true, encryption would only help on the very surface
08:52:13 It's up to them, we just want transaction uniformity at the blockchain level, without offchain data to decode it
08:52:22 So statistical tests for encryption are simply a waste of time
08:52:56 not a waste if all default wallet implementations use encryption
08:52:58 they would help against people storing arb data outside of public nft protocols
08:53:28 because if we make the "default" way (encryption) the easiest way, everyone will use it
09:06:03 we lost every matrix user
13:51:24 Arbitrary data storage will always be limited by the path of least resistance. The true cost of arbitrary data storage of X bytes will be however much you permit cheap storage (with tx_extra) + steg. The more expensive arbitrary data storage is, the less of it will be used. The true price limit will always be steg, so the way to limit arbitrary data storage is to remove all the paths of least resistance and enforce stricter
13:51:24 validation on steg as hyc suggested.
14:23:44 "we want a little bit of tx_extra..." <- keeping tx_extra in a restrained form does not *prevent* an individual from using outputs for data storage, but it may *disincentivize* use of outputs for that purpose
14:34:09 "That cost analysis also applies..." <- would tightening the dynamic block algorithm help mitigate a flood attack?
14:34:50 fr33_yourself[m]: the algorithm is already designed to do that
14:36:04 alrighty thanks :) I figured there were probably tweaks made after the 2021 flood
14:37:08 I agree with Alex and HYC's ideas, but those solutions sound pretty hard to get right and implement, so perhaps they are likely to be longer-term solutions as opposed to actions that can be deployed quickly
14:38:54 There's no output spam at the moment.
15:29:58 UkoeHB: bp++ author has set a target to complete the new draft by April 14th
15:36:23 Sweet
18:42:15 I don't see a way to prevent steg (when used in pub keys) because we have to allow any valid pub key (and we know one can easily brute-force a few chosen bits into any 32-byte key).
18:42:19 We certainly can (and cheaply) do a statistical test on tx_extra.
18:42:22 The simplest is an entropy test (https://github.com/jtgrassie/cent/) whereby we enforce entropy of something like ~7 bits per byte.
18:42:25 Sure, there will be several files/types which will pass, but that's ok as many unencrypted ones will fail.
18:43:51 jtgrassie: Does an entropy test have a p-value?
18:45:22 With an entropy test we just define the level of bits per byte we deem distributed enough. Any valid encryption should be approaching 8.
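For reference, a minimal sketch of the per-byte Shannon entropy check being described (in the spirit of https://github.com/jtgrassie/cent/). The 7 bits/byte threshold is the figure floated above, not an agreed parameter.

```python
import math
import os

# Sketch of a per-byte Shannon entropy threshold over a 255-byte tx_extra.

def shannon_entropy_bits_per_byte(data: bytes) -> float:
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    # Byte values that never occur contribute nothing (0 * log 0 taken as 0),
    # so "missing" symbols do not produce NaN.
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def looks_encrypted(tx_extra: bytes, threshold: float = 7.0) -> bool:
    return shannon_entropy_bits_per_byte(tx_extra) >= threshold

print(looks_encrypted(os.urandom(255)))      # True for most random/encrypted blobs
print(looks_encrypted(b"hello world" * 23))  # plaintext: ~2.8 bits/byte, False
```

A ciphertext that happened to land just under the threshold could simply be re-encrypted with a fresh key, as noted in the 08:47 exchange above.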
18:45:56 My skepticism is that you don't set a false positive rate with an ad hoc test like this
18:47:28 To say nothing of the false negative rate
18:51:41 Most encryption algos have high entropy on output, that's a goal. We can even suggest algos people should use to ensure they pass.
18:53:29 Remember, our goal here is to reduce unencrypted data, and high entropy does that.
18:54:16 But if something unencrypted random passes our test, fine.
18:55:43 If there is a better test, should we use it?
18:56:24 There are other distribution tests (chi-sq comes to mind), we should use the fastest
18:57:52 The desired outcomes should be determined. What are the false positive and false negative rates we want? And the computational speed?
18:58:22 Indeed
18:59:42 This is Shannon Entropy, right?
19:05:06 yes
19:07:39 any bit distribution test can be trivially avoided
19:10:21 it is pointless to play whack-a-mole with people who don't want privacy, all we can do is design to support the people who do
19:14:22 Hi all. My name is Dimitri. I am a researcher in elliptic-curve cryptography. I have read the Monero documentation. That is why I asked the following question on the site https://crypto.stackexchange.com/questions/105870/secure-permutation-of-e-mathbbf-q-as-a-set-for-an-elliptic-curve-e-over
19:14:50 Let me know please if you have any comments on my question.
19:18:00 Your hash function E(Fq) -> E(Fq) on the curve Ed25519 raises many questions on the Internet. I think that it is necessary to conduct research on this topic. I am ready to do that.
19:18:57 Shannon Entropy of tx_extra of 07c818d91365ea0959dcae1fc45c47477e4ea0b7be3bf5df7368c8dea01d1e21: 7.947646
19:18:58 Shannon Entropy of tx_extra of 1bef5d92e56076a80c4b72089617f92698edfce77cae4ce2ad998761759943d0: 7.969315
19:19:27 UkoeHB: "trivially avoided" yes, encrypt and publish the encryption key, but our goal would be a quick test to prevent easily readable data.
19:19:40 The former tx is not a Mordinal. The latter is a Mordinal. Are my calculations incorrect?
19:19:46 Dimitri[m]: https://web.getmonero.org/resources/research-lab/pubs/ge_fromfe.pdf https://arxiv.org/pdf/0706.1448.pdf
19:20:29 jtgrassie: much more trivial than that, just XOR in some fixed string or even another part of the tx
19:22:36 Dimitri[m]: there is another mapping called Elligator2 that has similar properties
19:26:14 "Dimitri: https://web.getmonero...." <- I took a look earlier at these sources. The first is not published. The second is obsolete.
19:26:50 Obsolete?
19:28:15 There is the draft https://datatracker.ietf.org/doc/draft-irtf-cfrg-hash-to-curve/
19:29:52 not to mention recent research like https://link.springer.com/chapter/10.1007/978-3-031-22963-3_3
19:30:26 ^ I may have messed up the labels above. One moment....
19:31:39 Obsolete as in, there are better functions now? In order to change the hash-to-point function we'd need a major hard fork of a similar magnitude to Seraphis.
19:31:54 Rucknium[m]: those are only over 32 bytes (and your results are off)
19:32:21 We would be testing over 255 bytes (not 32)
19:35:52 UkoeHB: Yes, there are faster hash functions to elliptic curves. Nevertheless, your approach is still safe of course.
19:39:20 07c818.. has 4.8125 (over its 32 bytes) and 1bef.. has 4.9375. But a test over only 32 bytes is not the same as over 255. The more bytes, the better the test.
19:40:54 I think I misidentified the txs. I am using the JSON response, which gives tx_extra as 0-255 integers
19:43:01 I also misunderstood your msg. I thought you posted the actual tx_extra, not a txid!
19:43:24 Technically, how do you deal with a "sample" that is missing one or more of the possible 0-255 integers when calculating entropy? Just putting it into the formula gives NaN.
19:43:57 You enforce 255 bytes
19:44:16 I accidentally chose two Mordinals 😅
19:46:24 The sample probability of the "missing" integers/bytes is 0. log(0) = NaN
19:46:34 It's also pointless doing tests on the existing blockchain as we use tx_extra for other purposes. Tests have to be independent
19:49:43 It isn't necessarily completely pointless if you strip away the "regular" bits of info from tx_extra and test the entropy of what's left
19:50:40 Presumably in the future, if a randomness test were to be enforced, it wouldn't happen over needed bits of information like encrypted payment IDs and additional pubkeys
19:51:21 That would leave the majority of transactions with a tx_extra entropy of effectively 0
19:51:31 At a minimum, those things should have a high probability of passing the test. That's an example of encrypted content or public keys that should pass the test.
19:51:58 True
19:53:50 I'm not super familiar, but aren't there tests that measure entropy changes over parts of the string, sort of like a local entropy? We wouldn't want someone to "hide" low-entropy data by putting it next to truly random bits
20:04:42 I made that point about a month ago: more powerful tests would consider the sequence of bytes instead of just the set of bytes.
20:14:38 Yes, and other tests should be considered vs just using an entropy test.
20:17:28 Many images would pass an entropy test of say 7 bits per byte (over 255 bytes), but many would also fail. Using most enc algos, though, the result would almost always be higher than 7.
20:19:46 I agree we need to define and test, but we should remember our goal is not to enforce encryption, just to have a simple, fast test to pluck out low-hanging fruit.
20:20:23 ("low-hanging fruit" being obviously not encrypted)
21:41:39 Yes, I thought the idea was that anyone designing to use tx_extra would get too many dropped txns unless they default to encrypting. Which solves maybe 99% of the problem, even if you can individually find ways of passing non-random data in tx_extra.
21:52:26 I pinged the authors of the constant-size range proof paper, as a reminder, and they confirmed they will be at tomorrow's meeting. Looking online I found another constant-size range proof paper from 2022 called Cuproof. UkoeHB kayabanerve: Have you read this before? https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=179&browserTabID=
21:53:01 I believe Cuproof has a prior discussion
21:53:58 The name sounded familiar, but I don't think I looked at it until now
21:54:58 Trusted setup RSA group
21:57:14 ew, nevermind
21:57:42 what's wrong with a lil' rsa in our protocol
21:57:59 *I continue to be horrified by the continued usage of RSA, even if I understand its practicality.
21:58:30 BawdyAnarchist[m]: "Which solves maybe 99% of the problem..." <- Exactly this
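Picking up the "local entropy" point from the 19:53/20:04 exchange above, a hedged sketch of a windowed variant: slide a window across the field and take the minimum, so low-entropy data cannot hide next to a run of genuinely random bytes. The window size is illustrative, and note that a w-byte window can show at most log2(w) bits per byte, so any per-window threshold would have to sit below the whole-field one.

```python
import math

# Sketch of a minimum windowed ("local") entropy check over tx_extra.

def entropy_bits_per_byte(data: bytes) -> float:
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def min_window_entropy(data: bytes, window: int = 64) -> float:
    # Entropy of the weakest window; a checker would compare this against a
    # (lower) per-window threshold rather than the whole-field threshold.
    last = max(len(data) - window, 0)
    return min(entropy_bits_per_byte(data[i:i + window]) for i in range(last + 1))
```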