00:12:28 To clarify, RingCT sigs never appeared in a mainnet coinbase tx, correct?
00:12:30 https://github.com/monero-project/monero/commit/d22dfb75948e5ad382758c4731c840bd3f067cd0
14:26:25 Hello. I have an idea for a compression algorithm which might drastically compress a long chain of Monero blocks, at a high computation cost for the Monero node. Do you think it would be of benefit? Because as far as I understand, Monero wallets have to download the entire Monero blockchain to determine their balance and transactions. And when you initialize a wallet somewhere else, you have to download GiBs of data.
14:32:46 My idea, which *might* work, is using a version of Genetic Programming and treating the blockchain as data that we do Symbolic Regression on. The process is very compute-intensive, but if it works, it could turn a few MiBs of data into a few hundred bytes. I'm asking if this tradeoff is worth the benefit, before I start anything.
14:35:27 The Monero blockchain is opaque. If you don't know the private keys for each transaction, it's just random data for all intents and purposes. You can't compress random data.
14:35:49 information theory says you can't compress random data
14:35:55 Jinx
14:42:00 There might be some efficiency gains to be had in how the transactions are packed and in information sent redundantly, but the over-the-wire format is pretty efficient already
14:42:58 jberman has a branch which speeds up scanning by up to 2x by utilizing system resources better
14:57:44 random like real random? I've read BTC's whitepaper. So from what I understand, it's not like that, right?
14:57:53 I mean the blockchain and transactions and so on
15:07:21 Farooq [MR. Potato]: The biggest parts of txs and blocks are cryptographic public keys and signatures. Those are extremely close to true uniform random. If they were not, the cryptography could be broken. Very similar for Bitcoin and Monero. In Monero, the share of the tx data that is keys and signatures is larger than in Bitcoin.
15:08:12 hmm yeah didn't consider this
15:08:13 thanks
15:55:33 I don’t want to debate heavily on this, but I’m certain that’s a misconception and provably false. Either way this isn’t the right room for me to have that conversation.
15:58:02 You’re on the right track, but getting to the solution you’re looking for from there is easier said than done. Otherwise I would’ve expected this to be publicly well known.
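A minimal sketch of the point made at 14:35:27–15:07:21 above: feeding uniformly random bytes (standing in for key and signature data) through a general-purpose compressor does not shrink them. This uses only Python's standard-library zlib and os modules and is not tied to any Monero format:

```python
import os
import zlib

# 1 MiB of OS-provided random bytes, standing in for "data that looks uniform"
raw = os.urandom(1 << 20)

# Maximum-effort DEFLATE; on (pseudo)random input the output is typically
# slightly LARGER than the input because of header and block overhead.
packed = zlib.compress(raw, 9)

print(len(raw), len(packed))
```

Running the same experiment on structured input (text, JSON) shrinks it considerably, which is the contrast being drawn above.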
16:00:44 I’m not going to comment further in this room about this, to avoid a likely unwanted discussion, among other reasons
16:01:19 If it's provably false, do the honors and prove it (maybe in a different room)
16:01:40 Lossless compression of random data is news to me, can't wait to compress my whole SSD to 1 byte
16:14:11 Someone once made a clever scheme, and managed to create a set of files that did compress random data slightly. The trick was that the output was more than one file, and some information was implicitly encoded in the sizes of those files. That was nifty.
17:35:52 Of course it's easier said than done. I have already been doing GP research for nearly a year without the results I wanted to see. But others said I'm not on the right track.
17:38:04 Possibly there is one thing. I am not directly compressing the data. I am using GP as an SR tool to "predict data"
17:38:19 to train a model which generates the desired data using a program
17:45:55 If you’re stuck on that approach there are others you could try. But you’ve got the right general idea from what I can tell. Maybe I wouldn’t necessarily pigeonhole it through genetic programming specifically but more broadly ML, since you might be fixated on genetic programming as a GP researcher. Anyway, you don’t have to take my advice and just give up on it. All I’ll say is if @moneromoo cites that example he gave, that’s a proof by contradiction to the claim that you can’t compress a finite sample of randomly generated data.
17:47:05 It's a good idea to start at a small scale and see how things go
17:51:31 The scheme I mentioned did not actually compress, as some of the output data was not in the output files, but was still present, and not counted.
17:56:55 That’s a shame, would’ve been nice to put that to rest with a quick citation for him.
17:56:56 Anyway, I personally don’t believe it should be hard for anyone who’s sufficiently interested here to figure out a proof for themselves. But whatever happens, happens. Wish you luck.
18:12:54 I think a proof could be along the lines of... encrypting an N-bit string such that it can be unambiguously decrypted means that there's a one-to-one mapping to/from plaintext and ciphertext. So the ciphertext domain needs to be at least as large as the N-bit plaintext domain.
18:13:28 >= N bits, since you need enough bits to represent a unique ciphertext for every possible plaintext.
18:14:25 If the inputs are all equiprobable, you can't play on making some "probable" inputs encode to a part of the ciphertext domain that uses fewer bits.
18:15:22 Which is what encryption is doing, at a high level: map probable inputs to the part of the domain with fewer bits, while letting unlikely inputs encode to more bits.
18:15:50 In the above, when I write encrypt, I mean compress. Sorry.
18:16:26 If you take some good modern compression like zstd, it *will* compress some inputs to more bits than the input.
18:17:36 moneromooo, ty for the explanation. Nevertheless, my idea is a program which, when executed, generates a sequence of inputs
18:17:43 Does it still apply?
18:19:47 More generally you can think in terms of a generating function as an example, one that just so happens to generate a specific set of random data that was generated from atmospheric noise or something you believe to be a TRNG. Assuming TRNGs even exist, the result would be approximately the same using a good PRNG.
18:21:19 The intuition from there for how to go about making a proof should be kinda self-explanatory
18:21:42 Alright I should stop talking here now
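The counting argument sketched at 18:12:54–18:14:25 can be stated precisely; this is the standard pigeonhole form of the claim, added here for reference rather than quoted from the conversation:

```latex
\[
\#\{\text{$N$-bit strings}\} = 2^{N},
\qquad
\#\{\text{strings shorter than $N$ bits}\} = \sum_{k=0}^{N-1} 2^{k} = 2^{N} - 1 .
\]
% Since 2^N > 2^N - 1, no injective (losslessly decodable) map can send every
% N-bit input to a shorter output, and since only 2^{N-1} - 1 strings are at
% least two bits shorter, fewer than half of all inputs can save even two bits.
% With equiprobable inputs there is no "probable" subset to hand the short
% codewords to, which is the point made at 18:14:25.
```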
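The "program that generates the data" idea at 17:38:19 and the generating-function remark at 18:19:47 can be illustrated with a toy sketch. The hash-counter construction below is an arbitrary illustrative choice, not anything Monero uses: data produced this way looks uniform but carries only 32 bytes of entropy, so it trivially "compresses" to its seed plus the program.

```python
import hashlib
import os

def expand(seed: bytes, n: int) -> bytes:
    """Deterministically stretch a short seed into n pseudorandom bytes
    using a simple SHA-256 counter construction (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

seed = os.urandom(32)
data = expand(seed, 1 << 20)   # 1 MiB that looks random but is fully determined by the seed
print(len(seed), len(data))    # 32 vs 1048576
```

The catch for a blockchain is that, as 15:07:21 notes, the bulk of the data is keys and signatures derived from secrets the compressor never sees, so there is no short seed available to regenerate it from, and the counting argument above still applies.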