14:44:49 New paper:
14:44:49 https://eprint.iacr.org/2021/1674
14:45:21 "Efficient and Post-Quantum Zero-Knowledge Proofs for Blockchain Confidential Transaction Protocols"
14:45:21 Shang GAO and Tianyu ZHENG and Yu GUO and Bin XIAO
14:45:40 >Abstract: We propose new zero-knowledge proofs for efficient and post-quantum ring confidential transaction (RingCT) protocols based on lattice assumptions in Blockchain systems. First, we introduce an inner-product based linear equation satisfiability approach for balance proofs with a wide range (e.g. 64-bit precision). Unlike existing balance proofs that require additional proofs for some ''corrector values'' [CCS'19], our
14:45:40 approach avoids the corrector values for better efficiency.
14:45:55 >Furthermore, we design a ring signature scheme to efficiently hide a user's identity in large anonymity sets. Different from existing approaches that adopt a one-out-of-many proof [CCS'19, Crypto'19], we show that a linear sum proof suffices in ring signatures which could avoid the costly binary proof part. We further use the idea of ''unbalanced'' relations to build a logarithmic-size ring signature scheme.
14:46:07 > Finally, we show how to adopt these techniques in RingCT protocols and implement a prototype to compare the performance with existing approaches. The results show our solutions can reduce about 25% proof size of Crypto'19, and up to 70% proof size, 30% proving time, and 20% verification time of CCS'19. We also believe our techniques are of independent interest for other privacy-preserving applications such as secure e-voting
14:46:08 and are applicable in a generic setting.
14:47:43 "based on lattice assumptions"
14:48:37 Shame sarang isn't around for a quick sanity check on how speculative these are.
14:51:04 Can have him look at it after the holidays
14:52:32 I was assuming he'd already know what those (lattice assumptions) are, and what amount of scrutiny they passed already, didn't mean "plz read paper".
14:52:49 ie, quick reject filter ^_^
14:54:25 Yeah, should be fine, and if it isn't a quick reject it should be interesting nevertheless.
15:02:38 My arguments for unique change addresses: https://gist.github.com/tevador/50160d160d24cfc6c52ae02eb3d17024#gistcomment-4006358
16:12:32 I think it can work. We can attribute funds in change outputs to the account of the first key image in its tx.
16:19:42 I think usually wallet software doesn't mix outputs from different accounts in a single tx
16:38:37 'usually'
17:50:51 I am going to write a baseline PoC for jamtis over the next couple weeks (excluding all the serialization stuff).
17:54:26 I already have a python script that can generate addresses
17:55:30 I mean a PoC integrated with seraphis, with a full account model and mock ledger.
17:55:43 cool
17:56:53 The biggest architecture conundrum I have is how to organize the account structure (with wallet tiers, merchant/nonmerchant, multisig).
18:13:28 "We can attribute funds in change outputs to the account of the first key image in its tx." > Oh, nvm attribution can be done via the one-time address construction.
18:35:28 Yes, this rule will apply not just to change outputs, but to all self-spends
18:37:03 How do you handle account history in that case? A change amount should be invisible, but a self-spend needs to show up as an event.
18:43:57 Maybe fork again with a domain separator "self-spend"? If checking change fails, check if self-spend.
18:44:11 how are they differentiated now?
18:44:39 I'm not sure... moneromooo ?
18:47:59 Another thing: we should modify the key k_a^i for self-spends from merchant accounts to hide them from Tier 1.5. Otherwise Tier 1.5 can still detect them via the amount commitment. I think we can use k_a^(i,j) as if it was a non-merchant account. So Tier 1.5 won't be able to see self-spends, which is fine for the intended use case.
18:50:34 The sent amount is stored in the wallet cache. If the wallet cache is destroyed and recreated from the chain, it'll be deemed to be 0.
18:54:38 I think it can be simply differentiated by the presence of a non-owned output. If the TX has a non-owned output -> the other output is change. Otherwise it's a churn and can show up in the history. I think churns also have a zero-amount fake change output.
18:55:36 yes they do
18:55:44 tevador: you can just redefine `q = H("dh", k_vb, K_e)`, which affects the whole output.
18:58:40 "I think it can be simply differentiated..." IMO this leaves too many edge cases. Better to have a robust solution that always works. I think adding another branch to output generation isn't a big burden.
19:01:56 Could be H("dh change", k_vb, K_e) or H("dh churn", k_vb, K_e)
19:03:51 "self-spend" > "churn", since there are reasons other than churn to self-spend (e.g. output splitting).
19:05:21 It can't usually be differentiated: if you have a 10 output and send yourself 3, you'll send 10 and get back 3 and 7 (ignoring the fee for the sake of the argument). You cannot guess which one was the amount sent.
19:06:21 so another new feature then
19:07:20 it's not really a feature, since all of this is hidden from users; another design detail :)
19:09:01 it's a tiny feature (correct tx history listing for self-spends after wallet restore)
19:09:19 oh I suppose
19:13:18 Btw, the zero-amount change output for self-spends should go to a random address, otherwise it's another leak (both outputs having a matching view tag). Not sure how it's done now.
19:13:40 right now it's to a random address
19:15:57 How many EC ops does "checking the view tag" require? It used to be at least a mult by G IIRC, but it's just dawning on me that the current conversation implies no ops (ie, public)...
19:16:16 one variable base scalarmult
19:16:35 With what scalar?
19:16:52 k_fr (new private key)
19:17:04 "find-received key"
19:17:54 OK, the third key, that resolves my sudden confusion. Thanks.
19:19:13 Yes, basically the current "view key" is split into two: "view-balance key" and "find-received key". Tier 1 wallet only has the latter.
19:19:32 Presumably (I kinda forget how the G mult is sped up), since a wallet will send that key to a third party daemon which will scan many outputs, that daemon would be able to preprocess the key to have that mult be essentially as fast as a G mult, right?
19:20:13 ie, the preprocessing gets amortized if many scans are to be done?
19:24:28 I guess some standard multiexponentiation algorithms could be used if you want to check multiple keys against the same base
19:25:43 No, I was thinking multiple bases against the same key. Like when multiplying by G. Same base, many keys.
19:26:40 I don't think you can optimize multiple bases for the same key, but I could be wrong
19:27:19 So... the speedup of multiplying by G is... hand crafted, knowing the bits?
19:27:35 If so, it seems SMC might work...
19:28:00 Unless G just has the right bit pattern to be fast. But I think it's not chosen to have a nice bit pattern.
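A minimal sketch of the scan flow discussed above, using the names from the chat (k_fr, k_vb, K_e) and the `H("dh ...")` labels suggested here; the group math is stood in by modular exponentiation and the hashes by SHA-256, so this is illustrative only, not the actual Jamtis/Seraphis derivation. It shows the two costs being discussed: a Tier 1 (find-received) wallet pays one variable-base scalarmult per output plus a cheap view tag compare, while a view-balance wallet classifies an owned output by recomputing the shared secret under the plain, "change", and "self-spend" labels ("if checking change fails, check if self-spend").

```python
import hashlib

# Toy stand-in for the curve: a multiplicative group mod a prime, so
# "x*P" below is modular exponentiation. In monero this would be one
# ed25519 variable-base scalarmult (generate_key_derivation(), supercop).
P_MOD = 2**255 - 19
G = 9

def scalarmult(x: int, point: int) -> int:
    return pow(point, x, P_MOD)

def to_bytes(point: int) -> bytes:
    return point.to_bytes(32, "little")

def H(label: bytes, *parts: bytes) -> bytes:
    h = hashlib.sha256()
    h.update(label)
    for p in parts:
        h.update(p)
    return h.digest()

# --- Tier 1 ("find-received") wallet: holds only k_fr. Cost per output is
# one variable-base scalarmult plus a one-byte hash compare. ---
def view_tag_matches(k_fr: int, K_e: int, tag: bytes) -> bool:
    derivation = scalarmult(k_fr, K_e)                  # the single scalarmult
    return H(b"view_tag", to_bytes(derivation))[:1] == tag

# --- View-balance wallet: holds k_vb, classifies owned outputs. ---
# "If checking change fails, check if self-spend": recompute q under each label
# and see which one reproduces the on-chain one-time address (modeled as a hash
# of q purely so the example is self-contained).
def classify_output(k_vb: bytes, K_e: int, K_o_onchain: bytes) -> str:
    for label, kind in ((b"dh", "received"),
                        (b"dh change", "change"),           # hidden from history
                        (b"dh self-spend", "self-spend")):  # listed after restore
        q = H(label, k_vb, to_bytes(K_e))
        if H(b"one-time address", q) == K_o_onchain:
            return kind
    return "not owned"

# Toy end-to-end check: a wallet sending a self-spend to itself.
k_fr, k_vb = 7, b"\x01" * 32
K_fr = scalarmult(k_fr, G)                # public find-received key
r = 12345                                 # sender's ephemeral secret
K_e = scalarmult(r, G)                    # ephemeral pubkey on the output
tag = H(b"view_tag", to_bytes(scalarmult(r, K_fr)))[:1]
q = H(b"dh self-spend", k_vb, to_bytes(K_e))
K_o = H(b"one-time address", q)

assert view_tag_matches(k_fr, K_e, tag)
assert classify_output(k_vb, K_e, K_o) == "self-spend"
```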
19:28:01 no, multiplying by G is a fixed base and multiple keys, so it would work
19:29:12 if you want to calculate x*B for the same point B and many different values of x, you can precalculate some table that will speed things up
19:30:34 moneromooo: view tags already use `generate_key_derivation()`, which can leverage supercop. Are you thinking there are further optimizations above that?
19:31:39 I don't know the details, I just know a*G is substantially faster than a*K, and if we reuse K a lot, some slow precalc might be amortized to make a*K fast.
19:31:43 Yes, if a third party has many private keys of users and wants to calculate k_fr^i K_e for each user i
19:31:52 I suppose if you need to compute `generate_key_derivation()` many times on the same base key, you could first deserialize to `ge_p3` representation (instead of deserializing for each call).
19:32:12 It would require an update to supercop
19:32:44 Anyway, this is just musing on possible speedups, kind of a tangent really.
19:32:45 Idk how costly deserializing is, compared to the scalar mult
19:34:06 the speedup is not just about the deserialization, it's possible to prepare some table of values like 1*B, 2*B, 4*B, 8*B, ... and then calculate x*B much faster
19:40:48 ah, well if you are scanning each output with 100+ keys, that might be worthwhile; would likely require a lot of expertise to code up
19:43:25 Scanning 10k outputs with one key.
19:43:55 Wait. I am an idiot.
19:44:04 Sigh.
19:44:06 Ignore me.
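For reference, the "table of 1*B, 2*B, 4*B, 8*B, ..." trick mentioned above looks roughly like this; a generic sketch (modular exponentiation again standing in for point doubling/addition), not ed25519 or supercop code. It helps when the base is fixed and the scalars vary, i.e. the a*G case, not the one-key-many-outputs scan the thread ends on: all doublings are paid once at setup, after which each x*B costs one addition per set bit of x. Production fixed-base code typically layers signed/windowed digits on the same idea.

```python
# Generic fixed-base precomputation: pay the doublings once, then each x*B
# costs only additions. The "group" here is integers mod a prime under
# multiplication (so "doubling" is squaring and x*B is B**x mod p); the same
# structure applies to curve points.

class FixedBaseTable:
    def __init__(self, base, add, identity, max_bits=256):
        self.add = add
        self.identity = identity
        self.table = []                  # table[i] = (2**i) * base
        cur = base
        for _ in range(max_bits):
            self.table.append(cur)
            cur = add(cur, cur)          # all doublings done here, once

    def mul(self, x):
        # x*B with one addition per set bit of x, no doublings.
        acc = self.identity
        for i, bit_base in enumerate(self.table):
            if (x >> i) & 1:
                acc = self.add(acc, bit_base)
            if (x >> i) == 0:
                break
        return acc

# Toy check against plain exponentiation.
P = 2**255 - 19
tbl = FixedBaseTable(base=9, add=lambda a, b: a * b % P, identity=1)
for x in (3, 12345, 2**64 - 1):
    assert tbl.mul(x) == pow(9, x, P)
```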