08:29:03 hello, good day
08:29:42 i see a recursion in that code. Does the bulletproof that Monero uses also have a recursion?
08:30:00 https://github.com/AdamISZ/bulletproofs-poc
08:38:49 and i don't even know how to run that code, 100^G dependencies ...
08:46:54 "i see a recursion in that code...." <- Well, there is a while loop that performs inner products until it is done. But I don't believe there is a function calling itself until a condition is reached.
08:47:21 right !
08:47:45 that is a blessing :)
08:48:02 "and i don't even know how to run..." <- From my experience up to now, it is not trivial to run any Python code that simulates what Monero is doing. I'm putting a lot of effort into doing that, actually.
08:48:09 i don't like recursive functions
08:49:07 i wish you all the best !
08:50:29 i want to reproduce verbatim what the bulletproofs are doing.
08:52:18 and as the pedersen commitments are additively homomorphic, i want to use Z_p instead of ec points
08:52:46 If everything goes well, by the end of next month I will publish some explanations and code to do that ;)
08:52:56 and use a small p such that the reader can convince himself of that too
08:53:55 i am glad to help :)
08:58:39 again, there is a sort of halving of a vector happening in the range proof of the current Monero code? Does this explain it :
08:58:42 https://github.com/SarangNoether/skunkworks/tree/pybullet
08:58:44 ?
08:59:22 furthermore, which cpp file contains the rangeproof in the source of xmr?
09:06:02 "furthermore which cpp file..." <- For BP you can find it here: https://github.com/monero-project/monero/blob/master/src/ringct/bulletproofs.cc#L724
09:16:52 what is that :
09:16:55 https://github.com/monero-project/monero/blob/master/src/ringct/bulletproofs.cc#L698
09:18:03 by the time i try to calculate t_1 and t_2 i hit a bump.
09:29:53 Is there a cryptographic reason outputs are limited to 16 when we can have hundreds of inputs? Outputs add 32 bytes in proofs. Inputs add ~half a KB and several more ops.
09:33:02 No.
09:33:23 (it's arbitrary really)
09:34:36 I'm not sure whether the limit was added before the concept of weight was introduced. I believe the limit is intended to prevent someone from stuffing a tx with a huge number of outs, which could be used to cheaply
09:35:28 spam the chain (which is less of an issue now with the weight concept) and increase verification time. This can be done with lots of txes anyway, so the difference would be fees.
09:36:51 right, each output needs to be scanned by every syncing wallet
09:37:41 I was about to comment on how this makes no sense regarding nodes, yet when you include wallets in the consideration... it makes a bit more sense
09:39:20 We have newfound leeway with view tags? Yet as I noted, each output adds a trivial amount (probably ~140 bytes in total). With a TX size limit of 150kb, we could have 1000 outputs per TX...
09:40:24 Though that is scanned fully as just 4 outputs now, with solely partial processing on the rest. So it may not be an issue overall? Or it may make sense to limit to 64/256 outs?
09:45:18 kayabaNerve, a tx with 64 outs?
09:45:46 slave_blocker: What about it?
09:46:19 Are you asking if one exists or asking why I would want one?
09:46:26 seems only exchanges would need to do that
09:46:30 ... right
09:46:37 :p
09:47:30 BP clawback was added after multi-output BPs (from commit dates).
09:47:33 The 10-block lock means there's 20 minutes for a given group of funds. The simplest fund management algorithm greedily consolidates without optimization, meaning you can only act every 20 minutes. 16 outputs means you get <1 payment per minute, which isn't feasible.
09:48:09 So you can either write a very complicated scheduling protocol which divvies into 10x, re-balances, and is able to merge groups temporarily... or write a log scheduler for outputs, which is my plan.
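The growth arithmetic behind the log scheduler mentioned here can be sketched numerically: with a 16-output limit and the ~20-minute (10-block) lock, each fan-out round multiplies the number of spendable outputs by up to 16. A toy sketch (the constants and function name are my own illustration, not wallet code):

```python
OUTPUTS_PER_TX = 16  # consensus output limit discussed above
LOCK_MINUTES = 20    # 10-block lock is roughly 20 minutes

def steps_to_reach(n_outputs: int) -> int:
    """Number of 20-minute fan-out rounds needed to grow one
    spendable output into at least n_outputs (log16 growth)."""
    steps, have = 0, 1
    while have < n_outputs:
        have *= OUTPUTS_PER_TX
        steps += 1
    return steps

# 16^3 = 4096, so reaching 4096 outputs takes 3 rounds,
# i.e. one hour of wall-clock time under the 10-block lock.
print(steps_to_reach(4096), steps_to_reach(4096) * LOCK_MINUTES)
```

This matches the 4096x/hour figure discussed below: three 20-minute rounds of 1-in/16-out transactions per hour.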
09:48:34 That doesn't stop me from advocating for increasing the output limit when it's currently cryptographically incredibly cheap and also decently minimal in TX KB, which is paid for by fees.
09:49:22 If someone is still in contact with sarang, they could ask whether the limit was made before weight was considered.
09:49:48 I'll ask later today
09:50:10 Send greetings along with it then please :)
09:53:10 Will do :)
10:34:00 BTW p2pool has transactions with hundreds of outputs already today and nothing broke
10:56:15 kayabaNerve: a fund management algorithm can grow the number of outputs 15 times every 10 blocks (1in/16out transactions). If wallet maintenance takes 1 hour, an exchange can have 3375 times more outputs in that time.
10:57:05 Actually 16 times every 10 blocks, so the multiplier will be 4096x/hour
10:58:27 sech1: Right. "log scheduler". In this case, log16 :)
10:58:59 Thanks for giving me the heads up. If I wasn't already there, I'd definitely need it
10:59:34 But such a scheduler will eventually be limited by the block size
10:59:51 So around 100-150 transactions per block
11:00:13 Or 1600 new outputs every 2 minutes
11:02:19 But that's over 100 user withdrawals per second sustained throughput, and blocks will grow under such load
11:02:38 *10 withdrawals per second
11:05:59 Well, I'm only planning on doing 1 line of the pyramid every 20 minutes, for n pyramids. It's not per block. I'm also expecting a small fraction of that load :p This isn't CryptoKitties.
11:06:36 And I'm not tying my code to exact block timing. Just an understanding that it needs to assume a delay of at least 20 minutes before the next opportunity to act.
11:15:47 why is there a lock time of 10 blocks ?
11:16:07 why 10 and not 5 ?
11:17:45 does the wallet-cli have a command that disperses the funds into 5 sub-addresses "equally" ?
11:18:13 transfer
11:18:47 ie, transfer A a B a C a D a E a. Up to 15/16 address/amount pairs.
11:21:04 so if i want to buy 5 coffees within 15 minutes i can with the same wallet... Not that i claim any greatness at arguing about this in an influential way, and isn't the lock time issue negligible?
11:21:40 "why is there a lock time of 10..." <- Output keys are included in rings. We reference ring members by global output indexes. So instead of TX hash X, output O, it's global output O. It's only assigned a global output index once it's in a block. So that means 1 block. Any reorg with slightly different TXs, even just by ordering, would change your global output index and invalidate your TX.
11:22:07 Setting a lock of 10 blocks ensures reorgs don't change TX validity, so long as the reorg doesn't exceed 10 blocks. There was also a recent discussion on reducing it.
11:22:45 slave_blocker: You'd have to do them all at once, using `transfer coffee1 $1 coffee2 $1 ...`, OR have 5 inputs in your wallet, each having sufficient funds for its coffee.
11:23:27 a monerian should carry some weight on his shoulders as well, not putting it all on the devs
11:23:42 when i said shoulders i meant addresses
11:23:51 ... except I think wallet2 will use 2 inputs if possible, regardless of necessity, for privacy reasons. You'd actually need 9 inputs. I'm also not sure if it'll select a low-value second input or a random one. In the latter case, > 5 need to have sufficient funds in order to guarantee success
11:25:42 Random IIRC. You can avoid this by twiddling... a couple vars in the wallet.
11:26:29 min-outputs-{value,count}
11:27:00 Set these to 0,0 and it should never pick a second out if not necessary. I think.
11:27:01 "would you like this wallet for buying coffees?"
11:27:08 "yes!"
11:27:17 great, do step 1
11:38:39 moneromooo, when i was using my wallet-cli i noticed that while inputting the seed there is no tab auto-completion under linux. Would that be good to have?
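The reorg argument above can be made concrete: a global output index is just the position of an output in blockchain order, so any reorg that reorders transactions shifts the indexes that rings reference. A toy illustration (hypothetical data structures, not Monero's actual code):

```python
def global_output_indexes(chain):
    """Assign a global index to every output, in blockchain order.
    `chain` is a list of blocks; each block is a list of txs;
    each tx is its number of outputs. Returns {(block, tx, out): index}."""
    indexes, counter = {}, 0
    for b, block in enumerate(chain):
        for t, n_outs in enumerate(block):
            for o in range(n_outs):
                indexes[(b, t, o)] = counter
                counter += 1
    return indexes

# Two candidate chains containing the same txs, merely reordered in block 1:
chain_a = [[2, 2], [3, 1]]
chain_b = [[2, 2], [1, 3]]
# A ring referencing "global output 5" points at different concrete
# outputs on each chain, so the reorg would invalidate the ring.
print(global_output_indexes(chain_a)[(1, 0, 1)])  # 5 on chain A
print(global_output_indexes(chain_b)[(1, 1, 0)])  # 5 on chain B
```

Waiting 10 blocks before spending makes it very unlikely that a reorg deep enough to shuffle the referenced indexes will occur.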
11:41:56 Since the MRL meeting where we discussed reducing the 10-block lock, I have been thinking that wallet-level solutions such as Monerujo's PocketChange may present a privacy risk for some patterns of user behavior. If a user spends a large proportion of their wallet's balance, then outputs that are from the same transaction, or are temporally closely spaced, will be referenced in that large-value transaction.
11:42:31 Basically, the same risk from a sweep_all operation that has been discussed before.
11:59:02 Probably not. Only the prefix (usually 3 or 4 letters IIRC) matters anyway.
12:01:04 I've been wondering for a while whether doing a sweep_single on each output would increase the privacy of future spends
12:01:53 Because sweeping a single output actually increases the number of "backwards branches", and thus potential ancestors
12:02:08 Dummy inputs would help in that regard too
12:04:35 Actually, the discussion on "isolated enotes" for collaborative funding makes me wonder if they could be used in a non-interactive protocol that acts kind of like a coinjoin, but with hidden amounts
12:05:08 So that all user transactions in a block could be blended into one big combined tx
12:08:04 That would really blow up the number of potential combinations of associated inputs and outputs - good luck trying to probabilistically match those
18:53:23 moneromooo: Size is as I described (n). The number of proofs is a power of 2 though, and they must be padded. Computational complexity follows this power of two, and batches have the complexity of the largest included member.
18:53:28 That's why my argument doesn't hold
19:00:01 I'm a bit confused here. BP size is O(log(N)). Verification time is O(N) (N being the number of outputs).
19:03:30 ... is BP log(n) in size for the number of proofs? I thought it was n. Regardless, I was thinking more of TX size, which would be n per output. The issue is that the number of included proofs must be a power of 2 though. So we have 2^4 for 16. If we raise that to 32, it's 2^5, except now every single BP is verified as if it has 32 outputs, as soon as any individual BP is included with 17 outputs.
19:04:24 So considering no one legitimately needs that many outputs in a single TX, and it nukes batch performance... there's no reason to raise it
19:04:56 ... though I am a bit curious how p2pool handles it, given the lack of TX chaining.
19:05:04 I think if you include N dummy proofs, you still pay for them due to the concept of weight.
19:05:21 But it is true I'm not sure the weight includes *dummy* ones actually...
19:05:31 They wouldn't be serialized.
19:05:39 Is that relevant ?
19:05:53 ... since the TX fee is byte based?
19:06:12 uint64_t bp_clawback = get_transaction_weight_clawback(tx, n_padded_outputs);
19:06:19 So it seems to include the dummy ones in the weight.
19:06:22 Oh
19:06:27 (first approximation)
19:06:40 So if we have a block solely with 2-output TXs, from my current understanding, we'll batch verify with 2^1. If a single TX includes 3 outputs, we batch verify all as 2^2.
19:06:47 That's the issue described by sarang
19:07:28 "how p2pool handles it given a lack of TX chaining." uhmm what?
19:07:43 This is correct AFAIK. But the person will pay for the notional equivalent of 4 non-log outs.
19:07:53 sech1: What
19:08:29 Oh, I see, I think.
19:08:38 moneromooo: But they change every single TX in the block to be batch verified with it
19:08:51 ... they affect the batch verification of all TXs in the block?
19:08:53 Better phrasing
19:09:28 When batching, it is faster to verify 8x 2-outs than 1x 16-outs. Is that your point ?
19:09:33 kayabaNerve: you wrote it, I don't understand what you meant. p2pool transactions are "special", they have 0 inputs and therefore no bulletproofs
19:09:53 moneromooo: 8x 2-outs become 8x 16-outs if any single one of them becomes 1x 16-outs
19:10:14 Yes, but that's immaterial to the point AIUI.
19:10:33 ... I mean, it's nuking batch verification performance based on a single TX
19:11:02 Oh, for the purposes of my question, I assumed the limit is 2, not 16. So a 16-out tx has to be split into 8x 2-out txes.
19:11:10 I'll rephrase then:
19:11:19 Okay. I am going to do my best to sum this up.
19:11:25 When batching, it is faster to verify 8x 16-outs than 1x 128-outs. Is that your point ?
19:11:52 If we have a limit of 16, and we get 2 TXs of 2 outputs, we batch 2 TXs with 2 padded outputs. If we add a third TX with 9 outputs, we batch three TXs with 16 padded outputs.
19:12:20 Every single BP in the batch has the same padded output count.
19:13:14 Your cited weight code has the individual TX's weight increased for the padded outputs it uses. If it uses 17 outputs, when everything else in the block uses 2, we only charge it +15. In reality, it's +15 +30n
19:13:19 That's my understanding.
19:13:26 sech1: Nothing to do with BPs.
19:13:55 We were discussing log scheduling outputs earlier. The simplest theoretical p2pool implementation directly pays out from the block reward as expected. Given XMR's 16-output limit, that means the miner TX has 16 outputs, and then after 60 blocks, it'd immediately do 16 outputs on each of its outputs until...
19:14:03 OK. So you're saying that the "few out" txes get extra verification time for dummy BPs they never included. Right ?
19:14:08 (when batching)
19:14:13 right
19:14:25 OK. That makes sense then. Thanks.
19:15:32 So in an honest network, we'll frequently batch verify as if each BP has 2 outs. In an attacked network, we'll batch verify every BP as if it has 16 outs. If we increase the limit, which no one needs, the attack also gets worse.
19:16:14 sech1: I just have no idea how you schedule payments if you're using a log scheduler, given the lack of chaining, unless you have some multisig construction in play, which I'm not assuming.
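The padding behavior summed up above can be sketched: each proof's output count is padded to the next power of two, and (per the discussion) a batch is verified at the padded size of its largest member. A minimal illustration, not the actual batch verifier:

```python
def next_pow2(n: int) -> int:
    """Smallest power of two >= n (BP output counts are padded to this)."""
    p = 1
    while p < n:
        p *= 2
    return p

def batch_padded_size(output_counts):
    """Every proof in a batch is verified at the padded output
    count of the largest included proof."""
    return max(next_pow2(n) for n in output_counts)

# Two 2-output TXs batch at padded size 2:
print(batch_padded_size([2, 2]))     # 2
# Adding one 9-output TX pads it to 16 and drags the whole batch up:
print(batch_padded_size([2, 2, 9]))  # 16
```

This is the asymmetry being argued: one outlier transaction raises the effective verification cost of every proof batched with it.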
19:16:25 Miner payouts are not limited to 16 outputs
19:16:28 It could be that a top 16 gets paid out in that block, and you're not using a log scheduler, or...
19:16:30 Oh.
19:16:48 Do you know whether it'd be faster to verify 15x 2-out txes plus 1x 16-out tx in two separate, sequential verifications, rather than all these txes at once ?
19:16:50 ... never mind then :p
19:17:41 (because we can then make several batches, per number of padded outs)
19:17:45 moneromooo: Literally? No idea. From an estimation standpoint? ... it's pretty expensive.
19:18:21 multiexps are insane. I think my work got 100x with it on a scale of ~40. BPs are in the hundreds.
19:19:05 So yeah, if we're discussing several batches with +30 output counts, it may work out. I would hesitate to guess though, and I think you could write some timing code faster than I could for this :p
19:19:28 True...
19:24:14 moneromooo: I think I tested that, and splitting them doesn't help you
19:24:26 OK
19:25:47 https://github.com/UkoeHB/monero/blob/0f4e87f6820fe5c986459ba557870392e725a9a9/tests/performance_tests/main.cpp#L198
19:27:49 moneromooo: I found my old math. I was 10x for my entire protocol, of which multiexp was the worst part, with just 50 keys. I also think at the time I had only made one of my two multiexp targets a multiexp, so it may be a higher ratio when appropriately cordoned off.
19:27:58 Though ofc, UkoeHB has the actual numbers.
19:28:49 Speaking of koe, the discussion also came up that BP+ MPC is a pain to implement relies on a weaker security assumption than BP MPC.
19:29:12 Since Seraphis is no longer focusing on collab funding... not really an issue?
19:29:17 But wanted to forward the note
19:29:27 *pain to implement and relies on
19:30:13 for collaborative funding you'd need to send input amounts to the central tx coordinator anyway, so they know when sum-to-zero is achieved
19:30:28 so the tx coordinator can handle range proofs locally
19:31:12 it would be a huge pain to try to hide that (if it's even possible)
19:31:56 Or move to the on-chain protocol, which preserves privacy yet has higher fees/latency
19:32:10 BP MPC is possible. Can't comment on Seraphis MPC :p
19:59:20 UkoeHB: I messed up here. BPs require stronger assumptions and are a pain to implement over MPC because... what isn't. BP+s do leak amounts in an MPC setting. We discussed it in the context of dalek, and dalek is BP when I thought they were BP+.
20:08:09 let there be an expression of an inner product like : < a_l - z, (a_r + z)*y + 2^n > ... how to deal with the arithmetic ? How does that expand formally ?
20:21:16 i mean, i know the summation formula... but how does the expression evaluate ?
20:44:39 i mean, a(b + c) evaluates to ab + ac, so what does < (a + b)*c , d > evaluate to ?
20:45:17 don't get me wrong, it's written on my mirrors, it just looks fancy !
20:45:25 like poetry.
20:46:09 If i don't get it, at least it looks fancy
20:46:16 :]
21:10:59 https://ccs.getmonero.org/proposals/gingeropolous_zenith_storage.html
21:13:20 I think you can thank me for that, seeing how it was merged a mere 15 minutes after i expressed my support for the proposal /s
21:43:56 Here is the analysis of the Federal Reserve data on cryptocurrency use that I discussed at the meeting yesterday:
21:43:56 https://www.reddit.com/r/Monero/comments/uyi6kw/new_data_on_banking_the_unbanked_in_the_us/
21:44:25 If you don't want to read it on Reddit, give me a few moments to put it on my website.
22:02:29 Ok, here it is. Also available as an onion hidden service:
22:02:30 https://rucknium.me/posts/financial-marginalization-and-cryptocurrency-payments/
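On the inner-product question above: the vector inner product is bilinear, so it distributes exactly like ordinary multiplication, e.g. <(a + b) o c, d> = <a o c, d> + <b o c, d>, and a componentwise (Hadamard) factor can be moved to either side since <a o c, d> = <a, c o d>. A small check over Z_p with a toy prime (my own illustration, in the spirit of the "small p" idea discussed earlier):

```python
P = 101  # small toy prime so the reader can check the arithmetic by hand

def inner(u, v):
    """Inner product <u, v> over Z_p."""
    return sum(x * y for x, y in zip(u, v)) % P

def hadamard(u, v):
    """Componentwise (Hadamard) product over Z_p."""
    return [(x * y) % P for x, y in zip(u, v)]

a = [3, 5, 7]
b = [2, 4, 6]
c = [9, 8, 1]
d = [5, 5, 5]

# <(a + b) o c, d> = <a o c, d> + <b o c, d> = <a + b, c o d>
a_plus_b = [(x + y) % P for x, y in zip(a, b)]
lhs = inner(hadamard(a_plus_b, c), d)
rhs = (inner(hadamard(a, c), d) + inner(hadamard(b, c), d)) % P
alt = inner(a_plus_b, hadamard(c, d))
print(lhs == rhs == alt)  # True: the inner product distributes termwise
```

The same rules expand <a_L - z, (a_R + z)*y + 2^n> into a sum of four simpler inner products, which is how the t_1 and t_2 cross terms are collected in the Bulletproofs derivation.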