03:13:15 I missed the meeting but just caught up. Interesting that no one brought up the possibility of adding a TXPoW requirement similar to what tevador made for Tor. 03:16:11 If done correctly it could stop the spam without knee-jerk changes to the fee structure or blocksize. This would need to be carefully thought through with an adversarial mindset as it could add a DoS vector if an adversary discovers a way to cheaply increase the TXPoW requirement to something infeasible. There should be a cap. 03:37:58 ZK-SNARKs are based on the Knowledge of Exponent/Coefficient Assumption and the hardness of lattice problems. The KEA was theorized in the early 90s. lattice problems sufficient for cryptographic use have existed since the mid-90s. (even the SNARK authors mention that the former is a non-standard assumption that belongs to a lower falsifiability category.) 03:41:46 contrast that with the integer factorization problem and the discrete log problem, which have been known and seem to hold since the mid-70s. hardness assumptions are just that, assumptions, and cryptography in a way is about making assumptions and disproving them until we find ones that seem to hold well enough. naturally, newer assumptions have received less scrutiny on aggregate. 03:43:14 re: proof size, isn't the practical outcome the same -- you end up with a constant-sized proof as the root of verifiability? 03:47:32 I didn't say the math is wrong ("moon math" is not pejorative from my mouth). 03:53:36 I would be very surprised if currently more cryptographers had experience with ZK cryptography than with solutions based on e.g. discrete logs. but I would also like to be wrong on this, because it is an important field. 03:54:03 ... ZK-SNARKs are largely based on discrete logs. 03:54:35 I won't say lattice-based SNARKs don't exist. I will say the most popular SNARKs rely on the ECDLP.
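The capped TXPoW idea floated at the top of the log could look something like this minimal sketch. Everything here is hypothetical: the function names, the difficulty constants, and the volume-based scaling rule are illustrative assumptions, not tevador's actual scheme or any proposed consensus rule.

```python
import hashlib

# Hypothetical TxPoW sketch (all names and constants are illustrative):
# the required difficulty rises with recent tx volume but is hard-capped,
# addressing the DoS concern that an adversary could otherwise push the
# requirement to something infeasible for real users.
BASE_DIFFICULTY = 1_000      # expected hash attempts per tx under normal load
MAX_DIFFICULTY = 64_000      # the cap discussed above
NORMAL_TX_RATE = 10_000      # txs/day considered "normal" in this toy model

def required_difficulty(recent_tx_rate: int) -> int:
    scaled = BASE_DIFFICULTY * max(1, recent_tx_rate // NORMAL_TX_RATE)
    return min(scaled, MAX_DIFFICULTY)

def tx_pow_valid(tx_blob: bytes, nonce: int, difficulty: int) -> bool:
    # A hash below the target proves ~`difficulty` expected attempts.
    h = hashlib.sha256(tx_blob + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(h, "big") < (1 << 256) // difficulty

def mine_tx_pow(tx_blob: bytes, difficulty: int) -> int:
    nonce = 0
    while not tx_pow_valid(tx_blob, nonce, difficulty):
        nonce += 1
    return nonce
```

The cap is the key design point raised above: without `MAX_DIFFICULTY`, a flooder who can cheaply inflate `recent_tx_rate` turns the anti-spam measure itself into a denial-of-service vector.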
03:54:59 (and sure, additional assumptions on top of that) 03:55:52 As for *what* additional assumptions on top, there are works which are solely DL I believe. 03:56:56 > re: proof size, isn't the practical outcome the same -- you end up with a constant-sized proof as the root of verifiability? 03:56:57 SNARKs don't require a constant-sized proof, solely a smaller proof than circuit size. I believe the general agreement is even sqrt complexity is acceptable though there's a strong preference for logarithmic. 03:57:44 > "moon math" is not pejorative from my mouth 03:57:44 If you restate disrespect for no other reason than to spread the disrespect, it is from your mouth. 03:58:26 Again, 03:58:27 > We use the library to build a transparent zkSNARK in the random oracle model where security holds under the discrete logarithm assumption. 04:00:58 Knowledge of exponent assumptions are frequent with pairing-curve-based SNARKs. That doesn't mean all SNARKs rely on them. 04:03:52 yes, if those additional assumptions are broken, that is an issue for the scheme. I'm glad to hear not all schemes use it, or that others are in the works. 04:04:33 ... this is incredibly ironic given 1) CLSAG was deployed with uncommon assumptions 2) CLSAG's security proof was just found flawed 04:04:35 (this was in comment to how Mina uses recursive proofs, not SNARKs in general) 04:05:21 The CLSAG security proof was rewritten and now supposedly holds again. I wouldn't disparage the knowledge of exponent assumption when we believe we have deployed less common assumptions (at least, I believe less common). 04:05:59 You can have a recursive proof which isn't even sublinear in circuit size. 04:06:19 Nova, inherently, is linear in circuit size. Their sublinearity is from instantiating it with Spartan. 04:07:10 so if I point out that some people use moon math as derogatory, then I'm one of those people? come on. 04:07:26 *If* there was no other reason to bring it up, yes.
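To make the proof-size distinction in this exchange concrete, here is a toy comparison of the three growth classes mentioned (linear, square-root, and logarithmic in circuit size). "Elements" is an abstract unit of my own; real schemes differ by large constant factors per scheme, so this only illustrates the asymptotics.

```python
import math

# Toy growth-class comparison for proof sizes. "Elements" is an abstract
# unit; actual SNARK proof sizes differ by large per-scheme constants.
def proof_elements(circuit_size: int, growth: str) -> int:
    if growth == "linear":
        return circuit_size
    if growth == "sqrt":
        return math.isqrt(circuit_size)
    if growth == "log":
        return max(1, circuit_size.bit_length())
    raise ValueError(growth)
```

For a ~million-gate circuit (2^20), the sqrt class gives on the order of a thousand elements while the logarithmic class gives a couple dozen, which is why sublinear-but-not-constant proofs are still considered acceptable.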
04:08:03 You can only argue it as having factual benefit if you had a factual reason to bring it up. I don't believe you did. You distinctly established the smaller talent pool. 04:11:24 When asked about downsides, you said they're called "moon math", repeating a derogatory term without factual benefit. For you to have stated it factually, IMO, you should have said "They're largely disparaged by community members who refer to it with derogatory terms such as 'moon math'". That would've shifted it from a drive-by to a discussion on social push back. 04:11:58 But I really don't care to play the language police game when the insulted party is a group of mathematical algorithms and would rather move on. 04:12:35 I just wanted to counter the label applied to reduce its spread as community push back built on fear of the idea, due to that language being used, will solely harm Monero in the future. 04:18:36 Distinctly, CLSAG was proven and deployed with the κ-OMDL assumption. 04:21:13 I'm unsure that's actively considered more likely to hold than Knowledge of Exponent assumptions so not only would I not say SNARKs require weaker assumptions, I'm unsure I'd say most modern SNARKs do. 04:21:18 I was talking to someone who, judging by what and how they asked, weren't familiar with the concept, and how it's viewed in the Monero space by some people. that's the sole reason I brought that up. yeah, I think this here is nit-picking on me being concise and language-policing. I right after said how amazing innovations take place in zero knowledge (whatever the hardness assumptions). I didn't disparage. the differences in talent pool size and novelty are objective observations. 04:25:07 Distinctly, Halo, which isn't a SNARK yet is an IVC which would allow folding input/output proofs, is also solely DL. 04:36:19 I was talking about the second counterfeiting bug, that was noticed in 2018 by Gabizon.
the first one was caught by the MS researcher you mentioned, sometime before Zcash launched. and true, it wasn't an operator, it was an upper limit that was omitted from a group of formulas. 04:36:39 In that case, sorry, we are discussing two incidents. 04:36:47 I'm only aware of a single incident post-launch. 04:37:19 Apologies for being unaware of a prior incident and assuming you were referring to the one I knew about. 04:38:29 Err, no, I'm discussing the one noticed by Gabizon where extra elements were included in the transcript which should have been erased. 04:39:15 It has nothing to do with an upper limit according to Zcash's disclosure. 04:39:15 > Some of these elements are unused by the prover and were included by mistake; but their presence allows a cheating prover to circumvent a consistency check, and thereby transform the proof of one statement into a valid-looking proof of a different statement. This breaks the soundness of the proving system. 04:40:38 that is the one I know to have been present post-launch 04:40:56 Right, so we are discussing the same incident then. 04:41:12 They published variables which, in hindsight, should not have been published. 04:41:42 inflation-bug.png 04:41:44 It's not about a faulty mathematical operation or about bounds on range. 04:42:03 Oh. 04:42:24 So sorry. I've been working with integer protocols recently where range means something quite distinct. 04:42:24 this is the diff, they look like upper bounds to me 04:42:25 Yes, if you mean a range of polynomial elements, completely agree.
04:42:30 no problem 04:42:45 I thought you meant they allowed keys > the modulus or some similar behavior which enabled undefined arithmetic bla bla bla 04:43:10 (I actually am staring at a protocol which isn't ZK if the witness leaves an interval) 09:52:18 Just saw that the logs for the last 2 MRL meetings are not present in the corresponding GitHub issues, for the meeting yesterday and the one 1 week earlier. Not sure, is plowsof the right person to remind here, or do you Rucknium usually post yourself? I noticed this because I wanted to post a link of yesterday's discussion regarding possibly raising fees to Reddit, because that may be of general interest. 10:28:25 I can get to them later today rbrunner thnx 11:20:14 Thanks! 12:45:41 rbrunner7: I posted the logs. Thanks for the reminder. 12:48:33 xmrack: PoW for txs was suggested by BawdyAnarchist. Nano's original spam-prevention was PoW, but they got spammed anyway. IIRC Nano added "output" age and coin amount as prioritization criteria to fix their spam. Monero's amount and output age are hidden of course. 13:00:40 just checked, our min relay fees are twice as much per kilobyte as bitcoin's (1 sat/byte) ---- in chain-native terms, no USD, basically we are charging "2 monero-sats per byte" i.e.
0.00000002 xmr per byte 13:00:51 not sure if that's a comparison I've seen made in the fee convo 13:01:34 Yes sech1 noted that fees are "expensive" in terms of monero and we 5*d them recently 13:02:40 The easier way is to multiply Monero tx fees by Bitcoin price and see the result 13:02:44 Thank you for the comparison Lyza, didn't know this 13:02:49 They're NOT cheap 13:04:53 sech1: I got $2.43 as the *minimum* fee if xmr price == btc price :O 13:05:22 If Monero gets to Bitcoin levels of usage, fees will go down 13:05:29 So it won't be $2.43 13:05:48 right right 13:51:48 If my math is correct, a 1in/2out transaction fee (31 micromonero) is equivalent to 9-10 minutes of mining at 20 kh/s. Does it count as cheap or not? 13:54:19 in some ways that seems v cheap, in others pretty expensive 13:55:00 I guess I would say mostly it feels cheap 13:55:28 an average laptop user could "afford" like 20-30 TX per day from a laptop 13:55:45 which is way more than most people send 13:55:48 (I suspect) 14:03:22 another way of thinking about it, a network that probably settled a quarter billion USD in value collects what, dozens of dollars in fees prior to the recent transaction surge? more like 10k usd now (order of magnitude estimate) so an overall fee rate of umm, 0.004% 14:04:46 250,000,000 would be 10% of the market cap, is where that random-looking guess comes from, plus there's 80 million in volume visible on exchanges according to coingecko, so any kind of reasonable estimate puts the overall ratio of fees to value moved ludicrously low.... obviously fees don't scale with transaction value but, still 14:05:44 lots of kinda arbitrary ways to look at this :/ 14:09:25 what do you mean? 14:20:44 Raise the fee but don't force the user to mine it. Keep in mind the spammer can also mine the spam.
Even worse the spammer can move to a cold place and launch the spam from there 14:25:47 The real value of the network is its use, so it's better for the network to be usable by keeping transaction fees as low as possible; that way people will want to use it and there will be real value. The DN teaches us Bitcoin is shit, pretty much no one uses it anymore in real life, it's just speculation... bitcoin is pretty worthless in real life, will you buy a coffee with 5$ fees? Lightning will be censored as hell so... by lowering fees it stays usable, we want that, so the price will go up and the mining reward will be worth more in $, that's a real win-win... if fees go up, people won't like that and will leave it, the price will go down and the mining reward will go down in $. please never never raise fees, Monero is the last stand, it should be usable by everyone 14:25:52 The idea of TxPOW is because there is no cost effective way to make micro payments. It simply makes no sense in Monero 14:26:27 Now that sounds like a James Bond plot. Evil Dr Bytecoin builds a secret lair under the north pole and plots to destroy privacy worldwide, using the fossil fuel to, er, fuel his quest to pierce Monero's shield... 14:27:41 Well, I had a neat idea for off chain micropayments, mining to a service. There are RPCs for this in monerod. 14:28:30 tx pow seems interesting 14:29:29 but will it be useful for the network? 14:29:57 Replace TxPOW with Monero micro payments 14:30:38 I'm not very smart, and I'm not saying something needs to be done at all. I think a wait and see approach is a good idea for the short-term. But what is fundamentally enabling the attack, from my perspective, is that the fees are not, or will not, jump up high enough on the single entity doing the high tx volume. If fees went up enough in accordance with their spam then the attack would probably cease as it wouldn't be financially feasible. I don't understand the algorithms all that well, but isn't it fairly unlikely that the spammer will be able to pay enough to keep the blocksize elevated after the Long Term Median gets boosted up? Like he could keep this up for the next 65 days or whatever, but once the blocksize gets a boost upward I would think that would increase the total amount he has to pay to continuously fill blocks. 14:31:58 One issue here is the supposed spam occurred for the most part before the penalty was triggered 14:32:32 Yes, so that restrains the extent to which they can commit spam right? 14:33:15 Yes. It puts a limit on the growth of the spam 14:35:12 My point is the following: Is this tx spam really that big of a deal IF the flooder's expenses (to fill blocks) rise each time that the Long Term Median increases? If their expenses actually do not increase when the Long Term Median increases, then this flood is more likely to persist (as they continue expending the same level of financial resources to produce even greater tx volume). 14:35:51 No need to burn any additional fossil fuels.
Just redirect home hating to spamming 14:36:14 Green spam 14:36:18 Man... something powered by hate would be unstoppable. 14:37:34 I think someone actually built space heaters that mine. 14:39:07 If we expect their expenses to rise meaningfully when the next bump in the Long Term Median occurs, then I think we should just wait and see if they have adequate financial resources to continue flooding at a (meaningfully) more expensive price. This is my uneducated / n00b / rube take. If we expect that their total expenses (price to fill blocks with their own transactions) will NOT rise meaningfully enough after the next bump in the Long Term Median, then maybe tweaking the fee or dynamic block algorithms is a good idea to make it such that it is increasingly more expensive for a single entity to fill blocks with only their own transactions. (If this is even possible). 14:41:21 I don't know if Monero can possibly prevent a black marble attack, but it could (possibly) be made expensive enough to deter people from performing such an attack. Where they would have to expend increasingly more resources as their tx volume bumps up blocksizes. 14:42:08 For a constant rate of spam the cost over time is constant 14:44:15 Ok I follow, but when the block size (Long Term Median) increases that requires them to increase their rate of spam (to fill the new larger blocks) and therefore the cost over time will rise (as blocks are bigger post-expansion) correct? 14:44:27 With a constant rate of spam the block size remains constant unless the spam rate changes 14:45:45 And in our current scenario, the spammer is simply keeping the rate of spam constant? It doesn't look like they are trying to bump the blocksize up?
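The cost argument in this exchange can be sketched numerically. The constants are illustrative: the minimum relay fee figure quoted earlier in the log (0.00000002 XMR/byte) and Monero's ~720 blocks per day (one block every ~2 minutes).

```python
# Back-of-the-envelope sketch of the cost argument above. Constants are
# illustrative, taken from figures quoted earlier in this log.
FEE_PER_BYTE = 0.00000002   # XMR per byte (min relay fee, per the log)
BLOCKS_PER_DAY = 720        # one Monero block every ~2 minutes

def daily_cost_to_fill(block_bytes: int) -> float:
    """XMR per day to keep every block full at the minimum fee."""
    return block_bytes * BLOCKS_PER_DAY * FEE_PER_BYTE

# A constant spam rate has a constant cost over time, but if the long term
# median expands ~20% (the estimate given below), filling the larger blocks
# costs ~20% more per day.
before = daily_cost_to_fill(300_000)
after = daily_cost_to_fill(int(300_000 * 1.2))
```

At the minimum fee this comes out to only a few XMR per day to keep 300 kB blocks full, which is consistent with the point that fee cost alone is a weak deterrent until the penalty regime kicks in.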
14:46:07 It is a spam 14:46:20 If it is spam 14:46:57 I am far from convinced 14:47:27 But does it look like their intention is to maintain current spam levels in a careful manner so as to not increase the Long Term Median? Or does it appear they are spamming as much as possible in order to increase the Long Term Median (in about 60 days from now or whenever)? 14:49:54 The long term median would increase by 20% or so at this rate 14:50:04 ah the answer is right here. This is interesting. So it implies that they aren't obsessively pre-occupied with the simple objective of sending the maximum amount of transactions possible given the algo. If they were obsessed with this purpose then they would surely be smart enough not to let the block median fall down. 14:51:14 If they let the short term median fall then they have to start all over again. That is by design 14:52:25 I think a wait and see approach is fine. Based on Rucknium's comment if I had to speculate then the attacker IS NOT trying to push up the Long Term Median about two months from now. They are (if this is truly adversarial and malevolent) simply trying to poison as many outputs as possible within the current bounds of the blocksize. This will likely decrease effective ring-size, but I don't know if it's worth a hardfork to try to tweak things in this case currently. Not that my single opinion means anything anyways haha 15:29:08 https://matrix.monero.social/_matrix/media/v1/download/monero.social/qDIIRbkGvldebHZuGaGGVZxD 15:30:40 ^ The 100 block median has fallen back down a few times, but the block size is still increasing. 15:32:24 In the plot you can also see when miners push the block size up when there are high-fee txs that compensate for the block reward penalty for going over the limit.
16:18:15 Yes but the question is how to interpret the data before and after 300,000 byte block size 16:19:59 As the tx pool fills up one expects a stable block size 16:21:21 Since the tx pool acts as temporary tx storage to compensate for fluctuations in tx demand 16:35:51 By the way the bump in December 2023 is consistent with the seasonal retail surge. In the VISA case this can be 16x the annual average. Here we can have an average of say 1500 retail tx a day surging to 20,000 tx per day just before the holidays 16:37:07 This is critically important when setting the amount by which the short term median can surge over the long term median 16:42:38 The degree of fluctuation in the block size is not consistent in my view with a spam attack. The frequency of the fluctuations is way too high. Also look at the small fluctuations after the block size is over 300,000 bytes 17:24:19 Is it a good design that the short-term median can just drop all the way back to 300 kB, if the flood stops just for a couple hours? It dropped again today 17:37:30 Even if the flood stays at the same level, many blocks are found faster than in 2 minutes, so they will be smaller, and it can eventually drop the median 18:01:53 Isn't it good to drop it quickly? 18:23:31 It's not good when we will have a real flow of transactions above 300 kB block size. We'll get congestion periodically every time the median drops 19:21:18 sech1: It is designed with the expectation that normal tx volume doesn't massively surge and then sustain itself at a much higher volume. The short term is flexible to accommodate temporary short-term bursts but otherwise to be (relatively) expensive for spam. 19:22:55 But what if "normal tx volume" is above 300 kB, will the median stay above 300 kB without dropping? Was it tested?
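The concern about the short term median snapping back can be illustrated with a toy simulation. It assumes, per the discussion, a median over the last 100 block sizes floored at the 300 kB penalty-free size; the block sizes themselves are made up and this is not the full consensus rule.

```python
from statistics import median

# Toy model: short term median = median of the last 100 block sizes,
# floored at the 300 kB penalty-free size. Not the full consensus rule.
PENALTY_FREE = 300_000

def short_term_median(last_100_sizes):
    return max(PENALTY_FREE, int(median(last_100_sizes)))

# A sustained flood keeps the median elevated...
flood = [400_000] * 100
# ...but a pause of ~1.7 hours (51 small blocks out of the last 100)
# drops it straight back to the floor, as observed in the log.
paused = [400_000] * 49 + [50_000] * 51
```

Because the median (not the mean) is taken, slightly more than half a window of small blocks is enough to reset it completely, which matches the "couple hours" observation above.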
19:23:33 assuming all transactions come at the same steady rate 20:26:49 Yes when the long term median catches up 20:37:24 rbrunner7: For the analysis of how different ring sizes would defend against black marble flooding, should I compare ring sizes by block size or flood tx volume? When ring size increases, the bytes per tx increase of course. Basically, what should be the X axis? What I've written so far compares ring sizes by block size. 21:07:06 Well, I'm a bit out of my depth here. 21:07:32 what if pre 18.2.2 decoy selection algo was used ? 21:07:48 You linked to a nice graph in the MRL meeting, [this one](https://matrix.monero.social/_matrix/media/v1/download/monero.social/bGgVqJzYAtzSQXUdTKOcWunA) 21:08:23 Would your work result in a "family" of such graphs, one per ring size that is "interesting"? 21:09:05 (With the approach that you currently follow, "compares ring sizes by block size") 21:09:12 Yes. Multiple lines on the same plot 21:10:03 I guess that would be informative and easy to grasp, yes. 21:10:04 I guess which comparison makes the most sense depends on if the dynamic block size algorithm would change when the ring size changes. 21:10:49 It's all so very dynamic, it's hard to get a good grip on. 21:11:13 jack_ma_blabla: 18.2.2 just fixed the off-by-one block error. The before and after decoy selection algorithms are very close. 21:11:39 You probably can't cram everything into two-dimensional graphs, at least not with reasonable effort to make the graphs 21:13:09 An extension of the linked graph with several lines for several possible ring sizes should give us something at hand to discuss and maybe decide. 21:15:21 Rucknium: if we increase ringsize and limit number of decoys we select from recent blocks will that help ? 21:16:53 Ok. Here are my assumptions: There is a set of non-spam txs. I have data on their number of outputs and inputs.
I multiply the inputs by a factor, assuming that the impact of +1 ring member creates a linear increase in the amount of input data. That non-spam set of txs fills blocks by a certain kB. Then I have all the spam txs as 1in/2out. The size of each spam tx also increases when ring size increases. 21:18:57 jack_ma_blabla: Increasing the ring size helps reduce the impact of any attempted black marble flooding, but if more old outputs are selected as decoys, then that opens a different type of attack vector. Moser et al. (2017) and Kumar et al. (2017) both exploit the divergence between the real spend age distribution and the decoy distribution to guess the real spend with high probability of success 21:20:36 Decoys must be credible. They must look like the real thing. They must actually draw the attention of the adversary. Lots of very old decoys are not credible. An adversary would ignore them. 21:21:16 Moser et al. (2018) I mean. They use the "Guess newest heuristic" 21:21:40 but currently if you are spending old outputs with recent decoys, as most are from the same attacker, wouldn't that also stand out? 21:23:09 also if 90% of recent decoys are owned by attacker, those are useless anyways 🤔 21:23:35 How would they stand out? 21:25:17 those recent output decoys are already known to the attacker to be fake 21:25:35 The attacker also knows that old outputs are unlikely to be the real spend 21:25:58 sorry but i do spend old outputs too 21:27:52 And there are many, many rings on the blockchain that have old ring members but the old ring members are not the real spend. That is the theory of ring signature K-anonymity.
So an adversary does not know that your rings, the ones that actually spend the older output, are the ones where the old output is the real spend 21:30:12 but using recent outputs which we know are 99% poisoned is not a smart thing to do, if we are increasing ringsize then we should limit the amount of recently used decoys 21:30:55 This is Remark 2 of Ronge, V., Egger, C., Lai, R. W. F., Schröder, D., & Yin, H. H. F. (2021). "Foundations of ring sampling." https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=19 21:31:13 > Attentive readers might observe the following peculiar phenomenon: Suppose the real signer happens to be Alice who has very low signing probability according to S. It is likely that the mimicking sampler produces a ring in which all members except Alice have high signing probabilities, making Alice stand out. This is paradoxical since the mimicking sampler is close to optimal. 21:31:24 > The answer to the riddle is that the sampled ring could be, with similar probability (not the same due to potential collision), the result of someone else in the ring being the real signer, and picking Alice as a ring member. 21:31:38 > With the same reasoning, the mimicking sampler naturally resists timing attacks described in Section 2.1, which assumes that the signing probability of a signer depends on its age (c.f. Section 7.1). Indeed, the event that a young signer ending up in the ring could be with similar (high) probabilities the result of him being the signer or him being chosen by another signer. 21:32:42 The share of recently owned outputs keeps rising in your example: from the real amount (about 75%) to 90% and then to 99%. 21:35:04 what are the chances of the algo picking non-poisoned outputs from recent blocks ? 21:35:32 We don't know a way that the decoy selection algorithm could automatically detect black marbles. The adversary could just change their txs to avoid the detection.
Having multiple standard decoy selection algorithms being used on the blockchain also opens another attack vector: https://github.com/Rucknium/misc-research/tree/main/Monero-Fungibility-Defect-Classifier/pdf 21:36:21 So a change in the decoy selection algorithm is safest at a hard fork boundary when all wallets must upgrade 21:37:04 yes, change the DSA when we increase ringsize, and increasing ringsize now should be ok as there has been no HF for quite some time 21:38:14 If there is a hard fork in 6 months, a large share of the decoys should be selected from more than 6 months prior because that's when the black marble flooding started? Not a good idea. 21:39:02 we did emergency HFs before for PoW, we can do it now 21:45:08 Maybe it would be a good idea to select even more decoys from the spammed time interval so that the non-black-marble decoy selection distribution is closer to the intended decoy selection distribution. But like I said, it is too difficult to write wallet software to identify black marble outputs automatically. 22:07:32 The interim solution I see is: 22:07:32 1) Increase the reference TX size to 8000 bytes 22:07:32 2) Only increase the penalty-free minimum to 400,000 bytes 22:07:33 3) Increase the minimum node relay fee 4x before the start of the penalty. 22:07:33 This will accommodate tx sizes of around 6000 bytes. Full membership proofs or at least ring 40 with the current proofs 22:11:01 Actually this can accommodate a ring size of even 50 with the current proofs 22:11:59 The key tool that will also be needed is GPU parallel verification 22:12:44 I plan to run scenarios of effective ring size with ring size 16, 25, 40, and 60. Is this a good set of ring sizes?
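One simple way to model "effective ring size" in the scenarios just mentioned is a naive expected-value formula: if a fraction p of decoy candidates are attacker-owned black marbles, a ring of nominal size n keeps the real spend plus the surviving decoys on average. This formula is my illustrative assumption, not necessarily the model used in the actual analysis.

```python
# Naive expected-value model of effective ring size under black marble
# flooding: one real spend plus the decoys that are NOT attacker-owned.
# Assumption for illustration only, not the analysis's actual model.
def effective_ring_size(n: int, p: float) -> float:
    return 1 + (n - 1) * (1 - p)

# The ring sizes proposed above, at 75% and 90% attacker-owned outputs
# (the shares mentioned earlier in this discussion):
scenarios = {
    (n, p): round(effective_ring_size(n, p), 2)
    for n in (16, 25, 40, 60)
    for p in (0.75, 0.90)
}
```

Under this toy model, ring 16 at 90% poisoning leaves an effective ring of about 2.5, while ring 60 still leaves about 6.9, which is the intuition behind comparing larger ring sizes.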
22:13:05 Yes 22:24:47 We can also add the proposals I discussed at the last MoneroKon 22:24:47 1) Increase the scaling rate of the long term median from 1.7 to 2 and lower the surge factor for the short term median from 50 to 16 22:24:48 2) Introduce an ultra long term sanity median of 1000000 bytes. This will follow Nielsen's law. The surge of the long term median will be based upon the upload bandwidth of a high-end consumer or small business connection, which is currently about 3 Gbps 22:26:08 This is what I am working on 22:27:21 My take is that this will make the transition to full membership proofs seamless from a scaling point of view
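Some quick arithmetic on the interim proposal above (an illustrative sanity check only; the variable names are mine and 720 blocks/day is the usual ~2-minute block time):

```python
# Quick arithmetic on the interim proposal (illustrative check only).
PENALTY_FREE_MIN = 400_000   # proposed penalty-free block size, bytes
REFERENCE_TX = 8_000         # proposed reference tx size, bytes
LARGE_TX = 6_000             # approx. ring-40/50 or full-membership-proof tx

txs_per_block = PENALTY_FREE_MIN // LARGE_TX   # large txs per penalty-free block
txs_per_day = txs_per_block * 720              # at one block every ~2 minutes
```

So even with ~6000-byte transactions, the proposed 400,000-byte penalty-free size still fits on the order of 66 txs per block, or roughly 47,000 per day, before any penalty applies.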