15:01:46 MRL meeting in this room in two hours.
17:01:01 Meeting time! https://github.com/monero-project/meta/issues/1082
17:01:06 1) Greetings
17:01:22 Hello.
17:01:33 Hello
17:02:06 Hi
17:02:28 Hi
17:02:30 howdy
17:02:39 👋
17:02:56 *waves*
17:03:36 2) Updates. What is everyone working on?
17:03:38 <0xfffc:monero.social> Hi everyone
17:04:14 me: Wrote "Required possession duration of malicious hashpower for successful double-spend attack with a _z_ stopping rule." https://gist.github.com/Rucknium/37d772f7232aef3989bfd5b9c6d99596
17:04:19 me: Carrot balance recovery handling and getting updated quotes from auditors which include security proofs for Janus attacks
17:04:50 Me: converted the lws REST server to boost::beast and going to finish up some monerod stuff today, hopefully
17:04:52 I made my library for calculating divisors constant time to avoid any concerns there. I'm now working on redoing the data flow for FCMP++ prove/verify from the current shims to what we actually need in a production context.
17:06:00 <0xfffc:monero.social> 1. Fixed the pop blocks, submitted the PR. 2. Fixed the RPC limit (not pushed yet). 3. Breaking the dynamic BSS code into different PRs (smaller PRs; will submit the first tomorrow). Spent a little bit of time on a new problem with public nodes.
17:07:16 3) Stress testing monerod https://github.com/monero-project/monero/issues/9348
17:08:33 There's a problem with the txpool limiter that appears rarely. Sometimes the node arbitrarily sets a low txpool limit, like 10-50MB. The trigger for the bug is unknown.
17:09:22 This could be a problem for mining nodes. It also slows down block verification, since the nodes have to ask for the txs they are missing when they get a fluffy block
17:09:36 me: working on wallets syncing the tree locally for FCMP++ (to avoid needing to reveal any statistical trace to daemons of which outputs are the user's when spending)
17:10:14 So the pool just shrinks dramatically, dropping txs without the txs showing up in the chain?
17:10:51 I don't know if it keeps the old ones and throws the new ones out, or throws out old ones to fit the new ones
17:11:23 How long does it stay at this reduced size?
17:11:41 The node that's actively having this problem now has the "Sync data returned a new top block candidate..." message for almost every block
17:12:07 AFAIK, until node restart
17:13:38 It has happened at least to my node and spackle's node on stressnet. Maybe other people's nodes, but they may not have noticed.
17:15:24 4) Research Pre-Seraphis Full-Chain Membership Proofs. Reviews for Carrot. https://github.com/jeffro256/carrot/blob/master/carrot.md
17:17:09 I've been receiving updated quotes for including Janus resistance in the scope of properties for which we want security proofs. I plan to invite them to the next MRL meeting for a quick spiel, if that's okay with everyone
17:17:50 good with me
17:18:04 Sounds good :)
17:19:33 I'm still trying to work things out with Veridise. Should be done next week. I also saw Rucknium's request and have to do the bookkeeping
17:19:56 Thanks, kayabanerve
17:22:44 By "work things out", are you referring to allocating more budget to Veridise with their current audit?
17:23:31 No, confirming scope and quote.
17:23:50 Oh, for which audit?
17:26:56 The remaining work on divisors discussed prior.
17:28:10 Anything more on FCMP++?
17:29:44 Nothing here
17:29:50 I have one topic
17:31:16 Helioselene (my lib for Helios/Selene) is multiple times slower than other EC libs. My CCS explicitly states my impl is not expected to be prod grade. The fact it happens to be usable in prod is more a commentary on how well I did my work on the proofs.
17:32:39 I have an EC divisor library which is functional. It's also the most computationally expensive part of the proving. Thankfully, it can be done asynchronously. Before a user signs a TX, or even does a membership proof, the wallet can create divisors to be used.
17:33:45 So wallet UX should be able to make the time to calculate divisors a complete non-issue in practice EVEN IF we don't have wallet trees and do use RPC calls to fetch path information (as the divisors can have their calculation started before committing to making a TX and starting to send RPC requests).
17:34:35 I have some degree of interest in organizing a contest for anyone to develop more efficient implementations. I think they're clearly defined, concise parts of the codebase, amenable to such incentivization.
17:35:09 That's actually all I have to say on this for now, I just wanted to have the idea noted.
17:37:06 I figure tevador would also be a good candidate here. Perhaps we can put up the bat signal for tevador / see if they're interested? And if not, then go with bounties?
17:37:26 I'll also clarify: the divisors should take a fraction of a second on a modern laptop. We need more than ten of them for an FCMP, so it can become seconds depending on the device hardware. I'm not concerned about it because threads exist (I've only used single threads) AND, again, they can be calculated at any time. Even when you open a wallet, it can start 10 sets in the background for the hell of it. If a user does start to make a TX, the time for them to enter the address/amount/review/hit confirm should be longer than the calculation time.
17:38:18 Does that scale linearly with the number of inputs?
17:38:30 tevador prior agreed to do an optimized version of Helios/Selene. tevador hasn't been seen in a while AND tevador being able to do an optimized version doesn't mean they'd do the best implementation.
17:38:45 And does it not depend on the merkle tree "anchor"?
17:39:10 to be clear, there's also all the other Helioselene EC arithmetic that affects tree building that has room for optimization, right? not just divisors
17:39:17 If the total time is less than a few seconds, that cost will likely be dwarfed by scanning times anyway, and there might not be a very compelling reason to put a ton of effort into optimizing it IMO
17:39:58 It's necessary per input; it's not bound to the input nor the tree. The number of divisors necessary for an input varies with the tree depth.
17:40:13 Yeah, ideally, a new Helioselene lib gets 3x across the board, jberman
17:40:25 Can divisor generation be done without access to the spend or view opening of tx outputs?
17:41:04 Jeffro256: I'm concerned that on phones it'll become tens of seconds, but including the divisors lib may reasonably not be worthwhile for such a contest (limiting this discussion to Helioselene)
17:41:47 jeffro256: It doesn't need any keys but would leak sender privacy
17:44:01 a contest sgtm. with community buy-in, since it looks like some of the research funds will be left over, we could perhaps allocate some of the leftovers toward seed funding a contest
17:44:18 If tevador comes back within the next month or so and delivers a 3x faster Helioselene, we can say this isn't sufficiently worthwhile. I think this contest format can attract fresh talent and is our best chance at the most performant library possible.
17:44:30 I would want to do it on a new CCS.
17:45:43 The research funds can't be used for this (I don't hate the idea, but they're earmarked otherwise). Upon research completion, the leftover funds turn over to MRL and would be eligible, yet that only occurs at the end of the road. I'd ideally propose this contest around EOY (not the end of the road for the research tasks).
17:48:16 It's an idea. I'm months away from being able to actually put it forth. I'd need to spend weeks organizing it due to the development time of an evaluation framework.
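[The divisor discussion above describes precomputing expensive proof components in background threads so they are ready before the user hits confirm. A minimal sketch of that pattern follows; `DivisorPool` and `compute_divisor_set` are hypothetical stand-ins for illustration, not the actual FCMP++/wallet API, and the sleep is a placeholder for the real EC divisor work.]

```python
import queue
import threading
import time

def compute_divisor_set(set_id: int) -> str:
    """Placeholder for the real EC divisor computation, which the chat
    estimates at a fraction of a second per divisor on a modern laptop."""
    time.sleep(0.01)  # simulate expensive work
    return f"divisor-set-{set_id}"

class DivisorPool:
    """Precompute divisor sets in background threads (e.g. at wallet open),
    so that entering the address/amount/review takes longer than the math."""

    def __init__(self, target: int = 10):
        self._ready: queue.Queue[str] = queue.Queue()
        for i in range(target):
            # One daemon thread per set; real code might use a worker pool.
            threading.Thread(target=self._fill, args=(i,), daemon=True).start()

    def _fill(self, set_id: int) -> None:
        self._ready.put(compute_divisor_set(set_id))

    def take(self) -> str:
        """Block until one precomputed set is available for a TX input."""
        return self._ready.get()

pool = DivisorPool(target=10)  # kick off at wallet open, "for the hell of it"
print(pool.take())             # already ready (or nearly so) at confirm time
```

[The key property from the chat that makes this safe is that divisors need no keys and don't depend on the input or the tree, only on tree depth, so they can be generated before committing to a TX.]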
17:48:24 5) 10 block lock discussion https://github.com/monero-project/research-lab/issues/102 https://github.com/AaronFeickert/pup-monero-lock/releases/tag/final
17:48:46 I wrote "Required possession duration of malicious hashpower for successful double-spend attack with a _z_ stopping rule."
17:48:47 Direct link to table: https://gist.github.com/Rucknium/37d772f7232aef3989bfd5b9c6d99596#table-duration-of-meta-attack-to-achieve-attack-success-probability-of-50-percent
17:50:02 IMHO, the relevance of this table depends on the threat model. If the cost to the attacker of acquiring more hashpower is a roughly linear function of the desired malicious hashpower, then it is best to go big or go home: get a majority of hashpower.
17:50:54 maybe a comment on here may be a good place to try to contact tevador https://gist.github.com/tevador/d3656a217c0177c160b9b6219d9ebb96
17:51:56 So, if the equation is `total_budget = duration * hashpower_share`, then it is best to put the money into hashpower share instead of duration. But if the threat actor is a hacker who acquires control of a large mining pool, then it's more relevant, since the hacker isn't spending a linear amount of money to acquire hashpower.
17:52:44 For 15m USD of opportunity cost (assuming zero TX fees), 10% of the hash power can likely perform an 8-block reorg, from my reading of that table.
17:53:58 15m USD can do a 720-block re-org if they possess hashpower for 24 hours.
17:54:17 A much smarter move by an adversary.
17:54:54 The risk of collateral damage from a deep re-org also depends on the chosen full confirmation time of potential suitable victims. The best suitable victim type IMHO is an exchange. Kraken requires 15 blocks: https://support.kraken.com/hc/en-us/articles/203325283-Cryptocurrency-deposit-processing-times
17:55:11 Most of Trocador's instant swap partners require 10-15 blocks.
17:56:10 Rucknium: Can you please clarify?
17:57:08 15m USD isn't the cost to acquire 51% of hash power for 24 hours, is it? Is the suggestion to bribe the top pools, as that'd be profitable for them?
17:57:11 (15m exceeds a day's block rewards by farrrr)
17:57:18 The durations in that table are probably an upper bound because the simple _z_ stopping rule isn't optimal. A smart adversary would halt an attack and re-start from scratch sometimes before the _z_th block is reached, and even sometimes after. A paper (Hinz (2020). "Resilience Analysis for Double Spending via Sequential Decision Optimization.") could be useful for analyzing optimal stopping rules, but their solution algorithm requires a parameter for the value of the double-spent tx.
17:58:21 I get `720 * 0.6 * 170 = 73440`. Am I wrong?
17:59:57 Oh, you were saying that it exceeds the block rewards
18:00:10 No, but if this is your premise, we have two different discussions
18:00:18 Yes, that's my point. Much better and cheaper to acquire hashpower for a shorter period of time
18:00:30 I'm unsure in practice that top mining pools would accept such bribes.
18:00:52 If they wouldn't, one would have to acquire new hash power. New hash power is much more expensive to acquire.
18:01:28 Eh. There's enough social engineering/illegal methods that the multiple is probably small enough we can drop it from being an issue/consideration.
18:01:45 Drop what?
18:02:06 Drop my concern about the difference in price of existing and new hash power.
18:03:47 But potentially the resale value is still very good if only used for a 24-hour attack
18:04:26 You mean use a warehouse full of PCs ready to go to shops for 24 hours :)
18:04:27 '1000 Threadrippers, lightly used, BTC only'
18:04:50 nanopool has more than 30% of hashpower usually. If a hacker got control of their hashpower for 1/3rd of a day, the hacker could achieve a 10-block re-org with 50% probability.
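[The cost arithmetic debated above (`720 * 0.6 * 170 = 73440`) and the basic minority-attacker catch-up probability can be checked with a short script. This is an illustrative sketch only, not the model behind Rucknium's gist (which uses a _z_ stopping rule over repeated attempts); the block reward (0.6 XMR tail emission) and price ($170) are the figures assumed in the chat, and the catch-up formula is the standard gambler's-ruin bound.]

```python
# Figures quoted in the discussion above; assumptions, not current data.
BLOCKS_PER_DAY = 720       # Monero targets one block every 2 minutes
BLOCK_REWARD_XMR = 0.6     # tail emission, ignoring tx fees
XMR_PRICE_USD = 170        # price assumed in the chat

def daily_opportunity_cost_usd() -> float:
    """Foregone honest block rewards for 24h of the entire hashrate."""
    return BLOCKS_PER_DAY * BLOCK_REWARD_XMR * XMR_PRICE_USD

def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hashpower share q ever erases a
    deficit of z blocks against the honest chain (gambler's ruin): (q/p)^z
    for q < 1/2, and certainty once the attacker has a majority."""
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

print(daily_opportunity_cost_usd())   # 73440.0, matching the chat
print(catch_up_probability(0.3, 10))  # a ~30% pool erasing a 10-block deficit
```

[Note this single-attempt probability understates a patient attacker, who can restart the race many times; that repetition is exactly what the "possession duration" table quantifies.]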
18:05:07 Does anyone know the percentage of Monero users that need the 10 block lock reduced because it is cramping what they are trying to do?
18:05:26 We have evidence, at scale, that RISC-V boards are most efficient *from a production/new hardware acquisition standpoint*.
18:06:07 Well, we could come to the conclusion that this Nanopool hacked scenario is too dangerous and go *higher*, e.g. to 15 blocks ...
18:06:12 We'll always fundamentally have a 1 block lock, fwiw
18:06:50 I'm unsure user experience is notably different for 8 vs 10 (as an example)
18:07:19 Yeah, and data sure does not seem to point towards 5 blocks or so, right?
18:08:38 By the way, the table is an upper bound of the required possession duration. The _z_ stopping rule isn't optimal, because a smart attacker would end an attack and re-start at less than _z_ blocks sometimes, and continue past _z_ sometimes, too, depending on how far behind they are in the race against the honest chain. A paper (Hinz (2020). "Resilience Analysis for Double Spending via Sequential Decision Optimization.") has an algorithm that could help compute a better bound, but it depends on a parameter for the value of the double-spent tx.
18:08:53 The unfortunate discussion is that if we want lower confirmations, the solution is PoS, at least as a secondary layer.
18:09:08 That's unfortunate not because of my opinions on PoS, but because I can't imagine that discussion being well received.
18:09:19 You can bet :)
18:10:18 rbrunner: Depends on the threat actor. If the adversary is just buying hashpower at a linear cost, then the adversary will probably just perform a 51% attack. Then any block lock is irrelevant. If the threat model is a mining pool operator, then the risk table is relevant.
18:10:24 As a secondary layer, we'd still be PoW. We'd just have some declaration of validators (with transparent stake) who cement the PoW blocks.
18:10:52 Or...rolling checkpoints! :D
18:12:53 I think that table is at least a nice new tool to direct people to and have them check the results if they vote for lowering from 10
18:13:12 If a minority-hashpower attack is assumed to be unlikely, then evaluation of a safe N block lock would analyze benign re-orgs, which are pretty shallow in the empirical record. (There are papers on the theoretical benign re-org depth that we could look at)
18:14:14 I will probably clean up the gist and put it in the 10 block lock GitHub issue.
18:17:28 Or one could say that the risk of tx invalidation from a rare, shallow re-org caused by a malicious double-spend attack is an acceptable risk
18:18:14 If an attacker cannot earn money from a DS because all of the big targets have high confirmation times, then there will not be collateral damage to other users, anyway.
18:20:26 I think inflicting "reputational damage" is also a possible motivation for people with money to burn
18:21:27 And the 10 block lock doesn't prevent a victim from accepting a tx at 5 confirmations, then getting double-spent against. It just prevents the collateral damage of txs necessarily being invalidated because they referenced a now-invalid merkle tree.
18:22:32 More things to think about. We can end the meeting here.
18:23:20 Thanks everyone!
18:23:55 imo if you use monero regularly it's just something you learn to account for ahead of time
18:24:10 Thanks
18:25:17 If we pad outputs, we can make it standard practice for a payment to X to use multiple outputs for the one payment.
18:26:31 Which is why I wonder how big of a problem the 10 block lock really is? It has served Monero well all these years.
18:27:15 Everyone hates the 10 block lock.
18:27:54 I've often wondered too. It used to bother me in the beginning, but then I adapted. It seems that it mostly bothers newbies and infrequent users; it's probably a decent percentage
18:28:05 of course, I would also prefer if it wasn't there
18:28:57 In most cryptocurrencies, you can spend an output from a tx that is in the mempool/txpool. You can chain them together.
18:30:24 In Bitcoin Cash, some services were hitting their limit of 50 chained txs in the mempool. Programmers changed an algorithm from O(n^2) to something faster and eliminated the limit a few years ago.
18:30:49 yeah. it doesn't make intuitive sense either, like being handed change in cash and not being able to get it out of your pocket for 20 mins. it catches people by surprise
18:31:04 but with the current ring size arrangement I also appreciate why it's there
18:32:10 Rucknium: I have scheduling code to avoid exactly that bound on Bitcoin (and derivatives).
18:33:43 If you added BCH to Serai instead of BTC, you wouldn't have had to worry about it :P
18:45:54 Silence, shill D:
19:03:10 With FCMP++, churning / "PocketChange" and similar mechanisms to produce outputs for oneself are much less of a privacy problem than today with rings, right? While they still can bloat the chain, of course
19:15:25 Correct