03:48:49 given hash_to_ec was renamed to biased_hash_to_ec in the fcmp monero fork, should hash_to_p3 also get renamed? the function is equivalent except for its input arguments
06:08:45 If the Monero network cannot handle the volume the scaling design allows, the formulas are only a fantasy. Small blocks and high fees make a network unusable; so does a malfunctioning daemon.
06:12:09 All I would ask is that some consideration be made for demonstrated daemon performance. I might suggest another scaling metric; let's call it 'runway.' This is how long the network can operate under maximum scaling conditions before it reaches a known limitation which requires intervention. Right now on mainnet the runway is s [... too long, see https://mrelay.p2pool.observer/e/nZf46cMKbEF3T0U2 ]
06:32:29 what makes you say half a day?
06:33:34 If prepared, I think a 600 MB txpool is bad enough
06:33:57 So more like, however long it takes to spam that many txs
06:36:19 But that's not necessarily a scaling problem, just poorly designed.
06:36:19 are you referring to block growth over 12 hrs?
06:38:24 I gave that estimate in the interest of making my point while also being generous.
06:38:58 but what is the estimate referring to, block growth?
06:39:34 I agree there is more than one issue which arises on that timeframe. So long as everyone understands that, it frames the design discussion correctly.
06:40:21 The good thing is that a lot of these issues are being fixed in the immediate term (before the fcmp hard fork or scaling changes)
06:41:52 I stated half a day as that is a conservative rough estimate for how long it might take to hit the long-term median limit of 30 MB.
06:44:35 If the daemon is unable to handle volume up to this limit, the network fails within hours. If the daemon can handle volume up to this limit, the network is hardened against all possible circumstances for over two months.
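[To put the 600 MB txpool figure above in rough scale, a back-of-the-envelope count of the transactions it could hold; the 3000-byte average transaction size is an illustrative assumption, not a protocol constant:]

```python
# Rough scale of a 600 MB txpool.
# AVG_TX_BYTES is an assumed average size for a small (e.g. 2-output)
# transaction, used only for illustration; real sizes vary.
TXPOOL_BYTES = 600 * 1_000_000
AVG_TX_BYTES = 3_000

tx_count = TXPOOL_BYTES // AVG_TX_BYTES
print(tx_count, "transactions")  # 200000 transactions
```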
06:51:06 I think we'll be able to push stressnet to 30+ MB blocks, but I'd like to wait for some of the efficiency fixes to be in first. 1000x fees is 0.03 XMR for a 1:2 tx on mainnet today. I'd probably argue that scaling could be slowed by an order of magnitude
06:53:40 If the txpool is fixed (if we can handle large txpools), then we could retain txs for longer than 3 days and increase the max txpool size
06:54:05 Which would allow scaling to be slowed down a bit w/o dropping txs
07:06:52 Technically, to max out the proposed cap on the short-term median one needs 32 MB, assuming a 1 MB penalty free zone
08:03:26 This provides over 2 months
08:03:53 50000 blocks to be exact
08:16:14 <321bob321> https://signal.org/blog/spqr/
08:16:14 <321bob321> Signal Protocol and Post-Quantum Ratchets
08:48:47 @articmine: If this is the network stability target, it should be an acknowledged performance requirement by developers (jberman and jeffro specifically).
10:52:15 In my opinion the goal should be to agree upon a robust design with the 50000-block runway, and move forward as soon as possible.
10:52:57 A 50000-block runway would be a fortress; something to be stood upon while saying with confidence that other proposals are unnecessary distractions. If we cannot stand on that foundation, then other proposals become far more enticing.
15:43:50 The FCMP++ hard fork effectively quadruples the size of transactions before even considering verification time.
15:43:50 My take is that if we cannot stand on this fortress we need to delay the hard fork until we can.
15:45:16 It is the prudent course of action.
15:50:15 the scaling code is in now though, we are vulnerable now.
15:50:46 AFAIK no changes were made to it on the stressnet fork
15:51:33 The stressnet is currently working on the current scaling with a 50x long-term median and a maximum block size of 100x the penalty free zone.
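[The "50000 blocks / over 2 months" runway figures above can be checked directly, assuming Monero's 120-second target block time:]

```python
# Convert the 50000-block runway into wall-clock time.
# Assumes Monero's 120-second target block time.
BLOCK_TIME_SECONDS = 120
RUNWAY_BLOCKS = 50_000
SECONDS_PER_DAY = 86_400

runway_days = RUNWAY_BLOCKS * BLOCK_TIME_SECONDS / SECONDS_PER_DAY
print(f"{runway_days:.1f} days")  # 69.4 days, i.e. just over two months
```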
This gives a 30 MB target for the 300000-byte minimum penalty free zone
15:52:54 At least qubic can only put a max of 20 transactions per block of theirs due to their choices :)
15:53:18 They will push the median down!
15:54:16 @boog900: Which is why the existing vulnerability needs to be addressed before we increase the transaction size by a factor of 4, for starters
15:55:05 we can reduce the max block growth for the next HF
15:56:19 We already are, from 50x to 16x on the short-term median
15:57:22 yeah I know, so no need to delay the HF right?
15:57:47 It is the prudent way to go
15:59:00 if we make the block growth slower for the next HF that could actually help our situation
15:59:35 so delaying it would be leaving us with the faster block growth for longer
16:01:07 Not really, because the HF itself could spur a major growth in adoption. This could easily be further compounded by a legal defeat of blockchain surveillance in the US courts
16:01:34 Delaying it just makes an attack take longer
16:01:50 Unless there's a plan on how to act, or a limit, it'll just take more time
16:01:51 and tomorrow qubic could decide they want to fill all their blocks to the max
16:02:01 ^
16:02:08 they have already demonstrated they can get more than 50% of blocks
16:02:09 Having the 50000-block runway is plenty prudent. No need for a delay so long as the network can meet that minimum robustness requirement.
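[The 30 MB figure above follows directly from the stated multipliers; the sketch below assumes the 300000-byte minimum penalty free zone discussed in this conversation:]

```python
# Maximum block size under the stressnet parameters described above:
# 100x the 300000-byte minimum penalty free zone.
PENALTY_FREE_ZONE_BYTES = 300_000
MAX_BLOCK_MULTIPLIER = 100

max_block_bytes = PENALTY_FREE_ZONE_BYTES * MAX_BLOCK_MULTIPLIER
print(max_block_bytes / 1_000_000, "MB")  # 30.0 MB
```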
16:02:17 They already pad their blocks with garbage txs
16:02:19 That costs them nothing, as it's their own
16:02:35 No, it provides time to fix the underlying code issues that have been identified in stressnet
16:03:41 I mean if in the next hardfork the growth is slowed
16:03:42 @spackle: I am suggesting a delay until we get the 50000-block runway
16:03:55 Instead of capped or logarithmic
16:04:03 the real solution would be to put out a soft fork limiting block growth tomorrow
16:04:20 but that would be controversial
16:04:28 Instead of exponential
16:04:33 although I do think it is justified
16:05:13 No, the real solution is to fix the underlying code issues