01:27:14 have you used something like gcov or any other code coverage tool along with these tests?
01:40:22 hyc: no, although I question if they can really uncover any of the actual gnarly logic paths (i.e. the ones that depend on interleaved ordering of behavior in separate control flows)
01:43:26 for example, I just found another bug (sigh) where (from my notes): if a legacy enote is spent in a seraphis tx in block 10, then a reorg pops block 10 and the same legacy enote is spent in a legacy tx in block 11, then if a legacy scan is done after block 11, when the legacy scan discovers the legacy key image in block 11 and tries to update the spent context of its local record of the key image, it won't replace
01:43:26 the spent context because the new one is 'younger' and hence has lower priority; a subsequent seraphis scan will clear the spent context referring to block 10, leaving the legacy enote appearing as if it's unspent; the next legacy scan may not re-acquire the spent context of the legacy enote spent in block 11, so the enote store will be oblivious to the fact that the legacy enote is spent; solution: make sure to
01:43:26 clean up seraphis caches when a reorg is detected during a legacy scan (because there is a dependency between scan types, in that a legacy enote can be spent in both legacy and seraphis txs)
01:44:36 cache invalidation, classic CS problem...
01:48:03 so yeah, a code coverage test can't tell you if your code accounts for all edge cases. it can only tell you if all the cases you've accounted for got exercised.
01:48:07 I've had several bugs like this caused by too tightly coupling my enote store to both legacy and seraphis balance recovery... guess I need some pain to learn my lesson.
01:51:53 Would definitely be interesting to see what a code coverage test says. Unfortunately I didn't really document the edge cases properly. My goal is that if you change a line to 'something that looks better (but is broken)', then one of the tests will fail. Afaik most or all of the off-by-1 and integer-overflow-on-subtraction cases will cause test failures.
08:01:42 A fuzz test could help there, though you need a hefty amount of runtime.
08:02:23 It's directed testing where an initial input is mutated to see whether it causes the code under test to take another branch.
08:02:48 And if it does, treat it as another starting point. So it blindly explores the code paths.
08:04:01 It's blind though (that is, the mutations aren't targeted based on code analysis), so it's slow.
08:04:49 If your core verification code isn't huge, it's a good tool. See tests/fuzz_tests, there's one for BPs IIRC which could serve as a model.
22:34:26 "image.png" <- Is there an attack that could actually do this to Monero's network? I know the tx_extra field is a bit of a chink in Monero's armor... It's funny how the Bitcoin guys would rather try to attack Monero than improve Bitcoin so that it can actually compete with Monero in circumstances that require strong privacy.
22:40:28 "If we don't, and there is a..." <- +1 for getting it formally peer reviewed
22:53:13 > <@articmine:monero.social> Ramping up the short term median to the equivalent of the ZCash max blocksize can cost around 50-70K USD. If the attack is stopped for more than 100 min then the median resets, and the spammer has to start again.
22:53:13 >
22:53:13 > So the start-and-stop approach of the ZCash spammer gets very expensive.
22:53:13 If someone ended up expending 70k USD to do this attack, what would be the estimated practical result / negative consequence of it happening?
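A minimal sketch of the coverage-guided fuzzing idea described in the 08:02:23-08:04:49 messages. It uses the generic libFuzzer entry point rather than the harness in tests/fuzz_tests, and parse_and_verify is a hypothetical stand-in for whatever verification code would be under test, so treat it as illustrative only.

    // Minimal libFuzzer-style harness (illustrative sketch, not Monero's
    // tests/fuzz_tests framework). The fuzzer mutates the input bytes;
    // coverage instrumentation keeps any mutation that reaches a new branch
    // as a fresh starting point -- the blind exploration described above.
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Hypothetical stand-in for the code under test: accepts blobs whose
    // 2-byte length prefix matches the payload size. The only real
    // requirement is that it never crashes or trips a sanitizer on
    // arbitrary input.
    static bool parse_and_verify(const std::string &blob)
    {
      if (blob.size() < 2)
        return false;
      const size_t declared = (static_cast<uint8_t>(blob[0]) << 8) |
                              static_cast<uint8_t>(blob[1]);
      return declared == blob.size() - 2; // a distinct branch the fuzzer can learn to reach
    }

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
      const std::string blob(reinterpret_cast<const char *>(data), size);
      (void)parse_and_verify(blob); // returning false is fine; crashing is the bug
      return 0; // 0 = input processed; the fuzzer tracks coverage on its own
    }

Compiled with something like clang++ -g -fsanitize=fuzzer,address harness.cpp, the resulting binary takes a seed corpus directory, mutates the seeds, and saves any input that crashes or reaches new coverage.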