15:03:32 MRL meeting in this channel in two hours.
17:00:28 Meeting time! https://github.com/monero-project/meta/issues/1034
17:00:32 👋
17:00:34 1) Greetings
17:00:56 *waves*
17:00:59 Hello
17:01:04 hello
17:02:03 Hi
17:02:09 Howdy
17:04:11 2) Updates. What is everyone working on?
17:06:08 Organizing research/audits, as usual.
17:06:13 me: Helping with stressnet. Successfully (I think) ran the Dulmage-Mendelsohn decomposition on a simulated set of transactions that have been flooded by black marbles. Running some stats on the mainnet p2p transaction logs to see if they have evidence of the black marble flooding.
17:06:17 me: polishing fix PR for subaddress-related temporary scanning misses
17:06:38 me: grow_tree and trim_tree are approaching production-ready; continuing on trim_tree this week
17:07:10 I worked on the LWS frontend API.
17:08:06 3) Stress testing `monerod` https://github.com/monero-project/monero/issues/9348
17:09:02 Maybe rbrunner can explain what he found in "Daemons processing big blocks may bump against serializer sanity checks and fail to sync" https://github.com/monero-project/monero/issues/9388
17:09:47 It's mostly described in the issue.
17:10:13 The short version: sanity checks, put in place a few years ago, trigger if a daemon asks other daemons for 20 very large blocks
17:10:26 because asking for 20 blocks is what they do by default
17:11:17 The immediate "band aid" measure on stressnet, before the root issue was known, was syncing single blocks, which works
17:11:46 I submitted a PR to stressnet to raise the sanity check limits
17:12:26 A durable and sensible solution might be to make the number of blocks requested dynamic, depending on average block size
17:12:30 Is that PR on spackle's repo?
17:12:43 Yes. 3 constants redefined bigger.
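The "go dynamic" idea above could be sketched roughly as follows. This is a hypothetical illustration, not monerod's actual code: the function name, the byte budget, and the 20-block cap are assumptions chosen to mirror the discussion (20 blocks by default, ~30 MB payloads tripping the limits).

```python
# Hypothetical sketch: choose how many blocks to request per sync chunk
# so the expected payload stays under a byte budget, instead of always
# requesting a hard-coded 20 blocks.

def blocks_per_request(avg_block_size: int,
                       byte_budget: int = 100 * 1024 * 1024,
                       max_blocks: int = 20) -> int:
    """Return a chunk size keeping the expected payload under byte_budget."""
    if avg_block_size <= 0:
        return max_blocks          # no data yet: fall back to the default
    n = byte_budget // avg_block_size
    return max(1, min(max_blocks, n))

# With ~4 MB stressnet blocks and a 100 MB budget we still request 20 blocks;
# as blocks grow toward 30+ MB, the chunk size shrinks toward single-block sync.
```

The point of the sketch is only that the request size degrades gracefully with block size rather than hitting a cliff at the serializer's sanity checks.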
17:14:15 Here's rbrunner's that set the sanity checks higher: https://github.com/spackle-xmr/monero/pull/12
17:14:16 The behavior is actually quite funny: the daemon starts to disconnect every other daemon because it thinks they are all sending it corrupt data :)
17:14:16 Here's mine that just set the default sync chunk to 1: https://github.com/spackle-xmr/monero/pull/8
17:14:57 Damn, 16K bytes per string is a really tiny limit lol
17:15:16 I think it's 16'000 strings.
17:15:35 Ah I see, that makes more sense
17:15:45 We are at about 4MB block size now on stressnet. No one falls behind permanently, but there is some temporary fall-behind and re-org/orphaning at this block size: https://monitor.stressnet.net/
17:16:07 moneromooo mentioned that they chose the limits more or less by gut feeling, not as the result of testing, or even simulation
17:16:39 So we probably should not give too much weight to their current values
17:16:42 ofrnxmr set up a stressnet block explorer at (onion hidden service): http://stressgguj7ugyxtqe7czeoelobeb3cnyhltooueuae2t3avd5ynepid.onion
17:17:07 So blocks could have 820 txs per block, and if there are 20 blocks, then that would go over the string limit?
17:17:29 It seems so, yes. I saw it in the debugger go over the limit.
17:17:42 After getting around 30 MB of data
17:18:12 Although I could not find out where exactly, because that complicated templated serialization code was beyond my debugging fu
17:18:36 Well, another issue at hand is why the *serializer* doesn't check those limits while it is writing out
17:18:49 Well, duh, yes
17:19:43 I think it's protection against doctored data that blows up to gigabytes and brings down Monero daemons. Was a thing once as a possible attack, if I remember correctly
17:20:31 Maybe vtnerd knows more
17:23:18 I don't plan to work more on this; I think somebody with better knowledge should find a good solution here
17:23:47 Going dynamic might not be trivial
17:25:58 The current limit was chosen somewhat arbitrarily by moo. My new serializer often does limits based on wire size - a string is frequently required to be of minimum length, etc, instead of a max count. Unfortunately there are still some max counts in a few places that I didn't completely remove
17:28:27 Do any devs have requests for which spamming "configuration" to set up on stressnet? Right now we are spamming 1in/2out with 4MB blocks and a small txpool. We can increase the txpool, lower/raise block sizes, maybe try many-input txs to analyze "A lot of 150/2 transactions in the txpool causes memory spike / OOM" https://github.com/monero-project/monero/issues/9317
17:30:49 Doing 1in/16out will let you create the highest number of transactions as quickly as possible
17:31:48 Maybe having 150 1-input txs will also cause the issue?
17:32:27 We are at about 20 txs/second being confirmed with 4MB blocks
17:32:58 I think there is nothing special about having 150 1-input transactions? Maybe I misunderstand.
17:33:27 We have hundreds of transactions going into a single block on stressnet now ...
17:33:33 For byte spam, inputs are the way to go.
17:33:38 150 1-input txs is sort of what we are doing now... over and over again
17:33:49 Without OOM problems, as far as I am aware
17:33:52 (except you consume inputs faster than you make them)
17:34:10 And you haven't run into the OOM issue at all?
17:34:12 Per byte of data, inputs also take longer to verify than outputs, according to my tx performance tests
17:34:25 Seems so, yes
17:35:00 jeffro256: Low-RAM stressnet machines OOM if they have too many connections.
On my 4GB node I've set connections to 4in/4out IIRC
17:35:05 All kinds of problems, but I did not hear about daemons running out of memory. Not on machines with a "reasonable" amount of memory to start with
17:36:09 You can check the monitor node's RAM use, connections, CPU, etc. on https://monitor.stressnet.net/
17:36:31 Or did the data collection go down?
17:36:36 nice website ;)
17:36:57 I'll check that after the meeting
17:37:24 4) Potential measures against a black marble attack. https://github.com/monero-project/research-lab/issues/119
17:37:48 I made progress on two topics.
17:37:59 A paper, Vijayakumaran, S. 2023, "Analysis of Cryptonote transaction graphs using the Dulmage-Mendelsohn decomposition," presented at the 5th Conference on Advances in Financial Technologies (AFT 2023), https://moneroresearch.info/index.php?action=resource_RESOURCEVIEW_CORE&id=39
17:38:17 was released with a DM decomposition program written in Rust. I took the transactions that occurred during the suspected black marble flooding and eliminated the "black marbles" and removed black marble ring members from "real" txs. Then I ran the DM decomposition to see if even more ring members could be eliminated.
17:38:27 For more info on this attack, see Section 4 "Chain reaction graph attacks" of my https://github.com/Rucknium/misc-research/blob/main/Monero-Black-Marble-Flood/pdf/monero-black-marble-flood.pdf
17:38:46 According to my initial calculations, the suspected black marble flooding could reduce 0.5% of rings to effective ring size one (i.e. the real spend could be deduced) without trying any chain reaction graph attacks. In preliminary results, the Dulmage-Mendelsohn decomposition can double that percentage to 1%. That number is within what I expected.
17:39:11 The other topic is analyzing p2p tx broadcast logs. I have two questions.
17:39:27 How is the queue time set in src/cryptonote_protocol/levin_notify.cpp for fluff txs?
I think nodes wait for some random time before sending a gossip message with the txs they have accumulated. How is that set?
17:39:58 The other question is whether Dandelion++ starts broadcasting at the "top of the minute", e.g. 17:31:00. In the data, the gossip message arrival times are not uniformly distributed across the seconds of a minute. It almost looks like D++ does start broadcast at the top of the minute. I see higher probability of receiving a tx gossip message in the first 15 seconds of a minute. Then there is a smaller bump at about :40, which is the D++ embargo timeout time.
17:40:33 Another explanation is that some entity is broadcasting lots of txs at the top of the minute... like a spammer
17:41:10 I don't see how the code waits until the top of a minute; that sounds like something custom
17:41:47 Thanks. Then it's a possible fingerprint
17:43:06 If the gossip messages were uniformly distributed across a minute, then each second would have 1.667% of the messages. But what I am seeing is that the 10th second has 1.8%.
17:43:44 This is with about 15 million gossip messages. It cannot be random variation
17:44:16 Thanks to people who submitted monerod logs :)
17:45:03 I asked my first question because I noticed that txs were "clumped" in gossip messages more than I expected.
17:46:10 Hmm, I wonder why you would care about minutes if you spam.
17:46:25 I mean, I want to understand the process of clumping
17:46:46 Ah, maybe some primitive throttle that results in this?
17:47:08 I don't think the spam script was set to the maximum of what it was capable of.
17:47:33 5) Research Pre-Seraphis Full-Chain Membership Proofs. Discuss hiring Cypher Stack for 4 days of work (38 XMR) reviewing Veridise's proofs of the divisor technique.
https://www.getmonero.org/2024/04/27/fcmps.html https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/449#note_25181
17:47:50 kayabanerve: You wanted to discuss this
17:48:05 Yep
17:48:24 So the divisor proofs are complete, with ~3-4 hours to spare, which is now moving to the R1CS circuit review as available.
17:49:51 That means we need to
17:49:52 1) Have the proofs reviewed (which is arguably tertiary since Eagen originally proposed the technique)
17:49:54 2) Potentially be fine extending Veridise's contract 2-3 hours for the full R1CS review, which should be less than a grand?
17:50:04 Oh, that is expected if I'm understanding you correctly. There's a fluff delay, so txes will be grouped somewhat
17:50:43 I solicited two quotes and received one, from Cypher Stack, on the proof review. It was 38 XMR @ 4 days (I believe 4 days of work, not 4 days of turnaround). cc Diego Salazar to confirm the quote if they're around.
17:50:45 But I wouldn't expect it to happen specifically at the top of a minute
17:51:03 ye
17:51:33 So that'd be my endorsement, now subject to jberman's co-endorsement (solicited prior, yet needing an on-the-record version) and then MRL's.
17:51:49 vtnerd: Thanks. I skimmed the code with my non-C++ eyes and I couldn't find how the fluff timer is set. Every X seconds, with a random component?
17:52:23 And then, I forgot to bring this up prior, but yes, I'd also like to confirm it's fine if we contract Veridise a few more hours as necessary to complete the R1CS review. It was projected at 5 hours and we're a bit short of that much time remaining. It's all reasonable to me.
17:53:21 I'm skimming the code to remind myself, hold on
17:53:35 +1 for Cypher Stack divisor proof review, +1 for Veridise contract extension for R1CS review. Both sound solid to me
17:54:25 kayabanerve: Thanks. You said you solicited two quotes for the divisor proof review. Any info on the second one?
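The per-second histogram argument from earlier ("each second would have 1.667% of the messages", 1.8% observed in the 10th second, ~15 million messages) can be checked with a normal approximation. The figures below are the ones quoted in the discussion; the function name is my own.

```python
import math

def second_bucket_zscore(observed_frac: float, n: int, buckets: int = 60) -> float:
    """Z-score of one second-of-minute bucket against a uniform null."""
    p = 1.0 / buckets                    # expected fraction, ~1.667% per second
    se = math.sqrt(p * (1 - p) / n)      # standard error of the sample fraction
    return (observed_frac - p) / se

# 1.8% of ~15 million messages landing in one second of the minute:
z = second_bucket_zscore(0.018, 15_000_000)
# z is roughly 40 standard errors above uniform, so this really
# cannot be random variation, as stated in the meeting.
```

A z-score of ~40 corresponds to a p-value far below any conventional threshold, supporting the "possible fingerprint" reading.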
17:54:42 I also have a tertiary topic I'd like to solicit MRL's opinion on, if we have a few minutes after this Veridise discussion. It should be relatively brief, and is vaguely related to FCMPs, but I apologize if I should've brought it up prior and gotten it officially on the agenda. I only thought of it a couple days ago.
17:54:49 I solicited two and received one.
17:55:41 Because we didn't actually receive the other quote in a timely fashion (solicited over a week ago), we decided to move on with the proposal from CS, which is reasonable and from the very trusted Aaron.
17:55:45 "Cypher Stack divisor proof review" and "Veridise contract extension for R1CS review" sound good to me.
17:56:25 From the little I can really judge this, good for me as well ...
17:56:33 I will note we did not solicit a full spread as we originally did. Given the prices we've previously been quoted, I'm definitely preferring working with boutique firms (which leads to not sending out as many emails and doing as many calls for results probably not worthwhile).
17:56:53 rbrunner: Progress :D Exciting things :D Sign over more money and get more progress :D
17:57:13 Suuure :)
17:57:23 On a much more legitimate and descriptive note, the divisor technique is a way to efficiently prove a scalar multiplication in-circuit.
17:57:35 We use it to add (and remove) terms to Pedersen commitments
17:57:47 The divisor technique makes the cost 7 multiplicative constraints. Otherwise, it'd be 512.
17:57:59 So it's 72x faster than the prior state of the art.
17:58:12 The randomized timer is set per connection - slightly longer timeout for in connections. It gets set when there are no txes in the connection queue and gets flushed entirely when it expires. So you should see batches of txes sent
17:58:44 The technique was posited by Eagen in 2022. Eagen's divisor work is largely subject to a lot of the same criticism as faced BP++.
Aaron Feickert may be a more accurate person to comment, as an individual who actually does review.
17:59:11 Veridise was previously contracted to do a proper set of proofs for the technique, expanding on Eagen's rather sparse notes on safety (~half a page).
17:59:28 vtnerd: Thank you!
17:59:33 I haven't done a thorough review of the divisor preprint, so I can't provide meaningful conclusions on that yet
17:59:56 We now have a 12-page document establishing the background and security. While that's arguably already secondary review, Eagen being the primary source, tertiary review of the idea / secondary review of the proofs is reasonable.
18:00:05 My very initial reading suggested that there could be similar issues as in BP++ (at least the initial version of BP++)
18:00:18 So that's all this is. Hiring Cypher Stack/Aaron to do review and ensure we actually get such great performance safely :)
18:01:07 IMHO, giving the task of review to a skeptical reviewer is a good move. That's what is proposed, AFAIK.
18:01:26 By the numbers, I'll also note Veridise (with the extension to complete the R1CS review) + this review by Cypher Stack is cheaper than both the Cypher Stack and Goodell quotes to do the proofs, AFAICR.
18:02:37 If no one has any objections, and the above summary provides sufficient clarity, I'd like to bring up my semi-adjacent topic. Would that be fine with you, Rucknium?
18:03:09 (I ask as you execute our agenda)
18:03:22 Sounds great. I think we have loose consensus in favor of your two proposals today.
18:03:41 👍️
18:04:11 My other topic is simply how I want to contract review of the unreviewed hash function in Monero which we rely on for security.
18:04:47 This is out of scope of the FCMP work, and the FCMP research budget, but I don't want to throw a researcher into submitting a CCS/MAGIC fundraise without an MRL endorsement.
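As a quick sanity check on the constraint counts quoted above (7 multiplicative constraints with divisors vs. 512 otherwise): the 512 figure matches a naive bit-decomposed scalar multiplication over a ~256-bit scalar at roughly 2 constraints per bit, and the ratio reproduces the "72x" claim up to rounding.

```python
# Constraint-count arithmetic behind the "72x faster" figure.
scalar_bits = 256
naive_constraints = 2 * scalar_bits      # ~2 multiplicative constraints/bit
divisor_constraints = 7                  # cost with Eagen's divisor technique

speedup = naive_constraints / divisor_constraints
# 512 / 7 ≈ 73.1, i.e. the quoted "~72x" up to rounding.
```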
18:04:57 And I'd like to get this review done before the FCMP hard fork, so if our unreviewed hash function is found unsafe, we can replace it at that time.
18:05:11 You mean the CryptoNight "slow hash"?
18:05:27 I'm specifically referring to the bespoke hash-to-point within the Monero codebase, which has been documented yet not reviewed, and I would not be surprised if it had bias.
18:06:59 If we argue it as targeting 126 bits of security, and it has 110, it's fine yet should be deprecated and replaced (and we can replace it any time. Right now, the FCMP++ proposal has the key image generator as a variable in the tree. It can be calculated however we want, including fixed to some constant if we had a proposal there)
18:07:19 Hm. That's an interesting thought for its own line of research :D
18:07:59 Yet to go back to my original point: a bespoke hash-to-point is probably not great. We don't know if it's great or acceptably bad or very bad. We should have this reviewed. If not great, FCMP++ should replace it with Elligator or similar.
18:08:44 I'd just like to ask MRL if that sounds reasonable, and if I get a researcher to submit a CCS/MAGIC fundraise, they have backing on the premise being worthwhile.
18:09:08 Any reason _not_ to simply go with Elligator, as it's well studied?
18:09:15 Or is that "simply" carrying too much weight
18:09:34 Performance, political capital, and we still have to do this research to ensure supply hasn't been violated historically.
18:10:10 Elligator (at least as used, which may be some variant) does two hash-to-points and sums them to create a non-biased point, AFAIK.
18:10:31 A lot of the hash-to-points discussed for standardization do so, from my brief experience with them (which may be incomplete).
18:11:20 So I don't want to propose a 50% perf drop without justification. We get the justification that it's 'standard', but then we have two hash algos and the old one is tech debt still needing implementation/maintenance.
Even if we said the security alone was sufficient, we still need to understand the security of the old one.
18:11:28 And if the old one ends up secure, why move off?
18:12:02 Yep, basically
18:12:27 Considering it's keccak256 with some method of determining a valid y coordinate, and recovering the x, my expectation/hope is that it's a bn254-esque issue.
18:12:29 Deprecate, don't use, still out of human feasibility as understood today.
18:12:38 50% perf drop on this operation means what percent drop on verifying a whole tx?
18:12:44 Which means if we replace it, we're fine and can move on.
18:12:47 i.e. how important is it?
18:12:57 Uhhhh, right now or FCMP++s?
18:13:12 If you can give an answer on both
18:13:26 Right now, we don't cache the key image generator. Every ring signature loads the output and re-runs the hash-to-point, AFAIK.
18:13:47 So input verification would decrease... 10-20%? Off-hand estimate?
18:14:18 That hash-to-point will be 1/3 of the hash functions done by the CLSAG, which is dominated by its hashes, IIRC.
18:14:29 Well, presumably you could use a faster hash function with Elligator, no?
18:14:32 If you'd already be migrating
18:14:50 Under FCMP++s, input verification doesn't do a hash-to-point. We calculate it once per output and save it to the tree.
18:15:04 Like, honestly, we may even be able to do a static point there, now that I think about it.
18:15:08 _e.g._ one of the BLAKEs
18:15:55 Eh. No. We rely on the explicit hash-to-point to achieve a binding property in the linking tag and prevent using a view key to burn other outputs.
18:16:42 So we'd need to at least solve that problem, and then also redo the considerations on related-key attacks.
18:17:18 And then even if it weren't for those issues/necessities, we just had the composition reviewed and I can't sign off on that much time/effort when it's an optimization/protocol simplification at best.
18:17:46 Anyways.
Should be marginal under FCMP++s; off-hand I'd guess 10-20% worse CLSAG verification due to lack of caching.
18:18:47 Aaron Feickert: I'd argue the end hash would need to be 20+% faster to be justified, and likely under the FCMP++ design (which determines and saves the key image). We can make the current CLSAG verification use a variable hash-to-point, yet we'd have to track ring member age with the ring member, which is a mess.
18:19:11 (if we were picking a new function and justifying it based on performance. A 5% faster algorithm isn't worth complicating the specification)
18:19:13 Could be interesting to time that out... Elligator-upon-some-BLAKE vs. CN
18:19:28 Interesting to time out, I agree :D
18:20:04 I've seen solid throughput numbers for the BLAKEs, but don't recall what that means for small inputs, where "throughput" probably isn't really the measurement you want
18:20:07 But again, historical review is necessary to ensure current integrity, so I'd just like MRL's endorsement on an effort to find *someone* to review the current fn for bias / determine how much bias it has.
18:20:35 kayabanerve: Can you say if you have a candidate to attempt this?
18:21:12 Worst case, it's not collision resistant and we need to start arguing the security of CryptoNote ring signatures? Does CLSAG's soundness hold even if the dynamic generator is non-uniform, Aaron Feickert?
18:21:49 Rucknium: No, I asked one person and they deferred, believing there to be better candidates. I don't want to cold email the Elligator people before knowing MRL endorses this research.
18:22:27 Because I can cold email and organize a CCS, but then CCS will ask for MRL's opinion and we'll be in an MRL meeting a month from now (after I used a bunch of people's time) discussing the premise of this research as we are now :p
18:22:57 I'd like to confirm the premise is agreed upon, find a candidate, and then have MRL solely review the specific proposal (not the premise of the proposal, which has become a stated goal)
18:23:06 I don't fully understand this specific issue, but I am in favor of funding research on potential vulnerabilities. It sounds like this is that.
18:23:19 So don't worry, not going to ask to raise 100k ahead of time as a slush fund for this research :p
18:23:27 It assumes the hash-to-point function can be modeled as a random oracle
18:23:58 Yes, but does it have to be a _good_ random oracle for _soundness_
18:24:05 (I can follow up in DMs later, don't worry :) )
18:24:43 Yeah, probably a little off topic for this meeting
18:24:56 (basic answer... consequences of a "bad" RO unclear)
18:25:54 I have a question: If it isn't as secure as we want, can the issue be eliminated on the blockchain by prohibiting new txs with that operation (e.g. use Elligator instead), or do the unspent outputs on the chain _need_ to use the old operation? So it would be harder to stop a vulnerability from being exploited later
18:28:21 Yeah, so worst case (<110 bits of security), we'd do follow-up work on the CN ring sigs/MLSAG/CLSAG to investigate impact.
18:28:28 This effects the key image.
18:28:54 Can we get the opinion of tevador about this topic? Or jeffro256?
18:28:55 Hm. Both "affects" and "effects" are arguably valid there. I meant to type "affects" but I'm not sure I'm wrong for not doing so...
18:29:14 But yeah, we can't move to Elligator universally without enabling double spending of all historical outputs.
18:30:33 By the way, https://monitor.stressnet.net is back. The data collector script had died.
18:30:37 The exact implications of it being bad are unclear.
18:31:00 I'd want to say under FCMP++, it'd only break unlinkability? I don't believe we argue soundness at all as premised on the hash-to-point?
18:31:57 IIRC soundness would not be directly affected
18:32:02 We output `I' = I + r_i V` and a commitment to `r_i`. The membership circuit asserts the `r_i` is consistent. Even if you found a relationship of I to V, it doesn't matter, as we computationally bind to the V randomness.
18:32:37 Right. So even if the existing points are biased, and you can find relationships for existing points, it shouldn't be an issue.
18:32:43 At no point do things like commitment binding depend on any particular relationship between the hash output and other generators
18:32:49 And we'd stop the ability to produce new unbiased points, so you could only work off what's already on-chain.
18:32:53 (this was specifically checked)
18:33:39 The larger concern is the existing ring signatures, MLSAG historically, and CLSAG. CLSAG very strongly transcripts, and I hope its soundness to be fine. Ring signatures... may have some short Schnorr argument? MLSAG, I'd hope to follow CLSAG.
18:33:53 But ring signatures' use of hash-to-point is incredibly load-bearing, to a horrific degree.
18:34:25 That's its own nightmare. Thankfully, FCMP++s mean we can finally deny TXs with ring signatures (they're still allowed today for migratory purposes).
18:34:54 Anyways. Whole thing. Investigating the bias would be the first step to any discussion.
18:35:10 FWIW I can take a closer look at the CLSAG security proofs later to investigate this more thoroughly
18:35:12 It's a good question
18:35:39 (anyone else is of course welcome to do this too)
18:36:07 Part of me doesn't want you to waste your time before we determine bias, yet I'd explicitly be curious if MLSAG/CLSAG soundness holds upon a bad ROM here.
18:36:11 kayabanerve: A decision on this can wait until next meeting, right?
Maybe get more opinions and discuss next time.
18:37:10 It can? It just delays my contacting people. I have yet to hear any objections to this research, which is obviously ambiguous, and in the worst case enables forging key images/proofs.
18:37:12 Eh, I'm curious :D
18:37:39 So if you want the official position to be "until next meeting", sure, yet I personally think the premise has been well received enough that I'm fine moving forward.
18:37:46 "Do random oracles actually exist" is a major load-bearing question for a lot of cryptography...
18:37:47 But also yes, I'd love tevador's opinion.
18:38:13 (And if I'm fine moving forward, I get it's not yet officially MRL-backed and promise to not represent it as such ;) )
18:39:17 If MLSAG/CLSAG holds, the absolute worst case is we migrate all existing RCT outputs yet not all historical CN outputs. Then we turnstile those, so it goes from inflation to theft.
18:39:45 (migrate into FCMP++)
18:43:18 I have nothing else to say immediately on this (and apologies it took much longer than expected), other than the above comment being *why* this review should complete *before* the FCMP++ PR is finalized. We can finish discussions next week for any official endorsement of the research topic (though yes, I'll probably start reaching out prior to the official endorsement). I have nothing else to say on FCMPs other than congrats to CS for the new work.
18:44:01 --- END MEETING ---
18:44:07 Thanks, everyone.
18:44:41 thank you
19:14:34 Is the 'hash to point' function the only one in the current protocol that doesn't already have a formal review conducted (and, one assumes, published)?
19:25:25 There are two hash-to-point functions, technically. Informally: decompress(keccak256()), which has a failure rate, and map(keccak256()), which is used for key image generators and has no failure rate.
19:25:46 The former is only used for the generator `H`. The latter is used for key image generators and BP generators.
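The key image re-randomization described earlier in the discussion (output `I' = I + r_i V` with a commitment to `r_i`, the circuit asserting consistency) can be sketched in a toy prime-order additive group. This is an illustration only: integers mod a prime stand in for Ed25519 points, and the names are hypothetical, not the FCMP++ implementation.

```python
# Toy sketch of FCMP++-style key image re-randomization: I' = I + r_i * V,
# with r_i committed separately. Modeled in a toy prime-order additive
# group (integers mod q), NOT real elliptic curve arithmetic.
import secrets

q = 2**127 - 1    # toy prime group order (assumption, for illustration)
V = 7             # toy fixed generator

def rerandomize(I: int) -> tuple[int, int]:
    """Blind the key image generator I with fresh randomness r."""
    r = secrets.randbelow(q)
    return (I + r * V) % q, r

def consistent(I: int, I_prime: int, r: int) -> bool:
    """What the membership circuit asserts: I' = I + r*V."""
    return I_prime == (I + r * V) % q

I = secrets.randbelow(q)
I_prime, r = rerandomize(I)
assert consistent(I, I_prime, r)
```

The point mirrored here is the one made in the log: even if some relationship between `I` and `V` were known, `I'` computationally binds to the fresh randomness `r`, so bias in how `I` was originally derived does not by itself break the re-randomized form.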
19:27:02 Aaron Feickert should be able to comment that the former is fine? They may decline to, as the exact security properties may be difficult to define exactly, yet I've never heard it implied to be significantly faulty.
19:38:21 The former should be fine, yes
19:48:11 I'll also note Shen notated the function a bit, and cited other hash-to-points at the time (noting how this construction was distinct). Unfortunately, they didn't work on positing it as comparable/secure. My points of contact will presumably be the people who proposed the hash-to-points being standardized, some modern researchers, and potentially one or two who have similar constructions.
19:48:17 (if any are similar)
21:32:24 Excuse my ignorance, but why would the former be fine, and not the latter?
21:33:26 Because decompression effectively forces rejection sampling until it finds a valid y.
21:33:40 The map function does coercion, which may produce a non-uniform y.
21:50:23 The CryptoNote hash-to-point originates in this paper: https://arxiv.org/pdf/0706.1448. And it is informally described by Shen Noether in https://web.getmonero.org/resources/research-lab/pubs/ge_fromfe.pdf
21:57:36 kayabanerve: I really appreciate that explanation. I actually think I understand it!
21:57:48 UkoeHB: nice, thanks.
22:07:00 UkoeHB: Shen described it as novel. TIL there is a citation for it.
22:08:21 Do you have further background on where/when that was found? I'm pulling it up to compare now
22:19:13 Not to say the theory isn't related, yet the formulas Shen notated do not appear present in the former paper.
22:20:06 That could be notational (I won't claim to know either algorithm expertly after 10m), or it could be a lack of equivalence.
22:21:43 Or it could be how the former isn't for hashing to twisted Edwards, and the latter would have to embed the map from the Weierstrass curve in order to hash to a twisted Edwards point.
22:23:17 Pulled up ZtM. It cites the CryptoNote WP for that claim.
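The distinction drawn above (rejection sampling until a valid y is found vs. a coercive map that forces every input into range) can be illustrated with a toy example: mapping a uniform byte into a range of size 200 by retrying vs. by reducing mod 200. The modular reduction doubles the frequency of low values, analogous to how a coercive y-coordinate map can produce non-uniform output. Toy numbers only, not the real curve arithmetic.

```python
from collections import Counter

RANGE = 200  # toy target range; a uniform byte (0..255) doesn't divide evenly

def rejection(samples):
    """Retry semantics: discard out-of-range bytes; output stays uniform."""
    return [b for b in samples if b < RANGE]

def coercion(samples):
    """Force every byte into range with mod: values 0..55 appear twice as often."""
    return [b % RANGE for b in samples]

bytes_in = list(range(256)) * 100            # perfectly uniform input
rej = Counter(rejection(bytes_in))
coe = Counter(coercion(bytes_in))

assert rej[0] == rej[199] == 100             # rejection: flat histogram
assert coe[0] == 200 and coe[199] == 100     # coercion: biased toward 0..55
```

The cost of the uniform option is exactly the "failure rate" mentioned for decompress(keccak256()): some inputs must be rejected and retried.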
22:25:34 Found the relevant citation in the CN WP. I'd have to ping Aaron Feickert on whether the former actually lines up with the latter, assuming some Weierstrass map was crammed in?
22:35:08 I never looked into the math of it, so I'm not sure if the citation is accurate.