00:08:40 Clover?
00:09:17 yes
00:10:43 As I understand it, Rucknium is waiting on the Dandelion++ author to review the Clover work, since it lacks in-depth analysis. But since he has heard nothing back, and the goal was peer review, maybe some companies could be hired to do a review?
00:16:52 I thought someone already looked it over, and was unimpressed?
00:18:48 I can't find that in the chat logs, so I must be mistaken
00:31:59 IIRC the D++ author was unimpressed with Clover, but I could be mistaken
00:32:26 Bat signal Rucknium
00:34:51 I can't get an accurate date of when Rucknium said this, but for context: "I got a reply from Giulia Fanti, the lead author of the Dandelion++ paper ( https://www.ece.cmu.edu/directory/bios/fanti-giulia.html ). She said she hadn't read the Clover paper yet, but would take a look. Clover is an alternative to D++ that is supposed to have better privacy for nodes with closed inbound ports."
14:05:20 SyntheticBird: I don't know if there are any established companies that would do this type of review. There are companies that check whether a study used standard, appropriate statistical methods, but that's for medical trials and similar. This is very different. Maybe someone could try to find a company that would do this.
14:05:27 Anyway, a strict "review" would probably just return what I've already said: the Clover paper uses less rigorous methods to support its claims. Simulations aren't as good as mathematical proofs.
14:05:45 You would want someone to go beyond a review and try to mathematically prove Clover to roughly the same standard as the D++ paper. Or you could have someone try to find holes in Clover, but not finding a hole is not the same as proving its privacy properties.
14:05:57 bitjson, a BCH developer, said he recently contacted some of the Clover and D++ paper authors https://bitcoincashresearch.org/t/network-layer-privacy-on-bitcoin-cash/1524/40
14:05:58 > I reached out to researchers involved with several of the papers last week; I’ll post anything new I learn or if any have comment they want to make public.
14:06:10 So, maybe Fanti will put her read of the Clover paper at a higher priority now that two blockchains are interested in Clover. I'll email her again. Fanti isn't going to give a full peer review, at least not in the near term. She would probably give her general impressions.
15:02:31 MRL meeting in this room in two hours.
15:52:57 I see. thx for the explanations.
16:22:08 Hiii
16:32:30 fund uni students for research?
16:35:15 Uh oh. Do we have a delay with matrix.org messages? The timestamp on rando's message is 14:24 UTC
16:37:11 Yes, possibly other types of external researchers could be funded to do this. I just meant that I doubt there are companies that do this type of thing the way that there are code audit and cryptography review companies.
16:38:18 Maybe it's just rando's client with the time problem
17:00:27 Meeting time! https://github.com/monero-project/meta/issues/1177
17:00:35 1) Greetings
17:00:41 hello
17:00:51 Hello
17:00:52 hello
17:00:58 Howdy
17:01:08 Hi
17:01:11 I'm new here
17:01:33 *waves*
17:01:50 love: welcome
17:01:52 hi
17:02:49 hello
17:02:51 Is it really mathematically impossible to implement a smart contract mechanism in Monero?
17:03:35 2) Updates. What is everyone working on?
17:04:14 love: this is a weekly meeting with a set agenda, your question can be discussed later
17:04:28 Sure
17:04:30 love: Monero's current protocol doesn't support smart contracts. Implementing smart contracts would require a major protocol change. AFAIK, there is little support for this idea in the Monero community.
17:05:05 sounds interesting
17:05:05 me: implemented @jeffro256's suggestions to trim the tree via a cached right-edge in the db on reorg + delay trimming until the tree is 10 blocks ahead of the chain (more on these here: https://github.com/monero-project/monero/pull/9436#issuecomment-2519858103) and finishing up some debugging at the moment. Got expected RPC functional transfers + blockchain tests working. Also working on banning torsion at consensus for the FCMP++ fork
17:06:07 me: Wrote a simulation of the privacy risks of OSPEAD DSA deployment without a hard fork. Preliminary results here: https://gist.github.com/Rucknium/fb638bcb72d475eeee58757f754acbee . Also, some more work on speed improvements to part of the OSPEAD estimation procedure (3x speedup on a days-long process).
17:07:05 Me: Successfully integrated and tested Carrot scanning into `wallet2`: https://github.com/seraphis-migration/monero/blob/ebf18f4d4001b08676de29f47c07754649c1d240/tests/unit_tests/wallet_scanning.cpp#L660. With a net decrease in the size of `wallet2.cpp`, I might add
17:07:37 There are some kayabaNerve items on the agenda, so ping kayabanerve
17:08:10 3) Maintainers for the research-lab GitHub repo. https://github.com/monero-project/research-lab
17:09:50 luigi granted me some powers on the repo, so I can put things in it (I think). Probably can start with updating the IRC server from freenode to Libera and adding the Matrix address. There's already a PR for that. And add jeffro256's "docs/utils: add decoy selection implementation guides and tools" https://github.com/monero-project/monero/pull/9024
17:10:07 Forgot to give updates. me: working on perf updates and lws-frontend stuff
17:10:30 And put in the papers commissioned by MRL on FCMP so we have extra copies.
17:10:53 Any other suggestions for the repo?
17:12:21 4) Salazar, R., Slaughter, F., & Szramowski, L. (2025). "Veridise Logarithmic Derivative Review." https://github.com/cypherstack/divisor_deep_dive
17:14:48 I guess kayaba isn't here, but not sure there is much more to discuss on this review since we last discussed it. The general takeaway from it was that the next work item with CS is meant to address the concerns raised in the paper
17:15:08 I read this. (I did not understand much.) AFAIK, it says the authors wished the Veridise paper by Bassa had better practices for citing important mathematical theorems and lemmas from the research literature, but that things looked OK. It was also clearer/more direct about the risks in the proposed protocol: that it needs a range proof, or an adversary can produce a forgery. The Veridise paper by Bassa was not as direct about that IMHO.
17:16:10 I guess a range proof is needed (which could probably make the protocol much heavier in tx size), or there is some cryptographic context surrounding this component that repairs the range proof issue
17:17:02 I guess we need someone external to verify that the context avoids the issue
17:17:44 AFAIU it's the latter. kayaba's claim, as I understand it, is that it's not relevant to how we're using divisors, and that CS's next slate of work is to verify the claim (and our use of divisors generally), while addressing that component
17:18:45 CS = Cypher Stack https://www.cypherstack.com/ , for anyone unaware
17:19:05 Ok, sounds good. We will wait for that proposal
17:19:47 Any other comments on this item?
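A brief aside on the term in the paper's title, offered as general mathematical background only (this is standard calculus, not a claim about the FCMP++ construction itself): the logarithmic derivative of a function f is dlog f = f′/f, and it turns products and quotients into sums and differences, dlog(fg) = dlog f + dlog g and dlog(f/g) = dlog f − dlog g, which is roughly what makes it a convenient tool for checking multiplicative relations among divisor functions using additive constraints inside a proof system.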
17:20:30 5) FCMP: Veridise Formal Verification of Gadgets and Circuit Audit. https://gist.github.com/kayabaNerve/0de6320b67357dd348fba3ce80bf537d
17:22:26 "The research fund has over 250,000 USD at current prices." What is that again?
17:22:28 > Veridise has yielded a quote to work on formally verifying the non-interactive gadgets present. The FCMP++ specification has two interactive gadgets, one the discrete log proof they wrote security proofs for, and one the tuple-member-of-list gadget. Their quote also includes the development of a soundness proof for the tuple-member-of-list gadget.
17:24:21 This wording is confusing to me since it says Veridise would be verifying _non-interactive_ gadgets. But then it says FCMP++ has two _interactive_ gadgets. Is it worded like that to say that there is still work to be done on other gadgets after Veridise would complete this proposed work?
17:25:01 It might be a typo. FCMP++ shouldn't have any interactive gadgets AFAIK
17:25:39 That would mean that each node would need a direct line of communication with the sender of a transaction ;)
17:25:58 Is all this code auditing, or some of it theoretical mathematics, too? If the latter, then we should comment to them (if not already) that we want more extensive citation practices than Bassa (2024), because Cypher Stack researchers suggested it.
17:27:02 rbrunner: What is the research fund? AFAIK, this: https://ccs.getmonero.org/proposals/fcmp++-research.html
17:27:15 "It might be a typo. FCMP++ shouldn't have any interactive gadgets AFAIK" -> believe this is correct
17:27:50 Ah, yes, of course. So it *was* 250,000 USD, when spending started :)
17:29:20 "Is all this code auditing, or some of it theoretical mathematics, too?" -> it's both. In the list of remaining research tasks, this would cover 1) gadgets formal verification (theoretical), 2) gadgets impl audit (code audit), 3) circuit impl audit (code audit)
17:30:01 "then we should comment to them (if not already) that we want more extensive citation practices than Bassa (2024), because Cypher Stack researchers suggested it." -> seems a reasonable suggestion to me (pinging kayabanerve to notify)
17:30:02 Is it the intention that this proposed expenditure achieve loose consensus here at this meeting?
17:30:34 That was the intent
17:30:35 Wrong again. This is probably indeed what currently remains. I just no longer remembered how big the original CCS sum was.
17:31:55 > Finally, their quote includes an audit of generalized-bulletproofs-circuit-abstraction, generalized-bulletproofs-ec-gadgets, and full-chain-membership-proofs. The first library provides a higher-level API for the generalized-bulletproofs crate, audited by Aaron Feickert under Cypher Stack's solicitation, funded by a third party (Power Up Privacy).
17:31:56 Will this audit a "wrapper" for something that has already been audited, or will Veridise audit the same library again?
17:32:12 I'm not opposed to either alternative
17:32:25 The former, as I understand it
17:32:32 Just making sure I understand the scope of work
17:32:45 Hey all, I can comment a bit on the scope of work for this Veridise proposal
17:32:58 Thanks, sgp_
17:33:37 https://matrix.monero.social/_matrix/media/v1/download/monero.social/MTCidUxooiQjtMoWcdgGaXij
17:33:51 This is exactly how the current proposal appears in the contract
17:34:01 The redline was accepted, so you can ignore that
17:34:29 This audit, in sum, is one of the largest remaining components of this research CCS
17:34:42 Is Picus source-available at least?
17:34:59 The work will be completed in this manner:
17:35:00 1. Formal verification/proofs for the gadgets.
17:35:02 2. Review of the circuit and overarching proof.
17:35:04 3. Review of the implementation.
17:35:25 It's in this order because if they find something incorrect in part 1 or 2, it can be fixed before wasting the rest of the time auditing a broken implementation
17:35:56 https://github.com/Veridise/Picus
17:36:50 I think that marketers love using the "proprietary" adjective, even if inappropriately.
17:37:45 Well, maybe it's technically correct to use that word since it is copyrighted in their name, but it doesn't increase confidence in this situation.
17:38:12 The estimated start date for this is April 7th, and I believe it's important to get started on this if there is approval for it
17:39:36 For (1), would MRL then get someone to review Veridise's mathematical proofs?
17:39:40 Veridise is the preferred vendor because they have a lot of experience with these tools. That's the main thing they are known for, as I understand it
17:40:29 Yes, there will be a completely separate project for reviewing the divisors work, and Veridise will _not_ be the vendor for reviewing that work
17:40:31 Which would make it important to ask them to change their citation practices so that, potentially, Cypher Stack researchers wouldn't be distracted by it.
17:41:17 I spoke with kayaba yesterday about that, and those quotes were not ready in time for this meeting. But they are being actively discussed
17:42:13 It's possible that other vendors will be contacted for the divisors review as well, but Cypher Stack has done the two prior reviews and is the main firm being talked to for the final review
17:42:42 This proposed Veridise audit is largely separate from the divisors work. It can be thought of as a distinct project for planning purposes
17:42:43 I'm a +1 on Veridise in this proposal. It's quite large in scope, and knocks out major tasks remaining. Veridise has demonstrated high quality work and has the requisite skills and knowledge to take this on
17:43:17 Thanks, jberman. More opinions on this proposal?
17:44:50 I am not aware of any major hiccup so far in the whole process, kaya seems to be on top of it, and I continue to trust his proposals on how to proceed
17:45:56 Just saw that the fiat price almost doubled since people donated to that CCS. Good luck!
17:46:20 +1 This all sounds good to me. Thank you everyone who put work into this important step :)
17:46:51 Agreed!
17:47:27 Make sure the Veridise researcher(s) doing the mathematical proof read Salazar, R., Slaughter, F., & Szramowski, L. (2025). "Veridise Logarithmic Derivative Review." https://github.com/cypherstack/divisor_deep_dive
17:48:34 Rucknium: For the divisors, Veridise is not expected to do more work on that, unless another review demonstrates that they have something else to review
17:48:57 They were shared a copy of that CS review work
17:49:37 sgp_: The Cypher Stack researchers had suggestions for how Veridise could present the work. Those are the most relevant parts of it.
17:50:01 Veridise disagreed with those, and believes no modifications are necessary
17:51:10 Well, at least they read it
17:51:41 Further review work remains on divisors, but the suggested plan needs more work before presenting at a meeting
17:52:26 So expect an update on those in a week or two (?) when there's a clear option to deliberate on
17:54:10 Circling back to Veridise, were that +1 from Rucknium and the "Agreed" from jeffro also favorable opinions on proceeding with Veridise on their proposal?
17:55:11 Yes, +1 from me on https://gist.github.com/kayabaNerve/0de6320b67357dd348fba3ce80bf537d . Thanks for checking
17:55:53 Let me know if this has specific approval to move forward so I can execute the agreement
17:56:01 I see rough consensus in favor of the expenditure to contract Veridise for this scope of work: https://gist.github.com/kayabaNerve/0de6320b67357dd348fba3ce80bf537d
17:56:42 sgp_: You are approved to execute the agreement
17:56:50 Thank you all
17:57:38 6) Prize contest to optimize some FCMP cryptography code. https://github.com/j-berman/fcmp-plus-plus-optimization-competition
17:57:49 +1 from me too
17:58:40 I have one point of discussion for the contest
17:59:29 kayaba is of the opinion that the final divisors review will go smoothly, and that the contest should not be delayed by the ongoing divisors review
18:00:27 You mean, not wait until it's really sure that those divisors hold, but take the small risk and start nevertheless?
18:00:42 Correct
18:01:32 I think there is still some non-zero (seemingly small) chance that the final divisors review identifies issues
18:02:03 IMHO, the risk should be taken. Since it will probably be mostly people not currently involved in the FCMP implementation, the loss would only be in XMR, not FCMP labor hours. That's my first impression.
18:02:04 Nothing against that from me. There is some risk inherent in almost every step here, so if chances look good I think we should march on
18:04:18 Here's a possible timeline: we launch the contest and say we start accepting submissions 1 month from today. Then, in 4 weeks, the divisors review completes and identifies some issue, while the contest hasn't opened for submissions yet
18:04:27 Not exactly sure how to handle that situation
18:05:14 I'm thinking about including a clause (in the divisors contest only) noting that there is review work ongoing and that there is a small chance the review work identifies something that would affect the contest
18:06:32 Or, in that situation, do we just count it as a loss, proceed with the contest as it originally was anyway, and still pay out any submissions that satisfy the contest rules?
18:06:49 I'm not sure I like that option. IMO if we open a competition with terms and people work on it, then we have an obligation to pay the winner who fulfills those terms. Also, saying "we accept submissions in 4 weeks but there's a chance we close it" makes us sound weak/unsure and might drive off competitors
18:07:17 Doesn't exactly inspire confidence
18:08:02 Ok, good with me to take the risk
18:08:11 I agree with jeffro256.
18:08:13 Isn't the chance of a total loss smaller still? Maybe only small tweaks would be needed, which the submissions might be able to adapt to
18:08:24 If the gadget is broken, but the issue is minor and most of the performant code can be reused/refactored, then it isn't a complete loss anyway
18:08:36 rbrunner: jinx
18:08:47 :)
18:08:54 Awesome
18:09:22 Well then, the contest is a go :) jeffro256's given approval on the details
18:09:43 A major loss for us would only occur if the issue was a major flaw in the divisors, which I feel is unlikely this far down in the process. I can foresee small issues, but IDK
18:09:44 Good. The tension waiting for the fun is getting hard for me :)
18:10:40 I am really curious how it will go
18:10:55 Proposal: we open the contest for submissions Monday April 27th, and the contest closes for submissions June 29th
18:12:25 And we can commence the marketing blitz with xmrack's help imminently
18:12:35 Sounds great to me. Thanks for all the work on this, especially jberman, jeffro256, and kayabanerve
18:13:12 👍🏼
18:13:42 Catching up on chat logs now
18:14:37 xmrack: Don't leave yet. The next item may be relevant to you :)
18:14:58 7) Release of OSPEAD HackerOne and CCS milestone submissions. Analysis of risk of new decoy selection algorithm without a hard fork. https://github.com/Rucknium/OSPEAD
18:15:10 This work suggests that the feared "dip" in effective ring size during a transition period will not be a major problem, at least at the startup phase with >10% upgraded wallets.
18:15:18 New gist: "Preliminary results of risk of OSPEAD deployment without hard fork" https://gist.github.com/Rucknium/fb638bcb72d475eeee58757f754acbee
18:15:44 flip flop: That's right, according to very preliminary results.
18:16:01 In these very preliminary results, I find that it is better to deploy the OSPEAD-derived DSA now, at any user adoption share, than to continue with the current DSA. There are some big caveats. Mainly:
18:16:16 1) Some of the chosen distributions are not exactly what would be deployed
18:16:18 2) The procedure I used is not necessarily the most powerful one for the adversary. A procedure that puts everything into a machine learning algorithm might be able to achieve greater de-anonymization results.
18:16:46 IMHO, next steps are trying to put it all in a machine learning algorithm, possibly with the help of spackle, xmrack, and/or ack-j's prior code applying machine learning to stagenet/testnet txs: https://github.com/ACK-J/Monero-Dataset-Pipeline/tree/main/DataScience
18:16:55 And re-estimating OSPEAD using recent data and the faster code I've been working on, plus a few changes appropriate for deployment without a hard fork. Then the new estimates can be used for simulations.
18:17:14 Why is the risk not greater? IMHO, it's because the NN classification into new/old DSA becomes harder when the sample is very unbalanced, i.e. when the new-DSA users are a small minority. Yet those are exactly the circumstances when a non-fungibility classifier is strongest.
18:17:30 For example, when 50% of txs are new-DSA and 50% are old-DSA, the NN classifier correctly predicts the DSA of new-DSA transactions in 91% of cases. When 5% of txs are new-DSA and 95% are old-DSA, the NN classifier correctly predicts the DSA of new-DSA transactions in only 67% of cases. This is a well-known issue with classification on unbalanced samples. So well-known that they teach it in medical school: https://quoteinvestigator.com/2017/11/26/zebras/
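To illustrate the class-imbalance effect described just above with a generic, self-contained example (synthetic data and an off-the-shelf classifier; this is not the MRL NN classifier, its features, or its data):

```python
# Minority-class ("new-DSA"-like) recall of the same kind of classifier,
# trained once on a balanced sample and once on a heavily unbalanced one.
# With a 5% minority share, recall for that class typically drops sharply,
# mirroring the 91% -> 67% pattern described above (numbers here are synthetic).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

for minority_share in (0.50, 0.05):
    X, y = make_classification(
        n_samples=20_000, n_features=10, n_informative=5,
        weights=[1 - minority_share, minority_share], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Share of true minority-class cases that the classifier detects.
    print(f"minority share {minority_share:.0%}: "
          f"recall {recall_score(y_te, clf.predict(X_te)):.2f}")
```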
18:17:46 Also, we must keep in mind that not just the "transaction of interest" must be classified into new/old-DSA by the NN classifier. The transactions of every ring member (the "antecedent transactions") must also be classified as new/old DSA, which multiplies the error.
18:18:09 Having a machine learning algorithm do all the steps may increase the de-anonymization risk since the ML can adjust the rules of the first DSA classification step to suit the later real-spend classification step. Right now they are separated.
18:18:21 That's my update
18:20:02 Very encouraging preliminary results.
18:20:19 Of course we don't know much about potential adversaries, but I wonder who would still pour money into their tools and update them after such a deployment, with FCMP++ approaching
18:20:36 > For example, when 50% of txs are new-DSA and 50% are old-DSA, the NN classifier correctly predicts the DSA of new-DSA transactions in 91% of cases.
18:20:36 ... that leaves 9% for old DSAs. Which is pretty good (low risk)!
18:21:43 The MAP Decoder is the strongest real-spend classifier using just the timing data, according to Proposition 4 of Aeeneh et al. (2021). So an ML classifier in theory wouldn't be able to improve on it. My non-fungibility (NF) classifier is pretty good I think, but maybe it could be improved. I think the potential advantage from a whole-ML approach is in improvements in mixing the two real-spend classifiers and adapting the DSA classification rules/thresholds to serve the later real-spend classification step.
18:24:14 This is very interesting. So what about the case where we have 5% old DSA and 95% new DSA?
18:24:53 I really like this work, but "do nothing" is in fact a valid option... imho
18:28:14 ArticMine: Good question. These are some results with an older version of the NN classifier: When the _old_ DSA share is 10%, there is still no greater risk for those lagging users. But then at 5% _old_ I get modestly higher risk: 32.8% probability of guessing the real spend for those old-DSA users, compared to the 27% baseline risk of the MAP Decoder against the old DSA.
18:29:04 Then at 2% old I get 29.7% probability of the adversary guessing the real spend (again, using the older and less effective NN model).
18:30:17 Of course, by that point the risk for the new-DSA users is much smaller, and they make up the vast majority of users, so the aggregate mean risk is lower when we are that far into the adoption curve
18:30:48 Rucknium: I'm pretty limited on free cycles but can probably whip something up fast if you point me to the dataset. We could also post it on kaggle and see who can tune the best NN
18:31:01 ... when the baseline of 4.2 translates to 26.25%
18:31:16 Also that's still "only" a 1/3 chance, where it's a 1/4.2 chance now, yeah?
18:33:15 jeffro256: Yes. Note that the baseline MAP Decoder risk is a little higher in this simulation: https://gist.github.com/Rucknium/fb638bcb72d475eeee58757f754acbee#assumptionsinitial-parameters
18:34:21 ... when the baseline of 4.2 translates to 23.2%
18:34:28 xmrack: Could you have more free cycles if I paid you to work on it (possible for spackle, too)?
18:34:42 ... when the baseline of 4.2 translates to 23.8% [sry]
18:35:24 that's an interesting situation, reminding me a bit of a kind of moral hazard/utilitarian position, i.e. users adopting the new DSA can reduce the effective ring size of those who stay on the old DSA. Not making a judgment here, just noting.
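To connect the numbers quoted above, some back-of-the-envelope arithmetic (my illustration only, not code from the gist; it assumes the current ring size of 16 and treats per-ring-member classifications as independent, which is a simplification):

```python
# "Baseline of 4.2": with an effective ring size of 4.2, a timing-only
# adversary's chance of guessing the real spend is the reciprocal.
print(1 / 4.2)                       # ~0.238, i.e. the ~23.8% quoted above

# "The transactions of every ring member ... must also be classified ...
# which multiplies the error": if one new/old-DSA call is correct with
# probability p, classifying all 16 ring members' antecedent transactions
# correctly happens with probability roughly p**16 (independence assumed).
for p in (0.91, 0.67):
    print(p, round(p ** 16, 3))      # 0.91 -> ~0.221, 0.67 -> ~0.002
```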
18:36:27 I acknowledge that my knowledge of machine learning techniques is limited, and I want to make sure we get all the help we can on this
18:37:25 It actually becomes an outreach/marketing problem: minimizing the time when the new DSA / old DSA split is most unbalanced
18:37:51 Rucknium: it's more a time constraint issue. I'll happily help out as much as I can
18:38:33 chaser: I doubt that a 95% rate of adoption could be achieved before a new hard fork. But yes, you are right that there are interesting differential effects on adopters/non-adopters that raise normative questions.
18:39:26 xmrack: Thanks. I will work towards it.
18:40:38 Rucknium good point. in that case, as ArticMine said, the goal would be to boost the initial adoption wave to push the mark past the first 5% as quickly as possible.
18:40:56 "It" = seeing how I can set something up for you to easily plug in data to what you already have, etc.
18:42:13 I liked spackle's idea to have a delayed trigger in a release version that would have all updated wallets start using it after a predetermined block height, maybe 1-2 months after release. Then you could "skip" some of the adoption curve
18:42:25 Rucknium: My preference is to contribute what I am willing and able to as a volunteer. That said, I'll consider the possibilities.
18:43:03 on marketing: does monerod throw an (aggressive) warning to peers that are not up to date? [ like zanod does, for example ]
18:43:40 the delayed trigger sounds like a good idea.
18:43:42 flip flop: What needs to update is the wallet, not the nodes. Anyway, the GUI wallet suggests updates when they are available. And the "third party" mobile wallets would probably suggest updates, too, or they automatically update on some users' devices
18:44:11 spackle: I'm using your NN model setup in my simulations.
18:45:21 Feather Wallet gives you a notification in its status bar (at the bottom) upon new releases. It can't even be turned off, which, in this case, is a good thing :)
18:45:23 I will post the simulation code soon. Right now it is a little more confusing than I would like, since it has to switch between the two DSA possibilities a lot.
18:45:43 I like the delayed trigger idea.
18:47:47 Let's end the meeting here since we are 45 minutes past the hour. Feel free to continue discussing any items after the meeting. Thank you everyone.
18:51:14 Thanks all
19:00:12 Apologies. My timing was off today
19:01:36 jeffro256: The two interactive gadgets refer to the gadgets requiring the Fiat-Shamir transform: the discrete log gadget and the tuple-set-membership gadget. One was proven, the other would have security proofs under this proposal.
19:03:24 It's only the gadgets which are non-interactive which would be formally verified. The ones requiring a hash function aren't really eligible. It's those we either have security proofs for or would get a security proof under this quote.
19:06:07 there's actually a monero.social/matrix.org delay issue again, I guess
19:06:12 As for the largely distinct discussion on divisors/the discrete log gadget's security proofs, I'll try to have a complete comment these next few days.
20:10:29 love: regarding your earlier question about smart contracts, which is actually a good question. What Rucknium said is all correct. I'll raise attention, though, to two items:
20:10:49 * Kayaba proposed a scheme in which Monero would evaluate an R1CS circuit, a sort of fertile ground for verifying ZK proofs of execution of arbitrary logic. They also proceeded to sketch out a concrete design: https://github.com/monero-project/research-lab/issues/116#issuecomment-1947749510
20:14:12 * Andrew Poelstra's "scriptless scripts." The general idea is to exploit certain properties of a signature scheme to "encode" logic. Poelstra's work explores this concretely using Schnorr adaptor signatures (https://download.wpsoftware.net/bitcoin/wizardry/mw-slides/2018-05-18-l2/slides.pdf). I (very boldly) conjectured that other schemes, including Monero's current CLSAG, could be exploited in this way. However, scriptless scripts are much more constraining (you can't just write any contract you want), and I expect their currently understood form to become irrelevant for Monero in the medium term because there's momentum to abandon elliptic curve cryptography in favor of post-quantum crypto.
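Since adaptor signatures may be unfamiliar, here is a deliberately tiny and insecure toy sketch of the core scriptless-scripts trick (my own illustration; the group parameters, hash construction, and variable names are invented for readability and are not Monero, CLSAG, or Bitcoin code): completing a pre-signature into a valid Schnorr signature necessarily reveals a secret t, and that forced disclosure is the "encoded logic," e.g. the secret that lets a counterparty claim funds in an atomic swap.

```python
import hashlib, random

p, q, g = 23, 11, 4            # toy Schnorr group: g has prime order q mod p

def H(*vals):                  # challenge hash, reduced mod the group order
    return int.from_bytes(
        hashlib.sha256("|".join(map(str, vals)).encode()).digest(), "big") % q

x = random.randrange(1, q); X = pow(g, x, p)   # signing key pair
t = random.randrange(1, q); T = pow(g, t, p)   # adaptor secret and adaptor point

# Pre-signature: the nonce commitment is shifted by T inside the challenge.
r = random.randrange(1, q); R = pow(g, r, p)
c = H(R * T % p, X, "msg")
s_pre = (r + c * x) % q

# Knowing t turns the pre-signature into an ordinary Schnorr signature...
s = (s_pre + t) % q
assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p   # standard verification

# ...and anyone who sees both s_pre and s learns t.
assert (s - s_pre) % q == t
```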