15:01:20 MRL meeting in this room in two hours. 15:04:08 Yu Gao, first author of recent paper ["Charting the Uncharted: The Landscape of Monero Peer-to-Peer Network"](https://moneroresearch.info/267) has joined the room. Welcome, Yu Gao! 15:04:33 The paper will be the first agenda item of the meeting in two hours. 17:00:15 <0​xfffc:monero.social> Hi everyone. 17:00:19 Meeting time! https://github.com/monero-project/meta/issues/1197 17:00:27 1) Greetings 17:00:51 Hello, ALL, thanks for the warm welcome, glad to talk with you here. 17:01:01 Hello 17:01:01 hi 17:01:02 Hello 17:01:05 *waves* 17:01:21 my coauthor is joining also 17:01:42 hi 17:01:43 Yu Gao: Wonderful. Thanks. 17:02:04 Hi 17:02:06 hello 17:02:46 Proposal: when an author joins the meeting, rucknium should replace the "Greetings" section with "MRL. Assemble!" 17:03:00 2) Updates. What is everyone working on? 17:03:05 Hi everyone! 17:03:38 me: mostly lws-frontend stuff, with some odds-and-ends in monerod 17:04:04 Matija Piskorec is also an author of [Gao, Y., Piškorec, M., Zhang, Y., Vallarano, N., & Tessone, C. J. (2025). "Charting the Uncharted: The Landscape of Monero Peer-to-Peer Network".](https://moneroresearch.info/267) 17:04:06 <0​xfffc:monero.social> Thanks to help from [boog900](https://matrix.to/#/@boog900:monero.social) I have started working on the new tx relay protocol. 17:04:06 <0​xfffc:monero.social> A few general PRs on the side. 17:04:22 me: shared analysis on the FCMP++ weight calculation and proposed an algorithm (https://github.com/seraphis-migration/monero/pull/26#discussion_r2057203539), continuing on including the FCMP++ tree root in the PoW hash (after discussions with jeffro, I'm currently working on keeping the tree 9 blocks ahead of the tip, i.e. 
growing the tree with outputs that unlock 10 blocks higher 17:04:22 than the tip, which includes normal outputs from the tip block, so that SPV sync via block headers can have solid assurance that the latest usable tree root for FCMP++ txs has multiple blocks of PoW on top) 17:05:05 me: Working on the "unit test" of subnet deduplication for peer selection. I have something that seems to be working well for the `white_list`, but maybe not for the `gray_list`. 17:05:57 Also, if a Monero Twitter account handler is watching, it would be good to announce that the FCMP++ optimization contest is now open for submissions 17:06:29 jberman: "SPV" means a wallet using a remote node? 17:07:18 It could. It means "Simplified Payment Verification": https://wiki.bitcoinsv.io/index.php/Simplified_Payment_Verification 17:07:26 Here is the link for the FCMP++ optimization contest, by the way: https://web.getmonero.org/2025/04/05/fcmp++-contest.html 17:07:57 * m-relay whispers there is a 95k$ prize 17:08:16 Yes, SPV is usually a term used in BTC-like blockchains. I was wondering how it is explained in Monero. 17:08:49 I mean, you are trying to have nodes "prove" to wallets that they are not being malicious, right? 17:09:25 You could sync block headers, and then verify that the user is spending an output that is a member of the chain by checking it against the FCMP++ root included in the block header (that is PoW-verified) 17:09:39 AFAIK, wallet2 did some checks on the outputs distribution histogram to try to detect if nodes were giving bad distributions for ring construction 17:10:32 Or on the wallet side, the wallet can have stronger assurance that the tree it's using to construct the proof is the correct tree (i.e. a malicious node has to provide PoW to feed fake tree data to the wallet) 17:11:18 sounds awesome 17:11:24 Sounds great. Is this related to the issue that tevador found in the current PoW? 17:11:39 I mean, is it a way to reduce/eliminate that issue? 
17:11:53 And are you talking with sech1 about it? 17:12:17 Another interesting benefit: a node can sync block headers first, checking PoW, and then start doing verification/tree building in parallel with a solidly optimal construction that maximizes CPU utilization (how I believe evoskuli is doing it for Bitcoin) 17:12:32 Not familiar, what issue are you referring to there? 17:12:53 Let's move on to the agenda items. Maybe pick this back up after 17:13:06 3) [Gao, Y., Piškorec, M., Zhang, Y., Vallarano, N., & Tessone, C. J. (2025). "Charting the Uncharted: The Landscape of Monero Peer-to-Peer Network".](https://moneroresearch.info/267) 17:13:31 There was some initial discussion of this paper in #monero-research-lounge:monero.social : https://libera.monerologs.net/monero-research-lounge/20250424#c520278-c520470 17:13:56 And now, thanks to xmrack , we have two of the paper's authors here: Yu Gao and Matija Piskorec 17:14:01 Welcome! 17:14:04 Me and Yu Gao read the discussion that you linked! 17:14:35 Great. Well, how do you want to discuss? Any way is fine. 17:14:55 We are open to answer any questions you might have 17:14:58 We had already read it before 17:16:39 If I recall correctly, the paper wasn't completely clear on whether you were measuring just outbound connections or both inbound and outbound. Of course, one node's outbound is another's inbound, so you can estimate the whole graph, still. But if you wanted to focus on a specific node's connections based only on its data, then you may want to know its inbound connections, too. 17:16:54 So can you directly measure a node's inbound connections? 17:18:23 We regard each "connection" as an undirected link; detecting the architecture is the primary task, and we didn't include incoming or outgoing detection. 17:19:32 We don't get information about whether a peer is outgoing or incoming from a peer list. So we cannot distinguish between the two 17:20:24 I see. Thank you. 
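[Editor's note] jberman's header-first SPV idea above can be sketched roughly as follows. This is a toy illustration under stated assumptions: the `Header` layout, the hash-prefix PoW check (Monero actually uses RandomX), and all function names are hypothetical stand-ins, not monerod structures.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical header: the real header commits to much more, but the key
# point is that the FCMP++ tree root would be covered by the PoW hash.
@dataclass
class Header:
    prev_hash: str
    fcmp_root: str   # tree root committed in the PoW-covered header
    nonce: int

def header_hash(h: Header) -> str:
    return hashlib.sha256(f"{h.prev_hash}:{h.fcmp_root}:{h.nonce}".encode()).hexdigest()

def meets_target(digest: str, difficulty_prefix: str = "0") -> bool:
    # Toy stand-in for a real difficulty check.
    return digest.startswith(difficulty_prefix)

def spv_check_root(headers: list[Header], claimed_root: str, confirmations: int = 10) -> bool:
    """Accept `claimed_root` only if it appears in a header that is
    PoW-valid, correctly chained, and buried under `confirmations`
    headers -- so a malicious node must expend PoW to feed the wallet
    fake tree data."""
    for i, h in enumerate(headers):
        if i > 0 and h.prev_hash != header_hash(headers[i - 1]):
            return False
        if not meets_target(header_hash(h)):
            return False
    return any(h.fcmp_root == claimed_root and len(headers) - i > confirmations
               for i, h in enumerate(headers))
```

The wallet only downloads and checks headers, then verifies that the tree root it used to build its membership proof matches a sufficiently buried header.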
Like I stated in the #monero-research-lounge:monero.social discussion, I am skeptical that an adversary could use the topology measurement to harm the network's integrity or privacy because the topology estimate must be taken over a long period of time, e.g. a week. By the time an adversary has that information, it is not useful for malicious behavior. 17:20:43 would the good nodes find each other again in case there is a network shutdown? do you expect the topology to reappear or could it end up differently? 17:21:13 I guess you could rule out incoming by connecting to that peer and seeing if its ports are open, yeah? Would distinguishing between incoming/outgoing make a meaningful difference to partitioning attacks? It might make a privacy difference for Dandelion++... 17:21:34 I agree with this - we have to collect data for a period of time in order to calculate a statistic 17:22:27 I stumbled upon the 8 outbound cx you chose. Isn't the default 12? 17:22:59 flip flop: I also noticed that. I assume it was a typo (BTC uses 8). 17:24:18 We don't know whether the topology would be identical between the shutdowns. I would assume most probably not. However, this depends on how peers (white and gray lists) are stored in the node between different runs. I guess that these lists can be used to try to reestablish the connections, but it is not guaranteed that this will succeed, so the topology would probably be different (especially if enough time has elapsed) 17:24:27 My understanding is that, from the graph theory perspective, it is not really a big issue, but of course, from the security perspective, it is meaningful, because it allows incoming connections to pose risks. 17:25:09 Hey Yu and Matija thanks again for joining the meeting. I was wondering if you were aware of the large number of Monero proxy nodes discovered by boog900. Would these “fake” nodes have an impact on your analysis? 
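[Editor's note] The open-port check suggested above (to rule out inbound-only links in the measured graph) could look like this minimal sketch; 18080 is Monero's default P2P port, and a successful TCP connect only suggests the peer accepts inbound connections, nothing more.

```python
import socket

def port_reachable(host: str, port: int = 18080, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to a peer's P2P port. Success suggests
    the peer accepts inbound connections, so an observed link to it
    could be either direction; failure suggests any observed link must
    have been that peer's own outbound connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```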
17:25:37 Feel free to continue your discussion and come back to my question later 17:25:44 I think some of the in/out measurement depends on what actions, exactly, trigger an update to `last_seen`. Actually, recently I have been investigating how that all works. 17:26:08 I think this is a typo, you are right! 17:26:24 The only way to find out is to observe the network under real stress. Simulations (I've done those) are notoriously hard to do and need to be optimized against real-world case studies. 17:26:24 Would it be feasible to collect data regularly? 17:26:48 Do these fake nodes play an active role in the network? 17:27:11 From what I understand of the paper, if inbound connections do not trigger an update to `last_seen`, then the method may only be measuring a specific node's outbound connections (but all connections of reachable nodes if this is measured at all reachable nodes). 17:28:18 Yu Gao: It seems that these nodes only accept inbound connections, but do not establish their own outbound connections 17:28:25 They become potential outgoing connections for honest nodes, just like any other node. 17:28:32 There are no reasons to believe that they don't participate in block propagation and consensus requirements. 17:28:58 but yeah, they are just here to grab poor ignorant nodes and weaken tx anonymity. 17:29:02 Info on the suspected malicious nodes: https://github.com/monero-project/research-lab/issues/126 17:29:16 We are continuously collecting peer list data from the three Monero nodes that we are running from various locations 17:30:06 MRL has recommended that node operators ban the suspected spy nodes from having connections to their nodes: https://github.com/monero-project/meta/issues/1124 17:31:01 Network shutdown is a chance for an eclipse attack, they can refresh the peerlist, top up the grey_list with fake IPs. 
The shutdown is a base for this kind of attack 17:31:55 There were some recent patches to vulnerabilities that could crash nodes through p2p communication. 17:32:19 don't forget to say it's padillac who found it. I know he wants this to be insisted upon. 17:32:31 We were not aware of these fake proxy nodes! Is there a way to identify them? 17:32:47 great question 17:33:18 The method to identify them hasn't yet been made public 17:33:35 Matija Piskorec: Yes. Last MRL meeting, it was loosely decided to disclose the method to detect the nodes once an intermediate countermeasure is written. 17:33:54 Which will hopefully happen in a few months or less. 17:34:10 Looking forward to the next stressful event...! Seriously, this data will be helpful. 17:34:18 <0​xfffc:monero.social> Until we have countermeasures ready, it is naive to disclose the method. 17:34:34 0xfffc tbf there are other methods available 17:35:42 Info about the recent p2p crash vulnerability patches is in the release notes: https://github.com/monero-project/monero/releases/tag/v0.18.4.0 and HackerOne disclosures: https://hackerone.com/monero/hacktivity?type=team 17:35:50 there is, like active methods 17:36:26 Matija Piskorec: If you look at your data again, these spy nodes should be obvious because they "saturate" their /24 IP address subnets. 17:36:34 We don't yet have a plan to publish this data. It would be technically challenging (it's GBs of data). But we are open for suggestions and collaboration! :-) 17:36:46 Xmr used to use 8 17:36:54 That suggests that a malicious party is renting whole /24 subnets to run proxy nodes 17:37:13 ofrnxmr: Do you know when that changed? 17:38:20 Matija Piskorec, Yu Gao : Do you plan to present this paper anywhere at a conference soon? 17:38:22 Thank you for the suggestion! In the future we might decide to exclude them. 17:38:24 Can you expand on this scenario? 17:39:04 Yu Gao is presenting it at the IEEE ICBC conference in Pisa, 2-6 June. 
https://icbc2025.ieee-icbc.org/ 17:40:58 Matija Piskorec: Well, maybe still include them and write a narrative about them or something. You could elaborate a lot about the facts you found, if you want to speculate a little. On a related topic, you found "supernodes" with a very high number of connections (different from the spy nodes that boog900 discovered). As you know, this paper also found that supernodes existed: [Gao, Y., Piškorec, M., Zhang, Y., Vallarano, N., & Tessone, C. J. (2025). "Charting the Uncharted: The Landscape of Monero Peer-to-Peer Network".](https://moneroresearch.info/267) 17:41:09 Great! 17:41:14 This is another research idea I am going to simulate, but it is very early stage, maybe I will share it in the future? 17:42:39 I wonder if the supernodes are nodes of mining pools that want to propagate their blocks as quickly as possible. Or other honest services. Or they could be trying to spy, but Dandelion++ is designed to defend against supernodes since the stem phase propagates txs to outbound connections. 17:43:11 is there the possibility to implement a reputation system to prevent these kinds of eclipse attacks? It seems like using IPs to make it expensive to run fake peers is duct tape. would be a great research topic 17:43:31 Rucknium: I am also going to your MoneroKon5 workshop. 17:43:41 2020 https://github.com/monero-project/monero/commit/c67fa324965268cd1c01cbcb513038e7344f35d1 17:44:46 Fantastic. I will be presenting there, too, but remotely. 17:44:56 i have some ideas regarding WoT -- last agenda item. 17:45:17 WebOfTrust? 17:45:18 <0​xfffc:monero.social> Videos will be available immediately? 17:45:21 Thanks. I actually thought the number of connections change was earlier 17:47:05 0xfffc: My videos will be available immediately since they will be pre-recorded and available on Vimeo (or possibly another public site). I think MoneroKon organizers need some time to post edited versions of most in-person presentations. 
You can ask in #monerokon:matrix.org for more info. 17:48:28 boog900 has suggested that the default number of outbound connections could go even higher if and when a more bandwidth-efficient tx relay protocol is implemented. 0xfffc said at the beginning of the meeting that he was working on it. 17:48:47 Yu Gao: maybe I missed it but did you publish the list of super node IP addresses? 17:48:58 no 17:49:33 We totally understand the responsibility of analysing DLT networks. 17:49:36 Here is the new tx relay proposal: https://github.com/monero-project/monero/issues/9334 17:49:41 A higher number would probably improve a majority pruned-node environment, but w/o tx-relay improvements, it adds a lot of latency 17:49:46 <0​xfffc:monero.social> Yes. Boog is implementing it for cuprate. I have started monerod tx relay 17:49:46 <0​xfffc:monero.social> Let me find the links 17:49:58 There's also in-progress code for this 17:51:13 The main problem with increasing outbound is putting a bottleneck on the minority of nodes who accept incoming connections 17:51:41 Yu Gao: would you consider disclosure of the super node IP addresses to the Monero VRP via hackerone? 17:52:41 In a network where only 1/10 nodes have incoming connections, the single node has to handle all of the traffic. We set incoming to unlimited by default, but in practice most nodes will bottleneck out at 100-1000 connections, depending on network hardware, operating system, and uplink speed 17:53:17 <0​xfffc:monero.social> [Rucknium](https://matrix.to/#/@rucknium:monero.social) 17:53:18 <0​xfffc:monero.social> Cuprate: 17:53:20 <0​xfffc:monero.social> https://github.com/Cuprate/cuprate/pull/407 17:53:22 <0​xfffc:monero.social> Monerod: 17:53:24 <0​xfffc:monero.social> https://github.com/0xFFFC0000/monero/pull/60 17:53:26 for the record, in a stress situation, fewer cx will be more resilient (in all likelihood) 17:53:43 So increasing outbound is, imo, a bad idea at a certain point. 
Can't use up "all" of the incoming slots by abusing our outbound connections 17:54:23 In principle we are open to disclosing any information (data, code) related to our paper to the Monero team. 17:55:03 The paper says, "Our results show that the network is highly centralized around several super nodes with significant betweenness centrality and high degrees. While this centralization strengthens security and robustness, it also introduces potential vulnerabilities." 17:55:04 Technically, yes, the graph metrics would show more centrality when a few nodes decide to set a high non-default number of outbound connections. However, is it centralized in a meaningful way? If the supernodes didn't exist, would the network be in a better or worse state? 17:55:06 i assume the supernodes are what i am referring to. Most non-supernodes are exhausted much more easily 17:55:16 Increasing outbound connections exacerbates this problem 17:56:22 <0​xfffc:monero.social> Skimmed the paper, my questions too 👆🏻 17:56:24 (Even with improved tx-relay) 17:57:09 If the supernodes didn't exist, the question is: do the remaining nodes have enough inbound slots to service the network 17:57:10 I'm not saying we should whack the number of connections stupid high, but with the new protocol reducing bandwidth usage by 75%+ we could double the number of connections and still be using less bandwidth overall (assuming linear growth, which isn't the case with the new protocol; it should be less). 17:57:54 Bandwidth isn't the bottleneck here, it's the number of inbound connections available to the network 17:58:14 IIRC, there have been a couple of papers that worry that BTC's default 8 outbound connections is too low. 17:58:23 The view needs to be dynamic and individualistic. nodes may decide for themselves (UI?) 17:58:28 I would assume that less than 50% of nodes on the network are reachable via incoming due to NAT, shared IP, etc 17:59:01 I uploaded 41 GB over 11 hours today on an unrestricted node. 
So not sure about "Bandwidth isn't the bottleneck here" ... 17:59:28 rbrunner: found the supernode operator :P 17:59:44 Lol 17:59:47 what is an "unrestricted node"? 17:59:58 It depends on how we define centralisation. If the nodes have a lot of connections, for instance the mining pool nodes, then it has super computing power and an advantage to get more rewards, something like this, and also if the super nodes shut down, the network is easily disconnected 18:00:03 No limits on number of incoming, no speed limit 18:00:19 and is your rpc public? 18:00:25 No 18:01:05 41 GB isn't a lot. It's like 1/4 of a node synced from you 18:01:21 i highly doubt the 41 GB is due to tx relay 18:01:41 <0​xfffc:monero.social> > and also if the super nodes shut down, the network is easily disconnected 18:01:42 <0​xfffc:monero.social> I have a problem with this part. I can’t imagine what kind of network / graph that is. 18:02:21 that's hard to predict. 18:02:24 It's because most nodes don't have incoming connections, so those nodes _must_ make their connections to those who do, leading to centralization around the ones that do 18:02:28 how many connections rbrunner? on average, doesn't have to be exact 18:02:47 100 or so 18:02:52 If 80% of the connections on the network are directed at supernodes, that implies that the rest of the network's incoming is likely exhausted 18:03:33 Yu Gao: Matija Piskorec : By the way, MRL has powerful scientific computing hardware if you ever did want to share data with MRL so that MRL researchers could look at it, too. Or if something needed a lot of RAM and threads. 18:03:54 I believe a general node would not intentionally modify its config file to get a lot of connections. 18:04:05 One machine with 256GB of RAM and another with 1TB of RAM. 
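[Editor's note] The inbound-slot exhaustion argument in this exchange can be made concrete with a toy calculation. The numbers are illustrative (roughly 1 in 10 nodes reachable, ~100 usable inbound slots each, 12 outbound connections per node), not measured values.

```python
def slot_pressure(reachable: int, unreachable: int,
                  inbound_cap: int, outbound_per_node: int) -> float:
    """Ratio of outbound-connection demand (from unreachable nodes)
    to inbound-slot supply (on reachable nodes). A ratio above 1.0
    means unreachable nodes cannot all fill their outbound slots."""
    supply = reachable * inbound_cap          # usable inbound slots
    demand = unreachable * outbound_per_node  # outbound slots seeking peers
    return demand / supply

# Illustrative: 10 reachable nodes capped at ~100 inbound each (1000 slots)
# vs 140 unreachable nodes wanting 12 outbound each (1680 slots) -> 1.68.
# Doubling outbound to 24 doubles the pressure to 3.36.
```

This is why increasing the default outbound count mostly burdens the minority of nodes with open ports, independent of bandwidth.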
18:05:02 Current impl of the node picks more-recently-succeeded outgoings with high probability, so perhaps the network strongly coalesces around nodes which have high uptime 18:05:05 About 100 inbound connections on nodes with open ports is very common, AFAIK. 18:05:55 Yu Gao: By default, Monero nodes accept an unlimited number of inbound connections. 18:06:32 My opinion is that centralization is a bit of an ambiguous word - we can measure centrality in many different ways. And of course, more or less centralization is not necessarily a good or a bad thing, because it affects the network in different ways. So it's hard to say whether removing the supernodes would make the network better or worse. 18:07:28 <0​xfffc:monero.social> [ofrnxmr](https://matrix.to/#/@ofrnxmr:monero.social) 18:07:48 it would slow down the network, but consensus may be robust. The spam attack hints at that. Future will tell. 18:09:14 Unless a bad actor takes advantage of the situation... i think i get your idea. 18:09:35 This is under 10Mbps. It does indicate that upstream bandwidth can be a limitation 18:10:03 But inbound connections can be a burden for a general node? 18:10:16 from my data in 9334 I would expect about 10GB of that to be tx-relay FWIW 18:10:48 the rest is probably a node syncing 18:10:55 Yu Gao: Yes. In the suspected tx spam attack last year, many nodes had big problems because of too many txs and too many connections. 18:12:25 That is a reason why a Monero "stressnet" was organized last year. A large number of txs were spammed to test problems and correct them. 18:12:33 Not if they are behind a NAT etc, and in practice that is limited by uplink speed and network hardware or OS (anywhere from 10-1000 connections) 18:13:20 It can be if they are on a cable as opposed to a fibre Internet connection. 
This is because of the mindset of the cable tv companies 18:13:26 I have to go now, but thank you everyone for your questions and interest in our paper! You can always reach us via email (they are in the paper) if you have any additional questions or you want to discuss potential collaboration 18:13:29 UPnP rarely works in my experience, meaning that unless a user manually enables port forwarding, their node does not have incoming. 18:13:40 IMO we can double the number of outbound connections with the new relay protocol and still be fine; if individual nodes need to limit connections they should be doing that already. 18:13:51 Imo we should definitely not 18:14:16 Here is info on what happened on the stressnet: https://www.reddit.com/r/Monero/comments/1eoana8/the_stressnet_so_far/ 18:14:16 if that is enough to kill the network then the network is more at risk now with the current protocol and connection count 18:14:20 Not without knowing how many non-superpeer nodes have incoming connections 18:14:20 Thank you! 18:14:28 sorry Rucknium, I also need to go now. Thanks for reaching out; it's a pleasure to join here, and for the great discussions. Feel free to connect with us by email. 18:15:06 Matija Piskorec: Yu Gao Thank you very much for discussing here! I think we will be in touch. 18:16:57 It's a very simple problem. If there are 150 network participants, but only 10 of them have incoming connections, and they are limited to 100 connections each = max 1000 incoming connections can be served. If the other 140 nodes have 12 outgoing connections, that is 1680 outgoing slots to fill. 18:16:59 <0​xfffc:monero.social> Why are you saying that? With the new tx relay protocol, what's the problem? Once new tx relay kicks in, the bandwidth usage almost halves 18:17:07 For example here in British Columbia, Canada, the Telco offers up to 3 Gbps symmetrical and in some cases 5 Gbps symmetrical. 
The Cable co maxes out at 200 Mbps upstream and in most cases much lower 18:17:14 This isn't about bandwidth, it's about slots 18:17:59 <0​xfffc:monero.social> No, my point was, I don't see issues doubling slots when we reduced bandwidth usage 18:18:11 You're not doubling incoming slots 18:18:25 You're doubling the amount of incoming slots that are used up 18:19:06 move on? 18:19:11 Instead of 1680, now those 140 nodes are trying to hold 3360 outgoing connections against 1000 available incoming slots 18:19:43 Nodes with open ports have a strangely high number of inbound connections. If there are no supernodes with non-default outbound connections, then about 90% of nodes are unreachable. But probably there are supernodes. 18:20:43 There are a couple more agenda items. 18:21:42 web of trust, web of trust (if you chant it, it sounds a bit like one of us, one of us) hyped for this topic as it seems to be at the bottom of all of this 18:22:03 We have to distinguish here between initial node synchronization and ongoing node relay. Nodes with closed ports still have to deal with ongoing node relay 18:22:16 4) FCMP++ transaction weight formula. https://github.com/seraphis-migration/monero/pull/26 18:22:56 jberman and jeffro256 are working on the requirements for this. They have been discussing in #no-wallet-left-behind:monero.social 18:23:57 Transaction weight is used for the dynamic block weight algorithm and tx fees. 18:24:07 my latest analysis on the FCMP++ tx weight formula is here: https://github.com/seraphis-migration/monero/pull/26#discussion_r2057203539 18:24:41 <0​xfffc:monero.social> Honestly, if we had opened a separate repo for the FCMP++ migration it would have been much clearer. I still have a problem understanding what is going on. We do FCMP++ under seraphis migration, we used to have Seraphis as the protocol. Now we are doing FCMP++ 18:26:59 The clawback was implemented to account for verification time with 16 outputs. 
It is overkill when we are limiting the number of outputs to 8. So I would recommend against it 18:28:00 <0​xfffc:monero.social> ( anyway, not wanna hijack the topic ) 18:29:57 Outputs, and not inputs, right 😏 18:30:07 Wonder how verification time on an 8-input FCMP++ membership proof compares to a 16-output BP/BP+, guessing it's higher by a solid amount/and the scaling properties much more pronounced for FCMP++ than for BP/BP+? 18:30:14 Verification is still >5x slower for an 8-input tx, whereas the size is only <2x 18:30:29 i'm 100% against limiting inputs to 8 18:30:39 [@tobtoht:monero.social](https://matrix.to/#/@tobtoht:monero.social) 18:30:40 Or are you referring to the BP+ clawback? 18:30:54 but what about the load on nodes 😏 18:30:59 Yes 18:31:41 what about the load on nodes when i need to chain 10 tx together. 18:32:36 We can re-use the BP+ clawback code for FCMP++ weight, it's like less than 10 extra lines, so it's not too much of a hassle if it brings us more accurate pricing IMO 18:33:11 It's not like we magically limit the number of inputs used per day by limiting the number usable per tx to some unrealistically low number 18:34:14 reusing the BP+ clawback code exactly had the problem of incentivizing fewer input combinations into more txs versus 1 tx for all inputs, which is why I made that algo similar to the BP+ clawback with those tweaks 18:34:20 It does not improve pricing, especially if we can improve on parallel processing for verification. Unbound bandwidth is the biggest limitation 18:34:48 they scale up to the next power of 2, right? One 17-input tx is more expensive to verify than a 16-input plus a 1-input tx, or whatever. 18:35:31 Not from my understanding (or lack thereof) 18:36:09 This comes down to the cost and availability of bandwidth vs parallel processing compute time 18:36:34 More correctly, unbound bandwidth 18:36:43 you wouldn't need to chain fwiw; if you want to have comparable UX to today, you could construct multiple txs at one time. 
the idea behind the 8-input limit was to be able to verify a single tx in reasonable time for tx pool processing 18:37:15 that's what i mean by chaining 18:37:32 Yes, this is sort of true. It's not exactly powers of two like Bulletproofs, but this is more or less correct 18:37:33 Making me send 5 separate tx for 1 payment 18:37:46 The limit makes sense but it makes additional weights redundant 18:37:58 Sorry, not like Bulletproofs *as we use them for range proofs* 18:38:09 B/c FCMPs also use Bulletproofs underneath 18:38:27 Rather, a variation named GBPs 18:38:43 I think that last time MAX_INPUTS/MAX_OUTPUTS was discussed, it was decided to take up the question again once the optimized code is settled and there are benchmarks for everything. 18:39:35 from a purely UX standpoint, i'm against a limit below... the tx size limit 🙃 18:39:41 I recall that discussion 18:39:42 So maybe stay on the topic of just the tx weight formula. 18:40:12 Unless the questions cannot be separated at all. 18:40:35 I am saying we don't need weights just for verification time 18:40:40 why must we only have 1 FCMP per tx? 18:40:59 can't we have 1 for each block of 8 or something 18:41:05 that's a good point^ 18:41:48 kayabanerve: 18:42:16 AFAIK, there isn't anything stopping us cryptographically 18:42:47 Also FCMP has to use a fixed weight for a given number of outputs to address changes in the number of layers 18:43:32 I remember that there was code written just like this for Bulletproofs, where it broke down the number of outputs into its bit-wise decomposition, and put as many BPs as there were 1 bits in the output count. I can't remember why it wasn't used in production 18:44:00 interesting, I didn't know that 18:44:46 yeah, the number of BPs in a tx is dynamic per serialization; it is enforced to 1 though by consensus. 
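[Editor's note] The bit-wise multiple-BP construction described above (one proof per set bit in the output count, versus a single aggregated proof paying for the next power of two) can be sketched like this. It is a toy illustration of the combinatorics, not the removed rctSigs code.

```python
def bp_split(n_outputs: int) -> list[int]:
    """Bit-wise decomposition: one proof per set bit in the output
    count, so e.g. 11 outputs -> proofs covering 8, 2, and 1 outputs."""
    sizes, tier = [], 1
    while n_outputs:
        if n_outputs & 1:
            sizes.append(tier)
        n_outputs >>= 1
        tier <<= 1
    return sorted(sizes, reverse=True)

def padded_cost(n: int) -> int:
    """A single aggregated range proof pays for the next power of two:
    proving 9 outputs costs the same as proving 16."""
    p = 1
    while p < n:
        p <<= 1
    return p
```

The trade-off discussed in the log: the split avoids padding waste, at the cost of serializing and verifying multiple proofs per tx (consensus currently enforces exactly one).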
18:46:14 jeffro256 proposed an idea to use a constant number of layers in the weight calculation and I support the idea, reasoning is here: https://github.com/jeffro256/monero/blob/fcmp%2B%2B-stage-weight/src/cryptonote_basic/cryptonote_format_utils.cpp#L510-L531 18:47:21 I guess I am wrong, but I thought more inputs added to the same FCMP do not increase the verification time per input. So breaking them up does not improve verification time. And the storage size per input of a FCMP actually _decreases_ as input number increases. These were my interpretations of kayabaNerve's numbers, which were theoretical, but not direct code benchmarks. 18:48:26 Yes, I agree. If the number of layers changes, the weight does not change. I support this 18:48:31 jberman: I removed the multiple-BP construction code in this PR: https://github.com/monero-project/monero/pull/9717. This is what it used to look like: https://github.com/jeffro256/monero/blob/2e8a128c752a3cee2a0bee43b3c15ae7ec344792/src/ringct/rctSigs.cpp#L1202-L1240 18:49:41 Keeping the weight constant over a layer change is very important for scaling 18:51:43 It does increase time-per-input once you hop over a power-of-2 boundary, which I think is what happens for a 5-input under jberman's benchmarks; I would have to check again. Much like how going from proving 8 range proofs to 9 means you pay for 16 18:51:44 I think this will be an ongoing discussion. Let's try to get a few words in about the Web of Trust for peer selection idea. 18:52:26 5) Web-of-Trust for node peer selection 18:52:41 I forgot to put this on the posted agenda 😬 18:53:02 tbc, we're at ~150mn or so outputs today, we bump to 7 layers at ~320mn, 8 layers at 12bn, 9 at 200bn+ 18:53:14 flip flop: Do you want to discuss Web-of-Trust? 18:55:08 i'll shoot a text but we can discuss next time 18:55:23 A naive rating system as in PGP key signing may backfire - bad actors (20% spynodes) may be faster to sign each other than the rest of us. 
Local F2F nets are a different application compared to global consensus. 18:55:24 I'd rather expand on /anchor nodes/ by proving good behaviour such as availability, contribution to consensus, etc. The idea is that an adversary may have trouble faking such a track record (which may give more weight to old entries). Ideally a big database could be avoided. That's just a theoretical idea atm. 18:55:28 flip flop: Sounds good. 18:56:01 We can end the meeting here. Thanks everyone. 18:56:27 bye! 18:56:28 thanks 18:56:30 delicious meeting 18:56:47 Thanks 18:57:07 Ty 18:57:38 jeffro256: Maybe what I was seeing was the total verification time cost, given current user behavior. In other words, the txs that used to have many inputs would have to follow a consolidation sequence. The consolidation sequence would use the max power of two to do the consolidations, so it would be efficient. 18:59:18 <0​xfffc:monero.social> Thanks everyone for a great meeting 19:00:09 In here: "Monero FCMP MAX_INPUTS/MAX_OUTPUTS empirical analysis" https://gist.github.com/Rucknium/784b243d75184333144a92b3258788f6 19:03:41 fwiw kayabanerve did also discuss including multiple proofs in 1 tx here: https://gist.github.com/kayabaNerve/dbbadf1f2b0f4e04732fc5ac559745b7 19:03:42 reasoned against it for bandwidth reasons and to avoid having a single tx take seconds to verify. 
I think the latter is still a reasonable point of contention when allowing multiple proofs in 1 tx (imagine including many valid proofs and then 1 invalid at the back) 19:06:21 I don't think including very many almost-correct proofs in 1 transaction is any different from including very many txs in an almost-correct set of txs, as long as you verify each FCMP individually 19:06:56 We would want to put a max size on each FCMP obviously 19:07:38 Presumably the RPC would limit to 1 tx per submission, and I guess blocks other RPC submissions 19:07:40 yeah we are going to be paying the bandwidth cost anyway, just over many txs 19:08:39 for UX I think having a single tx is better and the verification/bandwidth cost is the same as if it was split into multiple txs 19:09:21 It's also better for services which depend on certain on-chain actions being atomic 19:10:00 True, which isn't much different than blocking to verify each FCMP individually 19:10:43 Fair counter-points imo 19:14:21 There's also the argument that requiring more txs = more outputs = more FCMP++ proofs to verify longer term too 19:14:35 kayabanerve: also said here that 5 4-input FCMPs would verify quicker than 1 16-input: https://github.com/monero-project/meta/issues/1102 19:14:36 > If the block only has a single 16-input transaction, that single proof will cost us thousands of scalar multiplications alone. If we had 5 4-input transactions (which is how they'd be aggregated under MAX_INPUTS=4), the computational complexity would be a fraction. 19:14:38 > < k​ayabanerve:matrix.org > (as the 4-input TXs would reasonably share their base cost with other TXs, making their cost an amortized percentage of the base cost and only the per-proof costs) 19:14:40 If I understand correctly.
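The power-of-2 boundary mentioned earlier ("going from proving 8 range proofs to 9 means you pay for 16") can be sketched in a few lines; this is only an illustration of the padding behaviour, not code from the Monero tree:

```python
# Minimal sketch of power-of-two padding in batched proofs: the number of
# statements is rounded up to the next power of two, so verification cost
# jumps each time you hop over a boundary (prove 9, pay for 16).

def padded_count(n: int) -> int:
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

# A 5-input proof pays for 8; going from 8 statements to 9 pays for 16.
for n in (4, 5, 8, 9, 16):
    print(n, "->", padded_count(n))
```

This is why a 5-input transaction can cost noticeably more per input than a 4-input one, as noted in the benchmark discussion above.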
19:22:48 Maybe I'm misunderstanding, but I think the point Kayaba is trying to make is that you can verify four 4-input TXs at a time, whereas you can only use 1 core for a 16-input, so your real-time difference between starting to verify and stopping is 4x, even though total CPU time is the same 19:23:48 Which is especially important for tx propagation time 19:24:40 That's also assuming that the 16-input tx is using 1 FCMP, not multiple 19:38:15 Yes of course. One can parallel-process the 4 4-input txs 19:38:40 Independently of each other 19:39:48 Steel-manning kayaba's argument using mine and kayaba's figures: 19:39:55 To DoS a node, an attacker would hammer it perpetually, hogging bandwidth/CPU time 19:39:56 Allowing uncapped input limits, each malicious IP gets an estimated/untested 2s of a CPU thread for a bad tx (and more bandwidth expended over a shorter period) 19:39:58 Capping input limits to 8: each malicious IP gets ~200ms of a CPU thread for a bad tx 19:40:00 So the same attack when capping limits would require 10x more IPs to perpetually pull off the same attack 19:40:02 If you spread out more txs with input limits, it still requires more IPs to submit the bad txs that maliciously hog the node before getting your IP blocked 19:43:13 > Allowing uncapped input limits, each malicious IP gets an estimated/untested 2s of a CPU thread for a bad tx (and more bandwidth expended over a shorter period) 19:43:14 This is assuming 1 FCMP per transaction though, yeah? 19:43:20 I mean txs are sent in batch + we could verify each FCMP in a tx one by one 19:43:52 If the batching is seen as a potential issue 19:44:58 Which should work out the same if we have multiple FCMPs per tx or multiple txs 19:48:04 I don't think batching is the primary issue. Even if you do allow batching and then process serially, each good tx is fine. The problem is that a bad tx would take (hypothetically) 2s to verify, which is then hogging the CPU 19:51:03 That's on verification time alone.
If we also factor in bandwidth and the RPC limits to 1 per submission, then the same underlying issue kayaba is pointing out is there (each bad tx per IP hogs significantly more bandwidth) 19:51:33 A TX with a large number of inputs, because of its size, would attract a very large penalty if it is added at the penalty threshold. This type of TX should require a much larger node relay fee to ensure it is mined. 19:51:34 This leads to a spam attack where nodes are flooded with large-input TXs, the majority of which would not get mined. 19:51:36 If large-input TXs are allowed, the best way I see to deal with them is to have a significantly higher minimum node relay fee that would pay the penalty at the threshold. Increasing the weight for these TXs is not a solution and can actually make the problem worse 19:52:26 Ya, we could require node relay fees for large-input txs and moneromoo did already set that sort of thing up 19:52:34 I am not in favor of large-input TXs by the way 19:53:48 to be clear, I'm referring to a node relay fee not included in the tx, but one that would need to be paid to a node directly in order for the node to accept processing a large-input tx 19:55:57 Keep it simple and pay the fee to the miner. Why would the miner mine these TXs at a loss? 19:55:58 I'm not even sure fees really help here, someone could create double spends of the same inputs and then get a lot of txs for "free" as they won't ever be mined 19:56:08 Pay the fee to the miner 19:56:10 (and then it becomes a problem on the p2p relay side though) 19:56:22 the maximum you could go is 1 double spend per node 19:57:37 Why should nodes relay a mass of double spends? 19:58:21 if a tx takes a while to verify, a lot of double spends will cause a DoS with little cost + no banning 19:58:35 maybe we layer on pow hashing for submitting/relaying txs lol, pow hash paid for by tx creator 19:59:20 I was thinking the same thing 19:59:41 No.
This discriminates against those who live in the tropics 19:59:42 I do think that is the only valid solution 20:00:35 fees only count for txs in blocks 20:00:59 even paying fees to transfer only counts if the tx is included in a block 20:01:17 It is not a solution. Just launch the attack from Canada during the winter. A good -40 temperature should do the trick 20:02:37 POW hashing to send transactions is a terrible idea 20:02:55 The computational cost wouldn't be anywhere close to block difficulty, just enough that it's some multiple >1 of the cost to verify an FCMP, and the hash wouldn't be re-usable for mining 20:02:59 Russia would also work 20:03:10 p2pool has PoW for connections .... 20:03:29 i guess that is mining so it doesn't matter as much 20:04:01 tor introduced it ya? 20:04:41 The computational cost is very dependent on the outside temperature. This kind of thing does not belong in Monero 20:05:09 Doing a PoW-based attack, even in Russia or Canada, where the heating is "free", still costs something because of opportunity cost. If you're expending all this effort, that's CPU time that isn't being used for mining XMR 20:05:47 One is using CPUs 20:05:50 For non-attackers this cost should be minimal 20:06:32 Not if you are in the Australian outback during the summer or in India during a heat wave 20:06:48 It should be fine even for them 20:07:46 Your device is already overheating and one forces POW? 20:08:26 Charge a fee in XMR instead 20:08:51 Single-digit seconds of CPU time won't be enough to even warm your hand. By that logic, Monero requiring stealth addresses, and thus 1 ECDH per enote to scan, is discriminatory.
That takes way more CPU time for honest users 20:09:02 1) I'm betting it's almost certainly going to be less CPU power than scanning the chain, 2) it could only be required for higher-input txs 20:09:27 I would bet creating the tx would cost more than the PoW 20:09:31 2) is an interesting twist 20:09:37 the PoW only has to be more than verification 20:09:43 ^ 20:09:57 ^ in response to "I would bet creating the tx would cost more than the PoW" 20:09:58 Why not simply require a higher fee? 20:10:07 you can't - this only affects txs in blocks 20:10:22 tx-pool txs can be double spent 20:10:34 we're discussing nodes processing the txs before they enter their pool 20:10:51 The point is that you can't validate a transaction until after you do. You could require a higher fee, but you can't actually check it w/o expending CPU time for verification 20:10:52 A node relay fee 20:11:22 ah yes, the tx before the tx 20:11:25 which also needs a tx ... 20:11:42 or are we paying with fiat? 20:11:43 also this 20:12:12 would have to pay fees for every relay to every node too 20:13:59 How do you check the node relay fee without verifying a transaction? 20:14:02 One just increases the node relay fee paid to the miner. This would provide deterrence 20:14:19 no 20:14:27 One cannot stop invalid transactions with source POW 20:14:28 at least if i wanted to dos the network I wouldn't be deterred 20:14:58 (I want to dos the network and this is a confession that im a bad actor acting against monero's interest) 20:15:10 currently ~5k nodes on the network; I can double spend an input ~5k times and send a different tx to each node, cutting my cost by 5000 20:15:49 no but it adds cost 20:16:06 to make my node do work, you should have to do work 20:17:26 I can do the work when it is -40 C and make you work at 40 C 20:17:40 *Equal rights for monero p2p agents!* 20:18:34 there is a question of if it's enough PoW to even stop the DoS.
If this holds, which I believe it does thanks to ec divisors, then the PoW really only at most 2x's the cost to the attacker. It would really have to be a more significant PoW to work 20:18:36 The nature of PoW is that it is much easier to verify work than to create work. It wouldn't necessarily be a 1-to-1 cost 20:19:24 the PoW could only be required for large input counts, though, and scale up as input counts increase 20:20:24 If we are talking about getting rewarded for work then yes that would be bad. You have to solve 1 puzzle to send a tx, the puzzle will cost less than creating the tx (probably). 20:20:24 If we are talking about discriminating against attackers then yes you are right. 20:20:26 That doesn't really matter to the verifier. A valid transaction could take 24 hours to create, but the verifier only cares about verification time. It takes 0ms to make a bad elliptic curve divisor 20:20:54 One of the biggest misconceptions of POW is that people ignore the value or cost of the heat produced 20:21:17 that's true, but point being, it would probably have to be a solidly costing PoW 20:21:41 the goal isn't for someone to be pumping PoW all day to send txs 20:22:09 the tx generation/scanning process will generate more heat 20:22:40 Are there really no distinguishable characteristics that could permit a node to "predict" a margin of computation time it would take? 20:22:56 Won't a RandomX PoW requirement (even for high-input txs) entrench RandomX in wallet code? IIRC, there was a suggestion to eventually eliminate all PoW-related code from wallets to reduce malware false positives. 20:23:32 Monero CEO will have to buy the 999k$ Micro(talent)soft approved certificate 20:23:51 curious if equix gets flagged too https://spec.torproject.org/hspow-spec/v1-equix.html 20:24:34 This is a very valid point.
One more reason to keep POW away from wallets 20:24:52 we could see how long it would take a reference CPU to construct the PoW and enforce that difficulty 20:25:28 Yes and no. No if, like in the current situation today, all wallets submit transactions to daemons' RPC. Daemons inherently already have RandomX code. It would prevent a hypothetical wallet from existing which submits transactions to the p2p interface of another daemon, *and* doesn't need RandomX code. 20:26:39 fwiw I also don't think Windows' hostile and easily reversible decisions should affect Monero protocol decisions in this regard 20:26:42 But submitting transactions to another's daemon p2p interface directly as a non-daemon is obviously horrible for privacy if not done correctly. 20:26:48 Sorry, I meant for FCMP tx verification 20:26:54 not PoW 20:28:22 oh I see, ya! by looking at how many inputs are in the proof, the node can predict a margin of computation time it would take 20:28:54 to verify 20:29:20 sort of 20:29:31 ... and not relay, or at least delay relay 20:30:01 I mean ya for sure. the node could theoretically even verify all input combos on boot and measure timings. I'm sure there's a better way to do it though 20:31:16 There could be a formula for forcing source PoW or another mechanism on (predicted) time-consuming txs 20:32:28 Realistically, if large-input TXs are an attack vector, which I agree with, can't we just limit this at the protocol level? 20:32:32 Yeah, FCMP verification time is more-or-less a constant function of (number of inputs, number of tree layers), both of which are provided explicitly in the transaction. 20:35:04 Yes, but at the cost of not having large-input TXs: UX degradation, the time delay that input consolidation brings, and worse liveness / atomicity guarantees for services 20:39:18 If we allow high-input TXs at a consensus level, we can always tack on relay rules or PoW or miner relay fees or whatever else afterwards.
But if we reject it at a consensus level, then we would need a hard fork to bring them back 20:40:55 won't txs with multiple FCMPs naturally have PoW built in? The worry is if the last proof is invalid the whole thing has to be thrown away, but the person still had to construct multiple valid proofs, so they had to put in work 20:41:36 This is a very valid point. We can restrict or penalize them at node relay 20:41:49 I am fine with that 20:42:33 boog900: this is why I'm arguing the PoW should probably be significantly greater than the cost to construct the tx for txs with higher n inputs, and should scale by n inputs in the tx 20:42:54 Not if it's an invalid FCMP, then it's basically free to make 20:43:30 The cost should scale, not because of honest construction time, but because of verification time 20:43:44 boog is saying the earlier FCMPs packed in would be valid, and then a later one would be invalid 20:44:02 yeah but the argument was that multiple FCMPs was equivalent to a single FCMP if the last is invalid, as they both waste significant work if invalid 20:44:22 but they aren't, as you need to put work in for multiple valid FCMPs 20:44:50 I have a serious concern with POW at the wallet level. Requiring POW to access P2Pool is completely different since the users in that are contributing hash power 20:45:51 Ah yes true, sorry my misunderstanding 20:45:59 here^ 20:47:11 Tbf, it wouldn't be at the wallet level, it would be at the daemon level in the p2p protocol 20:48:24 this makes multiple FCMPs per tx equivalent to multiple txs with 1 FCMP, right? 20:48:35 I would think it would be at the wallet level too, to protect public RPCs 20:48:51 Why would nodes be the entities creating the PoW? If I am a node, why would I bother creating PoW to relay a tx? 20:49:19 Relaying anything is actually a favor.
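The "PoW should scale by n inputs" idea argued above could be sketched roughly as follows. This is purely illustrative: the threshold, the difficulty schedule, the hash function, and all names are invented here, and nothing like this exists in monerod:

```python
import hashlib

# Hypothetical relay-PoW sketch (all constants invented): txs above a
# threshold input count must carry a hash puzzle whose difficulty grows
# with the number of inputs, so verification-heavy txs cost more to relay.

POW_FREE_INPUTS = 8       # assumed threshold: no relay PoW at or below this
BITS_PER_EXTRA_INPUT = 2  # assumed difficulty growth per extra input

def required_bits(n_inputs: int) -> int:
    """Leading zero bits required of the puzzle digest."""
    return max(0, (n_inputs - POW_FREE_INPUTS) * BITS_PER_EXTRA_INPUT)

def pow_valid(tx_hash: bytes, nonce: int, n_inputs: int) -> bool:
    bits = required_bits(n_inputs)
    if bits == 0:
        return True  # small txs relay without any puzzle
    h = hashlib.sha256(tx_hash + nonce.to_bytes(8, "little")).digest()
    # Valid when the top `bits` bits of the digest are zero.
    return int.from_bytes(h, "big") >> (256 - bits) == 0

def solve(tx_hash: bytes, n_inputs: int) -> int:
    """Wallet-side grind: expected ~2**required_bits hash attempts."""
    nonce = 0
    while not pow_valid(tx_hash, nonce, n_inputs):
        nonce += 1
    return nonce
```

The verifier's cost stays a single hash regardless of difficulty, which is the asymmetry the "easier to verify work than to create work" message relies on.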
20:49:52 *You will thank me for relaying your transaction and you will be happy* 20:50:01 Because either A) it's your node and your wallet and you want the tx relayed, B) your node, and users you care about, or C) you're running a public RPC node 20:50:28 The point of public RPC is that it's a public service 20:50:44 or better 20:51:04 "this makes multiple FCMPs per tx equivalent to multiple txs with 1 FCMP, right?" -> sorry, I'm not fully following the thread of your argument here. can you rephrase? 20:51:06 We allow both modes. If the node allows it you can request it to generate the PoW. Otherwise the node can say "nah sorry not today, please do it yourself" 20:52:24 Very good point 20:53:51 We probably expect public RPCs to reasonably exist and don't want public RPCs to be exposed to a DoS vector, so I think it would make sense that wallets would construct the PoW in the first place 20:55:32 The problem was that multiple FCMPs in a tx allow for someone to use up a lot of CPU if all the FCMPs are valid except the last. Which is the same as 1 big FCMP in cost to the node. 20:55:34 This is different than splitting the tx into multiple different ones with 1 FCMP each, as the valid ones will still be added to the pool, therefore are not a waste. 20:55:36 However the multiple FCMPs per tx still have a significant cost, adding a pseudo-PoW, meaning it isn't the same as 1 big FCMP. Also individual txs can be double spent, still wasting work. 20:55:38 My argument is that we should allow multiple FCMPs per tx as there is no difference. 20:55:44 if PoW for relay is introduced I would like the update to be called FCMP++/Carrot/POWER. because PoW Enabled Relaying. 20:56:06 tho i welcome any more creative naming 20:57:25 I'm not against public RPC operators choosing to block/allow any traffic that they see fit since it's by definition a public service and they're paying for their own server / own internet hookup.
But you're inherently opening yourself up to DoS vectors that don't exist if you don't turn on public RPC, so I don't see why it wouldn't be an option 20:59:06 "My argument is that we should allow multiple FCMPs per tx as there is no difference." -> Not in place of PoW, ya? But as a separate (but related) point? 21:00:01 yeah I am saying, PoW or not, having multiple FCMPs per tx is no worse than multiple txs 21:00:31 Ya I think that is a reasonable point 21:05:55 The only difference I can think of rn is that in between each "checkpoint", or smallest meaningful indivisible unit of computation, if the transaction with 1 FCMP succeeds, someone somewhere is paying a transaction fee. Whereas if a transaction with N FCMPs has N-1 verify but the last fails, then no one pays a tx fee 21:06:58 Well wait no, I understand. Let's just assume no PoW for now 21:06:58 The node gets stuck processing a large bad tx that the node would not use. Yes, the client has done work to construct this bad tx, but that work is wasted work compared to having constructed a complete valid tx. 21:07:00 That would be categorically worse than the node getting stuck for 10x less time processing a bad tx 21:07:02 "Also individual txs can be double spent still wasting work" -> w/ current policy, nodes can just quickly reject a key image they have already seen, no? Not following how that is consequential here 21:07:31 I can (I think) see the argument that with PoW, it still makes sense to have multiple FCMPs per tx 21:08:42 (Going to have to step away in a sec and come back unfortunately) 21:12:43 > w/ current policy, nodes can just quickly reject a key image they have already seen, no? Not following how that is consequential here 21:12:44 Yeah, I thought double spending was checked after the input checks, but looking at the code it is checked first; non-input checks like bulletproofs are still checked before the double spend check tho, my bad.
21:12:59 and yeah it could've just been changed if it was 21:15:52 np, I actually didn't remember if it is or not, but was just making a point that it could be 21:16:36 I do remember bringing that up now as a way to find out if a node has a stem tx or not ... 21:20:22 literally unfixable though without introducing this DoS 21:31:15 tevador explains relevant rationale here for why equix over randomx in tor: https://github.com/tevador/equix/blob/master/devlog.md 21:31:16 TL;DR faster verification to prevent flooding attacks, and high memory requirement 21:31:18 Worth thinking it through in our case. Probably don't want to impose too strict of memory requirements on clients, though our allowed verification time could potentially be significantly higher if we don't impose PoW relay reqs on lower-input txs 21:31:20 So maybe the low-memory RandomX variant could make sense in our use case 21:33:08 jberman: This is the issue I referred to previously: https://github.com/monero-project/monero/issues/8827 21:33:10 > Better security for wallets using untrusted remote nodes. Malicious remote nodes can feed wallets fake blockchain data. With this proposal, wallets could partially verify the integrity of the blocks received from untrusted remote nodes with the cost of a few hashes. 21:33:35 gotcha thank you 21:36:17 this seems relatively simple to include in the next fork 21:46:40 and to answer your q "Is this related to the issue that tevador found in the current PoW?" -> checking PoW on block headers is related to including the tree root, yep. That idea by tevador would speed up the PoW verification / benefit the approach 21:49:51 @sech1 @selsta do you know if anyone has started working on bringing that change into monero for the next fork?
we can add it to the list of TODOs if not 21:50:29 sech1 knows best the current status 21:53:04 tevador was working on it but he's not active anymore 21:54:32 He actually did submit it: https://github.com/tevador/RandomX/pull/265 21:54:43 It's just RandomX V2, which is not finished yet (RISC-V code is missing for it) 21:55:21 got it ya, so it looks like it's just a matter of implementing that specific change in the monero repo 21:56:33 / integrating that PR 21:57:47 *thumbs up* good bump rucknium 21:59:54 I hope tevador is alright 22:00:58 That's one reason we have regular meetings. To discuss current work plans to make sure possible synergies aren't missed :) 22:38:01 Rucknium: bytes can decrease upon an increase in inputs. It still trends upwards. 22:39:04 jeffro256: Batch verification of four small FCMPs is faster than verification of one large FCMP, even on a single core. 23:43:34 One thing worth mentioning: if the PoW puzzle also hashes a ref to the latest block in addition to the tx, it removes the attacker's ability to slowly accumulate many bad txs + PoW solutions over a long period of time 23:46:10 (could also expand the window to be n blocks in case the chain advances after constructing)
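The anchoring idea in those last two messages could look roughly like this; every parameter and name below is invented for illustration only:

```python
import hashlib

# Sketch of binding a hypothetical relay PoW to a recent block (all
# parameters invented): the puzzle hashes a block hash along with the tx,
# and verifiers only accept anchors within the last ANCHOR_WINDOW blocks,
# so stockpiled solutions expire as the chain advances.

ANCHOR_WINDOW = 10   # assumed n-block grace window
DIFFICULTY_BITS = 8  # assumed difficulty (leading zero bits)

def puzzle_ok(tx_hash: bytes, anchor: bytes, nonce: int) -> bool:
    h = hashlib.sha256(tx_hash + anchor + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(h, "big") >> (256 - DIFFICULTY_BITS) == 0

def verify_relay_pow(tx_hash, anchor, nonce, recent_hashes) -> bool:
    """recent_hashes: hashes of the last ANCHOR_WINDOW blocks, newest first."""
    if anchor not in recent_hashes[:ANCHOR_WINDOW]:
        return False  # stale or unknown anchor: precomputed PoW is useless
    return puzzle_ok(tx_hash, anchor, nonce)

def solve(tx_hash: bytes, anchor: bytes) -> int:
    nonce = 0
    while not puzzle_ok(tx_hash, anchor, nonce):
        nonce += 1
    return nonce
```

Widening ANCHOR_WINDOW is the "expand the window to be n blocks" suggestion: it trades a little precomputation slack for not invalidating a solution the moment a new block lands.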