01:56:03 .merges 01:56:03 -xmr-pr- 8319 8355 8516 8517 8525 8527 8529 8543 8564 8569 8570 8571 8578 8580 8590 8593 8594 04:00:24 Hey y'all, I wanted to start a discussion about community nodes and start asking for community node operators: https://github.com/monero-project/monero/issues/8624 05:52:56 For anyone that was interested, the demo yesterday https://youtu.be/aAvSpfll9z4 10:44:02 struct SpendableOutput... (full message at ) 10:46:09 rct is short for ringct, which is short for ring confidential transactions. 10:46:24 That's the current signing system monero uses. 10:46:47 As an optional string, I'm not sure what the semantics would be in your structure though. 14:29:56 ridauhebauye[m]: yes, I am updating monerod 14:30:18 your suggested change wouldn't help much because an attacker can have 1+ connections; an outbound connection is not controllable externally 14:30:57 sorry, meant 2+ connections to an IP 14:32:00 intentionally skipping that link creates a trivial-to-identify situation 15:26:55 is there any way to implement a consensus algorithm that's <= O(log n), where n is the number of transactions? 15:29:27 the question doesn't make sense. do you have an example of an algo that is > O(log n) for your definition of n? 15:29:29 cvmn: the PoW consensus algorithm is independent of tx count, maybe you could clarify 15:30:12 hyc: yeah. bitcoin, for example. as transactions are added (including mining announcements with hash proofs; enough zeros, that is), they add a block stuffed with transactions. 15:31:07 the consensus is implemented by the blockchain, which grows O(n) where n is the number of transactions. 15:31:35 the blockchain grows O(n) yes, but that's not because of the consensus algo 15:32:03 as UkoeHB said, PoW consensus is independent of tx count 15:32:28 consensus is the longest chain, where length here is computational power. right? 15:33:22 no 15:33:27 You want to define your terms. It's so vague nobody can say yes or no.
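The point several participants make above, that PoW fork choice depends on cumulative work and not on transaction count, can be sketched in a few lines. This is a purely illustrative model, not monerod's actual code: blocks are plain dicts, and `cumulative_work` and `best_chain` are hypothetical names.

```python
# Illustrative sketch: PoW fork choice compares cumulative work, never tx count.
# Block dicts and function names are hypothetical, not monerod's API.

def cumulative_work(chain):
    """Sum of per-block difficulty; block size / tx count never enter."""
    return sum(block["difficulty"] for block in chain)

def best_chain(chains):
    """'Longest' chain in PoW-speak: the one with the most cumulative work."""
    return max(chains, key=cumulative_work)

# A short chain of hard blocks beats a long chain of easy, tx-stuffed ones:
chain_a = [{"difficulty": 100, "txs": 0}, {"difficulty": 120, "txs": 0}]
chain_b = [{"difficulty": 10, "txs": 500}, {"difficulty": 10, "txs": 500},
           {"difficulty": 10, "txs": 500}]
assert best_chain([chain_a, chain_b]) is chain_a
```

Note that neither the number of blocks nor the bytes on disk appear anywhere in the selection rule, which is the distinction the thread keeps circling around.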
15:33:58 "consensus" is the very vague term here. 15:34:30 Your second line implies chain size. 15:34:31 ok. to me, consensus is the mechanism to have an agreement about whether a certain number of transactions happened. 15:34:48 s/number// 15:34:50 Yes, but what part of it do you want to be O(log(n)) :) 15:34:55 consensus in blockchains has nothing to do with the number of txns 15:34:58 And in number of what ops. 15:35:21 hyc: yes, not their number, but whether transactions happened. 15:35:29 Typically, it's "number of operations to verify a chain". but your second line implies that's not what you want. 15:35:35 blocks are mined on a set time interval, they could be empty or full or anywhere in between. the number of txns is irrelevant. 15:36:37 Well, maybe it is. In which case, it's not possible, because you need to verify every single tx. 15:36:41 true. then, the longest chain is followed as the global agreement of where the world is. 15:37:14 There are shortcuts if you're willing to forego security, such as allowing checkpoints where PoS "protects" a proof that all txes up to that point were verified by "someone", and you trust that. 15:37:18 in bitcoin, individual transactions are added to blocks. blocks with more transactions are larger. so, it's effectively O(n). 15:37:44 irrelevant 15:37:44 Monero is pretty similar to Bitcoin in all such things. 15:37:58 consensus is not based on the longest chain 15:38:03 Above, I meant PoW, not PoS, lol 15:38:18 hyc: what is it based on, then? 15:38:19 and "length of a chain" depends only on the number of blocks, not the number of txns contained in the blocks 15:38:44 consensus is based on the chain with the most work, in PoW chains 15:38:47 "longest" is common speak for "with the most cumulative work". 15:38:54 well, a block is longer if it has more transactions. so, a chain effectively becomes longer as some blocks will be taller than others. 15:39:02 cvmn: no 15:39:21 OK, so you're on about size now.
There are chains that allow "merging" some txes (Mimblewimble). 15:39:35 there are different types of lengths here. one length is the computational power and another is the length in bytes on disk. i'm referring to the bytes on disk. 15:39:40 Not sure how much you gain, but there's still a linear component AFAIK. 15:39:48 cvmn: still wrong 15:40:24 any idea which terms i am abusing? 15:40:24 You can also have a balance based chain. In that case you can collapse a number of things *after* you verify txes. That's close to O(1). 15:40:42 (it's O(a) with a being the number of accounts) 15:41:05 Balance based chains are not privacy friendly at all though. 15:41:15 cvmn: I already gave you the correct definitions above. 15:41:35 You may want to look up Cryptonite, the mini blockchain. 15:41:50 blocks are mined on a set time interval, they could be empty or full or anywhere in between. the number of txns is irrelevant. <--- this? 15:42:04 It was balance based. AFAIK Ethereum also is. But has a massive unusable chain so... 15:42:07 that's part of it, yes 15:42:35 hyc: are you saying that even the disk space is independent of the number of transactions? 15:42:52 the disk space has nothing to do with consensus 15:43:21 doesn't bitcoin's consensus require connectivity all the way back to the genesis block? 15:43:31 connectivity of blocks, yes 15:43:40 Are you really on about disk space (after verification) or chain size (data needed to verify the chain in the first place)? 15:43:43 at least, during initial synchronisation, a full node will need to download all transactions. 15:43:43 but again - it doesn't matter how many txns are inside a block 15:44:36 txn validation and blockchain consensus are completely independent matters 15:45:13 txns are validated by the first node that receives them. can be rejected immediately if invalid. "consensus" is irrelevant. it is an O(1) operation. 15:45:23 is it possible to validate a block's hash, without knowing the transactions inside it?
15:45:31 yes 15:46:06 even without knowing the hashes of the transactions inside it? 15:46:29 a miner sees the hashing header only, not all of the actual transactions 15:47:13 so, we require having all hashes of transactions in a block, in order to verify that block. right? 15:49:43 "As an optional string, I'm not..." <- I found it from here https://github.com/mymonero/mymonero-core-cpp/blob/master/src/monero_transfer_utils.hpp#L64 15:50:17 Ask in #mymonero maybe. I think most people there are here too though. 15:50:26 what will happen if I have a rct, and how can I get it? 15:50:38 i wrote that code 16:44:23 "I found it from here https://..." <- the rct is 'coinbase' or a mask 17:36:27 Hey everyone 👋️... (full message at ) 17:43:44 Ideally, if you were to spend more time on this, where would you like to take it? 17:54:45 make a Rust full node implementation which would... (full message at ) 17:55:18 > <@vorot93:matrix.org> Hey everyone 👋️... (full message at ) 17:55:41 > <@vorot93:matrix.org> make a Rust full node implementation which would... (full message at ) 17:56:16 is there a way to make the bridge post the full message, instead of a link to libera.ems.host? 17:59:04 ghostway: noted 17:59:05 there's no public repo yet, and I put it on ice for now as I'm still prototyping the juicy bits (snapshots, libp2p) using a working full node (Akula) first 17:59:05 once those are done, I'll copy them over and present an alternative to C++ monerod with a killer feature set 18:00:35 Yep, just saw your tweet saying it's not public yet. I see, so no early contributions, heh. Good luck! Quite excited 18:06:08 Artem Vorotnikov: Sounds very interesting. Do you have a use case in mind? Or just indefinitely experimental?
18:23:31 Rucknium 18:23:31 - Much faster to sync up, which is relevant for the mobile use case 18:23:31 - MDBX, which should give even better performance, having been stress-tested and improved on the multi-terabyte databases of Akula and Erigon 18:23:31 - 10x more modular, cleaner and accessible to developers because of staged sync and Rust - consequently much easier to add features like custom RPC calls 18:29:19 What is staged sync? 18:30:57 also forgot: 18:30:57 - plaintext chainspec format, which allows for adding support for any network, provided it doesn't require any unsupported protocol features, example for an Ethereum testnet: https://github.com/akula-bft/akula/blob/master/src/res/chainspec/sepolia.ron 18:35:33 Yeah, what is staged sync. Much faster how? 18:35:51 moneromooo... (full message at ) 18:37:33 rbrunner: combined with other innovations, Akula can sync an archive Ethereum node (2+ TB worth) in under 48h on a good machine 18:37:33 Compare this to Go Ethereum which can (non-state-sync) sync in..... months? 18:38:45 You're claiming doing this is much faster because of... cache friendliness? Other? 18:40:21 Anyway, if you can speed sync up noticeably while checking the same things, it sounds like a great win. 18:41:15 Staged sync is more about the ease and simplification of node development, but it's also faster because CPU-heavy logic can be parallelized much more easily 18:41:15 Not sure about Monero, but at least in Ethereum we have extraction of the tx sender from the secp256k1 sig - since it's a separate stage, we actually parallelize all of it for a massive win worth several hours of sync at least 18:42:10 Monero sync is not massively parallel. I can see how *that* could speed things up. 18:42:27 Some parts of verification are parallel, but more could be. 18:43:08 Monero sync's heavy on disk I/O. Optimizing this might get you good wins too.
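The staged-sync argument above (a stage applies the same independent check to every item, so it maps directly onto a worker pool) can be sketched as follows. `verify_tx` is a hypothetical stand-in for real per-tx work such as recovering a sender from a secp256k1 signature; none of this is Akula's or monerod's actual code.

```python
# Sketch of why a staged design parallelizes CPU-heavy work well: one stage
# runs one independent check per tx, so the whole stage is an embarrassingly
# parallel map. verify_tx is an illustrative stand-in for expensive work.
from hashlib import sha256

def verify_tx(tx_blob: bytes) -> bool:
    # Stand-in for an expensive, independent per-tx check
    # (e.g. sender extraction from a signature).
    return len(sha256(tx_blob).digest()) == 32

def signature_stage(tx_blobs, mapper=map):
    # In production you could pass e.g. multiprocessing.Pool().map as `mapper`
    # to fan the checks out across cores; the stage logic itself is unchanged.
    return list(mapper(verify_tx, tx_blobs))

assert all(signature_stage([b"tx1", b"tx2", b"tx3"]))
```

The design point is that because the stage is isolated from block download and storage, the parallelism is a one-line swap of `mapper` rather than a restructuring of the sync loop.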
18:44:23 Akula's data model is heavily optimized towards random reads, sequential writes, and keeping within recommended storage engine parameters, so I/O is _not_ a bottleneck for us 18:45:09 but since you use (almost) the same storage engine, I believe it's not a big problem for you either (cc @hyc) 19:04:40 we've done a similar experiment in OpenLDAP, years ago. when bulk loading a database, just load entries first, and then iterate over attribute indices one at a time 19:05:09 while each individual pass is relatively fast, the sum of all separate passes still takes more time than adding entries and indexing at once 19:06:56 loading all the entries by themselves first certainly is sequential and fast. but index creation is always random access 19:07:19 unless you use external extract-transform-load like we do 19:07:27 then it becomes sequential 19:07:43 then you require an external memory or data store large enough to hold an entire index 19:07:46 If you've got the RAM, you might be able to create it with seq access by writing in already sorted order from RAM. 19:07:59 Well, yes. 19:08:18 sure, anything can be made to go faster if you have enough RAM 19:09:03 we don't do ETL on tmpfs exactly because indexes are large 19:09:22 and they would compete for RAM with MDBX's mmapping anyway 19:10:49 naturally, syncing a 2TB database on exactly 2TB of free space is a bad idea 19:10:49 but it's always a bad idea for anything, at least on SSDs 19:10:49 we don't peak at much more than the final DB size though 19:12:56 anyway, have fun re-learning with MDBX everything we already tested and discarded in OpenLDAP 20 years ago 19:13:12 lol 19:13:39 The fun started already ... 19:17:28 It's much easier to have fun if you are young.
Old people like hyc and me are pretty hard to excite anymore :) 19:17:38 lol 19:17:44 Is it correct to say that this Rust node would also have to implement the communication protocol of the C++ `monerod` (in addition to any separate improved protocols) so that it can communicate with the rest of the network? 19:19:22 Rucknium 19:19:22 maybe - there are several ways to go about it 19:19:22 I am most certainly _not_ planning on starting with rewriting Levin; if anything, it's the hardest _and_ least important part of getting the node to function. 19:19:32 Rucknium[m]: I don't see how else you would expect to interact with the existing network 19:19:35 I would say so, yes. Otherwise you wouldn't have a single network 19:20:49 Akula started as a companion node to Erigon, leeching off its MDBX on-disk database 19:20:50 Yeah, if you want a proof of concept that demonstrates faster sync and superior modularization, you just implement enough to fetch blocks from somewhere 19:21:26 then, since I'm planning libp2p anyway, I could make an overlay network, connected to the Levin-based network through such companion nodes 19:22:07 Levin is a stretch goal in any case, not a hard requirement _even_ for prod 19:22:20 how does erigon compare to turbo-geth? 19:22:54 ehm, Erigon was renamed from turbo-geth 19:23:05 ah 19:23:17 no wonder it sounds familiar ;) 19:23:39 naturally, it's even more shiny since it's quite a few commits away :) 19:24:21 so this still describes your ETL? https://github.com/BTAutist/turbo-geth/blob/master/common/etl 19:24:56 another problem with approaches like these is typical Windows users. the process you describe requires uninterrupted operation from 0 to tip.
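The extract-transform-load trick hyc and Artem are debating above amounts to: buffer index entries, sort them once, then write in key order so the storage engine only ever sees sequential inserts. The sketch below uses a plain dict as a stand-in for an MDBX/LMDB table; `build_index_etl` and the record shape are illustrative names, not either project's code.

```python
# Sketch of the ETL index build discussed above: arrival-order inserts would
# hit the B-tree at random keys; sorting first makes every insert sequential.
# A dict stands in for the real on-disk table; all names are illustrative.

def build_index_etl(records, key_fn):
    # Extract: compute (index_key, record_id) pairs in arrival order.
    entries = [(key_fn(rec), rec["id"]) for rec in records]
    # Transform: one sort pass (done externally, on disk, for datasets
    # larger than RAM - the "external data store" hyc mentions).
    entries.sort()
    # Load: now purely sequential, append-only writes into the index.
    index = {}
    for key, rid in entries:
        index.setdefault(key, []).append(rid)
    return index

records = [{"id": 3, "owner": "b"}, {"id": 1, "owner": "a"}, {"id": 2, "owner": "a"}]
assert build_index_etl(records, key_fn=lambda r: r["owner"]) == {"a": [1, 2], "b": [3]}
```

The trade-off in the thread is visible here: the sort needs scratch space roughly the size of the index, which is why it competes with the database's own memory mapping for RAM.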
19:27:04 correct, but each index stage only requires a few hours, not days, so 🤷‍♂️️ 19:27:13 MDBX's freepage reuse policy, which reuses most-recently-used pages first, will also destroy SSDs faster 19:28:48 that's not quite enough workload to significantly shorten a modern SSD's lifespan 19:30:59 it depends entirely on how good the wear leveling in the controller is 19:31:01 Well, it all depends on scale.. 19:31:06 not something worth betting on, really 19:31:59 well then, ancient/obscure hardware users are free to use LMDB :) 19:33:10 but so far Erigon and Akula have had zero reports of dead SSDs - and Ethereum's workload is orders of magnitude heavier compared to Monero's 19:33:23 I am almost looking forward to the Monero dev team being outed as hobby programmers and even dimwits, and getting a much faster and more modular Monero daemon in Rust as recompense 19:34:03 We will get over it, believe me :) 19:34:22 lol 19:35:53 > <@vorot93:matrix.org> moneromooo... (full message at ) 19:36:57 If you download all the data first and then check its validity, you will find out that you received invalid garbage only after you've already downloaded it all and done all the checks 19:38:57 it may be garbage data, but it loaded fast!
19:39:12 lol 19:39:19 Artem Vorotnikov: all I can say is, when you release something about this (or need a little contributor help), I'd like to get notified :) 19:39:30 Ethereum is also a BFT network, mind you, so it's _not_ correct that Akula assumes data to be 100% valid 19:39:30 That said, the blockchain is at least partially spam-protected by the PoW seal, and if you have invalid blocks floating around with good difficulty, then new nodes syncing up is the _least_ of your problems 19:39:33 Also, I don't think treating levin as a "stretch goal" is a useful idea - if you can't talk to the other nodes, you're just excluding yourself from the network 19:40:01 ghostway: absolutely 19:40:27 A malicious node could craft a chain fork with valid (PoW-wise) blocks, and you'd only find out after you've already gone through all the data 19:40:30 endor00: an overlay network bridged using companion nodes leeching off LMDB is a good first way to get going 19:41:50 > <@endor00:matrix.org> A malicious node could craft a chain fork with valid (PoW-wise) blocks, and you'd only find out after you've already gone through all the data 19:41:50 in practice it would mean that the adversary has computational power enough to rival the network 19:42:03 again, in this case getting _new_ nodes to sync is going to be the least of your pains 19:42:14 and even so it can be mitigated 19:42:51 Doesn't have to: they could just mine a few blocks to start a fork in the chain, and your nodes that can't talk to the "regular" nodes would be none the wiser 19:43:23 well then you have an eclipse attack and it's a problem either way, staged sync or not 19:43:31 merope: Sounds like it would just be impractical, it's PoW 19:44:02 Right, but since you're creating a nearly-isolated network (at least initially), it would be quite easy to pull off 19:44:18 Not to mention the fact that the "bridge" nodes would essentially become trusted nodes 19:44:19 Couldn't you just sync chunks at a time instead of the entire thing?
19:44:42 * ~~Sounds like it would just be impractical, it's PoW~~ 19:44:46 Unless a user ran both implementations - in which case yours would be redundant 19:45:05 UkoeHB: incremental staged sync, yes - we do have this mode in Akula 19:45:28 ghostway[m]: Double-spend attacks can be quite elaborate 19:46:18 > <@endor00:matrix.org> 19:46:18 > Double-spend attacks can be quite elaborate 19:46:18 Double-spend on an unsynced node is interesting 19:46:50 If you want to replace levin with something better, the sane way to do it is to support both at first, and then push levin out after a hardfork (once the new p2p lib is deemed stable enough) 19:46:58 I'd expect established services to have synced-up infra, with backups even - at that point staged sync becomes regular (all stages for 1 block) sync 19:47:41 > <@endor00:matrix.org> If you want to replace levin with something better, the sane way to do it is to support both at first, and then push levin out after a hardfork (once the new p2p lib is deemed stable enough) 19:47:41 I'm not advocating for changes in monerod, I'm making a plan to make a new node and push it into a usable state as fast as possible 19:48:41 Right - but if you forgo security in exchange for convenience, you're kinda missing the whole point :D 19:49:02 ArtemVorotnikov[m]: Many users sync their nodes on and off 19:49:12 if we were to talk about network-wide changes, it would make sense to integrate a second libp2p-based node into monerod and phase out levin at some point 19:50:52 > <@endor00:matrix.org> Right - but if you forgo security in exchange for convenience, you're kinda missing the whole point :D 19:50:52 making things perfect immediately is kind of the enemy of organized software development :) 19:51:28 (looks like it was 17 years ago, nearly to the day https://git.openldap.org/openldap/openldap/-/commit/675cda1b622ed2cceea7bda54187e04e15eb8f6f) 19:51:47 There's a difference between "not perfect" and "design that introduces vulnerabilities" though ;)
19:52:49 If you want to deliver something that doesn't expose your users to trivial attacks, you need to be able to talk to the rest of the network directly, via levin 19:53:02 > <@endor00:matrix.org> There's a difference between "not perfect" and "design that introduces vulnerabilities" though ;) 19:53:02 there is a difference between vulnerabilities and practical real-world vulnerabilities 19:53:29 I suggest you dig into Monero's history and look at the DoS attacks from a few years ago, then 19:53:44 anyone else around here have experience with libp2p? 19:54:00 https://github.com/libp2p/libp2p 19:55:03 > <@endor00:matrix.org> I suggest you dig into Monero's history and look at the DoS attacks from a few years ago, then 19:55:03 I'll worry about this when I have the staged sync pipeline done :) 19:57:08 a modular comms library sounds nice, but I'd prefer OpenLDAP liblber to Google protobuf 19:57:17 Given that they were directly targeting the sync mechanism, you might want to do that before you're done - or at least, I personally don't like doing things twice 19:57:27 obviously I'm biased, but it's already tuned for zero-copy reads and other stuff 19:59:41 > <@endor00:matrix.org> Given that they were directly targeting the sync mechanism, you might want to do that before you're done - or at least, I personally don't like doing things twice 19:59:41 there are always points of trust for joining the network, like the bootnodes in monerod's source code 20:01:07 And since we're trying to build a trustless network, adding more of them (when we already know how to do things without them) would be counterproductive 20:03:48 > <@endor00:matrix.org> And since we're trying to build a trustless network, adding more of them (when we already know how to do things without them) would be counterproductive 20:03:48 do you plan to port-scan random IPs or? 20:04:38 that seems viable for bitcoin's network.
not sure we have enough nodes 20:05:55 ArtemVorotnikov[m]: I was talking about adding trust in the data processing during sync, not peer-finding 20:08:18 > <@endor00:matrix.org> 20:08:18 > I was talking about adding trust in the data processing during sync, not peer-finding 20:08:18 if the worst comes to pass, and attacks become practical, it's possible to add checkpoints and run staged sync block-by-block - since the worst part is history, that won't impede the sync significantly 20:08:45 as I said, staged sync is mostly about ease of development, we got most of our performance gains elsewhere 20:19:30 the monero node stuff probably needs significant work to support seraphis, so even a proof-of-concept for better node design could be impactful 21:04:07 hyc: since proving that there is consensus that a transaction happened requires a chain of blocks from genesis such that some block contains that transaction's hash, and since each transaction has a unique hash, the space complexity of the blockchain is O(number_of_transactions) 21:40:33 .merges 21:40:33 -xmr-pr- 8319 8355 8516 8517 8525 8527 8529 8543 8564 8569 8570 8571 8578 8580 8590 8593 8594 22:24:49 whoa 23:46:56 cvmn: space complexity isn't a thing 23:47:29 a block depends on the hash of its preceding block, and the hashes of the txns it contains. 23:47:52 that's what gets fed to a miner. 23:48:23 *hash of hashes of the txns. a miner never sees the actual txns
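The closing messages describe a block header committing to a "hash of hashes" of its transactions. A minimal generic Merkle-root sketch shows how a header can commit to every transaction without containing any of them; note this is the textbook binary construction (duplicate-last-on-odd, SHA-256), not Monero's actual `tree_hash`, which differs in hash function and padding details.

```python
from hashlib import sha256

# Generic binary Merkle root over tx hashes: the single 32-byte root in the
# header commits to all txs, so a miner never needs the actual tx bodies.
# This is a textbook construction, not Monero's tree_hash.

def merkle_root(tx_hashes):
    if not tx_hashes:
        return sha256(b"").digest()
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256(bytes([i])).digest() for i in range(3)]
root = merkle_root(txs)
# Swapping any single tx hash changes the committed root:
assert merkle_root(txs[:2] + [sha256(b"tampered").digest()]) != root
```

This is also why, as noted earlier in the log, verifying a block's hash needs all the tx hashes (to recompute the root) but none of the transaction contents.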