06:49:45 Well in that case there might be some savings to be made if the decoy output is dropped in that situation. 07:51:03 btw i know some of you guys wanted a lemmy instance over tor : http://lemmy.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/ 07:51:45 <321bob321:monero.social> lemmy see 15:33:23 Food for thought: Would it be possible for an external viewer to, without possessing any blockchain information themselves, verify that the Monero blockchain is working “up to spec”? 15:33:25 Motivation: I am trying to see how viable a “dual chain” PoC would be, where you could “send” Monero between the two chains by burning on one and minting on the other. In order for it to work safely, each chain would have to be able to semi-trivially check that the other chain is operating properly. 15:35:12 you really need to define what operating properly means 15:35:33 if you don't know how, start by defining what you consider to be operating abnormally 15:37:35 been thinking about that too. you have to verify randomx with risc0 for example to convince yourself that the work was done. Solana can verify groth16 as part of the svm now 15:41:29 another solution would be to fork it and just make the monero transactions available to the vm as part of the sysvars data structure that is available to all programs https://x.com/spirobel/status/1825480495494537699 "thought: 15:41:29 scale solana to a size where it becomes feasible to run nodes at home 1-10k tps 15:41:31 have the validators cross post transactions on demand 15:41:33 validators of one minisolana act as oracles in the programs of another." similar to this. 
just let the block leader be the oracle that gives the monero transactions to the vm when evaluating the transactions 15:58:13 I left it intentionally vague, as there are probably many ways the blockchain could be “improper” 16:00:21 Basically what it really would mean in my context is that the two chains are operating under the same rules; a very non-exhaustive list would be block rewards, block times/difficulty scaling, the scaling of the size, ensuring that there isn’t unaccounted-for supply, etc. 16:00:45 If they aren’t playing by the same rules, then the chains would simply not interact with each other 16:01:23 This is pretty similar to my thinking 16:01:48 The solution to blockchain scalability isn’t an L2; it’s an *L0* 16:02:32 Have local consensus in the form of individual blockchains, and then have a universal consensus among all the blockchains so they don’t completely isolate from each other 16:10:22 main thing to verify is that the proof of work was done and that the consensus rules were respected. essentially what the daemon code does. 16:11:02 i saw it more from a DEX experience standpoint as an alternative to thorchain, atomic swaps etc 16:11:17 solana dex experience is simply better than anything else 16:11:23 As in the Monero daemon? If so, could it reasonably do so without syncing to the blockchain? 16:11:36 Hmm 16:11:58 are we talking about blockchain scalability in terms of size? 16:12:24 Yeah my idea wouldn’t be a DEX at all 16:12:25 The point of it would be to ensure that 1 XMR on chain#3785 is equivalent in every way to 1 XMR on chain#72932 16:12:59 Partially yes, though bandwidth and transaction volume do become a concern given a large enough user base 16:13:12 yes you would bake the monero daemon into the solana validator code. 
Or you would have to find a way to verify the consensus rules and pow with a zk proof in risc0 and do this just as a part of a solana on chain program, not as a separate network 16:14:23 local, individual chains is a no-go imo... it already kinda exists today 16:14:25 for example, Monero, Zephyr, and Salvium. all of them use rx/0 as pow 16:14:27 but 1 XMR =/= 1 ZEPH =/= 1 SAL 16:15:01 The reason for that is that there are other factors that make them incompatible 16:16:02 If you could trade the currencies 1:1 with *zero trade-offs, and with no change in pricing,* then that would be what I’m describing in essence 16:17:30 partially agree. but if i had to choose between some half-baked dex mini project and a solana fork that makes xmr transaction data available to the smart contracts, i would pick the latter. It will be better maintained and in general have more eyes on it / higher quality code, because the ecosystem is so large and quickly evolving. I also think smaller, slower solana is an underexplored design space. More people should take a look at it. 16:27:22 my node and your node agree on the rules 16:27:23 1 xmr on my node is accepted and treated as valid by your node 16:27:25 both are essentially "local" chains that agree on the rules 16:27:27 what's the point of having different chains if there's no other factors? why not just use the original chain instead? 16:27:29 essentially, we're back to the current system 16:29:15 This is what I’ve been thinking of in essence, albeit it requires a lot of assumptions: 16:29:17 Whenever a Monero blockchain becomes too large (by a combination of multiple factors) it will “split” into two blockchains, with nodes being split between them based on a “distance” function that uses the ping times of the nodes relative to the others. Conversely, if a blockchain gets too small, it can “merge” into another blockchain. 16:29:19 In-chain transactions occur as usual. 
The only very noticeable change to each chain is the addition of a “mint” and “burn” function, which is only used when cross-chain transactions occur. 16:29:21 Cross-chain transactions will involve burning a given amount on the sending chain and minting a given amount on the recipient chain. 16:29:23 Each node in a Monero blockchain will be randomly assigned a certain number of blockchains to “check”, to ensure they are still “canonical”. The blockchains to check will be switched around continuously. If a given blockchain fails enough checks, the blockchain will be “isolated” and unable to transact with the other chains until it comes back into compliance. 16:29:25 Tons of simplification and assumptions are made with this concept ofc. 16:34:27 Automatic splitting seems complicated; think about how this would work in the simple case with 2 chains first 16:35:50 here's how the current monero nodes work: 16:35:51 * it stores all the inputs and outputs since the genesis block 16:35:53 * you create a tx with inputs you own, create proofs, and send it to the node 16:35:55 * the node verifies that the proof is valid and announces it to the network 16:35:57 in order to verify that the tx is valid, you need to know the entire history of the chain 16:35:59 so, without having the entire history, separate chains have no way to know if the coin you're trying to "burn and mint" is valid or not 16:36:33 You want to be able to verify that coins on the other chain have been burnt, but how do you do that without syncing with the sending chain? 16:37:20 this, as well 16:46:56 if we're concerned about blockchain size, IMO, there are several ways we can go: 16:46:57 * incorporate a powerful compression algo (doesn't solve the actual issue but a good band-aid) 16:46:59 * more pruned nodes w/o `--sync-pruned-blocks` (if I understand correctly, this will increase the initial block sync time but better help the network health... 
again, a band-aid solution that requires community collaboration) 16:47:01 * some sort of zk blockchain like mina protocol (shoutout to hardhatter for bringing it to my attention) that dramatically shortens the blockchain size to < 1 MB 16:48:46 Why without --sync-pruned-blocks? 16:49:59 realistically there is also no way to get the different projects to verify the consensus rules of the other chains. that is why projects like thorchain etc use multisig and run their own chain 16:50:18 zk proofs solve some of that 16:50:43 (in case the other chain supports smart contracts that can verify them) 16:52:07 This is my main issue that I’ve kinda realized: 16:52:09 In more traditional fields, transactions are broadcast more locally, and storage as such is localized as well. In other words, when you use a debit card, that transaction doesn’t have to propagate throughout the entire network in order to become legitimate. 16:52:11 Imagine if every single Visa transaction had to go through a centralized server. Even if you assume that they don’t all have to go to that server immediately (analogous to how a node isn’t simultaneously connected to every other node in existence) they will all have to go through it *eventually*, and that would kill it, regardless of how beefy the server is or how much bandwidth you give it (at least by our current technology). 16:52:15 Now replace that server with every single node in a universal blockchain system, and you get the same problem. 
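To make the burn-and-mint bookkeeping from the 16:29 messages concrete, here is a toy sketch in Python. All names are hypothetical, and the hard part raised at 16:36:33 (verifying the peer chain's burn without syncing it) is replaced by a direct, i.e. trusted, query of the peer object:

```python
import secrets

class Chain:
    """Toy ledger: a supply counter plus the burn receipts it has issued."""
    def __init__(self, name, supply):
        self.name = name
        self.supply = supply
        self.receipts = {}     # receipt_id -> amount burned on this chain
        self.redeemed = set()  # receipts this chain has already minted against

    def burn(self, amount):
        """Destroy coins locally and issue a receipt the peer chain can check."""
        if amount > self.supply:
            raise ValueError("cannot burn more than the chain's supply")
        self.supply -= amount
        receipt_id = secrets.token_hex(16)
        self.receipts[receipt_id] = amount
        return receipt_id

    def mint(self, peer, receipt_id):
        """Mint coins only if the peer attests the burn happened, at most once."""
        if receipt_id in self.redeemed:
            raise ValueError("receipt already redeemed")
        amount = peer.receipts.get(receipt_id)  # the trusted step
        if amount is None:
            raise ValueError("no such burn on peer chain")
        self.redeemed.add(receipt_id)
        self.supply += amount
        return amount

# Total supply across both chains is conserved by construction.
a, b = Chain("chain#3785", 100), Chain("chain#72932", 50)
receipt = a.burn(30)
b.mint(a, receipt)
print(a.supply, b.supply)  # 70 80
```

In a real design the `peer.receipts.get(...)` line is exactly what would have to be replaced by a zk proof or by the random cross-checking scheme described above; everything else is bookkeeping.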
16:53:41 https://getmonero.dev/interacting/monerod.html#performance 16:53:43 you download blocks that others pruned instead of pruning them yourself 16:53:45 saves sync time + some computation 16:53:47 however, afaik, that pruned data is still important and, whenever needed, will be fetched from the network 16:53:49 now, the pruning process selects part of the prunable data randomly and "deletes" it 16:53:51 if you're downloading blocks that are already pruned, the overall randomness of the pruned nodes of the network decreases 16:53:53 if everyone pruned the blocks themselves, the overall randomness distribution would be smoother, thus increasing the data availability in case that pruned data is needed 16:53:55 another avenue would be layer-two protocols that span 2 chains, like the colored coins concept as implemented in Counterparty. But let the client validate monero and bch / btc transactions and rely on the security guarantees of this consensus and tx ordering to issue coins on a layer above that has no rule enforcement at the node level, but only at the client level 16:53:59 study colored coins, counterparty, ordinals etc 16:55:15 what do colored coins have to do with this 17:17:48 in visa, there's a central database 17:17:49 through various database tech and techniques, it's split and copied to multiple servers across the globe... 
as a whole, it acts as if it's a single database 17:17:51 whenever you're making a tx, you're interacting with this central entity 17:17:53 the database verifies if the tx is valid and if valid, processes it 17:17:55 basically, every single visa transaction goes through the "central" server where underneath, it's a web of servers 17:17:57 in some situations, your visa tx has to wait some time before it's verified 17:17:59 that's the latency that results from the "central" server entity gradually processing millions of queued tx 17:21:04 (sorry if I'm somehow repeating what you've already said, preland ) 17:26:35 No you’re good 17:26:40 It kinda works that way 17:28:19 The main difference is this though: 17:28:21 A transaction made in a given region doesn’t have to go to the same server as one in another region. So there isn’t a truly “centralized” system in that sense. 17:28:23 Transactions can occur cross-region though, as the servers trust each other to be behaving properly 17:29:08 My thinking is this: 17:29:09 Replace the regional servers with regional blockchains 17:29:11 Replace the trust between the servers with some sort of a proving system between blockchains 17:29:38 okay, smart contracts, in this sense, could be a solution 17:29:39 but to enforce a smart contract, there have to be validators, right? 
the validators supposedly will host copies of all the blockchains they support 17:29:41 assuming the sum total chain size gets big enough, the number of validators will decrease proportionally 17:29:43 while a handful of validators can make it work, the power of validation then gets concentrated in a small number of people (basically, catastrophic for decentralization) 17:29:45 unless I'm totally misunderstanding the nature of smart contracts (sorry, I didn't look into them deeply) 17:30:07 (And by regional I don’t mean by physical region, if you are asking: it would be purely determined by the ping times between nodes, in a perfect world at least) 17:30:15 This seems analogous to trustless sidechains 17:30:35 It is in a way 17:30:47 Although everything would be a sidechain 17:31:24 (And most sidechain research that I’ve seen with Monero has been to advance some sort of L2 tech; not what I’m thinking) 17:35:18 Except for p2pool 17:37:09 you see, the Visa servers "trust" each other because it's a "trusted" setup 17:37:11 why should the American blockchain trust the Chinese (or North Korean) blockchain w/o verifying? as far as the American blockchain is concerned, those are adversary chains that have their best interest in lying 17:37:13 again, the verification requires having a copy of the other blockchains 17:37:15 (unless there's some trustless setup possible through cryptography magic) 17:39:56 I actually like cryptography magic 17:41:48 Crypto magic is the best thing ever 17:41:57 I swear that I am not biased at all 17:42:18 🧙 17:44:49 https://matrix.monero.social/_matrix/media/v1/download/monero.social/TzJleHJAXBTExxLxxhttFUZk 17:50:02 1. This is why I said it wasn’t necessarily split by physical boundaries. It would ideally be split algorithmically. Otherwise you could get a scenario where countries run their own chains, and then start artificially censoring each other. Not good. 17:50:03 2. 
The only reason verification in my case would necessitate not copying the blockchain is that it’s soooooo damn big. We can’t have dedicated nodes to validate them (else those nodes could become compromised) so they’d have to be determined randomly, and switched often. This is impossible with a blockchain in the range of hundreds of gigs. Either local networks would block the usage of it, or the nodes would be destroying their ssds on a weekly basis. 17:50:07 However, if the blockchain could be reduced in size, *and then guaranteed to not grow over time*, then this wouldn’t need to be avoided. 18:06:31 i also agree that size is the limiting factor of blockchain scalability 18:06:33 then, zk proof based blockchain is one of the best ways forward 18:06:35 https://minaprotocol.com/blog/kimchi-the-latest-update-to-minas-proof-system 18:06:37 hell, we even have god-tier compression algorithms that don't get used (**FUCK YOU MICROSOFT**) 18:06:39 https://youtu.be/RFWJM8JMXBs 18:19:44 https://github.com/monero-project/research-lab/issues/110 18:19:45 https://bounties.monero.social/posts/95/3-001m-research-stark-proofs-to-sync-a-monero-full-node-in-an-instant 18:19:47 hahahahaha.... 
we already have proposals and bounties for zk proof based blockchain 18:19:49 but not enough traction :( 20:29:45 The zk route is more practical imo. The compression route requires more computational power. And getting unfathomable compression ratios by today’s standards, as I’ve said before, is possible but imo still not worth going that route, since it’s still gonna cost you more computational power. And idk how quickly you guys are gonna figure out a reasonably computationally efficient, recursively repeatable compression algorithm that allows memory retrieval in the compressed form, let alone having an algorithm of that kind at all.
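On the earlier pruning point (16:53): whether the stripes a node keeps are chosen independently or inherited from whoever served the pre-pruned blocks really does change data availability. A toy simulation of that argument, assuming Monero-style pruning where each pruned node keeps 1 of 8 stripes of prunable data (the node and source counts are made up):

```python
import random
from collections import Counter

STRIPES = 8  # a Monero pruned node keeps roughly 1/8 of the prunable data
NODES = 80

def coverage(seeds):
    """Number of pruned nodes retaining each stripe."""
    counts = Counter(seeds)
    return [counts.get(s, 0) for s in range(STRIPES)]

random.seed(1)

# Case 1: every node picks its own pruning seed independently.
independent = [random.randrange(STRIPES) for _ in range(NODES)]

# Case 2: nodes sync already-pruned blocks, inheriting the seeds of
# just a few upstream sources.
sources = [random.randrange(STRIPES) for _ in range(3)]
inherited = [random.choice(sources) for _ in range(NODES)]

print("independent coverage per stripe:", coverage(independent))
print("inherited coverage per stripe:  ", coverage(inherited))
# With only 3 upstream sources, at least 5 of the 8 stripes end up held
# by zero pruned nodes, no matter how many downstream nodes sync.
```

This only illustrates the "randomness distribution" argument from the chat, not Monero's actual pruning mechanism, which ties the retained stripe to the node's pruning seed.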