01:45:02 Offline Signing Library for XmrSigner to Production has moved to funding! https://ccs.getmonero.org/proposals/vThorOfflineSigningLibrary_XmrSignerToProduction.html
02:50:22 Congratulations vthor! Will post it via @MoneroSpace on X shortly.
02:51:55 https://x.com/MoneroSpace/status/1851094480130093446
02:51:55 nitter: https://xcancel.com/MoneroSpace/status/1851094480130093446
08:52:09 Thank you rotten_wheel (not gonna trigger at this time), and I also want to thank everybody supporting me in this CCS!
16:07:41 *I am once again asking to update synapse*
16:21:45 SyntheticBird what's up when a cat is roaring at you and moving its tail toward you, like it wants to be rubbed?
16:22:00 Should I watch out for da cat attack now? 🤔
16:22:44 Everyone told me that's a sign that they want a hug, but every time I try they attack me
16:23:51 Yeah, she wasn't very receptive when I tried to hold her in my arms after that. Such a hoe.
16:24:02 She only wants the cuddling.
17:56:43 My public node is running on a machine with two SSDs in a RAID 0 configuration. RAID 0 is supposed to be faster than using a single drive, so maybe that could be increasing the node's performance compared to other configurations.
17:58:20 Probably. I think the usage is also an important factor. I mean, you have decent specs, yet Cake Wallet's monero daemon is entirely on ramfs and they are slower than you.
18:29:28 DDR2
18:31:05 Cake Wallet has a bandwidth bottleneck
18:32:39 No horizontal scaling method on LMDB?
18:32:54 Need to compare the time between when a request for new data is sent vs. when the download starts again
18:33:54 Maybe a single node writes and the 10 other nodes just read
18:35:34 Slap it in a dragonflydb
18:35:37 https://www.dragonflydb.io/
18:42:11 There is a problem with the database you are supposed to read from being locked for writes frequently. RavFX did the multi-tiered nodes idea. It seemed to work pretty well. Cuprate may create something that separates the writing program from the reading program, for better performance.
18:43:47 BTC and its cousins separate the data-serving from the node with Electrum server programs. Monero's node needs to serve much more data, but it doesn't separate the roles yet.
18:44:33 But the Monero node has nothing to index since addresses are hidden, so it's not exactly the same.
19:15:51 I'm an LMDB supremacist. Dragonfly is fancy and well suited for web or network applications, which makes it completely overkill for what monerod or cuprate needs. Its only strength comes from the fact that it is in-memory based, same as redis (which it aims to replace), and the fact that it can efficiently shard the work.
19:15:53 I think this is probably something that can be achieved in a more lightweight and performant way (if the DB is in-memory, obviously) with the cuprate stack. (Having 1 DB that receives requests and responds to 10 RPC servers.)
19:16:27 I'm sorry, could you explain what it is you're calling multi-tiered nodes? cc: RavFX
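For context on the "1 DB that receives requests and responds to 10 RPC servers" idea above: a minimal sketch of LMDB's single-writer/many-readers model in Python, using the py-lmdb package. This is illustrative only, not monerod's or Cuprate's actual schema; the database path, key layout, and function names are made up.

```python
import lmdb  # pip install lmdb (py-lmdb)
from typing import Optional

DB_PATH = "./blockchain-lmdb"  # hypothetical database directory


def writer_append_block(height: int, block_blob: bytes) -> None:
    # Single-writer process: LMDB allows exactly one write transaction at a
    # time, so keeping every write in one process avoids lock contention.
    env = lmdb.open(DB_PATH, map_size=2**34)  # 16 GiB map; size it for the chain
    with env.begin(write=True) as txn:
        txn.put(height.to_bytes(8, "big"), block_blob)
    env.close()


def reader_get_block(height: int) -> Optional[bytes]:
    # Read-only process (e.g. one of several RPC servers): LMDB readers never
    # block the writer or each other, which is what makes a "one writer,
    # N RPC readers" layout work against a single database file.
    env = lmdb.open(DB_PATH, readonly=True, max_readers=512)
    with env.begin() as txn:
        blob = txn.get(height.to_bytes(8, "big"))
    env.close()
    return blob


if __name__ == "__main__":
    writer_append_block(1, b"example block bytes")
    print(reader_get_block(1))
```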
19:18:20 SyntheticBird: He probably means the tiered node storage
19:18:21 Keep the last year on a fast NVMe drive, then the rest on rust spinners (or SSDs, considering spacious normal SSDs are getting affordable)
19:19:17 Using write-cache with LVM2 on Linux: a normal LVM LV on the slow drive, then a write-cache configured on the fast drive. There are probably other ways to do it, but that's how I did it.
19:20:13 Ah yes, I heard of it through a miner onion site. Glad to see it works very well
19:20:15 There is also the trick of having multiple nodes act as "one node". I did that too
19:20:37 how?
19:21:09 One node for P2P only
19:21:11 Then a few nodes that P2P only to the "one node", and these only do RPC; that way it does not overload your main node with RPC (that's how nodes seem to go down)
19:21:31 genius
19:21:58 I tested with one P2P node and 4 RPC nodes, it worked pretty well
19:22:01 when the threading is so bad you're doing the concurrency with processes
19:22:19 Yeah, I was using one VM per node actually
19:22:19 On the same Epyc server
19:22:34 cc: boog900
19:24:01 The nodes had the same IP. I used some PCC rules on the router so a connection to the RPC port was directed to one of the four RPC nodes (and it bound that client IP to that specific node, so each client kept the same RPC node).
19:24:03 I also did the same thing on my .onion routing
19:24:18 So for the outside world, it's "one node"
19:25:32 But it takes a lot of resources, so for now I removed that setup. I should be able to return to it in the event we get attacked like in March.
19:25:33 I used like 256 GB of RAM and 2 TB of NVMe storage
20:27:02 <321bob321:monero.social> Sounds like HA nodes
21:29:10 Hmmm, seems like overclocking too: 5 GHz, a 1 Gbps port, and fast-as-f* RAM for a ramdisk. There might still be a way to 'shard' based on block height perhaps
21:31:53 It's beyond my mental remit at the moment 🥱
21:35:05 sleep is important folks
22:14:14 <321bob321:monero.social> Sleep when dead
22:26:46 get no sleep and be dead
23:07:23 Cat confirms: sleep is most important
23:07:50 love cat
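Going back to RavFX's "several RPC nodes behind one address" setup from earlier: he did the real thing with PCC rules on the router, but the client-pinning part of the idea can be sketched in application code. The sketch below hashes the client IP so every request from a given client lands on the same backend RPC node; the backend addresses are made-up placeholders, not anyone's actual nodes.

```python
import hashlib

# Hypothetical backend RPC nodes sitting behind one public address.
RPC_BACKENDS = [
    "10.0.0.11:18089",
    "10.0.0.12:18089",
    "10.0.0.13:18089",
    "10.0.0.14:18089",
]


def pick_backend(client_ip: str) -> str:
    # Deterministically map a client IP to one backend, so a given client
    # always reuses the same RPC node (similar in spirit to per-connection
    # classifier rules on a router).
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(RPC_BACKENDS)
    return RPC_BACKENDS[index]


if __name__ == "__main__":
    for ip in ("203.0.113.7", "198.51.100.42", "203.0.113.7"):
        print(ip, "->", pick_backend(ip))  # the repeated IP maps to the same backend
```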