07:55:13 <5m5z3q888q5prxkg:chat.lightnovel-dungeon.de> https://exch.cx/
07:55:13 <5m5z3q888q5prxkg:chat.lightnovel-dungeon.de> Does anyone have any experience with this?
12:20:43 $25 for HDD vs $35 for SSD.......
12:32:04 Sure
12:32:24 They often don't have XMR
12:33:23 otherwise, good
12:35:33 <5m5z3q888q5prxkg:chat.lightnovel-dungeon.de> interesting
12:36:20 <5m5z3q888q5prxkg:chat.lightnovel-dungeon.de> why the lack of XMR?
13:09:51 they allow you to create a trade before it confirms
13:17:21 XMR is rather low on this 😢
13:17:28 https://matrix.monero.social/_matrix/media/v1/download/monero.social/dhcYGFlcTXEiNhCkHBGcgXvX
13:18:18 Low interest, I mean.
13:28:19 Last trade was 2 Nov.
13:54:04 So.. moneroocean
13:54:35 To mine a specific coin, you enter the algo into the password field after the worker name, such as `worker1~rx/0`
13:55:16 Moneroocean has 2 rx/0 coins.
13:55:16 zephyr and xmr.
13:55:16 so.. it seems there's no way to mine xmr directly, moneroocean is redirecting all rx/0 hash to zephyr
13:56:54 So if you're mining rx/0 on moneroocean.. you might have started mining zephyr without realizing.
13:56:54 it says there is only 1 person mining xmr on MO
14:17:54 Some problems at accept: Too many open files, connections_count = 997
14:17:55 ugh
14:18:20 user@serveruser:~/.bitmonero$ ulimit -n
14:18:20 1000000
14:18:23 ugh ugh
14:22:24 Limits set with the ulimit command apply only to the current shell. For a permanent, system-wide change, you must edit the /etc/security/limits.conf file (a sketch follows below).
15:00:27 Fuck me! The latest trend in KYC madness:
15:00:27 ```
15:00:28 Kittipat
15:00:28 Hello. Welcome to Orbix Trade. How can I help you?
15:00:29 With the 2FA access.
15:00:29 Kittipat
15:00:30 May I ask what problems you are encountering, sir?
15:00:30 avatar
15:00:31 I have gone through KYC with Satang in 2020, so I don't really feel like doing it again.
15:00:31 I don't have the phone with the OTP keys anymore.
15:00:32 Kittipat
15:00:32 Sorry for the inconvenience. Have you ever met us at our office, sir?
15:00:33 avatar
15:00:33 No, why would I go to your office? I'm currently in Pakchong.
15:00:34 Kittipat
15:00:34 In order to complete the KYC, Kyc includes important part which is face-to-face meeting for verification.
15:00:35 Face to face. That's a new one. Does all your customers travel to meet you in person?
15:00:35 Kittipat
```
15:07:03 are they buying you lunch?
15:09:40 Lmfao
15:09:49 This is satire.. right
15:10:18 Trasher must be kittipat. right??
15:11:54 Nope. The SatangPro exchange has changed hands, and the new crew are going full Stasi.
15:12:20 orbixtrade dot com is the new bitch in town.
15:12:24 I'd ask them who I'm to meet with
15:12:33 and if I can bring my entourage
15:12:42 Also, if they are hiring
15:14:14 "please sir, dont come here"
15:14:20 I saw an arbitrage opportunity and wanted to swing by.
15:14:20 The last XMR/THB trade was over a month ago, so not much action.
15:14:41 "oh baby, you can't stop me now. Already bought 40 plane tickets, we'll be there Saturday at 3am"
15:14:51 "you guys have loudspeakers?"
15:15:34 "Arbitrage opportunities" usually have a catch. You found it :P
15:15:43 haha
15:15:51 I was thinking about asking about reimbursement of travel expenses 🤣
15:16:21 I'd offer them free tickets to ethprague
15:17:26 Did you check Google Maps to see if the place exists?
15:17:39 If it doesn't, I'd call and tell them I'm outside
15:18:35 Well, I used to live close by, so I know the area. It's real.
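Picking up the 14:22:24 point about making the file-descriptor limit permanent: a minimal /etc/security/limits.conf sketch, assuming monerod runs as the account `user` (the name and values are illustrative, not from the log):

```bash
# /etc/security/limits.conf -- raise the open-file (nofile) limit permanently.
# Applied by pam_limits on login, so log out and back in for it to take effect.
user  soft  nofile  65535
user  hard  nofile  1000000
```

Note that this only covers login sessions; if monerod is started by systemd, the unit ignores limits.conf and you would set `LimitNOFILE=` in the service file instead. That mismatch would explain a shell showing `ulimit -n` = 1000000 while the daemon still hits "Too many open files" around 1000 connections.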
15:19:23 Damn, I just got an email asking `How would you rate the support you received?`. That's beyond parody.
15:19:24 well, then I don't know why they're inviting you
15:19:45 is their motto "I don't want peace. I want problems, ALWAYS!"
15:20:06 Personally, I'd never invite trasher to my workplace. 😆 (I'm jk)
15:56:55 **Voter applications and committee candidate announcements are now available for the MAGIC Monero Fund committee.**
15:56:55 With the Monero CCS incident, now is a very important time to focus on the MAGIC Monero Fund.
15:56:55 If you are keeping up to date with the Monero ecosystem, please apply to be a voter, and if you have the time, please consider running for a committee member position.
15:56:56 **Apply**: https://magicgrants.org/Monero-Fund-2024-Election/
15:56:56 **Tax-deductible donation**: https://monerofund.org
18:39:07 Hi All! Just a reminder to all wallet makers: our polyseed implementation is open source. Here it is if you want to put Polyseed in your wallet app. https://github.com/cake-tech/polyseed_dart
19:16:38 Hi. I see that the Cake Wallet mobile wallet allows creating a wallet using 16 words (polyseed), listing the traditional 25 words as legacy.
19:16:38 Is a 16-word seed then usable in other wallets? Especially the official GUI?
19:17:35 You cannot use 16 in the GUI or CLI
19:17:49 But you can convert 16 to 25, and use that in the CLI/GUI
19:18:29 Feather, anonero, cake/monerocom support 16 (polyseed)
19:18:29 mysu is almost ready, and I imagine stack not far behind
19:21:58 helloooo
19:32:35 is there a way to partition my two SSDs of 256 GB, instead of having one with 512 GB, for running the full node?
19:38:13 slave_blocker: yeah, you can do that with LVM and combine them into one logical volume
19:38:46 You do mean join 2x256GB SSDs, right?
19:39:03 yes
19:39:24 yeah, like doing RAID 0 on them
19:40:14 what are the bash commands?
19:42:04 slave_blocker: it goes something like this https://askubuntu.com/questions/7002/how-to-set-up-multiple-hard-drives-as-one-volume/7841#7841
19:45:44 You can also use an SSD as a ramdisk + HDD for storage
19:47:25 SSD as write cache for the HDD
19:47:31 learn LVM2
19:48:09 Do you still have your guide up?
19:48:44 Nope, the host was shitting the bed so I let it expire; I have to get another VPS for it.
19:48:44 will return online asap (with update)
19:49:13 I'm just asking because my 256 SSD is going to run out
19:49:21 Ok tyty
19:49:38 so an easy way to do that for each monerian would be nice
19:49:54 I was thinking about making a script for my guide update
19:50:27 so one could run the script, type the /dev/whatever he wants to use for storage and cache, and have the script set up the LVM stuff
19:50:54 (and when most Linux systems boot, the LVM stuff is automatically detected and started, so once you set it up, it stays set)
19:51:54 furthermore, the other day i was discussing this
19:52:40 https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/enabling-caching-to-improve-logical-volume-performance_configuring-and-managing-logical-volumes#enabling-dm-writecache-caching-for-a-logical-volume_enabling-caching-to-improve-logical-volume-performance
19:52:47 so the coupon collector's problem is isomorphic to the problem of a node syncing up partitions of a chain
19:53:11 say instead of two SSDs i have 500 SSDs
19:53:16 That's for the write cache; I assume you can find the rest of how to set up LVM easily (a sketch follows below).
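For the 19:40:14 question (joining the two 256 GB SSDs into one volume), a minimal sketch along the lines of the askubuntu answer; the device names /dev/sda1 and /dev/sdb1 are placeholders, and pvcreate destroys whatever is on them:

```bash
# Turn both SSD partitions into LVM physical volumes (wipes them)
pvcreate /dev/sda1 /dev/sdb1
# Pool them into one volume group
vgcreate monero_vg /dev/sda1 /dev/sdb1
# One logical volume spanning the whole pool (linear; add "-i 2" to
# stripe across both drives, RAID 0 style, as mentioned at 19:39:24)
lvcreate -n blockchain -l 100%FREE monero_vg
# Filesystem and mount point
mkfs.ext4 /dev/monero_vg/blockchain
mount /dev/monero_vg/blockchain /mnt/monero
```

Linear vs striped is a judgment call: striping roughly doubles throughput but losing either SSD loses the whole volume in both cases.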
19:53:16 I'll try to get the guide back online in less than a week
19:53:39 You just need as much HDD as you want and a chunk of an SSD (32GB would be way enough)
19:53:41 and I could choose some subset of those 500 SSDs
19:54:12 hum
19:54:27 no, typing without thinking is not such a good idea
19:54:32 :)
19:54:49 but the argument is that spatial scaling is not an issue
19:54:58 ah yes
19:55:15 because maximally one could have 500 SSDs
19:55:23 for a full node
19:55:47 but then there are some that only have 10 SSDs
19:56:08 and it grows n log n
19:56:09 !
19:56:15 1 - add all disks (the HDDs and the SSD chunk partition) as LVM PVs
19:56:16 2 - add all HDD PVs to a VG
19:56:16 3 - create an LV with the whole size
19:56:17 4 - add the SSD chunk PV to the HDD VG
19:56:17 5 - set a writecache the same size as that extra SSD chunk
19:56:18 then you should be set
19:56:18 Generic googling of how to use LVM plus the link for the write cache should set you up (a command-level sketch follows below).
19:56:46 https://en.wikipedia.org/wiki/Coupon_collector%27s_problem
19:56:52 The writecache is very important: you want to keep the last blocks of the blockchain in the SSD cache. You don't want a normal cache, as older blocks that get read would contaminate the cache.
19:57:53 If set up properly, you sync the whole thing in less than 10 hours or something
19:57:56 and the most current ssd is the most crucial and should be called the slave ssd
19:57:58 :)
20:05:25 so we all agree that SSD scaling is no problemo and everybody is happy?
20:06:30 i saw yesterday that roughly half a MB is being stored per BTC block time in XMR
20:08:22 Yep, you really don't need more than 32GB worth of SSD, and that should be enough until we get to Seraphis
20:09:13 get 2-3 x 8TB HDD plus a corner of an SSD and it should be enough until you expire
20:13:08 hum
20:16:14 https://www.zerohedge.com/crypto/irs-team-reports-rise-crypto-tax-investigations
20:16:14 Bring it 😂 The more the better.... for the Nero!
20:16:55 i mean
20:17:05 for 50k tx/sec
20:17:12 global adoption stuff
20:17:18 worst case scenario
20:17:24 sleep well and stuff
20:17:26 Should not be a problem even for HDD
20:17:26 as you write new blocks sequentially
20:18:03 Most used decoys are recent ones, hence 32GB worth of SSD buffer should be enough
20:19:41 But for 50k tx/sec to verify, I would really recommend 32GB worth of NVMe write cache. Not SATA SSD...
20:19:48 And a quality NVMe (with DRAM)
20:21:01 If 32GB becomes insufficient before we get to Seraphis, then with LVM you can "grow" the size of the cache as needed
20:26:24 I did all my tests with 16GB worth of NVMe
20:26:24 And for the HDD, I used a single SMR piece of cr*p
20:26:53 16GB is enough until you get to 98% sync or something like that, lol, so I recommend 32
20:27:42 > 16GB is enough until you get to 98% sync or something like that lol so I recommend 32
20:27:42 ?
20:28:26 with 16GB worth of NVMe cache you are going to get an exponential number of "cache misses" once you get to ~98% of the blockchain synced
20:29:44 what does that mean?
20:29:47 cache?
20:29:57 what cache?
20:30:30 write cache: it's what you use to cache the 16GB or 32GB worth of recent blockchain.
20:30:30 Did you even follow what I was talking about since the beginning?
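To make steps 4 and 5 of the 19:56:15 recipe concrete, a hedged sketch based on the Red Hat dm-writecache doc linked at 19:52:40; `vg`, `blockchain` and `/dev/nvme0n1p1` are hypothetical names continuing the earlier example:

```bash
# 4 - add the SSD chunk to the HDD volume group
pvcreate /dev/nvme0n1p1
vgextend vg /dev/nvme0n1p1
# Carve a 32G cache volume that lives only on the SSD PV
lvcreate -n cache0 -L 32G vg /dev/nvme0n1p1
# 5 - attach it as a writecache in front of the blockchain LV
# (some LVM versions require the LV to be inactive first: lvchange -an vg/blockchain)
lvconvert --type writecache --cachevol cache0 vg/blockchain
# To detach the cache later, flushing it back to the HDDs:
#   lvconvert --splitcache vg/blockchain
```

The point of --type writecache rather than --type cache is exactly the contamination issue described at 19:56:52: a writecache only holds recently written blocks, so random reads of old blocks during verification don't evict the hot tail of the chain.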
20:31:04 i.e.: instead of getting a ton of SSDs, you just buy a massive amount of HDD terabytes and use an SSD/NVMe write cache to hold the most recent part of the blockchain
20:32:02 hum
20:32:11 that sounds good
20:32:32 but not everyone will be able to buy all that stuff
20:32:59 there is no demand for huge data storage for retail
20:33:02 It's a lot cheaper than SSD/NVMe storage
20:33:15 you can get 8TB worth of HDD for the price of 2TB worth of NVMe
20:33:18 a 100 TB SSD is still 40k €
20:33:56 Yeah, seriously, just use HDD with an LVM write cache.
20:33:56 Cheapest solution to run the thing.
20:33:57 I will put my guide online soon
20:34:42 And I think that HDD-with-SSD-write-cache trick won't be needed anymore if we go to full membership proofs... right?
20:36:09 > write cache, it's what you use to cache the 16GB or 32GB worth of recent blockchain.
20:36:17 that sounds interesting
20:36:36 <123bob123:matrix.org> Explains why, when I added L2ARC, sync speed got quicker.
20:36:51 i was trying to convert my cousin, a dr in maths, this weekend; i wanted to give him some xmr
20:36:57 and he said:
20:37:06 what if everyone uses that
20:37:15 it won't work
20:37:17 hehe
20:37:18 Yes, if you use a "normal cache", then each time you read a block that is farther back than 32GB, that block gets written into the cache, contaminating it and increasing the number of SSD writes exponentially. Hence you need a write cache.
20:37:22 great talk
20:37:25 :)
20:38:38 L2ARC?
20:38:38 Isn't ARC some compression algo we used back in the good old DOS times?
20:39:07 <123bob123:matrix.org> ZFS L2ARC
20:39:23 oh, that probably helps too; I mean, if data is compressed then it takes less space on the drive.
20:39:31 I stay away from ZFS personally.
20:39:33 <123bob123:matrix.org> I have an LXC running monerod on Proxmox
20:39:55 <123bob123:matrix.org> Yeah, FreeBSD bug
20:39:59 <123bob123:matrix.org> 🚀
20:40:25 I really prefer the layered way of doing things
20:40:25 RAID layer, LVM layer, etc, etc
20:41:16 And I add drives regularly to my array, which is not possible in ZFS, right? Except if you always add sizeof(pool)
20:42:01 with LVM I can add drives, remove drives, relocate drives (without unmounting the volumes)... (sketch below)
20:42:16 <123bob123:matrix.org> They had an upgrade
20:42:26 <123bob123:matrix.org> Can't remember what it was
20:42:44 Silent error protection is not even an argument, as I have HDDs for adults (they have 528-byte sectors instead of 512-byte sectors... each sector carries a checksum of itself, right on the drive)
20:43:30 holy shit, never heard of that
20:43:32 !
20:43:59 But the main reason I stay away from ZFS is... it does not come with the kernel. If I'm going to play with an out-of-tree FS, I'll play with Reiser5
20:44:07 and what about SSDs, or stuff where you can or cannot have data on it, like hidden data
20:44:44 SAS drives for adults; you also need a proper SAS controller for adults, one that supports that special sector geometry
20:45:02 my SAS controller is like 10 years old 😂
20:45:38 wtf are you talking about
20:45:44 :)
20:46:10 528-byte sectors. You need a controller that supports that; the OS itself doesn't know about it
20:46:40 and if you don't have a controller for adults (not consumer shiet), then it will just use the drives as if they have 512-byte sectors
20:47:13 They also have a 4K-sector variant of that scheme
20:47:24 It's enterprise-grade hardware
20:47:27 why do you call it adult?
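To illustrate the add/remove/relocate claims at 20:41:16 and 20:42:01, a hedged sketch with the same hypothetical `vg`/`blockchain` names; all of this runs with the volume mounted:

```bash
# Grow the pool: add a new drive, then grow the LV and its filesystem online
pvcreate /dev/sdc1
vgextend vg /dev/sdc1
lvextend -r -l +100%FREE vg/blockchain   # -r also resizes the ext4 fs

# Retire a suspect drive: migrate its extents elsewhere, then remove it
pvmove /dev/sda1
vgreduce vg /dev/sda1
```

This is the layering argument in practice: LVM moves extents underneath the filesystem, so the node never has to stop while the disks underneath change.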
20:47:39 Because it's not a toy like consumer hardware
20:47:51 <123bob123:matrix.org> Enterprise
20:51:44 let a monero tx be 5 kB, so (5 * 50,000 * 200) kB in a block
20:53:32 1.5kB
20:53:51 My cheapo RAID 5 array (HDD) can absorb 600MB/s
20:54:21 It's faster than a SATA SSD 😂
20:54:23 so that makes 50 GB per block
20:54:38 wait, I have to calculate that lol
20:55:42 more like 14GB
20:55:45 per 2 minutes
20:55:58 times (30*24)
20:56:09 that's 36,000 GB per day
20:56:57 36 TB per day
20:57:03 hum
20:57:05 But that 50k tx/s is like peak Visa, right (peak, as normally it's way lower)
20:57:25 the write cache argument sounds very healthy
20:57:34 but the TB of HDD
20:57:38 assuming the majority will continue to be slaves, I don't think we will get to that number for real money utilization
20:57:42 should be partitioned
20:58:04 such that the coupon collector's problem handles that
20:58:37 if it rises with n log n
20:58:47 hum
20:59:05 <123bob123:matrix.org> We'll have the glass HDD by then!
20:59:29 how many distinct partitions of HDDs would be needed
20:59:40 It's already the case; it's been a while since these were made of rust
20:59:51 such that the entire set of blocks overall is covered
21:01:36 assuming the majority will continue to be slaves?
21:01:42 afaik, HDD platters are already made of "glass" with a layer of some exotic rare-earth metal mix
21:01:49 wrong narrative
21:02:24 Yeah, the majority will continue to be slaves; that's unfortunate but that's confirmed.
21:03:28 The slaves run the world (they mine materials, cook our bread and prepare our packaged foods, clean, fix....)
21:04:35 Crypto people are technically extremely unproductive 😂
21:04:35 if most people were to switch, who would work the fields, build our crap and maintain our services?
21:05:21 <123bob123:matrix.org> So we should buy mining farms in Africa, gotcha!
21:05:27 <123bob123:matrix.org> CCS inbound
21:06:20 if john doe adds 36 TB of data every day to his full node, is this a problem in the future?
21:06:36 and how big would the write cache be in that case?
21:06:42 We won't get to Visa peak numbers
21:06:52 first, it's a peak number; most of the time it's lower
21:07:02 and don't expect more than 5% to free themselves
21:07:02 ok
21:07:13 for me it's christmas every day
21:07:19 i deliver packages
21:07:24 (not blocks)
21:07:29 :]
21:07:33 I'm waiting for packages, what are you waiting for?
21:08:05 Cyber Monday stole some nero from me :(
21:08:28 and don't expect more than 5% to free themselves?
21:08:34 wrong narrative
21:09:01 if john doe adds 36 TB of data every day to his full node, is this a problem in the future
21:09:06 and how big would the write cache be in that case?
21:10:04 If it's after Seraphis and we get full membership proofs, you won't need the cache (as far as I understand)
21:10:27 Can someone confirm this?
21:10:30 why?
21:11:38 What slows the whole HDD shebang right now is the fact that a node has to check the decoys when it verifies transactions.
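A quick recheck of the 20:51:44 to 20:56:57 arithmetic, since the figures don't quite line up: daily growth depends only on tx rate and tx size, and Monero's block time is 120 s, not 200 s. At the thread's worst case of 50,000 tx/s:

50,000 tx/s * 120 s * 1.5 kB ≈ 9 GB per 2-minute block
720 blocks/day * 9 GB ≈ 6.5 TB per day (≈ 21.6 TB/day at 5 kB per tx)

The 50 GB/block and 36 TB/day figures appear to combine a 200-second window with the 120-second block count, so they overshoot; even corrected, though, the conclusion holds that worst-case global adoption means multiple TB of sequential writes per day.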
16 decoys per transaction; that means you have to read 16 older blocks per transaction you verify
21:11:45 Seraphis will change that
21:12:49 full membership proofs will make it so the whole pool of TXs are decoys, if I understand, meaning you won't have to read 16 blocks per checked ring
21:13:22 16 decoys per input per transaction
21:13:44 yeah, that's why I then mentioned "per ring"
21:14:01 that is why you want the cache if you want to sync fast on an HDD setup
21:14:53 most decoys in rings are recent, and that's the reason you want a write cache, so that you cache 32GB worth of recent txs
21:15:34 hum
21:15:42 so if we don't get Seraphis, or it doesn't have that full membership proof, I assume you will make the cache so it fits roughly a year's worth of TXs?
21:16:01 Else, just use the cache until it's no longer needed
21:16:46 (36 * 365) TB
21:17:19 non,
21:17:27 let's say 5% of the world switches to monero
21:17:33 that will make it only 2.7GB per day
21:18:09 Meaning you take 8 years to fill 8TB
21:20:13 how did you get that number?
21:20:22 Yeah, I'm rechecking them lol
21:20:36 RavFX: Any estimate for the ratio of time reading ring member data from SSD vs HDD? Seraphis may have ring size 128 instead of full chain membership proofs. More ring members would be older if ring size goes to 128 (not proportionally, necessarily, but just more of them because 128 > 16).
21:22:03 So you fill a full 8TB HDD a year instead, still "relatively cheap"
21:22:25 Yeah, so it will be more IO-intensive if we don't go to full membership proofs. Thanks
21:23:06 how many tx/sec?
21:23:09 What about transaction size?
21:23:24 2.5k
21:23:26 (not % of population)
21:23:28 at PEAK
21:23:57 Probably more IO-intensive without FCMP, but the current FCMP proposal is much more CPU-intensive than Seraphis or the current Monero design.
21:26:08 brb
21:26:15 how many tx/sec?
21:26:26 gfdshygti53
21:26:31 But the bottleneck is probably the IO if we're talking about adoption.
21:26:31 Cores are getting cheap (compared to fast storage). Do we have numbers for the CPU usage?
21:27:02 I'd say max 2.5k (a peak, so rare and not long)
21:27:02 And average kB per TX: 1.5 kB
21:27:45 AFAIK, the FCMP proposal has 5.5 kB for a 2in/2out tx: https://github.com/monero-project/research-lab/issues/100#issuecomment-1609536076
21:28:43 Seraphis with 128 ring size is about 2.5 kB for a 2in/2out tx: https://github.com/monero-project/research-lab/issues/91#issuecomment-1047191259
21:28:46 So more data writing (but way less randomly accessed data).
21:28:46 Thanks. So HDD will be king if we switch to FCMP
21:29:18 A current 2in/2out Monero tx is about 2.1 kB
21:29:25 So Seraphis/128 would mean like 8x the number of IOs we have now
21:30:11 I don't know what IO has to be done to verify txs in FCMP
21:31:40 From what I remember (might be wrong), you don't have to read all the decoy TXs. Else it would be a problem anyway (reading the whole blockchain for each TX). So I assume there is more math stuff going on.
21:31:43 CPU time per tx for FCMP is a lot. I am looking for the estimates. The CPU estimates are less certain because there could be optimizations in the math somehow.
21:32:54 I understand.
21:32:54 So we are going to massively pump the IO requirement, or the CPU requirement, but not both.
21:34:46 Except if they have a way to not have to read 128 decoys per ring to verify
21:34:49 Can we quantify the increase?
21:35:37 You can do batched tx verification or non-batched.
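A sanity check on the 8x claim at 21:29:25, counting ring-member reads for a typical 2in/2out tx (one ring per input):

ring size 16: 2 inputs * 16 members = 32 decoy reads per tx
ring size 128: 2 inputs * 128 members = 256 reads per tx, i.e. 128/16 = 8x today's reads

At the 2.5k tx/s peak discussed above, that is roughly 640,000 member lookups per second, mostly random reads over older blocks: exactly the load the recent-block write cache is meant to absorb, and which FCMP would trade away for CPU-heavy math if the thread's understanding of it is right.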
The CPU time required is different. AFAIK, you can batch easily when syncing blocks, but when maintaining a mempool (verifying txs as you receive them), it's probably non-batched.
21:45:25 just a question
21:45:42 RavFX: https://github.com/kayabaNerve/full-chain-membership-proofs/blob/develop/Luke%20Parker%20Full%20Chain%20Membership%20Proofs%20MoneroKon%202023.pptx
21:45:50 say you keep one year of the latest blocks in the write cache
21:45:50 > Verification is batched, with the larger the batch the lower the time per proof
21:45:50 > Currently, ~100ms per proof in a batch of 10 for a set of 777 million outputs
21:46:00 > With an academic progression, it'd be just ~33ms (again, batch size equals 10)
21:46:00 > Grootle proofs, currently proposed for a ring of 128, are 3.7ms in a batch of 10
21:46:02 what comes before that?
21:46:07 > Further optimizations are still available
21:46:08 > Performance of the node overall will be impacted
21:46:12 just the headers of the blocks?
21:47:57 As of 6 months ago, FCMP tx verification time on CPU was 10x slower than Seraphis with ring size 128.
21:48:29 I think kayaba is expecting more optimizations for FCMP.
21:49:31 where can I read about that syncing-only-1/8 thingy?
21:50:06 I'm very interested in syncing only partitions of the entire blockchain
21:50:28 ?
21:50:54 slave_blocker, that happens when you use --prune-blockchain
21:50:57 LVM2
21:50:57 sync the whole thing on HDD and keep ~1Y worth of blockchain in the LVM write cache.
21:50:57 It's automatic; I will re-deploy the guide soon
21:51:34 --prune-blockchain
21:51:48 Pruning only removes 2/3 (you keep 1/3), and it works in a different way.
21:52:21 yes
21:52:42 but that depends on the ring decoys being dropped and so on
21:53:13 is there a torrent-like trait to it?
21:53:14 Yep, for now the LVM trick is great, but when Seraphis/FCMP land, that's going to change
21:53:21 that sounds interesting
21:53:24 ...
21:55:01 Well, you get the new TXs that enter the mempool the p2p way, but then your node verifies them using the data it already has
21:58:26 ok
21:58:45 but how does it work for a john doe doing IBD
21:58:58 always --prune-blockchain
21:59:29 I have no idea how these work. I never used pruning
21:59:43 does such a syncing node always assume a full node somewhere?
22:02:29 A syncing node can sync from full and pruned nodes afaik, the p2p way, from many nodes.
22:02:29 And many pruned nodes will easily produce a complete set, so there is no problem for syncing.
22:03:29 > And many pruned nodes will easily produce a complete set
22:03:53 that is exactly what I wanted to read
22:04:06 :}
22:14:42 slave_blocker: IIRC no tx output public keys are removed when you prune. If you want more info on pruning, check getmonero.org or Monero's Stack Exchange.
22:15:27 Pruning only removes 7/8ths of prunable data. IIRC the prunable data isn't necessary to verify future txs.
22:42:07 Cuprate docs 👍👍
22:43:01 https://github.com/Cuprate/cuprate.github.io/blob/main/src/monero/database/pruning.md
23:09:28 They are finally doing it.... WTFM 🥹
23:17:42 I've moved that pruning section to the monero-book; I think it's more at home there compared to Cuprate's docs: https://cuprate.github.io/monero-book/pruning.html
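For anyone following along, a sketch of the two pruning paths mentioned above, using the stock tools (the data dir shown is just monerod's default from the 14:18:20 session):

```bash
# Sync a pruned node from scratch; keeps 1/8th of the prunable data
monerod --prune-blockchain

# Or prune an already-synced full node in place with the bundled tool
monero-blockchain-prune --data-dir ~/.bitmonero
```

As noted at 22:14:42, pruning never drops tx output public keys, so a pruned node can still verify new transactions and serve its retained 1/8th slice to syncing peers.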