03:11:58 non-kyc places to buy monero? 05:50:09 🤍 05:52:06 Exchange 05:52:06 https://silent.exchange 05:57:03 retoswap.com 10:41:30 just read cuprate is ready for casual usage . means i can use it for personal use without encountering any errors ? 10:42:16 <3​21bob321:monero.social> Production use ? 10:42:31 https://www.youtube.com/watch?v=spLqyfHV8TQ 10:42:35 no it's not 10:42:38 It was bad wording ngl 10:42:47 why 10:43:05 it's very experimental, prone to bugs (since it's alpha) and isn't ready for "casual" usage since RPC isn't available 10:43:35 the only thing that has been asserted so far is syncing performance, and the fact that it won't explode 3 sec after running it 10:44:18 obviously a lot of bugs have been smashed for a while, but there are a lot more down the road 10:44:41 and i would say it's ready for "casual usage" when it reaches beta stage 10:45:07 <3​21bob321:monero.social> Alpha tag it 10:45:11 Understand hinto's message as "cuprated is now meant to be run by casual users for testing purposes" 10:45:18 <3​21bob321:monero.social> V0.01 alpha 10:45:35 which wasn't the case some time ago 10:45:58 also i bet it can't use the existing blockchain downloaded from monerod ? 10:46:07 yup 10:46:18 yes i tested 10:47:29 but can it use my local monerod node to create its own blockchain db ? 10:47:30 it might make things faster ? 10:47:52 we don't have any `exclusive-node` option at the moment 10:48:12 so no unfortunately 10:48:25 ohhh k 10:49:10 pruned blockchain is also 91gb in size right now , damnit :( 10:49:34 i want pruned blockchain so bad 10:50:26 cuprate does have a pruned option right ? 10:50:47 nope 10:50:59 🤔 10:51:06 and our current database is 1.2x larger than monerod's 10:51:19 currently 260GB on my computer 10:51:20 i saw rotten's X post, he mentioned some pruned node 10:51:31 ? 
10:51:59 well no, there is no prune support in cuprate 10:52:11 all we support is fetching the blockchain from pruned nodes 10:52:12 i would love to run a full node but i don't have that much space right now , sorry 10:52:33 np it's understandable 10:52:53 same 10:53:04 i should've bought 2gb ssd instead of 1tb 🥲 10:54:05 at one time i had blockchains downloaded for about 3-4 coins 10:54:06 <3​21bob321:monero.social> Not dedicated storage for it ? 10:54:29 no , portable ones are better 10:54:34 lol bro wants a usb stick 10:55:23 you wrote 2gb instead of 2tb 10:55:36 i should've bought 2tb ssd instead of 1tb 🥲 10:55:54 lol my bad , but you get it right xD 10:56:53 i also have a 1Tb hdd but it's of no use 🫠 10:57:09 ah bloody hell u gotta change the fork heights in the wallet too? 10:57:39 maybe to keep backups but not sync the chain from it. 11:01:39 When was the last monero hard fork ? 11:01:46 2022 ? 11:21:16 raggafragga 11:21:35 * m-relay smash 11:22:06 and all the available "private testnets" make you sync the existing testnet. blargh 11:22:14 private testnet tutorials etc 11:22:30 i changed the heights. what else do you want!?>!?>!>!> 11:25:42 well i guess i'm not winning this battle against entropy this morning 11:38:30 Check wownero 11:39:10 They use RingCT from genesis, so they probably made the changes that you're looking for 19:30:51 Do peer nodes check whether a monero node applies consensus rules uniformly ? 19:32:26 specifically asked because cuprate is a new node implementation, so it should go by the same rules 19:33:59 does it have any effect on dandelion++ , or fingerprinting attached to a node 19:34:32 A lot of things aren't consensus. 
A modified monerod peer node can even be configured to not follow d++ 19:35:49 cuprated follows the dandelion++ paper more closely than monerod currently does, so it is slightly different. The node you run is also fingerprintable: cuprated or monerod 19:38:01 but since a transaction cannot be traced back to a node , fingerprinting a node should not be a concern i guess 19:42:55 it depends, there probably won't be that many people running cuprated for a while, so if someone knows you are running cuprated it might make it easier for them to find your node and do a targeted attack. 19:43:07 A tx can be traced back to a node 19:43:43 (That's why dandelion++ was implemented. To make it harder) 19:47:48 in bitcoin yes :) 19:51:18 And monero 19:51:38 D++ makes it harder, but far from impossible 19:52:04 i see ser 19:54:20 btc has stuff like `onlynet=onion` that (i think) works to alleviate concerns there 19:55:26 i think you can configure monerod to send transactions over Tor too 19:56:15 on btc though it's a whole other issue with the transparent blockchain , so hiding the node doesn't do much. 
19:57:36 Wish it was on by default, and better yet, a Tor-like network built into monero itself for p2p stuff 😅 19:58:29 it's not hard to set up 19:58:46 many tutorials exist i guess 19:59:11 after the initial sync you are supposed to change the config to only use tor 19:59:18 Someday somewhere someone will make it and do a pr 20:01:42 there is another mode where it does all other gossip through clearnet , but when sending a transaction it uses Tor 20:06:21 --tx-proxy=tor,127.0.0.1:9050 20:07:50 Hey boog900, my monerod sync is at 23hrs today 🥳 20:08:27 have you tried cuprate :) 20:08:31 I don't have 260gb, but i'll sync like 150gb of cuprate after 20:08:36 I'll cross the finish line some time next week 20:09:05 Monerod locked up on me yesterday with 400k blocks remaining. So resyncing to see if i can reproduce 20:09:20 (Testing the gitian build of 18.4.0) 20:10:04 it's real sad. Does like 1.4 blocks/sec at the tip 20:10:51 I'll likely sync cuprate w/ and w/o checkpoints to compare 20:13:01 I can sync cuprated in ~16 hrs on my PC without fast sync, fast sync is like 7 to 8 due to poor internet 20:17:53 <3​21bob321:monero.social> Without smoke ? 20:20:39 <3​21bob321:monero.social> What's the lowest-end device you have run cuprate on ? 20:22:02 full verification sync does make my PC pretty hot, it's like I am mining. 20:22:17 Raspberry Pi 4 2GB 20:32:14 the plot twist is that boog900's pc is a raspberry pi 4 2gb 22:14:00 boog900 after restarting cuprated, the bss (block sync size) is very low (like 1-5). Known issue? It eventually starts to grow, but takes a few mins 22:14:31 protection against huge blocks, we always start low and build up. 22:15:05 it can actually be configured in the block downloader, that just isn't exposed to end users. 
22:16:42 For monerod, we used the median of the last 100 blocks as the estimator 22:17:14 we do something similar, the block downloader only takes into account downloaded blocks though 22:18:00 For downloaded blocks, we take the avg of the largest batch in the queue to estimate a block size 22:18:25 So it compares the median vs the queued average, and uses the higher of the two as the estimate 22:19:25 https://github.com/monero-project/monero/pull/9494 22:20:16 The only "issue" i have is that in my testing, 5+MB batches sync fewer blocks/sec than 3MB :/ 22:20:45 Otherwise i think it's ready for production 🥲 22:22:33 (also the max a user can set is 50MB, but our current serialization limits only allow ~30MB. So setting it to 50 will cause issues). I don't really see this as an issue. 50 was chosen to be half of the packet size limit (100MB) and the serialization limits should eventually be fixed to allow for 100MB 22:23:30 Either with 7999 or 8867, or with something like 0xfffc's serialization increase pr 22:26:38 i don't understand why 5+MB is so much slower than 3MB though. We're using 10MB as the default.. idk if we should drop to 3 and modify it after fixing whatever issue is causing 5+MB to be slow. I've only tested on 1 system, so that doesn't really help 22:28:22 that's for monerod? 22:31:06 Yeah 22:32:44 The serialization limit was due to a DOM / OOM attack where objects would deserialize to like 12gb, so we limited the counts severely, which limits the batch size to ~30MB 22:33:25 7999 was a fix for it (not merged), 8867 is another fix that replaces the serialization. Both big prs 22:42:45 Yeah cuprate's epee parser doesn't use a DOM, it doesn't even copy bytes, it references into the raw bytes using the `bytes` crate. 
22:43:44 so we don't have those limits, we still have the overall message limits though 22:44:53 I also see cuprate slowing down when the batch size limit is too high, although it happens higher than 5MB, more like around 30 22:47:37 Maybe it depends on the system 22:48:42 Here (near the chain tip) it goes from ~1.25-1.75 blocks/sec @ 3MB to ~0.7-1.0 @ 5+MB 22:49:38 Any immediate thoughts on the approaches taken in 7999 or 8867? 22:49:58 Iirc they both remove the DOM 23:02:20 ngl I haven't looked and they are quite big, I think 8867 is already in the process of being adopted 23:02:29 the write interface seems to be already merged 23:03:31 Yeah there are a few merged prs related to 8867 23:04:32 think we should prioritize it? 23:05:30 I think the tx propagation changes should be prioritized first 23:06:11 and then maybe even the tx-pool being re-validated on a HF, as no HF can happen until that is fixed IMO 23:06:52 yeah, i can just spam 300mb into the pool at hf time and break monero 😆 23:06:59 Smh 23:07:22 yeah lol 23:07:34 i'm loling rn 23:07:38 good 23:08:07 those should be easy fixes tho, probably 23:08:18 The hf one** 23:08:40 The tx propagation needs some design work still, right? 23:08:50 hopefully, it being part of consensus is going to make people uneasy though 23:09:36 The main idea is there, I think there are some details that need to be decided but code work can start on it. 23:09:52 I think that's totally unreasonable, considering that we lucked out for prior hardforks 23:10:11 Like, it would cost $500 to break monero 23:10:33 And an idiot like me could have figured it out by fkn around on testnet 23:12:10 100MB is like 2.6 XMR in fees to fk up the chain for over an hour 23:12:31 yep, Monero has had quite a bit of luck that some vulns were not found by bad actors 23:12:53 i keep telling the irs to stop being cheapasses 23:13:15 If they offered like 300b + immunity, maybe we'd call that hotline 23:13:36 600k?? assholes. 23:15:01 Even rpc lol. 
Until relatively recently, you couldn't connect a wallet to a node's rpc port if the txpool was >100MB 23:15:25 Now you can, but only using restricted rpc (because the responses are split into batches of 100 txs) 23:16:46 those serialization limits are big trouble too tho 🥲 23:17:01 Part of why a dynamic block sync size is necessary imo 23:17:46 Dynamic doesn't solve the issue, but it prevents it from affecting all new syncs 23:18:13 And then you have the runaway spans lol 23:18:27 Eating up your ram if you download too quickly 23:19:23 https://matrix.monero.social/_matrix/media/v1/download/xmr.mx/AqtFLMvjEXjpghxPjNljdarZ 23:19:36 This was just a few hrs ago 23:21:01 monerod's syncing code is fucked, needs a refactor 23:21:41 very fragile code IMO 23:23:37 Even the 100 BSS limit is fkd 23:23:41 It's supposed to be 2048 23:24:53 Idk how it got limited to 100, but it can't be fixed w/o a hard fork, because if your node requests >100 and the responding node isn't updated, it will refuse to send anything 23:28:07 But yeah.. it's not just theoretically fragile, some of it is already cracking 23:28:08 like --block-download-max-size doesn't work at all 23:28:55 You could add a support flag for it, that's something I also think the new tx relay will do, so nodes know which nodes have the new protocol. 23:29:18 (i want to deprecate / replace it with dynamic spans) 23:29:37 you will always need a max limit 23:30:00 i guess so. 
Can only request >100 from nodes that signal that they aren't broken :P 23:30:45 The dynamic spans pr currently targets a time window (2min lookahead) 23:30:59 Like a video buffer 23:31:54 It checks the sync speed, and only stops downloading once it reaches ~2 mins ahead 23:32:59 This is to give time to start downloading before you catch up (from slow peers), but also to prevent the screenshot above (downloading 2hrs in advance) 23:33:12 that makes sense, although the limit is inherently a memory limit so to me it should just be a single constant of how much memory you want to use for the queue. 23:33:31 The default (broken) is supposed to be 10 batches.. which is way too low imo 23:34:53 If it wasn't broken, it would dl 10 batches (which can be synced in like 3 seconds), causing a lot of time spent waiting for connections and downloads to resume 23:35:34 it's an `or` 23:35:47 we have a constant flag atm (broken) --block-download-max-size 23:36:19 so it proceeds if under 10 batches or less than the queue limit 23:36:23 Yeah, the or should be an and 23:37:28 why? 23:37:31 (and the rule is clearly broken anyway, as evidenced by my screenshot) 23:38:00 As it's both above the size limit and above the batch limit 23:38:10 Hit 800 spans and 4gb 23:38:16 there is another exception for if you are syncing blocks below the height of your chain (alt blocks): https://github.com/monero-project/monero/blob/master/src/cryptonote_protocol/cryptonote_protocol_handler.inl#L2003 23:38:36 it's fucked 23:39:44 Yeah we changed all that in the dynamic span pr 23:40:02 Seems to work to keep spans under control 23:40:41 The span pr still needs work 23:40:59 what all on the master branch constitutes "hard fork"? 
23:41:17 really wish no hf stuff was put on master 23:41:28 For dbss, you can specify a fixed bss using --block-sync-size=N or a dynamic (default) bss using --batch-max-weight=N 23:42:00 I feel like the `or` was there purposefully, to make sure you were always queuing blocks 23:42:20 For dynspans, ideally we could keep --block-download-max-size as well as the time-based --span-limit 23:42:52 if it was a real limit I would have thought it would be higher 23:44:36 we changed it to a 10 minimum and a 200 threshold 23:44:52 It was a 10 threshold before 23:46:08 This line was changed to
```
bool queue_proceed = (nspans < m_span_limit) && (size < block_queue_size_threshold);
```
23:47:39 because of the `or` it was a 10 minimum 23:48:15 And
```
m_span_limit = m_span_limit ? m_span_limit : BLOCK_QUEUE_NSPANS_THRESHOLD;
```
23:50:14
```
m_span_limit = (m_bss && blocks_per_seconds) ? ((blocks_per_seconds * 60 * m_span_time) / m_bss) : BLOCK_QUEUE_NSPANS_MINIMUM;
if (m_span_limit < BLOCK_QUEUE_NSPANS_MINIMUM)
    m_span_limit = BLOCK_QUEUE_NSPANS_MINIMUM;
```
23:53:54 In any case, we didn't get rid of --block-download-max-size and should probably make it override the dynamic span if set (like block-sync-size overrides the dynamic bss) 23:54:54 (block-download-max-size is supposed to set the max queue size) 23:56:04 I think default time-based spans will work best though, because they will adapt to your hardware 23:56:49 this PR is only needed due to runaway spans right? 23:57:20 runaway spans were the inspiration 23:57:54 IMHO this doesn't actually fix the issue 23:58:16 this is the line that was changed though: 23:58:33 often thought "why am i downloading 30hrs worth of blocks, and then if i shut down monerod i lose them. 
I don't need to be more than a couple minutes ahead at any given point in time" 23:59:24 https://github.com/monero-project/monero/blob/master/src/cryptonote_protocol/cryptonote_protocol_handler.inl#L2024-L2026 those lines also decide whether we proceed or not 23:59:49 `queue_proceed` should have been `false` in this case, which means the error has to be elsewhere