02:43:23 damn. xmrchain node got killed again. im running gdb, but it says the program no longer exists. i think its getting oom killed but i don't know why
02:46:15 i guess i can run it with the tagged release and see if it still happens
02:47:24 How much resources is that node using
10:38:12 currently using 25G of ram
10:53:06 im gonna limit the number of peers. i had incoming at unlimited and outgoing at 126
11:12:32 hrm, trying to add block message to testnet blocks to test forking behavior, getting this "Failed to load data from /miner_conf.json" buwut
12:50:41 Bro u dont need > 32 outgoing
12:51:09 I need it as im building a super node thx for worrying tho
12:51:11 And dont really need > 100 in either
12:51:57 Unlimited peers is probably what killed it https://github.com/monero-project/monero/issues/9334
17:12:03 https://github.com/monero-project/monero/pull/9875
17:14:14 oh my good Tzadiko stop cooking! 🔥🔥🔥
17:22:14 praise good
17:22:21 tzadiko: please force-push changes to prs instead of adding commits. non-squashed prs are less likely to get reviews and will not be merged as-is. if you have a gpg key, please sign your commits. if your pr is still a wip, convert it to a draft.
17:41:38 having unlimited incoming peers is default and should not be an issue. >32 out is excessive but also should not cause issues
17:44:11 Ok, thanks. This workflow is new to me.
17:45:43 np, if you have questions feel free to ask here or dm me anytime
17:48:55 having unlimited incoming peers is default and should not be an issue. >32 out is excessive but also should not cause issues << its (iirc) unlimited to prevent privacy issues from sybils, but definitely causes huge increases in resources
17:49:28 Btc and other chains default to ~115 max incoming and, iirc, 8 or 12 out
20:01:34 tag still on for today?
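[editor's note: the peer limits being discussed can be set via monerod's peer options. A minimal sketch of a monerod.conf, assuming the standard `out-peers`/`in-peers` option names; the values shown are illustrative, not recommendations from the chat:]

```ini
# monerod.conf sketch (hypothetical values)
# Cap outgoing connections; chat suggests >32 is excessive.
out-peers=32
# Cap incoming connections instead of leaving them unlimited
# (the default); bounding this trades some sybil resistance
# for lower memory use.
in-peers=128
```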
20:02:39 I'm falling asleep
20:14:42 i run all my nodes with unlimited incoming peers and no resource issues
20:15:38 we can tag today but i feel there are multiple issues remaining (ginger having issues with blockchain explorer, offn having chain sync freeze multiple times)
20:15:42 selsta, yeah but are your nodes the target of skiddies
20:15:47 but maybe we just have to tag at some point
20:21:00 it's enough, we need to tag
20:25:26 Frankly, it's embarrassing that a release is delayed again and again and again because of an unidentified bug. There is at least one vuln in the wild right now, the current branch is capable of syncing and is pretty much stable for everyone. We need to tag or otherwise we're slowly turning into a grub2 situation
20:27:16 delaying is giving no benefit at this point because of how large the difference is between 18.3.4 and 18.4.0
20:27:45 waiting has no benefits
20:30:24 chill, that means theyll just tag 18.4 knowing full well 18.4.1 is incoming right away
20:31:11 yeah i would like to hope in two weeks, but the reality is that 18.4.1 could come in another 3 months
20:31:26 no reason to knowingly schedule 2 back to back releases
20:32:11 you don't know its going to be back to back
20:32:22 right now we're taking a month to correct a bug on the TODO list
20:32:46 so with gingeropolous and ofrnxmr having bugs we haven't even identified i don't expect this to be fixed in weeks
20:33:03 you dont know that it's not. if you leave identified bugs open then you need another release sooner rather than later
20:33:31 and if a point release IS months away, best to make 18.4 as stable as possible
20:35:03 Hard disagree. Delaying a release because another one is coming a month after really is some grub2 tier scheduling.
20:35:13 a month is long
20:35:24 wut
20:35:31 18.4.0 should have been released back in 2024
20:36:15 just build the release branch?
you dont need cores permission
20:36:33 its not a hard fork release
20:36:35 lmao
20:37:04 you know well 99% of people aren't updating unless we tell them to do so, and they need to update
20:37:16 99% of people arent complaining here
20:37:18 its literally just you
20:37:21 so run the branch and chill
20:37:30 this was directed at YOU
20:37:33 you are misinformed
20:37:39 cool
20:37:47 because we arent on your schedule lmao
20:37:54 have a good day
20:38:06 whatever, agree to disagree
20:47:52 yeah, since the fix is not identified, and there's an exploit in the wild, yes we should ship asap
20:48:12 Tbf, i have not tested to see if my bug hits on 18.3.4
20:48:14 there's no telling when the fix will be identified & merged. should ship asap
20:48:27 the exploit only affects _public_ rpc nodes
20:48:39 Most of which are already running the fix
20:48:53 having a major release that freezes or crashes is worse than an RPC vulnerability in my opinion since it can also possibly impact p2p nodes
20:49:15 gingeropolous: is this the same box that was thought to have potential hardware issues? or was that ruled out
20:49:19 either way with such a large release we will have a .1 follow up so I will ask luigi to tag
20:52:22 Ginger and i have different issues. Both of which could be hardware related, and reproducing isnt easy. For mine, i have to (and have) resynced the chain like 30x, and hit the bug probably 10+% of the time.
20:53:03 I think gingers node is being killed by the same issue that caused selstas OOM - tx propagation
20:53:49 Ginger having over 1 thousand connections (iirc) means he's one of very few nodes with that many
20:54:31 Most others usually top out at 100-300 (i think)
21:06:46 tx propagation causing OOM is only relevant if there is a huge txpool backlog
21:09:56 Isnt it simply a product of txpool size * connections
21:10:37 So 10mb * 100 connections == 1mb * 1000 connections (forgive me)
21:11:38 only outgoing connections are relevant from what I remember
21:15:51 luigi1111: you can tag whenever you have time
21:15:54 outgoing connections have a longer queue timer (on average), but inbound connections still have the queue.
21:16:19 longer queue timer = more time to build up txs in it
21:16:38 (v0.18.4.0) also please ping me once done so I can update gui
21:25:33 K
21:38:14 luigi-chad.gif
23:14:16 If only luigi1111 could see that
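[editor's note: the back-of-the-envelope model from 21:09-21:10 (worst-case relay memory as the product of txpool size and connection count) can be sketched as follows. This is only the chat's simplification, not monerod's actual accounting, which per the discussion weights outgoing peers and their longer queue timers more heavily:]

```python
def worst_case_relay_mem_mb(txpool_mb: float, connections: int) -> float:
    """Rough upper bound on relay-queue memory: assume every peer
    connection may buffer its own copy of the pending txpool, so
    usage scales with txpool size times connection count."""
    return txpool_mb * connections

# The two scenarios from the chat come out equal, which is the point
# being made: many connections with a small pool can cost as much as
# few connections with a large backlog.
print(worst_case_relay_mem_mb(10, 100))   # 10 MB pool, 100 peers
print(worst_case_relay_mem_mb(1, 1000))   # 1 MB pool, 1000 peers
```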