12:59:54 sech1, did you ever get your hands on one of those RISC-V prototyping SBC's?
13:05:50 Not yet
13:07:26 potential for ASIC-esque development if the price of XMR hits an all time high and China gets bored. Custom RISC-V, with 32mb L1, 128mb L2 and 512mb L3 cache could become a realisation
13:09:31 wont be anywhere near the performance of AMD per core, but the price if mass-manufactured could be a hell of a lot lower. So just end up stacking about 20 boards into a case like the ANT Miners
14:57:02 sech1: https://github.com/monero-project/monero/pull/8001 could use some more reviews / approvals
15:07:40 hyc reckon wolf-xmr-miner would work to any extent with the mali gpu's?
15:15:07 as I recall, I eventually got it to work for cryptonight, but not with any useful hashrate
15:15:40 smartphone GPUs of that period didn't have decent memory resources at all
15:17:41 just thinking of ways of squeezing out a few more hashes from the android boxes :P
15:18:04 are you guys just running xmrig on termux on your droids?
15:18:11 nobody uses cryptonight any more so what's the point
15:18:19 me yes, just using termux
15:18:51 don't have time to wrap a proper android app around it, although someone else already has?
15:19:03 they pasted a github link in here a while back
15:19:23 I imagine its been done somewhere. I'm just booting my android tv box up in linux and have it compiled and running
15:20:30 however yeah, got termux on the android phone
15:21:08 i'll probably just use termux... ssh is comfy
16:00:15 Seth For Privacy: do you add anything to your docker compose for p2pool to limit some of the logging? it seems like these could add up quickly
16:21:39 crypto_grampy[m]: you can edit the files to adjust the log level
16:22:04 "Seth For Privacy: do you add..." <- No, I leave it at default but I could lower log level.
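[Aside: even without logrotate inside the container, Docker itself can cap and rotate a container's logs. A minimal sketch of a docker-compose logging stanza, assuming a service named `p2pool` and an illustrative image name:]

```yaml
services:
  p2pool:
    image: p2pool:latest   # illustrative image name, not the actual one used
    logging:
      driver: json-file
      options:
        max-size: "10m"    # rotate once a log file reaches 10 MB
        max-file: "3"      # keep at most 3 rotated files
```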
16:22:22 So far my longest-running p2pool instance has a 114m log file lol
16:22:51 I don't have any logrotate etc built into the container, so not too simple to rotate etc.
16:23:31 Is there a more "sane" log level I should use by default?
16:23:37 Easy to switch.
16:23:39 is the hashrate for p2pool lower than regular mining?
16:24:07 No, it's still "regular mining" of RandomX.
16:24:12 It's just a different pool.
16:24:25 No mining differences for the actual compute portions.
16:24:50 hmmmmm. anything i should investigate for why the hashrate is so low?
16:25:20 Did you set huge pages?
16:25:28 sudo sysctl vm.nr_hugepages=3072
16:25:28 sudo bash -c "echo vm.nr_hugepages=3072 >> /etc/sysctl.conf"
16:25:36 And be sure to run xmrig as root for it to set MSR mod etc.
16:48:04 you only need 1168 hugepages for a typical setup
16:48:06 <\x> https://www.intel.com/content/www/us/en/developer/articles/guide/alder-lake-developer-guide.html
16:48:12 <\x> >AVX512
16:48:13 <\x> Disabled in P-cores when E-cores are enabled
16:48:31 <\x> so when you turn off the little cluster, you can still have avx512
16:48:52 <\x> intel's upcoming cpu release in 2 weeks iirc
16:49:43 doesn't avx512 still force clockrate to halve?
16:49:52 <\x> not always hyc
16:50:22 <\x> since skylake-X and then rocketlake, its controlled by either powerlimit or an avx512 offset
16:50:41 <\x> for example you limit the chip to 100W, itll clock as high as possible depending on the workload
16:51:13 ah
16:51:20 <\x> also on the other hand if you run a static 50x core overclock and set like -3 avx512 offset then it maxes out at 47x as long as temps allow it
16:57:50 Alder lake still doesn't have enough cache to run RandomX on all threads
16:58:10 <\x> yup
16:58:13 30 MB cache in 12900K is not even enough to 100% P-cores
16:59:06 but it should be enough to get ~10 kh/s
17:00:12 <\x> L2 is larger now though
17:01:01 "And be sure to run xmrig as root..." <- so running xmrig as root on linux and windows boxes and only getting about 1/4-1/2 of typical hash
17:01:37 hugepages are set as well
17:01:41 using latest xmrig
17:03:07 \x Alder lake will 100% take 1T RandomX crown
17:03:12 <\x> sech1: might be the new go-to gpu benchmarking cpu, seems single core perf is up and up and up
17:03:14 <\x> https://wccftech.com/more-intel-core-i9-12900k-alder-cpu-synthetic-gaming-benchmarks-leak-out/
17:03:52 <\x> i still wont like it if it needs like 300W to get that perf though, intel pls
17:05:28 "while the server market enjoys a larger capacity 2 MiB of private L2" lol, server CPUs will kill Wownero :D
17:05:31 sech1: this one too please https://github.com/monero-project/monero/pull/8005
17:05:38 it's just release branch equivalent
17:05:45 <\x> yeah L2 is larger than your normal skylake shiz
17:05:46 <\x> https://wccftech.com/intel-core-i9-12900k-spotted-running-on-z690-aorus-tachyon-motherboard-ddr5-8000-memory/
17:07:22 <\x> maan, ddr5 seems so good, its like the largest jump ive seen, when ddr3 started, late ddr2 were better, when ddr4 started, late ddr3 was way better, you cant say the same with ddr5
17:07:46 <\x> theyre saying 6400++ on the dailies which you really cannot daily on ddr4
17:09:02 CL50, no thanks
17:12:58 <\x> thats not the big thing with ddr5 sech1
17:13:05 <\x> it handles refreshes better
17:13:17 <\x> since youll only be refreshing a 32 bit wide channel at a time
17:13:42 <\x> on ddr4, one dimm is 64 bits wide, 72 with secded ecc
17:13:49 <\x> on ddr5 we will have 32 and 40
17:14:06 Seth For Privacy: not sure what i did, but everything is peachy now 🤷... i ran the xmrig benchmark on my windows box... no idea if that's what did it
17:14:08 <\x> so a ddr5 dimm is 2x32 bits wide and independent channels
17:14:42 <\x> so in a sense "channel interleaving" will be way better
17:17:16 <\x> <@sech1> \x Alder lake will 100% take 1T RandomX crown
17:17:23 <\x> yeah, ill try to get a guy to run it
17:17:36 but probably not with CL50 timings :D
17:17:36 <\x> not sure though they say intel is really strict with nda this time around
17:18:01 <\x> yeah well that cl50 thing from gigabyte is a frequency validation
17:18:12 <\x> those sticks are 6200 cl38
17:18:37 <\x> still high compared to ddr4 but yeah, clock wise, ddr5 seems way way way better
17:26:35 \x so DDR5-8000 will have how many channels and what bandwidth?
17:26:45 2x bandwidth of DDR4-4000?
17:26:50 <\x> 4 channels, 2 channels per dimm
17:27:01 <\x> <@sech1> 2x bandwidth of DDR4-4000?
17:27:02 <\x> yup
17:27:13 so something like 120 GB/s?
17:27:18 <\x> yup
17:27:21 that's enough for 15 MH/s ETHash, lol
17:27:47 I wonder if 15 MH/s ETHash will be more profitable than 10 kh/s RandomX :D
17:27:49 <\x> expect up to 6400 on daily setups though
17:28:08 It is more profitable :D
17:28:10 <\x> so it will be like 6400 * 32/8 * 4
17:28:46 <\x> clock * width/bit-to-byte * number of channels
17:28:47 ok, 12 MH/s
17:28:59 it's $0.94/day ETHash
17:29:11 <\x> atleast what im hearing, for validations, single dimm valids, like 8200?
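[Aside: the back-of-the-envelope above (clock × width/8 × channels, then an ETHash estimate) can be sketched in Python. The ~8 KiB of DAG traffic per ETHash hash (64 accesses × 128 bytes) and the assumption of perfectly utilized bandwidth are simplifications for illustration:]

```python
def ddr5_bandwidth_gb_s(mt_s: float, channels: int = 4, width_bits: int = 32) -> float:
    """Peak bandwidth = transfer rate * channel width in bytes * channel count.
    DDR5 exposes two 32-bit channels per DIMM, so 2 DIMMs = 4 channels."""
    return mt_s * (width_bits / 8) * channels / 1000  # MB/s -> GB/s

daily = ddr5_bandwidth_gb_s(6400)   # "daily" DDR5-6400 setup -> ~102.4 GB/s
valid = ddr5_bandwidth_gb_s(8000)   # DDR5-8000 -> ~128 GB/s ("something like 120 GB/s")

# Rough ETHash ceiling, assuming ~8 KiB of DAG reads per hash
# (64 accesses * 128 bytes) and fully utilized bandwidth:
ethash_mh_s = daily * 1e9 / (64 * 128) / 1e6  # ~12.5 MH/s, matching the "ok, 12 MH/s" figure
```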
17:29:34 which is equivalent to 14 kh/s RandomX, so ETHash on that CPU will be more profitable
17:34:22 <\x> sech1: might be handy https://github.com/RAMGuide/TheRamGuide-WIP-/blob/main/DDR5%20Spec%20JESD79-5.pdf
17:34:53 <\x> DDR5 Spec JESD79-5.pdf
17:35:06 <\x> spec max is 8400, its like 3200 on ddr4
17:35:18 <\x> so yeah, even freq valids on 8200 is way too early :p
17:38:06 <\x> sech1: dual rank ddr5 wont offer much gain too, on ddr4 theres a 12.5% performance gain from dual rank assuming you can maintain same timings, on ddr5 thats down to 6.25%
17:38:18 <\x> so < 7% with all the dual rank headaches...
17:40:32 <\x> https://videocardz.com/press-release/g-skill-announces-trident-z5-ddr5-memory-up-to-6400-mt-s
17:40:42 <\x> 6400 36-36-36
18:26:59 yet another example why not to rely solely on memory hardness. 2x perf jump in 1 generation, vs CPUs getting small percents per generation
19:03:08 dumb question. Should the hashrate here, be the same as the sum of the hashrates on my individual machines running xmrig? :... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/629c5ebcd9f08914a4fcda06623aa63f365d19a1)
19:03:42 It's very off and on as it's wholly calculated through shares, which are rare for small miners.
19:03:51 I, personally, ignore that and just rely on local HR numbers.
19:03:56 okay cool
19:04:16 my rate immediately plummets after a minute or two 🤣
19:05:45 is there any sort of get request that can be made on the p2pool server for getting status, etc?
19:09:42 p2pool.observer has an api, but if you want more accurate stats you can just enable the http api in xmrig and pull from there, or if you have lots of workers you can run an xmrig proxy in between p2pool and xmrig and pull stats from its api
19:10:48 crypto_grampy[m]: use custom diff on xmrig
19:11:03 otherwise it'll only report actual shares with the proper sidechain difficulty
19:11:18 -u 'x+600000' for example for 600000 diff
19:11:45 then you will see more granular status on p2pool
19:12:16 the answer to your question is more or less yes, on average, if you have custom diff it'll report that better
19:15:19 if I'm currently at 600M (per my current xmrig output), do I use 'x+600000' ?
19:15:38 what unit is the M ? 😁
19:15:52 M is like million
19:15:59 okay
19:16:40 technically mega for megahash tho
19:18:44 crypto_grampy[m]: for stats reporting, usually your hashrate * 30 is good
19:18:52 so if 1K
19:18:58 do 1000 * 30
20:11:24 breaking PRNGs with machine learning https://research.nccgroup.com/2021/10/15/cracking-random-number-generators-using-machine-learning-part-1-xorshift128/
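[Aside: the "hashrate * 30" rule of thumb above can be sketched as a tiny helper. The intuition: expected seconds per share ≈ difficulty / hashrate, so difficulty = hashrate × 30 targets roughly one share every 30 seconds. The function name is illustrative:]

```python
def suggested_custom_diff(hashrate_h_s: float, target_seconds: int = 30) -> str:
    """Pick a custom difficulty so a share is found about every `target_seconds`
    on average (expected share time = difficulty / hashrate), formatted as an
    xmrig username suffix like the 'x+600000' example above."""
    diff = int(hashrate_h_s * target_seconds)
    return f"x+{diff}"

suggested_custom_diff(1000)  # 1 kh/s miner -> "x+30000", i.e. 1000 * 30
```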
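[Aside: for context on the PRNG-cracking article linked above, a minimal pure-Python sketch of xorshift128 (the 32-bit Marsaglia variant the post targets; seed values here are arbitrary toy inputs):]

```python
MASK32 = 0xFFFFFFFF  # keep every word to 32 bits, as in the C original

def xorshift128(x: int, y: int, z: int, w: int):
    """Generator yielding Marsaglia xorshift128 outputs from a 128-bit state
    held as four 32-bit words. Each output exposes state bits directly,
    which is what makes ML-based state recovery feasible."""
    while True:
        t = (x ^ (x << 11)) & MASK32
        x, y, z = y, z, w
        w = ((w ^ (w >> 19)) ^ (t ^ (t >> 8))) & MASK32
        yield w

gen = xorshift128(1, 2, 3, 4)  # toy seed
next(gen)                      # first 32-bit output
```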