01:53:53 ^ re AI stuff we ran some random tools on p2pool / go-p2pool stuff. Of note it found ONE single unbounded memory allocation on user input in go-p2pool, for no good reason other than that I totally didn't use the helper I have for bounded allocs
01:54:07 Then 40 other garbage ones
01:54:25 Some took too much time to untangle. And way more it hid from us
01:54:54 "The private key is shared with others" was a fun one that triggered a tool so many times
01:55:10 This was in the context of sharing the tx private key within p2pool for cross-verification
01:56:03 "User input may trigger expensive PoW check as DDoS" < yes, people send P2Pool shares to each other
01:56:59 Fuzzing stuff found more things than anything else in the past; ran AI stuff on code before that and it ofc didn't find any of these
01:57:10 Just hallucinated code or vulnerabilities
03:47:51 DataHoarder: Ask HornetSecurity how they do it...
03:51:05 probably use AI aka actual indians
06:08:41 Now the pruned blockchain is taking FOREVER. Around the 70% mark or sooner the entire blockchain sync starts crawling.
06:09:00 Multiple hardware and systems
06:10:00 @robbin_da_hood:matrix.org: Yeah, always takes me a while too
06:10:07 Does the computational algorithm take up more resources as the BC grows?
06:11:57 @snifflz1:matrix.org: The CPU % grows over time also. At 1% the sync flies and takes hardly any CPU cycles. By 70% the CPU hits around 80%.
06:14:14 It's not a hardware issue. It's not a connectivity issue. There's a bottleneck in the algorithm.
06:16:00 @robbin_da_hood:matrix.org: No
06:16:05 Stupid question alert: the pruned BC has access to every transaction right? Including mine? Lol.
06:16:09 @robbin_da_hood:matrix.org: Wrong
06:16:20 @robbin_da_hood:matrix.org: Yes
06:16:59 @ofrnxmr:xmr.mx: I've tested hardware and I've tested connectivity.
06:17:40 The first 2 million blocks or so have fewer transactions, smaller blocks, and different cryptography > <@robbin_da_hood:matrix.org> Does the computational algorithm take up more resources as the BC grows?
06:18:11 Larger blocks = longer to sync them
06:18:33 RandomX uses decoy transactions?
06:18:37 yes
06:19:04 You wouldn't want to be in a hurry. Lol.
06:19:12 transparent outputs -> rangeproofs -> bulletproofs -> bulletproofs++
06:20:02 Bulletproofs reduced tx size drastically
06:20:28 But tx volume also increased
06:20:42 Yeah. I saw the interview with the uni professor who added the bulletproof feature. Brilliant guy no doubt. He was sipping on a whiskey for the entire (daytime) interview. Lol.
06:22:21 There has to be a better way of syncing this BC?
06:24:43 Publish the entire BC to a repo. SHA256 it, publish the checksum. Then encrypt with a public key.
06:24:43 I know this wouldn't work. Just brainstorming.
06:26:50 If I could SHA256 someone else's public node. Start from there?
06:36:49 This is basically "fast sync", which is what the daemon already does up to the latest checkpoint. It downloads the entire contents of the blockchain, checking the contents against a checkpoint hash, but doesn't check crypto proofs until after the latest checkpoint. It works if you don't care about validating the entire tx history
06:52:05 @jeffro256: 🫡
06:53:08 So a pruned BC is just as secure as a regular BC? And it's the same size?
06:56:49 @robbin_da_hood:matrix.org: No, it's about 40% the size
06:56:57 Yes, it's as secure as regular
07:17:33 @robbin_da_hood:matrix.org: Fast sync in general is unrelated to pruning. Pruning means you don't keep all proof data around in the DB. Fast sync means you don't validate proofs while syncing. Fast sync is not as secure as normal sync, but a pruned node is just as "secure" (disregarding data availability for the rest of th [... 
too long, see https://mrelay.p2pool.observer/e/vISYqMkKOUM4NXg1 ]
07:46:05 The main relation to pruning is that --sync-pruned-blocks requires fast sync
08:39:01 Wait. Should I be using --fast-sync ?
08:39:29 it's technically --fast-block-sync
08:39:38 But it's already enabled by default
08:40:21 @ofrnxmr:xmr.mx: It's enabled by default?
08:41:01 At this rate the pruned BC is going to take 7 days.
08:44:27 @robbin_da_hood:matrix.org: Yes
08:44:43 Are you using an hdd or an ssd?
08:45:13 Ssd, that reports itself to the kernel as a hdd
08:45:51 some driver issue probably
08:46:12 Type sync_info and send the mooooo line
08:47:22 It's just one 3-line 'mooo...'
08:47:55 I'm on a different device
08:51:40 What does the line above it say
08:53:44 dmesg says "scsi 2:0:0:0 Direct-Access ANSI : 6"
08:54:00 ... PQ: 0
08:54:18 MobileDataStar 0203
09:04:53 I can't find the monero port on localhost
09:05:27 I'm running --hide-port
09:09:39 @ofrnxmr:xmr.mx: .
09:10:16 I can't access the port to call sync_info
09:14:01 Error: Unsuccessful -- json_rpc_request
09:16:57 When I build a curl json-rpc request I get: "empty reply from server"
09:17:05 Monerod is running
09:37:49 I had the --restricted-rpc on
09:38:39 @ofrnxmr: m.oooo....
09:38:39 next_needed_pruning_seed: 4
13:05:34 This is the future gramps > <@ofrnxmr:xmr.mx> "its not vibe coded and never will be"
13:06:52 I hope the Monero code review process is more rigorous
13:10:01 You're probably using the old models > Just hallucinated code or vulnerabilities
13:10:24 Claude Sonnet 4.5 rarely hallucinates
13:10:28 nah, it kept hallucinating stuff as well in cryptography
13:10:41 again, unless you have deep knowledge of the topics in question it looks fine
13:11:29 basically the noise-to-signal ratio is extremely bad, and the signals get caught by regular stuff
13:11:35 i hope monero doesn't accept 100% LLM code
13:11:52 the only good cases are when a human (who knows the topic very well) uses the LLM for assistance
13:12:01 and questions and corrects the LLM
13:12:03 however, what I have found works better is to compare a canonical implementation to a different one
13:12:07 rather than blindly using the code it gives out
13:12:17 that way it's not inventing stuff but working with context
13:12:34 current transformer models can distill (make less from more) very well
13:12:42 If a human blindly uses the code it gives out, why have the human in the first place?
13:13:04 it takes more time to review that AI code than writing it firsthand :D
13:13:09 ^
13:13:44 I had to tell it that "no, this is not a thing" and "yes, this is a thing" so many times, not even code related
13:13:57 "no, this package doesn't exist in std"
13:14:07 also lol
13:14:14 I gave it a verbatim copy of the canonical code
13:14:17 i hate when it hallucinates
13:14:30 the answer already had invented parts of that code
13:14:39 it hallucinated a fucking environment variable
13:14:46 for disabling a setting in a program
13:14:48 then I paste "no, see above, this snippet is like this" -> "Oh you are entirely right ... blah blah"
13:14:56 I've found that Claude Sonnet 4.5 works really well with wallet2 and other widely documented code like React.
However, when I tried it with Serai, it failed miserably > current transformer models can distill (make less from more) very well
13:15:11 it works with code that it knows about, ofc
13:15:17 but not novel stuff
13:15:35 sadly all writing is novel, especially around new cryptography :)
13:15:50 or very obscure stuff
13:15:59 it took a session a couple of hours of back and forth
13:16:01 to find that a 100-line snippet
13:16:06 had a line where 1 -> i
13:16:09 as a typo
13:16:33 which was instantly identified by classical methods (and fuzzing)
13:16:46 i never used an LLM for code
13:16:59 i found that in my project, it's annoying because it knows nothing
13:17:07 and 100% hallucinates every detail to please me
13:17:25 What model are you using?
13:17:28 I throw stuff at it to review or find typos. Like, make less from more, or have a second something to call me insane (I am usually enough around that)
13:17:43 duckpondy: GPT 4o, claude, llama
13:17:48 any model i could use
13:17:56 I have tried across most targets, also I have a local cluster for this
13:18:11 sadly it doesn't have that much VRAM, just ~250 GiB
13:18:27 I'm surprised it hallucinates that much...
13:18:29 so it can't run that large of models. but at least I can finetune some context onto it
13:18:39 it doesn't know shit it hasn't been trained with
13:18:42 no surprise
13:18:47 and I keep working with novel things
13:18:48 duckpondy: my project is about a 40-year-old RTOS
13:18:58 that is barely documented online
13:19:07 that said, in RE, "make sense of this assembly" has saved me many hours
13:19:30 identifies some specific algorithm or parts of it, while I do everything by hand
13:20:21 DataHoarder: have you used AI assistance plugins in ghidra?
13:20:28 again, the case of not knowing enough about the field to know how bad it actually is
13:20:30 no, Cindy_
13:20:39 I'm having it connect to the local cluster soon (tm)
13:20:43 local cluster is like weeks old
13:21:12 I'm curious about the ongoing electricity costs of running AI models locally, once you've already bought the necessary hardware. Are those costs as high as mining 24/7? > sadly it doesn't have that much VRAM, just ~250 GiB
13:21:32 well it's not running 24/7 :)
13:21:39 the server fans cost more tbh
13:21:43 250GB is insane
13:21:59 I just ... found a deal Cindy_
13:22:02 a very good deal
13:22:14 called enterprise GPU pricing error
13:22:20 With 250GB VRAM the bottleneck becomes RAM
13:22:32 yeah, and that it's split across many GPUs
13:22:34 so PCIe
13:22:44 you have 62x more RAM than i have
13:22:45 no GPU interconnects
13:22:49 250GB VRAM is more than enough for the high-end local models
13:23:04 that's VRAM not RAM, Cindy_!
13:23:06 yeah, and you can quantize a bit if desired
13:23:37 oh yeah VRAM
13:23:40 that said, this thing's use case is not GPU specifically but mostly general compute/VMs
13:23:48 and I wanted some of these gpus for vGPU stuff
13:24:04 but alas I can spawn jobs for it
13:24:17 it'd be replacing 2-3 2U servers :D
13:25:16 see the curl author's blog about the levels of AI review shit they have to deal with anyhow
13:25:29 or his recent talk (it's on yt)
13:25:37 It's not only curl
13:25:44 example.
13:25:51 bug bounties fucking suck nowadays
13:25:52 Serai deals with it 😭
13:25:59 or some AI using 12 TiB of bandwidth to "clone" a 100 KiB git repo
13:26:01 when people are given a reward for finding bugs
13:26:07 if AI is so smart, can't it just git clone regularly
13:26:15 there are people who are also incentivized to try to cheat
13:26:29 instead of fetching every commit page, then blaming every line, then doing that for the next one
13:26:39 when blocked they find new ways or extract the specific .bundle links
13:26:47 it's so inefficient for both sides
13:27:03 git clone gives you everything instantly AND it's fast, not rate limited
13:27:11 DataHoarder: too lazy
13:27:20 their scraper probably only extracts HTML
13:27:27 so they just don't bother
13:27:30 nope
13:27:32 those are the dumb ones
13:27:34 that get blocked
13:27:36 these are git fetchers
13:27:40 they have awareness of commits
13:27:44 archive/bundle links
13:28:09 that were never exposed but they know exist on the site
13:28:11 via the client api
13:28:39 they discover commits via random links, then try to fetch via bundles
13:28:41 instead of, again, git clone :D
13:29:08 I noticed an interesting observation. I rarely encounter people who have a truly knowledgeable perspective on LLMs. Most are either overly critical or caught up in the hype. Does anyone know where to find genuine experts in this field? For instance, where are the original authors of the "Attention Is All You Need" paper these [... too long, see https://mrelay.p2pool.observer/e/nOLosskKSUp3MTF2 ]
13:30:17 there's the old saying that "to err is human"
13:30:30 but you just need a machine to make it automated :D
13:30:39 who's responsible for bad crypto code
13:30:44 AI has been very uplifting for me
13:30:51 it has shown me that i could be, like
13:31:01 extremely fucking dumb, and i could still get a very good tech job
13:31:08 as long as i don't ask for wages
13:31:12 I do love some good old RNN :)
13:31:29 it keeps consistent in long context!
13:31:36 i could delete all the source code
13:31:39 I used to train some on dual 980's I think
13:31:44 write a bunch of mess
13:31:52 and i'd still have a voluntary job
13:31:59 then you bring in the expert in an emergency to untangle the mess
13:32:31 i feel like that kid banging on the keyboard from that Windows ME commercial
13:34:05 at $work I already had to discover, then warn, then report to authorities actual reportable issues
13:34:29 every time it was caused by AI misuse, or AI usage, hallucinations
13:34:34 handling important data, leaking admin API credentials EVERYWHERE
13:35:12 but it's FREE LABOR
13:35:13 reviewing that code was useless, so it was thrown away in the end
13:35:27 who cares if the AI destroys everything within
13:35:29 it's free labor
13:35:30 waste of time, money, and leaks everywhere
13:35:34 so more money
13:35:36 Does your workplace use proprietary models?
13:36:01 enjoy triggering GDPR and PCI DSS at the same time
13:36:18 DataHoarder: this is what happens when you hire business grads instead of senior engineers or people who know wtf they're talking about as the CEO
13:36:38 they are one of the players who are making the big models, so yes
13:36:58 we actually have a wide ban on AI usage on code projects, as they are very aware of the danger, and nothing in creative work
13:37:09 @duckpondy:matrix.org: If anyone knows where I can find real information on AI, please let me know. When I research something like Monero, I can access the source code, read research papers, and so on. The same applies to most other software I use, but AI is different. Even the so-called "open source" models are quite clos [... too long, see https://mrelay.p2pool.observer/e/8JqGs8kKNVFqNHll ]
13:37:11 it's allowed on productivity stuff, emails, documents
13:37:25 I've also been to conferences, but they've been very disappointing.
They're just filled with pseudo-researchers looking for funding or tech bros trying to advertise a wrapper for ChatGPT or Claude
13:37:51 AI is different because research has become more closed off
13:37:58 because profits
13:38:18 remember when OpenAI used to publish whitepapers?
13:38:38 I do
13:38:56 I miss that, and now like you said it seems so fraudulent and profit-driven
13:39:10 s/seems//
13:39:26 everyone wants to be the BETTER AI
13:39:37 instead of working together
13:40:19 > we actually have a wide ban on AI usage on code projects as they are very aware of the danger, and nothing in creative work
13:40:19 Wow! Someone I know who works at Coinbase says they're implementing mandatory AI KPI requirements. Apparently, if you don't use the AI tools, you could be fired. It makes no sense, given that Coinbase is a serious business managing billions in funds and hallucinations would be disastrous
13:40:24 😀 > and questions and corrects the LLM
13:40:59 duckpondy: why force people to use the AI
13:41:24 100% that rule was made by some MBA dumbass who wants to say "our business uses AI all the time!"
13:41:33 so they can get more investors
13:41:42 Ask Brian Armstrong (CEO of Coinbase)
13:41:59 @duckpondy:matrix.org yeah, they wasted money and now it comes from above. but our "dept" has more power than that, anyhow
13:42:30 similar from above, but specifically they pushed for code initially and now they are still dealing with the consequences of that
13:42:44 when every AI usage ends up with an incident, they stop enforcing that quick lol
13:43:02 This is not my experience of Copilot code. Anything vague, though, in terms of statements will lead to the AI deleting whole modules of critical & working code. Lol.
> it takes more time to review that AI code than writing it firsthand :D
13:43:12 At Coinbase they enforce Copilot and Cursor
13:44:00 they suck
13:44:44 microsoft is literally paying youtubers and "social media influencers" to shill copilot
13:44:44 anyhow, I just see the shitshows from a company that should be a leader in this and knows about it
13:45:01 because windows is becoming an "agentic OS"
13:45:03 and if it's that bad inside, imagine how bad it is for companies that just "adopt" it without knowing the internals lol
13:45:34 Windows is losing so many users
13:45:42 so, local stuff for me, make less from more, and review tasks. but I write the code, I make the decisions and I'm the accountable one
13:45:46 microsoft is getting out of touch with what the users want
13:45:53 of course they're losing users
13:45:59 Cindy_: but what about the investors?
13:46:08 users don't put pressure, investors do
13:46:13 true
13:46:22 I've noticed a massive uptick in the Linux subreddits I browse, also Steam releasing their SteamOS machine, and Apple not buying into this nonsense. I don't know what Windows is doing
13:46:39 Wine is getting better and better every day
13:46:47 Is this Clippy the sequel?
13:46:53 Cortana the trilogy?
13:47:12 more like Bonzi Buddy
13:47:21 I highly doubt it. How many 'tokens' would you be using? For how much time? Mining is super intensive in terms of electricity / cpu / gpu, i reckon. Writing code with a local llm? > <@duckpondy:matrix.org> I'm curious about the ongoing electricity costs of running AI models locally, once you've already bought the necessary hardware. Are those costs as high as mining 24/7?
13:47:29 at least Bonzi Buddy and Clippy tried to be useful
13:47:32 or fun
13:48:22 I don't understand why anyone still uses M$ > microsoft is becoming out of touch with what the users want
13:48:25 when Wine achieves close to 100% compatibility with windows programs
13:48:29 Windows will be dead
13:48:41 except for very niche usecases
13:48:53 Meh > microsoft is literally paying youtubers and "social media influencers" to shill copilot
13:49:14 @robbin_da_hood:matrix.org: I don't see much point in running LLMs locally unless you have the hardware for beefier models. IMO the main use case for local generation seems to be image and video, which is also much more resource-intensive, but I'm unsure of the costs
13:49:30 Cindy_: Won't M$ just muddy the waters?
13:49:46 muddy the waters?
13:52:07 Cindy_: Can't they add (or change) libs, protocols, stacks or whatever, to send Wine back to Y0? Don't they have a master key they only give out to major software vendors? I don't know anything about Wine...
13:52:27 Wine reimplements the APIs and DLLs
13:52:38 Is it a reverse engineering project?
13:52:40 yes
13:52:57 they reverse engineer Win32 (and other Windows-related API) functions
13:53:08 cleanroom engineering afaik
13:53:17 @robbin_da_hood:matrix.org: If SteamOS becomes popular, it will also result in Microsoft losing their PC gaming market share
13:53:47 they can't break Wine, unless they fuck up their whole API (which will also break compatibility with their own programs)
13:54:03 or add some undocumented API functions and abuse them in their own programs
13:54:07 (which is temporary)
13:54:35 I noticed this is #monero:monero.social
13:54:37 hmmm
13:54:57 Not #monero-offtopic:monero.social, mods please don't kill us
13:55:12 mods are asleep
13:55:29 like most of the time, i doubt they care
13:55:31 the last few days this felt like offtopic
13:57:08 DataHoarder Do you know if any Monero contributors rely on LLMs?
13:57:29 I trust their opinion on this issue more than any CEO or random shill
14:02:17 I found that LLMs can be used for code reviews, but not much better than classical static analysis. And you need to know the code already to make sense of what LLMs tell you and filter out garbage
14:03:41 sech1: If used properly, most are quite handy at code development IMHO.
14:05:05 I would disagree with the argument that you are better off writing code yourself. Except under special circumstances. The amnesic window being one.
14:06:25 It has lots of issues with API versions. It'll mix code from multiple versions. Or add random code. If the api was recently updated, you'll have issues.
14:12:03 writing glue code is different from writing novel or new libraries
14:13:08 a fun one for "glue" code is this https://clocks.brianmoore.com/
14:13:34 > Every minute, a new clock is displayed that has been generated by nine different AI models. Each model is allowed 2000 tokens to generate its clock. Here is its prompt
14:43:23 DataHoarder: wow, are these clocks terrible
14:44:53 deepseek did a very good job though
16:30:54 Cindy_: it changes every minute
16:33:16 damn
16:34:37 a question btw, can p2pool have subpools?
16:34:55 i mean a p2pool instance
16:35:19 or at least multiple addresses, that can be selected to mine at the stratum layer
16:39:34 You can run a separate network from the main ones
16:40:15 i meant pools of hashrate dedicated to one address (but in the same p2pool instance) that can be selected by the miner
16:40:26 also running on the same sidechain
17:19:30 Yes > a question btw, can p2pool have subpools?
17:19:42 Cindy_: Don't think so
17:24:57 really?
17:42:13 You can run pools on top of p2pool, and there are pools that run on top of p2pool.
17:42:13 i don't think you can split payout directly from p2pool.
Foggy, but i think this was inquired about a couple days ago cc @datahoarder
18:00:00 ofrnxmr, i meant pools on top of p2pool for designated addresses
18:00:34 like stratum user abc123 = address 4AbcauDh...
18:01:29 the point is to consolidate hashrate for one address into one pool
18:01:46 instead of many p2pool instances for one address
18:15:58 Cindy_: you can do this on your own
18:16:14 go-p2pool supports this btw
18:16:36 --user 4AbcauDh...
18:16:38 And that mines to that
18:16:46 Supports multiple clients, each on their own or the same address
18:16:54 As in, via stratum
18:17:02 i can't host 1000 go-p2pool
18:17:11 You only need one
18:17:39 It allows multiple xmrig, each specifying a different addr
18:17:44 Like you set --user on xmrig
18:17:49 Not go-p2pool
18:18:12 can i ask you a question
18:18:52 I'm working on a v5 with quite many improvements anyhow, developed during FCMP stressnet
18:18:53 what's the difference between 500 miners to a p2pool-backed pool vs. 500 miners hosting their own p2pool mining to the pool's address
18:19:38 First case they need to trust the operator of the stratum
18:19:45 Second case it's trustless
18:19:52 As they do all verification locally
18:20:04 what about shares and hashrate
18:20:09 Not even a p2pool-backed pool but say, someone running p2pool
18:21:08 Hashrate is hashrate, what do you mean
18:21:20 You mean a centralized pool using p2pool as hashrate backend?
18:21:35 yes
18:21:38 Or a transparent pool that passes the addr to p2pool shares
18:21:44 both cases
18:21:58 If it's a centralized pool, it's a centralized pool
18:22:08 Miners never mine to their own addr
18:22:10 But to the pool addr
18:22:14 i meant does it affect the share rate
18:22:20 It being p2pool or solo mining is nothing
18:22:22 Why would it
18:22:23 (rate at which the pool gets their share)
18:22:26 Hashrate is hashrate
18:22:45 Doesn't matter if it's 100 miners at 1 KH/s or 1 at 100 KH/s
18:24:16 hm
18:30:25 DataHoarder: can i PM?
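The "stratum user abc123 = address 4AbcauDh..." mapping discussed above is a tiny lookup layer in front of go-p2pool's stratum. A minimal sketch (the registry, function name, and the crude "starts with 4, 95 chars" mainnet-address heuristic are all assumptions for illustration, not go-p2pool code):

```go
package main

import (
	"fmt"
	"strings"
)

// resolvePayoutAddress maps a stratum login to a registered payout address.
// Unknown logins that already look like a Monero mainnet address (here: a
// naive check of 95 chars starting with '4') pass through unchanged;
// anything else is rejected.
func resolvePayoutAddress(registry map[string]string, login string) (string, error) {
	if addr, ok := registry[login]; ok {
		return addr, nil
	}
	if len(login) == 95 && strings.HasPrefix(login, "4") {
		return login, nil
	}
	return "", fmt.Errorf("unknown stratum user %q", login)
}

func main() {
	// Placeholder registry; a real entry would hold a full 95-char address.
	registry := map[string]string{"bob123": "4AbcauDh..."}

	addr, err := resolvePayoutAddress(registry, "bob123")
	fmt.Println(addr, err)

	_, err = resolvePayoutAddress(registry, "nobody")
	fmt.Println(err)
}
```

The proxy rewrites the login before forwarding the stratum session, so one go-p2pool instance serves every registered user.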
18:40:43 Nah, it'll get lost
18:40:53 What are you trying to solve
18:44:33 DataHoarder: i thought of making a p2pool multiplexer
18:44:50 where addresses are registered (by the admin) but users can mine by selecting a pool by its ID in the xmrig config
18:45:16 That's the raffle lol
18:45:40 You just want a custom proxy layer for stratum that converts user to addr
18:45:46 Then go-p2pool below can handle all the users
18:46:10 But again, they have to trust that you are, first, not mining elsewhere and that you aren't doing shitty stuff
18:46:15 selecting an address by its ID*
18:46:28 like setting user: "bob123"
18:46:39 Why not just let them choose the address directly
18:46:45 make the proxy layer choose the address associated with bob123
18:46:57 Then it's go-p2pool below
18:47:14 Like you have the single-p2pool multiple-addr part done
18:47:31 DataHoarder: recording how much hashrate is going to a user
18:48:08 like a power meter
18:48:11 Then you are just a centralized pool
18:48:17 You can also use a transparent proxy
18:48:22 That records hashrate
18:48:31 how can that be set up?
18:48:51 Implement the stratum layer again
18:49:42 You also want to offer lower difficulty to measure rates
18:50:20 implementing the stratum layer to what pool?
18:53:37 DataHoarder: i wanted to, like, make it an option to donate hashrate
18:53:48 for a cause
19:04:28 but i wonder if i could really do this transparently (regardless of whether i was hosting p2pool or not)
19:21:46 Like you can already do so by just exposing go-p2pool stratum
19:21:55 People can mine to their address or someone else's
19:22:14 You can have a transparent stratum proxy in front with any extra logic you want
19:22:21 Like convert user -> addr
19:22:44 Or serve lower-difficulty tasks so you get more xmrig shares reported
19:24:01 DataHoarder: does p2pool expose any way of estimating the hashrate of another address
19:25:08 Alright, @robbin_da_hood:matrix.org , the proof is in the working, reviewed and accepted PR, right?
You have dozens and dozens of issues to choose from and solve with the help of those LLMs, where you should not have to code the solutions for those issues yourself, because hopefully they don't fall under "special circumstances": https://github.com/monero-project/monero/issues
19:26:12 You can see how many shares they have based on PPLNS
19:26:14 Like it's what observer does
19:33:16 @rbrunner7: Look at the damn prs from today
19:34:16 10219 on monero
19:34:16 4536 4528 4529 4530 on gui
19:49:26 i closed these. don't have permission to block the user
19:50:00 tobtoht_: maybe set up a bot to auto-close PRs they make
19:50:34 ofrnxmr: isn't it a coincidence that most of these PRs are bounty-related?
19:51:05 what a nonsense PR lol
19:51:07 also lol, bold for 10219 to add a comment crediting themselves. have they not heard of git blame?
19:51:20 nonsensical syntax-wise, the comment "fixed by xx"
19:52:29 this code is so shit that if i wrote it, i'd be too embarrassed to put my name on it
19:53:27 you can tell no care went into the code
19:53:40 they only cared about the money, they even plastered their "donation address" in the PR description
19:53:42 Wow, 10219 is cool.
19:54:39 did they even compile it
19:54:47 skip_sync isn't even defined in the code
19:54:52 this would never compile
19:55:34 GitHub compiles it for you. GUI commit 4527 doesn't compile.
19:55:55 also wtf is issue 4259
19:56:06 i tried looking at the ID in the monero repo
19:56:12 it's a pull request related to the keccak API
19:56:59 rbrunner7: but couldn't they have bothered to compile it locally first before pushing and making a PR?
19:57:12 really shows the level of care that went into it
19:57:27 Depends on the goal :) Maybe they just see this as a joke.
19:58:17 he doesn't even follow indentation, and pastes "fixed by xxx" all over the place
19:58:46 treating the code as a wall to graffiti on
19:58:52 20:55:55 also wtf is issue 4259
19:58:55 on the gui repo
19:59:16 https://github.com/zcash/zcash/pull/7068 lol
19:59:20 https://github.com/monero-project/monero-gui/issues/4259
19:59:38 "add QR scanner"
19:59:42 To open such nonsensical and small PRs isn't possible only now, with the help of LLMs. Already 10 years ago it would have been easy to submit something like 10219. Doesn't look special to me, or new, frankly.
19:59:49 code is literally the EXACT SAME THING as the other PRs
20:00:12 Debug print "sync optimised", thank you for donations!!
20:00:31 // Fixed by me, please don't remove this comment
20:01:25 tobtoht_: XMR donation address in a zcash PR
20:01:29 bold move
20:01:39 Indeed, lol
20:01:50 also he defines skipSync in the code
20:01:58 for no reason other than "Add skip sync feature for bounty Zcash"
20:02:34 did this guy really make a zcash PR for a monero GUI-related issue/bounty
20:03:38 instructions unclear: accidentally solved the bounty for zcash instead
20:03:47 but please give me the bounty anyway
20:04:04 It wasn't a "he", it was the LLM itself, roaming the Internet in an "agentic" way and looking for bounties to pay the electricity bill.
20:05:30 holy shit
20:06:08 the zcash PR actually changes nothing
20:06:10 other than the indentation
20:06:31 and the random skipSync variable that is unused
20:07:23 rbrunner7: or perhaps to fund the paperclip-making business
20:11:43 <321bob321> @rbrunner7: sentient being
20:18:56 rbrunner7: roaming agentically
20:19:23 new writing adjectives