02:15:10 although that could be what qubic is doing already
04:26:31 They look super expensive to run, like $30/hr
05:04:32 ok, so there's 1 GH/s on MRR right now. that should pull in 85 xmr/day, $23072 (based on the calculator on miningpoolstats.stream). The first page of MRR has 850 MH/s total for 0.012 BTC, so that comes to 1.438e-5 BTC per MH/s, so for 1 GH/s, 0.01438 bitcorns, so $1668.
05:08:00 well, the tail end of that 1 GH/s gets pretty expensive, 99 kH/s for 0.32 BTC.
05:08:25 well that same calculator spits out 19632.98 USD for 850 MH/s
05:09:05 so in theory they would be making more by mining monero instead of being rented out... (?)
05:12:02 at 10 cents/kWh, using 700 watts for 90 kH/s, there's a $566.15 daily profit to just mine monero
05:13:42 but whattomine is giving me a daily 60.6176 monero instead of 85, though that shouldn't be that drastic a difference
08:30:11 What is the purpose of this room? discuss mining?
08:32:27 What is the purpose of this room? discuss mining? or RandomX technical?
12:31:33 As the name suggests, it's about the proof-of-work algorithm and its development in particular. The calculation discussion above was about recent network "movements".
12:54:38 Thanks. I had a question re the variability in time taken to mine a block:
12:54:39 is the variability seen as good or useful? And also, are there any known ways to reduce it?
12:54:41 I'm asking because I had some thoughts about 51% attack protections that seem broken because of the wide spread in when a block can be found by any given miner.
12:56:40 The variability itself is not good.
12:57:05 However, block finding is a memoryless random process, so the variability follows from that.
12:57:47 Faster blocks would lessen variability in practice, through the law of large numbers.
12:58:36 Things like VDFs also would, though I'm unsure how that plays out with variability in users' hardware.
12:58:55 Having a memory also would.
12:59:09 It's a compromise between things we want and things we don't.
12:59:29 are VDFs verifiable delay functions?
12:59:47 There's probably also a lot of research on this in the Bitcoin world already.
12:59:49 Yes.
13:00:38 ok, and is having a memory a drawback due to bandwidth and storage? or are there other issues with it?
13:01:39 Fairness. It would mean miners with the most hash rate would get all the blocks.
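(A minimal sketch of the variability point above: with a memoryless process, the time to the next block is exponentially distributed, so its standard deviation equals its mean; spreading the same span over k faster blocks shrinks the relative spread like 1/sqrt(k), the law-of-large-numbers effect mentioned. The 120 s target interval and the use of numpy are assumptions for illustration, not anything stated in the chat.)

```python
# Toy illustration of block-interval variability under memoryless PoW.
# Assumed numbers (not from the chat): a 120 s target interval.
import numpy as np

rng = np.random.default_rng(0)
target = 120.0      # mean block interval in seconds
trials = 100_000

# A single block interval is exponential: its std equals its mean.
single = rng.exponential(target, trials)
print(f" 1 block : relative std {single.std() / single.mean():.2f}")

# Cover the same span with k blocks that are k times faster: the total is a
# sum of k exponentials (Gamma), so the relative spread falls like 1/sqrt(k).
for k in (4, 16, 64):
    total = rng.exponential(target / k, (trials, k)).sum(axis=1)
    print(f"{k:2d} blocks: relative std {total.std() / total.mean():.2f}")
```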
13:05:09 Is there anywhere you know of that I could read up on that?
13:05:11 I was thinking about some way to record individual identities' PoW on the chain and have the miner with the most stored PoW create the next block. By creating a block, the identity 'spends' the PoW. That way even small miners will always get their turn, and for more frequent payouts small miners could pool. But because the PoW is recorded, it can also be capped to prevent pools from getting too big.
13:08:16 That is kinda p2pool. You record your PoW on the p2pool chain, though the final block is shared and the "memory" is finite and short.
13:09:17 To incrementally record your PoW for a better chance at a future block, you'd have to use up chain space repeatedly.
13:09:28 Which a lot of small miners would do...
13:10:31 Presumably, it also means verification would need to run a lot more RandomX hashes for the "incremental PoW proof" hashes in each block.
13:11:26 Though I suppose some magi^H^H^H^Hcrypto proof might be inserted here somehow.
13:11:53 haha that's what I was thinking.
13:12:04 I have no handy link for more though. Except ddg.com.
13:15:04 I wondered if it would be fair for the nodes to keep a running tally of the miners' PoW, like an account so to speak. Actually, thinking about it, I see that doesn't lessen the workload.
13:16:41 One additional complication is that a future user needs to verify a chain is valid.
13:17:00 So you can't discard stuff that is needed to determine validity.
13:18:01 That's where "somehow sign a given state as a new known-valid starting point" ideas graft on.
13:19:49 Thanks for the info, I'll go learn something.
14:58:41 yeah, to beat $30 an hour you'd need 29 MH/s, which I doubt one of those AWS instances could do
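(A rough back-of-the-envelope for the rental-vs-mining numbers above. Every input is a figure quoted in the chat, the MRR front page, the pool-calculator estimate, and the $30/hr cloud price, not current market data; the 29 MH/s figure above was presumably derived from slightly different revenue assumptions.)

```python
# Back-of-the-envelope numbers for the rental-vs-mining discussion above.
# Every input below is a figure quoted in the chat, not live market data.

mrr_front_page_mhs = 850.0          # MH/s listed on the first page of MRR
mrr_front_page_btc = 0.012          # asking price for that hash rate, in BTC
revenue_usd_per_ghs_day = 23_072.0  # pool-calculator estimate for 1 GH/s
cloud_usd_per_hr = 30.0             # quoted cloud-instance price

# Rental price normalised to 1 GH/s.
btc_per_mhs = mrr_front_page_btc / mrr_front_page_mhs
print(f"rental price: {btc_per_mhs:.3e} BTC per MH/s, "
      f"{btc_per_mhs * 1000:.5f} BTC per GH/s")

# Hash rate whose estimated mining revenue matches the cloud bill.
breakeven_mhs = cloud_usd_per_hr * 24 / revenue_usd_per_ghs_day * 1000
print(f"need ~{breakeven_mhs:.0f} MH/s to match ${cloud_usd_per_hr:.0f}/hr")
```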