04:04:46 hey i was redirected here from the dev room, i have an idea and background to add a feature to the mining algorithm
04:05:33 i was asked to post here from reddit as I spoke with rucknium regarding my proposal
04:07:16 my background is in control theory and applied math. never done this before so im not sure if i should just put my proposal out on the chat to get feedback or wait for more people?
04:10:19 If it's something you can share in a few messages, just post it here and give people some time to respond. If it's a longer idea, maybe share a link to your proposal?
04:10:41 These rooms aren't too formal, but can be slow on a Sunday evening.
04:10:50 I'm dying to hear your idea though
04:13:19 i can describe it concisely: basically, it should be theoretically possible to automatically select and adjust thread utilization based on CPU temperature. For example, I first started mining on a workstation using a Xeon processor, which quickly raised the temps to 170F alone, but I found there was no good way to really pick the thread count that prevents both excessive temperature rise and fluctuation
04:15:17 my idea is simple: adaptively switch thread utilization based on either CPU temperature measurements (say 150F for example), or alternative sensors such as voltage+current utilized by the power supply or CPU
04:15:51 it would technically be a type of modified bang-bang control, but has some difficult nuances
04:16:26 whats the goal?
04:18:21 1) we need to both regulate temperature and minimize a phenomenon called chattering, which causes rapid fluctuations in control action, or in this case temperature changes 2) it has to work on a lot of different operating systems and hardware 3) there is no good math model available to use "readily available" methods you might find in a textbook
04:18:22 I think this fits into MRL open research question "Increase mining decentralization"
04:18:22 https://github.com/monero-project/research-lab/issues/94
04:18:47 If I understand correctly, your proposal is about an improved control system for mining software, not a change to the actual PoW algo itself, right?
04:19:06 the goal would be 1) minimize hardware degradation 2) potentially increase energy efficiency
04:19:10 yes, correct
04:19:24 i dont have a background in cryptography
04:19:43 Mining is a zero sum game, yes, but auto-tuning rigs for hobbyist miners increases the incentive for hobbyists to mine. Professional miners already have well-tuned rigs, I imagine.
04:20:14 yes, you can think of it as another tool in your tool box
04:21:21 for example, you can use a cheapo temperature regulation control architecture for your oven, but surely you want a robust solution for a lab grade kiln
04:21:38 It may also make mining feasible on a wider variety of hardware, again increasing decentralization.
04:21:42 This would be a good proposal to improve Xmrig
04:21:56 right now im not even sure if something like this would be wanted
04:22:34 If it would improve efficiency at all, you bet miners will want it.
04:23:01 ill be upfront and say it will take me about a year to work on alone. its not going to be easy, the problem statement is simple but the work under the hood wont be
04:25:56 If funding is an issue, then you could always open a CCS proposal to request funding. You might benefit from talking to the Xmrig devs (maybe open an issue for your proposal)
04:25:56 https://github.com/xmrig
04:27:28 There are several people here that can help with most of the software/system stuff. Not sure if youre suggesting the control system maths alone would take a year, or if you mean the entire software implementation would take a year.
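A minimal sketch of the hysteresis ("dead band") flavor of the bang-bang idea described at 04:15:51, and of why a dead band suppresses the chattering mentioned at 04:18:21. The `read_cpu_temp_f` and `set_mining_threads` hooks are hypothetical stand-ins for whatever sensor and miner interface the host actually exposes, not real xmrig APIs:

```python
# Hysteresis ("bang-bang with a dead band") thread controller sketch.
# read_cpu_temp_f() and set_mining_threads() are hypothetical stand-ins.
import time

HIGH_F = 150.0  # shed threads above this temperature
LOW_F = 140.0   # add threads back only below this one; the gap suppresses chattering

def regulate(read_cpu_temp_f, set_mining_threads, max_threads, poll_s=5.0):
    threads = max_threads
    set_mining_threads(threads)
    while True:
        temp_f = read_cpu_temp_f()
        if temp_f > HIGH_F and threads > 1:
            threads -= 1                    # too hot: shed load
            set_mining_threads(threads)
        elif temp_f < LOW_F and threads < max_threads:
            threads += 1                    # comfortably cool: recover capacity
            set_mining_threads(threads)
        # Inside the dead band (LOW_F..HIGH_F) do nothing - this is what
        # prevents the rapid on/off toggling ("chattering") described above.
        time.sleep(poll_s)
```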
04:31:28 "It may also make mining feasible..." <- We should make it extremely easy for a grandpa or a small kid to run a monero miner on their devices. It should be made resistant to not just ASICs or GPUs but any high end rigs, to ensure every device is compensated proportionate to time and not hardware.
04:32:23 My understanding is that degradation is going to be more of a function of the general CPU temperature, rather than the gradients between cores, etc. Do CPU dies develop a significant enough temperature gradient to warrant this sort of control? If you want to limit degradation I would think you'd just use a better CPU cooler.
04:32:23 I would think that you would want RandomX going all out, and not throttling any resource.
04:33:16 degradation is due to thermal stresses, which are caused by both high temperatures and temperature changes over time
04:33:55 for example, putting a tempered glass container in the oven then taking it out and running cool water over it
04:34:25 the deformation in the glass causes cracks, which lead to failure
04:34:55 but, that is just one consideration, you can also look at energy usage if you have a means to measure it
04:35:49 you can even, theoretically, throttle higher end CPUs to make a 51% attack more difficult using high end machines
04:36:02 you can think of this as an artificial damping
04:36:40 I static OC and undervolt my ryzen cpus so they run cool as a cucumber
04:37:02 "Even if we imagine hiding the..." <- Even block order can be shuffled in a cryptographically preserving way. When a new block is mined it is submitted to a secure node, and the node will shuffle all the block requests, preserving the cryptography such that no external observer can decode the correct order of the blocks
04:37:28 and no, choosing which CPUs to throttle is not the idea
04:38:14 control theory comes down to manipulating systems such that you obtain a desired behavior. questions such as your objective, your actuators, the states you can observe, and the states you can control dictate what you can do
04:39:19 you would only need to add a damping term that is a function of hash rate, no knowledge of a specific cpu is needed
04:39:47 its just an idea. im not saying it should or shouldnt be done
04:40:16 ok
04:40:48 Hardware degradation probably is not much concern to most miners. (I too run several ryzens, and temps are no concern for me)
04:40:56 Efficiency would get a lot of attention
04:41:29 there are several unknowns with my proposal. i can tell you i work on R&D programs for a living and manage two. I know how to approach big unknown problems
04:45:16 to do an efficiency based algorithm first we need knowledge of what can be measured
04:46:51 for example, are we limited to CPU temps, how strong of a correlation is there between this variable and energy usage, can we measure input power to the rig in real time?
04:47:04 maybe that alone is a good starting trade study
04:48:46 maxwellsdemon: MRL has meetings every Wednesday at 17:00 UTC. We can put you on the agenda if you'd like:
04:48:46 https://github.com/monero-project/meta/issues/635
04:57:30 i think i can manage that
04:57:42 is it on this?
04:58:55 Yes. It's just text chat here in Matrix/IRC in this channel. Meetings usually last about an hour.
05:00:30 that should be feasible
05:02:01 only issue i might have is im temporarily taking care of a sick family member, so unless something unexpected happens with them i should be good to go
05:03:18 Ok great. How should I describe your idea/proposal as an agenda item?
05:04:42 adaptive cpu regulation for improved mining performance
05:05:15 thank you
05:07:16 i just had an interesting idea that the adaptive algorithm could be used to resist ASIC development.
05:07:32 ill save it for then
05:10:56 Ok it's on the agenda 😎
05:12:53 bitchin
07:09:45 https://www.tari.com/updates/2021-12-01-update-66.html
07:10:29 Tari/XMR atomic swap discussion and request for input 👆
07:16:07 crypto_grampy[m]: Do you have any relation to this?
07:17:22 Nadda. Just saw it mentioned in another channel.
07:22:11 I was going to comment that it seems to have some misunderstandings, along with inaccuracies. I was trying to even find out what Tari *is*, when all I found out was that the info is by no means easily available.
07:23:25 It's a fluffypony production 😅
07:23:29 Anyways. I'll try to leave a GH comment about the issues when I have a moment, and hopefully it's just late at night and I'm misreading this. They propose two methods, one I don't endorse and one I do, yet say the first is akin to Farcaster/COMIT while the second is their own? When the first is akin to noot's work and the second is F/C.
07:23:44 Yeah, when I was trying to see what it was I saw they had IRC logs with a few large Monero names...
07:23:50 I still have 0 idea though. Literally
07:24:32 Like I get it's a private by default chain merge mined with Monero which apparently has some level of scripting. I have no idea how its protocol is designed or what specific decisions it made regarding its tech.
07:24:57 Private by default digital asset thingamahoo
07:25:00 Every single link seems to be a mountain of real time communications, a basic tutorial, or an invite to join their community with no actual specifications or dev docs.
07:25:19 Like their dev docs link literally just talks about running a node. Nothing about scripts/the RPC.
07:25:57 It's not live yet... I'm guessing the docs will come after things are more concrete
07:26:36 The problem with noot-esque designs, despite being more than fine for PoCs AND actually more efficient as they remove a ZK proof, is that they shift responsibility to chains (fine for Tari, not for ETH) and they aren't scalable in the slightest
07:26:46 ... flexible. They aren't flexible in the slightest.
07:27:17 Tari is discussing op codes for three separate curves. What happens when I want to swap shielded ZEC, which uses Jubjub and is soon moving to Pallas/Vesta? I just can't?
07:27:58 And they do acknowledge the DlEq proof, but it absolutely doesn't appear to be the one designed by the MRL. I meant to ask about that, as it's not being formatted as intended so I may just not be able to read whatever shorthand this is properly...
07:28:19 Anyways. Conversation for there, not for here :p It is good to see work on further adoption of swap tech though
07:32:13 Tari is based on MimbleWimble as far as I know, kayabaNerve
07:32:49 What are the requirements for a chain to be able to use farcaster's system with monero? ie, must support A and B...
07:32:55 Thanks for the heads up
07:33:13 moneromooo: The swap protocol (theory) or Farcaster (impl)?
07:33:36 Hmm... Both I guess.
07:34:05 Theoretically, HTLCs and custom scripting accordingly.
07:34:14 Like, if you have enough Script to impl an HTLC, you're good.
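For readers unfamiliar with the HTLC primitive referenced at 07:34:14, a chain-agnostic sketch of its unlock logic (illustrative Python, not any chain's actual Script encoding): funds are claimable with the preimage of a hash before a timeout, or refundable after it.

```python
# Minimal HTLC unlock-logic sketch (chain-agnostic, illustrative only).
import hashlib
import time

class HTLC:
    def __init__(self, hashlock: bytes, timeout_unix: int):
        self.hashlock = hashlock          # sha256(secret) committed up front
        self.timeout = timeout_unix       # refund becomes possible after this

    def can_claim(self, preimage: bytes, now: int) -> bool:
        # Claim path: reveal the secret before the timeout.
        return now < self.timeout and hashlib.sha256(preimage).digest() == self.hashlock

    def can_refund(self, now: int) -> bool:
        # Refund path: timeout has passed.
        return now >= self.timeout

secret = b"swap secret"
htlc = HTLC(hashlib.sha256(secret).digest(), timeout_unix=int(time.time()) + 3600)
assert htlc.can_claim(secret, now=int(time.time()))
assert not htlc.can_refund(now=int(time.time()))
```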
07:35:35 Farcaster is working on a variant of the protocol enabled by Taproot though, which replaces the HTLC with a PTLC thanks to the usage of Schnorr signatures.
07:36:22 If you wanted to implement Farcaster for an arbitrary chain, there should be a set of Rust traits you'd have to implement, which could then be routed into a UI selecting building blocks all under that common interface.
07:36:43 See https://github.com/farcaster-project/farcaster-core/
07:37:29 I can't really comment further, sorry, but hope that gives you the info you need. I'm sure h4sh3d or lederstrumpf[m] could comment more.
07:37:52 That's enough for my curiosity, thanks :)
07:59:49 I’d say that we already use PTLCs even with ECDSA, that’s why it is complicated to have a proper, strong, production impl for non-taproot. Otherwise, after skimming through the blog post (not an RFC yet) I’m wondering if the TariScript described disables the first output path when moving to the next one (e.g. refund)
08:00:51 If not then I have a problem: you can race the refund and the buy, both can be in the mempool, leading to both partial keys being leaked
08:01:33 That’s why we have an intermediary tx on btc (that is not needed for eth) to invalidate the buy state on-chain
08:02:18 And I’m not sure I understand whether they have some game theory to force the refund on xtr
08:02:41 But I have to read more carefully to be sure that my comments are accurate
08:02:50 Nice to see some work on that tho
12:24:32 maxwellsdemon: my opinion is that your stated goal would not be very useful work. There are multiple reasons why:
12:28:31 For starters, CPUs don't really "wear out" significantly over their expected lifespan (~10 years or so) - and even then, this marginal wear is caused by the heat expansion/contraction due to large temperature swings. Mining rigs typically run in roughly constant conditions (ie temperature) with minimal fluctuations, so technically they experience *less* wear than a "common" cpu performing work in short, high-intensity bursts
12:30:38 The only significant threat posed by temperature is when the chip reaches temps that actually risk melting the circuits (typically ~100°C or so, depending on the manufacturing process and generation) - but if you're reaching those temperatures then your cooling is severely inadequate, and you should not be mining in the first place until you fix that (or doing anything else cpu-intensive, for that matter)
12:32:09 Also, cpus have had overtemperature protections for over a decade at this point, I think. Ie. if they actually get to 100°C they instantly shut down the system to protect themselves from this kind of heat damage
12:36:31 As for temperature vs performance, this is already handled by the cpu/OS as well: if your cpu starts reaching "high" temperatures (typically around ~80°C, iirc), the CPU will start "thermal throttling", ie preemptively reducing its performance to reduce the power consumption (and therefore heat generation).
A good mining setup requires adequate cooling (and perhaps some system tuning) to keep the temperatures around 70-75°C *maximum* - which is actually not that hard to do
12:39:54 If your cooling system is capable of satisfying this temperature constraint, then it will typically have no problem maintaining it under constant load - so the temperatures won't really swing up and down by more than 1-2°C at most unless something in the environment changes (and even then, it will easily compensate by adjusting fan speeds)
12:41:03 As for lower temperatures than the ones I mentioned - the heat/temperature difference that we're talking about becomes so small (relative to typical base/idle loads) that I doubt it poses a significant problem
12:43:33 Furthermore, all miners already allow you to (manually) configure the number of cpu cores you want to use for mining (and in the case of xmrig, you even have the option to change this value on the fly by editing the config file without even stopping the miner - the software will automatically detect the change and apply it right away)
12:44:28 So if someone wants to reduce their cpu load, they can already do that manually pretty easily with just a little trial and error
12:46:21 Note that mining is typically bottlenecked by the hardware capabilities of the cpu itself (for example, the amount of cpu L3 cache available on the die) - if you have temperature problems, that just means that you have bad cooling (which, strictly speaking, is an independent issue from the mining itself)
12:51:24 Regulating mining load automatically based on a target temperature might be a "nice to have" feature for people who want to mine on hardware with poor cooling (eg laptops - but those should be avoided for mining due to other additional issues), but it wouldn't really increase mining decentralization per se by a significant margin (imo - let me know if I'm wrong). It also wouldn't help very-low-end hardware that can't currently mine, because the real limitation is not the cooling but the actual cpu hardware itself
12:56:31 Sidenote: I don't think your example of the Xeon server is very representative of a typical mining setup for multiple reasons:... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/ef749d7a410f38659e7e23c5075ff27af40bbccf)
12:59:36 (Sorry for the wall of text, but I wanted to be as thorough as possible with my explanations - hope it gives you a clearer picture of how mining works, from the perspective of cooling/temperature)
13:11:55 A more interesting application of your knowledge of control theory would be looking into the difficulty adjustment algorithm used by the network to regulate mining difficulty for the next block.
Our current algorithm uses a simple moving average of the last blocks - which works fine for the most part, but doesn't react optimally in some scenarios (such as sudden, large hashrate decreases) and suffers from (small, at least in our case) oscillations
13:12:29 I believe this should be right up your alley
13:12:51 For a quick primer, have a look at https://github.com/zawy12/difficulty-algorithms/issues/50 and https://github.com/zawy12/difficulty-algorithms/issues/48
13:14:49 (That whole repository is a collection of issues related to mining difficulty adjustments, you might find other interesting stuff/ideas in the other issues)
13:15:13 https://monero.stackexchange.com/a/7981 - this is our current difficulty adjustment algorithm
13:17:43 The c++ implementation is in https://github.com/monero-project/monero/blob/master/src/cryptonote_basic/difficulty.cpp
13:17:49 I doubt there's any perfect solution here. if you try to overdamp the oscillations you'll react poorly to longer term hashrate changes
13:18:44 whatever time interval you tune the adjustments to, an attacker can abuse the algorithm by making changes outside that window
13:20:07 The goal would not be using a SMA, rather than tuning it
13:20:08 *would be not using SMA
13:21:34 regardless
13:21:52 you also lack timely and reliable information
13:22:42 a miner can turn on a mining farm, quickly churn out a few blocks while the difficulty is low, then shut them off again when the difficulty rises
13:23:07 causing blocks to lag until the difficulty drops back down again. then lather rinse repeat for as long as it amuses them.
13:23:11 Issue #50 shows some interesting stuff actually (such as `D = D0 * e^((N*T - timestamp)/M)`, though it seems it might not be applicable to CryptoNote-based coins?)
13:23:56 and there is no way you can defeat that sort of abuse because you have no way to accurately count the number of active miners nor their relative power
13:24:28 You're still thinking of an algo that doesn't take into account the change in the rate of block production
13:24:44 Derivatives exist for a reason
13:25:08 it could just be random luck
13:25:22 so you can't take a meaningful slope without defining a relatively wide time window
13:25:35 so you can't react instantly.
13:25:49 and the game I just described can be repeated ad nauseam
13:26:03 PID controllers used by drones to stay up would like to disagree with you
13:26:15 You're just listing design goals of the control system
13:26:37 Which are the whole point of control theory
13:26:38 drones are hardly a comparable system
13:26:55 unless you think they can cope with attackers messing with their airflow
13:27:09 Aka wind
13:27:17 wind is more uniform than that
13:27:23 And yes they can
13:27:28 lmfao
13:27:45 turbulence would like to have a word with you
13:28:08 all the turbulence in the world can still be plotted as sine waves
13:28:23 whereas the games miners can play would be plotted as square waves
13:28:34 good luck with your derivatives for infinite slope
13:30:11 I guess you've just proven that fourier transforms of square waves don't exist, and that digital control systems don't know how to handle them
13:30:18 ( /s, in case it's not clear)
13:30:23 lol
13:32:03 well in that case I guess the problem's solved. next topic.
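A small sketch of how the exponential rule quoted at 13:23:11 from issue #50 behaves; all parameter values here are placeholders, not proposed constants:

```python
# Sketch of D = D0 * e^((N*T - timestamp)/M) from issue #50.
# D0 = difficulty at a reference block, N = blocks since that reference,
# T = target block time, M = smoothing constant.
import math

def exponential_difficulty(d0, blocks_elapsed, target_time, elapsed_time, smoothing):
    # If blocks arrived faster than N*T seconds, the exponent is positive and
    # difficulty rises; if slower, it decays - notably even while *no* block
    # is found, since elapsed_time keeps growing.
    return d0 * math.exp((blocks_elapsed * target_time - elapsed_time) / smoothing)

# Example: 100 blocks at a 120 s target, but 4 hours actually elapsed,
# so difficulty decays below d0 (prints ~716531).
print(exponential_difficulty(1_000_000, 100, 120, 4 * 3600, 2 * 3600))
```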
13:32:11 Dude
13:33:03 You're the one who keeps listing design parameters of the control system as if they are impossible things that nobody could ever solve
13:33:50 if it were so simple it would have been done by now
13:34:24 Possible != simple
13:35:12 theoretically possible also != feasible in practice
13:35:19 And since we have someone proficient in control theory in the room, I would like to hear their opinion
13:35:20 particularly with imperfect information
13:37:59 essentially what you want is a low-pass filter, to damp down high frequency oscillations. in terms of an fft, your data samples are individual blocks.
13:38:22 so again, you need a relatively large window to have enough samples to do a meaningful computation
13:38:34 and we don't even have reliable clocks
13:51:19 It's as if you've never heard of the concept of "control robustness"
13:53:44 come back when you figure out how large the sample window needs to be...
13:54:27 sigh
14:26:46 You'll need to introduce game theory into the mix in some way -- which is feasible, since game theory has been thoroughly studied.
14:43:24 Hi, kayabaNerve_ Here is a link to the scripting in Tari: https://rfc.tari.com/RFC-0201_TariScript.html
14:43:38 Most of our current stuff is in the RFCs.
14:47:20 Might I enquire about the misunderstandings and inaccuracies? We would love to fix them.
15:37:00 what is the question, specifically?
15:57:02 "https://monero.stackexchange.com..." <- If you could implement a difficulty adjustment algorithm better than the current one ^, which doesn't handle fast changes in network hashrate (particularly drops) very well and is prone to oscillations. The difficulty adjustment algorithm is what ensures that miners produce new blocks at a roughly constant rate (target time: 2 minutes) as the hashrate of the network increases or decreases over time (or stays stable)
15:59:44 In other words: the goal is that new blocks show up every 2 minutes regardless of how many people are mining (whether it's 1 MH/s or 100 GH/s or whatever). The catch is that we have no way of knowing the real hashrate that's actually mining - we can only make inferences by looking at the timestamps of the previous blocks found
16:00:34 (which are not always strictly accurate, and could potentially be manipulated by an attacker trying to mess with the system)
16:02:42 Right now, when the hashrate drops by a significant amount all of a sudden, block times become longer than 2 minutes for a while (because the difficulty can't adjust yet, because no blocks are being found, because difficulty hasn't adjusted...)
16:03:38 (not that they aren't being found, but they aren't being found fast enough for the adjustment to kick in)
16:27:06 thats something i can look into, yes. My first question to you would be what type of behavior is needed/what are the performance requirements
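A sketch of the inference described at 15:59:44 (illustrative data layout, not monerod's actual RPC schema): the only observables are block difficulties and timestamps, so the hashrate estimate is just accumulated work divided by elapsed time.

```python
# Rough network-hashrate inference from recent blocks.
# blocks: list of (timestamp_seconds, difficulty), oldest first - illustrative layout.

def estimate_hashrate(blocks):
    total_work = sum(diff for _, diff in blocks[1:])  # work done after the first timestamp
    elapsed = blocks[-1][0] - blocks[0][0]
    return total_work / max(elapsed, 1)               # hashes per second, roughly

# 30 blocks ~120 s apart at difficulty ~3e11 -> ~2.5 GH/s
blocks = [(t * 120, 3.0e11) for t in range(30)]
print(estimate_hashrate(blocks))
```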
17:08:50 consider it a wishlist, not requirements
17:09:03 nominally, blocks should be found every 2 minutes
17:09:47 that cannot be assured in the face of sudden hashrate drops, but it would be nice to recover more quickly than currently
17:11:52 the mining difficulty is calculated based on the preceding blocks' timing
17:12:25 a miner gets this difficulty value and then goes to work on trying to solve a block meeting that difficulty
17:13:23 if the actual network hashrate suddenly drops, then there won't be enough miners for the difficulty value that was handed out to them
17:13:36 so the next block will take much longer than the nominal 2 minutes
17:14:53 to actually fix this requires the network/nodes to recognize that the next block is taking "too long" and compute an easier difficulty value
17:15:33 and requires the miners to also recognize that "too long" a time has passed, and query the network/nodes for a new difficulty value that's more reflective of the current hash power
17:15:52 ^ that's the ideal solution. not feasible in reality.
17:18:09 wait, are you talking about adjusting the difficulty before the next block is found? (ie. "nobody has found a block in 10 minutes, so let's reduce difficulty for everyone"?)
17:19:29 that would be the ideal solution, yes. but as I said, not possible.
17:21:10 It is theoretically possible I think.
17:21:44 "Issue #50 shows some interesting..." <- I mean, technically there *is* this solution mentioned in issue #50 that I linked - which only requires the timestamp of the origin block and the current timestamp
17:22:39 You could maybe allow blocks with difficulty 50% of the target one, which would only give, say, 20% of the block reward. And possibly have some rule which gets them orphaned easier than "real" blocks.
17:22:49 So if blocks are found faster than expected, difficulty rises exponentially, and if no blocks are found for a while then it decreases exponentially
17:23:51 But more generally speaking, it's simpler to implement a system based on the timestamps of the previous blocks found
17:25:35 That way, if the last N blocks are consistently taking longer than average, you might consider reducing the difficulty accordingly
17:25:46 with mooo's idea we could just implement a smooth decay. at 2 minutes + N time, difficulty is decreased by N%
17:27:40 (Keeping in mind that individual block times follow an exponential distribution - block arrivals are a Poisson process)
17:28:48 (in the absence of adversaries)
17:29:16 timestamps can be fake
17:30:10 yeah I wouldn't base it on block timestamps, but on node receipt time
17:30:13 Pretty sure that's a dead end with the existing consensus model. You need SCP or a similar model based on trust networks to reach consensus on timestamps.
17:31:12 Node receipt time can't be used to determine new diff since they're jiggly (and even worse for historical sync).
17:31:39 hm good point
17:32:31 The "low diff block" thing is "kinda" that: a timestamp that nodes do agree with, that comes before the "real" full diff block.
17:32:48 50% being of course a placeholder amount.
17:35:26 if the real hashrate has dropped tho, the "real" full diff block may be hours away
17:42:14 what youre talking about is essentially an exponentially weighted FIR filter
17:43:58 its definitely doable, stability will be a big concern. you can implement a recursive filter which will have a smoother performance, but it can "blow up"
17:44:54 its certainly an interesting problem. a good first start would be getting access to data sets that emulate the hash rate characteristics
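A minimal sketch of the recursive filter mentioned at 17:43:58 - a one-state exponentially weighted moving average smoothing inter-block times. The gain bound in the assert shows where the "blow up" concern comes from; all numbers are illustrative:

```python
# One-state recursive (EWMA) filter: no sample window, just the previous output.

def make_ewma(alpha, initial):
    assert 0.0 < alpha <= 1.0, "gain outside (0, 1] makes the filter diverge"
    state = initial

    def update(observation):
        nonlocal state
        state += alpha * (observation - state)  # pull estimate toward the new sample
        return state

    return update

smooth = make_ewma(alpha=0.1, initial=120.0)   # start at the 2-minute target
for block_time in [118, 123, 400, 119, 121]:  # one outlier block barely moves it
    print(smooth(block_time))
```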
17:45:03 Note BCH's ASERT difficulty adjustment algorithm. It was implemented in BCH in Nov 2020:
17:45:03 https://read.cash/@jtoomim/bch-upgrade-proposal-use-asert-as-the-new-daa-8887b9c1
17:45:03 https://bitcoincashresearch.org/t/asert-before-and-after/312
17:46:19 another approach would be to use a least-squares filter, which is very stable, but can be slow if you have crappy hardware
17:47:08 i would recommend taking an iterative approach - start with a simple solution and refine through design cycles
17:48:25 Slow if you have poor hardware? How?
17:48:26 I suppose you can use block timestamps from late 2017-mid 2018, since we had massive hashrate fluctuations around then due to ASICs
17:49:32 This is as much of an economics problem as an engineering problem, by the way.
17:51:15 you have to compute a least squares solution each time step
17:51:18 requires matrix inversion, it can be slow
17:52:19 it can also be poorly conditioned, which means you have a large difference in the eigenvalues of your system. the result is a solution that might numerically diverge
17:53:17 what's the problem trying to be solved, i.e. what do we gain from a more predictable block time? I would imagine throughput is one advantage resulting from fewer empty blocks
17:53:18 Rucknium[m]: it is, I agree. essentially we are making decisions based on filtered data, which by its nature means a loss of information
17:54:31 see hyc's reply to me
17:55:40 i think this might be a good thing for the community to have: hash rate data as a time series, maybe other variables too
17:55:49 good for devs anyway
17:56:10 maxwellsdemon: Aren't there very efficient methods of OLS [ordinary least squares]? QR factorization, etc? And we are not talking about very many observations here
17:57:03 the monero-blockchain-stats utility will give you the historical difficulty
17:58:00 though perhaps not as fine grained as you'd want
17:58:16 there are many choices
17:58:56 Rucknium[m]: if you can avoid matrix inversion that is preferred. lots of ways to approach the problem
18:00:32 If we need the difficulty of every block, then neptune can probably provide it.
18:01:59 i would look at the rawest data possible, the discrete events of block arrival times, if enough trusted community members have the data logged
18:03:51 and compute the distributions by convolving that with some half-Gaussian kernel, or similar functions, so the future doesnt contaminate the past
18:03:55 Historical data is going to be of limited use. Change the system and you change the behavior. Still useful, of course.
18:07:47 "another approach would be to use..." <- How slow are we talking? A few milliseconds, or several seconds?
18:08:49 Keeping in mind that the base process is: compute difficulty from the last block, start mining, someone finds a new block, everyone recomputes difficulty, start mining again, and so on
18:09:30 if historical block timestamps are too unreliable then you just have to run a testnet with real miners
18:09:33 "I suppose you can use block..." <- Also the two big hashrate spikes earlier this year
18:10:04 Everyone who syncs the blockchain with monerod will have to verify that the difficulty of each block was calculated correctly, though, right? So speed could be a concern even if it was very fast.
18:10:22 yes
18:10:43 how can you make a reliable estimate of a poisson process without having a large window of observation?
18:11:20 you can't. ... will always need a large sample window ...
18:11:41 Shrinkage estimator.
18:12:10 And now we are on to statistics! My favorite 😍
18:12:41 With a good shrinkage estimator you could do it.
18:13:03 hyc: you can use recursive methods that only depend on the output of the filter, requiring little past information. A numerical integrator is the simplest example. Again, they can be unstable, which i think is very important for this use case
18:13:21 indeed
18:13:33 im also not using autocorrect on my phone, which is why my typing looks like shit
18:13:42 i dont trust it to not send data
18:15:14 A shrinkage estimator is not unstable. In fact, one could argue that it is the opposite.
18:15:55 A shrinkage estimator will in general reduce variance at the cost of increased (finite-sample) bias.
18:16:52 im not familiar with a shrinkage estimator
18:16:56 what does it do
18:18:00 At a guess, it estimates the age of shrinks ?
18:18:03 * moneromooo flees
18:18:18 Shrinkage estimators are a broad class of estimators. Bayesian estimators can be thought of as a type of shrinkage estimator, for example.
18:20:04 Basically, it "shrinks" the estimate toward some value that is not contained in the observational data itself. If the data is uninformative, the estimate will be close to that value that is outside of the data. If the data is informative, the estimate will be close to what a more standard "frequentist" estimator would yield.
18:20:47 Rucknium[m]: how much data do you need for these real time estimates?
18:20:48 The Wikipedia page is fine:
18:20:48 https://en.wikipedia.org/wiki/Shrinkage_(statistics)
18:21:02 how many events?
18:21:47 u guys over estimate how shit reality is haha
18:23:22 zkao: What do you mean by this comment? I work with empirical data almost every day.
18:24:14 zkao: I don't know for now. It depends on myriad factors.
18:25:15 the block rate is very low, Rucknium[m]
18:27:30 Compared to bitcoin it is five times faster.
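On the least-squares exchange above (17:56:10, 17:58:56): a sketch showing that OLS on a small window needn't invert a matrix explicitly - numpy's `lstsq` solves via an orthogonal factorization, sidestepping the poor conditioning of forming X'X directly. The data here is synthetic and purely illustrative.

```python
# OLS fit of a hashrate trend over a small window, without an explicit inverse.
import numpy as np

times = np.arange(30.0) * 120.0                         # 30 block timestamps, seconds
rng = np.random.default_rng(0)
rates = 2.5e9 + 8.0e3 * times + rng.normal(0, 1e6, 30)  # rising hashrate + noise

X = np.column_stack([np.ones_like(times), times])       # intercept + slope columns
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)        # no explicit (X'X)^-1
print("estimated trend (H/s per second):", coef[1])     # ~8e3
```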
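And a toy illustration of the shrinkage idea described at 18:20:04, applied to a block-arrival (Poisson) rate: with a Gamma prior (gamma-Poisson conjugacy), the posterior mean is a weighted blend of a prior target and the raw empirical rate. The prior target and strength are arbitrary placeholders, not a proposed design.

```python
# Gamma-Poisson shrinkage sketch: posterior mean of the arrival rate.

def shrunk_rate(blocks_observed, seconds_observed, prior_rate, prior_strength):
    # prior_strength acts like "pseudo-seconds" of prior observation; small
    # windows stay near prior_rate, large windows are dominated by the data.
    return (prior_rate * prior_strength + blocks_observed) / (prior_strength + seconds_observed)

target = 1 / 120  # one block per 120 s
# Tiny window (3 blocks in 200 s): estimate stays near the prior target.
print(shrunk_rate(3, 200, target, prior_strength=3600))
# Big window (500 blocks in 30000 s): data dominates, approaching 1/60.
print(shrunk_rate(500, 30000, target, prior_strength=3600))
```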