-
maxwellsdemon[m]
hey i was redirected here from the dev room, i have an idea and background to add a feature to the mining algorithm
-
maxwellsdemon[m]
i was asked to post here from reddit as I spoke with rucknium regarding my proposal
-
maxwellsdemon[m]
my background is in control theory and applied math. never done this before so im not sure if i should just put my proposal out on the chat to get feedback or wait for more people?
-
chad[m]1
If its something you can share in a few messages, just post it here and give people some time to respond. If its a longer idea, maybe share a link to your proposal?
-
chad[m]1
These rooms aren't too formal, but can be slow on a Sunday evening.
-
chad[m]1
I'm dying to hear your idea though
-
maxwellsdemon[m]
i can describe it concisely: basically, it should be theoretically possible to automatically select and adjust thread utilization based on CPU temperature. For example, I first started mining on a workstation using a Xeon processor, which quickly raised the temps to 170F alone, but I found there was no good way to really pick the thread count that prevents both excessive temperature rise and fluctuation
-
maxwellsdemon[m]
my idea is simple: adaptively switch thread utilization based on either CPU temperature measurements (say 150F for example), or alternative sensors such as voltage+current utilized by the power supply or CPU
-
maxwellsdemon[m]
it would technically be a type of modified bang-bang control, but has some difficult nuances
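[editor's note] maxwellsdemon's modified bang-bang idea can be sketched in a few lines. Everything below is hypothetical (function names, setpoints, and the toy thermal model are all assumptions, not any miner's actual API); the point is just that a hysteresis band around the setpoint suppresses chattering:

```python
# Hypothetical sketch of "bang-bang with hysteresis" thread control.
# Thresholds, the thermal model, and all names are assumptions.

def next_thread_count(temp_f, threads, max_threads,
                      setpoint_f=150.0, band_f=10.0):
    """Step threads down above the band, up below it, hold inside it."""
    if temp_f > setpoint_f + band_f and threads > 1:
        return threads - 1          # too hot: shed one worker thread
    if temp_f < setpoint_f - band_f and threads < max_threads:
        return threads + 1          # cool enough: add one back
    return threads                  # inside the hysteresis band: hold

# Toy first-order thermal model: each thread adds heat, ambient pulls
# the temperature back toward 100F.
temp, threads = 120.0, 8
history = []
for _ in range(50):
    threads = next_thread_count(temp, threads, max_threads=8)
    temp += 0.2 * (100.0 + 8.0 * threads - temp)
    history.append((round(temp, 1), threads))
```

A plain bang-bang controller with no band would oscillate between thread counts every few samples; the band lets the system settle at whichever thread count puts the temperature inside it.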
-
gingeropolous
whats the goal?
-
maxwellsdemon[m]
1) we need to both regulate temperature and minimize a phenomenon called chattering, which causes rapid fluctuations in control action, or in this case temperature changes 2) it has to work on a lot of different operating systems and hardware 3) there is no good math model available to use "readily available" methods you might find in a textbook
-
Rucknium[m]
I think this fits into MRL open research question "Increase mining decentralization"
-
Rucknium[m]
-
chad[m]1
If I understand correctly, your proposal is about an improved control system for mining software, not a change to the actual PoW algo itself, right?
-
maxwellsdemon[m]
the goal would be 1) minimize hardware degradation 2) potentially increase energy efficiency
-
maxwellsdemon[m]
yes, correct
-
maxwellsdemon[m]
i dont have a background in cryptography
-
Rucknium[m]
Mining is a zero sum game, yes, but auto-tuning rigs for hobbyist miners increases the incentive for hobbyists to mine. Professional miners already have well-tuned rigs, I imagine.
-
maxwellsdemon[m]
yes, you can think of it as another tool in your tool box
-
maxwellsdemon[m]
for example, you can use a cheapo temperature regulation control architecture for your oven, but surely you want a robust solution for a lab grade kiln
-
Rucknium[m]
It may also make mining feasible on a wider variety of hardware, again increasing decentralization.
-
chad[m]1
This would be a good proposal to improve Xmrig
-
maxwellsdemon[m]
right now im not even sure if something like this would be wanted
-
chad[m]1
If it would improve efficiency at all, you bet miners will want it.
-
maxwellsdemon[m]
ill be upfront and say it will take me about a year to work on alone. its not going to be easy, the problem statement is simple but the work under the hood wont be
-
chad[m]1
If funding is an issue, then you could always open a CCS proposal to request funding. You might benefit from talking to the Xmrig devs (maybe open an issue for your proposal)
-
chad[m]1
-
chad[m]1
There are several people here that can help with most of the software/system stuff. Not sure if youre suggesting the control system maths alone would take a year, or if you mean the entire software implementation would take a year.
-
aberdeenik[m]
<Rucknium[m]> "It may also make mining feasible..." <- We should make it extremely easy for a grandpa or a small kid to run a monero miner on their devices. It should be made resistant to not just ASICs or GPUs but any high end rigs, to ensure every device is compensated proportionate to time and not hardware.
-
spackle[m]
My understanding is that degradation is going to be more of a function of the general CPU temperature, rather than the gradients between cores, etc. Do CPU dies develop a significant enough temperature gradient to warrant this sort of control? If you want to limit degradation I would think you'd just use a better CPU cooler.
-
spackle[m]
I would think that you would want RandomX going all out, and not throttling any resource.
-
maxwellsdemon[m]
degradation is due to thermal stresses, which are caused by both high temperatures and temperature changes over time
-
maxwellsdemon[m]
for example, putting a tempered glass container in the oven then taking it out and running cool water over it
-
maxwellsdemon[m]
the deformation in the glass causes cracks, which lead to failure
-
maxwellsdemon[m]
but, that is just one consideration, you can also look at energy usage if you have a means to measure it
-
maxwellsdemon[m]
you can even, theoretically, throttle higher end CPUs to make a 51% attack using high end machines more difficult
-
maxwellsdemon[m]
you can think of this as an artificial damping
-
nioc
I static OC and undervolt my ryzen cpus so they run cool as a cucumber
-
aberdeenik[m]
<Halver[m]> "Even if we imagine hiding the..." <- Even block order can be shuffled in a cryptographically preserving way. When a new block is mined it is submitted to a secure node; the node will shuffle all the block requests, preserving the cryptography such that no external observer can decode the correct order of the blocks
-
nioc
and no choosing which CPUs to throttle is not an idea
-
maxwellsdemon[m]
control theory comes down to manipulating systems such that you obtain a desired behavior. questions such as your objective, your actuators, the states you can observe, and the states you can control dictate what you can do
-
maxwellsdemon[m]
you would only need to add a damping term that is a function of hash rate, no knowledge of a specific cpu is needed
-
maxwellsdemon[m]
its just an idea. im not saying it should or shouldnt be done
-
nioc
ok
-
chad[m]1
Hardware degradation probably is not much concern to most miners. (I too run several ryzens, and temps are no concern for me)
-
chad[m]1
Efficiency would get a lot of attention
-
maxwellsdemon[m]
there are several unknowns with my proposal. i can tell you i work on R&D programs for a living and manage two. I know how to approach big unknown problems
-
maxwellsdemon[m]
to do an efficiency based algorithm first we need knowledge of what can be measured
-
maxwellsdemon[m]
for example, are we limited to CPU temps, how strong of a correlation is there between this variable and energy usage, can we measure input power to the rig in real time?
-
maxwellsdemon[m]
maybe that alone is a good starting trade study
-
Rucknium[m]
maxwellsdemon: MRL has meetings every Wednesday at 17:00 UTC. We can put you on the agenda if you'd like:
-
Rucknium[m]
-
maxwellsdemon[m]
i think i can manage that
-
maxwellsdemon[m]
is it on this?
-
Rucknium[m]
Yes. It's just text chat here in Matrix/IRC in this channel. Meetings usually last about an hour.
-
maxwellsdemon[m]
that should be feasible
-
maxwellsdemon[m]
only issue i might have is im temporarily taking care of a sick family member, so unless something unexpected happens with them i should be good to go
-
Rucknium[m]
Ok great. How should I describe your idea/proposal as an agenda item?
-
maxwellsdemon[m]
adaptive cpu regulation for improved mining performance
-
maxwellsdemon[m]
thank you
-
maxwellsdemon[m]
i just had an interesting idea that the adaptive algorithm can be used to resist ASIC development.
-
maxwellsdemon[m]
ill save it for then
-
Rucknium[m]
Ok it's on the agenda 😎
-
maxwellsdemon[m]
bitchin
-
crypto_grampy[m]
-
crypto_grampy[m]
Tari/XMR atomic swap discussion and request for input 👆
-
kayabaNerve
crypto_grampy[m]: Do you have any relation to this?
-
crypto_grampy[m]
Nadda. Just saw it mentioned in another channel.
-
kayabaNerve
I was going to comment it seems to have some misunderstandings, along with inaccuracies. I was trying to even find out what Tari *is* when all I found out was that info is by no means easily available.
-
crypto_grampy[m]
It's a fluffypony production 😅
-
kayabaNerve
Anyways. I'll try to leave a GH comment about the issues when I have a moment, and hopefully it's just late at night and I'm misreading this. They propose two methods, one I don't endorse and one I do, yet say the first is akin to Farcaster/COMIT while the second is their own? When the first is akin to noot's work and the second is F/C.
-
kayabaNerve
Yeah, when I was trying to see what it was I saw they had IRC logs with a few large Monero names...
-
kayabaNerve
I still have 0 idea though. Literally
-
kayabaNerve
Like I get it's a private by default chain merge mined with Monero which apparently has some level of scripting. I have no idea how its protocol is designed or what specific decisions it made regarding its tech.
-
crypto_grampy[m]
Private by default digital asset thingamahoo
-
kayabaNerve
Every single link seems to be a mountain of real time communications, a basic tutorial, or an invite to join their community with no actual specifications or dev docs.
-
kayabaNerve
Like their dev docs link literally just talk about running a node. Nothing about scripts/the RPC.
-
crypto_grampy[m]
It's not live yet... I'm guessing the docs will come after things are more concrete
-
kayabaNerve
The problem with noot-esque designs, despite being more than fine for PoCs AND actually more efficient as they remove a ZK proof, is that they shift responsibility to chains (fine for Tari, not for ETH) and they aren't scalable in the slightest
-
kayabaNerve
... flexible. They aren't flexible in the slightest.
-
kayabaNerve
Tari is discussing using three separate curves of op codes. What happens when I want to swap shielded ZEC which uses Jubjub and is soon moving to Pallas/Vesta. I just can't?
-
kayabaNerve
And they do acknowledge the DlEq proof but it absolutely doesn't appear to be the one designed by the MRL. I meant to ask about that as it's not being formatted as intended so I may just not be able to read whatever shorthand this is properly...
-
kayabaNerve
Anyways. Conversation for there, not for here :p It is good to see works on further adoption of swap tech though
-
dEBRUYNE
Tari is based on MimbleWimble as far as I know, kayabaNerve
-
moneromooo
What are the requirements for a chain to be able to use farcaster's system with monero ? ie, must support A and B...
-
kayabaNerve
Thanks for the heads up
-
kayabaNerve
moneromooo: The swap protocol (theory) or Farcaster (impl)?
-
moneromooo
Hmm... Both I guess.
-
kayabaNerve
Theoretically, HTLCs and custom scripting accordingly.
-
kayabaNerve
Like, if you have enough Script to impl a HTLC, you're good.
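[editor's note] For readers unfamiliar with the primitive: the HTLC logic kayabaNerve refers to can be modeled in a few lines. This is a toy illustration of the spend conditions only; the class and parameters are invented for this sketch, not any chain's actual Script:

```python
import hashlib

# Toy model of a hash time-locked contract (HTLC): the counterparty can
# claim with the hash preimage before the timeout; after the timeout the
# funder can reclaim. Illustrative logic only.

class HTLC:
    def __init__(self, hashlock: bytes, timeout: float):
        self.hashlock = hashlock   # sha256 digest of the secret
        self.timeout = timeout     # time after which refund is allowed
        self.spent = False

    def claim(self, preimage: bytes, now: float) -> bool:
        ok = (not self.spent and now < self.timeout
              and hashlib.sha256(preimage).digest() == self.hashlock)
        self.spent = self.spent or ok
        return ok

    def refund(self, now: float) -> bool:
        ok = not self.spent and now >= self.timeout
        self.spent = self.spent or ok
        return ok

secret = b"swap secret"
htlc = HTLC(hashlib.sha256(secret).digest(), timeout=1000.0)
```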
-
kayabaNerve
Farcaster is working on a variant of the protocol enabled by Taproot though, which replaces the HTLC with a PTLC thanks to the usage of Schnorr signatures.
-
kayabaNerve
If you wanted to implement Farcaster for an arbitrary chain, there should be a set of Rust traits you'd have to implement which could then be routed into a UI selecting building blocks all under that common interface.
-
kayabaNerve
-
kayabaNerve
I can't really comment further, sorry, but hope that gives you the info you need. I'm sure h4sh3d or lederstrumpf[m] could comment more.
-
moneromooo
That's enough for my curiosity, thanks :)
-
h4sh3d
I’d say that we already use PTLC even with ECDSA, that’s why it is complicated to have a proper, strong, production impl for non-taproot. Otherwise, after skimming through the blog post (not RFC yet) I’m wondering if the TariScript described disables the first output path when moving to the next one (e.g. refund)
-
h4sh3d
If not then I have a problem: you can race the refund and the buy, both can be in the mempool, leading to both partial keys leaked
-
h4sh3d
That’s why we have an intermediary tx on btc (that is not needed for eth) to invalidate the buy state on-chain
-
h4sh3d
And I’m not sure I understand if they have some game theory to force the refund on xtr
-
h4sh3d
But I have to read more carefully to be sure that my comments are accurate
-
h4sh3d
Nice to see some work on that tho
-
merope
maxwellsdemon: my opinion is that your stated goal would not be very useful work. There are multiple reasons why:
-
merope
For starters, CPUs don't really "wear out" significantly over their expected lifespan (~10 years or so) - and even then, this marginal wear is caused by the heat expansion/contraction due to large temperature swings. Mining rigs typically run in roughly constant conditions (ie temperature) with minimal fluctuations, so technically they experience *less* wear than a "common" cpu performing work in short, high-intensity bursts
-
merope
The only significant threat posed by temperature is when the chip reaches temps that actually risk melting the circuits (typically ~100°C or so, depending on the manufacturing process and generation) - but if you're reaching those temperatures then your cooling is severely inadequate, and you should not be mining in the first place until you fix that (or doing anything else cpu-intensive, for that matter)
-
merope
Also, cpus have had overtemperature protections for over a decade at this point, I think. Ie. if they actually get to 100°C they instantly shut down the system to protect themselves from this kind of heat damage
-
merope
As for temperature vs performance, this is already handled by the cpu/OS as well: if your cpu starts reaching "high" temperatures (typically around ~80°C, iirc), the CPU will start "thermal throttling", ie preemptively reducing its performance to reduce the power consumption (and therefore heat generation). A good mining setup requires adequate cooling (and perhaps some system tuning) to keep the temperatures around 70-75°C *maximum* - which is
-
merope
actually not that hard to do
-
merope
If your cooling system is capable of satisfying this temperature constraint, then it will typically have no problem maintaining it under constant load - so the temperatures won't really swing up and down by more than 1-2°C at most unless something in the environment changes (and even then - they will easily compensate for this by adjusting fan speeds)
-
merope
As for lower temperatures than the ones I mentioned - the heat/temperature difference that we're talking about becomes so small (relative to typical base/idle loads) that I doubt it poses a significant problem
-
merope
Furthermore, all miners already allow you to (manually) configure the number of cpu cores you want to use for mining (and in the case of xmrig, you even have the option to change this value on the fly by editing the config file without even stopping the miner - the software will automatically detect the change and apply it right away)
-
merope
So if someone wants to reduce their cpu load, they can already do that manually pretty easily with just a little trial and error
-
merope
Note that mining is typically bottlenecked by the hardware capabilities of the cpu itself (for example, the amount of cpu L3 cache available on the die) - if you have temperature problems, that just means that you have bad cooling (which, strictly speaking, is an independent issue from the mining itself)
-
merope
Regulating mining load automatically based on a target temperature might be a "nice to have" feature for people who want to mine on hardware with poor cooling (eg laptops - but those should be avoided for mining due to other additional issues), but it wouldn't really increase mining decentralization per se by a significant margin (imo - let me know if I'm wrong). It also wouldn't help very-low-end-hardware that can't currently mine, because the
-
merope
real limitation is not the cooling but the actual cpu hardware itself
-
merope
Sidenote: I don't think your example of the Xeon server is very representative of a typical mining setup for multiple reasons:... (full message at
libera.ems.host/_matrix/media/r0/do…d7a410f38659e7e23c5075ff27af40bbccf)
-
merope
(Sorry for the wall of text, but I wanted to be as thorough as possible with my explanations - hope it gives you a clearer picture of how mining works, from the perspective of cooling/temperature)
-
merope
A more interesting application of your knowledge of control theory would be looking into the difficulty adjustment algorithm used by the network to regulate mining difficulty for the next block. Our current algorithm uses a simple moving average of the last blocks - which works fine for the most part, but doesn't react optimally in some scenarios (such as sudden, large hashrate decreases) and suffers from (small, at least in our case) oscillations
-
merope
I believe this should be right up your alley
-
merope
-
merope
(That whole repository is a collection of issues related to mining difficulty adjustments, you might find other interesting stuff/ideas in the other issues)
-
merope
monero.stackexchange.com/a/7981 - this is our current difficulty adjustment algorithm
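[editor's note] A stripped-down sketch of the moving-window idea behind the linked algorithm. The real one uses a 720-block window with outlier trimming of timestamps; both are omitted here, so treat this as illustration only:

```python
# Simplified moving-window difficulty adjustment: estimate hashrate from
# the last N blocks and scale so blocks land on the 120 s target.
# Window size and the real algorithm's outlier trimming are omitted.

TARGET_SECONDS = 120

def next_difficulty(timestamps, cumulative_difficulties):
    """Inputs are oldest-to-newest, same length, one entry per block."""
    assert len(timestamps) >= 2
    timespan = max(timestamps[-1] - timestamps[0], 1)
    work = cumulative_difficulties[-1] - cumulative_difficulties[0]
    # work/timespan ~ observed hashrate; difficulty = hashrate * target
    return max(work * TARGET_SECONDS // timespan, 1)

# 10 blocks found exactly on target at constant difficulty 1000:
ts = [i * 120 for i in range(10)]
cum = [i * 1000 for i in range(10)]
```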
-
merope
-
hyc
I doubt there's any perfect solution here. if you try to overdamp the oscillations you'll react poorly to longer term hashrate changes
-
hyc
whatever time interval you tune the adjustments to, an attacker can abuse the algorithm by making changes outside that window
-
merope
The goal would not be using a SMA, rather than tuning it
-
merope
*would be not using SMA
-
hyc
regardless
-
hyc
you also lack timely and reliable information
-
hyc
a miner can turn on a mining farm, quickly churn out a few blocks while the difficulty is low, then shut them off again when the difficulty rises
-
hyc
causing blocks to lag until the difficulty drops back down again. then lather rinse repeat for as long as it amuses them.
-
merope
Issue #50 shows some interesting stuff actually (such as `D = D0 * e^((N*T-timestamp)/M)`, though it seems it might not be applicable to CryptoNote-based coins?)
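[editor's note] The quoted formula, written out as a function so its behavior is visible (D0 and M are tuning constants; this just evaluates the expression):

```python
import math

# The issue #50 formula: difficulty decays exponentially when the chain
# is behind schedule (timestamp > N*T) and rises when it is ahead.

def issue50_difficulty(d0, n_blocks, target, timestamp, m):
    return d0 * math.exp((n_blocks * target - timestamp) / m)

# Exactly on schedule (timestamp == N*T): exponent is 0, difficulty = D0.
```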
-
hyc
and there is no way you can defeat that sort of abuse because you have no way to accurately count the number of active miners nor their relative power
-
merope
You're still thinking of an algo that doesn't take into account the change in the rate of block production
-
merope
Derivatives exist for a reason
-
hyc
it could just be random luck
-
hyc
so you can't take a meaningful slope without defining a relatively wide time window
-
hyc
so you can't react instantly.
-
hyc
and the game I just described can be repeated ad nauseum
-
merope
PID controllers used by drones to stay up would like to disagree with you
-
merope
You're just listing design goals of the control system
-
merope
Which are the whole point of control theory
-
hyc
drones are hardly a comparable system
-
hyc
unless you think they can cope with attackers messing with their airflow
-
merope
Aka wind
-
hyc
wind is more uniform than that
-
merope
And yes they can
-
merope
lmfao
-
merope
turbulence would like to have a word with you
-
hyc
all the turbulence in the world can still be plotted as sine waves
-
hyc
whereas the games miners can play would be plotted as square waves
-
hyc
good luck with your derivatives for infinite slope
-
merope
I guess you've just proven that fourier transforms of square waves don't exist, and that digital control systems don't know how to handle them
-
merope
( /s, in case it's not clear)
-
hyc
lol
-
hyc
well in that case I guess the problem's solved. next topic.
-
merope
Dude
-
merope
You're the one who keeps listing design parameters of the control system as if they are impossible things that nobody ever could solve
-
hyc
if it were so simple it would have been done by now
-
merope
Possible != simple
-
hyc
theoretically possible also != feasible in practice
-
merope
And since we have someone proficient in control theory in the room, I would like to hear their opinion
-
hyc
particularly with imperfect information
-
hyc
essentially what you want is a low-pass filter, to damp down high frequency oscillations. in terms of an fft, your data samples are individual blocks.
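[editor's note] hyc's low-pass framing, as a minimal sketch: a moving average over recent block times is exactly such a filter, and the window size sets the smoothing-versus-lag tradeoff (numbers below are illustrative):

```python
# A moving average over recent block times is a low-pass filter; the
# window size sets how much it smooths versus how slowly it reacts.

def smoothed_block_time(block_times, window):
    recent = block_times[-window:]
    return sum(recent) / len(recent)

# A sudden hashrate drop doubles block times (120 s -> 240 s); a short
# window sees it immediately, a wide window lags behind:
times = [120.0] * 30 + [240.0] * 5
```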
-
hyc
so again, you need a relatively large window to have enough samples to do a meaningful computation
-
hyc
and we don't even have reliable clocks
-
merope
It's as if you've never heard of the concept of "control robustness"
-
hyc
come back when you figure out how large the sample window needs to be...
-
merope
sigh
-
Rucknium[m]
You'll need to introduce game theory into the mix in some way -- which is feasible, since game theory has been thoroughly studied.
-
Blackwolfsa
Hi, kayabaNerve_ Here is a link to the scripting in Tari:
rfc.tari.com/RFC-0201_TariScript.html
-
Blackwolfsa
Most of our current stuff is the RFCs.
-
Blackwolfsa
Might I enquire about the misunderstandings and inaccuracies? We would love to fix them.
-
maxwellsdemon[m]
what is the question, specifically?
-
merope
<merope> "monero.stackexchange.com..." <- If you could implement a difficulty adjustment algorithm better than the current one ^, which doesn't handle fast changes in network hashrate (particularly drops) very well and is prone to oscillations. The difficulty adjustment algorithm is what ensures that miners produce new blocks at a roughly constant rate (target time: 2 minutes) as the hashrate of the network increases or decreases over time
-
merope
(or stays stable)
-
merope
In other words: the goal is that new blocks show up every 2 minutes regardless of how many people are mining (whether it's 1 MH/s or 100 GH/s or whatever). The catch is that we have no way of knowing what's the real hashrate that's actually mining - we can only make inferences by looking at the timestamps of the previous blocks found
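[editor's note] The inference merope describes can be made concrete: at difficulty D and hashrate H the expected block time is D/H seconds, so a hashrate estimate is D divided by the observed mean block time (toy numbers, and timestamps are assumed honest here):

```python
# Hashrate is inferred, not observed: expected block time at difficulty D
# and hashrate H is D/H seconds, so H ~ D / mean observed block time.

def inferred_hashrate(difficulty, timestamps):
    """Estimate hashes/second from block timestamps (oldest to newest)."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return difficulty / (sum(intervals) / len(intervals))

ts = [0, 120, 240, 360, 480]   # five blocks, exactly on the 120 s target
```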
-
merope
(which are not always strictly accurate, and could be potentially manipulated by an attacker trying to mess with the system)
-
merope
Right now, when the hashrate drops by a significant amount all of a sudden, block times become longer than 2 minutes for a while (because the difficulty can't adjust yet, because no blocks are being found, because difficulty hasn't adjusted...)
-
merope
(not that they aren't being found, but they aren't being found fast enough for the adjustment to kick in)
-
maxwellsdemon[m]
thats something i can look into, yes. My first question to you would be what type of behavior is needed/what are the performance requirements
-
hyc
consider it wishlist, not requirements
-
hyc
nominally, blocks should be found every 2 minutes
-
hyc
that cannot be assured in the face of sudden hashrate drops, but it would be nice to recover more quickly than currently
-
hyc
the mining difficulty is calculated based on the preceding blocks timing
-
hyc
a miner gets this difficulty value and then goes to work on trying to solve a block meeting that difficulty
-
hyc
if the actual network hashrate suddenly drops, then there won't be enough miners for the difficulty value that was handed out to them
-
hyc
so the next block will take much longer than the nominal 2 minutes
-
hyc
to actually fix this requires the network/nodes to recognize that the next block is taking "too long" and compute an easier difficulty value
-
hyc
and requires the miners to also recognize "too long" time has passed, and query the network/nodes for a new difficulty value that's more reflective of the current hash power
-
hyc
^ that's the ideal solution. not feasible in reality.
-
merope
wait, are you talking about adjusting the difficulty before the next block is found? (ie. "nobody has found a block in 10 minutes, so let's reduce difficulty for everyone"?)
-
hyc
that would be the ideal solution, yes. but as I said, not possible.
-
moneromooo
It is theoretically possible I think.
-
merope
<merope> "Issue #50 shows some interesting..." <- I mean, technically there *is* this solution mentioned in issue #50 that I linked - which only requires the timestamp of the origin block and the current timestamp
-
moneromooo
You could maybe allow blocks with difficulty 50% of the target one, which would only give, say, 20% of the block reward. And possibly have some rule which gets them orphaned easier than "real" blocks.
-
merope
So if blocks are found faster than expected, difficulty rises exponentially, and if no blocks are found for a while then it decreases exponentially
-
merope
But more generally speaking, it's simpler to implement a system based on the timestamps of the previous blocks found
-
merope
That way, if the last N blocks are consistently taking longer than average, you might consider reducing the difficulty accordingly
-
hyc
with mooo's idea we could just implement a smooth decay. at 2 minutes + N time, difficulty is decreased by N%
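[editor's note] hyc's smooth-decay rule, taken literally as a sketch (the units and the floor are guesses, since the one-liner doesn't pin them down):

```python
# Smooth decay: once a block is N minutes overdue past the 2-minute
# target, advertise a difficulty reduced by N percent, floored at zero.

def decayed_difficulty(base_difficulty, seconds_since_last_block):
    overdue_minutes = max(0.0, (seconds_since_last_block - 120) / 60)
    factor = max(0.0, 1.0 - overdue_minutes / 100)
    return base_difficulty * factor
```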
-
merope
(Keeping in mind that block arrivals are a Poisson process, so individual block times follow an exponential distribution)
-
moneromooo
(in the absence of adversaries)
-
sech1
timestamps can be fake
-
hyc
yeah I wouldn't base it on block timestamps, but on node receipt time
-
UkoeHB
Pretty sure that's a dead end with the existing consensus model. You need SCP or a similar model based on trust networks to reach consensus on timestamps.
-
moneromooo
Node receipt time can't be used to determine new diff since they're jiggly (and even worse for historical sync).
-
hyc
hm good point
-
moneromooo
The "low diff block" thing is "kinda" that: a timestamp that nodes do agree with, that comes before the "real" full diff block.
-
moneromooo
50% being of course a placeholder amount.
-
hyc
if the real hashrate has dropped tho, the "real" full diff block may be hours away
-
maxwellsdemon[m]
what youre talking about is essentially an exponentially weighted FIR filter
-
maxwellsdemon[m]
its definitely doable, stability will be a big concern. you can implement a recursive filter which will have a smoother performance, but it can "blow up"
-
maxwellsdemon[m]
its certainly an interesting problem. a good first start would be getting access to data sets that emulate the hash rate characteristics
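[editor's note] The FIR-versus-recursive tradeoff maxwellsdemon describes, in miniature: a finite window of past samples cannot diverge, while a recursive (IIR) update is cheaper and smoother but blows up once the feedback coefficient leaves the stable range |a| < 1 (toy data, illustrative only):

```python
# FIR vs recursive (IIR) smoothing: a finite window cannot diverge,
# while the recursive update y = a*y + (1-a)*x can if |a| >= 1.

def fir_average(samples, weights):
    """Weighted average of the most recent len(weights) samples."""
    return sum(w * s for w, s in zip(weights, samples[-len(weights):]))

def iir_step(prev_output, sample, a):
    return a * prev_output + (1.0 - a) * sample

data = [120.0] * 100          # a constant 120 s block-time signal
stable, unstable = 0.0, 0.0
for x in data:
    stable = iir_step(stable, x, a=0.9)      # |a| < 1: converges to 120
    unstable = iir_step(unstable, x, a=1.1)  # |a| > 1: diverges
```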
-
Rucknium[m]
Note BCH's ASERT difficulty adjustment algorithm. It was implemented in BCH in Nov 2020:
-
Rucknium[m]
-
Rucknium[m]
-
maxwellsdemon[m]
another approach would be to use a least squares filter, which is very stable, but can be slow if you have crappy hardware
-
maxwellsdemon[m]
i would recommend taking an iterative approach - start with a simple solution and refine through design cycles
-
Rucknium[m]
Slow if you have poor hardware? How?
-
hyc
I suppose you can use block timestamps from late 2017-mid 2018, since we had massive hashrate fluctuations around then due to ASICs
-
Rucknium[m]
This is as much of an economics problem as an engineering problem, by the way.
-
maxwellsdemon[m]
you have to compute a least squares solution each time step
-
maxwellsdemon[m]
requires matrix inversion, it can be slow
-
maxwellsdemon[m]
it can also be poorly conditioned, which means you have a large difference in eigenvalues of your system. the result is a solution that might numerically diverge
-
LyzaL
what's the problem trying to be solved i.e. what do we gain from a more predictable block time? I would imagine throughput is one advantage resulting from fewer empty blocks
-
maxwellsdemon[m]
Rucknium[m]: it is I agree. essentially we are making decisions based on filtered data, which by its nature means a loss of information
-
maxwellsdemon[m]
see hyc's reply to me
-
maxwellsdemon[m]
i think this might be a good thing for the community to have: hash rate data as a time series, maybe other variables too
-
maxwellsdemon[m]
good for devs anyway
-
Rucknium[m]
maxwellsdemon: Aren't there very efficient methods of OLS [ordinary least squares]? QR factorization, etc? And we are not talking about very many observations here
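[editor's note] For the tiny regressions being discussed (e.g. a trend line through recent block times), least squares has a closed form and needs no explicit matrix inversion at all; QR-type factorizations matter for larger, possibly ill-conditioned systems. A hypothetical example:

```python
# Closed-form ordinary least squares for a line fit: no matrix
# inversion required for the two-parameter case.

def ols_line(xs, ys):
    """Fit y = intercept + slope*x by least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Block times drifting upward 1 s per block on top of a 120 s base:
xs = list(range(20))
ys = [120.0 + 1.0 * x for x in xs]
intercept, slope = ols_line(xs, ys)
```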
-
hyc
the monero-blockchain-stats utility will give you the historical difficulty
-
hyc
though perhaps not as fine grained as you'd want
-
maxwellsdemon[m]
there are many choices
-
maxwellsdemon[m]
Rucknium[m]: if you can avoid matrix inversion that is preferred. lots of ways to approach the problem
-
Rucknium[m]
If we need difficulty of every block, then neptune can probably provide it.
-
zkao
i would look at the rawest data possible, the discrete events of block arrival times, if enough trusted community members have the data logged
-
zkao
and compute the distributions by convolving that with some half-Gaussian kernel, or similar functions, so that the future doesn't contaminate the past
-
Rucknium[m]
Historical data is going to be of limited use. Change the system and you change the behavior. Still useful, of course.
-
merope
<maxwellsdemon[m]> "another approach would be to use..." <- How slow are we talking? A few milliseconds, or several seconds?
-
merope
Keeping in mind that the base process is: compute difficulty from the last block, start mining, someone finds a new block, everyone recomputes difficulty, start mining again, and so on
-
hyc
if historical block timestamps are too unreliable then you just have to run a testnet with real miners
-
merope
<hyc> "I suppose you can use block..." <- Also the two big hashrate spikes earlier this year
-
Rucknium[m]
Everyone who syncs the blockchain with monerod will have to verify that the difficulty of each block was calculated correctly, though, right? So speed could be a concern even if it was very fast.
-
hyc
yes
-
zkao
how can you make a reliable estimate of a poisson process without having a large window of observation?
-
hyc
you can't. ... will always need a large sample window ...
-
Rucknium[m]
Shrinkage estimator.
-
Rucknium[m]
And now we are on to statistics! My favorite 😍
-
Rucknium[m]
With a good shrinkage estimator you could do it.
-
maxwellsdemon[m]
hyc: you can use recursive methods that only depend on the output of the filter, requiring little past information. A numerical integrator is the most simple example. Again, they can be unstable, which i think is very important for this use case
-
hyc
indeed
-
maxwellsdemon[m]
im also not using autocorrect on my phone, which is why my typing looks like shit
-
maxwellsdemon[m]
i dont trust it to not send data
-
Rucknium[m]
A shrinkage estimator is not unstable. In fact, one could argue that it is the opposite.
-
Rucknium[m]
A shrinkage estimator will in general reduce variance at the cost of increased (finite-sample) bias.
-
maxwellsdemon[m]
im not familiar with a shrinkage estimator
-
maxwellsdemon[m]
what does it do
-
moneromooo
At a guess, it estimates the age of shrinks ?
-
» moneromooo flees
-
Rucknium[m]
Shrinkage estimators are a broad class of estimators. Bayesian estimators can be thought of as a type of shrinkage estimator, for example.
-
Rucknium[m]
Basically, it "shrinks" the estimate toward some value that is not contained in the observational data itself. If the data is uninformative, the estimate will be close to that value that is outside of the data. If the data is informative, the estimate will be close to what a more standard "frequentist" estimator would yield.
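[editor's note] Rucknium's description as a formula: a convex combination of a prior value and the sample mean, with the weight on the data growing as observations accumulate. The prior (the 120 s protocol target) and the weighting rule below are illustrative choices, not a specific published estimator:

```python
# Shrinkage toward a prior: with little data the estimate stays near the
# prior; with lots of data it approaches the plain sample mean.

def shrunk_mean(samples, prior=120.0, prior_strength=10.0):
    """Convex combination of prior and sample mean."""
    n = len(samples)
    sample_mean = sum(samples) / n if n else prior
    w = n / (n + prior_strength)
    return w * sample_mean + (1.0 - w) * prior

few = [240.0] * 2       # little data: estimate stays near the prior
many = [240.0] * 200    # lots of data: estimate nears the sample mean
```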
-
zkao
Rucknium[m]: how much data do you need for these real time estimates?
-
Rucknium[m]
The Wikipedia page is fine:
-
Rucknium[m]
-
zkao
how many events?
-
zkao
u guys over estimate how shit reality is haha
-
Rucknium[m]
zkao: What do you mean by this comment? I work with empirical data almost every day.
-
Rucknium[m]
zkao: I don't know for now. It depends on myriad factors.
-
zkao
the block rate is very low, Rucknium[m]
-
Rucknium[m]
Compared to bitcoin it is five times faster.