-
br-m
<ofrnxmr> Doge, zcash, any other PoW coin - they don't actually care about pool dominance
-
br-m
<ofrnxmr> Isn't like 70% of BTC hashrate all the same pool operator behind the curtain?
-
br-m
<ofrnxmr> If qubic, or fed.gov became a dominant pool, none of these miners would care. They already mine on compliant pools and censor txs
-
br-m
<ofrnxmr> Attacking monero only makes noise, because 1) our txs can be invalidated 2) we care about pool dominance
-
br-m
<ofrnxmr> This attack wouldnt work on zcash, because they dont give a shit that 75% of hashrate is on 1 pool
-
DataHoarder
22:50:37 <br-m> <privacyx> It appears the largest monero pools like supportxmr and nanopool have teamed up to fight against Qubic selfish mining by selfish mining
-
DataHoarder
no, you can see that yourself with logs
-
DataHoarder
I found their self-made FUD funny, so I also added stratum tracking for other pools to tips.txt
-
DataHoarder
They also seem to misunderstand that timestamps are entirely miner-set; within p2pool it wasn't uncommon to see people with clocks more than six hours apart (future/past)
-
DataHoarder
I think we now limit this to like 30m or so as it was making effectively invalid blocks at that point
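Since timestamps are miner-set, nodes can only bound them loosely. A minimal sketch of that kind of sanity check, assuming a ~30-minute future-drift bound and a Bitcoin/Monero-style median rule (illustrative values, not the actual p2pool or monerod constants):

```python
import time

MAX_FUTURE_DRIFT = 30 * 60  # assumed bound in seconds (the ~30m mentioned above)

def timestamp_acceptable(block_timestamp, median_of_recent, now=None):
    """Reject blocks whose miner-set timestamp is too far in the future,
    or does not advance past the median of recent block timestamps."""
    if now is None:
        now = int(time.time())
    if block_timestamp > now + MAX_FUTURE_DRIFT:
        return False  # miner clock too far ahead of ours
    if block_timestamp <= median_of_recent:
        return False  # must move past the median of previous timestamps
    return True
```

Without such a bound, a miner six hours in the future would still produce blocks every node accepts, which is why the stricter limit was added.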
-
DataHoarder
You need to explicitly keep track of when blocks and alt chains arrive on your node(s), and for that you need higher monero verbosity. nioc's explanation of it being sluggish in broadcasting is correct
-
DataHoarder
Qubic also has a very delayed switch time that causes their blocks to naturally orphan more
-
br-m
<kerenken:matrix.org> How is p2pool orphaning qubic?
-
DataHoarder
qubic is either not finding enough blocks to be able to make a longer altchain in time, or it's hit by their own self-made severe transmission delay
-
DataHoarder
Orphaning is not free - you need longer chains (or well
-
DataHoarder
more work)
-
DataHoarder
In this case qubic was one block ahead, but then Monero found two in a row before qubic could broadcast a longer chain
-
DataHoarder
If you follow tips.txt it mentions when either side needs more blocks to orphan the other
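The reorg rule above can be sketched in a few lines: a node only switches when the alternative chain carries strictly more cumulative difficulty, so a withheld chain that falls behind on work is orphaned regardless of when it is broadcast. The numbers below are illustrative, not real difficulties:

```python
def should_reorg(current_tip_cumdiff, alt_chain_cumdiff):
    """Simplified sketch: adopt the alt chain only if its cumulative
    difficulty strictly exceeds the current tip's."""
    return alt_chain_cumdiff > current_tip_cumdiff

# Scenario from the log: the attacker is one block ahead (2 withheld blocks
# vs 1 honest), but the honest chain then finds two more before the attacker
# can broadcast, so the withheld chain no longer has the most work.
honest_chain = [100, 100, 100]   # per-block difficulty after the fork point
withheld_chain = [100, 100]
```

With equal per-block difficulty, "longest" and "heaviest" coincide, which is why over short windows the two rules agree.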
-
br-m
<barthman132:matrix.org> Aaajww291: It takes months for him to actually make that change. The fact that our pools are orphaning his blocks now probably isn't helping him, but because the barrier of entry to attack monero is low, it probably won't matter anyway
-
br-m
<privacyx> Not sure if this has been suggested or discussed, but instead of purely "longest chain wins," nodes could incorporate propagation speed into tie-breaking decisions.
-
br-m
<privacyx> Or heaviest chain wins (e.g. total difficulty)
-
br-m
<privacyx> I was reading this paper about defense systems to prevent or reduce risk of selfish mining & double spend attacks
-
plowsof
That one's "a dud" sadly
-
plowsof
tevador and Rucknium left comments about it in the prev meeting
-
br-m
<privacyx> Oh I missed it thanks
-
nioc
DataHoarder: site is behind 200+ blocks and not updating
-
DataHoarder
nioc: indeed, lemme see
-
DataHoarder
nioc: some upstream monerod had died, switched to local. upstream ZMQ kept running but I couldn't use RPC :D
-
DataHoarder
10:34:11 <br-m> <privacyx> Or heaviest chain wins ( eg. total difficulty)
-
DataHoarder
this is effectively already done, not longest
-
DataHoarder
within short periods longest = heaviest
-
DataHoarder
next qubic marathon: Thursday 12:00 UTC
-
DataHoarder
I made a $5/m setup on DO :D
-
DataHoarder
hopefully that runs some of the reporting, no idea if that can run xmrconsensus @rucknium?
-
br-m
<rucknium> You mean you want to run another public-facing xmrconsensus instance?
-
DataHoarder
to expose the one that I have with qubic tagged, unless you have time to improve what you're missing to include it on yours
-
DataHoarder
maybe I should cache the last generated image instead and just display that on a page :D
-
br-m
<rucknium> IIRC, xmrconsensus doesn't consume a lot of RAM. CPU usage is low if number of users is low. You would want to use shiny-server to serve it publicly.
-
br-m
<rucknium> Just checked top. The R process of moneroconsensus.info is using 0.8 GB of RAM (with ??? users). shiny-server is using 0.1 GB of RAM.
-
DataHoarder
very nice then
-
br-m
<rucknium> You need access to an unrestricted RPC port
-
br-m
<rucknium> The URL of the unrestricted RPC port is configurable, but you need unrestricted.
-
DataHoarder
yeah, I run one of my instances already but it's on a "bloated" spec wise system
-
DataHoarder
I had R corrupt the db a couple of times by having two sessions open...
-
br-m
<rucknium> If you want to run it with shiny-server, you would edit line 7 of this file:
github.com/Rucknium/xmrconsensus/blob/main/app.R#L7
-
DataHoarder
I run it manually via $ xmrconsensus::run_app(options = list(launch.browser = FALSE, port = 8474, host = "0.0.0.0"))
-
br-m
<rucknium> I mean, if you wanted to change the default URL of the unrestricted RPC and you wanted to run it in shiny-server
-
br-m
<rucknium> If you're using Digital Ocean, it looks like there is already a Droplet with shiny-server :
deanattali.com/2015/05/09/setup-rstudio-shiny-server-digital-ocean
-
DataHoarder
eh, I can install the needed things :D
-
DataHoarder
woah that's a long tutorial
-
br-m
<rucknium> It's single-threaded, so the worst it can do is lock up a single CPU thread.
-
DataHoarder
good! it has a single CPU core :)
-
br-m
<rucknium> DataHoarder: Remember, it's written for scientists. Every little thing needs explaining.
-
DataHoarder
yeah, I noted that with the RStudio line "Great, R is working, but RStudio has become such an integral part of our lives that we can’t do any R without it!"
-
DataHoarder
I opened the R code on jetbrains code IDE :DD
-
DataHoarder
I remember implementing some state-of-the-art paper on complex audio fingerprinting (already with amazing matching and runtime performance) in "non-scientist code", and not only did I find bugs in the original implementation, but I also made it two orders of magnitude more efficient runtime-wise, and the bug fix improved matching!
-
DataHoarder
or "yeah, I don't have a gamma function available in my toolset", so I ended up clamping to positive integers, using factorials, and adjusting the algorithm to only work on integers instead of complex or real numbers
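The factorial substitution works because the gamma function reduces to Γ(n) = (n−1)! at positive integers - a quick sketch of that trick:

```python
from math import factorial, gamma

def gamma_int(n):
    """Gamma restricted to positive integers: gamma(n) = (n-1)!.
    Avoids needing a full gamma implementation when the algorithm
    only ever evaluates it at integer points."""
    if n < 1:
        raise ValueError("only defined for positive integers here")
    return factorial(n - 1)

# Agrees with the real gamma function at integer points, e.g. gamma(5) = 4! = 24
assert gamma_int(5) == int(gamma(5))
```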
-
DataHoarder
anyhow, thanks rucknium I'll try to get one of these running but then only display online the last cached image
-
br-m
<rucknium> Shiny has pretty easy-to-implement caching if you want to display the whole app. Then just disable the different display options so everyone sees the same view.
-
DataHoarder
all I see as a dev is a websocket call sending 3 MiB of png data to every user :D
-
DataHoarder
I'll extract that as a standalone png, and make webp/avif/png versions to serve clients optimally in a page that refreshes
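Serving the optimal variant usually comes down to simple Accept-header negotiation. A hypothetical sketch (not the actual observer code), assuming avif/webp/png variants are pre-generated and cached:

```python
def pick_image_format(accept_header):
    """Choose the best pre-generated image variant the client advertises
    support for. Preference order: avif > webp > png (universal fallback)."""
    accepted = accept_header.lower()
    for mime, ext in (("image/avif", "avif"), ("image/webp", "webp")):
        if mime in accepted:
            return ext
    return "png"

# Browsers list supported formats in the Accept header of image requests,
# so the server (or a CDN worker) can route to the smallest cached file.
```

Because the response varies on the Accept header, the cache (browser or CDN) should be told via `Vary: Accept` so each client keeps getting its own variant.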
-
DataHoarder
and images will get just cached by browsers and/or buttflare
-
DataHoarder
rucknium (and for others)
qubic-snooper.p2pool.observer/tree page refreshes every 30s or so automatically
-
br-m
<rucknium> Very nice :)
-
br-m
<rucknium> You can display the timestamps (if they aren't too verbose for you) by pulling in the latest: git pull.
-
DataHoarder
I'm already reaching close to the max length I can encode on webp :D
-
DataHoarder
I might split the image in chunks to allow that
-
DataHoarder
had to lower desired pixel density from 2x to 1.5x :D
-
DataHoarder
originally I displayed 250 entries as well on my instance
-
DataHoarder
image data is cached by cloudflare, so it's nice
-
DataHoarder
all I'm doing is send websocket jobs to xmrconsensus and handling data, or sending refresh jobs if it doesn't send new ones
-
br-m
<ofrnxmr:xmr.mx> It's pretty much a design flaw that reorgs permanently invalidate decoys (aiui, due to using output indexing)
-
br-m
<ofrnxmr:xmr.mx> anyway, this is also an issue for fcmp
-
br-m
<ofrnxmr:xmr.mx> Arguably worse for fcmp
-
br-m
<ofrnxmr:xmr.mx> > if a pool tx's FCMP++ reference height is >= 10 current chain tip [...] the daemon could end up needlessly trying to re-validate a tx that would never re-validate correctly.
-
br-m
<chaser> does the Generalized Schnorr Protocol in the FCMP++ SA/L proofs require special support from hardware devices, or is Ed25519 support sufficient?
-
br-m
<jeffro256> Hardware devices will have to run SA/L-specific code, like they do for CLSAG signing, but the hardware requirements are not prohibitively expensive; instead they are comparable to 16-member CLSAG signing