-
mightysnowman[m]
Wait is it possible to use p2pool with remote daemon?
-
DataHoarder
yes
-
sech1
you need a compatible remote daemon though
-
DataHoarder
21 outputs on the last block
-
sech1
and I'm still unlucky: Current effort = 366.184%
-
sech1
even 22 outputs in 2448328
-
DataHoarder
sech1: 23 on last block!
-
Inge
sech1: sorry about that. I stole your payouts
-
wfaressuissia
anybody here using p2pool with local monerod from home ?
-
wfaressuissia
How much out/in bandwidth do you have there ?
-
sech1
it depends on how many peers you have
-
sech1
3-4 KB/s per peer outgoing bandwidth
-
wfaressuissia
How much out/in network bandwidth do you pay for to have at home ?
-
sech1
incoming bandwidth is 9-10 times smaller
-
sech1
I have a fixed-price 100 Mbit plan
-
DataHoarder
wfaressuissia: no bandwidth limits here
-
DataHoarder
10G/10G too
-
DataHoarder
should be minimal, and you can reduce the bandwidth of monerod/p2pool by just limiting the number of outgoing connections
-
wfaressuissia
I'm interested in lower boundary for bandwidth that is someone could have with 5KH/s hashrate
-
DataHoarder
if your hashrate is local, you won't be seeing much traffic
-
wfaressuissia
* ... that is available to someone with ...
-
sech1
if you don't open ports for incoming connections and use pruned node (it reduces bandwidth), I think you could get away with 100-200 Kbit outgoing
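A low-bandwidth monerod invocation along those lines might look like this (a sketch only: the flags exist in current monerod, but check `monerod --help` for your version, and whether `--in-peers 0` fully refuses incoming connections is an assumption here):

```shell
# sketch: pruned node, no UPnP port mapping, few outgoing peers,
# no incoming connections
./monerod --prune-blockchain --no-igd --out-peers 8 --in-peers 0
```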
-
DataHoarder
lots of blocks this morning found by p2pool
-
DataHoarder
checked some logs, with pruned node with 100+ connections and p2pool with more hash than that, even remote, it's 50-80KB/s
-
Inge
heh. non-pruned node and p2pool running for 18 hours shows 2.5GB in and 13.7GB out
-
DataHoarder
includes initial sync right Inge ?
-
DataHoarder
wait no
-
DataHoarder
saw 250GB in, need more sleep
-
Inge
hehe
-
wfaressuissia
10G == 10Gbps? are you living in a data center?
-
DataHoarder
no, just residential cheap internet wfaressuissia
-
sech1
he IS data hoarder after all
-
DataHoarder
ah, and a free 1G backup ISP
-
DataHoarder
some countries have better internet that is all :)
-
Inge
not so long ago it was 18Mbps/0.8Mbps here
-
Inge
now you can get 500/500 most places
-
pauliouk
meh, my ISP 'upgraded' our internet last week. All they did was reflash the router with a new image to get rid of the 4-year-old vulnerabilities, and set it to 100Mb/s down, 20Mb/s up. Was all good, until I realised they've made the ethernet DHCP release IPs every hour :/
-
DataHoarder
yeah, don't use any ISP router
-
pauliouk
oh and the connection itself is as stable as a chihuahua on crack.
-
pauliouk
cable modem :/
-
DataHoarder
not like security is necessary given the private tx key for the coinbase is known, but it uses the Mersenne Twister, an alternate version (mt19937_64)
-
QuickBASIC
moneromooo: "Maybe this channel could be split between dev type interesting stuff and the mining banter." I suppose you might've gotten your wish since DataHoarder's bot is in #p2pool-log I imagine most of the "banter" will go on there.
-
DataHoarder
(I created that to place bot output, can move things elsewhere, if another more "official" channel gets created)
-
moneromooo
Sounds good, thank you.
-
\x
.tell hyc rumored 12600k
lewd.pics/p/GqtP.png
-
\x
oh
-
\x
hyc: rumored 12600k
lewd.pics/p/GqtP.png
-
URoRRuRRR[m]
Is it possible to get the names of the workers from p2pool? I have one that has disappeared and being able to identify it would be nice
-
DataHoarder
don't think so URoRRuRRR[m], but you should be able to put xmrig-proxy in front of it
-
DataHoarder
(and change ports, so for example p2pool uses stratum 3332, xmrig-proxy now uses 3333 and takes connections, then xmrig-proxy connects to p2pool)
-
DataHoarder
that way you can just use that instead :)
-
DataHoarder
no need to change config on workers just yet
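The port shuffle described above could look like this (a sketch: p2pool's `--stratum` flag is real, but the xmrig-proxy flag names are from memory, so double-check `xmrig-proxy --help`, and `YOUR_ADDRESS` is a placeholder):

```shell
# p2pool's stratum server moved off the default port to 3332
./p2pool --host 127.0.0.1 --wallet YOUR_ADDRESS --stratum 0.0.0.0:3332

# xmrig-proxy listens on the usual 3333 and forwards to p2pool;
# workers keep pointing at 3333 and the proxy tracks their names
./xmrig-proxy --bind 0.0.0.0:3333 -o 127.0.0.1:3332
```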
-
URoRRuRRR[m]
great idea! thanks
-
sech1
we just missed a block because someone was mining to a lagging monerod instance:
paste.debian.net/hidden/6fac52c9
-
gingeropolous
one of those IPs there?
-
sech1
185.230.112.148
-
sech1
mined Monero block 2448479, but the network was already at height 2448484
-
gingeropolous
any way to create countermeasures for this?
-
sech1
people need to fix their damn Monero nodes :D
-
gingeropolous
my morning caffeine brain can't make sense of how a single mining node can mine an old block and make the whole pool go stupid
-
DataHoarder
XvB it seems sech1 again
-
gingeropolous
like, wouldn't my node hear the news of this block and go "no way dude, you're behind, I'm not dealing with you" and then ignore them for a bit or something
-
sech1
yes, if it's more than 5 blocks behind
-
DataHoarder
hadn't heard from them for a while, then all their blocks come at once
paste.debian.net/1211578
-
sech1
but it was exactly 5 blocks behind
-
DataHoarder
has happened a few times across the week
-
gingeropolous
but how did we *miss* a block?
-
sech1
because it was too late to submit it
-
gingeropolous
did that old block just force the rest of us to think that we should be on that .... oh
-
sech1
XvB's node soft-forked after that
-
DataHoarder
submitted height H when monero was at height H + 5
-
sech1
it might be one of the reasons why we have more than 100% average effort
-
gingeropolous
perhaps it should be 3 blocks and not 5 ... .shrugemoji
-
sech1
I didn't have this detection in logs until recently
-
DataHoarder
side chain at 32 MH/s
-
sech1
right now we have 108.91% average effort over 94 blocks that have effort tracked
-
sech1
probability of this happening naturally is 22%
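One way to sanity-check that 22% figure (my own check, not sech1's code): if each block's effort is an independent exponential with mean 100%, the total effort over n blocks is Erlang-distributed, and the Erlang survival function can be evaluated as a Poisson CDF.

```python
import math

def luck_probability(blocks: int, avg_effort: float) -> float:
    """P(average effort >= avg_effort over `blocks` blocks), under the
    null model where each block's effort is an independent Exp(1) draw.

    Total effort S is Erlang(blocks, 1), and for integer shape
    P(S >= x) = P(Poisson(x) <= blocks - 1).
    """
    x = blocks * avg_effort
    term = math.exp(-x)          # Poisson pmf at i = 0
    total = term
    for i in range(1, blocks):
        term *= x / i            # pmf(i) = pmf(i-1) * x / i
        total += term
    return total

# 108.91% average effort over 94 blocks lands in the ballpark of
# the ~22% quoted above
p = luck_probability(94, 1.0891)
```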
-
sech1
I suspect we missed a few Monero blocks because of this
-
sech1
or maybe we just had bad luck
-
DataHoarder
I wonder if it's something specific to xmrvsbeast[m]'s node or we just notice it because of the hashrate accumulation
-
sech1
He needs to run monero with proper command line to avoid lagging
-
sech1
./monerod --rpc-bind-ip 0.0.0.0 --rpc-bind-port 18081 --confirm-external-bind --zmq-pub tcp://0.0.0.0:18083 --out-peers 32 --in-peers 32 --add-priority-node=node.supportxmr.com:18080 --ban-list block.txt
-
sech1
well, bind/external bind part can be skipped
-
sech1
I use external bind to connect p2pool remotely, but only for whitelisted IPs
-
pauliouk
hmm, is it worth setting up a centralised monerod side chain? :D
-
hyc
another 24hrs no shares
-
sech1
hyc same
-
DataHoarder
side chain hash rate is low atm, time to get some in
-
DataHoarder
(and not get blocks mined!)
-
hyc
is this just a delayed effect of the hashrate being 60MH+ earlier?
-
sethsimmons
Is the important thing here the `- "--add-priority-node=node.supportxmr.com:18080"`?
-
sech1
it's supposed to be a well-connected and fast node
-
sech1
so at least 1 peer will be good from the start
-
sech1
ban list is also important
-
sethsimmons
Yeah, I use ban lists on all nodes
-
DataHoarder
in my compose file I added a few known ones indeed
-
sech1
-
hyc
sech1: hmmm, logger is not detecting that logrotate moved the logfile
-
hyc
so it's still writing to the old one
-
pauliouk
I just got a load of 2021-09-13 12:16:02.373 E Transaction not found in pool
-
pauliouk
-
pauliouk
well I say just, about an hour ago
-
hyc
I think you should add a check of st_ino to make sure it's actually a different file
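hyc's `st_ino` suggestion, sketched in Python for brevity (the real fix would be in p2pool's C++): compare the inode behind the open descriptor with the inode currently at the log path; they diverge once logrotate has renamed the file.

```python
import os

def logfile_was_rotated(path, fh):
    """True once logrotate has moved the file our handle points at.

    After rotation, `path` either doesn't exist yet (with nocreate)
    or is a fresh file whose inode differs from the open descriptor's.
    """
    try:
        return os.stat(path).st_ino != os.fstat(fh.fileno()).st_ino
    except FileNotFoundError:
        return True  # moved and not recreated yet
```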
-
sech1
-
hyc
right
-
sethsimmons
`2021-09-13 13:27:52.4625 P2Pool BLOCK FOUND: main chain block at height 2448496 was mined by this p2pool`
-
sethsimmons
There we go :)
-
pauliouk
2448496 was mined by this p2pool
-
pauliouk
:_
-
hyc
I think the problem is that logrotate creates a new logfile so this existence check always succeeds
-
pauliouk
now if only I could get some shares :D
-
sech1
hyc so is that code correct?
-
hyc
not sufficient, no
-
hyc
I'll cook up a patch
-
DataHoarder
well at least we found another block now
-
DataHoarder
27 miners paid out, p2pool is growing
-
mandelbug
could do with finding a few more whilst I have 3 shares in the PPLNS window :D
-
QuickBASIC
Yeah my logrotate did something weird last night too. It started logging into p2pool.log.1 and left p2pool.log blank.
-
QuickBASIC
Is there currently any way to output the log somewhere other than the working directory? I'd like to put it in /var/log/p2pool/ for example, but right now it's going into the service account's home /var/lib/p2pool.
-
DataHoarder
links is a hack you can do
-
DataHoarder
so you get both
-
QuickBASIC
Yeah I ln it in there but I thought that's what messed up my logrotate since it's pointed at the hardlink.
-
hyc
ah ok, so add nocreate to the logrotate conf to prevent this screwup
-
hyc
easier than patching the p2pool code
-
hyc
QuickBASIC: just delete the current p2pool.log file. it's prob empty right now anyway
-
hyc
then p2pool will notice and switch to a new logfile
-
hyc
and for now, just use a startup script that cd's to /var/log/p2pool before running
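Putting hyc's advice into a config sketch (the path and rotation schedule are assumptions, not from this discussion):

```
# hypothetical /etc/logrotate.d/p2pool
/var/log/p2pool/p2pool.log {
    daily
    rotate 7
    compress
    # nocreate: let p2pool itself create the fresh logfile,
    # so its existence check doesn't mistake logrotate's empty
    # file for its own
    nocreate
}
```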
-
hyc
so, at what point do we figure the current sidechain is big enough, and tell people to configure a new sidechain of their own?
-
DataHoarder
then no one will find enough blocks on that other sidechain, and most shares be left unused :)
-
hyc
meh. eventually this sidechain will max out on addresses it can support. more sidechains will have to spring up, just like there are multiple pools today
-
QuickBASIC
hyc pretty sure sech1 said around 150 MH/s (5% of network hashrate), then it might be necessary. (MoneroTalk podcast).
-
QuickBASIC
The problem is going to be finding other small miners to join your side chain. I wonder if it would be helpful to have several named side chains in the documentation so at least you can pick one and not have to find other people.
-
QuickBASIC
i.e. Default, Default-Low
-
DataHoarder
can it max out on addresses hyc? it'll just average out and pay different shares each time
-
DataHoarder
like it already kind of does
-
DataHoarder
random idea, two chains, small, big. p2pool does both at once. Hashrate above a threshold after gets pushed towards big one, while below the threshold stays on small
-
DataHoarder
still will end up with same payouts anyhow
-
DataHoarder
and might confuse people using it
-
QuickBASIC
I think it's best to let people configure it manually or include a couple of defaults they can select b/c otherwise they might be mad they're on the "wrong" chain.
-
DataHoarder
they will always be on the wrong chain
-
DataHoarder
the other one will always either: mine more blocks, or give you more shares
-
QuickBASIC
Haha. I think I still have shares on the pool named 'default' that never got mined to a block b/c we switched to 'mainet test 2 electric boogaloo'.
-
QuickBASIC
So wrong chain indeed.
-
DataHoarder
nothing is stopping you from setting a password and mining 100% of the shares :>
-
DataHoarder
good luck finding a block though
-
QuickBASIC
No, someone was still mining on the 'default' chain so it never died and my shares were still in there, but I just checked again and it's dead now.
-
QuickBASIC
I was just joking that I got the shares on the "wrong" chain b/c we switched pools right after I got them lol.
-
sech1
Can anyone ping Cake Wallet on twitter? They still haven't updated to the latest Monero code
-
DataHoarder
maybe an Issue on GH?
-
sech1
When they update, I'll start preparing for p2pool release
-
sech1
oh, they have github
-
DataHoarder
-
DataHoarder
27 issues open, 5 closed
-
DataHoarder
PRs they do action on more
-
DataHoarder
but seems that they are their own
-
sech1
-
QuickBASIC
Apparently they follow me on twitter so i was able t o DM them the issue and ask.
-
sech1
Wallet support + xmrvsbeast[m] needs to fix this node to improve pool luck, and then I think it will be a good time to release
-
sech1
*fix his node
-
QuickBASIC
In case they're like me and don't see GitHub notifs until days later.
-
sech1
it will still have to use custom monerod at first, but I can include monerod binary
-
sech1
they ought to update to Monero v0.17.2.3 anyway because of other fixes there (decoy selection fixes)
-
sech1
no idea what takes them so long, Monerujo updated 5 days ago
-
DataHoarder
maybe sech1 wait till PR merge, so at least you are not using a "monerod fork" but a "dev monero build"
-
DataHoarder
no idea how long though
-
DataHoarder
-
DataHoarder
nice, but when
-
sech1
eventually (c)
-
DataHoarder
soon (tm)
-
sethsimmons
If you're using Matrix to join these chats, be sure to update your client immediately:
matrix.org/blog/2021/09/13/vulnerability-disclosure-key-sharing
-
sech1
so I checked logs and found 2 more blocks that were wasted:
paste.debian.net/hidden/80fb872d
-
sech1
3 blocks wasted just today, since I started logging it
-
DataHoarder
oh no :(
-
DataHoarder
maybe the notes should include a part about running a properly connected monerod
-
sech1
no wonder we linger aroung 110% average effort
-
sech1
*aroun
-
sech1
*around
-
sech1
all these blocks follow the same lag pattern (XvB's node desyncs and then sends a lot of shares)
-
DataHoarder
maybe it lags because it's checking the block very carefully
-
DataHoarder
maybe I can connect a few nodes to it and force in new blocks :)
-
sech1
it's something with that node - it desyncs from Monero network and after that all hashrate goes to waste
-
DataHoarder
I think the seed node I have connects with their node already
-
sech1
hmm, I think I'll add it as a priority node too
-
sech1
how do you know?
-
DataHoarder
cause I added it as priority
-
DataHoarder
(to monerod)
-
DataHoarder
an alternative: maybe it's p2pool that lags behind, but then XvB's monerod would submit it either way
-
wssh
failing to verify chains.
-
DataHoarder
unless it lags everywhere
-
DataHoarder
-
sech1
XvB's p2pool found that block and at that point it should've submitted it to monerod, but monerod was out of sync
-
jaska087
I think the xvb node is just bad
-
sech1
so that block was orphaned by Monero network
-
sech1
3 blocks in the last 24 hours, not sure about earlier days
-
sech1
but it's probably the case we're at 110%
-
jaska087
is xbv running their pool/p2pool node as public node too?
-
DataHoarder
seems like p2pmd.xmrvsbeast.com:18080 is not really open?
-
jaska087
multiple peeps accessing it could cause it to lag
-
sech1
it's better to separate public node and the actual mining node
-
sech1
all major pools do it to avoid DDoS
-
jaska087
^this
-
sech1
mining node must be on a fast machine, preferably a dedicated server, and limit in/out peers like this: --out-peers 32 --in-peers 32
-
sech1
or it'll end up with 553 in peers like mine did :D
-
DataHoarder
I have limits but higher than that heh
-
sech1
XvB's node might actually have more than 1000 peers and hit ulimit of 1024 open descriptors
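If that node runs monerod under systemd, a drop-in raising the descriptor limit is one possible fix (the unit name and path here are assumptions):

```
# hypothetical /etc/systemd/system/monerod.service.d/limits.conf
[Service]
LimitNOFILE=8192
```

For a node launched from a plain shell, `ulimit -n 8192` in the same session before starting monerod raises the soft limit for that process tree instead.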
-
DataHoarder
OH
-
DataHoarder
that could be interesting
-
sech1
553 is not far from 1024
-
wssh
restarting p2pool and monerod.
-
DataHoarder
what error do you get on logs wssh ?
-
wssh
just failing to verify chains.
-
DataHoarder
in monerod, or p2pool?
-
wssh
p2pool
-
wssh
and now p2pool crashes when trying.
-
DataHoarder
can't find that error message on p2pool code, weird
-
DataHoarder
you have free space on disk right?
-
wssh
tons.
-
wssh
3 300gb sas disks.
-
wssh
4*
-
sech1
paste your p2pool logs (10-20 lines before the crash) to
paste.debian.net
-
DataHoarder
you can always delete p2pool.cache I believe to force a resync from scratch, but I wonder what made that happen, so maybe don't delete it and just move it to p2pool.cache.old
-
wssh
https://paste.debian.net/1211615/
-
sech1
can you add "--loglevel 5" to p2pool command line and try again? I need more detailed logs
-
QuickBASIC
So question, is the only p2pool instance that transmits a solution to monerod the one that finds it or do other p2pool instances also broadcast it to their monerod?
-
DataHoarder
they also try to broadcast it
-
DataHoarder
as they get the sidechain block from other p2pool peers
-
QuickBASIC
Cool cool. Thank you.
-
DataHoarder
(this sometimes fails due to having unknown transactions on it, as transactions can take time to get across nodes)
-
sech1
which means XvB's node didn't broadcast these blocks when it found them, probably because it was lagging too much
-
sech1
lagging nodes wouldn't be a big problem on p2pool if all miners had the same hashrate
-
sech1
but lagging node with 96% hashrate is a problem
-
wssh
no crash this time.
-
QuickBASIC
So solution is adoption.
-
DataHoarder
(and also please fix your nodes so they perform as it matters)
-
DataHoarder
XvB is noticeable but smaller ones might also suffer if they lag and find shares that exceed uncle height diff
-
sech1
I've submitted a fix to make this issue less severe. It's recommended to update both binaries (p2pool and monerod)
-
DataHoarder
sech1: seems like you brought changes from p2pool-api branch only right, not new ones?
-
sech1
I synchronized branches
-
DataHoarder
yes, checked commit, it's not newer than the changes from 2 days ago I mean
-
sech1
p2pool-api is the latest version of the monerod changes, p2pool-api-v0.17 now reflects it
-
DataHoarder
great!
-
sethsimmons
sech1: So p2pool-api is best branch moving forward? Want to rebuild my image shortly.
-
DataHoarder
that is based off a version of master monero upstream
-
DataHoarder
anyhow, fired several builds
-
sech1
sethsimmons it's built on top of master branch, so it's not exactly what the release would be. Also it doesn't have recent checkpoints, it will sync from scratch slowly
-
sethsimmons
So keep using the api-v0.17 branch?
-
sech1
yes
-
sech1
both branches should work fine
-
sethsimmons
Thanks!
-
DataHoarder
I'll be pretty happy once it's merged and released, because that means I can grab the produced build .gz, check the hash / signature on it, and just use that instead of compiling monerod from source on starved machines :)
-
QuickBASIC
My VPS is taking its sweet time compiling Monero. Can I just compile locally on Ubuntu and scp the monerod binary over?
-
gingeropolous
if you build it right
-
DataHoarder
if you build statically maybe, depends on architecture
-
gingeropolous
something static
-
DataHoarder
if you are VERY lazy I think you can grab an artifact from here?
monero-project/monero #7891/checks?check_run_id=3575081687
-
DataHoarder
contains monerod heh
-
sech1
gitian builds are static, they should run everywhere
-
sech1
-
QuickBASIC
Man that sucked because I'm dumb. I recompiled p2pool first and restarted it before the new monerod, and it wanted the new RPC version.
-
QuickBASIC
Had to snag the giant build b/c monero compile on VPS kept failing.
-
QuickBASIC
s/giant/gitian
-
DataHoarder
got xmrig-proxy in front, which points to the first p2pool instance, falls back to a second instance elsewhere, then falls back to MO
-
QuickBASIC
Yeah I had fallback, but it's still syncing because I installed an SSD and didn't bother to copy the blockchain off the old HDD.
-
sethsimmons
`v0.17.2.3-04bfd948a` is latest, correct? Just built.
-
sech1
sethsimmons yes, it's the latest
-
sethsimmons
Thanks!
-
abberant[m]
Any suggestions for cutting down internet usage of monerod and p2pool? They're using like half a terabyte a week, and according to monerod's netstats I'm assuming the majority must be from p2pool.
-
DataHoarder
Lower the number of peers, disable incoming connections, that kind of thing
-
DataHoarder
also if you have remote workers for mining, make sure they report only the necessary shares
-
DataHoarder
pruned monerod too
-
DataHoarder
other than that you do need the information coming from p2pool at least, that said, I am not seeing that kind of bandwidth usage
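monerod also has built-in rate limiting that caps p2p traffic directly (values are in kB/s; the numbers here are arbitrary examples, not recommendations):

```shell
# cap upload to ~128 kB/s and download to ~1 MB/s
./monerod --limit-rate-up 128 --limit-rate-down 1024
```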
-
abberant[m]
<DataHoarder> "also if you have remote workers..." <- How would I make remote workers only report necessary shares? is it an xmrig option?
-
DataHoarder
Do not set any custom difficulty
-
DataHoarder
that would be it
-
DataHoarder
That said you should probably log traffic per port and see what uses the most, then adjust accordingly
-
abberant[m]
I used vnstat to log internet usage and it aligns with what the node is telling me, so it seems something else on my network must be the culprit, but 20GB per day from the node is still a bit much, so I'm going to use those other tips to try and bring it down.