-
nikg83[m]
<geonic> "please define "a reasonable..." <- 20-30x
-
nikg83[m]
currently it’s approx $0.001; even if we 10x in USD terms it would be $0.2-0.3 if we go with a 20-30x increase
-
nikg83[m]
We can decide about reducing the fee later on if the price stays at 10x the current USD value for a good while (200 EMA)
-
carrington[m]
I posted a simplified summary of the meeting to reddit (with links) to hopefully spark discussion.
-
carrington[m]
-
ArticMine
<geonic> please define "a reasonable increase in price" :-) <--- If the transaction rate increases by k, a price increase of between k and k^2 in terms of inflation-adjusted USD
-
ArticMine
In the issue I calculated that, for the 5-year period ending in October 2020, an increase in transaction rate of k led to an increase in price of k^y, where y = 1.59
-
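A compact restatement of the relationship described above (P for inflation-adjusted price and T for transaction rate are labels chosen here, not ArticMine's notation):

$$\frac{P_2}{P_1} = \left(\frac{T_2}{T_1}\right)^{y}, \qquad y = \frac{\ln(P_2/P_1)}{\ln(T_2/T_1)} \approx 1.59 \quad\text{(5-year period ending October 2020)}$$

so "between k and k^2" corresponds to 1 <= y <= 2.
-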
ArticMine
By the way, one can increase fees by increasing the reference transaction size of 3000 bytes. An increase to 6000 bytes would increase fees by 4x for the lowest tiers and the minimum node relay fee, and by 2x for the two highest tiers.
-
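The quoted multipliers follow directly if, as the numbers imply, the lowest tiers and the minimum node relay fee scale with the square of the reference transaction size while the two highest tiers scale linearly with it:

$$\left(\frac{6000}{3000}\right)^{2} = 4 \quad\text{(lowest tiers, min relay fee)}, \qquad \frac{6000}{3000} = 2 \quad\text{(two highest tiers)}$$
-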
ArticMine
This is totally separate from whether the maximum increase in the Long term median is 1.7x or 2x per cycle, and does not require a HF
-
ArticMine
This can be done without a HF.
-
ArticMine
Still, we are raising the minimum fee by 5x, and we are at about 33% of the 300,000-byte minimum penalty zone, so there is significant room for a price increase before the adaptive blocksize starts
-
ArticMine
I will be preparing a response to UkoeHB's comment
-
ArticMine
<carrington[m]> Limiting verification to GPU owners is a TERRIBLE tradeoff IMO. Dynamic block size is fantastic up to a point, but there are other ways to scale transactions. <---- I am very interested to see specifically what "other ways to scale transactions" you are proposing instead
-
ArticMine
What is the "other" scaling alternative?
-
john_r365
@ArticMine - is it possible (or useful) to keep the long term median "as is" for this HF, but proceed with the other changes (specifically including the increase in the minimum fee)? Then between now and the next HF, come to a consensus on the long term median?
-
UkoeHB
-
knaccc
UkoeHB thanks. I found some timings from 2017 that I just added to that comment
-
knaccc
x25519 was just over 2x faster than ed25519 for scalarmults
-
knaccc
(variable base)
-
UkoeHB
knaccc: supercop's optimizations are like 2x as well, so we'd probably only get a benefit from x25519 if it was also supercopd
-
knaccc
UkoeHB we're still using supercop's ref10 implementation for ed25519 right?
-
knaccc
in general, I'd imagine people spend way more time optimizing libraries for x25519 varbase scalarmults, so hopefully we'd be spoiled for choice
-
knaccc
actually it was hyc that did those timings, as reported in #monero-research-lab on 2017-09-23
-
knaccc
i wonder if there have been faster x25519 scalarmult libraries released since then
-
knaccc
oh and hyc's code for the timing test is still available here:
highlandsun.com/hyc/scalarmult3.c
-
knaccc
i think libsodium uses ref10, but i'm not sure
-
ArticMine
<john_r365> @ArticMine - is it possible (or useful) to keep the long term median "as is" for this HF, but proceed with the other changes (specifically including the increase in the minimum fee)? Then between now and the next HF, come to a consensus on the long term median? <--- No
-
ArticMine
That is far worse than what sgp_ proposed
-
john_r365
Thanks for clarifying @ArticMine
-
ArticMine
It is possible to implement the whole proposal, including the proposed fees, using 1.7 / 0.58823529... rather than 2 / 0.5 as sgp suggested
-
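For clarity, the second number in each pair below is simply the reciprocal of the first (presumably the per-cycle increase cap and the corresponding decrease cap):

$$\frac{1}{1.7} = 0.58823529\ldots, \qquad \frac{1}{2} = 0.5$$
-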
ArticMine
While this does not fully address the responsiveness over a 2-5 month period, it is still a major improvement over the current situation and would address issue 70 in most cases
-
ArticMine
We could then try to come to consensus on 2 / 0.5 vs 1.7 / 0.58823529... for the long term median for the subsequent HF
-
ArticMine
The fee structure would still work because the LT median would be less responsive.
-
ArticMine
My concerns would be significantly mitigated vs the current situation but not fully addressed
-
john_r365
That seems like a good approach @ArticMine. Possibly add that as a comment to Issue #70?
-
ArticMine
I will incorporate it into my comment on issue 70
-
knaccc
UkoeHB FYI I just wrote some code to do ed25519<->curve25519 point conversion, and it's really simple to do. I've updated the github comment with the info.
-
UkoeHB
knaccc: sweet thanks :)
-
knaccc
UkoeHB my tests were in java btw, not C. I have no idea what I'm doing in C
-
string111[m]
I could also have a look, have been coding C professionally for 5 years.
-
string111[m]
But I do not have that clear a view of Monero yet.
-
carrington[m]
ArticMine @ArticMine:libera.chat: There have been vague ideas thrown around of ephemeral/overlapping sidechains and stuff, but I suppose those only help with the storage scaling and not the verification scaling. I think increasing fees further should be considered, personally.
-
carrington[m]
Is the IRC bridge here still working? It is broken in -community and -events where there are meetings today
-
UkoeHB
carrington[m]: I read you
-
netrik182
It came back just now, I think
-
ArticMine
<carrington[m]> ArticMine @ArticMine:libera.chat: There have been vague ideas thrown around of ephemeral/overlapping sidechains and stuff, but I suppose those only help with the storage scaling and not the verification scaling. I think increasing fees further should be considered, personally <---- Increasing fees can be done by increasing the reference tx size, for example from 3000 bytes to 6000 bytes. This can be done without a HF.
-
ArticMine
Thank you for the alternative scaling options. I agree they do not help with verification, but they could help in a situation where the limitation was storage and there was plenty of bandwidth
-
vtnerd
knaccc: supercop is only used for wallet2 scanning
-
vtnerd
ooops sorry, ref10 is also supercop I guess
-
vtnerd
wallet2 (and monero-lws, mymonero, etc) are using x86-64 asm speedups
-
vtnerd
but the validation code is not, so there is a potential speedup for validation; the initial thought was to push back on that (some voiced forking concerns over different implementations in use)
-
vtnerd
as for x25519 - I've always wondered about that, particularly because supercop has no arm64 speedups for ed25519 but does for x25519
-
vtnerd
my primary concern was the cost of conversions (assuming no change to protocol to use x25519 directly), and working through the details on recovering the x-component of ed25519 to match the protocol
-
vtnerd
there's also donna64, which is ed25519 written in C and therefore should be portable to other CPU arches. But it doesn't come from the authors of ref10 and the amdx86-64 series, so there's that as well
-
knaccc
vtnerd we could keep publishing ed25519 tx pub keys, and keep deriving an ed25519 shared secret point for the output pubkey, and convert the txpubkey from ed25519->curve25519 just for the purposes of speeding up view tag checking
-
knaccc
or we could go all the way and publish tx pubkeys as curve25519 points, and use a curve25519 ECDH derivation
-
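A rough sketch of the first of these options using libsodium's high-level calls (illustrative only, not the Monero codepath; note that crypto_scalarmult clamps the scalar, a point the clamping discussion further down touches on):

#include <sodium.h>

/* Illustrative sketch: convert an ed25519 tx pubkey to its curve25519 (X25519)
   form, then do the variable-base scalarmult on the Montgomery curve.
   Returns 0 on success, -1 on failure. */
int derive_via_x25519(unsigned char shared[32],
                      const unsigned char view_scalar[32],
                      const unsigned char tx_pubkey_ed25519[32])
{
    unsigned char tx_pubkey_x25519[crypto_scalarmult_BYTES];

    /* birational map from the Edwards point to its Montgomery u-coordinate */
    if (crypto_sign_ed25519_pk_to_curve25519(tx_pubkey_x25519, tx_pubkey_ed25519) != 0)
        return -1;

    /* X25519 variable-base scalarmult: shared = view_scalar * tx_pubkey */
    return crypto_scalarmult(shared, view_scalar, tx_pubkey_x25519);
}
-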
vtnerd
yes, but there's a computation cost for any conversion, so the scalarmult savings would have to be greater than that cost, etc, afaik
-
knaccc
vtnerd in my tests, the ed25519->curve25519 conversion looks 40x faster than the varbase scalarmult
-
knaccc
but that's java
-
knaccc
i'm very interested to see what the timings would look like in C
-
vtnerd
yeah if those are your numbers, then it might be worth trying the equivalent in C. I haven't looked at the Java implementations and I am (way) more familiar with C compiler optimizations than JRE optimizations, so it's difficult for me to give any decent predictions
-
knaccc
i'd try the C timings myself but i'm useless at C
-
jberman[m]
knaccc: can you share your java code?
-
knaccc
jberman[m] it's kind of a mess, since i use a modified NEM ed25519 library. The ed25519->curve25519 thing i added in is just a one-liner: y.add(Ed25519Field.ONE).multiply(Ed25519Field.ONE.subtract(y).invert()).encode().getRaw()
-
UkoeHB
-
knaccc
yeah exactly
-
knaccc
it's just (y+1) * inv(1-y)
-
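For reference, this is the standard birational map between the Edwards y-coordinate and the Montgomery u-coordinate of curve25519, together with its inverse:

$$u = \frac{1+y}{1-y}, \qquad y = \frac{u-1}{u+1}$$
-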
vtnerd
I guess? those are high-level calls, each of which expands to decent chunks of C code for the computation
-
UkoeHB
A single point addition is more fe ops than that
-
jberman[m]
is this what you want to test?
paste.debian.net/1228996
-
jberman[m]
Or should I go for a full key exchange, I'm not exactly sure
-
vtnerd
x25519 typically uses a ladder implementation, which is very different and doesn't compute the y-coord at all. So I dunno; I mean it wouldn't surprise me either way, just looking at the two implementations
-
knaccc
jberman[m] we don't need to test scalarmultbase speeds, we only want to test varbase scalarmults
-
knaccc
jberman[m] so we want to precompute N random curve25519 points, and then scalarmult each of them with the same scalar
-
knaccc
and then do the same except with ed25519 points instead of curve25519 points
-
knaccc
and then a third time, but starting with an ed25519 point before converting it into a curve25519 point and then doing the scalarmult
-
knaccc
jberman[m] i assume you're using the same ed25519/curve25519 libraries currently in the monero codebase?
-
jberman[m]
think I got it, ok will re-work
-
knaccc
jberman[m] a fourth interesting test would be to precompute N random ed25519 points, and then time how long it would take to just convert them all to curve25519 points without doing any scalarmults at all
-
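A minimal sketch of those four timing loops using libsodium's high-level calls (the point/scalar generation, N, and the timing mechanism are assumptions here, not jberman[m]'s actual paste; build with something like cc bench.c -lsodium):

#include <sodium.h>
#include <stdio.h>
#include <time.h>

#define N 10000

static double secs(clock_t a, clock_t b) { return (double)(b - a) / CLOCKS_PER_SEC; }

int main(void)
{
    static unsigned char ed[N][32], mont[N][32];
    unsigned char out[32], sc[32], tmp[32], conv[32];
    clock_t t0, t1;
    int i, rc = 0;

    if (sodium_init() < 0) return 1;

    /* one fixed scalar; N random base points on each curve */
    crypto_core_ed25519_scalar_random(sc);
    for (i = 0; i < N; i++) {
        crypto_core_ed25519_scalar_random(tmp);
        rc |= crypto_scalarmult_ed25519_base_noclamp(ed[i], tmp); /* random ed25519 points */
        randombytes_buf(tmp, 32);
        rc |= crypto_scalarmult_base(mont[i], tmp);               /* random curve25519 points */
    }

    /* 1: curve25519 (x25519) variable-base scalarmult */
    t0 = clock();
    for (i = 0; i < N; i++) rc |= crypto_scalarmult(out, sc, mont[i]);
    t1 = clock();
    printf("x25519 varbase:            %f s\n", secs(t0, t1));

    /* 2: ed25519 variable-base scalarmult */
    t0 = clock();
    for (i = 0; i < N; i++) rc |= crypto_scalarmult_ed25519_noclamp(out, sc, ed[i]);
    t1 = clock();
    printf("ed25519 varbase:           %f s\n", secs(t0, t1));

    /* 3: ed25519 -> curve25519 conversion, then x25519 scalarmult */
    t0 = clock();
    for (i = 0; i < N; i++) {
        rc |= crypto_sign_ed25519_pk_to_curve25519(conv, ed[i]);
        rc |= crypto_scalarmult(out, sc, conv);
    }
    t1 = clock();
    printf("convert + x25519 varbase:  %f s\n", secs(t0, t1));

    /* 4: conversion only */
    t0 = clock();
    for (i = 0; i < N; i++) rc |= crypto_sign_ed25519_pk_to_curve25519(conv, ed[i]);
    t1 = clock();
    printf("ed25519->curve25519 only:  %f s\n", secs(t0, t1));

    return rc ? 2 : 0;
}
-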
jberman[m]
no, I was just using libsodium
-
knaccc
jberman[m] ah ok, that'll still be interesting, but won't reflect reality
-
knaccc
jberman[m] btw please could you also do a variant of that fourth test, but commenting out the ge25519_has_small_order/ge25519_frombytes_negate_vartime/ge25519_is_on_main_subgroup tests in crypto_sign_ed25519_pk_to_curve25519?
-
jberman[m]
yep yep. I'll get this working with high level calls to libsodium then will re-work for that^, then use Monero's
-
knaccc
jberman[m] you're awesome, thanks!
-
knaccc
vtnerd would there be any objection to using sandy2x for curve25519 vs donna? it looks 17% faster
-
knaccc
jberman[m] sandy2x curve25519 code is here:
tungchou.github.io/sandy2x
-
knaccc
and in any curve25519 implementation, the following clamping would need to be commented out so that ed25519 scalars are compatible: e[0] &= 248; e[31] &= 127; e[31] |= 64;
-
knaccc
(although that shouldn't affect performance for the tests)
-
jberman[m]
knaccc: got distracted, here's libsodium with no modifications, results at the bottom:
paste.debian.net/1229003
-
jberman[m]
ed25519->curve25519, then curve25519 variable base scalar mult looks 10% faster than ed25519 variable base scalar mult over here
-
jberman[m]
i'll do all the rest in one go
-
jberman[m]
And curve25519 alone is fast af