-
DataHoarder
-
DataHoarder
ScalarDerive(x) in chart has different behavior than ScalarDerive(x) as in the markdown document
-
DataHoarder
sc_reduce32 only acts on the first 32 bytes as fed to the function
-
DataHoarder
while markdown specifies mod l which is the generalized form (so it can work on 64 byte input)
-
DataHoarder
so specifically ScalarDerive(x) = sc_reduce32(H64(x)) would drop the high-value 32 bytes of the 64-byte H64 result (the last 32 bytes, little endian)
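A minimal sketch of the difference being described, assuming the standard Ed25519 subgroup order l; this is illustrative Python, not Monero's actual crypto-ops code:

```python
# Illustrative sketch, NOT Monero's code: the difference between reducing
# only 32 bytes mod l (sc_reduce32) and the generalized 64-byte-wide
# reduce mod l (called sc_reduce in Monero's codebase).
L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 subgroup order l

def sc_reduce32(b32: bytes) -> int:
    # interpret 32 little-endian bytes as an integer, reduce mod l
    assert len(b32) == 32
    return int.from_bytes(b32, "little") % L

def sc_reduce(b64: bytes) -> int:
    # generalized form: reduce the full 64-byte little-endian value mod l
    assert len(b64) == 64
    return int.from_bytes(b64, "little") % L

# Feeding a 64-byte hash through the 32-byte variant only sees the first
# 32 bytes (the low half, little endian) and drops the high half:
h64 = bytes(range(64))  # stand-in for a 64-byte hash output
low_only = sc_reduce32(h64[:32])
full = sc_reduce(h64)
assert low_only != full  # the two definitions disagree in general
```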
-
br-m
<helene:unredacted.org> DataHoarder: the endianness seems very important to clarify and specify especially when it comes to using hash functions ^^;;
-
DataHoarder
yeah not just the endianness but that function specifically doesn't work there
-
DataHoarder
if you are treating hashed bytes as integers, the endianness on all these ops has defaulted to little endian
-
DataHoarder
and question > "Here Hp1 and Hp2 refer to two hash-to-point functions on Ed25519."
-
DataHoarder
they are two functions, but ... which are they? :)
-
br-m
<helene:unredacted.org> well, in the codebase, it seems to mean "just use the hash as-is", which surely can't be quite right?
-
DataHoarder
it is little endian :)
-
DataHoarder
so yes, use as-is
-
br-m
<helene:unredacted.org> my problem with it being "just use as-is" is that H_p^1 clearly has an effect on G, so it can't just be cast-to-int256
-
br-m
<helene:unredacted.org> could it be elligator's map to curve, maybe?
-
DataHoarder
then H_p^2 ?
-
DataHoarder
and yeah I could just use these values as-is, but, same as with my RandomX implementation, I'll derive them from their most raw form
-
br-m
<helene:unredacted.org> yeah it's best to make sure we can actually reproduce this, because otherwise it would be Funny Business
-
DataHoarder
I was doing something else and yet again I meet my worst friend, ge_fromfe_frombytes_vartime
-
br-m
<helene:unredacted.org> DataHoarder: this is quite an odd thing to do
-
br-m
<helene:unredacted.org> but is there a reason to keep doing this on Carrot, especially since... a lot of things get converted back to Curve25519?
-
DataHoarder
oh, this is somewhat unrelated, while awaiting clarification on the above I am just doing hash to point :)
-
br-m
<jeffro256> Oops I didn't read the small text ;). Yes DataHoarder is correct: ScalarDerive(x) doesn't use sc_reduce32, it uses a 64-byte wide reduce (called sc_reduce in Monero's codebase) > <DataHoarder> ScalarDerive(x) in chart has different behavior than ScalarDerive(x) as in the markdown document
-
br-m
<jeffro256> Hp1 is the hash-to-point function used to get H, which is just doing Keccak into a canonical compressed-Y representation, hoping that it's a valid point, and multiplying by 8:
github.com/seraphis-migration/moner…9a1e/src/crypto/generators.cpp#L122 > <DataHoarder> and question > "Here Hp1 and Hp2 refer to two hash-to-point functions on Ed25519."
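A rough sketch of the Hp1 shape described above: hash, interpret the digest as a compressed-Y Ed25519 point, hope it decompresses, then multiply by 8 to clear the cofactor. Illustrative pure Python, not Monero's code; note Monero uses legacy Keccak-256, and hashlib.sha3_256 (different padding) is only a stand-in here, so outputs will not match Monero's.

```python
import hashlib

P = 2**255 - 19                            # Ed25519 field prime
D = (-121665 * pow(121666, P - 2, P)) % P  # Edwards curve constant d

def decompress_y(b32: bytes):
    # 32 little-endian bytes: low 255 bits are y, top bit is the sign of x.
    # Returns an affine point (x, y), or None if it is not a valid point.
    n = int.from_bytes(b32, "little")
    sign, y = n >> 255, n & ((1 << 255) - 1)
    if y >= P:
        return None
    xx = (y * y - 1) * pow(D * y * y + 1, P - 2, P) % P  # x^2 = (y^2-1)/(d*y^2+1)
    x = pow(xx, (P + 3) // 8, P)
    if (x * x - xx) % P != 0:
        x = x * pow(2, (P - 1) // 4, P) % P              # multiply by sqrt(-1)
    if (x * x - xx) % P != 0:
        return None                                      # no square root: invalid
    if x & 1 != sign:
        x = P - x
    return (x, y)

def edwards_add(p1, p2):
    # affine addition on -x^2 + y^2 = 1 + d*x^2*y^2
    (x1, y1), (x2, y2) = p1, p2
    t = D * x1 % P * x2 % P * y1 % P * y2 % P
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, P - 2, P) % P
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, P - 2, P) % P
    return (x3, y3)

def mul8(pt):
    for _ in range(3):          # three doublings = multiply by 8
        pt = edwards_add(pt, pt)
    return pt

def hp1_sketch(data: bytes):
    # single attempt, "hoping that it's a valid point" as described above
    pt = decompress_y(hashlib.sha3_256(data).digest())
    return None if pt is None else mul8(pt)
```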
-
br-m
<jeffro256> Hp2 is the hash-to-point function we currently use for key images which uses a modified version of Elligator:
github.com/seraphis-migration/moner…442d1b5c/src/crypto/crypto.cpp#L611
-
br-m
<jeffro256> DataHoarder: The Carrot document actually needs to be updated to include an Hp3, since that's a new hash-to-point function that is referenced in this linked MRL issue
-
DataHoarder
> hoping that it's a valid point
-
DataHoarder
that makes it easier
-
DataHoarder
so we have Hp1 and Hp2, and hash to point is ... Hp2
-
DataHoarder
which ends up with the somewhat unrelated being related :)
-
DataHoarder
back into "ge_fromfe_frombytes_vartime"
-
br-m
<jeffro256> @helene:unredacted.org: Carrot uses X25519 only for the ephemeral pubkeys to do ECDH. The actual output pubkeys and amount commitments are still Ed25519
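For reference, the X25519 ECDH step mentioned here can be sketched with the RFC 7748 Montgomery ladder; illustrative pure Python under that assumption, not Carrot's implementation, with made-up stand-in secret keys:

```python
# Illustrative sketch, not Carrot's code: X25519 ECDH via the RFC 7748
# Montgomery ladder, showing both sides deriving the same shared secret
# from an ephemeral pubkey exchange.

P = 2**255 - 19

def _decode_scalar(k: bytes) -> int:
    # RFC 7748 scalar clamping
    b = bytearray(k)
    b[0] &= 248
    b[31] &= 127
    b[31] |= 64
    return int.from_bytes(b, "little")

def x25519(k: bytes, u: bytes) -> bytes:
    # scalar multiplication on the Curve25519 u-coordinate (RFC 7748 ladder)
    n = _decode_scalar(k)
    x1 = int.from_bytes(u, "little") & ((1 << 255) - 1)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        bit = (n >> t) & 1
        swap ^= bit
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        A, B = (x2 + z2) % P, (x2 - z2) % P
        AA, BB = A * A % P, B * B % P
        E = (AA - BB) % P
        C, Dv = (x3 + z3) % P, (x3 - z3) % P
        DA, CB = Dv * A % P, C * B % P
        s, d = (DA + CB) % P, (DA - CB) % P
        x3 = s * s % P
        z3 = x1 * d % P * d % P
        x2 = AA * BB % P
        z2 = E * (AA + 121665 * E) % P
    if swap:
        x2, z2 = x3, z3
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

BASE = (9).to_bytes(32, "little")  # Curve25519 base point u = 9

# ephemeral ECDH: each side publishes k*base; both derive the same secret
alice_sk = bytes(range(32))        # stand-in secret keys, NOT real keys
bob_sk = bytes(range(32, 64))
alice_pk = x25519(alice_sk, BASE)
bob_pk = x25519(bob_sk, BASE)
assert x25519(alice_sk, bob_pk) == x25519(bob_sk, alice_pk)
```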
-
br-m
<jeffro256> DataHoarder: Yes this is what's used for Hp2
-
br-m
<syntheticbird> just sharing
-
br-m
<barthman132:matrix.org> What is the current plan to combat quantum computing?
-
br-m
<rucknium> @ofrnxmr:monero.social: Two (of 7) of my checkpoints2 DNS server VPSes expired. Should I get new ones to replace them? How is the checkpoints bug investigation going? Anything I can do to help?
-
br-m
<ofrnxmr:xmr.mx> I think I've found where the issue is, but 0x and I have been unable to fix it (reorg handling)
-
br-m
<ofrnxmr:xmr.mx> I asked jberman to look into it; he'll probably take a look this week
-
br-m
<rucknium> Thanks.
-
br-m
<ofrnxmr:xmr.mx> I can probably just drop the 2 domains from the test and save you some $
-
br-m
<ofrnxmr:xmr.mx> The reorg handling also affects json checkpoints, so we might move to using json for the testing
-
br-m
<rucknium> The expired servers were 64.176.52.172 (checkpoints2.bchmempool.space) and 139.84.176.196 (checkpoints2.moneroresearch.info).
-
baz
anyone need VMs? free for research etc. I have a beefy big boy 10GbE hypervisor in my Colo, traceroute test 186.190.208.136
-
br-m
<one-horse-wagon> @barthman132:matrix.org: You realize they've been working on quantum computing for 50 years. So far, it's a beautiful theory and quite far from reality, last I heard.