03:18:27 hey
03:26:13 FROM ubuntu:20.04. This is inside Dockerfile.Windows.
09:47:59 I have a stream of texts (on the order of terabytes). Is there any compression method/algorithm that supports an append operation? Program A generates texts, dies, starts again, generates texts, and dies again. I want a service that keeps all the generated text. The size is too big for my hard drive, on the order of 5T, so I have to compress on the fly. And appending should not require decompressing what's already there. Any idea?
09:49:38 (and all of that happens in shell; if not, I'd have to do it in C++)
09:53:05 Most compression schemes do.
09:54:14 You may want to batch things, if your slices are very small.
10:00:45 moneromooo: This thing is going to run for hours, if not days. So, quick question: if you had to choose between lz4 and xz, which one would you give the first shot?
10:06:00 nvm https://linuxreviews.org/Comparison_of_Compression_Algorithms
10:07:06 I use lz4 for stuff that I need compressed/decompressed fast, xz for stuff I compress once.
10:07:33 er, sorry, zstd, not xz. But the same would apply, roughly.
10:07:52 gzip is pretty standard and in the low/middle.
10:09:31 moneromooo: thanks. I decided to use xz for this one based on the benchmark numbers. Hopefully I will not regret it in 24 hours :)
10:11:37 That table shows zstd can compress better *and* faster at the low end... I might need to revisit TF compression...
13:15:18 how to validate a monero address in node js? can anyone help me?
13:18:03 monero-ts can do that
17:22:29 re: 8619 (background sync) - we've implemented it in ANONERO and it's been live for a month with 0 issues
17:23:23 jberman has finalized the PR, including GUI stuff, and rbrunner has reviewed it
17:23:42 it would be great to get some more eyes on it since it's such a great upgrade for UX and security
19:52:34 sounds great. I should take a look at ANONERO again.
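
A minimal sketch of the append-without-decompress approach from the 09:47-09:54 exchange. It relies on a property these formats guarantee: gzip, xz, and zstd all treat a concatenation of complete compressed streams as one valid file, so each run of the generator can be compressed independently and appended to the archive without touching existing data. The sketch is Node/TypeScript using the built-in zlib gzip encoder; the file name and the appendCompressed helper are illustrative, not from the discussion.

    import * as fs from "node:fs";
    import * as zlib from "node:zlib";

    // Compress one run's output and append it to the shared archive.
    // Each call writes a complete gzip member; gzip (like xz and zstd)
    // treats concatenated members as a single valid file, so appending
    // never requires decompressing what is already on disk.
    function appendCompressed(source: NodeJS.ReadableStream, archive: string): Promise<void> {
      return new Promise((resolve, reject) => {
        const out = fs.createWriteStream(archive, { flags: "a" }); // append mode
        source
          .pipe(zlib.createGzip({ level: 6 }))
          .pipe(out)
          .on("finish", () => resolve())
          .on("error", reject);
      });
    }

    // Example: append this process's stdin (e.g. program A's output).
    appendCompressed(process.stdin, "corpus.gz").catch(console.error);

Reading the archive back is a single pass over the whole file (e.g. zcat corpus.gz, which handles concatenated members). If each run's output is tiny, batching several runs into one member, as suggested at 09:54, avoids per-member header overhead. The same pattern works with xz, as the questioner chose, by piping through the xz CLI in a child process, since node:zlib has no xz support.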
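
On the 13:15 question about validating a Monero address in Node: a sketch using monero-ts, the library pointed to at 13:18. The MoneroUtils.isValidAddress call and MoneroNetworkType enum are recalled from the library's documentation but not verified here; treat the exact names and whether the call is async as assumptions to check against the monero-ts docs.

    import moneroTs from "monero-ts";

    // Assumption: monero-ts exposes an address validity check roughly
    // like this; confirm the exact name, signature, and return type
    // against the monero-ts documentation before relying on it.
    async function isValidMoneroAddress(addr: string): Promise<boolean> {
      return moneroTs.MoneroUtils.isValidAddress(addr, moneroTs.MoneroNetworkType.MAINNET);
    }

    const candidate = "4...";  // placeholder for the address to check
    isValidMoneroAddress(candidate).then(ok => console.log(ok ? "valid" : "invalid"));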