Add support for lossy animated WebP with alpha channel #988
base: next
Conversation
@@ -63,8 +63,9 @@ protected void readFilterChunk(byte[] ID, int size, Object context, DataInputStr
 	writeLittleEndianInt(output, size);
 	output.write(buf);
 	passthroughBytes(input, output, size - buf.length);
-	if((size & 1) != 0) // Add padding if necessary
+	if((size & 1) != 0) { // Add padding if necessary
If you’re at it… a space between the `if` and the `(` would be great. 🙂
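For context on the padding logic this diff touches: RIFF-based formats such as WebP require every chunk to start on an even byte boundary, so an odd-sized payload is followed by a single padding byte that must be copied through as well. A minimal standalone sketch of that copy-with-padding step (a hypothetical helper, not fred's actual filter code, and with the requested space after `if`):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only: copies one RIFF chunk payload and, when the payload
// length is odd, the single padding byte that RIFF alignment requires.
final class RiffPaddingSketch {
    static void copyChunkPayload(DataInputStream input, OutputStream output, int size)
            throws IOException {
        byte[] buf = new byte[size];
        input.readFully(buf);
        output.write(buf);
        if ((size & 1) != 0) { // odd payload length => one padding byte follows
            output.write(input.readUnsignedByte());
        }
    }
}
```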
It seems there is a need to use animated WebP in the short term, as video playback is not working.
Thank you for your work! mkvalidator looks like it’s written in C, so we’d have to ship compiled versions of it for different platforms (arm, x86_64, M1, ...); we really need pure Java tools. jebml seems to be pure Java, though, and it is LGPLv2.1 or later (compatible): https://github.com/Matroska-Org/jebml

Since it has not been changed for 6 years, I’d suggest just adding the sources to the fred git repo. We may need to modify it to strip out anything that could cause web requests. It may be possible to directly merge the repository into the fred repository to keep the history of jebml.
@torusrxxx tak from FMS detailed how WebM could be parsed, and it looks like parsing it "by hand" could actually be easy enough that the risk of a library leaking elements might be bigger than the work of writing a low-level validator:
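For reference, the core primitive in such hand-written parsing is reading EBML variable-length integers: the position of the first set bit in the leading byte encodes the total length. A rough sketch, assuming standard EBML vint encoding (the names are illustrative, not from jebml or fred):

```java
import java.io.DataInputStream;
import java.io.IOException;

// Sketch of low-level EBML parsing: element IDs and sizes are variable-length
// integers of 1..8 bytes, with the length encoded in the leading byte.
final class EbmlVintSketch {
    static long readVintValue(DataInputStream in) throws IOException {
        int first = in.readUnsignedByte();
        int length = Integer.numberOfLeadingZeros(first) - 24 + 1; // 1..8
        if (length < 1 || length > 8) {
            throw new IOException("Invalid EBML vint leading byte");
        }
        long value = first & (0xFF >>> length); // strip the length-marker bit
        for (int i = 1; i < length; i++) {
            value = (value << 8) | in.readUnsignedByte();
        }
        // Note: an all-ones value conventionally means "unknown size".
        return value;
    }
}
```

A real validator would additionally whitelist known element IDs and check every size against the remaining input, which is exactly where hand-written code has to be careful.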
I still think a library is much safer and easier to use for complex formats like MKV, PDF and so on. Otherwise we will need to write lots of code just to avoid IndexOutOfBoundsException, and it will be hard to maintain. But I have not used jebml before, so I need more time before coming to a conclusion. Also, starting with WebM, I think partial download (i.e. …)
For jebml the difference is between reviewing jebml and writing new code. We have to make sure that jebml does not violate user privacy via some convenience feature. That’s much easier than a full quality review, but it isn’t just "drop in the library and use it", because our privacy requirements are higher than what we can expect from other projects. For example, PDF can send web requests.¹

Partial content is something we can’t really do: it does not work with how encryption and retrieving content work on the network. But HTTP range requests always allow sending more data than requested. So it may be possible to accept range-request headers but simply provide all the data. https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests#partial_request_responses

That’s why our video-on-demand solution (generate media site) pre-chunks the video before insertion and assembles it on the client during playback.

¹ "In addition to visible links in a PDF document, form fields can contain hidden JavaScript calls that open a page in a browser or silently requests data from the Internet." — https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/external.html
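A minimal sketch of that "accept the request, ignore the range" idea, using the JDK’s built-in com.sun.net.httpserver types purely for illustration (fred’s fproxy does not necessarily work this way). RFC 7233 explicitly permits a server to ignore the Range header and answer 200 with the full body:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical handler: even when the client sends a Range header, reply
// with the complete body (status 200, not 206), which HTTP allows.
final class FullBodyHandler implements HttpHandler {
    private final byte[] body;

    FullBodyHandler(byte[] body) {
        this.body = body;
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        // exchange.getRequestHeaders().getFirst("Range") may be non-null;
        // we deliberately serve everything anyway.
        exchange.getResponseHeaders().set("Accept-Ranges", "none");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}
```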
PS, as an aside: using a prepared stream instead of range requests is similar to what DASH and HLS do: pre-chunk the data and reference it from a playlist file. So we aren’t that far from the state of the art there. I’m not sure how much data we can get from downloads in progress. An entry point to check that would be ClientGet.java.
It's possible to randomly access an encrypted file if a block cipher mode designed for random access (for example GCM) is used instead of CBC. It's also possible to randomly access FEC-protected data. Compressed data is usually not randomly accessible, but I think videos aren't compressed by Freenet. So there's nothing fundamental that prohibits range requests to encrypted videos. And this is the ultimate solution for gapless playback (a playlist isn't: even a music playlist can have a gap while the next track is loading, or when the current track is restarted). And the author no longer needs to upload the same video twice, once chunked and once in full.
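To illustrate the principle: counter-mode keystreams depend only on the IV and the block index, so decryption can start at any block boundary without touching the preceding ciphertext. A sketch using plain AES/CTR (the keystream mode underlying GCM, minus authentication), assuming a 16-byte key and a 16-byte IV used directly as the initial counter; this is not fred's actual crypto layer:

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch: decrypt an AES/CTR ciphertext starting at an arbitrary
// 16-byte-aligned offset by advancing the counter to the right block.
final class CtrRandomAccessSketch {
    static byte[] decryptFromOffset(byte[] key, byte[] iv, byte[] ciphertext,
                                    long offset) throws Exception {
        if (offset % 16 != 0)
            throw new IllegalArgumentException("offset must be block-aligned");
        byte[] counter = iv.clone();
        addToCounter(counter, offset / 16); // jump ahead to the target block
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new IvParameterSpec(counter));
        return cipher.doFinal(Arrays.copyOfRange(ciphertext, (int) offset,
                ciphertext.length));
    }

    // Big-endian addition of delta to the 128-bit counter block.
    private static void addToCounter(byte[] counter, long delta) {
        for (int i = counter.length - 1; i >= 0 && delta != 0; i--) {
            long sum = (counter[i] & 0xFF) + (delta & 0xFF);
            counter[i] = (byte) sum;
            delta = (delta >>> 8) + (sum >>> 8);
        }
    }
}
```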
Videos can be compressed; it depends on whether compression helps. And for random access to data during download, you’ll need to go deep into the persistent-temp layer (note also multi-segment files built from several splitfiles), and it may be hard to do without adding huge complexity by entangling the filters with storage and the request layer. If you want to get there, it can have a huge benefit (partial file access has been a long-standing wish), but I need to warn you that it’s a huge change. You’ll need a level of persistence I’ve rarely seen, or a plan in which every smaller step on the way has its own benefit, so you can work on it with regular successes.
Splitfiles need to be updated as well. Why can’t a freesite have an unlimited number of sub-pages like https://github.com/hyphanet? This forces freesites like the FMS archive to use longer URLs than necessary, and makes it almost impossible to have an anonymous world map as a freesite. I have helped an open source project become #1 in its field in terms of user friendliness; it takes many years. This project should have become #1 in its field as well, it just has a lot of issues currently, so let’s see.
There are already multilevel metadata and subcontainers, so a site can have an unlimited number of sub-pages; that just becomes inefficient, because files are assigned to different sub-containers, so access delays can happen at unexpected points (clicking a link to an HTML file can cause the retrieval of a new 4 MiB container). My personal site is currently at 1569 files. Improvements to DefaultManifestPutter.java could help there, but those are hard to get right. The first step would be to only use an intermediate CHK for the metadata if the metadata does not fit into an SSK. That would increase the lifetime of Sharesites and Spider indexes a lot, but the two times I tried, I had to stop before getting it done. It looks like it needs a refactoring of the logic.
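The proposed check could look roughly like this; the names and the 1024-byte SSK payload limit are assumptions for illustration, not fred's actual constants or DefaultManifestPutter's API:

```java
// Hypothetical sketch: only fall back to an intermediate CHK when the
// serialized manifest metadata is too large to live directly in the SSK.
final class ManifestMetadataSketch {
    static final int ASSUMED_SSK_MAX_PAYLOAD = 1024; // assumption, not fred's constant

    enum Placement { INLINE_IN_SSK, INTERMEDIATE_CHK }

    static Placement choosePlacement(byte[] serializedMetadata) {
        return serializedMetadata.length <= ASSUMED_SSK_MAX_PAYLOAD
                ? Placement.INLINE_IN_SSK
                : Placement.INTERMEDIATE_CHK;
    }
}
```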