StreamSaver.js | StreamSaver writes stream to the filesystem | Stream Processing library
kandi X-RAY | StreamSaver.js Summary
Don't worry, it's not deprecated; it's still maintained, and I still recommend using it when needed. Just be aware that there is a new native way to save files to disk, which is more or less going to make FileSaver, StreamSaver, and similar packages obsolete in the future. It's still at an experimental stage and not implemented by all browsers; that is why I also built [native-file-system-adapter], so you can have it in all browsers, Deno, and Node.js with different storage backends. First, I want to thank [Eli Grey][1] for his fantastic work implementing [FileSaver.js][2], which makes saving files and blobs so easy! But there is one obstacle: the RAM it can hold and the maximum Blob size limitation. StreamSaver.js takes a different approach. Instead of saving data in client-side storage or in memory, you can now create a writable stream directly to the file system.
Community Discussions
Trending Discussions on StreamSaver.js
QUESTION
I'm trying to download a large data file from a server directly to the file system using StreamSaver.js in an Angular component. But after ~2GB an error occurs. It seems that the data is streamed into a blob in the browser memory first. And there is probably that 2GB limitation. My code is basically taken from the StreamSaver example. Any idea what I'm doing wrong and why the file is not directly saved on the filesystem?
Service:
...ANSWER
Answered 2021-Jun-02 at 08:44StreamSaver is targeted at those who generate a large amount of data on the client side, like a long camera recording, for instance. If the file is coming from the cloud and you already have a Content-Disposition: attachment header, then the only thing you have to do is open that URL in the browser.
There are a few ways to trigger the download:

- location.href = url
- an anchor with the download attribute
- and for those who need to POST data or use another HTTP method, they can submit a (hidden) form instead.

As long as the browser does not know how to handle the file, it will trigger a download instead, and that is what you are already doing with Content-Type: application/octet-stream.

Since you are downloading the file using Ajax and the browser knows how to handle the data (it hands it to the main JS thread), Content-Type and Content-Disposition don't serve any purpose.

StreamSaver tries to mimic how the server saves files, using Service Workers and custom responses. You are already doing it on the server! The only thing you have to do is stop using Ajax to download files. So I don't think you need StreamSaver at all.

Your problem

...is that you first download the whole data into memory as a Blob and only then save the file. This defeats the whole purpose of using StreamSaver; you could just as well use the simpler FileSaver.js library, or manually create an object URL plus a link from a Blob, as FileSaver.js does:

```javascript
Object.assign(
  document.createElement('a'),
  {
    href: URL.createObjectURL(blob),
    download: 'name.txt'
  }
).click()
```

Besides, you can't use Angular's HTTP service, since it uses the old XMLHttpRequest, which can't give you a ReadableStream the way fetch does via response.body, so my advice is simply to use the Fetch API instead.

https://github.com/angular/angular/issues/36246
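The streaming approach the answer recommends can be sketched with plain web streams. In the browser the sink would come from streamSaver.createWriteStream('file.bin') and the source from fetch(url); here an in-memory WritableStream and ReadableStream stand in so the piping logic is visible and runnable:

```javascript
// A WritableStream stands in for the StreamSaver sink; in the browser,
// writes would go straight to disk instead of this array.
const chunks = [];
const sink = new WritableStream({
  write(chunk) { chunks.push(chunk); }
});

// A ReadableStream stands in for response.body from fetch().
const body = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode('hello '));
    controller.enqueue(new TextEncoder().encode('world'));
    controller.close();
  }
});

// pipeTo moves data chunk by chunk, so memory use stays constant
// no matter how large the file is.
await body.pipeTo(sink);
```

The key point is that no Blob is ever materialized: each chunk is handed to the sink and released before the next one arrives.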
QUESTION
Suppose I have a large file I generate client-side that I want to allow the user to save to their hard drive.
The usual method would be to create a Blob, and then create an object URL for it:
...ANSWER
Answered 2020-Nov-27 at 09:38There is one being defined... File System Access.
It's still an early draft and only Chrome has implemented it.
You would be particularly interested in the FileSystemWritableFileStream interface, which will allow you to write to disk after the user chooses where you can mess with their data ;-)
Non-live code, since "Sandboxed documents aren't allowed to show a file picker."...
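The File System Access flow the answer describes can be sketched as below. showSaveFilePicker() is Chrome-only and requires a user gesture, so the function is only defined here, not invoked; the suggested filename is illustrative:

```javascript
// Hedged sketch of the File System Access API save flow (Chrome-only).
async function saveToDisk(data) {
  // Opens the native "Save As" dialog; must run inside a user gesture.
  const handle = await window.showSaveFilePicker({
    suggestedName: 'export.bin'  // hypothetical name
  });
  // FileSystemWritableFileStream: writes land on disk only after close().
  const writable = await handle.createWritable();
  await writable.write(data);    // accepts strings, Blobs, or BufferSources
  await writable.close();
}
```

Because createWritable() returns a writable stream, a fetch response body can also be piped into it directly with response.body.pipeTo(writable).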
QUESTION
On my server-side, which is built using Spring Boot framework, it returns a stream which looks like this:
...ANSWER
Answered 2019-Nov-01 at 11:18As it seems, the streaming function is not implemented for the browser in axios (see also https://github.com/axios/axios/issues/479), so you might have to use fetch as in the example.
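What fetch offers that axios (in the browser) does not can be sketched as follows: response.body is a ReadableStream whose reader yields Uint8Array chunks as they arrive. A locally constructed Response stands in for the real server call so the snippet runs anywhere web streams exist:

```javascript
// Stand-in for `const response = await fetch(url)`.
const response = new Response('chunked payload');

// getReader() gives pull-based access to the body, chunk by chunk.
const reader = response.body.getReader();

let received = '';
for (;;) {
  const { done, value } = await reader.read(); // one chunk per iteration
  if (done) break;
  // stream:true lets multi-byte characters span chunk boundaries safely.
  received += new TextDecoder().decode(value, { stream: true });
}
```

With XMLHttpRequest-based clients the whole body must land in memory first, which is exactly the limitation the question runs into.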
QUESTION
We have an Angular 6 application that used to download large generated files from the backend using streaming, meaning that neither the backend nor the client ever loaded the whole file into memory, because those files could be hundreds or thousands of MB. We used the following response headers:
...ANSWER
Answered 2019-Oct-14 at 17:45One thing that you can try is a single-use token:
- Add an endpoint, under the usual authentication, that returns a link in the format ?token=token
- Add a download endpoint, under single-token authentication, to download files (and remove the token)
When you need to download a file, you first get the URL from the first endpoint, then download it any way you prefer.
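The two-endpoint flow above can be sketched in a few lines. A Map stands in for server-side token storage, and the endpoint path and names are illustrative, not from any particular framework:

```javascript
// Server-side sketch of the single-use-token scheme (names hypothetical).
const issuedTokens = new Map();

// Endpoint 1: runs under the usual authentication; mints a token and
// returns the link the client should follow.
function issueDownloadToken(fileId) {
  const token = Math.random().toString(36).slice(2);
  issuedTokens.set(token, fileId);
  return `/download?token=${token}`;
}

// Endpoint 2: the download endpoint redeems the token exactly once.
function redeemToken(token) {
  const fileId = issuedTokens.get(token);
  issuedTokens.delete(token); // single use: remove before serving the file
  return fileId;              // undefined => reject the request
}
```

A production version would also expire unredeemed tokens after a short TTL, which covers the "only valid for a short period" requirement mentioned later in this page.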
QUESTION
I am using https://github.com/jimmywarting/StreamSaver.js to stream some geometry data into a file; however, I cannot get it to work with my limited knowledge of Promises.
...ANSWER
Answered 2019-Feb-13 at 00:46This is what I would suggest. You have streamExportOBJ(), which is trying to behave synchronously, but it's calling a writer method that is actually asynchronous, so there was no way for streamExportOBJ() to know when any of the asynchronous work it was doing had finished. So, I made the callback you pass to streamExportOBJ() have an asynchronous interface, and then streamExportOBJ() can await it.
I'm not entirely sure what you want to do about error handling here. If any error occurs in writer.write(), then the whole process is aborted and the error percolates back up to your top level, where a catch (e) {} block will get it. You could develop different strategies there.
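The shape of the fix can be sketched as below. The writer is a stub that records chunks (in the original code it would be the StreamSaver writer), and streamExport is a simplified stand-in for the asker's streamExportOBJ; the OBJ lines are illustrative:

```javascript
// Stub with the same async interface as a WritableStreamDefaultWriter.
const written = [];
const writer = {
  write: async (chunk) => { written.push(chunk); }
};

// The callback now has an async interface, so each write can be awaited
// before the next one starts. Errors thrown in writer.write() propagate
// out of the awaits and can be caught by the caller.
async function streamExport(emit) {
  await emit('v 0 0 0\n');
  await emit('v 1 0 0\n');
  await emit('f 1 2\n');
}

await streamExport(chunk => writer.write(chunk));
```

Because every write is awaited, the function only resolves once all chunks have actually been accepted by the writer, which is what the synchronous version could never guarantee.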
QUESTION
I want to add an export/import functionality to my FrontEnd written in Angular 6.
My frontend gets a DTO in JSON format from a .NET API. My plan is to first build a "Save As" button, so the user can store the DTO[] array as a JSON file on their local hard drive.
Afterwards, the user should also have an "Import" button to load a JSON file from the local hard drive into a DTO[] array in the frontend.
ComponentA, where I subscribe to my Observable using the async pipe:
...ANSWER
Answered 2018-Nov-02 at 07:11The general nature of the internet is going to prevent an application from accessing the user's hard drive. We do have access to Local Storage, which has some pretty large limits.
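The export/import round trip the question asks for boils down to serializing the DTO[] array and parsing it back. In the browser the export side would wrap the string in a Blob plus an object URL on a download-attributed link (or use Local Storage, as the answer notes), and the import side would read the chosen file's text; plain strings stand in here so the serialization logic is visible:

```javascript
// Illustrative DTO[] array; field names are hypothetical.
const dtos = [
  { id: 1, name: 'first' },
  { id: 2, name: 'second' }
];

// Export: this string is what would become the downloaded .json file
// (in the browser: new Blob([fileContents]) + URL.createObjectURL).
const fileContents = JSON.stringify(dtos, null, 2);

// Import: what the file-input change handler would do with the file text.
const imported = JSON.parse(fileContents);
```

Since JSON.stringify/parse is lossless for plain data objects, the imported array is structurally identical to the exported one.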
QUESTION
I'm searching for a way to download big files (2 to 4 GB) from a bearer-token-protected API endpoint that works in all common browsers (IE 11, Chrome, Firefox, Android browsers, Safari). It should work with Angular/TS and browsers only (without providing an app).
The problem with FileSaver
At the moment I'm using eligrey/filesaver, which more or less combines all the browser-specific possibilities for a client-side blob download. With this approach I can easily use the bearer token (e.g. with HTTP interceptors).
The problem is, IE pumps up the RAM and gets stuck at 1.4 GB download.
The problem with StreamSaver
I saw there is a new, modern way with StreamSaver.js, which allows streaming directly to disk, but it is only available in Chrome and Opera.
The Problem with an unprotected endpoint
Another common way to handle such a scenario would be to allow anonymous access to the download endpoint: request a one-time token first, create a URL containing this token, and let the user download it directly via the browser by opening a new tab that is closed immediately after the download starts.
This approach takes two requests per download and looks flashy to the user. I'm not sure whether this works in mobile browsers (opening a new tab for the download), and it looks like a hack, at least to me. The API has to ensure that URLs are only valid for a short period of time and/or cannot be used twice in a row.
Ideas, anyone?
Does anyone knows a clean, modern/state of the art & performant way for such a common scenario?
How are the big companies dealing with this problem (Google Drive, Dropbox, Amazon)?
I personally would prefer to let the browser download the file (instead of a client-side blob download). Maybe there is a way to "inject" the bearer token as a default request header in the browser when the client clicks an href'ed link.
Why isn't it easy for a modern Angular rich client to delegate a protected binary download to the browser?
...ANSWER
Answered 2018-Jan-23 at 08:21Unfortunately, I'm not aware of any method to solve your issue regarding FileSaver.js and StreamSaver.js.
Nonetheless, I had a similar problem to solve, which points to your thoughts on using an "unprotected endpoint".
Well, I would create a one-time token for the file, which is included in the URL, of course. (I know, I know, it's not the best solution; I'd appreciate it if someone came up with a best practice for this.)
Your thoughts mentioned a "flashy" and nasty behaviour for the user. I found a way to avoid this "flashy" problem by injecting a form which triggers the download.
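The injected-form trick can be sketched as below: a hidden form submits the one-time token to the download endpoint in the current tab, so no window flashes open and the browser itself handles the response as a download. It is DOM-only, so the function is defined but not invoked here, and the endpoint path and field name are illustrative:

```javascript
// Hedged sketch: trigger a browser-native download via a hidden form.
function triggerDownload(token) {
  const form = document.createElement('form');
  form.method = 'POST';
  form.action = '/api/files/download'; // hypothetical endpoint
  form.style.display = 'none';

  const field = document.createElement('input');
  field.type = 'hidden';
  field.name = 'token';
  field.value = token;
  form.appendChild(field);

  document.body.appendChild(form);
  form.submit(); // the Content-Disposition: attachment response becomes a download
  form.remove();
}
```

Because the response carries an attachment disposition, submitting the form does not navigate away from the page.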
QUESTION
I have a file represented as a list of chunks, and the goal is to download all chunks, join and save as a file.
Requirements:
- It should work for large files
- It should be a cross-browser solution

Options I have considered:

Plain JS array
- Yes, we can download and store all chunks in a regular JavaScript array.
- It's a cross-browser solution
- But it uses RAM, and if the file size exceeds free memory, the browser just crashes...

FileSaver.js
- Partly cross-browser
- Limited file size

StreamSaver.js
- Not cross-browser
- Works for large files

Filesystem API
- It's the Chrome sandbox filesystem API
- Works for large files

But I still can't achieve my goal while meeting the requirements above...
If someone has experience with a good solution, I kindly ask them to share it here. Thanks
ANSWER
Answered 2017-Dec-13 at 21:54There isn't really a cross-browser option here yet unfortunately.
In Chrome, you can use either the non-standard Filesystem API, or Blobs which Chrome will use the file-system for if the blob is large.
In Firefox, you can maybe use the non-standard IDBMutableFile. However, it will not work with the download API, so you would have to use window.location to send the browser to the blob URL, which the browser must then download (this may not happen for all file extensions). You also may need the IDB persistent option to have files larger than ~2 GB.
In other browsers, Blob is your only real option. On the up side, the OS the browser runs on may use paging which could enable the browser to create blobs larger than memory.
A service-worker-based option like StreamSaver may also help (perhaps this could be a download API alternative for Firefox), but there is (or was?) a limit to how long the browser will wait for a complete response, meaning you would probably have to download and store the chunks somewhere to complete the response in time.
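Whichever save mechanism ends up being used, the common step all of the answers share is joining the downloaded chunks into one contiguous buffer, which can be sketched as below (in-memory stand-ins replace the real chunk downloads; new Blob(parts) does this join internally, but it is kept explicit here):

```javascript
// Stand-ins for downloaded chunks (each would come from a fetch call).
const parts = [
  new TextEncoder().encode('part-1|'),
  new TextEncoder().encode('part-2|'),
  new TextEncoder().encode('part-3')
];

// Join Uint8Array chunks into one buffer of the exact total size.
const total = parts.reduce((n, p) => n + p.length, 0);
const joined = new Uint8Array(total);
let offset = 0;
for (const p of parts) {
  joined.set(p, offset); // copy each chunk at its running offset
  offset += p.length;
}
```

This is exactly the step that blows up RAM for very large files, which is why the streaming options above try to avoid ever holding all chunks at once.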
QUESTION
I need to download a large file from Angular. I have a link, and when the client clicks it, the file must be downloaded. Code in the controller:
...ANSWER
Answered 2017-Feb-15 at 08:50See the links below; they give a good explanation.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported