
I am currently downloading large files in my client-side code using the axios library. The files are around 600 MB, but during the download the page crashes, as if it runs out of memory or similar.

I need to hold the file in memory because the content is encrypted and I have to decrypt it before passing it to the user.

I use a GET HTTP request like this:

    axios.get(url, {
        headers: {
            "Authorization": authHeader().Authorization,
            "Accept": "application/octet-stream, application/json, text/plain, */*"
        },
        responseType: 'arraybuffer'
    })
    .then(function(response) {
        console.log(response);
    });

Are there any common workarounds for this problem? So far I wasn't able to find any.


Asked Jun 10, 2022 by Jacob; edited Jun 18, 2022 by Daniel A. White.
  • stackoverflow.com/a/41940307/11046238 – Dmitriy Mozgovoy Commented Jun 10, 2022 at 11:44
  • What actually is the problem? Can you show some error message, error code or stack trace? – Marcel Commented Jun 13, 2022 at 6:38
  • The browser crashes, looking like it runs out of memory – Jacob Commented Jun 13, 2022 at 6:54
  • The browser has limited access to memory, and that limit depends on the OS; it's basically outside the realm of your app. By attempting to hold the contents of the file in memory to decrypt it, you're reaching that limit (and breaking the streaming pattern). A working solution would be to get access to the file system, save the file, and decrypt it afterwards (e.g. using Electron). Another approach would be to decrypt it on the server side so it doesn't require holding all the data in memory at once. – tao Commented Jun 14, 2022 at 15:57
  • Why do you want that much for a file? Most clients would easily choke on that – Daniel A. White Commented Jun 18, 2022 at 22:15

5 Answers


Open the URL in a new tab on the client side using window.open(url) and let the browser handle the download automatically.

If you want to decrypt the data, try to decrypt it on the server side instead: otherwise you'll be giving out the decryption key on the client side, which can have security issues.

Do you actually need to do it with axios? There is the Fetch API, which can serve the purpose. Here's how I do it for files in the same size range as yours (media files and ZIPs of up to 1 GB):

    fetch(url, {
        mode: 'no-cors', // to allow any accessible resource
        method: 'GET',
    })
        .then((response) => {
            console.debug('LOAD_FROM_URL::response', response);
            // NOTE: the response URL is possibly redirected
        });

See https://github.com/suterma/replayer-pwa/blob/main/src/store/actions.ts for more context.

It has worked flawlessly for me so far.
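If holding the whole 600 MB ArrayBuffer is what crashes the page, the Fetch API also lets you consume the body as a stream and process it chunk by chunk instead of buffering everything. A hedged sketch, where onChunk is a hypothetical callback (e.g. feeding an incremental decryptor):

```javascript
// Sketch: read a fetch() response body chunk by chunk instead of buffering
// the whole file. Each chunk is a Uint8Array that can be processed and
// then released, keeping peak memory low.
async function readInChunks(response, onChunk) {
  const reader = response.body.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.length;
    onChunk(value); // hypothetical per-chunk handler
  }
  return total; // total number of bytes consumed
}
```

In the browser this would be called as readInChunks(await fetch(url), handleChunk). Whether this helps in the questioner's case depends on the cipher supporting incremental decryption.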

Can you please give it a try by setting maxContentLength and maxBodyLength to Infinity in the axios call:

axios.get(url, {
  headers: { 
    "Authorization": authHeader().Authorization,
    "Accept" : "application/octet-stream, application/json, text/plain, */*"            
  },
  responseType: 'arraybuffer',
  maxContentLength: Infinity,
  maxBodyLength: Infinity
})
  .then(function(response) {
    console.log(response);
  });

You can also have a look at this axios issue thread about the same problem.

HTTP Protocol: Range Header

Use the 'Range' header for getting big files part by part.

Example curl HTTP request: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests

curl http://i.imgur.com/z4d4kWk.jpg -i -H "Range: bytes=0-1023"

Example JS

axios.get(url, {
  headers: { 
    "Authorization": authHeader().Authorization,
    "Accept" : "application/octet-stream, application/json, text/plain, */*",
    //--------------------------|
    "Range" : "bytes=0-1023"    // <------ ADD HERE 
    //--------------------------|
  },
  responseType: 'arraybuffer'
}).then(function(response) {
  console.log(response);
});

You need to revisit the design of your frontend. Whether it's React, Angular, or plain JS, offload such large downloads, as well as other long-running scripts, to JavaScript Web Workers. This is the modern way of moving JavaScript tasks to the background so the page stays responsive.

Otherwise, since JavaScript is single-threaded, there is no way to bypass the issue; Web Workers are the only exception here.

As of today, it is supported in all current browsers.

More on this: link
