Starman large file upload
Hi All,

Hit a bit of a wall. We are using a standard jQuery uploader connected to Dancer2, and in general that works great.

We are testing uploads of large files (600 MB) and having a lot of issues. We see the file uploading to /tmp as expected, but at the end of the upload Starman uses a lot of CPU and memory, and the defined upload route (our code) does not get hit for several minutes. It looks like Starman is loading the whole file into memory and hitting swap.

Apart from increasing memory, is there something specific we should be doing to resolve this?

Zahir Lalani
Head of Development & Architecture
Zahir> Hit a bit of a wall

Zahir> We are using a standard jquery uploader connected to Dancer2 and that in
Zahir> general works great

Can you give us an example of your code so we can help debug this? From my googling around, it's almost certainly a problem with how the file is written from the client, through Starman, to Dancer and then to disk.

Are you trying to parse forms or parts in the upload by any chance?

Zahir> We are testing uploading large files (600M) and having a lot of issues:

Zahir> We see the file uploading to /tmp as expected. At the end of
Zahir> the upload, Starman has a lot of CPU and Mem usage and the
Zahir> defined upload route (our code) does not get hit for several
Zahir> minutes. It looks like Starman is loading this into memory and
Zahir> hitting swap issues

You need to show us how you're using Starman and what its config is before we can help here.

Just increasing memory won't solve the problem. You need to figure out why Starman is buffering the upload and how to get it out of the middle.

John
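For illustration, a minimal Dancer2 sketch of the distinction being drawn here (not Zahir's actual code; the route name, the 'files[]' field name and the target path are assumptions). Once Dancer2 has spooled the upload into /tmp, moving it with link_to or copy_to keeps the worker from reading the whole file, whereas calling ->content slurps it all into memory:

use Dancer2;

post '/upload' => sub {
    my $upload = request->upload('files[]')
        or return 'no file part';

    # Memory-friendly: link (or copy) the spooled temp file into place
    # without reading its contents into the worker process.
    $upload->link_to('/data/incoming/' . $upload->basename);

    # Memory-hungry alternative: this reads the entire upload into RAM.
    # my $bytes = $upload->content;

    return 'ok';
};

dance;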
Hi John,

As I said, our back-end code does not get called for literally 1-2 minutes; it seems the server goes into a massive swap issue. The POST uses a standard jQuery library (jQuery-File-Upload), which does a multipart file upload in an ajax POST. This works really well with files under 300 MB - our 600 MB test fails badly.

Our setup is Apache as the front-end proxy, which forwards requests to Starman on port 5000. Starman of course runs the Dancer code. As I said, the temp file is being created, but at the end of the transfer Starman eats up a lot of resources and we don't see our code being called for ages. So I'm not sure what code I should send you - if you need the Dancer launch config or the Apache config, I can certainly send those.

We are also looking into chunked uploads, but are working out how to handle this at the Dancer end - the chunks upload fine, but each one creates a new upload at the back end. We should be looking at the Content-Range header or similar to prevent this. Has anyone implemented this in Dancer before?

Thx again

Z
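For what it's worth, one possible way to read that Content-Range header in a Dancer2 route - a sketch only (Warren suggests a different, parameter-based approach below), assuming blueimp's jQuery-File-Upload with maxChunkSize set, which sends a "Content-Range: bytes <start>-<end>/<total>" header with each chunk; the 'files[]' field name and the target paths are assumptions:

use Dancer2;
set serializer => 'JSON';

post '/upload' => sub {
    my $upload = request->upload('files[]')
        or return { error => 'no file part' };

    # NB: keying the partial file on the client-supplied name alone is
    # only good enough for a sketch; a real app needs a per-upload id.
    my $target = '/tmp/assembled-' . $upload->basename;

    my $range = request->header('Content-Range');
    if ($range && $range =~ m{^bytes (\d+)-(\d+)/(\d+)$}) {
        my ($start, $end, $total) = ($1, $2, $3);

        # Append this chunk to the partial file instead of creating a
        # new upload each time. Assumes chunks arrive in order, which
        # they do when the plugin sends them sequentially.
        open my $out, '>>:raw', $target or die "open $target: $!";
        print {$out} $upload->content;
        close $out;

        return { received => $end + 1, total => $total,
                 done     => ($end + 1 == $total) ? 1 : 0 };
    }

    # Not a chunked request: just move the whole upload into place.
    $upload->copy_to($target);
    return { done => 1 };
};

dance;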
On Apr 18, 2017, at 9:01 AM, Zahir Lalani <ZahirLalani@oliver.agency> wrote:
> We are also looking into chunked uploads
For anything taking more than a couple of seconds or 2 GB (whichever comes first) you should indeed be using HTML5’s new File API to send chunks of the file individually rather than try to send it all at once.
> we should be looking at the content range header or similar to prevent this
No, you should be sending query parameters that identify which chunk number this is and how many more chunks there will be, so that the Dancer route handler you're sending this to knows when it has received the last chunk.

Dancer pseudocode:

post '/upload' => sub {
    my $chunkNumber = param 'chunkNumber';
    return { error => 'no chunk number' }
        unless defined $chunkNumber && $chunkNumber =~ /^\d+$/;
    my $chunkCount = param 'chunkCount';
    my $chunk      = request->upload('chunk');
    # Do something intelligent with $chunk
};

HTML pseudocode:

<input type="file" onchange="sendFile(this.files[0])">

jQuery pseudocode:

function sendFile(file) {
    var chunkSize  = 1024 * 1024;  // pick a base-2 "round" number suitable to your app
    var fileBytes  = file.size;
    var numChunks  = Math.ceil(fileBytes / chunkSize);
    var chunksSent = 0;

    function sendChunk() {
        var start = chunksSent * chunkSize;
        var end   = Math.min(start + chunkSize, fileBytes);
        var chunk = file.slice(start, end);

        var fd = new FormData();
        fd.append('chunk',       chunk);
        fd.append('chunkNumber', chunksSent + 1);
        fd.append('chunkCount',  numChunks);

        $.ajax('/upload', {
            type:        'POST',
            data:        fd,
            processData: false,
            contentType: false,
            success: function() {
                if (++chunksSent < numChunks) sendChunk();
            },
        });
    }

    sendChunk();
}

That will certainly not run as-is, but it should be sufficient to get you to working code.
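To flesh out the "do something intelligent with $chunk" step, one way the route body might look - a sketch only, assuming chunks arrive in order (they do with the sequential sendChunk above) and adding a hypothetical 'uploadId' parameter, which the client would also have to send, so that concurrent uploads don't collide; the paths are made up:

use Dancer2;
set serializer => 'JSON';

post '/upload' => sub {
    my $chunkNumber = param 'chunkNumber';   # 1-based index of this chunk
    my $chunkCount  = param 'chunkCount';    # total number of chunks
    my $uploadId    = param 'uploadId';      # hypothetical per-file id sent by the client
    my $chunk       = request->upload('chunk');

    return { error => 'missing chunk' }
        unless $chunk && $chunkNumber && $chunkCount && $uploadId;

    # Append this chunk to a per-upload partial file.
    my $partial = "/tmp/upload-$uploadId.part";
    open my $out, '>>:raw', $partial or die "open $partial: $!";
    print {$out} $chunk->content;
    close $out;

    # Last chunk received: move the finished file into place.
    if ($chunkNumber == $chunkCount) {
        rename $partial, "/data/incoming/upload-$uploadId"
            or die "rename: $!";
        return { done => 1 };
    }

    return { done => 0, received => $chunkNumber };
};

dance;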
participants (3)
- John Stoffel
- Warren Young
- Zahir Lalani