do something after sending response
Hello all,

is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?

For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.

-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
I think you're better off having another server-side process to do this. For example, your Dancer app adds a row to an "email_queue" table and then you have a cron job on the back end that looks for new rows and sends out the emails. This way it's decoupled from your application code. It's also easier to manage logs and notifications when you're not doing it under a web server environment.

On Thu, Aug 11, 2011 at 9:54 AM, Nick Knutov <mail@knutov.com> wrote:
Hello all,
is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?
For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.
-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
On Thursday 11 August 2011 18:00:37 Brian E. Lozier wrote:
I think you're better off having another server-side process to do this. For example, your Dancer app adds a row to an "email_queue" table and then you have a cron job on the back end that looks for new rows and sends out the emails. This way it's decoupled from your application code. It's also easier to manage logs and notifications when you're not doing it under a web server environment.
I'd tend to agree; that sounds like a sensible way to handle it.

Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.

That might be a little trickier and harder to debug, though.

-- David Precious ("bigpresh") http://www.preshweb.co.uk/ "Programming is like sex. One mistake and you have to support it for the rest of your life". (Michael Sinz)
Yes, a table of tasks is how it is implemented now, but that way I have to serialize and store all the context data and variables and restore them in the second cron/daemon script. Sometimes the context variables are big, and either way it takes some processing time, which is not optimal [under high load]. It would be great to have something like 'after_response', if it is technically possible.

On 11.08.2011 23:17, David Precious wrote:
On Thursday 11 August 2011 18:00:37 Brian E. Lozier wrote:
I think you're better off having another server-side process to do this. For example, your Dancer app adds a row to an "email_queue" table and then you have a cron job on the back end that looks for new rows and sends out the emails. This way it's decoupled from your application code. It's also easier to manage logs and notifications when you're not doing it under a web server environment.
I'd tend to agree, that sounds like a sensible way to handle it.
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.
That might be a little trickier and harder to debug, though, possibly.
-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.
For the time being, at least, this is probably the easiest option. Something like:

    hook after => sub {
        if (fork) {
            # parent - do nothing
        }
        else {
            # Child - sleep for a while
            sleep 50;
            exit;
        }
    };

With a quick test, that works as expected. (The exit is required, so that the child process doesn't then go on to return control to Dancer.)

-- David Precious ("bigpresh") http://www.preshweb.co.uk/ "Programming is like sex. One mistake and you have to support it for the rest of your life". (Michael Sinz)
For what it's worth, I was investigating adding a keyword for delayed responses with PSGI. Haven't really finished it yet, though. Perhaps tomorrow, if I go to the yearly Linux meeting in Israel, I might work on it there.

On Thu, Aug 11, 2011 at 9:27 PM, David Precious <davidp@preshweb.co.uk> wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.
For the time being, at least, this is probably the easiest option.
Something like:
    hook after => sub {
        if (fork) {
            # parent - do nothing
        }
        else {
            # Child - sleep for a while
            sleep 50;
            exit;
        }
    };
With a quick test, that works as expected. (The exit is required, so that the child process doesn't then go on to return control to Dancer.)
-- David Precious ("bigpresh") http://www.preshweb.co.uk/
"Programming is like sex. One mistake and you have to support it for the rest of your life". (Michael Sinz) _______________________________________________ Dancer-users mailing list Dancer-users@perldancer.org http://www.backup-manager.org/cgi-bin/listinfo/dancer-users
On Thu, Aug 11, 2011 at 2:27 PM, David Precious <davidp@preshweb.co.uk> wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues
Being from a CGI background, my initial thought was "yuck", because there I was thinking of how all the child processes would be spawned off without any kind of control. But as I thought about it a little more I realised that Dancer is usually used in an application situation (is it application, daemon, or some other term?), and so Dancer itself would keep running, unlike a CGI script, and so be able to manage the child processes. You could queue extra data if there are too many child processes or if the load is too high, etc.

You shouldn't even need to serialize it, as the OP mentioned, unless it is large and you don't want it in memory, or for data safety in case of a restart of the application.

Is Dancer thread-safe? If so, you could use threads instead of forks.

Regards, Colin.
On Thu, Aug 11, 2011 at 9:37 PM, Colin Keith <colinmkeith@gmail.com> wrote:
On Thu, Aug 11, 2011 at 2:27 PM, David Precious <davidp@preshweb.co.uk> wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues
But as I thought about it a little more I realised that Dancer is usually used in an application situation (is it application, daemon, or some other term?) and so Dancer itself would keep running, unlike a CGI script, and so be able to manage the child processes. You could queue extra data if there are too many child processes or if the load is too high, etc. You shouldn't even need to serialize it as the OP mentioned unless it is large and you don't want it in memory or for data safety in case of a restart of the application.
Actually, since Dancer is run on top of PSGI (not necessarily, but mainly), it means that you could use the AnyEvent PSGI server and push stuff to be run asynchronously, on your own.
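For illustration, a minimal sketch of that idea, assuming the app runs under an AnyEvent-based PSGI server such as Twiggy (the route and the send_notification() helper below are hypothetical, not part of any of the apps discussed here):

    use Dancer;
    use AnyEvent;

    my %pending;     # keep timer guards alive until each job has run
    my $job_id = 0;

    get '/signup' => sub {
        # ... handle the request as usual ...

        # Schedule the slow work. Under an AnyEvent-based PSGI server
        # (e.g. Twiggy) this callback fires from the event loop after
        # the response has been written back to the client.
        my $id = ++$job_id;
        $pending{$id} = AE::timer( 0, 0, sub {
            delete $pending{$id};
            send_notification();   # hypothetical slow job: email, logging, ...
        } );

        return 'ok';
    };

    dance;

One caveat: the callback still runs inside the single event loop, so genuinely blocking work in it will stall other requests; truly heavy jobs still belong in a separate worker process.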
On 11 August 2011 20:40, sawyer x <xsawyerx@gmail.com> wrote:
On Thu, Aug 11, 2011 at 9:37 PM, Colin Keith <colinmkeith@gmail.com> wrote:
On Thu, Aug 11, 2011 at 2:27 PM, David Precious <davidp@preshweb.co.uk> wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues
But as I thought about it a little more I realised that Dancer is usually used in an application situation (is it application, daemon, or some other term?) and so Dancer itself would keep running, unlike a CGI script, and so be able to manage the child processes. You could queue extra data if there are too many child processes or if the load is too high, etc. You shouldn't even need to serialize it as the OP mentioned unless it is large and you don't want it in memory or for data safety in case of a restart of the application.
Actually, since Dancer is run on top of PSGI (not necessarily, but mainly), it means that you could use the AnyEvent PSGI server and push stuff to be run asynchronously, on your own.
That's a brilliant idea, and probably the best compromise here. Forking has a high cost, but keeping a single background child or using message queues means serialization, which is not very convenient.
On 12 August 2011 12:29, damien krotkine <dkrotkine@gmail.com> wrote:
On 11 August 2011 20:40, sawyer x <xsawyerx@gmail.com> wrote:
On Thu, Aug 11, 2011 at 9:37 PM, Colin Keith <colinmkeith@gmail.com> wrote:
On Thu, Aug 11, 2011 at 2:27 PM, David Precious <davidp@preshweb.co.uk> wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues
But as I thought about it a little more I realised that Dancer is usually used in an application situation (is it application, daemon, or some other term?) and so Dancer itself would keep running, unlike a CGI script, and so be able to manage the child processes. You could queue extra data if there are too many child processes or if the load is too high, etc. You shouldn't even need to serialize it as the OP mentioned unless it is large and you don't want it in memory or for data safety in case of a restart of the application.
Actually, since Dancer is run on top of PSGI (not necessarily, but mainly), it means that you could use the AnyEvent PSGI server and push stuff to be run asynchronously, on your own.
That's a brilliant idea, and probably the best compromise here. Forking has a high cost, but keeping a single background child or using message queues means serialization, which is not very convenient.
There is also the POEx::Role::PSGIServer module, to do the same in POE.
I agree, forking is another way to do it, but as far as I know, it can lead to very high load. Also, a dying process is a heavy operation, as far as I remember.

On 12.08.2011 0:27, David Precious wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.
For the time being, at least, this is probably the easiest option.
Something like:
    hook after => sub {
        if (fork) {
            # parent - do nothing
        }
        else {
            # Child - sleep for a while
            sleep 50;
            exit;
        }
    };
With a quick test, that works as expected. (The exit is required, so that the child process doesn't then go on to return control to Dancer.)
-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
On Fri, Aug 12, 2011 at 5:49 AM, Nick Knutov <mail@knutov.com> wrote:
I agree, forking is another way to do it, but as far as I know, it can lead to very high load. Also, a dying process is a heavy operation, as far as I remember.
I don't fully understand the role of server load here.

On the one hand you want to perform those heavy operations inside your Dancer process after the page has been delivered. I would say that this is reasonable if the load on your server is very low, because you're asking your front-end infrastructure to take care of back-end duties as well.

On the other hand you're very concerned with load. If this is the case, are you sure that blocking your front-end process to carry out the back-end duties is the right way to go? You might shave off the fork penalties (but you have to do your benchmarking homework here, and probably with a reasonable copy-on-write handling you don't suffer that much), but this would be nothing compared to how stuck your front-ends would become.

If the load is your concern, you should decouple the frontend from the backend as much as possible. I'd suggest serializing the relevant data in some way and pushing it to a queue that is dealt with by some other process, which might even run on another machine if the load on a single server gets too high. If you need more, with proper sharding you could also reach full horizontal scalability.

Cheers,

Flavio.
Yes, in my case the load can be high and I still want to do some work after sending the response.

For example, I need to send emails. It can be done with cron scripts (with the serialization overhead), but I need to send the emails as soon as possible after sending the rendered page to the upstream/browser.

So, in my case, I can run more back-end instances and do it as described, to save some seconds between sending the response and sending the emails.

On 12.08.2011 16:13, Flavio Poletti wrote:
On Fri, Aug 12, 2011 at 5:49 AM, Nick Knutov <mail@knutov.com> wrote:
I agree, forking is another way to do it, but as far as I know, it can lead to very high load. Also, a dying process is a heavy operation, as far as I remember.
I don't fully understand the role of server load here.
On the one hand you want to perform those heavy operations inside your Dancer process after the page has been delivered. I would say that this is reasonable if the load on your server is very low, because you're asking your front-end infrastructure to take care of back-end duties as well.
On the other hand you're very concerned with load. If this is the case, are you sure that blocking your front-end process to carry out the back-end duties is the right way to go? You might shave off the fork penalties (but you have to do your benchmarking homework here, and probably with a reasonable copy-on-write handling you don't suffer that much), but this would be nothing compared to how stuck your front-ends would become.
If the load is your concern, you should decouple the frontend from the backend as much as possible. I'd suggest serializing the relevant data in some way and pushing it to a queue that is dealt with by some other process, which might even run on another machine if the load on a single server gets too high. If you need more, with proper sharding you could also reach full horizontal scalability.
Cheers,
Flavio.
-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
On Fri, Aug 12, 2011 at 6:56 AM, Nick Knutov <mail@knutov.com> wrote:
Yes, in my case the load can be high and I still want to do some job after sending response.
For example, I need to send emails, and it can be done with cron scripts (with overhead on serialization), but I need to send emails as soon as possible after sending rendered page to upstream/browser.
I made this simple Dancer app that queues up emails on a job queue. It uses Net::Stomp as the client: https://github.com/ironcamel/postmail/blob/master/lib/Postmail.pm

There is a separate worker script running that processes and sends the emails: https://github.com/ironcamel/postmail/blob/master/bin/worker.pl

I use POE::Component::MessageQueue (pocomq) as the message broker (queue server). It is all actually quite simple to set up. There are lots of other good messaging systems; others have mentioned some good ones.

For this sort of thing, I think you really should use some kind of job queue. It helps to decouple your application, and makes scaling very easy. I don't think Dancer needs some new fancy hook. This problem can easily be solved without that. All these new hooks in Dancer are making me nervous :)

-Naveed
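The gist of the producer side, as a rough sketch (the broker host, queue name and payload fields here are assumptions, not taken from the linked app):

    use Net::Stomp;
    use JSON qw(encode_json);

    # Connect to a STOMP message broker, e.g.
    # POE::Component::MessageQueue or ActiveMQ.
    my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
    $stomp->connect({ login => 'guest', passcode => 'guest' });

    # In the route handler: serialize the job, enqueue it, and return
    # the response immediately; a separate worker sends the mail later.
    $stomp->send({
        destination => '/queue/mail',
        body        => encode_json({ to => $address, subject => $subject }),
    });

The worker is the mirror image: subscribe to the queue, read frames in a loop, send each mail, and acknowledge the frame.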
So, in my case, I can load more back-end instances and do it as described to save some seconds between sending response and sending emails.
On 12.08.2011 16:13, Flavio Poletti wrote:
On Fri, Aug 12, 2011 at 5:49 AM, Nick Knutov <mail@knutov.com> wrote:
I agree, forking is another way to do it, but as far as I know, it can lead to very high load. Also, a dying process is a heavy operation, as far as I remember.
I don't fully understand the role of server load here.
On the one hand you want to perform those heavy operations inside your Dancer process after the page has been delivered. I would say that this is reasonable if the load on your server is very low, because you're asking your front-end infrastructure to take care of back-end duties as well.
On the other hand you're very concerned with load. If this is the case, are you sure that blocking your front-end process to carry out the back-end duties is the right way to go? You might shave off the fork penalties (but you have to do your benchmarking homework here, and probably with a reasonable copy-on-write handling you don't suffer that much), but this would be nothing compared to how stuck your front-ends would become.
If the load is your concern, you should decouple the frontend from the backend as much as possible. I'd suggest serializing the relevant data in some way and pushing it to a queue that is dealt with by some other process, which might even run on another machine if the load on a single server gets too high. If you need more, with proper sharding you could also reach full horizontal scalability.
Cheers,
Flavio.
-- Best Regards, Nick Knutov http://knutov.com ICQ: 272873706 Voice: +7-904-84-23-130
On Fri, Aug 12, 2011 at 09:49:45AM +0600, Nick Knutov wrote:
I agree, forking is another way to do it, but as far as I know, it can lead to very high load. Also, a dying process is a heavy operation, as far as I remember.
A simple benchmark with ab can prove that:

    ab2 -c 1 -n 500 http://127.0.0.1:5000/
    ...
    Requests per second:    299.41 [#/sec] (mean)
    Time per request:       3.340 [ms] (mean)
    ...

Adding fork() to a "/" route:

    use POSIX ':sys_wait_h';    # for WNOHANG

    exit unless (fork);         # child exits immediately; parent continues
    waitpid( -1, WNOHANG );     # reap finished children without blocking

and running the same benchmark again:

    ab2 -c 1 -n 500 http://127.0.0.1:5000/
    ...
    Requests per second:    52.60 [#/sec] (mean)
    Time per request:       19.011 [ms] (mean)
    ...

6x slower!
On 12.08.2011 0:27, David Precious wrote:
On Thursday 11 August 2011 18:17:47 David Precious wrote:
Another possible alternative would be to fork a new process that will do the stuff in the background, whilst the original process continues onwards to send the response back to the client.
For the time being, at least, this is probably the easiest option.
Something like:
    hook after => sub {
        if (fork) {
            # parent - do nothing
        }
        else {
            # Child - sleep for a while
            sleep 50;
            exit;
        }
    };
With a quick test, that works as expected. (The exit is required, so that the child process doesn't then go on to return control to Dancer.)
-- Vladimir Lettiev aka crux ✉ theCrux@gmail.com
On Thursday 11 August 2011 17:54:33 Nick Knutov wrote:
Hello all,
is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?
For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.
Currently, no; the closest is the 'after' hook, but that runs after the route handler completes and before the response is sent.

It could potentially make sense for us to add a new hook which fires once the request is complete - 'after_response', or 'after_response_sent', or something vaguely like that. However, I think implementing that may be tricky thanks to the way Plack works. We'll have to have more of a think about this one :)

-- David Precious ("bigpresh") http://www.preshweb.co.uk/ "Programming is like sex. One mistake and you have to support it for the rest of your life". (Michael Sinz)
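For reference, raw PSGI does offer a place to hang such code when the server supports the streaming interface; a sketch, assuming a server like Twiggy and a hypothetical do_slow_work() helper (this is not how Dancer's hooks are implemented):

    # A bare PSGI app, not Dancer. With the streaming interface, code
    # placed after $writer->close runs once the response body has been
    # handed off to the server.
    my $app = sub {
        my $env = shift;
        return sub {
            my $responder = shift;
            my $writer = $responder->( [ 200, [ 'Content-Type' => 'text/plain' ] ] );
            $writer->write("done\n");
            $writer->close;

            do_slow_work();   # hypothetical post-response job
        };
    };

Whether other requests are blocked while do_slow_work() runs still depends on the server, which is part of what makes a portable 'after_response' hook tricky.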
On Thu, 2011-08-11 at 17:54 +0100, Nick Knutov wrote:
Hello all,
is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?
For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.
Further to all the other answers: we use TheSchwartz quite heavily in our main (non-Dancer) environment. It's a little hairy, but a job queue (not necessarily TheSchwartz) may well be what you want...

a
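To give a flavour, a minimal TheSchwartz producer sketch (the DSN, credentials and the 'SendEmail' function name are all assumptions):

    use TheSchwartz;

    # Enqueue from the web app: a cheap database insert, so the
    # response isn't held up by the slow work itself.
    my $client = TheSchwartz->new(
        databases => [
            { dsn => 'dbi:mysql:theschwartz', user => 'web', pass => 'secret' },
        ],
    );
    $client->insert( 'SendEmail', { to => $address, body => $body } );

The worker runs as a separate process, with a class that subclasses TheSchwartz::Worker and implements work(), registered via $client->can_do('SendEmail') and driven by $client->work.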
On 12.08.2011 11:52, Alex Knowles wrote:
On Thu, 2011-08-11 at 17:54 +0100, Nick Knutov wrote:
Hello all,
is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?
For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.
Further to all the other answers: we use TheSchwartz quite heavily in our main (non-Dancer) environment. It's a little hairy, but a job queue (not necessarily TheSchwartz) may well be what you want...
I used a similar approach to solve the same problem as the OP, but I used the beanstalk job queue: http://kr.github.com/beanstalkd/

It worked great; it's very fast and efficient, and there are client libraries for all popular languages (including Perl, of course). It solves the problem of forking, but you still have to serialize the data you want to send to your workers (the processes doing stuff for you).

What I liked is multiple tubes (in beanstalk nomenclature) - you can have one tube for logging, another one for sending emails, etc. And also persistence: beanstalkd (written in C, so very fast) can log job requests to a file, so you won't lose any even if you reboot the server!

Beanstalk is also very simple and easy to use; it took me no longer than an hour to install it, configure it, modify the original (PHP) code to dispatch requests to beanstalk, and write a simple worker (Perl) to just log some stuff to a database. Logging AFTER the response has already been sent to the web visitor. Yay! :)

-- tydu
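A rough sketch of both halves with the Beanstalk::Client CPAN module (the tube name and payload are assumptions; the poster's producer side was PHP):

    use Beanstalk::Client;
    use JSON qw(encode_json decode_json);

    my $client = Beanstalk::Client->new({
        server       => 'localhost',
        default_tube => 'logging',    # one tube per kind of job
    });

    # Producer (web app): enqueue and return the response immediately.
    $client->put({ data => encode_json({ event => 'page_view', uid => $uid }) });

    # Worker (separate process): block until a job arrives, handle it,
    # then delete it from the tube.
    while ( my $job = $client->reserve ) {
        my $payload = decode_json( $job->data );
        # ... write the log row, send the mail, etc. ...
        $job->delete;
    }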
On Thu, Aug 11, 2011 at 10:54:33PM +0600, Nick Knutov wrote:
Hello all,
is it possible to run some code after sending the response to the upstream [and closing the socket] with Dancer?
For example: I want to get a request, send the response, and after closing the socket to the back-end do some work with long timeouts: write logs, send notifications via email, etc.
Hi. For example, you can use AnyEvent::Gearman::Client in your Dancer app and Twiggy as the backend web server (AnyEvent-based), with Gearman as the job server and AnyEvent::Gearman::Worker for the worker daemon. (Didn't check that scheme, but will try to.)

-- Vladimir Lettiev aka crux ✉ theCrux@gmail.com
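An untested sketch of the client half of that scheme, in keeping with the caveat above (the server address and the 'send_mail' function name are assumptions):

    use AnyEvent;
    use AnyEvent::Gearman;
    use JSON qw(encode_json);

    # Inside the Dancer app running under Twiggy:
    my $gearman = gearman_client 'localhost:4730';

    # Fire-and-forget: submit the job and return the response without
    # waiting for the worker to finish it.
    $gearman->add_task(
        send_mail => encode_json({ to => $address }),
        on_complete => sub { },                           # nothing to do here
        on_fail     => sub { warn 'send_mail job failed' },
    );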
participants (11)

- Alex Knowles
- Brian E. Lozier
- Colin Keith
- damien krotkine
- David Precious
- Flavio Poletti
- Naveed Massjouni
- Nick Knutov
- sawyer x
- thecrux@gmail.com
- Tyler Durden