[Dancer-users] do something after sending response
naveedm9 at gmail.com
Sun Aug 14 12:22:50 CEST 2011
On Fri, Aug 12, 2011 at 6:56 AM, Nick Knutov <mail at knutov.com> wrote:
> Yes, in my case the load can be high, and I still want to do some work after
> sending the response.
> For example, I need to send emails. It can be done with cron scripts
> (with serialization overhead), but I need to send the emails as soon as
> possible after the rendered page has been sent to the upstream/browser.
I made this simple Dancer app that queues up emails on a job queue. It
uses Net::Stomp as the client.
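The app looks roughly like this (a sketch; the queue name, broker host/port and payload fields are just examples, not necessarily what a real app would use):

```perl
#!/usr/bin/env perl
use Dancer;
use Net::Stomp;
use JSON qw(to_json);

# Connect once to the STOMP broker; pocomq listens on the standard
# STOMP port 61613 by default and doesn't require credentials.
my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
$stomp->connect;

post '/signup' => sub {
    # Serialize the job and push it onto the queue. The worker does the
    # slow SMTP work, so this request returns immediately.
    $stomp->send({
        destination => '/queue/email',
        body        => to_json({
            to      => params->{email},
            subject => 'Welcome!',
            body    => 'Thanks for signing up.',
        }),
    });
    return 'OK';
};

dance;
```

The send returns as soon as the frame is handed to the broker, so the route handler never waits on SMTP.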
There is a separate worker script running that processes and
sends the emails:
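Along these lines (again a sketch; the actual delivery is left to whatever mailer you prefer):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Net::Stomp;
use JSON qw(from_json);
# plus e.g. Email::Sender to do the actual delivery

my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
$stomp->connect;
$stomp->subscribe({
    destination => '/queue/email',
    ack         => 'client',   # ack only after the mail goes out
});

while (1) {
    my $frame = $stomp->receive_frame;   # blocks until a job arrives
    my $job   = from_json($frame->body);
    # send_email($job);  # whatever mailer you use
    $stomp->ack({ frame => $frame });
}
```

With client-mode ack, a job is only removed from the queue after the email actually went out, so a crashed worker doesn't lose jobs.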
I use POE::Component::MessageQueue (pocomq) as the message broker
(queue server). It is all actually quite simple to set up.
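For reference, the distribution ships an mq.pl script, so getting a broker running is basically:

```shell
# Install POE::Component::MessageQueue from CPAN, then start a broker
# listening on the default STOMP port (61613). Storage backends are
# configurable; see the module's docs for the options.
cpan POE::Component::MessageQueue
mq.pl
```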
There are lots of other good messaging systems; others in this thread have
mentioned some. For this sort of thing, I think you really should use
some kind of job queue: it decouples your application and
makes scaling very easy. I don't think Dancer needs some new fancy
hook; this problem can easily be solved without one. All these new
hooks in Dancer are making me nervous :)
> So, in my case, I can spin up more back-end instances and do it as described
> to save the seconds between sending the response and sending the emails.
> On 12.08.2011 16:13, Flavio Poletti wrote:
>> On Fri, Aug 12, 2011 at 5:49 AM, Nick Knutov <mail at knutov.com
>> <mailto:mail at knutov.com>> wrote:
>> I agree, forking is another way to do it, but as far as I know it
>> can lead to very high load. Also, a dying process is a heavy
>> operation, as far as I remember.
>> I don't fully understand the role of server load here.
>> On the one hand, you want to perform those heavy operations inside your
>> Dancer process after the page has been delivered. I would say that this
>> is reasonable only if the load on your server is very low, because you're
>> asking your front-end infrastructure to take care of back-end duties as
>> well.
>> On the other hand, you're very concerned with load. If this is the case,
>> are you sure that blocking your front-end process to carry out
>> back-end duties is the right way to go? You might shave off the fork
>> penalties (though you have to do your benchmarking homework here, and
>> with reasonable copy-on-write handling you probably don't suffer that
>> much), but that would be nothing compared to how stuck your front-ends
>> would become.
>> If load is your concern, you should decouple the front end from the
>> back end as much as possible. I'd suggest serializing the relevant data
>> in some way and pushing it to a queue that is handled by some other
>> process, which might even run on another machine if the load on a single
>> server gets too high. If you need more, with proper sharding you could
>> also reach full horizontal scalability.
>> Dancer-users mailing list
>> Dancer-users at perldancer.org
> Best Regards,
> Nick Knutov
> ICQ: 272873706
> Voice: +7-904-84-23-130