[Dancer-users] URI design question

Mr. Puneet Kishor punk.kish at gmail.com
Wed May 25 15:36:54 CEST 2011


On May 25, 2011, at 2:47 AM, Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 wrote:

>> REST
> The rest of the message doesn't read like that. I'll pretend you wrote "HTTP 
> API" instead.
> 

Care to explain? Most likely you are correct, but I would like to understand why: I wrote REST, and thought I was talking REST, but apparently ended up describing an HTTP API instead.

From what I understand about REST -- every "item" that is to be retrieved via a URI is a resource (the R in REST). Every URI identifies a unique resource, and every resource has one and only one URI. Hence, the URI can be thought of as the name of the resource -- the really, truly unique resource identifier.

The only interface to the outside world is the URI (the "name" of the resource) plus a handful of verbs -- in the world of HTTP, namely GET, PUT, POST, and DELETE.
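In Dancer terms, I imagine that maps to one route per verb on the same URI. Here is an untested sketch, with a made-up "readings" resource and an in-memory hash standing in for real storage:

    use Dancer;

    my %readings;    # stand-in storage for the hypothetical resource

    get '/readings/:id' => sub {      # retrieve a representation
        my $r = $readings{ params->{id} };
        unless ($r) { status 404; return ''; }
        return to_json $r;
    };

    put '/readings/:id' => sub {      # create or replace at a known URI
        $readings{ params->{id} } = from_json request->body;
        return to_json $readings{ params->{id} };
    };

    post '/readings' => sub {         # create; the server picks the id
        my $id = 1 + keys %readings;
        $readings{$id} = from_json request->body;
        status 201;
        return to_json({ id => $id });
    };

    del '/readings/:id' => sub {      # remove the resource
        delete $readings{ params->{id} };
        return '';
    };

    dance;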

Am I talking nonsense above? And is there a disconnect between what I think I am doing and what I am actually doing?

>> Any other thoughts on such an application design?
> In addition to what Brian said, Google built a convention around hash 
> fragments to make AJAXy resources crawlable. It's a hack to accommodate people 
> who didn't know better about the drawbacks and are now stuck with their 
> horrible design. I call it a hack because AJAX by itself, unlike other Web 
> tech we have seen so far, does not degrade gracefully; see and puke for 
> yourself: <http://code.google.com/web/ajaxcrawling/>


I like this bit:

<a href="ajax.html?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>

I didn't know I could do the above, and it seems like a good design feature: a user agent without JS simply follows the plain href, while the onClick handler takes over for everyone else.

That said, I have the following clients -- 

(1) the web users whose only interface to my data and application is the web interface;

(2) a program, such as a scientific application, that wants access to my data via the command line so it can do its own work, say, modeling. Or even a user who just wants to download the data for further use elsewhere (one way to serve both of these clients from the same URIs is sketched after this list);

and now (something I really hadn't thought of until your email)

(3) the darned web crawler. Actually, I had sort of thought of the web crawler, because I asked about permalinks.
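Clients (1) and (2) could conceivably be served from the same URIs with content negotiation, so the same resource "name" works for both. A rough, untested sketch (the route, the 'reading' template, and the stand-in data are all made up):

    use Dancer;

    get '/readings/:id' => sub {
        my $reading = { id => params->{id}, value => 17.2 };   # stand-in data

        # A program asking for JSON gets the raw data;
        # a browser gets the HTML page for the same URI.
        if ((request->header('Accept') || '') =~ m{application/json}) {
            content_type 'application/json';
            return to_json $reading;
        }
        return template 'reading', $reading;
    };

    dance;

That way a curl from the modeling program and a click in the browser both resolve the same name to the same resource.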

Of course, there is the question -- should my data be exposed to the web crawler at all? It may make sense for certain kinds of data, but what about highly discrete, small data? Say, weather readings -- I have a historical climate dataset for the US that, if I were to count each reading separately (in other words, each bit of data that would be retrievable via a unique name, or URI), would run upward of 120 billion items (yes, no typos there... 120 Billion). Is there any point in pretending it is all a static web and letting Google have at it?

> 
> Since you are still in the design phase, design your interaction properly with 
> *progressive enhancement*. Write semantic markup and forms that work in a text 
> browser. Then apply styles. Then apply JS that modifies the behaviour of an 
> existing document to load partial content with AJAX. Employ pushState so that 
> the URL changes the same way as it would for a user agent without JS. I can 
> recommend jQuery BBQ to handle these bits.


I am gonna learn about pushState and jquery-pjax today. What is jQuery BBQ?
