Category Archives: RSS

Planet Timeline

Following up on the other day’s post on SPARQL and the SIMILE Timeline, here is another way of putting items on a timeline.

I’ve created a SPARQL template for Planet Planet; you can see it in action on my personal planet.

The template makes (optional) use of identifying icons for each feed, which are configured in the Planet config.ini (don’t forget to add the SPARQL template to the template_files configuration option as well):

[http://www.wasab.dk/morten/blog/feed/rdf]
name = Binary Relations
timeline_icon = dull-red-circle.png 
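
The timeline template itself also needs to be listed in the main [Planet] section for Planet to render it; something along these lines, where the template filename is only a guess based on the generated planet-sparql.srx file mentioned below:

[Planet]
template_files = examples/basic/index.html.tmpl planet-sparql.srx.tmpl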

Once set up, the generated SPARQL results (example) can be run through the SPARQL Timeline Service (example).

Integrating the timeline into the default Planet HTML isn’t hard; I’ve created a tweaked version of examples/basic/index.html.tmpl from the latest nightly — see it in action.

Getting this running on a working Planet installation should be as simple as downloading the two templates to the right location and updating the Planet configuration file (default dull-blue icons will be used if none are specified per feed).

Update: Note that the planet-sparql.srx file needs to be returned by the web server with an XML MIME type, preferably application/sparql-results+xml. If you’re unable to configure your server to do that, try renaming the template and output file to use .xml instead of .srx as the file extension — that should make the server return it as application/xml. Thanks to Edward Summers for pointing this out on the Planet developer list.
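
If the server happens to be Apache and per-directory overrides are allowed, a single directive in an .htaccess file alongside the output should be enough; a minimal sketch:

AddType application/sparql-results+xml .srx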

SPARQL and SIMILE Timeline

Danny Ayers has been working on getting the SIMILE Timeline to eat SPARQL through the use of its JSON interface and some XSLT; he has notes on the ESW wiki.

While trying to get his work running here, I realized that the trip through XSLT to create JSON output really wasn’t necessary.

Instead, I’ve created a custom SPARQL event source parser that loads SPARQL results directly into the timeline. This way, the SPARQL results generated by running the query don’t need a round trip through either JSON or the custom Timeline XML format.

The SPARQL Timeline demo works with any RSS 1.0 feed (try it with the one from Planet RDF).

Update: Now also works with “raw” SPARQL results, try it with photos of laptops from The Gargonza Experiment (scroll to April of 2005). Expected variable bindings are date, title, description, and link, although the latter is optional and the first can be replaced by start.
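
For reference, a query over an RSS 1.0 feed that produces those bindings could look roughly like the following sketch (not necessarily the exact query the demo uses, and rss:description may of course be missing from some feeds):

PREFIX rss: <http://purl.org/rss/1.0/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?date ?title ?description ?link
WHERE {
  ?item rss:title ?title ;
        rss:description ?description ;
        dc:date ?date .
  OPTIONAL { ?item rss:link ?link }
}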

Update: Now really works with “raw” SPARQL results. Due to JavaScript’s security model, only files on this server worked — until now. Also, a buglet regarding empty literal elements has been fixed.

Planet Changes

Recently, a new solar system was discovered, one with a planet that just might contain liquid water.

This is not about that.

Rather, this is about Planet Planet, a flexible feed aggregator that Sam Ruby and Danny Ayers (among others) have been hacking on recently.

I have created a personal planet for myself, one of the introverted ones that gather what I produce rather than what I consume: Planet Morten (styling yet to be perfected).

While setting it up, and getting it running like I wanted to, I noticed that it updated the generated files on every run, even though no new entries had been included. On a web that knows about Last-Modified and ETag (as Planet Planet itself does), it seemed like a waste of bandwidth to preserve the incoming bytes but not the outgoing ones.

My limited Python skills to the rescue.

Two patches against the latest nightly — the one with a Last-Modified header of Mon, 22 May 2006 16:02:22 GMT (even though it contains files that were changed in the future when I GOT it):

planet-filecmp.diff
This patch makes Planet Planet write its output to a temporary file, which is then compared to the previous version; the previous version is only overwritten if the contents differ (see the Python sketch below). This precludes the use of <TMPL_VAR date> in templates, as that will surely make the files differ, but the patch has the added bonus of not trashing the previous version of the generated file in case something goes wrong during the write process.
planet-conditional-output.diff
This patch contains the above patch and additional logic to prevent output files from being generated if no channels were updated. Thus, the original files will be left untouched if no new entries were found, logic that also somewhat invalidates <TMPL_VAR date> in templates, since it can’t be trusted anymore.
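
The core idea of the first patch can be sketched in a few lines of Python; this is an illustration of the approach, not the actual patch, and write_if_changed is just a name invented for the example:

import filecmp
import os
import tempfile

def write_if_changed(path, content):
    # Write the new output to a temporary file next to the target file...
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as tmp:
        tmp.write(content)
    # ...and only move it into place if the contents actually differ.
    if os.path.exists(path) and filecmp.cmp(path, tmp_path, shallow=False):
        os.remove(tmp_path)        # identical: keep the old file and its mtime
    else:
        os.rename(tmp_path, path)  # changed or new: replace the old version

Leaving an unchanged file untouched also leaves its modification time alone, which is what makes the web server’s Last-Modified and ETag handling useful for the output again.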

The Planet Planet development list has been notified.

Update: Sam Ruby was kind enough to point out some shortcomings in my solution and prompt me for a test case. Thus:

SPARQL Conversions XSLT

To help develop and test my new Sparqlette service, I hacked a couple of XSLTs that might come in handy here and there…

SPARQL to RSS (latest version: 0.3)
As its name implies, this XSLT turns a SPARQL Query Results XML Format document (Variable Binding Results) into an RSS channel, making it possible to subscribe to the results of an (almost) standard SPARQL query without using CONSTRUCT. As can be expected, not all query results work, as the RSS specification mandates certain elements. Thus, the value of the channel’s rss:link property is taken from an XSLT parameter named _uri, the variable bindings to use for rss:link and rss:title in each item are determined via some crude heuristics, and only items that have a URI for the chosen rss:link binding are created.

Variable selection heuristics for rss:link / rss:title:

  1. If there’s a variable named rsslink or rsstitle respectively, the bindings for that variable are used for all items.
  2. Otherwise, the first variable that has a binding to a URI is used for rss:link, and the first variable that has a binding to a literal is used for rss:title (see the XPath sketch below).

For rss:link I wanted to add another option between the two, that would locate a variable that only has bindings to URIs, but I couldn’t get it working with a single XPath expression, so I gave up.
Example RSS (view Sparqlette input parameters).
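
The second heuristic for rss:link boils down to a single XPath expression over the result document; something like this, assuming the sr prefix is bound to the SPARQL results namespace (a sketch of the idea, not necessarily the stylesheet’s exact expression):

(//sr:binding[sr:uri])[1]/@name

That is, the name attribute of the first binding, in document order, whose value is a URI.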

SPARQL to SPARQL (latest version: 0.1)
This XSLT simply converts documents in the syntax of either of the two current SPARQL Query Results XML Format draft specifications, W3C Working Draft 21 December 2004 and $Revision: 1.29 $ of $Date: 2005/05/03 09:58:04 $, into the syntax of the latest version, currently the latter. I promise to do my best to stay up to date…

Note: This entry — as all entries in the Release category — will serve as a changelog (you can subscribe to its RSS feed if you want to make sure you don’t miss out on any updates).

Friendly Reviews

Things have been a bit hectic lately, but I have actually managed to make something that might be worth going public with. (Actually, I’ve already pointed it out on #swig, but that’s another matter.)

Over at FilmTrust they let you not only rate and review movies, but also connect with your friends to see what kind of movies they like. Of course, this being a service from the first site on the Semantic Web, it offers a nice FOAF document, like mine.

As you can see, it contains information about the reviews I’ve written and a list of my friends, with rdfs:seeAlso links provided for the latter, which makes it possible to create an RSS feed with some XSLT and use of the document() function, like this: FilmTrust reviews by mortenf and friends. The output is generated via W3C’s XSLT Service — note how at least three URIs are involved in this; that’s (minimal) REST for you. Oh, let’s add another one: via the Syndication Subscription Service.
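
The essential trick is small enough to sketch. Something along the following lines would visit the source FOAF document and each friend’s FOAF document via document(); the review extraction itself is left as a stub, since it depends on the vocabulary FilmTrust uses, and the friend structure is only assumed to be the usual foaf:knows/rdfs:seeAlso pattern:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <xsl:template match="/">
    <!-- Reviews from the source FOAF document itself... -->
    <xsl:apply-templates select="." mode="reviews"/>
    <!-- ...and from each friend's FOAF document, fetched with document(). -->
    <xsl:for-each select="//foaf:knows//rdfs:seeAlso/@rdf:resource">
      <xsl:apply-templates select="document(string(.))" mode="reviews"/>
    </xsl:for-each>
  </xsl:template>
  <!-- Stub: turning the fetched data into RSS items would go here. -->
  <xsl:template match="/" mode="reviews"/>
</xsl:stylesheet>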

A nice addition to the original source FOAF would be dates on the reviews — that’d make it possible to limit the size of the resulting RSS file. As it is, there’s no way to know which reviews are “new”. Also, there are some escaping issues on FilmTrust; I had to remove golbeck and sbp from my friend list to get a running example…

Note that the XSLT takes an optional parameter, user-only. That’s provided in case you’re only interested in your own reviews — I use this to drop them into my personal planet feed.

Try subscribing to reviews from your own social network!