Category Archives: XML

Slamming the Semantic Web

Whenever I read an article, blog post, or comment that disses the Semantic Web, RDF, or RDF/XML, I wonder why so many people find it necessary to belittle another technology or approach. It is almost always clear — at least to me — that there are no facts or knowledge to back up the claims, but there always seem to be enough readers who come away with the impression of facts instead of opinions.

I don’t know much about Topic Maps, Django or FreeBSD, but you don’t see me trying to argue that they are inferior, somewhat misguided or simply plain wrong. Even if that may be the case, I wouldn’t know, and I refuse to argue for or against something I don’t know anything about.

Perhaps those people out there feel out-of-the-loop for not understanding, and then pick arguments against another view instead of arguing for their own view? Perhaps they are trying to push an inferior technology and feel a need to make alternatives look inferior as well?

I don’t know.

But I do know that the next time I see an “argument” against the Semantic Web, I will check the new GetSemantic Argument Wiki, and I hope that you will too.

In the meantime, please take 8 minutes of your day to watch Tim Berners-Lee talk about his vision for the web. That’s real.

Recursitivity Galore

Sam Ruby: Of course, I would create the consolidated feed using Venus.


It’s really quite simple:

Through my use of Venus for e.g. Planet SF, I started using Bazaar, for which I created an Atom feed generator, the code for which is also stored in a Bazaar repository, which of course provides a feed and is being picked up by e.g. Sam, who in turn maintains another Bazaar repository that provides another feed, that gets picked up by my Venus installation, that then generates a global feed with all the changes — once.

Did I mention that I think Bazaar hits a sweet spot?

Bazaar Development

Inspired — again — by Sam Ruby, I have begun using Bazaar for source control. My first use case was creating a branch of Venus to implement a cache expunge mechanism. Also, I think Bazaar hits a sweet spot regarding ease of use for personal as well as distributed development, and once the prerequisites are in place, it’s easy to set up.

While doing that I learned some more about Python, and found out I wanted to be able to subscribe to the changes in a Bazaar branch.

Starting out with Sam’s tarify.cgi and Joe Gregorio’s sparklines as working examples I have managed to create a simple Python-script for generating an Atom feed: bzr-feed. You can of course subscribe to the changes!
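For illustration, here is a minimal sketch of the kind of Atom generation such a script performs, building a feed from a list of revision records with the standard library. The record fields and feed metadata are my assumptions for the example, not bzr-feed’s actual internals:

```python
# A sketch of generating an Atom feed from revision records.
# The revision dict keys (message, id, date, author) are illustrative.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM = "http://www.w3.org/2005/Atom"

def make_feed(title, feed_id, revisions):
    # Register the Atom namespace as the default so output stays readable
    ET.register_namespace("", ATOM)
    feed = ET.Element("{%s}feed" % ATOM)
    ET.SubElement(feed, "{%s}title" % ATOM).text = title
    ET.SubElement(feed, "{%s}id" % ATOM).text = feed_id
    ET.SubElement(feed, "{%s}updated" % ATOM).text = \
        datetime.now(timezone.utc).isoformat()
    for rev in revisions:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}title" % ATOM).text = rev["message"]
        ET.SubElement(entry, "{%s}id" % ATOM).text = rev["id"]
        ET.SubElement(entry, "{%s}updated" % ATOM).text = rev["date"]
        author = ET.SubElement(entry, "{%s}author" % ATOM)
        ET.SubElement(author, "{%s}name" % ATOM).text = rev["author"]
    return ET.tostring(feed, encoding="unicode")
```

A real generator would of course pull the revision data out of the branch itself rather than take it as a list.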

On the TODO is creating RDF output with DOAP, but I think I might need to figure out a way to store and report more information than is currently available in the Bazaar repository.

To use bzr-feed, you will need something like the following in the .htaccess file in the directory containing the branches:

Options +ExecCGI
<FilesMatch "\.cgi$">
AddHandler cgi-script .cgi
</FilesMatch>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-s
RewriteRule (.*)\.atom$ bzr-feed.cgi?dir=$1
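For a sense of what happens on the other end, here is a hypothetical sketch of how the receiving CGI script might read the dir parameter that the rewrite rule passes along. The function name and dict-based environ are illustrative, not taken from bzr-feed itself:

```python
# Illustrative sketch: extract the "dir" parameter from the CGI
# query string; in a real CGI script you would pass os.environ.
from urllib.parse import parse_qs

def branch_dir(environ):
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    # parse_qs returns a list per key; take the first value, or ""
    return qs.get("dir", [""])[0]
```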

As a bonus, while working on bzr-feed, I realized that Apache apparently supports If-Modified-Since out of the box for CGI scripts as long as the Last-Modified header is sent (though ETag support still needs to be implemented separately). Nice.
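Since Apache only does this once the script emits a Last-Modified header, here is a minimal sketch of producing one, assuming the feed’s freshness can be tied to a file’s modification time (the file-based timestamp source is my assumption for the example):

```python
# Sketch: emit an HTTP Last-Modified header from a file's mtime,
# which lets Apache answer If-Modified-Since requests with 304s.
import os
from email.utils import formatdate

def last_modified(path):
    # usegmt=True produces the RFC 1123 date format HTTP expects
    return "Last-Modified: %s" % formatdate(os.path.getmtime(path),
                                            usegmt=True)
```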

Why REST Matters

Try this:

  1. Go to the digg front page.
  2. Open a story from the front page in a new window or tab (via the More… link).
  3. Open another story in yet another new window or tab.
  4. Switch to the window or tab from step 2.
  5. Login (using the Ajax widget in the left column).

Notice how your window or tab opened in step 2 now contains the article that was opened in step 3?

Clearly, digg doesn’t use or understand web architecture. Somewhere on their server they try to store session information, under the assumption that one user equals one browser, one window, and one tab.

If instead of using session information (through cookies), they actually used the principles of the web and Representational State Transfer (REST), I would end up with the story I was actually reading, not some other story I happened to have opened at a later point.
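A toy sketch of the distinction (the URLs are illustrative, not digg’s): with session state the server has to guess which story a request means, while a RESTful URL names the story directly, so each window or tab is self-contained:

```python
# Illustrative URL schemes only, not digg's actual routes.

def session_url(session_id):
    # All tabs share one session, so the server can only remember
    # the *last* story opened -- the bug described above.
    return "/story?session=%s" % session_id

def rest_url(story_id):
    # The URL itself identifies the story; every tab carries its
    # own state, and logging in changes nothing about what it shows.
    return "/story/%s" % story_id
```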

Planet Timeline

Following up on the other day’s post on SPARQL and the Simile Timeline, here is another way of putting items on a timeline.

I’ve created a SPARQL template for Planet Planet, you can see it in action on my personal planet.

The template makes (optional) use of identifying icons for each feed, which are configured in the Planet config.ini (don’t forget to add the SPARQL template to the template_files configuration option as well):

name = Binary Relations
timeline_icon = dull-red-circle.png 

Once set up, the generated SPARQL results (example) can be run through the SPARQL Timeline Service (example).

Integrating the timeline into the default Planet HTML isn’t hard; I’ve created a tweaked version of examples/basic/index.html.tmpl from the latest nightly — see it in action.

Getting this running on a working Planet installation should be as simple as downloading the two templates to the right location, and updating the Planet configuration file (default dull-blue icons will be used if not specified per feed).

Update: Note that the planet-sparql.srx file needs to be returned by the web server with an XML MIME type, preferably application/sparql-results+xml. If you’re unable to configure your server to do that, try renaming the template and file to use .xml instead of .srx as the file extension; that should make it return with application/xml. Thanks to Edward Summers for pointing this out on the Planet developer list.
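With Apache, one way to map the .srx extension to that MIME type is an AddType line in the relevant .htaccess or server config (a sketch; adjust to your setup):

```apache
AddType application/sparql-results+xml .srx
```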