- Go to digg.com.
- Open a story from the front page in a new window or tab (via the More… link).
- Open another story in yet another new window or tab.
- Switch to the window or tab from step 2.
- Log in (using the Ajax widget in the left column).
Notice how the window or tab you opened in step 2 now contains the article that was opened in step 3?
Clearly, digg doesn’t use or understand web architecture. Somewhere on their server they try to store session information, under the assumption that one user equals one browser, one window, and one tab.
If, instead of relying on session information (stored through cookies), they used the principles of the web and Representational State Transfer (REST), I would end up with the story I was actually reading, not some other story I happened to open later.
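The difference can be sketched in a few lines (illustrative code, not digg’s actual implementation):

```javascript
// Illustrative sketch, not digg's actual code: contrast server-side
// session state with state carried in each window's URL.

// Session-style: one "current story" per cookie, so opening a second
// story silently overwrites the first, whatever tab it lives in.
const sessions = {};
function openStorySession(cookie, storyId) {
  sessions[cookie] = storyId; // clobbers the story from the other tab
  return sessions[cookie];
}

// RESTful style: the story id is part of the resource's URL, so every
// window or tab carries its own state. (The URL scheme is made up.)
function storyFromUrl(url) {
  return url.split('/').pop();
}
```

With the session version, logging in from the step-2 tab reads back whatever story was stored last; with the URL version there is nothing to read back, because the address bar already identifies the story.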
Following up on the other day’s post on SPARQL and the Simile Timeline, here is another way of putting items on a timeline.
I’ve created a SPARQL template for Planet Planet; you can see it in action on my personal planet.
The template makes (optional) use of identifying icons for each feed, which are configured in the Planet config.ini (don’t forget to add the SPARQL template to the template_files configuration option as well):

    name = Binary Relations
    timeline_icon = dull-red-circle.png
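Put together, a config.ini using this might look something like the following (the feed URL, section layout and template file name here are illustrative; check your own installation):

```ini
[Planet]
template_files = index.html.tmpl planet-sparql.srx.tmpl

[http://example.org/feed.rdf]
name = Binary Relations
timeline_icon = dull-red-circle.png
```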
Once set up, the generated SPARQL results (example) can be run through the SPARQL Timeline Service (example).
Integrating the timeline into the default Planet HTML isn’t hard; I’ve created a tweaked version of examples/basic/index.html.tmpl from the latest nightly — see it in action.
Getting this running on a working Planet installation should be as simple as downloading the two templates to the right location and updating the Planet configuration file (default dull-blue icons will be used if none are specified per feed).
Update: Note that the planet-sparql.srx file needs to be returned by the web server with an XML MIME type, preferably application/sparql-results+xml. If you’re unable to configure your server to do that, try renaming the template and file to use .xml instead of .srx as the file extension — that should make it return with application/xml. Thanks to Edward Summers for pointing this out on the Planet developer list.
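If the server is Apache, for example, a single mod_mime directive in the server configuration (or in a .htaccess file, where overrides are allowed) takes care of it:

```apache
AddType application/sparql-results+xml .srx
```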
Danny Ayers has been working on getting the SIMILE Timeline to eat SPARQL through the use of its JSON interface and some XSLT; he has notes on the ESW wiki.
While trying to get his work running here, I realized that the trip through XSLT to create JSON output really wasn’t necessary.
Instead, I’ve created a custom SPARQL event source parser, to load SPARQL results directly into the timeline. This way, the SPARQL results format generated by running the query doesn’t need a round trip into either JSON or the custom Timeline XML format.
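The idea behind the parser can be sketched like this (the helper and field names are my guesses at the Timeline event format, not the actual code):

```javascript
// Sketch: map SPARQL result bindings straight onto the event objects
// the Timeline expects, skipping the JSON/XSLT round trip entirely.
function bindingsToEvents(bindings) {
  return bindings.map(function (binding) {
    return {
      start: new Date(binding.date), // the Timeline plots events by date
      title: binding.title,
      link: binding.link || null     // the link binding is optional
    };
  });
}
```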
The SPARQL Timeline demo works with any RSS 1.0 feed (try it with the one from Planet RDF).
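A query producing timeline-friendly bindings from an RSS 1.0 feed could look something like this (my sketch of the idea; the actual query behind the demo may differ):

```sparql
PREFIX rss: <http://purl.org/rss/1.0/>
PREFIX dc:  <http://purl.org/dc/elements/1.1/>

SELECT ?date ?title ?link
WHERE {
  ?item a rss:item ;
        rss:title ?title ;
        rss:link ?link ;
        dc:date ?date .
}
ORDER BY ?date
```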
Update: Now also works with “raw” SPARQL results; try it with photos of laptops from The Gargonza Experiment (scroll to April of 2005). Expected variable bindings are
link, although the latter is optional and the first can be replaced by
I generate and store quite a lot of metadata with my photos, as can be gathered from my faceted photo index. Until now, I have simply displayed most of it beneath each photo on its page, but I wanted to make the interesting parts stand out more, while still providing access to the rest.
Simon Willison created a small script for toggling sections of a page, easytoggle, which was subsequently improved to also handle Safari. That seemed like a great way to approach the problem: making it possible to structure the information, while still leaving it accessible to all.
The part to be hidden is given the id #toggle, along with a CSS instruction to make it not display: display: none. The rest of the script works just as the original, where links with class="toggle" are used to identify the parts that should be togglable.
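The toggling approach can be sketched as follows (my reconstruction of the technique, not Simon Willison’s actual script): clicking a link with class="toggle" flips the display of the element named by the link’s fragment.

```javascript
// Show or hide a single element by flipping its display style.
function toggle(element) {
  element.style.display =
    element.style.display === 'none' ? '' : 'none';
}

// Wire up every link with class="toggle" in the document; the href
// fragment (e.g. "#metadata") names the element to show or hide.
function initToggles(doc) {
  var links = doc.getElementsByTagName('a');
  for (var i = 0; i < links.length; i++) {
    if (links[i].className !== 'toggle') continue;
    links[i].onclick = (function (link) {
      return function () {
        toggle(doc.getElementById(link.hash.substring(1)));
        return false; // don't follow the link
      };
    })(links[i]);
  }
}
```

Because the hidden section is still ordinary markup, browsers without JavaScript (and search engines) see all of the content; the script only adds the convenience of folding it away.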
At some point this fall, I promised myself I’d refactor my web pages, to give them all a similar look, while making it easy to update that look in the future, and drive most of the content with RDF — after all, web pages are resources.
I’m not quite done with all the corners, most notably my homepage, but at least now the weblog and the photo albums share a common stylesheet, with everything in place for tweaking the rest, including a Planet Morten feed!
For the coming year, I intend to continue my switch of focus from producing RDF to consuming it. I have started out by generating a faceted interface for my photos (which could use an additional interface like libby’s calendar view), and with Leigh Dodds releasing Slug: A Simple Semantic Web Crawler, I’m reminded to get back to work with my scutter, Scutter Strategies and the Scutter Vocabulary. Also, Bob DuCharme has created rdfdata.org, which means that it’s now easier than ever to find data to play around with. Integral to most of this is me getting around to writing/porting the RDQL/SPARQL rewriting code to the Redland/MySQL storage backend.
To see what it’s like, I also intend to start a “real” (Danish) weblog, one that is updated on an (almost) daily basis; I think it’ll be good for me to get into the habit of writing more often than I do now, when most of the stuff I do sits quietly behind the scenes, waiting for that elusive moment when there’s time to refactor and document it properly. In short: moving to a state of mind where (seemingly!) perfect is an option, not a requirement — a state I find hard to reach, but also one I have learned a lot about from others in the Open Source community.
So much to do, so little time, but I think it’s important to showcase how RDF can actually be used, not just produced, all the while making interesting stuff simpler.