All posts by Morten Høybye Frederiksen

About Morten Høybye Frederiksen

Morten Høybye Frederiksen is a freelance developer working with the Semantic Web, mostly with FOAF (Friend-of-a-Friend) and other RDF vocabularies, while also trying to uncover more general issues along the way.

SPARQL Endpoint


As I hinted at in my post about Named Graph Exchange, later picked up by Danny, it is possible to create a SPARQL endpoint using ARC with only six lines of PHP. Or rather, it was, as Benjamin Nowack has now announced a suggested update that makes it possible in only three lines:

include_once('path/to/arc/ARC2.php');
$ep = ARC2::getStoreEndpoint(array(...));
$ep->go();
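
The array(...) above is a placeholder for the store and endpoint configuration. Just as a sketch, assuming a MySQL-backed store (all values below are placeholders; the keys are the ones ARC documents for its store endpoint), the full setup might look something like this:

include_once('path/to/arc/ARC2.php');

$config = array(
  /* MySQL backend for the store - placeholder credentials */
  'db_host' => 'localhost',
  'db_name' => 'my_database',
  'db_user' => 'my_user',
  'db_pwd'  => 'my_password',
  'store_name' => 'my_store',
  /* which SPARQL operations the endpoint should expose */
  'endpoint_features' => array('select', 'construct', 'ask', 'describe'),
  'endpoint_timeout' => 60, /* seconds */
);

$ep = ARC2::getStoreEndpoint($config);
if (!$ep->isSetUp()) {
  $ep->setUp(); /* create the store tables on first run */
}
$ep->go(); /* handle the incoming SPARQL protocol request */

Limiting endpoint_features to the read-only operations keeps the endpoint reasonably safe to expose publicly; write operations can be enabled separately.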

The same announcement, the ARC Release Notes and Change Log, and Benjamin's post about his cool data wiki also mention the new ARC plugin system, which I helped get off the ground.

My plugin submission is essentially a SPARQL client, making it possible to use ARC for accessing remote SPARQL endpoints: ARC2::RemoteEndpointPlugin

The plugin works, in the sense that it supports the read-only query types SELECT, CONSTRUCT, DESCRIBE, and ASK, but not yet the ones that write, since the ARC class that handles HTTP only speaks GET at the moment.

The plugin homepage doubles as its documentation and contains an example of how to use the plugin: basically just like a regular ARC2::Store, only with a simpler configuration, since all that is needed is a SPARQL endpoint URL.
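
For comparison, here is roughly what querying a remote endpoint looks like when the plugin follows the regular store API. This is only a sketch: the endpoint URL is a placeholder, and the getRemoteStore/remote_store_endpoint names are those of ARC's bundled remote store rather than necessarily the plugin's, so check the plugin page for the authoritative example.

include_once('path/to/arc/ARC2.php');

/* Point ARC at a remote SPARQL endpoint instead of a local store */
$config = array(
  'remote_store_endpoint' => 'http://example.org/sparql', /* placeholder URL */
);
$store = ARC2::getRemoteStore($config);

/* Read-only queries work just like against a local ARC2::Store */
$rows = $store->query('
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10
', 'rows');

foreach ($rows as $row) {
  echo $row['name'], "\n";
}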

The homepage is also a Bazaar repository, and as such provides an Atom feed with updates and, of course, a DOAP file, which I hope will one day serve as the authoritative source for the information shown on the ARC plugins page, with automatic updates.

Fittingly, this also surfaces on the very day that SPARQL becomes a W3C Recommendation. It has been quite a journey, and fun to be a (small) part of, starting way back when Squish and RDQL were state of the art, and later with The Gargonza Experiment. Perhaps it is now time to retire my partial SPARQL Rewriter and resurrect Sparqlette.

DOAP from the Bazaar

Last year, when I first released bzr-feed for generating an Atom feed for a Bazaar repository, I added an item to the TODO almost while writing the first lines:

* Add RDF/XML output with DOAP support

Just now, I removed that item, not because I updated bzr-feed, but because I have created a new Python script for generating DOAP using the same technique: bzr-doap.

Usage is quite simple: add something along the following lines to your .htaccess file, and you're good to go (presuming the usual cgi-bin stuff is in order):

RewriteCond %{REQUEST_FILENAME} !-s
RewriteRule (.*)\.rdf$ bzr-doap.cgi?dir=$1

Output is generated based on the information present in the bzr branch, but it can be augmented/overridden through the use of a .doaprc in the current and/or parent directory, like this:

[Project]
short_desc: DOAP generator for a Bazaar repository branch.
description: A CGI script for automatically generating DOAP for a Bazaar repository branch.
programming-language: Python
license: http://usefulinc.com/doap/licenses/python
[Maintainer]
foaf_homepage: http://www.wasab.dk/morten/

There is of course a DOAP file for bzr-doap itself, and as usual an Atom feed for following its development in its Bazaar repository.

Named Graph Exchange

Following up on Exchange of Named RDF Graphs and the rapidly developing ARC2 RDF system, I have written a PHP/ARC2 version of my script for parsing and serialising a graph archive, and repackaged the original version into a single script for Redland.

I will be using this for testing ARC2 (performance) with my photo database, to see if I can manage a simpler interface without sacrificing the excellent performance I get from Redland. So far, it seems parsing might be a bottleneck, but that isn't really important if the query handling is good (so far it looks great: I can implement a SPARQL endpoint in six lines of PHP), since I can do batch processing offline.
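
For the curious, the ARC2 side of loading one graph from such an archive boils down to a parse followed by an insert under the graph's URI. A rough sketch, with placeholder file name, graph URI and store configuration (the actual script is in the repository linked below):

include_once('path/to/arc/ARC2.php');

/* Placeholder configuration for a MySQL-backed ARC2 store */
$store = ARC2::getStore(array(
  'db_host' => 'localhost',
  'db_name' => 'my_database',
  'db_user' => 'my_user',
  'db_pwd'  => 'my_password',
  'store_name' => 'photos',
));
if (!$store->isSetUp()) $store->setUp();

/* Parse one serialised graph and insert its triples under a named graph URI */
$graph_uri = 'http://example.org/graphs/photos-2007'; /* placeholder */
$parser = ARC2::getRDFParser();
$parser->parse('archive/photos-2007.rdf');            /* placeholder file */
$store->insert($parser->getTriples(), $graph_uri);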

You can find the scripts and some example archives in the named-graph-exchange bzr repository, and download the whole package in .zip or .tgz format.

Next in the pipeline is an implementation that talks to a SPARQL endpoint, only downstream for now, but possibly using SPARQL+ or SPARUL for remote updates in the future.

The scripts are licensed under the Eiffel Forum License, version 2, per sbp’s considerations.

Authorization by Codepiction

Over the last four years and counting, I have been providing access to a triplestore with descriptions of my photos through a facetted interface. Through those years I have received several requests to exclude people from the searchable interface, as they showed up quite prominently on Google.

Granting those requests turned out to make the interface much less useful for me, and possibly for others as well, so not too long ago I ended up simply excluding photos depicting people, using SPARQL, of course:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?Photo 
WHERE { ?Photo foaf:depicts ?Person .
        ?Person a foaf:Person }
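
The query above identifies the photos to hide, and since the SPARQL of the day had no dedicated negation construct (only the OPTIONAL/!bound idiom), one simple approach is to do the subtraction in application code. A sketch of that idea, not the actual backend (which, as the NB below admits, still mixes in RDQL); $store is an ARC2 store as in the earlier posts and $all_photos is a placeholder list:

/* Build an exclusion set of photos that depict people */
$q = '
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?Photo
  WHERE { ?Photo foaf:depicts ?Person .
          ?Person a foaf:Person }
';
$excluded = array();
foreach ($store->query($q, 'rows') as $row) {
  $excluded[$row['Photo']] = true;
}

/* When listing photos for anonymous visitors, skip the excluded ones */
$visible = array();
foreach ($all_photos as $photo_uri) {
  if (!isset($excluded[$photo_uri])) {
    $visible[] = $photo_uri;
  }
}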

That, of course, didn’t really improve things from my personal perspective, so watching others experiment with OpenID and social networking, I decided to take the same route — with a twist.

I didn’t really want to maintain or discover a social network for this use, but then it dawned on me: My photos actually represent a social network, as codepiction describes relations between people even better than explicitly stated ones.

With that in mind, I got the most recent version of the PHP OpenID Library up and running, and now use SPARQL to extend the non-person subset created above with photos of codepictees for users logged in with their OpenID, assuming they match the descriptions in the store:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?Photo 
WHERE { ?Photo foaf:depicts ?Friend .
        ?OtherPhoto foaf:depicts ?Friend .
        ?OtherPhoto foaf:depicts ?User .
        ?User ?ifp ?OpenID .
        ?ifp a owl:InverseFunctionalProperty }

Note that since I don't restrict ?User to be distinct from ?Friend (and ?Photo from ?OtherPhoto), the query also returns photos depicting the user. The query also doesn't explicitly look for foaf:openid, but rather any inverse functional property, and since I've used foaf:mbox/foaf:mbox_sha1sum quite a lot in the descriptions of the photos' depictees, I have added the option to verify your e-mail address. It is not enough to trust what's returned from an OpenID provider, as there is no validation, and I wouldn't want just anyone to be able to create an OpenID identity with my e-mail address.
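
To illustrate the e-mail part: the FOAF convention is that foaf:mbox_sha1sum holds the SHA1 hash of the mailto: URI, so once an address has been confirmed it can be matched against the store. The sketch below is simplified, checking the two specific properties foaf:openid and foaf:mbox_sha1sum rather than any inverse functional property as the query above does; the variable values and the confirmation flow are made up, and $store is an ARC2 store as in the earlier sketches.

/* FOAF convention: foaf:mbox_sha1sum is the SHA1 hash of the mailto: URI */
function mbox_sha1sum($email) {
  return sha1('mailto:' . $email);
}

$openid_url     = 'http://example.org/user'; /* from the OpenID library, placeholder */
$verified_email = 'user@example.org';        /* from the confirmation round-trip, placeholder */

/* In real code the values would need escaping before being spliced into the query */
$q = '
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?User
  WHERE { { ?User foaf:openid <' . $openid_url . '> }
          UNION
          { ?User foaf:mbox_sha1sum "' . mbox_sha1sum($verified_email) . '" } }
';
$recognized = count($store->query($q, 'rows')) > 0;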

All in all, you can now use OpenID to log into my facetted interface and — if you are recognized — get access to additional photos. Try it!

Quite slick use of codepiction, if I may say so myself…

NB: I cheated. Parts of the backend still use an old RDQL engine, so some of this was actually not done in SPARQL. Also, I supplemented the above codepiction query with queries regarding albums with codepiction and photographer information (mostly useful for myself, of course), meaning you will actually see more photos than you might expect.