Oslo public library

Title: Oslo public library
Team: Arve Søreide, Asgeir Rekkavik, Benjamin Rokseth, Bjørn-Erik Bjørgan, Cecilie Bergh, Igor Koudrik, Kjetil Jørgensen-Dahl, Kristoffer Moe, Marianne Rolfsen, Nicho Paulik, Petter Goksøyr-Åsen, Rurik Greenall and the rest of Deichmanske bibliotek

Short description:
The project has created a new library services ecosystem. Based at Oslo public library, we have created ls.ext: it's modular, built from containers that interface through REST APIs, and it does everything a library system should do, cleanly, quickly and accessibly, like other systems you enjoy using on the web. There are a couple of differences though: it's open source, under constant development, and it uses linked data as its core metadata format. Take a look at Oslo public library's production system.

Download the source: github.com/digibib/ls.ext


Long description:

Since August 2014, we’ve been working on producing a new library system for Oslo public library. It was important to create a system that could be maintained and extended by the library — to create a system that would work for them in the future.

Many alternatives were assessed, but it seemed that there were no viable options on the commercial market that offered a flexible platform for development of the kind of services users expected in 2014. In fact, it seemed that the only way to exercise freedom of choice was to take full control of things.

The choice of Koha as a core component in the library system was made before the project started — it was deemed wasteful to develop systems for solved problems like circulation, logistics, acquisitions and so forth. Indeed, Koha provides an excellent basis for these functions, but it is still a system that provides the traditional MARC-oriented functionality every other library system provides. We wanted something else.

One of the problems of traditional systems is that they are hampered by the data format that has become standard since the 1970s. This is understandable given that development has been oriented towards standardisation rather than usability for end users. We wanted to shift this focus back to a system that users would understand intuitively.

We wanted to build a system that was geared towards patrons, rather than libraries.

The key to doing this was to reassess what users really need and what they expect of a modern, web-based system. It's important to note that every function in the system has been carefully described and planned from a need-to-have perspective that focuses entirely on user-friendliness for the patrons.

Perhaps unsurprisingly, after several months of development, we found that MARC did not provide the support needed for the kind of functionality we wanted to incorporate — specifically work-level entities broke down the original hypothesis that we could simply extend MARC.

The road then led to RDF, a technology that several members of the team had used in previous applications. We could have chosen another format, but we were also interested in the possibilities of using shared linked-data resources. To this end, RDF and linked data seemed like a must.
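To make the work/publication distinction that defeated MARC concrete, here is a minimal sketch of the RDF data model as plain triples. The URIs and predicate names are invented for illustration; they are not the actual vocabulary used by ls.ext.

```python
# Represent RDF statements as (subject, predicate, object) triples.
# One abstract Work can have many concrete Publications pointing at it,
# which is the work-level structure that flat MARC records cannot express.
triples = {
    ("work/w1", "type", "Work"),
    ("work/w1", "title", "Sult"),
    ("pub/p1", "type", "Publication"),
    ("pub/p1", "publicationOf", "work/w1"),
    ("pub/p1", "year", "1890"),
    ("pub/p2", "type", "Publication"),
    ("pub/p2", "publicationOf", "work/w1"),
    ("pub/p2", "year", "2007"),
}

def publications_of(work, graph):
    """All publications linked to a given work."""
    return sorted(s for s, p, o in graph
                  if p == "publicationOf" and o == work)

print(publications_of("work/w1", triples))  # ['pub/p1', 'pub/p2']
```

The point of the sketch is that the graph, not the record, is the unit of description: adding a third edition is one more node and one more link, with no duplication of the work-level data.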

Developing an entire system that can round-trip RDF and present it in a way that is not obviously linked data is something that we felt hadn’t been achieved in our domain and we have had to learn a lot of lessons from various other domains to help us along the way. We have developed new ways of doing things that made our project possible — JSON-LD-PATCH — and utilised various technologies that are widely touted, but often not much used in production.
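To give a flavour of what patch-based round-tripping involves, here is an illustrative triple-level patch applier. The actual JSON-LD-PATCH format is defined in the ls.ext repository; the operation names and JSON shape below are assumptions for illustration only.

```python
import json

def apply_patch(graph, patch_json):
    """Apply a JSON list of {"op": "add"|"del", "s": ..., "p": ..., "o": ...}
    operations to a set of (s, p, o) triples. Hypothetical patch shape."""
    for op in json.loads(patch_json):
        triple = (op["s"], op["p"], op["o"])
        if op["op"] == "add":
            graph.add(triple)
        elif op["op"] == "del":
            graph.discard(triple)
    return graph

g = {("work/w1", "title", "Hunger")}
patch = json.dumps([
    {"op": "del", "s": "work/w1", "p": "title", "o": "Hunger"},
    {"op": "add", "s": "work/w1", "p": "title", "o": "Sult"},
])
print(apply_patch(g, patch))  # {('work/w1', 'title', 'Sult')}
```

Expressing edits as small add/delete operations rather than whole-record overwrites is what makes it practical to keep an RDF store and a user-facing editor in sync.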

We have experienced the highs and lows of developing something no-one has seen before and have succeeded in putting the system into production, providing 250,000 users with access to 450,000 manifestations distributed across 350,000 works. The system serves 25,000 unique sessions per week with an average of 10.5 pageviews per session.

The system is entirely modular, built as a Docker-based ecosystem. It can be installed and running locally or on a cloud platform in a matter of minutes.

The modular nature of the system is enabled by being based entirely on REST APIs, which means that new containers for any functionality a new installer feels is lacking can simply be dropped into place.
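The contract between containers is just HTTP and JSON, which is what makes drop-in replacement possible. As a sketch, here is what a client call to a circulation endpoint could look like; the host name, path and payload fields are hypothetical, not ls.ext's actual API.

```python
import json
import urllib.request

def build_checkout_request(base_url, patron_id, item_id):
    """Build (but do not send) a JSON POST to a hypothetical
    circulation endpoint. Any replacement container only has to
    honour the same HTTP contract to slot into the ecosystem."""
    body = json.dumps({"patron": patron_id, "item": item_id}).encode()
    return urllib.request.Request(
        f"{base_url}/circulation/checkouts",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_checkout_request("http://circulation:8080", "p42", "i1001")
print(req.full_url)      # http://circulation:8080/circulation/checkouts
print(req.get_method())  # POST
```

Because nothing in the caller depends on how the service is implemented internally, a new container written in any language can replace it as long as it answers the same requests.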

For developers, it's easy to contribute to the project via GitHub.

It’s also worth noting that any work that is done on Koha, which provides the closed-world data for the system, is fed back into that project — Oslo public library has a strong commitment to participating on the Koha community’s terms and uses the community distribution in ls.ext.

As of the current time, our linked data support is limited to Lexvo languages for a simple reason: wider support of external data sets hasn't been relevant up to now. Providing the public with a good basic data layer and a competent search (something we're constantly working on) has taken time. It's also worth noting that we have struggled to find production-ready sources of linked data for our domain. Nevertheless, we're looking for partners who can share high-availability (i.e. within our tolerances) metadata as linked data and collaborate on read-write web cataloguing.

We have provided an expert system for importing data from external resources in various formats, workflows for applying these data to our data model, and tools for working with the data model as needs change over time: aggregating and splitting works/series/publications. The cataloguing tools speak to our commitment to providing an efficient and simple interface for creating linked data.

You can take a look at the system here: http://sok.deichman.no/work/w64f6147a28f0c12869c143ebb3c49f91 and see the data at http://sok.deichman.no/services/work/w64f6147a28f0c12869c143ebb3c49f91. The latter link isn’t an official data endpoint, but it returns the data we use in the system and is part of the API used by the patron client.

At a point in the very near future, we hope to be able to provide this data as dereferenceable linked data with content negotiation, in addition to providing RDFa markup for schema.org in the patron client.
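A content-negotiated request for that data could look like the sketch below. This is aspirational: whether the server honours an Accept header for RDF serialisations depends on the planned dereferenceable-linked-data work, so the request is built but not sent, and the media type is an assumption.

```python
import urllib.request

def build_data_request(resource_uri, rdf_format="text/turtle"):
    """Build (but do not send) a GET asking for a specific RDF
    serialisation via the HTTP Accept header."""
    return urllib.request.Request(
        resource_uri,
        headers={"Accept": rdf_format},
    )

req = build_data_request(
    "http://sok.deichman.no/work/w64f6147a28f0c12869c143ebb3c49f91")
print(req.get_header("Accept"))  # text/turtle
```

With content negotiation in place, the same work URI could serve HTML to a browser and Turtle or JSON-LD to a machine client, which is what makes the data dereferenceable in the linked-data sense.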

Country: Norway
