Wednesday, April 12, 2017 - 11:19

We’re in full-on Islandora testing mode in preparation for the upcoming 7.x-1.9 release and the release of our Web Annotation Utility Module and our Oral History Solution Pack prior to IslandoraCon.

As part of our testing, we found we needed the ability to have multiple users simultaneously accessing the same VM. Our systems administrator showed us a neat trick for those of you using Islandora VMs for testing and development. So, here it is:

Irfan's cool trick for letting others into your VM: With your virtual machine off, go to Settings/Network. There are slots for 4 adapters. By default the drop-down is set to "NAT." Change this to "Bridged Adapter" and start the machine. Log in to the machine using the interface provided by the VM and find your IP address by running ifconfig -a | grep inet. Provide this address to others. The IP + :8000 is Drupal for the VM provided by the Islandora release team. Note the following:

- You now log in (ssh) at vagrant@IPaddress
- Your IP might change, and you may have to find the address again
- Your network may change, causing you to have to run sudo /etc/init.d/networking restart to update the machine's IP address
- This may have some unintended effects when performing Drupal functions

Overall, YMMV, but this has been very useful to us when testing. 

Wednesday, March 29, 2017 - 14:51

Our own Kim Pham has returned from Code4Lib 2017 in Los Angeles, where she presented a poster describing the unit's work on the Web Annotation Framework (a solution pack with a release coming soon!). You can view the poster in TSpace.


Thursday, March 16, 2017 - 12:38

We're happy to announce that a new publication by the unit, titled Supporting Oral Histories in Islandora, is available in the January issue of the Code4Lib Journal.

"Since 2014, the University of Toronto Scarborough Library’s Digital Scholarship Unit (DSU) has been working on an Islandora-based solution for creating and stewarding oral histories (the Oral Histories solution pack). Although regular updates regarding the status of this work have been presented at Open Repositories conferences, this is the first article to describe the goals and features associated with this codebase, as well as the roadmap for development. An Islandora-based approach is appropriate for addressing the challenges of Oral History, an interdisciplinary methodology with complex notions of authorship and audience that both brings a corresponding complexity of use cases and roots Oral Histories projects in the ever-emergent technical and preservation challenges associated with multimedia and born digital assets. By leveraging Islandora, those embarking on Oral Histories projects benefit from existing community-supported code. By writing and maintaining the Oral Histories solution pack, the library seeks to build on common ground for those supporting Oral Histories projects and encourage a sustainable solution and feature set."

Check us out in issue 35: online and open source! If you're interested in the module, check out the code on GitHub.

Thursday, April 14, 2016 - 08:59

[The following post is shared on behalf of Haoran Wang, one of the practicum students hosted by the Digital Scholarship Unit this term.]

Make Metadata Discoverable via the OAI-PMH in WorldCat

For the past few months, I have been working as a practicum student at the University of Toronto Scarborough Library, figuring out how to use the OAI module to help the Digital Scholarship Unit (DSU) make its metadata discoverable via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) in WorldCat. From the course INF2186 Metadata Schemas & Apps at UofT, I already had some basic knowledge of how to share metadata with a local, state, or regional digital metadata repository and expose current metadata for OAI harvesting. This tutorial walks through how I did this, step by step.

Let’s start with some basic terms.

Step 0 - Terms to Get Started

The Open Archives Initiative (OAI) is an initiative to develop and promote interoperability standards that aim to facilitate the efficient dissemination of content.

OAI Protocol for Metadata Harvesting (OAI-PMH) is a lightweight harvesting protocol for sharing metadata between services. In the OAI context, harvesting refers specifically to the gathering together of metadata from a number of distributed repositories into a combined data store.

There are two classes of participants in the OAI-PMH framework:

- Data Providers administer systems that support the OAI-PMH as a means of exposing metadata.
- Service Providers use metadata harvested via the OAI-PMH as a basis for building value-added services.

Data Providers (open archives, repositories) provide free access to metadata, and may, but do not necessarily, offer free access to full texts or other resources. OAI-PMH provides an easy-to-implement, low-barrier solution for Data Providers.

Service Providers use the OAI interfaces of the Data Providers to harvest and store metadata. Note that this means that there are no live search requests to the Data Providers; rather, services are based on the harvested data via OAI-PMH. Service Providers may select certain subsets from Data Providers (e.g., by set hierarchy or date stamp). Service Providers offer (value-added) services on the basis of the metadata harvested, and they may enrich the harvested metadata in order to do so.

Basic functioning of OAI-PMH

The OAI-PMH protocol is based on HTTP. Responses are encoded in XML syntax. OAI-PMH supports any metadata format encoded in XML. Dublin Core is the minimal format specified for basic interoperability.
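Because responses are plain XML, they are easy to work with in any language. As a rough sketch of what a harvester does with a response, the following parses a sample GetRecord reply and pulls out the Dublin Core fields. The XML below is an illustrative, made-up response (the identifier, title, and namespaces follow the OAI-PMH and oai_dc conventions, but the record itself is hypothetical):

```python
# Parse a (hypothetical) OAI-PMH GetRecord response and extract the
# record identifier and its Dublin Core titles.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <GetRecord>
    <record>
      <header>
        <identifier>oai:example.org:utsc:1</identifier>
        <datestamp>2016-04-14</datestamp>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Stories of UTSC</dc:title>
          <dc:creator>Digital Scholarship Unit</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>"""

# Namespace map: OAI-PMH elements live in the OAI namespace, Dublin
# Core elements in the dc namespace.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(SAMPLE_RESPONSE)
identifier = root.find(".//oai:header/oai:identifier", NS).text
titles = [el.text for el in root.findall(".//dc:title", NS)]
print(identifier, titles)
```

A real harvester would fetch this XML over HTTP from the repository's base URL, but the parsing step looks the same.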


The diagram below is the overview and structure model of OAI-PMH.

Step 1 - Set Up Your OAI Module

The DSU currently uses Islandora, an open-source, OAIS-based digital preservation repository and asset management system built on Drupal. First, from the DSU home page, select Islandora → Islandora Utility Modules → Islandora OAI from the navigation bar.

Then, the OAI module allows you to configure the URL path to your repository. If you want to see more records per response at your base URL, enter the number you want under "Maximum Response Size". The default is 20 records per response.

Click the Configure button below to find more configuration settings for the OAI request handler.

In the OAI request handler settings, you can select dc.identifier.thumbnail. If selected, and the object has a thumbnail, a URL to the object's thumbnail will be added as a dc.identifier.thumbnail element.

The DSU currently uses MODS for all generic content going forward; in the past, the DSU used Dublin Core, but Islandora natively prefers MODS, and MODS is more flexible for complex objects. For all fields that you want to display in WorldCat, you have to configure the metadata fields so that they are mapped to Dublin Core. Thus, I chose to transform MODS to Dublin Core.

Services like WorldCat expect links back to the object, such as a Handle URL. If your metadata doesn't include one, self-transforming XSLTs can be used to add specific elements tailored to individual needs.
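The effect of such a transform can be sketched in Python as a stand-in for the XSLT itself: take a Dublin Core record that lacks a link back to the object and append a dc:identifier carrying a handle-style URL. The record and the handle URL below are hypothetical:

```python
# Append a handle-style dc:identifier to a Dublin Core record -- a
# Python illustration of what a self-transforming XSLT would do.
# The record content and the handle URL are made up for this example.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.fromstring(
    '<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/">'
    "<dc:title>Sample record</dc:title>"
    "</dc:dc>"
)

# Add the link back to the object that services like WorldCat expect.
identifier = ET.SubElement(record, f"{{{DC_NS}}}identifier")
identifier.text = "https://hdl.handle.net/0000/example"

print(ET.tostring(record, encoding="unicode"))
```

In production this logic would live in the XSLT applied by the OAI module, so the element is added on the fly as records are served.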

Make sure you save all the settings in the end by clicking the Save Configuration button.

Step 2 - Test Your Base URL

OAI-PMH supports six request types (known as "verbs"). You can use them by simply adding these verbs after the base URL.

URLs for GET requests have keyword arguments appended to the base URL, separated from it by a question mark (?). For example, the URL of a GetRecord request to the DSU base URL could be:


Here is an explanation of all six request types:

- GetRecord: retrieves an individual metadata record from a repository.
- Identify: retrieves information about a repository.
- ListIdentifiers: retrieves only record headers rather than full records.
- ListMetadataFormats: retrieves the metadata formats available from a repository.
- ListRecords: harvests records from a repository.
- ListSets: retrieves the set structure of a repository.

After you have exposed content types and some fields, your repository is available at /oai2.

Some example requests are as follows:
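As a sketch of what these look like, the snippet below builds one request URL per verb. The base URL is a placeholder (https://example.org/oai2), as is the record identifier; substitute your own repository's /oai2 endpoint:

```python
# Build example OAI-PMH request URLs for the six verbs. BASE_URL and
# the record identifier are hypothetical placeholders.
from urllib.parse import urlencode

BASE_URL = "https://example.org/oai2"  # substitute your repository


def oai_request(verb, **params):
    """Return a GET URL for an OAI-PMH request with the given verb."""
    query = urlencode({"verb": verb, **params})
    return f"{BASE_URL}?{query}"


print(oai_request("Identify"))
print(oai_request("ListMetadataFormats"))
print(oai_request("ListSets"))
print(oai_request("ListIdentifiers", metadataPrefix="oai_dc"))
# "from" is a Python keyword, so pass it via dict unpacking.
print(oai_request("ListRecords", metadataPrefix="oai_dc",
                  **{"from": "2016-01-01"}))
print(oai_request("GetRecord", metadataPrefix="oai_dc",
                  identifier="oai:example.org:utsc:1"))
```

For instance, oai_request("Identify") yields https://example.org/oai2?verb=Identify, which you can paste straight into a browser.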

Step 3 - Build a Gateway from WorldCat to Add Records from OAI-PMH

In order to use the Gateway, be sure that the following conditions are met:  

- Your OAI-PMH compliant repository is running.
- You have one or more existing collections with metadata fields mapped to Dublin Core and/or Qualified Dublin Core (dcterms).
- You have an OCLC-supplied Key for the Gateway.

If your institution does not already have a Gateway account, go to the Gateway registration page to register your account.

In a few days, OCLC will send a welcome email that includes user credentials you can use to log in to the Gateway and add additional users. Once you have registered and received your Gateway user credentials, you can log in to the Gateway and begin synchronizing metadata with WorldCat from your OAI repositories.

After you have registered your account with the Gateway, you need to associate your repositories with the appropriate Gateway key.

1. Go to the Gateway login page and log in.
2. If you're not already in the Manage Account tab, click to select it now.
3. Click Keys and Repositories.
4. Click to select the key for which you want to add repositories.
5. Click Add Repository. You'll see a display something like this:


6. When the Add Repository window appears, enter the OAI-PMH base URL for the selected repository. Then click Test.

7. When the repository has been tested successfully, Gateway displays the message “All OAI tests passed.” You can now click Add to associate the repository with your Key.

8. After you have successfully added the repository, you’ll be able to edit and manage settings for the repository you just added.



In the Repository area of the page, the following information is displayed:  

- Institution symbol (OCLC symbol)
- Gateway license key
- URL, name – the OAI-PMH base URL and name of this repository
- Type – to change the repository type, use the pull-down menu to select one of the following: CONTENTdm (pre-version 5), DSpace, Fedora, EPrints, Digital Commons, or Other. After changing the type, you must click Change to save your choice.

You can use the "Show Sets in Collection List?" pull-down menu to configure the way the Gateway harvests content from a repository.

Your OAI repository allows you to manage sets (collections of records) separately in the Gateway. Using sets is the default approach. In the Gateway, a set name is the same as a collection name.

By default, Show Sets… is Yes. This default setting allows you to set up different metadata maps for each collection (or set) in a repository.

If you want to create a single metadata map for all records in your OAI repository, regardless of what collection the records are in, you can select No from the pull-down menu. Selecting No will create a special collection named Entire Repository. When you create a metadata map for that special collection, your mappings apply to the entire repository.  

Note: If you select No, you cannot subsequently undo that setting in the Gateway. For this reason, we strongly recommend that you do not change the default setting. Moreover, with multiple sets you may choose to apply one profile to several (or all) of the sets at any time.

In this example, I select No to apply mappings in the entire repository.


9. Since your license key may be used by more than one Gateway user, you can assign users with that key to particular collections. The users can then map metadata and synchronize with WorldCat for each collection to which they are assigned. Then you have to select the type of record processing for this collection and prepare your collection for synchronization with WorldCat through the Gateway.

Then, on the homepage, you will find several sections:

- Collection Details – In addition to the general information displayed for this collection, you can set the WorldCat Record Processing type, collection-level record, and more.
- Sync Details – You can edit the synchronization schedule for this collection, view its synchronization history, or view a synchronization status report.
- Metadata Map – You can click the link to edit the collection's metadata map.


Congratulations! Now you will be able to see your collections in WorldCat.

The entire repository is available at:

QA Analysis for Current Repository

Finally, I also did a quality assurance analysis of the DSU's repository. The total DC completeness is 73.12%; some collections need dates added.

Enjoy your harvesting!


References

Lagoze, C., & Van de Sompel, H. (2002). The Open Archives Initiative Protocol for Metadata Harvesting.

OCLC Online Computer Library Center, Inc. (2012). The WorldCat Digital Collection Gateway Tutorial.

Tripp, E. (2014). Get Discovered: Sitemaps, OAI and More.

Jackson, et al. (2008). Dublin Core Metadata Harvested Through OAI-PMH.

Shreeves, et al. (2006). Moving Towards Shareable Metadata.


Wednesday, April 13, 2016 - 17:52

[This post is shared on behalf of Jaclyn DeGasperin, who completed her UofT iSchool practicum at the Digital Scholarship Unit this winter term.]

I knew going into the Practicum course that it would be a challenge for me; I had never worked in a traditional library, so this would be a lot of firsts for me (it's a good thing I decided to pick a project that would have me working in a non-traditional library, smart choice on my part). But when on the list of projects was one called "Building and Assessing Digital Collections in Islandora: 15th century manuscripts," I figured I would give it a shot. With a name like that it was hard to say no; rare books? Yes, please. Digital Humanities? Exactly what I'm looking for. Islandora? I'm sure I can figure it out (this may have been a bad idea, overestimating my abilities to work with technology, but my overeager desire to challenge and prove myself as a competent librarian won out over my other sensibilities).

But here's the thing: I managed, and I can even say that I have the confidence to work with repositories; in a few short months my knowledge of digital repository software went from basically nothing to fairly developed (of course I still have room to grow; hopefully a job will come along that will allow me to meet this challenge, but that's for the future). You see, with supervisors like Lydia and Kim, who genuinely want you to succeed, understand, and excel in the field that they themselves love, it's impossible to just coast by and not learn. The environment at the DSU is open and friendly; there was not a day that I came in and wasn't greeted with a smile by one of the wonderful women who worked in the office.

It turns out I like digital librarianship. My work at the DSU started with the Scarborough Oral History Project -- Stories of UTSC: 1964-2014, which we started working on in February. The goal of the project was to draw attention to the voices within the UTSC community that are often ignored or overlooked; this project tells the untold and unofficial stories of the University of Toronto Scarborough Campus. It's finished and lives in a tiny corner of UTSC.

As we moved into working with the Gunda Gunde manuscripts, our focus shifted from working with the surface of a repository to digging into how a collection is put together, pulling at the guts of Islandora and seeing what makes it tick (and how to talk to it nicely so it does what you want it to do). For this project we were provided with digital photographs of manuscripts from the Gunda Gunde monastery in Ethiopia; they were already in an Islandora collection and had been turned into what can only be called a digital book. It was our job to check that the metadata attached to the images was accurate and that the pages followed in the proper order to match the original artifacts.

While this quality checking was monotonous, it provided me with access to wonderfully rare books that I enjoyed flipping through. More importantly, though, it was a chance to see how a digital repository works as a preservation platform.

Overall my experience with the DSU was positive; it gave me the chance to do work that I would not have otherwise been able to do and experience personal and professional growth. Also, the DSU now has a collection of photogenic animals in its virtual box - you're welcome (I think?).

Wednesday, April 13, 2016 - 10:37

[The following post is shared on behalf of Julia King, one of the practicum students hosted by the Digital Scholarship Unit this term.]

I went to library school in order to work with rare books. At the start of the degree, my list of courses I expected I would need included coursework in book history, archives, and “digital humanities”, whatever that was. But somehow, through work and school and projects, I found myself in my final semester without actually getting any digital scholarship experience. So, imagine my surprise and delight when I found a project on the practicum list called “Building and Assessing Digital Collections in Islandora: 15th century Manuscripts”! Not only was it digital scholarship, but it was also exactly in my area of research interest!

So, TL;DR, I came into this practicum with a lot of expectations.

So I signed up, made the trek out to Scarborough twice a week, and took my knowledge of digital repository software from absolute zero to being pretty confident in my knowledge and skills. Along with Jaclyn, my fellow practicum colleague and classmate at the UofT iSchool, we jumped into the world of Islandora and digital repositories head first.

We started our work on the Stories of UTSC project, which collected oral histories from members of the UTSC community to celebrate the campus’s 50th anniversary. When we started, what we had were collections of files that were saved onto the Islandora repository (an open source place to store and display files that the DSU uses for most if not all of its projects), and a spreadsheet of data gathered by the undergraduate students who did the interviews as part of a class project. We looked at those spreadsheets, changed them into functional metadata (which is basically data about data, or how information about the objects in the repository is sorted and then made searchable). We then used code magic to “ingest” (my favourite word from the practicum, which makes me feel like we are directing a giant digital Kirby) all of this information and attach it to the files in Islandora.


This is basically what an ingest looks like in my brain, but with more XML tags.

Once the metadata was safely attached to the files in the repository, we cleaned it up, added thumbnails, and basically made the collection look presentable for its big unveil in late February.

After this project finished, we jumped into working with the manuscripts. Of course, the DSU isn’t a rare book library, or a physical library at all—we instead take data given to us by professors and store it in a usable way on the internet. So in this case, we had photographs of manuscripts from the Gunda Gunde monastery in Ethiopia, and they had already been ingested into Islandora and turned into essentially a digital book using the Internet Archive Bookreader. Our job was to check that the metadata that was attached to each image was accurate, and that the pages in the book had actually uploaded in the correct order.

Metadata checking is monotonous—you look at a .pdf, and then make sure the fields in your metadata form say the same thing. But, this also was where I used the majority of my rare book skills. For example, I noticed that the “author” field on the form was being used for donor and owner names—so we set up a new section of the form to accommodate this. I made suggestions on improving the system for citing which folios information came from (although this has yet to be resolved, because we couldn’t figure out an easy way to do it that didn’t involve insane amounts of coding or, worse, re-checking all the metadata by hand.)

Checking the page ordering could have been even worse—except that we were actually working with the individual pages, so we were able to experience the manuscripts visually—and there are some stunners in the project. You can look at all of them here, or get a taste through my excited Instagram that I took in March:


Screen at #practicum at #utsc with medieval Ethiopian Nativity. by @julialilinoe

As a closing project, we also created our own mock collections in the Islandora virtual box (basically a place where you can test features of Islandora on a fake collection in order to play with the functionality of the repository). This was by far the hardest and most rewarding thing we did on this project. Both the Stories of UTSC project and the Gunda Gunde project were easy enough to figure out—you filled in a form, or uploaded a document. But with this project, we really had to do research and dig deep to understand what exactly the system was doing, and what our collection needed to be like in order to help the system do its thing. We built our own forms for metadata, and had to figure out how to do this within the confines of the extremely confusing Islandora form builder, we figured out how to make our forms autocomplete, and we struggled with the concept of dynamic websites vs. static ones.

Basically, we figured out what the “?” was, where Phase 1 was “ingest items into Islandora”.

Take aways from this experience include:

- An understanding of what a repository is, and how it can work to organize data online
- How repositories can be used in the rare books world
- An introductory understanding of how dynamic websites work vs. static websites
- A visual understanding of what medieval Ethiopian manuscripts look like
- A job offer to work on a medieval digital humanities project with another unit at U of T!

This practicum changed my understanding of what my role could be in a library working with rare books, and as you’ve just read, I’ll be continuing on in the digital world working with manuscripts. I am positive that without this practicum, I wouldn’t be able to jump into such a role, and I would recommend anybody with a passing interest in Digital Humanities or metadata to jump at the chance to work here. This is definitely one of the most valuable things I’ve done during the MI degree. The team is very friendly, supportive, and ready to explain anything about digital repositories and the digital scholarship role in the library. This practicum definitely exceeded my expectations of what knowledge and experience I would gain through the course, and I encourage other students to consider choosing to work here or in a similar project in the future!

Monday, August 17, 2015 - 11:18
310+ people have RSVP'd so far

Date: Sunday, September 27, 2015
Time: Event starts at 8pm. Total eclipse starts at 10:11pm
Location: Rm 309, Science Wing Building, University of Toronto Scarborough Campus
Email address for more information: Subject line: Lunar Eclipse Live Event

Amaury Triaud, Postdoctoral Fellow, University of Toronto
Ari Silburt, PhD Candidate, Astrophysics, University of Toronto
Daniel Tamayo, Postdoctoral Fellow, University of Toronto
Hanno Rein, Assistant Professor, University of Toronto

In celebration of Science Literacy Week, join us for an evening to view the last visible total lunar eclipse until 2019, accompanied by a series of short lectures on astronomy. On September 27, the moon will pass through Earth's shadow, blocking any direct sunlight to the moon and causing it to glow red. The eclipse will also be streamed live on screen in case of poor weather. Attendees will get a chance to tour UTSC's observatory and look through our telescopes.
Refreshments will be provided.

About Science Literacy Week
Science Literacy Week is a coast-to-coast celebration encompassing over 100 institutions from nearly 50 cities. It is a forum for all these great organizations to showcase the amazing work they do year round. Whether it’s in highlighting great books, hands-on demonstrations, talks by great scientists or so much more, my hope is that you’ll learn a lot and have a great time in the process.

Register Here:
Tuesday, May 12, 2015 - 13:43

Check out the conference website and save the date for great speakers and discussions about digital literacy in the classroom!

Monday, April 20, 2015 - 13:33

The following sessions may be of interest to Humanities faculty and library staff this Friday, April 24th.

11.30-1.30 (lunch provided), Instructional Centre, Room IC 318 

"BigDIVA: Search as Research” Laura Mandell (Texas A&M)

"The Renaissance Knowledge Network as Social Knowledge Hub" Daniel Powell (Kings College London), with William Bowen (UTSC) and Ray Siemens (University of Victoria)

2:00-3:00 BigDiva update, Humanities Wing, Room HW 525C

3:00-5:00 ARC Metadata Committee presentation, Humanities Wing, Room HW 525C

Thursday, April 16, 2015 - 09:19
Thursday, April 2, 2015 - 10:10

Are you a doctoral student (or an advanced MA student) wondering about all the buzz around “the Digital Humanities”? Do you wish you could get a concise introduction to this emerging field as it applies to your own research (and meet a community of scholars working in this area)? If so, please join UTSC faculty, digital librarians, and other graduate students for a day-long workshop on April 21st, 2015. In addition to an overview of the field of DH and a sneak peek of some exciting projects being developed at UTSC, you’ll learn about key principles, methods, and tools available to you right now. No coding or prior experience necessary -- just register to secure your spot by filling out this brief survey. As a bonus, your feedback will help us finalize the schedule to ensure that the training provided is as useful as possible. We will provide lunch, instructional materials, and lots of food for thought. All you need to bring is your laptop and questions.

Please note: All graduate students are welcome to register to this free event, but as seating is limited, priority will be given to pre-candidacy students in the Departments of History, Classics, East Asian Studies, Near and Middle Eastern Civilizations, and the Collaborative Programs in Women and Gender Studies and South Asian Studies. Register by April 13th, and you’ll be notified of acceptance by April 15th.

This event is hosted by the Department of Historical and Cultural Studies, in partnership with the UTSC Library.

If you have any questions, please contact us at

Social Media

Tweet the day at #utscDH15

Join our Facebook Group.


Getting Here

Location - the Social Sciences Building (MW) at the University of Toronto Scarborough, RM 120



According to the TTC Trip Planner
Take the 198 ROCKET towards U OF T SCARBOROUGH - EAST
198 Bus Schedule


Preliminary Schedule

9:30 - Welcome, housekeeping, roundtable, what is DH?

10:00 - Introduction to Digital Humanities at UTSC (Faculty Projects)

10:30 - Open Discussion/break

10:45 - Zotero and Beyond Bibliographic Management - What is Zotero? How do you use its features for data collection and research collaboration? Intro to the open-source community and plugins

12:00 - Lunch (Provided)

1:00 - Academic Blogging, Digital & Data Storytelling, Natural Language Processing & Social Media Analysis

3:00 - break

3:15 - Structuring and Analyzing Data (Network Visualizations) - Structured data and your thesis

4:15 - Closing remarks (Frontiers & Data Curation)

4:30 - Exit survey

Event Updates!

We have a full house! Thanks to all for applying to attend this workshop. We're excited at the amount of interest it has garnered. For those of you attending, we have some updates.

Learn more about who's attending!

Join our Facebook Group.


Thursday, October 23, 2014 - 09:51

We’re on day 4 of Open Access (OA) Week! We had a great turn out yesterday at the button making station outside the library and the social media activity is still going strong.  Photos are posted on the @digitalUTSC Instagram and Twitter accounts, as well as the EPSA Facebook page. Thanks to all who have been participating!  

There are still great OA Week activities happening across the tri-campus for the rest of the week. Check out what is happening today across all 3 U of T campuses.

Of particular interest to those at UTSC:

- We're back TODAY 10:30am-3pm outside the library, so if you have a spare moment, please drop by to say hello, make a button, and share your thoughts on OA.
- Drop by the Library Instruction Lab (AC286A) 2-3pm today and Friday if you have any questions about depositing copies of your publications in our research repository (TSpace) or want to know more about publishing in an OA journal.
- If you’re looking to publish in an OA journal but don’t have any funding remaining, you may want to consider the Library’s OA Author Fund pilot, which has been extended into 2015. We also have RSC Gold-for-Gold vouchers if you’re publishing in an RSC publication. Questions? Email me or come to one of the drop-in sessions listed above.
- I’ve been periodically tweeting out links to OA publications, open data, and other open research outputs to showcase the amazing scholarly output of UTSC researchers. If you have something you’d like me to highlight, please either email me or include the Library's Digital Scholarship Unit (@digitalUTSC) or my personal (@4Bes) Twitter handle if you decide to tweet out a link yourself.
- There will also be more OA Week trivia coming tomorrow! Show off your OA skills on the @UofTSCCO FB page to win a Starbucks gift card.
Thursday, October 23, 2014 - 09:28
Open Access Week 2014: Events Listing

Monday, October 20

Open Access Week 2014 Kick Off Event at the World Bank: Generation Open (WEBCAST)

3:00 - 4:00 pm (EDT)

VIEW THE WEBCAST: (pre-registration not required)

Join the World Bank and SPARC (The Scholarly Publishing and Academic Resources Coalition) as they host the International Open Access Week Kick Off Event, live streamed from Washington D.C. The event seeks to provide a forum for early-career researchers and students to discuss how a transition to open access could affect researchers at various stages in their careers.  A panel of experts will also discuss how academic and research institutions can become involved in supporting early-career researchers to make their scholarly articles and data accessible to all.

The following panelists will be involved in the event:

- Stefano Bertuzzi: Executive Director, American Society for Cell Biology
- José-Marie Griffiths: Vice President for Academic Affairs, Bryant University
- Meredith Niles: Postdoctoral Research Fellow, Sustainability Science Program, Harvard University
- Jerry Sheehan: Assistant Director for Policy Development, National Library of Medicine

Wednesday, October 22

Humanities Informational Drop-In Session with a Scholarly Communications Librarian

2:00 - 3:00 pm

AC286A Library Instruction Lab

Have questions about publishing your research? Inquiries about copyright or open access? Or just want to explore your options for the future? Come in from 2 to 3 pm to the Library Instruction Lab to talk one-on-one with the UTSC Scholarly Communications Librarian!

  Student Social Media & Button Table

10:00 am - 3:00 pm

Outside the Library

Is access to research important to you? Are you considering publication of your research someday? Come chat with Sarah Forbes, the UTSC Scholarly Communication Librarian, and EPSA representatives to learn more about open access and show your support by making personalized buttons and sharing on social media how much these issues matter to UTSC students.


Thursday, October 23

Social Sciences Informational Drop-In Session with a Scholarly Communications Librarian

2:00 - 3:00 pm

AC286A Library Instruction Lab

Have questions about publishing your research? Inquiries about copyright or open access? Or just want to explore your options for the future? Come in from 2 to 3 pm to the Library Instruction Lab to talk one-on-one with the UTSC Scholarly Communications Librarian!


Student Social Media & Button Table

10:30 am - 3:00 pm

Outside the Library

Is access to research important to you? Are you considering publication of your research someday? Come chat with Sarah Forbes, the UTSC Scholarly Communication Librarian, and EPSA representatives to learn more about open access and show your support by making personalized buttons and sharing on social media how much these issues matter to UTSC students.


Friday, October 24

Sciences Informational Drop-In Session with a Scholarly Communications Librarian

2:00 - 3:00 pm

AC286A Library Instruction Lab

Have questions about publishing your research? Inquiries about copyright or open access? Or just want to explore your options for the future? Come in from 2 to 3 pm to the Library Instruction Lab to talk one-on-one with the UTSC Scholarly Communications Librarian!


Complete listing of tri-campus events

For more information, contact:

Sarah Forbes

Wednesday, October 15, 2014 - 10:29

For the original post, visit

Recently we found that we needed to revisit our old friend the Drupal Feeds module and get it to play nice with Zotero's API. This time we wanted to use it with Culinaria's Zotero library. Our goal was to pull all of the items and their content from the library into the Culinaria UTSC website (much like how an RSS feed works). In the original post, Kirsta mentioned that she was having trouble when the processor was set to periodically import items to keep the feed up to date with the Zotero library. Every time the import ran, the processor wouldn't just add new items and update existing ones; it created a brand-new item for every entry returned by the API, leaving lots of duplicate entries in Drupal that needed to be removed. We've got it working now, but first:

A Recap

How to set up a Zotero feed in Drupal:
1. Create 2 new content types: Custom Feeds Processor, Custom Feed Item (under Structure)
2. Add a new Feed Importer (under Structure)
3. Add a new Custom Feed Processor (under Add Content)
4. Configure the new Feed Importer: attach it to the Custom Feed Processor and map it to the fields in the Custom Feed Item, which will be parsed with XPath (under Structure)
5. Go to the Custom Feed Processor and use XPath as your field output (under Content)
6. Run the Import in your Custom Feed Processor. You should see that every entry in the Zotero library has created a new custom feed item in Drupal in your Content view. You can then create a page with all of your feed items using the Views module.


You can see the final page here:

We needed to set a unique key that could be matched against existing feed items in Drupal. It worked best when we mapped the element zapi:key to the Title field. That way, every time the import runs it checks whether that key already exists; if it does, it updates the existing feed item rather than creating a new one.
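To illustrate the idea, Zotero's Atom responses carry each item's key in a zapi:key element alongside the regular Atom fields. Here is a minimal Python sketch (using a made-up two-entry feed; in practice this XML would come from api.zotero.org) showing how keying on zapi:key makes repeated imports update instead of duplicate:

```python
import xml.etree.ElementTree as ET

# Namespaces used in Zotero's Atom responses
NS = {"atom": "http://www.w3.org/2005/Atom",
      "zapi": "http://zotero.org/ns/api"}

# A made-up two-entry feed standing in for the real API response
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom"
                    xmlns:zapi="http://zotero.org/ns/api">
  <entry><title>First article</title><zapi:key>ABCD1234</zapi:key></entry>
  <entry><title>Second article</title><zapi:key>EFGH5678</zapi:key></entry>
</feed>"""

items = {}  # keyed on zapi:key, mimicking Feeds' unique-target check

def import_feed(xml_text):
    root = ET.fromstring(xml_text)
    for entry in root.findall("atom:entry", NS):
        key = entry.findtext("zapi:key", namespaces=NS)
        # Existing key: update in place. New key: create. Never duplicate.
        items[key] = entry.findtext("atom:title", namespaces=NS)

import_feed(feed_xml)
import_feed(feed_xml)  # re-running the import adds nothing new
print(len(items))  # 2
```

Feeds does the equivalent bookkeeping for you once the unique target is configured; the dictionary above just makes the behaviour visible.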

These are the fields we selected in the Feed Importer  to map to the processor.  We also set Title as the unique key in the target configuration column. 

Here are the XPaths we used in our Custom Feeds Processor:
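As a hypothetical illustration only (not our exact production values), XPath mappings against a Zotero Atom feed generally look something like this, with the element names taken from Zotero's Atom output:

```text
Context (row) XPath:  //atom:entry
Title:                zapi:key          (the unique key)
Item title:           atom:title
Item type:            zapi:itemType
Creators:             zapi:creatorSummary
Date published:       atom:published
Citation/content:     atom:content
```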


Other Tips

The Zotero API returns at most 100 items per request, but we had 115 items. Our workaround was to import the whole library by running the processor twice: once with the items sorted in descending order, and once sorted in ascending order. From here on out we set the API to sort in descending order, so each run only grabs the 100 most recent items.
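An alternative to the two-pass sort trick is to page through the library with the API's start and limit query parameters. A sketch, assuming those parameters and a placeholder group ID (the base URL and library size here are illustrative):

```python
# Page through a Zotero library instead of the two-pass
# ascending/descending trick.
PAGE_SIZE = 100  # Zotero's per-request maximum

def page_urls(base_url, total_items, page_size=PAGE_SIZE):
    """Yield one request URL per page needed to cover the library."""
    for start in range(0, total_items, page_size):
        yield f"{base_url}?format=atom&start={start}&limit={page_size}"

urls = list(page_urls("https://api.zotero.org/groups/12345/items", 115))
print(len(urls))  # 2: start=0 and start=100 cover all 115 items
```

The catch for Feeds is that each page is a different URL, so you would need one importer per page (or a module that handles paging); for a library just over the limit, the two-pass sort above is simpler.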

Earlier we used Oxygen to get the XPaths, but Google Chrome also has an XML Tree extension that will quickly generate XPaths for you:

In your Feed Importer, it's useful to use the Tamper settings to clean up your feeds. We used HTML entity decode and URL decode, which convert encoded values such as "&amp;" back into "&". You can also use plugins such as Find and Replace, Filter empty values, or Explode.
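These two Tamper plugins perform the same transformations as Python's standard library, which makes it easy to see what they do to a feed value:

```python
import html
from urllib.parse import unquote

# HTML entity decode: named and numeric entities back to characters
print(html.unescape("Food &amp; Wine"))  # Food & Wine
print(html.unescape("caf&#xe9;"))        # café

# URL decode: percent-encoded bytes back to characters
print(unquote("Food%20%26%20Wine"))      # Food & Wine
```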

You can turn the tags in your Zotero library into a taxonomy in Drupal, then create a menu from those terms. First you'll need to create the new terms from your Feed Importer: