Thursday 4 September 2008
Briefing Notes for the IntraLibrary Repository from Intrallect: A Technical Perspective
Executive Summary
The IntraLibrary repository is a web application that requires a Java Server Pages enabled webserver, such as Tomcat, and a MySQL database. It is fairly straightforward to install and the software support for the product is good. It has a Powerlink that allows users to search the repository from our Virtual Learning Environment and incorporate content into courses in a controlled manner. This paper is intended for systems administrators and technical staff. It adopts the seemingly casual tone and terseness commonly used among systems people!
Install Apache and Tomcat
IntraLibrary is a Java Server Pages application, so you will need a server that serves JSPs. The Keele installation uses Apache 2.0 with Tomcat 5.5, a popular combination, running on Linux. There are instructions available on the Internet for installing a JSP-enabled webserver. Happily, a number of major Linux distributions have pre-packaged Tomcat to make installing it a breeze. Tools such as yum (Fedora), apt-get (Debian/Ubuntu) and the Synaptic Package Manager will all let you search their repositories for the current version of Tomcat. Java is obviously a requirement. Talk with your local Linux guru.
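For example, on a Fedora or Debian/Ubuntu box something along these lines will usually do the trick (package names vary between distributions and releases, so treat these as illustrative rather than gospel):
# Fedora and friends
yum install tomcat5 tomcat5-webapps
# Debian / Ubuntu
apt-get install tomcat5.5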
Read the Instructions
Seriously. Read the instructions. My only problems happened when I missed part of a step in the instructions. They are fairly clear and Intrallect's support is quite good if you have missed something, but before you call, just check the Troubleshooting section at the end to avoid embarrassment. Other than a few wrinkles which I will detail below, I have found the installation procedure goes smoothly.
My Experience
Create a directory (such as /usr/local/intralibrary3p0) on the filesystem that runs Tomcat and place the intralibrary.war there, along with the config directory. Note that in the documentation TOMCAT_HOME and CATALINA_HOME are used interchangeably. Copy the sample context.xml file to $CATALINA_HOME/conf/Catalina/localhost/$CONTEXT.xml and make a backup copy somewhere; Tomcat has deleted it when I've made errors doing development work. Copy all of the required jar files to the three different directories (I missed one).
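As a rough sketch of the steps above (the context name "intralibrary" and the backup location are just examples from our setup):
# create a home for the application files and copy the distribution into it
mkdir -p /usr/local/intralibrary3p0
cp intralibrary.war /usr/local/intralibrary3p0/
cp -r config /usr/local/intralibrary3p0/
# register the context with Tomcat, and keep a spare copy of the file
cp context.xml $CATALINA_HOME/conf/Catalina/localhost/intralibrary.xml
cp context.xml /usr/local/intralibrary3p0/context.xml.backup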
In our MySQL configuration file, which lives in /etc/my.cnf, we increased max_allowed_packet from 20MB to 200MB because of the size of some of the objects we were putting into the repository. You will know that you need to increase it when you get errors while creating a backup with mysqldump.
[mysqld]
set-variable = max_allowed_packet=200M
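After restarting MySQL, a quick sanity check that the new value has taken effect:
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"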
When troubleshooting, the logfile you're most interested in is $TOMCAT_HOME/logs/catalina.out. Don't worry if you get errors when the IntraLibrary mail function tries to connect to Intrallect: these appear in my log files at startup because the connection is blocked by our system. IntraLibrary has its own log directory under config, but it's not as relevant for troubleshooting.
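Watching the log while Tomcat starts up is usually all the diagnosis you need:
tail -f $TOMCAT_HOME/logs/catalina.out
# or, after the event, pick out the interesting bits
grep -i exception $TOMCAT_HOME/logs/catalina.out | tail -20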
For the sake of your users, implement a good database backup policy. We have a cron job running overnight on a separate machine that executes the following command:
ssh -n $REPOSITORY_HOSTNAME 'mysqldump --max_allowed_packet=1073741824 $DATABASE_NAME --password=$PASSWORD' | gzip -1 > $BACKUP_HOME/intralibrary-`date +%Y-%m-%d`.sql.gz
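For what it's worth, the crontab entry on the backup machine looks roughly like this (the time, user and script name are illustrative), and restoring from a dump is the usual gunzip-and-pipe affair:
# /etc/crontab on the backup machine - 2am every night
0 2 * * * backup /usr/local/bin/backup-intralibrary.sh
# restoring a dump into the database
gunzip -c $BACKUP_HOME/intralibrary-2008-09-03.sql.gz | mysql $DATABASE_NAME --password=$PASSWORD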
My setup creates a web application at http://repository.keele.ac.uk:8080/intralibrary/ but there is the annoying default Tomcat page at http://repository.keele.ac.uk:8080. If you edit the index.jsp page or insert an index.html page in $TOMCAT_HOME/webapps/ROOT/, you can create a personalised information page for your repository. We are in the process of developing a method to make the repository searchable by Google, using RSS feeds to create Google Sitemaps. Look at Google Webmaster Tools for more information.
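A quick heredoc is enough to drop a placeholder page into ROOT; the wording below is only our example:
cat > $TOMCAT_HOME/webapps/ROOT/index.html <<'EOF'
<html>
<head><title>Keele Learning Object Repository</title></head>
<body><p>The repository lives at
<a href="http://repository.keele.ac.uk:8080/intralibrary/">repository.keele.ac.uk:8080/intralibrary</a>.</p></body>
</html>
EOF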
We find that the developers at Intrallect are candid about the software's capabilities and future direction. They usually make an appearance at the user conference in Edinburgh where the Keele team successfully found the answers to all the questions that we needed answering for the evaluation process. The first installation took less than a day to complete, including emails to the support team. Now I can usually upgrade the software in under an hour. When upgrading, you may need to remove the context from the webapps directory and start Tomcat in order for the new context to unpack.
WebCT Powerlink
If you have installed a WebCT Powerlink before, connecting WebCT with intraLibrary follows the standard procedure. If you have never installed a Powerlink before, you'll find it a right song and dance. Once again, the instructions from Intrallect are sufficient for the task. Installing a Powerlink requires a restart of WebCT which takes us 10 minutes. There is almost no time of day when we don't have users logged in, so this becomes an issue of managing user expectations. Having a test/backup instance of the VLE has been very convenient for any development and testing.
Further information and Contacts
Intrallect is a small Edinburgh-based software company whose website is http://www.intrallect.com/
Documentation on JSP can be found at the Apache Tomcat home, http://tomcat.apache.org/
The MySQL site is at http://www.mysql.com/
Google Webmaster Tools http://www.google.com/webmasters/tools will require a Google Account.
If you have any questions about issues raised in this document or find any errors, feel free to contact me at Keele University; I would be interested in hearing about other experiences of setting up IntraLibrary.
Making resources visible to Google....
A new use for the repository has been mooted: as a place to collect research papers. It makes sense to put these in one place, but obviously they need to be found by the outside world. Herein lies the catch: Google has recently pulled support for OAI-PMH, the standard way for a repository to reveal its contents to wandering search engines.
Google has its own way of doing things, similar to submitting an RSS feed (which IntraLibrary can produce), but it will take a couple of days to read the documentation and at worst a couple more to bend the code to mein will. (I say this not knowing exactly how it will happen, but I have an inkling that it will be easier than some of the code diving I've done that definitely did not have a guaranteed solution - and I trust my instincts.) The only sticking point is that I can't definitively claim it's possible without a working example - if I gamble, there's the chance of a few long nights when I can't afford them - and other projects are trying to catch my attention.
If the problem gets really nasty, I'll have to dig out the URLs for the objects myself which means getting up to my elbows in XML ... but Team, you're worth it.
So my official line is, "Of course it's possible. I'll try and have a look at it next week."
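For the record, the rough shape of what I have in mind is nothing cleverer than pulling the item URLs out of an RSS feed and wrapping them in sitemap XML - a back-of-the-envelope sketch follows (the feed URL and output path are made up, and it assumes one <link> per item with no awkward escaping):
#!/bin/sh
# sketch: turn an intraLibrary RSS feed into a Google Sitemap
FEED_URL="http://repository.keele.ac.uk:8080/intralibrary/rss-feed"   # made-up URL
OUT=/var/www/html/sitemap.xml
echo '<?xml version="1.0" encoding="UTF-8"?>' > $OUT
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' >> $OUT
wget -q -O - "$FEED_URL" \
  | grep -o '<link>[^<]*</link>' \
  | sed -e 's|<link>|  <url><loc>|' -e 's|</link>|</loc></url>|' >> $OUT
echo '</urlset>' >> $OUT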
Keywords: Google, intraLibrary, OAI-PMH, repository
Staff volunteer feedback on the user interface
Towards the end of April Scott and I spent some time with two members of staff demonstrating the use of the repository. Scott provided training materials and we asked them to spend some time on their own experimenting with using the repository and then providing us with feedback on their experiences. One member of staff was an academic in American Studies and Politics who was interested in sharing resources for his students as well as his own research with colleagues. The second staff member works in the Learning Development Team (LDT) within the Faculty of Humanities and Social Sciences and would be using the repository to encourage staff to share resources for their teaching. Both were competent IT users who were familiar with searching for materials for their research. Two weeks later they both sent me feedback on how they had found using the repository.
During the initial session they both raised questions that it would be important to answer during a staff development session. It reminded me how easy it is to become too familiar with a piece of software and believe it’s completely straightforward and intuitive… when in reality… Questions such as ‘what does JACS mean?’, ‘what do you mean by workflow?’ and, more fundamentally, ‘how do you envision the repository being used – staff to share, students to browse, or both?’ Other issues, such as ‘how are the search results ranked?’, led into discussions around the importance of inputting the right keywords at the cataloguing phase to make it as easy as possible to retrieve the right documents. They also both agreed that there needed to be a clear explanation of the object in the search results, so you didn’t have to preview everything just because the title looked relevant. This emphasised to me the importance of staff development being delivered by a team containing both the teaching and learning point of view (how will we use the repository to support the pedagogy?) and the library expertise (how do we successfully catalogue these objects so they can be found again!).
After a couple of weeks playing around with the repository, their feedback contained many similar points. They both found Scott’s training materials to be really helpful, but they would have liked a really simple flowchart as well showing the workflow and the stages they had to move through. They felt that this kind of ‘memory jog’ would help once they were more familiar with the why and what of each stage of the process. Pictures of the buttons would also be useful as they felt some of the stages are a bit hidden (both users had access to Intralibrary 2.9). There was a feeling that there seemed to be too many stages and they weren’t clear what they were all for. One of them compared it to purchasing an item on Amazon, where you are clearly guided through the stages at the top of the screen and know how much you’ve done and how much is left to do. I think this is slightly improved in what we’ve seen of the new version of Intralibrary, as the user is guided through the stages of the workflow much more clearly. The worry was that after inputting all the data, classifying and clicking save, the user might forget they needed to do anything else and objects would get forgotten in the work area.
When searching, our academic volunteer found the Advanced Search screen quite frustrating and confusing. He wrote: “On the Advanced Search page it's possible to get quite confused over where to put your main search term. When one chooses a first constraint (so I chose to search the 'faculty first' collection), one is very tempted to look around for somewhere else to put your basic search term: I ended up trying the simple search box in the bar at the top of the screen before registering that I'd need to open a new constraint field and choose a new category of constraint… must whatever's in the search box be deleted when one changes constraints? People will want to play with words through various search devices. For example, if I'm moving from a title search to a keyword search I don't want to have to type in "onomatopoeic" twice!” Again, this is something to keep in mind when producing training materials and delivering staff development sessions.
On the plus side, they were both really supportive of the concept and really liked the idea of being able to use the same document via the public URL on the VLE without having to upload it several times. The idea of being able to share pre-published research for peer review amongst colleagues or with students to use was also seen as a positive. The academic member of staff liked the flexibility of the metadata and the fact he could add extra fields if necessary. He finished by saying that for those involved with learning and teaching, particularly cross disciplinary, he viewed it as a really useful facility and hoped he would be able to continue using it.
Keywords: feedback, Repository, staff training
Posted by Georgina Spencer @ Keele Pathfinder Team
Metadata Configuration For CLA Workflow Using Intralibrary v3
Preamble
What follows is an outline of the Intralibrary v3 Learning Object Metadata (LOM) fields used in the test workflow for adding material scanned under the CLA licence.
In the initial (pre-v3) Intralibrary software client the metadata fields available to catalogue CLA-licensed material were limited. Intrallect had not yet included the additional “CLA” data fields required in their default LOM application profile at the time Keele wanted to launch their digitisation service.
Throughout the Pathfinder project we simply used the “title”, “description” and “classification” LOM entries to describe CLA material in the “pre-v3” repository. This enabled Keele to launch a digitisation service, using the repository as a store for PDF documents, held securely in a “private” Intralibrary collection. Items digitised and used in Keele’s Virtual Learning Environment (VLE) were “bundled” under classification nodes in the repository “library”, and were tagged in such a way that they cross-referenced with order numbers used by the library digitisation team. A separate record was maintained of all items scanned in the meantime, to ensure reporting to the CLA went to plan in February 2008.
Intralibrary has worked well as a storage solution for these materials, and the “public URLs” provided for academic staff to use in the VLE have worked reliably. Academics and students at Keele have benefited from the service, and more than 70 course modules are accessing over 350 documents from the repository server.
With the release of Intralibrary v3 earlier in April it became possible to test whether the new CLA application profile could accommodate CLA-type data, and also to test the CLA reporting function being built into the software client by Intrallect (see earlier blog entry).
It’s clear that there is a high degree of flexibility in how metadata managers might want to structure their application profiles within Intralibrary. The choices made by Keele Pathfinder may not provide the definitive answer to where CLA metadata can be located within the LOM template.
What has been tested in v3 against the CLA data recording and reporting requirements does work, though, within the LOM application profiles Intrallect has provided so far.
The nature of a self-archiving repository is such that there has to be a degree of flexibility in what the user wants to do when adding metadata for any items they deposit. Some sort of balance has to be struck: users follow simple “agreed norms” when adding metadata, and system administrators make an effort to ensure some degree of consistency in how items are classified and tagged so that others can actually find them.
However, for CLA reporting requirements, things are clear cut; institutions have to record certain types of data as a condition of holding the licence. Anyone using Intralibrary v3 has to make a decision on how best to use the LOM template provided, and to make sure the report function isolates the metadata fields they have selected to describe such material. The metadata application profile can also be used to record information useful in the management of CLA scanning, but not necessarily essential for the return report (see below).
Given the beta version we’ve had to work with (and the limited time to do the test as well!), the following are the metadata fields selected for use in the metadata application profile for our CLA workflow. Our hope is that any future user of this repository product will find them useful.
The Application Profile
(The following fields were set as “mandatory” within the Intralibrary v3 application profile).
LOM Reference | Label | Notes
1.2 | Title | We allowed for two instances of this field: one for a file title based on the “order number”, and one for the “extract title”.
1.3.1 | Catalogue | ISBN (International Standard Book Number) of the source.
1.3.2 | LOM Identifier | “ISBN” entered here to identify the preceding entry as the ISBN.
1.3.3.1 | Source Publication Type | Selected from a controlled vocabulary set up in Intralibrary (e.g. book chapter, journal article).
1.3.3.2 | Source Title | Title of the item the extract was scanned from.
1.3.3.3 | Journal Volume | An “n/a” had to be entered in the metadata if the item wasn’t a journal!
1.3.3.4 | Journal Issue | As above.
1.3.3.5 | Source Publication Date |
1.3.3.6 | Start Page | Of the extract.
1.3.3.7 | End Page | As above.
1.5 | Description | A Harvard-style reference for the extract was added here.
2.4.1 | Module Code | Keele module code.
2.4.2 | Module Title | Keele module title.
2.4.3.3.1 | Module Start Date |
2.4.3.4.1 | Module End Date |
2.4.3.5.1 | Module Duration | Entered as 1 year for all.
2.4.3.6 | Number of Students |
2.4.3.7 | Lecturer |
3.3.1 | Role Of Metadata Contributor | In this workflow, the source “author”.
3.3.2 | Metadata Contributor | The source author’s name.
4.1 | Technical Format | Entered automatically during item upload.
4.3 | Location Of Resource | The shelf number of the source item scanned.
7.2.3.1 | Catalogue | Used for additional author entries (2nd author or editor).
7.2.4.4.1 | CLA Code A | “Source information” for the CLA; controlled vocabulary option provided.
7.2.4.2 | CLA Code B | “Reason for Scanning” for the CLA; controlled vocabulary option provided.
7.2.4.3 | CLA Code C | “Artistic Work” statement for the CLA; controlled vocabulary option provided.
Keywords: CLA, Intralibrary, Intrallect, Keele, Metadata, Pathfinder, Workflows
Intralibrary v3 – CLA Reporting Function
The CLA requires institutions to submit reports on all items digitised under the “blanket licence” (see previous blog entries).
Intralibrary has a new “reports” function within the administration area of the v3 software client. The partly menu-driven reporting function can be used to “generate metadata reports”, which scan selected metadata within any repository collection and output the data.
The user selects a variety of fields within the relevant metadata application profile, and any data entered in them can be exported in Excel or CSV (comma-separated values, or “flat file”) format.
I checked through the v3 application profile created for the “CLA” workflow and identified those fields essential for a CLA report (the code numbers refer to the actual Learning Object Metadata (LOM) reference numbers for the fields within the Intralibrary application profile):
LOM Reference | Label |
2.4.1 | Module Code |
2.4.2 | Module Title |
2.4.3.6 | Number of Students |
1.3.1 | Catalogue Entry (ISBN) |
1.2 | Title (Keele CLA “order number” and separate entry for “extract title”) |
1.3.3.5 | Source Publication Date |
3.3.2 | Metadata Contributor (“extract author”) |
1.3.3.6 | Extract start page |
1.3.3.7 | Extract end page |
7.2.4.1.1 | CLA Code A – “Scanning Source” |
7.2.4.2 | CLA Code B – “Reason for Scanning” |
7.2.4.3 | CLA Code C – “Artistic Works Statement” |
These fields were selected to form the basis of a report “template”, which can be saved and run again in future. In creating a report you can also select only those repository collection items added between two particular dates (again selected using a calendar wizard). This is definitely useful for CLA return reports which want everything reported within a certain period.
The report function worked fine when tested on the CLA “test” collection created within Intralibrary v3. An Excel spreadsheet was successfully created and all the metadata was output into the appropriate cells. (This has been added to the file collection on our blog.)
Where there were two metadata “entries” at a particular part of the metadata record (for instance at 1.2 above, each record has two “title” entries, one for the CLA order number and the other for the extract title), these were also properly output by ensuring that the correct field “cardinality” was selected in the report generation template (i.e. we knew there were two entries in the metadata record, so we asked the report generator to output both!).
The only glitch noted was that, when the report was saved, for some reason it “reversed” the order of the fields selected and they came out in the spreadsheet “back to front” (i.e. the desired first column of data came out as the last column!). That means the user has to edit the spreadsheet, “cutting and pasting” the columns around until they are in the correct order for the CLA report, but this may well be a simple glitch in the beta version of Intralibrary v3.
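If the report is exported as CSV rather than Excel, the column shuffle can be done in one go from the command line rather than by hand - assuming none of the metadata values contain embedded commas, and with purely illustrative file names:
# reverse the column order of a comma-separated report
awk -F',' '{ out = $NF; for (i = NF-1; i >= 1; i--) out = out "," $i; print out }' report.csv > report-fixed.csv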
This looks to be a useful reporting tool for CLA purposes, but any data element within the application profile can be isolated and listed.
However, as it is a “listing” reporting function, it can’t yet create a report of repository items based on a typed-in search query. The user may want to type in a subject, title or author keyword or phrase, and ask Intralibrary to output an Excel report of all items which contain these words somewhere in the metadata.
All that aside, however, given that this report function was developed in direct response to the CLA reporting requirement, this beta version looks like a working solution.
Keywords: Administration, CLA, Intralibrary, Keele, LOM, Metadata, Pathfinder, Report
CLA Data Recording And Reporting With Intralibrary v3
April 30, 2008
In early April Intrallect provided the latest test “build” of the Intralibrary repository software. This release contained new functions to enable the recording of metadata required by the Copyright Licensing Agency for any items scanned and added to the repository under their Higher Education Trial Blanket Licence.
Institutions that digitise books and journals under the CLA licence have to record additional information about each item scanned and added to a repository. This includes the course of study for which each item is scanned, course codes and titles, and lecturer names, alongside the bibliographic information about the work from which the scanned extract was derived. Earlier in our project the CLA reporting requirements were passed to Intrallect’s software developers for inclusion in a potential “finished product”.
This new release contained an enhanced “application profile” which could be used alongside a “workflow” that will enable the repository user to add “CLA type” metadata. Application profiles are a series of metadata options which can be tailored for particular workflows on Intralibrary. The profile contains “fields” (containing the sort of item descriptions mentioned above) which you can set in a workflow as “mandatory”, “recommended”, “optional” etc. depending on what sort of metadata your user would like to add for any given item.
The default application profile for Intralibrary has new fields added that can be used for adding metadata for “CLA” items, including “source publication type”, “source title”, “journal volume”, “journal issue”, course information (code and title), academic’s name and so on. Controlled “vocabulary” fields were added too, so the CLA reporting “codes” for “document source” and “reason for scanning” could be entered from easy “pull down” menus. As well as providing a suitable level of description for each item, this should make it easier to isolate the metadata in these fields to create an output report of all items added to a “CLA” collection.
A copy was made within Intralibrary of the default application profile, and this was tailored for CLA purposes. To help get things straight in my mind, I cross matched the fields which appear in a typical CLA report with those fields in the new CLA application profile template. I set up these fields within the template as “essential”, so they will appear in the metadata editor for anyone using the CLA “workflow”. The CLA workflow already existed from the previous version of Intralibrary, and the new application profile was “attached” to this.
As a rule, there is no need to use every conceivable metadata option in the application profile (this would mean the user being confronted with a very long “cataloguing” screen when adding metadata!). I worked through the application profile selecting those fields which could provide the most suitable and easiest “template” for recording items added to the repository scanned under the CLA licence. This was done with some very useful input and advice from Sarah Currier at Intrallect.
The nature of the application profiles provided with Intralibrary is such that there is a degree of flexibility as to where you could place certain bits of metadata. Self-archiving in a repository is, after all, intended to be more flexible than what I could call “standard librarian type” cataloguing. For instance, author, ISBN (International Standard Book Number) and editor metadata could be placed in two possible places in the application profile. This goes against what some librarians might view as “standard” cataloguing practice, but in these days of self-archiving such flexibility could be a good thing. For one thing, it makes it possible to tailor the metadata requirements for each user and not force them to use a metadata template they would find too cumbersome and time consuming to “fill in”.
A series of “test” PDF documents was created for use in a test “course module” set up within a classification “node” of Intralibrary v3. These were uploaded using the existing CLA workflow, which was attached to the new application profile in v3.
Initially there were some little bugs after installing v3. For example, the application profiles were not editable, and the “classification” function didn’t work either, but these were soon fixed thanks to help from Sarah at Intrallect and Boyd from the Pathfinder team. Some of the system and navigation options from the previous version (“reserve item” and “rebuild object cache”) also seemed hidden in the new version, especially if you used Internet Explorer 7.
In contrast, using Mozilla meant that these did work! In many respects the “build” used throughout this process was very much a test version, although it was clear from the outset that v3 was easier to use and navigate than the previous version.
Using the new CLA customised application profile the metadata editor screen appeared as it should, with the new CLA metadata options appearing. One thing which became clear is that for all these CLA type materials, the course module information (code, title, academic, student numbers etc) has to be retyped into each record for each item. A way to copy this metadata into a record from a “scratch pad” or import it into the record automatically would be useful, but I don’t yet know whether it’s possible (and how to do it anyway if it is).
The existing CLA blanket licence forces you to classify and provide resources “by module”, so all this information has to be entered in some way for every item added to the repository. Using the classification nodes in the Intralibrary repository helps to “bundle” documents by course module code for easy retrieval, but this stipulation in the licence (to only provide documents to particular module audiences) is not widely popular, judging from the opinions regularly voiced in discussion with colleagues at Keele and professional colleagues elsewhere. It also means that “versions” of digital documents have to be created for each separate module, each with a separate URL, should the same extract be requested by more than one academic.
After working with the metadata editor and uploading a few documents, some tricky little anomalies appeared. It became apparent that within our CLA workflow the “contributor” field in the metadata was set up to automatically add the name of the user uploading the item, whereas the plan was to put the “source author” of the scanned item in this field. To achieve this, a new “vocabulary” was created for this field in the application profile which allowed the appropriate role (“author”, “editor” etc.) to be recorded here, followed by the actual name in the following field. This seems to have worked initially.
Where to put the “source ISBN” (of the work digitised under the CLA licence) in the application profile also created a pause for thought. Could it be placed in the profile section intended for “catalogue” or “description” metadata? The ISBN is a very useful piece of metadata which uniquely identifies a book or journal, and it was decided to locate this in the “1.3 Catalogue” section of the application profile.
When digitising items under the CLA licence it’s essential to make sure the restrictions of the licence are observed, mainly in relation to how much of a particular work is provided electronically to a course in any one year. Being able to cross-reference an ISBN is also useful from the CLA administrator’s point of view, in that you can see whether you’ve already scanned an extract for another course, saving the effort of scanning the document again. Being able to search on ISBN is very important in this respect, so where it goes in the metadata is something you want to decide on, and stick to, as the repository grows.
If you wanted to adopt a sort of “belt and braces” approach in applying metadata to CLA items, you could add the source title and the extract title to the “description” field in the application profile. In using the earlier version of Intralibrary provided, I adopted a “Harvard” bibliographic reference style for this metadata entry (detailing author name, year of publication, publisher, etc). This field, being searchable, could also provide a way to effectively search the repository, but wouldn’t be useful for reporting purposes as it doesn’t “separate” out the data.
The version of Intralibrary we’ve used for “live” CLA document delivery since September 2007 was used in this way, with all the metadata included in the file title, description field, and classification node. The new application profile, provided with v3, provides more scope for proper recording of this metadata in appropriate fields. It is certainly an advance, but as mentioned above, you need to think about where you place the metadata, and also how you type it in. Consistency in this is up to the user.
Today, Intrallect have provided a new release of the repository software, so with all the above in mind, as soon as it is installed the next thing to fully test is the reporting function. In the release we’ve had for the past few weeks, doing a report seemed straightforward. Within the report area you select the fields in the application profile you want the report to include from a “pull down” menu, and clicking on a button should produce an Excel spreadsheet detailing all the items added to the CLA collection with the appropriate metadata from the application profile appended. This can be used as the basis for a “CLA return report”.
Keywords: Application Profile, CLA, Copyright Licensing Agency, Intralibrary, Metadata, Workflows
User testing - Intralibrary 3 beta
Having gained access to the new beta of version 3 of the repository, we embarked on an early set of user testing. In the first instance, the system of uploading workflows and tying them to groups and metadata subsets was tested, and this appears to be as straightforward as it was with the previous version. Unfortunately workflows still have to be edited in XML and then uploaded to the repository, rather than being editable within the administration area.
A new generic workflow and metadata subset were created to make things easy for contributors while relieving the work burden on administrators. Once these had been loaded into the repository it was ready to be tested by a contributor. The health library had already been using the repository to store some induction materials which were made available to users through the VLE. We invited a member of the health library team to try out the upload process in version 3. The conclusions follow.
The upload process has become much easier, as the contributor is taken through steps such as choosing a group and a collection. The systematic approach makes the process more intuitive. However, once the resource has been uploaded it is not entirely obvious where it has gone; there is just an 'upload successful' message sitting on an otherwise blank screen.
The 'Edit resource' button is also a little difficult to spot, and the replacement of text by icons has rendered other facets of the reserved 'objects screen' somewhat mystical. If the clean look of the icons is to be beneficial, then the rollover descriptions have to work properly, otherwise unfamiliarity with the icons ensures progress is slow.
The general look of the repository was thought to be better and cleaner, and the added features of tagging and likeness searching within the resource screens, along with the basket options, make it easier to find personal resources or groups of resources.
Coming soon..... deletion issues.
Keywords: contributor, upload, workflows