I was ill yesterday, a migraine and periodic vomiting. So I spent the day in bed. It felt like a huge waste of time, which made me think again about the difficulties faced by G, my wife of twenty plus years.
For most of the time we have been married, she has been ill with myalgic encephalomyelitis (M.E.), with two of its common complications, multiple sensitivities to chemicals and noise, and to some extent to bright lights, along with an autoimmune consequence, type 1 diabetes. With this illness, trying to keep going regardless can lead to severe consequences - long periods completely bed-ridden, for instance. So G spends about 3 hours every day resting in bed. That means that for every eight years she has been ill, she has spent an extra year just lying down compared to most healthy people. Then there are all the things she cannot do. Things that most people do without apparent effort, like speaking, standing up, or thinking, are hard work and have to be rationed. Exercise, still recommended by some doctors, makes things worse. Associated cognitive dysfunction makes it difficult for her to have conversations with people she doesn't know, or to use the phone, as she suffers from word blindness (forgetting common words at random), or says the opposite word without intending to (very confusing when she's directing me while I'm driving!), or cannot remember the answers to common questions (such as "What is your name?").
The aspect of her illness which most affects her day-to-day life is the sensitivity to chemicals. These sensitivities produce shortness of breath, nausea, and shaking (if the trigger is a foodstuff) - and, if serious exposure occurs, convulsions (which happened when she was in hospital, and was prepared for an injection with an alcohol-soaked swab). Among the things she is allergic to are: most artificial scents, including almost all modern cleaning products (washing powders and aftershave in particular make it very hard for us to have visitors or socialise - even what I pick up from work means that I wear different clothes around the house from outside it), alcohol, citric acid (a common additive in many sweet foods such as yoghurt, cakes, and jam), adrenaline (which is problematic for modern hospital care), onions and garlic, tobacco smoke, and so on. Avoiding these things is a major effort in the modern world; we buy the same products over and over, and the words "New improved recipe" on something she has been able to eat for years send us on a frantic read through the ingredient lists of anything she might be able to consider as a replacement.
The effects of the noise sensitivity have also been pretty devastating. G trained to be a lecturer in drama, and theatre was her big love. But she cannot now be around crowds of people, or loud or amplified music - the noise is physically painful. In addition to this, of course, many of the people in a crowded auditorium will be wearing clothes washed in scented products. So it's fifteen years since we've been to the theatre, and longer since we saw a film at a cinema. When we are invited to weddings or other special occasions, she cannot go. She has lost contact with many old friends as a result. Going to shops is a major hassle when she has to wear dark wraparound glasses and a breathing mask, just to be able to cope with the environment.
The diabetes, caused when her immune system decided that G's pancreas was an alien body, is comparatively easy to cope with. Of course, this means that it is the one problem she has which doctors immediately take seriously - six-monthly specialist consultations, frequent blood tests, and a great deal of concern. But it is something which is well understood and can be managed with a little bit of discipline. Consultants find G very interesting, because type 1 diabetes usually affects young people (it used to be known as juvenile diabetes) and she developed it in her forties. Apart from the difficulty of persuading them that she shouldn't be subject to lots of extra tests and treatments (difficult because of her allergies), the main consequence of the diabetes is that G has to carry a fair amount of paraphernalia around with her, just in case there are problems with her blood sugar levels.
But - and here's the point of what I want to say - G still manages to be cheerful most of the time, even positive. She has taken up new pastimes (light gardening, with me doing the heavy work; card making) and has made new friends through forums and support groups for M.E. She even manages to put up with me being frustrated and irritable - and her illness affects me vastly less than it does her. Depression would be an easy way out (and doctors do often confuse M.E. with depression), but G does not let herself fall into that trap. She is strong inside in a way I think I would not be. She is great.
Sunday 31 May 2015
Wednesday 20 March 2013
Sakai Development: Post Ten
So now to work on the presentation side of things, using the Velocity files. This is something new to me, so I'm going to start off by reading through a tutorial suggested by Andrew Martin, with reference to the Sakai content tool. The first thing to do is to find the Velocity configuration, which is a file listed in the relevant web.xml file, and which turns out to be called (unsurprisingly) velocity.properties. This in turn tells me that the Velocity templates are stored where I would expect them to be, in the sakai-src/content/content-tool/tool/src/webapp/vm directory, sorted into two sub-directories, content and resources (which is a directory containing templates for common fragments, judging by the file names). One which we probably want to emulate is content/chef_resources_deleteConfirm.vm, so I'll start by opening that. Basically, the intention is to extend it by adding a metadata form. (I'll also need to make an archiving version equivalent to content/sakai_resources_deleteFinish.vm, I think.)
This is also where I come across the Sakai language support mechanism:
$tlang.getString("del.del")
I'll have to follow that through and add an appropriate entry for archiving, at least for the English language - there aren't all that many languages I could add entries to apart from that one. This, "del.del1", and "gen.cancel" appear several times in the two files. The values in different languages are set in files in config/localization/bundles/src/bundle/org/sakaiproject/localization/bundle/content/ (according to grep -R), so I'll need to sort things there. The file content.properties is the list of default phrases to use, which effectively means American English. So I add in content.properties, which usefully also contains a set of Dublin Core term descriptions:
#Archive Vm
archive.confirm = Archive confirmation and metadata creation
archive.sure = Enter information which describes the collection you have selected, and check the resources listed are correct.
archive.archive = Archive
archive.form = Enter information which describes the collection you have selected, and check the resources listed are correct.
archive.metadatatable = Table allows the input of metadata to describe the collection of items for archiving.
archive.resourcetable = Table holds information about resources selected for archiving. Column headers contain descriptions of contents and links for sorting.
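In the templates these phrases are then picked up through the same $tlang mechanism as the existing "del.*" keys, so the archive confirmation form will presumably reference them along these lines (a sketch - the exact markup depends on how the form ends up):

```html
<h3>$tlang.getString("archive.confirm")</h3>
<p>$tlang.getString("archive.sure")</p>
```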
I may need to add more later, when the form itself takes shape properly. Next, the form for confirmation itself. I'll also need to alphabeticise, as that's how the properties file is organised. This will start out as a copy of the delete confirmation form. Most of the changes amount to altering the word "delete" to "archive", to match equivalent updates to the ResourcesAction script which processes these files.
Then the main part of the change, to add the metadata input table. The metadata table needs to come before the file listing, simply because we don't know how many files might be in the listing, and so how far down the page the listing table might reach; confusing if the submitter needs to scroll down several times to find it. It starts with a table tag, like that for the listing, but instantly I see a problem. The file listing table has class "listHier" (presumably hierarchical list), but that is obviously not going to be appropriate for the metadata part of the form. The easiest way to solve this (as the vm file is a fragment of HTML and doesn't have the CSS declarations in it, so finding the right CSS file might be awkward - especially as there are almost 300 of them in the Sakai source code) is to find a table which is the right type, and copy the style information from that. Looking at the tables in the content-tool VMs, there are three types: listHier, itemSummary and no declared class, the first two with or without classes "lines" (not sure what this will do) and labelindnt (indent labels, presumably). I'll try not declaring a class first, so that the table will just inherit the styles current at this point in the page where it is displayed. So I add a table and appropriate DC terms - noticing that Sakai's interpretation of the meaning of a couple of them appears to be slightly different from that of the SWORD library developers, judging by the descriptions of the fields in the vocabulary list.
On to the second form. The delete version of the form seems to be identical, which is sensible - it takes the reader back to the file listing, which would now be missing the deleted items. At least, I think that's what happens; the form has submission buttons, which is a bit strange if this is the case. What I think I want to see displayed here for the archiving is the result of the submission (and a summary of what has been entered by the user on the previous page). So I add appropriate context.put statements to ResourcesAction to display the metadata which has been sent off, and display the results as text.
It would be possible to display the status of the submission in real time, using JQuery and Ajax. But I decided that I probably don't have the time to set this up, and most deposits should be pretty quick. So I just want to get the receipt, check the status, and then (assuming that the status indicates a successful deposit) display the information in the receipt - or at least, some of it, as there are quite a lot of fields which might be included in a returned receipt by the SWORD2 server. The swordapp lists the following fields which may be set in a DepositReceipt object:
receipt.getStatusCode();
receipt.getLocation();
receipt.getDerivedResourceLinks();
receipt.getOriginalDepositLink();
receipt.getEditMediaLink();
receipt.getAtomStatementLink();
receipt.getContentLink();
receipt.getEditLink();
receipt.getOREStatementLink();
receipt.getPackaging();
receipt.getSplashPageLink();
receipt.getStatementLink("application/rdf+xml");
receipt.getStatementLink("application/atom+xml;type=feed");
receipt.getSwordEditLink();
receipt.getTreatment();
receipt.getVerboseDescription();
Without going back to the SWORD2 specification it is not immediately apparent what all these fields mean. First, though, is a field which is not strictly speaking part of the receipt, but tells us whether the deposit itself was successful (if it has the value 200, 201, or 202) or not. The SWORD2 specification lists a small number of error codes which a server could send, so I only need to process these (and provide a default response, just in case - I could understand the web server returning 500 or 404, neither of which is listed in the specification, but which would indicate, respectively, a problem with the way the SWORD server is set up on the web server, or an incorrect URL). Some fields are provided for SWORD2 clients, rather than for direct access by a human being using a web browser - to allow, for example, the editing of the metadata for the deposited item at the location of the client rather than needing to go through the repository interface; such things are not relevant for our purposes. So I need to add code to the script to set appropriate information into the context, so it will be available to Velocity, and then add code to the archiveFinish Velocity template to display it.
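To make that handling concrete, here is a minimal sketch of the kind of status mapping I have in mind (the helper name and the wording of the messages are mine; only the 200/201/202 success codes come from the specification as discussed above):

```java
public class DepositStatus {

    // Hypothetical helper: turns the HTTP status code that came back with
    // the deposit into a human-readable message for the archiveFinish template.
    static String describe(int code) {
        if (code == 200 || code == 201 || code == 202) {
            return "Deposit successful (HTTP " + code + ")";
        }
        switch (code) {
            case 400: return "The server rejected the deposit request as malformed";
            case 401:
            case 403: return "Authentication to the repository failed";
            case 404: return "Deposit URL not found - check the servlet configuration";
            case 500: return "The SWORD server reported an internal error";
            default:  return "Deposit failed with unexpected status " + code;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(201)); // prints "Deposit successful (HTTP 201)"
        System.out.println(describe(404));
    }
}
```

The default branch matters because, as noted above, a misconfigured web server can hand back codes the specification never mentions.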
I'm now going to finally attempt a compilation. This is a make or break moment, as the outcome of the process will be used to decide whether it is worth going on to finish off the work or whether it will need to be left in an incomplete state while I go on to other things that the Research360 project needs. First time, it complains about the missing swordapp library, which I installed. Then there's the irritating missing end bracket:
sakai-src/content/content-tool/tool/src/java/org/sakaiproject/content/tool/ResourcesAction.java:[6796,73] error: ')' expected
Easily fixed. At least, once I realise I'm looking at line 7696 not line 6796...
And then again, and I get hundreds of errors from files I've not even changed -
could not parse error message:
  symbol:   variable Menu
  location: class AttachmentAction
/home/simon/work/sakai-src/content/content-tool/tool/src/java/org/sakaiproject/content/tool/AttachmentAction.java:339: error: cannot find symbol
        state.setAttribute(MenuItem.STATE_MENU, bar);
        ^
Labels: java, Research360, Sakai, source code, SWORD2, velocity
Monday 19 November 2012
Open Source Development in an Ideal World
What should the ideal large open source project offer potential developers to help them get started as quickly and easily as possible? These are my ideas on what would make my life easier when I'm thinking about producing some code for a project. I wouldn't expect any project to manage to get all of these right, but the more that can be managed the better! The larger a project becomes, the more developers are involved, and the more code there is to understand for even the smallest development task, the more important these issues become.
- There should be one major release at a time, or it should be clear which release is to be developed against for best integration with the current wishes of the software project community (whether it is the current stable release, existing alpha and beta releases, nightly build versions, or the code available at the moment from the repository). It should be easy to download the right version of the source code at any particular time (including the build which is recommended for development).
- There should be clear links from the main project website to instructions for developers.
- There should be a mailing list, with a helpful and understanding development culture.
- Code conventions and architectural assumptions (e.g. which Java technologies should be used for specific purposes) should be clearly documented, either on the Internet or in the code itself. It should not be necessary to use file search tools or the search interface of an integrated development environment (IDE) to find the source code for a major element of the product.
- It should be clear how back-end code and interface interact.
- It should be clear how authorisation to carry out actions is handled.
- There should be a general architectural philosophy which allows the use of plug-in code where this would be useful. Examples of where this would make life easier would be in adding small features (often known as "widgets") to the interface, which can then just consist of a fragment of HTML template and the code to support the feature which uses an API to interface to the main project code, as opposed to actually needing to modify the main code itself.
- Java and other modern languages are associated with a wide range of supporting technologies (e.g. compilers, HTML template engines, and development frameworks), and the specific choices made for an individual open source project should be clearly documented, as it is unlikely that developers new to the project will be familiar with all of them.
- The source code should compile without too much difficulty most of the time (daily builds may be expected to be broken sometimes, and in alpha and beta releases, some parts of the code may fail to compile if components are not yet updated to match new requirements).
- There should be Javadoc style documentation for every class in the source code to at least make it easy to determine the structure of each class, especially for large classes with many methods. A lot of information can often be obtained about how the code works just from the names of the methods, even if there is no extra detail available. The location of Javadocs should be central and clearly signposted. The Javadocs should be indexed by search engines (if they are automatically generated frequently, this might not happen, so it makes sense to permanently store static versions associated with stable releases, at least).
- Code should have at least some comments, and Java classes should not be unnecessarily complex and long – one of the points of modern programming languages is that developers do not need to put all the code into one huge source file.
- If a development tool such as an IDE is recommended, this should be made clear and it should be possible to import the code into the IDE either as a whole or in sections without much difficulty, even when it does not compile. This should be possible with the current version of the IDE (or, if not, the version to use should be clear) on a computer with a reasonable amount of memory.
- Where there are requirements to install third party code, these should be clear and properly maintained (i.e., links to download libraries should not produce “not found” errors). This includes, for example, extensions which need to be installed for the code to work with an IDE of choice, as well as the libraries used by the project itself, for which technologies such as maven make the management of the required libraries much easier.
Wednesday 10 October 2012
Sakai Development: Post Nine
Before actually starting to write the code to do the deposit itself, I need to set up and include the SWORD2 Java client libraries. If you're not used to GitHub, you might take a while to see the button which you can use to download the library as a zip file. Unzip it, cd to the created directory, and run
mvn clean package
to compile (and download a large number of new library files). This should hopefully end up with:
[INFO] [jar:jar {execution: default-jar}]
[INFO] Building jar: /home/simon/work/swordapp-JavaClient2.0-420485d/target/sword2-client-0.9.2.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3 minutes 24 seconds
[INFO] Finished at: Wed Sep 26 15:14:32 BST 2012
[INFO] Final Memory: 27M/121M
[INFO] ------------------------------------------------------------------------
and then it's a question of copying the jar file to somewhere where it can be picked up by Sakai. This requires two things (assuming my understanding of how maven POM files work is correct):
- Add a dependency to the relevant pom.xml file, which will be picked up on compilation, so that maven will attempt to download the relevant jar file, and, if it can't, will ask for it to be installed locally by hand. The relevant file is in the content/content-tool/tool directory, and needs the following added (with line numbering):
- Import the necessary classes into the ResourcesAction.java file so that the library can be used in this script. This is a simple pair
import org.swordapp.client.SWORDClient;
import org.swordapp.client.SWORDClient.*;
at line 135 of the file. The code which will use this is based on doFinalizeDelete (lines 6545-6675), which follows doDeleteConfirm when the confirmation is selected. I haven't yet worked out where the actual display of the confirmation request happens, so this is not the last change by any means. The confirmation step could also include the opportunity to select from a list of repositories, and from the collections which exist at the chosen repository (as obtained from the repository SWORD2 service document). But that is a complication I don't really want to get into at this stage; I just want to be able to carry out a simple deposit. So I'm going to have both the repository and the collection set in the configuration file for the servlet.
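The pom.xml fragment for the first of these isn't reproduced in this post; judging by the jar the client build produced (sword2-client-0.9.2.jar), the dependency would presumably look something like this (the group and artifact ids are my guess from the project layout):

```xml
<dependency>
  <groupId>org.swordapp</groupId>
  <artifactId>sword2-client</artifactId>
  <version>0.9.2</version>
</dependency>
```

If maven can't fetch the jar from a remote repository, `mvn install:install-file -Dfile=target/sword2-client-0.9.2.jar -DgroupId=org.swordapp -DartifactId=sword2-client -Dversion=0.9.2 -Dpackaging=jar` installs the locally built one by hand.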
The process is relatively simple; however, I should point out that I have just noticed in the SWORD2 client documentation that multi-part deposit is not supported, and the way I have been thinking has assumed that it is. So I will have to make a zip file or something of the items to be deposited (as their nature as a collection is important). java.util.zip is a core package, but not one I've ever used before; I'll start by adding the import for the package at the top of the file (new line 52).
The steps for producing a SWORD2 deposit from the selected files are:
- Get archive details from configuration (some will always need to be obtained from configuration, but a service description could be downloaded on the fly to get information about collections etc. - just not in this version as I'm already overrunning the schedule);
- Prepare metadata, using data from the confirmation form, which should basically be a subset of DC terms for use in the deposit (bearing in mind that it's possible for the depositor to go to the repository and change the metadata later if necessary);
- Prepare files - create a zip as a file stream;
- Authenticate to repository using userid/password provided in confirmation form;
- Make deposit and (hopefully) get back a receipt.
While the information given in the swordapp documentation at first looks pretty complete, it is missing some details which I need, as I discover on starting to put the code together. I'll need to look at the source code for the app to get them.
The first issue is with the metadata. Dublin Core is not the only metadata used in SWORD2; there is some basic information which is part of the Atom profile: title, author, last modification date, and so on, as seen in the example here. The documentation gives no information about how to set this, and in fact I can't find anything useful in the source code (the string "updated", which is the element name for the last modification date, does not appear anywhere in the client code). I'm not particularly familiar with Atom, so it is possible that these are optional. I'll ignore this for the moment and see what happens. I'm also going to assume that in order to give multiple entries, I just repeatedly add a term: this needs to be supported for DC contributor. I think this should work, but I haven't actually gone through the apache Abdera library, which the swordapp code uses, to check this.
Just at this point Andrew Martin put up a useful blog post which details his journey to working with the SWORD2 java library. He's not doing exactly the same thing, though we have already been in contact. While I need to go deeper into the coding, his post is probably a very useful resource for anyone reading this one.
The next thing to sort out is creating a ZIP file (virtually) to contain the items selected for archiving. I've not done this before, and the ZIP creation stuff I can find online, as well as in my ancient Java book, concentrates on making a ZIP from files (this looks pretty useful for that, and is likely to form the framework I'll use) rather than from the Sakai content platform, where the content may not even be stored in a filesystem. So I need to work out how to get the content out of the files as a Java input stream. I'll start by looking through the ResourcesAction.java code, and then move on to other files in the same directory if I can't find anything. All the input streams in ResourcesAction are for making changes to content rather than reading it - makes sense, as reading is not an action which affects the resource. But this code from FilePickerAction.java (lines 1710-13) makes it look very simple:
I just need to work back through the context to be sure that this code is doing what it appears to be doing. Although it doesn't appear to be (because it's in a method for updating the resource), this is what it is in fact doing, as I eventually discover when I find the relevant javadocs (ContentHostingService, BaseContentService, and ContentResource - not from the current release, though). To re-use this code, the ContentResource class needs to be loaded, which it already is, and the content service needs to be set up (outside the loop which runs through the selected items):
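The snippet itself isn't reproduced here; pieced together from those javadocs, the setup presumably looks something like this (a sketch - the service-locator lookup is my assumption, and the lookup used in the real code may well differ):

```java
// obtain the content hosting service once, outside the per-item loop
// (ComponentManager is Sakai's service locator; this exact call is my guess)
ContentHostingService contentService =
        (ContentHostingService) ComponentManager.get(ContentHostingService.class);
```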
The first problem, then, is that what I have is a ListItem object, when what I want is the itemID (which is a String); this is simple, as id is a property of the ListItem object, so I can just get it. I'll also need to protect against the itemid being null, which I don't think should happen. I'm not quite sure what the correct thing would be to do if it does, so I'll just log a warning if that happens. So the code I add is (lines 6741-4):
and then in the conditional block (6752-6774),
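The actual lines aren't shown in this post; a hypothetical reconstruction of the two pieces - getting the id off the ListItem with a null guard, then streaming the resource content into a zip entry inside the conditional block - might look like this (variable names and the log message are mine; streamContent() is the ContentResource method the javadocs point at):

```java
// get the id from the ListItem, guarding against a null id (shouldn't happen)
String itemId = item.getId();
if (itemId == null) {
    logger.warn("archive: selected item has no id, skipping");
} else {
    // inside the conditional block: copy the resource content into the zip
    ContentResource resource = contentService.getResource(itemId);
    zipOut.putNextEntry(new ZipEntry(resource.getId()));
    InputStream in = resource.streamContent();
    byte[] buffer = new byte[4096];
    int len;
    while ((len = in.read(buffer)) != -1) {
        zipOut.write(buffer, 0, len);
    }
    in.close();
    zipOut.closeEntry();
}
```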
Some preparation has to happen before this, using piped streams to turn the zip output into the input for the SWORD library, calculating the MD5 hash we want for the deposit stage on the way:
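The original preparation code isn't shown in this post, but the piped-stream idea can be demonstrated with the standard library alone: a writer thread pushes zip output into a pipe, a DigestOutputStream computes the MD5 as the bytes flow through, and the reading end is what would be handed to the SWORD library. This is a self-contained sketch (the entry name and content are stand-ins; in Sakai the writer loop would stream each selected resource):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;
import java.security.DigestOutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class PipedZipSketch {

    // results: the raw zip bytes (what the SWORD client would read from the
    // pipe) and the MD5 digest computed while the bytes went past
    static byte[] zipBytes;
    static byte[] md5;

    public static void run() throws IOException, NoSuchAlgorithmException, InterruptedException {
        PipedInputStream zipIn = new PipedInputStream();          // the SWORD client reads this end
        PipedOutputStream pipeOut = new PipedOutputStream(zipIn);
        MessageDigest digest = MessageDigest.getInstance("MD5");
        // the digest stream hashes the zip bytes as they flow into the pipe
        ZipOutputStream zipOut = new ZipOutputStream(new DigestOutputStream(pipeOut, digest));

        // writer thread: stand-in for the loop over the selected resources
        Thread writer = new Thread(() -> {
            try {
                zipOut.putNextEntry(new ZipEntry("example.txt"));
                zipOut.write("some resource content".getBytes("UTF-8"));
                zipOut.closeEntry();
                zipOut.close();   // flushes the pipe and completes the digest
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        writer.start();

        // reader side: here we just collect the bytes; in the real code this
        // is where the piped input stream goes off to the SWORD2 deposit
        ByteArrayOutputStream collected = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = zipIn.read(buf)) != -1) {
            collected.write(buf, 0, n);
        }
        writer.join();

        zipBytes = collected.toByteArray();
        md5 = digest.digest();
    }

    public static void main(String[] args) throws Exception {
        run();
        System.out.println("zip is " + zipBytes.length + " bytes, md5 is " + md5.length + " bytes");
    }
}
```

The writer has to run in its own thread because piped streams block when the pipe's buffer fills: a single thread writing and then reading would deadlock on anything bigger than the buffer.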
And now we should be in a position to set up the deposit with the SWORD2 library.
It's also occurred to me that the solution to the problem of multiple archives is to embed them in the confirmation web form - the user selects the archive there from a drop-down list, and the script reads the URL used for deposit. So the URL to use is then just a form parameter. Except - the sword client readme file suggests that a collection object (derived from the collection description in a service document) is needed for deposit, so I need to check in the source code to see if there's a method which takes a deposit URL as an alternative. Turns out that there is, so I'll use that. So we have (ignoring a whole load of exceptions which will surely need to be caught, for the moment):
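The code itself isn't reproduced in this post; for the record, my understanding of the call is roughly the following - take it as a sketch against the swordapp JavaClient2.0 API as I read its README, with exception handling left out and all the variable names mine:

```java
SWORDClient client = new SWORDClient();
AuthCredentials auth = new AuthCredentials(username, password);  // from the confirmation form

Deposit deposit = new Deposit();
deposit.setFilename("archive.zip");
deposit.setMimeType("application/zip");
deposit.setPackaging(UriRegistry.PACKAGE_SIMPLE_ZIP);
deposit.setMd5(md5Hex);              // the hash computed while zipping
deposit.setInputStream(zipIn);       // the piped zip stream

// depositUrl is the form parameter described above
DepositReceipt receipt = client.deposit(depositUrl, deposit, auth);
```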
The next issue is what to do with the receipt. And how to alert the user to the success or failure of the deposit. The confirmation web page should still be available (especially if it has an indication that the status of the archiving will be displayed there). So it be displayed there, dynamically. So for the moment, I'll just hold on to the receipt and revisit this code when I've written the appropriate Velocity file.
There's just one final bit of code to add to this file, which is to add a call to the confirm delete method as a new state resolution function (lines 7083-6):
Then I can start work on the Velocity file. I should say at this point that I don't expect this code to compile without error. I'm absolutely certain there will be exceptions which haven't been caught, and I may well have confused some of the variable names in the course of this. But I want to get on to the next stage before coming back here.
Just at this point Andrew Martin put up a useful blog post which details his journey to working with the SWORD2 java library. He's not doing exactly the same thing, though we have already been in contact. While I need to go deeper into the coding, his post is probably a very useful resource for anyone reading this one.
The next thing to sort out is creating a ZIP file (virtually) to contain the items selected for archiving. I've not done this before, and the ZIP creation stuff I can find online, as well as in my ancient Java book, concentrates on making a ZIP from files (this looks pretty useful for that, and is likely to form the framework I'll use) rather than from the Sakai content platform, where the content may not even be stored in a filesystem. So I need to work out how to get the content out of the files as a Java input stream. I'll start by looking through the ResourcesAction.java code, and then move on to other files in the same directory if I can't find anything. All the input streams in ResourcesAction are for making changes to content rather than reading it - makes sense, as reading is not an action which affects the resource. But this code from FilePickerAction.java (lines 1710-13) makes it look very simple:
InputStream contentStream = resource.streamContent();
String contentType = resource.getContentType();
String filename = Validator.getFileName(itemId);
String resourceId = Validator.escapeResourceName(filename);
I just need to work back through the context to be sure that this code is doing what it appears to be doing. Although it doesn't appear to be (because it's in a method for updating the resource), this is what it is in fact doing, as I eventually discover when I find the relevant javadocs (ContentHostingService, BaseContentService, and ContentResource - not from the current release, though). To re-use this code, the ContentResource class needs to be loaded, which it already is, and the content service needs to be set up (outside the loop which runs through the selected items):
ContentHostingService contentService = (ContentHostingService) toolSession.getAttribute(STATE_CONTENT_SERVICE);
The first problem, then, is that what I have is a ListItem object, when what I want is the itemID (which is a String); this is simple, as id is a property of the ListItem object, so I can just get it. I'll also need to protect against the itemid being null, which I don't think should happen. I'm not quite sure what the correct thing would be to do if it does, so I'll just log a warning if that happens. So the code I add is (lines 6741-4):
String itemid = item.id;
ContentResource resource = null;
if (itemid != null) {
and then in the conditional block (6752-6774),
resource = contentService.getResource(itemid);
InputStream contentStream = resource.streamContent();
byte[] buf = new byte[1024];
// get filename and add to zip entry
String fileName = item.getName();
if (fileName != null) {
    zip.putNextEntry(new ZipEntry(fileName));
} else {
    zip.putNextEntry(new ZipEntry("Unnamed resource with ID " + itemid));
}
int len;
while ((len = contentStream.read(buf)) > 0) {
    zip.write(buf, 0, len);
}
zip.closeEntry();
contentStream.close();
Some preparation has to happen before this, using piped streams to turn the zip output into the input for the SWORD library, calculating the MD5 hash we want for the deposit stage on the way:
MessageDigest md = MessageDigest.getInstance("MD5");
DigestInputStream dpins = new DigestInputStream(pins, md);
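The post only shows the digest wiring, but the whole piped-stream arrangement can be sketched self-contained with standard-library classes. This is an illustration, not the actual tool code: the entry name and content are placeholders, and writing a single ZIP entry stands in for the loop over selected Sakai items; the names pouts/pins/dpins mirror the snippets above.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class PipedZipDigest {

    static int totalRead = 0;

    static byte[] zipAndDigest() throws Exception {
        // The ZIP stream writes into pouts; pins is the reading end.
        PipedOutputStream pouts = new PipedOutputStream();
        PipedInputStream pins = new PipedInputStream(pouts);

        // Piped streams deadlock if one thread both writes and reads,
        // so the ZIP is written from a separate thread.
        Thread writer = new Thread(() -> {
            try (ZipOutputStream zip = new ZipOutputStream(pouts)) {
                zip.putNextEntry(new ZipEntry("hello.txt")); // placeholder entry
                zip.write("Hello, archive".getBytes("UTF-8"));
                zip.closeEntry();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // Reading through a DigestInputStream accumulates the MD5 as a
        // side effect, so the hash is ready once the stream is drained.
        MessageDigest md = MessageDigest.getInstance("MD5");
        DigestInputStream dpins = new DigestInputStream(pins, md);
        byte[] buf = new byte[1024];
        int len;
        while ((len = dpins.read(buf)) > 0) {
            totalRead += len;
        }
        writer.join();
        dpins.close();
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] fileMD5 = zipAndDigest();
        System.out.println("read " + totalRead + " bytes; MD5 is " + fileMD5.length + " bytes");
    }
}
```

In the real code the consumer of dpins would be the SWORD client rather than a local read loop, but the deadlock-avoiding separate writer thread is needed either way.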
And now we should be in a position to set up the deposit with the SWORD2 library.
It's also occurred to me that the solution to the problem of multiple archives is to embed them in the confirmation web form - the user selects the archive there from a drop down list, and the script reaps the URL used for deposit. So the URL to use is then just a form parameter. Except - the SWORD client README file suggests that a collection object (derived from the collection description in a service document) is needed for deposit, so I need to check in the source code to see if there's a method which takes a deposit URL as an alternative. Turns out that there is, so I'll use that. So we have (ignoring a whole load of exceptions which will surely need to be caught, for the moment):
// Set up authentication
AuthCredentials auth = new AuthCredentials(params.getString("userid"), params.getString("password"));
// Make deposit and (hopefully) get back a receipt
Deposit deposit = new org.swordapp.client.Deposit();
deposit.setFile(dpins);
dpins.close();
pins.close();
pouts.close();
byte[] fileMD5 = md.digest();
deposit.setMimeType("application/zip");
deposit.setFilename(params.getString("title") + ".zip");
deposit.setPackaging(UriRegistry.PACKAGE_SIMPLE_ZIP);
deposit.setMd5(fileMD5);
deposit.setEntryPart(ep);
deposit.setInProgress(true);
// And now make the deposit
DepositReceipt receipt = client.deposit(params.getString("depositurl"), deposit, auth);
The next issue is what to do with the receipt, and how to alert the user to the success or failure of the deposit. The confirmation web page should still be available (especially if it has an indication that the status of the archiving will be displayed there), so the status should be displayed there, dynamically. For the moment, I'll just hold on to the receipt and revisit this code when I've written the appropriate Velocity file.
There's just one final bit of code to add to this file, which is to add a call to the confirm archive method as a new state resolution function (lines 7083-6):
else if (ResourceToolAction.ARCHIVE.equals(actionId)) {
    doArchiveconfirm(data);
}
Then I can start work on the Velocity file. I should say at this point that I don't expect this code to compile without error. I'm absolutely certain there will be exceptions which haven't been caught, and I may well have confused some of the variable names in the course of this. But I want to get on to the next stage before coming back here.
Wednesday 3 October 2012
Sakai Development: Post Eight
This post follows straight on from the last, especially as I've missed a constant setting which goes with the ones listed at its end, a new line 857:
ACTIONS_ON_MULTIPLE_ITEMS.add(ActionType.ARCHIVE);
Looking at the next section I might need to change, where permissions are sorted out at lines 1721-1801, I don't think anything needs to be altered, because this draws in constant values which I have already altered. However, things do need to be changed where permissions are set for items. This is done with new lines 2203 and 2209:
boolean canArchive = ContentHostingService.allowArchiveResource(id);
item.setCanArchive(canArchive);
Of course, there will need to be corresponding alterations in the ContentHostingService class. I just need to find it - there is no ContentHostingService.java file in the source code, and searching online doesn't find anything useful. It's time to email those in the know, but meanwhile I can carry on with other bits of code. As it happens, I got a really quick answer - before I had a chance to do much else - which tells me that I'm on the right lines:
"Yeah this one is in the kernel. The CHS Api lives there.
Sounds like what you are doing is correct. You'll need to add a method there that, most likely, checks some permission and returns true or false. If you want it set via a permission of course. You could just make it always available if that is what you wanted to, then you may not need to mod the kernel.
Then you could do item.canaddarchive(true)"
Thinking a bit more about this, I feel that perhaps I don't need a new kernel method, but I can be more sophisticated than making the service always available. To archive an item is basically making a copy, so what I really should do is to tie the archiving permission to the copy permission. So instead of the lines 2203 and 2209 above, I'll just check the canRead permission which is already there. (It strikes me, though, that in this world of DRM, read and copy are not necessarily the same thing, but never mind.) At least, I'm now at the end of the setting of permission booleans - the next bit of code should actually do something. (And no, I still have no idea why none of the kernel appears to be in the source code I downloaded, but this difficulty is one of the factors in my decision.)
What I want to work on now is to build the three archiving intermediate pages, which should be almost precisely like existing pages. The code for this starts at line 3961, which is where the delete confirmation page building code begins. To remind you, the three intermediate pages are the confirmation, collection metadata entry, and finish; the first and third will basically be copies of the equivalent delete pages and will therefore be built much in the same way. (I expected that most of the work for this little project would consist of copying and then amending existing code, nothing terribly difficult, and this is exactly what is being done here.) Since the code being copied is quite long, I'm not going to quote it all here. The two routines are very similar, so it looks as though copy and paste has already been used. I don't think I have time to look into the lines which are commented "//%%%% FIXME". I'll also need a third copy for MODE_ADD_ARCHIVE_METADATA. The new code becomes lines 4063 to 4207.
The code to call one of these new routines comes next, also a simple copy and modification of existing code. This comes in the context for the main page being re-displayed, as it will be on MODE_ARCHIVE_FINISH: lines 4823-7:
else if (mode.equals(MODE_ARCHIVE_FINISH))
{
    // build the context for the basic step of archive confirm page
    template = buildArchiveFinishContext(portlet, context, data, state);
}
This gives a total of about 200 lines of boilerplate code copied, modified slightly, and inserted back into the class. From this point onwards, we start getting into more exciting development (though there is still a little bit more to add, to call the code to create the context for the metadata form and for the confirmation page - which, it has just occurred to me, could sensibly be the same thing...I may want to revisit some of the above changes to do this, but for the moment I'll just leave things as they are, as it shouldn't do any harm to create constants but not use them).
The model I have used so far for these changes is the existing code for item deletion. Now, there is a slight problem: the actual deletion code appears to delete items one at a time when more than one is selected for deletion, and we can't do this for the archiving; we want one SWORD2 transaction whether there are one or more items. We now need to copy and modify two routines - doCopy, and doDeleteconfirm - which set the application state to doCopy or Deleteconfirm respectively. The first will set the doArchive state, and the second will set Archiveconfirm state, in both cases processing the list of items selected for archiving into a vector format, and these states should then be processed to make the actual archiving or the display of the confirmation page happen. This gives another hundred or so lines of code modified which doesn't really do much.
The next place where something needs to be added is now line 6288. The doDispatchItem method, of which this switch block forms part, is, like most of the rest of this class, sparsely commented, but appears to be the part which determines what to do - hence the switch block, with cases corresponding to the different actions. The question is whether the ARCHIVE case should be like the COPY case or the DELETE case. It's hard to tell without more documentation. The COPY case basically adds the ID of a selected item to a list of items to copy, while the DELETE case actually calls deleteItem - a method we have already decided isn't appropriate to copy directly (as we don't want to break down the archiving of a collection of items into a sequence of archiving actions on the individual items). So the ARCHIVE case needs to be something in between, something like this (I hope):
case ARCHIVE:
    List items_to_be_archived = new ArrayList();
    if (selectedItemId != null)
    {
        items_to_be_archived.add(selectedItemId);
    }
    state.removeAttribute(STATE_ITEMS_TO_BE_MOVED);
    state.setAttribute(STATE_ITEMS_TO_BE_ARCHIVED, items_to_be_archived);
    if (state.getAttribute(STATE_MESSAGE) == null)
    {
        // need new context
        state.setAttribute(STATE_MODE, MODE_ARCHIVE_FINISH);
    }
    break;
which forms the new lines 6288 to 6301.
Monday 24 September 2012
Sakai Development: Post Seven
The plan
How exactly do I expect the SWORD2 application to work? From the point of view of a Sakai user, I think that the first version should do the following:
- add a new action to the drop down menu which appears in a resources page, which would have a configurable name but basically be "Submit to archive";
- when files and/or directories are selected, this action may display a metadata form which would need to be completed to continue to submit;
- the items chosen are submitted using SWORD2.
So the development tasks consist of:
- work out how to add a new action and give it configuration information;
- work out how to pick up pointers to the selected items and display an error if none are selected (and a list of what has been selected and a confirmation that the list is correct);
- work out how to create a new form for the metadata and configure its contents and when it is displayed;
- integrate with the SWORD2 java library for deposit using configured information about the archive (and obtain any other information about the submitted items which is required by SWORD2 for deposit);
- install patch on server as well as on laptop;
- test against at least EPrints and DSpace.
Figuring Things Out and the First Code Updates
The first thing to do is to figure out where the specific action drop down is actually set up. The easiest way I can think of to do this is to carry out a very simple search of the source code. There are 15,826 files in the source code, according to eclipse, so there may well be lots of hits for words likely to be common in the source code of any application, such as "action". Eclipse does provide a comprehensive interface to search the contents of the files, but it doesn't appear to do anything really useful for a large number like this, such as indexing them. (It's probably possible to set this up, maybe with an eclipse interface to a tool such as apache's solr, but I don't know how and I'm not sure it would be worth it for a small task like this.) Running a search goes through about 8,000 files and then creates a pop-up which tells me "Problem occurred with search", with a button "Show details" which freezes eclipse when pressed. So this is less useful than it appears to be.

Linux has a large number of command line tools which can be used to search files. There are file indexers around, but I don't run one on my desktop as the indexing process is usually slow and memory intensive. Other tools are rather complicated to use - the find command, for example, has a syntax I can never remember how to use properly, even though I first came across it in 1990 in a Unix version close to that still available in Linux in 2012, and it appears in several shell scripts I have written over the years. I know the name of the graphics file used to display the button which expands into the drop-down menu I want to change (because I've found it in the web page source code): icon-dropdn.gif. So I can search for that in the source code, as it's likely to appear near the place I want to work on. (I actually did this before, when trying to get the resources tool to work properly.) Then a simple series of greps finds the file I want in tomcat:
% cd $CATALINA_HOME/webapps
% grep "icon-dropdn.gif" */vm/*
% grep "icon-dropdn.gif" */vm/*/*
sakai-content-tool/vm/content/sakai_filepicker_select.vm: [img tag for the gen.add button]
sakai-content-tool/vm/content/sakai_filepicker_select.vm: [img tag for the button.add button]
sakai-content-tool/vm/content/sakai_resources_list.vm: [img tag for the button.add button]
sakai-content-tool/vm/content/sakai_resources_list.vm: [img tag for the button.actions button]
sakai-content-tool/vm/content/sakai_resources_list.vm: [img tag for the button.add button]
sakai-content-tool/vm/content/sakai_resources_list.vm: [img tag for the button.actions button]
(I searched the vm directories because they contain the files which generate the interface for Sakai; I've removed some of the HTML markup from the output lines as a quick way to get the results to display properly in blogger.) The results make it clear that the file we want ends up as sakai-content-tool/vm/content/sakai_resources_list.vm. But what is the file which is used to generate this one in the source code? There is no sakai-content-tool directory there. For this, a very simple find command will do the job perfectly:
% find work/sakai-src -name "sakai_resources_list.vm"
./work/sakai-src/content/content-tool/tool/target/sakai-content-tool-2.8-SNAPSHOT/vm/content/sakai_resources_list.vm
./work/sakai-src/content/content-tool/tool/src/webapp/vm/content/sakai_resources_list.vm
and there we have it: work/sakai-src/content/content-tool/tool/src/webapp/vm/content/sakai_resources_list.vm. Now to actually look at this file in eclipse - but it turns out that the listing in eclipse doesn't follow the directory/file structure of the source code itself and doesn't contain the vm file at all (I should perhaps have expected this, but it's about five years since I last used the IDE). And even on a fairly new laptop bought specifically to have as much RAM as financially possible (3 GB), it's using 50% of the available memory just sitting there open but doing nothing. For reading the source code files, it's definitely going to be easier to use a normal text editor, of which my choice tends to be gedit. So that's where I now open the file. (Note: I will want to create a patch eventually, so I'm going to need to have two copies of the source code, as well as the one in the Eclipse workspace so that the changes can be extracted.)
Now, vm files are not a type of HTML generating script I've used before (the older JSPs being more familiar in my Java development experience), so I want to do a little work understanding the format before going on with this. A quick search on filesuffix.com tells me that the program used to parse them is Apache Velocity, and on that site is a user guide.
While the graphic was useful to find the file, and indeed the part of the file which generates the drop down list which I want to alter, it doesn't do any work of itself. The graphic appears several times on each row of the table containing the directories and files managed by the resource tool, with the various menus being different types of actions, such as ones which are only applicable to folders (e.g. create new folders) and some to both files and folders (copy/paste, etc). Two of the drop down lists apply to single items at a time (Add, Actions) and appear next to each item in the list, rather than letting the user select a group of items and then carry out an action on all of them at once. However, what I want is more an action which is applied to every selected item in the table, whether it is a file or a folder, and these appear at the top of the table, greyed out unless at least one item is selected. (This set of buttons consists of text, rather than a drop down list.) The buttons are "Remove", "Move" and "Copy", by default.
A bit of fiddling with "Inspect Element" in the web browser (a really useful feature for understanding complex web pages), and I see that the relevant lines of code are those near the HTML class definition "navIntraToolLink viewNav" (line 178), which displays a button for each member of an array called listActions. This is not itself set in this vm file, but comes from the relevant Java class which is listed at the top of the file in a useful comment: "sakai_resources_list.vm, use with org.sakaiproject.content.tool.ResourcesAction.java". So that is the next file to look at, and here it is possible to see the array being created and populated (lines 4315-4336). It's a little difficult to work out what these lines do without a significant amount of context, as they basically turn one type of list into another, and the file is really too big to be readily understood without a great deal more commentary (I've removed much of the spacing to make the code display better in a fairly narrow style of blog post):
ContentCollection collection = ContentHostingService.getCollection(collectionId);
ListItem item = ListItem.getListItem(collection, null, registry, need_to_expand_all, expandedCollections, items_to_be_moved, items_to_be_copied, 0, userSelectedSort, false, null);
Map listActions = new HashMap();
List items = item.convert2list();
for (ListItem lItem : items)
{
    if (lItem.hasMultipleItemActions())
    {
        for (String listActionId : lItem.getMultipleItemActions().keySet())
        {
            ServiceLevelAction listAction = registry.getMultiItemAction(listActionId);
            if (listAction != null)
            {
                listActions.put(listActionId, listAction);
            }
        }
    }
}
This indicates that I need to understand the ListItem class to work out how this is populated, because this is what is used to create the initial list which is then manipulated. This class is defined in the sakaiproject/content/tool/ListItem.java file, another 4000+ lines of code with fairly minimal commenting. I worked through the relevant code (the getListItem method starts on line 151) adding temporary comments to ensure I remembered what I had already worked out. The first important bit is to work out what permissions the user has over the items in the collection, which in fact has an extremely useful comment:
* calculate permissions for this entity. If its access mode is
* GROUPED, we need to calculate permissions based on current user's
* role in group. Otherwise, we inherit from containing collection
* and check to see if additional permissions are set on this entity
* that were't set on containing collection...
and it's a sensible way to do it, dynamically changing the list of actions depending on what the user is allowed to do, but isn't something that I'd thought of. Stupid of me, especially as access control is my main field of expertise... So the place to look is not here at all, but the code which lists the permissions available to be set. The form which does this looks like this in a default setup (presumably the creation of custom groups of users would add new columns):
So to find the vm file which generates the table, I need to find a fairly distinctive looking piece of the HTML source for this table. The name attribute of the form tag, "permissionForm" is an obvious one, and finds me the authz/authz-tool/tool/src/webapp/vm/helper/chef_permissions.vm VM file. The line which generates the rows of the table, which correspond to actions and which we want to add to, is line 65:
#foreach($lock in $abilities)
so next I need to find out where the list $abilities is set. Unlike the VM I looked at earlier, this one doesn't have a useful comment indicating which java file is responsible for its processing, but I think I can assume it will be authz/authz-tool/tool/src/java/org/sakaiproject/authz/tool/PermissionsHelperAction.java. Having done so, I can confirm this from line 120:
private static final String TEMPLATE_MAIN = "helper/chef_permissions";
The abilities variable is exported to the VM at line 385; it is set between lines 368 and 382 from a list of functions (potentially filtered, which is why it takes 14 lines of code). That list is set by lines 357-363:
// in state is the list of abilities we will present
List functions = (List) state.getAttribute(STATE_ABILITIES);
if (functions == null)
{
    // get all functions prefixed with our prefix
    functions = FunctionManager.getRegisteredFunctions(prefix);
}
which will first get data from the class attribute STATE_ABILITIES, and, if this is empty, it will ask the FunctionManager for a list of registered functions. I suspect that it is the latter which is needed, as the script then goes on to write the list of functions back into STATE_ABILITIES. The function manager is defined in the class org.sakaiproject.authz.cover.FunctionManager, so that is where I need to look next. However, finding it is more problematic. There is no file in the source code named FunctionManager.java, and no file containing a class definition for FunctionManager. However, I did eventually find some documentation on the use of the FunctionManager which seems to explain how I should add new functions (it appears when searching using google for "sakai FunctionManager" but not when searching the Sakai WIKI through Confluence's own search box...). It's clearly out of date (linking to non-existent files), but I hope still helpful. I also found a document in the reference documentation section of the source code, reference/docs/architecture/sakai_security_reg.doc, which explains how the function manager works. However, what I really need is an overview of how the security in Sakai works, and I eventually found a useful collection of WIKI pages, https://confluence.sakaiproject.org/display/~markjnorton/Sakai+Security+Model and the four linked to from it.
My understanding of the Sakai security model is now this. There are four components: users, (permission) groups, functions, and objects. New groups can be created (using the Security Service); users can be placed in groups (using a GroupProvider); functions can be associated with objects (using the function manager); and users or groups can be granted permission to carry out a function (using a Permission Helper). Resolution of a user's permission to carry out a function on an object is done by the AuthzGroup service.
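To check my understanding of that resolution logic, here is a toy model - entirely my own sketch, with hypothetical names, not Sakai code: users belong to groups, groups are granted functions on objects, and a user may carry out a function on an object if any of their groups has been granted it.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A toy model of the four-part security model: users, groups,
// functions, and objects. All names here are hypothetical.
public class SecurityModelSketch {
    // group name -> users in that group
    private final Map<String, Set<String>> groupMembers = new HashMap<>();
    // object id -> (group name -> functions granted on that object)
    private final Map<String, Map<String, Set<String>>> grants = new HashMap<>();

    public void addUserToGroup(String user, String group) {
        groupMembers.computeIfAbsent(group, g -> new HashSet<>()).add(user);
    }

    public void grant(String group, String function, String object) {
        grants.computeIfAbsent(object, o -> new HashMap<>())
              .computeIfAbsent(group, g -> new HashSet<>())
              .add(function);
    }

    // Resolution: allowed if any group containing the user has been
    // granted the function on the object.
    public boolean isAllowed(String user, String function, String object) {
        Map<String, Set<String>> byGroup =
                grants.getOrDefault(object, new HashMap<>());
        for (Map.Entry<String, Set<String>> e : byGroup.entrySet()) {
            Set<String> members =
                    groupMembers.getOrDefault(e.getKey(), new HashSet<>());
            if (members.contains(user) && e.getValue().contains(function)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        SecurityModelSketch model = new SecurityModelSketch();
        model.addUserToGroup("alice", "maintainers");
        model.grant("maintainers", "content.revise", "/site/1");
        System.out.println(model.isAllowed("alice", "content.revise", "/site/1")); // true
        System.out.println(model.isAllowed("bob", "content.revise", "/site/1"));   // false
    }
}
```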
So what I need to do is to add a new function via the ListItem.java script (not the Permissions Helper as I previously thought) and work out how to add that permission to the owner of a collection of files. The function manager service documentation tells me that the preferred method to do this is by using Spring to inject the service, via an XML file; however, I quickly discover (with my old friend grep - in this case, the command "grep -R --include *.xml FunctionManager .") that there are no XML files which include the string FunctionManager in the Sakai source. So: should I be good and follow the documentation (last edited in 2010), or should I do this the way that everyone else seems to have done? In fact, it looks on close inspection as though the content manager tool doesn't do things like this at all, because it inherits its permissions from elsewhere (and I've now basically gone round a circle to ResourcesActions.java again). But this time I have more idea what I'm looking for, and can actually start making changes. I'm not sure how or indeed if I can create a configurable name for it, however.
In what follows, line numbers refer to the line number displayed in the editor as the file is changed, so once a line has been added, all those with higher numbers are one greater than they were in the original file. Don't worry: at the end of this process, I'll publish a patch file with all the changes included.
Change line 348 to have ARCHIVE as a member of the list of actions:
CREATE, DELETE, READ, REVISE, SITE_UPDATE, ARCHIVE

Add a new line 415, similar to the other actions here:

public static final List CONTENT_ARCHIVE_ACTIONS = new ArrayList();
There probably need to be some constants set in this section to govern the behaviour of the archiving process, but at the moment I'm not entirely sure what they should be. Perhaps one to indicate the requirement for metadata, and another to give the status of the archiving process (similar to line 510, "private static final String MODE_DELETE_FINISH = "deleteFinish";" - if that is indeed what this constant indicates!). I'll have to revisit this. I could really do with finding proper javadocs for this part of the code, but Sakai is exceptionally unhelpful here. For example, https://confluence.sakaiproject.org/display/DOC/Sakai+CLE+2.8+Release+Notes lists some sources for javadocs, but there are separate javadoc locations for each of the projects which make up Sakai, and chasing links from this list ends up at 404 Not Found errors in many cases. The Sakai project is desperately in need of a tidier documentation collection for developers, but creating one will be an enormous job. Looking at the way MODE_DELETE_FINISH is used later on, it sets up what is displayed after the deletion has occurred, and I need some sort of equivalent, a message indicating the archive submission has been made, as well as a similar confirmation message. So I add new lines 512 and 514 (with a blank line between them):
private static final String MODE_ARCHIVE_FINISH = "archiveFinish";

private static final String MODE_ARCHIVE_CONFIRM = "archiveConfirm";

There is already a "MODE_REVISE_METADATA" constant, which appears to make Sakai display a metadata form, though I presume that this is for a single resource at a time rather than for a collection. So I'm going to want to have a mode for adding archive metadata, which forms a new line 531:
protected static final String MODE_ADD_ARCHIVE_METADATA = "add_archive_metadata";
Further down, there are more constants to add, with comments similar to those already there (lines 605-11):
/** The name of the state attribute containing a list of ListItem objects corresponding to resources selected for submission to the archive */
private static final String STATE_ARCHIVE_ITEMS = PREFIX + REQUEST + "archive_items";

/** The name of the state attribute containing a list of ListItem objects corresponding to nonempty folders selected for submission to the archive */
private static final String STATE_ARCHIVE_ITEMS_NOT_EMPTY = PREFIX + REQUEST + "archive_items_not_empty";

protected static final String STATE_ARCHIVE_SET = PREFIX + REQUEST + "archive_set";
This is proving quite complicated. I think that, if I had more time, I'd probably want to build a generic method for adding new actions, by configuring the action name and a Java class (implementing some interface, say) to carry it out. The difficulty with that approach would presumably be how to handle actions which need to take the control away from the current page, as happens with deletion confirmation and with the archive metadata I'll need.
Next, at line 785, I set the VM templates used to handle the deposit confirmation, completion, and the metadata form:
private static final String TEMPLATE_ARCHIVE_FINISH = "content/chef_resources_archiveFinish";
private static final String TEMPLATE_ARCHIVE_CONFIRM = "content/chef_resources_archiveConfirm";
private static final String TEMPLATE_ARCHIVE_METADATA = "content/sakai_resources_archiveMetadata";
The need to set these in the code rather than making them configurable seems rather poor design to me, but never mind.
Still more constants need to be added. Archiving should make no change to the resources themselves, so it is an action which should be in the same category as copy. So at line 816, I add:
CONTENT_READ_ACTIONS.add(ActionType.ARCHIVE);
and at 836:
ACTIONS_ON_FOLDERS.add(ActionType.ARCHIVE);
and at 850:
ACTIONS_ON_RESOURCES.add(ActionType.ARCHIVE);
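Putting those three additions together, the pattern (as I read it, much simplified) looks like this self-contained sketch - the list and enum names mirror what I see in ResourcesAction, but the surrounding scaffolding is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// A much-simplified, hypothetical sketch of the action-category pattern:
// actions are enum members, and static lists group them into categories.
// Adding ARCHIVE means adding it to each relevant category list.
public class ActionListsSketch {
    enum ActionType { CREATE, DELETE, READ, REVISE, SITE_UPDATE, ARCHIVE }

    static final List<ActionType> CONTENT_READ_ACTIONS = new ArrayList<>();
    static final List<ActionType> ACTIONS_ON_FOLDERS = new ArrayList<>();
    static final List<ActionType> ACTIONS_ON_RESOURCES = new ArrayList<>();

    static {
        // Archiving changes nothing in the resources themselves, so it
        // joins the read-only category, applying to folders and resources.
        CONTENT_READ_ACTIONS.add(ActionType.ARCHIVE);
        ACTIONS_ON_FOLDERS.add(ActionType.ARCHIVE);
        ACTIONS_ON_RESOURCES.add(ActionType.ARCHIVE);
    }

    public static void main(String[] args) {
        System.out.println(CONTENT_READ_ACTIONS.contains(ActionType.ARCHIVE)); // true
    }
}
```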
There's a lot more to do, but that will have to be in the next post, I think.
Wednesday 5 September 2012
Sakai Development: Post Six
Get Sakai source code and set up Eclipse IDE
There is a really useful guide to setting up a development environment on the Sakai WIKI. I found it by accident when searching for a solution to one of the problems I encountered: if only it had been linked from the Sakai website's "Getting Started/Technical Contributors" page, I think I would have saved a lot of time and effort over the last few weeks.
To download the source from the repository (rather than a bundled release), use subversion to add the code to the development environment (as opposed to a server to use for testing, which is what most of the preceding work was about). A new enough version of subversion is already installed on Debian:
$ svn --version
svn, version 1.6.12 (r955767)
   compiled May 31 2011, 16:12:12
(etc)
so now download the code this way:
$ svn checkout https://source.sakaiproject.org/svn/sakai/branches/sakai-2.8.x/ sakai-src
This takes a while. If you need to use a web proxy, it should be set up in the global section of /etc/subversion/servers (uncomment the existing http-proxy lines and add appropriate values). Note that HTTP proxies may not enable every subversion function, though this checkout should be fine.
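For reference, the http-proxy settings in /etc/subversion/servers live in the [global] section; with hypothetical proxy values, the uncommented lines look something like this:

```
[global]
http-proxy-host = proxy.example.com
http-proxy-port = 8080
```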
I have a chequered history with IDEs: partly because they are memory-intensive applications, and whenever I've needed one the computer I was using was near the end of its useful life, so there was never enough memory to run it properly; and partly because I've never really used one intensively, so my programming habits have remained the way they were before IDEs became popular tools. But this time there should be no real excuse, as the computer I'm using is just 16 months old, and I deliberately bought one with as much memory as I could afford. Following the instructions at https://confluence.sakaiproject.org/display/BOOT/Install+Eclipse+WTP, I downloaded Eclipse (installation basically just consists of unzipping the packaged archive), then installed Webtools, subclipse (for which http://subclipse.tigris.org/update_1.8.x needs to be added to the download sites), and the Maven Eclipse plugin (ditto http://download.eclipse.org/technology/m2e/releases) through the Eclipse updater, though the main component was already installed in the base package. Then I set Eclipse to ignore "bin" and "target" directories when running svn, from the Window-Preferences-Team-Ignored Resources menu.
Some settings need to be changed. Eclipse doesn't run with memory settings high enough for Sakai (even given what I said about IDEs in the last paragraph). So edit eclipse/eclipse.ini, upgrade -Xms and -Xmx to 128m and 1024m respectively, and add "-XX:+UseParallelGC" as a new line.
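After that edit, the JVM-argument section at the end of eclipse/eclipse.ini should look something like this (other lines omitted; the exact surrounding options vary between Eclipse releases):

```
-vmargs
-XX:+UseParallelGC
-Xms128m
-Xmx1024m
```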
According to the instructions (but don't do this until you have read the rest of this paragraph!!), to prepare this for use in Eclipse, cd to the sakai-src directory and run:
$ mvn eclipse:clean
$ mvn eclipse:eclipse
which removes any existing Eclipse files in the source code (presumably there shouldn't be any anyway) and then creates new ones - this takes a while and has several failed dependencies which have to be resolved manually (the error message helpfully tells you how - the problem is basically that some library files are not found where expected). Next, create a new workspace for Sakai in Eclipse, using the File-Workspace-Other menu to enter a new workspace name (I used "ws-sakai"); slightly disconcertingly (even when warned this will happen), Eclipse immediately shuts down and restarts when you click OK. Then, add the source code to this workspace: switch to the Java perspective (Window-Open Perspective-Java), turn off automatic builds (checkbox in the Project menu), and import the Sakai source code (File-Import-General-Existing Projects into Workspace, browsing to the Sakai source code directory). This fails, because Eclipse thinks that the source code directory is not a project. This issue (and the missing dependencies) had already been raised on the mailing list before I got round to doing so, and the response from Steve again is not to do things this way: you only need to have the code in Eclipse when you want to modify it, and even then only the specific project which is to be modified. Missing dependencies are then solved by adding the shared library directory of the tomcat installation to the classpath in Eclipse. With a large project like Sakai, this approach makes sense, but it really needs to be spelt out in the documentation! What's a bit annoying about this is that I now need to install tomcat on my laptop, not something I really wanted to do - I was hoping to write code on the laptop and test it on the server.
So I carry out the steps for an actual Sakai install which I haven't already done for this: setting up tomcat, creating the mysql DB, adding the mysql connector to tomcat, editing sakai.properties, and compiling with mvn. Of course, this gives a new error:
[INFO] Failed to resolve artifact.

Missing:
----------
1) com.sun:tools:jar:1.5.0

  Try downloading the file manually from the project website.

  Then, install it using the command:
      mvn install:install-file -DgroupId=com.sun -DartifactId=tools -Dversion=1.5.0 -Dpackaging=jar -Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there:
      mvn deploy:deploy-file -DgroupId=com.sun -DartifactId=tools -Dversion=1.5.0 -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency:
      1) org.sakaiproject.kernel:sakai-component-manager:jar:1.4.0-SNAPSHOT
      2) com.sun:tools:jar:1.5.0

----------
1 required artifact is missing.

for artifact:
  org.sakaiproject.kernel:sakai-component-manager:jar:1.4.0-SNAPSHOT

from the specified remote repositories:
  default (http://repo1.maven.org/maven2),
  central (http://repo1.maven.org/maven2),
  sakai-maven (http://source.sakaiproject.org/maven2),
  sonatype-nexus-snapshots (https://oss.sonatype.org/content/repositories/snapshots)
So I found the files which would fix this problem, carried out the mvn commands suggested in the error, and tried again, only to end up with the same two files missing. Eventually I realised that I was using the wrong Java implementation for this - I have several installed on the laptop, and /usr/bin/java was pointing to openjdk. So I tried again with the (Oracle) Sun Java SDK, and this time the compilation and installation proceeded without error. However, Sakai itself was inaccessible to the web browser, which turned out to be caused by missing libraries:
SEVERE: Error configuring application listener of class org.sakaiproject.portal.charon.velocity.PortalRenderEngineContextListener
java.lang.NoClassDefFoundError: org/sakaiproject/portal/api/PortalRenderEngine
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2406)
    at java.lang.Class.getConstructor0(Class.java:2716)
    at java.lang.Class.newInstance0(Class.java:343)
    at java.lang.Class.newInstance(Class.java:325)
(and so on)
The right jar file has been created in the source tree, just not deployed to tomcat. So having found it (sakai-src/portal/portal-render-engine-impl/impl/target/sakai-portal-render-engine-impl-2.10-SNAPSHOT.jar - and checked that it contains the missing class, using jar -tf sakai-portal-render-engine-impl-2.10-SNAPSHOT.jar) and copied it to $CATALINA_HOME/shared/lib, I try again - but this does not fix the problem.
My thought at this point is that symbolic links may be the cause of a lot of the problems I had earlier, both on the laptop and on the test server. If tomcat is installed from the debian repositories, it is distributed across the filesystem in accordance with Linux standards which the tomcat project itself does not follow (libraries go under /usr/lib, configuration under /etc, log files under /var/log, and so on). This is problematic because many tomcat applications need a single directory, $CATALINA_HOME, which has all the tomcat components in it; the debian package solution is to set up a directory at /var/lib/tomcat6 which contains symbolic links to the real locations of the distributed files. If bits of Sakai are not clever enough to follow these symbolic links, it is not surprising that there are a large number of inaccessible jar files. Similarly, on the server I followed my usual practice of creating a symbolic link to the actual tomcat installation directory (this makes life much easier when upgrading tomcat, or installing new versions of Sakai, because it can all be done invisibly to the users of the site, who only notice anything when the symbolic link is recreated to point to a new tomcat installation), and it is possible that the problem with the missing image files is caused by this too. I'm not going to bother having another go at the source installation on the server, but I will try a tomcat downloaded from apache for the laptop.
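The symbolic-link practice I mean can be sketched like this (version numbers and paths are hypothetical, and the demo uses a temporary directory so it is safe to run):

```shell
# Keep a stable "tomcat" symlink pointing at the current installation,
# so upgrading only means repointing the link (demo in /tmp to be safe).
mkdir -p /tmp/symlink-demo/apache-tomcat-5.5.35
cd /tmp/symlink-demo
ln -sfn apache-tomcat-5.5.35 tomcat
readlink tomcat    # -> apache-tomcat-5.5.35
# Upgrading: unpack the new version alongside, then repoint the link.
mkdir -p apache-tomcat-7.0.30
ln -sfn apache-tomcat-7.0.30 tomcat
readlink tomcat    # -> apache-tomcat-7.0.30
```

Users of the site (or of $CATALINA_HOME) never see the real directories move; they only see the link change.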
OK, the next compilation caused some serious laptop problems - it crashed the machine. And this is without updating the code since the last compilation, which went fine. Time to stop for the day.
A re-install later (I was thinking about changing my distro anyway), with maven and tomcat reset and the source re-downloaded, I'm ready to compile. And again I end up with the dreaded missing library error:
[INFO] snapshot org.sakaiproject:sakai-announcement-help:2.8-SNAPSHOT: checking for updates from sakai-maven2-snapshots
Downloading: http://source.sakaiproject.org/maven2-snapshots/org/sakaiproject/sakai-announcement-help/2.8-SNAPSHOT/sakai-announcement-help-2.8-SNAPSHOT.jar
[INFO] Unable to find resource 'org.sakaiproject:sakai-announcement-help:jar:2.8-SNAPSHOT' in repository sakai-maven2-snapshots (http://source.sakaiproject.org/maven2-snapshots)
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Fialed to deploy to container :Unable to download the artifact from any repository
(and at the same time I notice a typo in the mvn output). Now, I have a good source for the missing files in the working installation on the server, so I can download them and install them, as per the instructions from the mvn output. And this is just the first of eight - until it starts going round in circles.
The solution for me was to change the deployment, and just run
% mvn -Pcafe sakai:deploy
which installs a cut-down version of Sakai that seems not to include any of the modules with these dependency issues. And it works!
So now back to importing the Sakai code into Eclipse. I re-ran the Maven eclipse commands above, this time without error. I created a ws-sakai workspace; as before, Eclipse restarts. The .m2/repository directory is already in the class path in Eclipse, so there is no need to add it (presumably this was done on installation of the Maven Eclipse plugin). I thought I'd try just once to import the whole of the sakai-src tree into Eclipse. This resulted in the following error, saying that the Java libraries that interface to subversion could not be found:
Failed to load JavaHL Library.
These are the errors that were encountered:
no libsvnjavahl-1 in java.library.path
no svnjavahl-1 in java.library.path
no svnjavahl in java.library.path
java.library.path = /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Usefully, eclipse (or more specifically, the subclipse plugin) suggested a site for a fix: http://subclipse.tigris.org/wiki/JavaHL (a page on the subclipse WIKI), which also explains why the libraries are not built into the subclipse distribution (too complicated, due to differing installation methods on different operating systems). Finding the instructions for redhat-based linux distributions (after rebuilding my laptop, I'm using fedora - temporarily, as it turns out to be too irritating to keep as my main desktop), I downloaded the file I needed from http://www.open.collab.net/downloads. The list of available files doesn't quite match the subclipse wiki page's description (there doesn't seem to be an rpm available any more, for instance), and CollabNet required me to register before downloading, neither of which seems ideal. However, CollabNet Subversion Edge does include the required library, as the file csvn/lib/libsvnjavahl-1.so.0 (csvn being the name of the directory that the downloaded tar file expands into). It's then probably sensible to update JAVA_OPTS so that the JVM loads the new library each time it is started, by adding the following to the user's .profile file:
JAVA_OPTS=-Djava.library.path=/lib
or amending an existing JAVA_OPTS entry, then adding JAVA_OPTS to the list of exported environment variables. The same path needs to be added to the Eclipse configuration as well, by shutting down Eclipse, editing the eclipse.ini file in the Eclipse home directory, adding the same information (no need to specify JAVA_OPTS this time), and then re-starting. What's annoying is that I now need to switch workspace, which means that Eclipse will shut down and restart - I could put the Sakai workspace in the shortcut that starts Eclipse, I suppose, if I'm going to need to keep doing this. The projects from the source directory now seem to be loaded completely - success!
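Putting those pieces together, the .profile addition looks something like this (the library path is hypothetical - substitute wherever csvn/lib was actually unpacked):

```shell
# Make the JVM load the JavaHL native library on startup.
# The path below is hypothetical; point it at the real csvn/lib location.
JAVA_OPTS=-Djava.library.path=$HOME/csvn/lib
export JAVA_OPTS
```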
All this effort required to get the source code into a tool to make it easier to work with. I seriously think I'd have been better off just downloading the source code and working with a simple text editor directly - the old school method of programming, before these time-consuming products were invented to save developer time and effort. But I now feel something of a sense of accomplishment: I have the sakai source code imported into eclipse. Now for the real work to begin...