I thought I could put together a quick blog post linking to some of the blogs which I follow, and spend a little time trying to fill gaps in the list. But while doing a little searching I found the Planet Identity blogroll (which I'd not seen before) and 360tek's list of blogs. Nothing I could post would be anything like as comprehensive. There's still scope for a post, though...
Planet Identity aggregates over 170 blogs, with about 30 on 360tek's list (most of which are also on the Planet Identity list). I presume the former is no longer updated (it's on an old Sun server, not moved to Oracle). Blogs tend to be evanescent, and it's no surprise that some of the links in the blogroll are dead, or that others have not been updated in over two years. Many of the corporate bloggers have been amalgamated into a single company blog, which suggests to me some developing maturity in the identity market - these companies are making themselves more "corporate", which unfortunately often makes the blogs less interesting. A few of the blogs listed are inaccessible to me as someone pretty much restricted, to my shame, to English language writing. My interests are also pretty much UK centred, and I'm not particularly interested in the latest marketing release from commercial vendors - mainly because getting identity management right is at least as much about good business processes as it is about technology. I'll just list some of the best of those which seem to be live (and which I didn't already know - or did know, but had just been too lazy to pick up and follow).
Where the blog author (if a single person) is also on twitter, I have listed their twitter ID as well as the blog URL.
Identity Networks: The blog of Ingrid Melve, Federation Manager for Feide - a FAM slant, and well worth reading (one of the blogs I really should have been following already)
Identity Woman: Although recent posts are taken up with the naming policies of Google+ (the spate of discussion over pseudonyms on the network being sparked off because Google would not allow an account in the name of Identity Woman), there is a lot of interesting material on this blog about user-centric identity.
Identity Happens: A great blog which is more technical than most of the others in this list. Not updated all that frequently.
Racingsnake: Robin Wilton's personal blog, focusing mainly on public policy relating to security and IAM. He also blogs at Gartner.
Ian Yip's Security and Identity Thought Stream: Good stuff here, too, with a particular interest in why technical security problems arise in the first place.
I use Akregator to read most of the blogs I follow, and I have a fair number of Identity and Security blogs in there. A lot of security bloggers talk about identity - it has become massively important in IT security now that people have started to realise just how insecure most systems become if identity management is compromised.
eFoundations: Not all IAM, but an always interesting blog from Pete Johnston and Andy Powell at Eduserv.
UK Access Management Focus (formerly JISC Access Management Team blog): Essential reading if you want to know what's happening in IAM in UK higher education. Maintained by Nicole Harris, a former LSE colleague of mine.
Kim Cameron's Identity Blog: thoughtful posting about identity (from, unsurprisingly, Kim Cameron), most recently (at the time of writing) about how disintermediation might affect identity.
Light Blue Touchpaper: The blog of the security research group at Cambridge University. They often have something interesting, or even controversial, to say (particularly if you believe in bank security). Posters here include Steven Murdoch.
Talking Identity, from Nishant Kaushik: He works for Identropy, so some content is cross posted from their corporate blog. Sensible and pretty authoritative stuff here (and, indeed, there).
Stephan's Ramblings: Another former colleague, who blogs about security generally.
Schneier on Security: Bruce Schneier, security guru (author of one of the best technical books on cryptography), describes himself as "head curmudgeon at the table". Fascinating comments, and a weekly squid-related post.
Naked Security, the Sophos blog on IT security, has timely posts on most current security stories. Perhaps less identity content than the ones above, but helps to keep up to date.
Not all essential reading comes in blog form, even in 2011, though these web sites also provide feeds.
The security tag at Slashdot: Any Slashdot story tagged as "security" can be seen here, which includes just about any IAM related discussion on the place to go for computer geekery.
Security coverage at The Register: Some may not like the jokey tone of "El Reg" (as it calls itself), but they cover a lot of interesting stories in an idiosyncratic way. The Identity stories have a subject feed here.
Electronic Frontier Foundation: Fighting for rights in the digital world, many of which have some connection to identity.
I follow some other relevant people on twitter:
Robert Garskamp, of IDentity.Next
Christopher Brown, of JISC - eResearch Programme Manager responsible for the Access & Identity Management programme
Rhys Smith, of Cardiff University and JANET, who worked on the Identity Project and the Identity Toolkit with me
John Chapman, also at JANET
RL "Bob" Morgan, University of Washington and Shibboleth (most people involved in Shibboleth seem not to tweet or blog)
I hope this list is useful - but I've probably missed some obvious and interesting blogs...
Tuesday, 25 October 2011
Saturday, 1 October 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part Three)
Recently, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post three of the three; go to post one, post two.
Augmented Reality
This particular technology has no interesting identity component that I can see - it's just going to be the usual issues of data ownership and, possibly, privacy. However, the nature of augmented reality is such that it is likely to lead to all sorts of new applications which may have privacy issues - in particular, those which allow visitors to tag the online information to add comments, or even graffiti to the augmented presence.
Collective Intelligence
In the educational context, the key point (clear in the example project links given in the report, though strangely not actually mentioned in the main text) is curation of the collected information, as learners and researchers have a need for accuracy. This in turn necessitates some form of identity management, otherwise the curation itself will need curating. This should already be well understood, as it is crucial to much open data already available, so there will be no excuse for not managing it sensibly by 2015.
Smart Objects
This is the use of unique identifiers embedded with an object which can be used (for example) to provide a linkage to a point on the Web. The current technologies for doing this are mainly RFID tags and QR codes. The sample uses discussed in the report don't seem to me to be of huge relevance for most forms of tertiary education specifically, though they will be useful for such tasks as keeping track of sample materials in labs, or the location of medical cameras and sensors in patients. Again, there seems to be nothing much new here in terms of identity.
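The linkage described here - a unique embedded identifier resolving to a point on the Web - can be sketched very simply. The Python below is a hypothetical illustration, not tied to any particular RFID or QR toolchain; the registry, function names, and example URL are all invented for the sketch.

```python
import uuid

# Hypothetical registry mapping embedded tag identifiers to web resources.
# Tag IDs here are random UUIDs; in practice they might be RFID serial
# numbers or the payload of a QR code.
registry = {}

def register_object(description, url):
    """Mint a unique identifier for a physical object and link it to a URL."""
    tag_id = str(uuid.uuid4())
    registry[tag_id] = {"description": description, "url": url}
    return tag_id

def resolve(tag_id):
    """Look up the web resource linked to a scanned tag, or None if unknown."""
    return registry.get(tag_id)

tag = register_object("Lab sample 42", "https://example.org/samples/42")
print(resolve(tag)["url"])  # https://example.org/samples/42
```

The identity question, such as it is, is exactly the one in the text: the scheme works only if identifiers are genuinely unique and the registry itself is trustworthy.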
Telepresence
The future of video conferencing is telepresence, which has had some high profile demonstrations; the name suggests the point, which is to make it appear to each participant that the others are present at a shared conference space (which may of course be a purely virtual location). As with smart objects, I have some difficulty thinking of applications for this technology specific to the education sector (surely it isn't going to enhance remote learning all that much?). I also experienced the nightmare which was UK higher education videoconferencing about a decade ago - too little bandwidth, even in the dedicated video suite that was needed, made it unusable, and less good than Skype video calls are now. And I know how difficult the Open University found it when they first made it a requirement for some of their courses for students to have access to a fairly basic standard of computer equipment. So my feeling is that the date suggested is rather optimistic: institutions will be conservative about the widespread adoption of something with high bandwidth and processing requirements but without extremely clear benefits for students and researchers, and the report underestimates the time needed for the hardware and bandwidth requirements to become easy to meet. Small scale adoption where it's useful to research is possible - the final use suggested for the technology is the exploration of locations difficult or impossible for human beings to access.
This is a technology with clear identity elements - the participants in a conference will be identified to be able to take part (in the main), and will be releasing large quantities of information about themselves to the other participants. That said, it seems unlikely that most uses will provide any new or even particularly unusual use cases for IAM.
General Conclusions
Overall, it seems to me that little in the adoption of any of these technologies is likely to provide new challenges for IAM. However, there is ample scope for developers to get the IAM components wrong, both in the tools needed to deliver the technology and in the applications built on them for education and research. This is especially important as many of those involved in delivering the applications and tools will not be experts in IAM themselves. Elementary errors in security are particularly common: while I was typing this, I was alerted to a blog post linking to a paper about insecurities in Chrome browser extensions - exactly the kind of problem which a software developer can create by not thinking through the implications of what they're doing, or by trying to re-invent the wheel because they don't know that others have done it before them.
The potential problems are compounded because the hardware being used by students and staff is going to be more and more their own rather than under the control of the institution, with all the potential for poor security as self-support becomes the norm. The multiplicity of devices and the fragmentation of the software market that it entails will make it much harder to make fixes; the days when an institution can have a "standard build" on every PC with a single supported web browser which can be updated at need from central servers are numbered. As the report concludes, "The computer is smaller, lighter, and better connected than ever before, without the need for wires or bulky peripherals. In many cases, smart phones and other mobile devices are sufficient for basic computing needs, and only specialized tasks require a keyboard, large monitor, and a mouse. Mobiles are connected to an ecosystem of applications supported by cloud computing technologies that can be downloaded and used instantly, for pennies. As the capabilities and interfaces of small computing devices improve, our ideas about when — or whether — a traditional computer is necessary are changing as well."
It is also possible that some applications built for education using these technologies could present challenges for IAM. It seems likely that no one can now predict the uses to which these technologies will be put, and I'd suspect that the most interesting uses will be ones that no one has yet invented. There may well be other technologies which will prove more revolutionary in UK tertiary education than any of the twelve listed here, but which we don't know about yet.
A common thread to many of the technologies is linking individuals or information - and sharing is obviously a potential source of privacy issues. Indeed, the tone of the report seems to suggest that within the next few years, privacy will be an outmoded idea; we will all be willing to share just about everything online. Is this true, or even likely? While naive users continue to share everything that occurs to them without caring about or understanding security settings (e.g. on Facebook), there is at least some evidence that many users are now thinking more about what they post and what it might mean for them later on, when read by a prospective employer, for example. The recent "nym wars" (usefully summarised here with discussion relevant to how privacy should be seen in the future) show that many people put a high value on privacy and the possibility of keeping a real world identity secret in particular. To the list of challenges summarised at the end of the report, I would add the investigation of the developing attitudes to privacy and how they should affect implementation and use of the technologies from this report in tertiary education.
Tuesday, 27 September 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part Two)
Recently, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post two of the three; go to post one, post three.
Game Based Learning
This is the first of the second set of technologies, due for adoption in two or three years. As far as access is concerned, there are two points to make. First, since in the tertiary education context games used for learning will presumably be connected to courses, the access policies will basically match those for existing VLE services. Indeed, if adoption is widespread, many institutions will likely wish to embed games in their VLE, if they use one. So there should be existing processes which determine who has access to a game (at several levels: to play, to access scoring and other records, and to manage it), and existing procedures to implement whatever is required for those people who should be permitted access - adding identifiers to an access control list from a student information system, for example.
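As a hedged sketch of the access levels just described (play, view records, manage), the Python below uses an invented per-game access control list keyed by identifiers drawn from a hypothetical student information system; the game name, identifiers, and role names are illustrative only, not drawn from any real system.

```python
from enum import Enum

class Role(Enum):
    PLAYER = 1    # may play the game
    TUTOR = 2     # may also access scoring and other records
    MANAGER = 3   # may also administer the game

# game name -> {identifier: Role}; in practice this would be populated
# from course enrolment data in the student information system
acl = {
    "econ101-trading-game": {
        "s1234567": Role.PLAYER,
        "t0000042": Role.TUTOR,
        "admin01": Role.MANAGER,
    }
}

def can(identifier, game, required):
    """True if the identifier holds at least the required role for the game."""
    role = acl.get(game, {}).get(identifier)
    return role is not None and role.value >= required.value

print(can("s1234567", "econ101-trading-game", Role.PLAYER))  # True
print(can("s1234567", "econ101-trading-game", Role.TUTOR))   # False
```

The point of the sketch is that nothing here is game-specific: it is exactly the kind of role-based control institutions already operate for their VLEs.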
The second point is that how access controls are enforced will depend on the game environment and its implementation. The links given in the report are not explicit about how their games are implemented, though one of them is clearly using Flash, and another is embedded in social networking and will presumably also use Flash. Other candidates for game development will include HTML5. It seems likely to me that most of these games will be browser and/or app based, and so could use existing authentication methods such as Web SSO technology.
As with the technologies in the first part of the report, there will be privacy requirements which will need to be insisted on in the development of games. In many online games, users are interested in league tables for players; will these be shareable? If games have a collaborative element, how will the information sharing required for this work - and how will it affect assessment? What about the sharing of hints and tips - another activity common in gaming communities?
Learning Analytics
Essentially, this describes the analysis of the large quantities of data generated by student activity on the Internet - including activity not necessarily considered to be part of a course, such as social network activity. Stated like this, as it is in the report, it is immediately clear that there are implications for student privacy in this work. Employees already complain about similar activities (on a smaller scale) by their employer, such as the monitoring of Facebook use (one of the issues on the US-based Privacy Rights Clearinghouse Workplace Privacy worksheet, to pick just one example of a discussion of this practice; one particular service offering to do this for employers is discussed on ReadWriteWeb).
There are other issues, too. As one of the links from the report says, "Both data mining and the use of analytics applications introduce a number of legal and ethical considerations, including privacy, security, and ownership". It then goes on to suggest that these concerns will decrease over time, due to the introduction of new tools and "as institutions are forced to cope with greater financial constraints that will make the careful targeting of available resources increasingly important". I am not sure I agree, particularly outside the US - privacy has long been much more important to legislators in Europe. It will be interesting to see how this develops in the UK, and how students over the next few years feel about it. And learning is not the only field in which analytics of this type could be used: how about research assessment in 2016? Or your annual appraisal in 2015?
New Scholarship
This topic is really about the use of non-traditional means of publishing for research (blogging, podcasting, etc.), rather than (or, more usefully, alongside) peer reviewed academic journals. This is an extension of traditional methods of exchanging ideas within the academic community (but consuming less coffee). It is actually a change which has been going on for quite a while: when I was a graduate student in the early 1990s, worldwide communication by email for special interest groups was just beginning to be embraced by members of the department.
The interest for IAM is not in the authentication side of things; shared access blogs, authenticated comments, and so on are all commonplace. Two issues immediately come to mind. The first is the question of how controlled such new media are, and how an institution can protect its reputation. The LSE, where I worked until recently, was embroiled in controversy over just this issue earlier in 2011. Of course, universities have been embarrassed by the utterances of their staff for many years; people don't need a blog in order to say controversial things. But it is becoming harder even to keep track of the places an institution needs to check to find out what those affiliated to it are saying in public. After all, a director doesn't want to discover a budding problem only when a tabloid reporter contacts them.
The second issue is one of authenticity. How is it possible to be sure that a blogger is really the person you think he or she is? Linking published journal articles to individuals is hard enough, without having to manage every staff member's personal blog or blogs - hence the ongoing Names project. This is an issue which is only going to become more difficult.
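One way to tie a post to an individual can be sketched as below. This is a deliberately simplified illustration: real schemes would use public-key signatures so that anyone can verify authorship, whereas HMAC with a shared secret is used here only to keep the example self-contained; the key and post text are invented.

```python
import hmac
import hashlib

def sign_post(author_key: bytes, post: str) -> str:
    """Produce a signature binding the post text to the holder of the key."""
    return hmac.new(author_key, post.encode(), hashlib.sha256).hexdigest()

def verify_post(author_key: bytes, post: str, signature: str) -> bool:
    """Check that the post was signed by the holder of the key, unaltered."""
    return hmac.compare_digest(sign_post(author_key, post), signature)

key = b"alice-institutional-key"   # hypothetical per-author secret
post = "New scholarship is an extension of older habits of exchange."
sig = sign_post(key, post)

print(verify_post(key, post, sig))        # True
print(verify_post(key, post + "!", sig))  # False: content was altered
```

Even this toy version shows the hard part is not the cryptography but the key management: someone has to maintain the binding between keys and real individuals, which is precisely the problem projects like Names are grappling with.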
Semantic Applications
This technology is about the intelligent use of material from online sources, usually the open Internet but possibly including protected content, to make connections between items of data automatically, without intervention from human researchers. (This is also, and perhaps better, known as Linked Data.) This may not seem to have any identity component whatsoever, but in fact there are two issues: data provenance (ownership and authenticity), as discussed above, and allowing access for the intelligent applications to closed content. The second of these is a technical issue, and should be readily soluble in the timescale suggested for the adoption of semantic technology, two or three years.
It's fairly clear that many of the promoters of Linked Data are not keen on the use of closed content, but there is no particular reason why (parts of) the data processed need to be accessible to everybody on the Internet; obviously the scope for widespread reuse will be compromised, but that may well be considered a small price to pay (see also the entry on the topic in the Structured Dynamics Linked Data FAQ).
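The automatic connection-making described above can be illustrated with a minimal sketch: statements held as (subject, predicate, object) triples, with links made by matching shared terms. All the URIs here are invented examples, and real semantic applications would of course use RDF stores and SPARQL rather than a Python set.

```python
# A toy triple store: each statement is (subject, predicate, object)
triples = {
    ("ex:paper1", "ex:author", "ex:alice"),
    ("ex:alice", "ex:memberOf", "ex:universityX"),
    ("ex:paper2", "ex:author", "ex:alice"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Automatic connection: find all papers by members of universityX,
# without any human stating that link directly
authors = {s for s, _, _ in match(p="ex:memberOf", o="ex:universityX")}
papers = {s for a in authors for s, _, _ in match(p="ex:author", o=a)}
print(sorted(papers))  # ['ex:paper1', 'ex:paper2']
```

The identity issues in the text map straight onto this: provenance asks who asserted each triple, and access control asks which triples the matching process is allowed to read.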
Thursday, 22 September 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part One)
Last week, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post one of the three; go to post two, post three.
Cloud Computing
The report describes this as an almost ubiquitous technology. The main access challenges must therefore have been solved, surely?
However, a quick glance at the project links given in the section to relevant initiatives in the sector shows that access to cloud resources is not as simple as it might be. The Bloomsbury Media Cloud requires an email to request the setting up of an account, and considers access sufficiently difficult to have created a video in its user guide section showing how to access content (and the video itself is hard to access, giving me a 404 not found error when I tried it). "Investigating and applying authentication methods" is one of the objectives of the project, but I would suggest that more work is needed. Even so, that is better than the second link, to Oxford's Flexible Services for the Support of Research, which does not exist at all. They really should have employed a more persistent URL: it has moved from a "current research" directory to a "research" directory, here. This is a far less glossy project, more technical in content, as can be seen from the Installation documentation, which describes access control in the following terms:
"Security Groups: users can define groups with access rules indicating what port can be accessed from which source IP(s). Multiple Virtual Machines (VMs) can then be instantiated and associated to a defined group. In this way, a security group works analogously to a firewall put in front of one or more VM. Crucially, such a 'firewall' is managed directly by the owner of the VM(s)".
Flexible, but a bit of a challenge for those with little knowledge of virtual machine firewall configuration. The final project link is to HEFCE's funding announcement of shared cloud services for higher education institutions.
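To make the quoted model concrete, here is a small sketch in Python of a security group as a list of (port, source network) rules acting as a firewall in front of a set of VMs. The class, group name, and addresses are invented for illustration and are not taken from the project's actual implementation.

```python
import ipaddress

class SecurityGroup:
    """A firewall-like set of rules: each allows one port from one source network."""

    def __init__(self):
        self.rules = []  # list of (port, ipaddress network)

    def allow(self, port, source_cidr):
        self.rules.append((port, ipaddress.ip_network(source_cidr)))

    def permits(self, port, source_ip):
        """True if any rule allows this port from this source address."""
        ip = ipaddress.ip_address(source_ip)
        return any(port == p and ip in net for p, net in self.rules)

research_vms = SecurityGroup()
research_vms.allow(22, "10.0.0.0/8")    # SSH from the internal network only
research_vms.allow(443, "0.0.0.0/0")    # HTTPS from anywhere

print(research_vms.permits(22, "10.1.2.3"))      # True
print(research_vms.permits(22, "198.51.100.7"))  # False
```

Simple enough in code, but as the text says, writing correct rules still assumes the VM owner understands ports, CIDR notation, and the consequences of getting either wrong.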
So there is still work to be done, mainly in terms of the user experience. Clearly this aspect matters to commercial providers of cloud-based services, such as Google Docs, which is why questions about integrating Shibboleth, a single sign on product used in many tertiary education contexts, with the authentication regimes imposed by these providers appear so frequently on the relevant user mailing lists.
Mobiles
Again, mobile technology is being adopted rapidly by many institutions. The main IAM related issue is how to ensure security, and it is something which is quite well understood - but that hasn't prevented implementations of other systems having embarrassing security holes which should have been avoided. With mobiles, the issue is more about making sure that known issues are dealt with rather than extensive research to work out what should be done. An introduction to the issues can be found here (among many other places). Since most of the resources being discussed are web based, issues of integration and single sign on are not likely to be important, as they will have been solved for traditional web clients (e.g. by using a standards based SSO solution).
Open Content
In the past, I promoted the idea that even when repositories have open access, there is still a need for authentication and authorisation, unless the repository really allows anyone to anonymously store any item, with no audit trail: a situation which is not likely to happen in the academic community. Similar remarks also hold for open content. The holders of the content will want to retain at least some control over the much of the content being posted. In fact, deposit is likely to be quite restricted, in order to retain a degree of academic respectability and to keep control of intellectual property rights. This is true in the example project links which are given in the report, except for one: P2PU. There, all that is needed to post content, either comments on existing teaching material or a course of your own, is a login. This can be an OpenID identity, or one which is derived from the completion of a registration form.
As is the case with mobile use, there is little new here; developers of open content repositories just need to be sure to apply known security principles correctly to safeguard the holdings that will fill them.
Tablet Computing
Here, the main point is the potential for the use of apps for educational and/or research purposes. This means that the use of apps is the main issue for IAM in this context: how an app (and associated remote data stores if any) handles identity, privacy, security and so on will be the major concern. As with the previous two technologies, it seems that the principal focus for IAM work here will be on developer/deployer education rather than finding something new. Heterogeneity is a potentially serious issue for tablets than less advanced mobiles, because apps can take non-standard approaches to IAM and services provided by institutions will need to be flexible in order to cope: but this should not be at the expense of security and privacy.
Overall, there is an excellent discussion (Part One, Part Two) of what Frank Villavicencio calls the "consumerization of IAM" - the consequences for Identity and Access Management of the explosion in the use of different devices and methods for accessing systems. Although it deals with the commercial market, much of what he says is going to be at least as applicable to FHEIs. With all these new devices and methods for accessing services, a user's multiple roles (as student or employee, as a private individual, as a consumer, etc.) become immensely important, whether they want to merge them or keep them separate. As with much of Identity, the issue is the precise way to manage the trade off between privacy and convenience. The main recommendation of the Identropy discussion is that organisations need to embrace this change, rather than trying to bury their heads in the sand; this is something which applies even more to FHEIs if they want to meet the expectations of their students, who will expect them to live in this decade not the last.
The report divides technologies into three groups of four: first those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post one of the three; go to post two, post three.
Cloud Computing
The report describes this as an almost ubiquitous technology. The main access challenges must therefore have been solved, surely?
However, a quick glance at the project links given in the section to relevant initiatives in the sector shows that access to cloud resources is not as simple as it might be. The Bloomsbury Media Cloud requires an email to request the setting up of an account, and considers access sufficiently difficult that it has created a video in its user guide section to show how to access content (and the video itself is hard to access, giving me a 404 Not Found error when I tried it). "Investigating and applying authentication methods" is one of the objectives of the project, but I would suggest that more work is needed. Still, that is better than the second link, to Oxford's Flexible Services for the Support of Research, which does not exist at all: the page has moved from a "current research" directory to a "research" directory, here. A more persistent URL really should have been employed. This is a far less glossy project, more technical in content, as can be seen from the Installation documentation, which describes access control in the following terms:
"Security Groups: users can define groups with access rules indicating what port can be accessed from which source IP(s). Multiple Virtual Machines (VMs) can then be instantiated and associated to a defined group. In this way, a security group works analogously to a firewall put in front of one or more VM. Crucially, such a 'firewall' is managed directly by the owner of the VM(s)".
Flexible, but a bit of a challenge for those with little knowledge of virtual machine firewall configuration. The final project link is to HEFCE's funding announcement of shared cloud services for higher education institutions.
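To make the comparison concrete, a security group rule of the kind quoted above ("what port can be accessed from which source IP(s)") corresponds roughly to conventional firewall configuration like the following. This is only a sketch, with an illustrative subnet; in practice the cloud platform generates and manages the equivalent rules for you:

```shell
# Roughly what a security group rule "allow tcp/22 from 192.0.2.0/24"
# expands to on a conventional Linux firewall (run as root):
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
# Let replies to connections initiated by the VM back in
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Default policy: drop all other inbound traffic to the VM
iptables -P INPUT DROP
```

The point of the quoted design is that the VM owner, rather than a central network team, controls rules of this kind.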
So there is still work to be done, mainly in terms of the user experience. This aspect is clearly important to commercial providers of cloud-based services such as Google Docs, which is why questions about integrating Shibboleth - a single sign-on product used in many tertiary education contexts - with the authentication regimes imposed by these providers appear so frequently on the relevant user mailing lists.
Mobiles
Again, mobile technology is being adopted rapidly by many institutions. The main IAM-related issue is how to ensure security, and this is quite well understood - though that hasn't prevented implementations of other systems from having embarrassing security holes which should have been avoided. With mobiles, then, the issue is more about making sure that known problems are dealt with than about extensive research to work out what should be done. An introduction to the issues can be found here (among many other places). Since most of the resources being discussed are web based, issues of integration and single sign-on are not likely to be important, as they will have been solved for traditional web clients (e.g. by using a standards-based SSO solution).
Open Content
In the past, I promoted the idea that even when repositories have open access, there is still a need for authentication and authorisation, unless the repository really allows anyone to anonymously store any item, with no audit trail: a situation which is not likely to happen in the academic community. Similar remarks hold for open content. The holders of the content will want to retain at least some control over much of the content being posted. In fact, deposit is likely to be quite restricted, in order to retain a degree of academic respectability and to keep control of intellectual property rights. This is true of the example project links which are given in the report, except for one: P2PU. There, all that is needed to post content, either comments on existing teaching material or a course of your own, is a login. This can be an OpenID identity, or one derived from the completion of a registration form.
As is the case with mobile use, there is little new here; developers of open content repositories just need to be sure to apply known security principles correctly to safeguard the holdings that will fill them.
Tablet Computing
Here, the main point is the potential for the use of apps for educational and/or research purposes. This makes apps the main issue for IAM in this context: how an app (and any associated remote data stores) handles identity, privacy, security and so on will be the major concern. As with the previous two technologies, it seems that the principal focus for IAM work here will be on developer/deployer education rather than finding something new. Heterogeneity is a potentially more serious issue for tablets than for less advanced mobiles, because apps can take non-standard approaches to IAM, and services provided by institutions will need to be flexible in order to cope: but this should not be at the expense of security and privacy.
Overall, there is an excellent discussion (Part One, Part Two) of what Frank Villavicencio calls the "consumerization of IAM" - the consequences for Identity and Access Management of the explosion in the use of different devices and methods for accessing systems. Although it deals with the commercial market, much of what he says is going to be at least as applicable to FHEIs. With all these new devices and methods for accessing services, a user's multiple roles (as student or employee, as a private individual, as a consumer, etc.) become immensely important, whether they want to merge them or keep them separate. As with much of Identity, the issue is the precise way to manage the trade-off between privacy and convenience. The main recommendation of the Identropy discussion is that organisations need to embrace this change, rather than trying to bury their heads in the sand; this is something which applies even more to FHEIs if they want to meet the expectations of their students, who will expect them to live in this decade not the last.
Saturday, 2 April 2011
Installing Debian on MSi CR629 (Novatech I3) laptop
I recently bought a new laptop from Novatech, without any operating system pre-installed (it turns out to be labelled an MSi CR629). I would, by the way, recommend Novatech as a supplier, particularly for people who want to buy computers which don't have Windows pre-installed. The directions I give will probably be considered a bit terse by a first-time Linux installer, as I mainly want to explain how to obtain a working Linux installation on this particular hardware, not to replicate other people's documents which describe how to install Linux of various flavours.
This post will describe the steps I went through to get a working Debian Linux stable installation on the laptop, for the guidance of any others who might wish to do the same. I might move to testing at some point, rather than stable; this is something I haven't decided yet.
I spent some time beforehand thinking about the Linux distribution to use. The short-list was Gentoo, Debian, and OpenSuse. I rejected ubuntu, which I have used for a long time on my desktops, because I don't like some of the recent changes and the direction the project seems to be going in, in terms of user interface design and flexibility. I ended up rejecting Gentoo after actually attempting an installation; I have in the past spent too much time manually configuring xorg.conf, and I have no desire to do so again. I never got round to trying OpenSuse...
1. Create CD and Install Debian Linux
I did this on one of my existing computers. I downloaded an installation image via the Debian website, and burnt a CD using brasero on my existing ubuntu desktop. Then I booted up the laptop, inserted the CD, and rebooted, following the instructions on screen. The network install works fine, though it must be done with a wired internet connection (see below on how to add wireless support).
I chose to partition manually, though there is no particular need to do so. This was mainly so that I could leave some empty space on the disk, possibly for installing Windows later, possibly for exploring other Linux distributions. But I created a small /boot partition, a larger-than-recommended swap partition (as I've had problems in the past from too little swap space), and a large / partition (as I have also suffered from using machines with many partitions whose sizes did not work out, because the needs of the server did not match the ideas of the installer).
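For what it's worth, the layout I ended up with was along these lines (the sizes here are illustrative rather than an exact record, and assume a disk of a few hundred GB):

```
/dev/sda1   /boot   ext3   ~500MB   small; just kernels and the bootloader
/dev/sda2   swap           ~8GB     larger than the installer recommends
/dev/sda3   /       ext3   ~250GB   one big root partition
(remainder)                         left unallocated for a possible later install
```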
Once the installation process is complete, follow the menu item to restart, remove the CD, and let the machine boot from the hard drive. With the exception of some hardware accessories, this process worked fine.
Some hardware problems will only become apparent over time; there are several pieces of kit integrated into the laptop which I have not used yet, and even some which I am unlikely to ever use (the MMC card reader). A couple of items did immediately need fixing to work with Debian, and this is probably the most technical requirement of the installation.
2. Wireless networking
The laptop comes with RaLink RT3090 wireless hardware, which has known issues with linux drivers. But before we come to that, the device is bizarrely switched off by default. If you press Fn and F8 simultaneously, wireless networking is turned on, and the status light second on the left below the mousepad should become green.
Next, install the basic wireless software. Open a terminal, and as root or using sudo, run
# apt-get install firmware-ralink wireless-tools
(Using synaptic is of course an acceptable alternative.) Enable the drivers by restarting the network manager, either by rebooting or
# /etc/init.d/network-manager restart
The firmware-ralink package contains several drivers, some of which will work with the RT3090 hardware, but none of which allow stable wireless connections. They are, however, needed to establish a connection. For a more stable connection, download the proprietary driver for the RT3090 directly from Ralink's support website. You will need to accept the license, entering a name and email address, to download. To install, you will need to be root or use sudo:
# unzip 2010_1217_RT3090_LinuxSTA_V2.4.0.4_WiFiBTCombo_DPO.zip
# cd 20101216_RT3090_LinuxSTA_V2.4.0.4_WiFiBTCombo_DPO
# make
# make install
(The original text here included a link to a website where a .deb file could be downloaded; I hadn't taken in that this was ubuntu only and would not work in debian.)
Rebooting is probably the easiest way to see if the driver is picked up. You can see a list of which drivers are loaded using
$ lsmod | grep -e rt2 -e rt3
(you don't need to be root to run this). If rt3090sta is not listed, you need to add it (as root):
# modprobe rt3090sta
This immediately got the wireless working, searching for networks to connect to, and with a reasonably (but not perfectly) stable connection.
To force the driver to be loaded on booting the laptop, run the following command as root:
# echo "rt3090sta" >> /etc/modules
This simply adds the module name for the driver to the list in this file of the kernel modules to load on boot. (N.B. This does not always appear to work, but if not, run the modprobe command again to load the driver manually.)
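If you are worried about adding duplicate lines to /etc/modules by running the echo more than once, a slightly more careful version is the following sketch (ensure_module is my own helper name, not a standard command):

```shell
# Append a module name to a modules file only if it is not already
# listed, so the snippet is safe to re-run (run as root for /etc/modules).
ensure_module() {
    module="$1"
    file="$2"
    grep -qx "$module" "$file" 2>/dev/null || echo "$module" >> "$file"
}

# On the laptop, as root:
# ensure_module rt3090sta /etc/modules
```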
(These instructions are based on those here, which although they didn't work for me as written, gave enough clues that I could get wireless working.)
3. Integrated card reader
Inserting an SD card into the card reader (second drawer from the front on the left hand side; you need to remove the cover first) does nothing. This is because the driver for the reader is not included as a module for use with the current linux kernel - there is a kernel bug report, but its advice is complicated and confusing, and in fact easier-to-follow instructions are available elsewhere. The card reader is a USB device, and can be seen with
$ lsusb
The output should include a line like:
Bus 001 Device 005: ID 0cf2:6250 ENE Technology, Inc.
which indicates the manufacturer of the hardware. These instructions are intended to get card readers made by this manufacturer to work in debian linux: if the hardware for the laptop has changed and a different card reader is included, the method will be different.
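If you want to check for the vendor ID programmatically before going any further, a small helper like the following will do it (has_ene_reader is my own sketch, not a standard command; it simply searches lsusb output for ENE's USB vendor ID, 0cf2):

```shell
# Succeeds (exit status 0) if the lsusb output piped into it mentions
# a device with ENE Technology's USB vendor ID (0cf2).
has_ene_reader() {
    grep -q 'ID 0cf2:'
}

# Typical use on the laptop:
# lsusb | has_ene_reader && echo "ENE card reader present"
```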
You will need to be root to install the driver, which needs to be compiled as a kernel module.
First, download the driver here. The downloaded zip file needs to be unzipped to a particular location:
# unzip -d /usr/src/keucr-0.0.1 R100_02_ene_card_reader.zip
Create a file named /usr/src/keucr-0.0.1/dkms.conf containing:
PACKAGE_NAME="keucr"
PACKAGE_VERSION="0.0.1"
CLEAN="rm -f *.*o"
BUILT_MODULE_NAME[0]="keucr"
MAKE[0]="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"
and then use dkms to build and install the driver:
# dkms add -m keucr -v 0.0.1
# dkms build -m keucr -v 0.0.1
# dkms install -m keucr -v 0.0.1
# echo "keucr" >> /etc/modules
The last line means that the module will be loaded on boot. To load it manually in the current session use:
# modprobe keucr
Test by inserting a card; it should now be recognised and automatically mounted. Whenever a new kernel is built/installed, this driver will break and will need to be re-installed:
# dkms remove -m keucr -v 0.0.1 --all
# dkms add -m keucr -v 0.0.1
# dkms build -m keucr -v 0.0.1
# dkms install -m keucr -v 0.0.1
# modprobe keucr
4. Mousepad tapping
Not everyone likes mousepad tapping (that is, being able to tap on the pad to simulate a mouse button click, rather than using the bar at the bottom of the pad), and it is turned off by default in gnome (the default windowing environment in debian). Enabling is much simpler than the preceding tasks. To enable, simply use the menu at the top of the screen; select System/Preferences/Mouse, go to the Touchpad tab, and click on "Enable mouse clicks with touchpad". This should take effect immediately.
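For those who prefer the command line, the same setting lives in gconf under GNOME 2. The key name below is my best recollection rather than something I have verified against the documentation, so check it first:

```shell
# List the touchpad keys to confirm the key name before setting it:
gconftool-2 --recursive-list /desktop/gnome/peripherals/touchpad

# Enable tap-to-click for the current user (key name assumed from memory):
gconftool-2 --set --type bool \
    /desktop/gnome/peripherals/touchpad/tap_to_click true
```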
5. Webcam
Like the wireless hardware, this is switched off by default. I spent a while trying to work out what the problem might be, before turning it on, which can be done by pressing Fn and F6 simultaneously. When you do this, you should straightaway find:
$ ls /dev/video*
returns
/dev/video0
and that you can see yourself if you start the application cheese (select Applications/Sound and Video/Cheese Webcam Booth). Additionally, lsusb will now return an additional entry which includes the manufacturer's name: "Acer, Inc".
You can turn the webcam off again with the same key combination.
Update: Fixed some errors in the wireless networking section.
Update 2011-10-13: Updated URL for Ralink Support.
Friday, 23 July 2010
How to Ask Me For Money
About twenty years ago, my partner and I decided we'd give a proportion of our income to charity. We chose a number of charities which we support, about a dozen in all. These are a mixture of environmental, heritage and social action organisations, from the National Trust to Garden Organic and Friends of the Earth. Generally, the way we do this is pretty simple; we work through the list of charities in turn and give the money more or less monthly. The charities respond to this by sending us quite a lot of literature; newsletters, magazines, appeals, and so on. Most of this has very little effect, because of the way we decided to do our giving, but it means we see the different approaches to asking us for money taken by the charities. If anything, the way that they ask is more likely to put us off giving than to persuade us to give more: we generally react to advertising of all kinds by not buying the product. So perhaps we're not typical: if we were, there would be no advertising.
It's easy to come up with a list of actions which put us off giving to the various charities, less easy to say what constitutes a good way to encourage us. I'll start with some observations on actions which discourage us.
We don't want to be asked too frequently, or to be given unnecessary reminders of why we support a charity in the first place. We make an effort to choose charities which have aims we support and do what we think of as good work, and we generally feel we know what they do well enough not to need constant bombardment with letters and leaflets reminding us of why the charity itself considers its work important. Of the charities we have supported, this complaint is aimed most at Friends of the Earth. They have started putting their appeals on the outside of the envelope as well as inside, which is just silly - if it comes from FOTE, we're not likely to assume it's junk mail to be recycled without opening. Not only that, but the appeals tend to be very lengthy and hysterical in tone: we must have another £500,000 this month, or the world will come to an end! (This is the mailing which prompted this blog post, incidentally.) While agreeing that the environment is something which we need to sort out to continue living on this planet, I am not so convinced that one campaigning group is quite so important in the world, even if they are one of the more militant.
People give money to a charity because they want to help with the front-line work it does. Unfortunately, a certain amount of administration is necessary for this work to be possible; it's unglamorous and people generally don't want to fund it, but it has to be done. Sometimes employing a high quality administrator behind the scenes could help the work far more than, say, sending food to a disaster area which doesn't reach the starving because it hasn't had paperwork correctly completed to pass through customs. Think of the recent case of the Baptist aid workers in Haiti after the earthquake at the beginning of 2010, who fell foul of the laws banning child trafficking when seeking to take children to be adopted out of the country (see Wikipedia article). If, as they claim, they were sincere in believing the children to be orphans, there must have been problems which could have been solved by a good administrator: checking the status of the children properly (not easy in a country devastated by an earthquake), arrangement of permits to take the children out of the country, and understanding of the legal position on the movement of children in Haiti. While charities should keep their administration costs to a minimum, we don't actually mind how the charity spends the money we give them. Sometimes, charities asking for money include a "how do you want this money to be spent?" query on their forms, and because we feel that admin can be important, and because we don't want emotional feelings about the latest widely publicised disaster (or initiative, or whatever) to take money away from other important work, we always select "in any suitable manner", which almost certainly means that the money goes on administration.
We're not unhappy about paying for administration, but we do feel it needs to be reasonably efficient. Sometimes a charity's administrative methods are so poor that it inhibits giving to them. Plantlife is a charity we want to support, but became unable to do so because they were so inefficient. Firstly we set up a direct debit, which they failed to collect. So we cancelled the direct debit, and sent them a cheque for annual membership and a donation. But then we started getting monthly letters saying that our subscription had lapsed because they'd been unable to process our payment because the direct debit hadn't been collectable. Of course it wasn't: we'd cancelled it when the cheque was cashed. After a year of emails and phone calls, each of which ended with their administrators saying that they'd sort the problem (after all, sending us monthly letters costs them money) which seemed to have no effect on the processing of the letters, we gave up, and took our money elsewhere. We've had charities failing to cash cheques before they expire, or having online giving pages which aren't at the places which they link to, or which don't use SSL to secure the transactions (how difficult is that going to be to set up in 2010?): none of these administrative issues encourage us to give money. If a charity can't manage their receipt of donations, how can they be trusted to efficiently use the money they do receive?
We don't necessarily want to read too much about the details of the work being done. We chose a charity to support because we have some idea of what we do, but sometimes the work is such that too many details become depressing. We know things are difficult for refugees around the world, but we don't need to know all the details of individual cases of hardship (Refugee Council), we know that human rights are abused by many governments, but we don't want to be depressed by just how difficult it can be to be a dissident even in 2010 (Amnesty), we know that life for those excluded from society even in a relatively rich country in the West is hard, but individual cases can be distressing (Salvation Army). I suppose we have got a duty to ensure our money isn't being misused as much as we can, but that isn't quite what I mean by this. Almost every charity produces some sort of magazine for supporters. Sometimes this has a campaigning brief, like Amnesty's, which is used to inspire the letter writing campaigns which have been so successful for them in the past in raising awareness. We're not really people who do much letter writing, so this kind of campaigning isn't our style, and we're not planning to stop giving any time soon, so we would like to have the option of not receiving the magazine, which costs money to produce and distribute, only for us to skim it and recycle it. A short email newsletter would be better suited to the way we want to interact with these social action charities. We have explained this to the Salvation Army, who took us off their magazine list, and to the Refugee Council, who took us off their list for a time but soon started sending the magazine again (the next time we sent money to support the work).
A lot of charities want to encourage their supporters to become involved in their work. For many people, this can be extremely rewarding: working at a local charity shop, volunteering at a local National Trust property, and so on. I've already mentioned that we aren't really suited to become involved in letter writing campaigns. We also have issues caused by a long term illness which make it virtually impossible to be involved in activities which require meeting other people. As a result of this, I work at home almost all the time, so I don't do much socialising with work colleagues. So we're not likely to become involved in many of these activities, even if it's just selling raffle tickets or running awareness raising events at work. (Other people there hold cake days, or run marathons, to aid their causes.) Because of this, we are perhaps more aware than most of how charities ask people to become involved, and think it wrong to be made to feel guilty for not doing more. In some ways, we feel that giving quite considerable sums of money should be enough! Raffle tickets are a case in point; we get sent these a lot, to sell to our friends, and we would prefer not to be. Even if our lifestyle permitted us to do this easily, it is not something we're comfortable doing.
We don't want to have money wasted on cheap trinkets as rewards for support. A lot of charities include pens with requests for donations, to make it easier to complete the form and return it. The Red Cross has sent me, over the last few years, address labels, pens, notelets, postcards, seeds and tea. While it is nice to be appreciated, we feel that this is a waste of money which could be better spent on the charity's work, especially as we are being sent far more than we can use. There are only so many address labels with pictures of flowers I can/want to use, after all!
On the other hand, charities can lose out by being too lackadaisical. The Musicians' Benevolent Fund is a charity I wanted to give to, to encourage young musicians, but it proved hard to track them down, and when we did, we kept on dropping off their mailing list entirely. They appear to have a policy of only sending out information to people who responded to the previous appeal, and we don't give to an individual charity frequently enough to do that.
Many charities send receipts and thank you letters for donations over a certain amount. It is nice to be appreciated - some of the charities send out Christmas cards to supporters, which is a nice way to show that they matter. But this isn't really necessary: we are quite happy to give money without thanks, feeling that the work is more important than spending money this way. We can make sure the donation has been received from our bank statements, after all. Realising that not all donors want a receipt, some charities include a box on the form to accompany donations to tick if a receipt is not required. However, if we tick the box, then the charity shouldn't send a receipt, and this doesn't always work. This is another administrative failure, and one which is particularly irritating, as we specifically followed a procedure they set only for the charity to get it wrong.
Two of the charities we support are ones which I would like to highlight as ones which get it right, at least as far as we're concerned. The first is the National Trust. They seem to have really good writers of appeals. Their letters are relatively infrequent, and take the tone "We have this wonderful opportunity - do you want to be part of it?" when many charities are asking "Bad things are/could be happening - help us do something them". This is encouraging rather than depressing, especially as they stand out against the crowd as a result. Charities like the Refugee Council should take note: instead of telling us how bad things are for the people they are trying to help, send us stories of how their work has helped someone, and how it could do so again. (To be honest, they may now be doing this: I've stopped reading their letters, just removing the donation forms and recycling the rest because they were so depressing.)
The second good example is the Salvation Army. As mentioned above, they actually stopped sending us their magazine when we asked them to, as it was a waste as we didn't have time to read it. Now we get two or three appeals a year, and are sent nice Easter and Christmas cards as recognition of our support.
So, if you want my money:
It's easy to come up with a list of actions which put us off giving to the various charities, less easy to say what constitutes a good way to encourage us. I'll start with some observations on actions which discourage us.
We don't want to be asked too frequently, or to be given unnecessary reminders of why we support them in the first place. We make an effort to choose charities which have aims we support and do what we think of as good work, and we generally feel we know what they do well enough not to need constant bombardment with letters and leaflets reminding us of why the charity itself considers its work important. Of the charities we have supported, this is aimed most at Friends of the Earth. They have started having their appeals on the outside of the envelope as well as inside, which is just silly - if it comes from FOTE, we're not likely to assume it's junk mail to be recycled without opening. Not only that, but the appeals tend to be very lengthy and hysterical in tone: we must have another £500,000 this month, or the world will come to an end! (This is the mailing which prompted this blog post, incidentally.) While agreeing that the environment is something which we need to sort out to continue living on this planet, I am not so convinced that one campaigning group is quite so important in the world, even if it is one of the more militant.
People give money to a charity because they want to help with the front line work it does. Unfortunately, a certain amount of administration is necessary for this work to be possible; it's unglamorous and people generally don't want to fund it, but it has to be done. Sometimes employing a high-quality administrator behind the scenes could help the work far more than, say, sending food to a disaster area which doesn't reach the starving because it hasn't had paperwork correctly completed to pass through customs. Think of the recent case of the Baptist aid workers in Haiti after the earthquake at the beginning of 2010, who fell foul of the laws banning child trafficking when seeking to take children to be adopted out of the country (see Wikipedia article). If, as they claim, they were sincere in believing the children to be orphans, there must have been problems which could have been solved by a good administrator: checking the status of the children properly (not easy in a country devastated by an earthquake), arranging permits to take the children out of the country, and understanding the legal position on the movement of children in Haiti. While charities should keep their administration costs to a minimum, we don't actually mind how the charity spends the money we give them. Sometimes, charities asking for money include a "how do you want this money to be spent?" query on their forms, and because we feel that admin can be important, and because we don't want emotional reactions to the latest widely publicised disaster (or initiative, or whatever) to take money away from other important work, we always select "in any suitable manner", which almost certainly means that the money goes on administration.
We're not unhappy about paying for administration, but we do feel it needs to be reasonably efficient. Sometimes a charity's administrative methods are so poor that they inhibit giving to them. Plantlife is a charity we want to support, but became unable to do so because they were so inefficient. First we set up a direct debit, which they failed to collect. So we cancelled the direct debit, and sent them a cheque for annual membership and a donation. But then we started getting monthly letters saying that our subscription had lapsed because they'd been unable to process our payment because the direct debit hadn't been collectable. Of course it wasn't: we'd cancelled it when the cheque was cashed. After a year of emails and phone calls, each of which ended with their administrators promising to sort the problem out (after all, sending us monthly letters costs them money) but none of which had any effect on the letters, we gave up, and took our money elsewhere. We've had charities failing to cash cheques before they expire, having online giving pages which aren't where their links say they are, or not using SSL to secure the transactions (how difficult is that going to be to set up in 2010?): none of these administrative issues encourages us to give money. If a charity can't manage the receipt of donations, how can it be trusted to use the money it does receive efficiently?
We don't necessarily want to read too much about the details of the work being done. We chose a charity to support because we have some idea of what they do, but sometimes the work is such that too many details become depressing. We know things are difficult for refugees around the world, but we don't need to know all the details of individual cases of hardship (Refugee Council); we know that human rights are abused by many governments, but we don't want to be depressed by just how difficult it can be to be a dissident even in 2010 (Amnesty); and we know that life for those excluded from society even in a relatively rich country in the West is hard, but individual cases can be distressing (Salvation Army). I suppose we have a duty to ensure our money isn't being misused as much as we can, but that isn't quite what I mean by this. Almost every charity produces some sort of magazine for supporters. Sometimes this has a campaigning brief, like Amnesty's, which is used to inspire the letter writing campaigns which have been so successful for them in the past in raising awareness. We're not really people who do much letter writing, so this kind of campaigning isn't our style, and we're not planning to stop giving any time soon, so we would like to have the option of not receiving the magazine, which costs money to produce and distribute, only for us to skim it and recycle it. A short email newsletter would be better suited to the way we want to interact with these social action charities. We have explained this to the Salvation Army, who took us off their magazine list, and to the Refugee Council, who took us off their list for a time but soon started sending the magazine again (the next time we sent money to support the work).
A lot of charities want to encourage their supporters to become involved in their work. For many people, this can be extremely rewarding: working at a local charity shop, volunteering at a local National Trust property, and so on. I've already mentioned that we aren't really suited to become involved in letter writing campaigns. We also have issues caused by a long term illness which make it virtually impossible to be involved in activities which require meeting other people. As a result of this, I work at home almost all the time, so I don't do much socialising with work colleagues. So we're not likely to become involved in many of these activities, even if it's just selling raffle tickets or running awareness raising events at work. (Other people there hold cake days, or run marathons, to aid their causes.) Because of this, we are perhaps more aware than most of how charities ask people to become involved, and think it wrong to be made to feel guilty for not doing more. In some ways, we feel that giving quite considerable sums of money should be enough! Raffle tickets are a case in point; we get sent these a lot, to sell to our friends, and we would prefer not to be. Even if our lifestyle permitted us to do this easily, it is not something we're comfortable doing.
We don't want to have money wasted on cheap trinkets as rewards for support. A lot of charities include pens with requests for donations, to make it easier to complete the form and return it. The Red Cross has sent me, over the last few years, address labels, pens, notelets, postcards, seeds and tea. While it is nice to be appreciated, we feel that this is a waste of money which could be better spent on the charity's work, especially as we are being sent far more than we can use. There are only so many address labels with pictures of flowers I can/want to use, after all!
On the other hand, charities can lose out by being too lackadaisical. The Musicians' Benevolent Fund is a charity I wanted to give to, to encourage young musicians, but it proved hard to track them down, and when we did, we kept on dropping off their mailing list entirely. They appear to have a policy of only sending out information to people who responded to the previous appeal, and we don't give to an individual charity frequently enough to do that.
Many charities send receipts and thank you letters for donations over a certain amount. It is nice to be appreciated - some of the charities send out Christmas cards to supporters, which is a nice way to show that they matter. But this isn't really necessary: we are quite happy to give money without thanks, feeling that the work is more important than spending money this way. We can make sure the donation has been received from our bank statements, after all. Realising that not all donors want a receipt, some charities include a box on the form to accompany donations to tick if a receipt is not required. However, ticking the box doesn't always stop the receipts. This is another administrative failure, and one which is particularly irritating, as we specifically followed a procedure they set only for the charity to get it wrong.
I would like to highlight two of the charities we support as ones which get it right, at least as far as we're concerned. The first is the National Trust. They seem to have really good writers of appeals. Their letters are relatively infrequent, and take the tone "We have this wonderful opportunity - do you want to be part of it?" when many charities are asking "Bad things are/could be happening - help us do something about them". This is encouraging rather than depressing, especially as they stand out from the crowd as a result. Charities like the Refugee Council should take note: instead of telling us how bad things are for the people they are trying to help, send us stories of how their work has helped someone, and how it could do so again. (To be honest, they may now be doing this: I've stopped reading their letters, just removing the donation forms and recycling the rest because they were so depressing.)
The second good example is the Salvation Army. As mentioned above, they actually stopped sending us their magazine when we asked them to, as it was a waste as we didn't have time to read it. Now we get two or three appeals a year, and are sent nice Easter and Christmas cards as recognition of our support.
So, if you want my money:
- don't ask me every five minutes
- don't send me large numbers of cheap "gifts"
- don't expect me to act as a fundraising or campaigning volunteer unless I tell you I want to do this
- do be positive about what you are doing with the money
- do administer my giving effectively and efficiently
- do keep in touch
Tuesday, 11 May 2010
Job Hunting in UK Higher Education VII: Online Application Forms
I made a couple of job applications using online web-based software applications (the word being rather overused here, but I think it's clear enough). It was fairly clear that the two were in fact the same piece of software, with somewhat different configuration, sharing exactly the same limitations on the two sites. Since using the web to make job applications is likely to be of increasing importance, I think it would be useful to write about some of these limitations and how they should be addressed (from the point of view of the applicant).
Looking back at the sites now in order to obtain screenshots, it is clear that the application has been updated since I used it, and some of the criticisms I am about to make have already been addressed. Once I have completed this set of posts, I will contact the institutions involved (and both the THES and jobs.ac.uk, subjects of earlier posts) for feedback and in the hope that what I say might be useful to them.
The appearance of the web application was extremely old fashioned
While I am of the opinion that form and appearance are less important than content, we are living in 2010 not in 1995. The web has changed a lot since then: backgrounds (the web in 1995 was black text on a grey background as far as browsers were concerned), CSS, and many minor formatting aids in successive updates to HTML have made a huge difference to the experience of browsing the Internet over the last fifteen years. And to design a web application so that it appears to have ignored all these changes makes the institution which hosts such an application appear to be out of date itself: not an image that any university really wants to promote of itself in this day and age. Universities wish to attract good quality staff members, and one of the ways that they can do this is to emphasise how much they are an exciting and innovative place to work; and to give the impression that job applicants are so unimportant that the facilities given them are over a decade out of date is not at all the way to do this.
Additionally, most universities are concerned that their online presence should present a reasonably uniform, branded experience to browsers, an aim which CSS has made relatively easy to implement, at least in part. So a part of the institutional website which totally ignores any requirement for uniformity must be a nightmare to those concerned with marketing the university's image.
This is one aspect which has been improved drastically since I used the web application, so I am clearly not the only person to have been concerned about this. Since this has changed, I have not included an illustrative screenshot, but some idea of what the site looked like can be seen here (the Mosaic 1.0 screenshot is the right vintage for the web experience I'm talking about), or here.
Help and Guidance Were Lacking
One of the most serious shortcomings of software generally is the lack of decent documentation. With this web application, how to use it gradually became clearer, but it was never spelt out in advance. This affects users in two ways.
The application can contain what are called "branching questions", where what the applicant is asked next depends on answers to previous questions. ("Do you have unspent convictions?" would lead to the follow-up "List them" if answered affirmatively, but could skip to a question about ethnic background if answered negatively, for example.) These questions are clearly configurable - they were considerably different at the two sites where I have used the application. Apart from questions usually found on an equal opportunities monitoring form, in both cases this section included custom questions - "Why do you wish to work at University X?", for example. In order to answer such questions sensibly, it can be extremely useful to know what all the questions are in advance, a trivial matter with a paper application form. But the applicant only sees the next question when the previous one is answered. Only once something has been entered for every question is it possible to see all the answered questions (i.e. without the branches that were not followed), by clicking on the "Print summary" link, which displays a web page containing the whole of a user's application - useful for printing out a copy to read offline or to store for future reference (e.g. for re-reading on the way to an interview); but this is not obvious.
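To see why a preview of all the questions ought to be technically straightforward, a branching question set can be modelled as a small graph and walked exhaustively. The sketch below is a hypothetical reconstruction in Python - the question names and structure are invented, not taken from the actual application:

```python
# Hypothetical model of "branching questions": each question names the
# question that follows for each possible answer.
QUESTIONS = {
    "convictions": {
        "text": "Do you have unspent convictions?",
        "next": {"yes": "list_convictions", "no": "ethnicity"},
    },
    "list_convictions": {
        "text": "List them",
        "next": {"done": "ethnicity"},
    },
    "ethnicity": {
        "text": "What is your ethnic background?",
        "next": {},
    },
}

def all_questions(start):
    """Walk every branch from `start`, collecting every question an
    applicant could possibly be asked - the preview a paper form
    provides for free."""
    seen = []
    stack = [start]
    while stack:
        qid = stack.pop()
        if qid in seen:
            continue
        seen.append(qid)
        stack.extend(QUESTIONS[qid]["next"].values())
    return [QUESTIONS[q]["text"] for q in seen]
```

Since the application must already store the question graph in order to branch at all, generating such a preview page would cost it very little.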
The second question which is hard to answer at the start is how an application is submitted. Submission actually happens as follows. The questions are organised into sections. After the applicant is happy with each section, they can tick a box marking the section complete. (This doesn't prevent them from coming back and changing it again later.) Once every section is complete, a new button appears on the summary page (not the same as the summary displayed by "Print summary" in the previous paragraph, but the one listing the sections) to submit the application. At this point, it is still possible to edit the answers, but it is not clear whether these will be received by the Human Resources department.
While this is a sensible way to organise the form to prevent accidental submission, it would be useful to see the information in the paragraph above beforehand. Otherwise, the obvious supposition is that submission is an automatic process, and will just occur when all the sections are marked completed.
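The rule described above can be summarised in a few lines. This is an illustrative sketch (invented names, not the vendor's code) of the logic that the submit action only appears once every section is ticked complete:

```python
# Sections are ticked complete individually; nothing is submitted until
# every section is complete AND the applicant explicitly submits.
sections = {"personal details": False, "education": False, "employment": False}

def mark_complete(name):
    sections[name] = True  # the applicant can still edit and re-tick later

def submit_button_visible():
    return all(sections.values())
```

Spelling this rule out on the summary page itself would remove the guesswork.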
No formatting was available for text input
Many people use formatting - particularly bold and italic - to make it easier to pick salient points out from their CV. I have tables in mine, mainly because I have worked on over a dozen different named projects in my current post, and it's a convenient way to summarise the different projects by name, date and my contribution. So if the word processed CV is replaced by a form, applicants will want to be able to duplicate as much formatting as possible.
Clearly, this ability is possible to implement: I am typing this blog entry in a Javascript editor which allows most of this formatting via familiar icons (to include tables, the direct HTML entry option needs to be used). This sort of interface will be familiar to many job applicants in academic circles: editing blogs, wikis and website CMS systems has become commonplace.
So why does the web application require the entry of data as unformatted text? Presumably this makes it easier to store in a database, though reducing it to HTML markup as the Blogger editor does would leave it in as good a form for storage as straight text. Other decisions about the design of the application seem to have been made because of the necessity of storing the information in a database (see below), but this is surely unnecessary in 2010: storage space is relatively cheap.
Incidentally, the Javascript editor in Blogger is not perfect by any means - the first draft of this post was lost when I used Ctrl-Z to reverse a change, saw the whole post disappear, and then watched it become immediately unrecoverable as the autosave function ran and saved the deletion! Nevertheless, it is massively easier to use to produce material which presents the job applicant in as good a light as possible.
Bizarre Usability Choices
With a job application, large numbers of dates often need to be entered: start and end dates for employment, dates when qualifications were obtained, and so on. With qualifications, it is common to take several at the same time when in school, which effectively means that the same date needs to be entered multiple times. With this in mind, a system for handling job applications needs to be designed to allow dates to be entered easily, preferably by copy and paste from existing documents.
However, that isn't the case - and not just in this specific application. There seems to be something of a fashion for poorly designed date entry controls at the moment, which require the selection of the day of the month, month, and year from three separate drop down menus. While it's usually possible to short cut the tedious process of finding the right value for each (select the first, then type the day, escape, enough letters from the month to uniquely identify it, escape, and the year), it's not as simple as copying a date from another document, especially if it needs to be done several times.
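Accepting a typed or pasted date is not hard to implement. As a sketch (the accepted formats here are my own assumptions about what applicants are likely to paste, not anything from the real application), a single text field could be parsed against a few common patterns:

```python
from datetime import date, datetime

def parse_pasted_date(text):
    """Try a pasted date against a few common formats, so an applicant
    can copy dates straight from an existing CV rather than fighting
    three drop-down menus."""
    for fmt in ("%d/%m/%Y", "%d %B %Y", "%Y-%m-%d", "%B %Y", "%Y"):
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date: {text!r}")
```

Formats like "%B %Y" and "%Y" default the missing parts to the first day of the month or year, which matches how vaguely people actually remember old qualification dates.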
When I made the applications, there was a difference between the two sites, which now seems to have disappeared: one of them actually required the input of the day of the month for dates as well as the month and year, with the rather strange hint that the 1st of the month should be used where the applicant didn't know the exact day. How many people could actually remember the exact day on which they obtained a qualification twenty years ago - and is it really a well-defined date? Should it match the day of the final exam, the results being published, the graduation ceremony, or the arrival of the official certificate?
At Least One Bug in Basic Input Processing
In a job application, there are likely to be a fair number of lengthy textual sections, responses to directions such as "Describe your responsibilities in your current post", "Indicate how you fulfil the person specification for the vacancy", and so on. The web application requires that these free text submissions be limited in length - I'm not sure why this is necessary, as it's very simple to store effectively unlimited text in a database (certainly enough to allow more than any legitimate applicant is going to submit except by error). The character limits are clearly configurable per question, which makes sense provided that the configuration is done thoughtfully: it would have been nice if it had been borne in mind that a character limit of 4000 for "Describe how you fulfil the person specification for the vacancy", where the person specification lists twenty requirements, gives only an average of 200 characters for each requirement: not much longer than a tweet.
Leaving that aside, there also turned out to be a bug affecting how the text limit was applied. The way I went about answering these questions was to use a word processor to develop an answer, because it provides a convenient way to keep track of how many characters have been used. Then I copied and pasted the answer into the web form, and submitted it. This showed that the word processor and the form's checking code didn't count the characters in the same way, so I needed to cut 50 or so characters before text which the word processor counted as 3980 characters would fit under the form's 4000 character limit. While irritating, this is effectively outside the control of the developers. Possible reasons could include UNIX-to-DOS format line ending conversion, which would effectively add an invisible character for each line break in the text.
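The line-ending explanation is easy to demonstrate. This is not the application's code, just an illustration of why two counts of the "same" text can differ by exactly one character per line break:

```python
# A Unix text uses "\n" for line breaks; DOS/Windows uses "\r\n".
# Converting between them changes the length by one per break.
unix_text = "line one\nline two\nline three"
dos_text = unix_text.replace("\n", "\r\n")

assert len(unix_text) == 28
assert len(dos_text) == 30  # two line breaks, two extra characters
```

Fifty invisible characters is therefore just fifty line breaks' worth of difference - entirely plausible in a long, multi-paragraph answer.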
Note: I was unable to update my existing answers for an application for a post for which the deadline had already passed in a way which would have confirmed the continued existence of this bug or to provide screenshots for this post.
However, worse was to come. Having made the effort to cut the number of characters, and persuaded the web form that the input was really less than 4000 characters, I submitted the answer again. Apparently successfully. But then when I looked at the summary of the job application, I discovered that about 20 characters had been snipped from the end of the answer, presumably meaning that the web application didn't count the characters in the input in the same way that it counted the characters in the submission to the database. Now this, I think, is inexcusable and should have been found in testing.
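A plausible (entirely hypothetical) reconstruction of the bug: the validation step counts one representation of the text while the storage step truncates another, so an answer which passes validation can still silently lose its tail:

```python
LIMIT = 20  # stand-in for the real 4000-character limit

def validate(text):
    # Validation normalises line endings before counting...
    return len(text.replace("\r\n", "\n")) <= LIMIT

def store(text):
    # ...but storage truncates the raw text to the same limit, so a
    # submission which "passed" loses its final characters.
    return text[:LIMIT]

answer = "a" * 9 + "\r\n" + "b" * 10   # 20 characters once normalised
assert validate(answer)                # accepted by the form...
assert len(store(answer)) == LIMIT     # ...but truncated in the database
assert store(answer) != answer
```

Any test which pushed a maximum-length, multi-line answer through both steps and compared input with output would have caught this.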
It made me wonder what other basic issues had been missed from the software: did it have any protection from SQL injection attacks, for example?
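For the record, guarding against SQL injection is a solved problem: pass user text to the database driver as a parameter, never by pasting it into the SQL string. A generic Python/sqlite3 illustration (not the application's actual stack, which I can only guess at):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (question TEXT, answer TEXT)")

def save_answer(question, answer):
    # The "?" placeholders make the driver treat the applicant's text
    # purely as data, so it can never be executed as SQL.
    conn.execute("INSERT INTO answers VALUES (?, ?)", (question, answer))

# Even a classic injection attempt is stored as an ordinary string:
save_answer("responsibilities", "x'; DROP TABLE answers; --")
```

The same placeholder mechanism exists in every mainstream database library, so there is little excuse for building queries by string concatenation.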
Looking back at the sites now in order to obtain screenshots, it is clear that the application has been updated since I used it, and some of the criticisms I am about to make have already been addressed. Once I have completed this set of posts, I will contact the institutions involved (and both the THES and jobs.ac.uk, subjects of earlier posts) for feedback and in the hope that what I say might be useful to them.
The appearance of the web application was extremely old fashioned
While I am of the opinion that form and appearance are less important than content, we are living in 2010 not in 1995. The web has changed a lot since then: backgrounds (the web in 1995 was black text on a grey background as far as browsers were concerned), CSS, and many minor formatting aids in successive updates to HTML have made a huge difference to the experience of browsing the Internet over the last fifteen years. And to design a web application so that it appears to have ignored all these changes makes the institution which hosts such an application appear to be out of date itself: not an image that any university really wants to promote of itself in this day and age. Universities wish to attract good quality staff members, and one of the ways that they can do this is to emphasise how much they are an exciting and innovative place to work; and to give the impression that job applicants are so unimportant that the facilities given them are over a decade out of date is not at all the way to do this.
Additionally, most universities are concerned that their online presence should present a reasonably uniform, branded, experience to browsers, an aim which CSS has made relatively easy to implement, at least in part. So a part of the institutional website which totally ignores any requirement to uniformity must be a nightmare to those concerned with marketing the university's image.
This is one aspect which has been improved drastically since I used the web application, so I am clearly not the only person to have been concerned about this. Since this has changed, I have not included an illustrative screenshot, but some idea of what the site looked like can be seen here (the Mosaic 1.0 screenshot is the right vintage for the web experience I'm talking about), or here.
Help and Guidance Was Lacking
One of the most serious lacks in software generally is decent documentation. With this web application, how to use it gradually became clearer, but was never spelt out in advance. This affects users in two ways.
The application can contain what are called "branching questions", where what the applicant is asked next depends on answers to previous questions. ("Do you have unspent convictions?" would lead to the follow-up "List them" if answered affirmatively, but could skip to a question about ethnic background if answered negatively, for example.) These questions are clearly configurable - they were considerably different at the sites where I have used the application. Apart from questions usually found on an equal opportunities monitoring form, in both cases this section included custom questions - "Why do you wish to work at University X?", for example. In order to sensibly answer such questions, it can be extremely useful to know what all the questions are in advance, a trivial matter with a paper application form. But the applicant only sees the next question when the previous one is answered. Only once something has been entered for every question is is possible to see all the answered questions (i.e. without the branches that were not followed), by clicking on the "Print summary" link, which displays a web page containing the whole of a user's application - useful for printing out a copy to read offline or to store for future reference (e.g. for re-reading on the way to an interview); this is not obvious.
The second question which is hard to answer at the start is how an application is submitted. Submission actually happens as follows. The questions are organised into sections. Once the applicant is happy with a section, they can tick a box marking it complete. (This doesn't prevent them from coming back and changing it again later.) Once every section is complete, a new button appears on the summary page (not the summary displayed by "Print summary" in the previous paragraph, but the page listing the sections) to submit the application. At this point, it is still possible to edit the answers, but it is not clear whether edits made after submission will be received by the Human Resources department.
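The submission gate described above amounts to a simple rule: the submit control exists only once every section is ticked. A minimal sketch (class and method names are my own invention, not the application's) might look like this:

```python
# Hypothetical sketch of the submission logic described above: the
# submit control only becomes available once every section is ticked
# complete, and completing a section does not lock it.

class Application:
    def __init__(self, sections):
        self.complete = {name: False for name in sections}
        self.submitted = False

    def mark_complete(self, section, done=True):
        # Sections can be re-opened at any time - even, confusingly,
        # after submission, which is the ambiguity noted above.
        self.complete[section] = done

    def can_submit(self):
        return all(self.complete.values())

    def submit(self):
        if not self.can_submit():
            raise ValueError("all sections must be marked complete")
        self.submitted = True
```

The gate sensibly prevents accidental submission; the usability problem is simply that nothing tells the applicant this rule in advance.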
While this is a sensible way to organise the form to prevent accidental submission, it would be useful to be told all of the above beforehand. Otherwise, the obvious supposition is that submission is automatic, and will simply occur when all the sections are marked complete.
No Formatting Was Available for Text Input
Many people use formatting - particularly bold and italic - to make it easier to pick salient points out from their CV. I have tables in mine, mainly because I have worked on over a dozen different named projects in my current post, and a table is a convenient way to summarise the projects by name, date and my contribution. So if the word-processed CV is replaced by a form, applicants will want to be able to duplicate as much of that formatting as possible.
Clearly, this ability is possible to implement: I am typing this blog entry in a JavaScript editor which allows most of this formatting via familiar icons (to include tables, the direct HTML entry option needs to be used). This sort of interface will be familiar to many job applicants in academic circles: it has become commonplace through editing blogs, wikis and website CMSs.
So why does the web application require the entry of data as unformatted text? Presumably because it makes the data easier to store in a database, though reducing formatting to HTML markup, as the Blogger editor does, leaves it in just as good a form for storage as straight text. Other decisions about the design of the application seem to have been made because of the necessity of storing the information in a database (see below), but this is surely unnecessary in 2010: storage space is relatively cheap.
Incidentally, the JavaScript editor in Blogger is not perfect by any means - the first draft of this post was lost when I used Ctrl-Z to reverse a change, saw the whole post disappear, and then watched it become unrecoverable as the autosave function ran and saved the deletion! Nevertheless, it is massively easier to use to produce information which matches the job applicant's requirement: to present as good a picture of him or herself as possible.
Bizarre Usability Choices
With a job application, large numbers of dates often need to be entered: start and end dates for employment, dates when qualifications were obtained, and so on. With qualifications, it is common to take several at the same time when in school, which effectively means that the same date needs to be entered multiple times. With this in mind, software for handling job applications needs to be designed to allow dates to be entered easily, preferably by copy and paste from existing documents.
However, that isn't the case - and not just in this specific application. There seems to be something of a fashion at the moment for poorly designed date entry controls, which require the selection of the day of the month, the month, and the year from three separate drop-down menus. While it's usually possible to short-cut the tedious process of finding the right value for each (select the first menu, then type the day, Escape, enough letters of the month to identify it uniquely, Escape, and the year), it's not as simple as copying a date from another document, especially if it needs to be done several times.
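The copy-and-paste-friendly alternative is straightforward: accept a pasted date string and try a handful of common formats, including month-and-year-only forms. A minimal sketch, under the assumption of an English-locale applicant (the format list here is my own, not the application's):

```python
# Sketch of a paste-friendly date field: try several common formats in
# turn instead of forcing three drop-down menus. The format list is an
# assumption for illustration.
from datetime import datetime

FORMATS = ["%d/%m/%Y", "%d-%m-%Y", "%d %B %Y", "%B %Y", "%m/%Y"]

def parse_date(text):
    """Return (year, month, day) for the first format that matches;
    day is None when the input gives only a month and year."""
    text = text.strip()
    for fmt in FORMATS:
        try:
            parsed = datetime.strptime(text, fmt)
        except ValueError:
            continue
        # Only report a day if the matched format actually contained one.
        day = parsed.day if "%d" in fmt else None
        return (parsed.year, parsed.month, day)
    raise ValueError(f"unrecognised date: {text!r}")
```

Crucially, "June 1998" is accepted without inventing a day of the month - exactly the case the form's "use the first of the month" hint fumbled.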
When I made the applications, there was a difference between the two sites, which now seems to have disappeared: one of them actually required the input of the day of the month as well as the month and year, with the rather strange hint that the first of the month should be used where the applicant didn't know the exact day. How many people can actually remember the exact day on which they obtained a qualification twenty years ago? And is it even a well-defined date - should it match the day of the final exam, the publication of the results, the graduation ceremony, or the arrival of the official certificate?
At Least One Bug in Basic Input Processing
In a job application, there are likely to be a fair number of lengthy textual sections - responses to directions such as "Describe your responsibilities in your current post", "Indicate how you fulfil the person specification for the vacancy", and so on. The web application limits the length of these free-text submissions - I'm not sure why this is necessary, as it's very simple to store effectively unlimited text in a database (certainly more than any legitimate applicant is going to submit except by error). The character limits are clearly configurable per question, which makes sense provided the configuration is done thoughtfully: it would have been nice if someone had borne in mind that a limit of 4000 characters for "Describe how you fulfil the person specification for the vacancy", where the specification lists twenty requirements, gives an average of only 200 characters per requirement: not much longer than a tweet.
Leaving that aside, there also turned out to be a bug affecting how the limit was applied. The way I went about answering these questions was to develop an answer in a word processor, because it provides a convenient way to keep track of how many characters have been used, then copy and paste the answer into the web form and submit it. This showed that the word processor and the form's checking code didn't count characters in the same way: I needed to cut 50 or so characters before the form would accept an answer the word processor counted as 3,980 characters against a 4,000 character limit. While irritating, this is arguably outside the control of the developers. Possible reasons include Unix-to-DOS line ending conversion, which would effectively add an invisible character for each line break in the text.
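The line-ending explanation is easy to demonstrate. A word processor counts each line break as one character; if the browser or form converts breaks to DOS-style CRLF pairs on submission, every break silently costs an extra character:

```python
# Why a word processor and a web form can disagree about length:
# DOS ("CRLF") line endings cost two characters per break where Unix
# ("LF") endings cost one.

text_unix = "line one\nline two\nline three"
text_dos = text_unix.replace("\n", "\r\n")

# Each line break now adds an invisible extra character to the count.
extra = len(text_dos) - len(text_unix)
assert extra == text_unix.count("\n")
```

In a multi-paragraph 4,000-character answer with dozens of line breaks, a discrepancy of 50 or so characters is exactly what this conversion would produce.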
Note: the deadline for the application concerned has now passed, so I was unable to update my existing answers in a way which would have confirmed the continued existence of this bug, or provided screenshots for this post.
However, worse was to come. Having made the effort to cut the number of characters, and persuaded the web form that the input was really under 4,000 characters, I submitted the answer again - apparently successfully. But when I looked at the summary of the application, I discovered that about 20 characters had been snipped from the end of the answer, presumably meaning that the web application didn't count the characters in the input in the same way that it counted them when writing to the database. This, I think, is inexcusable, and should have been found in testing.
It made me wonder what other basic issues had been missed from the software: did it have any protection from SQL injection attacks, for example?
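The standard defence against SQL injection is parameterised queries, where the database driver handles quoting rather than the application pasting user text into SQL strings. A minimal sqlite3 sketch (the table and column names are invented for illustration; I have no idea what the real application uses):

```python
# Parameterised queries: the standard SQL injection defence alluded to
# above. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (question TEXT, response TEXT)")

def save_answer(question, response):
    # The "?" placeholders let the driver escape the values; building
    # the statement with string formatting instead would be the
    # injectable version.
    conn.execute(
        "INSERT INTO answers (question, response) VALUES (?, ?)",
        (question, response),
    )
    conn.commit()

# A classically hostile input is stored as literal text, not executed.
save_answer("duties", "Robert'); DROP TABLE answers;--")
```

With placeholders, the hostile string ends up as an ordinary row in the table; with string formatting, it could have dropped the table - the kind of basic protection a recruitment system holding personal data cannot afford to skip.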
Labels: higher education, human resources, website design