This blog post shares its title with one of the great books about mathematics, by Richard Courant and Herbert Robbins (while not in itself making any claims to greatness, naturally!). That book is the ideal bridge between school mathematics and university mathematics, and should still be read by anyone who is considering studying the subject at university. While most school mathematics is a question of learning how to solve specific types of problems by rote, higher mathematics is not at all like that: that is both the point of Courant and Robbins' book, and the reason for this post.
The nature of mathematics has been an issue for philosophical discussion ever since the ancient Greeks, particularly where the question is about the relationship between mathematics and the real world. I'm not going to try to summarise, let alone add to, the huge body of literature on the subject. What I want to talk about is how to classify mathematics as a part of human learning. Today, it is usually classified as a "science", at least by educators: it is a science as far as GCSE and A-levels are concerned, though in higher education there is often a separate faculty which contains mathematics and IT.
What is a science? - Inductive Reasoning
To say anything about whether mathematics is a science requires some idea of what a science is. The exact details of this are matters of debate, with interesting contributions from people like David Hume, Karl Popper, Thomas S. Kuhn, Paul Feyerabend, and Michael Polanyi (whose Personal Knowledge I was reading while working on a second draft of this post, and who inspired a number of revisions). All these authors are well worth reading if you are interested in the subject. I think it would be fair to say that most of those who consider themselves scientists will believe in some version of the "scientific method": a cycle of theory, prediction (i.e. the design of an experiment with an idea of what should happen if the theory is correct), and experiment, leading to a revised theory if the existing one falls short, which is the source of new predictions, and so on.
The point of this kind of scientific method is that it is supposed to support the use of inductive reasoning, which basically boils down to (OK, is oversimplified to) the idea that it is possible to use the past to predict the future. If something holds for some class of situations in the past, it at least suggests that it will hold in similar future situations: every time I have dropped a ball from my hand, it has fallen towards the centre of the Earth; the next time I drop it, it should do the same. This is in fact a statistical argument. The problem with this is that it is not possible to prove that induction will work (as pointed out by Hume), even when the types of statement which it is used to support are restricted.
Mathematics and Deductive Reasoning
But nothing like this is involved in mathematics. There is no experimentation, no expectation that it is possible to prove existing results incorrect (or incomplete), and no inductive reasoning of the kind which is involved in scientific research. Instead, mathematics is the domain of deductive reasoning, and its results are sure and certain (with some possible exceptions, which I will come to later).
In mathematics, results are established using deductive reasoning. The idea is that this consists of a set of rules which can be applied to prove a conclusion from a collection of hypotheses, usually known as "axioms" in this context. Things are rather more complex than this makes it sound. The rules are pretty convincing (for example, one, known as modus ponens from medieval logic textbooks, says that if it's possible to prove one statement A and another statement which has the form "A implies B", then it's possible to prove B too). But they really work best applied in a very formal way, using symbols which indicate the logical relationship between statements. And then establishing any useful theorem is extremely complex, time-consuming, and hard to follow. So in practice mathematicians tend to use relatively informal arguments to establish results, which could in theory be turned into formal demonstrations in symbolic logic - at least, hopefully they could. Such arguments often use well established proof methods, such as reductio ad absurdum (proof by contradiction - also a medieval term), where the opposite of the theorem's conclusion is assumed, and then this is shown to lead to a contradiction, implying that the result of the theorem must hold. In the end, a proof of a theorem is correct if it convinces the worldwide community of mathematicians, though there have been movements, such as the intuitionists, which sought to change what that community accepts as a valid proof.
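To give a flavour of what the fully formal version looks like, here is a minimal sketch in Lean 4 (my choice of proof assistant purely for illustration): the first example is modus ponens, the second is reductio ad absurdum, which relies on a classical axiom - exactly the sort of thing the intuitionists objected to.

```lean
-- Modus ponens: from a proof of A and a proof of "A implies B", obtain a proof of B.
example (A B : Prop) (hA : A) (hAB : A → B) : B :=
  hAB hA

-- Reductio ad absurdum: if assuming "not P" leads to a contradiction, then P holds.
-- This uses classical logic (Classical.byContradiction), which intuitionists reject.
example (P : Prop) (h : ¬P → False) : P :=
  Classical.byContradiction h
```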
As well as the complexities of formalising the idea of proof, another nuance in the brief definition of deductive reasoning is the role of axioms. Basically, mathematicians are free to define any collection of axioms they want, and see if they lead to interesting results. Most mathematicians work with already well established collections of axioms. These often take the form of definitions for convenience - it is much easier to state a theorem "If A is an X, then..." rather than "If A satisfies the axioms X1,X2,X3,X4,...", by using the definition "A is an X if it satisfies the axioms X1,X2,X3,X4,...". The number of axioms is unlimited, and can even be infinite (in which case there would be a rule used to define the axioms, along the lines of "For any mathematical property P, if P is true for 0 and, whenever P is true for n, it is also true for n+1, then P is true for all numbers" - this is an axiom schema from number theory known as the principle of induction).
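As a sketch of how such an axiom schema is actually used, here is the principle of induction written out and applied in Lean 4 (P stands for an arbitrary property of natural numbers):

```lean
-- From "P holds for 0" and "whenever P holds for n it also holds for n+1",
-- conclude that P holds for every natural number.
example (P : Nat → Prop) (h0 : P 0) (hstep : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ k ih => exact hstep k ih
```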
An example would be to work with groups, which are one of the most important mathematical structures; the term "group" has a formal definition, and this is turned into a simple axiom in many theorems: "If G is a group, and ..." as opposed to "If G is a set with a binary operation . with the properties that G has an identity, all members of G have inverses, and . is associative, and ..." - long enough to be cumbersome even without expanding the meanings of the subsidiary definitions of operations, identities, inverses, and associativity. (Another condition, closure, is frequently added to this definition, but this can be, and often is, sensibly subsumed into the meaning of "operation".) And that's without even considering what a set is: set theory is formulated so it defines what can be done with sets rather than what a set is (e.g. it tells you that the union of two sets is a set), and this would hardly fit easily into the statement of a theorem.
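To show how such a definition packages a set of axioms, here is a sketch in Lean 4 of a hand-rolled structure (called MyGroup here - a simplified stand-in, not the definition any real library uses); a theorem can then take the single hypothesis "G is a group" rather than restating the axioms every time.

```lean
-- Bundling the group axioms into one definition: an operation, an identity,
-- inverses, and associativity. Closure is built into the type of `op`, as noted above.
structure MyGroup (G : Type) where
  op : G → G → G
  e : G
  inv : G → G
  assoc : ∀ a b c, op (op a b) c = op a (op b c)
  id_left : ∀ a, op e a = a
  inv_left : ∀ a, op (inv a) a = e

-- A theorem now needs only the hypothesis "grp : MyGroup G" to use all the axioms at once.
example {G : Type} (grp : MyGroup G) (a b : G) :
    grp.op (grp.inv a) (grp.op a b) = b := by
  rw [← grp.assoc, grp.inv_left, grp.id_left]
```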
Axioms
What determines a mathematician's choice of axioms? There is one basic rule: they should be consistent. The reason for this is that it is possible to prove that from inconsistent axioms any statement can be proved. If it is possible to give a model - basically an example - which satisfies the axioms, then they must be consistent (this is another theorem from logic). For many axioms, particularly those collected into definitions, there are huge numbers of models already known. Where a mathematician is seeking to solve a particular problem, and develops axioms related to this problem, the motivation behind the problem may well give an obvious model for the axioms.
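As a toy illustration of "giving a model", the sketch below (in Python, with names chosen just for this example) checks by brute force that the integers 0 to 4 under addition modulo 5 satisfy the group axioms mentioned above; exhibiting an example like this is what establishes that the axioms are consistent.

```python
from itertools import product

elements = range(5)
op = lambda a, b: (a + b) % 5  # addition modulo 5

# Closure: the operation never leaves the set.
assert all(op(a, b) in elements for a, b in product(elements, repeat=2))
# Associativity.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(elements, repeat=3))
# Identity: 0 leaves every element unchanged.
assert all(op(0, a) == a and op(a, 0) == a for a in elements)
# Inverses: every element combines with some element to give the identity.
assert all(any(op(a, b) == 0 for b in elements) for a in elements)

print("The integers mod 5 under addition are a model of the group axioms")
```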
A small warning - where the mathematical objects become really fundamental, it is hard or impossible to prove that they are consistent. There is a circularity about the most fundamental mathematical objects: logic is used to reason about sets, and sets are employed in many aspects of logic, including the definition of "model" which is used in the theorem alluded to in the preceding paragraph. This means, too, that there is no way to prove that mathematics as a whole is consistent, and indeed, that consistency of a set of axioms is only relative to the presumed consistency of more fundamental definitions and axioms such as those of set theory. In fact, the axioms defining set theory (in the form they are normally seen) are the end results of an attempt to restrict the idea of a set to avoid the paradoxes which were first seen at the end of the nineteenth century (for example, the "set which contains all sets which are not members of themselves" - is it a member of itself or not?).
Consistency is the only real requirement, though a mathematician would look for other desirable properties in a new set of axioms with which he or she wishes to work. They should be productive: it should be possible to derive lots of useful (and beautiful, in mathematical terms) theorems. The re-use of ideas in other contexts is important in mathematics, and the productivity of axioms is one of the things which makes this possible. The definition of a group is a prime example, as it has been re-used in hundreds if not thousands of contexts, and lies behind many important applications of mathematics in physics. Note the word "useful" - from any collection of consistent axioms, it is possible to derive an infinite number of trivialities (such as "1=1"), in which mathematicians are simply not interested.
But perhaps even more important to many mathematicians is the idea of beauty. It is hard to explain why some mathematics is elegant or beautiful, but it is certainly a value which is recognised by those who work in the field. Sciences do in fact have a fairly similar concept. Basically, I think beauty is about how smoothly the mathematics seems to flow, so that the truth of what is being proven seems obvious, at least in retrospect.
But What is the Point of Deductive Reasoning?
One philosophical criticism of mathematics is that it never tells you anything which is not effectively included in the axioms involved. But working out the implications of a set of axioms is not always easy to do, even when a specific desired endpoint is set: this is illustrated by the time it took for mathematicians to find a proof of Fermat's Last Theorem. In general, it isn't possible to look at a statement and say whether it is provably true, provably false, or unprovable from a given set of axioms.
What axioms do is to define a certain kind of structure, which can then be investigated with deductive reasoning. If we make it an axiom that some aspect of the universe has a particular structural characteristic, then all the mathematics which has been devised to describe that type of structure becomes available, both to deduce information about that aspect of the universe and to predict other properties of it which can (in principle) be tested. Mathematics also includes the study of the absence of structure, that is, randomness: statistics.
Why Does Science Work?
One of the big questions is basically this: why is mathematics as applied in science so good at describing the universe? There isn't necessarily any connection between our thoughts and the behaviour of a black hole thousands of light years away: it is entirely conceivable (and possibly even likely) that our brains will never be capable of understanding the way the universe works at its most fundamental level. The main reason for doubting our ability is that we are insignificant parts of the universe, and therefore indubitably less complex than its whole; our understanding it seems even less likely than an amoeba understanding human culture. But this means that the success of science desperately needs an explanation, and I think that defining mathematics as the study of structure provides one.
It is important to realise that we don't directly understand the universe. One of the things that human beings are unbelievably good at is the perception of patterns (it's why we see pictures in the cracks in a wall or in the clouds, for example), and if we can see one in a particular phenomenon, that is a way to find a structure which models that phenomenon. This structure can be investigated using mathematics. Of course, there is no guarantee that the pattern is correct, as is the case with an optical illusion. What mathematics as an analysis of structure makes possible is the use of deductive reasoning to work out what a possible structure in the real world would mean - in effect, it is what enables the design of experiments to check whether the structure is valid or not. When a pattern is false, the mathematical predictions will (in all likelihood) turn out to be false, and the perceived structure will have to be abandoned for another possibility.
One obvious example of how this works is that of symmetry. In mathematics, a symmetry is going to be linked to a kind of group, which describes how objects with that symmetry can be changed (e.g. by reflection) in a way that appears not to change the object. So given an apparent symmetry in the universe, the whole of group theory can be used to understand it better. Such groups and symmetries are used in the theory which underlies particle physics (known as the Standard Model), and which leads to the predictions which are being tested by the Large Hadron Collider.
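As a small worked example (a sketch, with an ad hoc representation of the symmetries), the following Python lists the eight symmetries of a square as permutations of its corner labels and checks that composing any two of them gives another symmetry - the closure that makes them a group.

```python
from itertools import product

# Corners of a square labelled 0..3 in cyclic order; a symmetry is a permutation,
# written as a tuple p where corner i is sent to corner p[i].
rotations = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]
flip = (1, 0, 3, 2)  # reflection swapping corners 0<->1 and 2<->3

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

symmetries = rotations + [compose(r, flip) for r in rotations]

# Closure: composing any two symmetries of the square gives another symmetry.
assert all(compose(p, q) in symmetries for p, q in product(symmetries, repeat=2))
print(len(symmetries), "symmetries of the square (the dihedral group)")
```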
A second is evolutionary theory. In Darwinian evolution, survival of the fittest is related to probability: an individual which is better fitted for their environment is more likely to survive and produce descendants. And there is a great deal of mathematics about probability and statistics, so this can then be used to look at evolution. This mathematics was used to come up with an evolutionary description of altruism, explaining why it can be better in the long term for individual organisms to act against their personal interests in order to improve the odds for survival of close relatives who will have similar genes.
While mathematics is an important part of science, its processes and methods are different. The differences arise from the need to connect to the real world in scientific work; mathematics can be entirely abstract. And many mathematicians prefer it that way, too, even if it is an attitude which has a less than sympathetic reception from those people who tend to fund research, who want work which can be useful now rather than if and when an application is found for it.
Modern Mathematics and Science
There is a way in which scientific method may creep into mathematics, as I mentioned earlier. Some of the most complex results proved since the mid-seventies rely on computers to look through large numbers of cases and check that a result is valid in each case. This was done, for example, in the proof of the four colour theorem, which states that a map consisting of regions drawn on a flat piece of paper can always be coloured with at most four colours so that no two adjacent regions use the same colour. This is proved by looking at the ways in which a map can be turned into a simpler map which can only be coloured with four colours if the original map can be. The proof can be completed by looking at all the maps which cannot be simplified in this way, and seeing that they can be coloured in; the number of these is large but finite, so computers were used to check all of them. This means that the proof is basically dependent on the computer software and hardware: all the calculations carried out must be accurate if the result is to be proved. I don't know what methods were used to establish the accuracy of the software and the hardware involved in this particular case, but ensuring that there isn't a fundamental design flaw in all the hardware available to test the result is hard when computer chips have become as complex as they now are. (There are methods to verify the accuracy of software and hardware, but bugs and manufacturing flaws are still hard to pick up.) This problem has led some to question whether such results can truly be considered proved; and certainly no mathematician is likely to consider the proofs beautiful.
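To give an (extremely simplified) sense of the case-checking involved, here is a sketch in Python which brute-forces a four-colouring of a small toy map given as an adjacency list; nothing like the real proof, of course, but it shows the kind of mechanical verification being delegated to the machine.

```python
from itertools import product

# A toy map: each region and the regions it borders (listed symmetrically).
borders = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "B", "C"},
    "E": {"B", "C"},
}

def four_colouring(adjacency, colours=("red", "green", "blue", "yellow")):
    """Try every assignment of colours; return one with no clash across a border."""
    regions = sorted(adjacency)
    for assignment in product(colours, repeat=len(regions)):
        colouring = dict(zip(regions, assignment))
        if all(colouring[r] != colouring[s] for r in regions for s in adjacency[r]):
            return colouring
    return None

print(four_colouring(borders))
```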
Polanyi describes mathematics as being physics when it can be applied to models of the real world, and like engineering when it can provide solutions directly applicable to the real world, but that it has more than that in it. I would say that the relationship is not really like that. Mathematics is a method which is immensely important to science, basically forming the whole of a scientist's analytical toolkit. The development of modern science really took off at the point when scientists began to seriously use mathematics for this purpose, rather than basing their thought on non-mathematical philosophical ideas (such as the idea that objects will travel in circular paths if not impeded, as a circle is the perfect shape). But mathematics is not, and can never be, a science.
Art?
So, we have established something which mathematics is not. If it isn't a type of science, maybe it is an art. In some ways, it feels like an art when practised at a high enough level. There is a philosophical question of what type of existence a mathematical object has, and indeed whether it can be said to exist before someone thinks about it, but it is pretty clear that mathematics is not really created from nothing when someone proves a theorem. Mathematicians feel that they discover a result, rather than inventing it, and definitely would not want to believe that it is the act of thinking about it which determines whether a proposed theorem is true or false. Perhaps mathematics is more of a craft than an art. But that is only in the way that most satisfying human activities can be viewed as a craft: there is pleasure to be had from the creation of something beautiful, whether it is a proof of a theorem, or a statue, or a design for an experiment, or a piece of furniture - or even a physical move in a game of football. Similarly, mathematicians may make use of a shared toolkit of techniques, and a shared technical vocabulary, but this is again something shared with huge numbers of other activities which could be labelled a craft. (Indeed, I would go so far as to say that these two properties could be used to define what a craft is.) The technical vocabulary in mathematics is exceptionally highly developed, with symbolic notation and a huge number of words which are given specific technical meanings, as well as an accepted way to add to their number (by creating a definition).
I would say, therefore, that any claim that mathematics is an art or a craft does not capture any of its properties which mark it out from other activities that humans undertake. However, just as with science, mathematics is fundamentally important to many art forms, for similar reasons due to the human appreciation of structure and patterns. Music is perhaps the art form most clearly using mathematics, probably because it is in the main abstract when words are not being sung. It is structured on several levels - form (such as verse/chorus or sonata form), rhythm (both a pulse and patterns such as the "Scotch snap" or the stretching of tempo in waltz time) and pitch (such as the use of a key, or with a twelve note series) for example. Similarly, pictorial art often has structural elements based on perspective (which is all about geometry). This use of patterns is important to our enjoyment of art and, in some cases, to our ability to connect with it in the first place. Of course, art is not just about getting these technicalities right. The spark of creativity which marks art of any quality from the rest needs to be part of any art object as well. Not only that, but great artists will adapt and develop the existing technical ideas, and make something new, which can then act as the ideas which are used by the next generation.
As with science, mathematics may underlie much of the technical side of art. But it is not an art, and this is even clearer than it is to say that it is not a science.
In a Category of Its Own?
My contention, then, is that mathematics is neither science nor art, and only a craft in a sense common to many activities of disparate kinds, but that, instead, it is sui generis. In fact, I would go so far as to say that to call it a science or an art is a category error. For mathematics is a part of the theory of every science and every art or craft, but no science, art or craft forms the basis for any mathematics whatsoever. The inspiration, perhaps, but not the basis. To me, pattern recognition and structure are such important parts of what it means to be human that the study of these things deserves to be in a category of its own.
Monday, 7 November 2011
Tuesday, 25 October 2011
Identity and Access Management Blogs etc.
I thought: I could put together a quick blog post linking to some of the blogs which I follow, and spend a little time trying to fill gaps in the list. But while doing a little searching I found the Planet Identity blogroll (which I'd not seen before) and 360tek's list of blogs. Nothing I could post would be anything like as comprehensive. There's still scope for a post, though...
Planet Identity aggregates over 170 blogs, with about 30 on 360tek's list (most of which are also on the Planet Identity list). I presume the former is no longer updated (it's on an old Sun server, not moved to Oracle). Blogs tend to be evanescent, and it's no surprise that some of the links in the blogroll are dead, or that others have not been updated in over two years. Many of the corporate bloggers have been amalgamated into a single company blog, which suggests to me some developing maturity in the identity market - these companies are making themselves more "corporate", which unfortunately often makes the blogs less interesting. A few of the blogs listed are inaccessible to me as someone pretty much restricted to English language writing, to my shame. My interests are also pretty much UK centred, and I'm not particularly into the latest marketing release from commercial vendors - mainly because getting identity management right is at least as much about good business processes as it is about technology. I'll just list some of the best of those which seem to be live (and which I didn't already know - or did know, but had just been too lazy to pick up and follow).
Where the blog author (if a single person) is also on twitter, I have listed their twitter ID as well as the blog URL.
Identity Networks: The blog of Ingrid Melve, Federation Manager for Feide - a FAM slant, and well worth reading (one of the blogs I really should have been following already)
Identity Woman: Although recent posts are taken up with the naming policies of Google+ (the spate of discussion over pseudonyms on the network being sparked off because Google would not allow an account in the name of Identity Woman), there is a lot of interesting material on this blog about user-centric identity.
Identity Happens: A great blog which is more technical than most of the others in this list. Not updated all that frequently.
Racingsnake: Robin Wilton's personal blog, focusing mainly on public policy relating to security and IAM. He also blogs at Gartner.
Ian Yip's Security and Identity Thought Stream: Good stuff here, too, with a particular interest in why technical security problems arise in the first place.
I use Akregator to read most of the blogs I follow, and I have a fair number of Identity and Security blogs in there. A lot of security bloggers talk about identity - it has become massively important in IT security now that people have started to realise just how insecure most systems become if identity management is compromised.
eFoundations: Not all IAM, but always interesting blog from Pete Johnston and Andy Powell at Eduserv.
UK Access Management Focus (formerly JISC Access Management Team blog): Essential reading if you want to know what's happening in IAM in UK higher education. Maintained by Nicole Harris, a former LSE colleague of mine.
Kim Cameron's Identity Blog: thoughtful posting about identity (from, unsurprisingly, Kim Cameron), most recently (at the time of writing) about how disintermediation might affect identity.
Light Blue Touchpaper: The blog of the security research group at Cambridge University. They often have something interesting, or even controversial, to say (particularly if you believe in bank security). Posters here include Steven Murdoch.
Talking Identity, from Nishant Kaushik: He works for Identropy, so some content is cross posted from their corporate blog. Sensible and pretty authoritative stuff here (and, indeed, there).
Stephan's Ramblings: Another former colleague, who blogs about security generally.
Schneier on Security: Bruce Schneier, security guru (author of one of the best technical books on cryptography), describes himself as "head curmudgeon at the table". Fascinating commentary, and a weekly squid-related post.
Naked Security, the Sophos blog on IT security, has timely posts on most current security stories. Perhaps less identity content than the ones above, but helps to keep up to date.
Not all essential reading comes in blog form, even in 2011, though these web sites also provide feeds.
The security tag at Slashdot: Any Slashdot story tagged as "security" can be seen here, which includes just about any IAM related discussion on the place to go for computer geekery.
Security coverage at The Register: Some may not like the jokey tone of "El Reg" (as it calls itself), but they cover a lot of interesting stories in an idiosyncratic way. The Identity stories have a subject feed here.
Electronic Frontier Foundation: Fighting for rights in the digital world, many of which have some connection to identity.
I follow some other relevant people on twitter:
Robert Garskamp, of IDentity.Next
Christopher Brown, of JISC - eResearch Programme Manager responsible for the Access & Identity Management programme
Rhys Smith, of Cardiff University and JANET, who worked on the Identity Project and the Identity Toolkit with me
John Chapman, also at JANET
RL "Bob" Morgan, University of Washington and Shibboleth (most people involved in Shibboleth seem not to tweet or blog)
I hope this list is useful - but I've probably missed some obvious and interesting blogs...
Saturday, 1 October 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part Three)
Recently, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post three of the three; go to post one, post two.
Augmented Reality
This particular technology has no interesting identity component that I can see - it's just going to be the usual issues of data ownership and, possibly, privacy. However, the nature of augmented reality is such that it is likely to lead to all sorts of new applications which may have privacy issues - in particular, those which allow visitors to tag the online information to add comments, or even graffiti to the augmented presence.
Collective Intelligence
In the educational context, the key point (clear in the example project links given in the report, though strangely not actually mentioned in the main text) is curation of the collected information, as learners and researchers have a need for accuracy. This in turn necessitates some form of identity management, otherwise the curation itself will need curating. This should already be well understood, as it is crucial to much open data already available, so there will be no excuse for not managing it sensibly by 2015.
Smart Objects
This is the use of unique identifiers embedded with an object which can be used (for example) to provide a linkage to a point on the Web. The current technologies for doing this are mainly RFID tags and QR codes. The sample uses discussed in the report don't seem to me to be of huge relevance for most forms of tertiary education specifically, though they will be useful for such tasks as keeping track of sample materials in labs, or the location of medical cameras and sensors in patients. Again, there seems to be nothing much new here in terms of identity.
Telepresence
The future of video conferencing is telepresence, which has had some high profile demonstrations; the name suggests the point, which is to make it appear to each participant that the others are present at a shared conference space (which may of course be a purely virtual location). As with smart objects, I have some difficulty thinking of applications for this technology specific to the education sector (surely it isn't going to enhance remote learning all that much?). I also experienced the nightmare which was UK higher education videoconferencing about a decade ago - too little bandwidth, even in the dedicated video suite that was needed, made it unusable, and far less good than Skype video calls are now. And I know how difficult the Open University found it when they first made it a requirement for some of their courses for students to have access to a fairly basic standard of computer equipment. So my feeling is that the date suggested for this is rather optimistic, as institutions will be conservative about the widespread adoption of something which has high bandwidth and processing requirements without extremely clear benefits for students and researchers. Small scale adoption where it's useful to research, possibly - the final use suggested for the technology is for the exploration of locations difficult or impossible for human beings to access. Generally, though, my feeling is that the report is optimistic over the timescale needed for the hardware and bandwidth requirements to become sufficiently easy to meet.
This is a technology with clear identity elements - the participants in a conference will be identified to be able to take part (in the main), and will be releasing large quantities of information about themselves to the other participants. That said, it seems unlikely that most uses will provide any new or even particularly unusual use cases for IAM.
General Conclusions
Overall, it seems to me that there is little which is likely to provide new challenges for IAM in the adoption of any of these technologies. However, there is ample scope for developers to get the IAM components wrong for components of both the tools needed to deliver the technology and of applications which are built to make use of them for education and research. This is especially important as many of those involved in delivering the applications and tools will not be experts in IAM themselves. We often see elementary errors in security particularly: while I was typing this, I was alerted to a blog post linking to a paper about insecurities in Chrome browser extensions - exactly the kind of problem which a software developer can create through lack of thinking through the implications of what they're doing, or by trying to re-invent the wheel because they don't know that others have done it before them.
The potential problems are compounded because the hardware being used by students and staff is going to be more and more their own rather than under the control of the institution, with all the potential for poor security as self-support becomes the norm. The multiplicity of devices and the fragmentation of the software market that it entails will make it much harder to make fixes; the days when an institution can have a "standard build" on every PC with a single supported web browser which can be updated at need from central servers are numbered. As the report concludes, "The computer is smaller, lighter, and better connected than ever before, without the need for wires or bulky peripherals. In many cases, smart phones and other mobile devices are sufficient for basic computing needs, and only specialized tasks require a keyboard, large monitor, and a mouse. Mobiles are connected to an ecosystem of applications supported by cloud computing technologies that can be downloaded and used instantly, for pennies. As the capabilities and interfaces of small computing devices improve, our ideas about when — or whether — a traditional computer is necessary are changing as well."
It is also possible that some applications built for education using these technologies could present some challenges for IAM. It seems likely that no one can yet predict the uses to which these technologies will be put, and I'd suspect that the most interesting uses will be ones that no one has yet invented. There may well be other technologies which will prove more revolutionary in tertiary education in the UK than any of the twelve listed here, but which we don't know about.
A common thread to many of the technologies is linking individuals or information - and sharing is obviously a potential source of privacy issues. Indeed, the tone of the report seems to suggest that within the next few years, privacy will be an outmoded idea; we will all be willing to share just about everything online. Is this true, or even likely? While naive users continue to share everything that occurs to them without caring about or understanding security settings (e.g. on Facebook), there is at least some evidence that many users are now thinking more about what they post and what it might mean for them later on, when read by a prospective employer, for example. The recent "nym wars" (usefully summarised here with discussion relevant to how privacy should be seen in the future) show that many people put a high value on privacy and the possibility of keeping a real world identity secret in particular. To the list of challenges summarised at the end of the report, I would add the investigation of the developing attitudes to privacy and how they should affect implementation and use of the technologies from this report in tertiary education.
Tuesday, 27 September 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part Two)
Recently, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post two of the three; go to post one, post three.
Game Based Learning
This is the first of the second set of technologies, due for adoption in two or three years. As far as access is concerned, there are two points to make. First, since in the tertiary education context games used for learning will presumably be connected to courses, the access policies will basically match those for existing VLE services. Indeed, it is likely that if adoption is widespread, many institutions will wish to embed games in their VLE, if they use one. So there should be existing processes which determine who has access to a game (at several levels: to play, to access scoring and other records, and to manage it), and there should be existing procedures to implement whatever is required for access for those people who should be permitted it - adding identifiers to an access control list from a student information system database, for example.
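A hypothetical sketch of the sort of provisioning this implies (the names here - enrolled_students, game_acl and so on - are invented for illustration, not taken from any real VLE or student record system):

```python
# Hypothetical example: populate a game's access control list from course enrolment
# records, with different roles granted different levels of access.
enrolled_students = {"stu001", "stu002", "stu003"}   # from the student information system
course_staff = {"staff01"}                           # lecturers and tutors on the course

game_acl = {
    "play": set(),         # can play the game
    "view_scores": set(),  # can see scoring and other records
    "manage": set(),       # can administer the game itself
}

def sync_acl(acl, students, staff):
    """Grant students play access and staff full access, mirroring existing VLE policy."""
    acl["play"] |= students | staff
    acl["view_scores"] |= staff
    acl["manage"] |= staff
    return acl

sync_acl(game_acl, enrolled_students, course_staff)
print(sorted(game_acl["play"]))
```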
The second point is that how access controls are enforced will depend on the game environment and its implementation. The links given in the report are not explicit about how their games are implemented, though one of them is clearly using Flash, and another is embedded into social networking and will presumably also use Flash. Other candidates for game development will include HTML5. It seems likely to me that most of these games will be browser and/or app based, and so will have authentication methods which are of these types, which could utilise existing methods such as Web SSO technology for authentication.
As with the technologies in the first part of the report, there will be privacy requirements which will need to be insisted on in the development of games. In many online games, users are interested in league tables for players; will these be shareable? If games have a collaborative element, how will the information sharing required for this work - and how will it affect assessment? What about the sharing of hints and tips - another activity common in gaming communities?
Learning Analytics
Essentially, this describes the analysis of the large quantities of data generated by student activity on the Internet - including activity not necessarily considered to be part of a course, such as social network activity. Stated like this, as it is in the report, it is immediately clear that there are implications for student privacy in this work. Employees already complain about similar activities (on a smaller scale) by their employer, such as the monitoring of Facebook use (one of the issues on the US-based Privacy Rights Clearinghouse Workplace Privacy worksheet, to pick just one example of a discussion of this practice; one particular service offering to do this for employers is discussed on ReadWriteWeb).
There are other issues, too. As one of the links from the report says, "Both data mining and the use of analytics applications introduce a number of legal and ethical considerations, including privacy, security, and ownership". It then goes on to suggest that these concerns will decrease over time, due to the introduction of new tools and "as institutions are forced to cope with greater financial constraints that will make the careful targeting of available resources increasingly important". I am not sure I agree, particularly outside the US - privacy has long been much more important to legislators in Europe. It will be interesting to see how this develops in the UK, and how students over the next few years feel about it. And learning is not the only field in which analytics of this type could be used: how about research assessment in 2016? Or your annual appraisal in 2015?
New Scholarship
This topic is basically about the use of non-traditional means of publishing for research (blogging, podcasting, etc.), rather than (or, more usefully, alongside) peer reviewed academic journals. This is really an extension of traditional methods of exchanging ideas within the academic community (but consuming less coffee). It is actually a change which has been going on for quite a while: when I was a graduate student in the early 1990s, worldwide communication by email for special interest groups was just beginning to be embraced by members of the department.
The interest for IAM is not in the authentication side of things; shared access blogs, authenticated comments, and so on are all commonplace. There are two issues that immediately come to mind. The first is the question of how controlled such new media are, and how an institution can protect its reputation. The LSE, where I worked until recently, was embroiled in controversy over just this issue earlier in 2011. Of course, universities have been embarrassed by the utterances of their staff for many years; people don't need a blog in order to say controversial things. But it is becoming harder even to keep track of the places where an institution needs to check to find out what those who are affiliated to it are saying in public. After all, a director doesn't want to discover a budding problem only when a tabloid reporter contacts them.
The second issue is one of authenticity. How is it possible to be sure that a blogger is really the person you think he or she is? Linking published journal articles to individuals is hard enough, without having to manage every staff member's personal blog or blogs - hence the ongoing Names project. This is an issue which is only going to become more difficult.
Semantic Applications
This technology is about the intelligent use of material from online sources, usually the open Internet but possibly including protected content, to make connections between items of data automatically, without intervention from human researchers. (This is also, and perhaps better, known as Linked Data.) This may not seem to have any identity component whatsoever, but in fact there are two issues: data provenance (ownership and authenticity), as discussed above, and allowing access for the intelligent applications to closed content. The second of these is a technical issue, and should be readily soluble in the timescale suggested for the adoption of semantic technology, two or three years.
It's fairly clear that many of the promoters of Linked Data are not keen on the use of closed content, but there is no particular reason why (parts of) the data processed need to be accessible to everybody on the Internet; obviously the ability to use it for widespread use will be compromised, but that may well be considered a small price to pay (see also the entry on the topic in the Structured Dynamics Linked Data FAQ).
Thursday, 22 September 2011
Identity and Access Management and the Technology Outlook for UK Tertiary Education 2011-2016 (Part One)
Last week, the NMC Horizon project published its report, Technology Outlook for UK Tertiary Education 2011-2016: An NMC Horizon Report Regional Analysis, produced in collaboration with CETIS and UKOLN. The last ten years have seen massive changes in the ways in which UK tertiary education institutions handle authentication, identity, and access controls, and I would like to take a look at each of the technologies it mentions and discuss whether their adoption will force or encourage further change.
The report groups technologies into three groups of four, the first group being those which are imminent (time to adoption one year or less), then those which are likely to be adopted in two to three years, and finally those which the contributors to the report expect to be adopted in four to five years. I will devote a single post to each group of four. This is post one of the three; go to post two, post three.
Cloud Computing
The report describes this as an almost ubiquitous technology. The main access challenges must therefore have been solved, surely?
However, a quick glance at the project links given in the section to relevant initiatives in the sector shows that access to cloud resources is not as simple as it might be. The Bloomsbury Media Cloud requires an email to request the setting up of an account, and considers access to be sufficiently difficult to have created a video in its user guide section to show how to access content (and the video itself is hard to access, giving me a 404 not found error when I tried it). "Investigating and applying authentication methods" is one of the objectives of the project, but I would suggest that more work is needed. But that is better than the second link, to Oxford's Flexible Services for the Support of Research, which does not exist at all. They really should have employed a more persistent URL: it has moved from a "current research" directory to a "research" directory, here. This is a far less glossy project, more technical in content, as can be seen from the Installation documentation, which describes access control in the following terms:
"Security Groups: users can define groups with access rules indicating what port can be
accessed from which source IP(s). Multiple Virtual Machines (VMs) can then be instantiated and associated to a defined group. In this way, a security group works analogously to a firewall put in front of one or more VM. Crucially, such a 'firewall' is managed directly by the owner of the VM(s)".
Flexible, but a bit of a challenge for those with little knowledge of virtual machine firewall configuration. The final project link is to HEFCE's funding announcement of shared cloud services for higher education institutions.
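For those unfamiliar with this style of access control, a security group rule of the form "port X may be reached from source addresses Y" has much the same effect as a conventional firewall rule. As a rough illustration only (this is an analogy, not the syntax the Oxford service uses), allowing SSH access from a single address range and refusing it to everyone else would look something like
# iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP
The difference, as the quotation says, is that in the cloud setting the owner of the virtual machines manages these rules directly, rather than a central network team.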
So there is still work to be done, mainly in terms of the user experience. Clearly this aspect matters to commercial providers of cloud-based services, such as Google Docs, and it is reflected in the frequent questions on the relevant user mailing lists about integrating Shibboleth, a single sign-on product used in many tertiary education contexts, with the authentication regimes imposed by these providers.
Mobiles
Again, mobile technology is being adopted rapidly by many institutions. The main IAM-related issue is how to ensure security, and this is quite well understood - though that hasn't prevented implementations of other systems from having embarrassing security holes which should have been avoided. With mobiles, the task is more about making sure that known problems are dealt with than about extensive research to work out what should be done. An introduction to the issues can be found here (among many other places). Since most of the resources being discussed are web-based, issues of integration and single sign-on are not likely to be important, as they will have been solved for traditional web clients (e.g. by using a standards-based SSO solution).
Open Content
In the past, I promoted the idea that even when repositories have open access, there is still a need for authentication and authorisation, unless the repository really allows anyone to anonymously store any item, with no audit trail: a situation which is not likely to happen in the academic community. Similar remarks also hold for open content. The holders of the content will want to retain at least some control over much of the content being posted. In fact, deposit is likely to be quite restricted, in order to retain a degree of academic respectability and to keep control of intellectual property rights. This is true of the example project links which are given in the report, except for one: P2PU. There, all that is needed to post content, either comments on existing teaching material or a course of your own, is a login. This can be an OpenID identity, or one derived from the completion of a registration form.
As is the case with mobile use, there is little new here; developers of open content repositories just need to be sure to apply known security principles correctly to safeguard the holdings that will fill them.
Tablet Computing
Here, the main point is the potential for the use of apps for educational and/or research purposes. This means that apps are the main issue for IAM in this context: how an app (and any associated remote data stores) handles identity, privacy, security and so on will be the major concern. As with the previous two technologies, it seems that the principal focus for IAM work here will be on developer/deployer education rather than finding something new. Heterogeneity is a potentially more serious issue for tablets than for less advanced mobiles, because apps can take non-standard approaches to IAM, and services provided by institutions will need to be flexible in order to cope: but this should not be at the expense of security and privacy.
Overall, there is an excellent discussion (Part One, Part Two) of what Frank Villavicencio calls the "consumerization of IAM" - the consequences for Identity and Access Management of the explosion in the use of different devices and methods for accessing systems. Although it deals with the commercial market, much of what he says is going to be at least as applicable to FHEIs. With all these new devices and methods for accessing services, a user's multiple roles (as student or employee, as a private individual, as a consumer, etc.) become immensely important, whether they want to merge them or keep them separate. As with much of Identity, the issue is precisely how to manage the trade-off between privacy and convenience. The main recommendation of the Identropy discussion is that organisations need to embrace this change rather than trying to bury their heads in the sand; this applies even more to FHEIs if they want to meet the expectations of their students, who will expect them to live in this decade, not the last.
Saturday, 2 April 2011
Installing Debian on MSi CR629 (Novatech I3) laptop
I recently bought a new laptop from Novatech, without any operating system pre-installed (it turns out to be labelled an MSi CR629). I would, by the way, recommend Novatech as a supplier, particularly for people who want to buy computers which don't have Windows pre-installed. The directions I give will probably be considered a bit terse by a first-time Linux installer, as I mainly want to record how to obtain a working Linux installation on this particular hardware, not to replicate other people's documents which describe how to install Linux of various flavours.
This post will describe the steps I went through to get a working Debian Linux stable installation on the laptop, for the guidance of any others who might wish to do the same. I might move to testing at some point, rather than stable; this is something I haven't decided yet.
I spent some time beforehand thinking about the Linux distribution to use. The short-list was Gentoo, Debian, and OpenSuse. I rejected Ubuntu, which I have used for a long time on my desktops, because I don't like some of the recent changes and the direction the project seems to be going in, in terms of user interface design and flexibility. I ended up rejecting Gentoo after actually attempting an installation; I have in the past spent too much time manually configuring xorg.conf, and I have no desire to do so again. I never got round to trying OpenSuse...
1. Create CD and Install Debian Linux
I did this on one of my existing computers. I downloaded the installation image via the Debian website, and burnt a CD using Brasero on my existing Ubuntu desktop. Then I booted up the laptop, inserted the CD, and rebooted, following the instructions on screen. The network install works fine, though it must be done with a wired internet connection (see below on how to add wireless support).
I chose to partition manually, though there is no particular need to do so. This was mainly so that I could leave some empty space on the disk, possibly for installing Windows later, possibly for exploring other Linux distributions. But I created a small /boot partition, a larger-than-recommended swap (as I've had problems in the past from too little swap space), and a large / partition (having also suffered from machines with many partitions whose sizes did not work out, because the needs of the server did not match the assumptions of the installer).
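For anyone unsure what such a layout looks like in the end, the result is along these lines (the device names, filesystem types and sizes here are illustrative only, and the Debian installer will normally record partitions by UUID in /etc/fstab rather than by device name):
/dev/sda1  /boot  ext3  defaults            0  2
/dev/sda2  none   swap  sw                  0  0
/dev/sda3  /      ext3  errors=remount-ro   0  1
with some unallocated space left at the end of the disk for later use.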
Once the installation process is complete, follow the menu item to restart, remove the CD, and let the machine boot from the hard drive. With the exception of some hardware accessories, this process worked fine.
Some hardware problems will only become apparent over time; there are several pieces of kit integrated into the laptop which I have not used yet, and even some which I am unlikely ever to use (the MMC card reader). A couple of items did immediately need fixing to work with Debian, and dealing with these is probably the most technical part of the installation.
2. Wireless networking
The laptop comes with Ralink RT3090 wireless hardware, which has known issues with Linux drivers. But before we come to that, the device is bizarrely switched off by default. If you press Fn and F8 simultaneously, wireless networking is turned on, and the status light second from the left below the mousepad should turn green.
Next, install the basic wireless software. Open a terminal, and as root or using sudo, run
# apt-get install firmware-ralink wireless-tools
(Using Synaptic is of course an acceptable alternative.) Enable the drivers by restarting the network manager, either by rebooting or by running
# /etc/init.d/network-manager restart
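It is worth checking at this point that a wireless interface is visible at all. The wireless-tools package installed above provides iwconfig for this; run
$ /sbin/iwconfig
and, if a driver has bound to the hardware, an interface (typically named wlan0, though the name may vary) should be listed with wireless extensions.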
The software package named firmware-ralink contains several drivers, some of which will work with the RT3090 hardware, but none of which allow stable wireless connections. They are however needed to establish a connection. For a more stable connection, download the proprietary driver for the RT3090 directly from Ralink's support website. You will need to accept the licence and enter a name and email address in order to download it. To install, you will need to be root or use sudo.
# unzip 2010_1217_RT3090_LinuxSTA_V2.4.0.4_WiFiBTCombo_DPO.zip
# cd 20101216_RT3090_LinuxSTA_V2.4.0.4_WiFiBTCombo_DPO
# make
# make install
(The original text here included a link to a website where a .deb file could be downloaded; I hadn't taken in that this was Ubuntu-only and would not work in Debian.)
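Before rebooting, it is worth confirming that the compiled module has been installed somewhere the kernel can find it. Something like
# modinfo rt3090sta
should print the module's details; if it is reported as not found, running depmod -a as root and trying again may help (whether this is needed depends on what the driver's make install target does).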
Rebooting is probably the easiest way to see if the driver is picked up. You can see a list of which drivers are loaded using
$ lsmod | grep -e rt2 -e rt3
(you don't need to be root to run this). If rt3090sta is not listed, you need to add it (as root):
# modprobe rt3090sta
This immediately got the wireless working, searching for networks to connect to, and with a reasonably (but not perfectly) stable connection.
To force the driver to be loaded on booting the laptop, run the following command as root:
# echo "rt3090sta" >> /etc/modules
This simply adds the driver's module name to the list of kernel modules which this file tells the kernel to load on boot. (N.B. This does not always appear to work; if not, run the modprobe command again to load the driver manually.)
(These instructions are based on those here, which, although they didn't work for me as written, gave enough clues for me to get wireless working.)
3. Integrated card reader
Inserting an SD card into the card reader (second drawer from the front on the left hand side; you need to remove the cover first) does nothing. This is because the driver for the reader is not included as a module for use with the current Linux kernel - there is a kernel bug report, but its advice is complicated and confusing, and easier-to-follow instructions are in fact available elsewhere. The card reader is a USB device, and can be seen with
$ lsusb
The output should include a line like:
Bus 001 Device 005: ID 0cf2:6250 ENE Technology, Inc.
which indicates the manufacturer of the hardware. These instructions are intended to get card readers made by this manufacturer to work in Debian Linux: if the hardware for the laptop has changed and a different card reader is included, the method will be different.
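(If you want to check for this particular device without reading the whole listing, lsusb can filter by vendor ID; for this ENE reader that is 0cf2, so
$ lsusb -d 0cf2:
should show just the card reader, if present.)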
You will need to be root to install the driver, which needs to be compiled as a kernel module.
First, download the driver here. The downloaded zip file needs to be unzipped to a particular location:
# unzip -d /usr/src/keucr-0.0.1 R100_02_ene_card_reader.zip
Create a file named /usr/src/keucr-0.0.1/dkms.conf containing:
PACKAGE_NAME="keucr"
PACKAGE_VERSION="0.0.1"
CLEAN="rm -f *.*o"
BUILT_MODULE_NAME[0]="keucr"
MAKE[0]="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"
and then use dkms to build and install the driver:
# dkms add -m keucr -v 0.0.1
# dkms build -m keucr -v 0.0.1
# dkms install -m keucr -v 0.0.1
# echo "keucr" >> /etc/modules
The last line means that the module will be loaded on boot. To load it manually in the current session use:
# modprobe keucr
Test by inserting a card; it should now be recognised and automatically mounted. Whenever a new kernel is built/installed, this driver will break and will need to be re-installed:
# dkms remove -m keucr -v 0.0.1 --all
# dkms add -m keucr -v 0.0.1
# dkms build -m keucr -v 0.0.1
# dkms install -m keucr -v 0.0.1
# modprobe keucr
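A quick way to see which kernels the module is currently registered and built against, both before and after a kernel upgrade, is
# dkms status
which lists every dkms-managed module (here it should include a keucr, 0.0.1 entry) together with the kernel versions it has been built for.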
4. Mousepad tapping
Not everyone likes mousepad tapping (that is, being able to tap on the pad to simulate a mouse button click, rather than using the bar at the bottom of the pad), and it is turned off by default in GNOME (the default desktop environment in Debian). Enabling it is much simpler than the preceding tasks: simply use the menu at the top of the screen; select System/Preferences/Mouse, go to the Touchpad tab, and tick "Enable mouse clicks with touchpad". This should take effect immediately.
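As an aside, for anyone not running the full GNOME desktop, or who prefers the command line, much the same effect can usually be achieved with the synclient tool from the synaptics touchpad driver (this assumes the xserver-xorg-input-synaptics package is installed, and the setting does not persist across sessions unless added to a startup script):
$ synclient TapButton1=1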
5. Webcam
Like the wireless hardware, this is switched off by default. I spent a while trying to work out what the problem might be, before turning it on, which can be done by pressing Fn and F6 simultaneously. When you do this, you should straightaway find:
$ ls /dev/video*
returns
/dev/video0
and that you can see yourself if you start the Cheese application (select Applications/Sound and Video/Cheese Webcam Booth). Additionally, lsusb will now show an extra entry which includes the manufacturer's name: "Acer, Inc".
You can turn the webcam off again with the same key combination.
Update: Fixed some errors in the wireless networking section.
Update 2011-10-13: Updated URL for Ralink Support.