I was reading more about this company that was selling the cheap Linux notebook, and getting lots of returns.
I think this was in reference to Netbooks (not "notebooks," these are smaller and lower powered than a traditional laptop) being sold by MSI. MSI is primarily a hardware company that targets OEMs and people who would assemble their own computers (like me) and so doesn't have much experience making usable operating systems.
Apparently, they (foolishly) decided to use a custom Linux distribution rather than something like Ubuntu. It seems like the problem is more one of MSI giving people a bad installation rather than an inherent problem of Linux.
This particular article was pointing out that people who bought this in the first place were more adventurous and more knowledgeable than most computer users to begin with, but still could not deal with it.
Again, I think that this was more a problem of a poor install than a fundamental problem of Linux.
I would amend to say "Linux is a great choice for extremely technically sophisticated users who prefer being as far as possible from the mainstream."
That has certainly been the traditional user-base, and is still a significant part of the development community (Richard Stallman refuses to use the *Web*), but there is effort going into changing that. With the increasing popularity of Linux on these Netbooks (would this story even have been possible a few years ago?) as well as cell phones, there is a lot of effort going into usability improvements.
The great thing about open source programs is that it is very hard for useful programs to "just die." If a commercial program loses its corporate overlord, it can fade out and wither away. If a company gets bought up or out-competed, applications can disappear. This has been the Microsoft strategy. The reason they are so scared of Linux and open source is that even if you kill every developer of open source programs, the code is still there, and anyone with the knowledge and inclination can work on it.
Since most open source programs don't have the burden of needing to make money off of their direct sale, they tend not to get worse for the sake of adding features. Look at any version of Norton after around 2003. They needed people to keep buying the program, so they needed to add /something/ to make it different from the previous version. The problem is that it basically already did most of what it needed to do, so they added un-features that made it worse than it was, to the point where a computer was better off without Norton than with it.
For open source programs, if they reach maturity, people will maintain them, but not add features for the sake of making more money, since the developers generally don't make money directly off of the sale of the program. This means that programs which are basically done don't try to add useless features.
Also, it seems like investments in technologies and frameworks provide more of a network-effect benefit within the open source world than in the proprietary world. Open source has been playing catch-up for a while, but it is starting to pull ahead, with the web browser space being the most dramatic example. While it took a while for Firefox to reach parity with Internet Explorer, the current version of Firefox is much faster and more featureful than the latest IE, and the development versions of both Firefox and WebKit (the engine that powers Safari) have JavaScript execution engines up to 40x faster than IE.
The level of polish and development that constitutes "acceptable" is not static, but it is not moving as fast as the development of the open source ecosystem. Ubuntu is usable for many people's day-to-day tasks already, and is only getting more usable. As time goes on, it will become acceptably easy for an increasing number of people.
Linux can take over from Windows, but they need to make it easy. And for that they have a ways to go.
Agreed, but there has been huge progress within the last few years. And it shows no signs of slowing.
It is sort of like in my field, where we generate many different kinds of images. The neurologists complain that the labelling of the images is inconsistent, so "they can't tell what they are looking at." We never look at the labels, because it is obvious from a glance what kind of image it is. So to us, an elaborate system to produce consistent labels would be so useless as to be a waste of whatever time it took to implement. To the neurologists, not having it is a problem. If I were as into Linux as I am into brain images, then I suspect I would find a GUI as useless as you do. As it is, I have fewer demands on my computer, but "labor saving" is at the top of the list.
I think this is what has kept the GUI less newbie-friendly than that of Mac or certain aspects of Windows (I would assert that Ubuntu is more user-friendly in many aspects than Windows, but not as familiar to many people). It is easier for a commercial company to hire usability experts and compel interface designers to produce good GUIs than for a group of hobbyists who don't mind a CLI to spontaneously produce a good GUI.
The instructions may not always be clear, but they do not run to "type the following with exactly this syntax, except, of course, changing the part you need to change."
I will have to say that it is a lot easier to tell someone:
Type this: ps aux | grep zfs
and copy and paste the results to me
than it is to say,
Open task manager, try to find each entry with the string 'zfs' in it, and tell me what appears in each column.
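To illustrate the one-liner above: a common refinement (my addition, not something from the original exchange) is to put brackets around one letter of the pattern, which keeps grep from reporting its own entry in the process list:

```shell
# List every process whose command line contains "zfs".
# Writing the pattern as [z]fs is a common trick: grep's own
# command line contains the literal string "[z]fs", which the
# pattern does not match, so grep no longer reports itself.
ps aux | grep '[z]fs'

# The plain version also works, but usually includes one extra
# line for the grep process itself:
ps aux | grep zfs
```

Either way, the person being helped just copies and pastes the output — no describing columns in a task manager window.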
On the other hand, it is a lot easier to click the "Applications" menu and then look through the "Accessories" or "Games" menu rather than memorizing that your chess game is launched with "glchess" or that "Manage Passwords and Encryption Keys" is launched with "seahorse." It all depends on what you're trying to do and how much up-front time you are willing to invest in order to save time later.