There is a lot of truth here. In some ways I got off easier: my
career taught me early and repeatedly that there is the best technical
solution and there is the best business solution, and the business
solution wins, because if it's not selling then it's game over.
This is my big complaint with TDD. Don't get me wrong: testing is good,
and most of us aren't doing enough of it. But TDD takes it too far. The
goal is not to write tests, the goal is to ship product; testing is a
means to an end.
And don't make the mistake of thinking OSS is immune to this: being paid
in cred and kudos instead of $ doesn't change the fact that if nobody is
using your code, it might as well not exist.
Good one (though it could be much shorter :-)
You are an ugly sick atheist like I used to be. Trust me, atheists are
less fit — they get no girls. You cannot argue that it's not fair Germany
lost the war. Natural selection is blind.
@Tolomea: agreed about OSS; generally "viable" is about users of some
sort, not necessarily money.
Once you have the epiphany about evolutionary construction, it will
guide much of what you do. The first version is a mad dash to complete
that feedback loop; without the feedback loop, nothing you do is viable.
Then you iterate and increase the fitness, all while maintaining the
feedback loop. If you spend too much time increasing the fitness without
getting it into the market to compete, you are optimizing the wrong
thing. The market will tell you what you need to fix.
Any extra engineering that goes into making it better is energy
wasted in not shipping sooner. Things can only be better in the context
of what they are not, of what they are competing against.
Write drunk, edit sober.
@Tolomea, testing is vital to completing the feedback loop and
optimizing the fitness. If the testing isn't increasing the fitness
relative to the expenditure, then stop. One can certainly over-test. You
need the least amount of energy to go from one local minimum to
another.
> completeness, consistency and correctness all trump
simplicity
It's very difficult to claim that Lisp is on the left side of this
comparison (maybe C++ would be a better example). Would anybody
seriously claim that C is simpler than Lisp?
I think the wikipedia article gives a very good explanation of how
simplicity fits into Worse is Better:
http://en.wikipedia.org/wiki/Worse_is_better
In this post you've also captured something of why I have some
amount of disdain for certain programming language dilettantes who
frequent forums like LtU. They arrogantly proclaim such and such
features of their pet languages as clearly superior to what has won in
the marketplace, without seeming to understand market constraints, the
whole messy hairy beast of it. The smug moral high ground is actually a
trap.
@IT: actually, Gabriel claims that C is simpler than Lisp – simpler
to implement efficiently and in some ways simpler to write efficient
code in. It's all in the original essay (and in the larger text it's
part of, which is concerned mostly with the future of Lisp).
I think you mischaracterize what Linus Torvalds meant when you say
that he meant that competition is a source of progress. It isn't.
Iteration of change and a real-world fitness function are the source of
progress.
Alan Kay said that if not for the need to ship, or to pay programmers
for their time, he could spend an infinite amount of time polishing a
product until it was perfect; that deadlines and budgets and backwards
compatibility all make products less than perfect. This is true.
Linus spoke against "big design up front", and in favor of iteration,
because during iteration you discover information you did not have
before (true in all circumstances), and specifically in open source,
because releasing an early (barely working) version sooner lets you
attract more help sooner.
Multi-player development (cooperative or competitive) is useful
because no single person knows what the whole human race knows, and for
scalability. But it's not necessary: a lone genius who is financially
secure, given enough time, can iterate a design and develop the perfect
product, without any help from economic forces. This design would not be
simple, would not be backwards compatible and would not be constrained
by deadlines or budgets. Exactly what Alan Kay wished. The question is:
would it ever ship, and would it even matter?
Design process using iteration ("Every large system that works
started as a small system that works") and economic forces influencing a
design for the better are orthogonal.
The most perfect software ever written is conventionally assumed to
be TeX. Any thoughts on how it fits into this paradigm?
It's not about evolution, but about the criteria to decide what
"better" means: whether "better" means useful to a mass of people or
whether the creator's mind is the sole judge of quality. It's the same
conflict that exists in Art between the Artist's view and that of the
rest of the world.
Nice. Many examples from human evolution parallel this, e.g.: We
don't have great speed, strength, or sharp claws or teeth. But, having
lost a lot of hair (which was also a disadvantage) we could run long
distances without overheating — due to perspiration. This gave us a
distinct advantage in getting away from predators. Many of our worst
attributes helped support one attribute we could exploit for advantage.
Others abound too: upright walking, formation of our larynx, nutritional
needs of our brains....
@Z.T.: I guess you're right that it doesn't have to be competition;
you do need a fitness function though, and it has to be (quote)
"ruthless". A rich designer facing no economic pressure would submit his
work to what ruthless fitness function? Generally, if you can think of
any real-world fitness function that doesn't involve the concept of a
practical alternative being better (which is what competition brings,
and which is where "ruthlessness" comes from), then you're right and
"competition" isn't necessarily what was meant.
@Aaron Davies: most people, including me, don't use TeX and would go
to great lengths to avoid using it. I'm not very informed about TeX, but
as a guess, I think it's a good example of The Right Thing design. (I
mean TeX as the entire system, including the editing interface and the
macro language, and not parts of the rendering algorithms).
@M.R.: well, it's related, to the extent that "useful to a mass of
people" is what evolution pushes you towards. Likewise, the disdain for
the masses and the belief in the extraordinary talents of a select few
(such as capitalized Artists) is related to the disdain for economic
evolution.
Do not hesitate to settle for the 2nd best solution
Natural selection had a bit of a head start...
@Yossi: you're entirely wrong in considering "evolution" as the
primary issue here. That would be true if those you say are against
"evolution" built everything in one go and declared the result
perfect, but in fact (and provably so) that's not how things happened:
Symbolics didn't just build their OS without any kind of feedback loop,
without making and abandoning prototypes and trying different
approaches; painters would make many sketches (études) and change their
minds before setting out to complete the final version*, and poets would
likewise rewrite verses even hundreds of times before publishing.
The essence of the matter is whether the maker/builder is ego-centric or
exo-centric, i.e. it's all about who decides when the result is fine and
no further work is necessary.
* Except for "modern" abstract painters who throw a few buckets of
paint on a canvas and decide that the result is a masterpiece, but
modern art is a joke anyway
Still reading, but I had to comment on a problem I have with the part
I have read. Information theory puts the lie to Linus' thoughts on the
human genome. The classic example used in young-earth-creation versus
evolution arguments is the 747. Unfortunately, people fail to realize
that a 747 is *enormously* more complicated than a human. We have lots
and lots of redundancy. That's the whole point of research involving
undifferentiated cells.
Our genetic code fits in a few hundred megabytes (roughly 3 billion
quaternary digits, at 2 bits each, comes out to ~750 MB). If you trimmed
the sections that don't code for any proteins we actually manufacture, it
would probably be less than a hundred megabytes of actual information.
The 747, meanwhile, takes hundreds of megabytes just for the software to
run it. The mechanical specifications add hundreds more on top of that.
Keep in mind, you can't simply reference parts like a 555 timer. You
essentially have to include the VHDL files for every chip used. Our
genetic code specifies about 20 different amino acids. That's how many
fundamental parts/operations you get to use to describe the plane.
Sure, specifying a particular human would take a lot of information,
but specifying a particular 747 would also. In each case, you would have
to describe wear patterns and the like, but in the 747's case, its sheer
bulk means it would still take more information to describe to the same
level of detail.
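The back-of-envelope figure above is easy to check. This is just a sketch of the arithmetic: the 2-bits-per-base encoding follows from the four bases, while the ~1.5% protein-coding fraction is a commonly cited rough estimate, not a number from this thread:

```python
# Rough information-content estimate for the human genome.
# Assumptions: ~3 billion base pairs; 4 possible bases (A/C/G/T),
# so 2 bits per base; ~1.5% of the genome is protein-coding
# (an illustrative ballpark, not a precise figure).

BASE_PAIRS = 3_000_000_000
BITS_PER_BASE = 2  # log2(4)

genome_bytes = BASE_PAIRS * BITS_PER_BASE // 8
genome_mb = genome_bytes / 1_000_000
print(f"Raw genome: {genome_mb:.0f} MB")  # → Raw genome: 750 MB

coding_fraction = 0.015  # rough, illustrative figure
coding_mb = genome_mb * coding_fraction
print(f"Protein-coding portion: ~{coding_mb:.0f} MB")
```

Either figure is well below the hundreds of megabytes quoted for the 747's software alone, which is the comment's point.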
Don't forget that in the Real World, markets came about with the rise
of states, and often were used as a tool of domination. So while in a
theoretical, math sense, markets make sense, in the world of atoms
rather than bits, 'worse is better is markets' may not actually make
very much sense.
David Graeber's "Debt: The First 5000 Years" is pretty fantastic if
you've never read about this anthropology.
Yosef, shouldn't Alan and Linus be switched in the paragraph that
starts out "Just like Alan Kay said" ?
Great article, and lots of good points!
I wrote the Unix-Haters chapter on X-Windows, which was kind of like
shooting fish in a barrel. At the time I would have been quite
astonished to know that X-Windows would be alive and well today. It has
however adopted some of the ideas from NeWS as higher level libraries,
like Cairo's stencil-paint / Porter-Duff imaging model. But it never got
the extensibility thing right. However that problem has been solved
again at an even higher level: the web browser.
One way I explain what NeWS was is this, which I contributed to the
wikipedia page http://en.wikipedia.org/wiki/NeWS :
NeWS was architecturally similar to what is now called AJAX, except
that NeWS:
used PostScript code instead of JavaScript for programming.
used PostScript graphics instead of DHTML/CSS for rendering.
used PostScript data instead of XML/JSON for data representation.
NeWS had a lot of flaws (most importantly the fact that it was not
free), but the thing it got right was having a full-fledged programming
language in the window server (what we now call the web browser). Yes,
PostScript is a very high level language, more like Lisp than Forth, and
NeWS had a nice dynamic object oriented programming system that was a
lot like Smalltalk. The thing NeWS really needed was to have that same
language on the client side (what we now call the web application
server). So I like the approach that node.js has taken — there's a huge
advantage to being able to share the same libraries and data structures
between the client and the server. And it takes a lot of mental energy
to switch between different languages and data models when you're
writing code on both the client and the server side.
Another example of the perils of "worse is better" that I've had
experience with is pie menus — http://en.wikipedia.org/wiki/Pie_menu . Research has
proven that they're faster and less error prone than linear menus, yet
they haven't been widely adopted. There are many reasons, some
technical, but one of the major non-technical problems seems to have
been the cargo-cult approach to user interface design that the industry
has taken, and the "not invented here" attitude of user interface
standards pushers.
I gave a demo of pie menus to Steve Jobs right after he released NeXT
Step, and he jumped up and down yelling "That sucks! That sucks! Wow,
that's neat! That sucks!" and claimed that the NeXT Step menus were
superior because they did lots of user testing, even though they never
compared them to pie menus. So Apple has never adopted pie menus, even
though the "swipe" gesture is so common on the iPhone and iPad — yet
they never provide a "self revealing" pie menu to prompt you which swipe
directions perform what actions.
I gave up trying to convince user interface standards pushers to
adopt pie menus for standards like OPEN LOOK and applications like
Microsoft Word, and decided a better approach to making them popular
would be to use them in a game, whose user interfaces are more open to
innovations, and whose users are more accepting of novelty.
I joined Maxis to work on The Sims, and implemented pie menus for
controlling the people. That worked out pretty well, and exposed a lot
of people to pie menus.
I had another experience in developing The Sims, which confirms the
"Worse is Better" hypothesis, which is a harsh reality of the games
industry and the software development industry in general: I pointed out
to my manager that the code was shit, and we really needed to clean it
up before shipping. So he sat me down and explained: "Don, your job is
TURD POLISHING. If you can just make your turd nice and shiny, we will
ship it, and everybody will be happy with you, because that is what we
hired you to do." But then at least he gave me a few weeks to clean up
and overhaul the worst code. The moral is be careful what you ask for,
or you might have to be the one who shovels out all the shit.
Nice article, thanks for sharing.
@ M. R.
I view it as different degrees of openness to feedback. Zero is the
hermit; one is the for-profit business. Artists, researchers and FLOSS
movement leaders fall in between at various degrees.
All of them are necessary, and it is a matter of personal
preference.
But the results of zero-feedback efforts are much more dependent on
the personal ability of the proponent.
On the other hand, when you ask someone for feedback, you
implicitly enter into a negotiation about the overall objective of the
work.
Pure, uncompromised ideas (and terse code) are more likely to come
from zero feedback (if the proponent is not very smart it will be a bad
idea; if he/she is very smart it will be a shiny, inspiring idea).
Products (and crufty code) are more likely to come from
one-feedback.
M.R. beat me to the punch in #16. I would go on to say it depends on what the
maker considers the source of worth in his creation: do they seek to
create something that in itself embodies some measure of rightness, or
do they seek to create something... “effective” (for want of a better
term)? In an essential sense, then: is what they are doing art, or
design?
Now that said, I’ve got me here a hand grenade, so let me pull the
pin and throw it into this argument (*cackle cackle*):
Do Apple practice “Worse is Better” or is it “The Right Thing”, and
are they succeeding with that or not?
I don't feel that you're making the right dichotomy. I think the
major distinction is not between those who think competition and markets
are good or bad, but those who think that markets and competition are
tools to be employed by rules-makers to aid in human achievement and
those who feel that markets and competition are independent natural
forces which no human concern need bow to.
I stopped after a few paragraphs, as there was an initial
contradiction. If conservatives are more risk-averse than liberals, then
it would follow that the former would be anti-marketplace and the latter
pro-marketplace, as nothing is riskier and more uncertain than the open
market.
@Don Hopkins: thanks for Unix-Haters! If you have some mailing list
archives (I only found archives from 90 to 93), and/or some stories on
how the book came about (were you really the first in the genre? I was
certainly inspired by Unix-Haters, in part, when writing the C++ FQA), I'd be
delighted.
Node looks interesting; JavaScript ought to be the absolute
worst-is-better piece of software ever though...
I saw pie menus just yesterday in an Android tablet's camera app –
perhaps touch gave them a push.
@M.R., Fabrizio, Aristotle Pagaltzis: I guess evolution without any
sort of market pressure is like evolution without natural enemies or
competition for food. Perhaps evolution but not quite as "ruthless" a
"feedback cycle". And, I didn't say The Right Thing was "against
evolution" – at least not everywhere; I specifically said "economic
evolution". Sure, The Right Thing is all for evolution from a Right
Thing to a Righter Thing – but it's a different kind of evolution.
As to Apple – I know too little about the history of its products or
what's under the hood to say much about this... Also, Apple is arguably
a product company first and only secondarily a software company, a chip
company, etc., and I'm absolutely not qualified to discuss physical
end-user products. Nice hand grenade though.
@Zimmie: you're looking at program size as a measure of complexity;
I'd rather look at functionality. That our specs are more verbose
doesn't compliment us – "information theory" interpreted that way would
rank a petabyte worth of random noise as still more
"complicated"/"irreducible" than the 747, but so what? As to
functionality – hard to measure, but, I dunno, humans can build the 747
but 747s cannot build humans, I really don't know how to argue about
this. I think anyone who's been around complex machinery intuitively
feels a certain revulsion at how dumb and common-sense-lacking it really
is and would never claim it to have "exceeded" humans.
@John: if you read those few paragraphs more carefully, you'd
realize that the part about risk aversion isn't mine but Steve Yegge's,
and the fact that I summarize someone's writing and his writing
contradicts my own words that follow isn't a contradiction. As to
"nothing is more uncertain than the open market" – try living under a
communist government and predicting its moves for a while, and then
we'll see what you think...
@Ryan: you mean the distinction in politics, not in software, right?
There, I think the right distinction is between people who think "rule
makers" are likely to improve market outcomes by rule tweaking – "rule
makers are wiser and better than markets when left alone" – and those
who think "rule makers" are likely to only make things worse by tweaking
– "rule makers are dumber and more evil than markets when left alone".
Which could be summarized for brevity as how I put it or how you put
it.
@Steve Klabnik: let's talk about the real-world tech markets of today,
and then you can point out how whatever theory of markets you think I
assume differs from the reality that is relevant to us today as you
perceive it.
@wannabe editor: Linus and Alan should stay as they are in that
paragraph.
@Yossi Kreinin: Minor nitpick about TeX. The core typesetting engine
may be the Right Thing, but the whole infrastructure around it (LaTeX
etc.) is actually the typical evolutionary design, consisting of many
intermingled parts.
As I said, I know little about TeX; I do know that LaTeX isn't
technically a part of TeX though. How many people use it out of those
using TeX I don't know. I think the core of TeX does include the macro
language that LaTeX is built atop and that I find rather awful. An
evolving program would grow something more decent, I think, much like
gdb 7 finally added Python scriptability in addition to gdb's own
scripting facilities.
In your effort to make stuff up as you go along, you make both
Torvalds and Kay look like they have vague guiding principles, and set
yourself up as the arbiter between them.
But if you want to be productive and learn something out of this, you
first have to assume that both Torvalds and Kay are way more intelligent
than you are. Because that is the probable situation after all. Both
persons may understand the other's viewpoint perfectly, but they have
very different goals.
(Also, Smalltalk inspired by Lisp? Wtf? They have the idea of
syntactical simplicity in common, but differ in every other possible
way, and Kay is not known to be a Lisper.)
Better examples might be found in the comments, which include TeX and
a few select pieces of Apple stuff. What they have in common is that
they are for a niche market, but one which they completely own even
thirty years later with legacy code. That's something. So worse is not
always better.
Don Hopkins: I really loved those stories. If you have a blog I'd love
to read more!
The same idea was described at
https://plus.google.com/u/0/110981030061712822816/posts/KaSKeg4vQtz
Much of this discussion reminds me of Barry Schwartz' 'The Paradox of
Choice', and the entire maximizers vs satisficers line of
discussion.
Schwartz gave a tech talk at Google a few years back, and it can be
viewed here:
http://video.google.com/videoplay?docid=6127548813950043200
Wikipedia's article on satisficing is a worthy read as well.
http://en.wikipedia.org/wiki/Satisficing
@neleai: not quite the same idea, but a related one, which is why I
linked to that page right in the second paragraph.
@Robert M: it's related, somewhat; the thing is, "good enough" doesn't
look like 70% of "the best" – sometimes it's going in an entirely
different direction, a step back if you hope to reach "the best"
eventually. This is why a "maximizer" is so grieved by the work of a
"satisficer". Here's my inappropriately-titled piece on that
one.
If he is a perfectionist and has been for many years, then his worst
can only be so bad...
Reading that your motives have "evolved" away from perfection, I
couldn't help but feel both sorry for, and jealous of, you all at once.
I mean, to have had the devotion to perfection beaten out of you by the
constraints of software development in the real world sounds akin to
realizing that even the deepest love is still only driven by the
biological imperatives of its participants. Fellow idealists, cry out!
At the same time, it must be nice not to feel the angst I feel over
the fact that everything I write turns out to be something less than
completely satisfying to me. The way I have come to reconcile myself to
this is that, instead of your "realistic" lowering of standards, I am
resigned to my own dissatisfaction, as long as it results in a better
product for my users.
"if you want it all, consistency, completeness, correctness – you’ll
get nothing" – isn't that what Gödel said? http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems.
As a politically left-oriented person, I see the problem with markets
not in competition or evolution, but in the monetary inequalities they
create, which translate to power inequalities. And I believe most
leftists have the same problem with free markets, i.e. the inequality
they create, so the article is one giant strawman.
And Steve Yegge's liberal/conservative distinction is yet different
from the right/left view of the world.
@JS: the problem you see is inequalities, but what is your solution
that is consistent with competitive markets? If objecting to an outcome
effectively leads you to object to its cause, then I think one might say
you object to the cause.
@Daniel Lee: biological imperatives, my ass. As to devotion to
perfection – it is indeed a rather sweet drug, just one that is rather
hard to afford.
@Joshua Drake: ...unless of course you're willing to consider simple
enough sets of axioms, or infinite, non-computable sets of axioms.
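For reference, a standard statement of the theorem being joked about (not from the thread), which makes explicit the two escape hatches mentioned above:

```latex
\textbf{Theorem (G\"odel, first incompleteness).}
If $T$ is a consistent, computably axiomatizable theory that interprets
enough arithmetic (e.g.\ Robinson's $Q$), then there is a sentence $G_T$
such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
% "Simple enough sets of axioms" drop the arithmetic hypothesis
% (Presburger arithmetic is complete and decidable); "non-computable
% sets of axioms" drop computable axiomatizability (true arithmetic
% is complete but not computably axiomatizable).
```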
If you're going to compare Worse-Is-Better vs. The Right Thing, and
refer to The UNIX-HATERS Handbook, how could you omit mentioning the OS
with everything *and* the kitchen bit-sink, VMS?
Wasn't it Ken Olsen who said, "The beauty of Unix is it's simple, the
beauty of VMS is that it's all there"?
An interesting article! Like all the others on this blog.
But here's what I think about perfectionism: the whole point is that
finding the optimal compromise between several significant factors takes
far more neurons than pushing on just one of them :D
Since the significant factors are, as a rule, inversely related
(e.g. the famous "project triangle" of time, quality and money), it
becomes clear why it's so hard to get anything done properly.
And you still have to find those significant factors in the first
place (by the Pareto principle, the truly significant ones are a
minority)! And they're different for every project.
Surprisingly, every now and then the most significant factor turns
out to be exactly the one the "perfectionist" focused on, and if he
hasn't harmed the project too much, a deceptive sense of The Right Thing
arises.
But I don't think one should be upset: if an organism is the optimal
balance of factors for its evolutionary situation (and it is), it won't
strain its neurons for nothing. And if it does strain them and still
hasn't been weeded out by selection, then it must be straining them for
a reason. ))
I should note that TeX itself, not LaTeX or any of the rest of the
TeX ecosystem, is what I was thinking of in my comment—what makes TeX
special is the amount of time, both in design and implementation, that
Knuth has put into it.
Substituting your definition of "worse is better" for Gabriel's
renders the thesis tautological: "Those things which have better
survival characteristics are better, for they are more likely to
survive."
Read the original article again. It says that simplicity wins.
@Jeremy Thorpe: I realize that the original essay centers on
simplicity, and I point this out very clearly in my text above. Two
things, though: one, apart from your tautology, there's the bit about the
"best" things in the evolutionary sense tending to be "worse" than "the
right thing" in some other significant senses – this is not
tautological. And two, what I tried to show, based on real examples of
"survivors that look worse than non-survivors" and on others' perception
of the essay, is that it's not just me who tends to de-emphasize
simplicity in the essay and come away with a different takeaway, one
more centered on "survival of the worse".
@Aaron Davies: I was also talking about TeX, not LaTeX; the
what-you-say-is-what-you-get editing model and the macro language, which
put off most users including me, are part of TeX's core (though the
macros making up LaTeX aren't).
@gus3: I didn't mention VMS because of being utterly ignorant about
VMS...
LISP is definitely much simpler than C. I remember one-liners in
LISP that were the equivalent of pages of C. (Remember, higher-order
functions.)
The nice thing about LISP is that you can write code that
generates code – and that's why it was the language of choice for AI.
LISP was dismissed because it was memory hungry and slow.
Nowadays, LISP is used in research labs and to teach university
students AI.
I wonder how the world would be if LISP had prevailed.
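The one-liners-vs-pages claim is easy to illustrate even outside Lisp; here's a hedged sketch in Python (the thread's actual Lisp examples aren't quoted), where higher-order functions collapse an explicit loop into a single expression:

```python
# Sum of squares of the even elements: once as a single higher-order
# expression, once as the explicit loop a C version would resemble.
from functools import reduce

nums = [3, 1, 4, 1, 5, 9, 2, 6]

# One expression built from higher-order functions.
result = reduce(
    lambda acc, x: acc + x,
    map(lambda x: x * x, filter(lambda x: x % 2 == 0, nums)),
    0,
)

# The spelled-out equivalent.
total = 0
for x in nums:
    if x % 2 == 0:
        total += x * x

assert result == total == 56
```

In C the first form has no direct equivalent without function pointers and boilerplate, which is the brevity the comment is pointing at.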
@Fadi El-Eter: Gabriel, who said C came out of a design style
favoring simplicity, didn't mean that C was simpler as in "more
expressive", but simple as in "simpler to implement efficiently and
write efficient code in".
The two axioms are what are known as a dialectic. One is the thesis
(probably this is "The Right Thing"), the other is the antithesis (this
is arguably "Worse is better"). What's always happening is some kind of
synthesis.
You cannot know in advance, and cannot definitively prove, the
correct thesis for a problem that involves something as complex as human
and machine interaction. The Halting Problem alone says this. And you
cannot just throw a bunch of bits at a computer and hope for the best.
You must work somewhere along the continuum between the two. The
extremes are wrong, both of them. They are myths. Myths are best when
taken as metaphor (which everyone around this issue seems to be doing,
which is good).
Thanks for the insight. I followed the "Worse is Better" debate for a
long time at the time it was written. I always wondered what criteria
the debaters were using to decide "better". "consistency, completeness,
correctness" of course, but as applied to what definition of the
problem? That's the kicker, I think: do we want a compromised definition
of the problem or do we want to stick to the aesthetically more pleasing
definition which is in our individual designer's head?
The problem definition in the designer's head is more aesthetically
pleasing because it is in itself "consistent and complete" and is of
course then solvable "correctly". The more compromised (i.e. more
inclusive) definition will include other views of the problem as well as
questions of profitability, timeliness, etc. and is not itself (and can
never be) "consistent and complete" and so can never have a "correct"
solution.
So my view is not that the question is tied to views of evolution or
whatever, but that there are no isolated problems: all problems are
interrelated and the definition of any problem is infinitely expandable
in all directions, so that the selection of an isolated issue to solve
"correctly" always ends up imposing arbitrary boundaries. Including a
larger public in the definition of the problem always fuzzies up the
edges and makes the problem both less attractive and incapable of an
obviously "correct" solution. But the solutions to these fuzzy problem
definitions are always more useful than the solutions to the smaller
cleaner definitions because of the synergy created in expanding each
definition to be inclusive of more points of view.
@Yossi: It's not my tautology either. It's the tautology that is left
once you generalize as much as you have. Okay maybe it's not a complete
tautology: "The right thing doesn't always win." You don't say!
I realize that you're not the only one to do this, and Atwood is
equally wrong to lump x86 in with "Worse is better". There are some
things that neither MIT nor New Jersey would be proud of, and they may
even have won in the market.
Gabriel connects two dots: simplicity and fitness. "The right thing"
is in the background. Why you'd want to build something fit for survival
is also an exercise left to the reader, though you seem to have taken it
up.
@Jeremy Thorpe: it's more like "the right thing never wins"; still
one could reply, "you don't say" I guess...
@RobD: I'm not sure that The Right Thing is about solving more
isolated problems; frequently it's actually more about expanding your
solution to handle every imaginable case. Much of the outrage in
Unix-Haters is against all the parts of the problem Unix ignores that
other systems handle, and how real problems (such as irretrievable data
loss after typing rm *>o instead of rm *.o) are ignored by Unix
aficionados because in the Unix aesthetics, there is no problem here.
That The Right Thing is "right" in the designer's head more so than in
the real world could be considered true from a Worse is Better
perspective, I guess; but it's not necessarily because of ignoring more
real-world uses or points of view.
If anything, I think Linux has proved the problem with evolution, and
with being too "progressive". I guess it could apply to politics too;
good analogy.
Good stuff gets thrown out or mauled due to people not understanding
or caring why it was there in the first place, or even people who have
an agenda they want to push onto computing.
Everything gets thrown in just because, regardless of how much cruft
you introduce.
I don't think there is a real issue about "simplicity" but when you
choose to do one thing then it sets the design off in a certain
direction, however so slightly.
So look at the uncanny valley, for example. Is it really true that the
more humanlike something is, the creepier it gets? I don't think
so.
To me it's always been obvious that the cartoony images are a sort of
least common denominator. They only show what's there and is more or
less correct (if exaggerated) in everyone. The more detail you add, the
more fine detail, the further it deviates from correctness, or in this
case from what you personally need or want.
At the high end you get Windows 8: completely insane, over-the-top
complexity masquerading as simplicity because it does it all for you.
Well, I guess it does, but instead of giving you tools to do what you
need, it tries to make something that does everything for the least
knowledgeable people and caters to handhelds.
And success has just nothing to do with it. They are making an
argument that one thing is better based on success, but a lot of that is
based on who does the deciding and how they decide.
So this is probably just a pointless comment, sorry, but it's an
interesting post, though I'm not sure how much of what you're saying
really has any connection.
Nicely written, well researched, and well concluded. Thanks :)
I had a similar experience of watching my beautifully simple, smug
architecture totally fail to handle an important use case. I had to hack
it up to make it work, and that was humbling, and an important
lesson.
@rus: glad you liked it!
Fast, cheap, or easy: pick one. Better or worse is always relative to
some measure of merit. Change the measure to change the perceived value.
Want to destroy a meeting, a product, or an organization? Keep changing
the target. When everyone is focused on quality, raise concerns about
schedule or cost. When everyone focuses on schedule, bring up quality
and cost issues. If the discussion addresses cost, consider quality or
schedule.
Why Unix/C ? Simply put, the price was/is right. It was effectively a
$0 cost option for educational institutions. AT&T couldn't sell it, so
they gave it away. This "gift" sent many generations of computer
scientists, computer engineers, and software engineers on down the road
from college or university with an interesting OS and language bias. A
classic continuing case of the cheap limiting the available options.
Better? Worse? Just different? On what dimension are we determining
"merit". I am now retired, and can just throw darts at the balloons as
they drift by.
i can't condone any good things said about unix.
about Yegge's article categorizing coders as American political
left/right: i think it was silly, and i half expect him to declare it
a hoax.
lots of these discussions are philosophy (as opposed to science), and
not done with a philosopher's stringent training.
It reminds me of the tale of the 7 blind men feeling an elephant ( http://en.wikipedia.org/wiki/Blind_men_and_an_elephant
). Each person sees it according to his experiences, seemingly fitting,
and arguably not incorrect. Steve Yegge wants to fit it into American
left/right political thought. In this essay we have economic evolution
as the framework. I don't see value in either of these two descriptions.
It seems like politicians validating opposite viewpoints with the same
data.
of all the essays mentioned, i do highly admire Richard P Gabriel's
Worse Is Better section of his lisp talk. It's not scientific analysis
or analytic philosophy, but it hits my spot. Because i felt his
description of the unix vs lisp design mindset is perfect, and he used a
virus to describe the survival advantages of the unix mindset.
i do not believe that Worse Is Better has better survival
characteristics in the long run. Nor do i see today's software as
dominated by Worse Is Better. Of course, this is again all babbling of
gut feelings, until one scientifically defines what's really “Worse Is
Better” or “the right thing”.
PS Second Life used pie menus from at least 2006 to 2010, and i loved
them; they're easier.
TeX is Worse Is Better. I can't hate it more. ☺
enjoyed your C++ criticism very much. It was how i found you few
years ago.
I am quite late to the party, but feel obliged to leave my response
here, on what seemed like the longest list of responses. I do not know
if it was on your mind, and I guess you certainly did not want to push
discussion on this, but doesn't all this remind you a bit of the
Evolution vs. Intelligent Design culture clash?
The timeless Religious/Agnostic debate notwithstanding, I guess
you can also put it as Idealist vs Cynic, etc...
It's all far too philosophical for somebody in one camp to ever
convince the other party, that's for sure...
@GD: it is related somehow, but in many different ways or so it seems
to me, which is why I didn't want to go there.
One such relationship that I did mention was, people who do believe
in evolution and don't believe in a single bit of intelligent design
ever taking place – those people differ among themselves in the extent
of "awe" they have for what they believe are results of evolution. Some
think the results are amazing (implying they could never achieve
anything like it if they were tasked with an "intelligent
design" of this kind) and some see a tangled mess of genetic bugs
(implying that they, or someone not unlike them, could in fact do
better).
The upshot being, some people have a much higher esteem of human
ability and those tend to sneer at evolution, both biological and
economical, while others are much more pessimistic in that regard, and
those tend to think of evolution as a good thing.
Unless you can predict the future, there is no such thing as "The
Right Thing". There is only "Attempting to Guess The Right Thing" —
which fails more often than doing that and just solving immediate
problems in a way that is simple, effective, and as open as possible
with the limited perspective you have at the time.
@SteveP: If you believe that you know what the Right Thing is based
on your own aesthetics and value system, then this belief will not be
shattered when it turns out, at whatever point in the future, that
people choose to use something else. Then the people are simply wrong,
or misinformed, or robbed of their choice by some force or circumstance.
If, to you, "The Right Thing" means a correct prediction rather than
recovering a timeless truth, then you have a kind of a Worse is Better
attitude.
And what if I don't care if anybody uses my stuff? What if I am doing
it for *myself*, not for anybody else? What if I am doing it because I
am sick of using crap tools. What if my own stuff actually does work
better for *me*? To me that is superiority enough. I don't need to be a
world shaker. I just want to work with tools that suit me.
In that case, it's a bit unclear why you care to tell this to the
world.
To me, the important thing about "worse is better" solutions is that
they are simplified. I mean that in the same way a physicist might
calculate the outcome of a horse race by supposing perfectly spherical
horses running in a vacuum. These solutions are also simpler, but that
is a side effect rather than a goal.
Simplicity is just a starting point. The 8086 was very simple
compared to the "right thing", which was the Intel iAPX432. It had half
the transistors of a Motorola 68000 or a National 16032. But once it
gets its foothold in the market, the "worse is better" solution grows and
grows in complexity. How long did it take Unix to become far more
complex than Multics?
Another good example of the struggle between these two alternatives
is Xanadu vs the World Wide Web.
Sure simplicity is (sometimes) just a starting point; sometimes it's
not – JavaScript is intrinsically not that simple and it wasn't all that
simple from day one. At any rate, sure the 432 is more complex than the
x86 implementations of the time, but Itanium is less complex than the
x86 implementations of its time; both were eaten by x86. When we call
x86 "worse is better" when comparing to both, it's pretty clear that
complexity is not what makes us classify things as we do. "The Right
Thing" may be more or less complex; what makes it "The Right Thing" is
the focus on doing the right thing through deliberate design vs the
focus on evolutionary pressures and where they push you.
My take on the matter is that, as in evolution, the more flexible is
typically the survivor. Taking the human genome example, we have a vast
surplus of codons for different traits that either have no clear merit
or are actively detrimental in common circumstances. What this gives us
is the flexibility for some of the species to survive in just about any
possible circumstance. Likewise with code, the code that can be modified
to suit changing needs is longer lasting than the code that solves one
problem perfectly. Occasionally a problem is identified that is
persistent but has only a handful of useful solutions. TeX seems to
solve one of these and succeeds despite having an unpleasant macro
language.
Unix succeeds because it has been able to evolve to keep up with both
user needs and changing hardware where less successful systems typically
could not be easily ported and more portable systems had to sacrifice
even more features. x86 succeeds because, by rigorously maintaining
backwards compatibility, it allows multiple generations of software to
benefit from the latest hardware.
"Worse is Better" is a loaded phrase. It presumes the existence of
something superior...but isn't that what this debate is about?
Perhaps it is more accurate to describe this side of the debate as
"Good Enough".
I know I'm coming to this years after the fact, but I wanted to
comment on Jonas's WTF about Lisp and Smalltalk. Alan Kay is in fact a
big fan of Lisp, and has said in so many words that Lisp was a big
influence on Smalltalk. A few examples:
"We had two ideas, really. One of them we got from Lisp: late
binding. The other one was the idea of objects." (So, Smalltalk was
built on two ideas, and one of them came from Lisp.)
[This is from a description of a talk AK gave in 2006.] "Alan uses
John McCarthy and Lisp as an example of real science in computer
science. He showed us that you can build a system that’s also it’s own
metasystem. [...] Alan used McCarthy’s method to design an object
oriented system." (So, the other big idea in Smalltalk was objects; and
Kay designed the object system in Smalltalk using a technique he learned
from the original Lisp paper.)
I think it's pretty fair to say that Smalltalk was inspired by Lisp.
(Alan Kay also called Lisp "the single greatest programming language
ever designed", and he called "The art of the metaobject protocol" — a
book describing the metaobject system in Common Lisp — "the best book
written in computing for ten years". As I say, a big fan.)
I think the history of technology is this: grand visions that were
publicly funded (computers, the Internet, etc.) have since been
privatized and destroyed by the market.
I don't think you should compare Smalltalk vs Linux, but Alan Kay's
Desktop GUI and Dynabook to today's UIs and the iPad.
It isn't a history of competing ideas where the market chose correctly.
It is the history of grand ideas incubated in the public sector and then
further distilled and misunderstood by the private sector.
The reality is that the market has little say in the decision between
the grand idea and the distilled one. They never had the opportunity to
decide. They were always given worse ideas to choose from.
And by "They" in my previous remark, I mean the consumer.
Just like when you walk into a supermarket: you are given options
that were decided for you, that were filtered through thousands of
decisions made outside the market beforehand. The market only chooses
things available to it. But it doesn't have to be this way. We can
choose to make things available outside the market. The market would
have never produced computers or the internet, for example. Those things
came about outside the market.
You beat me to it, as I'm mentally working on a similar essay. I
already covered backward compatibility and shipping pressure a lot in
my posts on Schneier's blog elaborating on this. See Steve Lipner's
Ethics of Perfection essay for a great take on the "ship first, fix
later" mentality. He had previously done a high-assurance, secure VMM,
so he had been on both sides.
On backward compatibility, you need to explore lock-in and network
effects. These are the strongest drivers of the revenues of the biggest
tech firms. Once you get the market with shipping, people will start
building on top of and around your solution. They get stuck with it
after they do that enough to make it hard to move. Familiarity with
language or platform matters here, too. The economics become more
monopolistic where you determine just enough additions to keep them from
moving.
I agree with the other poster on OpenVMS: it's a great example of Right
Thing vs Worse is Better that *won* in the market, while their
management was good. ;) It had a better security architecture, individual
servers
went years without reboot, mainframe-like features (eg batch &
transactions), cross-language development of apps, clustering in 1980's,
more English-like command language, management tech, something like
email... the whole kitchen sink all integrated pretty well. Reason was
it was a company of engineers making what they themselves would like to
use then selling it to others. Also mandated quality where they'd
develop for a week, run tests over weekend, fix problems for a week, and
repeat. That's why sysadmins forgot how to reboot them sometimes. ;)
https://en.wikipedia.org/wiki/OpenVMS
Here's a few others that fall under Cathedral and Right Thing model
that got great results with vastly fewer people than Worse is Better
and/or were successful in the market. Burroughs and System/38 still
exist as Unisys MCP and IBM i respectively. Lilith/Oberon tradition of
safe, easy-to-analyze, and still fast lives on in Go language designed
to recreate it. There's nothing like Genera anymore but Franz Allegro CL
still has a consistent, do-about-anything experience. QNX deserves
mention since it's a Cathedral counter to UNIX where they implemented
POSIX OS with real-time predictability, fault-isolation via microkernel,
self-healing capabilities, and still very fast. Still sold commercially
and was how Blackberry Playbook smashed iPad in comparisons I saw. They
once put a whole desktop (w/ GUI & browser) on a floppy with it.
Throw in BeOS demo showing what its great concurrency architecture could
do for desktops. Remember this was mid-1990's, mentally compare to your
Win95 (or Linux lol) experience, and let your jaw drop. Mac OS X, due to
Nextstep, could probably be called a Cathedral or Right Thing that made
it in market, too.
http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm
https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf
https://en.wikipedia.org/wiki/Lilith_%28computer%29
http://www.symbolics-dks.com/Genera-why-1.htm
http://www.qnx.com/products/neutrino-rtos/neutrino-rtos.html#technology
https://youtu.be/cjriSNgFHsM?t=16m5s
So, more food for thought. The thing the long-term winners had in
common is that (a) they grabbed a market, (b) they held it long enough
for legacy code/user-base to build, (c) incrementally added what people
wanted, and (d) stuck around due to the legacy effect from there. Seems to
be the only proven model. It can be The Right Thing or Worse is Better
so long as it has those components. So, we Right Thing lovers can
continue trying to make the world look more Right. :)
Nick P
Security Engineer/Researcher
(High assurance systems)
As a sine qua non, a success of scale must first be successful in a raw
land-grab. The ability to grab land is not really a technical or an
evolutionary merit. I would not consider an invasive species of rat that
overruns its small island, destroying the habitat of every other animal
species, to be a success — and much less so if it destroys its own food
supply, dooming the line to extinction. Nor would I consider the death
of the last island reptile to indicate that the rat was 'better'.
I am not even sure what the criteria of evolutionary success are,
especially for parasites. True success should include succession.
Perhaps subspeciation into distinct populations in homeostasis with
their environments would be unambiguous in general. If so, Unix is a
manifest success, and Windows a failure.
I suspect most readers are more interested in commercial criteria of
success. While the commercial competitive landscape may share some
features with an evolutionary competitive landscape, let's not press the
analogy beyond its breaking point. Instead, we should usefully clarify
the range and limits of the analogy.
@Maxim: well, you certainly made it clear where you stand on these
issues :-) Your CV however says you've worked a lot in the private
sector. If it blows so much and all the good stuff originates in
publicly funded projects, why don't you work on one of these instead?
Also, why didn't the fastest (or pretty much any...) computers come out
of the USSR whose public sector was always larger than that of the US
and who had plenty of educated people in the relevant areas?
@Nick: I don't think I really "beat you to it", in that you're
looking at it from a different angle. My main points were (1) "worse is
better" is really about evolutionary forces, regardless of the details
of what these forces are, and even when people don't realize that it's
what they're arguing about, and (2) the reasons different people side
with or against "worse is better" – and I didn't get to that second part
yet... (I promised a follow-up which has failed to arrive in the 4
years since.)
@aminorex: it's simple: rats won the first round, reptiles lost it.
If rats then became extinct, they've lost the second round.
(Incidentally, rats rarely do.)
Maybe it's different from someone else's point of view, especially if
they don't like rats, but it's certainly as simple as that for the rats.
Maybe I don't like to use Linux and I sure as heck wouldn't like to have
to contribute to it, but I end up using it and people working for
various companies end up having to contribute to it, and getting
subjected to verbal abuse by Torvalds, and I bet he rather
likes how it all came out.
So when it comes down to whether you're the rat or the reptile, make
sure to narrow down your definition of success enough to be the rat, is
all I say.
What is measured as worse or better is technology that has to sell to
the widest audience possible. It is common in microeconomics that when
you target the average consumer, you go for a marketable low-cost
solution, so that you can be competitive in the market. This is because
the average consumer cannot pay whatever price you set, however much the
product deserves it. You don't need a degree from a business school to
know that quality drives up prices. In the examples above, Worse is
Better because it is more affordable.
For future reference, the UNIX-Haters' Handbook has been moved
here:
https://simson.net/ref/ugh.pdf
Still relevant.