Is program speed less important than X?

February 13th, 2013

Is program speed less important than safety? Sometimes it is – and sometimes speed is safety. A slow autopilot is a dangerous autopilot. That's why so much safety-critical software is written in the least safe programming languages.

A lot of programs aren't like autopilots – a slower, safer transaction processor is usually better. Google's early credit card charging disaster is my favorite example (it would never happen with a bounds-checked, garbage-collected language runtime).

However, there's the "counter-anecdote" of a cell phone company switching to a fancy new billing system which took more time to charge customers for calls than it took customers to make calls. It fell behind to the point where they had to avoid charging customers for a month's worth of calls because they couldn't compute the right amounts. So sometimes trusting one's money to a slow program is rather unsafe.

And then there are high-frequency trading algorithms. Speaking of which: is speed less important than program correctness? Sometimes it is – and sometimes speed is correctness. A slower chess program playing under time control will settle for a worse move – a less correct move. Slower project scheduling software will come up with a worse schedule.

Life in general is a game played under time control. Often, "slower" means "being able to process less information in fewer ways" – in other words, dumber, further away from "correct".

What about time to market – isn't program speed less important than time to market? Sometimes it is – and sometimes higher speed is shorter time to market. A breathtaking game or special effect on today's hardware that others can only pull off on tomorrow's hardware means that the game or the special effect made it first to the market.

("Performance tricks" sound more relevant to the real-time world of games than to the offline rendering of movies; a great counter-example is procedural generation of outdoor landscape in Brave by Inigo Quilez.)

But time to market is also affected by development time; is program speed less important than development time? Sometimes it is – and sometimes higher speed is less development time. A developer waiting for slow programs develops more slowly – and developers often wait for their own programs (tools searching for stuff or summarizing stuff, build systems, tests, machine learning algorithms, ...).

Another point is that a developer whose code tends to be too slow will waste time looking for a faster, fancier, buggier algorithm, sometimes sifting through many options, each of which could be fast enough if coded by someone else. A developer whose code tends to be fast the first time will move on to the next thing more quickly.

Is program speed less important than X? Sometimes it is – but sometimes speed is inseparable from X.

1. Michael Moser, Feb 14, 2013

With a bounds-checked, garbage-collected language you can't crash because a poor pointer has gone missing; you can still crash due to a null reference or an OutOfMemoryException; the latter sometimes gets really hard to fix/handle; so instead one is at the mercy of a virtual machine. Is that safe?

2. Yossi Kreinin, Feb 15, 2013

OOM or NPE is safer than memory corruption where you charge the customer -$2.5681M because the amount you should charge was a local variable on the stack of a thread that has since returned from the function – safer by a very large margin.

3. Chad, Feb 16, 2013

Okay I'm going to bite.

Programmers can control the correctness, speed, and also the delivery date of the product. But the one thing that programmers often miss is the cultural code regarding the meaning behind correctness, robustness, and also what counts as a bug.

Simply ask an American about quality versus someone from Japan and you will get a vastly different cultural code (meanings, feelings) behind the word. So one cannot simply take quality to mean one fixed thing, because quality, like correctness, is a relative term.

Understandably, you can hold up different projects as test cases and point at them, saying "this is what they did wrong" and "this is how they should have done it". The truth is, there are many, many decisions that need to be made during the software development life cycle. These have to be considered and weighed up by a team of technical leads against the overall project goals and conventions. This includes correctness and robustness, though these are relative terms if you're speaking to developers at large.

Instead of focusing on what went wrong, there should be more of a focus on what decisions they got right. Development is about winning many, many small, insignificant battles, and the accumulation of those small battles as a whole makes up the complete software package. Some problems get through, but I would say the guy from Google who made the wrong billing calculation was an awesome developer (who, by the way, when he did find the mistake, handed in his resignation, which was rejected).

Some technologies do help in making you fall into the pit of success, though they normally tend to leverage experience from people who made the small mistakes, collectively put into a framework for developers; checked and unchecked exceptions in .NET and Java spring to mind. If you find out what your goal is from the start of the project (correctness, speed, quality) and the meaning behind those words (the customer's interpretation of them), then you're much more in line to deliver a product that is going to be successful.

One example is the case of early web search. There was a war on between correctness (indexing all the web content before conducting a search on it) and just good enough (rank-based algorithms). Today we know the answer to this question, though back in the days of Yahoo, AltaVista, or any library-style search query, we just didn't. So the interpretation of correctness by the user and by the programmers was vastly different: one wanted reliable queries based on rank-based links, versus a perfect search result based on the currently known state of the internet.

Sorry for my ramblings, must go and feed the baby.

4. gsg, Feb 17, 2013

I'm surprised you didn't mention compression or video encoding, which drive the point home: a slow encoder is a *bad* encoder (since you could do a better job with a faster one, not just the same job in less time).

As for null pointers, good type systems solve that problem rather nicely by clearly separating values for which null is expected to be a possibility. Of course, few people program in the various non-mainstream languages that get that right...
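
For illustration, here's a rough C++17 analogue of what gsg describes – the languages gsg has in mind presumably use proper option types, and std::optional plus the made-up function find_customer below are just stand-ins – where "may be absent" is part of the type, so the caller has to spell out the empty case:

```cpp
#include <iostream>
#include <optional>
#include <string>

// The return type says "there may be no such customer"; it can't be passed
// around and dereferenced as casually as a silently-nullable pointer.
std::optional<std::string> find_customer(int id) {
    if (id == 42) return "Alice";
    return std::nullopt;
}

int main() {
    if (auto name = find_customer(7)) {
        std::cout << "found " << *name << '\n';
    } else {
        std::cout << "no such customer\n";  // the absent case is handled explicitly
    }
}
```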

5. Yossi Kreinin, Feb 17, 2013

Actually I'm not particularly worried about null pointers because you crash in an immediate and clear way with these. In fact I'm instinctively more worried about non-nullable types, under the assumption that someone could start passing dummy objects around and then instead of crashing programs you get programs doing dumb things and proceeding. (A sufficiently smart programmer/team wouldn't run into that of course but then they wouldn't run into NPEs, either; the question is whether nullability everywhere vs having to explicitly ask for nullability and not having it somewhere is better given the real programmers out there; I dunno.)

6. gsg, Feb 17, 2013

Yeah, it's true that sometimes preventing superficial junk precipitates the formation of more insidious junk.

A nice example of this is uninitialised variables. Supposedly they are the most terrible evil thing ever, and yet you can use Valgrind to detect bugs involving their use quite handily. In contrast the more "principled" initialisation-before-use (or worse, declaration-is-default-initialisation) rules force a meaningless, error-masking assignment.

I'm not sure I buy it as an argument against programming with principled constructs though. There are too many cases where there is no good way to detect the brain damage.
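
A tiny made-up C++ illustration of the uninitialised-variable point above: run under Valgrind, the read of cents below is flagged as a use of an uninitialised value, whereas a rule that silently default-initialises it to zero would turn the same bug into a plausible-looking wrong answer (quietly charging the customer nothing):

```cpp
#include <cstdio>

struct Order { int price_cents; };

int main() {
    Order order{499};
    // The programmer meant to write: int cents = order.price_cents;
    int cents;
    // Valgrind's memcheck reports the use of an uninitialised value here;
    // forced zero-initialisation would instead print a "valid" 0 and mask the bug.
    std::printf("charging %d cents (order price was %d)\n", cents, order.price_cents);
}
```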

7. saurabh, Feb 23, 2013

What is this Google credit card disaster?

8. Yossi Kreinin, Feb 23, 2013

It's described here: http://www.flownet.com/ron/xooglers.pdf (look for "The billing disaster"). The upshot is that they had a money counter allocated on a thread's stack, and they accessed the counter after the thread returned from the function.
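
A minimal C++ sketch of that bug class – not the actual Google code, just the general shape of "a pointer to a stack-allocated counter escapes, and the value is read after the owning frame is gone" (the names and amounts are made up):

```cpp
#include <cstdio>
#include <thread>

// Global pointer that ends up dangling once charge_worker's frame is gone.
static long long* g_cents = nullptr;

void charge_worker() {
    long long cents_to_charge = 499;  // lives on this thread's stack
    g_cents = &cents_to_charge;       // the pointer escapes the frame
}                                     // frame destroyed; g_cents now dangles

int main() {
    std::thread t(charge_worker);
    t.join();
    // Undefined behaviour: the stack slot may hold anything by now, so the
    // "amount to charge" can be an arbitrary (even negative) number.
    std::printf("charging %lld cents\n", *g_cents);
}
```

With a bounds-checked, garbage-collected runtime, the counter simply stays alive for as long as something references it – which is the point the post makes about this example.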

9. mk, May 14, 2013

"With bounds checked, garbage collected language you can't crash because a poor pointer has gone missing; you still can crash due to null reference or OutOfMemoryException ; the later gets sometimes really hard to fix/handle ; so instead one is at the mercy of a virtual machine. Is that safe?"

This is a fine example of how stupid people think. In this case, a stupid person thought of *a* way that unmanaged code can crash but managed code can't (due to dangling pointers), noticed that managed code can fail in other ways ... and then promptly forgot his starting point.

It's extra stupid here because the article he's responding to is all about examples of this sort of faulty reasoning resulting from only considering those cases that support a claim while overlooking cases that don't.

10. mk, May 14, 2013

"(or worse, declaration-is-default-initialisation) rules force a meaningless, error-masking assignment"

What a nutty and erroneous notion ... that automatic initialization of all variables to the default value for their type is "error-masking" and is worse than leaving them uninitialized so that they *might* contain the default value (and often will at program startup) or any other value at all, depending on such things as the phase of the moon and the compiler version and flags. Depending on valgrind for detecting uninitialized values is a good way to lose spacecraft or kill radiation patients.

How many C++ constructors initialize everything they should? I just saw a case today where a cow orker added a pointer member to a class but didn't initialize it in the constructor. C++ (pre-11)'s moronic enforced separation of the point of declaration and the point of initialization magnifies the effects of careless incompetence severalfold.
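
A sketch of the failure mode described above next to the C++11 mitigation (in-class member initialisers); the class and member names are made up:

```cpp
#include <string>

// Pre-C++11 style: a pointer member was added to the declaration but forgotten
// in the constructor's initialiser list, so it holds garbage.
class AccountOld {
public:
    AccountOld() : balance_cents(0) {}   // audit_log left uninitialised
private:
    long balance_cents;
    std::string* audit_log;              // the member someone added later
};

// C++11 style: the initialiser sits at the point of declaration, so a new
// member can't be silently missed by one of the constructors.
class AccountNew {
private:
    long balance_cents = 0;
    std::string* audit_log = nullptr;
};

int main() {
    AccountOld a;
    AccountNew b;
    (void)a; (void)b;
}
```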

11. mk, May 14, 2013

"A sufficiently smart programmer/team wouldn't run into that of course but then they wouldn't run into NPEs, either"

No, that's quite false ... or rather it ignores the probabilities. Virtually all programmers/teams will occasionally have NPE bugs ... avoiding them takes *extraordinary* care. But it's quite easy to program in a sensible way with non-nullables, even though incompetent programmers will always find ways to screw up.

12. MSimon, Jul 30, 2013

Suppose you are correcting errors in real time – a PID control loop, say. Close-enough results on time are much better than perfect results late. Errors left over can be corrected in the next cycle.

The idea of the OODA loop in the battle space is the same. The perfect pilot gets beaten by the good enough one.

Or as we like to say in controls – if the process changes faster than the machine can respond you WILL get oscillations.
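
A minimal sketch of the kind of loop MSimon describes (the gains and the toy "plant" below are made-up illustration values): each cycle computes a good-enough correction from the error measured right now, and whatever error is left over is simply picked up again on the next cycle.

```cpp
#include <cstdio>

// Textbook PID controller: proportional + integral + derivative terms.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0;
    double prev_error = 0.0;

    double step(double setpoint, double measured, double dt) {
        double error = setpoint - measured;      // leftover error from last cycle shows up here
        integral += error * dt;
        double derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

int main() {
    Pid pid{0.8, 0.2, 0.05};                     // made-up gains
    double position = 0.0, setpoint = 1.0, dt = 0.1;
    for (int cycle = 0; cycle < 20; ++cycle) {
        double correction = pid.step(setpoint, position, dt);
        position += correction * dt;             // toy plant: just integrates the correction
        std::printf("cycle %2d: position %.3f\n", cycle, position);
    }
}
```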


