In high-performance, resource-constrained projects, you're not likely to suddenly run out of cycles - but you're quite likely to suddenly run out of memory. I think it's a bit similar to how it's easy to buy fuel for your car - but sometimes you just can't find a parking spot.
I think the difference comes from pricing. Processor cycles are priced similarly to fuel, whereas memory bytes are priced similarly to parking spots. I think I know the problem but not the solution - and will be glad to hear suggestions for a solution.
Cycles: gradual price adjustment
If you work on a processing-intensive project - rendering, simulation, machine learning - then, roughly, every time someone adds a feature, the program becomes a bit slower. Every slowdown makes the program a bit "worse" - marginally less usable and more annoying.
What this means is that every slowdown is frowned upon, and the slower the program becomes, the more a new slowdown is frowned upon. From "it got slower" to "it's annoyingly slow" to "we don't want to ship it this way" to "listen, we just can't ship it this way" - effectively, a developer slowing things down pays an increasingly high price. Not money, but a real price nonetheless - organizational pressure to optimize is proportionate to the amount of cycles spent.
Therefore, you can't "suddenly" run out of cycles - long before you really can't ship the program, there will be a growing pressure to optimize.
This is a bit similar to fuel prices - we can't "suddenly" run out of fuel. Rather, fuel prices will rise long before there'll actually be no fossil fuels left to dig out of the ground. (I'm not saying prices will rise "early enough to readjust", whatever "enough" means and whatever the options to "readjust" are - just that prices will rise much earlier in absolute terms, at least 5-10 years earlier).
This also means that there can be no fuel shortages. When prices rise, less is purchased, but there's always (expensive) fuel waiting for those willing to pay the price. Similarly, when cycles become scarce, everyone spends more effort optimizing (pays a higher price), and some features become too costly to add (less is purchased) - but when you really need cycles, you can get them.
Memory: price jumps from zero to infinity
When there's enough memory, the cost of an allocated byte is zero. Nobody notices the memory footprint - roughly, RAM truly is RAM, the cost of memory access is the same no matter where objects are located and how much memory they occupy together. So who cares?
However, there comes a moment where the process won't fit into RAM anymore. If there's no swap space (embedded devices), the cost of an allocated byte immediately jumps to infinity - the program won't run. Even if swapping is supported, once your working set doesn't fit into memory, things get very slow. So going past that limit is very costly - whereas getting near it costs nothing.
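The zero-to-infinity jump can be made concrete with a toy fixed-size pool - a stand-in for RAM on a device with no swap. The pool size and API below are invented for illustration; the point is only that allocation is free right up to the limit, and then impossible:

```python
class FixedPool:
    """A bump allocator over a fixed budget - a stand-in for RAM with no swap."""
    def __init__(self, capacity):
        self.capacity = capacity   # total bytes available
        self.used = 0              # bytes handed out so far

    def alloc(self, nbytes):
        if self.used + nbytes > self.capacity:
            return None            # the embedded equivalent of malloc() returning NULL
        self.used += nbytes
        return self.used - nbytes  # offset of the newly allocated block

pool = FixedPool(capacity=1024)
assert pool.alloc(512) is not None   # costs "nothing" - nobody notices
assert pool.alloc(500) is not None   # still fine at 99% utilization
assert pool.alloc(64) is None        # a few bytes past the limit: infinite cost
```

Nothing in the allocator's behavior warns you at 50% or 99% - the first failed allocation is the only signal, which is exactly the pricing problem.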
Since nobody cares about memory before you hit some arbitrary limit, this moment can be very sudden: without warning, suddenly you can't allocate anything.
This is a bit similar to a parking lot, where the first vehicle is as cheap to park as the next and the last - and then you can't park at all. Actually, it's even worse - memory is more similar to an unmarked parking lot, where people park any way they like, leaving much unused space. Then when a new car arrives, it can't be parked unless every other car is moved - but the drivers are not there.
(Actually, an unmarked parking lot is analogous to fragmented memory, and that problem is solved by heap compaction, at the cost of added runtime latency. But the biggest problem with real memory is that people allocate many big chunks where few small ones could be used - and probably would be used if the cost of memory were above zero. Can you think of a real-world analogy for that?..)
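The unmarked-parking-lot version of the problem can be simulated in a few lines: a first-fit allocator over a small arena, where freeing every other block leaves half the lot empty but no contiguous run big enough for a new arrival. A toy sketch (the slot counts are arbitrary):

```python
def first_fit(arena, size):
    """Find a run of `size` free slots (None) in the arena; return its start or -1."""
    run = 0
    for i, slot in enumerate(arena):
        run = run + 1 if slot is None else 0
        if run == size:
            return i - size + 1
    return -1

arena = [None] * 16
# Park eight 2-slot "cars" back to back, then let every other one leave.
for car in range(8):
    start = first_fit(arena, 2)
    arena[start:start + 2] = [car, car]
for car in range(0, 8, 2):
    arena = [None if slot == car else slot for slot in arena]

assert arena.count(None) == 8      # half the lot is empty...
assert first_fit(arena, 3) == -1   # ...but no 3-slot vehicle can park
```

Compaction would fix this arena by sliding the remaining cars together - which is exactly what a moving garbage collector does, and exactly what you can't do when "the drivers are not there", i.e. when raw pointers into the blocks exist.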
Why not price memory the way we price cycles?
I'd very much like to find a way to price memory - both instructions and data - the way we naturally price cycles. It'd be nice to have organizational pressure mount proportionately to the amount of memory spent.
But I just don't quite see how to do it, except in environments where it happens naturally. For instance, on a server farm, a larger memory footprint can mean that you need more servers - pressure naturally mounts to reduce the footprint. Not so on a dedicated PC or an embedded device.
Why isn't parking like fuel, for that matter? Why are there so many places where you'd expect to find huge underground parking lots - everybody wants to park there - but instead find parking shortages? Why doesn't the price of parking spots rise as spots become taken, at least where I live?
Well, basically, fuel is not parking - you can transport fuel but not parking spots, for example, so it's a different kind of competition - and then we treat them differently for whatever social reason. I'm not going to dwell on fuel vs parking - it's my analogy, not my subject. But, just as an example, it's perfectly possible to establish fuel price controls and get fuel shortages, and then fuel becomes like parking, in a bad way. Likewise, you could implement dynamic pricing of parking spots - more easily with today's technology than, say, 50 years ago.
Back to cycles vs memory - you could, in theory, "start worrying" long before you're out of memory, seeing that memory consumption increases. It's just not how worrying works, though. If you have 1G of memory, everybody knows that you can ship the program when it consumes 950M as easily as when it consumes 250M. Developers just shrug and move along. With speed, you genuinely start worrying when it starts dropping, because both you and the users notice the drop - even if the program is "still usable".
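If you did want to manufacture that worry mechanically, one sketch is a build-time check that fails once peak memory use crosses a budget set well below the physical limit - an artificial price that kicks in before the real one. The 80% fraction below is an arbitrary choice of mine, and `resource` is Unix-only:

```python
import resource

def peak_rss_bytes():
    """Peak resident set size of this process (ru_maxrss is in KiB on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

def within_budget(used, physical, fraction=0.8):
    """Fail the build once usage crosses `fraction` of physical memory -
    a price that rises before the hard limit does."""
    return used <= physical * fraction

# In a CI test, something like:
#     assert within_budget(peak_rss_bytes(), physical=1 << 30)
assert within_budget(250 * 2**20, physical=2**30)       # 250M of 1G: ships fine
assert not within_budget(950 * 2**20, physical=2**30)   # 950M of 1G: build fails
```

Of course, this only relocates the problem: a hard threshold at 80% is still a cliff, just an earlier one, unless the budget ratchets down as the program shrinks - which brings back the question of who believes in the nominal goal.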
It's pretty hard to create "artificial worries". Maybe it's a cultural thing - maybe some organizations more easily believe in goals set by management than others. If a manager says, "reduce memory consumption", do you say "Yes, sir!" - or do you say, "Are you serious? We have 100M available - but features X, Y and Z are not implemented and users want them!"
Do you seriously fight to achieve nominal goals, or do you only care about the ultimate goals of the project? Does management reward you for achieving nominal goals, or does it ultimately only care about real goals?
If the organization believes in nominal goals, then it can actually start optimizing memory consumption long before it runs out of memory - but sincerely believing in nominal goals is dangerous. There's something very healthy in a culture skeptical about anything that sounds good but clearly isn't the most important and urgent thing to do. Without that skepticism, it's easy to get off track.
How would you go about creating a "memory-consumption-aware culture"? I can think of nothing except paying per byte saved - but, while it sounds like a good direction with parking spots, with developers it could become quite a perverse incentive…