AI problems

January 11th, 2008

Animals have a uniform, closed architecture. The human brain is an open platform; people get by using a wide variety of techniques called "professions". The flexibility has its drawbacks. We aren't tuned for any particular profession, and apparently that's why everybody develops some sort of profession-related problems. Proctologists reportedly turn rude as time goes by. Rock stars live fast but die young. Hunters in the African deserts get to chase antelopes for a couple of days. Antelopes run much faster, so you would never try to chase one; but the hunter knows better – compared to him, the animal lacks stamina, and will get tired, and then he can kill it. But sometimes the hunter has the misfortune of chasing a particularly strong antelope, in which case he still won't be able to get close enough at the end of the second day. Having wasted all that energy, he now certainly has to refuel, so he settles for a nearby half-rotten corpse. The effect of that sort of meal on his digestive system is one problem that comes with his profession.

Programmers develop their own problems. Today, we'll talk about AI problems some of us are having. As you probably already know, but my trademark thoroughness still obliges me to say, AI stands for "Artificial Intelligence" and comes in two flavors, "deterministic" (like minimax) and "statistical" (like SVM). The combined efforts of various researchers led to an important breakthrough in this field, known to meteorologists as "the AI winter". This is the season when you can't get any VC money if you mention AI anywhere in your business plan. During this season, an alternate term was invented for AI: "Machine Learning". I think that the money/no-money distinction between "ML" and "AI" isn't the only one, and that in other contexts, AI=deterministic and ML=statistical, but I don't care. In real systems, you do both. Lots of things labeled as "AI" work and are useful in practical contexts. Others are crap. It's always like that, but this isn't what I came to talk about today. By "AI problems", I didn't mean the problems people face which require the application of methods associated with the term "AI". I meant "problems" in the psychiatric sense.
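(For the uninitiated, here's what the deterministic flavor boils down to – a toy minimax sketch in Python, hypothetical and not taken from any real system: the maximizer assumes the minimizer plays perfectly and picks the branch whose worst case is least bad.)

```python
# Toy minimax over a hand-rolled game tree: a leaf is an int score,
# an inner node is a list of child nodes. Purely illustrative.
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: the position's value
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three possible moves, each answered by a perfect opponent:
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, True))  # -> 3, the best of the worst cases
```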

A certain kind of reader will wonder whether I have the necessary qualifications to deal with a psychiatric issue so advanced. My credentials are humble, but I do work on hairy computer vision applications. The general problem computer vision deals with (identifying, classifying and tracking "objects" in real-world scenes) is considered "AI-complete" by some, and I tend to agree. I don't actually work on the AI bits – the algorithms are born a level and a half above me; I'm working on the hardware & software that's supposed to run them fast. I did get to see how fairly successful AI stacks up, with different people approaching it differently. Some readers of the credential-sensitive kind will conclude that I still have no right to tackle the deep philosophical bullshit underlying Artificial Intelligence, and others will decide otherwise. Anyway, off we go.

AI problems cover a vast area; we'll only talk about a few major ones. First of all, we'll deal with my favorite issue, which is of course The Psychophysical Problem. There are folks out there who actually think they believe that their mind is software, and that consciousness can be defined as a certain structural property of information processing machines. They don't really believe it, as the ground-breaking yosefk's Mind Expansion Experiment can easily demonstrate. I'll introduce that simple yet powerful experiment in a moment, but first, I want to pay tribute to the best movie of the previous century, which, among other notable achievements, provided the most comprehensive treatment of the psychophysical problem in popular culture. That motion picture is of course The Terminator, part I and, to an extent, part II. World-class drama. Remarkable acting (especially in part I – there are a couple of facial expressions conveying aggressive, hopeless, cowardly and impatient stupidity previously unheard of). Loads of fun.

Back to our topic, the movie features a digital computer with an impressive set of peripheral devices, capable of passing the Turing test. The system is based on Atari hardware, as this guy has figured out from the assembly listings cleverly edited into the sequences depicting the black-and-red "perspective" of the machine. According to the mind-is-software AI weenies, the device from the movie has Real Consciousness. The fascinating question of whether this is in fact the case is extensively discussed in the witty dialogs throughout the film. "I sense injuries", says the Atari-powered gadget. "This information could be called pain". Pain. The key to our elusive subject. I'm telling you, these people know their stuff.

The mind-is-software approach is based on two assumptions: the Church-Turing thesis and the feelings-are-information axiom. In my trademark orderly fashion, I'll treat the first assumption second and the second assumption first. To show the invalidity of the feelings-are-information assumption, we'll use yosefk's Mind Expansion Experiment. It has two versions, the right-handed and the left-handed, and it goes like this. If you're right-handed, put a needle in your right hand and start pushing it into your left arm. If you're left-handed, put a needle in your left hand and start pushing it into your right arm. While you're engaged in this entertaining activity, consider the question: "Is this information? How many bits would it take to represent?" Most people will reach enlightenment long before they cause themselves irreversible damage. Critics have pointed out that the method can cause die-hard AI weenies to actually injure themselves; whether this is a bug or a feature is still a subject of hot debate in the scientific community. Anyway, we do process something that isn't exactly information, because it fucking hurts; I hope we're done with this issue now.

Some people don't believe the first of the two above-mentioned assumptions, namely the Church-Turing thesis. Most of these people aren't programmers; they simply lack the experience needed to equate "thinking" and "doing". But once you actually try to implement decision-making, as opposed to making the decision yourself, your perspective changes. You usually come to think that in order to decide, you need to move stuff around according to some procedure, which isn't very different from the method of people doing manual labor at low-tech construction sites. Thinking is working; that's why "computational power" is called "power". I've only heard one programmer go "...but maybe there's a different way of thinking from the one based on logic". I couldn't think of any, except maybe the one based on psychoactive chemicals. "A different way of thinking". To me, it's like arguing that you can eat without food or kick ass without an ass, and I bet you feel the same way, so let's not waste time on that.

Next problem: some people actually think that a machine will pass the Turing test sooner or later. I wouldn't count on that one. Physicists claim that a bullet can fly out of one's body with the wound closing and healing in the process, because observations indicate that you can get shot and wounded, and if a process is physically possible, that same process reversed in time is also physically possible. It's just that the probability of the reverse process is low. Very low. Not messing with the kind of people who can shoot you is a safer bet than counting on this reversibility business. Similarly, the Church-Turing thesis claims that if a person can do it, a universal computing device can emulate it. It's just the feasibility of this emulation that's the problem. One good way to go about it would be to simulate a human brain in a chip hooked to enough peripherals to walk and talk, and then let it develop in the normal human environment (breastfeeding, playing with other kids, love & marriage, that kind of thing). The brain simulation should of course be precise enough, and the other kids should be good kids and not behave like dirty racists when our Turing machine drives into their sand pit. If the experiment is conducted in this clean and unbiased way, we have a good chance of having our pet machine pass the Turing test by the time the other kids are struggling with their IQ tests and other human-oriented benchmarks.

Seriously, human language is so damn human that it hardly means anything to you if you are a Turing-complete alien. To truly understand even the simplest concepts, such as "eat shit" or "fuck off and die", you need first-hand experience of operating a human body with all of its elaborate hardware. This doesn't invalidate the Church-Turing thesis in the slightest, but it does mean that automatic translation between languages will always look like automatic translation, because the "human" that would interpret the original that way clearly lives inside a box with flashing lights, a reset button and a cooling fan. For similar reasons, a translation by a poorly educated person will always look like a translation by a poorly educated person. I know all about it, because in Israel there's a million ex-Russians, so they hire people to put Russian subtitles into movies on some channels. Unfortunately, they don't seem to have any prerequisites for the job, which means that I get to read a lot of Russian translations by morons. Loads of fun. These people, equipped with their natural intelligence, barely pass the Turing test if you ask me, so I keep my hopes low on Turing-test-passing AI.

Moving on to our next problem, we meet the people who think that we actually need AI. We don't. Not if it means "a system that is supposed to scale so that it could pass the Turing test". And this is the only thing AI means as far as I'm concerned here. We already have "artificial intelligence" that isn't at all like our natural intelligence, but still beats our best representatives in chess, finds web pages, navigates by GPS and maps and so on. Computers already work. So the only thing we don't have is artificial intelligence that simulates our own. And this is as tremendously useless as it is infeasible. Natural intelligence as we know it is a property of a person. Who needs an artificial person? If you want to have a relationship, there's 6G of featherless two-legged Turing machines to pick from. If you want a kid to raise, you can make one in a fairly reliable and well-known way. We don't build machines in order to raise them and love them; we build them to get work done.

If the thing is even remotely close to "intelligent", you can no longer issue commands; you must explain yourself and ask for something, and then it will misunderstand you. Normal for a person, pretty shitty for a machine. Humans have the sacred right to make mistakes. Machines should be working as designed. And animals are free to mark their territory using their old-fashioned defecation-oriented methodology. That's the way I want my world to look. Maybe you think that we'll be able to give precise commands to intelligent machines. Your typical AI weenie will disagree; I'll mention just one high-profile AI weenie, Douglas Hofstadter of Gödel, Escher, Bach. Real-life attempts at "smart" systems also indicate that with intelligence, commands aren't. The reported atrocities of DWIM ("Do What I Mean") rival those of such a precise command as "rm .* -rf", which is supposed to remove the dot files in the current directory, but really removes more than that – historically, the glob ".*" also matched ".." and would cheerfully recurse into the parent directory.
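(If you'd like to watch that particular footgun without firing it, here's a minimal sketch in Python – hypothetical, mimicking a historical POSIX shell's view of the glob rather than anything shipped with this post:)

```python
import fnmatch
import os

# A historical POSIX shell expands ".*" against directory entries that
# include "." and ".."; Python's os.listdir omits those two, so we add
# them back to mimic the shell's view of the world.
entries = os.listdir(".") + [".", ".."]
matches = sorted(e for e in entries if fnmatch.fnmatch(e, ".*"))
print(matches)
# ".." matches, which is how "rm .* -rf" gets to climb into the parent
# directory instead of just eating the local dot files.
```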

Finally, many people think that AIish work is Scientific and Glamorous. They feel that working on AI will get them closer to The Essence of The Mind. I think that 40 years ago, parsing had that vibe. Regular, Context-Free, automatic parser generation, neat stuff, look, we actually know how language works! Yeah, right.

You can build a beautiful AI app, and take your experience with you to the next AI app, but you won't build a Mind that you can then run on the new problem and have it solved. If you succeed, you will have built a software system solving your particular problem. Software is always like that. A customer database front-end isn't a geographical database front-end. Similarly, face recognition software isn't vehicle detection software. Some people feel that mere mortal programmers are biting bits, some obscure boring bits on their way to obsolescence, while AI hackers are hacking the Universe itself. The truth is that AI work is specialized to the obscure constraints of each project to a greater extent than work in most other areas of programming. If you won't take my word for it, listen to David Chapman from the MIT AI Lab: "Unlike most other programmers, AI programmers rarely can borrow code from each other." By the way, he mentions my example, machine vision, as an exception, but most likely he refers to lower-level code. And why can't we borrow code? "This is partly because AI programs rarely really work." The page is a great read; I recommend you point and click.

As I've promised, this wasn't about AI; it was about AI-related bullshit. And as I've already mentioned, lots of working stuff is spelled with "AI" in it. I've even been thinking about reading an AI book lately, to refresh some things and learn some new ones. And then, lots of AI-related work is in Lisp. They have taste; you can't take that away.

1. ZungBang, Jan 12, 2008

A note from the nitpicking department:
It was actually 6502 assembly code; the 6502 was at the time the CPU of several computers, most notably the Apple II, the Commodore 64 and my personal favorite, the Atari 800XL.

You should probably watch Blade Runner (I strongly advise against the "director's cut") – where you have a whole police force for dealing with (read: terminating) rogue wetware androids (the in-movie term is "replicants").

2. Yossi Kreinin, Jan 12, 2008

I had a bad feeling about this; CPUs never have cute names like "Atari", there's always a numeric tail in that name, and in some notable cases, like the case in point, there's actually no alphabetic head... Sloppy. Well, I'll comfort myself with the thought that "Atari-based gadget" has a better sound to it than "6502-based gadget".

I should definitely watch everything combining high class drama with in-depth treatment of the psychophysical problem. That always stirs my faith in the human race. And my faith in the human race is definitely something I like stirred.

3. nmancuso, Jan 30, 2008

"Physicists claim that a bullet can fly out of one's body with the wound closing and healing in the process, because observations indicate that you can get shot and wounded, and if a process is physically possible, that same process reversed in time is also physically possible."

Are you so sure about that? Wouldn't that break the 2nd Law of Thermodynamics?

4. Yossi Kreinin, Jan 30, 2008

Of course it would, but that law is statistical. For example, a gambler who keeps gambling and winning breaks the law of "The House Always Wins", and in fact none of us will live to see that, but it's physically possible. Particles around me can combine to form an exact copy of me without breaking physical laws, it's just extremely unlikely. So is a Turing-test-passing AI :)

5. Steev, Nov 25, 2008

so really what one can come away from this with is that we have nothing better to do than to write programs to do it for us?

6. Yossi Kreinin, Nov 26, 2008

I don't know how this is related to any of the above, but, um, I'd say that we have better things to do than things programs can do for us.

I have a warm spot in my heart for the "replacing ourselves with machines is insane" attitude, of which Vonnegut was one prominent proponent. However, I think this worldview is fatally misguided and brain-damaging.

7. Ben, Oct 11, 2009

Perhaps you've seen a machine in the throes of trying to write copious numbers of error messages to a log because of a hardware fault. That's pain. Does the machine know that it's in pain? Perhaps, if there's some monitoring software with a rule that says "When the VMM is thrashing, the disks and CPU are pegged, and the hardware error log is increasing by more than 1000 entries per second, email/page/beep loudly for an admin to fix the problem". Human pain is information meant for the lowest parts of our brains, and they take over from the rational parts and try to get away from the carnivorous predator or the forest fire that evolution taught them to deal with.

8. Yossi Kreinin, Oct 11, 2009

@Ben: I disagree, twice: (1) pain is not just information – try it and you'll see that you can't brush off the psychophysical problem that easily, and (2) pain is not just there "for the lowest parts of our brain" – it's a pervasive feedback mechanism without which we're almost completely blind despite having seemingly impressive reasoning faculties.

9. human, Nov 10, 2009

Why? Pain is information, and feelings born of pain are information. If you turn off a human's pain center, he will not feel it. Lobotomy? Learn the brain (not that I know it).

10. Yossi Kreinin, Nov 10, 2009

If you strangle a human, he will not feel pain after some time, either. Nor will a computer process information if you short-circuit it. The ability to break something fully or partially provides no exhaustive explanation of its nature.

11. Bal, Nov 7, 2010

And you cannot implement "pervasive feedback mechanisms" in software? Take the behaviorist approach: what exactly do feelings do that cannot be implemented in a machine? A heightened heart rate? Sweating hands? A motivation to run? A motivation to kiss? Which?

12. Yossi Kreinin, Nov 7, 2010

These aren't feelings, these are behaviors – which is OK from a behaviorist perspective I guess, but not from the simplest of human perspectives which kinda focuses on feelings very much.

13. Bal, Nov 7, 2010

If you strip feelings of their causes and effects, what remains? It all leads to the old philosophical question of whether a perfect imitation is the thing itself or not; the current front line is the p-zombie, which IMHO is just pure nonsense.

AI is more about understanding how the mind works. Copying an entire human is mostly uninteresting, but copying some of the capabilities of the brain is not. Feelings, for example, are not too interesting; they are just our genes' regulating mechanism to control the computing we do. But the fact that we are capable of building mental models from unrelated previous experience, in a short time and from a small amount of data, is really fascinating and mysterious.

And AI is about power, too. A machine just somewhat 'smarter' (whatever that means) than von Neumann or Einstein or Perelman could do things which cannot be done by anybody else, probably ever. And that will have an impact, just like these people had.

14. Yossi Kreinin, Nov 7, 2010

Feelings are perhaps not too interesting if you're programming a computer or commenting on an article, but very interesting if you drop a brick on your foot.

Not all of life is confined to logic, mathematics, philosophy, physics, technology, thinking and talking.

15. Bal, Nov 7, 2010

I agree on that, but AI is not aiming to reproduce the entire human life. It is aiming to reproduce a thing which can experience the entire human life. The two are not the same.

You are mixing explanation levels here. Denying strong AI because of feelings does not work. It is like trying to prove a mathematical theorem with a poem.

16. Yossi Kreinin, Nov 8, 2010

Well, you're mixing two different claims here. I didn't deny strong AI because of feelings – I agree it would make a problematic argument – but because of its implausibility. There's a bet over this one, Kapor vs Kurzweil – we'll see how that works out.

(Although arguably strong AI is as much or more about poetry than math or science – Turing used the ability to write sonnets and then explain why they're written thus and not otherwise as an example of what it means to be intelligent; but there is no need to argue over small details here – it's like arguing over the fine points of what communism means, totally unnecessary when making the claim that we won't see a society where each gets according to his needs and gives according to his abilities in the next 50 years.)

17. Will Pearson, May 15, 2011

There is something between human-like AI and current AI research that we don't currently do, and that would be useful.

We can't currently make a computer that looks after itself; there is always a human root or admin somewhere who is supposed to be looking after the computer, updating the software, installing new stuff and removing malware. We get things like botnets because that assumption is not true.

This is very different from what people traditionally think AI is about. But it is a useful trait of evolved creatures that computers are missing. Animals manage to improve their skills without a superuser. We haven't really explored how we might make a computer look after itself and what the limitations are. Autonomic Computing is the closest, but it is generally oriented towards maintaining a known system state.

18. Yossi Kreinin, May 15, 2011

I think a key non-technical reason to keep things the way they are, with a human in the loop, is to make sure that things evolve in accordance with the changing perceived purposes of humans rather than the changing perceived purposes of machines.

19. Will Pearson, May 17, 2011

I agree that we will want to keep humans in the loop, as it were, but I think there are ways of doing that without assuming that humans will be able to pick the things to change or to understand all the changes made. A human should, however, be able to evaluate the resulting changes.

So: humans as CEOs of computers, rather than as engineers. Or, another way to think about it: if we understand how the brain works, there is no reason that we have to duplicate it in its entirety. We could build the bits that don't deal with goals (such as the sensory systems) and use humans to provide the goal-orientation information (probably slowly to start with, due to lack of bandwidth, but with the potential for more integration as we improve our ability to integrate with brains). I suppose exo-brain add-ons are what I am thinking of.

I think the key non-technical reason no one is pushing for this is that it is hard, and everyone in computing can make sufficient money with the status quo.

20. Yossi Kreinin, May 17, 2011

Economically, the fact that the status quo is profitable to existing vendors does nothing to stop a new vendor from pushing this and then eating into their market share.

As to CEOs of computers – let's say that it works badly enough with humans (a rogue employee can do a whole lot of damage to his employer; for that matter, a CEO is himself an employee of the shareholders and not infrequently wipes them out for his own gain).

21. Will Pearson, May 17, 2011

I meant that under the status quo, new vendors/researchers can find new markets for extensions of the existing computing/AI paradigm and be vastly profitable a lot more easily than they can by pushing something speculative like self-upgrading computers.

Indeed, CEOs of computers will go wrong; the question is whether it would be worse than the current botnet/phishing/identity-theft stuff we get by assuming that computers have engineers to look after them.

As we don't know how they will work, it is hard to say.

22. vikram, Jul 17, 2011

I am studying SYBCA.

23. JeramieH, Sep 27, 2011

I don't get the pain argument. As far as I ever thought of pain, it really was only a signal to indicate damage, and to limit activity that worsens damage. What more are you implying pain is? Your explanations (comment 8, etc) are vague.

24. Yossi Kreinin, Sep 27, 2011

@JeramieH: it's not a logical or a scientific argument. I'm just saying that pain is a feeling – that there's a fundamental difference between an SMS telling you "damage detected, limit your current activity" and actual pain; that the message can be represented and studied as a mathematical object, but pain cannot be; and that everybody understands this at the gut-feeling level – that pain is not a number though an image file is, which is why I use pain as an example and not, say, eyesight.

Some people dismiss feelings as "illusion" on the basis of, for instance, our ability to drug ourselves into a state where pain is not felt. I think it's about the same as saying that life does not exist because you can club someone to death – and in fact that's a view some people subscribe to (that there's no fundamental difference between a living human, his dead body and a pile of dirt).

It's fine as far as logic is concerned – feelings are impossible to objectively observe and formalize, and thus do not create a problem for any sort of logical argument, being outside its realm. I think one's worldview shouldn't be restricted to things accessible to logic simply because only a tiny minority of things are, but that's of course a subjective position.

However, there are exactly zero people who actually subscribe to the life-and-death-are-the-same worldview as evidenced by their actions. Even psychopaths tend to inflict pain on the living and do not derive equal pleasure from "hurting machines".

25. sreekala, Dec 5, 2011

good

26. bfrs, Feb 28, 2012

I think Qualia (http://en.wikipedia.org/wiki/Qualia) is a word you would like to check out.

27. Yossi Kreinin, Feb 28, 2012

Qualia I know about. But what is bfrs? Brominated flame retardants?

28. bfrs, Feb 28, 2012

bfrs is now sort of like xkcd, meaningless. It was the first handle I could think of when Hacker News asked for one. At that time, it stood for Ben-Franklin:Richard-Saunders (Franklin's pen name). Ben Franklin (for reasons now forgotten) is the mascot of my passport photo web app (http://freepassportphoto.dyndns.org/).

Some comments on stuff I found interesting in your post:

"...I want to pay a tribute to the best movie of the previous century, which, among other notable achievements, provided the most comprehensive treatment of the psychophysical problem in the popular culture. That motion picture is of course The Terminator, part I and, to an extent, part II."

People called me crazy for expressing the same opinion. I lost count of how many times I watched Terminator part I. It's what got me interested in AI. A couple of years back, it got knocked off the top position by the 1997 film of Carl Sagan's "Contact". When I was a kid, the strange code listings seemed impressive; now I know that such low-level Atari assembly code would never surface to the top level. If any code did come to the top level, it would be Lisp. Unfortunately Lisp, as Larry Wall put it, "has all the visual appeal of oatmeal with fingernail clippings mixed in", unlike the orderly and imposing-looking assembly.
----

"...To pass the Turing test, a machine needs to have first-hand experience of operating a human body with all of its elaborate hardware."

This is well appreciated by AI "weenies" today. For example see Philip Greenspun and Ellen Spertus' male-female Turing test (http://philip.greenspun.com/software/watson).
----

"...Moving on to our next problem, we meet the people who think that we actually need AI. We donโ€™t. Not if it means โ€œa system that is supposed to scale so that it could pass the Turing testโ€. And this is the only thing AI means as far as Iโ€™m concerned here...So the only thing we donโ€™t have is artificial intelligence that simulates our own. And this is as tremendously useless as it is infeasible. Natural intelligence as we know it is a property of a person. Who needs an artificial person?...If you want a kid to raise, you can make one in a fairly reliable and well-known way."

It looks like you are overlooking the possibility of an "intelligence explosion". A frequent oversight that prompted I. J. Good (one of Turing's colleagues at Bletchley Park) to write his famous essay: "Speculations Concerning the First Ultraintelligent Machine"
(http://web.archive.org/web/20010527181244/http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html#6_CellAssembliesAndSubassemblies)
"...highly intelligent people can overlook the intelligence-explosion. They say that there is no point in building a machine with the intelligence of a man, since it is easier to construct human brains by the usual method...Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make."
----

Your irreverence gave me the lulz. Calling the formidable T-800/Arnold an "Atari-powered gadget", and AI heavyweights Hofstadter and Kurzweil "AI weenies" – I find it very funny! Thanks for the lulz.

29. Yossi Kreinin, Feb 29, 2012

I also like Phil Greenspun's angle on AI there; he isn't what I'd call an AI weenie – he's very sensible about the whole thing.

Regarding intelligence explosion: I can believe it in a limited context – where you have a machine (program) living in a formal universe (of theorems or 3D models or whatever) and this machine gets better and better in its domain. I don't believe this sort of thing will lead to a Turing-test-passing AI, for reasons that I mostly discussed above.

30. bfrs, Feb 29, 2012

philg, of course, is no AI weenie. Some of his best satire is directed towards AI (http://philip.greenspun.com/humor/ai.text). What I should have added is that, philg being popular, people are aware of the man-woman Turing test etc.

Today's AI heavyweights, Hawkins, Kurzweil, Ng, Norvig, Thrun et al., don't consider the goal of AI to be to build a machine that can pass as a human. Instead the goal is to replicate the capabilities displayed by the cortex. The core idea goes by various names, "Mountcastle's hypothesis" being a popular one.

By the way, I found your blog while trying to decide if I should write my new project (a streaming X11 screen grabber) in C++. I started with the x11vnc code, which is pure C89, and kept thinking to myself, "what a pain C is: must define variables at the beginning of scope, can't write type-agnostic functions, no OOP-enabled abstraction facilities, etc. etc." However, I know C++ only in a half-assed way (like most people?), and if I did choose it, there would be a steep learning curve to contend with. I think your FQA has scared me enough to stay the hell away from C++. As you put it, "an itch here or a scratch there is much preferable to having your legs blown off!". My decision, hopefully for the better, is to stick with my usual Python/C combo.

31. Yossi Kreinin, Feb 29, 2012

Hope it works out well for you... IMO the good reason to use C++ is if there are great libraries in it doing what you want and the interfaces are too big to wrap.

32. Frank, Oct 1, 2012

I would recommend a couple of pounds of Daniel Dennett; that should be able to start your healing process...

33. rob, Jun 22, 2013

This is awesome. Really. I didn't immediately attempt the YKMEE, but I did laugh out loud after parsing the word "damage". I tend to hastily duck and declare that pain is an event, which solves nothing – the problem simply becomes "is an event information?" My favorite answer, for the next thirty seconds, is "Snow".

34. Yossi Kreinin, Jun 22, 2013

Well, keep not attempting it; it's not nearly as enjoyable when attempted.

35. joseph, Sep 8, 2013

Great article. I guess I'm considered an AI researcher, but I agree that it's pointless to consider the philosophical problems associated with "strong AI." To me, AI is about getting machines to solve harder problems.

About Terminator: I've always found it weird that the Terminator has textual information overlaid on its camera feed like a HUD (HUMAN CASUALTIES: 0.0). This means that its computer takes machine-readable data, renders it as images of text, integrates it with the camera data, then uses OCR to translate it back into machine-readable data. Maybe this is the Terminator's version of the internal dialogues going on in our heads.

36. Yossi Kreinin, Sep 9, 2013

I guess the assembly listings and HUMAN CASUALTIES are how they communicate "it's a robot" to the audience.

37. Mark, Feb 17, 2014

I agree with some stuff you say. But I don't agree that AI is useless. Indeed, you almost admit it would be very nice to have, when you talk about translation. Can you seriously deny that having machine translation is very useful? Sure, you may say we will never get it, but that doesn't make it useless.

38. ludkiller, May 24, 2016

Well, now people are writing AI which can write AI. True AI soon :)

39. Yousef, Mar 13, 2017

Great writing on the topic. Insightful and humane. Thank you for spending the time and effort to write this piece.


