02 Aug, 2010, Davion wrote in the 41st comment:
Votes: 0
Ssolvarain said:
I wish I could ask my computer how it's feeling today.


"tail /var/log/messages" :)
02 Aug, 2010, Deimos wrote in the 42nd comment:
Votes: 0
KaVir said:
The computer reads input and makes decisions, but it does so based on precise computations, following the instructions it's been given.

This is fundamentally incorrect. In fact, there's a whole branch of AI that I'm sure you're at least partially familiar with called neural networks that wouldn't even exist if what you're saying were true. And if you want to take the "it takes instructions to create a neural network to begin with" route, I'll just point you to DNA, which is also nothing but a set of biological instructions with which to create our own brains.

ATT_Turan said:
I know this is another philosophical discussion between you and KaVir, but…SERIOUSLY? The functioning of a computer is, traced back to its most basic level, due to humans figuring out how to manipulate the flow of electric current. Are you saying that your transistors are sentient? If not, at what point did they gain awareness (which is what perception refers to)?

The computer itself (the hardware) is not intelligent, in the same way that your body is not intelligent (or even your brain, if it lacks the ability to perceive, reason, and make decisions - see: coma, mental defects, etc.). When we talk about computers being intelligent, we're referring to the software that drives them. Also, when traced back to its most basic level, the functioning of a human brain is nothing but the manipulation of electric current, as well. ;)
02 Aug, 2010, Ssolvarain wrote in the 43rd comment:
Votes: 0
The point is that golems are constructs. Constructs work on strict orders. Altering those orders is extremely difficult by design, since creating a golem is usually a fairly long and arduous process. What would be the point if someone could just steal it? :P
02 Aug, 2010, Runter wrote in the 44th comment:
Votes: 0
It seems to me that both charm and golems could be redefined as needed. Charm under those circumstances may or may not make sense, but comparing it to charming laptops and castles is rather silly.

It seems the overall point is that mindless things can't be charmed under the typical mechanics of generic fantasy mythos.

Perhaps golems in a certain mud do have minds and a shackled will of their own. Maybe the charm spell has different explained mechanics.

It's useful in these types of discussions to explain the required foundation they will take place on. Not only did I not read that, it seems like we're discounting any other possibility as ridiculous offhand, even in the context of a made-up fantasy universe.
02 Aug, 2010, KaVir wrote in the 45th comment:
Votes: 0
Deimos said:
KaVir said:
The computer reads input and makes decisions, but it does so based on precise computations, following the instructions it's been given.

This is fundamentally incorrect. In fact, there's a whole branch of AI that I'm sure you're at least partially familiar with called neural networks that wouldn't even exist if what you're saying were true.

No, I'm sorry, but it is not incorrect. Artificial intelligence and cognitive modeling try to simulate some properties of neural networks, and there's been some degree of success in certain areas, but that does not mean that my laptop, or my coffee machine, or my alarm clock, or my washing machine, have "minds". They only respond to direct instructions, which they follow precisely. In short, they work just like the typical fantasy golem.
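
And for what it's worth, even a trained neural network boils down to fixed arithmetic on its inputs. Here's a minimal sketch in C (the network shape and weights are made-up numbers, purely for illustration):

Code:
#include <math.h>
#include <stdio.h>

/* A tiny 2-3-1 feed-forward network. Given the same inputs and the
 * same weights, it produces the same output - every single time. */
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    const double in[2] = {0.5, 0.9};
    const double w_hidden[3][2] = {{0.2, -0.4}, {0.7, 0.1}, {-0.5, 0.6}};
    const double w_out[3] = {0.3, -0.8, 0.5};
    double hidden[3], out = 0.0;
    int i, j;

    /* Hidden layer: weighted sum of inputs, squashed by the sigmoid. */
    for (i = 0; i < 3; i++) {
        double sum = 0.0;
        for (j = 0; j < 2; j++)
            sum += w_hidden[i][j] * in[j];
        hidden[i] = sigmoid(sum);
    }
    /* Output layer: weighted sum of the hidden activations. */
    for (i = 0; i < 3; i++)
        out += w_out[i] * hidden[i];

    printf("output: %f\n", sigmoid(out));
    return 0;
}

It "learns" only in the sense that a training algorithm adjusts those weight numbers; once they're set, it follows them as precisely as any other program.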

Runter said:
It's useful in these types of discussions to explain the required foundation they will take place on. Not only did I not read that…

Then I'll post a summary for you:

Post #21: "The typical interpretation of a golem is of a mindless creature - an automaton, much like a fantasy version of a robot."

Post #27: "Please bear in mind that I did specifically refer to 'a mindless creature - an automaton'"

Post #29: "Well golems originally came from Jewish folklore, and in many depictions they're portrayed in much the same way as D&D - perfectly obedient, following instructions to the letter."

Post #31: "Being mindless, they do nothing without orders from their creators … That is how they work in D&D. It is how they work in my mud"

Post #33: "You're honestly suggesting that the reason I can program a computer is because it has a mind capable of understanding my instructions?"

Deimos's response: "Yes, that's what I'm suggesting. Things without minds can't understand instructions."

He believes that "Something without a mind can't be ordered", and that the golems in my mud (and D&D, and much of the mythology they're based on) are therefore a "paradox". He also believes that my coffee machine is intelligent.
02 Aug, 2010, David Haley wrote in the 46th comment:
Votes: 0
Quote
Note that something doesn't have to have complex intelligence to be considered intelligent by most experts. After all, there's a reason you still call a computer tic-tac-toe player an AI (emphasis on the I) opponent.

Actually, most AI experts would not consider a Tic-Tac-Toe program to be the slightest bit intelligent. Even a very cursory reading of Wikipedia's page on intelligence shows that intelligence is much broader than what you have described.
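
To make that concrete: the entire "brain" of a perfect Tic-Tac-Toe player is a short exhaustive search of the game tree. A minimal sketch in C (purely illustrative, not any particular program):

Code:
#include <stdio.h>

/* The eight winning lines of a tic-tac-toe board, cells indexed 0-8. */
static const int wins[8][3] = {
    {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
};

/* +1 if X has won, -1 if O has won, 0 otherwise. */
static int winner(const char *b)
{
    int i;
    for (i = 0; i < 8; i++)
        if (b[wins[i][0]] != ' ' &&
            b[wins[i][0]] == b[wins[i][1]] &&
            b[wins[i][1]] == b[wins[i][2]])
            return b[wins[i][0]] == 'X' ? 1 : -1;
    return 0;
}

/* Exhaustively try every move; nothing is "perceived" or "reasoned". */
static int minimax(char *b, int x_to_move)
{
    int i, best, w = winner(b);
    if (w) return w;
    best = x_to_move ? -2 : 2;  /* sentinel: no legal move found yet */
    for (i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        b[i] = x_to_move ? 'X' : 'O';
        {
            int s = minimax(b, !x_to_move);
            if (x_to_move ? s > best : s < best) best = s;
        }
        b[i] = ' ';
    }
    return (best == 2 || best == -2) ? 0 : best;  /* board full: draw */
}

int main(void)
{
    char b[9] = {'X',' ',' ', ' ','O',' ', ' ',' ',' '};
    int i, best = -2, move = -1;
    for (i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        b[i] = 'X';
        {
            int s = minimax(b, 0);
            if (s > best) { best = s; move = i; }
        }
        b[i] = ' ';
    }
    printf("best move for X: square %d\n", move);
    return 0;
}

Every "decision" it makes is a brute-force enumeration of outcomes; there is nothing there to perceive or reason with.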

By your definition, toasters are intelligent because they "decide" when to pop the bread up when they "perceive" that the timer has run out, and rice cookers are intelligent because they "decide" to stop cooking when they "perceive" that the pressure level has gone to a certain point. I'm sorry, but that's really kind of silly. :smile:

Surely, your definition requires some finessing, unless you are willing to make it so broad as to include toasters, coffee machines and rice cookers. (And if you broaden it that much, surely you see that it's not terribly useful in determining what exactly makes humans 'intelligent' compared to machines or even animals – many being surely more intelligent than machines! – in the first place.)

Anyhow, there is a difference between charming something's "mind", and influencing the way it works. You could conceivably have a magic spell that affects an intelligent being's reasoning, and then another magic spell that manipulates mindless programming or changes mechanical operation. I'm not sure what the big deal here is.

Quote
In fact, there's a whole branch of AI that I'm sure you're at least partially familiar with called neural networks that wouldn't even exist if what you're saying was true.

It's a stretch to compare neural networks to DNA, but ok. I don't think KaVir fully understood what you meant by neural networks, but he's still not incorrect. Neural networks as we know them today are hardly "intelligent" – your argument is demonstrably flawed if only because we simply do not have AI agents yet despite having rather sophisticated neural networks. (You also might be biting off more than you can chew if you want to get into a proper technical discussion of what exactly neural networks are. You might find that some people here know quite a bit about them.)
02 Aug, 2010, Deimos wrote in the 47th comment:
Votes: 0
Runter said:
It seems to me that both charm and golems could be redefined as needed. Charm under those circumstances may or may not make sense, but comparing it to charming laptops and castles is rather silly.

I agree, which is why I conceded that if you've defined golems to be truly mindless (which seems rational enough to me), then it makes no sense for them to be able to be charmed. The reason we're still discussing it is because I gave that concession with the caveat that if it's mindless, it also can't take instructions or make its own decisions. It would just sit there and do nothing, because it has no mind with which to drive its actions. It would be like a sorcerer ordering a sword to swing itself. Nothing would happen. At least, until the sorcerer used magic to force the sword to swing, but that's not the giving and following of orders - that's actually controlling an object's actions yourself. This seemingly contradicts KaVir's explanation of what golems are and how they behave in his implementation, but he doesn't agree with that.

Runter said:
It seems like we're discounting any other possibility as ridiculous offhand, even in the context of a made-up fantasy universe.

Well, we're talking about concepts that only exist in a fantasy context, within which, quite literally, anything is possible. Because of that, I've disclaimed plenty of times that the stances I take are only my opinion. KaVir, on the other hand, has seemingly taken the stance that anyone who does not agree simply doesn't have the same attention to detail or isn't well-read in the D&D manual and Jewish folklore. That's rather off-putting to me, but hey, I'm not easily offended, so whatever. :p

David Haley said:
By your definition, toasters are intelligent because they "decide" when to pop the bread up when they "perceive" that the timer has run out, and rice cookers are intelligent because they "decide" to stop cooking when they "perceive" that the pressure level has gone to a certain point. I'm sorry, but that's really kind of silly.

Hmm, no, I think you're confused as to what perception is. If I trigger a mouse trap, it does not "perceive" that I've touched its trigger. It also does not "reason" based on that and "decide" to snap shut. It's all just a mechanical reaction based on some physical action. The same goes for a toaster timer. The toaster does not "perceive" that the timer has stopped. The timer is just a mechanical device that stops, which triggers a mechanical reaction. Now, if for whatever reason, you have some kind of toaster with a processor inside that has a temperature gauge, an eye that monitors browning, and can be programmed to deliver you perfectly toasted bread every time with fail-safes to prevent burning and jazz like that - then you have an intelligent toaster. This is how KaVir was describing his coffee maker.
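
To be clear about the distinction, here's a minimal sketch of the kind of sensor-driven control loop I'm describing (the sensor and heater hooks are hypothetical stand-ins for real hardware, simulated here so the example runs):

Code:
#include <stdio.h>

#define TARGET_BROWNING 0.60   /* 0.0 = white bread, 1.0 = charcoal */
#define MAX_SAFE_TEMP   220.0  /* fail-safe cutoff, degrees C */

/* Hypothetical hardware hooks, simulated purely for illustration. */
static double browning = 0.0;
static double read_browning(void)    { browning += 0.01; return browning; }
static double read_temperature(void) { return 180.0; }
static void   stop_heating(void)     { puts("pop!"); }

int main(void)
{
    for (;;) {
        if (read_temperature() > MAX_SAFE_TEMP) {  /* fail-safe */
            stop_heating();
            break;
        }
        if (read_browning() >= TARGET_BROWNING) {  /* done toasting */
            stop_heating();
            break;
        }
    }
    return 0;
}

The difference from the mouse trap is that this loop reacts to measured conditions and chooses between outcomes, rather than being a fixed spring on a timer.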

David Haley said:
Anyhow, there is a difference between charming something's "mind", and influencing the way it works. You could conceivably have a magic spell that affects an intelligent being's reasoning, and then another magic spell that manipulates mindless programming or changes mechanical operation.

If something has no mind to influence, how can it be influenced? It can be physically or magically controlled, which I've already conceded, but then a mindless golem would have to be controlled at all times by a sorcerer, like a puppet. KaVir is making the case for a mindless golem which can be ordered to go do something and it's capable of doing that. Can you order a sword to go do something? Sure. Will it go do it? No - because it has no mind to process and carry out your instructions.

David Haley said:
It's a stretch to compare neural networks to DNA, but ok. I don't think KaVir fully understood what you meant by neural networks, but he's still not incorrect. Neural networks as we know them today are hardly "intelligent" – your argument is demonstrably flawed if only because we simply do not have AI agents yet despite having rather sophisticated neural networks. (You also might be biting off more than you can chew if you want to get into a proper technical discussion of what exactly neural networks are. You might find that some people here know quite a bit about them.)

I didn't compare neural nets to DNA - read the post again carefully. I'm comparing neural nets to brains. And I cut him off at the pass by saying that you can't argue that a neural net requires instructions to be created in the first place, because so does a brain (DNA being those instructions). Also, I'm not sure why you think AI agents with neural nets don't exist. Perhaps you're trying to redefine AI to mean human-like?
02 Aug, 2010, David Haley wrote in the 48th comment:
Votes: 0
Quote
Now, if for whatever reason, you have some kind of toaster with a processor inside that has a temperature gauge, an eye that monitors browning, and can be programmed to deliver you perfectly toasted bread every time with fail-safes to prevent burning and jazz like that - then you have an intelligent toaster.

I'm sorry, but if your definition of intelligence is so broad as to include any device with even the simplest processor in it, it is far too broad to be truly useful.

We can come up with a machine that works solely with mechanical processes (pulleys, water valves, what have you) and your definition would exclude it, even though a trivial electronic pressure sensor with a simple circuit board somehow qualifies. I'm not sure why you place any value whatsoever on a process being electrical rather than mechanical.

Go pick up any textbook on philosophy of mind or other forms of intelligence and you will find that to qualify for intelligence, you need a far more complex set of conditions than a simple circuit board.

I mean, really, saying that toasters, rice cookers or coffee machines are intelligent is ridiculous.

Quote
If something has no mind to influence, how can it be influenced?

Well, if you start from your extremely loose definition of what a "mind" is, then almost everything has a "mind". But, well, I reject that definition of "mind" and substitute a more useful one, in which case KaVir's golem cannot have a "mind" and yet still be capable of simple autonomous motion.

Quote
Also, I'm not sure why you think AI agents with neural nets don't exist. Perhaps you're trying to redefine AI to mean human-like?

No, I'm not trying to redefine AI to mean human-like, I'm trying to use a useful definition of intelligence that includes humans (and perhaps some other animals as well) and excludes toasters and coffee machines.

Nothing currently in existence has reasoning powers even approaching a human's, regardless of whether or not the thing is "human-like". Therefore, your statement to KaVir that neural networks somehow disprove anything he's saying is incorrect.


There is a difference between something that can perform a task and something that is intelligent. As I said before, people do not call a Tic-Tac-Toe player intelligent in any way resembling a human's intelligence. The TTT player is in fact rather 'stupid', only knowing how to perform a single task.


This comes down to a definitional question. If your definition of intelligence is this broad, we don't really have anything more to say to each other because I think your definition is basically useless (and I imagine you have your own feelings about the position(s) the rest of us are taking).

Insofar as the practical question here is concerned, one needs only limit oneself to whether or not something can be made to do something it would not normally do. The philosophical questions of identifying intelligence and minds are (fortunately for us) irrelevant.
02 Aug, 2010, Deimos wrote in the 49th comment:
Votes: 0
David Haley said:
I'm sorry, but if your definition of intelligence is so broad as to include any device with even the simplest processor in it, it is far too broad to be truly useful.

I didn't come up with that definition. It's quite common as it applies to the field of AI (where I pulled it from). Obviously there's a good deal of controversy over what actually constitutes intelligence (in general; not just on this thread), so I'm not saying that this is The One True Definition ™. I'm merely saying that it's a fairly popular one, it makes the most sense to me, and in my opinion, it's easily understandable.

David Haley said:
We can come up with a machine that works solely with mechanical processes (pulleys, water valves, what have you) and your definition would exclude it, even though a trivial electronic pressure sensor with a simple circuit board somehow qualifies. I'm not sure why you place any value whatsoever on a process being electrical rather than mechanical.

It has nothing to do with electrical vs. mechanical. It's theoretically possible (though ridiculously impractical) to create a purely mechanical computer. In that case, you could create intelligence mechanically, but this has nothing to do with my point. Think of my mouse trap example in terms of a human reflex. If I hit your knee with a hammer, your leg will likely fly up. This was not intelligent behavior. Your brain had nothing to do with that action - it was purely a "mechanical reaction" like what happens when you trip a mouse trap. You did not perceive, reason, or decide anything. Now, having said that, after the reflex you could have perceived that I hit your knee, reasoned that this hurt, and decided to punch me in the face for hitting you with a hammer. That's intelligent behavior. A mouse trap (and any other object without a "mind") is obviously incapable of this.

David Haley said:
Go pick up any textbook on philosophy of mind or other forms of intelligence and you will find that to qualify for intelligence, you need a far more complex set of conditions than a simple circuit board.

Many philosophers posit that creativity, imagination, and self-awareness are required for intelligence. Many AI researchers disagree, and believe that this is simply a higher level of intelligence (which is why they dubbed it "strong AI"). You'll also find some religious people who believe that morality and the concept of a "soul" can be lumped into the definition, but I find that hard to swallow for many other reasons.

David Haley said:
As I said before, people do not call a Tic-Tac-Toe player intelligent in any way resembling a human's intelligence. The TTT player is in fact rather 'stupid', only knowing how to perform a single task.

Something doesn't need to be "as intelligent as a human" to be intelligent, just as something doesn't need to be "as hard as diamond" to be hard. A TTT player is quite stupid when compared to a human, yes. But that does not make it unintelligent. It just makes it "a lot less intelligent than a human." If every time you see me using the term "intelligent", you're reading that as "as intelligent as a human", then it's no wonder you disagree with what I'm saying.

David Haley said:
Insofar as the practical question here is concerned, one needs only limit oneself to whether or not something can be made to do something it would not normally do. The philosophical questions of identifying intelligence and minds are (fortunately for us) irrelevant.

I completely agree, and I'm rather burned out on this topic now, since it appears to be going in circles, so I'll let this be my last post on the issue.
02 Aug, 2010, David Haley wrote in the 50th comment:
Votes: 0
Quote
Many philosophers posit that creativity, imagination, and self-awareness are required for intelligence. Many AI researchers disagree, and believe that this is simply a higher level of intelligence (which is why they dubbed it "strong AI").

I've yet to meet an AI researcher who seriously argued that a TTT program was "intelligent", even weakly so, any more than a coffee machine or rice cooker is "intelligent" just because it happens to follow some algorithm based on stimuli from the external environment.

I'd ask you to provide citations for all these AI researchers' views on why their search-space exploring programs are in fact intelligent, but you said you were done, so.

Quote
It has nothing to do with electrical vs. mechanical.

Well, that's not what you said earlier. You explained that mouse traps are not intelligent because "it's all just a mechanical reaction based on some physical action". Therefore, your "theoretically possible (though ridiculously impractical) <…> purely mechanical computer" should be just as unintelligent, because all of its operation will be mere mechanical reactions to physical actions.

Quote
<…> like what happens when you trip a mouse trap. You did not perceive, reason, or decide anything.

I'm not sure why you're talking about mouse traps because I was not talking about mouse traps; in fact the examples I gave (and KaVir's) were directly based on conditional reaction to external stimuli following some programmed logic (such as a simple circuit board).

Incidentally, the nervous system is involved in reflex reactions, although the higher thought processes of the brain are not. This is why you flinch away from burning heat before you realize that it is hot, for example; the nervous system in the spine is responsible for that.

Quote
Something doesn't need to be "as intelligent as a human" to be intelligent, just as something doesn't need to be "as hard as diamond" to be hard. A TTT player is quite stupid when compared to a human, yes. But that does not make it unintelligent. It just makes it "a lot less intelligent than a human." If every time you see me using the term "intelligent", you're reading that as "as intelligent as a human", then it's no wonder you disagree with what I'm saying.

I'm not sure why you keep insisting that I am equating intelligence with human intelligence, especially after I said I wasn't the last time you asked. My problem with your definition of "intelligence" is simply that it is too broad to be useful, as it doesn't even come close to identifying agents that can perceive, reason, learn, react, adapt, innovate, synthesize, transfer, and so forth. (Note that I am not including 'emotion', which is arguably a human trait.) Coffee machines, toasters and TTT players exhibit none of the above (you might stretch "learning" to include gathering statistics in the way a TTT player might, although that is a very simplistic form of learning).
02 Aug, 2010, KaVir wrote in the 51st comment:
Votes: 0
Deimos said:
I agree, which is why I conceded that if you've defined golems to be truly mindless (which seems rational enough to me), then it makes no sense for them to be able to be charmed. The reason we're still discussing it is because I gave that concession with the caveat that if it's mindless, it also can't take instructions or make its own decisions. It would just sit there and do nothing, because it has no mind with which to drive its actions. It would be like a sorcerer ordering a sword to swing itself. Nothing would happen. At least, until the sorcerer used magic to force the sword to swing, but that's not the giving and following of orders - that's actually controlling an object's actions yourself. This seemingly contradicts KaVir's explanation of what golems are and how they behave in his implementation, but he doesn't agree with that.

I don't agree, no, because the way I described golems working is very similar to the way computers work: you give them instructions, and they follow them to the letter. They don't reason, they have no emotions, they show no self-awareness, no survival instinct…they do precisely what they're told, no more and no less. That is not "intelligence".

Deimos said:
KaVir, on the other hand, has seemingly taken the stance that anyone who does not agree simply doesn't have the same attention to detail or isn't well-read in the D&D manual and Jewish folklore.

You stated that the approach used by my mud, by D&D, and by mythology is an inconsistent "paradox", and have taken the stance that computers have minds and that my coffee machine is intelligent. I find your perspective…unique, and therefore cannot help but inquire further.
02 Aug, 2010, Runter wrote in the 52nd comment:
Votes: 0
In my opinion, meaningful machine learning is a hurdle a device must clear before I would consider it intelligent.

Even if it has impressive decision-making processing, appearing to be intelligent isn't proof of it. If we didn't know better, perhaps a toaster would appear intelligent.

This gets more difficult to discern as things try to emulate intelligence. No matter how complex a decision tree is, it still doesn't make the grade with me.
02 Aug, 2010, Ssolvarain wrote in the 53rd comment:
Votes: 0
Runter said:
In my opinion, meaningful machine learning is a hurdle a device must clear before I would consider it intelligent.


Yarr. I'm going off on another ignorable tangent, but it's occurred to me that we've designed machine intelligence in our own image. Or our own thinking, at least. I often wonder if the evolutionary path of technology is the only one.

And notice how you can't ask Google a specific question? It doesn't actually search based on the meaning of the words you've used (or comprehend the question)… it just matches the words you've given against a page's content…
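
Something like this, in spirit - a minimal sketch of bare keyword matching (the page text and query are invented for illustration; real search engines add ranking on top, but no comprehension):

Code:
#include <stdio.h>
#include <string.h>

/* Count how many of the query words appear in the page text. */
static int match_score(const char *page, const char *words[], int n)
{
    int i, score = 0;
    for (i = 0; i < n; i++)
        if (strstr(page, words[i]) != NULL)
            score++;
    return score;
}

int main(void)
{
    const char *page = "golems are mindless constructs that follow orders";
    const char *query[] = { "are", "golems", "intelligent" };
    printf("matched %d of 3 query words\n", match_score(page, query, 3));
    return 0;
}

The page scores well whether or not it answers the question - the words matched; the question was never understood.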
02 Aug, 2010, kiasyn wrote in the 54th comment:
Votes: 0
Lemmings are intelligent:



*Splat splat splat* oh wait…
09 Aug, 2010, David Haley wrote in the 55th comment:
Votes: 0
I thought that this NYT article was very interesting and at least somewhat relevant to the conversations here.

The introduction:
Quote
THE news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools.


One of the passages I found most pertinent:
Quote
I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn't be confused with the deeper issues of intelligence or the nature of personhood.


In other words, these are just software problems, algorithm design if you prefer, not studies into deeper meaning.

Of course, one might argue – and indeed I was somewhat surprised that Deimos never did – that we human beings ourselves are (potentially) nothing more than a complex series of chemical reactions with neurons firing off this way and that and different levels of neurotransmitter activities. Therefore, we are not that different from a computer processor or other machine; it just so happens that our processor is written in fleshy matter rather than silicon circuitry and transistors. But this way lie dragons, at least insofar as you believe that a purely physical interpretation of the brain removes any possibility of some separate, intangible soul – you start getting into rather delicate religious arguments.
10 Aug, 2010, Cratylus wrote in the 56th comment:
Votes: 0
David Haley said:
Of course, one might argue – and indeed I was somewhat surprised that Deimos never did – that we human beings ourselves are (potentially) nothing more than a complex series of chemical reactions with neurons firing off this way and that and different levels of neurotransmitter activities. Therefore, we are not that different from a computer processor or other machine; it just so happens that our processor is written in fleshy matter rather than silicon circuitry and transistors. But this way lie dragons, at least insofar as you believe that a purely physical interpretation of the brain removes any possibility of some separate, intangible soul – you start getting into rather delicate religious arguments.


http://baetzler.de/humor/meat_beings.htm...

Having said that, it's fair to suggest that a mechanical device can be as good at thinking as a submarine can be good at swimming.

As to how good we are at thinking, I quote Emo Philips:
Quote
I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this.


Particularly fascinating to me is the Cartesian theater demonstration of the infinite regress problem inherent in the idea of a sparkly spirit operating the controls.

I don't think there's a religious problem in discussing human identity and intelligence in comparison to machine identity and intelligence unless someone has problems with boundaries. This would be an issue on a forum where members are held responsible for acts of other members, and I think that's no longer the case here.

-Crat
http://lpmuds.net
10 Aug, 2010, Scandum wrote in the 57th comment:
Votes: 0
KaVir said:
I don't agree, no, because the way I described golems working is very similar to the way computers work: you give them instructions, and they follow them to the letter. They don't reason, they have no emotions, they show no self-awareness, no survival instinct…they do precisely what they're told, no more and no less. That is not "intelligence".

Basically you're calling a terminator a non-intelligent entity, and I think few people would fully agree with such a statement.

One part of an AI system is setting goals and priorities, and any system incapable of 'intelligently' determining goals and priorities (like a terminator) is arguably non-intelligent.

Another part is strategy: an NPC that randomly picks an exit when fleeing is much easier to kill than one that picks the exit with the most buddies in it. Logic predicts there is one strategy that is better than the countless other strategies, and if you find that strategy you might be able to create an AI with 20 lines of code that is more effective than 1000 lines of poorly chosen strategies.
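
For example, a minimal sketch of that flee heuristic (the room structure and field names are invented, not from any particular codebase):

Code:
#include <stddef.h>

#define MAX_EXITS 6  /* north, east, south, west, up, down */

struct room {
    struct room *exits[MAX_EXITS];  /* NULL where there is no exit */
    int ally_count;                 /* friendly NPCs standing in the room */
};

/* Pick the exit whose destination holds the most allies; -1 if trapped. */
int pick_flee_exit(const struct room *here)
{
    int dir, best_dir = -1, best_allies = -1;

    for (dir = 0; dir < MAX_EXITS; dir++) {
        const struct room *to = here->exits[dir];
        if (to != NULL && to->ally_count > best_allies) {
            best_allies = to->ally_count;
            best_dir = dir;
        }
    }
    return best_dir;
}

A dozen lines, and the NPC already flees somewhere sensible instead of somewhere random.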

Another part is tactical intelligence, which would be the various direct responses to specific situations. This is probably the easiest and most tedious (when there are a lot of unique situations and responses) part of AI to get right.

Memory is important as well. A lot of AIs don't have a memory and are therefore arguably non-intelligent (see the movie Memento to observe the human equivalent of a typical AI). As with strategy, it's important to remember the most important things. Computers (arguably) have a huge advantage over humans in this department, though a fuzzy memory might be advantageous.
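
A minimal sketch of such a memory - a fixed set of slots where the oldest entry is forgotten first (all names invented for illustration):

Code:
#include <string.h>

#define MEMORY_SLOTS 8
#define NAME_LEN     32

struct npc_memory {
    char attacker[MEMORY_SLOTS][NAME_LEN];  /* remembered attacker names */
    int  next;                              /* oldest slot, reused first */
};

/* Record an attacker, overwriting the oldest memory when full. */
void remember_attacker(struct npc_memory *mem, const char *name)
{
    strncpy(mem->attacker[mem->next], name, NAME_LEN - 1);
    mem->attacker[mem->next][NAME_LEN - 1] = '\0';
    mem->next = (mem->next + 1) % MEMORY_SLOTS;
}

/* Has this NPC been attacked by 'name' recently? */
int remembers(const struct npc_memory *mem, const char *name)
{
    int i;
    for (i = 0; i < MEMORY_SLOTS; i++)
        if (strcmp(mem->attacker[i], name) == 0)
            return 1;
    return 0;
}

(Zero-initialize it, e.g. struct npc_memory mem = {0};, so the unused slots compare as empty strings.)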

What is obviously left out is the ability to create goals and strategies from scratch, but I wouldn't be surprised if human brains are inherently equipped with a general-purpose algorithm from which goals and strategies are formed. Instincts in turn are used for tactical decision-making, while instincts themselves are hardwired and strategic, though they're strategies created by evolution rather than intelligent design (as per scientific consensus).

I've never really seen a good generic resource on AI that outlines the various aspects, but I'd say if you get these four right, you're way ahead of most implementations.
10 Aug, 2010, Cratylus wrote in the 58th comment:
Votes: 0
Scandum said:
I wouldn't be surprised if human brains are inherently equipped with a general-purpose algorithm from which goals and strategies are formed.


"sensation"

ooh pretty

mmm tasty

oohh baby dont stop

ouch burns
10 Aug, 2010, David Haley wrote in the 59th comment:
Votes: 0
Scandum said:
Another part is strategy: an NPC that randomly picks an exit when fleeing is much easier to kill than one that picks the exit with the most buddies in it. Logic predicts there is one strategy that is better than the countless other strategies, and if you find that strategy you might be able to create an AI with 20 lines of code that is more effective than 1000 lines of poorly chosen strategies.

The semblance of intelligence in a very specific situation is hardly a proof of general intelligence.

Scandum said:
Another part is tactical intelligence, which would be the various direct responses to specific situations. This is probably the easiest and most tedious (when there are a lot of unique situations and responses) part of AI to get right.

If this is how you think intelligent beings develop all of their responses to specific situations, I'm somewhat confused as to how you lead your normal life. Part of intelligence is generalizing and transferring your knowledge to new situations, ones you have not seen before.

You most likely would not hard-code the response to every single situation. Rather, you would capture the general characteristics and have the mob react to those according to its usual goal metrics. Common, specific situations can be given hard-coded handling if the AI can't figure them out well enough, but that is a special case, not the general case.
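
As a minimal sketch of that idea (the state fields and scoring weights are invented for illustration): rather than enumerate situations, score each available action against the mob's current state and take the best one.

Code:
/* Score actions against goal metrics instead of hard-coding situations. */
struct mob_state {
    double health;  /* 0.0 (dying) to 1.0 (unhurt) */
    double threat;  /* 0.0 (harmless foe) to 1.0 (lethal foe) */
};

enum action { ACT_ATTACK, ACT_FLEE, ACT_HEAL, ACT_COUNT };

static double score(enum action a, const struct mob_state *s)
{
    switch (a) {
    case ACT_ATTACK: return s->health - s->threat;    /* press the attack */
    case ACT_FLEE:   return s->threat - s->health;    /* get out alive */
    case ACT_HEAL:   return (1.0 - s->health) * 0.8;  /* patch up first */
    default:         return -2.0;
    }
}

enum action choose_action(const struct mob_state *s)
{
    int a;
    enum action best = ACT_ATTACK;
    double best_score = score(ACT_ATTACK, s);

    for (a = ACT_ATTACK + 1; a < ACT_COUNT; a++)
        if (score((enum action)a, s) > best_score) {
            best_score = score((enum action)a, s);
            best = (enum action)a;
        }
    return best;
}

A new situation - say, a tougher opponent than any the builder anticipated - just produces a higher threat value, and the right behavior falls out without a new special case.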

Cratylus said:
Particularly fascinating to me is the Cartesian theater demonstration of the infinite regress problem inherent in the idea of a sparkly spirit operating the controls.

Ah, good ol' Dennett.

Cratylus said:
I don't think there's a religious problem in discussing human identity and intelligence in comparison to machine identity and intelligence unless someone has problems with boundaries.

"Problem"? Well, there is certainly an issue; when a religion posits the existence of a soul separate from the body, you cannot speak of human materialism without going against that religion's tenets. Whether or not that becomes a problem is another question entirely…
10 Aug, 2010, KaVir wrote in the 60th comment:
Votes: 0
For the record, I do not disagree with the idea that a machine could one day become intelligent.

What I disagree with is the idea that a machine must be intelligent if it can follow instructions. In particular, I disagree with the suggestion that my coffee machine, my washing machine or even my laptop have a "mind".

The "robot" I mentioned previously was not a reference to some futuristic science fiction android, but to simple modern day robots. It was made as a direct comparison to the typical D&D-style fantasy golem.

And the golem itself was only given as an example of a creature you probably don't want people affecting with a charm spell. You could extend the same reasoning to undead, or plants, or slimes, or magically animated suits of armour, or whatever else. Perhaps you also have "control undead" or "dominate plants" spells, but that doesn't change the underlying point: Not all fantasy creatures should be affected by everything in the same way.

And that is part of the bigger point, which is: There isn't always a clear distinction between "creature" and "object".