10 Aug, 2010, Cratylus wrote in the 61st comment:
Votes: 0
KaVir said:
And that is part of the bigger point, which is: There isn't always a clear distinction between "creature" and "object".


Just ask Hugh Hefner!

My unscientific, romanticized opinion is that we have difficulty assigning legitimate sentience to automata
because we do not feel that they feel. Our own Weltanschauung comes from millions of years of sensory
programming; it's at least as fair to say we are feeling machines as thinking machines, if there is even
a real distinction between the two. A toaster, zombie, or golem does not inspire that fellow feeling of
fellow being, because we know that if we prick them, they do not scream - so we don't trust them, and
we value them less, no matter the exaflops they are capable of.

My point about the submarine is that without the motivation that drives organic "thinking", an object
can have the same practical outward effect without amounting to an event of equal significance to
us as having "thought".

There is a Wittgensteinian argument to be made that those external artifacts are all the evidence we have,
even as first-person observers (I know, it's weird), that an intelligence is at work, and that it's the sum
of those communicative events that counts, regardless of motivation. I hope I will be forgiven
for not dwelling too much on such a cold and alien view.

I never could stomach more than a few pages of Neuromancer…dunno why, but as I understand
it the book contains an interesting thought experiment. If I take a snapshot of my brain's
machine state, and load it onto a sophisticated computer, and resume that machine-state's
operation, and that program is allowed to communicate…do the communications represent evidence
of a program that is intelligent? Is it sentient?

Does it have rights?

-Crat
http://lpmuds.net
10 Aug, 2010, KaVir wrote in the 62nd comment:
Votes: 0
Well my "creature" and "object" comment was really meant in mud terms - a reference to the beginning of this thread, the discussion about the traditional division (from a game design and implementation perspective) of things into clear-cut categories such as "creature", "object" and "room", and scenarios where a mud might wish to blur the lines between them.

And yes, if I added submarines I might well implement them as "creatures". Not because I consider them real creatures, but because I would want them to have much the same functionality as a creature. Of course it wouldn't look like a creature to the players; I would wrap it with some appropriate cosmetics - but from an implementation perspective, it would be a creature.

As I mentioned earlier, I also represent buildings as "creatures", so here are a couple of examples of what I mean by hiding the functionality behind cosmetics.

Example 1: I attack a creature. The creature responds with an attack of its own, which in this case is a one-use creature summoning action (yes, a swarm is also a type of creature):

You breathe a blast of fire at the village hall, burning its door.
A mob of angry villagers come charging out through the door.


Example 2: I attack a creature with enough damage to kill it. After displaying a customised death message, the creature drops some treasure, which in this case happens to be another creature:

You breathe a blast of fire at the small farmhouse, blowing the top of its roof off!
The farmhouse burns down, leaving behind only a smoking ruin.
An angry man clambers out of the wreckage.


If you know what you're looking at, it's obviously exactly the same functionality as fighting a typical monster. However the cosmetics hide that fact away from the players - and even if you know what it's doing, I don't think it's particularly jarring from an immersion perspective.
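For anyone curious how that might look in code, here's a minimal sketch of the idea - hypothetical names throughout, not my actual implementation. Both "buildings" reuse the ordinary creature structure; only the hooks and the cosmetic messages differ:
Quote
// A minimal sketch (hypothetical names, not my actual code) of
// "same creature functionality, different cosmetics".
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Creature {
    std::string name;
    int hp;
    // One-use counterattack hook (Example 1: summon a swarm creature).
    std::function<void(std::vector<Creature>&)> on_attacked;
    // Death hook (Example 2: the "treasure" dropped is another creature).
    std::function<void(std::vector<Creature>&)> on_death;
};

int main() {
    std::vector<Creature> world;

    Creature hall{"the village hall", 200,
        [](std::vector<Creature>& w) {
            std::puts("A mob of angry villagers come charging out through the door.");
            w.push_back({"a mob of angry villagers", 50, nullptr, nullptr});
        },
        nullptr};

    Creature farmhouse{"the small farmhouse", 30, nullptr,
        [](std::vector<Creature>& w) {
            std::puts("The farmhouse burns down, leaving behind only a smoking ruin.");
            w.push_back({"an angry man", 20, nullptr, nullptr});
        }};

    // Example 1: standard attack handling, cosmetic output.
    std::puts("You breathe a blast of fire at the village hall, burning its door.");
    if (hall.on_attacked) { hall.on_attacked(world); hall.on_attacked = nullptr; }

    // Example 2: standard death handling, cosmetic output.
    std::puts("You breathe a blast of fire at the small farmhouse, blowing the top of its roof off!");
    farmhouse.hp = 0;
    if (farmhouse.hp <= 0 && farmhouse.on_death) farmhouse.on_death(world);
    return 0;
}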
10 Aug, 2010, Scandum wrote in the 63rd comment:
Votes: 0
David Haley said:
The semblance of intelligence in a very specific situation is hardly a proof of general intelligence.

Not really, but it's somewhat uncommon for AI implementations to consider strategy; most of the behavior is a mess of various tactical decisions.

David Haley said:
If this is how you think intelligent beings develop all of their responses to specific situations, I'm somewhat confused as to how you lead your normal life. Part of intelligence is generalizing and transferring your knowledge to new situations, ones you have not seen before.

That's really a part of the memory section, like the ability to discriminate. On-the-fly tactical responses are best compared to instincts; they're typically predefined.

On the topic of general intelligence, generalizing data and applying it to unknown situations has yet to be done (as far as I know), which is why I would use an intelligent design approach where the best strategies are picked beforehand - more a challenge for the programmer's IQ than a matter of designing complex algorithms.
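To illustrate what I mean (made-up names, just a sketch): the programmer enumerates the strategies up front, and at runtime the mob simply picks one based on the situation. All the actual "thinking" happened at design time:
Quote
#include <cstdio>

// Sketch of the "intelligent design" approach: strategies hand-picked
// beforehand, selected at runtime by simple situation checks.
enum class Strategy { Flee, Kite, AllOutAttack, Defend };

struct Situation { int my_hp; int enemy_hp; bool enemy_is_caster; };

Strategy pick_strategy(const Situation& s) {
    if (s.my_hp < 20)             return Strategy::Flee;          // survival first
    if (s.enemy_is_caster)        return Strategy::AllOutAttack;  // deny casting time
    if (s.enemy_hp > 2 * s.my_hp) return Strategy::Kite;          // can't win toe-to-toe
    return Strategy::Defend;
}

int main() {
    Situation s{15, 80, false};
    std::printf("chosen strategy: %d\n", static_cast<int>(pick_strategy(s)));
    return 0;
}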
10 Aug, 2010, David Haley wrote in the 64th comment:
Votes: 0
Scandum said:
Not really, but it's somewhat uncommon for AI implementations to consider strategy; most of the behavior is a mess of various tactical decisions.

I'm not sure why you say this.

Scandum said:
On the topic of general intelligence, generalizing data and applying it to unknown situations has yet to be done (as far as I know), which is why I would use an intelligent design approach where the best strategies are picked beforehand - more a challenge for the programmer's IQ than a matter of designing complex algorithms.

You realize that this is not how things actually work in many game-playing AIs like chess, right? Programmers don't sit down and conceive of every possible board state, and program the response to it…
10 Aug, 2010, Scandum wrote in the 65th comment:
Votes: 0
David Haley said:
You realize that this is not how things actually work in many game-playing AIs like chess, right? Programmers don't sit down and conceive of every possible board state, and program the response to it…

You realize chess doesn't require generalizing data, nor applying it to unknown situations? Not to mention that in a chess AI the best strategies are picked beforehand, allowing the brute-force search to examine fewer positions and reach deeper depths.
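To be clear about the mechanism I mean, here's the rough shape of such a search: a hand-written evaluation function plus alpha-beta pruning. Just a sketch - Position, moves() and evaluate() are hypothetical stand-ins, not any real engine's API:
Quote
#include <algorithm>
#include <vector>

struct Position { int dummy = 0; };  // stand-in for real board state

// Stub move generator and hand-tuned evaluation; in a real engine these
// embody the judgments the programmers picked beforehand.
std::vector<Position> moves(const Position&) { return {}; }
int evaluate(const Position&) { return 0; }

// Alpha-beta: prunes branches that cannot affect the result, so the same
// brute-force search can afford to look deeper.
int alphabeta(const Position& p, int depth, int alpha, int beta, bool maximizing) {
    auto next = moves(p);
    if (depth == 0 || next.empty()) return evaluate(p);
    for (const auto& child : next) {
        int score = alphabeta(child, depth - 1, alpha, beta, !maximizing);
        if (maximizing) alpha = std::max(alpha, score);
        else            beta  = std::min(beta, score);
        if (alpha >= beta) break;  // prune
    }
    return maximizing ? alpha : beta;
}

int main() { return alphabeta(Position{}, 4, -100000, 100000, true) == 0 ? 0 : 1; }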
10 Aug, 2010, David Haley wrote in the 66th comment:
Votes: 0
Quote
You realize chess doesn't require generalizing data, nor applying it to unknown situations?

Oh, so now you are fond of generalizing data after all. :rolleyes:
You were talking about writing an AI to use pre-programmed tactics in every possible situation – and I was saying that that's not how things work. If you've now decided that generalization is good after all, I have no disagreement here…

Quote
Not to mention that in a chess AI the best strategies are picked beforehand, allowing the brute-force search to examine fewer positions and reach deeper depths.

Oh? Pray tell, then, what are the best strategies? How much have you actually looked at the best chess AI players?
11 Aug, 2010, Deimos wrote in the 67th comment:
Votes: 0
I just wanted to say that I think Cratylus pretty much hit the nail on the head, and I also wanted to leave you guys with another thought experiment:

Working your way backwards from a human being (which I think we can all agree is intelligent), find out at what point you stop attributing intelligence to things. Is a chimpanzee intelligent? A dolphin? A dog? A slug? A tree? A microbe? A computer? A rock? Etc. Now, for those things that you labelled intelligent, list the common behaviors that they exhibit (self-awareness, creativity, perception, decision-making, etc.).

Doing this will indirectly force you to actually define intelligence, rather than labelling something intelligent or not based on gut-feeling or what have you. Once you've defined it, then you can apply that definition to things you didn't think about at first and it may surprise you to find out what you actually consider intelligent based on your own definition.
11 Aug, 2010, Koron wrote in the 68th comment:
Votes: 0
I'm going to have to toss my hat into Deimos's ring here. Golems follow verbal instructions, and the interpretation of language requires an external, independent processing center. We know this center isn't located within a golem's creator, because it will follow precise literal instructions as stated rather than following what the creator actually wanted (see also: djinni and wishes). For this reason, we have to assume that the magic(s) propelling the golem are not under the direct control of the magus in question, so a direct puppet-and-puppeteer style control does not apply.

How does a golem, given instruction to "defend the Jews", end up killing everyone else if it does not possess at least a rudimentary form of intelligence? If it did not have this, it would be incapable of interpreting the order, and it would sit there motionless, much like a child who has been given instructions in a language he does not speak. Does it have a mind in the stereotypical biological sense? Of course not. I have to agree that D&D's interpretation is not logically consistent, especially considering the apparent spiteful nature of golems when it comes to selectively interpreting their masters' instructions, inflicting upon their masters great pain and headacheyness (perhaps deliberately for having been trapped in a state of perpetual subservience). It is a far better explanation to assume that the magic itself creates, at the very least, a basic language processing center. If this isn't indicative of an intelligence, I'm not sure what is.

Does this intelligence more closely resemble a sentient or mechanical form? I don't think this question has much merit, to be honest. In the case of either, an interloper with a very specific skill set will be able to interfere with its order processing. If it is driven by the magical equivalent of ones and zeroes, what's to stop a golem "hacker" from adjusting where the bits fall? Where are a golem's initial orders stored? Are these orders invariable? (Why would they be? If the creator can change a golem's orders, why couldn't new orders be spoofed? Wouldn't this essentially be a "charm golem" spell?)

Essentially, I reject the idea that a golem's defenses are impenetrable – it may be impractical enough to be considered outside the realm of possibility for most people who face golems, but this still doesn't make it impossible. The idea comes down to a debate between sentience and intelligence, and while the latter is required for the former, the inverse is not true. The comparison of a golem, which follows verbal instructions, to a coffee maker with on-off functionality hooked into a timer* is a ridiculous oversimplification that distracts from the actual issue. To suggest "you think my coffee maker is intelligent lololol that's silly" does not actually refute any of his arguments.



*Besides, even your coffee maker can be hacked. Consider changing the internal timer your "charm coffee maker" spell.
11 Aug, 2010, David Haley wrote in the 69th comment:
Votes: 0
Koron said:
Golems follow verbal instructions, and the interpretation of language requires an external, independent processing center.

Why does it have to be external or independent? I'm assuming that you meant w.r.t. the golem.

Koron said:
Essentially, I reject the idea that a golem's defenses are impenetrable <…> *Besides, even your coffee maker can be hacked. Consider changing the internal timer your "charm coffee maker" spell.

I'm not sure that position was taken here. KaVir has been saying that a golem does not have a mind susceptible to the same kind of charming as, say, a human. I don't believe he ever said it cannot be interfered with in any way whatsoever.

Koron said:
The comparison of a golem, which follows verbal instructions, to a coffee maker with on-off functionality hooked into a timer* is a ridiculous oversimplification that distracts from the actual issue.

No it's not. It was following the exact same methodology that Deimos just requested that other people use, namely taking his definition and seeing where it leads. He said himself that such appliances are "intelligent".

Koron said:
To suggest "you think my coffee maker is intelligent lololol that's silly" does not actually refute any of his arguments.

When an argument leads to absurd logical conclusions, chances are that there is something wrong with the argument.
Furthermore, if a definition of what is interesting about intelligence includes coffee makers, chances are also rather high that the definition has not, in fact, identified what is interesting about intelligence.
11 Aug, 2010, Koron wrote in the 70th comment:
Votes: 0
David Haley said:
Why does it have to be external or independent? I'm assuming that you meant w.r.t. the golem.

It's clearly external to the magus because the golem does not follow desire, but rote instruction. It's independent because the magus does not have to be in the presence of the golem for the golem to continue operating. Language is essentially a series of symbols used to represent ideas, and it takes intelligence to convert symbol to meaning. A mindless statue, even if it's animate, cannot make this transition.

David Haley said:
I'm not sure that position was taken here. KaVir has been saying that a golem does not have a mind susceptible to the same kind of charming as, say, a human. I don't believe he ever said it cannot be interfered with in any way whatsoever.

Fair enough. I don't mean to put words in KaVir's (or anyone else's) mouth. I was just having a little rant.

David Haley said:
No it's not. It was following the exact same methodology that Deimos just requested that other people use, namely taking his definition and seeing where it leads. He said himself that such appliances are "intelligent".

I really think it is. A golem's orders are constantly interpreted and reinterpreted in every situation as even the smallest details change. This certainly resembles a coffee maker in the "is it 00:00 yet? if so, make coffee!" sense, but it also resembles a sentient human guard as he patrols the old Sherwood forest trail for signs of Robin's men. A coffee maker may or may not be intelligent, but establishing a pedantic interpretation of the word to determine whether it is ultimately isn't useful for analyzing our golem. Our stereotypical magus does not spend hours and hours writing hard-coded instructions because that's not how magic works. Instead, he relies on the golem's ability to interpret those instructions, applying them to a variety of situations for which he cannot plan. The makers of coffee machines cannot do the same, because the machine is not capable of reasoning. The golem's approach has to be "I am in situation X. What is the best action to take to achieve instructions Y?" The golem's creativity ultimately depends on your GM's preferences. Furthermore, some constructs are obviously more intelligent than others (with some being fully sentient). I see a coffee maker as having the approximate intelligence of a crossbow (if it has any, it's totally irrelevant) while a golem has that of an animal (which is really not a useful label; it might be utilitarian even if it's not particularly smart).
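If it helps, here's the loop I'm imagining, with made-up names (and obviously not how magic actually works): the golem scores each available action against its standing instructions, rather than matching the situation against a pre-written rule table:
Quote
#include <algorithm>
#include <string>
#include <vector>

struct Action { std::string name; };
struct Situation { std::vector<Action> options; };

// Toy heuristic: how well would this action serve instructions Y in
// situation X? A real golem's scoring is left to your GM.
int score(const Action& a, const std::string& instructions) {
    if (instructions.find("defend") != std::string::npos &&
        a.name.find("attack intruder") != std::string::npos)
        return 10;
    return 1;
}

// "I am in situation X. What is the best action to achieve instructions Y?"
Action choose(const Situation& x, const std::string& y) {
    return *std::max_element(x.options.begin(), x.options.end(),
        [&](const Action& a, const Action& b) { return score(a, y) < score(b, y); });
}

int main() {
    Situation x{{{"stand guard"}, {"attack intruder"}}};
    Action best = choose(x, "defend the workshop");
    return best.name == "attack intruder" ? 0 : 1;
}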
11 Aug, 2010, David Haley wrote in the 71st comment:
Votes: 0
Quote
It's clearly external to the magus because the golem does not follow desire, but rote instruction.

OK, I thought you were talking about external w.r.t. the actor receiving and acting upon the instructions.

Quote
A golem's orders are constantly interpreted and reinterpreted in every situation as even the smallest details change. This certainly resembles a coffee maker in the "is it 00:00 yet? if so, make coffee!" sense

What resemblance do you see here? You say later on that they're really quite different. If they're similar, it is perhaps in the sense that some condition makes some thing happen, but then again what is this really saying?

Quote
A coffee maker may or may not be intelligent, but establishing a pedantic interpretation of the word to determine whether it is ultimately isn't useful for analyzing our golem.

I think it only came up because the given definition of intelligence was being disputed, and the coffee maker was used as the example. The definition of intelligence was being disputed because it gave too broad of criteria for what can be affected by spells that affect mental state.

Quote
I see a coffee maker as having the approximate intelligence of a crossbow (if it has any, it's totally irrelevant) while a golem has that of an animal (which is really not a useful label; it might be utilitarian even if it's not particularly smart).

Well, ok. I'm not really sure what it means to speak of the intelligence of a crossbow. Clearly, there is something very different between the coffee maker/crossbow and the golem/animal.

I'm not sure your golem and KaVir's are the same, incidentally, because it's unclear that in KaVir's world the golem is capable of any kind of reasoning, reinterpretation, etc. (You have argued, though, that his definition is inconsistent/impossible due to the need to understand language, etc., and this would be an interesting point.)
11 Aug, 2010, Koron wrote in the 72nd comment:
Votes: 0
KaVir said:
What I disagree with is the idea that a machine must be intelligent if it can follow instructions.

^^^ This.
KaVir said:
In particular, I disagree with the suggestion that my coffee machine, my washing machine or even my laptop have a "mind".

This is a difficult issue because it depends on your interpretation of instructions and mind. Essentially, most coffee and washing machines are not following instructions, but behaving like advanced mouse traps. Our washing machines follow a pattern something like:
Quote
if((currentSetting==2) && (timer<20)) spin(clockwise);

There's clearly no room for interpretation there. If you buy a really really fancy washing machine from the future, it might do something like:
Quote
if (clothesNotPerfectlyClean() && !cleanEnough()) keepWashing(); else separateCleanFromDirtyAndContinueWashing();

We've now switched from simple variable comparison to more in-depth function calls. How do we know if all or some are dirty? How do we separate the clean ones out to make sure we're not wasting water and soap? Hard rules for these can be delineated through code, but we've introduced an element of subjectivity. Is there a mind here? Not necessarily, but we're edging closer to intelligence. If we have to explicitly define how we want these handled, it's probably not intelligence as it's commonly understood. Even if the washer is intelligent, it's hard to see it as having a "mind," although I don't see any reason why the term should exclude highly advanced programs. Certainly if we do ever see self-aware AI, we will eventually come to accept them as having minds, especially once they integrate and express their own desires.

To get back to the point, though, the golem does not require explicit clarification. It will act according to its interpretation of cleanEnough, and if you don't like the results, it's your own fault for not adequately expressing your desire. Unless you provide highly restrictive instructions, it will decide of its own accord how and when, if ever, to separate your clothes.
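To put the contrast in the same terms as above (again, invented names): the washer's cleanEnough is a threshold an engineer fixed at the factory, while the golem's comes from its own reading of your words - which is exactly how a careless instruction goes wrong:
Quote
#include <cstdio>

// Factory-fixed rule: the engineer decided what "clean enough" means.
bool washerCleanEnough(double dirt) { return dirt < 0.05; }

// The golem's rule: the threshold is its own interpretation of your
// wording, so "wash until clean" may get read absolutely literally.
bool golemCleanEnough(double dirt, double ownStandard) { return dirt < ownStandard; }

int main() {
    double dirt = 0.03;
    std::printf("washer done: %d, literal-minded golem done: %d\n",
                washerCleanEnough(dirt),
                golemCleanEnough(dirt, 0.0));  // the golem scrubs forever
    return 0;
}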
11 Aug, 2010, Koron wrote in the 73rd comment:
Votes: 0
I should probably state that I didn't really have any one particular quote or person in mind when I wrote my initial reply. I just browsed the entire thread, chewed on my thoughts for a little while, and then threw it together. Apologies all around if I got anything mixed up between reading and posting.
11 Aug, 2010, KaVir wrote in the 74th comment:
Votes: 0
Koron said:
Golems follow verbal instructions, and the interpretation of language requires an external, independent processing center.

Like a computer with a voice recognition application.

Koron said:
How does a golem, given instruction to "defend the Jews", end up killing everyone else if it does not possess at least a rudimentary form of intelligence?

You might as well argue: How does a computer, given the instruction to "remove viruses", end up wiping the entire hard drive if it does not possess at least a rudimentary form of intelligence?

Computer programs often have bugs because of "mistakes" in the instructions. This isn't really any different to the above scenario with the golem - the magus was careless with his instructions.

Koron said:
If it is driven by the magical equivalent of ones and zeroes, what's to stop a golem "hacker" from adjusting where the bits fall? Where are a golem's initial orders stored? Are these orders invariable? (Why would they be? If the creator can change a golem's orders, why couldn't new orders be spoofed? Wouldn't this essentially be a "charm golem" spell?)

Some constructs, such as the Shield Guardian, use a Dongle. But most golems appear to be keyed directly to their creator.

Golems in D&D are immune to magic, but I'm pretty sure there are spells for controlling other types of construct. However a regular "charm" spell won't work on constructs in D&D - and that's the main point I was trying to make: Not every creature should be treated the same; you'll need ways to differentiate between them, and this can allow you to represent objects (such as a statue) as specialised creatures in much the same way.

Koron said:
It's clearly external to the magus because the golem does not follow desire, but rote instruction. It's independent because the magus does not have to be in the presence of the golem for the golem to continue operating. Language is essentially a series of symbols used to represent ideas, and it takes intelligence to convert symbol to meaning. A mindless statue, even if it's animate, cannot make this transition.

That's the same line of reasoning that led to the coffee machine comparison. It converts symbol (in this case a barcode on the coffee tab) to meaning (heat up and transfer a certain amount of water). But if you want something that can be programmed to function when not in the presence of the magus, it might be better to compare the golem with a VCR or a washing machine.

Of course the golem can also move, visually recognise certain things, etc, which is why I prefer comparing it to a (modern day) robot. I suppose you could also compare it to a C++Robot, if you were willing to accept a virtual environment.

Is a C++Robot intelligent?
11 Aug, 2010, Koron wrote in the 75th comment:
Votes: 0
KaVir said:
Like a computer with a voice recognition application.

Perhaps, but a person (or more likely a group of people) had to sit down and code the voice recognition software. As far as I know, no such requirement has been stated for golems.

KaVir said:
You might as well argue: How does a computer, given the instruction to "remove viruses", end up wiping the entire hard drive if it does not possess at least a rudimentary form of intelligence?

Computer programs often have bugs because of "mistakes" in the instructions. This isn't really any different to the above scenario with the golem - the magus was careless with his instructions.

Heuristic functionality allows antivirus software to learn over time. If this isn't directly intelligence, it closely enough resembles it to make the distinction meaningless. Note that it's still not smart, nor is it sentient, but it does possess (again, at least something resembling) an intelligence.

KaVir said:
Golems in D&D are immune to magic, but I'm pretty sure there are spells for controlling other types of construct. However a regular "charm" spell won't work on constructs in D&D - and that's the main point I was trying to make: Not every creature should be treated the same; you'll need ways to differentiate between them, and this can allow you to represent objects (such as a statue) as specialised creatures in much the same way.

Sure, golems created through magical means are theoretically immune to magic, and equivalent constructs created psionically are immune to psionics. This leaves a loophole wherein a golem is vulnerable to psionics (and any other form of nonmagical tampering). Still, I agree that a regular "charm person" spell isn't likely to be useful on an animated statue, but when that statue is given motion by magic and then possessed directly by an intelligence (perhaps even of the magus who created it), there's a mind there to charm that may or may not be a person. I also agree that you need ways to differentiate between them.

KaVir said:
That's the same line of reasoning that led to the coffee machine comparison. It converts symbol (in this case a barcode on the coffee tab) to meaning (heat up and transfer a certain amount of water). But if you want something that can be programmed to function when not in the presence of the magus, it might be better to compare the golem with a VCR or a washing machine.

This is kind of a "Yes, but" thing for me. The barcode can only be interpreted if the thickness of the lines is just so and if the lines are spaced properly. The slightest variation can mangle the instructions. This is different from a golem who is not guaranteed to fail if bad (or even nonsensical) instructions are given.

KaVir said:
Is a C++Robot intelligent?

Don't have time to look into these right now, but I suspect my answer would be "A little bit." Again, sentience is obviously not there, but you cannot quite say the same for a golem.
11 Aug, 2010, David Haley wrote in the 76th comment:
Votes: 0
KaVir said:
Koron said:
Golems follow verbal instructions, and the interpretation of language requires an external, independent processing center.


Like a computer with a voice recognition application.

No, not really, although certainly one needs at least voice recognition. Voice recognition only transcribes the words spoken; it does nothing with actually understanding the semantics of those words and interpreting what they mean in particular contexts.

Koron said:
Heuristic functionality allows antivirus software to learn over time. If this isn't directly intelligence, it closely enough resembles it to make the distinction meaningless. Note that it's still not smart, nor is it sentient, but it does possess (again, at least something resembling) an intelligence.

Sorry, I simply can't agree here. You are now defining intelligence as the application of statistics. Certainly a form of learning is necessary for intelligence, but not sufficient. Your statistics-gathering device is a far cry from the golem you have been describing; I recognize that you said it's "not smart" or "sentient" but to state that the form of intelligence is indistinguishable is an awfully large stretch.
11 Aug, 2010, KaVir wrote in the 77th comment:
Votes: 0
Koron said:
Perhaps, but a person (or more likely a group of people) had to sit down and code the voice recognition software. As far as I know, no such requirement has been stated for golems.

The magus still has to create the body and then activate it, so it's basically the same except using "mad magic skills" instead of "mad coding skills".

Koron said:
Heuristic functionality allows antivirus software to learn over time.

Golems don't learn. They will make the same mistakes over and over.

Koron said:
Sure, golems created through magical means are theoretically immune to magic, and equivalent constructs created psionically are immune to psionics. This leaves a loophole wherein a golem is vulnerable to psionics (and any other form of nonmagical tampering).

Well now you're into optional D&D rule territory. The default rule for psionics and magic is that they're treated as the same thing. Therefore a golem, being immune to magic, would also be immune to psionics. This is discussed on page 36 of the Psionics Handbook.

But even if you houserule them to be vulnerable to psionics, there's still the fact that golems have the construct type, which grants them "Immunity to all mind-affecting effects (charms, compulsions, phantasms, patterns, and morale effects)."

So there's no real getting around it. If you want to have golems like those in D&D, you're going to need to make them immune to charm.

Koron said:
This is kind of a "Yes, but" thing for me. The barcode can only be interpreted if the thickness of the lines is just so and if the lines are spaced properly. The slightest variation can mangle the instructions. This is different from a golem who is not guaranteed to fail if bad (or even nonsensical) instructions are given.

Er yes, it would "fail" - thus the example of the killer golem mentioned earlier. That wasn't the intended behaviour.

David Haley said:
KaVir said:
Koron said:
Golems follow verbal instructions, and the interpretation of language requires an external, independent processing center.

Like a computer with a voice recognition application.

No, not really, although certainly one needs at least voice recognition. Voice recognition only transcribes the words spoken;

And a computer with a voice recognition application can then parse those words as instructions.
11 Aug, 2010, David Haley wrote in the 78th comment:
Votes: 0
KaVir said:
And a computer with a voice recognition application can then parse those words as instructions.

What Koron was saying is that in order to understand those instructions accurately and be able to interpret them reasonably, with context sensitivity, one must essentially have a form of intelligence.

Note that no computer today can do anything near what you are describing.
11 Aug, 2010, KaVir wrote in the 79th comment:
Votes: 0
Computers already understand instructions, and follow them exactly. Voice recognition is just a method of input. It is perfectly possible to give a computer instructions with voice recognition.

However I am not suggesting that I could build a golem, either with technology or magic.
11 Aug, 2010, David Haley wrote in the 80th comment:
Votes: 0
The kind of instructions given to computers currently are simplistic or only available in very specific, domain-limited contexts. If you know of software that has solved the NLP problem, I think many researchers would be interested… In the meantime, I think it should have gone without saying that I could not possibly have meant something so obviously wrong in the context of what it means to be intelligent. :thinking:

Now you could very well define your golem as, similarly, only understanding such limited instructions. That's fine, but then you and Koron are talking past each other because you are not speaking of the same sort of device.