28 Feb, 2008, David Haley wrote in the 41st comment:
Votes: 0
shasarak said:
And instead of dealing only with functions and primitive types, you deal in objects which are a higher-level abstraction

I know a few functional programmers who might object to the claim that objects are a higher-level abstraction than functions! :wink:

I agree though that the best measure is how far away you are from the details of the machine language, and how briefly you may express relatively complex concepts due to syntax or library support. (Usually a little of both. C++ has STL lists, but iterating over them is a little uglier than in dynamic languages.)
28 Feb, 2008, shasarak wrote in the 42nd comment:
Votes: 0
Objects are a higher level of abstraction than integers and strings, not higher-level than functions; you can't really compare functions and objects.
28 Feb, 2008, David Haley wrote in the 43rd comment:
Votes: 0
Ah. The way you said it, it sounded like objects are a higher-level abstraction than functions and primitive types.

That said, I'm not sure it's really sensible to compare objects and primitive types in some cases. Smalltalk is rather peculiar in treating even integer constants as objects. How is an "int" object a higher abstraction than an int constant? Sure, you can do funny things like redefine addition, but arguably that's not such a great thing to do. (And yes, I know there are benefits to primitives-as-objects, too.)

Well, anyhoo…
28 Feb, 2008, shasarak wrote in the 44th comment:
Votes: 0
DavidHaley said:
How is an "int" object a higher-abstraction than an int constant?

An int constant directly corresponds to a single numerical value held at a single memory location: it is what CPUs directly work with. An integer object has all sorts of intelligent behaviour associated with it, such as the ability to be garbage-collected automagically when no longer required. It is also arbitrarily extensible.
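
For example (just a sketch), even a bare literal already responds to messages like any other object:

    5 factorial.                 "an ordinary message send to a literal integer"
    3 respondsTo: #printString.  "answers true: the literal really is an object"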

DavidHaley said:
Smalltalk is rather peculiar in treating even integer constants as objects…. Sure, you can do funny things like redefine addition, but arguably that's not such a great thing to do.

Personally I think it's the coolest feature of the language. Not the ability to redefine integer arithmetic as such(!), but the fact that nearly every feature that is part of the language itself in other languages is, in Smalltalk, part of the class library; and the fact that the base class library can be modified or extended to any degree you wish. The level of flexibility this gives is simply breathtaking in comparison with more conventional environments. You can completely rewrite the IDE, rewrite the compiler, convert the language to being strongly typed, change the way classes and inheritance work, anything you want.
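
To make that concrete (slightly simplified, since real compilers inline these for speed): even if-then-else is not syntax, just a pair of methods on the Boolean classes, which you can browse and edit like anything else:

    True >> ifTrue: trueBlock ifFalse: falseBlock
        ^trueBlock value

    False >> ifTrue: trueBlock ifFalse: falseBlock
        ^falseBlock value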
28 Feb, 2008, drrck wrote in the 45th comment:
Votes: 0
shasarak said:
An int constant directly corresponds to a single numerical value held at a single memory location: it is what CPUs directly work with. An integer object has all sorts of intelligent behaviour associated with it, such as the ability to be garbage-collected automagically when no longer required. It is also arbitrarily extensible.


I think what DavidHaley was getting at is that an integer object is only adding complexity to an already simple concept. By most definitions, being higher-level removes complexity. That said, one could argue exactly the opposite, citing the fact that most high-level languages are purely (and completely) object-oriented. It just boils down to opinions.
28 Feb, 2008, David Haley wrote in the 46th comment:
Votes: 0
Yes, I was getting at the fact that an integer primitive is the purest form of int. I disagree that it has anything to do with memory locations; it's just a number floating in space, just like in math. All this stuff about GC and extensibility is kind of noisy with respect to that pure concept.

drrck said:
citing the fact that most high-level languages are purely (and completely) object-oriented.

Well, that's not true, though. As I said there are many functional languages for which objects are almost an afterthought. And there aren't many OOP languages that take "everything is an object" quite as far as Smalltalk does…

shasarak said:
You can completely rewrite the IDE, rewrite the compiler, convert the language to being strongly typed

Those are interesting academically and kind of cool, but do they help me write better programs? If I want strong types, I can grab a strongly typed language. If I want another IDE, I can use a language that has a better IDE. Even thinking about writing compilers is very far away from getting on with whatever you need to do.

Or put this way: has the fact that integers are not primitives but objects been relevant and useful in real-world programming for you?
28 Feb, 2008, shasarak wrote in the 47th comment:
Votes: 0
drrck said:
By most definitions, being higher-level removes complexity.

Being high-level reduces the amount of code you need to write to create complex behaviour, by automating the complexity. It therefore increases complexity in terms of what the CPU is actually doing, but makes the code needed to achieve that complexity much quicker to develop and easier to read. Hence it reduces complexity from the perspective of the programmer, not from the perspective of the CPU.

As I said, "high-level" means "high level of abstraction".

DavidHaley said:
Or put this way: has the fact that integers are not primitives but objects been relevant and useful in real-world programming for you?

Yes, absolutely. If I add a new method to class Object, for example, every single class in the system instantly inherits that new behaviour, including "types" like integers and strings.
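
For instance (a sketch; the selector name is invented for illustration):

    Object >> describeForLog
        "instantly available on every object in the image"
        ^self class name asString , ': ' , self printString

after which 3 describeForLog and 'abc' describeForLog both just work.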

I've quite often re-written "casting" methods on some "type" values in order to improve performance; converting a date to a string, for instance, is now faster than the default in this environment. I got round a Y2K problem by changing the standard conversion of a Date to a String. I've also rewritten some of the basic "arithmetic" functions on dates. It used to be that ('28/02/2007' asDate addMonths: 1) evaluated to 28/03/2007; I've changed it so that it now returns 31/03/2007.
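
In sketch form it's just an edit to the system's own Date class (the helper selectors here are invented stand-ins; the real ones vary by dialect):

    Date >> addMonths: anInteger
        "end-of-month dates now land on the end of the target month"
        | shifted |
        shifted := self oldAddMonths: anInteger.  "the old behaviour, kept under a new name"
        ^self isLastDayOfMonth
            ifTrue: [shifted toLastDayOfMonth]
            ifFalse: [shifted]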

I've also added an asSQLString method on many classes including Integer and Float which converts the value into a string suitable for inclusion in an SQL query. (So a string, for example, returns itself with added single quotes, while an Integer does a simple conversion to a string; a more complex object that represents a database record returns its primary key). Similarly there are class methods on many classes that convert a string obtained from a query result back into a Smalltalk value.
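
Roughly like this (DatabaseRecord and primaryKey are illustrative stand-ins; selector spellings vary by dialect):

    String >> asSQLString
        "quote the receiver, doubling any embedded quotes"
        ^'''' , (self copyReplaceAll: '''' with: '''''') , ''''

    Integer >> asSQLString
        ^self printString

    DatabaseRecord >> asSQLString
        "a record stands in for its primary key"
        ^self primaryKey asSQLString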

Obviously there are always other ways of achieving the same thing, but it's far more intuitive to actually change the behaviour of dates to make them do what you want than it is to write a separate calculator class that you have to invoke every time you want to do date arithmetic.

I actually think this level of flexibility would be exceptionally useful in the context of a scripting language used inside a MUD - analogous to the way LPC works inside an LPMud. LPC is remarkably powerful as it is, but it would be even cooler if you could rewrite the entire language from inside the MUD without even having to reboot to do it.
:biggrin:
28 Feb, 2008, David Haley wrote in the 48th comment:
Votes: 0
I think he was referring to complexity in the concepts, not complexity in terms of how many instructions the CPU needs to deal with. But one might argue that having everything be treated exactly the same is "simpler" in some sense. I still think it's a little unusual, but that is perhaps due to habit. Thing is, if you like, you can completely ignore that integers are objects, and still write programs…

shasarak said:
Obviously there are always other ways of achieving the same thing, but it's far more intuitive to actually change the behaviour of dates to make them do what you want than it is to write a separate calculator class that you have to invoke every time you want to do date arithmetic.

Well, yes, but you've also introduced a major gotcha in that things no longer behave as they usually do for adding months to dates. You've simplified parts of the program by overriding generic behavior with program-specific behavior. That makes me twitch a little because somebody reading the code has to be aware of that, as opposed to differences being clearly marked by the usage of new methods or helper functions.

That said, it is true that being able to add new methods to Object is very cool. In Java, if you wanted to SQLize a generic object by calling obj.toSqlString() or something like that, you'd have to define a new interface Sqlizable or whatever that had that method, and then use that as the parameter to your method that converts objects to string values. This is a feature that I like about Smalltalk, Self, etc.



RE: Editing the whole MUD's language without rebooting it:
Err… that sounds kind of dangerous to me… :wink:
28 Feb, 2008, shasarak wrote in the 49th comment:
Votes: 0
DavidHaley said:
shasarak said:
Obviously there are always other ways of achieving the same thing, but it's far more intuitive to actually change the behaviour of dates to make them do what you want than it is to write a separate calculator class that you have to invoke every time you want to do date arithmetic.

Well, yes, but you've also introduced a major gotcha in that things no longer behave as they usually do for adding months to dates. You've simplified parts of the program by overriding generic behavior with program-specific behavior. That makes me twitch a little because somebody reading the code has to be aware of that, as opposed to differences being clearly marked by the usage of new methods or helper functions.

That's not how I would look at it. I would say that I have changed the generic behaviour because the previous generic behaviour was wrong; now it's correct. This is not a program-specific change at all: it's a modification to the platform as a whole. All applications derived from that platform would benefit from the change. I've similarly made a large number of changes to the standard IDE (which is also written in Smalltalk), and updated various low-level interactions between Smalltalk and the operating system as new OS versions have come out.

And, similarly, there's nothing to stop you refactoring the platform purely for performance. For example, I've changed the default sorting method on a collection to one that's faster. No other developer would need to know about that, as it's completely transparent.
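
i.e. something along these lines (mergeSortFrom:to:by: stands in for whichever faster routine you plug in):

    Array >> sort: aSortBlock
        "stock algorithm swapped for a faster merge sort;
         every caller in the image picks the change up transparently"
        self mergeSortFrom: 1 to: self size by: aSortBlock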

However, yes, you could indeed make program-specific changes to platform-level code if it were convenient to do so.

But really we're getting much too hung up on specific examples, here. The point is not "we can modify the behaviour of dates" but "we can modify the behaviour of absolutely anything". In particular this means that if there is an underlying platform bug, you can simply fix it; if you wanted to do the same thing in (say) C# you'd be stuffed: you'd have to wait for a new release of Visual Studio or the .NET runtime. In Smalltalk fixing a bug in the foundation classes or even the basic language implementation is no different from fixing a bug in normal code.

Edit: Actually another cool aspect of this is simply that you've got all of the source code for all of the foundation classes available to read just as easily as your own code. Even if you don't plan on making changes to it, it's lovely to be able to jump into a foundation class method and actually read what it is doing instead of having to rely on (potentially third party) documentation which describes only the external interface and not the internal workings.
28 Feb, 2008, David Haley wrote in the 50th comment:
Votes: 0
shasarak said:
In particular this means that if there is an underlying platform bug, you can simply fix it; if you wanted to do the same thing in (say) C# you'd be stuffed: you'd have to wait for a new release of Visual Studio or the .NET runtime.

Well, not really… if the bug were something on the order of what you're talking about, i.e. a bug in the library instead of the OS system calls, you could just wrap the OS calls and fix whatever library call is broken. Yes, you'd have to remember to call the right function (your version instead of the library's) but it's not the end of the world. I agree that fixing the actual function is nicer; Lua lets you do this by letting you replace any function with a function of your own.

shasarak said:
Actually another cool aspect of this is simply that you've got all of the source code for all of the foundation classes available to read just as easily as your own code.

Yes, same thing for Java, and it's pretty nifty, I agree. Well, actually, same thing for many platforms; on a GNU system, you can get the entire source to the C runtime library; Lua provides the source to its libraries, I'm sure that Python/Ruby/etc. do the same… don't know about C# but I would assume so. (Then again, knowing MS…)
28 Feb, 2008, drrck wrote in the 51st comment:
Votes: 0
shasarak said:
Being high-level reduces the amount of code you need to write to create complex behaviour, by automating the complexity. It therefore increases complexity in terms of what the CPU is actually doing, but makes the code needed to achieve that complexity much quicker to develop and easier to read. Hence it reduces complexity from the perspective of the programmer, not from the perspective of the CPU.

As I said, "high-level" means "high level of abstraction".


Fair enough, though I think it was pretty obvious that we were referring to complexity from a programmer's perspective seeing as how we were talking about teaching beginning programmers. Also, the point of abstraction is to generalize, remove specificity, and simplify concepts - all of which would reduce complexity for the programmer.