15 Jun, 2007, Ugha wrote in the 21st comment:
Votes: 0
Umm… This debate has strayed a bit from the topic.

Besides the recent… off topic comments, it looks like this is working almost as I had hoped :)

Let's talk about another C++ topic… user friendliness.
Or in this case, programmer friendliness.

C can do anything C++ can do. I think we can all agree to that.

C is also far easier to learn than C++, I think we can agree to that too.

Yes, it's faster to write in C++, and it's "easier" once you grasp the concept.

But what's the purpose of "dumbing down" a programming language? Anyone who uses a programming language
is already intelligent enough to grasp the abstract concepts needed to master C, so why make a "user-friendly"
version?

Please note I'm setting aside various concepts that ARE improved (such as overloading and the like)… I just
want to focus on the very essence of why the language was created when we already had one to do the
job.
15 Jun, 2007, KaVir wrote in the 22nd comment:
Votes: 0
Ugha said:
C can do anything C++ can do. I think we can all agree to that.


No, certainly not, unless you're only talking in the most abstract sense. C++ can do anything that C can do (barring things like "have a variable called 'class'"), but the reverse is not true. You can fake most programming concepts (such as polymorphism) in C, but it is much more complex and time consuming (and far easier to mess up) than using C++ - so if you want those features, you might as well use a language that supports them properly.
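To make that concrete, here's a minimal sketch of faking polymorphism in C next to letting C++ do it. All the names (c_mob, Mob, orc_damage) are invented for illustration, not from any real codebase:

```cpp
#include <cassert>

// C-style "polymorphism": each struct instance carries a function pointer,
// and the programmer wires it up by hand (and can get it wrong).
struct c_mob {
    const char *name;
    int (*damage)(const struct c_mob *);   // must be set manually per mob
};

static int orc_damage(const struct c_mob *) { return 10; }
static int dragon_damage(const struct c_mob *) { return 50; }

// The same idea in C++: the compiler builds and checks the dispatch table,
// so it can't be left unset or wired to the wrong function.
struct Mob {
    virtual ~Mob() {}
    virtual int damage() const = 0;
};

struct Orc : Mob    { int damage() const override { return 10; } };
struct Dragon : Mob { int damage() const override { return 50; } };
```

In the C version, forgetting to set the pointer (or setting the wrong one) compiles without complaint; in the C++ version the compiler enforces it.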

Ugha said:
C is also far easier to learn than C++, I think we can agree to that too.


No, not really. C is certainly a far simpler language than C++, but that doesn't make it easier to learn - it just means there's less to learn.

Ugha said:
Yes, it's faster to write in C++, and it's "easier" once you grasp the concept.

But what's the purpose of "dumbing down" a programming language? Anyone who uses a programming language
is already intelligent enough to grasp the abstract concepts needed to master C, so why make a "user-friendly"
version?


You've lost me. C++ contains all the features of C, along with a load of additional features. How does adding extra features to a programming language result in it being "dumbed down"?

That aside, even if you were correct, the point of a programming language isn't to be difficult to use - otherwise we'd all be writing in machine code.

Ugha said:
Please note I'm setting aside various concepts that ARE improved (such as overloading and the like)… I just want to focus on the very essence of why the language was created when we already had one to do the job.


Why create any new programming language? C++ addresses a number of programming concepts that C handles poorly (if at all), and does so in such a way that those already familiar with C can use their existing knowledge.
15 Jun, 2007, Justice wrote in the 23rd comment:
Votes: 0
Ugha said:
C can do anything C++ can do. I think we can all agree to that.

Technically that may be true, but to simulate much of C++'s behavior would take a lot of work and be extremely fragile and difficult to maintain.

Ugha said:
C is also far easier to learn than C++, I think we can agree to that too.

Since both languages are extremely similar, I'd say that learning to use them is about the same. Because C++ allows you to do more, however, it takes more experience to fully understand how to use it.

Ugha said:
Yes, it's faster to write in C++, and it's "easier" once you grasp the concept.


Yes and no. It really depends on what you're trying to do. The initial phases of development may take longer with C++ than C. This is because the major benefits of an object oriented language come into play with code re-use. To reuse code it must be written, and it must be flexible.

Ugha said:
But what's the purpose of "dumbing down" a programming language? Anyone who uses a programming language
is already intelligent enough to grasp the abstract concepts needed to master C, so why make a "user-friendly"
version?


C++ is not dumbed down C, in fact it's quite the opposite. C++ allows the programmer to have a greater degree of control over the code.

In terms of reusing code, both C and C++ can do this with the use of functions. Things like "can_see" are vital to a mud and are used repeatedly. You wouldn't want to edit the code in a thousand places just because you created a new form of invisibility. Or to say… add fog to a room that adds a chance you won't see someone. A class does the same thing except it also stores data.

In C, you can use function pointers to handle different things. In SMAUG function pointers are used for commands and skills. Using inheritance, classes can behave like this as well. Once again, the class can store data about what is being done.
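As a rough sketch of both styles (do_say, do_whisper, and the table are hypothetical stand-ins, not the actual SMAUG code), a function-pointer command table next to a class-based command that also carries data might look like:

```cpp
#include <cassert>
#include <string>

// SMAUG-style: a command table maps a name to a plain function pointer.
typedef std::string (*do_fun)(const std::string &args);

static std::string do_say(const std::string &args)     { return "You say '" + args + "'"; }
static std::string do_whisper(const std::string &args) { return "You whisper '" + args + "'"; }

struct cmd_entry {
    const char *name;
    do_fun fun;
};

static const cmd_entry cmd_table[] = {
    { "say",     do_say },
    { "whisper", do_whisper },
};

// Class-based: the command object can also carry data - here an act
// string that could be edited online, so say and whisper can share code.
struct Command {
    std::string act;                        // per-command data
    explicit Command(std::string a) : act(a) {}
    std::string execute(const std::string &args) const {
        return "You " + act + " '" + args + "'";
    }
};
```

The function pointer gives you one interface with many implementations; the class gives you the same thing plus state attached to each command.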

When I was writing a Java-based mud, I used objects to represent commands. Each command had fields that controlled how it behaved. One thing I enjoyed was being able to use the same command for say and whisper, and being able to change the act strings online.

Ugha said:
Please note I'm setting aside various concepts that ARE improved (such as overloading and the like)… I just
want to focus on the very essence of why the language was created when we already had one to do the
job.


Well, there are a lot of reasons why C++ was developed. Many of these relate to how code is organized to avoid conflicts and make it easier to use.

Namespaces are necessary for large projects to prevent naming conflicts, that is, multiple types using the same name. I could, for example, write my own list type; since the standard library has a "list" class, that would cause a conflict, except that the standard library uses std as its namespace. Thus… std::list is different from list.
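A minimal sketch of that (mud is a made-up namespace name):

```cpp
#include <cassert>
#include <list>

// Our own "list" type, kept in a project namespace so it cannot
// collide with the standard library's std::list.
namespace mud {
    struct list {
        int value;
    };
}

// Both types now coexist: mud::list and std::list<int> are
// distinct names because each is qualified by its namespace.
```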

In C, the form and function are separate. In C++ you can keep them together which can make code easier to read. For example, ch->can_see(victim) vs can_see(ch, victim). With the function method it is possible to pass the parameters in the wrong order, or to pass bad parameters (NULL).
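A rough illustration of the two call styles (can_see and the fields are invented, not from any particular codebase):

```cpp
#include <cassert>

struct Character;  // forward declaration for the free-function version

// Free-function style, as in stock C codebases: can_see(ch, victim).
// The two arguments have the same type, so swapping them compiles fine.
bool can_see(const Character *ch, const Character *victim);

struct Character {
    bool blind;
    bool invisible;

    // Member-function style: ch->can_see(victim). The viewer is always
    // 'this', so the arguments cannot be swapped by accident.
    bool can_see(const Character &victim) const {
        return !blind && !victim.invisible;
    }
};

bool can_see(const Character *ch, const Character *victim) {
    return ch && victim && ch->can_see(*victim);
}
```

Note the member version also can't be handed a NULL viewer; with the free function that check has to be written by hand.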

There are several ways that inheritance can be used to simplify code. For example, rooms, characters, and objects can store objects. In C, each system is handled by a separate piece of code. Using C++ you can write an object that handles this and extend them from each of these classes. This is known as a "Mixin" and is the "object oriented" way to handle this. Additionally you can use the class as a member. In this case you accomplish the same goal using "composition".
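A sketch of both approaches, with invented names (Inventory, Room, and the rest are for illustration only):

```cpp
#include <cassert>
#include <string>
#include <vector>

// One piece of "can hold objects" code, written once.
struct Inventory {
    std::vector<std::string> items;
    void add(const std::string &item) { items.push_back(item); }
    std::size_t count() const { return items.size(); }
};

// Mixin style: Room IS-A thing that holds objects (inheritance).
struct Room : Inventory {
    std::string title;
};

// Composition style: Character HAS-A container as a member.
struct Character {
    std::string name;
    Inventory carried;
};
```

Either way, the storage code is written once instead of once per system; the choice is mostly about whether the holding behavior should be part of the class's public face or tucked behind it.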

Classes themselves are the major advancement in C++. The ability to combine form and function simplifies a lot of code. Being able to inherit other classes and treat a class as its parent allows you to reuse code easily. The separation of static vs instance members provides a great deal of flexibility as well.
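For the static vs instance distinction, a small made-up example: a mob counter, where every object shares one class-wide total while keeping its own hit points.

```cpp
#include <cassert>

struct Mob {
    int hp;              // instance member: one copy per Mob
    static int total;    // static member: one copy for the whole class

    explicit Mob(int h) : hp(h) { ++total; }
    ~Mob() { --total; }
};

int Mob::total = 0;      // the single shared definition
```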
15 Jun, 2007, Justice wrote in the 24th comment:
Votes: 0


I just skimmed these, but I may take the time to take a closer look later. In my own experience, I've come to learn and develop techniques to handle some of these issues. As I've pointed out previously in this thread, a major issue with polymorphic behavior is how you create the objects. Since you're dealing with many classes that represent the same "type", you need some central point that recognises them. Generally I use a factory model to handle this, that is, a single object that knows what these are. In languages that support reflection, I'm able to register these objects with the factory through configuration, allowing external objects to be used without any hard-coded modification. C++ however doesn't support this method.

I've found that javascript's composition and prototype system is often better than a class system. I often need to add methods to the string class like… trim and replaceAll. Quite often I'll compose objects on the fly to handle tasks that need to be done. Using simple naming conventions they can meet the interface required. In C++, I use templates to work with multiple types by composition. That is, by naming things in both classes the same, I can use a template to work with both classes without the overhead of inheritance.
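A small sketch of that template technique (Player, Npc, and name are invented): the two classes share nothing but a naming convention, yet one template works with both.

```cpp
#include <cassert>
#include <string>

// Two unrelated classes - no common base class, no virtual functions.
struct Player { std::string name() const { return "Ugha"; } };
struct Npc    { std::string name() const { return "guard"; } };

// The template accepts any type that provides name():
// compile-time "duck typing", with no inheritance overhead.
template <typename T>
std::string greet(const T &who) {
    return "Hello, " + who.name();
}
```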
15 Jun, 2007, Ugha wrote in the 25th comment:
Votes: 0
First, let me say I absolutely love your replies Justice, especially when you tie it back to something we all have in common, MUD programming.

Ok…
By "dumbing down" I mean the repeated use of concepts I've read in my C++ books that relate to abstraction.

They talk about how you "don't need to know" how something works, just use its interface.
They also say that C++ is a simpler and easier to use language due to the fact it's OOP and humans think in forms of OO naturally. This is
kinda BS; different people think different ways.

An example in one of my books was a microwave. It said it doesn't matter how it works, as long as its interface (the keypad) is simple
and functional.

I want to know how my microwave works. I want to be able to fix it if it breaks and I want to be able to build a better one if I have the
chance.

Perhaps this is just a flaw in the books I'm reading and I'm not fully grasping the concept of abstraction. Either way, it's proof that OO
is a foreign concept to the average human thought process.
15 Jun, 2007, Justice wrote in the 26th comment:
Votes: 0
Ugha said:
First, let me say I absolutely love your replies Justice, especially when you tie it back to something we all have in common, MUD programming.


Thanks, heh, always find it's easier to explain a concept through example.

Ugha said:
They talk about how you "don't need to know" how something works, just use its interface.
They also say that C++ is a simpler and easier to use language due to the fact it's OOP and humans think in forms of OO naturally. This is
kinda BS; different people think different ways.

An example in one of my books was a microwave. It said it doesn't matter how it works, as long as its interface (the keypad) is simple
and functional.

I want to know how my microwave works. I want to be able to fix it if it breaks and I want to be able to build a better one if I have the
chance.

Perhaps this is just a flaw in the books I'm reading and I'm not fully grasping the concept of abstraction. Either way, it's proof that OO
is a foreign concept to the average human thought process.


First off, people do tend to categorize things. That's the basis of how we communicate with each other. If someone talks about a car, you don't need to know the exact make, model, and year to understand what it does. Same with a tree vs a plant, or a fish vs a mammal.

The way I view abstraction is this… the code does not need to know how it works, only that it meets the interface. Obviously someone needed to know how it works to write it. Obviously knowing how something works will allow you to use it better. Sort of like with the microwave: because it cooks by sending waves of energy, it doesn't cook food evenly. To use older microwaves you often needed to turn the food to cook it evenly; newer ones have been improved with a rotating carousel.

The problem with OOP is that there are several ways it can be expressed, often different techniques are used to handle different problems. Software tends to be very complex, and the requirements tend to change. In my experience, the client often doesn't really understand what they want either.

Instead of attempting to solve the "larger" problem, what I find works best is to write very specific tools. These tools use an interface to handle their work, and that is where abstraction and polymorphism come into play. You'll notice this theme in many of my examples.

The ranged search code has a series of functions that find rooms and pass them to an interface for evaluation. These three functions are the "service" being provided; the interface is how you use the service. The abstraction in this case is simple. The search functions simply traverse exits in rooms to find rooms that meet the search criteria. The search_callback is used to evaluate rooms discovered this way.
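A stripped-down sketch of that shape (Room, search, and find_dark here are placeholders, not the actual ranged search code, and the traversal is reduced to a flat loop):

```cpp
#include <cassert>
#include <vector>

struct Room { int vnum; bool dark; };

// The interface: how callers plug their criteria into the service.
struct search_callback {
    virtual ~search_callback() {}
    virtual bool evaluate(const Room &room) = 0;  // true = keep this room
};

// The "service": traversal logic written once, reused for any criteria.
std::vector<Room> search(const std::vector<Room> &rooms, search_callback &cb) {
    std::vector<Room> found;
    for (std::size_t i = 0; i < rooms.size(); ++i)
        if (cb.evaluate(rooms[i]))
            found.push_back(rooms[i]);
    return found;
}

// One concrete evaluation: find dark rooms.
struct find_dark : search_callback {
    bool evaluate(const Room &room) override { return room.dark; }
};
```

New search criteria only require a new callback; the traversal never changes.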

The affect system I described also has two tools: spells and equipment. The abstraction is that affects are applied and removed; the implementation is exactly what the affect does when applied and what it does when removed.
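Sketched with invented names (this is not the actual affect code):

```cpp
#include <cassert>

struct Character { int strength = 10; };

// The abstraction: affects are applied and removed. What an affect
// actually does is the implementation's business.
struct Affect {
    virtual ~Affect() {}
    virtual void apply(Character &ch) = 0;
    virtual void remove(Character &ch) = 0;
};

// One implementation: a strength spell. Equipment could be another.
struct GiantStrength : Affect {
    void apply(Character &ch) override  { ch.strength += 5; }
    void remove(Character &ch) override { ch.strength -= 5; }
};
```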

It sounds to me as if the books are going over general concepts but not how they apply in the real world. In the real world you start with a problem, and abstraction is a method used to simplify it. Additionally, abstraction is generally used for a one-to-many type of problem: a single interface being used by multiple classes. OOP supports the many-to-one case as well. If multiple classes need to exhibit identical behavior, you can define that behavior in a class and derive each class that needs it from that one. In this case each class can exhibit the same behavior as the parent class.

Finally, inheritance and abstraction receive a lot of hype. However, with complex systems such as a mud, composition may be a better tool. If you take a complex object like a character, which needs to perform many different things… it is extremely difficult to abstract them all. By placing everything into a single class, you also defeat the purpose of object oriented code. What you can do, however, is define a series of small objects that fulfill a very specific purpose, and use them with a character. By doing this, the character delegates its responsibilities to each of the smaller objects. Each class can then perform a very specific task.
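A toy sketch of that delegation (all names invented):

```cpp
#include <cassert>
#include <string>

// Small objects that each fulfill one specific purpose.
struct Backpack {
    int items = 0;
    void add_item() { ++items; }
};

struct Vitals {
    int hp = 100;
    void damage(int amount) { hp -= amount; }
};

// The character composes them instead of doing everything itself.
struct Character {
    std::string name;
    Backpack pack;     // each responsibility lives in its own small class
    Vitals vitals;

    // Delegation: the character forwards the work to its parts.
    void take_hit(int amount) { vitals.damage(amount); }
};
```

Each small class can be tested and reused on its own, which is the point of splitting the god-object up.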

These concepts aren't conflicting, however. Quite often you can use a list of implementations to perform tasks. I have some code I fiddle with that emulates a pk mud called GZ. GZ was a heavily modified DIKU, while mine is completely custom. In my code, objects can have several object types (or even multiple instances of the same object type). These types can then pass events around to perform a wide variety of tasks. Quite often these types will react to events generated by other types, such as ammo that explodes when it hits the target, or a hypospray that heals when it explodes.
16 Jun, 2007, KaVir wrote in the 27th comment:
Votes: 0
Ugha said:
By "dumbing down" I mean the repeated use of concepts I've read in my C++ books that relate to abstraction.

They talk about how you "don't need to know" how something works, just use its interface.


I do exactly the same thing when I program in C as well, though - it makes it far easier to manage large projects. If you hire someone to add a new spell, do you really want them to have to learn the ins and outs of the entire codebase before they can start? Much easier to say "Modify this one file, and these interface files show how you interact with the rest of the mud".
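For example, a hypothetical interface sketch (magic.h, spell_fireball, and get_level are all made up; the "files" are collapsed into one listing here). The spell writer codes against two declarations and never reads the rest of the mud:

```cpp
#include <cassert>

// --- magic.h (the interface file the spell writer is given) ---------
int get_level(int ch_id);        // declared here, defined elsewhere
int spell_fireball(int ch_id);   // the one function being added

// --- the new spell, written against the interface alone -------------
int spell_fireball(int ch_id) {
    return get_level(ch_id) * 8;   // damage scales with caster level
}

// --- somewhere in the rest of the mud (never read by the spell writer)
int get_level(int ch_id) {
    return ch_id == 1 ? 10 : 1;    // toy stand-in for the real lookup
}
```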
16 Jun, 2007, Scandum wrote in the 28th comment:
Votes: 0
Ugha said:
Besides the recent… off topic comments, it looks like this is working almost as I had hoped :)

Don't underestimate the valuable insights some off topic banter can give you.

Ugha said:
But whats the purpose of "dumbing down" a programming language?

The obvious answer is that many people are stupid and need a foolproof, as well as "dumbed down", programming language.

This dawned on the smart people eventually, and hence they decided to no longer engage in the difficult task of teaching people how to program, and instead taught them OOP. It worked beyond expectations, because every time an OOP-er goes "oops" it seems like the proper thing to do.
17 Jun, 2007, Tyche wrote in the 29th comment:
Votes: 0
KaVir said:
Ugha said:
By "dumbing down" I mean the repeated use of concepts I've read in my C++ books that relate to abstraction.

They talk about how you "don't need to know" how something works, just use its interface.


I do exactly the same thing when I program in C as well, though - it makes it far easier to manage large projects. If you hire someone to add a new spell, do you really want them to have to learn the ins and outs of the entire codebase before they can start? Much easier to say "Modify this one file, and these interface files show how you interact with the rest of the mud".


Yep. While the concept is part of OO, it didn't originate with OO. It's called structured or modular programming. In C you partition your application into modules, expose your interface, and hide your local data and functions with static. In Fortran, you partition your common data into named sections.

Mud sockets programming is a good example. How the TCP stack works is hidden from the programmer (the construction and handling of packets, demultiplexing I/O and buffer management). The mud programmer uses a very much simplified interface.
17 Jun, 2007, Justice wrote in the 30th comment:
Votes: 0
Tyche said:
Yep. While the concept is part of OO, it didn't originate with OO. It's called structured or modular programming. In C you partition your application into modules, expose your interface, and hide your local data and functions with static. In Fortran, you partition your common data into named sections.


Yup, OOP wasn't created in a vacuum. The fact is that code reuse exists outside of OOP in various forms. It's one reason I try to point out the way things were handled in C before, as well as the C++ way. In some situations, a class is a poor way to handle things. Several C constructs can exhibit limited polymorphic behavior. Macros such as LINK and UNLINK allow code to be written that supports a wide variety of structures. Function pointers allow a single interface to be used for multiple implementations.
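A simplified LINK written in that spirit (this is a rewrite for illustration, not the actual SMAUG macro): it appends a node to a doubly-linked list, and because the preprocessor doesn't care about types, the same macro serves any struct with next/prev pointers.

```cpp
#include <cassert>

// Works for ANY struct with next/prev members - limited polymorphism
// through the preprocessor rather than through the type system.
#define LINK(node, first, last)              \
    do {                                     \
        (node)->next = nullptr;              \
        (node)->prev = (last);               \
        if ((last)) (last)->next = (node);   \
        else (first) = (node);               \
        (last) = (node);                     \
    } while (0)

// Two unrelated structs, both usable with the one macro.
struct Mob { Mob *next, *prev; };
struct Obj { Obj *next, *prev; int vnum; };
```

The trade-off versus a template or base class: no type checking at all, and errors show up as cryptic compile messages or silent corruption rather than a clear diagnostic.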
17 Jun, 2007, Scandum wrote in the 31st comment:
Votes: 0
When working with multiple files in C it's also quite easy to create global and private functions.

I've personally never seen much use for this, since function names are unique, which would make using this restriction a psychological rather than a technical issue. I prefer dumping all function declarations in one .h file which is used by all .c files as an easy way to avoid clutter.

I've always been slightly puzzled by people having a .h file for every .c file.
17 Jun, 2007, KaVir wrote in the 32nd comment:
Votes: 0
Scandum said:
I prefer dumping all function declarations in one .h file which is used by all .c files as an easy way to avoid clutter.


You could reduce the clutter even further by having only one .c file. In fact, if you did that, you wouldn't even need a header file any more. All you need to do then is obfuscate it :cool:

Of course, sometimes it's nice to have code that doesn't need to be completely recompiled every time you make a change, broken down into high-cohesion low-coupling modules that make the software far more robust, reliable, reusable, efficient, understandable, testable and maintainable. In these cases, it's preferable to have multiple source and header files.
18 Jun, 2007, Scandum wrote in the 33rd comment:
Votes: 0
KaVir said:
Of course, sometimes it's nice to have code that doesn't need to be completely recompiled every time you make a change, broken down into high-cohesion low-coupling modules that make the software far more robust, reliable, reusable, efficient, understandable, testable and maintainable. In these cases, it's preferable to have multiple source and header files.

With my setup you only need to recompile the entire source code if you change one of the global structures, which isn't all that often.

Having one .c file wouldn't make the code less robust, reliable, reusable, efficient, understandable, or testable. The layout might be psychologically important to a programmer, but other than a different layout it'd be the same code, whether it sucks or rocks.

The only thing that would suffer would be maintainability, and not even that if you add jump points and an index of what is contained in the file. This could even increase maintainability because a good index would be more informative than typing 'ls'.
18 Jun, 2007, KaVir wrote in the 34th comment:
Votes: 0
Breaking the software down into smaller modules means that you don't have to understand the entire program, only the module you're currently working on - and it's obviously easier to understand a small section than the entire thing. This also greatly helps testing, particularly when the modules are weakly coupled, as you can test each module in isolation.

When it comes to reusability, it's far easier to be able to simply copy a weakly coupled module from one application into another than to try and dig out a chunk of functionality that has roots throughout the rest of the application.

Robustness, reliability and efficiency aren't directly inherent to a modular design, but are instead due to such software being so much easier to understand, maintain and test.
19 Jun, 2007, Scandum wrote in the 35th comment:
Votes: 0
While a weakly coupled module has advantages when it comes to portability there are several issues:

1. Code bloat. What might take 10 lines of code with a strongly coupled module might take 100 lines of code with a weakly coupled module.

2. Lack of centralization. By updating 10 lines of centralized code the behavior of 10 modules could be significantly altered. With weakly coupled modules you might have to update 50 lines of code in 10 modules, totaling 500 lines of code.

3. Lack of Consistency. If one strongly coupled module works all other modules are likely to work. If one module is broken all others are likely to be broken, hence bugs will show up more quickly and can be fixed system wide.

Hence a total disregard for modular design could significantly increase maintainability (given you have the required inside knowledge) while avoiding code bloat and all the issues it tends to bring along.
19 Jun, 2007, Justice wrote in the 36th comment:
Votes: 0
Scandum said:
When working with multiple files in C it's also quite easy to create global and private functions.


All functions in C are global. You can't define them as a member of any construct. Also, C does not support any form of visibility. Simply omitting the declaration does not make it "private", as any C file can simply declare it.
19 Jun, 2007, kiasyn wrote in the 37th comment:
Votes: 0
what about static c functions =o
19 Jun, 2007, Justice wrote in the 38th comment:
Votes: 0
Scandum said:
While a weakly coupled module has advantages when it comes to portability there are several issues:

1. Code bloat. What might take 10 lines of code with a strongly coupled module might take 100 lines of code with a weakly coupled module.


In practice, a properly designed module reduces the amount of code necessary by having a limited scope. A "strongly coupled module" (do I smell an oxymoron?) implies that each side of the system knows about the other. This generally increases complexity and thus the difficulty of maintaining and debugging.

Scandum said:
2. Lack of centralization. By updating 10 lines of centralized code the behavior of 10 modules could be significantly altered. With weakly coupled modules you might have to update 50 lines of code in 10 modules, totaling 500 lines of code.

This is blatantly false and based on poor assumptions. First, modular designs tend to have greater centralization by allowing multiple systems to function with the same interface. Second, with "strongly coupled" code, it's generally MORE important to modify the dependent code than with "weakly coupled" code. This is because the dependent code is "strongly" tied to the central system.

Scandum said:
3. Lack of Consistency. If one strongly coupled module works all other modules are likely to work. If one module is broken all others are likely to be broken, hence bugs will show up more quickly and can be fixed system wide.

This explanation has nothing to do with "consistency"; it's more related to debugging. While it sounds logical, this is also false. Ironically, it also contradicts your previous point. It depends largely on the design of the individual systems as to whether they are sensitive to each other. Additionally, the existence of a bug does not make it readily apparent. The symptoms of a bug may appear in one of the dependent systems, making it much more difficult to debug. With a weakly coupled system, it tends to be easier to isolate the modules to prevent this sort of behavior. However, depending on the design, either a weakly or strongly coupled system may exhibit any of these behaviors.

Scandum said:
Hence a total disregard for modular design could significantly increase maintainability (given you have the required inside knowledge) while avoiding code bloat and all the issues that tends to bring along.


The first part of this statement is only true for very small projects (ie, 1 or 2 programmers), "could" being the key word. The more developers involved, the more important concepts like interfaces and modular design become. The second part of this statement is blatantly false and obviously based on biased assumptions and a lack of experience.
19 Jun, 2007, Tyche wrote in the 39th comment:
Votes: 0
Scandum said:
2. Lack of centralization. By updating 10 lines of centralized code the behavior of 10 modules could be significantly altered. With weakly coupled modules you might have to update 50 lines of code in 10 modules, totaling 500 lines of code.


If you had to update 50 lines of code in 10 modules to make a feature change, then your application is almost by definition poorly partitioned and strongly coupled… not weakly coupled. I mean, the point of partitioning into modules is to centralize (and encapsulate) the code for subsystems/features in a module. Of course, obviously we're playing with muds/toys and I don't think there's anything wrong with finding your code mispartitioned after making cool feature changes.

BTW, when I released Murk++ it was one big whopping file with as few forward declarations as possible. Several people commented that they didn't like their C++ that way. I actually prefer to write all my own stuff initially as one big file, as I don't like to continually search across files or switch files to edit.
19 Jun, 2007, Justice wrote in the 40th comment:
Votes: 0
kiasyn said:
what about static c functions =o


The static keyword restricts the scope of the function to the file in which it was declared. While it behaves similarly to "private" it does not exhibit the full behavior. You can view it as a grandparent of sorts.
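For example (valid as both C and C++; api_call and helper are invented names):

```cpp
#include <cassert>

// 'static' at file scope gives the function internal linkage: only this
// translation unit can see or call it. Another file declaring its own
// helper() would get a completely separate function - no clash.
static int helper(int x) {
    return x * 2;
}

// A non-static function has external linkage and is visible program-wide;
// any file can declare and call it. This split is as close to
// public/private as C gets.
int api_call(int x) {
    return helper(x) + 1;   // internal helper, hidden from other files
}
```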