12 Nov, 2009, Lobotomy wrote in the 1st comment:
Votes: 0
I've more or less found all the information I need through Google regarding this subject, but I feel like fielding the question anyway. Other than the functionality differences, I've never really given pre/post-incrementation any thought (with regard to speed/efficiency). Primarily I think this is because in C, for all intents and purposes, it just didn't matter - they were (as far as I know) the same. However, while looking through information regarding C++ I came across a suggestion that one should always use pre-increment and only use post-increment as necessary, as the former is faster than the latter - the latter involving the creation of a temporary object (or more) to store the original value to be returned. After researching the matter further, though, there seems to be a consensus that even in C++ it doesn't matter: aside from the speed differences being negligible, most/all compilers will optimize a post-increment so that it is the same as a pre-increment when the value isn't being used.

So, with the above in mind, the question is this: What is your preference? Do you always post-increment and then pre-increment as necessary, or do you always pre-increment and then post-increment as necessary?
12 Nov, 2009, David Haley wrote in the 2nd comment:
Votes: 0
I use post-increment except when it matters, because I think it looks nicer and reads more naturally (to me, at least). (And, it's C++, not ++C…) And, it almost never matters.

This is just about personal preference, though, isn't it? Isn't it kind of like asking if one prefers vanilla or chocolate? You seem to have already answered for yourself the problems of efficiency etc., so I'm not sure if you're asking something other than personal preference.
12 Nov, 2009, Caius wrote in the 3rd comment:
Votes: 0
Lobotomy said:
After researching the matter further, though, there seems to be a consensus that even in C++ it doesn't matter: aside from the speed differences being negligible, most/all compilers will optimize a post-increment so that it is the same as a pre-increment when the value isn't being used.

So, with the above in mind, the question is this: What is your preference? Do you always post-increment and then pre-increment as necessary, or do you always pre-increment and then post-increment as necessary?


The speed difference is negligible for primitive types. For classes overloading the pre/post-increment operators it can be a different story. If an object of the class is expensive to construct, then the performance difference can be very significant when you need a temp object. I use pre-increment when I can, and post-increment when I must.
12 Nov, 2009, Tyche wrote in the 4th comment:
Votes: 0
Lobotomy said:
Primarily I think this is because in C, for all intents and purposes, it just didn't matter - they were (as far as I know) the same. However, while looking through information regarding C++ I came across a suggestion that one should always use pre-increment and only use post-increment as necessary, as the former is faster than the latter - the latter involving the creation of a temporary object (or more) to store the original value to be returned.


Well, the issue is similar in C and C++.

g++-4 (GCC) 4.3.2 20080827 (beta) 2
g++-4 -ggdb -O3 inc.cpp

while (i < 100)
    printf("%d\n", ++i);
0x00401120 <main+32>: add $0x1,%ebx
0x00401123 <main+35>: mov %ebx,0x4(%esp)
0x00401127 <main+39>: movl $0x403040,(%esp)
0x0040112e <main+46>: call 0x4011e0 <printf>
0x00401133 <main+51>: cmp $0x64,%ebx
0x00401136 <main+54>: jne 0x401120 <main+32>

while (i < 100)
    printf("%d\n", i++);
0x00401140 <main+64>: mov %ebx,%eax
0x00401142 <main+66>: lea 0x1(%eax),%ebx
0x00401145 <main+69>: mov %eax,0x4(%esp)
0x00401149 <main+73>: movl $0x403040,(%esp)
0x00401150 <main+80>: call 0x4011e0 <printf>
0x00401155 <main+85>: cmp $0x64,%ebx
0x00401158 <main+88>: jne 0x401140 <main+64>
12 Nov, 2009, Runter wrote in the 5th comment:
Votes: 0
I use pre-increment except where I specifically need to use post-increment. It's always read better to me that way, as well.
13 Nov, 2009, Davion wrote in the 6th comment:
Votes: 0
Lobotomy said:
I've more or less found all the information I need through Google regarding this subject, but I feel like fielding the question anyway. Other than the functionality differences, I've never really given pre/post-incrementation any thought (with regard to speed/efficiency). Primarily I think this is because in C, for all intents and purposes, it just didn't matter - they were (as far as I know) the same. However, while looking through information regarding C++ I came across a suggestion that one should always use pre-increment and only use post-increment as necessary, as the former is faster than the latter - the latter involving the creation of a temporary object (or more) to store the original value to be returned. After researching the matter further, though, there seems to be a consensus that even in C++ it doesn't matter: aside from the speed differences being negligible, most/all compilers will optimize a post-increment so that it is the same as a pre-increment when the value isn't being used.

So, with the above in mind, the question is this: What is your preference? Do you always post-increment and then pre-increment as necessary, or do you always pre-increment and then post-increment as necessary?


I personally always use ++increment. On another note though, with C++'s operator overloading you can overload the prefix and postfix increment (and decrement) operators to do different things entirely.

I also remember reading somewhere that gcc will at least optimize out the extra steps when they're not needed, so in some cases it might not even matter.
13 Nov, 2009, David Haley wrote in the 7th comment:
Votes: 0
Davion said:
On another note though, with C++'s operator overloading you can overload the prefix, and postfix increment (and decrement) to do different things entirely.

This is a great example, though, of something evil and confusing: if ++a and a++ do rather different things (beyond the normal pre and post behavior), you have a recipe for serious confusion.

Davion said:
I also remember somewhere that gcc will at least optimize out the extra steps when it's not needed so in some cases it might not even matter.

It's a good thing that Lobotomy mentioned this exact point already in the first post. :tongue:
13 Nov, 2009, quixadhal wrote in the 8th comment:
Votes: 0
I tend to use post-increment more often. Once upon a time I used to write really dense code and used whichever ones let me do more things in fewer statements… but I (and anyone else, I'm sure) hate to look back at that kind of stuff now, and the compiler does a pretty good job at peephole optimization these days.

It's also good to know how your language will treat complex expressions… for example:

#include <stdio.h>

int main(int argc, char **argv) {
    int a[2];
    int b[2];
    int i;

    a[0] = 1; a[1] = 2;
    b[0] = 1; b[1] = 2;
    i = 0; if( a[i++] == a[i] ) printf("yep\n");
    i = 0; if( a[++i] == a[i] ) printf("yep\n");
    i = 0; if( a[i++] == b[i] ) printf("yep\n");
    i = 0; if( a[++i] == b[i] ) printf("yep\n");
    return 0;
}


In all cases, the whole expression inside the if clause gets evaluated before (or after) the increment, in C and C++. Is that true for ALL languages? Not if the precedence for ++ is higher than ==.
13 Nov, 2009, David Haley wrote in the 9th comment:
Votes: 0
Not a lot of other (recently designed) languages support this kind of stuff anyhow, and the majority of the time the justification given is that it's too confusing and complicated. A classic example is *p++ = ++i or something along those lines.

To be honest, I almost never use the increments inside larger expressions. I don't think it's cute to write as much code as possible in the smallest amount of space possible – as Quix said it gets kind of annoying to read that code again later on.
13 Nov, 2009, KaVir wrote in the 10th comment:
Votes: 0
Davion said:
On another note though, with C++'s operator overloading you can overload the prefix, and postfix increment (and decrement) to do different things entirely.

That can be a real problem - not so much for you (because you can make sure you implement the operators properly) but for your compiler. It can optimise integral types, as has been pointed out already, but it can't do the same for objects with their own overloaded operators, because it doesn't know what those operators will do.

And as Caius pointed out, the speed difference between the prefix and postfix operators isn't always negligible when it comes to objects, each of which might be allocating and deallocating large chunks of memory.
13 Nov, 2009, David Haley wrote in the 11th comment:
Votes: 0
KaVir said:
but it can't do the same for objects with their own overloaded operators, because it doesn't know what those operators will do.

Well, it kind of does – in fact it knows exactly what the operators will do. What those operators do, however, might be complex enough that it doesn't know how to optimize them.

IMO, though, something a little fishy is going on if operator overloading does something so complex that one has to think about the issue.

Note however that what Davion said is that you can override ++o and o++ such that prefix and postfix operators might do something entirely different, and perhaps even have terrible performance on the prefix but wonderful performance on the postfix. Or, both might be entirely equivalent. When you override the ++ operator, you're not necessarily creating temporary objects anymore – it depends on what exactly you do in there.

All of this is yet another reason why, in many cases, operator overloading is A Bad Thing. (Or at least, a Very Confusing Thing.)
13 Nov, 2009, KaVir wrote in the 12th comment:
Votes: 0
David Haley said:
KaVir said:
but it can't do the same for objects with their own overloaded operators, because it doesn't know what those operators will do.

Well, it kind of does – in fact it knows exactly what the operators will do.

Not in relation to each other. It cannot optimise "Creature++" into "++Creature" because it doesn't know what you've overloaded the operators to do - they could be doing anything, therefore the compiler has to call the postfix increment operator (which for a typical implementation would create a useless temporary object).
13 Nov, 2009, David Haley wrote in the 13th comment:
Votes: 0
KaVir said:
Not in relation to each other. It cannot optimise "Creature++" into "++Creature" because it doesn't know what you've overloaded the operators to do - they could be doing anything, therefore the compiler has to call the postfix increment operator (which for a typical implementation would create a useless temporary object).

We're talking about overloaded operators here: I'm not sure what exactly you are referring to by "a typical implementation". The whole point is that we're no longer using the normal implementation because the operators are overloaded.

In fact, I might argue that a "typical implementation" of an overloaded postfix operator might not create a temporary object unless it is somehow crucial to the operation. And if it's unused, the compiler will most likely optimize it away.

For example, here is the g++ 4.3 implementation of the STL list iterator:

_Self&
operator++()
{
    _M_node = _M_node->_M_next;
    return *this;
}

As you can see, there is no temporary object created.

Here is another version of the same, from the same implementation:

_Self
operator++(int)
{
    _Self __tmp = *this;
    _M_node = _M_node->_M_next;
    return __tmp;
}

Here, we do create the temporary, but if it goes unused, the compiler is (as has been pointed out several times by others in this thread) likely to optimize it away.



Anyhow, the main point I want to make here is that when talking about overloaded operators, it's unclear what exactly is meant by "typical implementations", and really, all bets are off as to what the overloaded version is doing.
13 Nov, 2009, KaVir wrote in the 14th comment:
Votes: 0
David Haley said:
We're talking about overloaded operators here: I'm not sure what exactly you are referring to by "a typical implementation".

When applied to integral types, the prefix operator increments and returns the variable, while the postfix operator creates a copy of the variable, increments the variable, then returns the copy.

A "typical implementation" (from a coding standards perspective) of an overloaded prefix operator would make some change to the object and then return a reference to it. A "typical implementation" of an overloaded postfix operator would create a copy of the object, call the prefix operator on the object, then return the copy.

In both cases the prefix operator is clearly more efficient. This isn't really a problem for integral types, because the difference is negligible and most compilers will optimise it anyway, replacing the postfix with a prefix where possible. But the same does not hold true for objects with overloaded prefix/postfix operators - the compiler cannot change the postfix to a prefix (because it has no way to know whether that's a valid optimisation) and cloning a large object may no longer be a negligible difference.

David Haley said:
For example, here is the g++ 4.3 implementation of the STL list iterator: … As you can see, there is no temporary object created.

Yes, that's a fairly typical implementation of an overloaded prefix increment operator.

David Haley said:
Here is another version of the same, from the same implementation: … Here, we do create the temporary,

It's not "the same" - that's the overloaded postfix operator. The two examples you've given are exactly what I described earlier as a "typical implementation" of the overloaded increment operators.

David Haley said:
but if it goes unused, the compiler is (as has been pointed out several times by others in this thread) likely to optimize it away.

No, it won't - that's the point I'm making. If you use the overloaded postfix increment operator on your object, then that's what will be called. With integral types the compiler knows what the postfix and prefix do, and can optimise accordingly. But overloaded operators could do anything - the compiler can't simply convert a postfix to a prefix, it could completely break the software.
13 Nov, 2009, Tyche wrote in the 15th comment:
Votes: 0
KaVir said:
It's not "the same" - that's the overloaded postfix operator. The two examples you've given are exactly what I described earlier as a "typical implementation" of the overloaded increment operators.


Right. The overloaded postfix signature actually requires an object to be created.
13 Nov, 2009, David Haley wrote in the 16th comment:
Votes: 0
KaVir said:
But overloaded operators could do anything - the compiler can't simply convert a postfix to a prefix, it could completely break the software.

I'm a little unsure why you seem to believe that the compiler won't simply look at what the code is doing, and optimize the way it always does. Once the code is generated, the compiler doesn't give a hoot if it came from pre- or post-fix operators.

You've made this claim several times now; here's the clearest phrasing:
"because [the compiler] has no way to know whether that's a valid optimisation"
This is a very strong statement. You are making a statement of theoretical impossibility, and not merely saying that "the problem is difficult". What proof of this do you have?


Still, just to be clear, everybody should realize that these situations are relatively infrequent. It's not at all common to increment objects (other than e.g., iterators, which have extremely cheap penalty for postfix even if you assume no optimization) – dealing with numbers is a far more common use case. And, we've all very heatedly agreed by now that the compiler will deal with this.

I don't think we've really said that much useful, to be honest. FWIW, I maintain that it's a matter of personal preference in the vast majority of cases. Oh well.
13 Nov, 2009, David Haley wrote in the 17th comment:
Votes: 0
BTW, if you think I'm saying it never matters, I'd point out that in post #2 I did in fact say that when it matters I use prefix instead of postfix. It just happens that (also as I said) it almost never matters.
13 Nov, 2009, KaVir wrote in the 18th comment:
Votes: 0
David Haley said:
I'm a little unsure why you seem to believe that the compiler won't simply look at what the code is doing, and optimize the way it always does.

Because as you yourself said, "all bets are off as to what the overloaded version is doing". When performing a postfix increment operation on an integral type it's pretty simple for the optimiser to check whether the return value is used, and if not, change it to a prefix operator. But with overloaded operators the optimiser would literally need to compare two separate functions and determine whether one was an optimal version of the other.

David Haley said:
Once the code is generated, the compiler doesn't give a hoot if it came from pre- or post-fix operators.

Yes it does - the overloaded prefix and postfix operators are two independent functions. They could do two completely different things.
13 Nov, 2009, David Haley wrote in the 19th comment:
Votes: 0
KaVir said:
Because as you yourself said, "all bets are off as to what the overloaded version is doing".

Well, yes. That doesn't mean that it's always impossible to simply look at the generated code and deduce that the created object is useless, and that the creation process is side-effect free, and therefore can be skipped.

KaVir said:
But with overloaded operators the optimiser would literally need to compare two separate functions and determine whether one was an optimal version of the other.

No. Compilers have very many ways of optimizing code that don't rely on anything like this. In the simplest case, it's very easy to tell if a given instruction is actually doing anything. A simple example:

local int a
a = 3
print a
a = a + 1
<end of scope; a is destroyed>


In this case, it's quite obvious that the addition is useless, and so it can be skipped. This hardly involves comparing things to other things the user might or might not have done.

KaVir said:
Yes it does - the overloaded prefix and postfix operators are two independent functions. They could do two completely different things.

Umm. I think you might have missed the point. You might have read too quickly and missed the "once the code is generated" part; I was talking explicitly about what happens after code has been generated (but before optimization, obviously). The fact that this sequence of intermediate-language instructions comes from here and that that sequence comes from there is really utterly irrelevant as far as many optimization techniques are concerned.
(I'm assuming that you have non-trivial background in compilers and optimization here. If not, I can go into more detail. Given your previous statement about optimization, I'm not really sure how much you've worked with optimization theory.)
14 Nov, 2009, David Haley wrote in the 20th comment:
Votes: 0
As an afterthought, it is indeed possible that many compilers don't attempt to look into object creation because it's not easy to do, but that doesn't mean that it's a theoretical impossibility. It means that people haven't done it because they don't think it's worth it (in terms of cost to develop the optimization vs. gains produced by the optimization).