02 Jun, 2009, flumpy wrote in the 41st comment:
Votes: 0
David Haley said:
.. I have concluded that I was correct to doubt its general scope


You're never wrong, are you?
David Haley said:
It is true that I moved past my initial skepticism of your claim, as I have concluded that I was correct to doubt its general scope


Good grief. The scope is entirely relative to the context in which the discussion is taking place. I doubt many MUD writers provide financial forecasts for their players unless they are specifically modelling a very complex stock market*. I would have thought that most would be doing book-keeping-type calculations for their shops and so on, at least in my (admittedly) limited mudding experience…

David Haley said:
I am not sure what kind of substance you are looking for as I have given many examples


When you do not provide demonstrations or hard evidence (not just empirical claims) that your assertions are true, then people have every reason to doubt you. Semper necessitas probandi incumbit ei qui agit (the burden of proof always lies with the one who asserts), and since I am not from a financial background myself, I have every reason to disbelieve your position. Especially when someone else who is from a financial background appears to be saying almost the opposite.

Provide me with ONE example of a forecast, in code or otherwise, with boundary cases, where floating point can be used safely and the results not misinterpreted.

David Haley said:
So really, is there anything else, or can we stop wasting time please


When are you going to apologise for insulting and belittling me **?

[EDIT: * admittedly the other thread was discussing a stock system, however the comment I made was within the context of performing calculations with shares and NOT forecasting]

[EDIT: ** which you seem to do a lot btw]
02 Jun, 2009, David Haley wrote in the 42nd comment:
Votes: 0
You seem to be serious here so I'll elaborate further. Tyche's background is in book-keeping, as he said, so it's rather unsurprising that he says you shouldn't use floating point arithmetic for that (which I have agreed with several times). As for your question, it would have been a lot easier if you'd said you wanted me to explain forecasting etc. from the very beginning: we'd have been able to shorten this whole affair to just a post or two.

Anyhow, any kind of factor risk computation involves taking a quantity, a price and a risk loading, and multiplying them together to get an exposure; you can then get, e.g., the P&L due to that factor by further multiplying the exposure by the factor's return. Another application is hedging a book against some factor (e.g., the market) by making sure that your total exposure is more or less zero. Floating point arithmetic is completely adequate for these purposes at a precision of a few cents, and when you're hedging a multi-million or even multi-billion dollar book, worrying about a few cents, a few dollars, or hell, even a few hundred dollars is a waste of time. Besides, there's enough noise in the input variables to begin with (the loadings being computed by various forms of regression, for example) that there's no point pretending to total precision where there wasn't total precision to begin with.
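
To make the arithmetic concrete, here is a minimal C sketch of that kind of computation; every figure (quantity, price, loading, factor return) is invented purely for illustration, not taken from any real book.

#include <stdio.h>

/* Minimal sketch of a factor exposure / P&L computation as described
 * above; all figures are invented for illustration. */
int main(void)
{
    double quantity      = 125000.0;   /* shares held           */
    double price         = 42.17;      /* price per share       */
    double loading       = 1.3;        /* factor risk loading   */
    double factor_return = -0.021;     /* factor moved -2.1%    */

    double exposure = quantity * price * loading;  /* dollar exposure to the factor   */
    double pnl      = exposure * factor_return;    /* P&L attributable to that factor */

    printf("exposure: %.2f\n", exposure);
    printf("P&L due to factor: %.2f\n", pnl);
    /* Any floating point rounding error here is a tiny fraction of a cent,
     * dwarfed by the noise in the loading itself (a regression estimate). */
    return 0;
}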

Another example of forecasting is the kind of thing you would do in elementary economics or business management classes, where you compute the net present value of some cash flow. When dealing with cash flow forecasts over many years, again with very large sums of money, and yet again with approximations and assumptions such as the interest rate remaining constant over the period considered, insisting on total precision is a fool's errand: the problem domain is noisy, and exact arithmetic just gives a false sense of security.
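
As a rough illustration in C (the cash flows and the 7% discount rate below are invented, not from any real forecast):

#include <stdio.h>
#include <math.h>

/* Sketch of a net present value calculation, as in an intro finance
 * class; cash flows and the discount rate are invented for illustration. */
int main(void)
{
    double cash_flows[] = { -1000000.0, 250000.0, 300000.0, 350000.0, 400000.0 };
    double rate = 0.07;   /* assumed to stay constant over the whole period */
    double npv  = 0.0;

    for (int t = 0; t < 5; t++)
        npv += cash_flows[t] / pow(1.0 + rate, t);   /* discount year t back to today */

    printf("NPV: %.2f\n", npv);
    /* The answer is only as good as the assumption that the rate stays at
     * 7% for four years; sub-cent precision adds nothing to it. */
    return 0;
}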

It's just like significant figures in pre-college physics. If you only know the mass to the tenth of a gram and the acceleration to the meter per second squared, there's no point reporting force = mass * accel as precise to the thousandth of a newton: that's false precision.
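
A quick worked example of the same idea, with invented numbers:

#include <stdio.h>

/* False precision: inputs known to limited precision can't yield a result
 * that is meaningful to many more digits. Values are invented. */
int main(void)
{
    double mass  = 0.0123;  /* kg, known only to the tenth of a gram */
    double accel = 9.0;     /* m/s^2, known only to the whole unit   */
    printf("force = %.6f N\n", mass * accel);  /* prints 0.110700 */
    /* Reporting 0.110700 N implies precision the inputs never had;
     * 0.11 N is all the data actually supports. */
    return 0;
}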

The only time total precision matters is when you're keeping track of things you actually have, such as dollars in a bank, or when you have some kind of legal requirement to have a very clearly reproducible computation that will work the same way regardless of some particular machine architecture's implementation of whatever kind of fixed/floating point arithmetic.

Note that many instances of book-keeping in MUDs don't involve divisible quantities to begin with, so floats are irrelevant from the get-go. When keeping track of inventory in a store, one counts integers; when tracking money, one counts gold coins, silver coins, and so forth (I've never seen a MUD that had fractional money except as a derived value, e.g. 1 gold and 50 silver is 1.5 gold). Even with a stock system, you wouldn't talk about shares in anything but integers. If you were to purchase shares, you would compute the price and apply rounding rules just as in the real world. The cost would then be subtracted from the player's money, which again is extraordinarily likely to be stored as integral values anyhow. So no matter what you're doing, there is rounding, which makes any potential loss of precision (extremely small to begin with) irrelevant.
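
As a rough sketch of what that looks like in practice (hypothetical C with made-up names, prices and exchange rates, not from any actual codebase):

#include <stdio.h>

/* Sketch of the integer book-keeping described above: money is stored as
 * whole silver coins, shares as whole units, and the only place a fraction
 * ever appears is a quoted price that is immediately rounded away. */
int main(void)
{
    long player_silver = 150;        /* 1 gold 50 silver, stored as 150 silver */
    long shares_bought = 7;
    double price_per_share = 3.25;   /* quoted price, in silver */

    /* compute the cost and round to a whole coin, as a real exchange would */
    long cost = (long)(shares_bought * price_per_share + 0.5);

    player_silver -= cost;           /* pure integer arithmetic from here on */
    printf("cost: %ld silver, remaining: %ld silver\n", cost, player_silver);
    return 0;
}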
02 Jun, 2009, Cratylus wrote in the 43rd comment:
Votes: 0
David Haley said:
…Anyhow, any kind of factor risk computation involves taking a quantity, a price, a risk loading, multiplying them all together, thereby getting an exposure, and one can get, e.g., P&L due to…


OMFG. We'll file this under "flumpy just had".

How bout we resolve to be sparing with "always" and "never" from now on?

(and while we're at it, "should be shot in the face")

-Crat
02 Jun, 2009, elanthis wrote in the 44th comment:
Votes: 0
Yeah, for things like MUDs and other games, I would never use floats because there's no reason to. Even if fractional values were needed I'd use ints, e.g., US dollar amounts can be stored as whole pennies instead of fractional dollars, or kilograms can be stored as whole grams instead of fractional kilograms. Especially given the prevalence of 64-bit computers these days (I'm not sure they even sell desktop/laptop CPUs anymore that can't do 64-bit, but I may well be wrong) there's really little need for floats or doubles for basic accounting of anything in a game.

Essentially, pick a base value for whatever type of unit you're working with. For money, you may pick thousandth-silver-coins, for example. Shares would be valued using that unit, any different denominations of coins would be valued using that unit, store goods would be valued using that unit, bank notes would use it, and so on. Makes it all quite simple.
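
For what it's worth, a rough C sketch of that base-unit scheme; the denominations, exchange rate and price below are invented, not from any particular codebase:

#include <stdio.h>
#include <inttypes.h>

/* Everything is valued in one base unit: thousandths of a silver coin,
 * held in a 64-bit integer. Denominations and prices are invented. */
typedef int64_t money_t;                  /* thousandth-silver-coins */

#define SILVER   ((money_t)1000)          /* 1 silver =  1000 base units */
#define GOLD     ((money_t)100 * SILVER)  /* 1 gold   =   100 silver     */

int main(void)
{
    money_t wallet      = 3 * GOLD + 42 * SILVER;  /* player's money    */
    money_t bread_price = 1500;                    /* 1.5 silver a loaf */

    wallet -= bread_price;                         /* exact, no rounding ever */
    printf("wallet: %" PRId64 " base units (%.3f silver)\n",
           wallet, wallet / 1000.0);
    return 0;
}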
02 Jun, 2009, elanthis wrote in the 45th comment:
Votes: 0
Quote
(and while we're at it, "should be shot in the face")


Is "should be stabbed in the eye" still kosher, at least?
03 Jun, 2009, Tyche wrote in the 46th comment:
Votes: 0
David Haley said:
Tyche's background is in book-keeping, as he said, so it's rather unsurprising that he says you shouldn't use floating point arithmetic for that (which I have agreed with several times).


I did not say my background was bookkeeping. You did.
03 Jun, 2009, David Haley wrote in the 47th comment:
Votes: 0
True. You said that you primarily worked at large banks and insurance companies for 24 years, so I assumed (based on my knowledge of what people I know do at such firms) that many programming tasks were centered on book-keeping of various forms (where book-keeping is a fairly broad concept, really). Still, if you broaden the scope of what you did too much, what I said above would start applying to you as well. :wink: Perhaps the "business [people] required" fixed point math, which is a pretty darn good reason to implement things that way, but that doesn't mean it was an intelligent requirement. :shrug: Again it all depends on what exactly you were doing, what the inputs to the problem were, and how the outputs are/were used.
03 Jun, 2009, Cratylus wrote in the 48th comment:
Votes: 0
Sorry, I don't have anything to be testy and flamey about…so instead
I'll just contribute what it looks like on my codebase:

Quote
eval return 664.79+1350.71+394.03+246.02
Result = 2655.550049


Not that I know why it would make a difference, but for completeness, I report that this result is exactly the same when ds runs on:

Microsoft Windows 2000
Ubuntu Linux (x86)
Solaris SPARC

-Crat
http://dead-souls.net
03 Jun, 2009, David Haley wrote in the 49th comment:
Votes: 0
Uh-oh – you're off by 4.9 thousandths of a cent. I hope you're not trying to plan your savings or anything.

:rolleyes:

(sorry)

As for why it might make a difference, well, different architectures can implement some things differently, even though most conform to the same standard for floating point arithmetic AFAIK.
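
For what it's worth, the difference is also consistent with the driver simply doing the arithmetic in single precision and printing six decimal places. A minimal C sketch on an ordinary IEEE-754 machine (just an illustration, not a claim about what the Dead Souls or DGD drivers actually do internally):

#include <stdio.h>

int main(void)
{
    float  f = 664.79f + 1350.71f + 394.03f + 246.02f;   /* single precision */
    double d = 664.79  + 1350.71  + 394.03  + 246.02;    /* double precision */

    printf("float : %.6f\n", f);   /* typically 2655.550049 on IEEE-754 hardware */
    printf("double: %.6f\n", d);   /* 2655.550000                               */
    return 0;
}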
03 Jun, 2009, Guest wrote in the 50th comment:
Votes: 0
Shush you, that 4.9/1000th of a cent is my kickback for being such a nice guy :)
03 Jun, 2009, flumpy wrote in the 51st comment:
Votes: 0
Great reply David, very informative. I was aware of some of the applications you talk about; we ourselves as bookmakers calculate exposures and liabilities, so I do understand.

This is exactly what I was looking for.

However. It also helps to demonstrate just how ludicrous the following statement is:

David Haley said:
It is true that I moved past my initial skepticism of your claim, as I have concluded that I was correct to doubt its general scope. I feel like my skepticism was met with extreme positions, and then my position was interpreted as the other extreme (namely that everybody should always use floating point calculations).
(emphasis mine)

1. My claim was not made in a general scope (as I have already pointed out).
2. Your skepticism was NOT met by an extreme position; it was met by an on-topic and relevant example applied to the situation being discussed.
3. Within the context of the discussion (as you yourself pointed out) using floats is pointless (forgive the pun), so defending their use is also pointless, and yet you still felt the need to do so just to win an argument and make yourself feel better (and look intelligent).

No one in their right mind (indeed, no one who actually understood, and very few people who didn't) would even begin to assail the use of floats in the context you have described. I doubt most people understand what you are talking about. Why would you assume a defensive stance? It does nothing for the conversation and just belittles my point.

We are definitely done here. In future I only ask that you take the context of a conversation into consideration before you attack others for their contributions. This is a great place to discuss things, but when you are constantly jumped on for your contributions it makes you, well, not want to contribute.

Still no apology I see. ho hum.

[edit for typos and minor corrections]
03 Jun, 2009, David Haley wrote in the 52nd comment:
Votes: 0
Quote
Within the context of the discussion (as you yourself pointed out) using floats is pointless (forgive the pun), so defending their use is also pointless

By this reasoning, it's equally pointless to have brought them up in the first place, making this whole conversation a massive waste of time and an exercise in futility. What fun! :tongue:
03 Jun, 2009, Runter wrote in the 53rd comment:
Votes: 0
David Haley said:
Well flumpy, you tell me, since you started us down this road of how floats are so terrible for anything involving money. And no, I'm not talking about abstract "mathematical problems", I'm talking about very real-world, multi-billion dollar industries (where individual companies manage billions just to themselves). I'm a little tired of defending the idea that floats work just fine for many purposes in finance, when literally decades of empirical evidence supports it, as does plain common sense when you actually look at or understand the problems being solved. So, well, that's that for this.

(edit to fix typo)


Everyone knows floating points are the cause of this economic crisis we got ourselves in.
03 Jun, 2009, David Haley wrote in the 54th comment:
Votes: 0
Yeah, it's because everybody lost those 4.9 thousandths of a cent. After a while they add up to a trillion in bailout money. :sad:
03 Jun, 2009, David Haley wrote in the 55th comment:
Votes: 0
Apropos… e^pi - pi
04 Jun, 2009, elanthis wrote in the 56th comment:
Votes: 0
Being off by thousandths of a cent in a calculation may not be a problem… but that calculation was made with numbers in the hundreds range. Try doing some heavy math (addition of 4 numbers does not count as heavy math) in the millions range, and the errors will accumulate much faster, because fewer bits are left for the fractional portion of the number.

Use doubles instead of floats and you buy yourself a LOT of room for error (doubles are 64 bits, with a 52-bit mantissa versus 23 bits for a float), which is why languages like Lua just use them by default for all numbers. Error is still introduced with doubles, but you can deal with far larger numbers before those errors become meaningful.
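
Here's a small C sketch of that effect, with an invented account balance: at seven-figure magnitudes a single-precision float can no longer even represent a one-cent increment, while a double still can.

#include <stdio.h>

int main(void)
{
    /* An account balance in the tens of millions, plus one cent. */
    float  f = 25000000.00f;
    double d = 25000000.00;

    f += 0.01f;   /* adjacent floats near 2.5e7 are 2.0 apart, so the */
                  /* cent (and here even a whole dollar) just vanishes */
    d += 0.01;    /* doubles still resolve far below a cent at this scale */

    printf("float : %.2f\n", f);   /* 25000000.00 : the cent is lost */
    printf("double: %.2f\n", d);   /* 25000000.01                    */
    return 0;
}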
04 Jun, 2009, David Haley wrote in the 57th comment:
Votes: 0
But when dealing with numbers in the millions in the first place – for things like forecasting and risk computation – you don't really care about cents, whole dollars, tens of dollars, or even hundreds of dollars, given enough millions.

Incidentally, I thought we were talking about floating point arithmetic in general, not doubles vs. floats. We almost always use doubles, not literal floats.
04 Jun, 2009, quixadhal wrote in the 58th comment:
Votes: 0
As long as we're posting platforms….

Gurbalib Revision 263 in DGD 1.3-NET-06, running under Debian Linux 2.6.18, on a 933MHz Pentium 3 CPU:
Quote
> eval return 664.79+1350.71+394.03+246.02
Result:
2655.55
Ticks used: 37


and… on that same machine, the "bc" command at the bash prompt:
Quote
quixadhal@andropov:~$ echo 664.79+1350.71+394.03+246.02 | bc
2655.55


and while I'm at it:
Quote
quixadhal@andropov:~/wiley/src$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 8
model name : Pentium III (Coppermine)
stepping : 6
cpu MHz : 930.325
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 mmx fxsr sse up
bogomips : 1862.08


Note, no fdiv bug. :)

and sure, why not…..

Quote
> eval return 664.79+1350.71+394.03+246.02
Result:
2655.55
Ticks used: 37

quixadhal@virt1:~$ echo 664.79+1350.71+394.03+246.02 | bc
2655.55
quixadhal@virt1:~$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Pentium(R) 4 CPU 3.00GHz
stepping : 9
cpu MHz : 2991.035
cache size : 512 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss up pebs bts
bogomips : 6012.43
clflush size : 64
power management:

quixadhal@virt1:~$ uname -a
Linux virt1 2.6.26-2-686 #1 SMP Thu May 28 15:39:35 UTC 2009 i686 GNU/Linux


So anyway, DGD and bc, on two different Intel platforms, both yield the expected result.
05 Jun, 2009, David Haley wrote in the 59th comment:
Votes: 0
As we saw in the first post, it's quite possible that the output routines are doing the rounding for you, assuming that the operations are actually using floating point arithmetic.
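
To illustrate that point (plain C, just an example of output formatting, not a claim about how DGD or bc actually print their results): the very same single-precision value can look clean or dirty depending solely on the format used.

#include <stdio.h>

int main(void)
{
    float sum = 664.79f + 1350.71f + 394.03f + 246.02f;

    printf("%g\n",   sum);   /* 2655.55     : default 6 significant digits  */
    printf("%.6f\n", sum);   /* 2655.550049 : the error was there all along */
    return 0;
}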
05 Jun, 2009, quixadhal wrote in the 60th comment:
Votes: 0
Of course; I'm just providing data points. I suspect DGD uses floating point arithmetic. I am less sure of GNU's "bc" utility; it may very well use fixed point. Both may be rounding the results, although I'm somewhat less convinced of that, since how would the library know to what precision you want things rounded?

In any case, it's very informative to read http://en.wikipedia.org/wiki/Floating_po..., and look at the section "Representable numbers, conversion and rounding".