Nice story. An even more powerful way to express numbers is as a continued fraction (https://en.wikipedia.org/wiki/Continued_fraction). You can express both real and rational numbers efficiently using a continued fraction representation.
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
> You can express both real and rational numbers efficiently using a continued fraction representation.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're more powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
> No, all finite continued fractions express a rational number
Any real number x has an infinite continued fraction representation. By efficient I mean that the information of the continued fraction coefficients is an efficient way to compute rational upper and lower bounds that approximate x well (they are the best rational approximations to x).
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity, and not having to do error analysis, makes them far better than any other purely numerical system.
> fundamentally equivalent to simple fractions.
This feels like it is a bit too reductionist. I can come up with a lot of examples, but it's quite hard to find the best rational approximations of a number with just fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any quadratic irrational has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can be easily converted to normal fractions and vice versa is a strength of continued fractions, not a weakness.
The linked paper by Bill Gosper is about potentially infinite continued fractions with potentially irrational symbolic terms.
> finite
That's the issue, no? If you go infinite you can then express any real number. You can then actually represent all those whose sequence is equivalent to a computable function.
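To make the "best rational approximation" property concrete, here is a minimal sketch in plain Python (not taken from the reals library mentioned above): it turns continued fraction coefficients into convergents via the standard recurrence, and the convergents of [1; 2, 2, 2, ...] straddle sqrt(2), tightening from both sides.

    from fractions import Fraction

    def convergents(coefficients):
        """Yield the convergents of the continued fraction [a0; a1, a2, ...]
        using the recurrence h_n = a_n*h_(n-1) + h_(n-2), k_n = a_n*k_(n-1) + k_(n-2)."""
        h_prev, h = 1, coefficients[0]
        k_prev, k = 0, 1
        yield Fraction(h, k)
        for a in coefficients[1:]:
            h, h_prev = a * h + h_prev, h
            k, k_prev = a * k + k_prev, k
            yield Fraction(h, k)

    # sqrt(2) = [1; 2, 2, 2, ...]: successive convergents alternate below and above sqrt(2)
    for c in convergents([1] + [2] * 7):
        print(c, float(c))

    # golden ratio = [1; 1, 1, ...]: convergents are ratios of Fibonacci numbers
    print(list(convergents([1] * 10))[-1])   # 89/55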
Continued fractions are very cool. I saw in a CTF competition once a question about breaking an RSA variant that relied on the fact that a certain ratio was a term in sequence of continued fraction convergents.
Naturally, the person pursuing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem, and we finished in third place.
Sounds a bit like https://en.wikipedia.org/wiki/Wiener%27s_attack.
(It's not a good article when it comes to the attack details, unfortunately.)
Why unnecessarily air this grievance in a public forum? If this person reads it they will be unhappy, and I'm sure they have already suffered enough from this failure.
What do you mean by "[n]aturally" here?
I have been working on a new definition of real numbers which I think is a better foundation for real numbers and seems to be a theoretical version of what you are doing practically. I am currently calling them rational betweenness relations. Namely, it is the set of all rational intervals that contain the real number. Since this is circular, it is really about properties that a family of intervals must satisfy. Since real numbers are messy, this idealized form is supplemented with a fuzzy procedure for figuring out whether an interval contains the number or not. The work is hosted at (https://github.com/jostylr/Reals-as-Oracles) with the first paper in the readme being the most recent version of this idea.
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
That sounds kind of similar to Dedekind cuts, but crossed with ordered sequences of rationals below and above the real? Cool web site.
How do you work out an answer for x - y when eg x = sqrt(2) and y = sqrt(2) - epsilon for arbitrarily small epsilon? How do you differentiate that from x - x?
In a purely numerical setting, you can only distinguish these two cases when you evaluate the expression with enough accuracy. This may feel like a weakness, but if you think about it, it is a much more "honest" way of handling inaccuracy than just rounding like you would do with floating point arithmetic.
A good way to think about the framework is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
Is that how that kid calculated super long pi?
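A minimal sketch of that bounding process, using nothing but Fraction and bisection (so not how the reals library actually does it): the rational bounds on sqrt(2) * sqrt(2) keep tightening around 2 but never collapse onto it.

    from fractions import Fraction

    def sqrt_bounds(n, iterations=20):
        """Rational lower/upper bounds on sqrt(n) by plain bisection."""
        lo, hi = Fraction(0), Fraction(max(n, 1))
        for _ in range(iterations):
            mid = (lo + hi) / 2
            if mid * mid <= n:
                lo = mid
            else:
                hi = mid
        return lo, hi

    lo, hi = sqrt_bounds(2)
    print(float(lo * lo), float(hi * hi))   # something like 1.99999... and 2.00000..., never exactly 2
    print(lo * lo < 2 < hi * hi)            # True: we only ever know 2 ± epsilon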
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets in - I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link.
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package" which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and jump straight to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements, trying to say I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
However, in the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. Sure enough, the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Nice story. Thank you for sharing. For years, I struggled with the idea of "message passing" for GUIs. Later, I learned it was nothing more than the window procedure (WNDPROC) in the Win32 API. <sad face>
> However I wasn't sure how to improve further without help, so my mom found me an IT school.
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)
Probably institutes teaching IT stuff. They used to be popular (still?) in my country (India) in the past. That said, there are plenty of places which train in reasonable breadth in programming, embedded etc. now (think less intense bootcamps).
> Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. I remember I had one row in Excel per invited person with data, and placeholders in the Word document, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but it felt cool.
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with some template web page with some small code I could edit in Notepad and see come to life.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy in the sense that it is the worst, except for all the others. Immediately when I saw the example I thought: it is going to be either a Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis, though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
IEEE 754 is what you get when you want numbers to have huge dynamic range, roughly equal relative precision across the range, and fixed bit width. It balances speed and accuracy, and produces a result that is very close to the expected result 99.9999999% of the time. A competent numerical analyst can take something you want to do on paper and build a sequence of operations in floating point that compute that result almost exactly.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
William Kahan worked on both IEEE 754 and HP calculators. The speed gap between something like an 8087 and a calculator was not that big back then, either.
IEEE 754 is what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions, even the seemingly weirder parts like -0, subnormals and all the rounding modes. It was not really democratically designed, but done by numerical computing experts coupled with hardware design experts. Every "simplified" implementation of floating point that has appeared (e.g. auto-FTZ mode in vector units) has eventually been dragged kicking and screaming back to the IEEE standard.
Another way to see it is that floating point is the logical extension of fixed point math to log space to deal with numbers across a large orders of magnitude. I don't know if "beautiful" is exactly the right word, but it's an incredibly solid bit of engineering.
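For the curious, here's what that sign/exponent/fraction split looks like for a 64-bit double, using only the Python standard library (a toy inspection, not anything from the article):

    import struct

    def ieee754_fields(x):
        """Split an IEEE 754 double into sign, biased exponent, and fraction bits."""
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        sign = bits >> 63
        exponent = (bits >> 52) & 0x7FF          # 11 bits, bias 1023
        fraction = bits & ((1 << 52) - 1)        # 52 bits, implicit leading 1 for normals
        return sign, exponent, fraction

    for x in (1.0, 0.1, -0.0, 2.0 ** -1030):     # the last one is subnormal
        print(x, ieee754_fields(x))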
I feel like your description comes across as more negative on the design of IEEE-754 floats than you intend. Is there something else you think would have been better? Maybe I’m misreading it.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16, which just works with the most significant half of a float32 (see the sketch after this comment).
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth and the main operation is f32 + x * y -> f32 (ie multiply and accumulate into f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
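Here is the sketch referred to above: a bfloat16 value is (up to rounding details) just the top 16 bits of a float32, so you keep the full 8-bit exponent range but only about 2-3 decimal digits of precision. A toy truncating version in plain Python:

    import struct

    def to_bfloat16(x):
        """Keep the top 16 bits of the float32 encoding (sign, 8-bit exponent,
        7 fraction bits); real hardware rounds, this sketch simply truncates."""
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

    print(to_bfloat16(3.14159265))   # about 3.140625: only ~3 significant decimal digits
    print(to_bfloat16(1.0e38))       # still roughly 1e38: the float32 exponent range survives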
> what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions
This implies a strange way of defining what "beautiful" means in this context.
IEEE754 is not great for pure maths, however, it is fine for real life.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
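A minimal sketch of what such interval arithmetic could look like (exact Fraction endpoints; a real implementation would also handle division, outward rounding, and functions like sqrt):

    from dataclasses import dataclass
    from fractions import Fraction

    @dataclass
    class Interval:
        lo: Fraction
        hi: Fraction

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
            return Interval(min(p), max(p))

    # A 4 m x 5 m room measured with a +/- 2 cm error on each side:
    length = Interval(Fraction(398, 100), Fraction(402, 100))
    width = Interval(Fraction(498, 100), Fraction(502, 100))
    area = length * width
    print(float(area.lo), float(area.hi))   # the true area is guaranteed to lie in this range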
> IEEE754 is not great for pure maths, however, it is fine for real life.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
> IEEE 754 is like democracy in a sense that it is the worst, except for all the others.
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
That's doing a lot of work. IEEE 754 does very well in terms of error vs representation size.
What system has accurate results? I don't know any number system at all in usage that 1) represents numbers with a fixed size 2) Can represent 1/n accurately for reasonable integers 3) can do exponents accurately
Electronic computers were created to be faster and cheaper than a pool of human computers (who may have had slide rules or mechanical adding machines). Human computers were basically doing decimal floating point with limited precision.
It's ideal for engineering calculations which is a common use of computers. There, nobody cares if 1-1=0 exactly or not because you could never have measured those values exactly in the first place. Single precision is good enough for just about any real-world measurement or result while double precision is good for intermediate results without losing accuracy that's visible in the single precision input/output as long as you're not using a numerically unstable algorithm.
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
I wrote the calculator for the original BlackBerry. Floating point won't do. I implemented decimal-based floating point functions to avoid these rounding problems. This sounds harder than it was: basically, the "exponent" part wasn't how many bits to shift, but what power of ten to divide by, so that 0.1, 0.001, etc. can be represented exactly. Not sure if I had two or three digits of precision beyond what's on the display. One digit is pretty standard for 5-function calculators; scientific ones typically have two.
It was only a 5 function calculator, so not that hard, plus there was no floating point library by default so doing any floating point really ballooned the size of an app with the floating point library.
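A rough sketch of that idea, reconstructed from the description rather than the actual BlackBerry code: keep an integer mantissa and a power-of-ten exponent, so decimal inputs like 0.1 are exact and the classic binary rounding residue never appears.

    from dataclasses import dataclass

    @dataclass
    class Dec:
        mantissa: int   # all significant digits, as an integer
        exp: int        # value = mantissa * 10**exp

        def __add__(self, other):
            # Align to the smaller power of ten, then add the integer mantissas exactly.
            e = min(self.exp, other.exp)
            m = (self.mantissa * 10 ** (self.exp - e) +
                 other.mantissa * 10 ** (other.exp - e))
            return Dec(m, e)

    print(Dec(1, -1) + Dec(2, -1))   # Dec(mantissa=3, exp=-1), i.e. exactly 0.3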
Sounds like he's just using stock math functions. Both JavaScript and Python act the same way when you subtract the numbers one at a time, saving the intermediate result each step, rather than just computing 8.7 - (2.9*3).
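Easy to reproduce: binary doubles can't hold 8.7 or 2.9 exactly, so the step-by-step subtraction leaves a tiny residue, while decimal arithmetic cancels exactly.

    from decimal import Decimal

    x = 8.7
    for _ in range(3):
        x -= 2.9
    print(x)          # something like -8.881784197001252e-16, not 0.0

    d = Decimal("8.7")
    for _ in range(3):
        d -= Decimal("2.9")
    print(d)          # exactly 0.0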
It's not even about features. Calculators are mostly useful for napkin math - if I can't afford an error, I'll take some full-fledged math software/package and write a program that will be debuggable, testable, and have version control.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like speedcrunch, seem to completely misunderstand the niche of a calculator, are they not using it themselves? Calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
Another app nobody has made is a simple random music player. Tried VLC on Android and adding 5000+ songs from SD card into a playlist for shuffling simply crashes the app. Why do we need a play list anyway, just play the folder! Is it trying to load the whole list at the same time into memory? VLC always works, but not on this task. Found another player that doesn't require building a playlist but when the app is restarted it starts from the same song following the same random seed. Either save the last one or let me set the seed!
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPad OS
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
Well the 89 is a CAS in disguise most of the time which is mentioned in passing in the article.
But, I agree I almost never want the full power of Mathematica/sage initially but quickly become annoyed with calc apps. The 89 and hp prime//50 have just enough to solve anything where I wouldn’t rather just use a full programming language.
HiPER Calc Pro looks and works like a "physical" calculator; I've used it for years to good effect. I also have Wabbitemu but hardly ever use it; the former works fine for nearly everything.
Can you tell me which emulator you're using? I loved using the open source Wabbitemu on previous Android phones, but it seems to have been removed from the app store, so I can't install it on newer devices :-/
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
> Almost all numbers cannot be practically expressed
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
Does it matter that some numbers are inexpressible (i.e., cannot be computed)?
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
> Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero. But then you can try 1/(atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
You missed a 4. You are trying to say 1/(4atan(1/5)-atan(1/239)-pi/4) is a division by zero.
On the other hand 1/(atan(1/5)-atan(1/239)-pi/4) is just -1.68866...
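The difference is easy to see even with plain doubles (exact last digits depend on the platform's math library):

    import math

    # Machin's formula: pi/4 = 4*atan(1/5) - atan(1/239), so this denominator is exactly 0:
    machin = 4 * math.atan(1 / 5) - math.atan(1 / 239) - math.pi / 4
    print(machin)     # typically a tiny residue around 1e-16 rather than exactly zero

    # Without the factor of 4 the denominator is nowhere near zero:
    wrong = math.atan(1 / 5) - math.atan(1 / 239) - math.pi / 4
    print(1 / wrong)  # about -1.68866...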
I played around with the calculator source code from the Android Open Source Project after a previous submission[1]. I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].
The way this article talks about using "recursive real arithmetic" (RRA) reminds me of an excellent discussion with Conal Elliott on the Type Theory For All podcast. He talked about moving from representing things discretely to representing things continuously (and therefore more accurately). For instance, before, people represented fonts as blocks of pixels (discrete). They were rough approximations of what the font really was. But then they started to be recognized as lines/vectors (continuous): no matter the size, they represented exactly what a font was.
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.
Thanks, that's a lovely analogy and I'm excited to listen to that podcast.
I think the general idea of converting things from discrete and implementation-motivated representations to higher-level abstract descriptions (bitmaps to vectors, in your example) is great. It's actually something I'm very interested in, since the higher-level representations are usually much easier to do interesting transformations to. (Another example is going from meshes to SDFs for 3D models.)
I noticed this too, but I was confused because the calculator article was informative and interesting. It's entirely unlike the inept fluffy slop that gets posted to LinkedIn
And furthermore the content isn't as "wow, who'd a thunk it" as the author seems to think it is. I cannot be unusual in knowing that single and double precision floating point numbers just don't cut it for a lot of arithmetic tasks. Surely this is taught in every comp-sci course? And doesn't nearly everyone know the N ⊆ Z ⊆ Q ⊆ A ⊆ R ⊆ C hierarchy?, well nearly everyone doesn't but surely everyone who either decides to write a calculator app or is tasked with writing one ought to know this? The claim "A calculator app? Anyone could make that." is a patently ridiculous claim to me, anybody who would make such a claim is clearly ignorant of both software development and mathematics. Next article. "A text editor? Anyone could make that."
Some quick research yields a couple of open source CAS, such as OpenAxiom, which uses the Modified BSD license. Granted that Google has strong "NIH" tendencies, but I'm curious why something like this wasn't adapted instead of paying several engineers some undisclosed amount of time to develop a calculation system.
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
To make any type of app really good is super hard.
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my To-Do list. The way you create, edit, browse, drag+drop items is not as all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
It seems like you're looking for an outliner? Workflowy might fit your needs: https://workflowy.com/
Like others have said, the perfect to-do list is impossible because each person wants wildly different functionality.
My dream to-do list has minimal interaction, with the details handled like I have my own personal secretary. All I'd do is verbally say something like "remind me to do laundry later" and it would do the rest: Categorizing, organizing, prioritizing, scheduling and adding sub-tasks as needed.
I love the idea of automatic sub-tasks created at whatever level of granularity helps with your particular procrastination. For example, "do laundry" would add in "gather clothes, bring to laundry room, separate colors, add to washer, set timer, add to dryer, set timer, get clothes, fold clothes, put away, reschedule in a week (but hide until then)". Maybe it even adds in Pomodoro timers to help.
LLMs with reasoning might get us there soon - we've been waiting for Knowledge Navigator like assistants for years.
I'm one of the creators of Godspeed, which is a fast, 100% keyboard oriented to-do app (though we do support drag and drop as well!). And we've got a web app!
I would love to have a To-Do app that is fluid for both one-off tasks and periodic checklists (daily/weekly/monthly/etc.) Most importantly, I want it to yell at me to actually do it. I was pretty surprised that basically nothing seems to fit the bill and even what existing "GTD" type apps could do felt cumbersome and limited.
There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
All the calculators that I just tried for the article's expression give the wrong answer (HP Prime, TI-36X Pro, some casio thing). Even google's own online calculator gives the wrong answer, which is mildly ironic. [https://www.google.com/search?q=1e101%2B1-1e101&oq=1e101%2B1]
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
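For comparison, the same experiment in Python shows the gap between arbitrary-precision integers and 64-bit doubles directly (what the macOS app does internally at that 1e33/1e34 boundary is anyone's guess):

    print(10**100 + 1 - 10**100)   # 1   -- exact arbitrary-precision integers
    print(1e100 + 1 - 1e100)       # 0.0 -- a double only carries ~16 significant digits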
Tried with the HP Prime and it gave the precise 1 for the test. One needs to put it in CAS mode and use the exact form of 10^100 instead of 1E100. You will get the right answer if the calculator is instructed to use its very powerful CAS engine.
I enjoyed the article, but it seems Apple has since improved their calculator app slightly. The first example is giving me the correct result today. However, the second example with the “Underflow” result is still occurring.
I remember hearing stories that for a time there was no engineer inside Apple responsible for the iOS Calculator.
Now it seems to be revived as there were some updates to it, but those also removed one of my favourite features -> tapping equals button no longer repeats the last operation.
Find a finite-memory representation of points which allows exact addition, multiplication, and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
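With ordinary floating point the two are mathematically identical (rotating by π + θ and then reflecting through the origin is the same as rotating by θ), yet the computed coordinates usually differ in the last bits, which is exactly the trap being set up here. A minimal illustration:

    import math

    def rotate(point, theta):
        x, y = point
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))

    A, theta = (1.0, 0.0), 0.3
    B = rotate(A, theta)
    C = tuple(-c for c in rotate(A, math.pi + theta))   # rotate by pi + theta, then reflect

    print(B)
    print(C)
    print(B == C)   # usually False: the results differ by a few ULPs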
Years ago The Daily WTF had a challenge for writing the worst calculator app. My submission maintained calculation state by emitting its own source code, recompiling, and running the new executable.
I first learned to program on a Wang 2200 computer with 8KB of RAM, back in 1978. One of the math teachers stayed an hour late most days to allow us nerds to come in and use the two computers. There were more people than computers, so often you'd only get 10 or 15 minutes of time.
Anyway, I wrote a program where you could enter an equation and it would draw an ASCII graph of the curve. I didn't know how to parse expressions and even if I had I knew it would be slow. The machine had a cassette tape under computer control for storing and loading programs. What I did was to take the expression typed by the user and convert each one into its tokenized form and write it out to tape. The program would then load that just created overlay which contained something like "1000 DEF FNY(X)=X^2-5" and a FOR loop would sweep X over the designated range, and have "LET Y=FNY(X)" to evaluate the expression for me.
As a result, after entering the equation, it would take about five seconds to write out the overlay, rewind a couple blocks, and load the overlay before it would start to plot. But once it started it went pretty fast.
Interesting article, and kudos to Boehm for going the extra mile(s), but it seems like overkill to me.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with with iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
The thing about this calculator app is that it can display any number of digits just by scrolling the display field. The UX is "any number of digits the user wants" not some predetermined fixed number of digits.
One of the first ideas I had for an app was a calculator that represented numbers like shown in the article but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet like interface you could verify if the operations produced the final equation you were modeling in order to help validate if the number was correct or not. I had a TI-89 that could do something close and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn’t seem that much easier to me.
That's a CAS, as mentioned. There are plenty of open source libraries available, but one that specifically implements the algorithms discussed in this article is flintlib. Here's an example from their docs showing exactly what you want:
https://flintlib.org/doc/examples_calcium.html#examples-calc...
At the risk of coming across as being a spoilsport, I think when someone says "anyone can write a calculator app", they just mean an app that simulates a pocket calculator (which is indeed pretty easy) as opposed to one which always gives precisely the right answer (which is indeed impossible). Also, you can avoid the most embarrassing errors just by rearranging the terms to do cancellation where possible, e.g. sqrt(2) * 3 * sqrt(2) is absolutely precisely 6, not 6 within some degree of approximation.
> as opposed to one which always gives precisely the right answer (which is indeed impossible)
Per the article, it's completely possible. Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed; after this paragraph I expected a better algorithm and instead they decided to give up. Fredrik Johansson, in his paper "Calcium: computing in exact real and complex fields", gives a partial algorithm for the problem and writes: "Algorithm 2 is inspired by Richardson's algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson's algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those systems."
> Obviously we'll want a symbolic representation for the real number 1
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
I think the issue was that they are representing a real as a product of a rational and that more complicated type, so without a symbolic representation for 1, when representing a rational they would have to multiply it by an RRA representation of 1, which brings in all the decision problem issues.
Sorry for being unclear about this. A number is being expressed as a rational times a real. In the case where the rational is exactly the number we want, we want to be able to set the real to 1, so the multiplication has no effect
Because they express numbers as a rational times a real, the real in all those cases would be one. When it's one, you do rational math as normal without involving reals.
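A toy reconstruction of that "rational times real" split (my own sketch, not the actual Android code): the real factor carries a symbolic ONE tag, so purely rational inputs stay on the exact fast path and never touch the recursive-real machinery.

    from fractions import Fraction

    ONE = "one"          # symbolic marker: the real factor is exactly 1
    SQRT2 = "sqrt(2)"    # other factors would carry a tag like this plus lazy bound computation

    class Num:
        def __init__(self, rational, real_tag=ONE):
            self.rational = Fraction(rational)
            self.real_tag = real_tag

        def __add__(self, other):
            # Exact fast path: both real factors are the symbolic 1.
            if self.real_tag is ONE and other.real_tag is ONE:
                return Num(self.rational + other.rational)
            raise NotImplementedError("would fall back to recursive real arithmetic here")

    result = Num(10**100) + Num(1) + Num(-(10**100))
    print(result.rational)   # 1, computed exactly, with no reals involved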
I use the Python REPL as my primary calculator on my computer.
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (except if you overflow your entire memory), so that kind of precision loss isn't a big deal.
2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.
3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops, this is python
4. Every math feature in the windows default calculator is available in the math library.
5. As bad as python's performance reputation is, it's not at all going to be noticeable for simple math.
I was always a little envious of the people that could use bc because they knew how. I know Python and it's installed on Linux by default, so now I am no longer envious.
I really hate when people put cat images and memes in a serious article.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me when SideFX started putting memes into their official tutorial youtube channel. At least this is just a webpage and we can scroll through them...
While we're already breaking the HN guidelines—"Please don't complain about tangential annoyances—e.g. article or website formats"—let me just say that the scrolljacking on this article is awful.
The tone of the article has given away the fact that the article is not serious. At least not the way it's presented. You want something serious? Go read the pdf.
And I don't mind at all. Without this article, I probably will never know what's in the paper and how they iterated. I'll likely give up after reading the abstract -- "oh, they solved a problem". But this article actually makes much more motivating to read the original paper, which I plan to do now.
Off topic, but I believe naming this specific kind of number "real" is a misnomer. Nothing in reality is an expression of a real number. Real numbers pop up only when we abstract reality into mathematical models.
In Polish, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and reals should be called something like "abstract" or "limiting", because they first pop up as limits of some process working on rational numbers for an infinite number of steps.
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
Due to backwards compatibility modern PC CPUs have some mathematical constants in hardware, one of them Pi https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f... Moreover, that FLDPI instruction delivers 80 bits of precision, i.e. more precise than FP64.
That’s pretty much useless in the modern world because the whole x87 FPU is deprecated. Modern compilers generate SSE1 and SSE2 instructions for floating-point arithmetic instead of x87.
As far as I know, the Windows calculator has a similar approach. It uses rationals, and switches to Taylor expansions to try to avoid cancellation errors. Microsoft open-sourced it some time ago on GitHub.
I haven’t really used the iPad’s calculator app, but it looks exactly like a larger version of the iPhone app. So I don’t think there are any technical reasons why it took so long for the iPad to get that app.
lowkey this is why ieee 754 floating point is both a blessing and a curse, like yeah it’s fast n standardized but also introduces unavoidable precision loss, esp w iterative computations where rounding errors stack up in unpredictable ways. ppl act like increasing precision bits solves everything. but u just push the problem further down, still dealing w truncation, cancellation, etc. (and edge cases where numerical stability breaks down.)
… and this is why interval arithmetic and arbitrary precision methods exist, so it gives guaranteed bounds on error instead of just hoping fp rounding doesn’t mess things up too bad. but obv those come w their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly w operand size.
wonder if hybrid approaches could be the move, like symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. could optimize tradeoffs dynamically. so we’d keep things efficient while minimizing precision loss in critical operations. esp useful in contexts where precision requirements shift in real time. might even be interesting to explore adaptive precision techniques (where computations start at lower precision but refine iteratively based on error estimates).
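One very crude version of the adaptive-precision idea, using only the stdlib decimal module: re-run the same expression at increasing working precision and watch when the answer stops changing (with the caveat from upthread that agreement between two precisions is a heuristic, not a proof):

    from decimal import Decimal, getcontext

    def eval_at(prec, expr):
        getcontext().prec = prec
        return expr()

    expr = lambda: Decimal(10) ** 100 + 1 - Decimal(10) ** 100

    for prec in (16, 50, 101, 200):
        print(prec, eval_at(prec, expr))
    # The low precisions print a (possibly decorated) zero; once the working precision
    # can hold all 101 significant digits of the intermediate, the exact answer 1 appears.
    # Agreement between two successive precisions is only evidence, not a guarantee.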
This article was really well written. Usually in such articles i understand about 50%, maybe if I'm lucky 70% but this one I've understood nearly everything. It's not much of a smartness thing but an absolute refusal on my part to learn the jargon of programming as well as my severe lack of knowledge of all the big words that are thrown around lol. But really simply written love it
At some point, when I get a spare 5 years (and/or if people start paying for software again), I will start to work on a calculator application. Number system wrangling is quite fun and challenging, and I am hoping to incorporate units as a first-class citizen.
If you accept that Pi and Sqrt(2) will be represented as a terminating series of digits (say, 30), then 99% of the problems stated go away. My HP calculator doesn't represent the square root of 2 as a magic number, it's 1.414213562.
This is really cool, but it does show how Google works. They’ll pay this guy ~$3million a year (assuming stock appreciation) to do this but almost no end user will appreciate it in the calculator app itself.
I doubt that most people using the calc app expect it to handle such situations. It's nice that it does of course but IMO it misses the point that the inputs to a lot of real world calculations are inaccurate to start with.
i.e. it's more likely that I've made a few mm mistake when measuring the radius of my table than that I'm not using a precise enough version of Pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
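A back-of-the-envelope sketch of what such a calculator could do, assuming uncorrelated Gaussian errors and first-order propagation (roughly the physics 101 recipe):

    import math

    def mul_with_error(a, da, b, db):
        """First-order error propagation for a product, assuming independent errors."""
        c = a * b
        dc = abs(c) * math.sqrt((da / a) ** 2 + (db / b) ** 2)
        return c, dc

    # Table radius measured as 0.60 m with a 2 mm uncertainty:
    r, dr = 0.60, 0.002
    r2, dr2 = mul_with_error(r, dr, r, dr)   # note: r*r is really correlated, so this
                                             # slightly understates the true uncertainty
    area, darea = mul_with_error(math.pi, 0.0, r2, dr2)
    print(f"area = {area:.4f} +/- {darea:.4f} m^2")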
Does anyone know if this was the system used by higher end TI calculators like the TI-92? It had a 'rational' mode for exact answers and I suspect that it used RRA for that.
The TI-92 and similar have a full-on computer algebra system that they use when they're in exact mode [1]. It does symbolic manipulation.
This is different from what the post (and linked paper) discuss, where the result will degrade to recursive real arithmetic, which is correct but only to a bounded level of precision. A CAS will always give a fully-exact (although sometimes very unwieldy) answer.
I really do think we should just use the symbolic systems of math rather than trying to bring natural world numbers into a digital number space. It's this mapping that inherently leads to compensating strategies. I guess this is called an algebraic system like the author mentioned.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1 and 0, don't forget it's more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
Really interesting article. I noticed that my Android calculator app could display irrational numbers like PI to an impressive amount of digits, if I hold it sideways.
I really wonder what the business case for spending so much effort on such precision was. Who are the users who need such accuracy but are using android calculator?
Students learning about real numbers. Yes seriously.
Unlike software engineers who have already studied IEEE754 numbers, you can't expect a middle school student to know concepts like catastrophic cancellation. But a middle school student wants to poke around with trigonometric functions and pi to study their properties, but a true computer algebra system might not be available to them. They might not understand that a random calculator app doesn't behave correctly because it's not using the same kind of numbers discussed in their math class.
While phones are mostly a circus, people do try to use them for serious things. For a program you make the calculations as accurate as the application requires. If you don't know what a tool will be used for, you never really get to feel satisfied.
Are you in standard or scientific? Each new operator (not sure if that's the correct term) is calculated immediately, i.e. 1+2x3 is worked out as 1+2 (stored into the buffer as 3) x 3 = 9
But scientific does it correctly where it just appends the new expression onto the buffer instead of applying it
I'm on windows 11. I just did it and it replied "7". I subtracted 7 to see if there was some epsilon error but it reported "0". What do you experience?
There should have been an "x" on the right of the "contact me" portion that you could click to make it go away. Sounds like it didn't show up for you, so sorry about that. Unfortunately I don't have an iPhone SE to test against and the "x" does seem to show up on the iPhone SE screen-size simulator in Chrome. This means I don't know how to reproduce the issue and probably won't be able to resolve it without removing the "contact me" page entirely, which I'm not willing to do right now.
So, 'bc' just has the (big) rationals. Rationals are the numbers you could make by taking one integer (say 5, or minus sixteen trillion and fifty-one) and dividing it by some positive integer (such as three, or sixty-two thousand).
If we have a "Big Integer" type which can represent arbitrarily huge integers, such as 10 to the power 5000, we can use two of these to make a Big Rational, and so that's what bc has.
But the rationals aren't enough for all the features on your calculator. What's the square root of ten ? How about the square root of 40 ? Now, multiply those together. The correct answer is 20. Not 20.00000000000000001 but exactly 20.
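Python's Fraction type is the same big-rational idea, and it shows both the strength and the limit described here: rational arithmetic stays exact, but the moment a square root enters, something has to give.

    from fractions import Fraction
    import math

    # Exact big-rational arithmetic, bc-style:
    print(Fraction(1, 3) + Fraction(1, 6))           # 1/2, exactly

    # sqrt leaves the rationals, so a purely rational calculator can only approximate:
    print(math.sqrt(10) * math.sqrt(40))             # prints something like 20.000000000000004
    print(Fraction(math.sqrt(10)) * Fraction(math.sqrt(40)))   # an exact fraction of the wrong number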
No? They made a goal to show 0.0000 in as few places as possible, and they got as close to it as they could without compromising their other requirements.
Was given the task to build a simple calculator app as a project for a Java class I took in college.
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers though. PI accurate to the 8 digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
> We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator
(with a footnote: This excludes reports from one or two bugs that have now been fixed for many months. Unfortunately, we continue to receive complaints about incorrect results, mostly for two reasons. Users often do not understand the difference between degrees and radians. Second, there is no standard way to parse calculator expressions. 1 + 10% is 0.11. 10% is 0.1. What's 10% + 10%?)
When you have 3 billion users, I can imagine that getting rid of bugs that only affect 0.001% of your userbase is still worthwhile and probably pays for itself in reduced support costs.
I think a big issue with how we teach math is the casualness with which we introduce children to floating points.
It's like: Hey little Bobby, now that you can count, here are the ints and multiplication/division. For the rest of your life there will be things to learn about them and their algebra.
Tomorrow we'll learn how to put a ".25" behind it. Nothing serious. Just adds multiple different types of infinities with profound impact on exactness and computability, which you have yet to learn about. But it lets you write 1/4 without a fraction, which means it's simple!
I don't care if it gives me "Underflow" for bs like e^-1000, just give me a text field that will be calculated into result that's represented in the way I want (sci notation, hex, binary, ascii etc whatever).
All standard calculators are imitations of a desktop calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least macOS has cmd+space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TLDR: the real problem of calculators is their UI, not the arithmetic core.
Slightly disappointing: The calculator embedded in Google's search page also gives the wrong answer (0) for (10^100) + 1 − (10^100). So apparently they don't use the insights they gained from their Android calculator.
I removed telemetry on my Win10 system and now calc.exe crashes on basic calculations. I've reported this but nobody cares, because the next step in troubleshooting is to reinstall Windows. So if telemetry fails, calc.exe will silently explode. Therefore no, not just anyone can make it.
I don't see how one can expect them to take a report worded this way seriously. Perhaps if they actually reported the crash without the tantrum the team would fix it.
And yet Android's calculator is quite bad. Despite being able to correctly calculate stuff that 99.99% of the population don't care about, it lacks many scientific operations that a good chunk of accountants, engineers and coders would make use of regularly. This is a classic situation of engineers solving the fun/challenging problems before the customer's actual problems.
lol I ran into this when making a calculator program because Google's calculator didn't do certain operations (such as adding clock time results like 1:23+1:54) and also because Google occasionally accuses me of being a bot when I search for too many equations.
Maybe I'll get back to the project and finish it this year.
Interesting article but that feels like wasted effort for what is probably the most bare-bones calculator app out there. The Android calc app has the 4 operations, sin cos tan ^ ln log √ ! And that's it. I think most people serious about calculator usage either have a physical one or use another more featureful app and the others don't need such precision.
It's not wasted effort at all, as this app comes installed by default for over a billion users. Only a tiny fraction will ever install another calculator app, so the default one better work entirely correctly. When you have that many users it's hard to waste effort on making the product better.
In the year he did this he easily could have just done some minor interface tweaks to a Ruby REPL which includes the BigDecimal library. In fact I bet feeding this post to an AI could result in such a numerically accurate calculator app, maybe as a single-file Sinatra Ruby web app designed to format to phone resolutions natively.
Nice story. An even more powerful way to express numbers is as a continued fraction (https://en.wikipedia.org/wiki/Continued_fraction). You can express both real and rational numbers efficiently using a continued fraction representation.
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
> You can express both real and rational numbers efficiently using a continued fraction representation.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
> No, all finite continued fractions express a rational number
Any real number x has an infinite continued fraction representation. By efficient I mean that the information of the continued fraction coefficients is an efficient way to compute rational upper and lower bounds that approximate x well (they are the best rational approximations to x).
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity and not having to do error analysis is far better than any other purely numerical system.
> fundamentally equivalent to simple fractions.
This feels like it is a bit too reductionist. I can come up with a lot of example, but it's quite hard to find the best rational approximations of a number with just fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any algebraic number has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can be easily converted to normal fractions and vice versa, is a strength of continued fractions, not a weakness.
The linked paper by Bill Gosper is about potentially infinite continued fractions with potentially irrational symbolic terms.
> finite
That's the issue, no? If you go infinite you can then express any real number. You can then actually represent all those whose sequence is equivalent to a computable function.
Continued fractions are very cool. I saw in a CTF competition once a question about breaking an RSA variant that relied on the fact that a certain ratio was a term in sequence of continued fraction convergents.
Naturally the person pursing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
Sounds a bit like https://en.wikipedia.org/wiki/Wiener%27s_attack.
(It's not a good article when it comes to the attack details, unfortunately.)
Why unnecessarily air this grievance in a public forum? If this person reads it they will be unhappy, and I'm sure they have already suffered enough from this failure.
What do you mean by "[n]aturally" here?
I have been working on a new definition of real numbers which I think is a better foundation for them, and which seems to be a theoretical version of what you are doing practically. I am currently calling them rational betweenness relations. Namely, a real number is identified with the set of all rational intervals that contain it. Since this is circular, it is really about the properties that such a family of intervals must satisfy. Since real numbers are messy, this idealized form is supplemented with a fuzzy procedure for figuring out whether an interval contains the number or not. The work is hosted at (https://github.com/jostylr/Reals-as-Oracles) with the first paper in the readme being the most recent version of this idea.
The older and longer paper, Defining Real Numbers as Oracles, contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears (https://www.jstor.org/stable/2689627). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result, as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
That sounds kind of similar to Dedekind cuts, but crossed with ordered sequences < and > the real? Cool website.
How do you work out an answer for x - y when eg x = sqrt(2) and y = sqrt(2) - epsilon for arbitrarily small epsilon? How do you differentiate that from x - x?
In a purely numerical setting, you can only distinguish these two cases when you evaluate the expression with enough accuracy. This may feel like a weakness, but if you think about it, it is a much more "honest" way of handling inaccuracy than just rounding like you would do with floating point arithmetic.
A good way to think about the framework, is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
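To make the bounds idea concrete, here is a minimal sketch in plain Python using Fraction (not how any production implementation does it): rational bounds for sqrt(2) are tightened by bisection, and squaring the enclosure gives an interval that straddles 2 but never collapses to exactly 2.

    from fractions import Fraction

    def sqrt2_bounds(steps):
        # Bisection on exact rationals: maintain lo*lo < 2 < hi*hi.
        lo, hi = Fraction(1), Fraction(2)
        for _ in range(steps):
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo, hi

    lo, hi = sqrt2_bounds(60)
    # Squaring the enclosure gives [lo*lo, hi*hi], an interval containing 2
    # that can be made as narrow as you like but never becomes the point 2.
    print(float(lo * lo), float(hi * hi))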
is that how that kid calculated super long pi?
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets in - I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link.
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package", which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I asked if I could skip all that and just move on to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements, saying I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
In the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. Sure enough, the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Nice story. Thank you for sharing. For years, I struggled with the idea of "message passing" for GUIs. Later, I learned it was nothing more than the window procedure (WNDPROC) in the Win32 API. <sad face>
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)
This was in Bangkok, Thailand, although the school itself was run by Indians. They had IT courses for both kids and adults.
Fortunately I think these days there are a lot more options for kids to learn programming, but back then the options were pretty limited.
Probably institutes teaching IT stuff. They used to be popular (still?) in my country (India) in the past. That said, there are plenty of places which train in reasonable breadth in programming, embedded etc. now (think less intense bootcamps).
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
Hah, I also went down that route. Through my school I could do extra computer stuff, ended up with this certificate at 10 years old or so: https://en.wikipedia.org/wiki/International_Certification_of...
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. I remember I had one row in Excel per invited person with their data, placeholders in the Word document, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names and print 10 times separately, but it felt cool.
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk containing a template web page with some small code I could edit in Notepad and see come to life.
Salespeople are the cancer on the world's butt.
For profit education is the problem here.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy in the sense that it is the worst, except for all the others. Immediately when I saw the example I thought: it is going to be either a Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis, though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
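For anyone who hasn't seen the Kahan summation mentioned above, here is a textbook sketch of the generic compensated-summation trick (not anything taken from the calculator's code):

    def kahan_sum(values):
        total = 0.0
        c = 0.0                      # running compensation for lost low-order bits
        for x in values:
            y = x - c
            t = total + y
            c = (t - total) - y      # the part the addition just rounded away
            total = t
        return total

    # One huge value followed by many small ones is a classic failure case for naive summation:
    data = [1e16] + [1.0] * 1000
    print(sum(data) - 1e16, kahan_sum(data) - 1e16)   # naive loses the 1.0s, Kahan recovers them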
IEEE 754 is what you get when you want numbers to have huge dynamic range, equal precision across the range, and fixed bit width. It balances speed and accuracy, and produces a result that is very close to the expected result 99.9999999% of the time. A competent numerical analyst can take something you want to do on paper and build a sequence of operations in floating point that compute that result almost exactly.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
William Kahan worked on both IEEE 754 and HP calculators. The speed gap between something like an 8087 and a calculator was not that big back then, either.
Billions or trillions of ops per second and 1987 don't really go together.
> equal precision across the range
What? Pretty sure there's more precision in [0-1] than there is in really big numbers.
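Both readings have a kernel of truth: the number of significand bits (and hence the relative precision) is roughly constant across the range, while the absolute spacing between adjacent doubles grows with the magnitude. A quick way to see both at once (Python 3.9+ for math.ulp):

    import math

    # The absolute gap between adjacent doubles grows with the value,
    # but the gap relative to the value stays around 1e-16.
    for x in (1.0, 1e10, 1e300):
        print(x, math.ulp(x), math.ulp(x) / x)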
IEEE 754 is what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions, even the seemingly weirder parts like -0, subnormals and all the rounding modes. It was not really democratically designed, but done by numerical computing experts coupled with hardware design experts. Every "simplified" implementation of floating point that has appeared (e.g. auto-FTZ mode in vector units) has eventually been dragged kicking and screaming back to the IEEE standard.
Another way to see it is that floating point is the logical extension of fixed point math to log space, to deal with numbers across a large range of orders of magnitude. I don't know if "beautiful" is exactly the right word, but it's an incredibly solid bit of engineering.
I feel like your description comes across as more negative on the design of IEEE-754 floats than you intend. Is there something else you think would have been better? Maybe I’m misreading it.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of the available memory bandwidth, and the main operation is f32 + x * y -> f32 (i.e. multiply and accumulate into an f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
> what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions
This implies a strange way of defining what "beautiful" means in this context.
IEEE 754 is not great for pure maths; however, it is fine for real life.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, an improvement on the usual IEEE 754 arithmetic would probably be interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
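A toy sketch of the interval idea (ignoring directed rounding, so this is an illustration rather than a correct interval library); the measurement uncertainty propagates through the arithmetic on its own:

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            ps = (self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi)
            return Interval(min(ps), max(ps))

        def __repr__(self):
            return f"[{self.lo:.6g}, {self.hi:.6g}]"

    # A table radius measured as 0.500 m with +/- 2 mm of error:
    r = Interval(0.498, 0.502)
    pi_ish = Interval(3.141592, 3.141593)
    print(pi_ish * r * r)   # the area comes out as a range, not a false point value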
> IEEE754 is not great for pure maths, however, it is fine for real life.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
It becomes a problem when precision errors accumulate in a system, right?
The issue isn't so much that a single calculation is slightly off, it's that many calculations together will be off by a lot at the end.
Is this stupid or..?
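That accumulation is easy to demonstrate with a quick Python sketch: each individual addition is off by at most half an ulp, but after millions of them the drift becomes visible.

    total = 0.0
    for _ in range(10_000_000):
        total += 0.1          # 0.1 is not exactly representable in binary
    print(total)              # noticeably different from 1000000.0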
> IEEE 754 is like democracy in a sense that it is the worst, except for all the others.
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
Sometimes you need a number system which is 1. approximate 2. compact and fast 3. high dynamic range
You’re going to have a hard time doing better than floats with those constraints.
> so long as it produces accurate results
That's doing a lot of work. IEEE 754 does very well in terms of error vs representation size.
What system has accurate results? I don't know of any number system in use that 1) represents numbers with a fixed size, 2) can represent 1/n accurately for reasonable integers n, and 3) can do exponents accurately.
Electronic computers were created to be faster and cheaper than a pool of human computers (who may have had slide rules or mechanical adding machines). Human computers were basically doing decimal floating point with limited precision.
There's no "accurate results" most of the time
You can only have a result that's exact enough in your desired precision
It's ideal for engineering calculations which is a common use of computers. There, nobody cares if 1-1=0 exactly or not because you could never have measured those values exactly in the first place. Single precision is good enough for just about any real-world measurement or result while double precision is good for intermediate results without losing accuracy that's visible in the single precision input/output as long as you're not using a numerically unstable algorithm.
Define "accurate"!
Given a computer have finite memory but there are infinitely many real numbers in any range, any system using real numbers will have to use rounding.
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
I wrote the calculator for the original BlackBerry. Floating point won't do. I implemented decimal-based floating point functions to avoid these rounding problems. This sounds harder than it was: basically, the "exponent" part wasn't how many bits to shift, but what power of ten to divide by, so that 0.1, 0.001, etc. can be represented exactly. Not sure if I had two or three digits of precision beyond what's on the display. One digit is pretty standard for 5-function calculators; scientific ones typically have two. It was only a 5-function calculator, so not that hard. Plus, there was no floating point library by default, so doing any floating point really ballooned the size of an app with the floating point library.
Sounds like he's just using stock math functions. Both JavaScript and Python act the same way when you subtract step by step, saving the result after each subtraction, rather than computing 8.7 - (2.9*3) in one go.
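Easy to reproduce with ordinary binary doubles, and the decimal module (which works in base ten, like the hand-rolled BlackBerry representation described above) sidesteps it:

    from decimal import Decimal

    x = 8.7
    for _ in range(3):
        x -= 2.9
    print(x)   # about -8.88e-16, not 0.0

    y = Decimal("8.7") - 3 * Decimal("2.9")
    print(y)   # 0.0 exactly, because 8.7 and 2.9 are exact in base ten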
Almost no one has done the calculator app properly. By properly, I mean a complete calculator, like the TI-89.
I am using, on Android, an emulator for the TI-89 calculator.
Because no Android app has half the features or works as well.
It's not even about features. Calculators are mostly useful for napkin math - if I can't afford an error, I'll take some full-fledged math software/package and write a program that will be debuggable, testable, and have version control.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like SpeedCrunch, seem to completely misunderstand the niche of a calculator; are they not using it themselves? A calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
Another app nobody has made is a simple random music player. Tried VLC on Android and adding 5000+ songs from SD card into a playlist for shuffling simply crashes the app. Why do we need a play list anyway, just play the folder! Is it trying to load the whole list at the same time into memory? VLC always works, but not on this task. Found another player that doesn't require building a playlist but when the app is restarted it starts from the same song following the same random seed. Either save the last one or let me set the seed!
I've put a lot of effort into mine, TechniCalc.
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPadOS.
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
Marketing page - https://jacobdoescode.com/technicalc
AppStore Link - https://apps.apple.com/gb/app/technicalc-calculator/id150496...
Open source components - https://github.com/jacobp100/technicalc-core
Just tried the TI-89 emulator on android and it says 1e100 + 1 - 1e100 is 0
TI-89 doesn't have infinite precision.
Built-in Android calculator does.
They are incomparable. The TI-89 has tons of features, but can't take a square root to high accuracy.
Well, the 89 is a CAS in disguise most of the time, which is mentioned in passing in the article.
But I agree: I almost never want the full power of Mathematica/Sage initially, yet I quickly become annoyed with calc apps. The 89 and the HP Prime/50g have just enough to solve anything where I wouldn't rather just use a full programming language.
HiPER Calc Pro looks and works like a "physical" calculator; I've used it for years to good effect. I also have Wabbitemu but hardly ever use it; the former works fine for nearly everything.
Same here, except I use an HP48 emulator because TI sucks and HP rocks.
That's not surprising, considering the TI-89 was based on a full CAS, Derive. (A 3.3 MB program!)
https://en.wikipedia.org/wiki/Derive_(computer_algebra_syste...
Can you tell me which emulator you're using? I loved using the open source Wabbitemu on previous Android phones, but it seems to have been removed from the app store, so I can't install it on newer devices :-/
GeoGebra, in fact, embeds xcas and has a touchscreen friendly UI.
Yes that. I use free42 on my phone and mac. And an actual real HP 42S.
Edit: and Maxima as well on the mac (to back up another user's comment)
And the most usable calculator on iOS is easily GrafNCalc83 (a very close TI-83 homage), IMO, even for simple math.
Maxima
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
> existential angst
The best (and most educational) expression of that angst that I know: https://mathwithbaddrawings.com/2016/12/28/why-the-number-li....
> Almost all numbers cannot be practically expressed
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
Does it matter that some numbers are inexpressible (i.e., cannot be computed)?
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
> Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
I worded that sentence carefully, when I said “almost all” :)
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero. But then you can try 1/(atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
You missed a 4. You are trying to say 1/(4atan(1/5)-atan(1/239)-pi/4) is a division by zero. On the other hand 1/(atan(1/5)-atan(1/239)-pi/4) is just -1.68866...
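That identity is Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239), which is easy to sanity-check numerically:

    from math import atan, pi

    print(4*atan(1/5) - atan(1/239) - pi/4)   # rounding noise on the order of 1e-16, i.e. zero for our purposes
    print(atan(1/5) - atan(1/239) - pi/4)     # about -0.5922, whose reciprocal is about -1.6887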
I played around with the calculator source code from the Android Open Source Project after a previous submission[1]. I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].
[1] https://news.ycombinator.com/item?id=24700705
[2] https://web.archive.org/web/20250126130328/https://blog.acol...
> I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
For the curious, here is the source of ExactCalculator from the commit before all files were deleted: https://android.googlesource.com/platform/packages/apps/Exac..., and here is the dependency CR.java https://android.googlesource.com/platform/external/crcalc/+/...
The way this article talks about using "recursive real arithmetic" (RRA) reminds me of an excellent discussion with Conal Elliott on the Type Theory For All podcast. He talked about moving from representing things discretely to representing things continuously (and therefore more accurately). For instance, before, people represented fonts as blocks of pixels (discrete). They were rough approximations of what the font really was. But then they started to be recognized as lines/vectors (continuous): no matter the size, they represented exactly what a font was.
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.
- https://www.typetheoryforall.com/episodes/the-lost-elegance-...
- https://www.typetheoryforall.com/episodes/denotational-desig...
Thanks, that's a lovely analogy and I'm excited to listen to that podcast.
I think the general idea of converting things from discrete and implementation-motivated representations to higher-level abstract descriptions (bitmaps to vectors, in your example) is great. It's actually something I'm very interested in, since the higher-level representations are usually much easier to do interesting transformations to. (Another example is going from meshes to SDFs for 3D models.)
You might get a kick out of the "responsive pixel art" HN post from 2015 which implements this idea in a unique way: https://news.ycombinator.com/item?id=11253649
> (Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.)
I hated reading this buzzfeedy style (or apparently LinkedIn-style?) moron-vomit.
I shouldn't complain, just ask my nearest LLM to rewrite this article^W scribbling to a less obnoxious form of writing..
I noticed this too, but I was confused because the calculator article was informative and interesting. It's entirely unlike the inept fluffy slop that gets posted to LinkedIn
Agreed.
And furthermore, the content isn't as "wow, who'd have thunk it" as the author seems to think it is. I cannot be unusual in knowing that single and double precision floating point numbers just don't cut it for a lot of arithmetic tasks. Surely this is taught in every comp-sci course? And doesn't nearly everyone know the N ⊆ Z ⊆ Q ⊆ A ⊆ R ⊆ C hierarchy? Well, nearly everyone doesn't, but surely everyone who decides to write a calculator app, or is tasked with writing one, ought to know this? The claim "A calculator app? Anyone could make that." is a patently ridiculous claim to me; anybody who would make such a claim is clearly ignorant of both software development and mathematics. Next article: "A text editor? Anyone could make that."
I liked it.
I think it’s called broetry although perhaps the sentences are a little long for that.
Some quick research yields a couple of open source CASes, such as OpenAxiom, which uses the Modified BSD license. Granted, Google has strong "NIH" tendencies, but I'm curious why something like this wasn't adapted instead of paying several engineers for some undisclosed amount of time to develop a calculation system.
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
To make any type of app really good is super hard.
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my to-do list. The way you create, edit, browse, and drag+drop items is not at all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
To-Do List is an infinite product category.
They are extremely personal and any unwanted features end up as friction.
You'll never find a perfect todo app because it would have an audience of 1, so it wouldn't be made.
Other examples of Todo apps:
Things, 2Do, Todoist, OmniFocus, Due, Reminders (Apple), Clear, GoodTask, Notes, Google Keep
The list is literally never-ending.
It seems like you're looking for an outliner? Workflowy might fit your needs: https://workflowy.com/
Like others have said, the perfect to-do list is impossible because each person wants wildly different functionality.
My dream to-do list has minimal interaction, with the details handled like I have my own personal secretary. All I'd do is verbally say something like "remind me to do laundry later" and it would do the rest: Categorizing, organizing, prioritizing, scheduling and adding sub-tasks as needed.
I love the idea of automatic sub-tasks created at whatever level helps with your particular procrastination level. For example, "do laundry" would add in "gather clothes, bring to laundry room, separate colors, add to washer, set timer, add to dryer, set timer, get clothes, fold clothes, put away, reschedule in a week (but hide until then)". Maybe it even adds in Pomodoro timers to help.
LLMs with reasoning might get us there soon - we've been waiting for Knowledge Navigator like assistants for years.
I'm one of the creators of Godspeed, which is a fast, 100% keyboard oriented to-do app (though we do support drag and drop as well!). And we've got a web app!
https://godspeedapp.com/
I use Google's Keep but you may need to make your own
Any.do did me well until I adopted a new method of stacking tasks into a routine.
The hard part is altering the routine.
I find Trello adequate.
I would love to have a To-Do app that is fluid for both one-off tasks and periodic checklists (daily/weekly/monthly/etc.) Most importantly, I want it to yell at me to actually do it. I was pretty surprised that basically nothing seems to fit the bill and even what existing "GTD" type apps could do felt cumbersome and limited.
I share a lot of thoughts with that, and built my own calculator, too: a calculator that gives right instead of wrong answers.
https://chachatelier.fr/chalk/chalk-home.php
I tried to explain what was going on https://chachatelier.fr/chalk/article/chalk.html, but it's not a very popular topic :-)
Looks really well done, nice!
There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
I use qalculate, it behaves well enough for my needs.
https://qalculate.github.io/
All the calculators that I just tried for the article's expression give the wrong answer (HP Prime, TI-36X Pro, some casio thing). Even google's own online calculator gives the wrong answer, which is mildly ironic. [https://www.google.com/search?q=1e101%2B1-1e101&oq=1e101%2B1]
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
> the dividing line seems to be at 1e33.. Not sure what to make of that
That’s not too bad. They are probably using hand-rolled FP128 format for their numbers. If they were using hardware-provided FP64 arithmetic, the threshold would have been 2^53 ≈ 9E+15: https://en.wikipedia.org/wiki/Double-precision_floating-poin...
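A quick check of that threshold with plain doubles:

    print(2.0**53 + 1 - 2.0**53)   # 0.0: the +1 is below the spacing between doubles here
    print(2.0**52 + 1 - 2.0**52)   # 1.0: still within the exactly-representable integer range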
Tried it with the HP Prime and it gave the precise 1 for the test. One needs to put it in CAS mode and use the exact form 10^100 instead of 1E100. You will get the right answer if the calculator is instructed to use its very powerful CAS engine.
I enjoyed the article, but it seems Apple has since improved their calculator app slightly. The first example is giving me the correct result today. However, the second example with the “Underflow” result is still occurring.
I've just tried the first example on iOS 18.3.1 and it absolutely reproduces perfectly for me.
I remember hearing stories that for a time there was no engineer inside Apple responsible for the iOS Calculator.
Now it seems to be revived as there were some updates to it, but those also removed one of my favourite features -> tapping equals button no longer repeats the last operation.
That's just a single number to calculate.
The real fun begins when you do geometry.
Find a finite-memory representation for points which allows exact addition, multiplication, and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
Just use Mathematica.
This seems so elementary that I think open source computer algebra systems can do it.
I solved this by making a calculator that rounds to the nearest 1/16. Tiny app for Americans doing DIY work around the house:
https://thomaspark.co/projects/calc-16/
Years ago The Daily WTF had a challenge for writing the worst calculator app. My submission maintained calculation state by emitting its own source code, recompiling, and running the new executable.
I first learned to program on a Wang 2200 computer with 8KB of RAM, back in 1978. One of the math teachers stayed an hour late most days to allow us nerds to come in and use the two computers. There were more people than computers, so often you'd only get 10 or 15 minutes of time.
Anyway, I wrote a program where you could enter an equation and it would draw an ASCII graph of the curve. I didn't know how to parse expressions, and even if I had, I knew it would be slow. The machine had a cassette tape under computer control for storing and loading programs. What I did was to take the expression typed by the user, convert it into its tokenized form, and write it out to tape. The program would then load the just-created overlay, which contained something like "1000 DEF FNY(X)=X^2-5", and a FOR loop would sweep X over the designated range, using "LET Y=FNY(X)" to evaluate the expression for me.
As a result, after entering the equation, it would take about five seconds to write out the overlay, rewind a couple blocks, and load the overlay before it would start to plot. But once it started it went pretty fast.
People call that a JIT compiler nowadays (?)
Interesting article, and kudos to Boehm for going the extra mile(s), but it seems like overkill to me.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with with iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
The thing about this calculator app is that it can display any number of digits just by scrolling the display field. The UX is "any number of digits the user wants" not some predetermined fixed number of digits.
This is interesting.
One of the first ideas I had for an app was a calculator that represented digits like shown in the article but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet like interface you could verify if the operations produced the final equation you were modeling in order to help validate if the number was correct or not. I had a TI-89 that could do something close and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn’t seem that much easier to me.
That's a CAS, as mentioned. There are plenty of open source libraries available, but one that specifically implements the algorithms discussed in this article is flintlib. Here's an example from their docs showing exactly what you want: https://flintlib.org/doc/examples_calcium.html#examples-calc...
Isn't what you are asking for a CAS?
Great resource! For my calculator I also wanted to tackle physics, which expands on the number definition with measurement error size.
It is a surprisingly hard problem.
https://recomputer.github.io/
Yours gives 1 on (10^100)+1-(10^100), cool.
At the risk of coming across as being a spoilsport, I think when someone says "anyone can write a calculator app", they just mean an app that simulates a pocket calculator (which is indeed pretty easy) as opposed to one which always gives precisely the right answer (which is indeed impossible). Also, you can avoid the most embarrassing errors just by rearranging the terms to do cancellation where possible, e.g. sqrt(2) * 3 * sqrt(2) is absolutely precisely 6, not 6 within some degree of approximation.
Pocket calculators are not using 32 bit floating point math.
> as opposed to one which always gives precisely the right answer (which is indeed impossible)
Per the article, it's completely possible. Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
> It's too slow.
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed after this paragraph I expected a better algorithm and instead they decided to give up. Fredrik Johansson in his paper "Calcium: computing in exact real and complex fields" gives a partial algorithm for the problem and writes "Algorithm 2 is inspired by Richardson’s algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson’s algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those system".
Perhaps amusingly, the implementation referenced in Boehm's paper is a still-unmerged Android platform CL adding tests using this approach: https://android-review.googlesource.com/c/platform/art/+/101...
> Obviously we'll want a symbolic representation for the real number 1
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
I think the issue was that they are representing a real as a product of a rational and that more complicated type, so without a symbolic representation for 1, when representing any rational they would have to multiply it by an RRA representation of 1, which brings in all the decision problem issues.
Sorry for being unclear about this. A number is being expressed as a rational times a real. In the case where the rational is exactly the number we want, we want to be able to set the real to 1, so the multiplication has no effect
Because they express numbers as rational times a real, so the real in all those cases would be one. When it’s one, you do rational math as normal without involving reals.
I use python repl as my primary calculator on my computer.
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (unless you overflow your entire memory), so that kind of precision loss isn't a big deal.
2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.
3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops; this is Python.
4. Every math feature in the Windows default calculator is available in the math library.
5. As bad as Python's performance reputation is, it's not at all going to be noticeable for simple math.
I was always a little envious of the people that could use bc because they knew how. I know Python, and it's installed on Linuxes by default, so now I am no longer envious.
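To make point 1 above concrete, a quick sketch:

    print(10**100 + 1 - 10**100)   # 1: Python ints are arbitrary precision
    print(1e100 + 1 - 1e100)       # 0.0: the float version loses the 1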
I really hate when people put cat images and memes in a serious article.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me when SideFX started putting memes into their official tutorial youtube channel. At least this is just a webpage and we can scroll through them...
While we're already breaking the HN guidelines—"Please don't complain about tangential annoyances—e.g. article or website formats"—let me just say that the scrolljacking on this article is awful.
Bah, cats have a place in programming articles, regardless of seriousness
Also the “highschool poem”-type writing style is quite jarring but forgiven when he acknowledged it at the end of the article.
> Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.
This was not intended to be a serious article, like something you'd submit for publication in an ACM journal.
The last sentence is: "(Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.)"
The tone of the article has given away the fact that the article is not serious. At least not the way it's presented. You want something serious? Go read the pdf.
And I don't mind at all. Without this article, I probably would never know what's in the paper and how they iterated. I'd likely have given up after reading the abstract -- "oh, they solved a problem". But this article actually makes it much more motivating to read the original paper, which I plan to do now.
Cats rule our world. So nothing wrong about it.
The original is a Twitter thread, not a serious article.
Two cat pictures. 0 memes. Lighten up.
Well the cats hate you right back, how dare you. The whole point of the internet is to post cats on it.
Off topic, but I believe naming this specific kind of number "real" is a misnomer. Nothing in reality is an expression of a real number. Real numbers pop up only when we abstract reality into mathematical models.
In the Polish language, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and the reals should be called something like "abstract" or "limiting", because they pop up first as limits of some process working on rational numbers for an infinite number of steps.
> This means a calculator built on floating point numbers is like a house built on sand.
I've taken multiple numerical analysis courses, including at the graduate level.
The only thing I've learnt was: be afraid, very afraid.
π+1−π = 1.0000000000000
But
π−π = 0
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
that would require an algebraic solver which is definitely possible but more complex than really warranted for a "basic" calculator
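For what it's worth, that kind of term collection is exactly what a CAS library does automatically; a small sketch with sympy (assuming it is installed):

    import sympy as sp

    print(sp.pi + 1 - sp.pi)             # 1: like terms are collected symbolically
    print(sp.sqrt(2) * 3 * sp.sqrt(2))   # 6: sqrt(2)*sqrt(2) combines to 2 before any rounding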
Hasn't this been solved in cheap pocket calculators for decades before this?
HP scientific calculators go back to the '60s and can presumably add 0.6 to 3 without adding small values to the 20th significant digit.
Due to backwards compatibility modern PC CPUs have some mathematical constants in hardware, one of them Pi https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f... Moreover, that FLDPI instruction delivers 80 bits of precision, i.e. more precise than FP64.
That's pretty much useless in the modern world because the whole x87 FPU is deprecated. Modern compilers generate SSE1 and SSE2 instructions for floating-point arithmetic instead of x87.
As far as I know, the Windows calculator takes a similar approach. It uses rationals, and switches to Taylor expansions to try to avoid cancellation errors. Microsoft open-sourced it some time ago on GitHub.
Could this be the reason why iPads haven't had a calculator app until recently?
Because it's such a difficult problem to solve that it required elite coders and Masters/PhD level knowledge to even make an attempt?
Apple Finally Plans To Release a Calculator App for iPad Later This Year: https://www.macrumors.com/2024/04/23/calculator-app-for-ipad...
I haven’t really used the iPad’s calculator app, but it looks exactly like a larger version of the iPhone app. So I don’t think there are any technical reasons why it took so long for the iPad to get that app.
lowkey this is why ieee 754 floating point is both a blessing and a curse, like yeah it’s fast n standardized but also introduces unavoidable precision loss, esp w iterative computations where rounding errors stack up in unpredictable ways. ppl act like increasing precision bits solves everything. but u just push the problem further down, still dealing w truncation, cancellation, etc. (and edge cases where numerical stability breaks down.)
… and this is why interval arithmetic and arbitrary precision methods exist, so it gives guaranteed bounds on error instead of just hoping fp rounding doesn’t mess things up too bad. but obv those come w their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly w operand size.
wonder if hybrid approaches could be the move, like symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. could optimize tradeoffs dynamically. so we’d keep things efficient while minimizing precision loss in critical operations. esp useful in contexts where precision requirements shift in real time. might even be interesting to explore adaptive precision techniques (where computations start at lower precision but refine iteratively based on error estimates).
I wrote an OCaml implementation of this paper a few years ago, which I've now extracted into its own [repo](https://github.com/joelburget/constructive-reals/blob/main/C...)
The link in the paper to their Java implementation is now broken: does anyone have a current link?
This article was really well written. Usually in such articles I understand about 50%, maybe if I'm lucky 70%, but this one I've understood nearly everything. It's not much of a smartness thing but an absolute refusal on my part to learn the jargon of programming, as well as my severe lack of knowledge of all the big words that are thrown around, lol. But it's really simply written; love it.
I thought this was going to be about how Apple completely destroyed the calculator on iOS with the latest update.
Now it does the running ticker tape thing, which means you can't use the AC button to quickly start over, because there is no AC button anymore!
I know it's supposed to be easier/better for the user, but they didn't even give me a way to go back to the old behavior.
Hmmm, solved a lot of these problems as well when building my own calculator: https://github.com/crouther/SciCalc
Seems like Apple got lazy with their calculator and didn't even realize they had so many flaws... Math Notes is pretty cool, though.
At some point, when I get a spare 5 years (and/or if people start paying for software again), I will start to work on a calculator application. Number system wrangling is quite fun and challenging, and I am hoping to incorporate units as a first-class citizen.
If you accept that Pi and Sqrt(2) will be represented as a terminating series of digits (say, 30), then 99% of the problems stated go away. My HP calculator doesn't represent the square root of 2 as a magic number, it's 1.414213562.
Given the function to compute pi, wouldn't that return a value where the error of the result is 4x the requested tolerance?
https://github.com/based2/KB/blob/main/math/aaa.md#calculato...
This is really cool, but it does show how Google works. They’ll pay this guy ~$3million a year (assuming stock appreciation) to do this but almost no end user will appreciate it in the calculator app itself.
I doubt that most people using the calc app expect it to handle such situations. It's nice that it does of course but IMO it misses the point that the inputs to a lot of real world calculations are inaccurate to start with.
i.e. it's more likely that I've made a few-mm mistake when measuring the radius of my table than that I'm not using a precise enough version of pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
This is a crazy take to me. Most people don’t care that a calculator gives them a correct answer? It’s just an estimator?
Why would we not expect it to work when we know how to build ones that do, and people use it as a replacement for math on paper?
Does anyone know if this was the system used by higher end TI calculators like the TI-92? It had a 'rational' mode for exact answers and I suspect that it used RRA for that.
The TI-92 and similar have a full-on computer algebra system that they use when they're in exact mode [1]. It does symbolic manipulation.
This is different from what the post (and linked paper) discuss, where the result will degrade to recursive real arithmetic, which is correct but only to a bounded level of precision. A CAS will always give a fully-exact (although sometimes very unwieldy) answer.
[1] See page 87 here: https://sites.science.oregonstate.edu/math/home/programs/und...
The bar for "the greatest calculator app development story ever told" it should be noted, is quite high :)
I really do think we should just use the symbolic systems of math rather than trying to bring natural world numbers into a digital number space. It's this mapping that inherently leads to compensating strategies. I guess this is called an algebraic system like the author mentioned.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1 and 0, don't forget it's more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
Really interesting article. I noticed that my Android calculator app can display irrational numbers like pi to an impressive number of digits if I hold it sideways.
You can also scroll it to make it display more digits.
I really wonder what the business case for spending so much effort on such precision was. Who are the users who need such accuracy but are using android calculator?
Students learning about real numbers. Yes seriously.
Unlike software engineers who have already studied IEEE 754 numbers, you can't expect a middle school student to know concepts like catastrophic cancellation. A middle school student wants to poke around with trigonometric functions and pi to study their properties, but a true computer algebra system might not be available to them. They might not understand that a random calculator app doesn't behave correctly because it's not using the same kind of numbers discussed in their math class.
While phones are mostly a circus, people do try to use them for serious things. For a program you make the calculations as accurate as the application requires. If you don't know what a tool will be used for, you never really get to feel satisfied.
There's a point at which you're really building a computer algebra system.
Or you can do what the Windows 11 calculator does and not even get 1+2*3 right.
Are you in standard or scientific mode? In standard mode each new operator (not sure if that's the correct term) is applied immediately, i.e. 1+2x3 is worked out as 1+2 (stored into the buffer as 3), then x3 = 9.
But scientific mode does it correctly: it just appends the new operation onto the expression instead of applying it immediately.
I'm on windows 11. I just did it and it replied "7". I subtracted 7 to see if there was some epsilon error but it reported "0". What do you experience?
This reminds me of solving Project Euler problems that are intentionally not possible to solve with a simple float representation of numbers.
Why does the author insist on using a dick bar? With the "contact me" portion, it takes up 30% of my screen on an iPhone SE.
There should have been an "x" on the right of the "contact me" portion that you could click to make it go away. Sounds like it didn't show up for you, so sorry about that. Unfortunately I don't have an iPhone SE to test against and the "x" does seem to show up on the iPhone SE screen-size simulator in Chrome. This means I don't know how to reproduce the issue and probably won't be able to resolve it without removing the "contact me" page entirely, which I'm not willing to do right now.
A what?
I'm also reading this on an iPhone but don't remember seeing anything that looked like, well... what you said.
I just tried this in raku App::Crag...
crag 'say (10**100) + 1 − (10**100)' #1
Raku uses Rats by default (Rational numbers) unless you ask for floating point.
Anyone know of a comparison of the linked paper's algorithm to how Gavin Howard's 'bc' CLI calculator does it?
So, 'bc' just has the (big) rationals. Rationals are the numbers you could make by taking one integer (say 5 or minus sixteen trillion and fifty-one) and dividing it by some positive integer (such as three or sixty-two thousand)
If we have a "Big Integer" type which can represent arbitrarily huge integers, such as 10 to the power 5000, we can use two of these to make a Big Rational, and so that's what bc has.
But the rationals aren't enough for all the features on your calculator. What's the square root of ten ? How about the square root of 40 ? Now, multiply those together. The correct answer is 20. Not 20.00000000000000001 but exactly 20.
No it is easy, you just throw up a math error whenever a smartass tries something like this. Like calculators do.
How does an "old school" physical calculator handle the floating point precision problem?
> Showing 0.0000000000000 on the screen, when the answer is exactly 0, would be a horrible user experience.
> They realized that it's not the end of the world if they show "0.000000..." in a case where the answer is exactly 0
so... devs self-made a requirement, got into trouble (complexity) - removed the requirement, trouble didn't go anywhere
just keep saying "it's a win" and you'll be winning, I guess
No? They made a goal to show 0.0000 in as few places as possible, and they got as close to it as they could without compromising their other requirements.
Was given the task to build a simple calculator app as a project for a Java class I took in college.
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers though. PI accurate to the 8 digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
Saw the thread on Twitter. Kudos to the author for going in so much detail!
Not a calculator engineer but this seems hideously complex?
Maybe, though in the paper (not the article):
> We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator
(with a footnote: This excludes reports from one or two bugs that have now been fixed for many months. Unfortunately, we continue to receive complaints about incorrect results, mostly for two reasons. Users often do not understand the difference between degrees and radians. Second, there is no standard way to parse calculator expressions. 1 + 10% is 0.11. 10% is 0.1. What’s 10% + 10%?)
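For what it's worth, the percent ambiguity in that footnote boils down to two plausible parses. A small illustration in Python (names like ten_pct are made up; this is not how any particular calculator resolves it):

    from fractions import Fraction

    ten_pct = Fraction(10, 100)

    literal  = ten_pct + ten_pct        # read both as plain numbers: 0.2
    compound = ten_pct * (1 + ten_pct)  # "10%, then increased by 10%": 0.11

    print(float(literal), float(compound))  # 0.2 0.11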
When you have 3 billion users, I can imagine that getting rid of bugs that only affect 0.001% of your userbase is still worthwhile and probably pays for itself in reduced support costs.
I think a big issue with how we teach math is the casualness with which we introduce children to floating-point numbers.
It's like: Hey little Bobby, now that you can count, here are the ints and multiplication/division. For the rest of your life there will be things to learn about them and their algebra.
Tomorrow we'll learn how to put a ".25" behind it. Nothing serious. It just adds multiple different types of infinities, with profound impact on exactness and computability, which you have yet to learn about. But it lets you write 1/4 without a fraction, which means it's simple!
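The exactness point is easy to see in any language that exposes the underlying representation; for example, in Python:

    from fractions import Fraction

    # 0.25 is 1/4, whose denominator is a power of two, so it is stored exactly.
    print(Fraction(0.25))  # 1/4

    # 0.1 is 1/10, which has no finite binary expansion, so the stored value is
    # only the nearest representable double.
    print(Fraction(0.1))   # 3602879701896397/36028797018963968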
Real numbers are quite complex (no pun intended). Understanding the material well takes a junior-level math-major course.
If you really understand the existing math curriculum, this should be high-school level.
For an everyday use calculator? Sure. It's still fun and challenging to create a calculator that can handle "as much" math/arithmetics as possible.
if "answer" overflows, switch to symbolic mode.
Not that simple: 1/3 + 5 - 1/3 should be 5, and it doesn't overflow in IEEE 754.
Yes, anyone can make a calculator.
I don't care if it gives me "Underflow" for bs like e^-1000, just give me a text field that will be calculated into result that's represented in the way I want (sci notation, hex, binary, ascii etc whatever).
All standard calculator apps are imitations of a desktop calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least OS X has Cmd+Space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TL;DR: the real problem with calculators is their UI, not the arithmetic core.
Nice story. Building a calculator at that time was a tough task; today, building a calculator is just a prompt away. Inspiring.
So did they fix the iOS Calculator bug?
On another note: since a calculator is so complex, are there any open-source, cross-platform libraries that make it easier to implement?
I imagine you could do most or all of this with yacas, which is actually a computer algebra system (GPL).
He was not working on the iOS Calculator.
Slightly disappointing: The calculator embedded in Google's search page also gives the wrong answer (0) for (10^100) + 1 − (10^100). So apparently they don't use the insights they gained from their Android calculator.
DuckDuckGo and (apt install) qalc do it correctly, FWIW.
I removed telemetry on my Win10 system and now calc.exe crashes on basic calculations. I've reported this but nobody cares, because the next step in troubleshooting is to reinstall Windows. So if telemetry fails, calc.exe will silently explode. Therefore no, not anyone can make one.
Won't fix: https://github.com/microsoft/calculator/issues/148
> Won't fix: https://github.com/microsoft/calculator/issues/148
I don't see how one can expect them to take a report worded this way seriously. Perhaps if they actually reported the crash without the tantrum the team would fix it.
So they send telemetry that shows what people are calculating.
Does it mean that there are some "dangerous" numbers that can be used to flag someone?
Based on the thread, you can build it from the source code and telemetry won’t be enabled…
And yet Android's calculator is quite bad. Despite being able to correctly calculate stuff that 99.99% of the population don't care about, it lacks many scientific operations that a good chunk of accountants, engineers and coders would make use of regularly. This is a classic situation of engineers solving the fun/challenging problems before the customer's actual problems.
What exactly is missing? https://imgur.com/a/q0yevdW
Could have just used an off-the-shelf CAS.
The point of the article is to teach you how calculators work. Not find a piece of software to unblock you.
You may well find yourself in the field of computing having to compute something!
"Over the past year or so, I've reluctantly come to the conclusion I need to leave Elm and migrate to some other MUA like PINE or mutt..."
lol I ran into this when making a calculator program because Google's calculator didn't do certain operations (such as adding clock time results like 1:23+1:54) and also because Google occasionally accuses me of being a bot when I search for too many equations.
Maybe I'll get back to the project and finish it this year.
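The clock-time addition is simple enough once you treat the values as minutes. A hypothetical Python sketch (parse_hm and format_hm are made-up names, not the parent's program):

    def parse_hm(s):
        # "1:23" -> 83 minutes
        hours, minutes = s.split(":")
        return int(hours) * 60 + int(minutes)

    def format_hm(total_minutes):
        # 197 minutes -> "3:17"
        return f"{total_minutes // 60}:{total_minutes % 60:02d}"

    print(format_hm(parse_hm("1:23") + parse_hm("1:54")))  # 3:17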
Good read, thanks.
Cool story. All programming students should be made to create a calculator in school so they truly understand the issues at hand.
A recursive descent parser; normally any decent developer can do that. My version: https://caub.github.io/misc/calculator
"-2 ** 3 SyntaxError: unparenthesized unary expression can't appear on the left-hand side of '*' "
That's actually a great error; I have made the mistake of expecting "-2 ** 2" to output 4 instead of -4 before.
^ fyi, this comment reveals you didn't RTFA
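For anyone curious, a minimal recursive descent parser along the lines mentioned a couple of comments up fits in a page. Here is a Python sketch (hypothetical, not the calculator linked above); it resolves "-2 ** 2" the way Python does, with exponentiation binding tighter than unary minus:

    import re
    from fractions import Fraction

    def tokenize(src):
        # numbers, the ** operator, and single-character operators/parens
        return re.findall(r"\d+(?:\.\d+)?|\*\*|[()+\-*/]", src)

    def evaluate(tokens):
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat():
            nonlocal pos
            pos += 1
            return tokens[pos - 1]

        def expr():                       # expr  := term (('+' | '-') term)*
            value = term()
            while peek() in ('+', '-'):
                op = eat()
                rhs = term()
                value = value + rhs if op == '+' else value - rhs
            return value

        def term():                       # term  := unary (('*' | '/') unary)*
            value = unary()
            while peek() in ('*', '/'):
                op = eat()
                rhs = unary()
                value = value * rhs if op == '*' else value / rhs
            return value

        def unary():                      # unary := '-' unary | power
            if peek() == '-':
                eat()
                return -unary()
            return power()

        def power():                      # power := atom ('**' unary)?   right-associative
            base = atom()
            if peek() == '**':
                eat()
                return base ** unary()
            return base

        def atom():                       # atom  := NUMBER | '(' expr ')'
            if peek() == '(':
                eat()
                value = expr()
                if eat() != ')':
                    raise SyntaxError("missing closing parenthesis")
                return value
            return Fraction(eat())        # exact rationals, so no rounding

        result = expr()
        if pos != len(tokens):
            raise SyntaxError(f"unexpected token {tokens[pos]!r}")
        return result

    print(evaluate(tokenize("-2 ** 2")))                    # -4
    print(evaluate(tokenize("(10**100) + 1 - (10**100)")))  # 1

Of course, a real calculator app also needs functions, degrees versus radians, and the kind of accuracy machinery the article describes; this only covers the parsing part.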
Re: (10^100)+1-(10^100)
i) The answer is 1 if you cancel out the two (10^100) expressions symbolically.
ii) The answer is 0 if you compute 10^100 in limited precision and then add 1, which is insignificant at that magnitude.
How do you even cater for these scenarios? This needs more than arithmetic.
Uh, what do you mean? The answer is very obviously 1 no matter what.
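Concretely, the 0-versus-1 split is just exact integers versus IEEE 754 doubles; in Python terms:

    # Arbitrary-precision integers keep the +1; doubles cannot, because
    # 10**100 needs far more than the 53 significand bits a double has.
    print(10**100 + 1 - 10**100)  # 1
    print(1e100 + 1 - 1e100)      # 0.0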
Interesting article, but that feels like wasted effort for what is probably the most bare-bones calculator app out there. The Android calc app has the 4 basic operations, sin, cos, tan, ^, ln, log, √, and !, and that's it. I think most people serious about calculator usage either have a physical one or use another, more featureful app, and the rest don't need such precision.
It's not wasted effort at all, as this app comes installed by default for over a billion users. Only a tiny fraction will ever install another calculator app, so the default one better work entirely correctly. When you have that many users it's hard to waste effort on making the product better.
Nah, yeah, you can over-engineer anything.
In the year he spent on this, he easily could have just done some minor interface tweaks to a Ruby REPL that includes the BigDecimal library. In fact, I bet feeding this post to an AI could result in such a numerically accurate calculator app, maybe as a single-file Sinatra Ruby web app designed to format to phone resolutions natively.