So, I performed a test to estimate the default MCU speed of the fx-991ES PLUS v2 (also, if anyone can tell me which exact MCU is inside, please do; as I remember, someone found it, but I haven't found the exact post above).

So, I entered the following formula and recorded the time it took to calculate (16 ± 0.1 seconds):

We can assume that a sine takes far longer to calculate than an addition, so I simplified the workload down to "we just calculated 100 sines".

So one sine was calculated in 0.16 seconds, i.e. 6.25 sines per second. The next step is to determine how many clock cycles it takes to calculate a sine.
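The arithmetic above can be sketched in a few lines (a throwaway script, not anything running on the calculator):

```python
# Back-of-envelope numbers from the measurement above.
total_time_s = 16.0    # measured: 16 +/- 0.1 seconds
num_sines = 100        # simplification: the whole formula is "just 100 sines"

time_per_sine_s = total_time_s / num_sines       # 0.16 s per sine
sines_per_second = num_sines / total_time_s      # 6.25 sines/s

print(time_per_sine_s, sines_per_second)
```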

The calculator does, I assume, single-precision floating-point calculations, because we always get 11 digits in the significand, and I don't think the developers would use a different format.
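As a quick sanity check of my own (not from the calculator), here is how many decimal digits the common binary float formats actually carry; 11 digits doesn't quite fit single precision, so the internal format assumption may be worth revisiting (some calculators use BCD instead):

```python
import math

# Decimal digits carried by a binary significand: bits * log10(2)
single_digits = 24 * math.log10(2)  # IEEE-754 binary32, ~7.2 digits
double_digits = 53 * math.log10(2)  # IEEE-754 binary64, ~16 digits

print(single_digits, double_digits)
```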

Then comes long arithmetic, because in the best case we can only process 16-bit (2-byte) values in one cycle, whether multiplication/division or addition/subtraction (estimated based on AVR MCUs, which have nothing to do with this calculator, but I think the calculator's MCU is of similar capability). First we need to calculate the significand, which takes around 5 cycles, and then the exponent (2 cycles for the multiplication and 3 more because it needs to be moved somewhere, normalized, etc.).
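To illustrate why a significand multiply costs several cycles on a 16-bit machine, here is a rough sketch (mine, not the calculator's firmware) of a 32-bit multiply built from 16×16-bit partial products, the way long arithmetic has to do it: four multiplies plus shifted adds, which matches the "around 5 cycles" ballpark if the hardware multiplier is single-cycle.

```python
def mul32_via_16bit_limbs(a, b):
    """Multiply two 32-bit values using only 16x16-bit partial products."""
    a_lo, a_hi = a & 0xFFFF, a >> 16
    b_lo, b_hi = b & 0xFFFF, b >> 16
    # Four 16x16 -> 32-bit partial products...
    p0 = a_lo * b_lo
    p1 = a_lo * b_hi
    p2 = a_hi * b_lo
    p3 = a_hi * b_hi
    # ...combined with shifts and adds (Python's big ints absorb the
    # carries that a real MCU would have to propagate by hand).
    return p0 + ((p1 + p2) << 16) + (p3 << 32)

assert mul32_via_16bit_limbs(0x12345678, 0x9ABCDEF0) == 0x12345678 * 0x9ABCDEF0
```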

Then we need the sine itself. I haven't found the exact algorithm in widespread use, only estimates for the good old AVR: a full sine takes from 1600 to 1900 clock cycles. So, taking 1600 cycles, that gives a 10,000 Hz clock, which is pretty far from standard oscillator values (powers of 2, or 8 MHz divided by 1, 2, 4, 8, etc.).
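Putting the two estimates together (1600 cycles per sine is the low end of the AVR figure, not a measured value for this calculator):

```python
cycles_per_sine = 1600     # low end of the AVR-based estimate
sines_per_second = 6.25    # from the 16-second / 100-sine measurement

estimated_clock_hz = cycles_per_sine * sines_per_second
print(estimated_clock_hz)  # 10000.0, i.e. a 10 kHz clock
```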

Somewhere above, somebody wrote that the clock speed would be 128 kHz, and I can't figure out why. Are sines calculated with better precision? Is there no hardware multiplier? I'm intrigued.
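Working backwards from that claimed 128 kHz figure (my own arithmetic, under the same 6.25 sines/s measurement) shows the size of the gap that would need explaining:

```python
claimed_clock_hz = 128_000  # clock speed claimed earlier in the thread
sines_per_second = 6.25     # from the measurement above

# Cycles one sine would have to cost if the clock really were 128 kHz:
implied_cycles_per_sine = claimed_clock_hz / sines_per_second
print(implied_cycles_per_sine)  # 20480.0, over 12x the AVR estimate
```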

**Edited by LTVA, 31 October 2020 - 06:04 PM.**