Fast ADC result to unit conversion

Some 32-bit controllers, mostly in the entry-level range with a Cortex-M0/M0+/M3 core, do not offer a floating point unit (FPU), so all code using float is handled in software. This is bad, because
• processing speed drops dramatically
• program size (flash use) increases
Especially the second point can make the code size explode, so that on devices with limited flash (16k/32k) it sometimes will NOT fit.
Mostly, this conversion is only necessary to show results to the user (voltages, currents).
Unfortunately, a hardware <divide> instruction is also missing on most of the cores given in the intro (the M0/M0+ have none at all; only the M3 offers SDIV/UDIV) :-(
But all of these cores can multiply 32-bit numbers and do binary shifts, mostly in 1-6 cycles.

So, think about the following formula:

Value [unit] = (ADC * c) >> M ; // c = round(L * N)

where
• M: any number between 1 and 16 (greater values are possible, but keep overflow in mind!)
• N = 2^M
• c: integer coefficient, c = round(L * N)
• L: value of 1 LSB in [unit]
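On the target this boils down to one multiply and one shift; a minimal sketch in C (the function name adc_to_unit is my own):

```c
#include <stdint.h>

/* Convert a raw ADC reading to display units with one 32-bit multiply
 * and one right shift -- no float, no divide needed.
 * c = round(L * N) is computed once, offline.
 * Make sure ADCmax * c stays below 2^32 to avoid overflow. */
static inline uint32_t adc_to_unit(uint32_t adc, uint32_t c, unsigned m)
{
    return (adc * c) >> m;
}
```

With the mV example below, adc_to_unit(3041, 825, 10) yields 2450.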

e.g.:
we are using a 12-bit ADC (4096 steps) with Vref = 3.3 V --> 1 LSB = 3300 mV / 4096 = 0.805664 mV / LSB, and we want to display the result in mV. We decide to use M = 10 --> N = 1024
Calculation (offline):
1. L = 0.805664 mV
2. c = round(L * N) = round(824.99994) = 825
3. Value [mV] = (ADC * c) >> M --> (ADC * 825) >> 10
Proof 1:
ADC input = 2.450 V --> ADCvalue = 3041
Value [mV] = (ADC * c) >> M --> (3041 * 825) >> 10 = 2450 [mV]

Proof 2:
Measurement:    3456 mA is the full-scale current (ADC code 4095) --> 1 LSB = L = 3456 mA / 4095 = 0.843956 mA / LSB
Current in mA:  ADCvalue = 592 for 500 mA
Chosen:         M = 12 --> N = 4096
c = round(L * N) = round(3456.8) = 3457
Value [mA] = (ADC * c) >> M --> (592 * 3457) >> 12 = 499 [mA]  { exact: 499.6 mA }

Proof 3:
Measurement:    6543 mA is the full-scale current (ADC code 4095) --> 1 LSB = L = 6543 mA / 4095 = 1.597802 mA / LSB
Current in mA:  ADCvalue = 313 for 500 mA
Chosen:         M = 12 --> N = 4096
c = round(L * N) = round(6544.6) = 6545
Value [mA] = (ADC * c) >> M --> (313 * 6545) >> 12 = 500 [mA]  { exact: 500.1 mA }

Maximum Error
The maximum error of the conversion will be:

err < (c / N) * 1 LSB, i.e. less than the value of one ADC step in [unit]; it comes from the truncation of the final shift (this error does not include ADC-LSB errors). Rounding c adds at most ADCmax / (2N) [unit] on top of that.

Therefore, the highest possible M/N which cannot overflow should be chosen: ADC * c must stay below 2^32, which for a 12-bit ADC means M = 20 (conservatively, choose 19 then!).

Resolution
This multiply-and-shift will NOT increase resolution. If you want to display e.g. mA, your LSB should be below 1 mA to be able to resolve 1 mA correctly; otherwise the minimal step size will vary between 1 mA and 2 mA (see the 1.597 mA/LSB example) or even more!