This post covers converting between base 10 and two alternate bases that are popular with programmers: binary (base 2) and hexadecimal (base 16). Please refer to the previous post, Binary and Hexadecimal: Number Format, for context on how number bases are structured.
It is easiest to start with the number system everyone already knows, because decimal follows the same calculation rules as binary and hexadecimal. Decimal uses the following digits in each “place”: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Let’s take a look at a random four-digit number: 1523
Again, notice that the highest exponent is one less than the total number of digits and that the exponents increase from right to left; this is the same for any number base. If we follow the formula for calculation, we see that it equals 1523 when added together: 1 × 10^3 + 5 × 10^2 + 2 × 10^1 + 3 × 10^0 = 1000 + 500 + 20 + 3 = 1523.
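The digit-by-digit expansion can be sketched in a few lines of Python (the variable names here are mine, not from the post):

```python
# Expand 1523 digit by digit: each digit is multiplied by 10 raised
# to its position, counted from the right starting at exponent 0.
digits = [1, 5, 2, 3]
total = 0
for position, digit in enumerate(reversed(digits)):
    total += digit * 10 ** position
print(total)  # 1*10**3 + 5*10**2 + 2*10**1 + 3*10**0 = 1523
```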
The same process works for any base format, just with that base's digits and powers, so expanding a number's digits always brings you back to its decimal value. Converting in the other direction, from decimal to another base, takes a different process.
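To illustrate that the expansion works for any base, here is a small sketch; `to_decimal` is a hypothetical helper name, not something from the post:

```python
def to_decimal(digits, base):
    """Positional expansion: multiply out a list of digit values
    (most significant first) against powers of the given base."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one place, add the new digit
    return value

# Binary 10111110011 and hexadecimal 5F3 both expand back to 1523.
print(to_decimal([1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1], 2))  # 1523
print(to_decimal([5, 15, 3], 16))                         # 1523
```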
Decimal to Binary
Here are the steps I use to convert a decimal value to a binary value; let’s use the same number as before. There are only two digits to use in binary: 0 and 1.
- Determine the highest power of 2 that can go into 1523. In this case 2 raised to the 10th power (1024) is the highest power of two that goes into 1523.
- Divide 1523 by 1024, round down, and find the remainder. 1024 goes into 1523 once so the most significant digit will be a 1. The remainder is 499.
- Check whether the next smallest power can go into the remainder. In this case 2 raised to the 9th power (512) does not go into 499, so the next digit is a 0 (we have 10 so far). Keep reducing the exponent, writing a 0 for each power that does not fit, until one goes into 499. In this case 2^8 (256) does go into 499 once, which makes the next digit a 1 (101 at this point). Each time a power fits, you also need a new remainder: the remainder of dividing the previous remainder by that power of 2. Here 499 divided by 256 leaves a new remainder of 243.
- Repeat steps 2 and 3 on each remainder until you reach 2^0. Starting from the remainder of 243: 2^7 goes into 243 (next digit is 1, making the number 1011), remainder 115; 2^6 goes into 115 (next digit is 1, 10111), remainder 51; 2^5 goes into 51 (next digit is 1, 101111), remainder 19; 2^4 goes into 19 (next digit is 1, 1011111), remainder 3; 2^3 does not go into 3 (next digit is 0, 10111110); 2^2 does not go into 3 (next digit is 0, 101111100); 2^1 goes into 3 (next digit is 1, 1011111001), remainder 1; 2^0 goes into 1 (last digit is 1, final value 10111110011), with no further remainder.
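The steps above can be sketched in Python; `decimal_to_binary` is my own name for this hypothetical helper, and the loop mirrors the hand process rather than using built-in conversion:

```python
def decimal_to_binary(n):
    """Convert a non-negative integer to a binary digit string using
    the repeated highest-power-of-2 method described above."""
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:        # step 1: highest power of 2 that fits
        power *= 2
    bits = ""
    remainder = n
    while power >= 1:
        if power <= remainder:   # this power fits, so the digit is 1
            bits += "1"
            remainder -= power   # same as the division remainder
        else:                    # power does not fit, so the digit is 0
            bits += "0"
        power //= 2              # move to the next smaller power
    return bits

print(decimal_to_binary(1523))  # 10111110011
```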
Decimal to Hexadecimal
Hexadecimal uses a total of 16 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, where A = 10, B = 11, C = 12, D = 13, E = 14, F = 15. The process doesn’t change for converting to hexadecimal except that we are using 16 as the base, so the number of digits shrinks significantly. The highest power of 16 that goes into 1523 is 16^2 (256), so there are only three digits in the final result. Dividing 1523 by 256 and rounding down gives 5, so the first digit is 5, leaving a remainder of 243. 16^1 goes into 243 fifteen times, so the next digit is F, leaving a remainder of 3. 16^0 goes into 3 three times, so the last digit is 3. This gives a final number of 5F3, which expands back to 5 × 256 + 15 × 16 + 3 × 1 = 1523.
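The same hand process works in code with 16 in place of 2; `decimal_to_hex` is again a hypothetical helper name of my choosing:

```python
def decimal_to_hex(n):
    """Convert a non-negative integer to a hexadecimal string using
    the same highest-power method, with base 16."""
    hex_digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    power = 1
    while power * 16 <= n:          # highest power of 16 that fits
        power *= 16
    result = ""
    remainder = n
    while power >= 1:
        digit = remainder // power  # how many times this power fits (0-15)
        result += hex_digits[digit]
        remainder %= power          # carry the remainder to the next place
        power //= 16
    return result

print(decimal_to_hex(1523))  # 5F3
```

Python's built-in `hex(1523)` (which prints `0x5f3`) gives the same answer, but walking the powers by hand shows where each digit comes from.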
While the binary representation of a number is longer, there are plenty of applications where binary is absolutely necessary. Hexadecimal is more convenient for representing large values, where binary becomes tedious and lengthy. Many calculators can convert values for you; still, it is important to understand the underlying structure to interpret how different applications use these number systems.
In upcoming posts we will cover some applications and specific components that use these values or concepts in their design, and why different number formats are useful.