We have 10 fingers, so the number 10 comes naturally to us. We count and do math using decimal numbers. Another term for that is base-10. All that really means is that we use 10 unique digits and when we put those digits together to make bigger numbers, each place moving to the left is worth ten times the place to its right. We say there is the one's place, the ten's place, the hundred's place, the thousand's place, and so on.
For example, the number 3571 has a 1 in the one's place, 7 in the ten's place, 5 in the hundred's place, and 3 in the thousand's place. We could represent it like so:
1 * 1 + 7 * 10 + 5 * 100 + 3 * 1000
Notice that each subsequent place has a multiplier that is a power of 10 (I use the fairly standard ^ as the exponentiation operator):
1 * 10^0 + 7 * 10^1 + 5 * 10^2 + 3 * 10^3
This is why it's called base-10: it's based on 10.
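That place-value expansion is easy to check for yourself. Here's a small sketch in Python (the choice of language is mine; the article doesn't assume one) that rebuilds 3571 from its digits:

```python
# Place-value expansion of 3571 in base 10:
# each digit is multiplied by a power of 10.
digits = [1, 7, 5, 3]  # one's, ten's, hundred's, thousand's place
value = sum(d * 10**place for place, d in enumerate(digits))
print(value)  # prints 3571
```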
There's nothing inherently special about basing a number system on 10. We have 10 fingers, so it's natural for us.
But there are other number systems that work better in some situations.
Binary is the foundation of information representation in a digital computer. Binary is also known as base-2. In binary we use 2 unique digits: 0 and 1. Similar to decimal, each place is "worth" a power of two: 2^0 (the 1's place), 2^1 (the 2's place), 2^2 (the 4's place), 2^3 (the 8's place), and so on.
What does our earlier 3571 (in decimal) look like in binary? It's 110111110011, or, written as a place-value sum starting from the 1's place:
1*2^0 + 1*2^1 + 0*2^2 + 0*2^3 + 1*2^4 + 1*2^5 + 1*2^6 + 1*2^7 + 1*2^8 + 0*2^9 + 1*2^10 + 1*2^11
That's a lot more verbose. So why is this used in computers? It turns out that deep inside, computers are made up of very simple circuits. So simple that everything is in terms of on or off. Those two states correspond directly with 1 and 0. Hence binary corresponds directly to how computers store information.
A potential problem is that a string of ones and zeros could easily be mistaken for a decimal number. We typically prefix binary numbers with
0b to make it clear, especially in source code. In this case the example number would be written as 0b110111110011.
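In Python, for example, the 0b prefix is accepted directly as a literal, and the built-in bin function goes the other way, so our example number, 0b110111110011, round-trips:

```python
n = 0b110111110011   # binary literal for 3571
print(n)             # prints 3571
print(bin(3571))     # prints 0b110111110011

# The place-value sum, written out (2^0 place first):
bits = [1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1]
print(sum(b * 2**place for place, b in enumerate(bits)))  # prints 3571
```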
Binary is rather verbose and tedious to work with, so other, more compact number systems have been adopted.
In octal (aka base-8) we use the digits 0-7 (that's 8 unique digits, and we have them lying around from base-10, so why not reuse them?). Additionally, each place is a power of 8: 1's, 8's, 64's, etc.
Octal was common in the early days of computing, when computers often had 12-, 24-, or 36-bit words instead of the 8, 16, 32, or 64 bits that have been common since then. Representing a single octal digit in binary takes exactly 3 binary digits, and since those early word lengths were all multiples of 3, octal was a natural choice. Our earlier example (3571, aka 110111110011) in octal would be 6763.
3 * 8^0 + 6 * 8^1 + 7 * 8^2 + 6 * 8^3
You can see this translation quite clearly if you separate the binary number in groups of 3 digits, starting at the right: 110 111 110 011.
Because octal numbers look a lot like decimal ones, we generally prefix them with
0o. So our example would usually be written as
0o6763. Again, this is standard in source code.
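Python again accepts the 0o prefix as a literal, and the groups-of-3 trick from above is easy to demonstrate (the grouping code here is just an illustration, not a standard library feature):

```python
n = 0o6763        # octal literal for 3571
print(n)          # prints 3571
print(oct(3571))  # prints 0o6763

# Each group of 3 bits is one octal digit:
for group in ["110", "111", "110", "011"]:
    print(group, "->", int(group, 2))
# prints 110 -> 6, 111 -> 7, 110 -> 6, 011 -> 3
```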
Also known as hex, hexadecimal is base-16. That is, each place is worth an increasing power of 16.
Unlike octal where we had plenty of digits from decimal to make use of, we need 16 unique digits for hex. We have plenty of alphabetic characters just lying around so we can borrow a few. In fact for hex we use the digits 0-9 plus the letters A-F (or just as commonly: a-f).
0-9 have their usual meanings, but the letters have the following decimal values:
A - 10
B - 11
C - 12
D - 13
E - 14
F - 15
Similarly to octal, a hex digit represents exactly 4 binary digits. If we take a binary number and divide it into groups of 4 digits (starting at the right) each group corresponds to a hex digit.
0 - 0000
1 - 0001
2 - 0010
3 - 0011
4 - 0100
5 - 0101
6 - 0110
7 - 0111
8 - 1000
9 - 1001
A - 1010
B - 1011
C - 1100
D - 1101
E - 1110
F - 1111
Using the same example:
110111110011 becomes 1101 1111 0011 which becomes DF3
3 * 16^0 + F * 16^1 + D * 16^2
or, using decimal values for the places:
3 * 16^0 + 15 * 16^1 + 13 * 16^2
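The groups-of-4 conversion and the place-value sum can both be sketched in a few lines of Python (the digit-lookup string is my own shorthand for the nibble table above):

```python
# Convert the binary string to hex by grouping 4 bits at a time.
b = "110111110011"
groups = [b[i:i + 4] for i in range(0, len(b), 4)]
print(groups)  # prints ['1101', '1111', '0011']

hex_digits = "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)
print(hex_digits)  # prints DF3

# And the place-value sum checks out:
print(3 * 16**0 + 15 * 16**1 + 13 * 16**2)  # prints 3571
```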
As before, we use a prefix to denote a hex number. No, not
0h for some reason. Instead we use
0x, so our example would usually be written as 0xDF3.
Since modern computers use word lengths that are multiples of 8, hex is pretty standard. In fact there are often situations where it matters that a value is a byte (8 bits), such as the red, green, and blue values in a color specification. In cases like this it is standard to use hexadecimal numbers. Instead of
(192, 255, 128) you would usually write
(0xC0, 0xFF, 0x80). The same is also commonly done with 16- and 32-bit numbers. This makes it explicit that the value needs to fit in a certain number of bits and can't be just anything.
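A quick sketch of that color example in Python, including formatting the bytes back out as hex (the #RRGGBB string form is one common convention, e.g. on the web, not something the tuple itself requires):

```python
color = (0xC0, 0xFF, 0x80)  # red, green, blue, one byte each
print(color)                # prints (192, 255, 128)

# Format each byte as two uppercase hex digits:
print("#{:02X}{:02X}{:02X}".format(*color))  # prints #C0FF80
```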
So if decimal is what we use ubiquitously throughout life, why the fascination with binary and similar systems (e.g. hexadecimal)? It comes down to the fact (mentioned earlier) that computers work in binary internally. If we are going to study the internal structures of computers, in this case digital logic, it's far easier to use binary (or a similar system) because that's the basis of everything in that world.