Do programming for any length of time and you’ll soon learn about different number bases, such as binary or hexadecimal. They’re especially common when working with graphics or peripherals…these notations relate numbers more naturally to the microcontroller’s internal representation.
int a = 0b00101010; // Binary representation of 42
int b = 0x2A;       // Hexadecimal representation of 42
The topic is well covered in existing programming tutorials, so this guide won’t go into detail (here’s Wikipedia’s entry on radix), but I must specifically warn about octal notation, which occasionally trips folks up…
Octal is base 8 notation…three bits per digit, or values from 0 to 7. Notice how the binary and hexadecimal examples above have 0b or 0x at the start? Octal just has a 0.
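To make that prefix concrete, here’s a minimal sketch (the variable name is just for illustration) extending the earlier example with the same value, 42, written in octal:

int c = 052; // Octal representation of 42: 5 eights + 2 ones
// Note the lone leading 0...that single digit is the entire octal prefix.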
What occasionally happens is that folks will “zero pad” regular decimal values to make them fill out space nicely in a table, not realizing that they’re actually mixing in octal numbers with totally different values, or will use digits beyond the valid octal range (e.g. 089), then can’t figure out why the program fails. It’s extra confusing because the other representations don’t mind being zero-padded…
int table1[] = {
  0b00000000, // Binary representation of 0
  0b00101010, // Binary rep. of 42 (harmless leading zeros!)
  0b11111111, // Binary rep. of 255
};

int table2[] = {
  0x0000, // Hexadecimal representation of 0
  0x002A, // Hex rep. of 42 (harmless leading zeros!)
  0x00FF, // Hex rep. of 255
};

int table3[] = {
  000, // Octal representation of 0
  042, // *** Octal rep. of 34! Danger! Danger! ***
  089, // *** Invalid octal value! Danger! ***
  255, // Decimal rep of 255
};
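If you want a decimal table to line up without the octal trap, one option (a sketch, not the only fix…the array name here is hypothetical) is to pad with spaces instead of zeros, so every entry stays plain decimal:

int table4[] = {
    0, // Decimal 0 (padded with spaces, not zeros)
   42, // Decimal 42...still decimal, no octal surprise
  255, // Decimal 255
};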
Use of octal notation is super rare nowadays, but the formal C specification requires its inclusion for backward compatibility. If you encounter code using it at all, it’s probably been ported from truly ancient platforms. Because…this might blow your mind…bytes weren’t always 8 bits!
What blows my mind even more is that the very first stored-program computer ever…the Manchester Baby…had a “modern” 32-bit word length. Its immediate successor…and a majority of systems for the next two decades…took other directions, often with word lengths in multiples of three bits (36, 18, 12), for which octal is the natural shorthand, before mainstreaming back to 32 bits in the late 1960s.
This fact has nothing to do with strange C code…C didn’t even exist yet…I just think it’s so neat!