In the same way that “4” is an Arabic numeral and “IV” is the same number written as a Roman numeral, the language of computers represents the same information in a visually different form: the binary number “100” is the same number as the decimal number “4.”
Computers can only operate on numbers, and binary is the simplest number system. Binary values can then be mapped to the commonly used ASCII codes. ASCII defines 128 symbols (the ten digits, the 26 letters of the English alphabet, some punctuation marks, control commands, etc.), each representing a visible character or a command. Although ASCII itself needs only 7 bits, in practice each ASCII code/character is stored in 8 bits (one byte) of binary.
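As a short sketch (in Python, since the text specifies no language), the mapping from a character to its ASCII code and then to its 8-bit binary form can be shown like this:

```python
# Each character maps to an ASCII code, which is stored as one 8-bit byte.
for ch in "Hi!":
    code = ord(ch)               # the character's ASCII code, e.g. 'H' -> 72
    bits = format(code, "08b")   # the same code as an 8-bit binary string
    print(ch, code, bits)        # e.g. H 72 01001000
```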
The binary system is a base-2 system that uses only “0s” and “1s” to represent all numbers. The number system you are most likely familiar with is the decimal (base-10) system. The position of each “0” or “1” digit in a binary string represents a particular value. A “1” adds the value at that position to the total value of the string, while a “0” indicates that the value is not counted in the total.
In the base-2 system, positional values are denoted as 2^n, where n is the position. Values of n begin at 0 and increase from right to left.
Base-2 from the 0th power to the 6th power, with the base-10 equivalent values:

2^6 = 64, 2^5 = 32, 2^4 = 16, 2^3 = 8, 2^2 = 4, 2^1 = 2, 2^0 = 1
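These positional values can be generated directly; a minimal Python sketch:

```python
# Positional values 2**n for positions n = 0..6, from right to left.
values = [2 ** n for n in range(7)]
print(values)  # [1, 2, 4, 8, 16, 32, 64]
```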
Binary string “1001”:
The decimal equivalent would be: (1×2^3) + (0×2^2) + (0×2^1) + (1×2^0) = 8+0+0+1 = 9
The binary string shows that there are “1s” at 2^3 and at 2^0, and the “0s” at 2^2 and 2^1 indicate that we do not include those values in our total. They are significant placeholders, however, and cannot be discarded without changing the value of the string: “11” in binary is equivalent to “3” in decimal.
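This positional sum translates directly into code. A sketch in Python (the function name is illustrative, not from the text):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum 2**position for every '1', counting positions from the right."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        if digit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("1001"))  # 9
print(binary_to_decimal("11"))    # 3
```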
Converting the decimal number “26” to binary:

26 = 16 + 8 + 2 = (1×2^4) + (1×2^3) + (0×2^2) + (1×2^1) + (0×2^0)

The binary equivalent would be “11010.”
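One common hand method for this conversion is repeated division by 2, reading the remainders in reverse. A minimal Python sketch of that approach (the function name is an assumption):

```python
def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2; the remainders, read in reverse, are the bits."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # prepend each remainder
        n //= 2
    return bits

print(decimal_to_binary(26))  # 11010
```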
Bits and Bytes:
Each binary position represents a unit of memory called a bit (b), whether that position is occupied by a “1” or a “0.” An individual bit is too small to represent most useful values on its own. To process information efficiently, computers handle groups of bits. The smallest group of bits processed by a computer is a byte (B), which is 8 bits long in most modern computers.
Large numbers of bytes are referred to using the International System of Units (SI) prefixes, but in software these prefixes traditionally denote base-2 counts rather than base-10 counts. This means that the binary-based prefixes do not represent the same amounts as the standard base-10 SI prefixes.
| Prefix | Standard SI amount | Software amount (Bytes) |
| --- | --- | --- |
| kilo (k) | 10^3 = 1,000 | 2^10 = 1,024 |
| mega (M) | 10^6 = 1,000,000 | 2^20 = 1,024 × 1,024 |
| giga (G) | 10^9 = 1,000,000,000 | 2^30 = 1,024 × 1,024 × 1,024 |
| tera (T) | 10^12 = 1,000,000,000,000 | 2^40 = 1,024 × 1,024 × 1,024 × 1,024 |
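The table's two columns can be computed side by side; a short Python sketch:

```python
# SI (base-10) vs software (base-2) amounts for each prefix, in bytes.
prefixes = {"kilo": 1, "mega": 2, "giga": 3, "tera": 4}
for name, n in prefixes.items():
    si = 10 ** (3 * n)        # standard SI amount
    binary = 2 ** (10 * n)    # software (base-2) amount
    print(f"{name}: SI {si:,} vs binary {binary:,}")
```

The gap between the two columns grows with each prefix: about 2.4% at kilo, but nearly 10% at tera.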
How many bits (b) are in a Gigabyte (GB)?
(1024^3) × 8 = 8,589,934,592 ≈ 8.6×10^9 b in 1 GB
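The arithmetic above can be checked directly:

```python
# 1 GB (in the base-2 software sense) = 1024**3 bytes; each byte is 8 bits.
bits_per_gb = 1024 ** 3 * 8
print(bits_per_gb)  # 8589934592, about 8.6e9 bits
```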