To understand that, you must understand the units of bits and bytes. One bit is a simple on or off: you either have power or you don't. Let 1 stand for on and 0 stand for off. Eight bits make one byte.
Your storage is measured in bytes, in powers of two. Like any other numeral system, you start counting from 0 and keep on counting until you are done. A byte is a string or collection of 8 bits that can represent the decimal numbers 0 to 255, written 00000000 to 11111111 respectively. So if I wanted to represent a decimal number as a byte, I could do it.
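If you want to see that for yourself, here is a minimal sketch (I am using Python purely as an illustration; nothing above depends on it):

```python
# A byte is 8 bits, so its value runs from 0 (00000000) to 255 (11111111).
for value in (0, 1, 2, 255):
    print(value, "->", format(value, "08b"))  # pad out to 8 binary digits
```

Running it prints 0 -> 00000000, 1 -> 00000001, 2 -> 00000010, and 255 -> 11111111.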
1 in decimal is 1 in binary, or 00000001 when sent as a full byte of information. Now, here is where binary gets interesting and predictable. In decimal, a single digit lets me count up to 9 without much fuss or thought. When I need higher numbers, I add a second digit: I increase the number in the higher register by 1 and reset the number in the lower register to 0. That gives me room to count to 99. Rinse and repeat for numbers up to 999, and again for numbers up to 9999.
Binary is no different. The only thing I can do with 1 bit is count up to one, which isn't very useful. So if I wanted to count up to 3, I would need 2 bits to do so. Hence, in binary 10 is 2 in decimal, and then 11 is 3 in decimal. If I wanted to count up to 7, I could do that, but I need 3 bits: 100 is 4, 101 is 5, 110 is 6, and 111 is 7. Do you see what is happening? The range doubles with every register in binary, while in decimal it grows by factors of ten: 10, 100, 1000.
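Here is the same carrying behaviour as a short Python sketch, counting from 0 to 7 and printing each number's binary form:

```python
# Counting from 0 to 7: whenever the low bit would pass 1, it resets to 0
# and the next register up increases by 1, just like 9 rolling over to 10.
for n in range(8):
    print(n, "in decimal is", format(n, "b"), "in binary")
```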
Thus:
Number of bits / values
1 / 0 - 1, because 2 raised to the power of 1 gives 2 values
2 / 0 - 3, because 2 raised to the power of 2 gives 4 values
3 / 0 - 7, because 2 raised to the power of 3 gives 8 values
4 / 0 - 15, because 2 raised to the power of 4 gives 16 values
And so on. With 10 bits I can count up to 1023, which is 1024 distinct values; 1024 is the binary world's counterpart of 1000 in decimal.
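A small Python sketch of the same table, using the rule that n bits hold the values 0 through 2 to the power of n, minus 1:

```python
# For n bits the values run from 0 to 2**n - 1, i.e. 2**n distinct values.
for bits in range(1, 11):
    print(bits, "bits -> 0 to", 2**bits - 1)
# The last line shows 10 bits reaching 1023, which is 1024 distinct values.
```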
Suffice it to say that I could also count in powers of two for simplicity.
1, 2, 4, 8, 16, 32, 64, 128, 256, 512, and 1024 is the progression. (Count these. How many numbers are there?)
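You can check your count with one more Python snippet:

```python
# The doubling progression from 2**0 up to 2**10.
progression = [2**i for i in range(11)]
print(progression)       # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
print(len(progression))  # 11 numbers in the list
```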
So 1024 bytes, which is 2 raised to the power of 10, can also be represented as 1 kilobyte, but it is often rounded to 1000 bytes; that rounding is a little less accurate in the binary world, though it is exact in the decimal world.
To get one megabyte I multiply 1024 bytes by 1024, which gives 1,048,576 bytes. I could also get an approximation by multiplying 1000 by 1000 = 1 million, and 1 million is what "mega" means.
So what is next? Billions. 1 gigabyte (GB) is 1024 × 1024 × 1024 bytes in binary, or 1000 × 1000 × 1000 bytes in decimal.
So when do you use the binary figure? When you need accuracy: 1024 MB is one GB in binary terms. When you want to represent the number in decimal, you use 1000 MB, or simplify it to 1.0 GB.
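To make the binary-versus-decimal difference concrete, here is one last Python sketch (the constant names are just my own labels):

```python
# Binary sizes use powers of 1024; decimal sizes use powers of 1000.
BINARY_K, DECIMAL_K = 1024, 1000
print("1 kilobyte:", BINARY_K,    "vs", DECIMAL_K,    "bytes")
print("1 megabyte:", BINARY_K**2, "vs", DECIMAL_K**2, "bytes")
print("1 gigabyte:", BINARY_K**3, "vs", DECIMAL_K**3, "bytes")
```

The gap grows with each step: about 2.4% at the kilobyte level and about 7.4% at the gigabyte level.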
Credits: Ian Lasky and Taciano Dreckmann Perez