"Eight-bit" explained


People don't seem to understand 'bits'.

It always baffled me when people like AVGN openly admitted they didn't understand what '16-bit' meant, and that they couldn't explain to their parents how much better 16 bits are compared to eight.

Sigh. So let's get it over with once and for all, so that even the simplest of minds can grasp what it actually means, why there were no '256-bit consoles', and why, even when something WAS advertised as 128-bit, it was just hokum.

64 bits may sound like a tiny number, but if people knew just how big a number those bits can actually hold, they would probably be a bit shocked.

It's the old 'one grain of rice on a chessboard' parable, where you double the grains of rice with every square until you reach the 64th square, by which point the amount of rice grains would easily exceed anything any king in history could have owned at the time.
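
If you want to check the parable's arithmetic yourself, here's a quick sketch in Python (my own illustration, not part of the original story):

    # One grain on the first square, doubling on every square after that.
    total = sum(2 ** square for square in range(64))
    print(total)  # 18446744073709551615 grains, i.e. 2**64 - 1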

The word 'bit' comes from 'binary digit'.

There are many 'counting systems' like this, the most commonly used being the decimal system, where we use ten digits, from 0 to 9, obviously.

In the mysterious world of computers, there have always been numerous systems like these, like the 'hexadecimal' system, which uses sixteen digits (I think the more accurate terminology would use the word 'base', but I am trying to keep things simple).

This means the '10' won't come after ten digits, but after the number 15, because the system uses sixteen digits, from 0 to 15, before the '10' comes. Now, how can we have 15 before 10? Easily: we don't WRITE it as '15', we write it as F. It's perfectly logical. It goes like this:

0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F .. the next number is then '10'.

So '10' in hexadecimal system is actually '16' in our 'normal decimal system'.
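
If you have Python handy, you can watch this happen (Python is just my pick here for illustration):

    # Counting past 15 in hexadecimal: hex() shows the base-16 form.
    for n in range(14, 18):
        print(n, '->', hex(n))  # 14 -> 0xe, 15 -> 0xf, 16 -> 0x10, 17 -> 0x11
    print(int('10', 16))        # reading '10' as hexadecimal gives 16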

Now, the 'binary' system has only TWO digits, which is why it's called 'binary'. A bit is a 'binary digit', which means either 0 or 1. That is one bit. One bit can express exactly two values, zero or one, or 'on or off'. A light switch is a one-bit system.

So '10' comes immediately after two values have been used up, and we always count from zero: 10 comes after ten values in the decimal system (0-9), after sixteen values in hexadecimal (0-15), and in binary, after just two values (0-1).

With two bits, we can express four values: 00, 01, 10 and 11. What comes after that? 100, of course. Those three bits (or binary digits) can express eight values: 000, 001, 010, 011, 100, 101, 110 and 111.

This means that as we go along, we can realize this is an exponential system that runs on the 'powers of two' (not the acolyte kind). Every added bit doubles the previous amount. Four bits can, thus, express 16 values - 4-bit graphics would generally mean 16-color graphics.

Now, eight bits can express, how many values? Anyone? Anyone?

Obviously, the answer is 256.
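
Don't take my word for it - let the machine count (a minimal Python sketch):

    # Every added bit doubles the number of values: n bits -> 2**n values.
    for n in range(1, 9):
        print(n, 'bits can express', 2 ** n, 'values')
    # And here is every pattern 3 bits can make:
    for value in range(2 ** 3):
        print(format(value, '03b'))  # 000, 001, 010, ..., 111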

What does this have to do with computers, consoles, etc.? Well, factually, not much, but the reason people talk about '8-bit consoles/computers' is that the CPU in those systems has registers and a data bus that are eight bits wide - each register holds one of 256 possible values at a time.

This is all it means.
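
To make 'eight bits wide' concrete, here's a toy sketch in Python (just an illustration, not how any real CPU is built):

    # Simulate an 8-bit register: masking with 0xFF keeps only the low 8 bits.
    reg = 255
    reg = (reg + 1) & 0xFF
    print(reg)  # 0 - the value wraps around, because there is no ninth bit
    # 8 bits fit exactly 256 different values: 0 through 255, and that's it.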

This is also why calling something '8-bit music' is so, SO very wrong, and anyone who knows all this should cringe every time someone says stupid things like that.

It also means that no system is truly '8-bit' or '16-bit'; there are SO many other things that make up a computer/console besides how many values the CPU can handle that it boggles my mind people can just reduce all this to bits that way.

A funny example: the Commodore 64 has an '8-bit CPU', BUT its synthesizer sound chip has 16-bit registers for accurate pitch. So technically, you could call the Commodore 64 a 16-bit computer!
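
Incidentally, this also shows how an 8-bit CPU deals with 16-bit values: it splits them into two bytes. A quick Python sketch (the pitch value here is made up, and I'm not reproducing the chip's real register layout):

    # An 8-bit CPU stores a 16-bit pitch value as two separate bytes.
    pitch = 0x1CD6                # a hypothetical 16-bit frequency value
    lo = pitch & 0xFF             # low byte:  0xD6
    hi = (pitch >> 8) & 0xFF      # high byte: 0x1C
    print(hex(lo), hex(hi))       # 0xd6 0x1c
    print(hex((hi << 8) | lo))    # recombined: 0x1cd6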

At the same time, the Amiga 500 has a sound chip that handles 8-bit samples (with a maximum sampling rate of around 28 kHz, if I recall). This means you could just as well call the Amiga an 8-bit computer..

Then there are all kinds of systems with varying 'bits' in their special hardware - GPUs, graphics chips and so on - so you shouldn't REALLY be able to reduce a system to 'bits' that easily, but people do anyway. The PC Engine, for example, has an '8-bit CPU', but when you play its games, they look, sound and 'feel' like anything you see on a '16-bit' system.

Then there's the whole 'Jaguar 64-bit' fiasco - it has the same 68000 CPU as the Amiga, which is debatably either a 16-bit or a 32-bit CPU depending on whom you ask, plus some special hardware that supposedly adds up to '64-bit' (although you can't add bits together that way; two 32-bit processors do not magically create one 64-bit one) - plus the whole 'Dreamcast 128-bit' lie and numerous others.

Modern PCs usually have a 64-bit CPU; somewhat older ones have 32-bit CPUs, as do the old Amiga 1200 and CD32. The Super Famicom and Sega Megadrive are, of course, of the '16-bit' variety..

..I hope this helps you understand bits.



The problem with 'bits' was that people didn't understand what they were or what they meant; they just saw a number and an 'alien word', and started thinking it somehow meant how good the graphics were, or had something to do with the quality of the games or the quality of the system. So it was understandable, but frustrating, to hear AVGN ask something like 'where did all the bits go?' in his Jaguar review - bits actually had nothing to do with how good something looked, sounded or played.

It was always just about 'how much data can the CPU cram through at once', and that's all it was. This renders it at least semi-meaningless, as CPU power alone does not dictate anything - you need good artists, good programmers, good design and designers, and so on. And systems can vary WILDLY outside the whole 'bit' business: how much data the system as a whole can handle, how many colors it allows, what kind of sound system it has, and so on and so forth. The bits are relatively meaningless next to ALL of the other stuff, so it was, and would be, stupid to have 'system wars' around that concept alone.

You can have a fast CPU in an otherwise crappy system, or amazing graphics and sound capabilities in a system with a relatively slow CPU. It's like a movie's budget, really - a big budget does not guarantee a good movie, and a small-budget movie can be really good. A big budget just means the movie has potential for 'larger visuals' (not necessarily better ones) and 'better actors' (or at least more famous ones), so it can -potentially- be better, but there are SO many other factors to consider that the number alone is almost meaningless, or even counter-productive.

The next time you think of bits, think of PC Engine, think of Atari Jaguar, think of C64's synthesizer.

Bits aren't everything, and never were. If we were to count the bits of ALL the components and processing units in every single console/system, the numbers would fluctuate quite wildly. Don't just assume that 'the more bits, the better'.


Some commonly known, easy-to-remember, useful 'bit amounts' that can serve you well if you use computers a lot, program things, etc..

8 bits = 256
9 bits = 512
10 bits = 1024
11 bits = 2048
12 bits = 4096
13 bits = 8192
14 bits = 16384
15 bits = 32768
16 bits = 65536
18 bits = 262144
24 bits = 16777216
..
64 bits = You try to figure it out..
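
And if you ever forget these, any language will hand them back to you. In Python, for example:

    # Regenerate the table above - n bits can express 2**n values.
    for n in (8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 24, 64):
        print(n, 'bits =', 2 ** n)

The last number it prints is the 64-bit answer - all twenty digits of it.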
