"Eight-bit" explained
People don't seem to understand 'bits'.
It always baffled me when people like AVGN openly admitted they didn't understand what '16-bit' meant, or how they couldn't explain to their parents why 16 bits are better than eight.
Sigh. So let's get it over with once and for all, so even the simplest of minds can realize what it actually means, and why there were no '256-bit consoles', and even when something WAS advertised as 128-bit, it was just hokum.
64 bits may sound like a tiny number, but if people knew just how big a number those bits can actually express, they would probably be a bit shocked.
It's the old 'one grain of rice on a chessboard' parable: you double the grains of rice with every square, and by the 64th square the number of grains would easily exceed anything any king in history could have owned.
The word 'bit' comes from 'binary digit'.
There are many 'counting systems' like this, the most commonly used being the decimal system, where we use ten digits, from 0 to 9, obviously.
In the mysterious world of computers, there have always been numerous systems like these, like the 'hexadecimal' system, which uses sixteen digits (I think the more accurate terminology would use the word 'base', but I am trying to keep things simple).
This means the '10' won't come after ten values, but after fifteen, because hexadecimal uses sixteen digits, for the values 0 to 15, before '10' appears. Now, how can we have fifteen before 10? Easily: we don't WRITE it as '15', we write it as F. It's perfectly logical. It goes like this:
0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F .. the next number is then '10'.
So '10' in hexadecimal system is actually '16' in our 'normal decimal system'.
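If you want to see this for yourself, a few lines of Python will do it (Python is just my choice of language here, nothing in the text depends on it):

```python
# int() can parse a number written in any base; hex() goes the other way.
print(int("10", 16))   # hexadecimal '10' is sixteen in decimal
print(int("F", 16))    # hexadecimal 'F' is fifteen in decimal
print(hex(255))        # decimal 255 written in hexadecimal: '0xff'
```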
Now, 'binary' system has only TWO numbers, which is why it's called 'binary'. A bit is 'binary digit', which means either 0 or 1. That is one bit. One bit can express exactly two values, zero or one, or 'on or off'. A light switch is a one-bit system.
So in binary, '10' comes immediately after just two values have been used up. We always count from zero: '10' comes after ten values in decimal (0-9), after sixteen values in hexadecimal (0-15, written 0-F) and after just two values in binary (0-1).
With two bits, we can express four values: 00, 01, 10 and 11. What comes after that? 100, of course. Those three bits (or binary digits) can express eight values: 000, 001, 010, 011, 100, 101, 110 and 111.
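That list of three-bit values can be generated in one loop, if you'd rather not count on your fingers (again, Python is just my pick for illustration):

```python
# Enumerate every value three bits can express, zero-padded to three digits.
for value in range(8):              # 2**3 = 8 values in total
    print(format(value, "03b"))     # prints 000, 001, 010, ... 111
```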
This means that as we go along, we can realize this is an exponential system that utilizes the 'powers of two' (not the acolyte kind). This means, every added bit doubles the previous amount. Four bits can, thus, express 16 values. 4-bit graphics would generally mean 16-color graphics.
Now, eight bits can express, how many values? Anyone? Anyone?
Obviously, the answer is 256.
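The doubling rule is just 2 raised to the number of bits, which a one-line loop makes obvious (Python again, purely for illustration) - and it also shows how enormous the rice-on-a-chessboard number at 64 bits really is:

```python
# Every added bit doubles the number of expressible values: 2**bits.
for bits in (1, 2, 3, 4, 8, 16, 32, 64):
    print(f"{bits:2d} bits -> {2**bits} values")
# 8 bits give 256 values; 64 bits give 18446744073709551616.
```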
What does this have to do with computers, consoles, etc.? Well, factually, nothing, but the reason people talk about '8-bit consoles/computers' is that the CPU in those systems has registers and a data path that are eight bits wide - it works on values from 0 to 255, that is, 256 possible values, in a single operation.
This is all it means.
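To make 'eight bits wide' concrete, here's a tiny sketch (Python, with a made-up helper name `add_8bit`): an 8-bit register can only hold 0 to 255, so anything beyond that wraps around.

```python
# An 8-bit register holds 0..255; adding past 255 wraps around,
# because only the low eight bits fit in the register.
def add_8bit(a, b):
    return (a + b) & 0xFF   # keep only the low eight bits

print(add_8bit(250, 10))    # 260 wraps around to 4
print(add_8bit(255, 1))     # 256 wraps around to 0
```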
This is also why calling something '8-bit music' is so, SO very wrong, and anyone who knows all this should cringe every time someone says stupid things like that.
It also means that no system is truly '8-bit' or '16-bit': there are SO many other parts to a computer or console besides how wide the CPU is that it boggles my mind people can just reduce all of it to bits that way.
A funny example: the Commodore 64 has an '8-bit CPU', BUT its SID synthesizer sound chip has 16-bit frequency registers for accurate pitch. So technically, you could call the Commodore 64 a 16-bit computer!
At the same time, the Amiga 500 has a sound chip that plays 8-bit samples (at a 22.1 kHz sampling frequency, if I recall correctly). This means you could say the Amiga is an 8-bit computer as well..
Then there are all kinds of systems with varying 'bits' in their special hardware, graphics chips and so on, so you shouldn't REALLY be able to reduce a system to 'bits' that easily, but people do anyway. The PC Engine, for example, has an '8-bit CPU', but when you play its games, they look, sound and 'feel' like anything you see on a '16-bit' system.
Then there's the whole 'Jaguar 64-bit' fiasco: it has the same CPU as the Amiga (the Motorola 68000, which is debatably either a 16-bit or a 32-bit CPU, depending on whom you ask), plus some special hardware that somehow becomes '64-bit' when taken together - although you can't add bits together that way; two 32-bit processors do not magically create one 64-bit processor. Add to that the whole Dreamcast 128-bit lie, and numerous others.
Modern PCs usually have a 64-bit CPU; somewhat older ones have 32-bit CPUs, as do the old Amiga 1200 and CD32, while the Super Famicom and Sega Megadrive are, of course, of the '16-bit' variety..
..I hope this helps understand bits.