bit montage
September 23rd, 2013

The iPhone 5s is 64 bit! Does that mean it's eight bucks?

iOS 7 has arrived. And it's 64 bit! Great!

But... what the heck does that mean?

That's a fun question, because people have been throwing around claims about "8 bit," "16 bit," "32 bit" and, more lately, "64 bit" computing for as long as personal computing has existed. Certainly since long before the smartphone.

You probably know what a bit is: true or false. One or zero. She loves me, she loves me not. The smallest piece of information possible.

Modern computers ultimately deal only with bits. They tackle bigger things by stringing bits together. If I have two bits, I can represent the numbers 0, 1, 2 and 3 like this: 00, 01, 10, 11.

And if I have eight bits, I can count as high as 255:

11111111
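
If you want to see those ceilings spelled out, here's a tiny C program (my own illustration, nothing from Apple) that prints the biggest number a few widths can hold:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* With n bits you can count from 0 up to 2^n - 1. */
        printf("2 bits:  %llu\n", (1ull << 2) - 1);   /* 3 */
        printf("8 bits:  %llu\n", (1ull << 8) - 1);   /* 255 */
        printf("16 bits: %llu\n", (1ull << 16) - 1);  /* 65,535 */
        printf("32 bits: %llu\n", (1ull << 32) - 1);  /* 4,294,967,295 */
        return 0;
    }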

"Aha! So an 8-bit computer can only count to 255!" Well, no, not quite. An 8-bit computer can only count to 255 "in its head."

What do I mean by "in its head?" Well, if you're like most people, you can multiply one-digit numbers in your head, no problem. 

Many people can multiply two-digit numbers without writing anything down, either, although mistakes start to creep in as we mentally shift columns around.

But by the time we get to three digits, most of us who aren't showoffs are reaching for pencil and paper.

An 8-bit computer is much the same way. Of course it can tackle bigger numbers than 255. But it has to use "pencil and paper." It has to "write down" those numbers in memory, working with one 8-bit "column" at a time, much as you would doing a multiplication problem on paper. And that slows the computer down.
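
If you're curious what that "pencil and paper" looks like in practice, here's a rough C sketch of my own (no real 8-bit CPU runs C like this, but the idea is the same): adding two 16-bit numbers one 8-bit column at a time, with a carry.

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 16-bit numbers the way an 8-bit machine has to:
       one 8-bit "column" at a time, carrying into the next column,
       just like pencil-and-paper addition. */
    static uint16_t add16_in_8bit_columns(uint16_t a, uint16_t b) {
        uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
        uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

        uint16_t lo_sum = (uint16_t)(a_lo + b_lo);          /* low column */
        uint8_t  carry  = lo_sum > 0xFF;                    /* did it pass 255? */
        uint8_t  result_lo = lo_sum & 0xFF;
        uint8_t  result_hi = (a_hi + b_hi + carry) & 0xFF;  /* high column, plus carry */

        return (uint16_t)((result_hi << 8) | result_lo);
    }

    int main(void) {
        printf("%u\n", add16_in_8bit_columns(300, 500));    /* 800 */
        return 0;
    }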

This is why there can be spreadsheets for the Apple ][, even though it was an 8-bit computer. In fact, there are pocket calculators that only have four bits. It's OK as long as you have plenty of paper... and plenty of time.

The 16-bit era (or was it 20?)

Enter the 16-bit computer. The classic IBM PC and its clones had 16-bit "registers." Registers are much like the seven digits or so you can remember easily in your head at any one time. Having 16-bit registers means you can count to 65,535 without really trying. 

But this isn't really what people remember about the IBM PC. What they remember is the 640K limit. 640K probably sounds tiny to you (a web-sized JPEG, maybe?), but it's significant here because 640K is a lot more than 65,535 bytes.

I lied to you (or, counting isn't everything)

I've been saying that "8 bit" means you can count to 255, "16 bit" means you can count to 65,535, and so on. But in building a computer, or a smartphone, counting isn't the only thing. Access to lots and lots of memory is also really important. And throughout the history of computing, the amount of memory a computer can access has tended to be more than the biggest number it can hold "in its head."

So "8-bit" personal computers actually had a 16-bit "address bus," which means they could "see" up to 65,536 bytes of memory, at addresses 0, 1, 2, 3, and so on. There were true "8-bit only" devices, like the Atari 2600 video game system and many calculators. But for a computer, you needed a little more than 256 bytes of RAM.

When the IBM PC came out, memory was starting to get cheaper, so they built it to tackle more than 64K of RAM. And they used a processor that could do that... Just barely. The IBM PC's processor had a clever trick that allowed it to access memory as if it were a 20-bit computer. And 20 bits is enough to get you to...

Well, 1,048,576, actually. Which is more than 640K! Where did the extra space go? It was used up by ROM, special hardware that made the display look like RAM, and a generous mix of good and bad decisions.
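
For the curious: the clever trick was the 8086/8088's segment:offset addressing, in which two 16-bit registers combine into one roughly 20-bit address. Here's a minimal C sketch (mine, not actual PC code) of the arithmetic involved:

    #include <inttypes.h>
    #include <stdio.h>

    /* The 8086/8088 builds a 20-bit address from two 16-bit registers:
       linear address = segment * 16 + offset */
    static uint32_t linear_address(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void) {
        /* 0xFFFF:0x000F reaches byte 1,048,575, the top of that 1 MB space. */
        printf("%" PRIu32 "\n", linear_address(0xFFFF, 0x000F));
        return 0;
    }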

Later members of the IBM PC family tree used a similar trick to address memory with 24 bits. But in all of these machines, it was still very easy to count to 65,535 and more work to count higher. You still had to keep that 64K limit in mind, or your code would mysteriously stop working as your registers "rolled over" like an odometer flipping over to zero.
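
That odometer effect is easy to reproduce even today; a throwaway C example:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t counter = 65535;              /* as high as 16 bits go */
        counter++;                             /* rolls over like an odometer */
        printf("%u\n", (unsigned)counter);     /* prints 0 */
        return 0;
    }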

The 32-bit era: 4,294,967,296 bytes should be enough for anybody

Eventually, 32-bit computing caught on. Programmers understood it was a good idea for computers to be able to work "in their heads" with numbers bigger than most people need to deal with. It makes everything faster, and it cuts way down on bugs, especially when it comes to dealing with memory in a simple way. It was only a matter of time before costs came down to the point where 32-bit computers were practical.

One of the first 32-bit personal computers was the original Macintosh.

"Hold it right there buddy! I'm reading right in that article that it had a 16-bit bus!"

Well yes, I lied to you again. Many early 32-bit computers could tackle 32-bit numbers "in their heads," but they couldn't really move data back and forth in 32-bit chunks. They had to do it in 16-bit chunks. The thing is: this doesn't really matter. What matters is that programmers could treat the computer as if it could do that. The gnarly workarounds were hidden away in the hardware. And that made for better, less buggy programs.

"Excuse me, it totally matters. I have a 512-bit video card and it totally screams."

Yup, yet another shameless falsehood on my part. For certain things, the size of the bus actually matters the most. And game graphics is one of those things, because you want to move lots and lots and lots of triangles in a hurry. 

So when people talk about a 512-bit video card, they usually mean that it can move 512 bits in and out of memory at the same time. They don't mean that it can count to... well, two to the 512th power minus one... "in its head."

The 64-bit era: too much is finally enough

Now we had 32 bits and everyone was happy. Well, almost.

There are still sneaky reminders that four-billion-something-something is not such a big number after all.

Most computers measure time in seconds since 1970. And it so happens that on January 19th, 2038, any 32-bit computer that is still measuring time as a plain old number that it can hold "in its head" is going to have a very bad day.
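
You can compute the exact moment yourself. Here's an illustrative C snippet (my own) that turns the largest signed 32-bit second count into a calendar date:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* The largest value a signed 32-bit seconds-since-1970 counter
           can hold: 2,147,483,647. */
        time_t doomsday = (time_t)INT32_MAX;

        char buf[64];
        struct tm *utc = gmtime(&doomsday);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
        printf("32-bit time runs out at %s\n", buf);  /* 2038-01-19 03:14:07 UTC */
        return 0;
    }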

There are more immediate problems. It's not that rare anymore to have a file that is bigger than 4-billion-something-something bytes. Many computers have more than 4 gigabytes of RAM. And lots of real-world math does involve counting to numbers bigger than 4 billion.

Dealing with these cases in 32 bits is possible, but involves the same sort of clever-but-buggy hacks that got us through the 16-I-mean-20-bit era. It takes the elegance out of programming. One has to remember to say "okay, THIS number is a BIG number, not like all these REGULAR numbers." And for those of you who are not programmers, that means buggier programs.
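
In C, for example, that bookkeeping looks roughly like this (an illustrative sketch with a made-up file size):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* A 6 GB file size doesn't fit in a "regular" 32-bit number... */
        uint32_t size_32 = (uint32_t)6000000000ull;  /* silently truncated */
        /* ...so the programmer must remember to reach for the BIG type. */
        uint64_t size_64 = 6000000000ull;

        printf("as 32 bits: %" PRIu32 " bytes\n", size_32);  /* 1,705,032,704: a bug */
        printf("as 64 bits: %" PRIu64 " bytes\n", size_64);  /* 6,000,000,000 */
        return 0;
    }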

And so everyone switched to 64-bit computing and everything was awesome. Or was it?

64 bits: too much is kind of too much, maybe?

Not everyone is so quick to jump on the 64-bit bandwagon.

The biggest complaint by far: 64-bit computing wastes memory.

Why is that? Because if the numbers you're comfortable working with are 64-bit numbers, you're going to write down 64-bit numbers most of the time, because it comes naturally to you. You're not going to go out of your way to go back to crappy 32-bit numbers. That's understandable. But you're going to use twice as much paper. And RAM is not exactly cheap once you get past the 2GB mark.
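
A concrete way to see the "twice as much paper" effect: the same C program, compiled 32-bit and then 64-bit, reports different sizes for its pointers (and, on many platforms, its long integers).

    #include <stdio.h>

    int main(void) {
        /* On a typical 32-bit build these print 4; on a typical 64-bit
           build, 8. Every pointer now takes twice the RAM. */
        printf("pointer: %zu bytes\n", sizeof(void *));
        printf("long:    %zu bytes\n", sizeof(long));
        return 0;
    }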

Also, 64-bit computing isn't necessarily faster. If the kind of work you're doing could just as elegantly be done in 32 bits, you might not see any speed benefit at all.

That's why many people still don't run a 64-bit operating system on their laptop.

"So why did Apple go 64 bit for iOS 7?"

OK, so if 64 bits can be overkill even on a laptop, why would you want it on a phone? It "only" has 1GB of RAM, so it's not about addressing more than 4GB of memory. And apps will actually take more memory when compiled for 64 bits.

So what's the point? As it turns out, the main benefits are in math-intensive tasks like games, photo manipulation and video. Which are, of course, some of the most popular mobile apps.

And you can do that math more quickly if you can count to 18,446,744,073,709,551,615 in your head.
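
Loosely speaking (my example, not Apple's benchmark), the win is that arithmetic on big values becomes one register-sized operation instead of a chain of smaller ones:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* Total bytes in 1,000 frames of a 12-megapixel image at
           4 bytes per pixel: well past the 32-bit ceiling, but a
           single multiply for a 64-bit register. */
        uint64_t bytes = 4032ull * 3024 * 4 * 1000;
        printf("%" PRIu64 " bytes\n", bytes);  /* 48,771,072,000 */
        return 0;
    }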

Edit: while it's true that there are problems you can solve more efficiently with 64-bit numbers, a lot of smart people disagree that the 64-bitness of the iPhone 5s will improve performance for most apps. A case can be made that it's mostly about accessing other improvements made in the new A7 chip that are only available in its 64-bit mode. You can even argue that Apple hopes to unify its desktop and mobile devices on a single chip... which would mean another big transition on the desktop side like the famous shift from PowerPC processors to Intel processors. We shall see.

 