Web 2.0
April 14th, 2008

I was talking to a friend of mine the other day, and he mentioned that Web 2.0 doesn’t exactly exist.
I mean, yes, we can point to applications on the web and say these are clearly Web 2.0 applications, but no one (that I know of) has laid down a concrete standard for where the dividing line between 1.0 and 2.0 falls.
For that matter, Web 1.0 didn’t exist either. We had several diverging standards for how HTML was to be rendered – to the point that, in some cases, web developers were forced to write separate versions of HTML for separate browsers. Web 1.0 exists only in retrospect.
What I’m sort of wondering, at this point, is where Web 3.0 is going to take us. I would share what’s in my crystal ball, but I’d probably be (very) wrong. One of the interesting trends is the perpetual game of pushing CPU load off to the client – or back to the server. It seems possible that Google will release a desktop OS that turns all the computers in the world into one massively parallel computer, and you’ll never know whether your spreadsheet is stored on your local hard drive or somewhere in another country. People who use Skype already accept that their calls will relay through their neighbors’ machines, and vice versa. It may be that Web 3.0 will be the end of the client-server mentality and we’ll all be using one monster peer-to-peer system.
Or it might not. I can’t imagine which direction Web 3.0 will go in, because I don’t know what radically new developments are just over the horizon. Most of the technology we’ve seen in the last ten years has been a logical extension of Moore’s law – but it seems like there are a lot of concepts that are completely unexplored, and a lot more people out there to explore them.
On an unrelated note, I still wonder when, and if, we will see hybrid analog-digital computers. I thought this would start with each computer having several registers of random noise, generated by some sort of very high-quality white-noise generator. I also keep thinking that certain functions which are very expensive in CPU cycles are very easy with op-amps. Of course, it’s possible this is already being done inside GPUs – does anyone know if GPUs have analog computers as part of them?
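To make that op-amp point a little more concrete, here’s a rough back-of-the-envelope sketch (the Python below is purely illustrative; the sample rate and input signal are invented for the example). Computing a running integral digitally means sampling and summing, with work that scales with the sample rate, while the textbook op-amp integrator does the same thing continuously with one resistor and one capacitor.

```python
# Rough, purely illustrative sketch of the cost asymmetry I mean.
# Digitally, a running integral means sampling the input and summing --
# work proportional to the sample rate. The classic op-amp integrator,
# Vout(t) = -(1/RC) * integral of Vin dt, does the same job continuously
# with one resistor and one capacitor.
import math

SAMPLE_RATE = 48_000          # samples per second (arbitrary choice for the example)
DT = 1.0 / SAMPLE_RATE

def vin(t):
    """Stand-in input signal: a 1 kHz sine wave."""
    return math.sin(2 * math.pi * 1000.0 * t)

# Trapezoidal running integral over one second of signal:
# roughly 48,000 multiply-adds for something the analog circuit gives for free.
integral = 0.0
prev = vin(0.0)
for n in range(1, SAMPLE_RATE + 1):
    cur = vin(n * DT)
    integral += 0.5 * (prev + cur) * DT
    prev = cur

print(f"Numerical running integral after 1 second: {integral:.6f}")
```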
I also wonder if the science of analog pattern recognition, or analog recognition assistance, shouldn’t be bumped up a notch or three. We’ve gotten a little too obsessed with digital of late – not saying digital isn’t great, wonderful, the dog’s bark and the cat’s meow, but a hybrid digital/analog computer might be able to achieve things that neither an analog nor a digital system could do alone.