Google’s Need For Speed Is Resulting In A Faster Internet

When you go on the Internet and call up a website, the request for that site’s data, text, images, or video takes a worldwide tour: it travels from the user’s browser at home to the server which hosts that site’s domain, which then determines exactly what information needs to be sent back to fulfill the request. That seems like a lot of work, so Google wants to make it SPDY.

The code which describes how the page should be loaded then makes its journey back to the user’s browser. Those instructions may call for fetching additional items, such as a video, an image, or text, which usually requires the transmission of even more messages.
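To make that round trip concrete, here is a minimal Python sketch of the pattern described above: one request fetches the page’s code, and that code in turn names more resources, each of which costs another round trip. The URL is a stand-in (example.com), not a site from the article.

```python
# A minimal sketch of the round trip described above: one HTTP GET
# for the page, followed by extra requests for whatever the page
# references. The URL is a placeholder, not a real site.
import re
import urllib.request

page_url = "http://example.com/"

# First round trip: fetch the HTML that tells the browser what to load.
with urllib.request.urlopen(page_url) as response:
    html = response.read().decode("utf-8", errors="replace")

# The HTML typically references more resources (images, scripts, CSS),
# each of which costs another request/response round trip.
extra_resources = re.findall(r'(?:src|href)="([^"]+)"', html)
print(f"Initial page took 1 request; it references "
      f"{len(extra_resources)} more resources to fetch.")
```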

Each of these seemingly simple messages can actually involve complicated, interconnected hardware and software that is often outdated, poorly designed, riddled with malware, or at the very least clogged and congested.

The route back to the browser passes through several types of physical infrastructure, starting with the high-speed wires that form the backbone of the Web, then moving to cables or phone wires, and finally to the wireless signals that deliver the requested website to its intended user.

The Need For Faster Internet Speed
Performance and speed problems can occur anywhere along this chain of transmission. The servers hosting the site may simply be slow. The end user’s browser may not be equipped to handle the site’s code efficiently. The site’s code itself may be too heavy to process. On top of that, the back-and-forth negotiation of sending the data and confirming it has even arrived is governed by protocols that were designed decades ago.

They were not developed for the speed or interactivity required by today’s ultra-fast, complicated Web applications, which are designed to replace the software that traditionally runs on a computer.

Users are extremely sensitive to even the slightest delay in receiving Web data. One study found that introducing a delay of just 100 to 500 milliseconds when displaying search results led users to conduct close to 1% fewer searches overall.

Once the speed was restored to its previous levels, it still took some time for users to resume their original searching habits. Browsing Web pages is now expected to feel like grabbing a remote and flicking through TV channels: instant results.

So Google Wants A Lightning Quick Internet

Google is digging deeper into the Web’s fundamental architecture. It has even proposed a brand new protocol, which it calls SPDY (as in speedy), and which according to Google can potentially make the Internet twice as fast as today’s protocols allow. The protocols currently in place were never designed to handle the extreme bandwidth now flowing through them.

Transmission Control Protocol, or TCP, is designed so that no information gets lost during transfer. Once a connection opens, it slowly increases the transfer rate bit by bit, testing itself the entire way. If it detects a problem, it cuts its transfer rate in half. As a result, TCP rarely takes advantage of the full bandwidth that’s available.
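Here is a toy Python simulation of that ramp-up-then-halve behavior (additive increase, multiplicative decrease); the link capacity and step sizes are made-up illustrative numbers, not real TCP parameters.

```python
# Toy simulation of TCP-style congestion control: the sender ramps
# its rate up slowly, then halves it whenever loss is detected.
# Capacity and step sizes are illustrative, not real TCP values.
link_capacity = 100  # arbitrary units of bandwidth

rate = 1
utilization = []
for step in range(50):
    if rate > link_capacity:
        rate = rate // 2   # multiplicative decrease: cut rate in half
    else:
        rate += 4          # additive increase: probe upward slowly
    utilization.append(min(rate, link_capacity) / link_capacity)

# The sawtooth pattern means the link is rarely 100% used,
# which is why TCP seldom exploits the full available bandwidth.
print(f"Average utilization: {sum(utilization) / len(utilization):.0%}")
```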

Another issue with the majority of Web pages today is that they are designed to load their data sequentially: an image first, then an ad or a video, then finally the text, or vice versa. If all of these pieces could be loaded at once, in parallel, the site would reach the user much more quickly.
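A rough Python sketch of the difference, using hypothetical placeholder URLs: the sequential version pays for each fetch in turn, while the parallel version pays roughly the cost of the slowest single fetch.

```python
# Sequential vs. parallel fetching of page resources.
# The URLs are hypothetical placeholders, not from the article.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

resources = [
    "http://example.com/image.png",
    "http://example.com/ad.js",
    "http://example.com/video.mp4",
    "http://example.com/text.html",
]

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read()
    except Exception:
        return b""  # a real browser handles errors per-resource

# Sequential: each request waits for the previous one to finish.
start = time.time()
for url in resources:
    fetch(url)
print(f"Sequential: {time.time() - start:.2f}s")

# Parallel: all requests are in flight at once, so total time is
# roughly the slowest single fetch, not the sum of all fetches.
start = time.time()
with ThreadPoolExecutor(max_workers=len(resources)) as pool:
    list(pool.map(fetch, resources))
print(f"Parallel:   {time.time() - start:.2f}s")
```

Multiplexing requests along these lines is part of what SPDY aims to build into the protocol itself, rather than leaving it to each page.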

But while everyone agrees that the old protocols currently in place slow things down, they won’t be easy to replace. The challenge is as much an economic problem as a technical one. Replacing the existing standards would require updating every user’s operating system, along with the servers, the networking software, and the hardware scattered around the globe.

Google’s Plans For A Faster Internet
So in the meantime, Google’s plan is to pressure the current Internet service providers until they offer connections that meet the higher standards it expects and needs. Over the next few years, Google plans to construct and run a 1 gigabit per second Internet connection and test it in a few selected communities in the United States.

That is at least 20 times faster than what the major Internet companies generally deliver today over their FiOS fiber-optic networks, which are among the fastest consumer plans available, and close to 100 times faster than the average connection most users experience.
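Reading those multiples backwards gives a rough sense of the implied baselines; this is just back-of-envelope arithmetic on the figures above, not data from the article.

```python
# Back-of-envelope arithmetic on the stated multiples (assumed units).
google_test_mbps = 1000                # 1 gigabit per second

fios_mbps = google_test_mbps / 20      # "at least 20 times faster"
average_mbps = google_test_mbps / 100  # "close to 100 times faster"

print(f"Implied FiOS-class speed:   {fios_mbps:.0f} Mbps")
print(f"Implied average user speed: {average_mbps:.0f} Mbps")
```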

Google hopes the project will yield information about what it takes to provide that level of service, and will encourage users to demand these higher speeds. But even with faster Internet connections, the existing software would need to be updated and redesigned to take advantage of the extra capacity.

Constructing the required infrastructure would also be grueling, time consuming, and expensive. Major Internet providers have indicated that they will complete their existing FiOS fiber-optic construction projects, but have no plans to launch new ones, even if faster Internet service could reach the end user.

Google, which as mentioned already plans to offer far higher Internet speeds to selected test communities, has no intention of becoming a full-fledged Internet service provider on any significant scale.

Fight The Good Fight
Some of the problems with the way the infrastructure is being deployed are also beyond Google’s ability to resolve. In most cases, the system breaks down at the intermediary stages, which may be too entrenched to change. For example, even with an improved protocol such as SPDY, a misconfigured ISP server can slow down the Web experience for the end user.

These slowdowns are especially common in countries with only a few local data centers, where most data has to travel much farther to reach the end user. In those countries, it falls to the governments to make changes at the infrastructure level.

Even if Google’s plan and these projects eventually pan out, they could still ultimately be stymied. Google’s overall market stranglehold, which gives it the ability to push other companies aside when pursuing its goals, has also come under fire lately, particularly in a recent EU inquiry questioning whether Google exerts too much power and unfair control over how it ranks sites in its search results.

Google’s determination, and its confidence that it can change and control the Web, seems almost endless. Given the sheer size of its resources, there’s little reason to think the speed project won’t succeed. People, businesses, and corporations tend to listen when Google speaks, and there aren’t many other options if they don’t.

But the end game everyone wants is a faster Internet, so this isn’t just about Google. All of this initial spade work will eventually benefit everyone. It won’t just be another win for Google; it will be a success for the Internet as a whole.
