Sunday, August 3, 2008

Moving towards the next-gen Internet...

How do you motivate researchers to move toward a next generation Internet? In 2006 the Internet2 consortium began awarding the IDEA (Internet2 Driving Exemplary Applications) Award to those who "represent applied advanced networking at its best, and hold the promise to increase the impact of next-generation networks around the world".

Judges use the following criteria for determining winners:
  1. the positive impact of the application for its users
  2. the technical merit of the application
  3. the application's impact and likelihood of broader adoption
The IDEA awards have been presented in 2006, 2007 and 2008. Each year, a few different awards are given. A few of the more interesting winners, in my opinion, are:

2006 - Interactive Music Education: Member universities, using the Internet2 backbone, host live interactive music classes, symposiums, and coaching sessions for their music programs. These programs let living composers and conductors, too busy to take part in a formal university schedule, interact with students and faculty. The use of this network also allows potential students to audition for programs without the need for travel, opening up opportunities never before available to students.

I find this award-winning program interesting because of the very nature of its use of Internet2. The streaming and multicasting speeds required for a conductor in one location to lead students at another location are difficult to attain with the Internet of today. Equally 'impossible' would be for students at several locations to play musical pieces simultaneously. Although our Internet of today has made great leaps and bounds in delivering multimedia to our homes and businesses, truly 'live' interaction is still a ways away.

2007 - Ultralight: This project links several next-generation networks together in a managed, network-aware grid to advance new physics projects such as CERN's LHC (Large Hadron Collider) and its CMS (Compact Muon Solenoid) and ATLAS experiments. Each of these projects has more than 2000 scientists, physicists and engineers from around the globe working together, and they generate petabytes of data that are shared and processed by more than 100 facilities today.

The existence of high-speed, secure networks on a global scale is interesting to me because it shows how far the science community has come in such a short period of time. What else would Newton have discovered if he had been collaborating with 1000 other scientists in his time? The LHC project is expected to generate exabytes of data within the next decade, so the need for a higher-speed network with reliable quality of service (QoS) and strong security is apparent. I can only imagine that the rate of our scientific discoveries will continue to grow exponentially.

2008 - Transforming High-Angular Resolution Astrophysics: radio telescopes across the world are networked and use VLBI (Very Long Baseline Interferometry) to create high-resolution images of radio sources of cosmic origin. Networking these telescopes allows them to capture and record very brief cosmic events from several locations, providing different 'slices' of the same image. This allows the astronomers to tweak their instrumentation to get the most out of it while analyzing astronomical events.

This project, like the others I listed, shows how continued efforts to collaborate on a wider, if not global, scale can only enhance contributions to the arts and sciences. The impact the Internet has had over the last decade, and that next-generation Internets will have in the future, never ceases to amaze me. This level of global connectivity is still new to us, and I believe that far more good has come out of it than bad. I hope that the next generation of inhabitants on our planet will fully appreciate the advances made by the original Internet and those being made, and still to be made, by next-generation Internets.

Sunday, July 27, 2008

Has the need for a next generation Internet reached crisis levels?

In the coming three or four years, we will see signs that the existing Internet has outgrown its capabilities. A recent workshop from ESNet put forth the following alerts:

  • IPv4 addresses provided by ARIN will soon be exhausted
  • the DFZ (default-free zone) continues to grow
  • hardware stress - as the Internet grows, forwarding tables are approaching capacity in many existing routers
  • RAM issues - routing table size exceeds available memory in some routers
  • CPU issues - routing table recalculations can take longer than the time between updates, and when routers fall behind, they stop routing!

In actuality, it is not existing USED address space that is running out. Existing ALLOCATED address space is what ARIN will no longer be able to freely give out. This means that address space will likely become a commodity. Many organizations, countries and businesses hold large amounts of unused addresses. Once free allocations end, it is likely that addresses will begin to be sold. This only leads to further problems, especially in how it affects address routing.

In my home, I have a few PCs and a laptop which share a default route for any traffic leaving my personal network. This default route is provided by my ISP. It is somewhat likely that my ISP also uses a default route, at least regionally, to send its traffic along. At some point, however, my requests are handled by routers that forward traffic using full routing tables. These routers exist in what is referred to as the DFZ (default-free zone). As the number of used address blocks grows, the routing tables for the routers in the DFZ grow as well. This will continue to pose hardware issues for these routers: most are built with processors and memory that are far from 'bleeding edge', and they tend to stay in service until they are no longer useful, so the growth of these routing tables can cause slowdowns and outages on a global level.
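To make the lookup burden concrete, here is a minimal sketch (my own illustration, not any router's actual code, with made-up prefixes and next hops) of what a DFZ router does for every packet: find the longest matching prefix in its table. The bigger the table, the more memory it occupies and the more work each lookup and recalculation takes.

```python
import ipaddress

# A toy slice of a default-free routing table: prefix -> next hop.
# Real DFZ tables held roughly a quarter-million prefixes in 2008 and keep growing.
routing_table = {
    ipaddress.ip_network("192.0.2.0/24"): "peer-A",
    ipaddress.ip_network("198.51.100.0/24"): "peer-B",
    ipaddress.ip_network("203.0.113.0/24"): "peer-C",
    ipaddress.ip_network("203.0.113.0/25"): "peer-D",
}

def lookup(destination: str) -> str:
    """Longest-prefix match: the most specific route wins."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    if not matches:
        # A DFZ router has no default route to fall back on.
        raise LookupError(f"no route to {destination}")
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("203.0.113.10"))  # "peer-D" - the /25 beats the /24
```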

Since hardware appears to be the biggest issue, budgets for new routers need to be increased. Selling some of these unused addresses could possibly provide funding for the upgrades. In order to sell the unused addresses, however, they will need to be broken up into smaller blocks. This causes additional deaggregation of addresses, which in turn creates more and more entries for those DFZ routers to carry. It reminds me of a dog trying to chase its tail!
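Here is a rough sketch of why deaggregation worries me, using Python's ipaddress module and a made-up block from reserved benchmarking space: one aggregate announcement today becomes hundreds of announcements once the block is sold off in pieces, and every DFZ router has to carry every one of them.

```python
import ipaddress

# One large block, announced as a single aggregate: one entry in every DFZ routing table.
block = ipaddress.ip_network("198.18.0.0/16")

# If the holder sells it off in /24-sized pieces to different buyers, each piece
# must be announced (and stored, and recalculated) separately.
pieces = list(block.subnets(new_prefix=24))

print(len(pieces))             # 256 routing-table entries where there used to be 1
print(pieces[0], pieces[-1])   # 198.18.0.0/24 ... 198.18.255.0/24
```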

Some people point to IPv6 as the solution to most of these issues; however, there does not seem to be a transition plan in place yet. For a networking technology that has been around for well over a decade, and with the writing on the wall that IPv4 is not sustainable, I am amazed that we are not better prepared. Although this crisis may not have the same impact as something like global warming, it appears that mankind is willing to wait until catastrophe occurs before taking any action.

Sunday, July 20, 2008

Other Next Generation Internets...

My last post centered on the details of one specific effort to develop and test the next generation of the Internet, led by universities, research labs and government agencies, and known most commonly as Internet2. I do not want to confuse the reader into thinking that Internet2 is the only network working with new technology to lead the charge for the next generation Internet. While Internet2 is largely made up of, and funded by, universities, other groups have made large inroads into developing their own vision and implementation of a future Internet.

The best-known group, NGI (Next Generation Internet), was formed by congressional and presidential order in 1996. Its goals were threefold: to continue networking research, to create advanced test beds, and to develop revolutionary applications. The NGI initiative provided much-needed research into handling real-time multimedia traffic. Its test bed goal had two parts: part one was to network over 100 locations, end to end, at speeds up to 100 times the 1Gbps rate of the Internet at that time; part two was to connect at least 10 locations at 1000 times that rate. The NGI successfully implemented several new applications geared toward high-speed, secure networking, including national security applications and tools for laboratories to collaborate with each other, and it assisted universities by developing distance education applications.

The life cycle of the NGI initiative was only five years, from 1997 to 2002. Although the 1Tbps goal was never achieved by the NGI initiative, its work was taken over by the LSN (Large Scale Networking) coordinating group of the NITRD (Networking and Information Technology Research and Development) program. This group continues the NGI's research to this day. Other groups working towards development of a next generation Internet include:

  • Euro-NGI
  • China Education and Research Network 2 (CERNET2)
  • Next Generation IX Consortium
  • NGI-NZ Society
  • Internet2
Several of these groups have announced the formation of the Global Terabit Research Network (GTRN); however, it does not appear to be an official entity (at least, there is little to be found when researching it other than press releases announcing its formation in 2002).

The goals of these varied groups, although generally nationalistic, have common ground. Each group is working not only to increase network capacity, but also to develop new technology, including (but not limited to) security and performance-measurement improvements that are very important to the growth of the Internet.

Our current Internet was incubated within US government programs run by DARPA and NSF. From this came university and research center involvement. This grew into the commercialization of the Internet and into what it is for us today: a commodity that is competitively provided.

This next generation Internet, being researched and implemented by so many organizations (some solely government-based, others collaborations of government and education/research institutions, still others incorporating input from the business sector), should provide a much larger springboard to commercialization than the current Internet did. As ISPs begin to upgrade their networks in order to support and connect to these networks, as hardware and software providers continue to grow their product lines to support faster connections and newer protocols, and as we consumers demand higher bandwidth to satisfy our never-satisfied appetite for multimedia content, this next generation Internet should become a reality.

Sunday, July 13, 2008

A next generation Internet - already in use!!!

My last entry focused on the fact that the current underlying structure of the Internet has a flaw the original researchers never anticipated: that IPv4's 4 billion addresses would not meet our needs. Another important factor in the need for the next generation Internet is the massive amount of data exchanged over the Internet, which continues to grow with no end in sight.

The original creators of what is now our Internet were government agencies, researchers and universities. As the Internet became easily accessible to the consumer, this group continued to use it alongside you and me. With available bandwidth and security both growing concerns, they began building and testing a new network for their own use. This network is maintained by the Internet2 consortium, made up of, you guessed it, government agencies, research centers and universities.

Internet2's network began over 10 years ago to address these bandwidth and security needs. As research tools generated more data, as high-quality video became digitized, and as more and more educational material was converted to digital form, using the existing public Internet was less and less practical. Researchers needed the ability to exchange massive amounts of data with each other at speeds unavailable at the time.

Internet2's initial network brought speeds of 100 Mbps when we were all connecting to the World Wide Web with 56K modems. By the time DSL and cable operators provided us with 1.5 Mbps download speeds, Internet2's network provided 10Gbps. Its existing network provides 100Gbps and should provide 400Gbps by 2012. See the graphic for the history of the speed of ESnet (Energy Sciences Network), an Internet2-provided network.



Over this time, their network has been upgraded at least three times: from the early days with MCI-provided connections, to the long-running network nicknamed Abilene (built in partnership with Qwest), to today's all-fiber network provided by Level 3 Communications, which can theoretically supply the 400Gbps speeds.

According to a member university of the Internet2 consortium, its purpose is threefold:
  • To create and sustain a leading edge network capability for the national research community.
  • To direct network development efforts to enable a new generation of applications to fully exploit the capabilities of broadband networks.
  • To work to rapidly transfer new network services and applications to all levels of educational use and to the broader Internet community, both nationally and internationally.

In just ten years, the Internet2 consortium has made great strides in accomplishing these goals. Member organizations regularly perform speed tests to try to continue to push the envelope of network speed over distance. Internet2 recognizes achievements in its I2 Land Speed Record contest. Year after year, new records are set. Ongoing competitions like this help the consortium continue to meet its goals and achieve its purpose.

Just as the original Internet was built upon the research and needs of government agencies, universities and research centers, so will this next generation Internet be. As our lives become more intertwined with the devices we carry, the entertainment we demand, and the connection with the rest of the world we desire, a reliable, fast and secure network will be required. The members of the Internet2 consortium are paving the way for that network!

Sunday, June 29, 2008

When will we need this next generation Internet?

Sooner than you'd think!

The current addressing system of the Internet uses Internet Protocol v4 (IPv4), which provides a total of 4.3 billion addresses, of which only a billion or so are still available. By different accounts, all available IPv4 Internet addresses will be used up sometime between 2010 and 2013, with the more recent estimates favoring the 2010 to 2011 timeframe. Once these addresses run out, existing address holders will be fine - only new organizations, individuals, and devices needing unique addresses will be shut out.

The US government and a few US universities which assisted in creating the Internet have been allotted more addresses than all of Asia. This means that a shortage will be felt well before the number of in-use addresses hits the 4.3 billion maximum. Will these entities sell some of their addresses to be used commercially or by other governments? If so, it will only be a temporary stopgap.

So what is to be done to prevent the Internet from reaching its maximum size? In the past, technologies like NAT (Network Address Translation), which lets many devices share a single public address, and CIDR (Classless Inter-Domain Routing), which lets address blocks be carved to the size actually needed, have stretched the existing space. With the ever-increasing growth of the Internet, however, these have served merely as a band-aid over the underlying problem.
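A minimal sketch of the two stopgaps, using made-up addresses and Python's ipaddress module: NAT hides many private hosts behind one public address by telling them apart on port number, and CIDR lets an allocation match actual need instead of the old classful sizes.

```python
import ipaddress

# NAT: many private hosts share one public address, told apart by port number.
# (Illustrative entries only - real NAT tables are built dynamically per connection.)
nat_table = {
    ("192.168.1.10", 51000): ("203.0.113.5", 40001),
    ("192.168.1.11", 51000): ("203.0.113.5", 40002),
    ("192.168.1.12", 52344): ("203.0.113.5", 40003),
}
public_addresses_used = {pub for pub, _ in nat_table.values()}
print(len(nat_table), "hosts behind", len(public_addresses_used), "public address")

# CIDR: carve the allocation to fit the need instead of handing out a classful block.
need = 900  # hosts actually required
classful = ipaddress.ip_network("172.16.0.0/16")  # old "Class B" sized answer
cidr = ipaddress.ip_network("172.16.0.0/22")      # classless answer
print(classful.num_addresses, "addresses tied up the classful way")
print(cidr.num_addresses, "addresses with CIDR - enough for", need, "hosts")
```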

IPv6, which provides roughly 340 undecillion (3.4 x 10^38) addresses, has been around for over ten years, but is only beginning to be used by government agencies and universities. These front-runners can't just flip a switch over to IPv6, however; they must run IPv4 and IPv6 in unison so that the IPv4-only majority of the world can still reach their sites. This makes service providers balk at the increased cost of maintaining two parallel paths to the same systems.
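The address math itself is simple to check (pure arithmetic, nothing assumed beyond the 32-bit and 128-bit address widths):

```python
# IPv4 and IPv6 address counts, straight from their address widths.
ipv4_total = 2 ** 32    # 4,294,967,296 - the ~4.3 billion mentioned above
ipv6_total = 2 ** 128   # ~3.4 x 10^38

print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:.3e}")
print(f"IPv6 holds {ipv6_total // ipv4_total:.2e} IPv4-sized address spaces")
```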

Whenever this cutoff does begin to occur, I am sure that the consumer will ultimately pay the price for the infrastructure change in the form of rate hikes and usage restrictions. Hopefully the transition has already begun, so that we are not soon hit with additional sticker shock!