This page has been designed to teach you about the internet very quickly. Here is an introduction to the internet's origin.
Who made the internet?
One of the biggest misconceptions about the internet is that it was planned. The internet was indirectly created back in the '50s by the US government during the Cold War. The government was tired of flying magnetic tapes back and forth between computers. "Email" consisted of people on roller skates carrying pieces of paper from one cubicle to another. Meanwhile, telephones and telegrams continued to use wires to transmit information. The government knew there was a better way of transferring information, so it decided to devise a way to link computers together using cables.
The first objective the government had when designing a network was to make sure the system was robust: if one computer crashed or was disconnected, the other computers needed to stay connected without interruption. They wanted a design whereby computers could be connected and disconnected without disturbing any of the other computers on the network. Anyone who has searched a string of Christmas tree lights for that one burnt-out bulb can appreciate the design they were looking for.
Conquering data transfer, the birth of TCP
What is data? In a computer, it's a collection of binary digits, 0 and 1, a.k.a. ON and OFF. What else uses ON and OFF as a method of transferring information? Do you remember Morse code, the method by which telegrams were sent for over a hundred years? What the early computer scientists did was update an age-old solution using modern techniques: they devised a means to use computer switches instead of humans to transfer the 1s and 0s between locations.
But this solution raised a question: if you remove humans, then who answers the call? How do we determine who the information is for? To answer this, they needed a method of operation, a set of rules, a "protocol" to control how computers speak to each other. The best way to understand the answer is to review the anatomy of a telegram.
A basic telegram follows the same composition every time. A human sitting in a telegram station awaits a particular combination of clicks and taps signaling that a new message is coming in. Once confirmed back to the sending station with other taps and clicks, the transfer of information is on! Each message consisted of a START, a TO, a FROM, a MESSAGE, and an END. The inventors of the first network protocol used this anatomy to build "packets" of information that are transferred over the 'net. They gave this protocol the name TCP, or Transmission Control Protocol, for it is the method by which information is transmitted.
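The telegram anatomy above maps loosely onto what a packet carries. Here is a toy sketch of that idea; the names and the text-based framing are made up for illustration, since a real TCP segment is a binary header with ports, sequence numbers, flags, and checksums.

```python
from dataclasses import dataclass

# A toy "packet" mirroring the telegram anatomy: TO, FROM, MESSAGE,
# framed by START and END markers. Illustration only, not real TCP.
@dataclass
class ToyPacket:
    to_addr: str      # TO: who the message is for
    from_addr: str    # FROM: who sent it
    message: str      # MESSAGE: the payload

    def encode(self) -> str:
        # START and END frame the transmission, just as opening and
        # closing taps framed a telegram.
        return f"START|{self.to_addr}|{self.from_addr}|{self.message}|END"

packet = ToyPacket("location B", "location A", "HELLO")
print(packet.encode())
# START|location B|location A|HELLO|END
```

Framing matters because the receiver must know where one message ends and the next begins, just as the telegram operator did.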
What is an IP?
Internet Protocol was an additional feature set added to TCP to give each TCP packet an unambiguous owner. The problem: if a person at location A wanted to send a message to a person at location B, TCP alone would demand that a mediator at location B read the contents of the message to determine who should receive it. With the addition of the IP extension, TCP/IP could finally deliver a message from an individual at location A to an individual at location B without human intervention, using an IP address to uniquely identify each party involved in the transaction.
What is an IP Address?
An IP address is much like a phone number with different formatting. Four numbers separated by periods make up the anatomy of an IP address. Each number can range from 0 to 255 (256 possibilities). By using four such numbers, the government was able to guarantee 256 x 256 x 256 x 256 unique addresses (roughly 4.3 billion unique numbers). Each user on the internet today is assigned a unique IP address in order to communicate over the web.
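That arithmetic is easy to check: each of the four fields independently ranges over 256 values, so the total space is 256 to the fourth power. A small sketch, with a made-up validator function to match:

```python
# Each of the four numbers in an IPv4 address ranges from 0 to 255,
# so the total address space is 256 * 256 * 256 * 256 = 256**4.
total_addresses = 256 ** 4
print(total_addresses)   # 4294967296, roughly 4.3 billion

def is_valid_ipv4(address: str) -> bool:
    """Check that a string is four dot-separated numbers, each 0-255."""
    parts = address.split(".")
    return (len(parts) == 4
            and all(p.isdigit() and int(p) <= 255 for p in parts))

print(is_valid_ipv4("192.168.0.1"))   # True
print(is_valid_ipv4("256.1.1.1"))     # False: 256 is out of range
```

Python's standard library also ships an `ipaddress` module that does this parsing properly, including edge cases this toy check ignores.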
We are running out of IP numbers. Due to the nature of computer use today, there are more computers than users, and the government's initial numbering scheme, which provides about 4.3 billion unique addresses, is being exhausted by its allocation methods and by overall consumption.
IP Allocation Methods - The government divided the allocation of IP numbers much as the telephone companies distribute telephone numbers. A telephone number has an area code, a prefix, and a number; likewise, the government devised a scheme that would let them identify an organization simply by examining the IP address itself. Similar to the designation of a toll-free 800 area code, IP addresses have their own indexing methods. The primary drawback of this allocation method is that large ranges of numbers are held by organizations (mostly military) that don't have anywhere near that many computers. For instance, imagine allocating a state with 50 people 10 million phone numbers (an average amount for an area code); you would waste 9,999,950 numbers in the process. This is where many IP addresses have been consumed.
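The indexing scheme alluded to above is the historic "classful" system: the first of the four numbers tells you the class of network, and therefore how large a block the owning organization received. A minimal sketch of that lookup:

```python
# Historic classful IPv4 allocation: the first octet determines the
# class, and the class determines the size of the allocated block.
def address_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"    # about 16.7 million addresses per network
    elif first <= 191:
        return "B"    # 65,536 addresses per network
    elif first <= 223:
        return "C"    # 256 addresses per network
    return "D/E"      # multicast and experimental ranges

print(address_class("17.0.0.1"))    # A: a huge block for one owner
print(address_class("150.1.1.1"))   # B
print(address_class("192.0.2.1"))   # C
```

A single class A network holds roughly 16.7 million addresses, which is exactly the kind of oversized grant the paragraph above complains about; classful allocation was later replaced by CIDR to curb this waste.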
Dynamic IP Allocation - One of the methods used to get around the shortage of allocated IP addresses is dynamic allocation. A company is given a designated pool of IP addresses, and the network software distributes them as users log into the network. This allows 1,000 IP addresses to be shared amongst 1,500 users. If a business works in hourly shifts, there may be no need for 1,500 computers to be on the network at one time: virtual assignment allows up to 1,000 users at once while 500 employees prepare for the next shift. The obvious drawback is that if and when the 1,001st user logs into the network, their computer will fail to establish a unique identity on the web and will be refused a connection.
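The scheme just described, a fixed pool handed out on login and reclaimed on logout, is the idea behind DHCP. A toy sketch with a made-up two-address pool, showing the "1,001st user" refusal in miniature:

```python
# A toy dynamic-allocation pool in the spirit of DHCP: addresses are
# leased as users log in and returned as they log out. The pool size
# and addresses here are invented for illustration.
class AddressPool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leased = {}

    def log_in(self, user):
        if not self.free:
            raise RuntimeError("pool exhausted: connection refused")
        self.leased[user] = self.free.pop()
        return self.leased[user]

    def log_out(self, user):
        # Returning the lease makes the address available again.
        self.free.append(self.leased.pop(user))

pool = AddressPool(["10.0.0.1", "10.0.0.2"])   # a pool of just 2
pool.log_in("alice")
pool.log_in("bob")
try:
    pool.log_in("carol")    # the "1,001st" user, scaled down
except RuntimeError as err:
    print(err)              # pool exhausted: connection refused
```

Once any user logs out, their address goes back into the pool and the next login succeeds, which is exactly how shift workers can share a smaller pool.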
The Internet and Universities
When the government finally realized the cost of owning and operating a worldwide network of computers, it concluded that the expense was too great to justify on Cold War grounds alone. Examining the problem more closely, they noticed that another worldwide institution had the financial capability to assume the burden of such a huge infrastructure: in every major city there were colleges that could benefit from sharing information on a global scale. During the late 1960s and early '70s the government transitioned many secondary computer hubs to local universities for private operation, then piggybacked on these connections at a much reduced cost.
Growth of the internet
Since the early '80s, when the government began to share its network technology with the world, there has been growth on a scale that is hard to imagine. To put it into perspective: in the early '80s there were only 213 registered hosts on the internet. By 1986, this number had risen to 2,308 hosts. By January 1995, there were over 4.85 million registered hosts. This number does not include personal computers that were accessing the internet, but merely the servers that make up the internet.
Who owns the internet?
No one person or country owns the internet. Literally millions of governments, corporations, universities, commercial companies, and citizens own the internet jointly, which means no one can control it in its entirety. In the United States there is a group called the National Science Foundation (NSF) that oversees methods of improving the internet's performance. The NSF is supported by the Internet Engineering Task Force (IETF), a committee that conforms to guidelines set by the Internet Architecture Board (IAB).
In reality, there are many groups that manage every facet of the internet. Unless you plan to devote your life to serving on one of these committees, you can probably live a very prosperous life on the internet without knowing they exist.
Where do all these services come from?
Services like web pages, FTP sites, newsgroups, e-mail, etc., are all individual functions of computers we call servers (or hosts). Some servers have few features, while others offer all kinds of services. Luckily for you and me, the servers know who they are, and they know where the other computers are as well. When you request a web page, the servers work together to help you find the server where that page lives. If one computer between you and that server is down, the others will help find a path for your request to travel. Once the path is resolved, the connection is made, and you see your web page. The same is true of most requests over the internet.
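The "find another path when one machine is down" behavior can be sketched as a search over a tiny, invented network of servers; real routing protocols are far more sophisticated, but the core idea is the same.

```python
from collections import deque

# A toy network: each machine knows its neighbours. If one machine
# is down, the others cooperate to find another route, here sketched
# as a breadth-first search. The topology is made up.
links = {
    "you":       ["hub1", "hub2"],
    "hub1":      ["you", "webserver"],
    "hub2":      ["you", "hub3"],
    "hub3":      ["hub2", "webserver"],
    "webserver": ["hub1", "hub3"],
}

def find_path(start, goal, down=()):
    """Return a path from start to goal, avoiding downed machines."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in links[path[-1]]:
            if nxt in seen or nxt in down:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None   # no route at all

print(find_path("you", "webserver"))                  # direct route via hub1
print(find_path("you", "webserver", down=("hub1",)))  # rerouted via hub2, hub3
```

When hub1 goes down, the request still arrives; only when every path is severed does the lookup fail, which mirrors the robustness goal stated at the start of this page.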
- The internet is a huge collection of computers world wide.
- They are connected to each other by a method that guarantees access (provided that computer is operational and connected to the network).
- There are servers (also referred to as "hosts") that provide services of all types.
- No one person or group owns the internet, yet there are many committees and organizations that supervise its developmental evolution.