Overview, Statistics and WHOIS
1969 Introducing the Internet
C, a portable language; UNIX, a universal operating system
1983 a landmark year
Decentralized Routing and ISPs
World Wide Web
ARIN and ICANN
New Modems, Wireless Networks and Smartphones
Foreign Characters in Domain Names
Latest US Stats
Internet in Australia and the NBN
HTML - Hyper Text Markup Language
Other Top Languages
Firstly, some statistics. In 2017, 3 billion individual users access some 904 million host computers via a global routing table of 660,000 networks on 57,000 Autonomous Systems (AS). An Autonomous System is a single network or group of networks, typically governed by a large enterprise, with multiple links to other Autonomous Systems. These are then serviced by the several hundred backbone Internet Service Providers (ISPs) that make up the core of the Internet, overseen by five Regional Internet Registries (RIRs). E-mail is sent, and web pages are found, through the use of domain names. There are now 329 million domain names, with 128 million of them ending with those three letters .com. All of these names are overseen by registrars, with Go Daddy currently the largest, having 63 million domain names under management. That's a lot. So for this to work, you as a user connect to a local ISP's network. You then have access to its Domain Name System server - DNS server for short - software on a computer that translates a host name you send it, e.g. www.google.com, into a corresponding IP address (188.8.131.52). This Internet Protocol address specifies first the network, and second the host computer (similar to the way a phone number works). If the DNS server doesn't know the host name, it endeavours to connect (within a second or so) to an authoritative DNS server that does.
Ultimately, that server is one of
At this point your DNS server caches ("stores") that name and IP address for subsequent requests, for perhaps 24 hours or so. After that, it empties the name and IP address from its cache, which means that the next time the name is requested, the ISP has to look it up again. This cache minimizes requests made on the authoritative DNS servers, but also ensures it won't be out of date for more than 24 hours or so on any domain. And of course this only matters if the domain changes hosts. To further reduce Internet traffic, desktops and mobiles also cache the host name's IP address, along with copies of the web page, and only download fresh data after the set time has elapsed. A proxy server similarly caches copies of pages for computers on its network. Note that manually pressing page refresh doesn't update the DNS cache — the stored IP address. Click here for how to manually clear a DNS cache on your desktop. On iPhones, switching to Airplane Mode, then switching back, clears the DNS cache. With Android phones, navigating to Settings -> Apps -> Chrome allows you to clear the cache. For more reading,
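The caching behaviour described above can be sketched in a few lines of Python. This is a toy model only, with hypothetical names — real resolvers store full DNS records and per-record TTLs — but it shows the core idea: a cached answer is served until its time-to-live expires, after which the name must be looked up again.

```python
import time

class DnsCache:
    """A toy name-to-address cache with per-entry expiry (illustrative only)."""

    def __init__(self, ttl_seconds=86400):       # roughly 24 hours, as described above
        self.ttl = ttl_seconds
        self.entries = {}                        # name -> (address, expiry time)

    def put(self, name, address):
        self.entries[name] = (address, time.time() + self.ttl)

    def get(self, name):
        record = self.entries.get(name)
        if record is None:
            return None                          # never seen: must ask upstream
        address, expires = record
        if time.time() >= expires:
            del self.entries[name]               # stale: must look it up again
            return None
        return address

cache = DnsCache()
cache.put("www.example.com", "192.0.2.1")        # documentation-only address
print(cache.get("www.example.com"))              # fresh entry: 192.0.2.1
```

A second `get` a day later would return `None`, forcing a fresh lookup — exactly the "empties the cache" step described above.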
Now, if looking up details for one of the "open" .au domains, you, as an individual, can go to AusRegistry. This database provides the IP addresses of name servers for all the domains within the five "open" 2nd level domain (2LD) space, i.e. ones ending in .com.au, .org.au, .net.au, .id.au or .asn.au (asn = association). Note - since 2002 when it was appointed, AusRegistry has never dealt directly with the public in registering domains. Commercial registrars carry out this task, thus preventing potential "conflict of interest" situations within AusRegistry. Then, with regard to "closed" government 2LDs,
Some background: On October 25th 2001, auDA (a Government endorsed body) became the authorized Domain Administrator for the .au TLD. It began by appointing AusRegistry in July 2002 on 4 year terms, last renewed in December 2013 for the 4 year term 2014 - 2018. Prior to auDA and AusRegistry,
Click here to view a document with a breakdown of annual fees charged by AusRegistry to authorized registrars.
Example - Stephen Williamson Computing Services
So, by going to AusRegistry, we learn that the host computer for the domain
This means that swcs.com.au is currently hosted on the Quadra Hosting network, who provide a virtual Web hosting service capable of hosting numerous domains transparently. Thousands of different domains might in fact share the same processor (with pages being published in different folders). If the Internet traffic grows too heavy on this shared server, the swcs domain may in the future require its own, dedicated server. (This situation has not yet been reached).
Click here for a web page that will look up the IP address for a specific domain.
Click here to download a free program that will look up any IP address or Host, by accessing the WHOIS section of each of the five regional bodies responsible for IP address registration: ARIN, RIPE, APNIC, LACNIC, and AfriNIC.
Your computer uses this IP address to form
There are 4 billion addresses available - 2 to the 32nd power -
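The size of the IPv4 address space, and the network/host split mentioned earlier, can be sketched with Python's standard library. The address used below comes from a documentation-only range and is purely illustrative:

```python
import ipaddress

# There are 2**32 possible IPv4 addresses:
print(2 ** 32)                          # 4294967296, i.e. about 4 billion

# An address plus a prefix length splits into a network part and a host part,
# much like an area code and a local phone number.
iface = ipaddress.ip_interface("192.0.2.130/24")
print(iface.network)                    # 192.0.2.0/24 -> the network part
print(int(iface.ip) & 0xFF)             # 130          -> the host within that /24
```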
Computer routers store and forward small data packets between computer networks. Gateway routers repackage and convert packets travelling between homes/businesses and ISPs, or between ISPs. These connect with core routers, which form the Internet backbone. So, how did it all come together? In a nutshell, it came about as a joint, open exercise between the U.S. Military and the research departments at a number of key universities. For
It began in 1969, when the Defense Advanced Research Projects Agency (DARPA), working with
Over in France in 1963 Louis Pouzin had written RUNCOM, an ancestor to the command-line interface and the first "shell" script. Now in 1972 he designed the datagram for use in Cyclades, a robust, packet switching network that always "assumed the worst" i.e. that data "packets" being transferred over its network would always reach their final destination via unreliable / out of order delivery services. Drawing on these ideas, in 1973 Robert Kahn & Vinton Cerf started work on a new Internetwork Transmission Control Program using set port numbers for specific uses. Click here for an initial list in December 1972. It used the concept of a "socket interface" that combined functions (or ports) with source and destination network addresses, connecting user-hosts to server-hosts.
In the late 1970s, DARPA decided to base their universal computing environment on BSD UNIX, with all development to be carried out at the University of California in Berkeley. UNIX had been greatly influenced by an earlier operating system Multics, a project that had been funded by DARPA at
In 1980, with IPv4 now defined, the National Science Foundation created a core network for institutions without access to the ARPANET. Three computer science departments — Wisconsin-Madison, Delaware and Purdue — initially joined. Vinton Cerf came up with a plan for an inter-network connection between this CSNET and the ARPANET.
Meanwhile, at the hardware cabling level, Ethernet was rapidly becoming the standard for small and large computer networks over twisted pair copper wire. It identified the unique hardware address of the network interface card inside each computer, then regulated traffic through a variety of switches. This standard was patented in 1977 by Robert Metcalfe at the Xerox Corporation, operating with an initial data rate of 3 Mbps. Success attracted early attention and led in 1980 to the joint development of the 10-Mbps Ethernet Version 1.0 specification by a three-company consortium: Digital Equipment Corporation, Intel Corporation, and Xerox Corporation. Today, the IEEE administers these unique Ethernet addresses, sometimes referred to as media access control (MAC) addresses. Each is 48 bits long and is displayed as 12 hexadecimal digits (six groups of two digits) separated by colons, allowing for some 281 trillion unique addresses. An example of an Ethernet address is 44:45:53:54:42:00 — note — the IEEE designates the first three octets as vendor-specific. To learn the Ethernet address of your own computer in Windows, at a Command Line prompt type ipconfig /all and look for the physical address. To learn the Ethernet address of your ISP's computer, type ARP -a, then look for the physical address that applies to the default gateway.
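The structure of a MAC address can be pulled apart in a few lines of Python, using the example address from the text:

```python
mac = "44:45:53:54:42:00"                 # the example address from the text

octets = mac.split(":")
vendor_oui = ":".join(octets[:3])         # first three octets: the vendor prefix
device_id = ":".join(octets[3:])          # last three: assigned by that vendor

as_int = int(mac.replace(":", ""), 16)    # the address as a single 48-bit number

print(vendor_oui)                         # 44:45:53
print(device_id)                          # 54:42:00
print(2 ** 48)                            # 281474976710656 possible addresses
```

The final line is where the "281 trillion" figure comes from: 48 bits give 2^48 = 281,474,976,710,656 combinations.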
Back to the Internet. On January 1st 1983, the Defense Communications Agency at Stanford split off the military network — MILNET — from their research based ARPANET network, and then mandated TCP/IP protocols on every host. In May, the
In 1984 in Europe, a consortium of several European UNIX systems manufacturers founded the
But meanwhile on an academic level, the University of Wisconsin established the first Name Server — a directory service that looked up host names when sending email on the CSNET. In September 1984, taking this to the next logical step, DARPA replaced the HOSTS.TXT file with the Domain Name System, establishing the first of the Top Level Domains — .arpa .mil .gov .org .edu .net and .com. In 1985, with 100 autonomous networks now connected — click here to see a 1985 primary gateway diagram, registration within these TLDs commenced.
In 1986, there was major expansion when the National Science Foundation built a third network, the NSFnet, having high speed links to university networks right around the country. In 1987, the
In 1989, with 500 local networks now connected through regional network consortiums, the
Over in Europe back in February 1991,
It was around this time that Jean Polly published the phrase 'Surfing the INTERNET'.
Meanwhile in Amsterdam, Holland,
And in Australia, the AARNet who had linked all the universities in April-May 1990, now "applied to IANA for a large block of addresses on behalf of the Australian network community .... because allocations from the US were taking weeks .... The address space allocated in 1993 was large enough for over 4 million individual host addresses .... The Asia-Pacific Network Information Centre (APNIC) then started as an experimental project in late 1993 (in Tokyo), based on volunteer labour and donated facilities from a number of countries. It evolved into an independent IP address registry ... that operates out of Brisbane" - R.Clarke's Internet in Australia
Back in the U.S.
The Dept of Defense now ceased all funding of the Internet apart from the .mil domain. On January 1st 1993 the National Science Foundation set up the Internet Network Information Center - InterNIC, awarding Network Solutions the contract for ongoing registration services, working co-operatively with AT&T for directory, database & (later) information services.
This same year 1993, students and staff working at the NSF-supported
Regarding these new dial-up home users, the plan was to be able to dial up an ISP's telephone number using a home phone modem, be automatically granted access to a modem from a pool of modems at the ISP's premises, and thus have a temporary IP address assigned to the home computer for the length of the phone call. Initial costs for these SLIP / PPP connections were $US175 per month. But competition between ISPs & new technology meant that over the next two years prices plummeted rapidly. So while Mosaic was a fairly basic browser by today's standards, its new features introduced huge numbers of "unskilled" users to the web. At the end of 1993 there were 20,000 separate networks, involving over 2 million host computers and 20 million individual users. Click here to see year by year growth.
In February 1994, the NSF awarded contracts to four NAPs (Network Access Points) or, as they are now known,
On April 30 1995, the NSFnet was dissolved. The Internet Service Providers had now taken over — internetMCI, ANSnet (now owned by AOL), SprintLink, UUNET and PSINet. Click here to see a diagram. There was a massive surge in registrations for the .com domain space. In August, Microsoft released
At this time data encryption came to the fore via the Secure Socket Layer or SSL protocol, which changed all communication between user and server into a format that only user and server could understand. During an initial "handshaking" exchange, the user's software checks the server's identity and the two sides use the server's public encryption key to agree on a shared session key; that session key then encrypts the data flowing in both directions. Click here for further details.
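SSL's modern successor is TLS, and Python's standard library exposes it directly. The sketch below only builds a client-side context — no network connection is made — but it shows the defaults that make the handshake described above safe: server certificates must verify, and the certificate must match the host name.

```python
import ssl

# A minimal sketch: a client-side TLS context from Python's standard library.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)   # True: certificates are checked
print(context.check_hostname)                     # True: host names must match
```

In real use, `context.wrap_socket(sock, server_hostname="example.com")` would perform the handshake and return a socket whose traffic is encrypted with the negotiated session key.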
In December 1997, ARIN - American Registry of Internet Numbers - a nonprofit corporation - was given the task of registering the IP address allocations of all U.S. ISPs, a task previously handled by Jon Postel/InterNIC/Network Solutions. Meanwhile, since September 1995, there had been widespread dissatisfaction at the $50 per annum domain name fees for the five generic TLDs .com .net .org .gov .edu, and back in 1996 Jon Postel had proposed the creation of a number of new, competing TLDs. With this in mind, on January 28 1998, he authorized the switching over of 8 of the 12 root servers to a new IANA root zone file, thus, in effect, setting up two Internets. Within the day, a furious Ira Magaziner, Bill Clinton's senior science advisor, insisted it be switched back. Within the week, the US Govt had formally taken over responsibility for the DNS root zone file. On September 30 1998, ICANN - Internet Corporation for Assigned Names and Numbers - was formed to oversee InterNIC for names and IANA for numbers under a contract with the
In December 1998, the movie "You've Got Mail" was released, starring Tom Hanks and Meg Ryan and featuring AOL as their ISP. In June 1999, with ICANN's decision to allow multiple registrars of the generic domain names .com .org and .net, Network Solutions lost its monopoly as sole domain name registrar. And with competition, registration costs for generic .com domain names dropped from $50 to $10 per annum. As mentioned previously, this .com domain name registry, by far the largest TLD with 118 million names, is now operated by Verisign, who purchased Network Solutions in 2000. Around the same time, search engines became an essential part of the web. Click here for an article on How Search Engines Work.
Cable Modems: Firstly, some background regarding cable TV. In the US it goes back to 1948. It was introduced into Australia in 1994 by Optus, who implemented it with fibre-optic cable (i.e. transmitting via on/off light pulses). Fibre optic is more fragile than copper, and Optus (and Foxtel) employed FTTN — fibre (just) to the node — with coaxial copper wire for its final "last mile" connection. Regarding FTTP (Fibre to the Premises), OECD stats in 2009 showed that Japan had 87%, and South Korea 67%, of their households installed with it. However, the difficulties with fibre meant that FTTP installations in other countries were much lower.
Now in 1996 in the US, cable modems lifted download speeds on the Internet from 56Kbps to 1.5Mbps (i.e. over 25 fold) and more. Microsoft and NTT ran pure fibre-optic tests and saw speeds as high as 155Mbps.
ADSL Modems: In 1998, ADSL (Asymmetric Digital Subscriber Line) technology (deployed on the "downstream" exchange-to-customer side) and a small 53 byte ATM format (on the "upstream" exchange-to-ISP side) were retooled for Internet access, offering initial download speeds of 768Kbps. ATM packets had originally been developed to meet the needs of Broadband ISDN, first published in early 1988. Click here for more info.
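The 53 byte ATM cell mentioned above has a fixed layout: 5 bytes of header (routing and control information) and 48 bytes of payload. A quick calculation shows what that costs:

```python
CELL_SIZE = 53                       # bytes per ATM cell
HEADER = 5                           # bytes of routing/control information
PAYLOAD = CELL_SIZE - HEADER         # 48 bytes of user data per cell

overhead = HEADER / CELL_SIZE
print(PAYLOAD)                       # 48
print(f"{overhead:.1%}")             # 9.4% of every cell is header
```

This fixed, small cell size made switching hardware simple and delays predictable, which suited the voice and video traffic Broadband ISDN was designed for.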
As a sidenote, click here for an excellent article on how telephones actually work. The telephone was first introduced into Melbourne in 1879. Click here for a short page re Aussie voltages, milliamps and signal strength on a typical phone line.
WiFi: In August 1999 the Wi-Fi™ (IEEE 802.11) alliance was formed to provide a high-speed wireless local area networking standard covering short distances, initially 30 metres inside of buildings and 100 metres outside, though a later standard 802.11n was able to more than double this range. Typical speeds are 4-5 Mbps using 802.11b, 11-22 Mbps using 802.11g, and over 100 Mbps using 802.11n. Click here for an article re WiFi and signal strength.
In 2001 the WiMAX™ (IEEE 802.16) Forum was launched, designed to cover distances up to 50 kms, though when hundreds of users came online simultaneously the quality of the service dropped dramatically.
Mobile Phones 1G, 2G, GPRS (2.5G), Edge (2.75G), 3G, 4G - What's the Difference:
Click here for an introduction (with photos) to each of these various mobile technologies.
Click here for the date when each was first introduced to Australia.
Click here for a current list of the largest mobile network operators worldwide.
Internet on Mobile Phones 2.5G: The packet switching technology called GPRS General Packet Radio Service running at 20-40 Kbps was commercially launched on a 2G GSM mobile phone network in the UK in June 2000, followed by Nokia in China in August 2000. With GPRS, SGSNs Serving
Internet on Mobile Phones 3G and 4G: On the 3G packet switching level, two competing standards were launched worldwide. First came the CDMA2000 EV-DO Evolution-Data Optimised high-speed system in 2000 for 2G CDMA networks. Next came W-CDMA Wideband CDMA in 2001 as the main member of the UMTS Universal Mobile Telecommunications System family. Both systems used more bandwidth than 2G CDMA, but W-CDMA was also able to complement existing GSM/GPRS/Edge networks on 2G TDMA. In Australia W-CDMA is used by all mobile carriers, with Telstra switching off CDMA EV-DO in January 2008. While it initially ran at 100-200 Kbps, W-CDMA has evolved to higher speeds of 1 to 4 Mbps by using HSPA High Speed Packet Access. Much higher speeds again, of at least 100 Mbps, may be seen with the new IP-oriented LTE Long Term Evolution or 4G standard.
Smartphones: On the hardware front, we have had the Blackberry in 2003 and their
Smartphones' built-in scanning cameras, combined with their explosion in popularity, have meant that companies worldwide have standardised on designing applications that communicate with the user with
In recent statistics, 1.2 billion smartphones were shipped in 2014. Android ran 81% of them, 15% ran iOS (Apple), 3% ran Windows, and less than 1% were Blackberries. Click here for a recent article on the "cheap smartphone", built by companies unknown outside their own country.
Foreign Characters in Domain Names: The Domain Name System service had originally been designed to support only 37 ASCII characters, i.e. the 26 letters "a - z", the 10 digits "0 - 9", and the "-" character. Although domain names could be generated in English using upper case or lower case characters, the system itself was case-insensitive — it always ignored the case used when resolving the IP address of the host. Then, in 2003 a system was released to allow domain names to contain foreign characters. A special syntax called Punycode was developed to employ the prefix
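Python's standard "idna" codec applies this Punycode conversion directly: a Unicode label becomes an ASCII-only label (carrying the standard xn-- prefix) that the DNS can store. The German word below is purely illustrative:

```python
# Convert a label with non-ASCII characters into its DNS-safe ASCII form.
label = "bücher"                       # German for "books"; illustrative only

ascii_form = label.encode("idna")
print(ascii_form)                      # b'xn--bcher-kva'

# The conversion is reversible:
print(ascii_form.decode("idna"))       # bücher
```

Browsers do this translation silently, so users type the foreign-character name while the DNS itself still only ever sees ASCII.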
In 2006, Amazon Web Services launched the
According to a report in the Weekend Australian January 29 2017 from MoffettNathanson, US broadband stats (100 million users) show Comcast in the lead on 25%, Charter second on 22%, AT&T third on 16% and Verizon on just 7%. Numerous others make up the remaining 30%. Click here for a list.
Latest US wireless stats (300 million users) show Verizon in the lead on 37%, AT&T second on 30%, T-Mobile third on 17% and Sprint fourth on 15%. By far these outweigh the rest, the balance making up just 2%.
Now, to summarize. IP addresses are used to deliver packets of data across a network and have what is termed end-to-end significance. This means that the source and destination IP address remains constant as the packet traverses a network. Each time a packet travels through a router, the router will reference its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question (note that a router does not necessarily know the complete path from source to destination — it just knows the MAC hardware address of the next hop router to go to). If a match is not found, one of two things happens. The packet is forwarded to the router defined as the default gateway, or the packet is dropped by the router. To see a diagram of a packet showing its application layer Email/FTP/HTTP overlaid with separate Transport
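The routing-table lookup just described — match the destination against known networks, prefer the most specific match, otherwise fall back to the default gateway — can be sketched with Python's standard library. The networks and next-hop addresses below are hypothetical:

```python
import ipaddress

# A toy routing table: network prefix -> next-hop address (all illustrative).
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "10.0.0.1",
    ipaddress.ip_network("192.0.2.128/25"): "10.0.0.2",   # a more specific route
}
default_gateway = "10.0.0.254"

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    # Collect every entry whose network contains the destination,
    # then prefer the longest prefix, as real routers do.
    matches = [net for net in routes if dest in net]
    if not matches:
        return default_gateway            # no match: hand off to the default route
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("192.0.2.200"))   # 10.0.0.2  (the /25 wins over the /24)
print(next_hop("198.51.100.7"))  # 10.0.0.254 (falls through to the default gateway)
```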
Click here to see the latest BGP Border Gateway Protocol, the Internet's global routing table. Click here for an analysis of the headings.
Now that we have some background, let's learn more about IP address allocation in Australia.
The company, Stephen Williamson Computing Services, is currently hosted at IP address 184.108.40.206
By clicking on www.iana.org we learn that 220.127.116.11 - 18.104.22.168 i.e. 16 million addresses were allocated to APNIC Asia-Pacific Network Information Centre. And by clicking on APNIC we learn that IP Addresses 22.214.171.124 - 126.96.36.199 (which is 2000 addresses) were allocated to Net Quadrant Pty Ltd, trading as Quadra Hosting in Sydney.
APNIC is a nonprofit organization, based in Brisbane since 1998, having started as a pilot project in Tokyo in late 1993. Today the majority of its members are Internet Service Providers (ISPs) in the Asia-Pacific region. Naturally, China is involved. In Australia, Telstra (who had purchased AARNet's commercial businesses in 1995) and Optus are two national ISPs.
In 1999, Optus (followed by Telstra) introduced cable modems offering high speed connections transmitted over their HFC television networks — a Hybrid of Fibre-optic cable running to each street cabinet (node), then copper Coaxial cable into each house. As of 2016, Australia has about one million HFC cable users.
With coaxial cable, used for carrying TV channels as well as broadband Internet, the accessible range of frequencies is 1,000 times higher than telephone cable, up to 1 gigahertz, but the Internet channel bandwidth for uploading and downloading data is then shared between about 200 houses per node.
In 2000, Telstra (followed by Optus and other service providers) introduced ADSL modems providing broadband (high-frequency) signals over copper (telephone) wire. It rapidly became the broadband standard for desktops, with about five million users as of 2016.
In using ADSL in Australia, with filters the telephone line gets divided into three frequency or "information" bands: 0-4 kHz carries the voice, 26-138 kHz carries digital upload data, and 138-1100 kHz carries the high frequency, high speed digital download data. One weakness with ADSL, though, lies in the fact that, without repeaters, the phone company was unable to transmit these high frequencies over a long distance. In many cases this meant that 4½ kilometres was the maximum limit between the modem and the telephone exchange. It also suffered where there was poor quality wiring.
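The three-band split described above can be expressed as a small lookup table. This is just a sketch of the band boundaries given in the text (in kHz); the gaps between bands act as guard bands:

```python
# ADSL frequency bands as described above, in kHz: (low, high, purpose).
BANDS = [
    (0, 4, "voice"),
    (26, 138, "upload"),
    (138, 1100, "download"),
]

def band_for(khz):
    for low, high, name in BANDS:
        if low <= khz < high:
            return name
    return "guard band / out of range"

print(band_for(1))      # voice
print(band_for(100))    # upload
print(band_for(500))    # download
```

The line filters that come with an ADSL service are analogue implementations of exactly this split, keeping the voice band clear of the high-frequency data bands.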
With both cable and ADSL (and wireless), Telstra and Optus and the other service providers have a pool of IP addresses, and use them to allocate a single IP address to each customer's modem (or smartphone) while it stays switched on. For customers with slower
Click here for a list of Telstra telephone exchanges in Australia, including locations and 3rd party DSLAM presence.
Now some further statistics. ABS data shows Australia had 13.5 million active internet subscribers at the end of 2016. While dial-up has all but disappeared, down from 1.3 million subscribers in 2008 to 90,000 in 2016, the faster types of connection increased from 6.6 million to 13.4 million over the same period. This growth has predominantly been in mobile wireless, which has more than quadrupled. The ABS figures show mobile subscriptions climbed from 1.37 million to 6 million over that eight-year period, giving mobile wireless 50 per cent of the broadband market compared with 20 per cent previously.
Click here for an interesting article on commercial peering in Australia, the establishment of the so called "Gang of Four" in 1998, Telstra, Optus, Ozemail (sold to iiNet in 2005) and Connect (in 1998 part of AAPT, with AAPT then sold to iiNet and TPG).
In January 2015, the top four retail ISPs for landlines were Telstra, Optus, iiNet and TPG. For mobiles, there are three — Telstra, Optus and Vodafone. In March 2015, TPG advised of its intent to take over iiNet. This was approved by shareholders on 27th July, and by the ACCC (Australian Competition and Consumer Commission) on 20th August.
The National Broadband Network is the planned "last mile" wholesale broadband network for all Australian ISPs, designed to provide fibre cable either to the node, or to the premises for 93% of Australian residents, and wireless or satellite for the final 7%. Rollout has been slower than anticipated. According to a report in March 2015, a total 899,000 homes and businesses had been passed, and 389,000 had signed up for active services. Eventually, everyone will have to switch across.
Click here for their current rollout map. Move the red pointer to the area you're interested in, and use the scroll wheel on your mouse or the +/- icons in the bottom right hand corner to zoom in and zoom out.
When pages have a .html or .htm extension, it means they are simple text files (that can be created in Notepad or Wordpad and then saved with a .htm extension). Hypertext comes from the Greek preposition hyper, meaning over, above, beyond. It is "text which does not form a single sequence and which may be read in various orders; especially text and graphics ... which are interconnected in such a way that a reader of the material (as displayed at a computer terminal, etc.) can discontinue reading one document at certain points in order to consult other related matter."
You specify markup commands in HTML by enclosing them within < and > characters, followed by text.
E.g. <a href="http://www.swcs.com.au/aboutus.htm" target="_blank"> Load SWCS Page</a>
<img src="steveandyve2.jpg" align=left> will load the jpg file (in this example it is stored in the same folder as the web page) and align it on the left so that the text that follows will flow around it (on the right). If the align command is omitted, the text will start underneath it (instead).
Note, only a few thousand characters are generally involved in each transfer packet of data. If many transfers are necessary to transfer all the information, the program on the sender's machine needs to ensure that each packet's arrival is successfully acknowledged. This is an important point: in packet switching, the sender, not the network, is responsible for each transfer. After an initial connection is established, packets can be simply resent if that acknowledgement is not received.
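The sender-driven acknowledge-and-resend loop described above can be sketched as a "stop and wait" simulation. Everything here is hypothetical — the lossy link is just a random coin flip — but it shows the key point: the sender, not the network, keeps retrying each packet until it is acknowledged.

```python
import random

random.seed(7)   # make the example deterministic

def unreliable_send(packet):
    """A stand-in for the network: loses roughly a third of packets."""
    return random.random() > 0.33        # True = an acknowledgement came back

def send_with_retries(packets, max_tries=20):
    """Stop-and-wait sketch: resend each packet until it is acknowledged."""
    attempts = 0
    for packet in packets:
        for _ in range(max_tries):
            attempts += 1
            if unreliable_send(packet):
                break                    # acknowledged: move to the next packet
        else:
            raise RuntimeError("gave up on packet")
    return attempts

total = send_with_retries(["pkt1", "pkt2", "pkt3"])
print(total)   # at least 3: every lost packet costs an extra transmission
```

Real TCP improves on stop-and-wait by keeping a window of unacknowledged packets in flight, but the responsibility still sits with the sender.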
Most of these examples can be seen on this page that you are viewing. To see the text file that is the source of this page, right click on the mouse, then click View Source.
See the table below, from http://www.swcs.com.au/top10languages.htm, for a brief summary of the current top 10 programming languages on the Internet.
Background information to this article came from here.
| Name | Year | Based On | Written by |
| 1. Java | 1995 | C and C++ | Sun Microsystems, as a graphical language to run on any operating system (Windows, Mac, Unix) inside a Java "virtual machine". It is now one of the most in-demand programming languages, playing a major role within the Android operating system on smartphones. Sun was started by a team of programmers from Stanford University in California in 1982, building Sun workstations that ran on the Unix operating system. |
| 2. C | 1972 | B | AT&T Bell Inc., as a high-level structured language with which to write an operating system — Unix for Digital Equipment Corporation (DEC)'s |
| 3. C++ | 1983 | C | AT&T Bell Inc., to provide C with "classes" or graphical "object" extensions. Used in writing Adobe graphical software and the Netscape web browser. |
| 4. C# | 2000 | C and C++ | Microsoft, to run on Windows operating systems. |
| 5. Objective-C | 1988 | C | Licensed by Steve Jobs to run his NeXT graphical workstations. Currently runs the OSX operating system on Apple iMacs and iOS on Apple iPads and iPhones. |
| 6. PHP | 1997 | C and C++ | University students, as open source software running on web servers. Major community release by two programmers, Andi Gutmans and Zeev Suraski, in Israel in 2000. Used in Wordpress and Facebook. |
| 7. Python | 1991 | C and C++ | University and research students, on web servers as open source software. Click here for sample instructions. First major community release in 2000. Used by Google, Yahoo, NASA. |
| 8. Ruby | 1995 | C and C++ | Japanese university students, as open source software for websites and mobile apps. |
| 10. SQL | 1974 | | Initially designed by IBM as a structured query language, a special-purpose language for managing data in IBM's relational database management systems. It is most commonly used for its "Query" function, which searches informational databases. SQL was standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) in the 1980s. |
** End of List
1. Gilster, Paul (1995). The New Internet Navigator (3rd ed.). John Wiley & Sons, Inc.
2. Clarke, Roger. A Brief History of the Internet in Australia (2001, 2004).
3. Goralski, Walter (2002). Juniper and Cisco Routing. John Wiley. "The Internet and the Router" excerpt.
4. History of Computing (with photo links) and the Internet (2007).
5. History of the Internet. Wikipedia (2010).
** End of Report