Who is who on the Internet and who did what when

and How did Everyone else Manage to Agree

 

Table of Contents

Overview Statistics and WHOIS who

1969 Introducing the Internet

C — a portable language; UNIX — a universal operating system

1983 a landmark year

Decentralized Routing and ISPs

World Wide Web

Data Encryption

ARIN and ICANN

New Modems, Wireless Networks and Smartphones

Foreign Characters in Domain Names

Cloud Computing

Latest US Stats

Summing Up

Internet in Australia and the NBN

HTML - Hyper Text Markup Language

Other Top Languages

Firstly, some statistics. In 2017, 3 billion individual users access some 904 million host computers via a global routing table of 660,000 networks on 57,000 Autonomous Systems (AS). An Autonomous System is a single network, or group of networks, typically governed by a large enterprise with multiple links to other Autonomous Systems. These in turn are serviced by the several hundred backbone Internet Service Providers (ISPs) that make up the core of the Internet, overseen by five Regional Internet Registries (RIRs). E-mail is sent, and web pages are found, through the use of domain names. There are now 329 million domain names, 128 million of them ending with those 3 letters .com. All of these names are overseen by registrars, with Go Daddy currently the largest, having 63 million domain names under management. That's a lot.

So, for this to work, you as a user connect to a local ISP's network. You then have access to its Domain Name System server - DNS server for short - software on a computer that translates a host name you send it, e.g. www.google.com, into a corresponding IP address (72.14.207.99). This Internet Protocol address specifies first the network, and second the host computer (similar to the way a phone number works). If the DNS server doesn't know the host name, it endeavours to connect (within a second or so) to an authoritative DNS server that does.
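
As a minimal sketch of that lookup step, the Python snippet below asks the operating system's resolver (and, behind it, your ISP's DNS server) to translate a host name into an IP address. The host name is just an example; any name will do.

    import socket

    # Ask the local resolver - and, behind it, the ISP's DNS server -
    # to translate a host name into an IPv4 address.
    host_name = "www.google.com"
    ip_address = socket.gethostbyname(host_name)
    print(host_name, "resolves to", ip_address)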

Ultimately, that server is one of 800 or so worldwide servers appointed by the Internet's 1531 Top-Level Domain (TLD) authorities. Click here for the latest list, showing recent additions and deletions. E.g. if the domain name ends in .au, then the Australian auDA is the authority for that TLD zone. If it ends in .cn, then the China Internet Network Information Center is the authority. If the name ends in .com or .net (with no 2 letter country suffix), then the US-based Verisign, Inc. is the authority for those 2 TLD zones. And so on. Your request is successful when, firstly, the name is found and, secondly, it is acknowledged by the nameservers on the network which actually hosts the pages, files and emails sent to that domain. At this point your DNS server caches ("stores") that name and IP address for subsequent requests, for perhaps 24 hours or so. After that, it empties the name and IP address from its cache, which means that the next time the name is requested, the ISP has to look it up again. This cache minimizes the requests made on the authoritative DNS servers, while also ensuring that no domain's details stay out of date for more than 24 hours or so - and of course the details only go out of date if the domain changes hosts.

Now, if a TLD server is unknown, it can be found in a small official root zone file overseen by ICANN, under contract to the U.S. Dept of Commerce. Any changes to the file are processed by Verisign, then published by 12 Root Name Server organizations. And to further reduce Internet traffic, a PC caches a copy of each page in a folder on its hard drive known as "Temporary Internet Files", and normally only downloads a fresh copy if the "Last-Modified" date on the page is newer than the date in the cache. A proxy server similarly caches copies of pages for PCs on its network. For more reading, click here.
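
In HTTP terms, this date check uses the Last-Modified and If-Modified-Since headers. A minimal Python sketch (the URL is illustrative, and we assume the server sends a Last-Modified header):

    import urllib.error
    import urllib.request

    url = "http://www.example.com/"  # illustrative URL

    # First request: note the page's Last-Modified date.
    with urllib.request.urlopen(url) as response:
        last_modified = response.headers.get("Last-Modified")

    # Later request: offer the cached date back. A 304 Not Modified reply
    # (raised as an HTTPError by urllib) means the cached copy is still good.
    request = urllib.request.Request(url, headers={"If-Modified-Since": last_modified or ""})
    try:
        with urllib.request.urlopen(request) as response:
            print("fresh copy downloaded, status", response.status)
    except urllib.error.HTTPError as err:
        if err.code == 304:
            print("cached copy is still current")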

Now, if looking up details for one of the "open" .au domains, you, as an individual, can go to AusRegistry. This database provides the IP addresses of name servers for all the domains within the five "open" second-level domain (2LD) spaces, i.e. ones ending in .com.au, .org.au, .net.au, .id.au or .asn.au (asn = association). Note - since its appointment in 2002, AusRegistry has never dealt directly with the public in registering domains. Commercial registrars carry out this task, thus preventing potential "conflict of interest" situations within AusRegistry. Then, with regard to "closed" government 2LDs, Netregistry Pty Ltd is the registrar for .gov.au and Education Services Australia Ltd for .edu.au.

Some Background: On Oct 25th 2001, auDA (a Government endorsed body) became the authorized Domain Administrator for the .au TLD. It began by appointing AusRegistry in July 2002 on four-year terms, a contract last renewed in December 2013 for the four-year term 2014 - 2018. Prior to auDA and AusRegistry, Robert Elz at Melbourne University administered the whole .au TLD at no charge from March 1986. In 1996 he delegated a five-year exclusive commercial license for the .com.au 2LD to a startup company, Melbourne IT. Click here for further details.
Click here to view a document with a breakdown of annual fees charged by AusRegistry to authorized registrars.

Example - Stephen Williamson Computing Services
So, by going to AusRegistry, we learn that the host computer for the domain swcs.com.au (currently 202.146.212.12) can be found by accessing any one of the three IP addresses 202.146.209.1, 202.146.209.2 or 65.99.229.31 — ns1.qnetau.com, ns2.qnetau.com or ns3.qnetau.com — name servers managed by Quadra Hosting, a professional Australian web hosting company. Next, we learn that the commercial registrar for swcs.com.au is TPP Wholesale in Sydney (who act on behalf of the owner of the name, i.e. Stephen Williamson Computing Services Pty Ltd).
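
You can repeat this name server lookup yourself. The sketch below assumes the third-party dnspython package (pip install dnspython) is installed, and asks the DNS for the NS records of the domain:

    import dns.resolver  # third-party: pip install dnspython

    # Ask the DNS for the authoritative name servers of a domain.
    for record in dns.resolver.resolve("swcs.com.au", "NS"):
        print("name server:", record.target)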

This means that swcs.com.au is currently hosted on the Quadra Hosting network, who provide a virtual Web hosting service capable of hosting numerous domains transparently. Thousands of different domains might in fact share the same processor (with pages being published in different folders). If the Internet traffic grows too heavy on this shared server, the swcs domain may in the future require its own, dedicated server. (This situation has not yet been reached).

Domain lookup and WHOIS who - IP address registration

Click here for a web page that will look up the IP address for a specific domain.

Click here to download a free program that will look up any IP address or Host, by accessing the WHOIS section of each of the five regional bodies responsible for IP address registration: ARIN, RIPE, APNIC, LACNIC, and AfriNIC.

Your PC uses this IP address to form packet headers for routing data to and from the host.

There are 4 billion addresses available - 2 to the 32nd power - under IP version 4, first deployed in January 1980 (IPv0-IPv3 were test versions, 1977-1979). A growing number of servers use IPv6 - first defined in 1996 - with 2 to the 128th power addressing, i.e. 340 trillion, trillion, trillion addresses — enough for every grain of sand on the planet.
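
Python's standard ipaddress module makes those address-space figures easy to check:

    import ipaddress

    print(2 ** 32)     # 4,294,967,296 possible IPv4 addresses
    print(2 ** 128)    # roughly 3.4 x 10**38 possible IPv6 addresses

    # Both address families parse with the same standard module:
    print(ipaddress.ip_address("72.14.207.99").version)   # 4
    print(ipaddress.ip_address("2001:db8::1").version)    # 6
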
Computer routers store and forward small data packets between computer networks. Gateway routers repackage and convert packets travelling between homes/businesses and ISPs, or between ISPs; these connect with the core routers which form the Internet backbone. So, how did it all come together? In a nutshell, it came about as a joint open exercise between the U.S. military and the research departments at a number of key universities. For 20 years and more, in fact until April 30 1995, unrelated commercial use of the national backbone was strictly forbidden. Accordingly, no private companies got a look in when it came to claiming patents, trademarks, or ownership of its overall design.

Go top

1969 — Introducing the Internet

It began in 1969, when the Defense Advanced Research Projects Agency (DARPA), working with Leonard Kleinrock at UCLA and Douglas Engelbart at Stanford Research Institute (SRI), built the world's first packet-switching network, the ARPANET. Using four Honeywell computers as gateways (routers), the contracted team of Bolt Beranek and Newman connected three university mainframes at 50 Kbps - first an SDS at UCLA, then an IBM at UCSB and a DEC at the University of Utah - to an SDS at SRI near San Francisco. Telnet in 1969 was followed by email in 1971. By the close of 1973, 40 mainframe hosts were online, including satellite links to Norway and London.

Over in France, Louis Pouzin had written RUNCOM in 1963, an ancestor of the command-line interface and the first "shell" script. Now in 1972 he designed the datagram for use in Cyclades, a robust packet-switching network that always "assumed the worst", i.e. that the delivery service carrying its data "packets" was unreliable and might deliver them out of order, leaving the end hosts to ensure that every packet reached its final destination. Drawing on these ideas, in 1973 Robert Kahn and Vinton Cerf started work on a new internetwork Transmission Control Program using set port numbers for specific uses. Click here for an initial list from December 1972. It used the concept of a "socket interface" that combined functions (or ports) with source and destination network addresses, connecting user-hosts to server-hosts. Requests for Comments (RFCs) for these TCP/IP standards, and the issuing of IP address blocks to assignees, were overseen by Jon Postel (until his death in 1998), a director at the Information Sciences Institute at the University of Southern California in LA. And regular updates of the network with a list of hosts were provided in a standard text file to all parties by the Defense Dept's Network Information Center (NIC) at Stanford.
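
That socket interface survives essentially unchanged today. A minimal Python sketch, connecting to a well-known port on an illustrative host:

    import socket

    # A socket combines a destination network address with a port number.
    destination = ("www.example.com", 80)  # illustrative host; port 80 was later assigned to HTTP

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect(destination)   # TCP takes care of reliability and ordering
        local_address, local_port = sock.getsockname()
        print("connected from", local_address, "port", local_port)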

Go top

C — a portable language; UNIX — a universal operating system

In the late 1970s, DARPA decided to base their universal computing environment on BSD UNIX, with all development to be carried out at the University of California in Berkeley. UNIX had been greatly influenced by an earlier operating system, Multics, a project that had been funded by DARPA at the Massachusetts Institute of Technology (MIT) in 1964, though given its size and complexity, many of its AT&T Bell Laboratories programmers had withdrawn. In 1969 a group of them led by Ken Thompson and Dennis Ritchie wrote a simpler version of Multics (which they called UNIX) in assembly language on a DEC PDP-7. In 1972 UNIX was rewritten in a slightly higher-level language, C, on a DEC PDP-11. C had been developed by Dennis Ritchie (based on an earlier language, B, put together by Ken Thompson). C's clever use of data structures, combined with its closeness to assembly language, led to the take-up of UNIX as an operating system by hardware manufacturers everywhere. Despite being developed on a DEC computer, C thus achieved the portability that IBM's PL/1, the high-level language that had been employed in Multics, had been after. AT&T accordingly made the UNIX operating system (and the C language) available under license to universities and commercial firms, as well as the United States government.

With IPv4 in 1980, the National Science Foundation created a core network for institutions without access to the ARPANET. Three computer science departments — Wisconsin-Madison, Delaware and Purdue — initially joined. Vinton Cerf came up with a plan for an inter-network connection between this CSNET and the ARPANET.

Meanwhile, at the hardware cabling level, Ethernet was rapidly becoming the standard for small and large computer networks over twisted pair copper wire. It identified the unique hardware address of the network interface card inside each computer, then regulated traffic through a variety of switches. This standard was patented in 1977 by Robert Metcalfe at the Xerox Corporation, operating at an initial data rate of 3 Mbps. Its success attracted early attention and led in 1980 to the joint development of the 10-Mbps Ethernet Version 1.0 specification by a three-company consortium: Digital Equipment Corporation, Intel Corporation, and Xerox Corporation. Today, the IEEE administers these unique Ethernet addresses, sometimes referred to as media access control (MAC) addresses. Each is 48 bits long and is displayed as 12 hexadecimal digits (six groups of two digits) separated by colons, thus allowing for 280 trillion unique addresses. An example of an Ethernet address is 44:45:53:54:42:00 — note that the IEEE designates the first three octets as vendor-specific. To learn the Ethernet address of your own computer in Windows, at a command-line prompt type ipconfig /all and look for the physical address. To learn the Ethernet address of your network's gateway to the ISP, type arp -a, then look for the physical address that applies to the default gateway.
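
For comparison with ipconfig /all, Python's standard uuid module can report the same 48-bit hardware address (falling back to a random stand-in if the real address can't be read):

    import uuid

    # uuid.getnode() returns the machine's 48-bit Ethernet (MAC) address as an integer.
    mac = uuid.getnode()
    print("as a 48-bit integer:", mac)
    print("as six hex octets  :",
          ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))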

Go top

1983 — a landmark year

Back to the Internet. On January 1st 1983, the Defense Communications Agency at Stanford split off the military network — MILNET — from the research-based ARPANET network, and then mandated TCP/IP protocols on every host. In May, the Massachusetts Institute of Technology, in conjunction with DEC and IBM, used TCP/IP to develop a campus-wide model of distributed computing that became known as the client-server model, with PCs/workstations and servers, as opposed to the mainframe model with all intelligence within the central host computer. Next, in August 1983, the University of California in Berkeley included Bill Joy's modified version of TCP/IP in its commercial release of BSD UNIX, a landmark event. With this the ARPANET and CSNET grew, adding gateways for any network anywhere — regardless of that network's internal protocols or hardware cabling — allowing these two original core networks to remain intact. Accordingly, defense contractors' networks, Usenet discussion groups (i.e. bulletin boards), Bitnet's automated mailouts, and later the large business networks - CompuServe in 1989, MCI Mail in 1990, and AOL in 1994 - all of which used different protocols, established gateways. Click here for details of other non-Internet networks at that time. Many date the true arrival of the Internet to 1983.

In 1984 in Europe, a consortium of several European UNIX systems manufacturers founded the X/Open Company Ltd to define a single specification for operating systems derived from UNIX, to increase the interoperability of applications and reduce the cost of porting software. Its original members were Bull in Paris, ICL in London, Siemens in Munich, Olivetti in Italy, and Nixdorf (also from Germany) — a group sometimes referred to as BISON. Philips in Amsterdam and Ericsson in Stockholm joined soon afterwards, at which point the name X/Open was adopted. In many ways, it was preparing for the commercial impact of AT&T Computer Systems (a company that AT&T had been allowed to form on January 1, 1984, having divested itself of the Bell (Telephone) Operating companies), and the upcoming release of their proprietary UNIX System V. As a proprietary product, it stood in direct opposition to the UNIX that had been developed at Berkeley, and fears of numerous versions emerging were realized as AT&T formed a mutual alliance with Sun Microsystems a few years later.

But meanwhile, on an academic level, the University of Wisconsin established the first name server — a directory service that looked up host names when sending email on the CSNET. In September 1984, taking this to the next logical step, DARPA replaced the HOSTS.TXT file with the Domain Name System, establishing the first of the Top-Level Domains — .arpa, .mil, .gov, .org, .edu, .net and .com. In 1985, with 100 autonomous networks now connected — click here to see a 1985 primary gateway diagram — registration within these TLDs commenced. Click here for a list of the first domains. Country code TLDs were appointed shortly afterwards. Stanford now administered a small centralized root zone file providing authoritative IP addresses for each TLD — as well as name registries providing the IP addresses of name servers for each domain in the generic TLDs — with Jon Postel administering the .edu and .us TLDs. These generic domains .com, .org, .mil, .gov, .edu and .net were given out for free, along with free IP address blocks.

In 1986, there was major expansion when the National Science Foundation built a third network, the NSFnet, with high-speed links to university networks right around the country. In 1987, the first email from China to Germany was sent, via the CSNET. In 1988, IANA - the Internet Assigned Numbers Authority, overseen by Jon Postel at the Information Sciences Institute - entered into a formal funding contract with the Defense Dept. This same year, a non-profit company in Ann Arbor, Michigan called Merit Network, partnering with IBM and MCI, upgraded the NSFnet from 56Kbps to 1.5Mbps. As over 170 campus networks came online, it took over backbone duties from the ARPANET, which was then decommissioned and dismantled in 1990.

Go top

Decentralized Routing and ISPs

In 1989, with 500 local networks now connected through regional network consortiums, the very first commercial ISP, The World, provided indirect access via dial-up UUCP accounts: the ISP first downloaded a file from the Internet, then used the UNIX-to-UNIX Copy command to copy the file to a paying customer's computer. And with the introduction of decentralized routing via the Border Gateway Protocol in June of that same year, the National Science Foundation in 1990 started a series of workshops and studies to enable the transition of their backbone network to private industry. Merit, partnering again with IBM and MCI, now formed Advanced Network & Services Inc, or ANSnet, which over the next two years upgraded the NSFnet backbone from 1.5Mbps to 45Mbps. The first Internet search engine now appeared with Archie, designed to index files in "archives". And the PPP Point-to-Point Protocol for data linking over telephone lines was published as an Internet standard. Where SLIP, the Serial Line Internet Protocol, had been in place since the early 1980s, this now brought in a full-featured protocol. Its framing mechanism was based on ISO's High-Level Data Link Control (HDLC) protocol, which had been initially developed by IBM. Click here for further details.

In September 1991, with 4000 networks now online, US Defense transferred domain name registration from Stanford to a small, but capable private-sector firm Network Solutions Incorporated - NSI.

Go top

The World Wide Web

Over in Europe, back in February 1991, Tim Berners-Lee, programming a NeXTcube workstation at the European Laboratory for Particle Physics, had held a seminar for a World-Wide Web; then, with the help of Paul Kunz at Stanford, he demonstrated in France on Jan 15, 1992 the first web browser and offsite server. The server downloaded file(s) using a new protocol he called HTTP - Hyper Text Transfer Protocol - forming a page that was formatted by the browser using a markup language called HTML. This allowed hypertext on a page to link to another page on a computer running his protocol anywhere in the world, by including in the markup language the linked page's URL - Uniform Resource Locator - its domain name, path and filename (i.e. a web address). Thus the user could upload a request for that new page by simply selecting the link. And for Paul Kunz, the standout feature was that the user could also type in and upload a query string to a database hosted on that remote computer, a feature that Yahoo, and then Google, would later use on their database servers to great effect. Click here for further background.
This same year, Jean Polly published the phrase 'Surfing the INTERNET'.
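
Each URL breaks into those same parts - protocol, domain name, path and (optionally) a query string - and Python's standard library can pick them apart (the URL itself is illustrative):

    from urllib.parse import urlparse

    url = "http://www.swcs.com.au/about.htm?topic=history"  # illustrative
    parts = urlparse(url)
    print("protocol :", parts.scheme)   # http
    print("domain   :", parts.netloc)   # www.swcs.com.au
    print("path     :", parts.path)     # /about.htm
    print("query    :", parts.query)    # topic=history - the uploaded query string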

Meanwhile in Amsterdam, Holland, Réseaux IP Européens (RIPE) in 1992 received the first allocation of a block of IP addresses from IANA that enabled it to become a regional IP registry, independently responsible for the physical location of each network.
And in Australia, AARNet, which had linked all the universities in April-May 1990, now "applied to IANA for a large block of addresses on behalf of the Australian network community .... because allocations from the US were taking weeks .... The address space allocated in 1993 was large enough for over 4 million individual host addresses .... The Asia-Pacific Network Information Centre (APNIC) then started as an experimental project in late 1993 (in Tokyo), based on volunteer labour and donated facilities from a number of countries. It evolved into an independent IP address registry ... that operates out of Brisbane" - R. Clarke's Internet in Australia

Back in the U.S.
The Dept of Defense now ceased all funding of the Internet apart from the .mil domain. On January 1st 1993 the National Science Foundation set up the Internet Network Information Center - InterNIC - awarding Network Solutions the contract for ongoing registration services, working co-operatively with AT&T for directory, database and (later) information services.

This same year, 1993, students and staff working at the NSF-supported National Center for Supercomputing Applications (NCSA) at the University of Illinois launched Mosaic, a free web browser with versions for UNIX, Mac and Windows (via Winsock 1.1). As a graphical browser that allowed images and text on the same page, as well as clickable hyperlinks, it provided home users a much richer experience than running a command-line shell.

Regarding these new dial-up home users, the plan was that they would dial an ISP's telephone number using a home phone modem, be automatically granted access to a modem from a pool of modems at the ISP's premises, and thus have a temporary IP address assigned to the home computer for the length of the phone call. Initial costs for these SLIP/PPP connections were $US175 per month, but competition between ISPs and new technology meant that over the next two years prices plummeted. So while Mosaic was a fairly basic browser by today's standards, its new features introduced huge numbers of "unskilled" users to the web. At the end of 1993 there were 20,000 separate networks, involving over 2 million host computers and 20 million individual users. Click here to see year by year growth.

In February 1994, the NSF awarded contracts to four NAPs (Network Access Points) or, as they are now known, IXPs (Internet Exchange Points), operating at 155Mbps — one in New York operated by Sprint, one in Washington D.C. operated by MFS, one in Chicago operated by Ameritech, and one in California operated by Pacific Bell. These new Tier 1 networks now formed a mesh rather than a single backbone network, having mutual peering agreements to allow network traffic exchange to occur without cost. Over the following year, all the regional NSFnet networks migrated their connections to commercial network providers connected to one (or more) of these NAPs. In late 1994, America Online (AOL) purchased ANSnet's assets and operations. Also this year, the immensely successful web directory Yahoo was created. Southwest Airlines offered the first e-tickets for passengers. And at the end of 1994, with 30,000 web sites now online, the Netscape browser, a predecessor of the open source browser, Mozilla Firefox, was released. Netscape cost non-academic users just $US39 and soon gained over 80% market share.

On April 30 1995, the NSFnet was dissolved. The Internet Service Providers had now taken over — internetMCI, ANSnet (now owned by AOL), SprintLink, UUNET and PSINet. Click here to see a diagram. There was a massive surge in registrations for the .com domain space. In August, Microsoft released Internet Explorer 1.0 free in a Windows 95 add-on pack. On September 14th, the NSF imposed a yearly fee of $50 per domain name, payable through Network Solutions. In December, Netscape added support for Javascript as a brand new web language. The first search engine to allow natural language queries, Alta Vista, was released. VocalTec released the first real-time VoIP.

Go top

Data Encryption

At this time data encryption came to the fore via the Secure Sockets Layer, or SSL, protocol, which changed all communication between user and server into a format that only user and server could understand. It encrypted this data using a shared session key, a key generated at the user's end and advised to the server through a special handshaking exchange that protected it with the server's public encryption key. Click here for further details.
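
Python's standard ssl module performs that handshaking for you. A minimal sketch that opens an encrypted connection and reports what was negotiated (the host is illustrative):

    import socket
    import ssl

    host = "www.example.com"  # illustrative
    context = ssl.create_default_context()  # also verifies the server's certificate

    with socket.create_connection((host, 443)) as raw_sock:
        # wrap_socket performs the handshake: the server's public key is used
        # to agree on the session key that encrypts everything that follows.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("protocol:", tls_sock.version())    # e.g. TLSv1.3
            print("cipher  :", tls_sock.cipher()[0])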

ARIN and ICANN

In December 1997, ARIN - the American Registry for Internet Numbers, a nonprofit corporation - was given the task of registering the IP address allocations of all U.S. ISPs, a task previously handled by Jon Postel/InterNIC/Network Solutions. Meanwhile, since September 1995 there had been widespread dissatisfaction at the $50 per annum domain name fees for the five generic TLDs .com, .net, .org, .gov and .edu, and back in 1996 Jon Postel had proposed the creation of a number of new, competing TLDs. With this in mind, on January 28 1998, he authorized the switching over of 8 of the 12 root servers to a new IANA root zone file, thus, in effect, setting up two Internets. Within the day, a furious Ira Magaziner, Bill Clinton's senior science advisor, insisted it be switched back. Within the week, the US Govt had formally taken over responsibility for the DNS root zone file. On September 30 1998, ICANN - the Internet Corporation for Assigned Names and Numbers - was formed to oversee InterNIC for names and IANA for numbers under a contract with the U.S. Dept of Commerce. ICANN is a nonprofit corporation based at the Information Sciences Institute. Two weeks later, Jon Postel passed away, following complications after heart valve replacement surgery.

In December 1998, the movie "You've Got Mail" was released, with Tom Hanks and Meg Ryan, featuring AOL as their ISP. In June 1999, with ICANN's decision to allow multiple registrars of the generic domain names .com, .org and .net, Network Solutions lost its monopoly as sole domain name registrar. And with competition, registration costs for generic .com domain names dropped from $50 to $10 per annum. As mentioned previously, this .com domain name registry, by far the largest TLD with 128 million names, is now operated by Verisign, who purchased Network Solutions in 2000. Around the same time, search engines became an essential part of the web. Click here for an article on How Search Engines Work.

Go top

New Modems, Wireless Networks and Smartphones

Cable Modems: Firstly, some background regarding cable TV. In the US it goes back to 1948. It was introduced into Australia in 1994 by Optus, who implemented it with fibre-optic cable (i.e. transmitting via on/off light pulses). Fibre optic is more fragile than copper, so Optus (and Foxtel) employed FTTN - fibre (just) to the node - with coaxial copper wire for the final "last mile" connection. Regarding FTTP (Fibre to the Premises), OECD stats in 2009 showed that Japan had 87%, and South Korea 67%, of their households installed with it. However, the difficulties with fibre meant that FTTP installations in other countries were much lower.
Then in 1996 in the US, cable modems lifted download speeds on the Internet from 56Kbps to 1.5Mbps (i.e. over 25-fold) and more. Microsoft and NTT ran pure fibre-optic tests and saw speeds as high as 155Mbps.

ADSL Modems: In 1998, ADSL (Asymmetric Digital Subscriber Line) technology (deployed on the "downstream" exchange-to-customer side), together with a small 53-byte ATM format (on the "upstream" exchange-to-ISP side), was retooled for Internet access, offering initial download speeds of 768Kbps. ATM packets had originally been developed to meet the needs of Broadband ISDN, first published in early 1988. Click here for more info.

As a sidenote, click here for an excellent article on how telephones actually work - they were first introduced into Melbourne in 1879. Click here for a short page re Aussie voltages, milliamps and signal strength on a typical phone line.

WiFi: In August 1999 the Wi-Fi (IEEE 802.11) alliance was formed to provide a high-speed wireless local area networking standard covering short distances, initially 30 metres inside buildings and 100 metres outside, though a later standard, 802.11n, was able to more than double this range. Typical speeds are 4-5 Mbps using 802.11b, 11-22 Mbps using 802.11g, and over 100 Mbps using 802.11n. Click here for an article re WiFi and signal strength.
In 2001 the WiMAX (IEEE 802.16) Forum was launched, designed to cover distances up to 50 kms, though when hundreds of users came online simultaneously the quality of the service dropped dramatically.

Mobile Phones 1G, 2G, GPRS (2.5G), Edge (2.75G), 3G, 4G - What's the Difference:
Click here for an introduction (with photos) to each of these various mobile technologies.
Click here for the date when each was first introduced to Australia.
Click here for a current list of the largest mobile network operators worldwide.

Internet on Mobile Phones 2.5G: The packet switching technology called GPRS (General Packet Radio Service), running at 20-40 Kbps, was commercially launched on a 2G GSM mobile phone network in the UK in June 2000, followed by Nokia in China in August 2000. With GPRS, SGSNs (Serving GPRS Support Nodes) are responsible for delivering data packets to and from the base stations and converting mobile data to and from IP. A GGSN (Gateway GPRS Support Node) at the network provider then connects the user to the appropriate site on the Internet, known as an APN (Access Point Name). If the user is in their car, there may be more than one SGSN serving the user as they drive between base stations. Note, the maximum range of a mobile phone to a base station, depending on factors such as the number of hills and the height of the mast, can be anywhere from 5 to 70 kms. If the user drives near a base station covered by a new SGSN, the old SGSN hands off automatically, with any lost packets retransmitted. So it's Mobile phone → Base station → SGSN → GGSN → Internet.

Internet on Mobile Phones 3G and 4G: On the 3G packet switching level, two competing standards were launched worldwide. First came the CDMA2000 EV-DO (Evolution-Data Optimised) high-speed system in 2000 for 2G CDMA networks. Next came W-CDMA (Wideband CDMA) in 2001 as the main member of the UMTS (Universal Mobile Telecommunications System) family. Both systems used more bandwidth than 2G CDMA, but W-CDMA was also able to complement existing GSM/GPRS/Edge networks on 2G TDMA. In Australia W-CDMA is used by all mobile carriers, with Telstra switching off CDMA EV-DO in January 2008. While it initially ran at 100-200 Kbps, W-CDMA has evolved to higher speeds of 1 to 4 Mbps by using HSPA (High Speed Packet Access). Much higher speeds again, of at least 100Mbps, may be seen with the new IP-oriented LTE (Long Term Evolution) or 4G standard.

Smartphones: On the hardware front, we have had the Blackberry in 2003 and their Push Email feature, followed by numerous other Smartphones, including the Apple iPhone in 2007, then the Android phones e.g. the HTC Dream, released in Australia by Optus in Feb 2009.

Smartphones' built-in scanning cameras, combined with their explosion in popularity, have meant that companies worldwide have standardised on designing applications that communicate with the user via QR (Quick Response) codes, a two-dimensional barcode designed in 1994 for the automotive industry in Japan.
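
Generating one of these codes is now nearly a one-liner. The sketch below assumes the third-party qrcode package is installed (pip install qrcode[pil], which also pulls in the Pillow imaging library):

    import qrcode  # third-party: pip install qrcode[pil]

    # Encode a web address as a two-dimensional barcode and save it as an image.
    image = qrcode.make("http://www.swcs.com.au")
    image.save("swcs_qr.png")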

In recent statistics, 1.2 billion smartphones were shipped in 2014. Android ran 81% of them, 15% ran iOS (Apple), 3% ran Windows, and less than 1% were Blackberries. Click here for a recent article on the "cheap smartphone", built by companies unknown outside their own country.

Go top

Foreign Characters in Domain Names

The Domain Name System service had originally been designed to support only 37 ASCII characters, i.e. the 26 letters "a - z", the 10 digits "0 - 9", and the "-" character. Although domain names could be generated in English using upper case or lower case characters, the system itself was case-insensitive — it always ignored the case used when resolving the IP address of the host. Then, in 2003, a system was released to allow domain names to contain foreign characters. A special syntax called Punycode was developed to employ the prefix "xn--" in the domain label and encode any foreign characters within the label from Unicode into a unique ASCII address — e.g. http://中.org would thus find itself encoded on name servers worldwide as http://xn--fiq.org. In 2008, with a view to also having TLDs in foreign scripts, ICANN released 11 new TLDs using the word for "test" in 11 different languages, letting DNS servers confirm that they were able to handle a .test TLD in each of the scripts. Click here to view the results. Click here for another news report.
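
Python's built-in "idna" codec performs exactly this Punycode translation, in both directions:

    # The built-in "idna" codec converts each label to and from its xn-- ASCII form.
    print("中.org".encode("idna"))         # b'xn--fiq.org'
    print(b"xn--fiq.org".decode("idna"))   # 中.org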

Cloud Computing

In 2006, Amazon Web Services launched the Elastic Compute Cloud (EC2). It allowed users to rent virtual computers on which to run their own computer applications, paying by the hour for active servers, hence the term "elastic". Click here for a recent news report. As it says, other companies (e.g. Rackspace, Google, Microsoft, IBM, HP and VMWare) have similarly stepped up and are also providing these "cloud" services to paying customers.

Go top

Latest US Stats January 2017

According to a report in the Weekend Australian January 29 2017 from MoffettNathanson, US broadband stats (100 million users) show Comcast in the lead on 25%, Charter second on 22%, AT&T third on 16% and Verizon on just 7%. Numerous others make up the remaining 30%. Click here for a list.
Latest US wireless stats (300 million users) show Verizon in the lead on 37%, AT&T second on 30%, T-Mobile third on 17% and Sprint fourth on 15%. These four far outweigh the rest, with the balance making up just a couple of per cent.

Summing Up

Now, to summarize. IP addresses are used to deliver packets of data across a network and have what is termed end-to-end significance. This means that the source and destination IP address remains constant as the packet traverses a network. Each time a packet travels through a router, the router will reference its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question (note that a router does not necessarily know the complete path from source to destination — it just knows the MAC hardware address of the next hop router to go to). If a match is not found, one of two things happens. The packet is forwarded to the router defined as the default gateway, or the packet is dropped by the router. To see a diagram of a packet showing its application layer Email/FTP/HTTP overlaid with separate Transport TCP/UDP Port address, Internet IP address, and Network Ethernet MAC address headers, click here. To view an example of router "hops", click here.
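
That table search is a "longest prefix match": among all entries whose network number contains the destination, the most specific wins. A minimal Python sketch with a made-up routing table:

    import ipaddress

    # A toy routing table: network prefix -> next-hop address (illustrative values).
    routing_table = {
        ipaddress.ip_network("202.146.208.0/21"): "10.0.0.1",
        ipaddress.ip_network("202.0.0.0/8"):      "10.0.0.2",
        ipaddress.ip_network("0.0.0.0/0"):        "10.0.0.254",  # default gateway
    }

    def next_hop(destination):
        # Choose the matching prefix with the longest mask (the most specific).
        dest = ipaddress.ip_address(destination)
        matches = [net for net in routing_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return routing_table[best]

    print(next_hop("202.146.212.12"))  # 10.0.0.1 - the /21 beats the /8
    print(next_hop("8.8.8.8"))         # 10.0.0.254 - only the default route matches
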
Click here to see the latest BGP Border Gateway Protocol, the Internet's global routing table. Click here for an analysis of the headings.

Go top

Internet in Australia

Now that we have some background, let's learn more about IP address allocation in Australia.

The company Stephen Williamson Computing Services is currently hosted at IP address 202.146.212.12.
By clicking on www.iana.org we learn that 202.0.0.0 - 202.255.255.255, i.e. 16 million addresses, were allocated to APNIC, the Asia-Pacific Network Information Centre. And by clicking on APNIC we learn that the IP addresses 202.146.208.0 - 202.146.215.255 (a block of about 2,000 addresses) were allocated to Net Quadrant Pty Ltd, trading as Quadra Hosting in Sydney.
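
Checking that allocation takes two lines with Python's standard ipaddress module (202.146.208.0/21 is the block quoted above, which in fact holds 2,048 addresses):

    import ipaddress

    block = ipaddress.ip_network("202.146.208.0/21")          # Quadra Hosting's allocation
    print(block.num_addresses)                                # 2048
    print(ipaddress.ip_address("202.146.212.12") in block)    # True - swcs.com.au's host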

APNIC is a nonprofit organization based in Brisbane since 1998, having started as a pilot project in Tokyo in late 1993. Today the majority of its members are Internet Service Providers (ISPs) in the Asia-Pacific region. Naturally, China is involved. In Australia, Telstra (who had purchased AARNet's commercial business in 1995) and Optus are two national ISPs.

In 1999, Optus (followed by Telstra) introduced cable modems offering high-speed connections transmitted over their HFC television networks - a Hybrid of Fibre-optic cable running to each street cabinet (node), then copper Coaxial cable into each house. As of 2016, Australia has about one million HFC cable users.

With coaxial cable, used for carrying TV channels as well as broadband Internet, the accessible range of frequencies is 1,000 times higher than telephone cable, up to 1 gigahertz, but the Internet channel bandwidth for uploading and downloading data is then shared between about 200 houses per node.

In 2000, Telstra (followed by Optus and other service providers) introduced ADSL modems providing broadband (high-frequency) signals over copper (telephone) wire. It rapidly became the broadband standard for desktops, with about five million users as of 2016.

In using ADSL in Australia, with filters, the telephone line gets divided into three frequency or "information" bands: 0-4kHz carries the voice, 26-138kHz carries digital upload data, and 138-1100kHz carries the high-frequency, high-speed digital download data. One weakness with ADSL, though, lies in the fact that, without repeaters, the phone company was unable to transmit these high frequencies over a long distance. In many cases 4½ kilometres was the maximum limit between the modem and the telephone exchange. It also suffered where there was poor quality wiring.

With both cable and ADSL (and wireless), Telstra and Optus and the other service providers have a pool of IP addresses, and use them to allocate a single IP address to each customer's modem (or smartphone) while it stays switched on. For customers with slower Dial-up modems that utilize the voice-frequency range over telephone wire, the link is much more temporary, lasting just the length of each phone call.
Click here for a list of Telstra telephone exchanges in Australia, including locations and 3rd party DSLAM presence.

Now some further statistics. ABS data shows Australia had 13.5 million active internet subscribers at the end of 2016. While dial-up subscribers have all but disappeared, down from 1.3 million in 2008 to 90,000 in 2016, the faster types of connection increased from 6.6 million to 13.4 million over the same period. This growth has been predominantly in mobile wireless, which has more than quadrupled: the ABS figures show mobile subscriptions climbed from 1.37 million to 6 million over that eight-year period, giving mobile wireless 50 per cent of the broadband market compared with 20 per cent previously.

Click here for an interesting article on commercial peering in Australia and the establishment in 1998 of the so-called "Gang of Four": Telstra, Optus, Ozemail (sold to iiNet in 2005) and Connect (in 1998 part of AAPT, with AAPT later sold to iiNet and TPG).

In January 2015, the top four retail ISPs for landlines were Telstra, Optus, iiNet and TPG. For mobiles, there are three — Telstra, Optus and Vodafone. In March 2015, TPG advised of its intent to take over iiNet. This was approved by shareholders on 27th July, and by the ACCC (Australian Competition and Consumer Commission) on 20th August.

The National Broadband Network is the planned "last mile" wholesale broadband network for all Australian ISPs, designed to provide fibre cable either to the node, or to the premises for 93% of Australian residents, and wireless or satellite for the final 7%. Rollout has been slower than anticipated. According to a report in March 2015, a total 899,000 homes and businesses had been passed, and 389,000 had signed up for active services. Eventually, everyone will have to switch across.

Click here for their current rollout map. Move the red pointer to the area you're interested in, and use the scroll wheel on your mouse or the +/- icons in the bottom right hand corner to zoom in and zoom out.

Go top

HTML - Hyper Text Markup Language:

When pages have a .html or .htm extension, it means they are simple text files (that can be created in Notepad or Wordpad and then saved with a .htm extension). Hypertext comes from the Greek preposition hyper, meaning over, above, beyond. It is text which does not form a single sequence and which may be read in various orders; especially text and graphics ... which are interconnected in such a way that a reader of the material (as displayed at a computer terminal, etc.) can discontinue reading one document at certain points in order to consult other related matter.

You specify markup commands in HTML by enclosing them within < and > characters, followed by the text they apply to.
E.g. <a href="http://www.swcs.com.au/about us.htm" target="_blank"> Load SWCS Page</a>

Other Examples:
 
<img src="steveandyve2.jpg" align=left> will load the jpg file (in this example it is stored in the same folder as the web page) and align it on the left, so that the text that follows will flow around it (on the right). If the align command is omitted, the text will start underneath it instead.

Note, only a few thousand characters are generally involved in each packet of data. If many packets are necessary to transfer all the information, the program on the sender's machine needs to ensure that each packet's arrival is successfully acknowledged. This is an important point: in packet switching, the sender, not the network, is responsible for each transfer. After an initial connection is established, packets can simply be resent if that acknowledgement is not received.
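
A hedged sketch of that sender-side responsibility - send a packet, wait for an acknowledgement, resend on timeout. The address, port and timings are all illustrative, and since no receiver is running here, the call simply gives up after a few retries:

    import socket

    def send_reliably(sock, packet, destination, timeout=1.0, retries=5):
        # The sender, not the network, guarantees delivery: keep resending
        # the packet until the receiver acknowledges it (or we give up).
        sock.settimeout(timeout)
        for attempt in range(retries):
            sock.sendto(packet, destination)
            try:
                reply, _ = sock.recvfrom(1024)
                if reply == b"ACK":
                    return True       # acknowledged - transfer complete
            except socket.timeout:
                continue              # no acknowledgement - resend
        return False                  # give up after several attempts

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print(send_reliably(sock, b"a few thousand bytes at most", ("127.0.0.1", 9999)))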

Most of these examples can be seen on this page that you are viewing. To see the text file that is the source of this page, right click on the mouse, then click View Source.

Go top

 

Other Top Languages

See below (also published at http://www.swcs.com.au/top10languages.htm) for a brief summary of the current top 10 programming languages on the Internet.

10 Top Programming Languages 2014

Background information to this article came from here.

Each entry gives the name, the year it appeared, the language(s) it was based on, and who wrote it and why.

1. Java (1995, based on C and C++) - Sun Microsystems, as a graphical language to run on any operating system (Windows, Mac, Unix) inside a Java "virtual machine". It is now one of the most in-demand programming languages, playing a major role within the Android operating system on smartphones. Sun was started by a team of programmers from Stanford University in California in 1982, building Sun workstations that ran on the Unix operating system.

2. C (1972, based on B) - AT&T Bell Inc., as a high-level structured language with which to write an operating system - Unix - for Digital Equipment Corporation (DEC)'s PDP-11 minicomputer.

3. C++ (1983, based on C) - AT&T Bell Inc., to provide C with "classes" or graphical "object" extensions. Used in writing Adobe graphical software and the Netscape web browser.

4. C# (C sharp) (2000, based on C and C++) - Microsoft, to run on Windows operating systems.

5. Objective-C (1988, based on C) - licensed by Steve Jobs to run his NeXT graphical workstations. Currently runs the OSX operating system on Apple iMacs and iOS on Apple iPads and iPhones.

6. PHP (1997, based on C and C++) - university students, as open source software running on web servers. Major community release by two programmers, Andi Gutmans and Zeev Suraski, in Israel in 2000. Used in Wordpress and Facebook.

7. Python (1991, based on C and C++) - university and research students, on web servers as open source software. Click here for sample instructions. First major community release in 2000. Used by Google, Yahoo, NASA.

8. Ruby (1995, based on C and C++) - Yukihiro Matsumoto in Japan, as open source software for websites and mobile apps.

9. Javascript (1995, based on C) - Brendan Eich at Netscape (the forerunner to Mozilla's Firefox browser), as open source software at the client end. Used in Adobe Acrobat.

10. SQL (1974) - initially designed by IBM as a structured query language, a special-purpose language for managing data in IBM's relational database management systems. It is most commonly used for its "Query" function, which searches informational databases. SQL was standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) in the 1980s.

** End of List

 

Click here for a Udemy tutorial in Javascript, as well as some of these other languages.

 

 

References:
1. Gilster, Paul (1995). The New Internet Navigator (3rd ed.). John Wiley & Sons, Inc.
2. Clarke, Roger (2001, 2004). A Brief History of the Internet in Australia.
3. Goralski, Walter (2002). Juniper and Cisco Routing: The Internet and the Router (excerpt). John Wiley.
4. History of Computing (with photo links) and the Internet (2007).
5. History of the Internet, Wikipedia (2010).

** End of Report

Go top