Who is who on the Internet and who did what when

and How Did Everyone Else Manage to Agree


Table of Contents

Overview Statistics and WHOIS who

1969 Introducing the Internet

C a portable language UNIX a universal operating system

1983 a landmark year

Decentralized Routing and ISPs

World Wide Web

Data Encryption

ARIN and ICANN

New Modems, Wireless Networks and Smartphones

Foreign Characters in Domain Names

Cloud Computing

Latest US Stats

Summing Up

Internet in Australia and the NBN

HTML - Hyper Text Markup Language

Other Top Languages

Firstly, some statistics. In 2024, 5 billion individual users access around 1 billion host computers via a global routing table of 520,000 networks on 115,000 Autonomous Systems (AS). An Autonomous System is a single network or group of networks, typically governed by a large enterprise, with multiple links to other Autonomous Systems. These are in turn serviced by the several hundred backbone Internet Service Providers (ISPs) (a count dating from May 1999) that make up the core of the Internet, overseen by five Regional Internet Registries (RIRs). E-mail is sent, and web pages are found, through the use of domain names. There are now 359 million domain names, with 160 million of them ending in those three letters, .com. All of these names are overseen by registrars, with Go Daddy currently the largest, having 62 million domain names under management. That's a lot.

So for this to work, you as a user connect to a local ISP's network. You then have access to its Domain Name System server (DNS server for short): software on a computer that translates a host name you send it, e.g. www.google.com, into a corresponding IP address (72.14.207.99). This Internet Protocol address specifies first the network, and second the host computer (similar to the way a phone number works). If the DNS server doesn't know the host name, it endeavours to connect (within a second or so) to an authoritative DNS server that does.
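This name-to-address translation can be tried directly from Python's standard library, which hands the query to your machine's configured resolver. A small sketch ("localhost" is used so the example works without a network connection; a public name such as www.google.com would travel the resolver chain just described):

```python
import socket

# Translate a host name into an IPv4 address by asking the system's
# configured resolver -- the same query a browser sends before connecting.
def lookup(hostname):
    return socket.gethostbyname(hostname)

# "localhost" resolves without any network traffic; a public name such
# as www.google.com would instead go to your ISP's DNS server.
print(lookup("localhost"))    # 127.0.0.1
```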

Ultimately, that server is one of 1,754 or so worldwide servers appointed by the Internet's 1,449 TLD (Top-Level Domain) authorities. Click here for the latest list, showing recent additions and deletions. For example, if the domain name ends in .au, then the Australian auDA is the authority for that TLD zone. If it ends in .cn, then the China Internet Network Information Center is the authority. If the name ends in .com or .net (with no two-letter country suffix), then the US-based Verisign, Inc. is the authority for those two TLD zones. And so on. Now, if a TLD server is unknown, it can be found in a small official root zone file overseen by ICANN, a nonprofit private corporation headquartered in Los Angeles. Any changes to the file are processed by Verisign, then published by 13 Root Name Server organizations. Your request is successful when, firstly, the name is found, and secondly, it is acknowledged by the name servers on the network which actually hosts the pages, files and emails sent to that domain.

At this point your DNS server caches ("stores") that name and IP address for subsequent requests, for perhaps 24 hours or so. After that, it deletes the name and IP address from its cache, which means that the next time the name is requested, the ISP has to look it up again. This cache minimizes requests made on the authoritative DNS servers, while also ensuring the entry won't be out of date by more than 24 hours or so for any domain; of course, staleness only matters if the domain changes hosts. To further reduce Internet traffic, desktops and mobiles also cache the host name's IP address, along with copies of the web page, and only download fresh data after the set time has elapsed. A proxy server similarly caches copies of pages for computers on its network. Note that manually pressing page refresh reloads the page but doesn't update the DNS cache (the stored IP address). Click here for how to manually clear a DNS cache on your desktop. On iPhones, switching to Airplane Mode, then switching back, clears the DNS cache. With Android phones, navigating to Settings -> Apps -> Chrome allows you to clear the cache. For more reading, click here.
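The caching behaviour described above can be sketched in a few lines of Python. This is a conceptual illustration only: the class name, the TTL handling and the fake upstream server are invented for the example, not a real resolver API.

```python
import time

class CachingResolver:
    """Conceptual sketch of DNS-style caching: keep an answer until its
    time-to-live (TTL) expires, then ask the upstream server again."""

    def __init__(self, lookup_fn, ttl_seconds=86400):  # default ~24 hours
        self.lookup_fn = lookup_fn
        self.ttl = ttl_seconds
        self.cache = {}  # name -> (ip, expiry time)

    def resolve(self, name):
        now = time.monotonic()
        entry = self.cache.get(name)
        if entry and now < entry[1]:
            return entry[0]               # cache hit: no upstream query
        ip = self.lookup_fn(name)         # cache miss (or expired): look it up
        self.cache[name] = (ip, now + self.ttl)
        return ip

# A fake upstream server lets us count how often it is actually consulted.
calls = []
def fake_upstream(name):
    calls.append(name)
    return "203.0.113.7"                  # an address from the documentation range

resolver = CachingResolver(fake_upstream, ttl_seconds=60)
resolver.resolve("example.com")
resolver.resolve("example.com")           # second request served from cache
print(len(calls))                         # 1 -> upstream was queried only once
```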

Now, if looking up details for one of the "open" .au domains, you, as an individual, can go to whois.auda.org.au. This database provides the list of name servers for all the domains within the five "open" second-level domain (2LD) spaces, i.e. ones ending in .com.au, .org.au, .net.au, .id.au or .asn.au (asn = association). Note: since its appointment in 2002, auDA has never dealt directly with the public in registering domains. Commercial registrars carry out this task, thus preventing potential "conflict of interest" situations within auDA. Then, with regard to the three "closed" government 2LDs, Afilias is the registrar for .gov.au and .edu.au, while CSIRO manages its own .csiro.au 2LD.

Some background: on Oct 25th 2001, auDA (a Government-endorsed body) became the authorized Domain Administrator for the .au TLD. It began by appointing AusRegistry in July 2002 on four-year terms, last renewed in December 2013 for the term 2014-2018. In July 2018, Afilias became the new .au registry services provider. Prior to Afilias, auDA and AusRegistry, Robert Elz at Melbourne University administered the whole .au TLD at no charge from March 1986. In 1996 he delegated a five-year exclusive commercial license for the .com.au 2LD to a startup company, Melbourne IT. Click here for further details.

In 2024 the auDA wholesale price to authorized registrars was $8.56.

Example - Stephen Williamson Computing Services
So, by going to auDA, we learn that the host computer for the domain swcs.com.au can be found by accessing any one of the three name servers found at ns1.qnetau.com, ns2.qnetau.com or ns3.qnetau.com, managed by Quadra Hosting, now known as Vodien Australia, a professional cloud-based web hosting company from Singapore. And the commercial registrar for swcs.com.au is Netregistry in Sydney (which acts on behalf of the owner of the name, i.e. Stephen Williamson Computing Services Pty Ltd).

This means that swcs.com.au is currently hosted on the Vodien network, with its actual site hosted in Sydney using a service capable of hosting numerous domains transparently. Thousands of different domains might in fact share the same processor (with pages being published in different folders). If the Internet traffic grows too heavy on this shared server, the swcs domain may in the future require its own, dedicated server. (This situation has not yet been reached).

Domain lookup and WHOIS who - IP address registration

Click here for a web page that will look up any IP address or domain name details online.

Click here to download a free program that will look up any IP address or Host, by accessing the WHOIS section of each of the five regional bodies responsible for IP address registration: ARIN, RIPE, APNIC, LACNIC, and AfriNIC.

Your computer uses this IP address to form packet headers for routing data to and from the host.

There are 4 billion addresses available (2 to the 32nd power) under IP version 4, first deployed in January 1980. (IPv0-IPv3 were test versions, 1977-1979.) A few servers use IPv6, first defined in 1996, with 2 to the 128th power addressing, i.e. 340 trillion, trillion, trillion addresses: enough for every grain of sand on the planet.
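Both address-space sizes, and the fact that an IP address is just a number written in a readable notation, can be checked with Python's standard ipaddress module (the IPv4 sample is the Google address from the text; the IPv6 sample is from the documentation range):

```python
import ipaddress

print(2 ** 32)     # 4294967296 -> the ~4 billion IPv4 addresses
print(2 ** 128)    # about 3.4 x 10**38 IPv6 addresses

# An IP address is just such a number, written in a readable notation.
ip4 = ipaddress.IPv4Address("72.14.207.99")
print(int(ip4))    # 1208930147 -> the raw 32-bit value routers compare

ip6 = ipaddress.IPv6Address("2001:db8::1")   # a documentation-range address
print(ip6.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001
```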
Computer routers store and forward small data packets between computer networks. Gateway routers repackage and convert packets travelling between homes/businesses and ISPs, or between ISPs. These connect with core routers, which form the Internet backbone. So, how did it all come together? In a nutshell, it came about as a joint open exercise between the U.S. military and the research departments at a number of key universities. For 20 years and more, in fact until April 30 1995, unrelated commercial use of the national backbone was strictly forbidden. Accordingly, no private companies got a look in when it came to claiming patents, trademarks, or ownership of its overall design.

Go top

1969 — Introducing the Internet

It began in 1969, when the Defense Advanced Research Projects Agency (DARPA), working with Leonard Kleinrock at UCLA and Douglas Engelbart at Stanford Research Institute (SRI), built the world's first packet-switching network, the ARPANET. Using four Honeywell computers as gateways (routers), the contracted team of Bolt Beranek and Newman connected three university mainframes at 50 Kbps: first an SDS at UCLA, then an IBM at UCSB and a DEC at the University of Utah, to an SDS at SRI near San Francisco. Telnet in 1969 was followed by email in 1971. By the close of 1973, 40 mainframe hosts were online, including satellite links to Norway and London.

In 1963, the French engineer Louis Pouzin, then working on CTSS at MIT, had written RUNCOM, an ancestor of the command-line interface and the first "shell" script. Back in France, in 1972 he designed the datagram for use in Cyclades, a robust packet-switching network that always "assumed the worst", i.e. that data "packets" would have to reach their final destination over unreliable, out-of-order delivery services. Drawing on these ideas, in 1973 Robert Kahn and Vinton Cerf started work on a new internetwork Transmission Control Program using set port numbers for specific uses. Click here for an initial list from December 1972. It used the concept of a "socket interface" that combined functions (or ports) with source and destination network addresses, connecting user-hosts to server-hosts. Jon Postel, director at the Information Sciences Institute at the University of Southern California in LA, oversaw the Request for Comments (RFCs) for these TCP/IP standards and the issuing of IP address blocks to assignees, until his death in 1998. And regular updates of the network, with a list of hosts, were provided in a standard text file to all parties by the Defense Dept's Network Information Center (NIC) at Stanford.

Go top

C — a portable language UNIX — a universal operating system

In the late 1970s, DARPA decided to base their universal computing environment on BSD UNIX, with all development to be carried out at the University of California in Berkeley. UNIX had been greatly influenced by an earlier operating system, Multics, a project that had been funded by DARPA at the Massachusetts Institute of Technology (MIT) in 1964, though given its size and complexity, many of its AT&T Bell Laboratories programmers had withdrawn. In 1969 a group of them led by Ken Thompson and Dennis Ritchie wrote a simpler version of Multics (which they called UNIX) in assembly language on a DEC PDP-7. In 1972 UNIX was rewritten in a slightly higher-level language, C, on a DEC PDP-11. C had been developed by Dennis Ritchie (based on an earlier language, B, put together by Ken Thompson). C's clever use of data structures, combined with its closeness to assembly language, led to the take-up of UNIX as an operating system by hardware manufacturers everywhere. Even though it was developed on a DEC computer, C thus achieved the portability that IBM's PL/1, the high-level language employed in Multics, had been after. AT&T accordingly made the UNIX operating system (and the C language) available under license to universities and commercial firms, as well as the United States government.

With IPv4 in 1980, the National Science Foundation created a core network for institutions without access to the ARPANET. Three computer science departments (Wisconsin-Madison, Delaware and Purdue) initially joined. Vinton Cerf came up with a plan for an inter-network connection between this CSNET and the ARPANET.

Meanwhile, at the hardware cabling level, Ethernet was rapidly becoming the standard for small and large computer networks over twisted-pair copper wire. It identified the unique hardware address of the network interface card inside each computer, then regulated traffic through a variety of switches. This standard was patented in 1977 by Robert Metcalfe at the Xerox Corporation, operating with an initial data rate of 3 Mbps. Success attracted early attention and led in 1980 to the joint development of the 10-Mbps Ethernet Version 1.0 specification by the three-company consortium: Digital Equipment Corporation, Intel Corporation, and Xerox Corporation. Today, the IEEE administers these unique Ethernet addresses, sometimes referred to as media access control (MAC) addresses. An address is 48 bits long and is displayed as 12 hexadecimal digits (six groups of two digits) separated by colons, which allows for about 281 trillion unique addresses. An example of an Ethernet address is 44:45:53:54:42:00; note that the IEEE designates the first three octets as vendor-specific. To learn the Ethernet address of your own computer in Windows, at a Command Line prompt type ipconfig /all and look for the physical address. To learn the Ethernet address of your ISP-facing gateway, type arp -a, then look for the physical address that applies to the default gateway. In Australia, this will be the modem you received from Telstra, Optus, TPG, Dodo, etc., typically assigned the private IP address 192.168.0.1.
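The 48-bit arithmetic, and the vendor prefix, can be checked with a few lines of Python, using the example address from the text:

```python
# A MAC address is 48 bits, written as six colon-separated hex octets.
mac = "44:45:53:54:42:00"                 # the example address from the text

octets = [int(part, 16) for part in mac.split(":")]
print(len(octets))                        # 6

oui = mac.split(":")[:3]                  # first three octets: the vendor prefix
print(oui)                                # ['44', '45', '53']

print(2 ** 48)                            # 281474976710656 possible addresses
```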

Go top

1983 — a landmark year

Back to the Internet. On January 1st 1983, the Defense Communications Agency split off the military network, MILNET, from the research-based ARPANET, and then mandated TCP/IP protocols on every host. In May, the Massachusetts Institute of Technology, in conjunction with DEC and IBM, used TCP/IP to develop a campus-wide model of distributed computing that became known as the client-server model, with PCs/workstations and servers, as opposed to the mainframe model with all intelligence within the central host computer. Next, in August 1983, the University of California in Berkeley included Bill Joy's modified version of TCP/IP in its commercial release of BSD UNIX, a landmark event. With this, the ARPANET and CSNET grew, adding gateways for any network anywhere, regardless of that network's internal protocols or hardware cabling, allowing these two original core networks to remain intact. Accordingly, defense contractors' networks, Usenet discussion groups (i.e. bulletin boards), Bitnet's automated mailouts, and later large business networks (CompuServe in 1989, MCI Mail in 1990, and AOL in 1994), all of which used different protocols, established gateways. Click here for details of other non-Internet networks at that time. Many date the true arrival of the Internet as 1983.

In 1984 in Europe, a consortium of several European UNIX systems manufacturers founded the X/Open Company Ltd to define a single specification for operating systems derived from UNIX, to increase the interoperability of applications and reduce the cost of porting software. Its original members were Bull in Paris, ICL in London, Siemens in Munich, Olivetti in Italy, and Nixdorf (also from Germany) — a group sometimes referred to as BISON. Philips in Amsterdam and Ericsson in Stockholm joined soon afterwards, at which point the name X/Open was adopted. In many ways, it was preparing for the commercial impact of AT&T Computer Systems (a company that had been allowed to form on January 1, 1984, having divested itself of the Bell (Telephone) Operating companies), and the upcoming release of their proprietary UNIX System V. As a proprietary product, it was in direct opposition to the UNIX that had been developed at Berkeley, and fears of numerous versions emerging were realized as AT&T formed a mutual alliance with Sun Microsystems a few years afterwards.

But meanwhile on an academic level, the University of Wisconsin established the first Name Server — a directory service that looked up host names when sending email on the CSNET. In September 1984, taking this to the next logical step, DARPA replaced the HOSTS.TXT file with the Domain Name System, establishing the first of the Top Level Domains — .arpa .mil .gov .org .edu .net and .com. In 1985, with 100 autonomous networks now connected — click here to see a 1985 primary gateway diagram, registration within these TLDs commenced. Click Here for a list of the first domains. Country code TLDs were then appointed shortly afterwards. Stanford now administered a small centralized Root Zone file providing authoritative IP addresses for each TLD — as well as Name Registries providing IP addresses of name servers for each domain in the generic TLDs — with Jon Postel administering the .edu TLD & the .us TLD. These generic domains .com, .org, .mil, .gov, .edu and .net were given out for free, along with free IP address blocks.

In 1986, there was major expansion when the National Science Foundation built a third network, the NSFnet, with high-speed links to university networks right around the country. In 1987, the first email from China to Germany was sent via the CSNET. In 1988, IANA, the Internet Assigned Numbers Authority, overseen by Jon Postel at the Information Sciences Institute, entered into a formal funding contract with the Defense Dept. This same year, a non-profit company in Ann Arbor, Michigan called Merit Network, partnering with IBM and MCI, upgraded the NSFnet from 56Kbps to 1.5Mbps. As over 170 campus networks came online, it took over backbone duties from the ARPANET, which was then decommissioned and dismantled in 1990.

Go top

Decentralized Routing and ISPs

In 1989, with 500 local networks now connected through regional network consortiums, the very first commercial ISP, The World, provided indirect access via dial-up UUCP accounts. It first downloaded a file from the Internet to the ISP, then used the UNIX-to-UNIX Copy command to copy the file to a paying customer's computer. And with the introduction of decentralized routing via the Border Gateway Protocol in June of that same year, the National Science Foundation in 1990 started a series of workshops and studies to enable the transition of their backbone network to private industry. Merit, partnering again with IBM and MCI, now formed Advanced Network & Services Inc, or ANSnet, which over the next two years upgraded the NSFnet backbone from 1.5Mbps to 45Mbps. The first Internet search engine now started with Archie, designed to index files in "archives". And the PPP (Point-to-Point Protocol) for data linking over telephone lines was published as an Internet standard. Where SLIP (Serial Line Internet Protocol) had been in place since the early 1980s, this now brought in a full-featured protocol. Its framing mechanism was based on ISO's High-Level Data Link Control (HDLC) protocol, which had been initially developed by IBM. Click here for further details.

In September 1991, with 4000 networks now online, US Defense transferred domain name registration from Stanford to a small, but capable private-sector firm Network Solutions Incorporated - NSI.

Go top

The World Wide Web

Over in Europe, back in February 1991, Tim Berners-Lee, programming a NeXTcube workstation in the European Laboratory for Particle Physics, had held a seminar for a World-Wide Web; then, with the help of Paul Kunz at Stanford, he demonstrated in France on Jan 15, 1992 the first web browser and offsite server. The server downloaded file(s) using a new protocol he called HTTP (Hyper Text Transfer Protocol), forming a page that was formatted by the browser using a markup language called HTML. This allowed hypertext on a page to link to another page on a computer running his protocol anywhere in the world, by including in the markup language the linked page's URL (Uniform Resource Locator): its domain name, path and filename, i.e. a web address. Thus the user could request that new page by simply selecting the link. And for Paul Kunz, the standout feature was that the user could also type in and upload a query string to a database hosted on that remote computer, a feature that Yahoo, and then Google, would later use on their database servers with great effect. Click here for further background.
Also in this year, Jean Polly published the phrase 'Surfing the INTERNET'.
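The pieces of a URL that Berners-Lee defined (protocol, domain name, path, plus the query string Paul Kunz singled out) can be pulled apart with Python's standard urllib.parse module; the address below is a made-up example:

```python
from urllib.parse import urlparse, parse_qs

# A URL bundles everything needed to fetch a page: the protocol, the
# domain name to resolve via DNS, the path on the server, and an
# optional query string for a database lookup.
url = "http://www.example.com/catalog/search.html?author=Kunz&year=1992"
parts = urlparse(url)

print(parts.scheme)            # http
print(parts.netloc)            # www.example.com
print(parts.path)              # /catalog/search.html
print(parse_qs(parts.query))   # {'author': ['Kunz'], 'year': ['1992']}
```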

Meanwhile in Amsterdam, Holland, Réseaux IP Européens (RIPE) in 1992 received the first allocation of a block of IP addresses from IANA that enabled it to become a regional IP registry, independently responsible for the physical location of each network.
And in Australia, the AARNet who had linked all the universities in April-May 1990, now "applied to IANA for a large block of addresses on behalf of the Australian network community .... because allocations from the US were taking weeks .... The address space allocated in 1993 was large enough for over 4 million individual host addresses .... The Asia-Pacific Network Information Centre (APNIC) then started as an experimental project in late 1993 (in Tokyo), based on volunteer labour and donated facilities from a number of countries. It evolved into an independent IP address registry ... that operates out of Brisbane" - R.Clarke's Internet in Australia

Back in the U.S.
The Dept of Defense now ceased all funding of the Internet apart from the .mil domain. On January 1st 1993 the National Science Foundation set up the Internet Network Information Center - InterNIC, awarding Network Solutions the contract for ongoing registration services, working co-operatively with AT&T for directory, database & (later) information services.

This same year, 1993, students and staff working at the NSF-supported National Center for Supercomputing Applications (NCSA) at the University of Illinois launched Mosaic, a free web browser with versions for UNIX, Mac and Windows (via Winsock 1.1). As a graphical browser that allowed images and text on the same page, along with clickable hyperlinks, it provided home users a much richer experience than running a command-line shell.

Regarding these new dial-up home users, the plan was to be able to dial up an ISP's telephone number using a home phone modem, be automatically granted access to a modem from a pool of modems at the ISP's premises, and thus have a temporary IP address assigned to the home computer for the length of the phone call. Initial costs for these SLIP / PPP connections were $US175 per month. But competition between ISPs & new technology meant that over the next two years prices plummeted rapidly. So while Mosaic was a fairly basic browser by today's standards, its new features introduced huge numbers of "unskilled" users to the web. At the end of 1993 there were 20,000 separate networks, involving over 2 million host computers and 20 million individual users. Click here to see year by year growth.

In February 1994, the NSF awarded contracts to four NAPs (Network Access Points) or, as they are now known, IXPs (Internet Exchange Points), operating at 155Mbps — one in New York operated by Sprint, one in Washington D.C. operated by MFS, one in Chicago operated by Ameritech, and one in California operated by Pacific Bell. These new Tier 1 networks now formed a mesh rather than a single backbone network, having mutual peering agreements to allow network traffic exchange to occur without cost. Over the following year, all the regional NSFnet networks migrated their connections to commercial network providers connected to one (or more) of these NAPs. In late 1994, America Online (AOL) purchased ANSnet's assets and operations. Also this year, the immensely successful web directory Yahoo was created. Southwest Airlines offered the first e-tickets for passengers. And at the end of 1994, with 30,000 web sites now online, the Netscape browser, a predecessor of the open source browser, Mozilla Firefox, was released. Netscape cost non-academic users just $US39 and soon gained over 80% market share.

On April 30 1995, the NSFnet was dissolved. The Internet Service Providers had now taken over: internetMCI, ANSnet (now owned by AOL), SprintLink, UUNET and PSINet. Click here to see a diagram. There was a massive surge in registrations for the .com domain space. In August, Microsoft released Internet Explorer 1.0 free in a Windows 95 add-on pack. On September 14th, the NSF imposed a yearly fee of $50 per domain name, payable through Network Solutions. In December, Netscape added support for JavaScript as a brand new web language. The first search engine to allow natural language queries, AltaVista, was released. VocalTec released the first real-time VoIP.

Go top

Data Encryption

At this time data encryption came to the fore via the Secure Socket Layer (SSL) protocol, which changed all communication between user and server into a format that only user and server could understand. During an initial handshaking exchange, the user's software verifies the server's certificate and uses the server's public encryption key to agree on a shared session key; that session key then encrypts the data flowing in both directions. Click here for further details.
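Modern languages expose this machinery directly. A minimal sketch using Python's standard ssl module (TLS being the modern successor to SSL): it builds a client context that insists on verifying the server's certificate, without needing any network connection.

```python
import ssl

# Build a client-side TLS context (TLS is the modern successor to SSL).
# It insists on verifying the server's certificate -- the signed document
# carrying the server's public key -- before any application data flows.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are checked
print(context.check_hostname)                    # True: name must match the certificate

# In use, the context wraps an ordinary TCP socket, e.g.:
#   secure = context.wrap_socket(sock, server_hostname="example.com")
```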

ARIN and ICANN

In December 1997, ARIN (American Registry for Internet Numbers), a nonprofit corporation, was given the task of registering the IP address allocations of all U.S. ISPs, a task previously handled by Jon Postel/InterNIC/Network Solutions. Meanwhile, since Sep 1995, there had been widespread dissatisfaction at the $50 per annum domain name fees for the five generic TLDs .com .net .org .gov .edu, and back in 1996 Jon Postel had proposed the creation of a number of new, competing TLDs. With this in mind, on January 28 1998, he authorized the switching over of 8 of the 12 root servers to a new IANA root zone file, thus, in effect, setting up two Internets. Within the day, a furious Ira Magaziner, Bill Clinton's senior science advisor, insisted it be switched back. Within the week, the US Govt had formally taken over responsibility for the DNS root zone file. On September 30 1998, ICANN (Internet Corporation for Assigned Names and Numbers) was formed to oversee InterNIC for names and IANA for numbers under a contract with the U.S. Dept of Commerce. ICANN is a nonprofit corporation based in Los Angeles. Two weeks later, Jon Postel passed away, following complications after heart valve replacement surgery.

In December 1998, the movie "You've Got Mail", starring Tom Hanks and Meg Ryan, was released, featuring AOL as their ISP (though AOL preferred to call itself an "Online" Service Provider), and it popularized the phrase "Are you online?". In June 1999, with ICANN's decision to allow multiple registrars of those generic domain names .com, .org and .net, Network Solutions lost its monopoly as sole domain name registrar. And with competition, registration costs for generic .com domain names dropped from $50 to $10 per annum. As mentioned previously, this .com domain name registry, by far the largest TLD with 160 million names, is now operated by Verisign, who purchased Network Solutions in 2000. Around the same time, search engines became an essential part of the web. Click here for an article on How Search Engines Work.

In October 2016, ICANN's governance transitioned to a multistakeholder model. Click here for more details.

Go top

New Modems, Wireless Networks and Smartphones

Cable Modems: Firstly, some background regarding cable TV. In the US it goes back to 1948. It was introduced into Australia in 1994 by Optus, who implemented it with fibre-optic cable (i.e. transmitting via on/off light pulses). Fibre optic is more fragile than copper, and Optus (and Foxtel) employed FTTN, Fibre (just) To The Node, with coaxial copper wire for the final "last mile" connection. Regarding FTTP (Fibre To The Premises), OECD stats in 2009 showed that Japan had 87%, and South Korea 67%, of their households installed with it. However, the difficulties with fibre meant that FTTP installations in other countries were much lower.
Now in 1996 in the US, cable modems lifted download speeds on the Internet from 56Kbps to 1.5Mbps (i.e. over 25 fold) and more. Microsoft and NTT ran pure fibre-optic tests and saw speeds as high as 155Mbps.

ADSL Modems: In 1998, ADSL (Asymmetric Digital Subscriber Line) technology (deployed on the "downstream" exchange-to-customer side) and a small 53-byte ATM cell format (on the "upstream" exchange-to-ISP side) were retooled for Internet access, offering initial download speeds of 768Kbps. ATM cells had been originally developed to meet the needs of Broadband ISDN, first published in early 1988. Click here for more info.
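The 53-byte cell split (a 5-byte header plus a 48-byte payload) makes the overhead easy to work out; a quick check in Python (the 768 Kbps figure is the initial ADSL speed from the text):

```python
# An ATM cell is 53 bytes: a 5-byte header plus a 48-byte payload.
CELL_BYTES = 53
HEADER_BYTES = 5
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES   # 48 bytes of user data per cell

efficiency = PAYLOAD_BYTES / CELL_BYTES
print(round(efficiency * 100, 1))           # 90.6 -> about 9.4% header overhead

# So of a 768 Kbps initial ADSL downstream rate, the cell payload carries:
print(round(768 * efficiency))              # 696 Kbps, before higher-layer overhead
```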

As a sidenote, click here for an excellent article on how telephones actually work (they were first introduced into Melbourne in 1879). Click here for a short page re Aussie voltages, milliamps and signal strength on a typical phone line.

WiFi: In August 1999 the Wi-Fi (IEEE 802.11) alliance was formed to provide a high-speed wireless local area networking standard covering short distances, about 25 metres from the router inside a building, utilizing each device's MAC address as its unique identifier. Additional access points, along with switching and cabling to the router, enabled large shopping centres to increase their coverage. Typical speeds are 4-5 Mbps using 802.11b, 11-22 Mbps using 802.11g, over 100 Mbps using 802.11n, and up to 1 Gbps or more using 802.11ac. Click here for an article re WiFi and signal strength.
In 2001 the WiMAX (IEEE 802.16) Forum offered extended outdoor coverage, up to 50 kms from each base station. Awkwardly, its initial lack of compatibility with legacy 3G devices meant it was unable to counter the mass rollout of 4G and LTE that came in 2009 (see next paragraph), which was more compatible with earlier 3G technology. In 2020, while free Wi-Fi proliferates everywhere, WiMAX subscribers worldwide number, perhaps, just in the tens of millions.
Click here for a recent article, comparing WiMAX with LTE today.

Mobile Phones 1G, 2G, GPRS (2.5G), Edge (2.75G), 3G, 4G, 5G - What's the Difference
Click here for an introduction written in 2011 (with photos) to each of these various mobile technologies, except for 5G.
Click here for the date when each was first introduced to Australia.
Click here for a current list of the largest mobile network operators worldwide.

Internet on Mobile Phones 2.5G:
The packet-switching technology called GPRS (General Packet Radio Service), running at 20-40 Kbps, was pioneered under the European GSM standard and was commercially launched on a 2G mobile phone network in the UK in June 2000, followed by Nokia in China in August 2000. Rather than the MAC address, GSM utilizes info on the phone's SIM card as its unique identifier at each base station. SGSNs (Serving GPRS Support Nodes) are responsible for delivering data packets from and to the base stations and converting mobile data to and from IP. A GGSN (Gateway GPRS Support Node) at the network provider then connects the user to the appropriate site on the Internet, known as an APN (Access Point Name). If the user is in their car, there may be more than one SGSN serving the user as they drive between base stations. Note, the maximum range of a mobile phone to a base station, dependent on factors such as the number of hills and the height of the mast, can be anywhere from 5 to 70 kms. If the user drives near a base station covered by a new SGSN, the old SGSN hands over automatically, with any lost packets retransmitted. So it's Mobile phone → Base station → SGSN → GGSN → Internet.

Internet on Mobile Phones 3G:
On the 3G packet switching level, two competing standards were launched worldwide. First came the CDMA2000 EV-DO (Evolution-Data Optimised) high-speed system in 2000 for 2G CDMA One networks, pioneered by Qualcomm in San Diego, California. It was adopted by Sprint and Verizon in the US, and Telstra in Australia.
Over in Europe and China, as 2G GSM (Global System for Mobile communications) evolved into 3G UMTS (Universal Mobile Telecommunications System), next came W-CDMA (Wideband CDMA) in 2001 as the main member of this UMTS family. Both CDMA2000 and W-CDMA provided more bandwidth than 2G systems, but W-CDMA was also able to complement existing GSM/GPRS/Edge networks on 2G TDMA and their use of SIM cards. SIM cards were not a feature of CDMA2000, which thus made it more difficult for the user to change carriers. No, not very flexible. In Australia, W-CDMA is now used for 3G by all mobile carriers, with Telstra switching off CDMA EV-DO in Jan 2008. While it initially ran at 100-200 Kbps, W-CDMA evolved to higher speeds of 1 to 4 Mbps by using HSPA (High Speed Packet Access).

Smartphones and scanning cameras:
On the hardware front, we have had the Blackberry in 2003 with its Push Email feature, followed by numerous other smartphones, including the Apple iPhone in 2007, then the Android phones, e.g. the HTC Dream, released in Australia by Optus in Feb 2009. They came with scanning cameras that could read QR (Quick Response) codes, a two-dimensional barcode designed in 1994 for the automotive industry in Japan.

Smartphones and their multiple antennas:
Smartphones were designed initially with at least three antennas. In the Samsung Galaxy SII released back in 2011 you can see its internal GPS antenna at the top, a combination Bluetooth and WiFi antenna at the bottom left, and the GSM and UMTS antenna (for 2G and 3G) on the bottom right.

SmartPhones 4G and 5G:
In 2009, newer 4G technology increased the speed again, to at least 100 Mbps, with GSM's LTE (Long Term Evolution) standard. It required at least two antennas at every base station and two LTE receiver antennas in each smartphone.

In November 2019, 4G subscribers worldwide numbered 4 billion, 3G 2 billion, 2G 2 billion, and 5G (see below) just 13 million.

On 17 October 2017, Qualcomm announced the first 5G mobile connection (yes, much faster again), with a speed of 1 Gbps, but needing new phones with additional antennas and many new towers yet to be built.
On 28 May 2019, Telstra launched its first 5G mobile phone plans in Australia with a Samsung Galaxy S10.
Click here for its latest news.

In a report by Gartner, 1.55 billion smartphones shipped in 2018. Apple iPhones had 13.4% of this market and nearly all the rest were Android smartphones.
PC sales in 2018, in contrast, were 259 million units. Many customers are holding on to their devices, both smartphones and computers, for longer, as these continue to offer adequate speed and performance.

Go top

Foreign Characters in Domain Names: The Domain Name System service had originally been designed to support only 37 ASCII characters i.e. the 26 letters "a - z", the 10 digits "0 - 9", and the "-" character. Although domain names could be generated in English using upper case or lower case characters, the system itself was case-insensitive: it always ignored the case used when resolving the IP address of the host. Then, in 2003, a system called IDNA (Internationalizing Domain Names in Applications) was released to allow domain names to contain foreign characters. A special syntax called Punycode was developed to employ the prefix "xn--" in the domain label and encode any foreign characters within the label from Unicode into a unique ASCII address, e.g. http://中.org would thus find itself encoded on name servers worldwide as http://xn--fiq.org. In 2008, with a view to also having TLDs in foreign scripts, ICANN released 11 new TLDs using the word for "test" in 11 different languages, letting DNS servers confirm that they were able to handle a .test TLD in each of the scripts. Click here to view the results. Click here for another news report.
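This encoding can be reproduced with Python's built-in codecs (a minimal sketch using the standard "idna" and "punycode" codecs; the domain is the 中.org example from the text):

```python
# Encode an internationalised domain name into its ASCII ("xn--") form
# using Python's built-in IDNA (2003) codec, which converts each label.
domain = "中.org"
ascii_form = domain.encode("idna")
print(ascii_form)                      # b'xn--fiq.org'

# The raw Punycode of the label itself, without the "xn--" prefix:
print("中".encode("punycode"))         # b'fiq'

# Decoding reverses the mapping back to Unicode:
print(b"xn--fiq.org".decode("idna"))   # 中.org
```

This is exactly what resolvers and registries rely on: the name servers themselves only ever see the ASCII "xn--" form.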

Cloud Computing

In 2006, Amazon Web Services launched the Elastic Compute Cloud (EC2). It allowed users to rent virtual computers on which to run their own computer applications, paying by the hour for active servers, hence the term "elastic". Click here for a 2012 report. As it says, other companies (e.g. Rackspace, Google, Microsoft, IBM, HP and VMWare) have similarly stepped up and are also providing these "cloud" services to paying customers.

Click here for a 2017 revenue report. In order, the top revenue earners were Microsoft, Amazon, IBM, Salesforce.com, Oracle, SAP and Google.

Go top

US Stats

US Wired Stats (covering 110 million households) show Comcast (Xfinity) in the lead on 32 million, Charter (Spectrum) second on 30 million, AT&T third on 15 million then Verizon on 8 million subscribers.

Wireless stats (over 300 million users) show AT&T in Dallas, Texas on top with 240 million users, followed by Verizon in New Jersey with 140 million users and T-Mobile in Seattle, Washington with 120 million users. By far, these three outweigh the others.

  1. AT&T was formed from Southwestern Bell (SBC) and has incorporated Ameritech in Chicago, Pacific Telesis in California, and Bell South in Georgia.
  2. Verizon grew out of the Bell System breakup in 1984 and the Baby Bells. Bell Atlantic incorporated NYNEX and GTE in 1997-98, renamed itself Verizon in 2000, partnered with the British wireless company Vodafone for the next 14 years, and incorporated MCI in 2006.
  3. T-Mobile merged with Sprint in 2020 and is majority owned by Deutsche Telekom.

Summing Up

Now, to summarize. IP addresses are used to deliver packets of data across a network and have what is termed end-to-end significance. This means that the source and destination IP addresses remain constant as the packet traverses a network. Each time a packet travels through a router, the router will reference its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question (note that a router does not necessarily know the complete path from source to destination; it just knows the MAC hardware address of the next hop router to go to). If a match is not found, one of two things happens: the packet is forwarded to the router defined as the default gateway, or the packet is dropped by the router. To see a diagram of a packet showing its application layer Email/FTP/HTTP overlaid with separate Transport TCP/UDP port address, Internet IP address, and Network Ethernet MAC address headers, click here. To view an example of router "hops", click here.
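The lookup just described can be sketched in a few lines of Python using the standard ipaddress module. This is an illustration only: the table entries and next-hop addresses below are made-up values, and a real router does this in forwarding hardware, not a dictionary.

```python
import ipaddress

# A toy routing table: network prefix -> next-hop router (example values only)
routing_table = {
    ipaddress.ip_network("202.146.208.0/21"): "10.0.0.1",
    ipaddress.ip_network("202.146.0.0/16"):   "10.0.0.2",
}
default_gateway = "10.0.0.254"

def next_hop(destination: str) -> str:
    """Return the next-hop router for a destination IP address.

    Matches the destination against each known prefix and picks the
    most specific (longest) one; unmatched packets go to the default
    gateway, just as the paragraph above describes.
    """
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    if matches:
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return routing_table[best]
    return default_gateway

print(next_hop("202.146.212.12"))  # inside the /21 -> 10.0.0.1
print(next_hop("8.8.8.8"))         # no match -> default gateway
```

Note that when two prefixes both match, the more specific /21 beats the /16; this longest-prefix rule is what lets a router hold both broad and narrow routes at once.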
Click here to see the latest BGP Border Gateway Protocol, the Internet's global routing table. Click here for an analysis of the headings.

Go top

Internet in Australia

Now that we have some background, let's learn more about IP address allocation in Australia.

The company, Stephen Williamson Computing Services, is currently hosted at IP address 202.146.212.12
By clicking on www.iana.org we learn that 202.0.0.0 - 202.255.255.255, i.e. 16 million addresses, were allocated to APNIC, the Asia-Pacific Network Information Centre. And by clicking on APNIC we learn that IP addresses 202.146.208.0 - 202.146.215.255 (a /21 block of 2,048 addresses) were allocated to Net Quadrant Pty Ltd, trading as Quadra Hosting in Sydney.
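As a cross-check, Python's standard ipaddress module can confirm the size of that allocation and that the example host falls inside it (a quick sketch, treating the quoted range as the /21 block 202.146.208.0/21):

```python
import ipaddress

# The APNIC allocation quoted above, expressed as a /21 network
block = ipaddress.ip_network("202.146.208.0/21")
print(block.num_addresses)        # 2048 addresses in the block
print(block.broadcast_address)    # 202.146.215.255, the top of the range

# Check that the host in the example falls inside the allocation
host = ipaddress.ip_address("202.146.212.12")
print(host in block)              # True
```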

APNIC is a nonprofit organization that has been based in Brisbane since 1998, having started as a pilot project in Tokyo in late 1993. Today the majority of its members are Internet Service Providers (ISPs) in the Asia-Pacific region. Naturally, China is involved. In Australia, Telstra (who had purchased the AARNet's commercial businesses in 1995) and Optus are two national ISPs.

Click here for an article in 2003 that summarized commercial peering in Australia and the establishment of the so-called "Gang of Four" back in 1998: Telstra, Optus, Ozemail (sold to iiNet in 2005), and Connect.com.au (in 1998 part of AAPT, with AAPT sold to Telecom New Zealand in 2000, then sold to iiNet in 2010, before iiNet became part of TPG in 2015).

Back to 1999. Optus (followed by Telstra) introduced cable modems offering high speed connections transmitted over their HFC television networks: a Hybrid of Fibre-optic cable running to each street cabinet (node), then copper Coaxial cable into each house.

With coaxial cable, used for carrying TV channels as well as broadband Internet, the accessible range of frequencies, up to 1 gigahertz, is 1,000 times higher than ADSL phone cable, but the Internet channel bandwidth for uploading and downloading data is then shared between about 200 houses per node.

In 2000, Telstra (followed by Optus and other service providers) introduced ADSL modems providing broadband (high-frequency) signals over copper (telephone) wire.

In using ADSL in Australia, with filters, the telephone line was divided into three frequency or "information" bands: 0-4 kHz carried the voice, 26-138 kHz carried digital upload data, and 138-1100 kHz carried the digital download data. Without repeaters, though, the phone company was unable to transmit ADSL further than 5 kilometres.
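That band plan can be expressed as a small lookup function (a sketch only; the boundaries are taken straight from the paragraph above, and frequencies falling between bands are treated as guard gaps):

```python
def adsl_band(freq_khz: float) -> str:
    """Classify a frequency (in kHz) into the ADSL band plan described above."""
    if 0 <= freq_khz <= 4:
        return "voice"            # ordinary telephone call
    if 26 <= freq_khz <= 138:
        return "upstream"         # digital upload data
    if 138 < freq_khz <= 1100:
        return "downstream"       # digital download data
    return "guard/unused"         # separation between the bands

print(adsl_band(1))      # voice
print(adsl_band(100))    # upstream
print(adsl_band(500))    # downstream
```

The filters mentioned in the text are what keep these bands apart on the one copper pair, so a phone call and a download can share the line.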

With the NBN and VDSL2 (very high speed DSL, using frequencies up to 30 MHz), speed depends on the copper distance to the fibre-optic cable: up to 100 Mbps at 500 metres, 50 Mbps at 1 km, 15 Mbps at 2 km, 8 Mbps at 3 km, and 1-4 Mbps at 4-5 km.

With landlines and wireless, Telstra, Optus and the other service providers have a pool of IP addresses and use them to allocate a single IP address to each customer's router (or smartphone), coupling it with a unique port number. For customers (worldwide) still on the early dial-up modems that utilized the voice-frequency range over telephone wire (very rare in 2024), the link is much more temporary, lasting just the length of each phone call.
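That coupling of one shared public IP address with a unique port per customer is, in essence, network address and port translation (NAPT). A minimal sketch of such a translation table follows; all addresses and port numbers here are hypothetical examples, not real carrier values.

```python
import itertools

# One public IP address shared by many customers (hypothetical values)
PUBLIC_IP = "203.0.113.7"
_next_port = itertools.count(40000)   # pool of public-side port numbers

nat_table = {}   # (private_ip, private_port) -> (public_ip, public_port)

def translate(private_ip: str, private_port: int) -> tuple:
    """Map a customer's private address/port to the shared public address
    plus a unique port, creating the mapping on first use."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = (PUBLIC_IP, next(_next_port))
    return nat_table[key]

# Two customers behind the same public IP get distinct public ports,
# and a repeat lookup reuses the existing mapping.
print(translate("192.168.1.10", 5000))   # ('203.0.113.7', 40000)
print(translate("192.168.1.11", 5000))   # ('203.0.113.7', 40001)
print(translate("192.168.1.10", 5000))   # ('203.0.113.7', 40000) again
```

The port number is what lets return traffic from the Internet be matched back to the right customer behind the shared address.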
Click here for a list of all telephone exchanges in Australia, including their locations and 3rd party DSLAM presence.

In 2010 the National Broadband Network became the "last mile" wholesale broadband network for all Australian ISPs, designed to provide fibre cable either to the node, or to the premises for 93% of Australian residents, and wireless or satellite for the final 7%. Rollout was slower than anticipated. According to a report in March 2015, a total 899,000 homes and businesses had been passed, and 389,000 had signed up for active services. However, the numbers have steadily increased.

Click here for their current rollout map. Move the red pointer to the area you're interested in, and use the scroll wheel on your mouse or the +/- icons in the bottom right hand corner to zoom in and zoom out.

Now some recent statistics, from June 2021. ACCC data shows Australia had 12.8 million broadband subscribers. Dial-up has all but disappeared, down from 1.3 million subscribers in 2008 to 90,000 in 2016, and no longer counted from 2018. Strong growth has been in mobile wireless (which employs a SIM card in a modem or laptop), which has more than tripled: the ACCC figures show mobile broadband subscriptions climbed from 1.37 million in 2008 to 4.578 million over that period.

NBN and non-NBN and Mobile Broadband Services in Operation

  1. NBN
    7.566 million are now connected to the NBN via Fibre-optic, HFC cable, Fixed wireless or Satellite.
  2. Non-NBN
    • 353,000 are on DSL.
    • 91,000 are on HFC cable.
    • 30,000 connect via a rooftop antenna to an overhead satellite.
    • 183,000 connect via Fibre to the Building FTTB, Fibre to the Curb FTTC, Fibre to the Node FTTN or Fibre to the Premises FTTP.
  3. Mobile Broadband
    4.578 million connect via a SIM card in a modem (preferably near the window of a building), or via a SIM card in a laptop or a tablet.

In 2021, the top four retail ISPs for landlines were

  1. Telstra, based in Melbourne with offices Australia-wide: 3.45 million
  2. TPG, based in Sydney, which now incorporates many regional ISPs: 1.66 million
  3. Optus, based in Sydney with Head Office in Singapore: 1.14 million
  4. Vocus, based in Sydney, which now incorporates Dodo and iPrimus: 535,000

For mobile subscribers, the June 2021 statistics showed three major carriers Australia-wide:

  1. Telstra
  2. Optus
  3. Vodafone

In 2019, according to a report in the Australian, Telstra had a 46.5% share of the prepaid mobile subscriber market, Optus had 24.8% and Vodafone had 17.4%. MVNOs (Mobile virtual network operators), who lease wireless telephone and data services from Telstra, Optus and Vodafone for resale, had the remaining 11.3% share.

In the postpaid mobile subscriber market, Telstra's share was 47.4%, Optus 32.5%, and Vodafone 20.2%.

Go top

HTML - Hyper Text Markup Language:

When pages have a .html or .htm extension, it means they are simple text files (that can be created in Notepad or Wordpad and then saved with a .htm extension). Hypertext comes from the Greek preposition hyper, meaning over, above, beyond. It is text which does not form a single sequence and which may be read in various orders; especially text and graphics ... which are interconnected in such a way that a reader of the material (as displayed at a computer terminal, etc.) can discontinue reading one document at certain points in order to consult other related matter.

You specify markup commands in HTML by enclosing them within < and > characters, followed by the text they affect.
E.g. <a href="aboutus.htm" target="_blank">Load SWCS Page</a>

Other Examples:
 
<img src="steveandyve2.jpg" align="left"> will load the jpg file (in this example it is stored in the same folder as the web page) and align it on the left so that the text that follows will flow around it (on the right). If the align attribute is omitted, the text will start underneath it (instead).

Note, only a few thousand characters are generally involved in each transfer packet of data. If many packets are necessary to transfer all the information, the program on the sender's machine needs to ensure that each packet's arrival is successfully acknowledged. This is an important point: in packet switching, the sender, not the network, is responsible for each transfer. After an initial connection is established, packets are simply resent if that acknowledgement is not received.
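That sender-side responsibility can be sketched as a simple stop-and-wait loop (a simulation only: the "network" below is a stand-in function that reports whether an acknowledgement came back, not a real socket).

```python
def send_with_retransmit(packet, network_send, max_attempts=5):
    """Send one packet, resending until an acknowledgement comes back,
    as the paragraph above describes. `network_send` returns True when
    the packet was acknowledged and False when the ACK never arrived."""
    for attempt in range(1, max_attempts + 1):
        if network_send(packet):
            return attempt            # number of attempts that were needed
    raise TimeoutError("no acknowledgement after %d attempts" % max_attempts)

# A toy network that loses the first two transmissions, then delivers
drops = iter([False, False, True])
attempts = send_with_retransmit(b"hello", lambda pkt: next(drops))
print(attempts)   # 3 - the packet was simply resent until acknowledged
```

Real TCP stacks do this with timers and sliding windows rather than one packet at a time, but the principle is the same: the sender keeps a copy until the acknowledgement arrives.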

Most of these examples can be seen on this page that you are viewing. To see the text file that is the source of this page, right click on the mouse, then click View Source.

Go top

Other Top Languages

See as a new page 10 Top Programming Languages 2014

10 Top Programming Languages 2014

Background information to this article came from here.

For each language: its name, year, the language it was based on, and who wrote it.

1. Java (1995; based on C and C++). Written by Sun Microsystems as a graphical language to run on any operating system (Windows, Mac, Unix) inside a Java "virtual machine". It is now one of the most in-demand programming languages, playing a major role within the Android operating system on smartphones. Sun was started by a team of programmers from Stanford University in California in 1982, building Sun workstations that ran on the Unix operating system.
2. C (1972; based on B). Written by AT&T Bell Inc. as a high-level structured language with which to write an operating system: Unix, for Digital Equipment Corporation (DEC)'s PDP-11 minicomputer.
3. C++ (1983; based on C). Written by AT&T Bell Inc. to provide C with "classes" or graphical "object" extensions. Used in writing Adobe graphical software and the Netscape web browser.
4. C# (C sharp) (2000; based on C and C++). Written by Microsoft to run on Windows operating systems.
5. Objective-C (1988; based on C). Licensed by Steve Jobs to run his NeXT graphical workstations. Currently runs the OSX operating system on Apple iMacs and iOS on Apple iPads and iPhones.
6. PHP (1997; based on C and C++). Written by university students as open source software running on web servers. Major community release by two programmers, Andi Gutmans and Zeev Suraski, in Israel in 2000. Used in Wordpress and Facebook.
7. Python (1991; based on C and C++). Written by university and research students on web servers as open source software. Click here for sample instructions. First major community release in 2000. Used by Google, Yahoo, NASA.
8. Ruby (1995; based on C and C++). Written by Japanese university students as open source software for websites and mobile apps.
9. JavaScript (1995; based on C). Written by Brendan Eich at Netscape (the forerunner to Mozilla's Firefox browser) as open source software at the client end. Used in Adobe Acrobat.
10. SQL (1974). Initially designed by IBM as a structured query language, a special-purpose language for managing data in IBM's relational database management systems. It is most commonly used for its "Query" function, which searches informational databases. SQL was standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) in the 1980s.

** End of List

 

Click here for a Udemy tutorial in Javascript, as well as some of these other languages.

 

References:
1. Gilster, Paul (1995). The New Internet Navigator (3rd ed.). John Wiley & Sons, Inc.
2. Clarke, Roger. A Brief History of the Internet in Australia (2001, 2004).
3. Goralski, Walter (2002). Juniper and Cisco Routing. John Wiley. (Excerpt: The Internet and the Router.)
4. History of Computing (with photo links) and the Internet (2007).
5. History of the Internet, Wikipedia (2010).

** End of Report

Go top