There has been a rapid expansion of services over the Internet. Real-time services like stock trading, for example, are highly quality- and time-sensitive. When I check Indian stock market prices early in the morning in Europe, the trading session is already well underway in India. The first inkling of a high-volume day (a rapid rise or fall of the market) comes from the sluggishness of my online broker's web server! At the end of the day, Internet technology is based on statistical multiplexing of data packets and service requests. When the number of users is high, quality degrades - some of it in the connection to the server, some of it in the server's own processing!
So if it is best-effort, how come everyone is shifting all sorts of critical applications to the Internet? Worse, many of these services are becoming more resource-hungry and QoS-demanding - think of the canonical "Internet Medicine" application, where your high-resolution X-ray is examined by an expert halfway across the world, marked up with comments, and sent back. Or those VoIP applications that keep jacking up the voice quality because the bandwidth is available (I saw a VoIP promo that claimed CD-quality audio). Or, say, I am driving down the Interstate, using my 3G handset to pull up a Google Map of the area. Oops, too late - by the time the map loaded, I missed the exit.
Are we betting too much on best-effort networks???
Yes and no - it depends.
One of the nicest things about the Internet nowadays is that you can buy the QoS you offer with your service. This means, for example, paying Akamai a fortune and then some for a great CDN that can limit the vagaries of best-effort delivery. Another great thing about the Internet is that it is robust - a key design criterion for its precursor, the ARPANET, was that it automatically re-route traffic around problem areas (problem-area scenarios included nuked cities in those Cold War days). Moreover, folks who design good Web services have a healthy respect for the underlying best-effort clause of the Internet... e.g., Blogger saves this post every few seconds as I type it directly into their website.
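That periodic-save pattern is easy to get right: persist the draft on a timer, but only when it has actually changed since the last save, so a flaky connection costs you at most a few seconds of typing. Here is a minimal Python sketch of the idea (hypothetical code, not what Blogger actually runs; `save_fn` stands in for whatever call ships the draft to the server):

```python
import hashlib


class DraftAutosaver:
    """Call tick() every few seconds; it saves the draft only when it changed."""

    def __init__(self, save_fn):
        self.save_fn = save_fn   # e.g. an HTTP POST back to the server
        self._last_hash = None   # fingerprint of the last saved version

    def tick(self, draft_text):
        """Invoke from a timer. Returns True if a save was performed."""
        h = hashlib.sha256(draft_text.encode("utf-8")).hexdigest()
        if h == self._last_hash:
            return False         # nothing new to send over the network
        self.save_fn(draft_text)
        self._last_hash = h      # only remember it once the save succeeded
        return True
```

If the network hiccups and `save_fn` raises, the fingerprint is not updated, so the next tick simply retries - a design that accepts best-effort delivery instead of fighting it.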
But sometimes things don't go as well, and applications do hit the best-effort wall. In my opinion, this is mostly due to bad design rather than insurmountable limitations of the Internet platform. For example, I recently tested a video VoIP SIP telephony system where the designer sent the audio and video streams over RTP separately and without synchronization, hoping that packet re-ordering, routing, and buffering would "even out". Moreover, they didn't prioritize voice packets over video packets - a bad idea, given how sensitive our ears are to delay and jitter. On the other hand, Skype and Microsoft Live chat have implemented these services beautifully on the same best-effort Internet.
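Prioritizing voice over video is not exotic: one standard approach is to mark the packets with DiffServ code points so that routers which honor DSCP give the delay-sensitive audio stream preferential treatment. A minimal Python sketch (assuming a Linux-style socket API; the function name and constants here are my own, not from the system I tested):

```python
import socket

# Conventional DiffServ code points: EF (46) for voice, AF41 (34) for
# interactive video. The IP TOS byte carries the DSCP in its top 6 bits.
DSCP_EF_VOICE = 46 << 2    # 0xB8
DSCP_AF41_VIDEO = 34 << 2  # 0x88


def make_rtp_socket(tos_byte):
    """UDP socket whose outgoing packets carry the given TOS/DSCP marking,
    so DiffServ-aware routers can prioritize them."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    return s
```

The audio sender would use `make_rtp_socket(DSCP_EF_VOICE)` and the video sender `make_rtp_socket(DSCP_AF41_VIDEO)`; whether the marking is honored end-to-end depends on the networks in between, which is exactly the best-effort clause at work.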
To be sure, there are some applications which cannot be left to the mercy of best-effort delivery - e.g., the 112 emergency service (in Germany; 911 in the US).
So let's just say:
know your underlying network when you design services over it, and promise your customers only what can be delivered over a best-effort network, not a circuit-switched one.