And so it begins: the battle between the H.264 and WebM video codecs.
Google's On2 acquisition and the subsequent open-sourcing of the VP8 video codec have created a formidable competitor for H.264. Formidable not because WebM is technically superior to H.264, but because there is now a free alternative to the proprietary, licensed H.264. WebM is free, underwritten by Google, and a proven web-video delivery veteran; after all, Adobe Flash has used On2's codecs for web video delivery over the years.
There are several things going for H.264. First, it is entrenched in several video delivery formats and standards. For example, Blu-ray uses H.264 to encode video, and millions of Blu-ray players would become obsolete if WebM were used instead of H.264. My two cents: that won't really happen; instead, newer players will add the ability to decode WebM video as well. Even as I write this, several hardware manufacturers are incorporating WebM video decoders into their ASIC hardware. But I am not assuming that things like the Blu-ray standard will be changed; on the contrary, other emerging media delivery and storage standards have already been frozen with H.264 as the codec of choice. Standards take years to change or deploy, and it is very unlikely that they will suddenly adopt WebM instead of H.264.
In the mid-term, WebM will defeat H.264 wherever there is an (easily replaceable) software decoder and soft media. By soft media I mean video that is not burnt onto read-only media like Blu-ray discs but instead exists, say, as a web-downloadable video on a server's hard disk. The economic burden of paying the H.264 licensing body per video download and per decoder shipped, compared to the free (as in air) WebM alternative, will edge out the former. I suspect web-video delivery platforms like YouTube will lead the charge because (1) the number of videos being downloaded is huge, and (2) their average revenue per video is minuscule, so each WebM download in place of an H.264 download saves a few cents in licensing fees.
A wildcard for the time-frame question will be the pace of innovation in H.264 versus WebM. If open-sourcing WebM has the desired effect of creating a better, more innovative codec, then WebM could gain on H.264 faster. But I am sure the H.264 camp won't be sitting on its hands all this while! Video codecs use advanced algorithms, and developing them requires big R&D investments. Will the backers of WebM bring that kind of investment to the table to improve WebM when there is no direct revenue coming back to them?
Another thing going for WebM is the push toward virtualization in consumer electronics (away from the conventional ASIC approach) in the coming years. This means that future hardware (such as future Blu-ray players) may be capable of running multiple upgradable decoders rather than being tied to a specific ASIC implementing a specific decoding algorithm for a specific codec. That may just break the hardware dominance of H.264 over WebM. As a consumer, I would prefer to hedge my bets and buy a virtualization-capable decoder rather than be tied to one video codec via an ASIC decoder.
Tuesday, January 25, 2011
Tuesday, November 9, 2010
Parallelizing & Multiprocessing Commands Using Python
My computer has multiple processor cores, which means I could speed up scripts by running some of their tasks in parallel. I have written a simple Python script that uses the multiprocessing library to take a list of jobs (each a Unix command string) and execute them on a specified number of independent processes. These processes are created only once and act as a pool of "workers" that take up a job, submit the result of the computation, and then take up another job (if one is available in the job queue). The script ends when there are no more jobs in the job queue.
This approach is useful when (1) you have a multi-processor/multicore CPU, (2) your tasks are CPU-intensive, and (3) you are reasonably sure that the jobs are not internally parallelized to take advantage of multiple CPUs. In my case, I had two directories full of numerically named image (.ppm) files whose PSNRs had to be compared using the pnmpsnr utility. Computing PSNR is a computationally intensive task, and running the comparisons serially (in a single process) was significantly slower than adopting a multiprocess approach.
The code below should get you started on parallelizing your computationally intensive script. You can download the script from here.
#! /usr/bin/env python
# Sachin Agarwal, Google, Twitter: sachinkagarwal, Web: http://sites.google.com/site/sachinkagarwal/
# November 2010
# Using Python to execute a bunch of job strings on multiple processors and print out
# the results of the jobs in the order they were listed in the job list (e.g. serially).
# Partly adapted from http://jeetworks.org/node/81

# These are needed by the multiprocessing scheduler
from multiprocessing import Queue
import multiprocessing
import commands
import sys

# These are specific to my jobs requirement
import os
import re


def RunCommand(fullCmd):
    try:
        return commands.getoutput(fullCmd)
    except:
        return "Error executing command %s" % (fullCmd)


class Worker(multiprocessing.Process):

    def __init__(self, work_queue, result_queue):
        # base class initialization
        multiprocessing.Process.__init__(self)
        self.work_queue = work_queue
        self.result_queue = result_queue
        self.kill_received = False

    def run(self):
        # Keep pulling jobs off the shared queue until it is empty
        while (not self.kill_received) and (self.work_queue.empty() == False):
            try:
                job = self.work_queue.get_nowait()
            except:
                break
            (jobid, runCmd) = job
            rtnVal = (jobid, RunCommand(runCmd))
            self.result_queue.put(rtnVal)


def execute(jobs, num_processes=2):
    # load up the work queue
    work_queue = multiprocessing.Queue()
    for job in jobs:
        work_queue.put(job)

    # create a queue to pass to workers to store the results
    result_queue = multiprocessing.Queue()

    # spawn workers
    worker = []
    for i in range(num_processes):
        worker.append(Worker(work_queue, result_queue))
        worker[i].start()

    # collect the results from the queue
    results = []
    while len(results) < len(jobs):
        # Beware - if a job hangs, then the whole program will hang
        result = result_queue.get()
        results.append(result)

    results.sort()  # the tuples in results are sorted by their first element - the jobid
    return results


# MAIN
if __name__ == "__main__":
    import time
    starttime = time.time()  # code to measure time

    jobs = []   # list of job strings to execute
    jobid = 0   # ordering of results in the results list returned

    # Code to generate my job strings. Generate your own, or load the job list
    # into jobs[] from a text file.
    lagFactor = 5
    ppmDir1 = sys.argv[1]
    ppmDir2 = sys.argv[2]
    decNumRe = re.compile(u"((\d+)\.(\d+))")
    ctr = 0
    for root, dirs, files in os.walk(ppmDir1):
        numFiles = len(files)
        files.sort()
        fNameLen = len(files[0].split('.')[0])
        for fName in files:
            for offset in range(0, lagFactor):
                fNumber = int(fName.split('.')[0])
                targetFile = '%0*d' % (fNameLen, max(fNumber - offset, 1))
                targetPath = ppmDir2 + '/' + targetFile + '.ppm'
                origPath = ppmDir1 + '/' + fName
                fullCmd = "pnmpsnr " + origPath + ' ' + targetPath  # Linux command to execute
                jobs.append((jobid, fullCmd))  # append to the job list
                jobid = jobid + 1

    # run
    numProcesses = 2
    if len(sys.argv) == 4:  # optional third command-line argument: number of worker processes
        numProcesses = int(sys.argv[3])
    results = execute(jobs, numProcesses)  # job list and number of worker processes

    # Code to print out the results as needed by me. Change this to suit your own needs.
    ctr = 0
    for r in results:
        (jobid, cmdop) = r
        if jobid % lagFactor == 0:
            print
            print jobid / lagFactor,
            print '\t',
        try:
            print cmdop.split()[10],
        except:
            print "Err",
        ctr = ctr + 1
    print
    print "Time taken = %f" % (time.time() - starttime)  # code to measure time
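As a usage illustration (not from the original post): if the script above is saved as, say, parallel_jobs.py - the file name and the command strings below are hypothetical - the execute() worker pool can be reused for any list of shell commands:

# Hypothetical example: reuse execute() from the script above for arbitrary commands.
from parallel_jobs import execute  # assumes the script was saved as parallel_jobs.py

if __name__ == "__main__":
    jobs = [(0, "echo hello"),
            (1, "uname -a"),
            (2, "sleep 1; echo done")]
    for jobid, output in execute(jobs, num_processes=4):
        print jobid, ':', output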
Sunday, October 31, 2010
My Grocery Store is a Mobile Operator
My grocery store sells generic versions of bottled water, soap, breakfast cereal, butter, milk, and mobile voice/Internet service. Now that's quite remarkable, considering that Rewe, the German grocery store chain I am alluding to, doesn't really have a history in the German telecommunications market. What they do have are 15,445 stores across Europe that can stock prepaid SIM cards branded "ja! Mobil" (the name comes from their generic in-store brand). Their physical presence and the mind space ja! occupies drive their business model: if shoppers can drink ja!-branded generic cola, they might as well use ja!-branded mobile voice/Internet service.
The innovation here is the marketing possibility offered by Rewe grocery stores, not any technical innovation. Rewe has partnered with T-Mobile in Germany to implement its ja!-branded "mobile operator": T-Mobile provides a white-label technical platform, and Rewe simply brands it "ja! mobile". T-Mobile wins because it gets to sell its service at a discount to lower-paying market segments without putting off premium T-Mobile customers; Rewe makes a neat profit by leveraging the ja! brand; and the customer wins by getting a discounted service from Germany's best mobile operator, minus the T-Mobile brand.
I was looking at ja! mobile pricing. There are various flavors of pre-paid and flat-rate plans, although the focus seems to be on pre-paid plans that require no long-term contract and can be dispensed at Rewe's check-out counters. Depending on a customer's typical usage, s/he can pick a discounted subset of the services offered: SMS, MMS, in-network calling, fixed-line calls, data, etc. Interestingly, customer support is not free. It's a little like the contemporary airline business, where everything from customer service to carry-on baggage can become a chargeable add-on rather than part of the product. Customers need to be mindful of what their money is buying before assuming that things like customer service or technical support are part of the product.
Brick-and-mortar stores also sell iTunes gift cards and Facebook credit nowadays. Dell and Amazon partner with Best Buy to sell computers and Kindle e-books respectively. There are interesting business opportunities for anyone who can funnel real customers and subscribers (read: money) into the virtual/communications world. Very real profits await those brick-and-mortar outfits who can build bridges between technology companies and customers, even if they are just plain-Jane grocery stores!
Wednesday, October 13, 2010
Fancy Vertical Handover: A victim of REST?
There has been a ton of research, standardization work, and development around vertical handover - the ability to change the underlying network access without disturbing the overlying communication protocol (TCP or application) sessions. The simplest example is a user moving from a Wifi zone (e.g., the office) to a 3G zone (outdoors). A seamless handover hides the underlying rewiring of the access and lets the user continue using the device as if nothing had changed. Vertical handovers have quickly graduated from laboratory quirk to mainstream occurrence, with Wifi-enabled smart-phones switching between access technologies multiple times a day.
But the vertical handover on my smart-phone doesn't really preserve the underlying TCP session, and yet it works pretty well. Why? Because most of the apps on my phone use RESTful, request-response protocols like HTTP, XML-RPC, or SOAP. That means they are, in theory, stateless. In fact, a TCP connection is often created and torn down for every message exchange between the service server and the client. Sometimes TCP connections linger to improve efficiency (carrying multiple request-response messages between client and server), but a discontinuity in the TCP connection is not catastrophic. I simply see my smart-phone negotiate a new connection over the new access (3G or Wifi), and then my app keeps working as if nothing has changed.
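A minimal sketch of why this works (Python 2 syntax; the URL, timeout, and retry count are made-up examples, not anything a particular app actually uses): each request is self-contained, so when the old connection dies during a handover the client simply issues the request again over whatever access is now available.

import time
import urllib2

def fetch_with_retry(url, attempts=3):
    for attempt in range(attempts):
        try:
            # each call opens a fresh TCP connection - no state survives from the previous one
            return urllib2.urlopen(url, timeout=10).read()
        except Exception:
            time.sleep(1)  # e.g. Wifi just dropped and 3G is coming up; simply try again
    raise IOError("could not fetch %s after %d attempts" % (url, attempts))

print len(fetch_with_retry("http://www.example.com/"))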
All that talk about preserving TCP connections across access technologies was much ado about nothing!
Wednesday, October 6, 2010
Mobile Video Calling: Can Tango Tango?
Tango is a newly launched mobile-to-mobile video calling application for iPhone and Android devices. Tango enables smart-phone owners to see each other, in addition to speaking with each other, during a Voice over IP (VoIP) conversation. Many smart-phones come with front-facing cameras, ostensibly for video calling, and Tango enables people to use these cameras during a VoIP call. Think of mobile video calling when you want to see your expat pet doing silly tricks on video (or for beach and boardroom voyeurism).
But, as Walter Mossberg's review of Tango in the WSJ reports, the quality of a Tango video call leaves a lot to be desired. I came across a video on Gizmodo's website showing Tango in action, and the verdict is that Tango's performance is way below expectation. In fact, Tango's video frame rate appeared to be roughly 1 frame per second in the Gizmodo video (hardly the "high quality video mobile calling service" the company's press release claims).
Make no mistake, achieving even 1-frame-per-second video plus voice is no small feat. Tango's engineers have packed a real-time video+voice encoder/decoder into a smart-phone and have managed to transmit/receive two parallel audio/video streams over Wifi (they also claim high-quality video calls over 3G, but let's not give Tango all the benefit of the doubt :-) ). On top of that, achieving this for both the Android and iPhone platforms, and for dozens of smart-phone models, is admirable.
Frankly, I am not surprised by Tango's dismal video frame rate - resource bottlenecks such as smart-phone hardware, the software/OS, and network bandwidth and latency have to be overcome before an acceptable double-digit frame rate can be achieved. But what surprised me was the poor voice quality: the Tango call sounded a lot like those cheap international calling cards I used for international calls from the US many years ago. Terrible sound quality. I wonder why Tango's engineers didn't trade away more video quality (or even cut out video entirely when resources were scarce) and spend those resources on improving voice. Voice over IP on mobile phones is a solved problem - Skype and the umpteen mobile SIP VoIP clients got audio to work well even on older smart-phones. Why couldn't Tango?
Tango is an over-the-top application, meaning that it runs over the best-effort (ordinary) Internet. I mention this because the alternative, 3G telecom-operator-supported video calling, uses a dedicated network channel to assure call quality. A Tango call, by contrast, is carried over the same pipes as plain web traffic, making the video/voice quality dependent on whatever else is being transmitted during the call. Telecom-supported 3G video calling is also much more energy (battery) efficient than Tango. Why? Because in order to remain signed in to Tango and receive calls, the smart-phone has to periodically send "I-am-alive" messages to the Tango server. This means that a TCP or UDP socket is always active (or repeatedly created and torn down), effectively disabling the smart-phone's built-in power-saving sleep function. Of course, telecom-supported 3G video calling costs money, but it is technically superior to Tango or any other over-the-top mobile video calling system.
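To make the battery argument concrete, here is a rough sketch of the kind of presence loop an over-the-top client has to run (the server name, port, and interval are invented for illustration; I make no claim about Tango's actual protocol). The point is simply that the periodic socket activity keeps waking the radio up.

import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # tell the (hypothetical) presence server that this client is still reachable
    sock.sendto("I-am-alive", ("presence.example.com", 5222))
    time.sleep(30)  # every such wake-up defeats the phone's deep-sleep mode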
But this is not about telecom vs. Internet applications. This is about the use-case. Video calling was touted as one of the big use-cases for 3G telecom networks (and 4G too?). 3G standards support video calling, and so there is hardware acceleration, network resource reservation, optimized audio/video codecs, and cross-phone/OS support for video calling on every modern smart-phone. But apart from the cost of making 3G video calls, is there something else that relegated video calling to its sad, never-used status on phones? Yes, there is. Video calling has simply not been accepted as a viable form of mass communication in our society, and it remains, to date, a quirky add-on. When was the last time you placed a video call?
When Internet telephony (VoIP) arrived, it quickly replaced circuit-switched calling. With mobile video calling, even if Tango eventually fixes its technical/engineering limitations, there is nothing to replace! Sadly, the mobile video calling use-case was stillborn from the start.
Wednesday, September 1, 2010
Android Device Chatter with the Google Mother Ship
Parts of this post were moved into a formal research study (click link below).
https://sites.google.com/site/sachinkagarwal/home/publications-talks/gis-2011-infocom-2011
Saturday, July 3, 2010
Microsoft Kin: RIP == Social Networks:RIP (?)

Microsoft Kin 04/2010-06/2010
Microsoft is phasing out its social-network/cloud-storage-heavy Kin smart-phone just two months after launch. This embarrassing report from CNN claims that the Microsoft+Verizon Kin sold fewer than 10,000 units in those two months. RIP, Kin.
I never got around to using the Kin, but apparently the market didn't see the justification for the expensive data plan (>= $29 per month) that Verizon tagged onto the Kin. The target market was supposed to be teens looking to stay connected via social networks, but they did not bite at the insanely high data-plan tariff. Social networking, it seems, is not worth that much to them. How much is it worth anyway?
Let's not belittle the effort Microsoft put into this device - as a product the Kin was fully functional and seemed to do the things you would expect from this sort of device: Internet social networking, cloud storage and syncing of users' data, a built-in Zune player, a sleek design, etc. And at under $100 (with a data plan) it had a low entry barrier too. It seems like all the pieces were there, but the Kin machine never got off the ground.
I don't know if the lack of a credible app store spelt the end for the Kin. What I do know is that social networking apps completely failed to drive sales. Next time someone uses social networking as the use-case for a device or service that is supposed to make money, say - Kin!
Friday, April 30, 2010
Untitled Poem
Among many other things, my father taught me how to read and write English. Everything I've ever written starts with what he taught me. Now as he lies dying of cancer, I wrote this for him. Say a prayer for him.
All of my thoughts
Like river drops
Together making up me
Like a river that flows
Until it throws
Fresh into the salty sea
All rivers meet that end
No matter what they pretend
Or how many bends they make
And so it will be
With every drop inside me
No matter what path I take
So you may ask
The point of the task
To meander toward the salty end
But don't we all know
Drops become vapor and snow
From which new rivers descend
Friday, January 8, 2010
Plug and play internal HDDs, literally!
I just saw this contraption on a colleague's desk. As you can see, a 3.5" HDD is literally plugged into the dock as if it were some super-sized memory card. Well, that's exactly what it is. The dock also has ports for USB keys, SD cards, and probably a few other formats.
It is interesting to see the form-factor difference between the SD slot and the 3.5" HDD slot. Flash memory capacity is quickly catching up with HDD capacity (the latter's lead has shrunk to only about 10x). HDDs are an endangered species!
Monday, December 14, 2009
India's Broadband Future
Ajit Balakrishnan, CEO of Rediff, gave a keynote at IIT Delhi earlier today. His talk suggested that Indian telecommunication operators and the government should not concentrate on delivering niche multi-Mbps broadband services, but should instead concentrate on delivering reasonably good service (hundreds of kbps) to a larger population. Ajit flashed a slide showing that 86% of 3G users use their smartphones to access email, a relatively low-bandwidth application, while only 6% use 3G to download and watch videos. Ajit's point was to recognize the importance of broadband in India as an "always on" connection rather than a high-bandwidth connection.
There is an analogue in India's history to this choice that Indian telecommunication operators and the government have to make. The government of India created top-notch higher education institutes - IITs, RECs, and IIMs - in the 1950s (after Indian independence). It spends tens of thousands of dollars per year on each student enrolled in these institutes, arguably at the expense of thousands of primary schools in backward areas of the country. The thinking at the time these institutes were created was that this crème de la crème would catalyze the growth of industry and technology in the country. Similarly, it may be theorized that by providing high-speed Internet connectivity to early adopters, they will drive applications and create demand in the general population to upgrade their connectivity.
Countries like China and South Korea concentrated on their primary education institutions rather than creating world-class higher education institutes. It is safe to say that both these countries are significantly ahead of India, measured by any human development index. But does this analogy suggest that India should concentrate on the democratization of (relatively low-speed) broadband rather than creating small pockets of high-speed broadband?
I think market forces will decide the balance between broadband services in India. The ARPU on low-speed broadband may not exceed $5, but this will be compensated for by large volumes. I also believe that low-speed broadband will be served via wireless in India. With mobile phones outpacing fixed-line connections by a 12:1 ratio in the country, there is limited scope for technologies like DSL to be widely deployed. Fortunately, 3G, LTE, and WiMAX are nicely poised to make up for the lack of fixed-line infrastructure in India. As for niche multi-Mbps broadband, I expect FTTx to be deployed in highly urbanized areas where Western ARPUs (tens of dollars) are possible.
Sunday, December 6, 2009
Thermal imaging cameras at Bangalore airport!
Arriving on an international flight at Bangalore International Airport, I was surprised to see two thermal-imaging cameras. Each camera was looking at arriving passengers and visually marking those with an elevated body temperature, in order to pick out people who might be suffering from swine flu. These cameras are sensitive to infrared (IR) radiation in the body-temperature range; they work by mapping temperature readings onto a colormap that visually depicts body temperature. The video images produced by the cameras looked eerily similar to the IR images the alien sees in the Predator movies!
Compared to conventional body-temperature measurements with thermometers, this real-time technique makes it possible for a medical officer to screen many more people. I wonder why these systems are not installed at more airports around the world.
Tuesday, November 24, 2009
Multiprocessing vs. Network I/O
I've been reading up on Python's (v2.6 and above) multiprocessing module. While multiprocessing has been around for a long time, simplified libraries like this multiprocessing module may spur even casual programmers to consider parallelism in their programs. My feeling is that if issues like inter-process communication, synchronization among processes, and deadlock avoidance are dealt with painlessly, then many non-professional programmers will feel confident enough to load up CPUs with multi-process programs to speed things up. Moreover, given that multiple CPU cores are becoming the norm rather than the exception on commodity hardware, there is a real incentive to eventually switch to multiprocessing.
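As a minimal sketch of how little code the module demands (the worker function and its inputs are arbitrary examples): Pool.map farms a CPU-bound function out to a handful of worker processes.

from multiprocessing import Pool

def busy(n):
    # a stand-in for any CPU-intensive task
    total = 0
    for i in xrange(n):
        total += i * i
    return total

if __name__ == "__main__":
    pool = Pool(processes=4)              # four worker processes
    print pool.map(busy, [10 ** 6] * 8)   # eight jobs, executed in parallel
    pool.close()
    pool.join()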
What will this switch in program design mean for network data I/O? Will average users end up opening and using more network connections? Web browser tabs are a good example of multiple threads or processes: when modern browsers fire up, they often connect to several websites saved from the previous session. I conjecture that multiple tabs fill up the network's queue faster than was possible with single-core CPUs. Although network I/O is much slower than CPU bandwidth (the rate at which CPUs can process, say, HTML), there is a point beyond which a single-core CPU becomes the bottleneck (e.g., firing up a dozen browser tabs). Multiple cores remove this limitation and drive network I/O to its physical (or traffic-shaped) limits. I plan to measure this interplay between multiprocessing and network I/O. Watch this space!
Thursday, November 12, 2009
Free airport Wifi as a marketing tool
Google is offering free Wifi in 47 US airports during the holiday season. The idea is to flash a few web pages marketing Google's software and services to users in return for the free Wifi service. According to this CNN article, Google is not the only company to do so - apparently Lexus and eBay have also implemented similar ideas, or intend to do so in the near future.
Free service is probably going to bring a torrent of airport Wifi users online - probably many more than the current number of (paying) users. Given that Wifi channel space is a shared resource, it will be interesting to see how airport Wifi scales with the uptick in usage. I just hope the service doesn't deteriorate so much that the sponsoring companies' well-meaning message is lost on disgruntled users. And I do hope the engineers running these Wifi access points have done the network-provisioning math beforehand.
Now the economics. The sponsoring company (Google) is probably going to pay a lot less than the retail price of airport Wifi connectivity. Why? Because the sheer volume of users will be much higher than when users have to pay individually. I think that the payment will include a fixed component depending on the number of access points participating in the service, and a variable component depending on the number of users accessing the service.
Let's assume that an average airport has about 20 accessible Wifi access points, and that each access point can support (with any reasonable quality of service) about 10 concurrent users. If the airport is busy for, say, 12 hours a day, and we further assume an average utilization of 50% of the access points' total capacity, then we have (per day):
10 * 20 * 12 * 0.5 = 1200 hours of usage per day per airport.
I would assume that the sponsoring company (Google) would pay about $5000 per day as a fixed cost, plus about $1 per hour of usage. This brings the daily total cost per airport for the sponsoring company to $5000 + $1200 = $6200.
So for 47 airports and 50 holiday season days, we are looking at a bill of about
6200 * 47 * 50 = $14.57m
That's not a bad deal for a big company like Google, considering the number of eyeballs they will capture. Let's say a user uses the free Wifi for 30 minutes on average. Then we are looking at about 12 * 10 * 20 / (1/2) = 4800 user sessions per airport per day at full capacity (roughly half that at the 50% utilization assumed above). That works out to over 11m sessions across the 47 airports over the 50-day holiday period. Even if we assume that most people make round trips and therefore use the Wifi connection twice, Google can still reach about 5.5 million unique users! Not too bad for the $15 million spent.
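For the curious, here is a small back-of-envelope script that reproduces the numbers above; every input is just the assumption stated in this post, not a real figure.

aps_per_airport = 20      # accessible Wifi access points per airport
users_per_ap = 10         # concurrent users per access point
busy_hours = 12           # busy hours per day
utilization = 0.5         # average utilization of total capacity
airports = 47
days = 50

usage_hours = aps_per_airport * users_per_ap * busy_hours * utilization   # 1200 hours/day/airport
daily_cost = 5000 + usage_hours * 1.0                                     # fixed fee + $1 per hour
total_cost = daily_cost * airports * days                                 # ~$14.57m

sessions_per_day = aps_per_airport * users_per_ap * busy_hours / 0.5      # 30-minute sessions, full capacity
total_sessions = sessions_per_day * airports * days                       # ~11.3m
print "cost: $%.2fm, sessions: %.1fm, unique users: ~%.1fm" % (
    total_cost / 1e6, total_sessions / 1e6, total_sessions / 2e6)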
And I haven't even started counting the goodwill ROI bonus for playing Santa during holiday season! Nifty nifty marketing.
Friday, November 6, 2009
Call for action! Powering down PCs

I've been playing with the idea of building a PC application that measures a computer's idle time. The idea is to gently convince users to suspend or power down their PCs when they are not being utilised. I strongly believe that if PCs were optimally powered down, many users could cut their energy consumption (and hence also save on energy bills). Powering down battery-powered laptops would also increase the longevity of their batteries and thereby decrease toxic battery waste in landfills.
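As a rough sketch of what the measurement side could look like (Windows only, via ctypes and the Win32 GetLastInputInfo call; other platforms would need their own probe, and the 10-minute threshold is an arbitrary example):

import ctypes
import time

class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]

def idle_seconds():
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(info)
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    # GetTickCount() and dwTime are both in milliseconds since boot
    return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

while True:
    if idle_seconds() > 600:
        print "PC idle for over 10 minutes - consider suspending it"
    time.sleep(60)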
As an example of where the possible savings may be, above is a pie chart showing my own PC usage over the past few working days. As you can see, there is ample scope to power down/suspend PCs when they are idle.
If you want to contribute time to this project (coding/web page/translation into other languages/spreading the word), feel free to contact me. If not, then do suspend your PC every time you are away for more than a few minutes :-).
Friday, October 30, 2009
Impact of International Domain Names
On the 40th birthday of the Internet last week, the Internet Corporation for Assigned Names and Numbers (ICANN) formally announced that there will now be domain-name support for non-Latin-character URLs. This concept, called international domain names or IDNs, will allow URLs composed from the characters of scripts used by languages such as Korean, Chinese, Hebrew, Arabic, and Hindi.
A little digging on Wikipedia about IDNs reveals that the underlying implementation is based on translating Unicode names into DNS-compatible (ASCII) names and vice versa, in order to keep the current DNS system functional. This makes the scheme backward compatible with currently deployed name-resolution infrastructure. In fact, most of the translation to and from non-Latin scripts will be done in the users' browsers.
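A tiny sketch of the translation a browser would perform, using Python's built-in "idna" codec (Python 2 syntax; the domain is only an example):

# -*- coding: utf-8 -*-
name = u"ärzte.com"
ascii_form = name.encode("idna")    # ASCII-compatible form, something like "xn--rzte-koa.com"
print ascii_form
print ascii_form.decode("idna")     # and back to the Unicode form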
But what does this mean for the fabric of DNS address space and the web?
Dilution of Latin namespace (?) Will we see some dilution in the value of address real estate? For example, will http://www.doctor.com become less valuable because folks in Germany can now instead remember the more meaningful http://ärzte.com (Ärzte is German for medical doctors)? And what of the tens of thousands of domain names registered for words from other languages written in Latin script (e.g., http://naukri.com in India; naukri is Hindi for job)?
The registration rush Initially, web content providers will scurry to buy up non-Latin names. This will matter most for content providers who do not have a global brand name, or whose brand name describes their product or service. For a content provider like doctor.com, it will make sense to buy the synonyms of "doctor" in other languages, in addition to the spelling of "doctor" in the other scripts. On the other hand, Microsoft.com will only buy up the spelling "Microsoft" in the languages/scripts becoming available through IDNs. At the very least, I foresee most businesses re-evaluating their namespace position on the web.
Security and phishing Completely unrelated characters in different scripts can look the same to the human eye. This means that users can be tricked into thinking that the address displayed in the address bar points to a legitimate page when in fact it points to a phishing page. It may be prudent for businesses to be aware of these security vulnerabilities of their URLs and perhaps register "similar looking" URLs in other languages/scripts proactively.
Impact on search engines Search engines are known to weigh domain-name strings in their ranking algorithms. This may need some rethinking. At the very least, some search engines may need to use automated translators to link up semantically similar web pages, irrespective of how the address space links different copies of the same information in different languages/scripts.