March 10th, 2016

The Intelligence Of The Internet Of Things Is Moving To The Things

The client-server model evolves, to powerful effect.

Since the dawn of computing, power has seesawed back and forth between the client and the server.

And as that evolution plays out, we see again and again that intelligence always migrates toward the edge.

In the beginning, mainframes were the size of small buildings and only accessible through timeshared terminals. All the intelligence was in the mainframe; the terminal was only a conduit. Later, the intelligence moved from the mainframe to the personal computer, with the mainframe serving as backup to the client-side device.

“It is actually a nice push-pull of centralized computing, which is actually the model that we have had in computing for a long time,” said Raj Talluri, Qualcomm senior vice president and general manager, in an interview with ARC at Mobile World Congress 2016 in Barcelona. “You know, mainframes and distributed terminals, back to mainframes and the cloud and … it is a natural evolution of how compute works.”

The client-server model hasn’t really changed all that much through the years from a fundamental standpoint, even if the actual technology on each side has. Take smartphones and mobile technology. In the beginning, smartphones were low-powered devices capable of only so much because of limited hardware. The compute power needed to perform advanced application processing was not yet present. So smartphones and tablets performed most of their high-level computations in the cloud, using wireless technologies as a conduit between the two.

In 2011, then-NASA CTO and OpenStack co-founder Chris Kemp perfectly described to me the relationship between mobile devices and the cloud:

“When you are carrying around a tablet, you are carrying a gateway to the cloud,” Kemp said.


Nine years into the mobile revolution, the intelligence has again moved to the edge. Smartphones today are as powerful as laptops, with upwards of 4 GB of RAM on 64-bit chip architectures. The most robust smartphones even have 128 GB of internal flash memory. My laptop 10 years ago could only hope to be that capable.

The Internet of Things is starting to follow this same pattern.

See also: How We Built The Next 10 Years Of Innovation

The Computing Paradigm Moves Down To The Internet Of Things

The cloud continues to grow as the backbone of the Internet. But it is no longer necessary for the heavy lifting of application processing on the smartphone.

In many ways, the smartphone is now becoming the server in the client-server relationship. Look at smartwatches. These small, low-powered devices do not have the computational capability for more than the lightest application processing. So the smartphone becomes the server, performing the computation and pushing it down to the smartwatch through wireless technology.
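The smartphone-as-server pattern described above can be sketched in a few lines. This is a toy illustration, not any real smartwatch API: the function names and sample data are invented, and a real pairing would move the data over Bluetooth LE rather than a function call.

```python
# Sketch of the phone-as-server pattern: a thin client (the watch) only
# gathers raw sensor samples, while a more capable peer (the phone) does
# the actual computation and pushes the result back to the watch display.
# All names and numbers here are illustrative assumptions.

def watch_collect_samples():
    """The watch's role: sense only. Returns raw accelerometer magnitudes."""
    return [0.1, 2.3, 0.2, 2.1, 0.1, 2.4, 0.2]

def phone_detect_steps(samples, threshold=1.0):
    """The phone's 'server' role: the heavier signal processing."""
    return sum(1 for s in samples if s > threshold)

samples = watch_collect_samples()    # thin client: collect
steps = phone_detect_steps(samples)  # phone: compute
print(steps)                         # result pushed back down to the watch
```

The point of the split is that the watch's radio and battery budget allow it to ship samples upstream, while the phone's application processor absorbs the computation.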


Smartwatches with decent application and graphics processors are already being shipped, so the intelligence is again moving down the stack.

And that is just the beginning. The Internet of Things will follow the same model, to varying degrees. The intelligence is moving out to the edge devices themselves, rather than staying in the smartphone or the cloud. The things are going to be smart.

“But as soon as you get that you want that same experience on the edge and the latency of going to the cloud and back doesn’t quite work,” said Talluri.

Talluri continued:

So it is exactly happening in the same way that phones did. Because you start by being just a connected device but soon customers want the connected device to do more and you can’t do more by always going to the cloud and back. Of course, you still want it in the cloud, but there is a natural division of labor, if you will, between the cloud and the device which actually moves as times goes on. The more properties the device gets, the less you need to do on the cloud.
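The “natural division of labor” Talluri describes can be made concrete with a toy placement rule: run a workload on the device when the device is capable and the cloud round trip would blow the latency budget, otherwise fall back to the cloud. The latency figures and function name below are illustrative assumptions, not measurements.

```python
# Toy sketch of the shifting split between device ("edge") and cloud.
# As the device gains capability, more workloads satisfy the first branch
# and the division of labor moves toward the edge.

CLOUD_ROUND_TRIP_MS = 120  # assumed latency of going to the cloud and back
EDGE_LATENCY_MS = 5        # assumed on-device processing latency

def place_workload(device_can_run: bool, latency_budget_ms: float) -> str:
    """Run on the device when it is capable and the cloud is too slow."""
    if device_can_run and (EDGE_LATENCY_MS <= latency_budget_ms < CLOUD_ROUND_TRIP_MS):
        return "edge"
    return "cloud"

print(place_workload(device_can_run=False, latency_budget_ms=50))  # cloud
print(place_workload(device_can_run=True, latency_budget_ms=50))   # edge
print(place_workload(device_can_run=True, latency_budget_ms=500))  # cloud
```

The third case captures Talluri's point in reverse: when latency doesn't matter, the cloud remains the cheaper home for the work, so both tiers persist.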

The Mobile Supernova

Mobile is everything.

That was the motto of Mobile World Congress 2016. On one hand, the slogan may have been a way for MWC to justify its continued relevance in an era where smartphone shipments have more or less peaked. On the other hand, most of the various subsectors of the Internet of Things will be built on the same supply chain that has served the smartphone industry.

The way I tend to describe it to people is the example of a supernova. A star is born, lives its life creating massive amounts of energy, then expands to massive proportions and blows up. The material ejected from that supernova is then used to create new stars, new planets and solar systems. Eventually, the cycle repeats. It is the cosmic cycle of life.


Computing goes through similar cycles. The mainframe was a star that exploded, using its technology to build personal computers. Personal computers went supernova and created the Internet. The combination of the Internet and personal computers—a binary star combination—blew up and created smartphones. Smartphones are now the big bang that spreads the material to create the Internet of Things.

Andreessen Horowitz analyst Benedict Evans states this principle succinctly in a blog post this week wondering what comes after the mobile revolution:

Today, if you are innovating in sensors or cameras or radios or pretty much any other component, you are far more likely to target ARM and mobile. And over time, these economies of scale (amongst other things) mean that mobile will supplant the PC just as the PC supplanted everything before it. But again, the first step is that the new ecosystem gets scale from a new and much larger customer base, and only afterwards can the new ecosystem start supplanting the old one.

Qualcomm’s Talluri confirms the thesis.

“It is very interesting for us because the technologies that are needed are actually very similar technologies as in mobile,” said Talluri. “You want multiple forms of connectivity, you want application processors, you need low power and an ecosystem of people that know how to use that.”

From a computing perspective, we are now in our fourth or fifth supernova cycle, dating back to when Alan Turing was building the first computers to crack enemy code in World War II.

And we are getting increasingly better at measuring the progress of computing cycles.

ARM—the chipset and IP architecture firm whose processor designs power most smartphones—is the progenitor of the mobile revolution. If you look back 10 to 12 years, almost nothing was running on ARM chip architecture, as x86 chips dominated laptops and PCs. If you had walked the show floor of the Consumer Electronics Show in Las Vegas in 2005, you’d have seen lots of x86-based computers and only a couple of ARM-based products.

The show floor of CES 2016? An educated guess would be that more than 90% of the products were ARM-based. That includes the drones, cameras, intelligence in automobiles, televisions, streaming boxes, appliances, utilities and other assorted gadgets. The low-power, high-efficiency model of ARM has given birth to an entirely new ecosystem of intelligent devices that are not reliant on a server for computational functionality.


“That can be embedded in your automotive or entertainment systems. They can be controlling the lights in the back of your car,” said James Bruce, lead mobile strategist at ARM, in an interview with ARC at MWC. “But it is also very much now, what you’re seeing is a lot of companies seeing all these great CPUs in smartphones, great GPUs. It has got connectivity around it, the image processing. How can we actually take this into new markets?”

We can quantify just how much the mobile market is expanding into the Internet of Things by using ARM shipments as a proxy. Bruce said that 14.8 billion ARM chips were shipped in 2015. That is an order of magnitude greater than the roughly 1.4 billion smartphones shipped the same year.
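A quick back-of-the-envelope check of that order-of-magnitude claim, using the 2015 figures quoted above:

```python
# 2015 figures from the article: 14.8 billion ARM chips shipped vs.
# roughly 1.4 billion smartphones.
import math

arm_chips = 14.8e9
smartphones = 1.4e9

ratio = arm_chips / smartphones
print(round(ratio, 1))              # 10.6 -- about ten ARM chips per smartphone
print(round(math.log10(ratio), 2))  # 1.02 -- almost exactly one order of magnitude
```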

Bruce said:

I think if you actually look, obviously the smartphone has become the face of personal computing device in the world. I mean there are three billion smartphones in use today. However, if you actually look at ARM shipments, if you look at our partnerships there were something like 14.8 billion chips with ARM CPUs in there. Really what you are seeing is this sort of pervasive computing everywhere.

Not every one of those 14.8 billion ARM chips is part of a sophisticated System-on-a-Chip (SoC) like a Qualcomm Snapdragon. Many of them are basic Cortex-M microcontrollers destined for simpler machines. But even the most mundane of chips gains incredible layers of intelligence as time goes on.

Where Is The Mobile Intelligence Going?

ARM is not in the game of setting markets. ARM creates architectures, IP and instruction sets that its partners—chipset manufacturers like Qualcomm, Samsung or Nvidia—then use to build out finished silicon.

“I think if you look at ARM, we don’t actually go out and try to define markets,” Bruce said. “What we do is actually define IP that will be suitable for a wide-range of markets.”

What ARM does do is look five years ahead to determine the end points its chips will need to support, then make the adjustments and tweaks necessary to support those end points. For instance, the new Cortex-A32 processor that ARM announced at Mobile World Congress is aimed directly at the embedded Internet of Things market.


“Embedded, shall we say, is a very loose term, but what it means is using our Cortex-M cores, and that can literally be anything from IoT to good old-fashioned industrial equipment,” Bruce said. “That growth is coming as people are taking the power of the ARM ecosystem—great software and great development tools and the range of SoCs that are available—and bringing that intelligence into a wide range of products.”

Qualcomm definitely is in the business of setting markets. Talluri would not break down the demographics or volume of Qualcomm’s customers, but he did note that part of Qualcomm’s job is to work with its partners to create new use cases and markets using mobile technology.

“We like to think of it not so much as [filling in] hollow spots [of IoT deployment] but rather creating new markets,” said Talluri. “Because when we are able to bring this type of technology into these spaces, those things are able to do things that they could not do before.”


The way sales and innovation cycles work, the Internet of Things technology we see deployed today is actually already a couple of years old. People buy silicon chips from the likes of Qualcomm and then work to give them connectivity and intelligent software layers. Most of what Talluri is working on today will be on the market in two to three years.

So what is coming in two to three years? Talluri says that Qualcomm is seeing an uptick in the type of connectivity its customers are buying. Instead of just Wi-Fi, manufacturers are looking at Wi-Fi, Bluetooth, LTE and more together as essential to building the next round of connected gadgets. Once connectivity is taken care of, those same manufacturers are buying application processors to run even the simplest of gadgets.

See also: Why We Are Not Waiting For The “Eureka!” Moment For The Internet Of Things

“Now we are finding that people don’t just want to buy our connectivity, they want to buy our application processors because they want smarts there,” Talluri said.

Talluri gives the example of a home security camera that is smart enough—without the cloud or a tethered laptop or smartphone—to determine when and what it records. In the past, the camera would just record and archive everything on a local server or in the cloud. Now, the camera can selectively record just the movement of a human being (instead of, say, a dog or raccoon) and analyze and store that video itself. With machine learning and facial recognition, the camera could determine whether the person is a resident of the house or a potential burglar.

All on the device.
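The selective-recording idea in Talluri's camera example boils down to comparing consecutive frames and keeping only the ones that changed. The sketch below is a deliberately minimal stand-in, assuming frames arrive as flat lists of grayscale pixel values; a real smart camera would pair motion detection like this with an on-device classifier to distinguish people from pets.

```python
# Minimal sketch of on-device selective recording: frame differencing.
# Frames are flat lists of grayscale pixel intensities (0-255).

def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between two consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def frames_to_record(frames, threshold=10.0):
    """Keep only frames that differ noticeably from their predecessor."""
    kept = []
    for prev, cur in zip(frames, frames[1:]):
        if motion_score(prev, cur) > threshold:
            kept.append(cur)
    return kept

still = [100] * 4                 # a static scene: nothing to store
moved = [100, 180, 180, 100]      # something crossed the frame
print(len(frames_to_record([still, still, moved, still])))  # 2
```

Everything the camera archives is decided locally; only the frames worth keeping would ever need to leave the device.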

“What is happening now is this whole push for the cloud versus the non-cloud. That is another thing that is happening,” Talluri said. “Now you push a lot more intelligence to the edge, to the camera.”

This is what it means to watch the intelligence of the Internet of Things move to the Things themselves. It is part of a familiar cycle that has been 70 years in the making. And the results, mixed with the evolution of the cloud and smartphones, create an incredibly potent blend of innovation that can be applied to almost any industry or human behavior.

Image: “The Blue Supernova” by Flickr user Carlos, Creative Commons.

  • petergkinnon

    Yes, Netty continues to insidiously infiltrate every aspect of our world! Even now, hardly anybody notices.

    This article underlines the growing realization that rather than “artificial intelligence” arising from science fiction based notions involving individual robots/computers/systems, we see the increasing integration and extension of the entire network.

    In actuality, the real next cognitive entity quietly self assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, is not stoppable or directly within our control.

    We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.

    The fact is that the evolution of the Internet (and, of course, major components such as Google) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and also the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities tend to be relegated to the emotional “too hard” bin.

    This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.

    Virtually all interests are catered for and, in toto provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

    The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.

    Stephen Hawking, for instance, is reported to have remarked, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

    Such statements reflect the narrow-minded approach that is so common-place among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet is, and always has been, an autonomous process over which we have very little real control.

    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

    Very real evidence indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

    The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    There are at present more than 3 billion Internet users. There are an estimated 10 to 80 billion neurons in the human brain. On this basis for approximation the Internet is even now only one order of magnitude below the human brain and its growth is exponential.

    That is a simplification, of course. For example: Not all users have their own computer. So perhaps we could reduce that, say, tenfold. The number of switching units, transistors, if you wish, contained by all the computers connecting to the Internet and which are more analogous to individual neurons is many orders of magnitude greater than 3 Billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.

    We see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in at least raw processing power. And, of course, the all-important degree of interconnection and cross-linking of networks and supply of sensory inputs is also growing exponentially.

    We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.

    Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity. In the event that we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the “life” process then we have the possibility of a bright future.

    If we don’t become aware of these realities and mend our ways, however, then we snout-less apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet dependent and Netty will only need to “pull the plug” to effect pest eradication.

    So it is to our advantage to try to effect the inclusion of desirable human behaviors in Netty’s psyche. In practice that equates to our species firstly becoming aware of our true place in nature’s machinery and, secondly, making a determined effort to “straighten up and fly right”

    • DanRowinski

      Well reasoned and well said.

      On the other hand, I would’ve appreciated it more if you had left it as an original comment to the specific contents of this particular article and not a copy/paste job that you leave on any/all IoT-related articles you come across. Going through your Disqus history, you do the same thing with other article topics. I understand you have a book to sell and it seems interesting, based on your logic. I appreciate that. But leaving the same comment over and over on different article topics approximates spam. And I do hope you are more cunning and original than to resort to that particular tactic.

      Next week I will be tackling the evolution of the Internet vis-a-vis the evolution of the Internet of Things. If you care to leave an original, crafted response to that article, I invite you to do so.

      Dan Rowinski

      • petergkinnon

        Sorry for my tardiness in replying, very busy with other matters.

        I generally do not bother to respond to comments that do not directly address the topic or are simply mindless, as with the other instance here.

        However, your reply, being both polite and quite rational, does, I feel, deserve some consideration.

        Certainly I re-use similar blocks of text to address various pertinent topics. I make no apology for this. What is the point of “re-inventing the wheel”? I do some tailoring where required, but the content is highly relevant and of good quality. Where sensible side-issues are involved I construct replies “on the fly” and add them to my “Scrivener” database for future use. Is that not an efficient (rather than “cunning”) use of my limited time?

        I have a passion to spread informed and usually well constructed ideas rather than to “sell books”.
        However, it is my firm opinion after decades of internet interactions that the blog format (and its predecessors) is unsuited to a full exploration of such a broad issue as the evolution of our universe from the stelliferous era onwards.
        Indeed, the Institute of Physics limits comments to 1000 characters, and a 5000-character limit is not at all uncommon. Furthermore, the content of most posts indicates generally low attention spans.

        I think that your intimation that my re-use of text blocks constitutes spam is unfair. My method is not to publish these indiscriminately but to maintain a quite tight focus using (mostly) key phrases from Google alerts as the initial filter.

        Of course, the mere reference of “The Intricacy Generator” results in comments being expunged by the dumber kind of moderator. The more thoughtful being aware that the signal to noise ratio of the comment is worthy of its retention.

        Even if what is construed by some as “self-promotion” is “noise”. Which I contest anyway.

        Do you get my drift?

    • G_R_Johnson

      Who are you trying to impress?