By: Rob Bamforth, Principal Analyst, Quocirca
Published: 6th January 2014
Copyright Quocirca © 2014
There can be no doubting the growth and increasing influence on both personal and working lives of the touchscreen technology revolution of smartphones and, especially, tablets. Some have said we are entering a post-PC era and, given recent shipment statistics, this looks increasingly to be the case, but can people really do without their familiar desktop or laptop IT?
In the home it is clear that tablet use and similar information interaction via games consoles and smart TVs is becoming the norm—often several devices at once, 'meshing' and 'stacking'. This is supplanting the isolated use of the old-fashioned PC on a desk in a distant room as mobile media users gather together (often still isolated!) in the same living space. Laptops cling on, but all except the lightest and sleekest are beginning to feel clunky and a little passé. Does this consumerisation of all things technical herald a seismic shift in IT in the workplace?
Those defending the status quo for business users will say a resounding "no". They will argue that tablets and their ilk are fine for consumption, but not great at content creation—real work requires a keyboard.
Let’s examine what 'real work' is (or perhaps should be). For the most part, unless your main job is 'author' or similar, content creation will be communication—keeping colleagues, business partners and, especially, managers informed—and reports, written not really to inform anyone, but to be kept as 'evidence' just in case. That may be a touch exaggerated and cynical, but it probably contains more than a hint of truth. Hierarchical organisations and distance may once have required a lot of 'paperwork', but online, on the move and on the same level, many present-day organisations communicate, administrate and share information differently.
If it were simply a matter of getting thoughts down as words into digital files, then surely everyone would be speaking to their computers; however, audio input still fails to kit the moustache (I’m sure it was "cut the mustard" when I dictated it), despite the aspirations of Star Trek and other sci-fi fans.
Voice recognition technology itself has come on in leaps and bounds since it was first introduced, and sure, many can (in the isolation of their quiet office) dictate into a computer with very high degrees of accuracy. For those tasked with getting high volumes of words input in bounded periods of uninterrupted time, dictation and automated recognition are great.
But most working time is not like that for most workers. It is full of interruptions, phone calls, distractions, other people and noise. Some may only be able to speak to a computing device to issue the odd command or request—if they are fortunate—and even then, many will be embarrassed by it, if the experience is in public.
The next thing to assess is how much of what goes into the overall business bucket of 'content' is simply words. The chances are that, if we are talking about really valuable, accessible and actionable content, it is not a great deal. Images and other media formats are often better; better still, by an even greater degree, are live feeds of information from pinpointed sources of interest—we could call this 'big data', even if there is not a huge volume of it.
In practice, there probably is a lot of it—and certainly too much to type.
It comes from automated feeds from many IT applications and, increasingly, as ‘things’ are connected to the internet, it can come from objects in the real world as well.
Already most consumer mobile devices provide location information and often sensor feedback of movement. With its identity known, it becomes pretty straightforward to gather an individual information feed and then combine it with others—anonymously or not, depending on opt-ins and the type of application. In its infancy, this idea of using location was thought to herald a profitable class of applications known as Location Based Services (LBS), but in reality today, location is just an extra attribute that can support almost any application, with the increasing use of proximity and motion looking even more interesting.
As technology shrinks and connectivity becomes more ubiquitous, more devices can easily be added to the mix: people sporting ‘wearable’ devices that can monitor anything from automatically recording exercise activity (Nike FuelBand) to capturing images on demand (Google Glass); sensors detecting the movement, usage, wear or ambient conditions of things or places and recording them as machine-to-machine (M2M) activities. Collated, this mass of information can be viewed and analysed through reports and dashboards that meet the needs of the business information consumer, where interaction is likely to be primarily, and comfortably, touch driven—a tablet, perhaps?
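As a loose illustration of the collation step described above (the device names, metrics and readings here are entirely hypothetical), turning raw M2M feed data into a dashboard-ready summary amounts to little more than grouping and summarising readings that arrive without anyone typing a word:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw feed: (device_id, metric, value) tuples arriving
# automatically from sensors and wearables, not from a keyboard.
readings = [
    ("sensor-1", "temperature", 21.5),
    ("sensor-1", "temperature", 22.0),
    ("sensor-2", "vibration", 0.03),
    ("wearable-7", "steps", 5400),
]

def collate(readings):
    """Group readings by (device, metric) and produce a simple
    dashboard-style summary: count and average per group."""
    grouped = defaultdict(list)
    for device, metric, value in readings:
        grouped[(device, metric)].append(value)
    return {key: {"count": len(vals), "avg": mean(vals)}
            for key, vals in grouped.items()}

summary = collate(readings)
print(summary[("sensor-1", "temperature")])  # {'count': 2, 'avg': 21.75}
```

The point is not the code itself but how little of the 'content' involves typed words: the data is gathered, combined and summarised automatically, leaving only touch-friendly consumption for the end user.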
With all that data being gathered and entered automatically, who needs a keyboard now? Clearly some, beyond the dedicated writers, will, but for almost everyone else, the amount of ‘keyboard time’ required for the effective use of IT for work as well as home use will continue to diminish, perhaps quite rapidly.
IT input is essentially becoming automated. The keyboard may have survived the typewriter into the desktop computing age, but in the mobile, wearable and internet of things age it might just be adding more bulk than value.
Published by: electronicdawn Ltd.