Save your money. This book contains nothing but an extended defense of a Utopian vision of the IT future first published in Carr's HBR article. Limited understanding of the underlying IT technologies, haziness, and a lack of concrete, detailed examples (obscurantism) are the typical marks of Carr's style. Carr uses a focus on IT shortcomings as a smokescreen to propose a new utopia: users master complex IT packages and perform all the functions previously provided by IT staff, while "in the cloud" software service providers fill in the rest. This is pretty fine humor, a caricature that reminds me of the mainframe model, but not much more.
His analogies are extremely superficial and completely unconvincing (Google can actually benefit greatly from owning an electrical generation plant or two :-). The complexity of IT systems has no precedent in human history, which means that analogies with railways and the electrical grid are deeply and irrevocably flawed. They do not capture the key characteristics of IT technology: its unsurpassed complexity and Lego-like flexibility. IT has become the real nervous system of the modern organization, not its muscles or legs :-)
Carr's approach to IT is completely anti-historical. Promoting his "everything in the cloud" Utopia as the most important transformation of IT ever, he forgot (or simply does not know) that IT has already experienced several dramatic transformations due to new technologies that emerged in the 1960s, 1970s and 1990s. Each of those transformations was more dramatic and important than the neo-mainframe revolution he tries to sell as the "bright future of IT" and a panacea for all IT ills. For example, first mainframes replaced "prehistoric" computers. Then minicomputers challenged mainframes ("glass wall" datacenters), and the PC ended mainframe dominance (and democratized computing). In yet another transformation, the Internet and TCP/IP (including wireless) converted datacenters into their modern form. What Carr views as the next revolution is just a blip on the screen in comparison with those events, in each of which the technology inside the datacenter and on users' desks changed dramatically.
As for his "everything in the cloud" software service providers, there are at least three competing technologies that might sideline them: application streaming, virtualization (especially virtual appliances), and "cloud in the box". "In the cloud" software services are just one of several emerging technical trends, and the jury is still out on how much market share each of them can grab. Application streaming looks like a direct and increasingly dangerous competitor to the "in the cloud" software services model. But all of them are really complementary technologies, each having advantages in certain situations, and none can be viewed as a universal solution.
The key advantage of application streaming is that you use local computing power to run the application, not a remote server. That removes the latency and bandwidth problems inherent in transmitting the video stream generated by the GUI on a remote server (where the application is running) to the client. Also, modern laptops have tremendous computing power that is very expensive and not easy to match in a remote server farm. Once you launch the application on the client (from a shortcut), the remote server streams (like streaming video or audio) the necessary application files to your PC and the application launches. This is done just once; after that the application works as if it were local. Also, only the required files are sent (so if you are launching Excel you do NOT get the libraries that are shared with MS Word if it is already installed).
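To make the mechanics concrete, here is a minimal sketch of the idea; the server URL, file manifest and helper name are hypothetical, and real streaming products use their own protocols and local caches.

```python
# Hypothetical sketch of application streaming: fetch only the files the
# application actually needs, cache them locally once, then run everything
# on the local CPU. The server URL, manifest and file names are made up.
import os
import subprocess
import urllib.request

STREAM_SERVER = "https://appstream.example.com"       # hypothetical streaming server
CACHE_DIR = os.path.expanduser("~/.appstream/excel")  # local cache for streamed files

def stream_and_launch(app_name, manifest):
    """Download missing files on first launch, then start the app locally."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    for relative_path in manifest:
        local_path = os.path.join(CACHE_DIR, relative_path)
        if os.path.exists(local_path):
            continue                                   # already cached: nothing is re-sent
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        urllib.request.urlretrieve(f"{STREAM_SERVER}/{app_name}/{relative_path}", local_path)
    # After the one-time transfer the application runs as if it were installed locally.
    subprocess.run([os.path.join(CACHE_DIR, "excel.exe")], check=True)

# Only files unique to Excel are listed; libraries shared with an already
# installed MS Word would simply not appear in the manifest.
stream_and_launch("excel", ["excel.exe", "bin/xlintl32.dll"])
```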
Virtualization promises more agile and more efficient local datacenters, and while it can be used by "in the cloud" providers (Amazon uses it), it can also undercut the "in the cloud" software services model in several ways. First of all, it permits packaging a set of key enterprise applications as "virtual appliances". The latter, like streamed applications, run locally, store data locally, are cheaper, have better response time, and are more maintainable. This looks to me like a more promising technical approach for complex sets of applications with intensive I/O requirements. For example, you can deliver a LAMP stack appliance (Linux-Apache-MySQL-PHP) and use it on a local server to run your LAMP applications (for example, a helpdesk), enjoying the same level of quality and sophistication of packaging and tuning as with remote software providers. But you do not depend on the WAN, since users connect over the LAN, which guarantees fast response time. And your data are stored locally (though if you wish they can be backed up remotely to Amazon or another remote storage provider).
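As a rough sketch of what delivering such an appliance to a local server could look like (the image and domain names are hypothetical; "virsh define" and "virsh start" are standard libvirt commands):

```python
# Rough sketch of dropping a vendor-packaged LAMP virtual appliance onto a
# local server with libvirt. The image and domain names are hypothetical.
import shutil
import subprocess

APPLIANCE_IMAGE = "helpdesk-lamp.qcow2"   # hypothetical vendor-supplied disk image
DOMAIN_XML = "helpdesk-lamp.xml"          # hypothetical vendor-supplied domain definition

# Data stays on the local disk; users reach the appliance over the LAN.
shutil.copy(APPLIANCE_IMAGE, "/var/lib/libvirt/images/")
subprocess.run(["virsh", "define", DOMAIN_XML], check=True)       # register the VM
subprocess.run(["virsh", "start", "helpdesk-lamp"], check=True)   # boot it locally
```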
The other trend is the emergence of a higher level of standardization of datacenters (the "cloud in the box" or "datacenter in the box" trend). It permits cheap, prepackaged local datacenters to be installed everywhere. Among examples of this trend are the standard shipping-container-based datacenters which are now sold by Sun and soon will be sold by Microsoft. They already contain typical services like DNS, mail, file sharing, etc. preconfigured. For a fixed cost an organization gets a set of servers capable of serving a mid-size branch or plant. In this case the organization can save money by avoiding monthly "per user" fees, the typical cost recovery model of software service providers. It can also be combined with the previous two models: it is easy to stream both applications and virtual appliances to the local datacenter from a central location. For a small organization such a datacenter can now be pre-configured on a couple of servers running Xen or VMware, plus the necessary routers and switches, and shipped in a small rack.
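A sketch of how such a combination could work, assuming hypothetical host and appliance names and nothing beyond standard rsync, ssh and virsh:

```python
# Sketch of combining the models: a central site pushes appliance images and
# their definitions to a branch "datacenter in a box" and boots them there.
# Host names, paths and appliance names are hypothetical.
import subprocess

BRANCH_HOST = "branch-dc01.example.com"   # hypothetical branch datacenter
APPLIANCES = ["helpdesk-lamp", "wiki"]    # hypothetical appliance names

for name in APPLIANCES:
    # Only the transfer crosses the WAN; day-to-day user traffic stays on the branch LAN.
    subprocess.run(["rsync", "-az", f"{name}.qcow2",
                    f"{BRANCH_HOST}:/var/lib/libvirt/images/"], check=True)
    subprocess.run(["rsync", "-az", f"{name}.xml",
                    f"{BRANCH_HOST}:/etc/appliances/"], check=True)
    # Register and boot the appliance on the branch Xen/VMware hosts.
    subprocess.run(["ssh", BRANCH_HOST, "virsh", "define", f"/etc/appliances/{name}.xml"], check=True)
    subprocess.run(["ssh", BRANCH_HOST, "virsh", "start", name], check=True)
```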
I would like to stress that the power and versatility of the modern laptop is a factor that should not be underestimated. It completely invalidates Carr's cloudy dream of users voluntarily switching to the network terminal model inherent in centralized software services (BTW, mainframe terminals and, especially, "glass wall" datacenters were passionately hated by users). Remotely running applications have mass appeal only in very limited cases (webmail). I think that users will fight tooth and nail to preserve the level of autonomy provided by modern laptops. Moreover, in no way will users agree to the sub-standard response time and limited feature set of "in the cloud" applications, as the problems with Google Apps adoption demonstrated.
While Google Apps is an interesting project which is now used in many small organizations instead of their own mail and calendar infrastructure, it can serve as a litmus test for the difficulty of replacing "installed" applications with "in the cloud" applications. First of all, if we are talking about replacing OpenOffice or Microsoft Office, the functionality is really, really limited. At the same time Google has spent a lot of money and effort creating it but never got any significant traction and/or a sizable return on investment. After several years of existence this product has not even come close to the functionality of OpenOffice. To increase penetration Google recently started licensing it to Salesforce and other firms. That suggests the whole idea might be flawed: even such an extremely powerful organization as Google, with its highly qualified staff and the huge server power of its datacenters, cannot create an application suite that can compete with applications preinstalled on a laptop, which means it cannot compete with the convenience and speed of running applications locally on a modern laptop.
In the case of corporate editions, price is also an issue, and Google Apps does not look like a bargain in comparison with Office Professional ($50 per user per year vs. $220 for Microsoft Office Professional) if we assume a five-to-seven-year life span for MS Office.
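A quick back-of-the-envelope check of that comparison, using only the prices quoted above and the assumed five-to-seven-year life span:

```python
# Per-seat comparison using the prices quoted in this review: $50/user/year
# for Google Apps vs. a one-time $220 for Office Professional.
GOOGLE_APPS_PER_YEAR = 50      # USD per user per year (subscription)
OFFICE_PRO_ONE_TIME = 220      # USD per user, one-time license

for years in (5, 7):
    google_total = GOOGLE_APPS_PER_YEAR * years
    print(f"{years} years: Google Apps ${google_total} vs. Office ${OFFICE_PRO_ONE_TIME} per seat")
# 5 years: Google Apps $250 vs. Office $220 per seat
# 7 years: Google Apps $350 vs. Office $220 per seat
```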
The same situation exists for home users: price-wise, Microsoft Office can now be classified as shareware (Microsoft Office Home and Student 2007, which includes Excel, PowerPoint, Word, and OneNote, costs ~$100, or ~$25 per application). So for home users Google needs to provide Google Apps for free, which, taking into account the amount of design effort and the complexity of achieving compatibility, is not a very good way of investing available cash. Please note that Microsoft can at any time add the ability to stream Office applications to laptops and put "in the cloud" Office-alternative software service providers in a really difficult position: remote servers would need to provide the same quality of interface and amount of computing power per user as the user enjoys on a modern laptop. That also suggests the existence of some fundamental limitations of the "in the cloud" approach for this particular application domain. And this is not a unique case: SAP has problems moving SAP R/3 to the cloud too and recently decided to scale back its efforts in this direction.
All in all, the computing power of a modern dual-core 2-3 GHz laptop with 2-4 GB of memory and a 100-200 GB hard drive represents a serious challenge for "in the cloud" software service providers. This power makes it difficult for them to attract individual users' money outside advertising-based or other indirect models. It is even more difficult for them to "shake corporate money loose": corporate users value the independence of applications installed locally on a laptop and the ability to store data locally. Not everybody wants to share their latest business plans with Google.
Therefore Carr's 2003 vision looks even less realistic in 2008 than it did five years earlier. Since during those five years datacenters actually continued to grow, Carr's value as a forecaster of technology trends is open for review.
Another problem with Carr's central "software service provider" vision (aka the neo-mainframe vision) is its propaganda of "bandwidth communism". Good WAN connectivity is far from free. As the experience of any university datacenter convincingly demonstrates, a dozen P2P enthusiasts in the neighborhood can prove the futility of dreams about free, high-quality WAN connectivity to any skeptic. In other words, this is a typical "tragedy of the commons" problem and should be analyzed as such.
Viewed from this angle, Carr's vision of reliable and free 24x7 communication with remote datacenters looks unrealistic. This shortcoming can be compensated for by the properties of some protocols (for example SMTP mail), and for such protocols it is not a problem, but for others it is and always will be. At the same time, buying dedicated WAN links can be extremely expensive: for mid-size companies it is usually as expensive as keeping everything in house. That makes the "in the cloud" approach problematic for any service where disruptions or low bandwidth at certain times of the day can lead to substantial monetary losses. Bandwidth is also limited: for example, OC-1 and OC-3 lines have upper limits of 51.84 Mbit/s and 155.52 Mbit/s respectively. And even within an organization, not all bandwidth is used for business purposes: in a large organization there are always many "entertainment-oriented" users who strain the firm's connection to the Internet cloud.
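A rough illustration of that ceiling; the per-session bandwidth figure is an assumption made for the sake of the estimate, not a measured value:

```python
# The OC-1/OC-3 capacities are standard; the per-session figure for a
# remotely rendered GUI is an assumed estimate.
OC1_MBPS = 51.84
OC3_MBPS = 155.52
GUI_SESSION_MBPS = 1.5      # assumed bandwidth per remote desktop/GUI session

for name, capacity in (("OC-1", OC1_MBPS), ("OC-3", OC3_MBPS)):
    sessions = int(capacity / GUI_SESSION_MBPS)
    print(f"{name}: at most ~{sessions} concurrent GUI sessions before saturation")
# OC-1: at most ~34 concurrent GUI sessions before saturation
# OC-3: at most ~103 concurrent GUI sessions before saturation
```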
Another relevant question to ask is: "What are the financial benefits to a large organization of implementing Carr's vision?" I do not see any substantial financial gains. IT costs in large enterprises are already minimized (often 1-3% of total costs), and further minimization does not bring much benefit: what can you save from just 1% of total costs? But you can lose a lot.
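To put that in concrete terms, here is a toy calculation; the company size and the size of the assumed cut are illustrative, while the 1-3% share is my own estimate above:

```python
# The company size and the 25% cut are assumptions for the example;
# the 1-3% IT share comes from the estimate in the review.
TOTAL_COSTS = 1_000_000_000      # assume a $1B cost base
IT_SHARE = 0.02                  # IT at 2% of total costs (middle of the 1-3% range)
ASSUMED_CUT = 0.25               # assume outsourcing trims IT spending by a quarter

it_costs = TOTAL_COSTS * IT_SHARE
savings = it_costs * ASSUMED_CUT
print(f"IT budget: ${it_costs:,.0f}")
print(f"Savings:   ${savings:,.0f}  ({savings / TOTAL_COSTS:.2%} of total costs)")
# IT budget: $20,000,000
# Savings:   $5,000,000  (0.50% of total costs)
```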
Are fraction-of-a-percent savings worth the risk of outsourcing your own nervous system? That translates into the question: "What are the principal differences in the behavior of those two IT models during catastrophic events?" The answer is: "When disaster strikes, the difference between local and outsourced IT staff becomes really critical and entails a huge competitive disadvantage for those organizations that weakened their internal IT staff."
That brings us to another problem with Carr's views: he discounts the IQ inherent in local IT staff. If this IQ falls below a certain threshold, that not only endangers an organization in case of catastrophic events but also instantly opens such an enterprise to every form of snake-oil salesman and IT consultant peddling their wares. Software service providers are not altruists either, and if they sense that you are really dependent on them or have become "IT challenged", they will act accordingly. In other words, an important side effect of dismantling an IT organization is that it instantly makes a company a donor in the hands of ruthless external suppliers and contractors. Consultants (especially large consulting firms) can help, but they can also become part of the problem because of the question of loyalty. We all know what happened to medicine when doctors were allowed to be bribed by pharmaceutical companies. That situation, aptly called "Viva Viagra", in which useless or outright dangerous drugs like Vioxx were allowed to become blockbusters, has been fully replicated in IT: the independence of IT consultants is just a myth (and moreover, some commercial IDS/IPS and EMS systems are, in their destructive potential, not that different from Vioxx ;-).
Carr's recommendation that companies should be more concerned with IT risk mitigation than with IT strategy is complete baloney. He simply does not have any in-depth understanding of the very complex security issues involved in a large enterprise. Security cannot be achieved without a sound IT architecture and the participation of non-security IT staff. Sound architecture (which is the result of a proper "IT strategy") is more important than any amount of "risk mitigation" activity, which most commonly is a simple waste of money or, worse, does direct harm to the organization (as the SOX enthusiasts from the big accounting firms recently and aptly demonstrated to a surprised corporate world).
I have touched only on the most obvious weaknesses of Carr's vision (or fallacy, to be exact). All in all, Carr proposed just another dangerous utopia and skillfully milked the controversy his initial HBR article generated in his two subsequent books.
"Pancake People" and the Darker Side of the Net,
By | Trevor Cross "persepolis" (Hingham, MA United States) - See all my reviews |
"Pancake People" refers to Richard Foreman's description about people on the Internet being a mile wide and an inch deep. Carr describes how the technology behind the Internet (filters, etc) actually compounds this problem.
One of the author's best insights comes when he takes issue with the whole concept behind AI (artificial intelligence). He states that instead of computers becoming more human-like in their thinking, it is we who could become more computer-like in our thinking. As a humanist who grew up loving technology, I find this scenario frightening because it hits close to home. The comments (included in the book) from the co-founders of Google about creating a brain-computer interface reminded me of the "Borg" from Star Trek. For those interested, the Borg were a commentary on the communistic, totalitarian effects of unfettered technology (nanobots, brain/computer interfacing).