Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Edge Computing

News

Slightly Skeptical View on Enterprise Unix Administration

Recommended Links

Webliography of problems with "pure" cloud environment

Bandwidth communism

Cloud Mythology
Edge Computing

Issues of security and trust in "pure" cloud environment

Questionable cost efficiency of pure cloud

Lock-in related problems

Cloud providers as intelligence collection hubs

Problem of loyalty

Troubleshooting Remote Autonomous Servers

Configuring Low End Autonomous Servers

Review of Remote Management Systems

Virtual Software Appliances

Working with serial console

System Management

iLO 3 -- HP engineering fiasco

Dell DRAC

ALOM Setup

Is Google evil?

Typical problems with IT infrastructure

Heterogeneous Unix server farms

Sysadmin Horror Stories

Humor

Etc

From a historical standpoint, "pure cloud" providers represent a return to the mainframe era on a new technological level. Cloud providers are the new "glass data centers" and, due to overcentralization, will soon be hated as much, if not more. The bottleneck is WAN links, which are costly in both public and private clouds.

An interesting technology now developing as part of "hybrid cloud" is so-called "edge computing". Of course, the centralization train is still running at full speed, but in two to three years people might start asking the question "Why does my application freeze so often?"

There is a synergy between "edge computing" and "grid computing": edge computing can be viewed as grid computing at the edges of the network, with a central management node (the headnode in "HPC speak") running software like Puppet, Chef or Ansible instead of an HPC scheduler.
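
A minimal sketch of this "headnode" idea, with hypothetical host names and a trivial command (in practice Ansible, Chef or Puppet inventories would drive this, with proper error handling and retries), is a script on the central node that fans a task out to the edge nodes over ssh:

# headnode_run.py -- minimal sketch of a central "headnode" pushing a command
# to remotely managed edge nodes over ssh. Host names and the command are
# hypothetical; key-based ssh access is assumed.
import subprocess
from concurrent.futures import ThreadPoolExecutor

EDGE_NODES = ["edge-site1.example.com", "edge-site2.example.com"]  # assumption
COMMAND = "uptime"  # replace with the real management task

def run_on_node(node):
    # BatchMode avoids hanging on password prompts during unattended runs
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", node, COMMAND],
        capture_output=True, text=True, timeout=60,
    )
    return node, result.returncode, result.stdout.strip()

with ThreadPoolExecutor(max_workers=8) as pool:
    for node, rc, out in pool.map(run_on_node, EDGE_NODES):
        print(f"{node}: rc={rc} {out}")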

As the WAN bandwidth available at night is several times higher than during the day, one important task that can be performed by "edge computing" is "time shifting": delaying some transmissions and data synchronization until night time. UDP-based tools allow transferring large amounts of data at night by saturating the WAN link, but even TCP can be usable at night on high-latency links if you split the data stream into several sub-streams. I reached speeds comparable to UDP when I was able to split the data stream into 8-16 sub-streams. Such "time shift" schemes do not require a large investment. All "edge" resources should use unified VM images and a unified patching scheme.
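
As a hedged sketch of the sub-stream trick (host, paths and stream count are assumptions), a night-time cron job can slice the outgoing file list into N pieces and push each piece with its own rsync process, so the WAN link carries several parallel TCP streams:

# parallel_sync.py -- sketch: split a directory's file list into N slices and
# push each slice with its own rsync process, so a high-latency WAN link is
# filled by several parallel TCP streams. Host and paths are hypothetical.
import os
import subprocess
import tempfile

SRC = "/data/outgoing"                       # local staging area (assumption)
DEST = "backup@central.example.com:/ingest"  # central site (assumption)
STREAMS = 8                                  # 8-16 worked well in practice

files = sorted(
    os.path.relpath(os.path.join(root, name), SRC)
    for root, _, names in os.walk(SRC)
    for name in names
)
slices = [files[i::STREAMS] for i in range(STREAMS)]

procs = []
for slice_files in slices:
    if not slice_files:
        continue
    # each rsync reads its own slice of the file list (paths relative to SRC)
    listing = tempfile.NamedTemporaryFile("w", delete=False)
    listing.write("\n".join(slice_files) + "\n")
    listing.close()
    procs.append(subprocess.Popen(
        ["rsync", "-a", "--partial", f"--files-from={listing.name}", SRC, DEST]
    ))

for p in procs:
    p.wait()   # a real script would also check return codes and clean up the temp lists

The --partial option keeps partially transferred files, so a stream that dies mid-transfer does not have to start from zero the next night.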

I would define "hybrid cloud" as a mixture of private cloud, public cloud and "edge" computing with the emphasis on an optimizing consumed WAN bandwidth and offloading processes that consume WAN bandwidth at day time (One perverted example of WAN bandwidth usage is Microsoft Drive synchronization of user data)  to the edge of the network via installation of "autonomous datacenters." Which can be just one remotely controlled.  No local personnel is needed, so this is still "cloud-style organization" as all the staff is at central location.  For the specific task of “time shift" of  backup of user data you need just one server with storage per site. Initial backup is performed to the local storage, while remote transfers via WAN are shifted to the night time, where bandwidth is larger. 

From Wikipedia ("edge computing"):

Edge computing is a distributed computing paradigm in which computation is largely or completely performed on distributed device nodes known as smart devices or edge devices as opposed to primarily taking place in a centralized cloud environment. The eponymous "edge" refers to the geographic distribution of computing nodes in the network as Internet of Things devices, which are at the "edge" of an enterprise, metropolitan or other network. The motivation is to provide server resources, data analysis and artificial intelligence ("ambient intelligence") closer to data collection sources and cyber-physical systems such as smart sensors and actuators.[1] Edge computing is seen as important in the realization of physical computing, smart cities, ubiquitous computing and the Internet of Things.

Edge computing is about centrally (remotely) managed autonomous mini datacenters. As in the classic cloud environment, staff is centralized and all computers are centrally managed, but they are located on site, not in a central datacenter, and that alleviates the WAN bottleneck, for certain applications eliminating it completely. Some hardware vendors already have products designed for such use cases.

... ... ...

Edge computing and grid computing are related.

IBM's acquisition of Red Hat is partially designed to capture large enterprises' interest in the "private cloud" space, where the virtual machines or containers that constitute the cloud run on local premises (OS-level virtualization such as Solaris x86 Zones or Docker containers). This way one gets the flexibility of the "classic" (off-site) cloud without prohibitive cost overruns. On Azure a minimal server (2-core CPU, 32GB of disk and 8GB of RAM) with minimal, experimental usage costs about $100 a month. You can lease a $3000 Dell server with a full hardware warranty for three years for $98.19 a month. With a promotion you can even get free delivery and installation for this amount of money, and free removal of the hardware after three years of use. So, as with the cloud, you do not touch the hardware, but you do not pay extra to Microsoft for this privilege either (you do have air-conditioning and electricity costs, though). And such an "autonomous, remotely controlled mini-datacenter" can be installed anywhere Dell (or HP) provides service.
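
A back-of-the-envelope comparison over the three-year term, using the figures above plus an assumed on-premises power and cooling cost, looks like this:

# cost_compare.py -- three-year cost sketch for the numbers quoted above.
# The cloud figure and the lease figure come from the text; power/cooling for
# the leased box is a rough assumption added for illustration.
MONTHS = 36

azure_small_vm = 100.00 * MONTHS          # ~$100/month quoted above
dell_lease     = 98.19 * MONTHS           # $98.19/month lease quoted above
power_cooling  = 25.00 * MONTHS           # assumption: ~$25/month on premises

print(f"Azure minimal VM, 3 years:      ${azure_small_vm:,.2f}")
print(f"Leased Dell server, 3 years:    ${dell_lease:,.2f}")
print(f"Leased Dell + power/cooling:    ${dell_lease + power_cooling:,.2f}")

Even with power and cooling added the totals are in the same ballpark, and the leased box is typically far larger than the minimal Azure VM, so the raw numbers understate the on-premises advantage.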

There are now several hardware offerings for edge computing. See for example:

Unfortunately, the centralization drive still rules. But having some "autonomous units" at the edge of the network is an attractive idea with a future. First of all, it cuts the required WAN bandwidth. Second, if implemented as small units with full remote management ("autonomous"), it avoids many of the problems of the "classic cloud" WAN bottleneck as well as problems typical of corporate datacenters.

Sooner or later the age of the centralized cloud will come to its logical end, and something will replace it. As soon as top management realizes how much they are paying for WAN bandwidth, there will be a backlash. I hope that hybrid cloud might become a viable technical policy within a two to three year timeframe.

And older folks remember quite well how much IBM was hated in the 1960s and 1970s (and this despite the excellent compilers it provided and the innovative VM/370 OS, which pioneered virtualization; its CMS component later adopted REXX as its scripting shell), and how much energy and money people spent trying to free themselves from this particular "central provider of services" model: the "glass datacenter". It is quite possible that cloud providers will repeat a similar path on a new level. Their strong point, delivering centralized services for mobile users, is also under scrutiny, as the NSA and other intelligence agencies have free access to all user emails.

Already, in many large enterprises the WAN constitutes a bottleneck, with Office 365 applications periodically freezing and quality of service deteriorating to the level of user discontent, if not revolt.

People younger than 40 probably can't understand to what extent the rapid adoption of the IBM PC was driven by the desire to send a particular "central service provider" to hell. That historical fact raises a legitimate question about user resistance to the "public cloud" model. Now security considerations, widespread NSA interception of traffic, geo-tracking of mobile phones, and unlimited, unrestricted access to major cloud providers' webmail have also become hotly debated issues.


NEWS CONTENTS

Old News ;-)

[Mar 05, 2021] Edge servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center

Mar 05, 2021 | opensource.com

... Edge computing is a model of infrastructure design that places many "compute nodes" (a fancy word for a server ) geographically closer to people who use them most frequently. It can be part of the open hybrid-cloud model, in which a centralized data center exists to do all the heavy lifting but is bolstered by smaller regional servers to perform high frequency -- but usually less demanding -- tasks...

Historically, a computer was a room-sized device hidden away in the bowels of a university or corporate head office. Client terminals in labs would connect to the computer and make requests for processing. It was a centralized system with access points scattered around the premises. As modern networked computing has evolved, this model has been mirrored unexpectedly. There are centralized data centers to provide serious processing power, with client computers scattered around so that users can connect. However, the centralized model makes less and less sense as demands for processing power and speed are ramping up, so the data centers are being augmented with distributed servers placed on the "edge" of the network, closer to the users who need them.

The "edge" of a network is partly an imaginary place because network boundaries don't exactly map to physical space. However, servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center.

... ... ...

While it's not exclusive to Linux, container technology is an important part of cloud and edge computing. Getting to know Linux and Linux containers helps you learn to install, modify, and maintain "serverless" applications. As processing demands increase, it's more important to understand containers, Kubernetes and KubeEdge , pods, and other tools that are key to load balancing and reliability.

... ... ...

The cloud is largely a Linux platform. While there are great layers of abstraction, such as Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit from a healthy dose of Linux knowledge. The best way to learn it is to use it, and Linux is remarkably easy to try . Get the edge on Linux so you can get Linux on the edge.

[Aug 22, 2019] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Aug 22, 2019 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.

Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.

Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

[Nov 12, 2018] Edge Computing vs. Cloud Computing: What's the Difference? by Andy Patrizio

"... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
Notable quotes:
"... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
"... Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work. ..."
"... Tech research firm IDC defines edge computing is a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet." ..."
Jan 23, 2018 | www.datamation.com
Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business

The term cloud computing is now as firmly lodged in our technical lexicon as email and Internet, and the concept has taken firm hold in business as well. By 2020, Gartner estimates that a "no cloud" policy will be as prevalent in business as a "no Internet" policy. Which is to say no one who wants to stay in business will be without one.

You are likely hearing a new term now, edge computing . One of the problems with technology is terms tend to come before the definition. Technologists (and the press, let's be honest) tend to throw a word around before it is well-defined, and in that vacuum come a variety of guessed definitions, of varying accuracy.

Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work.

Tech research firm IDC defines edge computing as a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet."

It is typically used in IoT use cases, where edge devices collect data from IoT devices and do the processing there, or send it back to a data center or the cloud for processing. Edge computing takes some of the load off the central data center, reducing or even eliminating the processing work at the central location.

IoT Explosion in the Cloud Era

To understand the need for edge computing you must understand the explosive growth in IoT in the coming years, and it is coming on big. There have been a number of estimates of the growth in devices, and while they all vary, they are all in the billions of devices.

This is taking place in a number of areas, most notably cars and industrial equipment. Cars are becoming increasingly more computerized and more intelligent. Gone are the days when the "Check engine" warning light came on and you had to guess what was wrong. Now it tells you which component is failing.

The industrial sector is a broad one and includes sensors, RFID, industrial robotics, 3D printing, condition monitoring, smart meters, guidance, and more. This sector is sometimes called the Industrial Internet of Things (IIoT) and the overall market is expected to grow from $93.9 billion in 2014 to $151.01 billion by 2020.

All of these sensors are taking in data but they are not processing it. Your car does some of the processing of sensor data but much of it has to be sent in to a data center for computation, monitoring and logging.

The problem is that this would overload networks and data centers. Imagine the millions of cars on the road sending in data to data centers around the country. The 4G network would be overwhelmed, as would the data centers. And if you are in California and the car maker's data center is in Texas, that's a long round trip.

[Nov 09, 2018] Cloud Computing vs Edge Computing Which Will Prevail

Notable quotes:
"... The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing. ..."
"... For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles ..."
"... the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud. ..."
Nov 09, 2018 | www.lannerinc.com

The recent widespread use of edge computing in some 5G showcases, like the major sports events, has generated an ongoing discussion about the possibility of edge computing replacing cloud computing.

In fact, there have been announcements from global tech leaders like Nokia and Huawei demonstrating increased efforts and resources in developing edge computing.

For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles.

... ... ...

Cloud or edge, which will lead the future?

The answer to that question is "Cloud – Edge Mixing". The cloud and the edge will complement each other to offer the real IoT experience. For instance, while the cloud coordinates all the technology and offers SaaS to users, the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud.

It is strongly suggested to implement open architecture white-box servers for both cloud and edge, to minimize the latency for cloud-edge synchronization and optimize the compatibility between the two. For example, Lanner Electronics offers a wide range of Intel x86 white box appliances for data centers and edge uCPE/vCPE.

http://www.lannerinc.com/telecom-datacenter-appliances/vcpe/ucpe-platforms/

[Nov 09, 2018] Solving Office 365 and SaaS Performance Issues with SD-WAN

Notable quotes:
"... most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. ..."
"... Why enterprises overlook the importance of strategically placing cloud gateways ..."
Nov 09, 2018 | www.brighttalk.com

About this webinar Major research highlights that most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. To compound the issue, different SaaS applications issue different guidelines for solving performance issues. We will investigate the major reasons for these problems.

SD-WAN provides an essential set of features that solves these networking issues related to Office 365 and SaaS applications. This session will cover the following major topics:

[Nov 09, 2018] Make sense of edge computing vs. cloud computing

Notable quotes:
"... We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure. ..."
"... The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening. ..."
Nov 09, 2018 | www.infoworld.com

The internet of things is real, and it's a real part of the cloud. A key challenge is how you can get data processed from so many devices. Cisco Systems predicts that cloud traffic is likely to rise nearly fourfold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1ZB per year by 2020.

As a result, we could have the cloud computing perfect storm from the growth of IoT. After all, IoT is about processing device-generated data that is meaningful, and cloud computing is about using data from centralized computing and storage. Growth rates of both can easily become unmanageable.

So what do we do? The answer is something called "edge computing." We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure.

That may sound like a client/server architecture, which also involved figuring out what to do at the client versus at the server. For IoT and any highly distributed applications, you've essentially got a client/network edge/server architecture going on, or -- if your devices can't do any processing themselves -- a network edge/server architecture.

The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening.

You would still use the cloud for processing that is either not as time-sensitive or is not needed by the device, such as for big data analytics on data from all your devices.
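
As a hedged illustration of that split (sensor names, thresholds and the upload step are hypothetical, not any particular product's API), an edge node can act on time-critical readings immediately and only batch the rest for the central cloud:

# edge_filter.py -- sketch of dividing work between edge and cloud: react to
# time-critical sensor readings locally, queue everything else for a periodic
# upload to a central analytics store. Names and thresholds are illustrative.
import json
import random
import time

ALARM_THRESHOLD = 90.0      # e.g. temperature in Celsius (assumption)
batch = []                  # non-urgent readings destined for the cloud

def read_sensor():
    # stand-in for a real device read
    return {"ts": time.time(), "temp": random.uniform(20, 100)}

def act_locally(reading):
    # immediate, latency-sensitive action taken at the edge
    print("ALARM: shutting valve, temp =", reading["temp"])

def flush_to_cloud(readings):
    # stand-in for an upload to the central data center / public cloud
    print("uploading", len(readings), "readings:", json.dumps(readings)[:60], "...")

for _ in range(100):
    r = read_sensor()
    if r["temp"] > ALARM_THRESHOLD:
        act_locally(r)          # latency-sensitive path stays at the edge
    else:
        batch.append(r)         # bulk analytics can wait for the cloud
    if len(batch) >= 50:
        flush_to_cloud(batch)
        batch.clear()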

There's another dimension to this: edge computing and cloud computing are two very different things. One does not replace the other. But too many articles confuse IT pros by suggesting that edge computing will displace cloud computing. It's no more true than saying PCs would displace the datacenter.

It makes perfect sense to create purpose-built edge computing-based applications, such as an app that places data processing in a sensor to quickly process reactions to alarms. But you're not going to place your inventory-control data and applications at the edge -- moving all compute to the edge would result in a distributed, unsecured, and unmanageable mess.

All the public cloud providers have IoT strategies and technology stacks that include, or will include, edge computing. Edge and cloud computing can and do work well together, but edge computing is for purpose-built systems with special needs. Cloud computing is a more general-purpose platform that also can work with purpose-built systems in that old client/server model.

Related:

David S. Linthicum is a chief cloud strategy officer at Deloitte Consulting, and an internationally recognized industry expert and thought leader. His views are his own.

[Nov 03, 2018] Technology is dominated by two types of people: those who understand what they don't manage, and those who manage what they don't understand – ARCHIBALD PUTT (PUTT'S LAW)

Notable quotes:
"... These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. ..."
"... IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. ..."
Nov 03, 2018 | brummieruss.wordpress.com

...Cloud introduces a whole new ball game and will no doubt perpetuate Putts Law for ever more. Why?

Well unless 100% of IT infrastructure goes up into the clouds ( unlikely for any organization with a history ; likely for a new organization ( probably micro small ) that starts up in the next few years ) the 'art of IT management' will demand even more focus and understanding.

I always think a great acid test of Putts Law is to look at one of the two aspects of IT management

  1. Show me a simple process that you follow each day that delivers an aspect of IT service i.e. how to buy a piece of IT stuff, or a way to report a fault
  2. Show me how you manage a single entity on the network i.e. a file server, a PC, a network switch

Usually the answers ( which will be different from people on the same team, in the same room and from the same person on different days !) will give you an insight to Putts Law.

Child's play for most, of course, who are challenged with some real complex management situations such as data center virtualization projects, storage explosion control, edge device management, backend application upgrades, global messaging migrations and B2C identity integration. But of course, if it's evident that they seem to be managing (simple things) without true understanding, one could argue 'how the hell can they be expected to manage what they understand with the complex things?' Fair point?

Of course many C level people have an answer to Putts Law. Move the problem to people who do understand what they manage. Professionals who provide cloud versions of what the C level person struggles to get a professional service from. These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. And they are right ( and wrong ).

... ... ...

( Quote attributed to Archibald Putt author of Putt's Law and the Successful Technocrat: How to Win in the Information Age )

rowan says: March 9, 2012 at 9:03 am

IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. Understanding inventory, disk space, security etc is one thing; but understanding the performance of apps and user impact is another ball game. Putts Law is alive and well in my organisation. TGIF.

Rowan in Belfast.

stephen777 says: March 31, 2012 at 7:32 am

Rowan is right. I used to be an IT Manager but now my title is Service Delivery Manager. Why? Because we had a new CTO who changed how people saw what we did. I've been doing this new role for 5 years and I really do understand what I don't manage. LOL

Stephen777

[Oct 30, 2018] Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

Oct 30, 2018 | arstechnica.com

ControlledExperiments, replying to "I am not your friend", who wrote: "I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth."

Yeah the price is nuts, reminiscent of FB's WhatsApp purchase, and maybe their Instagram purchase.

If you actually look at the revenue Amazon gets from AWS or Microsoft from Azure, it's not that much, relatively speaking. For Microsoft, it's nothing compared to Windows and Office revenue, and I'm not sure where the growth is supposed to come from. It seems like most everyone who wants to be on the cloud is already there, and vulns like Spectre and Meltdown broke the sacred VM wall, so...

[Oct 27, 2018] We are effectively back to the centralized systems ("clouds" controlled by Amazon (CIA), Google (NSA)). The peripheral components of autonomous networked systems are akin to sensory devices with heavy computational lifting in the (local) regime controlled servers.

Oct 27, 2018 | www.moonofalabama.org

realist , Oct 27, 2018 10:27:04 AM | link

The first wave of the computer revolution created stand-alone systems. The current wave is their combination into new and much larger ones.
Posted by b on October 26, 2018 at 02:18 PM

Not strictly correct. It has come full circle in a sense. We started with standalone machines; then we had central mainframes with dumb terminals; then the first networks of these centralized servers (Xerox PARC's efforts being more forward-looking and an exception); then PCs; then the internet; and now (per design and not by merit, imo) we are effectively back to centralized systems ("clouds" controlled by Amazon (CIA), Google (NSA)). The peripheral components of autonomous networked systems are akin to sensory devices, with the heavy computational lifting in the (local) regime-controlled servers.

---

By this, I mean moral codes no longer have any objective basis but are evermore easily re-wired by the .002 to suit their purposes
Posted by: donkeytale | Oct 27, 2018 6:32:11 AM | 69

I question the implied "helpless to resist reprogramming" notion of your statement. You mentioned God. Please note that God has endowed each and every one of us with a moral compass. You mentioned Jesus. Didn't he say something about "you generation of vipers"? I suggest you consider that ours is a morally degenerate generation and is getting precisely what it wants and deserves.

[Sep 16, 2017] Google Drive Faces Outage, Users Report

Sep 16, 2017 | tech.slashdot.org

(google.com) 75

Posted by msmash on Thursday September 07, 2017

Numerous Slashdot readers are reporting that they are facing issues accessing Google Drive, the cloud storage service from the Mountain View-based company. Google's dashboard confirms that Drive is facing an outage.

Third-party web monitoring tool DownDetector also reports thousands of similar complaints from users. The company said, "Google Drive service has already been restored for some users, and we expect a resolution for all users in the near future.

Please note this time frame is an estimate and may change. Google Drive is not loading files and results in failures for a subset of users."

[Jul 02, 2017] The Details About the CIAs Deal With Amazon by Frank Konkel

Jul 17, 2014 | www.theatlantic.com

The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks.

For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What's more, they'll only pay for what they use.

The vision was first outlined in the Intelligence Community Information Technology Enterprise plan championed by Director of National Intelligence James Clapper and IC Chief Information Officer Al Tarasiuk almost three years ago. Cloud computing is one of the core components of the strategy to help the IC discover, access and share critical information in an era of seemingly infinite data.

For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual.

In 2011, while private companies were consolidating data centers in favor of the cloud and some civilian agencies began flirting with cloud variants like email as a service, a sometimes contentious debate among the intelligence community's leadership took place.

... ... ...

The government was spending more money on information technology within the IC than ever before. IT spending reached $8 billion in 2013, according to budget documents leaked by former NSA contractor Edward Snowden. The CIA and other agencies feasibly could have spent billions of dollars standing up their own cloud infrastructure without raising many eyebrows in Congress, but the decision to purchase a single commercial solution came down primarily to two factors.

"What we were really looking at was time to mission and innovation," the former intelligence official said. "The goal was, 'Can we act like a large enterprise in the corporate world and buy the thing that we don't have, can we catch up to the commercial cycle? Anybody can build a data center, but could we purchase something more?

"We decided we needed to buy innovation," the former intelligence official said.

A Groundbreaking Deal

... ... ...

The Amazon-built cloud will operate behind the IC's firewall, or more simply: It's a public cloud built on private premises.

Intelligence agencies will be able to host applications or order a variety of on-demand services like storage, computing and analytics. True to the National Institute of Standards and Technology definition of cloud computing, the IC cloud scales up or down to meet the need.

In that regard, customers will pay only for services they actually use, which is expected to generate massive savings for the IC.

"We see this as a tremendous opportunity to sharpen our focus and to be very efficient," Wolfe told an audience at AWS' annual nonprofit and government symposium in Washington. "We hope to get speed and scale out of the cloud, and a tremendous amount of efficiency in terms of folks traditionally using IT now using it in a cost-recovery way."

... ... ...

For several years there hasn't been even a close challenger to AWS. Gartner's 2014 quadrant shows that AWS captures 83 percent of the cloud computing infrastructure market.

In the combined cloud markets for infrastructure and platform services, hybrid and private clouds -- worth a collective $131 billion at the end of 2013 -- Amazon's revenue grew 67 percent in the first quarter of 2014, according to Gartner.

While the public sector hasn't been as quick to capitalize on cloud computing as the private sector, government spending on cloud technologies is beginning to jump.

Researchers at IDC estimate federal private cloud spending will reach $1.7 billion in 2014, and $7.7 billion by 2017. In other industries, software services are considered the leading cloud technology, but in the government that honor goes to infrastructure services, which IDC expects to reach $5.4 billion in 2017.

In addition to its $600 million deal with the CIA, Amazon Web Services also does business with NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. Most recently, the Obama Administration tapped AWS to host portions of HealthCare.gov.

[Jun 09, 2017] Amazon's S3 web-based storage service is experiencing widespread issues on Feb 28 2017

Jun 09, 2017 | techcrunch.com

Amazon's S3 web-based storage service is experiencing widespread issues, leading to service that's either partially or fully broken on websites, apps and devices upon which it relies. The AWS offering provides hosting for images for a lot of sites, and also hosts entire websites, and app backends including Nest.

The S3 outage is due to "high error rates with S3 in US-EAST-1," according to Amazon's AWS service health dashboard , which is where the company also says it's working on "remediating the issue," without initially revealing any further details.

Affected websites and services include Quora, newsletter provider Sailthru, Business Insider, Giphy, image hosting at a number of publisher websites, filesharing in Slack, and many more. Connected lightbulbs, thermostats and other IoT hardware is also being impacted, with many unable to control these devices as a result of the outage.

Amazon S3 is used by around 148,213 websites, and 121,761 unique domains, according to data tracked by SimilarTech , and its popularity as a content host concentrates specifically in the U.S. It's used by 0.8 percent of the top 1 million websites, which is actually quite a bit smaller than CloudFlare, which is used by 6.2 percent of the top 1 million websites globally – and yet it's still having this much of an effect.

Amazingly, even the status indicators on the AWS service status page rely on S3 for storage of its health marker graphics, hence why the site is still showing all services green despite obvious evidence to the contrary. Update (11:40 AM PT): AWS has fixed the issues with its own dashboard at least – it'll now accurately reflect service status as it continues to attempt to fix the problem .

[Apr 01, 2017] Amazon Web Services outage causes widespread internet problems

Apr 01, 2017 | www.cbsnews.com
Feb 28, 2017 6:03 PM EST NEW YORK -- Amazon's cloud-computing service, Amazon Web Services, experienced an outage in its eastern U.S. region Tuesday afternoon, causing unprecedented and widespread problems for thousands of websites and apps.

Amazon is the largest provider of cloud computing services in the U.S. Beginning around midday Tuesday on the East Coast, one region of its "S3" service based in Virginia began to experience what Amazon, on its service site, called "increased error rates."

In a statement, Amazon said as of 4 p.m. E.T. it was still experiencing "high error rates" that were "impacting various AWS services."

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

But less than an hour later, an update offered good news: "As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally," the company said.

Amazon's Simple Storage Service, or S3, stores files and data for companies on remote servers. It's used for everything from building websites and apps to storing images, customer data and customer transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Amazon has a strong track record of stability with its cloud computing service, CNET senior editor Dan Ackerman told CBS News.

"AWS... is known for having really good 'up time,'" he said, using industry language.

Over time, cloud computing has become a major part of Amazon's empire.

"Very few people host their own web servers anymore, it's all been outsourced to these big providers , and Amazon is one of the major ones," Ackerman said.

The problem Tuesday affected both "front-end" operations -- meaning the websites and apps that users see -- and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

Some affected websites had fun with the crash, treating it like a snow day:

[Apr 01, 2017] After Amazon outage, HealthExpense worries about cloud lock-in by Maria Korolov

Notable quotes:
"... "From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards." ..."
"... "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?" ..."
"... Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage. ..."
Apr 01, 2017 | www.networkworld.com

The Amazon outage reminds companies that having all their eggs in one cloud basket might be a risky strategy

"That is the elephant in the room these days," said Lee. "More and more companies are starting to move their services to the cloud providers. I see attackers trying to compromise the cloud provider to get to the information."

If attackers can get into the cloud systems, that's a lot of data they could have access to. But attackers can also go after availability.

"The DDoS attacks are getting larger in scale, and with more IoT systems coming online and being very hackable, a lot of attackers can utilize that as a way to do additional attacks," he said.

And, of course, there's always the possibility of a cloud service outage for other reasons.

The 11-hour outage that Amazon suffered in late February was due to a typo, and affected Netflix, Reddit, Adobe and Imgur, among other sites.

"From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards."

The problem is that Amazon offers some very appealing features.

"Amazon has been very good at providing a lot of services that reduce the investment that needs to be made to build the infrastructure," he said. "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?"

... ... ...

"If you have a containerized approach, you can run in Amazon's container services, or on Azure," said Tim Beerman, CTO at Ensono , a managed services provider that runs its own cloud data center, manages on-premises environments for customers, and also helps clients run in the public cloud.

"That gives you more portability, you can pick something up and move it," he said.

But that, too, requires advance planning.

"It starts with the application," he said. "And you have to write it a certain way."

But the biggest contributing factor to cloud lock-in is data, he said.

"They make it really easy to put the data in, and they're not as friendly about taking that data out," he said.

The lack of friendliness often shows up in the pricing details.

"Usually the price is lower for data transfers coming into a cloud service provider versus the price to move data out," said Thales' Radford.

Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage.

[Mar 03, 2017] Do You Replace Your Server Or Go To The Cloud? The Answer May Surprise You

Mar 03, 2017 | www.forbes.com
Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at the decision point where so many other business people are finding themselves this year. And the decision is this: do you replace that old server with a new server, or do you go to the cloud?

Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company! You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that "cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that "across all industries, storage and conferencing and collaboration are the top cloud services and applications."

So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.

There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs, ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies doing this year when it comes time to replace their servers?

These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are not going to the cloud. Instead, they've been buying new servers.

Wait, buying new servers? What about the cloud?

At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible, conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were showing their age, applications were running slower, and now, as the companies found themselves growing their infrastructure, their old machines were reaching their limit. Things were getting to a breaking point, and all six of my clients decided it was time for a change. So they all moved to the cloud, right?

Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening in the world of established companies.

Consider the options. All of my clients evaluated cloud based hosting services from Amazon, Microsoft and Rackspace. They also interviewed a handful of cloud based IT management firms who promised to move their existing applications (Office, accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price: approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.

You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service contract with a reliable local IT firm.

Now do the numbers: $100 per month x 20 users is $2,000 per month or $24,000 PER YEAR for a cloud based service. How many servers can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit-generating small business owner who, like all the smart business owners I know, looks hard at the return on investment decision before parting with their cash.

For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per hour. That's still way below using a cloud service.
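
Spelling out the owners' arithmetic in a short sketch: the cloud quote, user count and hourly rate are taken from the article, while the server price and the reading of "half a day" as four hours are assumptions added here purely for illustration.

# Figures from the article: $100/user/month hosted, 20 users, IT contractor
# at $150/hour for roughly half a day a month (taken here as 4 hours).
# The server purchase price and its lifespan are placeholder assumptions.
USERS = 20
CLOUD_PER_USER_MONTH = 100
IT_RATE_PER_HOUR = 150
IT_HOURS_PER_MONTH = 4
SERVER_PRICE = 8_000            # hypothetical one-time cost
SERVER_LIFE_YEARS = 5           # amortized over an assumed five-year life

cloud_per_year = USERS * CLOUD_PER_USER_MONTH * 12
onprem_per_year = (IT_RATE_PER_HOUR * IT_HOURS_PER_MONTH * 12
                   + SERVER_PRICE / SERVER_LIFE_YEARS)

print(f"cloud service: ${cloud_per_year:,.0f} per year")    # $24,000
print(f"own server:    ${onprem_per_year:,.0f} per year")   # $8,800 under these assumptions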

No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses who don't want to change their existing applications, is still just too expensive.

Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks set up and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys, they subscribe to online backup services, share files on DropBox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more productive. These applications didn't exist a few years ago and their growth and benefits cannot be denied.

Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As more business applications become more available for the cloud, the more likely the trend will continue."

[Jan 12, 2017] Digitopoly: Congestion on the Last Mile

Notable quotes:
"... By Shane Greenstein On Jan 11, 2017 · Add Comment · In Broadband , communication , Esssay , Net Neutrality ..."
"... The bottom line: evenings require far greater capacity than other times of the day. If capacity is not adequate, it can manifest as a bottleneck at many different points in a network-in its backbone, in its interconnection points, or in its last mile nodes. ..."
"... The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher tiers bring more bandwidth to a household. All else equal, households with higher tiers experience less congestion at peak moments. ..."
"... such firms (typically) find clever ways to pile on fees, and know how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes. Even when users like the quality, the aggressive pricing practices tend to be quite irritating. ..."
"... Some observers have alleged that the biggest ISPs have created congestion issues at interconnection points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount of skepticism is warranted for any broad charge that lacks specifics. ..."
"... Congestion is inevitable in a network with interlocking interests. When one part of the network has congestion, the rest of it catches a cold. ..."
"... More to the point, growth in demand for data should continue to stress network capacity into the foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some amount of congestion is inevitable. So, too, is a certain amount of irritation. ..."
Jan 12, 2017 | www.digitopoly.org
Congestion on the Last Mile
By Shane Greenstein, Jan 11, 2017

It has long been recognized that networked services contain weak-link vulnerabilities. That is, the performance of any frontier device depends on the performance of every contributing component and service. This column focuses on one such phenomenon, which goes by the label "congestion." No, this is not a new type of allergy, but, as with bacteria, many users want to avoid it, especially advanced users of frontier network services.

Congestion arises when network capacity does not provide adequate service during heavy use. Congestion slows down data delivery and erodes application performance, especially for time-sensitive apps such as movies, online videos, and interactive gaming.

Concerns about congestion are pervasive. Embarrassing reports about broadband networks with slow speeds highlight the role of congestion. Regulatory disputes about data caps and pricing tiers question whether these programs limit the use of data in a useful way. Investment analysts focus on the frequency of congestion as a measure of a broadband network's quality.

What economic factors produce congestion? Let's examine the root economic causes.

The Basics

Congestion arises when demand for data exceeds supply in a very specific sense.

Start with demand. To make this digestible, let's confine our attention to US households in an urban or suburban area, which produces the majority of data traffic.

No simple generalization can characterize all users and uses. The typical household today uses data for a wide variety of purposes: email, video, passive browsing, music videos, streaming of movies, and e-commerce. Networks also interact with a wide variety of end devices: PCs, tablets, smartphones on local Wi-Fi, streaming to television, home video alarm systems, remote temperature control systems, and plenty more.

It is complicated, but two facts should be foremost in this discussion. First, a high fraction of traffic is video, anywhere from 60 to 80 percent, depending on the estimate. Second, demand peaks at night. Most users want to do more things after dinner, far more than any other time during the day.

Every network operator knows that demand for data will peak (predictably) between approximately 7 p.m. and 11 p.m. Yes, it is predictable. Every day of the week looks like every other, albeit with steady growth over time and with some occasional fluctuations for holidays and weather. The weekends don't look any different, by the way, except that the daytime has a bit more demand than during the week.

The bottom line: evenings require far greater capacity than other times of the day. If capacity is not adequate, it can manifest as a bottleneck at many different points in a network: in its backbone, in its interconnection points, or in its last mile nodes.

This is where engineering and economics can become tricky to explain (and to manage). Consider this metaphor (with apologies to network engineers): Metaphorically speaking, network congestion can resemble a bathtub backed up with water. The water might fail to drain because something is interfering with the mouth of the drain or there is a clog far down the pipes. So, too, congestion in a data network can arise from inadequate capacity close to the household or inadequate capacity somewhere in the infrastructure supporting delivery of data.

Numerous features inside a network can be responsible for congestion, and that shapes which set of households experience congestion most severely. Accordingly, numerous different investments can alleviate the congestion in specific places. A network could require a "splitting of nodes" or a "larger pipe" to support a content delivery network (CDN) or could require "more ports at the point of interconnection" between a particular backbone provider and the network.

As it turns out, despite that complexity, we live in an era in which bottlenecks arise most often in the last mile, which ISPs build and operate. That simplifies the economics: Once an ISP builds and optimizes a network to meet maximum local demand at peak hours, then that same capacity will be able to meet lower demand the rest of the day. Similarly, high capacity can also address lower levels of peak demand on any other day.
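
The "build for the evening peak and the rest of the day takes care of itself" logic can be seen with a toy hourly demand profile. The numbers below are invented for illustration, not measurements.

# Toy last-mile sizing exercise: capacity is set to the evening peak, so
# every other hour of the day is automatically served without congestion.
# The hourly demand profile is invented for illustration.
hourly_demand_gbps = [2, 2, 1, 1, 1, 1, 2, 3, 4, 4, 5, 5,
                      5, 5, 5, 6, 6, 7, 8, 10, 12, 12, 9, 4]   # index = hour of day

capacity_gbps = max(hourly_demand_gbps)          # sized for the 7-11 p.m. peak
for hour, demand in enumerate(hourly_demand_gbps):
    print(f"{hour:02d}:00  demand {demand:>2} Gbps  "
          f"utilization {demand / capacity_gbps:5.1%}  "
          f"congested: {demand > capacity_gbps}")

Shave a little off that peak capacity and the evening hours immediately become the predictable window in which some households see congestion, which is the "less awesome network" case described next.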

Think of the economics this way. An awesome network, with extraordinary capacity optimized to its users, will alleviate congestion at most households on virtually every day of the week, except the most extraordinary. Accordingly, as the network becomes less than awesome with less capacity, it will generate a number of (predictable) days of peak demand with severe congestion throughout the entire peak time period at more households. The logic carries through: the less awesome the network, the greater the number of households who experience those moments of severe congestion, and the greater the frequency.

That provides a way to translate many network engineering benchmarks, such as the percentage of packet loss. More packet loss correlates with more congestion, and that corresponds with a larger number of moments when some household experiences poor service.

Tradeoffs and Externalities

Not all market participants react to congestion in the same way. Let's first focus on the gazillion Web firms that supply the content. They watch this situation with a wary eye, and it's no wonder. Many third-party services, such as those streaming video, deliver a higher-quality experience to users whose network suffers less congestion.

Many content providers invest to alleviate congestion. Some invest in compression software and superior webpage design, which loads in ways that speeds up the user experience. Some buy CDN services to speed delivery of their data. Some of the largest content firms, such as YouTube, Google, Netflix, and Facebook, build their own CDN services to improve delivery.
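
A quick way to see why compression is worth the content providers' investment (a standalone illustration, not code from any of the firms named; the page body is synthetic filler):

# The same page body shipped gzip-compressed crosses a congested link with
# far fewer bytes. The payload is synthetic, repetitive filler text.
import gzip

page = (b"<html><body>"
        + b"<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>" * 2000
        + b"</body></html>")
compressed = gzip.compress(page)

print(f"uncompressed: {len(page):,} bytes")
print(f"gzip:         {len(compressed):,} bytes "
      f"({len(compressed) / len(page):.1%} of original)")

Real pages compress far less dramatically than repeated filler, but the direction is the same: fewer bytes per request means less exposure to evening congestion.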

Next, focus on ISPs. They react with various investment and pricing strategies. At one extreme, some ISPs have chosen to save money by investing conservatively, and they suffer the complaints of users. At the other extreme, some ISPs build a premium network, then charge premium prices for the best services.

There are two good reasons for that variety. First, ISPs differ in their rates of capital investment. Partly this is due to investment costs, which vary greatly with density, topography, and local government relations. Rates of investment tend to be inherited from long histories, sometimes as a product of decisions made many years ago, which accumulated over time. These commitments can change, but generally don't, because investors watch capital commitments and react strongly to any departure from history.

The second reason is more subtle. ISPs take different approaches to raising revenue per household, and this results in (effectively) different relationships with banks and stockholders, and, de facto, different budgets for investment. Where does the difference in revenue come from? For one, competitive conditions and market power differ across neighborhoods. In addition, ISPs use different pricing strategies, taking substantially different approaches to discounts, tiered pricing structures, data cap policies, bundled contract offerings, and nuisance fees.

The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher tiers bring more bandwidth to a household. All else equal, households with higher tiers experience less congestion at peak moments.

Investors like tiers because they don't obligate ISPs to offer unlimited service and, in the long run, raise revenue without additional costs. Users have a more mixed reaction. Light users like the lower prices of lower tiers, and appreciate saving money for doing little other than email and static browsing.

In contrast, heavy users perceive that they pay extra to receive the bandwidth that the ISP used to supply as a default.

ISPs cannot win for losing. The archetypical conservative ISP invests adequately to relieve congestion some of the time, but not all of the time. Its management then must face the occasional phone calls of its users, which they stymie with phone trees that make service calls last 45 minutes. Even if users like the low prices, they find the service and reliability quite irritating.

The archetypical aggressive ISP, in contrast, achieves a high-quality network, which relieves severe congestion much of the time. Yet, such firms (typically) find clever ways to pile on fees, and know how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes. Even when users like the quality, the aggressive pricing practices tend to be quite irritating.

One last note: It is a complicated situation where ISPs interconnect with content providers. Multiple parties must invest, and the situations involve many supplier interests and strategic contingencies.

Some observers have alleged that the biggest ISPs have created congestion issues at interconnection points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount of skepticism is warranted for any broad charge that lacks specifics.

Somebody ought to do a sober and detailed investigation to confront those theories with evidence. (I am just saying.)

What does basic economics tell us about congestion? Congestion is inevitable in a network with interlocking interests. When one part of the network has congestion, the rest of it catches a cold.

More to the point, growth in demand for data should continue to stress network capacity into the foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some amount of congestion is inevitable. So, too, is a certain amount of irritation.

Copyright held by IEEE.


[Nov 09, 2015] Thoughts on the Amazon outage

Notable quotes:
"... The 'Cloud' isn't magic, the 'Cloud' isn't fail-proof, the 'Cloud' requires hardware, software, networking, security, support and execution – just like anything else. ..."
"... Putting all of your eggs in one cloud, so to speak, no matter how much redundancy they say they have seems to be short-sighted in my opinion. ..."
"... you need to assume that all vendors will eventually have an issue like this that affects your overall uptime, brand and churn rate. ..."
"... Amazon's downtime is stratospherically high, and their prices are spectacularly inflated. Their ping times are terrible and they offer little that anyone else doesn't offer. Anyone holding them up as a good solution without an explanation has no idea what they're talking about. ..."
"... Nobody who has even a rudimentary best-practice hosting setup has been affected by the Amazon outage in any way other than a speed hit as their resources shift to a secondary center. ..."
"... Stop following the new-media goons around. They don't know what they're doing. There's a reason they're down twice a month and making excuses. ..."
"... Personally, I do not use a server for "mission critical" applications that I cannot physically kick. Failing that, a knowledgeable SysAdmin that I can kick. ..."
nickgeoghegan.net
Disaster Recovery needs to be a primary objective when planning and implementing any IT project, outsourced or not. The 'Cloud' isn't magic, the 'Cloud' isn't fail-proof, the 'Cloud' requires hardware, software, networking, security, support and execution – just like anything else.

All the fancy marketing speak, recommendations and free trials can't replace the need to do obsessive due diligence before trusting any provider, no matter how big and awesome they may seem or what their marketing department promises.

Prepare for the worst, period.

Putting all of your eggs in one cloud, so to speak, no matter how much redundancy they say they have seems to be short-sighted in my opinion. If you are utilizing an MSP, HSP, CSP, IaaS, SaaS, PaaS, et al. to attract, increase or fulfill a large percentage of your revenue, or all of your revenue, as many companies are doing nowadays, then you need to assume that all vendors will eventually have an issue like this that affects your overall uptime, brand and churn rate. A blip here and there is tolerable.

Amazon's downtime is stratospherically high, and their prices are spectacularly inflated. Their ping times are terrible and they offer little that anyone else doesn't offer. Anyone holding them up as a good solution without an explanation has no idea what they're talking about.

The same hosting platform, as always, is preferred: dedicated boxes at geographically disparate and redundant locations, managed by different companies. That way when host 1 shits the bed, hosts 2 and 3 keep churning.
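
A minimal sketch of the watchdog side of that setup, assuming nothing beyond the Python standard library; the hostnames are placeholders, and in practice the reaction to a failed check would be a DNS change or an alert rather than a print statement.

# Check each geographically separate host at a different provider and report
# which ones are still answering. Hostnames below are placeholders.
import socket

LOCATIONS = [
    ("host1.provider-a.example", 443),
    ("host2.provider-b.example", 443),
    ("host3.provider-c.example", 443),
]

def is_up(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

alive = [(host, port) for host, port in LOCATIONS if is_up(host, port)]
print(f"{len(alive)} of {len(LOCATIONS)} locations answering")
if not alive:
    print("all locations down, page a human")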

Nobody who has even a rudimentary best-practice hosting setup has been affected by the Amazon outage in any way other than a speed hit as their resources shift to a secondary center.

Stop following the new-media goons around. They don't know what they're doing. There's a reason they're down twice a month and making excuses.

Personally, I do not use a server for "mission critical" applications that I cannot physically kick. Failing that, a knowledgeable SysAdmin that I can kick.

