06/24/2008 | Network World. Virtualization can cause as many problems as it solves if left unmanaged, according to Gartner.
IT professionals may initially be awestruck by the promises of virtualization, but Gartner analysts warn that awe could turn into upset when organizations start to suffer from seven nasty side effects.
David Coyle, research vice president at Gartner, detailed the seven side effects at the research firm's Infrastructure, Operations and Management Summit, which drew nearly 900 attendees. While virtualization promises to solve issues such as underutilization, high hardware costs and poor system availability, the benefits come only when the technology is applied with proper care and consistently monitored for change, Coyle explained.
Here are the reasons Gartner says virtualization is no IT cure-all:
1. Magnified failures. In the physical world, a server hardware failure typically would mean one server failed and backup servers would step in to prevent downtime. In the virtual world, depending on the number of virtual machines residing on a physical box, a hardware failure could impact multiple virtual servers and the applications they host.
"Failures will have a much larger impact, affecting multiple operating systems and multiple applications, and those little tiny fires will turn into big fires fast," Coyle said.
2. Degraded performance. Companies looking to ensure top performance of critical applications often dedicate server, network and storage resources for those applications, segmenting them from other traffic to ensure they get the resources they need. With virtualization, sharing resources that can be automatically allocated on demand is the goal in a dynamic environment. At any given time, performance of an application could degrade, perhaps not to a failure, but slower than desired.
3. Obsolete skills. IT might not realize that the skill sets it has in-house won't apply to a large virtualized production environment until that environment is live. The skills needed to manage virtual environments should span all levels of support, including service desk operators who may be fielding calls regarding virtual PCs. Companies will feel a bit of a talent shortage when moving toward more virtualized systems, and Coyle recommends starting the training now.
"Virtualized environments require enhanced skill sets, and virtual training across many disciplines," he said.
4. Complex root cause analysis. Virtual machines move -- that is part of their appeal. But as Coyle pointed out, it is also a potential issue when managing problems. In the past, server problems could be confined to one box; now a problem can move with the virtual machine and lull IT staff into a false sense of security.
"Is the problem fixed or did you just lose it? You can't tell in a virtual environment," Coyle said. "Are you just transferring the problem around from virtual server to virtual server?"
5. No standardization. Tools and processes used to address the physical environment can't be directly applied to the virtual world, so many IT shops will have to think about standardizing how they address issues in the virtual environment.
"Mature tools and processes must be revamped," Coyle said.
6. Virtual machine sprawl. The most documented side effect to date, virtual server sprawl results from the combination of ease of deployment and lack of life-cycle management of virtual machines. The issue could cause consolidation efforts to go awry when more virtual machines crop up than there are server administrators to manage them.
"The virtualized environment is in constant flux," he said.
7. May be habit forming. Once IT organizations start to use virtualization, they can't stop themselves, Coyle said. He offered tips to help curb the damage done by giving in to a virtual addiction.
"Start small. Map dependencies. Create strong change processes. Update runbooks. Invest in capacity management tools. And test, test, test," he said.
ParaFan writes "In a fascinating story on KernelTrap, Theo de Raadt asserts that while virtualization can increase hardware utilization, it does not in any way improve security. In fact, he contends the exact opposite is true:
'You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.'
De Raadt argues that the lack of support for process isolation on x86 hardware, combined with numerous bugs in the architecture, is a formula for virtualization decreasing overall security, not increasing it."
August 02, 2007 | Techworld
Open source virtualization developer XenSource has just inked a deal with Symantec to collaborate on embedding Veritas Storage Foundation into XenEnterprise, and delivering HA/DR and backup technology to XenSource's customers. In the wake of that deal, founder and CTO Simon Crosby was in London recently to explain the background to the deal. He also delivers his trenchant thoughts on the future of the virtualization industry -- and launches a serious critique of VMware and even of business partner Microsoft.
Q: How do you see the future of the virtualization market? A: The world has created a new Microsoft -- there's a monster embedded in our industry. So the market is starting to crystallize, partly as a consequence of the way that VMware is building its company. They just want to sell more and more, and it's starting to step on people's toes.
Q: Is VMware really that horrible? A: Unlike VMware, Microsoft doesn't compete with its channel but leaves room for an ecosystem. It's a superb platform player. Microsoft is very conscious of its scale and leaves pockets of $100m markets around for its partners. Our relationship with Microsoft is strong, will remain strong, and strengthens every day. Microsoft has been a very supportive partner.
The chink in VMware's armor is the weakness of its ecosystem -- all its partners are under threat. That said, I wouldn't fault VMware entirely. VMware has grown very fast -- they had to do that so I can't fault them for it, but no-one's making money out of VMware. There's a general sense of unease.
Q: Will virtualization technology be absorbed into the OS? A: There's plenty of scope for development. Microsoft's Viridian feature set has been slashed because the features in the kernel of Server 2008 were fixed and there was otherwise an overlap between it and Viridian. And Red Hat and Novell haven't done much with Xen yet. None of the virtualization platforms are anything but a way of virtualizing themselves.
We have managed to benefit from relationships on both sides. Open source is a very clearly articulated argument -- it's about aligning a community around a common codebase. Some of the open source software (OSS) vendors compete with each other, not with the bigger guys. OSS generates pull-through because the customers get a richer set of services -- it's a longer-term play. We believe that the virtualization engine is a standard, commoditized product that has to be open. It must address a range of CPUs, and have a big hardware footprint.
It's also important not to make it the whole product so others get an incentive to take it to market. We don't do an ESX [VMware's flagship product] -- that's a car not an engine -- because an engine is more flexible, you can use it anywhere and it gives space for others to develop, and they have financial incentives to do so.
Q: Why is Microsoft not perceived as the big Satan now? A: The consent decree has changed things -- there are 1,400 lawyers at Microsoft. In every conversation with them we find they're absolutely egalitarian about access to APIs. They have huge market control but they realize they have to embrace and manage open source. That means they have to interoperate and work with it, because they know they can't eliminate it -- the world's changed. Also they're huge so their ability to innovate gets clogged up, which leaves tons of space for others to innovate -- they've learned to cooperate with others in markets they can't get to.
Also, I think in terms of the scale of everything Microsoft does, virtualization is only a minor project in a monster organization. Virtualization has become the major shaping force in the industry -- and they [Microsoft] said that they thought that more VMs meant more revenue but they're changing that as customers need to know that it's OK to start Windows in a VM.
Q: Will this change? A: I don't know where they're going with this -- it could be that things are taking longer. The policy is rational but they haven't communicated that to the market yet. It's a huge opportunity for someone to make a product to manage licensing -- using technology developed for DRM and licensing so that you know how long an OS has run, etc. It would need to be an independent, verifiable source for legal licensing.
Q: Will Xen continue to use the same technology in future -- in other words, para-virtualization? A: Para-virtualization is an awful name: if someone asks what would you rather have, full virtualization or para-virtualization, what's your answer? The aim was to encourage OS vendors to make the OS ready for virtualization -- but 95 percent of applications and OSes are legacy, unvirtualized.
Para-virtualization is relevant in another context -- we use para-virtualized I/O, timers and so on, by inserting drivers into Windows to get a fast stack working. From a product perspective, it means the guest automatically installs the right software and it just works. We hook into the HAL and get the best performance.
But most of the OSes aren't para-virtualized -- there's only RHEL 5 and SLES 10. The important thing is that in future every OS will be ready to run on a hypervisor. [Intel's] VT gives us everything else.
Q: How do you see virtualization evolving over the next two years? A: Hardware vendors will certify the hypervisor and it's up to the customer to do everything else. Customers want to virtualize everything else because the savings are so huge -- confidence in virtualization is high, but it's too complex for the average guy.
On the client, virtualization technology has to be invisible and work using [management] technology such as Intel's vPro. There also has to be a viable ecosystem or it's a niche product.
The world will break into two camps: VMware, where you add more features and sell more software, or open source. We're just a great component -- we do a fantastic job of server virtualization working with best of breed partners -- we plug into storage virtualization and it all works.
We have agreements with people such as Stratus and Marathon -- there's lots we've not announced yet. Virtualization will be another category of IT admin -- you'll find virtualization specialists much as you have database specialists etc now.
Q: What about skill sets? A: Lack of skill sets is a major barrier to take-up. We have over 300 certified partners, over 500 certified trained partner engineers worldwide who train the trainers -- we have a course that partners can resell. For virtualization to be prolific, there has to be a step up in terms of know-how.
Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many usage scenarios, but there are scenarios that require system virtualization with high degrees of both isolation and efficiency. Examples include HPC clusters, the Grid, hosting centers, and PlanetLab. We present an alternative to hypervisors that is better suited to such scenarios. The approach is a synthesis of prior work on resource containers and security containers applied to general-purpose, time-shared operating systems. Examples of such container-based systems include Solaris 10, Virtuozzo for Linux, and Linux-VServer. As a representative instance of container-based systems, this paper describes the design and implementation of Linux-VServer. In addition, it contrasts the architecture of Linux-VServer with current generations of Xen, and shows how Linux-VServer provides comparable support for isolation and superior system efficiency.
Fear Factor
Shahri Moin, IT director at Oscient Pharmaceuticals Corp. in Waltham, Mass., is testing Microsoft Virtual Server 2005. So far, so good, he says, but Moin has reservations about upsetting the status quo. “Putting it in production scares the daylights out of me,” he acknowledges. And since he has just a few dozen servers to manage, the most common motivation for adopting virtualization — consolidation — isn’t a big concern. “It’s not going to allow me in a meaningful way to reduce staff or operating costs,” Moin says.
PerkinElmer first started using virtual machines in 2005 in a project designed to address space, power and cooling problems in its Boston data center by consolidating physical servers. Jeff Brittain, IT director for the city of Hickory, N.C., found another way to achieve a similar goal. He tested Microsoft Virtual Server 2005 but decided to migrate 40 rack-mounted servers to IBM blade servers instead. “That is accomplishing the consolidation we were looking at,” he says. With no pressing reason to go ahead with Virtual Server, he says he’ll revisit the technology “down the road.”
John Nordin, CIO at Insurance Auto Auctions Inc. in Westchester, Ill., has been testing VMware, but he says he doesn’t trust the technology enough to use it on the 130 servers that run the company’s auction business. During an auction, a car is sold every 40 seconds, and many bids come in electronically. With 750,000 auctions a year, Nordin says he can’t afford problems. “This is super-mission-critical stuff. When it isn’t there, we can see it on the bottom line,” he says.
In early testing of VMware, a virtual server inexplicably reverted back to a prior configuration. That corrupted the system, which had to be restored from tape. Nordin says explanations from his vendors, VMware and Microsoft, haven’t been forthcoming. “I got a lot of the classic multivendor finger-pointing,” he says. “Nobody has been able to give me a root cause, and this thing is never seeing production until I know why this happened.” Even if those answers come, Nordin says, he’ll start out slowly by using the technology only on his print servers.
Bob Holstein, CIO at National Public Radio Inc., isn’t worried about the reliability of VMware ESX Server, which he calls “rock-solid and production-worthy.” He’s testing the product now and plans to do a small rollout on production servers later this year. If all goes according to plan, application servers with low utilization rates will be consolidated and then moved to a collocation facility to make room for more servers that handle live digital audio feeds for NPR’s radio programs. Those servers will stay on physical hardware, however. “Those vendors are not going to support a virtual environment, and they’re too mission-critical to take that risk,” Holstein says.
“I have a ‘show me’ attitude about [VMware] right now,” Holstein says, adding that staffers need to get plenty of hands-on experience with the technology before moving to a production environment.
Nordin agrees. “People who haven’t worked in any partitioned environment shouldn’t underestimate what their system engineers need to learn,” he says. “Make sure you put the training dollars in.”
“Virtualization is kind of a leap of faith,” says Mike French, senior network engineer at PerkinElmer. He has spent time explaining the technology to his peers. “It’s a tough thing to break the barrier, but if you build the environment rock-solid with redundancy and safeguards, nobody should ever have a problem.”
Although the technology has been in use for several years, it’s still common to find applications that aren’t supported on virtual machines. “A lot of vendors won’t certify applications as VMware-compatible,” Dattilo says. “In most cases, we assumed the risk, unless it was a mission-critical application.”
A VMware spokesman says that the problem has diminished. Indeed, a few of Dattilo’s vendors, such as Hyperion Solutions Corp. and Business Objects SA, have begun supporting virtual machines since he started working with the technology. As for the others, Dattilo says that most software vendors’ support organizations will still work with his staff on problems, but he hasn’t had any so far.
Nordin says the fact that some software vendors still don’t support applications on virtual servers is evidence that the market still isn’t fully mature.
“Those types of issues have been long resolved in the MVS, VM and Unix space,” he says, adding that server virtualization products “need to get going.” With Red Hat, SUSE and Microsoft embedding hypervisors into the Linux and Windows operating systems, however, application vendors will have little choice but to support it, analysts say.
Software vendors aren’t the only ones who’ve been slow to support their products running in virtual servers. Jon Elsasser, CIO at The Timken Co. in Canton, Ohio, says IT staff resistance to deploying homegrown applications on virtual machines has stopped some projects. “Some internal application-support personnel are a bit leery of it,” Elsasser says, but he expects attitudes to change over time.
Slow Uptake
Even companies that have embraced virtual server technology have limited its penetration into the data center. Those that have adopted virtualization have, on average, only about 20% of their environment virtualized, according to IDC analyst John Humphreys.
After a pilot last year, Timken went on to virtualize 35% of its servers. It now has 125 virtual servers running on six quad-processor physical machines, but Elsasser has no plans to expand beyond that. The 100 Windows and SQL servers supporting a new SAP ERP software implementation are off-limits to virtualization, he says. “At this point, we’re just glad it’s running,” Elsasser says.
At PerkinElmer, Dattilo’s goal is to have 52% of servers virtualized by the time the current project there is completed. That includes application servers with relatively low utilization levels, but others, such as Exchange Server mailbox nodes, are staying put.
Payton plans to roll out virtualization at Case Design/Remodeling this fall. “We’ll stay away from Exchange and SQL Server” and focus on low-utilization applications like domain controllers and file- and print-sharing servers, he says. Most users aren’t ready to consider virtualization for important applications, says IDC analyst Steven Elliot. “The really mission-critical stuff is further down the line,” he says.
Tools for managing virtual machines are still evolving, and their availability is “still a little light,” says Dattilo. However, he adds that tools included with the recently introduced VMware Infrastructure 3 Enterprise Edition have solved some of his problems. Currently, every virtual machine on a physical server needs its own instance of Backup Exec. VMware Consolidated Backup eliminates that problem.
Distributing loads across virtual machines — a time-consuming, manual process today — can be automated using Distributed Resource Scheduler. Payton has been testing with the previous version of VMware and says, “We’re running into CPU utilization problems because the limit is set statically.” He and Dattilo both plan to migrate to the new version.
With VMware so far ahead of the competition, there’s little pressure on pricing today. “Software licensing costs are a little high, and their maintenance is out of this world,” says Dattilo. Application software licensing on virtual machines is also in flux. “There’s a lot of confusion among application vendors as to how those will be licensed,” he says, noting he doesn’t want to pay a per-processor premium for running on a quad-processor machine when an application is running in a virtual machine and using just a fraction of those resources.
Analysts expect the adoption curve to accelerate this year as users become more comfortable with virtualization technology. Says Elliot, “2006 is the year of production for large enterprises.”
But that doesn’t mean everyone should rush ahead. “If you don’t have a clear benefit from virtualization today, you can wait,” says Reynolds. That said, most companies will find at least some immediate benefits, whether from consolidation or reduced server configuration and deployment costs. Elsasser says server procurement time savings made his project worthwhile. “It used to take two weeks to deploy a new server,” he says. “Now we can do it in two days, and in an emergency, we can do it in an hour.”
A compromise strategy is to focus on “high-value production deployments” but put off broader implementations, says Reynolds. The next version of Windows Server will offer technology to compete with VMware in 18 months or so, and Linux distributions with virtualization technology will be here even sooner. With those competitive pressures, “VMware could become significantly less expensive,” Reynolds says.
Re:Yawn
(Score:5, Informative) by giminy (94188) on Friday March 09, @02:16PM (#18292472)
We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual.
(http://www.readingfordummies.com/blog/ | Last Journal: Thursday November 21, @05:10PM)
Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5 and 15 minutes. Programs doing exclusively I/O will be on the sleep queue while the kernel does the I/O work, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 for each additional instance) even if its nice value is all the way up and it is quite interruptible/is really putting zero strain on your system.
Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.
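The arithmetic behind this is simple to sketch. The kernel's 1-minute load average is an exponentially damped moving average of the run-queue length, sampled roughly every 5 seconds; the sketch below is a simplified, floating-point version of that update (Linux actually uses fixed-point math and also counts tasks in uninterruptible D state). It shows why one busy-looping process settles at a load near 1.0 while a process asleep on I/O contributes nothing:

```python
import math

# Damping factor for the 1-minute average: with a sample every
# 5 seconds, the decay per sample is exp(-5/60).
EXP_1MIN = math.exp(-5.0 / 60.0)

def next_load(load, nr_running):
    """One 5-second update of the 1-minute load average.

    nr_running is the number of tasks on the run queue; tasks
    sleeping on ordinary (interruptible) I/O waits count as zero.
    """
    return load * EXP_1MIN + nr_running * (1.0 - EXP_1MIN)

# One CPU-bound process, starting from an idle machine:
load = 0.0
for _ in range(60):          # five minutes of samples
    load = next_load(load, 1)
print(round(load, 2))        # -> 0.99, converging toward 1.0

# A process blocked in an interruptible I/O sleep the whole time
# contributes nr_running == 0, so the load stays at zero even
# though the disk may be saturated.
load = 0.0
for _ in range(60):
    load = next_load(load, 0)
print(load)                  # -> 0.0
```

Because real Linux also counts D-state tasks as "running," the opposite distortion happens too: processes wedged in uninterruptible disk wait inflate the number without putting any demand on the CPU.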
- Re:Yawn
by hackstraw (262471) on Friday March 09, @03:48PM (#18293756)
(http://www.spamgourmet.com/)
Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5 and 15 minutes.
This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).
I've seen boxes with issues that have had a number of processes stuck in the unkillable D (disk) wait state that were just stuck, but they had no real impact on the system besides artificially running the load up.
I've also seen where load was reported as N/NCPUs and N regardless of the number of CPUs.
Like all statistics, any single number in isolation is just a number. Even if the real meaning is the average number of processes in the run queue, that does not tell you much. Thinking of it as the number of processes waiting for some piece of hardware seems more accurate.
- by T-Ranger (10520) <jeffw@chebucto.ns.ca> on Friday March 09, @02:19PM (#18292508)
(http://coherentnetworksolutions.com/) Well, disks may not be a great example. VMware is of course a product of EMC, which makes (drumroll) high-end SAN hardware and software management tools. While I'm not quite saying that there is a clear conflict of interest here, the EMC big picture is clear: "now that you have saved a metric shit load of cash on server hardware, spend some of that on a shiny new SAN system". The nicer way of putting that is that both EMC SANs and VMware do the same thing: consolidation of hardware onto better hardware, abstraction of services provided, finer-grained allocation of services, shared overhead -- and management.
If spikes on one VM are killing the whole physical host, then you are surely doing something wrong. Perhaps you do need that SAN with very fast disk access. Perhaps you need to schedule migration of VMs from one physical host to another when your report server pegs the hardware. Or, if it's an unscheduled spike, you need to have rules that trigger migration if one VM is degrading service to others.
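A migration rule of the kind described here can be sketched in a few lines. Everything below -- the thresholds, the metric names, the pick-the-biggest-consumer policy -- is an illustrative assumption, not how any VMware/EMC product actually decides:

```python
# Illustrative thresholds; real tools (e.g. VMware's DRS) apply
# their own, far more elaborate, placement logic.
HOST_CPU_LIMIT = 0.85      # host considered contended above this
VM_SHARE_LIMIT = 0.50      # one VM using over half the host is suspect

def pick_vm_to_migrate(host_cpu, vm_cpu_shares):
    """Return the VM to migrate off a contended host, or None.

    host_cpu: total host CPU utilization, 0..1.
    vm_cpu_shares: dict mapping VM name -> fraction of host CPU it uses.
    Picks the single largest consumer once the host is contended and
    one VM dominates -- the "unscheduled spike" case described above.
    """
    if host_cpu < HOST_CPU_LIMIT:
        return None                       # no contention, do nothing
    vm, share = max(vm_cpu_shares.items(), key=lambda kv: kv[1])
    return vm if share >= VM_SHARE_LIMIT else None

host = {"report-server": 0.60, "web": 0.20, "mail": 0.12}
print(pick_vm_to_migrate(0.92, host))     # -> report-server
print(pick_vm_to_migrate(0.40, host))     # -> None
```

The point of the sketch is only the shape of the rule: a contention condition on the host, plus a culprit condition on a single VM, before any migration is triggered.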
- Re:Yawn
(Score:2, Interesting)
by dthable (163749) <dhable AT uwm DOT edu> on Friday March 09, @01:11PM (#18291506)
(Last Journal: Monday February 12, @01:59PM)
I could also see their use when upgrading or patching machines. Just take a copy of the virtual image and try to execute the upgrade (after testing, of course). If it all goes to hell, just flip the switch back. Then you can take hours trying to figure out what went wrong instead of being under the gun.
- Re:Yawn
(Score:4, Interesting)
by afidel (530433) on Friday March 09, @01:55PM (#18292162)
Well, our Oracle servers are DL585s with four dual-core CPUs, 32GB of RAM, and dual HBAs backed by a 112-disk SAN, and they regularly max out both HBAs. Trying to run that kind of load on a VM just doesn't make sense with the I/O latency and throughput degradation that I've seen with VMware. I know I'm not the only one, as I have seen this advice from a number of top professionals that I know and respect. If you have a lightly loaded SQL server or some AD controllers handling a small number of users, then they might be good candidates, but any server that is I/O-bound and/or spends a significant percentage of the day busy is probably the lowest priority to virtualize. You can probably get 99+% of the benefit of virtualization from the other 80-90% of your servers that are likely good candidates.
- by ergo98 (9391) <[email protected]> on Friday March 09, @02:08PM (#18292350)
(http://www.yafla.com/dforbes/ | Last Journal: Tuesday September 27, @10:43AM)
I know I'm not the only one as I have seen this advice from a number of top professionals that I know and respect.
Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to show that they "get" virtualization).
Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.
Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).
An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.
Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.
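The "accurate rule" stated above is mechanical enough to write down. A minimal sketch, in which the resource names, the measurements and the 0.7 headroom threshold are all illustrative assumptions rather than anyone's published sizing guidance: flag a workload as a poor consolidation candidate whenever its peak demand for any single resource approaches what the target host can supply.

```python
# Peak utilization of each resource expressed as a fraction of the
# target host's capacity. The 0.7 threshold is an arbitrary
# illustrative headroom figure, not a vendor recommendation.
SATURATION_THRESHOLD = 0.7

def saturated_resources(peaks, threshold=SATURATION_THRESHOLD):
    """Return the resources this workload would saturate on the host.

    peaks maps resource name -> peak demand / host capacity.
    An empty result means no single resource is near saturation,
    i.e. the workload looks like a reasonable candidate.
    """
    return sorted(r for r, frac in peaks.items() if frac >= threshold)

# A hypothetical I/O-bound database: fine on CPU, saturates storage.
oracle = {"cpu": 0.30, "memory": 0.55, "disk_io": 0.95, "net_io": 0.40}
print(saturated_resources(oracle))     # -> ['disk_io']

# A lightly loaded domain controller: nothing near saturation.
dc = {"cpu": 0.05, "memory": 0.20, "disk_io": 0.10, "net_io": 0.05}
print(saturated_resources(dc))         # -> []
```

Note the check says nothing about the label "database server"; it only looks at what the workload actually consumes, which is exactly the distinction the post is making.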
- Re:Yawn
(Score:2)
by Courageous (228506) on Friday March 09, @03:23PM (#18293384)
Well, VMware has issues with I/O latency. One has to watch for that, not try to virtualize everything. But you say "virtualization bad" for "CPU intensive," and I cannot agree with that. SPECint2006 and SPECfp2006, as well as the rate variants, come in within 5% of bare metal on ESX. I've run the tests myself. Old-school "CPU intensive" applications are a non-issue in virtualization today.
It's the network I/O and network latency that will kill you if you don't know what you're doing. VMware has known issues in that area (although it must break through these entirely or 10GbE will never work properly in VMware). One can work around these issues; however, I'd say it's a best practice to plan to "not virtualize everything." I'd target 65% or so of your compute infrastructure in preplanning and base your real decisions on an actual analysis.
C//
- Re:He must be talking about freeware
(Score:5, Informative) by Semireg (712708) on Friday March 09, @12:59PM (#18291308)
I'm certified for both VMware ESX 2.5 and VMware VI3. VMware's best practices are never to use a single path, whether for a NIC or an FC HBA (storage). VMware also has Virtual Switches, which not only allow you to team NICs for load balancing and failover, but also to use port groups (VLANs). You can then view pretty throughput graphs for either physical NICs or virtual adapters. It's crazy amazing(TM).
As for "putting many workloads on a box and uptime," this writer should really take a look at VMware VI3 and Vmotion. Not only can you migrate a running VM without downtime, you can "enter maintenance mode" on a physical host, and using DRS (distributed resource scheduler) it will automatically migrate the VMs to hosts and achieve a load balance between CPU/Memory. It's crazy amazing(TM).
Lastly, just to toot a bit of the virtualization horn... VMware's HA will automatically restart your VMs on other physical hosts in your HA cluster. It's not unusual for a Win2k3 VM to boot in under 20 seconds (VMware's BIOS posts in about 0.5 seconds, compared to an IBM xSeries 3850, which takes 6 minutes). Oh, and there is the whole snapshotting feature, memory and disk, which allows for point-in-time recovery on any host. Yea... downsides indeed.
Virtualization is Sysadmin Utopia. -- cvl, a Virtualization Consultant
- Re:He must be talking about freeware
(Score:2) by div_2n (525075) on Friday March 09, @02:05PM (#18292314)
I'm managing VI3 and we use it for almost everything. We ran into some trouble with one antiquated EDI application that just HAD to have a serial port. That is a long discussion, but for reasons I'm quite sure you could guess, I offloaded it to an independent box. We run our ERP software on it and the vendor has tried (unsuccessfully) several times to blame VMware for issues.
You don't mention it, but consolidated backup just rocks. I have some external Linux based NAS machines that use rsync to keep local copies of both our nightly backups and occasional image backups at both sites.
Thanks to VMWare, it's like I've told management--"Our main facility could burn to the ground and I could have our infrastructure back up and running at our remote site before the remains stop smoldering much less get a check from the insurance company."
- He must. ESX set up properly avoids most pitfalls
(Score:5, Insightful)
by cbreaker (561297) on Friday March 09, @01:38PM (#18291910)
(Last Journal: Tuesday December 12, @07:54PM)
Indeed. If you have a proper ESX configuration -- at least two hosts, a SAN back-end, multiple NICs, supported hardware -- you'll find that almost none of the points are valid.
Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.
I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.
In the end, by going virtual you end up removing so much complexity from your systems that you'll wonder how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitoring, etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX3) and you can migrate VMs to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.
I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.
- Virtualization
(Score:4, Interesting)
by DesertBlade (741219) on Friday March 09, @12:51PM (#18291170)Good story, but I disagree in some areas.
Bandwidth concerns: you can install more than one NIC in the server and dedicate one to each virtual machine.
Downtime: if you need to do maintenance on the host, that may be a slight issue, but I hardly ever have to do anything to the host. Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also have cluster capability with virtualization.
- it is all roses for Disaster Recovery
(Score:2) by QuantumRiff (120817) on Friday March 09, @12:52PM (#18291196) If your servers become toast, for whatever reason, you can get a simple workstation, put a ton of RAM in it, and load up your virtual systems. Of course they will be slower, but they will still be running. We don't need to carry expensive 4-hour service contracts, just next-business-day contracts, saving a ton of money. The nice thing for me with virtual servers is that they are device agnostic, so if I have to recover, worst case, I have only one server to worry about for NIC drivers, RAID settings/drivers, etc. After that, it's just loading up the virtual server files.
- Re:it is all roses for Disaster Recovery
(Score:1) by bigredradio (631970) on Friday March 09, @01:08PM (#18291444)
(http://www.storix.com/ | Last Journal: Sunday August 20, @03:39PM) Sort of... I agree that you can limit hardware needs, but you also have a central point of failure. If the host OS or local storage goes, you have now lost multiple systems instead of one. One issue I have seen is external SCSI support. At least with Xen, you cannot dynamically allocate a PCI SCSI card to each node. This may also hold true for Fibre Channel cards (not sure). That means no offsite tape backups for the individual nodes and no access to SAN storage through the virtual nodes.
- We're about 95% virtualized and never going back!
(Score:1, Interesting) by Anonymous Coward on Friday March 09, @12:56PM (#18291262) The absolute only place it has not been appropriate is locations requiring high amounts of disk I/O. It has been a godsend everywhere else. All of our web servers, application servers, support servers, management servers, blah blah blah - it's all virtual now. Approximately 175 servers are now virtual. The rest are huge SQL Server/Oracle systems.
License controls are fine. All the major players support flexible VM licensing. The only people that bark about change control are those who simply don't understand virtual infrastructure and a good sit-down solved that issue. "Compliance" has not been an issue for us at all. As far as politics are concerned -- if they can't keep up with the future, then they should get out of IT.
FYI: We run VMware ESX on HP hardware (DL585 servers) connected to an EMC Clariion SAN.
- Home Use
(Score:2, Insightful) by 7bit (1031746) on Friday March 09, @12:59PM (#18291316) I find Virtualization to be great for home use.
It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. It's great for any internet activity really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside of the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful for code development within a protected environment.
Did I mention portability? Keep backups of your VM file and run it on any system you want after installing something like the free VMware Server:
http://www.vmware.com/products/server/ [vmware.com]
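The portability point above amounts to copying the VM's files and verifying the copy on whatever machine you restore to. A minimal sketch, with hypothetical file names and temp directories standing in for the real VM and backup locations:

```shell
#!/bin/sh
# Sketch: back up a VM's files with checksums so the copy can be verified
# on the target machine before booting it. All names are hypothetical.
set -eu

VM_DIR=$(mktemp -d)       # stand-in for the directory holding the VM
BACKUP_DIR=$(mktemp -d)   # stand-in for the backup location

# Pretend these are the VM's disk image and config file.
echo "disk image bytes" > "$VM_DIR/desktop.vmdk"
echo "config"           > "$VM_DIR/desktop.vmx"

# Record checksums, then copy everything to the backup location.
( cd "$VM_DIR" && sha256sum desktop.vmdk desktop.vmx > SHA256SUMS )
cp "$VM_DIR"/* "$BACKUP_DIR"/

# On the target machine, verify the copy before running the VM.
( cd "$BACKUP_DIR" && sha256sum -c SHA256SUMS )
```

The same verification step works whether the backup travels by network copy or external drive.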
- Hype Common Sense
(Score:3, Interesting)
by micromuncher (171881) on Friday March 09, @01:14PM (#18291556)The article mentions a point of common sense that I fought tooth 'n nail about and lost in the Big Company I'm at now.
For a year I fought against virtualizing our sandbox servers because of resource contention issues: one machine pretending to be many, with one NIC and one router. We had a web app that pounded a database... pre-virtualization it was zippy. Post-virtualization it was unusable. I explained that even though you can tune virtualized servers, it happens after the fact, and it becomes a big active-management problem to make sure your IT department doesn't load up tons of virtual servers to the point where it affects everyone virtualized. They argued that we didn't have a lot of use (a few users, and not a lot of resource utilization).
My boss eventually gave in. The client went from a zippy, workable app in development to a slow piece of crap because of resource contention, and it's hard to explain that an under-the-hood IT change was the reason for SLOW - and in UAT, SLOW = BUSTED.
That was a huge nail in the coffin for the project. When users can't use the app on demand, for whatever reason, they don't want to hear jack about tuning or saving rack space.
So all you IT managers and people thinking you'll get big bonuses by virtualizing everything... consider this... ONE MACHINE, ONE NETWORK CARD, pretending to be many...
- Author is completely uninformed
(Score:5, Insightful)
by LodCrappo (705968) on Friday March 09, @01:28PM (#18291790)
(http://www.spogbiper.com/)Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."
No, no, no. First of all, in a real enterprise-type solution (something this author seems unfamiliar with), the entire environment is redundant. "The" server? You don't run anything on "the" server; you run it on a server, and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures: you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.
In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.
Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.
The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of a virtualized environment are very often well worth the cost.
- Re:Should I jump?
(Score:2) by 15Bit (940730) on Friday March 09, @03:17PM (#18293318) It will depend on what you do with your 3 shuttles.
I just ditched my dual Opteron (Linux) + Shuttle (Windows) setup and replaced it with a single Core Duo box with Linux virtualized under WinXP. I'm running the free VMware Server software (http://www.vmware.com/products/free_virtualization.html [vmware.com]) and I have to say I'm impressed. The only negatives I've found so far (aside from the obvious ones related to two systems in one computer) are some slowdown in mouse responsiveness in the virtualized Linux and the lack of hardware-accelerated graphics (these might be the same thing, I don't know). You also have to turn off the virtualized OS's access to the DVD-ROM or everything gets confused.
The positives are that it was piss-easy to set up and really "just works". The VMware'd Linux talked to my network card without intervention and happily picked up a unique IP from my DHCPD. NIS/NFS to my fileserver "just worked". I can allocate the VMware OS 1 or 2 cores and vary the amount of RAM it sees. My main use for the Linux V/OS is molecular dynamics simulations, the software running message passing via LAM/MPI and all compiled under the Intel C and Fortran compilers. Again, all of that "just worked".
In terms of performance, MD calcs done on 1 CPU seem to run at close to full speed for one core, but running them dual gives only an 80% scaling improvement. That slowdown is about as expected, given that there's another OS running. Another nice side benefit is that I can run an MD calc on 1 CPU and play games with the other. I don't notice any lag.
So to summarise: if I'd paid money for VMware I'd be seriously impressed, but for something to do exactly what I want for free is truly amazing.
- A nice buffer zone!
(Score:1) by Gazzonyx (982402) on Friday March 09, @03:19PM (#18293356) I've found that virtualization is a nice buffer zone from management decisions! Case in point: yesterday my boss (he's got a degree in comp. sci - 20 years ago...), who's just getting somewhat used to the Linux server that I set up, decided that we should 'put /var in the /data directory tree'; I had folded once when he wanted to put /home in /data, for backup reasons, and made it a symlink from /.
Now when he gets these ideas, before just going and doing it on the production server, I can say "How about I make a VM and we'll see how that goes over", thinking under my breath the words of Keith Moon: "That'll go over like a lead zeppelin". It gives me a technology to leverage where I can show that an idea is a Bad Idea, without having to trash the production server to prove my point.
I've even set up a virtual network (1 Samba PDC and 3 Windows machines) to simulate our network on a small scale for proofs of concept. If they don't believe that something will work, I can show them without having their blessing to mess with our network. If it doesn't work, I roll back to my snapshots and I have a virgin virtual network again.
Does anyone do this? Has it worked out where you can do a proof of concept that otherwise, without virtualization, you would be confined to whiteboard concepts that no one would listen to?
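The /home-into-/data relocation mentioned above boils down to moving the directory and leaving a symlink at the old path. A minimal sketch, using a temp directory as a hypothetical stand-in for the filesystem root (on a live system you would also need to stop anything holding files open under /home):

```shell
#!/bin/sh
# Sketch: relocate a directory under another tree and leave a symlink
# where it used to live. A temp dir stands in for the real / and /data.
set -eu

ROOT=$(mktemp -d)                 # stand-in for /
mkdir -p "$ROOT/home/alice" "$ROOT/data"
echo "hello" > "$ROOT/home/alice/file.txt"

# Move /home under /data, then symlink the old path to the new location.
mv "$ROOT/home" "$ROOT/data/home"
ln -s "data/home" "$ROOT/home"

# The old path still resolves through the symlink.
cat "$ROOT/home/alice/file.txt"
```

A relative symlink target (`data/home` rather than an absolute path) keeps the link valid even if the whole tree is mounted elsewhere.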
- Re:the sad thing is how much we need virtualizatio
(Score:2, Interesting)
by dthable (163749) <dhable AT uwm DOT edu> on Friday March 09, @01:17PM (#18291608)
(Last Journal: Monday February 12, @01:59PM)
And if the software doesn't require a dedicated machine, the IT department wants one. The company I used to work for would buy a new machine for every application component because they didn't want Notes and a homegrown ASP application to conflict with each other. Seemed like a waste of hardware, in my opinion.
- This is FUD
(Score:2)
by fyngyrz (762201) * on Friday March 09, @05:38PM (#18295052)
(http://www.blackbeltsystems.com/ | Last Journal: Saturday January 27, @06:16PM) ...virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.
Examine that quote from the article closely. See anything there that indicates virtualization "doesn't work"? No, nor do I. What they are talking about here has nothing to do with how well virtualization works; what they're complaining about is that a particular tool requires competence to use well in various work environments. Well, no one ever said that virtualization would gift brains to some middle-level manager, or teach anyone how to use an office suite, or imbue morals and ethics into those who would steal; virtualization lets you run an operating system in a sandbox, sometimes under another operating system entirely. And it does that perfectly well, or in other words, it works very well indeed. I call FUD.