In the traditional sense that we will use here, virtualization is the simulation of the hardware upon which other software runs. This simulated hardware environment is called a virtual machine (VM). The classic form of virtualization, known as operating system virtualization, provides the ability to run multiple instances of an OS on the same physical computer under the direction of a special layer of software called the hypervisor. There are several forms of virtualization, distinguished primarily by the hypervisor architecture.
Each such virtual instance (or guest) OS thinks that it is running on real hardware with full access to the address space, but in reality it operates in a separate VM container which maps this address space into a segment of the address space of the physical computer; this operation is called address translation. The guest OS can be unmodified (so-called heavy-weight virtualization) or specifically recompiled for the hypervisor API (para-virtualization). In light-weight virtualization a single OS instance presents itself as multiple personalities (called jails or zones), allowing a high level of isolation of applications from each other at very low overhead.
There is an entirely different type of virtualization, often called application virtualization. The latter provides a virtual instruction set and a virtual implementation of the application programming interface (API) that a running application expects to use, allowing compilers to target this virtual instruction set. Along with considerable synergy, it permits applications developed for one platform to run on another without modifying the application itself. The Java Virtual Machine (JVM) and, in a more limited way, Microsoft .NET are two prominent examples of this type of virtualization. This type acts as an intermediary between the application code, the operating system (OS) API and the instruction set of the computer. We will not discuss it here.
Virtualization was pioneered by IBM in the early 1960s with its groundbreaking VM/CMS. It is still superior to many existing VMs, as it handles virtual memory management for guests (on a hosted OS, the virtual memory management layer should be disabled as it provides nothing but additional overhead). IBM mainframe hardware was also the first virtualization-friendly hardware (IBM and HP virtualization):
Contrary to what many PC VMWARE techies believe, virtualization technology did not start with VMWARE back in 1999. It was pioneered by IBM more than 40 years ago. It all started with the IBM mainframe back in the 1960s, with CP-40, an operating system which was geared for the System/360 Mainframe. In 1967, the first hypervisor was developed and the second version of IBM's hypervisor (CP-67) was developed in 1968, which enabled memory sharing across virtual machines, providing each user his or her own memory space. A hypervisor is a type of software that allows multiple operating systems to share a single hardware host. This version was used for consolidation of physical hardware and to more quickly deploy environments, such as development environments. In the 1970s, IBM continued to improve on their technology, allowing you to run MVS, along with other operating systems, including UNIX on the VM/370. In 1997, some of the same folks who were involved in creating virtualization on the mainframe were transitioned towards creating a hypervisor on IBM's midrange platform.
One critical element that IBM's hypervisor has is the fact that virtualization is part of the system's firmware itself, unlike other hypervisor-based solutions. This is because of the very tight integration between the OS, the hardware, and the hypervisor, which is the systems software that sits between the OS and hardware that provides for the virtualization.
In 2001, after a four-year period of design and development, IBM released its hypervisor for its midrange UNIX systems, allowing for logical partitioning. Advanced Power Virtualization (APV) shipped in 2004, which was IBM's first real virtualization solution and allowed for sharing of resources. It was rebranded in 2008 to PowerVM.
As Intel CPUs became dominant in the enterprise, virtualization technologies invented for other CPUs were gradually reinvented for Intel. In 1998 VMware built VMware Workstation, which ran on a regular Intel CPU despite the fact that at the time Intel CPUs did not directly support virtualization extensions. The first mass deployment of virtualization on the Intel platform was not on servers but for "legacy desktop applications" for Windows 98, when organizations started moving to Windows 2000 and then Windows XP.
There were three major virtualization solutions for Intel-based computers: Xen, VMware and Microsoft Virtual PC.
For system administrators, programmers and consultants the VMware desktop provided an opportunity to run Linux on the same PC as Windows. This was very convenient for various demos, and such a configuration became the holy grail for all types of consultants, who became major promoters of VMware and ensured its quick penetration at the enterprise level. It also became a common solution for training, as it permits giving each student a set of virtual desktops and servers that would be too costly to provide as physical hardware.
The other important area is experimentation: you can create a set of virtual machines in no time, without the usual bureaucratic overhead typical of large organizations.
A more problematic area is the use of virtualization for server consolidation. VMware found its niche here, but only by consolidating "fake" servers -- servers that run applications with almost no users and no load. For servers with real load, blades provide a much more solid alternative with similar capabilities and cost. Still, VMware has had some level of success in the server space, but it is difficult to say how much of it is due to the advantages of virtualization and how much is due to the technical incompetence of corporate IT, which simply follows the current fashion.
I was actually surprised that VMware got so much traction with the exorbitant, extortion-level prices it charges -- pricing designed almost perfectly to channel all the savings to VMware itself instead of to the organization deploying the VMware hypervisor. For me blades were always a simpler and more promising server consolidation solution with a better price/performance ratio. So when companies look at virtualization as a way to cut costs they might be looking at the wrong solution. First of all, you cannot defy gravity with virtualization: you still have a single channel of access to RAM, and with several OSes running concurrently the bridge between RAM and CPU becomes a bottleneck. Only if the application is mostly idle (for example, it hosts a low-traffic website) does it make sense to consolidate it. So the idea works when you consolidate small and not very loaded servers into fewer, larger, more heavily loaded physical servers. It also works perfectly well for development and QA servers, which by definition are mainly circulating air. For everything else your mileage may vary. For example, why on earth would I put Oracle on a virtual machine? To benefit from the ability to migrate to another server? That is a fake benefit, as it almost never happens in real life without an Oracle version upgrade. To provide a more uniform environment for all my Oracle installations? Is that worth the trouble with disk I/O that I will get?
So it is very important to avoid excessive zeal in implementing virtualization in an enterprise environment, and to calculate the five-year total cost of ownership difference between the various options before jumping into the water. If overdone, server consolidation via virtualization can bring a whole new set of complications. And, other things being equal, one should consider cheaper alternatives to VMware like Xen, especially for Linux servers, because the truth about VMware is that the lion's share of the savings goes to VMware, not to the company that implements it.
In short, there is no free lunch. If used in moderation, and with Xen instead of VMware to avoid excessive licensing costs, this new techno-fashion can help to get rid of "low load" servers as well as cut maintenance costs by replacing some servers running specific applications with "virtual appliances". Provisioning also becomes really fast, which is extremely important in research and lab environments: one can get a server to experiment with in 5-10 minutes instead of 5-10 days :-). This is a win-win situation. It is quite beneficial for the environment and for the enterprise, as it opens some additional, unforeseen avenues of savings.
But there is another caveat. The price of servers grows very fast beyond the midrange and, say, an Intel server that costs $35K will never be able to replace seven reasonably loaded low-end servers costing $5K each. And using separate servers you do not need to worry whether they are too loaded or whether peak loads for different servers happen at the same time. The main competition here is blade servers. For example, the cost of a VMware server license is approximately $5K with an annual maintenance cost of $500. If we can run just four virtual instances under it and the server costs, say, $20K, while a small 1U server capable of running one instance costs $5K (no savings on hardware due to higher margins on medium servers), you lose approximately $1K a year per instance in comparison with using physical servers or blades. Advantages due to better maintainability are marginal (if we assume the 1U servers are identical and use kickstart and, say, Acronis images for OS restore), stability is lower, and behavior under simultaneous peaks is highly problematic. In other words, virtualization is far from being a free lunch.
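To make it easy to redo this arithmetic with your own quotes, here is a minimal sketch in Python of the break-even question: how many guests must share one host before the cost per instance drops below that of a dedicated 1U server? The dollar figures are the rough assumptions from the paragraph above, hardware and OS maintenance (roughly equal on both sides) are left out, and nothing here reflects actual vendor price lists.

    # Break-even sketch: cost per instance of a consolidated host vs. plain 1U servers.
    # All figures are illustrative assumptions taken from the text, in USD.
    YEARS = 5
    HOST_COST = 20_000      # midrange two-socket host
    VM_LICENSE = 5_000      # hypervisor license for that host
    VM_MAINT = 500          # hypervisor maintenance per year
    ONE_U_COST = 5_000      # small 1U server running a single application

    one_u_per_year = ONE_U_COST / YEARS
    host_tco = HOST_COST + VM_LICENSE + YEARS * VM_MAINT   # five-year cost of the host

    for guests in range(2, 9):
        per_guest_per_year = host_tco / YEARS / guests
        verdict = "cheaper" if per_guest_per_year < one_u_per_year else "more expensive"
        print(f"{guests} guests: ${per_guest_per_year:,.0f} per guest/year "
              f"vs ${one_u_per_year:,.0f} for a dedicated 1U server -> {verdict}")

With these particular inputs the host becomes cheaper per instance only beyond five guests, which matches the "more than four guests" rule of thumb used later on this page.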
At the same time, heavy reliance on virtualized servers for production applications, as well as the task of managing and provisioning them, are fairly new areas of the "brave new" virtualized IT world, and they increase the importance of monitoring applications and enterprise schedulers. In large enterprises that means additional value provided by already installed HP Operations Manager, Tivoli and other ESM applications. Virtualization has also changed configuration management, capacity management, provisioning, patch management, back-ups, and software licensing. It is inherently favorable toward open source software and OS solutions, where you do not pay for each core or physical CPU in the server.
We will call "dual host virtualization" the scenario in which one physical server hosts just two guest OSes: exactly two, no more, no less.
If applied to all servers in the datacenter, this approach guarantees a 50% reduction in the number of physical servers. Saving on hardware, which motivates many virtualization efforts, is a questionable idea, as low-end servers represent the most competitive segment of the server market, with profit margins squeezed to the minimum; margins are generally much larger for mid-range and high-end servers. But in dual host virtualization some savings on hardware can still be squeezed out. For example, a fully configured Intel server with two four-core CPUs and, say, 32GB of RAM costs less than two servers each with one four-core CPU and 16GB of RAM.
A larger number of applications on a single server is possible, but more tricky: such a virtual server needs careful planning, as it faces memory-bandwidth and CPU-power bottlenecks, which are especially painful if the "rush hours" are the same for both applications. If the applications have some synergy and peak at different times of the day, then one 2U server with two quad-core CPUs and 32GB of memory, split equally between two partitions, can be even more efficient than two separate servers each with one quad-core CPU and 16GB of memory, assuming the speed of memory and CPU are equal.
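A simple way to test the "peaks at different times of day" condition before consolidating two applications is to overlay their daily load profiles and look at the combined worst hour. The sketch below does that in Python for two hypothetical hourly profiles expressed as busy cores; the profiles, the 8-core host and the 80% headroom limit are all made-up assumptions, and memory bandwidth deserves the same scrutiny as CPU.

    # Overlay two hypothetical daily load profiles (busy cores per hour, 0:00-23:00)
    # and check whether the combined peak still fits on one consolidated host.
    app_a = [0.5, 0.4, 0.3, 0.3, 0.3, 0.5, 1.0, 2.5, 3.2, 3.5, 3.3, 2.8,
             2.5, 2.7, 3.0, 3.2, 3.0, 2.2, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5]   # daytime web load
    app_b = [2.8, 3.0, 3.2, 3.1, 2.9, 2.4, 1.8, 1.2, 0.8, 0.6, 0.6, 0.7,
             0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.6, 2.0, 2.4, 2.6, 2.8, 2.9]   # nightly batch load

    HOST_CORES = 8      # e.g. the 2U box with two quad-core CPUs mentioned above
    HEADROOM = 0.8      # plan for at most ~80% utilization at the worst hour

    combined = [a + b for a, b in zip(app_a, app_b)]
    peak = max(combined)
    print(f"combined peak: {peak:.1f} of {HOST_CORES} cores "
          f"({peak / HOST_CORES:.0%} of the host)")
    print("looks like a reasonable pair to consolidate" if peak <= HEADROOM * HOST_CORES
          else "rush hours overlap -- keep these workloads on separate hardware")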
Dual host virtualization works well on all types of enterprise servers: Intel servers, IBM Power servers, HP servers and Sun/Oracle servers (both Intel and UltraSparc based).
If we are talking about the Intel platform, Xen or the Microsoft hypervisor are probably the only realistic options for dual host virtualization; VMware is way too expensive. Using Xen and Linux you can squeeze two virtual servers previously running on individual 1U servers into a single 2U server and get a 30-50% reduction in the cost of both hardware and software maintenance. The latter is approximately $1K per year per server (virtual instances are free under Suse and Red Hat). There are also some marginal savings in electricity and air-conditioning. Low-end servers have small and usually less efficient power supplies, and using one 2U server instead of two 1U servers leads to roughly 30-40% savings in consumed energy (higher savings are possible if the 2U server uses a single CPU with, say, four or eight cores).
If you go beyond the dual host virtualization outlined above, savings on hardware are more difficult to achieve, as low-end Intel servers represent the most competitive segment of the Intel server market, with profit margins squeezed to the minimum; margins are generally much larger for mid-range and high-end Intel servers. The same is true for other architectures as well. In other words, vendor margins on midrange and high-end servers work against virtualization. This is especially true of HP, which overcharges customers for midrange servers by a tremendous margin while providing mediocre servers that are less suitable for running Linux than servers from Dell.
Virtualization is the simulation of the software and/or hardware upon which guest operating systems run. This simulated environment is called a virtual machine (VM). Each instance of an OS and its applications runs in a separate VM called a guest operating system. These VMs are managed by the hypervisor. There are several forms of virtualization, distinguished by the architecture of the hypervisor.
A paravirtualized kernel can provide faster access to resources such as hard drives and networks. Different types of paravirtualization are offered by different hypervisors. IBM was the pioneer of this type of virtualization, creating VM/370 in 1972. POWER servers from IBM running AIX also implement paravirtualization. VM/370 actually did more than a typical paravirtualization hypervisor on Intel (like Xen) does: in the VM/CMS environment guest OSes delegate all virtual memory management to the hypervisor level. The latter is very important, as it provides tremendous savings in memory (common segments of different guests can be loaded only once) and better efficiency. In the case of CMS, multitasking is also delegated to the hypervisor level. But generally it is up to the designer of the paravirtualization scheme to decide whether it handles memory allocation or not.
The performance hit can be in the single digits, which makes running two guests on a single server ("dual guest virtualization", see above) very attractive: you cut the number of servers in half while getting almost native performance from each guest OS. For that reason the "dual guest virtualization" mentioned above is widely used for AIX servers and is one of the major attractions of AIX.
Full virtualization has some negative security implications. Virtualization adds layers of technology, which can increase the security management burden by necessitating additional security controls. Also, combining many systems onto a single physical computer can cause a larger impact if a security compromise occurs, especially grave if it occurs on VM level (access to VM console). Further, some virtualization systems make it easy to share information between the systems; this convenience can turn out to be an attack vector if it is not carefully controlled. In some cases, virtualized environments are quite dynamic, which makes creating and maintaining the necessary security boundaries more complex.
There are two types of hypervisors: bare metal (Type 1) hypervisors, which run directly on the hardware, and hosted (Type 2) hypervisors, which run on top of a conventional host OS.
In both bare metal and hosted virtualization, each guest OS appears to have its own hardware, like a regular computer: CPU, memory, storage and network interfaces.
But in reality it is difficult to virtualize storage and networking, so some additional overhead is unavoidable. Some hypervisors also provide direct memory access (DMA) to high-speed storage controllers and Ethernet controllers, if such features are supported by the CPU on which the hypervisor is running. DMA access from guest OSes can significantly increase the speed of disk and network access, although this type of acceleration prevents some useful virtualization features such as snapshots and moving guest OSes while they are running.
Hypervisors usually provide networking capabilities to the individual guest OSes, enabling them to communicate with one another while simultaneously limiting access to the external physical network. The network interfaces that the guest OSes see may be a virtual Ethernet controller, a physical Ethernet controller, or both. Typical hypervisors offer three primary forms of network access: network bridging, network address translation (NAT), and host-only (internal) networking.
When a number of guest OSes exist on a single host, the hypervisor can provide a virtual network for them. The hypervisor may implement virtual switches, hubs, and other network devices. Using the hypervisor's networking for communications between guests on a single host has the advantage of greatly increased speed, because the packets never hit physical networking devices. Internal host-only networking can be implemented in many ways by the hypervisor. In some systems the internal network looks like a virtual switch; others use virtual LAN (VLAN) standards to allow better control of how the guest systems are connected. Most hypervisors also provide internal network address and port translation (NAPT) that acts like a virtual router with NAT.
Networks that are internal to a hypervisor's networking structure can pose an operational disadvantage, however. Many networks rely on tools that watch traffic as it flows across routers and switches; these tools cannot view traffic as it moves within a hypervisor's internal network. Some hypervisors allow network monitoring, but this capability is generally not as robust as the tools that many organizations have come to expect for monitoring physical networks. Some hypervisors provide APIs that allow a privileged VM full visibility into the network traffic; unfortunately, these APIs may also provide additional ways for attackers to monitor network communications. Another concern with network monitoring through a hypervisor is the potential for performance degradation or denial of service conditions on the hypervisor because of high volumes of traffic.
Hypervisor systems have many ways of simulating disk storage for guest OSs. All hypervisors, at a minimum, provide virtual hard drives mapped to files, while some of them also have more advanced virtual storage options. In addition, most hypervisors can use advanced storage interfaces on the host system, such as network-attached storage (NAS) and storage area networks (SAN) to present different storage options to the guest OSs.
All hypervisors can present the guest OSes with virtual hard drives through the use of disk images. A disk image is a file on the host that looks to the guest OS like an entire disk drive. Whatever the guest OS writes onto the virtual hard drive goes into the disk image. With hosted virtualization, the disk image appears in the host OS as a file or a folder, and it can be handled like other files and folders. As the speed of read access is important, this is a natural area of application for SSD disks.
Most virtualization systems also allow a guest OS to access physical hard drives as if they were connected to the guest OS directly. This is different from using disk images: a disk image is a virtual representation of a real drive. The main advantage of using physical hard drives is that, unless SSDs are used, accessing them is faster than accessing disk images.
Typically, virtual systems in an enterprise environment use SAN storage (that is probably why EMC bought VMware). This is an active area of development in the virtualization market, as shared storage permits migration of a guest OS from one physical server (more loaded or less powerful) to another (less loaded and/or more powerful) if one of the virtual images experiences a bottleneck.
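To make the migration idea concrete, here is a naive sketch in Python of the kind of rebalancing decision that shared SAN storage enables: find an overloaded host, pick its hungriest guest, and suggest the least-loaded host that can absorb it. The host names, capacities and the 85% threshold are invented placeholders, and a real implementation would call the hypervisor's live-migration tooling instead of printing a suggestion.

    # Naive rebalancing sketch: all numbers and host names are made up.
    # "capacity" and guest demands are in arbitrary load units (e.g. GHz or cores).
    hosts = {
        "host01": {"capacity": 16.0, "guests": {"web1": 3.0, "db1": 9.0, "app1": 3.5}},
        "host02": {"capacity": 16.0, "guests": {"web2": 2.0, "mail1": 1.5}},
        "host03": {"capacity": 32.0, "guests": {"build1": 6.0}},
    }
    THRESHOLD = 0.85    # treat a host as overloaded above 85% of its capacity

    def load(host):
        return sum(host["guests"].values()) / host["capacity"]

    for name, host in hosts.items():
        if load(host) <= THRESHOLD:
            continue
        # busiest guest on the overloaded host
        guest, demand = max(host["guests"].items(), key=lambda kv: kv[1])
        # candidate targets that would stay under the threshold after the move
        targets = [(load(h), n) for n, h in hosts.items()
                   if n != name
                   and (sum(h["guests"].values()) + demand) / h["capacity"] <= THRESHOLD]
        if targets:
            _, target = min(targets)
            print(f"suggest live-migrating {guest} from {name} to {target}")
        else:
            print(f"{name} is overloaded, but no other host can absorb {guest}")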
A full virtualization hypervisor encapsulates all of the components of a guest OS, including its applications and the virtual resources they use, into a single logical entity. An image is a file or a directory that contains, at a minimum, this encapsulated information. Images are stored on hard drives, and can be transferred to other systems the same way that any file can (note, however, that images are often many gigabytes in size). Some virtualization systems use a virtualization image metadata standard called the Open Virtualization Format (OVF) that supports interoperability of image metadata and components across virtualization solutions. A snapshot is a record of the state of a running image, generally captured as the differences between the image and its current state. For example, a snapshot would record changes within virtual storage, virtual memory, network connections, and other state-related data. Snapshots allow the guest OS to be suspended and subsequently resumed without having to shut down or reboot the guest OS. Many, but not all, virtualization systems can take snapshots. On some hypervisors, snapshots of the guest OS can even be resumed on a different host. While a number of issues may arise during live migration, including the transfer delay and any differences that may exist between the two physical servers (e.g., IP address, number of processors or hard disk space), most live-migration solutions provide mechanisms to resolve these issues.
We can distinguish the following five types of virtualization: hardware domain-based virtualization, heavy-weight (full) virtualization, para-virtualization, light-weight (OS-level) virtualization, and blade servers as a form of "hardware factorization".
This is hardware domain-based virtualization, which is used only on high-end servers. Domains can, essentially, be called "blades with common memory and I/O devices". Those "blades on steroids" are probably the closest you can get to more power from a single server without the sacrifices in CPU, memory access and I/O speed that are typical of all other virtualization solutions. Of course there is no free lunch, and you need to pay for such luxury. Sun is the most prominent vendor of such servers (mainframe-class servers like the Sun Fire 15K).
A dynamic system domain (DSD) on Sun Fire 15K is an independent environment, a subset of a server, that is capable of running a unique version of firmware and a unique version of the Solaris operating environment. Each domain is insulated from the other domains. Continued operation of a domain is not affected by any software failures in other domains nor by most hardware failures in any other domain. The Sun Fire 15K system allows up to 18 domains to be configured.
A domain configuration unit (DCU) is a unit of hardware that can be assigned to a single domain; DCUs are the hardware components from which domains are constructed. DCUs that are not assigned to any domain are said to be in no-domain. There are several types of DCU: CPU/Memory board, I/O assembly, etc. Sun Fire 15K hardware requires the presence of at least one board containing CPUs and memory, plus at least one of the I/O board types, in each configured domain. Typically these servers are NUMA-based: access to the memory of other domains is slower than access to local memory.
By heavy-weight virtualization we mean full hardware virtualization as exemplified by VMware. CPU vendors now pay huge attention to this type of virtualization, as they can no longer increase CPU frequency and are forced onto the path of increasing the number of cores. Intel's latest CPUs, which are now dominant in the server space, are a classic example of this trend. With eight- and ten-core CPUs available, it is clear that Intel is betting on the virtualization trend. IBM POWER5/POWER6 and Sun UltraSparc T1/T2/T3 are examples among RISC CPUs.
All new Intel CPUs are "virtualization-friendly" and, with the exception of the cheapest models, contain instructions and hardware capabilities that make heavy-weight virtualization more efficient. First of all this relates to the capability of "zero address relocation": the availability of a special register which is added to each address calculation performed by regular instructions and thus provides the illusion of multiple "zero addresses" to programs.
VMware is the most popular representative of this approach to hypervisor design, and recently it was greatly helped by Intel and AMD, who incorporated virtualization extensions into their CPUs. VMware started to gain popularity before the latest Intel CPUs with virtualization instruction set extensions and demonstrated that it is possible to implement this approach reasonably efficiently even without hardware support. VMware officially supports a dozen different types of guests: it can run Linux (Red Hat and Suse), Solaris and Windows as virtual instances (guests) on one physical server. 32-bit Suse can be run in paravirtualized mode on VMware.
The industry consensus is that VMware's solution is overpriced. Please ignore hogwash like the following VM PR:
Horschman countered the 'high pricing' claim saying "Virtualization customers should focus on cost per VM more than upfront license costs when choosing a hypervisor. VMware Infrastructure's exclusive ability to overcommit memory gives it an advantage in cost per VM the others can't match." And he adds, "Our rivals are simply trying to compensate for limitations in their products with realistic pricing."
This overcommitting of memory is a standard feature related to the presence of a virtual memory subsystem in the hypervisor and was first implemented by IBM VM/CMS in the early 1970s. So much for new technology. All those attempts to run dozens of guests on a server with multiple cores (and in mid-2011 you can get an 80-core server -- the HP DL980 -- for less than $60K) are more the result of the incompetence of typical IT brass, and of the number of servers that simply circulate air in a typical datacenter, than of progress in virtualization technology.
No matter how much you can share memory (and overcommitment is just a new term for what IBM VM has done since 1972), you cannot bypass the limitation of a single channel from CPU to memory, unless this is a NUMA server. The more guests are running, the more this channel is stressed, and running dozens of instances is possible mainly in situations when they are doing nothing or close to nothing ("circulating air" in corporate IT jargon). That happens (unpopular, unused corporate web servers are one typical example), but even for web servers paravirtualization and zones are much better solutions.
Even assuming the same efficiency as multiple standalone 1U servers, VMware is not cost efficient unless you can squeeze more than four guests per server. And more than four guests is possible only with servers that are doing nothing or close to nothing, because if each guest is equally loaded then each of them can use only 33% or less of the memory bandwidth of the server (which means the memory channel for a guest operates at 333MHz or less, assuming the server uses 1.028GHz memory). Due to this I would not recommend running four heavily used database servers on a single physical server for any organization. But running several servers for the compliance training that was implemented because the company was caught fixing prices, along with a server or two that implement a questionnaire about how good the company's IT brass is at communicating IT policy to the rank and file, is OK ;-)
The following table demonstrates that the cost savings with fewer than four guests per physical server are non-existent, even if we assume equal efficiency of VMware and separate physical servers. Moreover, VMware's price premium means that you need at least eight guests on a single physical server to achieve the same cost efficiency as four Xen servers running two guests each (Red Hat and Novell do not charge for additional guests on the same physical server, up to a limit).
All costs are in $K (thousands of dollars).
Configuration | Cost of one server | Physical servers | Guests | SAN cards (Qlogic) | SAN storage | Server maintenance (annual) | VM license | VM maintenance (annual) | OS maintenance (annual, per instance) | Five-year total cost of ownership | Five-year cost per guest (or per physical server) | Five-year advantage vs. one 1U server
VMware solution | | | | | | | | | | | |
Running 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 5 | 1.4 | 0.35 | 25.02 | 12.51 | -3.24
Running 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 5 | 1.4 | 0.35 | 38.52 | 9.63 | -0.36
Running 8 guests | 20 | 1 | 8 | 4.00 | 6.00 | 0.42 | 5 | 1.4 | 0.35 | 58.52 | 7.32 | 3.13
Xen solution | | | | | | | | | | | |
Running 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 13.02 | 6.51 | 2.76
Running 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 0 | 1.3 | 0.35 | 33.02 | 8.26 | 1.02
Physical servers | | | | | | | | | | | |
Two 1U servers | 5 | 2 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 18.54 | 9.27 | 0.00
Four 1U servers | 5 | 4 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 37.08 | 9.27 | 0.00
Notes
1. Even assuming the same efficiency, there are no cost savings from running four or fewer guests per VMware server in comparison with an equal number of standard 1U servers.
2. The cost of blades is slightly higher than that of an equal number of 1U servers due to the cost of the enclosure, but can be assumed equal for simplicity.
3. We assume that with two instances no SAN is needed or used (internal drives are used for each guest).
4. We assume that with four or more guests, SAN cards and SAN storage are used.
5. For Xen we assume that with four or more guests Oracle VM is used (which has maintenance fees).
6. For simplicity the cost of SAN storage is assumed to be a fixed $3K per 1TB per five years (this includes SAN unit amortization, maintenance and switches, and excludes the SAN cards in the server itself).
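For readers who want to redo the comparison with their own numbers, the short Python sketch below reproduces the five-year TCO column of the table above (all values in $K). Its conventions are reverse-engineered from the published figures: hardware maintenance appears to be counted six times (years 0 through 5), hypervisor and OS maintenance five times, and OS maintenance is charged per instance. Treat it as a reconstruction of the table's arithmetic under those assumptions, not as an authoritative cost model.

    # Reconstruction of the table's five-year TCO arithmetic (all values in $K).
    def five_year_tco(hw, guests, san_cards=0.0, san_storage=0.0,
                      hw_maint=0.42, vm_license=0.0, vm_maint=0.0, os_maint=0.35):
        instances = max(guests, 1)              # a plain physical server counts as one instance
        return (hw + san_cards + san_storage
                + 6 * hw_maint                  # convention inferred from the published numbers
                + vm_license
                + 5 * vm_maint
                + 5 * os_maint * instances)

    baseline_1u = five_year_tco(5, 0)           # one standalone 1U server: 9.27

    scenarios = {
        "VMware, 2 guests": five_year_tco(7, 2, vm_license=5, vm_maint=1.4),
        "VMware, 4 guests": five_year_tco(10, 4, 4, 3, vm_license=5, vm_maint=1.4),
        "VMware, 8 guests": five_year_tco(20, 8, 4, 6, vm_license=5, vm_maint=1.4),
        "Xen, 2 guests":    five_year_tco(7, 2),
        "Xen, 4 guests":    five_year_tco(10, 4, 4, 3, vm_maint=1.3),
    }

    for name, tco in scenarios.items():
        guests = int(name.split()[-2])
        per_guest = tco / guests
        print(f"{name:18s} TCO {tco:5.2f}  per guest {per_guest:5.2f}  "
              f"advantage vs 1U {baseline_1u - per_guest:+5.2f}")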
The performance of VMware guests under high load is not impressive, as one should expect from any non-paravirtualized hypervisor. Here is a more realistic assessment from the rival Xen camp:
Simon Crosby, CTO of the Virtualization and Management Division at Citrix Systems, writes on his blog: "The bottom line: VMware's 'ROI analysis' offers neither an ROI comparison nor any analysis. But it does offer valuable insight into the mindset of a company that will fight tooth and nail to maintain VI3 sales at the expense of a properly thought through solution that meets end user requirements.
The very fact that the VMware EULA still forbids Citrix or Microsoft or anyone in the Xen community from publishing performance comparisons against ESX is further testimony to VMware's deepest fear, that customers will become smarter about their choices, and begin to really question ROI."
The main advantage of heavy-weight virtualization is almost complete isolation of instances. Paravirtualization and blades achieve a similar level of isolation, so this advantage is not exclusive.
"The very fact that
the VMware EULA still forbids Citrix or Microsoft or anyone in the
Xen community from publishing performance comparisons against ESX
is further testimony to VMware's deepest fear, that customers will
become smarter about their choices, and begin to really question
ROI."
-- Simon Crosby, Citrix Systems |
The fact that CPUs, memory and I/O channels (the PCI bus) are shared among guests means that you will never get the same speed under high simultaneous workloads on several guests as with an equal number of standalone servers, each with the corresponding fraction of CPUs and memory and the same set of applications. Especially problematic is the sharing of the memory bridge, which works at a lower speed than the CPUs and can starve them, becoming the bottleneck well before the CPU does. Each virtual instance of the OS loads pages independently of the others and competes for limited memory bandwidth. Even in the best case, each guest gets a fraction of memory bandwidth that is lower than the memory bandwidth of a standalone server. If, for example, two virtual instances are simultaneously active and are performing operations that do not fit in the L2 cache, each gets only about 2/3 of the memory bandwidth (memory accesses are randomly spread in time, so the shares can sum to more than 100%) in comparison with a standalone system. With memory operating at 1.024GHz that means only about 666MHz of bandwidth is available to each guest, while on a standalone server it would be at least 800MHz and can be as high as 1.33GHz. In other words, you lose approximately 1/3 of memory bandwidth by jumping onto the virtualization bandwagon. That is why heavy-weight virtualization behaves badly on memory-intensive applications.
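Here is a deliberately crude Python rendering of that memory-bandwidth argument. The 2/3 share for two simultaneously active guests is the author's estimate (interleaved accesses let the shares add up to more than 100%), and the memory clock is rounded to 1GHz; nothing below is a measurement.

    # Crude model: what fraction of the memory channel each guest effectively sees.
    MEMORY_MHZ = 1000                 # the text's 1.024GHz memory channel, rounded

    def effective_mhz(share_of_channel):
        """MHz-equivalent bandwidth a guest sees for a given share of the channel."""
        return MEMORY_MHZ * share_of_channel

    standalone = effective_mhz(1.0)       # a single OS owns the whole channel
    each_of_two = effective_mhz(2 / 3)    # two active guests, using the author's 2/3 estimate

    print(f"standalone server : ~{standalone:.0f} MHz of usable memory bandwidth")
    print(f"each of two guests: ~{each_of_two:.0f} MHz "
          f"({1 - each_of_two / standalone:.0%} less than standalone)")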
There can be a lot of synergy if you run two or more instances of identical OSes. Many pages representing identical parts of the kernel and applications can be loaded only once and used by all virtual instances. But I think you lose stack overflow protection this way, as pages are shared by different instances.
As memory speed and the memory channel are the bottlenecks, adding CPUs (or cores) at some point becomes just a waste of money. The amount of resources used for intercommunication increases dramatically with the number of CPUs. VMware server farms based on the largest Intel servers, like the HP DL980 (up to eight 10-core CPUs), tend to suffer from this effect. The presence of a full, non-modified OS for each partition introduces a significant drag on resources (both memory- and CPU-wise). I/O load can be diminished by using SAN storage for each virtual instance and multiple cards in the server. Still, in some deep sense heavy-weight partitioning is inefficient and will always waste a significant part of server resources.
Still, this approach is important for running legacy applications, which is the area where this type of virtualization shines.
Sun calls heavy-weight virtual partitions "logical domains" (LDOMs). They are supported on Sun's T1-T3 CPU-based servers and all the latest Oracle servers. Sun supports up to 32 guests with this virtualization technology. For the differences from LPARs, see Rolf M Dietze's blog:
Sun's LDoms supply a virtual terminal server, so you have consoles for the partitions, but I guess this comes out of the UNIX history: You don't like flying without any sight or instruments at high speed through caves, do you? So you need a console for a partition! T2000 with LDoms seems to support this; at IBM you need to buy an HMC (a Linux PC with HMC software).
With Crossbow, virtual networking comes to Solaris. LDoms seem to give all the advantages of logical partitioning that IBM's have, but hopefully a bit faster and clearly with less power consumption.
Sun offers a far more open licensing of course and: You do not need a Windows PC to administer the machine (iSeries OS/400 is administered from such a thing).
A T2000 is fast, has up to 8 cores (32 thread-CPUs) and 16GB RAM, and has a good price, for those that do not really need the pure power and are more interested in partitioning.
The Solaris zones have some restrictions, e.g. no NFS server in zones etc. That is where LDoms come in. That's why I want to actually compare LDoms and LPARs.
It looks like it becomes cold out there for IBM boxes.
Para-virtualization is a variant of native virtualization in which the VM (hypervisor) emulates only part of the hardware and provides a special API that requires OS modifications. The most popular representative of this approach is Xen, with AIX as a distant second:
With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server's hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more virtual servers, effectively decoupling the operating system and its applications from the underlying physical server.
IBM LPARs for AIX are currently the king of the hill in this area because of their higher stability in comparison with alternatives. IBM actually pioneered this class of VMs in the late 1960s with the release of the famous VM/CMS. Until recently, POWER5-based servers with AIX 5.3 and LPARs were the most battle-tested and reliable virtualized environments based on paravirtualization.
Xen is the king of the paravirtualization hill in the Intel space. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research (yes, despite naive Linux zealots' whining, Microsoft did contribute code to Linux ;-). Other things being equal, it provides higher speed and less overhead than native virtualization. NetBSD was the first to implement Xen. Currently the key platform for Xen is Linux, with Novell supporting it in the production version of Suse.
Xen is now resold commercially by IBM, Oracle and several other companies. XenSource, the company created to commercialize the Xen technology, was bought by Citrix.
The main advantage of Xen is that it supports live relocation. It is also a more cost-effective solution than VMware, which is definitely overpriced.
The main problem is that para-virtualization requires modification of the OS kernel so that it is aware of the environment it is running in and passes control to the hypervisor when executing privileged instructions. Therefore it is not suitable for running legacy OSes or Microsoft Windows (although Xen can run Windows on the newer Intel 51xx CPU series using hardware virtualization support).
Para-virtualization improves speed in comparison with heavy-weight virtualization (much less context switching), but does little beyond that. It is unclear how much faster a para-virtualized instance of an OS is in comparison with heavy-weight virtualization on "virtualization-friendly" CPUs. The Xen page claims that:
Xen offers near-native performance for virtual servers with up to 10 times less overhead than proprietary offerings, and benchmarked overhead of well under 5% in most cases compared to 35% or higher overhead rates for other virtualization technologies.
It is unclear whether this difference was measured on old Intel CPUs or on the newer 5xxx series that supports virtualization extensions. I suspect the difference on newer CPUs should be smaller.
I would like to stress again that the level of OS modification is very basic, and the important idea of factoring out common functions, such as the virtual memory management implemented in the classic VM/CMS, is not utilized. Therefore all the redundant processing typical of heavy-weight virtualization is present in a para-virtualization environment.
Note: Xen 3.0 and above support both para-virtualization and full (heavy-weight) virtualization to leverage the hardware support built into the Intel VT-x and AMD Pacifica processors. According to the XenSource Products - Xen 3.0 page:
With the 3.0 release, Xen extends its feature leadership with functionality required to virtualize the servers found in today's enterprise data centers. New features include:
- Support for up to 32-way SMP guest
- Intel VT-x and AMD Pacifica hardware virtualization support
- PAE support for 32 bit servers with over 4 GB memory
- x86/64 support for both AMD64 and EM64T
One very interesting application of paravirtualization is so-called virtual appliances. This is a whole new area that we discuss on a separate page.
Another very interesting application of paravirtualization is "cloud" environments like the Amazon Elastic Compute Cloud.
All-in-all, paravirtualization along with light-weight virtualization (BSD jails and Solaris zones) look like the most promising types of virtualization.
This type of virtualization was pioneered in FreeBSD (jails) and was further developed by Sun, which introduced it in Solaris 10 as the concept of Zones. There are various experimental add-ons of this type for Linux, but none has gained much prominence.
Solaris 10 11/06 and later are capable of cloning a zone as well as relocating it to another box, through a feature called Attach/Detach. It is also now possible to run Linux applications in zones on x86 servers (branded zones).
Zones are a really revolutionary and underappreciated development, which was hurt greatly by inept Sun management and the subsequent acquisition by Oracle. The key advantage is that you have a single instance of the OS, so the price that you pay in the case of heavy-weight virtualization is waived. That means that light-weight virtualization is the most efficient resource-wise. It also has great security value. Memory can become a bottleneck, as all memory accesses are channeled via a single controller, but you have a single virtual memory system for all zones -- a great advantage that permits reuse of memory for similar processes.
IBM's "lightweight" product would be Workload Manager for AIX, which is an older (2001?) and less elegant technology than BSD jails and Solaris zones:
Current UNIX offerings for partitioning and workload management have clear architectural differences. Partitioning creates isolation between multiple applications running on a single server, hosting multiple instances of the operating system. Workload management supplies effective management of multiple, diverse workloads to efficiently share a single copy of the operating system and a common pool of resources
IBM's lightweight virtualization in versions of AIX before 6 operated under a different paradigm, with the closest thing to a zone being a "class". The system administrator (root) can delegate the administration of the subclasses of each superclass to a superclass administrator (a non-root user). Unlike zones, classes can be nested:
The central concept of WLM is the class. A class is a collection of processes (jobs) that has a single set of resource limits applied to it. WLM assigns processes to the various classes and controls the allocation of system resources among the different classes. For this purpose, WLM uses class assignment rules and per-class resource shares and limits set by the system administrator. The resource entitlements and limits are enforced at the class level. This is a way of defining classes of service and regulating the resource utilization of each class of applications to prevent applications with very different resource utilization patterns from interfering with each other when they are sharing a single server.
In AIX 6 IBM adopted Solaris style light-weight virtualization.
Blade servers are an increasingly important part of the enterprise datacenters, with consistent double-digit growth which is outpacing the overall server market. IDC estimated that 500,000 blade servers were sold in 2005, or 7% of the total market, with customers spending $2.1 billion.
While blades are not virtualization in the pure technical sense, a rack of blades (a bladesystem) possesses some additional management capabilities that are similar to a virtualized system and that are not present in a stand-alone set of 1U servers. Blades usually have a shared I/O channel to NAS. They also have shared remote management capabilities (iLO on HP blades).
They can be viewed as a "hardware factorization" approach to server construction, which is not that different from virtualization. The first shot in this direction is the new generation of bladesystems, such as the IBM BladeCenter H system, which has offered I/O virtualization since February 2006, and the HP BladeSystem c-Class. A bladesystem saves up to 30% power in comparison with rack-mounted 1U servers with identical CPU and memory configurations.
Sun also offers blades, but it is a minor player in this area. It offers the pretty interesting and innovative Sun Blade 8000 Modular System, which targets a higher end than usual blade servers. Here is how CNET described the key idea behind the server in the article Sun defends big blade server 'Size matters':
Sun co-founder Andy Bechtolsheim, the company's top x86 server designer and a respected computer engineer, shed light on his technical reasoning for the move.
"It's not that our blade is too large. It's that the others are too small," he said.
Today's dual-core processors will be followed by models with four, eight and 16 cores, Bechtolsheim said. "There are two megatrends in servers: miniaturization and multicore--quad-core, octo-core, hexadeci-core. You definitely want bigger blades with more memory and more input-output."
When blade server leaders IBM and HP introduced their second-generation blade chassis earlier this year, both chose larger products. IBM's grew 3.5 inches taller, while HP's grew 7 inches taller. But opinions vary on whether Bechtolsheim's prediction of even larger systems will come true.
"You're going to have bigger chassis," said IDC analyst John Humphries, because blade server applications are expanding from lower-end tasks such as e-mail to higher-end tasks such as databases. On the more cautious side is Illuminata analyst Gordon Haff, who said that with IBM and HP just at the beginning of a new blade chassis generation, "I don't see them rushing to add additional chassis any time soon."
Business reasons as well as technology reasons led Sun to re-enter the blade server arena with big blades rather than more conventional smaller models that sell in higher volumes, said the Santa Clara, Calif.-based company's top server executive, John Fowler. "We believe there is a market for a high-end capabilities. And sometimes you go to where the competition isn't," Fowler said.
As a result of such factorization, more and more functions move to the blade enclosure. Power consumption improves dramatically, as blades typically use low-power CPUs and all blades share the same power supplies, which in the case of a full or nearly full enclosure lets the power supplies work with much greater efficiency (twice or more that of a typical server). That cuts air conditioning costs too. Newer blades also monitor air flow and adjust fans accordingly. As a result the energy bill can be half that of the same number of 1U servers.
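A back-of-the-envelope Python sketch of that power arithmetic: many lightly loaded 1U power supplies waste more than one well-loaded shared enclosure supply, and cooling scales with the heat you dump. Every number here (loads, efficiencies, cooling overhead) is an illustrative assumption rather than a vendor figure; with these particular guesses the enclosure comes out roughly 40% ahead, and more aggressive assumptions about shared fans and power supplies push the savings toward the "half the energy bill" figure above.

    # Rough power comparison: sixteen 1U servers vs. one blade enclosure.
    def wall_power(dc_load_watts, psu_efficiency):
        """Power drawn at the wall for a given DC load and power-supply efficiency."""
        return dc_load_watts / psu_efficiency

    SERVERS = 16
    DC_LOAD_PER_SERVER = 250      # watts of board/CPU/disk load per server (assumed)
    COOLING_OVERHEAD = 0.5        # extra watt of cooling per watt of heat (assumed)

    rack_of_1u = SERVERS * wall_power(DC_LOAD_PER_SERVER, 0.65)          # small, lightly loaded PSUs
    enclosure  = wall_power(SERVERS * DC_LOAD_PER_SERVER * 0.85, 0.92)   # shared fans/PSUs, well loaded

    for label, watts in (("sixteen 1U servers", rack_of_1u), ("one blade enclosure", enclosure)):
        print(f"{label:20s} {watts:5.0f} W at the wall, "
              f"~{watts * (1 + COOLING_OVERHEAD):5.0f} W including cooling")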
Blades generally solve the problem of memory bandwidth typical of most types of virtualization except domain-based. Think of them as predefined (fixed) partitions with a fixed number of CPUs and a fixed amount of memory. Dynamic swapping of images between blades is possible. Some I/O can be local, as a blade can typically carry two (half-size blades) or four (full-size blades) 2.5" disks; with solid state drives being a reliable and fast, albeit expensive, alternative to traditional rotating hard drives, and with memory cards like ioDrive, local disk speed can be as good as or better than on a large server with, say, sixteen 15K RPM hard drives. That permits offloading OS-related I/O from application-related I/O.
Among major vendors:
There is no free lunch, and virtualization is not a panacea. It increases the complexity of the environment and puts severe stress on the single server that hosts multiple virtual machine instances. Failure of this server leads to failure of all the instances; the same is true of a failure of the hypervisor.
All-in-all, paravirtualization along with light-weight virtualization (BSD jails and Solaris zones) look like the most promising types of virtualization.
The natural habitats of virtualization are:
Development, test and staging servers
Demos
Virtual appliances (only paravirtualization and light-weight virtualization)
Support of legacy OS versions on new hardware
Almost idle servers that serve various enterprise consoles and similar low-CPU-intensity applications (for example, specialized internal web servers and e-commerce servers)
Startup IT infrastructure, before the startup achieves some level of maturity (as such infrastructure is highly dynamic and mistakes in physical server acquisition/allocation are costly)
Other highly dynamic setups where the ability to move guests to a different, higher-performance server can be of critical importance
At the same time virtualization opens new capabilities for running multiple instances of the same application, for example a web server, and some types of virtualization, such as paravirtualization and light-weight virtualization (zones), can do it more, not less, efficiently than a similar single physical server with multiple web servers running on different ports.
Sometimes it makes sense to run a single virtual machine instance on a server to get such advantages as on-the-fly relocation of instances, virtual image manipulation capabilities, etc. With technologies like Xen, which claim less than 5% overhead, that approach becomes feasible. "Binary servers" -- servers that host just two applications -- also look very promising, as in this case you can still buy low-cost servers and, in the case of Xen, do not need to pay for the hypervisor.
Migration of rack-mounted servers to blade servers is probably the safest approach to server consolidation. Managers without experience of working in a partitioned environment should not underestimate what their administrators need to learn, or the set of new problems that virtualization creates. One good piece of advice is: "Make sure you put the training dollars in."
There are also other problems. A lot of software vendors won't certify applications as virtual environment compatible, for example VMware compatible. In such cases running the application in virtual environment means that you need to assume the risks and cannot count on vendor tech support to resolve your issues.
All-in-all, virtualization is now mainly played out in the desktop and low-end server space. It makes sense to proceed slowly, testing the water before jumping in. Those that have adopted virtualization have, on average, only about 20% of their environment virtualized, according to IDC. VMware's pricing structure is a little bit ridiculous and nullifies the hardware savings, if any; their maintenance costs are even worse. That means that alternative solutions like Xen 3 or Microsoft should be considered on the Intel side, and IBM and Sun on the Unix side. As vendor consolidation is ahead, if you don't have a clear benefit from virtualization today, you can wait or limit yourself to "sure bets" like development, testing and staging servers. The next version of Windows Server will put serious pressure on VMware in a year or so. Xen is also making progress, with IBM support behind it. With those competitive pressures, VMware could become significantly less expensive in the future.
VMs are also touted as a solution to the computer security problem. It is pretty obvious that they can improve security. After all, if you're running your browser on one VM and your mailer on another, a security failure in one shouldn't affect the other. If one virtual machine is compromised you can just discard it and create a fresh image. There is some merit to that argument, and in many situations it's a good configuration to use. But at the same time the transient nature of virtual machines introduces new security and compliance challenges not addressed by traditional systems management processes and tools. For example, virtual images are more portable, and the possibility of stealing a whole OS image and running it on a different VM is very real. New security risks inherent in virtualized environments need to be understood and mitigated.
Here is a suitable definition taken from the article published in Linux Magazine:
"(Virtual machines) offer the ability to partition the resources of a large machine between a large number of users in such a way that those users can't interfere with one another. Each user gets a virtual machine running a separate operating system with a certain amount of resources assigned to it. Getting more memory, disks, or processors is a matter of changing a configuration, which is far easier than buying and physically installing the equivalent hardware."
And FreeBSD and Solaris users have a lightweight VM built into the OS. Actually FreeBSD jails, Solaris 10 zones and Xen are probably the most democratic lightweight VMs. To counter the threat from free VMs, VMware now produces a free version too. VMware Player is able to run virtual machines made in VMware Workstation. There are many free OS images on the website, most of them community-made. There are also freeware tools for creating VMs and for mounting, manipulating and converting VMware disks and floppies, so it is possible to create, run and maintain virtual machines for free (even for commercial use).
Here is how this class of virtual machines is described in Wikipedia
Conventional emulators like Bochs emulate the microprocessor, executing each guest CPU instruction by calling a software subroutine on the host machine that simulates the function of that CPU instruction. This abstraction allows the guest machine to run on host machines with a different type of microprocessor, but is also very slow.
An improvement on this approach is dynamically recompiling blocks of machine instructions the first time they are executed, and later using the translated code directly when the code runs a second time. This approach is taken by Microsoft's Virtual PC for Mac OS X.
VMware Workstation takes an even more optimized approach and uses the CPU to run code directly when this is possible. This is the case for user mode and virtual 8086 mode code on x86. When direct execution is not possible, code is rewritten dynamically. This is the case for kernel-level and real mode code. In VMware's case, the translated code is put into a spare area of memory, typically at the end of the address space, which can then be protected and made invisible using the segmentation mechanisms. For these reasons, VMware is dramatically faster than emulators, running at more than 80% of the speed that the virtual guest OS would run on hardware. VMware boasts an overhead as small as 3%-6% for computationally intensive applications.
Although VMware virtual machines run in user mode, VMware Workstation itself requires installing various drivers in the host operating system, notably in order to dynamically switch the GDT and the IDT tables.
One final note: it is often erroneously believed that virtualization products like VMware or Virtual PC replace offending instructions or simply run kernel code in user mode. Neither of these approaches can work on x86. Replacing instructions means that if the code reads itself it will be surprised not to find the expected content; it is not possible to protect code against reading and at the same time allow normal execution; replacing in place is complicated. Running the code unmodified in user mode is not possible either, as most instructions which just read the machine state do not cause an exception and will betray the real state of the program, and certain instructions silently change behavior in user mode. A rewrite is always necessary; a simulation of the current program counter in the original location is performed when necessary and notably hardware code breakpoints are remapped.
The Xen open source virtual machine partitioning project is picking up momentum since acquiring the backing of venture capitalists at the end of 2004. Now, server makers and Linux operating system providers are starting to line up to support the project, contribute code, and make it a feature of their systems at some point in the future. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research. Novell and Advanced Micro Devices also back Xen. See also
While everybody seemed to get interested in the open source Xen virtual machine partitioning hypervisor just when XenSource incorporated and made its plans clear for the Linux platform, the NetBSD variant of the BSD Unix platform has been Xen-compatible for over a year now, and will be as fully embracing the technology as Linux is expected to.
Xen has really taken off since Dec, 2004, when the leaders of the Xen project formed a corporation to sell and support Xen and they immediately secured $6 million from venture capitalists Kleiner Perkins Caufield & Byers and Sevin Rosen Funds.
Xen is headed up by Ian Pratt, a senior faculty member at the University of Cambridge in the United Kingdom, who is the chief technology officer at XenSource, the company that has been created to commercialize Xen. Pratt told me in December that he had basically been told to start a company to support Xen because some big financial institutions on Wall Street and in the City (that's London's version of Wall Street for the Americans reading this who may not have heard the term) insisted that he do so because they loved what Xen was doing.
Seven years ago, Ian Pratt joined the senior faculty at the University of Cambridge in the United Kingdom, and after being on the staff for two years, he came up with a schematic for a futuristic, distributed computing platform for wide area network computing called Xenoserver. The idea behind the Xenoserver project is one that now sounds familiar, at least in concept, but sounded pretty sci-fi seven years ago: hundreds of millions of virtual machines running on tens of millions of servers, connected by the Internet, and delivering virtualized computing resources on a utility basis where people are charged for the computing they use. The Xenoserver project consisted of the Xen virtual machine monitor and hypervisor abstraction layer, which allows multiple operating systems to logically share the hardware on a single physical server, the Xenoserver Open Platform for connecting virtual machines to distributed storage and networks, and the Xenoboot remote boot and management system for controlling servers and their virtual machines over the Internet.
Work on the Xen hypervisor began in 1999 at Cambridge, where Pratt was irreverently called the "XenMaster" by project staff and students. During that first year, Pratt and his project team identified how to do secure partitioning on 32-bit X86 servers using a hypervisor and worked out a means for shuttling active virtual machine partitions around a network of machines. This is more or less what VMware does with its ESX Server partitioning software and its VMotion add-on to that product. About 18 months ago, after years of coding the hypervisor in C and the interface in Python, the Xen portion of the Xenoserver project was released as Xen 1.0. According to Pratt, it had tens of thousands of downloads. This provided the open source developers working on Xen with a lot of feedback, which was used to create Xen 2.0, which started shipping last year. With the 2.0 release, the Xen project added the Live Migration feature for moving virtual machines between physical machines, and then added some tweaks to make the code more robust.
Xen and VMware's GSX Server and ESX Server have a major architectural difference. VMware's hypervisor layer completely abstracts the X86 system, which means any operating system supported on X86 processors can be loaded into a virtual machine partition. This, said Pratt, puts tremendous overhead on the systems. Xen was designed from the get-go with an architecture focused on running virtual machines in a lean and mean fashion, and it does this by having versions of open source operating systems tweaked to run on the Xen hypervisor. That is why Xen 2.0 only supports Linux 2.4, Linux 2.6, FreeBSD 4.9 and 5.2, and NetBSD 2.0 at the moment; special tweaks of NetBSD and Plan 9 are in the works, and with Solaris 10 soon to be open source, that will be available as well. With Xen 1.0, Pratt had access to the Windows XP source code from Microsoft, which allowed the Xen team to put Windows XP inside Xen partitions. With the future "Pacifica" hardware virtualization features coming in single-core and dual-core Opterons, and with Intel building its "Vanderpool" virtualization features into Xeon and Itanium processors (the Pentium 4 variant is called "Silvervale" for some reason), both Xen and VMware partitioning software will have hardware-assisted virtual machine partitioning. While no one is saying this outright because they cannot reveal how Pacifica or Vanderpool actually work, these technologies may do most of the X86 abstraction work, and therefore should allow standard, compiled operating system kernels to run inside Xen or VMware partitions. That means Microsoft can't stop Windows from being supported inside Xen over the long haul.
Thor Lancelot Simon, one of the key developers and administrators at the NetBSD Foundation that controls the development of NetBSD, reminded everyone that NetBSD has been supporting the Xen 1.2 hypervisor and monitor within a variant of the NetBSD kernel (that's NetBSD/xen instead of NetBSD/i386) since March of last year. Moreover, the foundation's own servers are all equipped with Xen, which allows programmers to work in isolated partitions with dedicated resources and not stomp all over each other as they are coding and compiling. "We aren't naive enough to think that any system has perfect security; but Xen helps us isolate critical systems from each other, and at the same time helps keep our systems physically compact and easy to manage," he said. "When you combine virtualization with Xen with NetBSD's small size, code quality, permissive license, and comprehensive set of security features, it's pretty clear you have a winning combination, which is why we run it on our own systems." NetBSD contributor Manuel Bouyer has done a lot of work to integrate the Xen 2.0 hypervisor and monitor into the NetBSD-current branch, and he said he would be making changes to the NetBSD/i386 release that would all integrate /xen kernels into it and will allow Xen partitions to run in privileged and unprivileged mode.
The Xen 3.0 hypervisor and monitor is expected sometime in late 2005 or early 2006, with support for 64-bit Xeon and Opteron processors. XenSource's Pratt told me recently that Xen 4.0 is due to be released in the second half of 2005, and it will have better tools for provisioning and managing partitions. It is unclear how the NetBSD project will absorb these changes, but NetBSD 3.0 is expected around the middle of 2005. The project says that it plans to try to get one big release of NetBSD out the door once a year going forward.
Jun 08, 2021 | www.collabora.com
For the past few months we have been improving the experimental Wayland driver for Wine, which allows Windows applications to run directly on Wayland compositors. Our goal is to eventually remove the need for XWayland for many use cases, and thus reduce the overall system complexity while eliminating points of potential inefficiency. We first announced our work on the driver last December, and posted an update earlier this year. We are now happy to announce a second update for this driver, adding several major features which increase its scope and utility. You can read all the details in the new upstream mailing list RFC (Request for Comment) post.

Vulkan support comes with window management handling (resizing, fullscreen etc), and can be used either directly or to implement Direct3D through either WineD3D or DXVK. The Wayland driver now exposes multiple monitors to Wine and supports dynamic addition and removal of monitors. It also supports changing the application-perceived resolution of each monitor (through compositor scaling). Here is a list of the new features:
- Vulkan support
- Multi-monitor support
- HiDPI handling
- Cursor clipping/relative movement
- Wayland keymap handling
See also Developing Wayland Color Management and High Dynamic Range
May 27, 2021 | ostechnix.com
Each section contains the commands related to a particular set of tasks. You can view the help section of a group, for example Networking, like below:
$ virsh help Networking

You will see the commands related to networking tasks:
Networking (help keyword 'network'):
net-autostart autostart a network
net-create create a network from an XML file
net-define define an inactive persistent virtual network or modify an existing persistent one from an XML file
net-destroy destroy (stop) a network
net-dhcp-leases print lease info for a given network
net-dumpxml network information in XML
net-edit edit XML configuration for a network
net-event Network Events
net-info network information
net-list list networks
net-name convert a network UUID to network name
net-start start a (previously defined) inactive network
net-undefine undefine a persistent network
net-update update parts of an existing network's configuration
net-uuid convert a network name to network UUID
net-port-list list network ports
net-port-create create a network port from an XML file
net-port-dumpxml network port information in XML
net-port-delete delete the specified network port
You can also display the help section of a specific command. For example, I am going to display the help section of the "net-name" command:
$ virsh help net-name
NAME
net-name - convert a network UUID to network name
SYNOPSIS
net-name <network>
OPTIONS
[--network] <string> network uuid

1.2. List virtual machines

To view the list of guest virtual machines in running or suspended state, execute the following command:
$ virsh list
Id Name State
--------------------

As you can see, there are no guests in running or suspended state.
You can use the --inactive option to list inactive guests.
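For example, a quick one-liner against the same host used in this guide:

$ virsh list --inactive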
To view all guest machines, run:
$ virsh list --all
Id Name State
--------------------------------
- centos8-uefi shut off
- nginx_centos8 shut off

As you see in the above output, I have two virtual machines namely "centos8-uefi" and "nginx_centos8". Both are powered off.
1.3. Start Virtual machines

To start a virtual machine, for example "centos8-uefi", run:
$ virsh start centos8-uefi

You will see an output like below:
Domain centos8-uefi started

To verify if the VM is running, use the "list" command:
$ virsh list
Id Name State
------------------------------
1 centos8-uefi running

1.4. Save Virtual machines

To save the current state of a running VM, run:
$ virsh save centos8-uefi centos8-save
Domain centos8-uefi saved to centos8-save

This command stops the guest named "centos8-uefi" and saves the data to a file called "centos8-save". This will take a few moments depending upon the amount of memory in use by your guest machine.
1.5. Restore Virtual machines

To restore the previously saved state of a VM, just specify the file name like below:
$ virsh restore centos8-save
Domain restored from centos8-save

Verify if the VM is restored using the "list" command:
$ virsh list
Id Name State
------------------------------
4 centos8-uefi running

1.6. Restart Virtual machines

To restart a running VM, run:
$ virsh reboot centos8-uefi
Domain centos8-uefi is being rebooted

1.7. Suspend/Pause Virtual machines

To suspend a running VM, do:
$ virsh suspend centos8-uefi
Domain centos8-uefi suspended

Verify it with the "list" command:
$ virsh list
Id Name State
-----------------------------
1 centos8-uefi paused

1.8. Resume Virtual machines

To resume the paused VM, run:
$ virsh resume centos8-uefi
Domain centos8-uefi resumed

1.9. Stop active Virtual machines

To forcibly stop an active VM and leave it in the inactive state, run:
$ virsh destroy centos8-uefi
Domain centos8-uefi destroyed
You can also gracefully stop the VM instead of forcing it like below:
$ virsh destroy centos8-uefi --graceful
Domain centos8-uefi destroyed

1.10. Shutdown Virtual machines

To power off a running VM, do:
$ virsh shutdown centos8-uefi
Domain centos8-uefi is being shutdown

1.11. Retrieve Virtual machines XML dump

To display the XML configuration file of a VM in the standard output, run:
$ virsh dumpxml centos8-uefi

This command will display the complete configuration details (software and hardware) of the virtual machine.
You can also export the XML dump to a file instead of just displaying it in the standard output like below:
$ virsh dumpxml centos8-uefi > centos8.xml

This command will dump the "centos8-uefi" XML configuration into a file named "centos8.xml" in the current working directory.
1.12. Create Virtual machines with XML dump

You can create a new virtual guest machine using the existing XML of a previously created guest. First, create an XML dump as shown above and then create a new VM using the XML file like below:
$ virsh create centos8.xml
Domain centos8-uefi created from centos8.xml

This command will create a new VM and start it immediately. You can verify it using the "list" command.
1.13. Edit Virtual machines XML configuration file

If you want to make any changes to a guest machine, you can simply edit its configuration file and make the changes as you wish. The guests can be edited either while they run or while they are offline.
$ virsh edit centos8-uefi

This command will open the file in the default editor you set with the $EDITOR variable.
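If you prefer a different editor just for this command (a small sketch; nano is only an example), you can override the variable inline:

$ EDITOR=nano virsh edit centos8-uefi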
1.14. Enable console access for Virtual machines

After creating KVM guest machines, you can access them via SSH, VNC client, Virt-viewer, Virt-manager, the Cockpit web console and so on. However, you can't access them using the "virsh console" command out of the box. The console command connects to the virtual serial console of the guest, so to access KVM guests this way you need to enable serial console access in the guest machine. Refer to our guide on enabling virsh console access for the full steps.
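In short, for a systemd-based guest such as the CentOS 8 VM used in this guide (a hedged sketch; details vary by distribution), enabling the serial getty inside the guest is usually enough:

$ sudo systemctl enable --now serial-getty@ttyS0.service   # run inside the guest
$ virsh console centos8-uefi                               # then connect from the host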
1.15. Rename Virtual machines

If you ever want to rename a virtual machine, refer to the following guide.
1.16. Display Domain ID of Virtual machines

To find the domain id of a running guest virtual machine, run:
$ virsh domid centos8-uefi
2

Please note that the guest should be running to get its domain id.
1.17. Display Domain name of Virtual machines

To get the domain name of a running VM, run:
$ virsh domname <domain-id or domain-uuid>

Example:
$ virsh domname 2
centos8-uefi

Here, 2 is the domain id.
1.18. Display UUID of Virtual machines

To find the guest machine UUID, run:
$ virsh domuuid <domain-name or domain-id>

Example:
$ virsh domuuid centos8-uefi

Or,
$ virsh domuuid 2

Sample output:
de4100c4-632e-4c09-8dcf-bbde29170268

1.19. Display Virtual machines details

To display a guest machine's information, use domain name, domain id or domain uuid like below:
$ virsh dominfo centos8-uefi

Or,
$ virsh dominfo 2

Or,
$ virsh dominfo de4100c4-632e-4c09-8dcf-bbde29170268

Sample output:
Id: -
Name: centos8-uefi
UUID: de4100c4-632e-4c09-8dcf-bbde29170268
OS Type: hvm
State: shut off
CPU(s): 2
Max memory: 2097152 KiB
Used memory: 2097152 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: apparmor
Security DOI: 0

1.20. Display KVM host information

To get the information of your host system, run:
$ virsh nodeinfo

Sample output:
CPU model: x86_64
CPU(s): 4
CPU frequency: 1167 MHz
CPU socket(s): 1
Core(s) per socket: 2
Thread(s) per core: 2
NUMA cell(s): 1
Memory size: 8058840 KiB

1.21. Display Virtual CPU information

To display the virtual CPU information, run:
$ virsh vcpuinfo <domain-id or domain-name or domain-uuid>

Example:
$ virsh vcpuinfo centos8-uefi
VCPU: 0
CPU: 3
State: running
CPU time: 5.6s
CPU Affinity: yyyy

VCPU: 1
CPU: 1
State: running
CPU time: 0.0s
CPU Affinity: yyyy

1.22. Find IP address of Virtual machines

Finding the IP address of a virtual machine is not a big deal. If you have console access to the virtual machine, you can easily find its IP address using the "ip" command. However, it is also possible to identify a KVM VM's IP address without having to access its console. The following guide explains how to find the IP address of a KVM virtual machine, and a couple of quick one-liners are shown below.
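For instance (a hedged sketch using the domain and the default NAT network from this guide), virsh itself can usually report the address, either from the guest interfaces or from the DHCP leases of the virtual network:

$ virsh domifaddr centos8-uefi
$ virsh net-dhcp-leases default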
1.23. Delete Virtual machines

If you don't want a VM anymore, simply delete it like below:
$ virsh destroy centos8-uefi
$ virsh undefine centos8-uefi

The first command will forcibly stop the VM if it is already running. And the second command will undefine and delete it completely.
You can further use the following options to delete the storage volumes and snapshots as well.
--managed-save remove domain managed state file
--storage remove associated storage volumes (comma separated list of targets or source paths) (see domblklist)
--remove-all-storage remove all associated storage volumes (use with caution)
--delete-storage-volume-snapshots delete snapshots associated with volume(s)
--wipe-storage wipe data on the removed volumes
--snapshots-metadata remove all domain snapshot metadata (vm must be inactive)

2. Manage Virtual networks

Hope you learned how to manage KVM virtual machines with the Virsh command in Linux. This section lists the important commands to manage KVM virtual networks in Linux using the virsh command line utility.
2.1. List virtual networks

To list the available virtual networks, run:
$ virsh net-list
Name State Autostart Persistent
--------------------------------------------
default active yes yes

As you can see, I have only one virtual network, which is the default one.
2.2. Display virtual network details

To view the details of a virtual network, run:
$ virsh net-dumpxml default

Replace "default" with your network name in the above command.
Sample output:
<network connections='1'>
  <name>default</name>
  <uuid>ce25d978-e455-47a6-b545-51d01bcb9e6f</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:ee:35:49'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

2.3. Start virtual networks

To start an inactive network, run:
$ virsh net-start <Name-Of-Inactive-Network>

To auto-start a network:
$ virsh net-autostart <network-name>

2.4. Create virtual networks XML dump

To create the XML configuration file of an existing virtual network, run:
$ virsh net-dumpxml default > default.xml

The above command will create XML config of the "default" network and save it in a file named "default.xml" in the current directory.
You can view the XML file using cat command:
$ cat default.xml
<network connections='1'>
  <name>default</name>
  <uuid>ce25d978-e455-47a6-b545-51d01bcb9e6f</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:ee:35:49'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

2.5. Create new virtual networks from XML file

To create a new virtual network using an existing XML file and start it immediately, run:
$ virsh net-create <Name-of-XMLfile>

If you want to create a network from an XML file but don't want to start it automatically, run:
$ virsh net-define <Name-of-XMLfile>

2.6. Deactivate virtual networks

To deactivate an active network, run:
$ virsh net-destroy <network-name>

2.7. Delete virtual networks

To delete a virtual network, deactivate it first as shown above and then run:
$ virsh net-undefine <Name-Of-Inactive-Network>

Virsh has a lot of commands and options. Learning to use the Virsh command line tool thoroughly is enough to set up a complete virtual environment in Linux. You don't need any GUI applications.
For more details, refer to the virsh man pages.
$ man virsh

3. Manage KVM guests graphically

Remembering all virsh commands is nearly impossible, and also unnecessary. If you find it hard to perform all KVM management tasks from the command line, you can try graphical KVM management tools such as Virt-manager and Cockpit.
Conclusion
- Manage KVM Virtual Machines Using Cockpit Web Console
- How To Manage KVM Virtual Machines With Virt-Manager
If you know how to manage KVM virtual machines with the Virsh management user interface in Linux, you're halfway to managing an enterprise-grade virtualization environment. Setting up KVM and managing KVM virtual machines using the virsh command are very important skills for all Linux administrators.
Featured image by Elias Sch. from Pixabay .
Jan 19, 2021 | www.debugpoint.com
UPDATED ON JANUARY 19, 2021
... ... ...
Install Wine in Ubuntu

Method 1: Install from Official Ubuntu Repository

The wine package is available in the Ubuntu official repository...
Method 2: Install Wine from the official Wine website (winehq)

Step 1: Run the below command to add the i386 architecture that is required for this method.
sudo dpkg --add-architecture i386

Step 2: Download the package signing key and add it to your system.
wget -O - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -

Add the custom Wine official repository for Ubuntu 20.04 LTS focal fossa. If you are using any other Ubuntu version, replace focal with the respective release name (e.g. bionic).

sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ focal main'

Refresh the package list using the below command.
sudo apt-get update

Finally, run the below command to install the latest Wine 6.0 stable in Ubuntu 20.04 LTS.
sudo apt-get install --install-recommends winehq-stable
Installing Wine

Wait until the installation is complete. While installing, make sure you install the required packages wine-mono and gecko when prompted. These are needed for effective usage of Wine, because many Windows programs depend on the .NET framework and need the open-source mono replacement.
wine-mono package install for Wine

After the installation is finished, you can check the Wine installation by running the below command.
wine --version
Install programs using Wine 6.0

Installing any Windows executable with Wine 6.0 is easy after the install. All you need to do is run the .exe files using the wine installer. For example, in this guide, I have used the notepad++ .exe file for the demo.
Download the Notepad++ .exe installer from the official website. Right-click and open the .exe using the Wine loader.
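Alternatively, you can launch the installer from a terminal with the wine command (a minimal sketch; the file name below is just a placeholder for whatever you downloaded):

$ wine ~/Downloads/npp-installer.exe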
Then you can follow the onscreen instructions of the installer to install the program as you do in Windows.
After successful installation, you should get the application shortcut in the application menu.
Uninstall Wine from Ubuntu the proper way

Uninstalling the wine package properly is an event by itself, because a typical apt remove doesn't remove all the additional directories and files that are created by Wine under your home directory. And worse, these directories take up a significant amount of space as well.
So, to properly remove Wine from your Ubuntu 20.04 (or any version), follow the below commands and steps carefully. I have not used "rm -r" terminal commands here, for safety. Instead, use the file manager and delete the directories one by one.
First, run the below command to remove the winehq-stable package.
sudo apt-get remove --purge winehq-stable

Then open the file manager and go to your home directory. Make sure you can view the hidden files using CTRL+H.
Then remove all of the following directories and their contents with the Delete key.
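As a rough, hedged guide (these paths are typical Wine leftovers, not a list taken from this article; verify each one exists before deleting):

~/.wine
~/.local/share/applications/wine
~/.config/menus/applications-merged/   # any wine-* menu entries here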
Aug 13, 2020 | thehftguy.com
- zookeeper says: 22 OCTOBER 2019 AT 11:44
I don't like the tone of this article, but I think you're mostly right. The industry (Google, etc.) has invested (sorry, donated) a lot of money and resources to their CNCF foundation to make sure that Docker Inc loses its foot in the door to the Enterprise market. Why did they do it? Well, Docker is a product (install and run it, buy a license for more features, buy a support contract). Kubernetes is an open source project (not a product). Sure, you can install and maintain it yourself with kubeadm. But to really use it for stable systems you still better buy a product. And these players all offer such products based on Kubernetes (GCP/Azure Kubernetes, AKS etc). And because multi-cloud is a thing that customers want there's even space for new companies like Heptio. No surprise that they are founded by and operated by ex-employees of these large companies.
Docker Inc. is doomed and the only concern of the industry is what will happen to the container registry if they go down. Maybe it will die and "pay for storage" registries (GCP, AWS, etc) will become the new standard. Or maybe it could get donated to the CNCF. I don't think the container registry really is an asset for Docker Inc. It's mostly a liability. It must cost a lot to operate it, and cost probably increases with increasing popularity Kubernetes and the cloud.
So the big players in the cloud industry teamed up as a cartel to extinguish/kill/murder a startup with its "killer" tech. And they did it in a way that will never trigger antitrust because it's all open source and anyone can participate (heck, even Docker Inc. is a lowest-tier member of the CNCF).
The perfect murder!!
- thehftguy says: 22 OCTOBER 2019 AT 23:43
Thank you. Mostly agreed.
Reworded a few things, you may have another read. Can't believe this went live on HN 5 minutes after the first draft was published. I would add that Docker has missed quite a few steps; they were far from spectators to their fate.
- Derek says: 23 OCTOBER 2019 AT 03:07
They're not the same thing. Docker is containers, kubernetes is orchestration of containers/machines
- Liu Zhiyong (@liu_zhiyong) says: 23 OCTOBER 2019 AT 03:30
This article is a surprising page-turner! I am an editor at InfoQ China, and I'm this close, this close to translating it into Chinese and reaching our China readers! Without a doubt, the Chinese translated edition will include the URL and title of the original! If it's ok, reply to me, thank you!
- thehftguy says: 23 OCTOBER 2019 AT 23:18
You can translate as long as you specify that it is a translation and link to the original article.
- Liu Zhiyong (@liu_zhiyong) says: 24 OCTOBER 2019 AT 02:19
Thank you, I will do that. BTW, what does "f500" mean? Does it mean Fortune 500? And what does "f50" mean?
- thehftguy says: 24 OCTOBER 2019 AT 19:51
Fortune 500 and Fortune 50
- Liu Zhiyong (@liu_zhiyong) says: 24 OCTOBER 2019 AT 09:49
hello, the Chinese translated edition is here: https://www.infoq.cn/article/sYNGKaC8RkgGWcTZkuBd
- Justin says: 23 OCTOBER 2019 AT 04:02
I'm actually new to this world of tech. As such, this was eye opening for me. Any recommendations on where best to start my Kubernetes training?
- Ignacio says: 23 OCTOBER 2019 AT 14:24
Cool post, you called that Kubernetes was the future so I guess that makes you Mystic-HFTGuy.
You implied BlockChain may be a fad, any thoughts why this could be?
- Curt J. Sampson says: 25 OCTOBER 2019 AT 10:39
Blockchain itself isn't a fad: it's been the core of Git (and some other distributed VCSs) since before cryptocurrencies existed, and Git alone is far more widely used than all cryptocurrencies put together. (It's an interesting exercise to look at the components of a cryptocurrency, such as blockchain, a distributed ledger, and consensus systems, and consider how use of a Git repo implements these.)
There's certainly a faddish bubble around cryptocurrencies, but the core technologies it uses and the way it uses them are used in other products and have been well accepted for years.
- Curt J. Sampson says: 25 OCTOBER 2019 AT 10:33
Well, this is not entirely unexpected if you look at what Docker is. Containers are just processes with special configuration; Docker does not run the "containers" themselves but merely configures the kernel to do so.
Orchestration tools such as Kubernetes, too, are configuration tools, but in this case configuring Docker to configure the kernel. Once that layer is in front of Docker, Docker itself is heading toward becoming a commodity: any other container configuration tool that can do the same work can replace Docker, and the orchestration tool itself can even take over at least some of the configuration that Docker was doing.
So Docker is being pressed on the one side by orchestration tools that also do configuration into the wall of the kernel actually running the containers, which Docker can't do. This squeeze doesn't leave it any room to grow, except to try to compete with the orchestration tools. Once Docker let those get ahead of what it provided, the writing was on the wall.
- ΞXΤЯ3МΞ says: 25 OCTOBER 2019 AT 16:49
I wouldn't really say it's the demise of Docker. My 2 cents.
- DevOps Guy says: 29 OCTOBER 2019 AT 07:32
Sorry, but most people using Kubernetes use Docker, and there are lots of jobs that require Docker https://www.linkedin.com/jobs/docker-jobs/ as a DevOps engineer.
So no, Docker ain't going anywhere; just because they can't find a business model to fund themselves doesn't mean they are going down the drain.
This article sounds like it's clickbait. 😉
- thehftguy says: 29 OCTOBER 2019 AT 19:07
Clickbait aside, Docker is already downsizing.
With no revenues and unable to raise another hundred million as a shrinking business, it might actually go down the drain sooner rather than later.
- zookeeper says: 31 OCTOBER 2019 AT 14:02
yes, Docker Inc. seems to have cash flow issues – at least on the longer term:
https://www.cnbc.com/2019/09/27/docker-is-trying-to-raise-money-following-arrival-of-ceo-rob-bearden.html
- Luke Rawlins says: 5 MARCH 2020 AT 01:12
I'm glad to see that VMware has been integrating Kubernetes into vSphere. For a little while it seemed like they might've missed the boat on the whole container thing, but it'll be nice to not have to run separate pieces of infrastructure to have the best of both worlds on this.
Aug 01, 2020 | ostechnix.com
Quickly Build Virtual Machine Images With Virt-builder (written by Sk, July 31, 2020)
Virt-builder is a command line tool for building a variety of virtual machine images for local or cloud use, quickly and easily. It also has many options to customize the images. You can install new applications on the VM image, set the hostname, set the root password, run a command or script when the guest VM boots for the first time, add or edit files in the disk image, and more. All of these tasks can be done from the command line and don't require root permissions.
Virt-builder downloads the cleanly prepared, digitally signed OS templates, so you don't have to manually install the OS. All you have to do is just use the Virt-manager GUI or Virt-install command line tool to instantly fire up the VMs with the predefined templates. Virt-builder provides minimal OS templates for popular Linux and Unix variants. You can of course create your own template as well.
Install Virt-builder on Linux

Virt-builder is part of the Libguestfs library, so make sure you have installed it as described in the following guide.
Build Virtual Machine Images With Virt-builder

Building virtual machine images with Virt-builder is quite easy and straightforward.
List available virtual machine templates

First, list the available OS templates. To do so, run:
$ virt-builder --list

As of writing this guide, the following templates were available:
centos-6 x86_64 CentOS 6.6 centos-7.0 x86_64 CentOS 7.0 centos-7.1 x86_64 CentOS 7.1 centos-7.2 aarch64 CentOS 7.2 (aarch64) centos-7.2 x86_64 CentOS 7.2 centos-7.3 x86_64 CentOS 7.3 centos-7.4 x86_64 CentOS 7.4 centos-7.5 x86_64 CentOS 7.5 centos-7.6 x86_64 CentOS 7.6 centos-7.7 x86_64 CentOS 7.7 centos-7.8 x86_64 CentOS 7.8 centos-8.0 x86_64 CentOS 8.0 centos-8.2 x86_64 CentOS 8.2 cirros-0.3.1 x86_64 CirrOS 0.3.1 cirros-0.3.5 x86_64 CirrOS 0.3.5 debian-10 x86_64 Debian 10 (buster) debian-6 x86_64 Debian 6 (Squeeze) debian-7 sparc64 Debian 7 (Wheezy) (sparc64) debian-7 x86_64 Debian 7 (wheezy) debian-8 x86_64 Debian 8 (jessie) debian-9 x86_64 Debian 9 (stretch) fedora-26 aarch64 Fedora® 26 Server (aarch64) fedora-26 armv7l Fedora® 26 Server (armv7l) fedora-26 i686 Fedora® 26 Server (i686) fedora-26 ppc64 Fedora® 26 Server (ppc64) fedora-26 ppc64le Fedora® 26 Server (ppc64le) fedora-26 x86_64 Fedora® 26 Server fedora-27 aarch64 Fedora® 27 Server (aarch64) fedora-27 armv7l Fedora® 27 Server (armv7l) fedora-27 i686 Fedora® 27 Server (i686) fedora-27 ppc64 Fedora® 27 Server (ppc64) fedora-27 ppc64le Fedora® 27 Server (ppc64le) fedora-27 x86_64 Fedora® 27 Server fedora-28 i686 Fedora® 28 Server (i686) fedora-28 x86_64 Fedora® 28 Server fedora-29 aarch64 Fedora® 29 Server (aarch64) fedora-29 i686 Fedora® 29 Server (i686) fedora-29 ppc64le Fedora® 29 Server (ppc64le) fedora-29 x86_64 Fedora® 29 Server fedora-30 aarch64 Fedora® 30 Server (aarch64) fedora-30 i686 Fedora® 30 Server (i686) fedora-30 x86_64 Fedora® 30 Server fedora-31 x86_64 Fedora® 31 Server fedora-32 x86_64 Fedora® 32 Server freebsd-11.1 x86_64 FreeBSD 11.1 scientificlinux-6 x86_64 Scientific Linux 6.5 ubuntu-10.04 x86_64 Ubuntu 10.04 (Lucid) ubuntu-12.04 x86_64 Ubuntu 12.04 (Precise) ubuntu-14.04 x86_64 Ubuntu 14.04 (Trusty) ubuntu-16.04 x86_64 Ubuntu 16.04 (Xenial) ubuntu-18.04 x86_64 Ubuntu 18.04 (bionic) fedora-18 x86_64 Fedora® 18 fedora-19 x86_64 Fedora® 19 fedora-20 x86_64 Fedora® 20 fedora-21 aarch64 Fedora® 21 Server (aarch64) fedora-21 armv7l Fedora® 21 Server (armv7l) fedora-21 ppc64 Fedora® 21 Server (ppc64) fedora-21 ppc64le Fedora® 21 Server (ppc64le) fedora-21 x86_64 Fedora® 21 Server fedora-22 aarch64 Fedora® 22 Server (aarch64) fedora-22 armv7l Fedora® 22 Server (armv7l) fedora-22 i686 Fedora® 22 Server (i686) fedora-22 x86_64 Fedora® 22 Server fedora-23 aarch64 Fedora® 23 Server (aarch64) fedora-23 armv7l Fedora® 23 Server (armv7l) fedora-23 i686 Fedora® 23 Server (i686) fedora-23 ppc64 Fedora® 23 Server (ppc64) fedora-23 ppc64le Fedora® 23 Server (ppc64le) fedora-23 x86_64 Fedora® 23 Server fedora-24 aarch64 Fedora® 24 Server (aarch64) fedora-24 armv7l Fedora® 24 Server (armv7l) fedora-24 i686 Fedora® 24 Server (i686) fedora-24 x86_64 Fedora® 24 Server fedora-25 aarch64 Fedora® 25 Server (aarch64) fedora-25 armv7l Fedora® 25 Server (armv7l) fedora-25 i686 Fedora® 25 Server (i686) fedora-25 ppc64 Fedora® 25 Server (ppc64) fedora-25 ppc64le Fedora® 25 Server (ppc64le) fedora-25 x86_64 Fedora® 25 Server opensuse-13.1 x86_64 openSUSE 13.1 opensuse-13.2 x86_64 openSUSE 13.2 opensuse-42.1 x86_64 openSUSE Leap 42.1 opensuse-tumbleweed x86_64 openSUSE TumbleweedList available virtual machines using Virt-builder
As you can see, there are multiple OS templates available.
Before building a virtual machine image, you might want to look into the installation notes of the guest OS to know what is in there.
For example, to view the install notes of Debian 10, run:
$ virt-builder --notes debian-10

Sample output:
Debian 10 (buster)

This is a minimal Debian install. This image is so very minimal that it only includes an ssh server. This image does not contain SSH host keys. To regenerate them use:

--firstboot-command "dpkg-reconfigure openssh-server"

This template was generated by a script in the libguestfs source tree: builder/templates/make-template.ml
Associated files used to prepare this template can be found in the same directory.

Build a virtual machine image

I wanted to download the OS templates in a specific directory, so I created this directory:
$ mkdir virtbuilder
$ cd virtbuilder/

Let us build the Debian 10 virtual machine using command:
$ virt-builder debian-10

Sample output:
[ 4.8] Downloading: http://builder.libguestfs.org/debian-10.xz
##################################################################################### 100.0%
[ 83.2] Planning how to build this image
[ 83.2] Uncompressing
[ 101.2] Opening the new disk
[ 119.8] Setting a random seed
virt-builder: warning: random seed could not be set for this type of guest
[ 119.9] Setting passwords
virt-builder: Setting random password of root to 66xW1CaIqfM8km2v
[ 121.5] Finishing off
Output file: debian-10.img
Output size: 6.0G
Output format: raw
Total usable space: 5.8G
Free space: 4.9G (84%)

Build virtual machine images with virt-builder in Linux
As you can see, this command has built a minimal Debian 10 image. It will not have any user accounts, only a random root password and the bare minimum of installed software.
The output name of the image will be the same as the template name. You can change it as per your liking using the -o option:
$ virt-builder debian-10 -o ostechnix.img

By default, the image format is img. You can convert it to a different format, for example Qcow2, like below:
$ virt-builder debian-10 --format qcow2

By default, Virt-builder will build an image with the same architecture as the host OS. For example, if your host OS is 64-bit, it will build a 64-bit VM. You can change this to 32-bit (if available) using the --arch option.
$ virt-builder debian-10 --arch i686

Want to build a custom-size image? It is also possible. The following command will build a VM with size 50 GB:
$ virt-builder debian-10 --size 50G

The guest OS is resized automatically using the virt-resize command as it is copied to the output.
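If you want to double-check what was produced (a small aside; it assumes the qemu-img tool that normally ships with a KVM setup is installed), qemu-img can report the format and virtual size of the output file:

$ qemu-img info debian-10.img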
Set root password

Like I already mentioned, a random password will be assigned to the root user account while building the image. If you want to set a specific password for the root user, use the --root-password option like below:
$ virt-builder centos-8.2 --format qcow2 --root-password password:centos

Sample output:
[ 5.1] Downloading: http://builder.libguestfs.org/centos-8.2.xz
##################################################################################### 100.0%
[ 249.2] Planning how to build this image
[ 249.2] Uncompressing
[ 271.3] Converting raw to qcow2
[ 281.1] Opening the new disk
[ 319.9] Setting a random seed
[ 320.4] Setting passwords
[ 323.0] Finishing off
Output file: centos-8.2.qcow2
Output size: 6.0G
Output format: qcow2
Total usable space: 5.3G
Free space: 4.0G (74%)

The above command will build a CentOS 8.2 image and assign the password "centos" to the root user.
You can also set the password from a text file:
$ virt-builder centos-8.2 --root-password file:~/ostechnix.txt

To disable the root password, run:
$ virt-builder centos-8.2 --root-password disabled

Lock root account:
$ virt-builder centos-8.2 --root-password locked

Lock root account and disable root password:
$ virt-builder centos-8.2 --root-password locked:disabled

To assign a root password but lock the root account, use the following options:
--root-password locked:file:FILENAME
--root-password locked:password:PASSWORD

We can use the root password after unlocking the root user with the "passwd -u" command.
Create users

To create user accounts while building a virtual machine image, run:
$ virt-builder centos-8.2 --firstboot-command 'useradd -m -p "" sk ; chage -d 0 sk'

The above command will create a user called "sk" with no password and force them to set a password at their first login.
Set hostname

To set the hostname of the VM:
$ virt-builder centos-8.2 --hostname virt.ostechnix.local

Sample output:
[ 4.7] Downloading: http://builder.libguestfs.org/centos-8.2.xz
[ 7.2] Planning how to build this image
[ 7.2] Uncompressing
[ 31.0] Opening the new disk
[ 41.9] Setting a random seed
[ 42.0] Setting the hostname: virt.ostechnix.local
[ 42.1] Setting passwords
virt-builder: Setting random password of root to MRn7fj1GSaeCAHQx
[ 44.4] Finishing off
Output file: centos-8.2.img
Output size: 6.0G
Output format: raw
Total usable space: 5.3G
Free space: 4.0G (74%)

Install software on VM image

To install packages on a VM, run:
$ virt-builder debian-10 --install vim

Sample output:
[ 5.8] Downloading: http://builder.libguestfs.org/debian-10.xz
[ 7.4] Planning how to build this image
[ 7.4] Uncompressing
[ 25.3] Opening the new disk
[ 29.7] Setting a random seed
virt-builder: warning: random seed could not be set for this type of guest
[ 29.8] Installing packages: vim
[ 93.2] Setting passwords
virt-builder: Setting random password of root to 45Hj5yxh8vRqLDcu
[ 94.9] Finishing off
Output file: debian-10.img
Output size: 6.0G
Output format: raw
Total usable space: 5.8G
Free space: 4.8G (82%)

To install multiple packages, mention them within quotes, comma-separated, like below:
$ virt-builder debian-10 --install "apache2,htop"Update all packages in the VM:
$ virt-builder centos-8.2 --update

If your VM uses SELinux, you need to do SELinux relabeling after installing or updating packages:
$ virt-builder centos-8.2 --update --selinux-relabel

Customize VM images

Virt-builder has many options to customize an image. For instance, you can run a specific command or script when the VM boots for the first time, using the command:
$ virt-builder debian-10 --firstboot-command 'apt -y update'

To append a line to a file in the VM, run:
$ virt-builder centos-8.2 --append-line '/etc/hosts:192.168.225.1 server.ostechnix.local'

Caching templates

By default, all templates will be downloaded from the network for the first time. Since the size of the templates is large, the downloaded templates will be cached in the user's home directory.
You can print the details of the cache directory and which templates are currently cached using the following command:
$ virt-builder --print-cache

Sample output:
cache directory: /home/sk/.cache/virt-builder
[...]
centos-7.8 x86_64 no
centos-8.0 x86_64 no
centos-8.2 x86_64 cached
cirros-0.3.1 x86_64 no
cirros-0.3.5 x86_64 no
debian-10 x86_64 cached
debian-6 x86_64 no
debian-7 sparc64 no
[...]

You can also verify it by manually looking in the cache folder:
$ ls $HOME/.cache/virt-builder
centos-8.2.x86_64.1 debian-10.x86_64.1

To download all available templates to your local cache folder, run:
$ virt-builder --cache-all-templates

If you don't want to cache the template while building the image, use the --no-cache option.
To delete all cached templates, run:
$ virt-builder --delete-cache
[ 0.0] Deleting: /home/sk/.cache/virt-builder

Importing disk images into a hypervisor

Well, you have downloaded your desired OS and customized it as per your liking. Now what? Just import the image and run a VM using the newly created disk image with a hypervisor. We have already written a step-by-step guide to creating a KVM virtual machine using a Qcow2 image. That guide is written specifically for Qcow2; however, the procedure is the same for importing .img format disk images too.
Virt-builder has hundreds of commands and options. I covered only the basic commands here. For more details, refer to the manual page.
$ man virt-builder

Featured Photo by Igor Starkov from Pexels.
Jul 10, 2020 | www.ostechnix.com
Have you decided to switch from Oracle VirtualBox to Kernel-based Virtual Machine? Great! This guide explains how to migrate VirtualBox VMs into KVM VMs in Linux. You might be running some important guest machines on VirtualBox. Instead of creating new KVM guests with the same configuration, you can easily migrate the existing VirtualBox machines to KVM as described here.
Migrate Virtualbox VMs Into KVM VMs In Linux

First, power off all VMs hosted with KVM and VirtualBox.
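If you are unsure what is still running on the VirtualBox side, VBoxManage can list and shut down running guests (a hedged sketch; the VM name "CentOS 8 Server" is an assumption matching the guest used later in this guide):

$ VBoxManage list runningvms
$ VBoxManage controlvm "CentOS 8 Server" acpipowerbutton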
The default disk image format of a Virtualbox VM is VDI .
We can find the list of virtualbox disk images and their location using command:
$ vboxmanage list hdds

Or,
$ VBoxManage list hdds

Sample output:
UUID: ecfb6d5c-aa10-4ffc-b40c-b871f0404da8
Parent UUID: base
State: created
Type: normal (base)
Location: /home/sk/VirtualBox VMs/CentOS 8 Server/CentOS 8 Server.vdi
Storage format: VDI
Capacity: 20480 MBytes
Encryption: disabled

UUID: 34a5709f-188c-4040-98f9-6093628c3d88
Parent UUID: base
State: created
Type: normal (base)
Location: /home/sk/VirtualBox VMs/Ubuntu 20.04 Server/Ubuntu 20.04 Server.vdi
Storage format: VDI
Capacity: 20480 MBytes
Encryption: disabled

As you can see, I have two virtualbox VMs.
Now I am going to convert the CentOS 8 machine's disk image to a raw disk format using the "vboxmanage" command:
$ vboxmanage clonehd --format RAW "/home/sk/VirtualBox VMs/CentOS 8 Server/CentOS 8 Server.vdi" CentOS_8_Server.img

Or,
$ VBoxManage clonehd --format RAW "/home/sk/VirtualBox VMs/CentOS 8 Server/CentOS 8 Server.vdi" CentOS_8_Server.img

If a disk image's name contains spaces, put the path inside quotes as shown above.
Sample output:
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'RAW'. UUID: afff3db8-b460-4f68-9c02-0f5d0d766c8e

The RAW image is too big to use as it is. So let us convert the RAW image into the KVM disk format, i.e. compressed qcow2, using the qemu-img command:
$ qemu-img convert -f raw CentOS_8_Server.img -O qcow2 CentOS_8_Server.qcow2

Done! We have converted the Virtualbox disk image format VDI into the KVM image format qcow2.
You can now create a new KVM instance by importing the virtual disk image file from the command line (a quick sketch follows below) or using any graphical KVM management application such as Virt-manager or the Cockpit web console; see the site's separate guide on creating KVM machines from qcow2 images for more details.
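As a rough illustration (a hedged sketch rather than this article's own procedure; the name, memory, vCPU and --os-variant values are assumptions you should adjust), virt-install can import the converted disk directly:

$ virt-install --import --name centos8-kvm \
    --memory 2048 --vcpus 2 \
    --disk path=CentOS_8_Server.qcow2,format=qcow2 \
    --os-variant centos8 --network network=default \
    --noautoconsole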
Hope this helps.
Thanks for stopping by!
Have a Good day!!
Jun 11, 2020 | www.redhat.com
How to use the --privileged flag with container engines Let's take a deep dive into what the --privileged flag does for container engines such as Podman, Docker, and Buildah.
Posted: June 8, 2020 | by Dan Walsh (Red Hat) Linux Containers
Many users get confused about the --privileged flag. Users often equate this flag to unconfined or full root access to the host system. In this blog, I discuss what the --privileged flag does with container engines such as Podman , Docker , and Buildah .
What does the --privileged flag cause container engines to do? What privileges does it give to the container processes?
Executing container engines with the --privileged flag tells the engine to launch the container process without any further "security" lockdown.
Note: In rootless mode, container engines do not run with more privilege than the user executing the command. Containers are blocked from additional access by Linux anyway: your processes still run as the user that launched them on the host. So, for example, running --privileged does not suddenly allow the container process to bind to a port below 1024. The kernel does not allow non-root users to bind to these ports, so users launching container processes are not allowed that access either.
The bottom line is that using the --privileged flag does not tell the container engines to grant anything extra: the --privileged flag does not add any privilege over what the processes launching the containers have. Tools like Podman and Buildah do NOT give any additional access beyond the processes launched by the user.
To understand the --privileged flag, you need to understand the security enabled by container engines, and what is disabled.
Read-only kernel file systems
Kernel file systems provide a mechanism for a process to alter the way the kernel runs. They also provide information to processes on the system. By default, we don't want container processes to modify the kernel, so we mount kernel file systems as read-only within the container. The read-only mounts prevent privileged processes and processes with capabilities in the user namespace to write to the kernel file systems.
$ podman run fedora mount | grep '(ro' sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime,seclabel) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",mode=755,uid=3267,gid=3267) cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio) cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,memory) cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,freezer) cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,cpuset) cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,pids) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct) cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,perf_event) proc on /proc/asound type proc (ro,relatime) proc on /proc/bus type proc (ro,relatime) proc on /proc/fs type proc (ro,relatime) proc on /proc/irq type proc (ro,relatime) proc on /proc/sys type proc (ro,relatime) proc on /proc/sysrq-trigger type proc (ro,relatime) tmpfs on /proc/acpi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267) tmpfs on /proc/scsi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267) tmpfs on /sys/firmware type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267) tmpfs on /sys/fs/selinux type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267)Whereas when I run as
--privileged
, I get:
$ podman run --privileged fedora mount | grep '(ro'
$
None of the kernel file systems are mounted read-only in --privileged mode. Usually, this is required to allow processes inside of the container to actually modify the kernel through the kernel file system.
The /proc file system is namespace-aware, and certain writes can be allowed, so we don't mount it read-only. However, specific directories in the /proc file system need to be protected from writing, and in some instances, from reading. In these cases, the container engines mount tmpfs file systems over potentially dangerous directories, preventing processes inside of the container from using them.
$ podman run fedora mount | grep /proc.*tmpfs tmpfs on /proc/acpi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c255,c491",uid=3267,gid=3267) devtmpfs on /proc/kcore type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755) devtmpfs on /proc/keys type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755) devtmpfs on /proc/latency_stats type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755) devtmpfs on /proc/timer_list type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755) devtmpfs on /proc/sched_debug type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755) tmpfs on /proc/scsi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c255,c491",uid=3267,gid=3267)With
--privileged
, the mount points are not masked over:
$ podman run --privileged fedora mount | grep /proc.*tmpfs
$
Linux capabilities
Linux capabilities are a mechanism for limiting the power of root. The Linux kernel splits the privileges of root (superuser) into a series of distinct units, called capabilities. In the case of rootless containers, container engines still use user namespace capabilities. These capabilities limit the power of root within the user namespace. Container engines launch the containers with a limited number of capabilities enabled to control what goes on inside of the container by default.
$ podman run -d fedora sleep 100 8b1facf07f11486e6379d14432f7c7f89da262d2aba8b55ff52af8570d0a17a9 $ podman top -l capeff EFFECTIVE CAPS AUDIT_WRITE,CHOWN,DAC_OVERRIDE,FOWNER,FSETID,KILL,MKNOD,NET_BIND_SERVICE,NET_RAW,SETFCAP,SETGID,SETPCAP,SETUID,SYS_CHROOTWhen you launch a container with
--privileged
mode, the container launches with the full list of capabilities.
$ podman run --privileged -d fedora sleep 100 d571acd1ccda2e6eb31602bf509e21d632cca3d8d524781b0a0123fef17e99f4 $ podman top -l capeff EFFECTIVE CAPS full
Note: In rootless containers, the container processes get full namespace capabilities. These are not the same as full root capabilities. These are NOT real capabilities, but only capabilities over the user namespace. For example, a process with CAP_SETUID is allowed to change its UID to all UIDs mapped into the user namespace, but is not allowed to change the UID to any UID not mapped into the user namespace. When running a rootful container without using user namespace, a process with CAP_SETUID IS allowed to change its UID to any UID on the system.
You can manipulate the capabilities available to a container without running in --privileged mode by using the --cap-add and --cap-drop flags. For example, if you want to run the container with all capabilities, you could execute:
$ podman run --cap-add=all -d fedora sleep 100 9d167c4c0980e70623598dd718b685c0aead6d32c4bb2da35f50f8a58cbc66ea $ podman top -l capeff EFFECTIVE CAPS full
Using --cap-drop=all --cap-add=setuid would run a container with only the setuid capability.
$ podman run --cap-drop=all --cap-add=setuid -d fedora sleep 100 d7f9954649024e20604ae995c9a05b1efcd7194b3e019f3495a24bfe4779c6aa $ podman top -l capeff EFFECTIVE CAPS SETUID
Here is a link to a talk I gave at Devcon.cz on ways to increase the security in containers. The talk covers a lot of these security features and how to make them better.
Syscall filtering - SECCOMP
Container engines control the syscall tables available to processes inside of the container. This limits the attack surface of the Linux kernel by preventing container processes from executing syscalls inside of the container. If a syscall could cause a kernel exploit and allow a container to break out, then if the syscall is not available to the container processes, you prevent the break out. By default, container engines drop many syscalls. We recently wrote a blog on how to drop many more.
$ podman run -d fedora sleep 100 7ba4decb298a0e38fe0140b8bf039a662f4cd0fd666cd7a7f95d1bc12fdddecc $ podman top -l seccomp SECCOMP filterIf you execute the
--privileged
flag, then the container engines do not use the SECCOMP syscall filters:
$ podman run --privileged -d fedora sleep 100 1469d3629d787e11100e3e9d011c97ff0249df1092b24af874f4e1be167f3852 $ podman top -l seccomp SECCOMP disabled
You can also turn off syscall filtering by using the --security-opt seccomp=unconfined option without using the full --privileged flag.
$ podman run --security-opt seccomp=unconfined -d fedora sleep 100 c18858a963d2e80e25ed1d118a6e48072047d69fc6efec23b26362408a8a71d3 $ podman top -l seccomp SECCOMP disabled
SELinux
SELinux is a labeling system. Every process and every file system object has a label. SELinux policies define rules about what a process label is allowed to do with all of the other labels on the system. I feel SELinux is the best tool for controlling file system break outs of containers. Container engines launch container processes with a single confined SELinux label, usually container_t , and then set the content inside of the container to be labeled container_file_t . The SELinux policy rules basically say that container_t processes can only read/write/execute files labeled container_file_t . If a container process escapes the container and attempts to write to content on the host, the Linux kernel denies access and only allows the container process to write to content labeled container_file_t .
$ podman run -d fedora sleep 100 d4194babf6b877c7100e79de92cd6717166f7302113018686cea650ea40bd7cb $ podman top -l label LABEL system_u:system_r:container_t:s0:c647,c780
When you run with the --privileged flag, SELinux labels are disabled, and the container runs with the label that the container engine was executed with. This label is usually unconfined and has full access to the labels that the container engine does. In rootless mode, the container runs with container_runtime_t . In root mode, it runs with spc_t . The bottom line on both of these labels is that there is no additional confinement on the container process beyond what was on the container engine process.
$ podman run --privileged -d fedora sleep 100 23770ed2fef88b6a674af733a7a80b0d29bfa6a6db2888edf810eaa55ee2d93e $ podman top -l label LABEL unconfined_u:system_r:container_runtime_t:s0
Like the other security mechanisms, SELinux confinement can also be disabled directly without requiring full --privileged mode.
$ podman run --security-opt label=disable -d fedora sleep 100 08d6170f71313bc98293c77686e41cebc3041e82eea189bd8c74d5b60290102f $ podman top -l label LABEL unconfined_u:system_r:container_runtime_t:s0
Namespaces
What sometimes surprises users is that namespaces are NOT affected by the --privileged flag. This means that the container processes are still living in the virtualization world of containers. Even though they don't have the security constraints enabled, they do not see all of the processes on the system or the host network, for example. Users can disable individual namespaces by using the --pid=host , --net=host , --user=host , --ipc=host , and --uts=host container engine flags. Years ago, I defined these containers as super privileged containers.
$ podman top -l | wc -l 2
As you can see, by default, top shows only one process running in the container, along with the header:
$ podman run --pid=host -d fedora sleep 100 a90f2ccc335343a649dfdd777e252319a16a786a801da2462d2a4dbe0d8f55ad $ podman top -l | wc -l 421
When I run the container with --pid=host , the container engine does not use the PID namespace, and the container processes see all of the processes on the host as well as the processes inside of the container.
Similarly, --net=host disables the network namespace, allowing the container processes to use the host network.
User namespace
The container engines' user namespace is not affected by the --privileged flag. Container engines do NOT use user namespace by default. However, rootless containers always use it to mount file systems and to use more than a single UID. In the rootless case, user namespace can not be disabled; it is required to run rootless containers. User namespaces prevent certain privileges and add considerable security.
Recent versions of Podman use containers.conf , which allows you to change the engine's default behavior when it comes to namespaces. If you wanted all of your containers to not use a network namespace by default, you could set this in containers.conf , as sketched below.
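For illustration only (an assumption, not text from Dan Walsh's article), such an override might look roughly like this; check containers.conf(5) for the exact key names supported by your Podman version:

# /etc/containers/containers.conf (or ~/.config/containers/containers.conf for rootless)
[containers]
# Assumed setting: make new containers share the host network namespace by default
netns = "host"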
Conclusion
As a security engineer, I actually do not like users running with the --privileged mode. I wish they would figure out what privileges their container requires and run with as much security as possible, or better yet, that they would redesign their application to run without requiring as many privileges. It's kind of like using setenforce 0 in the SELinux world, and you know how much I love that. But the bottom line is, we need users of container engines to understand what happens when they use the --privileged flag, and why sometimes they need to disable additional features to make their container execute successfully.
The open-source community is working on tools in addition to the container engines to make this possible. A couple of examples of these tools are:
- Udica : A tool for creating a custom SELinux policy based on the container's configuration.
- oci-seccomp-bpf-hook : A tool for discovering which system calls a container uses and automatically generating a custom seccomp rules filter.
Jan 27, 2020 | news.softpedia.com
Big news today for Linux gamers and ex-Windows users as the final release of the Wine 5.0 software is now officially available for download with numerous new features and improvements.
After being in development for more than one year, Wine 5.0 is finally here with a lot of enhancements, starting with support for multi-monitor configurations, the reimplementation of the XAudio2 low-level audio API, Vulkan 1.1.126 support, as well as built-in modules in PE (Portable Executable) format.
"This release is dedicated to the memory of Józef Kucia, who passed away in August 2019 at the young age of 30. Józef was a major contributor to Wine's Direct3D implementation, and the lead developer of the vkd3d project. His skills and his kindness are sorely missed by all of us," reads today's announcement .
Improvements to Windows games
Wine 5.0 also brings improvements to numerous Windows games, so Linux gamers would be happy to learn that they'll be able to better enjoy Brothers in Arms: Hell's Highway, Tomb Raider (2013), Tetris for Windows, The Five Cores, Far Cry 5, Sonic Mania, Serious Sam Classic, and The Witcher Enhanced Edition (GOG).
Furthermore, games like UFO: Extraterrestrials Gold, Skyrim (Steam), Splinter Cell: Blacklist, Emperor: Battle for Dune, The Old City: Leviathan, Giants: Citizen Kabuto, Rayman Origins (UPlay), The Evil Within, X Rebirth, Divinity: Original Sin 2, Magic: The Gathering Arena, and the Battle.net app should also work a lot better with Wine 5.0.
Among the Windows apps that received improvements in Wine 5.0, we can mention Acrobat Reader 11, PDF Eraser 1.5, PDF-XChange Viewer 2.5.x, Exact Audio Copy, Express Rip, dbpoweramp CD Ripper, Adobe DNG Converter 11.2+, MindManager Pro 7.0, 7-Zip , ABBYY FineReader 14, Pale Moon, Foxit Reader 6.12, uTorrent 2.2.0, and Xara Photo Graphic Designer 2013 (8.1.1).
Wine 5.0 brings numerous other improvements and bug fixes that you can study on the official changelog . Meanwhile, if you like running Windows apps and games on your GNU/Linux distribution, make sure you update to Wine 5.0 as soon as it arrives in the stable software repositories. You can also download the Wine 5.0 source tarball if you want to compile it yourself.
Dec 01, 2019 | linuxize.com
The
docker run
command takes the following form:docker run [OPTIONS] IMAGE [COMMAND] [ARG...]The name of the image from which the container should be created is the only required argument for the
docker run
command. If the image is not present on the local system, it is pulled from the registry.If no command is specified, the command specified in the Dockerfile's
CMD
orENTRYPOINT
instructions is executed when running the container. Starting from version 1.13, the Docker CLI has been restructured, and all commands have been grouped under the object they interact with.
Since the
run
command interacts with containers, now it is a subcommand ofdocker container
. The syntax of the new command is as follows:docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]The old, pre 1.13 syntax is still supported. Under the hood,
docker run
command is an alias todocker container run
. Users are encouraged to use the new command syntax.A list of all
Run the Container in the Foregrounddocker container run
options can be found on the Docker documentation page.By default, when no option is provided to the
docker run
command, the root process is started in the foreground. This means that the standard input, output, and error from the root process are attached to the terminal session.docker container run nginxThe output of the nginx process will be displayed on your terminal. Since, there are no connections to the webserver, the terminal is empty.
To stop the container, terminate the running Nginx process by pressing
Run the Container in Detached ModeCTRL+C
.To keep the container running when you exit the terminal session, start it in a detached mode. This is similar to running a Linux process in the background .
Use the
-d
option to start a detached container:docker container run -d nginx050e72d8567a3ec1e66370350b0069ab5219614f9701f63fcf02e8c8689f04faThe detached container will stop when the root process is terminated.
You can list the running containers using the
docker container ls
command.To attach your terminal to the detached container root process, use the
Remove the Container After Exitdocker container attach
command.By default, when the container exits, its file system persists on the host system.
The
--rm
options tellsdocker run
command to remove the container when it exits automatically:docker container run --rm nginxThe Nginx image may not be the best example to clean up the container's file system after the container exits. This option is usually used on foreground containers that perform short-term tasks such as tests or database backups.
Set the Container NameIn Docker, each container is identified by its
UUID
and name. By default, if not explicitly set, the container's name is automatically generated by the Docker daemon.Use the
--name
option to assign a custom name to the container:docker container run -d --name my_nginx nginxThe container name must be unique. If you try to start another container with the same name, you'll get an error similar to this:
docker: Error response from daemon: Conflict. The container name "/my_nginx" is already in use by container "9...c". You have to remove (or rename) that container to be able to reuse that name.Run
docker container ls -a
to list all containers, and see their names:docker container lsCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 9d695c1f5ef4 nginx "nginx -g 'daemon of " 36 seconds ago Up 35 seconds 80/tcp my_nginxThe meaningful names are useful to reference the container within a Docker network or when running docker CLI commands.
Publishing Container PortsBy default, if no ports are published, the process running in the container is accessible only from inside the container.
Publishing ports means mapping container ports to the host machine ports so that the ports are available to services outside of Docker.
To publish a port use the
-p
options as follows:-p host_ip:host_port:container_port/protocol
- If no
host_ip
is specified, it defaults to0.0.0.0
.- If no
protocol
is specified, it defaults to TCP.- To publish multiple ports, use multiple
-p
options.To map the TCP port 80 (nginx) in the container to port 8080 on the host localhost interface, you would run:
docker container run --name web_server -d -p 8080:80 nginxYou can verify that the port is published by opening
http://localhost:8080
in your browser or running the followingcurl
command on the Docker host:curl -I http://localhost:8080The output will look something like this:
HTTP/1.1 200 OK Server: nginx/1.17.6 Date: Tue, 26 Nov 2019 22:55:59 GMT Content-Type: text/html Content-Length: 612 Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT Connection: keep-alive ETag: "5dd3e500-264" Accept-Ranges: bytesSharing Data (Mounting Volumes)When a container is stopped, all data generated by the container is removed. Docker Volumes are the preferred way to make the data persist and share it across multiple containers.
To create and manage volumes, use the -v option as follows:
-v host_src:container_dest:options
- The
host_src
can be an absolute path to a file or directory on the host or a named volume.- The
container_dest
is an absolute path to a file or directory on the container.- Options can be
rw
(read-write) andro
(read-only). If no option is specified, it defaults torw
.To explain how this works, let's create a directory on the host and put an
index.html
file in it:mkdir public_html echo "Testing Docker Volumes" > public_html/index.htmlNext, mount the
public_html
directory into/usr/share/nginx/html
in the container:docker run --name web_server -d -p 8080:80 -v $(pwd)/public_html:/usr/share/nginx/html nginxInstead of specifying the absolute path to the
public_html
directory, we're using the$(pwd)
command, which prints the current working directory .Now, if you type
http://localhost:8080
in your browser, you should see the contents of theindex.html
file. You can also usecurl
:curl http://localhost:8080Testing Docker VolumesRun the Container InteractivelyWhen dealing with the interactive processes like
bash
, use the-i
and-t
options to start the container.The
-it
options tells Docker to keep the standard input attached to the terminal and allocate a pseudo-tty:docker container run -it nginx /bin/bashThe container's Bash shell will be attached to the terminal, and the command prompt will change:
root@1da70f1937f5:/#Now, you can interact with the container's shell and run any command inside of it.
In this example, we provided a command (
Conclusion/bin/bash
) as an argument to thedocker run
command that was executed instead of the one specified in the Dockerfile.Docker is the standard for packaging and deploying applications and an essential component of CI/CD, automation, and DevOps.
The
docker container run
command is used to create and run Docker containers.If you have any questions, please leave a comment below.
Oct 08, 2019 | www.reddit.com
AquaeyesTardis 18 points · 6 days agoCalling the uneducated people out on what they see as facts can be rewarding.
I wouldnt call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.
They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they dont hear it enough, and they never see it first hand.
Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.
After long enough standing on the edge and only hearing "jump!!", something stupid happens. level 3
ztherion Programmer/Infrastructure/Linux 51 points · 6 days agoApart from performance, what would be some of the downsides of containers? level 4
AirFell85 11 points · 6 days agoThere's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).
What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2 to n autoscaling deployment that shares hosting with other apps on n a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full time engineer to deal with it. level 5
justabofh 33 points · 6 days agoELI5:
More logistical layers require more engineers to support.
1 more reply
3 more replies level 4
Untgradd 6 points · 6 days agoContainers are great for stateless stuff. So your webservers/application servers can be shoved into containers. Think of containers as being the modern version of statically linked binaries or fat applications. Static binaries have the problem that any security vulnerability requires a full rebuild of the application, and that problem is escalated in containers (where you might not even know that a broken library exists)
If you are using the typical business application, you need one or more storage components for data which needs to be available, possibly changed and access controlled.
Containers are a bad fit for stateful databases, or any stateful component, really.
Containers also enable microservices, which are great ideas at a certain organisation size (if you aren't sure you need microservices, just use a simple monolithic architecture). The problem with microservices is that you replace complexity in your code with complexity in the communications between the various components, and that is harder to see and debug. level 5
malikto44 5 points · 6 days agoContainers are fine for stateful services -- you can manage persistence at the storage layer the same way you would have to manage it if you were running the process directly on the host.
6 more replies level 5
malikto44 3 points · 6 days agoBacking up containers can be a pain, so you don't want to use them for valuable data unless the data is stored elsewhere, like a database or even a file server.
For spinning up stateless applications to take workload behind a load balancer, containers are excellent.
9 more replies
33 more replies level 3
wildcarde815 Jack of All Trades 12 points · 6 days agoThe problem is that there is an overwhelming din from vendors. Everybody and their brother, sister, mother, uncle, cousin, dog, cat, and gerbil is trying to sell you some pay-by-the-month cloud "solution".
The reason is that the cloud forces people into monthly payments, which is a guarenteed income for companies, but costs a lot more in the long run, and if something happens and one can't make the payments, business is halted, ensuring that bankruptcies hit hard and fast. Even with the mainframe, a company could limp along without support for a few quarters until they could get cash flow enough.
If we have a serious economic downturn, the fact that businesses will be completely shuttered if they can't afford their AWS bill just means fewer companies can limp along when the economy is bad, which will intensify a downturn.
1 more reply
3 more replies level 2
pottertown 10 points · 6 days agoAlso if you can't work without cloud access you better have a second link. level 2
_The_Judge 27 points · 6 days agoOur company viewed the move to Azure less as a cost savings measure and more of a move towards agility and "right now" sizing of our infrastructure.
Your point is very accurate, as an example our location is wholly incapable of moving moving much to the cloud due to half of us being connnected via satellite network and the other half being bent over the barrel by the only ISP in town. level 2
laserdicks 57 points · 6 days agoI'm sorry, but I find management these days around tech wholly inadequate. The idea that you can get an MBA and manage shit you have no idea about is absurd and just wastes everyone elses time for them to constantly ELI5 so manager can do their job effectively. level 2
lokko12 71 points · 6 days agoCalling the uneducated people out on what they see as facts can be rewarding
Aaand political suicide in a corporate environment. Instead I use the following:
"I love this idea! We've actually been looking into a similar solution however we weren't able to overcome some insurmountable cost sinkholes (remember: nothing is impossible; just expensive). Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?" level 3
HORACE-ENGDAHL Jack of All Trades 61 points · 6 days agoWill this idea require an increase in internet speed to account for the traffic going to the azure cloud?
No.
...then people rent on /r/sysadmin about stupid investments and say "but i told them". level 4
linuxdragons 13 points · 6 days agoThis exactly, you can't compromise your own argument to the extent that it's easy to shoot down in the name of giving your managers a way of saving face, and if you deliver the facts after that sugar coating it will look even worse, as it will be interpreted as you setting them up to look like idiots. Being frank, objective and non-blaming is always the best route. level 4
messburg 61 points · 6 days agoYeah, this is a terrible example. If I were his manager I would be starting the paperwork trail after that meeting.
6 more replies level 3
vagrantprodigy07 13 points · 6 days agoI think it's quite an american thing to do it so enthusiastically; to hurt no one, but the result is so condescending. It must be annoying to walk on egg shells to survive a day in the office.
And this is not a rant against soft skills in IT, at all. level 4
widowhanzo 27 points · 6 days agoIt is definitely annoying. level 4
· edited 6 days agosuperkp 42 points · 6 days agoWe work with Americans and they're always so positive, it's kinda annoying. They enthusiastically say "This is very interesting" when in reality it sucks and they know it.
Another less professional example, one of my (non-american) co-workers always wants to go out for coffee (while we have free and better coffee in the office), and the American coworker is always nice like "I'll go with you but I'm not having any" and I just straight up reply "No. I'm not paying for shitty coffee, I'll make a brew in the office". And that's that. Sometimes telling it as it is makes the whole conversation much shorter :D level 5
egamma Sysadmin 39 points · 6 days agoMaybe the american would appreciate the break and a chance to spend some time with a coworker away from screens, but also doesn't want the shit coffee?
Sounds quite pleasant, honestly. level 6
auru21 5 points · 6 days agoYes. "Go out for coffee" is code for "leave the office so we can complain about management/company/customer/Karen". Some conversations shouldn't happen inside the building. level 7
Adobe_Flesh 6 points · 6 days agoAnd complain about that jerk who never joins them
1 more reply level 6
ITaggie Tier II Support/Linux Admin 10 points · 6 days agoInferior American - please compute this - the sole intention was consumption of coffee. Therefore due to existence of coffee in office, trip to coffee shop is not sane. Resistance to my reasoning is futile.
1 more reply
5 more replies level 5
I mean, I've taken breaks just to get away from the screens for a little while. They might just like being around you.
Sep 16, 2019 | opensource.com
Using Vagrant and Ansible to deploy virtual machines for web development 23 Feb 2016 Betsy Gamrat
Vagrant and Ansible are tools to efficiently provision virtual machines, also called VMs or, in Vagrant terms, "boxes." We begin with a short discussion of why a web developer would invest the time to use these tools, then cover the required software, an overview of how Vagrant works with virtual machine providers, and the use of Ansible to provision a virtual machine.
Background
First, this article builds on our guide to installing and testing eZ Publish 5 in a virtual machine, which may be a useful reference. We'll describe the tools you can use to automate the creation and provisioning of virtual machines. There is much to cover beyond what we talk about here, but for now we will focus on providing a general overview.
Why should you use virtual machines for website development?
The traditional ways that developers work on websites are on remote development machines or locally, directly on their main operating system. There are many advantages to using virtual machines for development, including the following:
- The entire development team can have the same server and configuration, without purchasing additional hardware.
- The local virtual machines can be more representative of the production servers.
- You can bring up the virtual machine when you need to work with it and shut it down when you're done. This is especially helpful if you need different versions of software, such as different version of PHP for different projects.
Vagrant and Ansible help to automate the provisioning of the virtual machines. Vagrant handles the starting and stopping of the machines as well as some configuration, and Ansible breaks down the details of the machines into easy-to-read configuration files and installs and configures the software within the virtual machines.
General approach
We usually keep the site code on the host operating system and share these files from the host operating system to the virtual machines. This way, the virtual machines can be loaded with only the software required to run the site, and each team member can use their favorite local tools for code editing, version control, and more under the operating system they use most often.
Risks
This scheme is not without risk and complexity. A virtual machine is a great tool for your collection, but like any tool, you will need to take some time to learn how to use it.
- When Vagrant, Ansible, and VirtualBox or another virtual machine provider are running well together, they help development to run more efficiently and can improve the quality of your work. However, when things go wrong they can distract from actual web development. They represent additional tools that you need to maintain and troubleshoot, and you need to train and support your development team to use them properly.
- Host operating system: Be sure your host operating system supports the tools you're planning to use. As I mentioned, this blog post focuses on Ansible. Ansible is not officially supported on Windows as the host. That means if you only have a Windows machine to work with, you'll need to consider using Linux as the controller. (There are some tricks to get it to work on Windows.)
- Performance: Remember the intent of these virtual machines is to support development. They will not run as fast as standalone servers. If that is an issue, you will likely need to invest some time in improving the performance.
- An implicit assumption is made that only one instance of the virtual machine will be running on the host at any given time. If you will be using more than one instance of the virtual machine, you'll need to take that into account as you set it up.
Getting started
The first step is to install the required software: Vagrant , Ansible , and VirtualBox . We will focus only on VirtualBox in this post, but you can also use a different provider , including many options which are open source. You may need some VirtualBox extensions and Vagrant plugins as well. Take the time to read the documentation carefully.
Then, you will need a starting point for your virtual server, typically called a "base box." For your first virtual machine, it is easiest to use an existing box. There are many boxes up at HashiCorp's Atlas and at Vagrantbox.es which are suitable for testing, but be sure you use a trusted provider for any box which is used in production.
Once you've chosen your box, these commands should bring it to life:
$ vagrant box add name-of-box url-of-box
$ vagrant init name-of-box
$ vagrant up
Provisioning, customizing, and accessing the virtual machine
Once the box is up and running, you can start adding software to it using Ansible. Plan to spend a lot of time learning Ansible. It is well worth the investment. You'll use Ansible to load the system software, create databases, configure the server, create users, set file ownership and permissions, set up services, and much more -- basically, to configure the virtual machine to include everything you need. Once you have the Ansible scripts set up you will be able to re-use them with different virtual machines and also run them against remote servers (which is a topic for another day!). A minimal example of wiring Ansible into Vagrant is sketched below.
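As a rough sketch (not from the original article), a Vagrantfile can hand provisioning off to Ansible; the box name, IP address, synced folder, and playbook name below are assumptions:

# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                # assumed base box
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/var/www/site"     # share the site code with the VM

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"              # assumed Ansible playbook
  end
end

With something like this in place, vagrant up both boots the box and runs the playbook, and vagrant provision re-runs Ansible against the running box.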
The easiest way to SSH into the box is:
$ vagrant ssh
You may update your /etc/hosts file to map the virtual machine box IP address to an easy-to-remember name for SSH and browser access, as sketched below. Once the box is running and serving pages, you can start working on the site.
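For example (an illustrative assumption), if the box was given the private address 192.168.33.10 in the Vagrantfile sketch above, the hosts entry on the host machine could be:

# /etc/hosts (host machine)
192.168.33.10   mysite.local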
Development workflow
Using a virtual machine as described here doesn't significantly change normal development workflows when it comes to editing code. The host machine and virtual machine share the application files through the path configured under Vagrant. You can edit the files using your code editor of choice on the host, or make minor adjustments with a text editor on the virtual machine. You can also keep version control tools on the host machine. In other words, all of the code modifications made on the host machine automatically show up on the virtual machine.
You just get to enjoy the convenience of local development, and an environment that mimics the server(s) where your code will be deployed!
This article was originally published at Mugo Web . Republished with permission.
Sep 16, 2019 | opensource.com
When installing a VM to run as a server, I usually disable or remove its sound capability. To do so, select Sound and click Remove or right-click on Sound and choose Remove Hardware .
You can also add hardware with the Add Hardware button at the bottom. This brings up the Add New Virtual Hardware screen where you can add additional storage devices, memory, sound, etc. It's like having access to a very well-stocked (if virtual) computer hardware warehouse.
[Figure: Add New Virtual Hardware screen]
Once you are happy with your VM configuration, click Begin Installation , and the system will boot and begin installing your specified operating system from the ISO.
[Figure: RHEL installation beginning in the new VM]
Once it completes, it reboots, and your new VM is ready for use.
[Figure: RHEL installed and running in the new VM]
Virtual Machine Manager is a powerful tool for desktop Linux users. It is open source and an excellent alternative to proprietary and closed virtualization products.
Sep 02, 2019 | www.linuxuprising.com
I keep seeing Ubuntu (and Ubuntu-based Linux distributions like Linux Mint or Pop!_OS) and Debian 10 users trying to install Wine and running into dependency issues, so I thought I'd make a post about properly installing Wine Staging and Development builds (and Stable, though there are no dependency issues with these builds).
Aug 22, 2019 | www.maketecheasier.com
Save Container Image from Source Host
It's not required to stop the container first, but it's highly recommended that you do so. You will take a snapshot of the data in your Docker instance. If it's running while you do this, there's a small chance some files might end up being incomplete in your snapshot. Imagine someone uploading a 500MB file. When 250MB has been uploaded, you issue the
docker commit
command. The upload then continues, but when you restore this Docker image on another host, only 250MB out of the 500MB might be available.So, if you can, first stop the instance.
docker stop NAME_OF_INSTANCEA Docker container is built out of a generic, initial image. Over time, you add your own changes to this base image. Processes running inside the container might also save their own data or make other changes. To preserve all of this, you can commit this new state to a new image.
Note that if the instance is currently running, this action will pause it while its contents are saved. If you added a lot of data to your container, this operation will take a longer time to complete. If this is a problem, you can avoid this pause by entering
docker commit -p=false NAME_OF_INSTANCE mycontainerimage
instead of the next command. However, don't do this unless absolutely necessary. The odds of creating an image with inconsistent/incomplete data increase in this case.In this tutorial, a generic name has been chosen for the resulting image,
mycontainerimage
. You can change this name if you want to. If you do so, remember to replace it in all subsequent commands where you encounter it.docker commit NAME_OF_INSTANCE mycontainerimageNow, save this image to a file and compress it.
docker save mycontainerimage | gzip > mycontainerimage.tar.gzNext, use your preferred file transfer method and copy
Load Container Image on Destination Hostmycontainerimage.tar.gz
to the host where you want to migrate your container.After you log in to the host where you transferred the image, import it to Docker.
gunzip -c mycontainerimage.tar.gz | docker loadSince you never initialized this container here, you cannot start it with
docker start
yet. Instead, issue the same command you used in the past, when you first ran this Docker instance. The only difference now is that you will use "mycontainerimage" at the end instead of whatever image you used in the past.The next command is just an example; don't copy and paste this unless it applies to you. (No special parameters were required when you ran the image for the first time)
docker run -d --name=PICK_NAME_FOR_CONTAINER mycontainerimageAs contrast, the following is an example of a command where parameter
--publish
was required to forward port 80 on the host machine to port 80 on the container:docker run -d --name=http-server --publish 80:80 mycontainerimageAfterwards, you can stop and start this container normally, with
Transfer Image without Creating a Filedocker stop
anddocker start
commands.Sometimes you may want to skip creating a
mycontainerimage.tar.gz
file. Maybe you don't have enough disk space since the container has a lot of data in it. You can save, compress, transfer, uncompress and load the image on the destination host in one command. After running thedocker commit
command discussed in the first section, you can use this:docker save mycontainerimage | gzip | ssh [email protected] 'gunzip | docker load'It should work from Windows, too, since it now has a built-in SSH client (PuTTY not necessary anymore).
Afterwards, continue with the
Conclusiondocker run
command that applies to your situation.
docker save
anddocker load
are great as an ad hoc solution for moving containers around occasionally. But remember, if you do this often, you might want to set up your own private repository instead.
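For completeness, here is a hedged sketch of the simplest private registry setup (not covered in the original article; the port and names are assumptions):

# Run a minimal local registry container
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag the committed image against the registry and push it
docker tag mycontainerimage localhost:5000/mycontainerimage
docker push localhost:5000/mycontainerimage

Other hosts can then pull the image over the network, although a production setup would add TLS or list the registry under insecure-registries in the Docker daemon configuration.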
Jun 14, 2019 | linuxtechlab.com
Learn to create Dockerfile with Dockerfile example
by Shusain · Published October 30, 2018 · Updated October 30, 2018
We have earlier discussed how to create a docker container & also learned some important commands for managing the containers . In this tutorial, we will learn about dockerfile, all its parameters/commands with dockerfile example.Dockerfile is a text file that contains list of commands that are used to build a docker image automatically. Basically a docker file acts as set of instructions that are needed to build a docker image.
( Recommended Read : Complete guide for creating Vagrant boxex with VirtualBox )
Dockerfile ExampleMentioned below is a Dockerfile example that we have already created, for CentOS with webserver (apache) installed on it.
FROM centos:7
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Centos system"
RUN yum -y update && \
yum -y install httpd && \
yum clean all
COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/
EXPOSE 80
ENV HOME /root
WORKDIR /root
ENTRYPOINT ["ping"]
CMD ["google.com"]
ParametersWe will now discuss all the parameters mentioned here one by one so that we have an understanding as to what they actually means,
FROM centos:7
FROM tells which base you would like to use for creating your dockerimage. Since we are using Centos:7, its mentioned there. We can use other OS like centos:6 , ubuntu:16.04 etc
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Centos system"
These both fields MAINTAINER & LABEL Remarks are called labels. They are used to pass information like Maintainer of docker image, Version number, purpose or some other remarks. We can add a number of labels but its recommended to avoid unnecessary labels.
RUN yum -y update && \
yum -y install httpd && \
yum clean all
The RUN command is responsible for installing packages or otherwise changing the docker image as we see fit. Here we have asked RUN to update our system & then install apache on it. We can also ask it to create a directory or to install some other packages.
COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/
The COPY & ADD commands serve almost the same purpose, i.e. they are used to copy files to the docker image, with one difference. Here we have used the COPY command to copy httpd.conf from the data directory to the default location of httpd.conf on the docker image.
And we then used the ADD command to copy a tar.gz archive to apache's document directory to serve content on the webserver. But you might have noticed we didn't extract it & that's the one difference between ADD & COPY: the ADD command will automatically extract the archive at the destination folder. We could also have used ADD in place of COPY, "ADD data/httpd.conf /etc/httpd/conf/httpd.conf".
EXPOSE 80
EXPOSE command will open the mentioned port on the docker image to allow access to outside world. We could also use EXPOSE 80/tcp or EXPOSE 53/udp.
ENV HOME /root
ENV command sets up environment variables, here we have used it to set HOME to /root. Syntax for using ENV is
ENV key value
Some examples of ENV usage are,
ENV user admin, ENV database=testdb, ENV PHPVERSION 7 etc etc.
WORKDIR /root
With WORKDIR, we can set working directory for the docker image. Here it has been set to /root.
ENTRYPOINT ["ping"]
CMD ["google.com"]
ENTRYPOINT & CMD are both used to define the executable that should run once the container is up. With ENTRYPOINT, we define an executable & with CMD, we define the additional parameters that are required for ENTRYPOINT. Like here, we have used ping with ENTRYPOINT, but it requires an additional parameter, which we provided with CMD. These two commands are used in conjunction with each other.
We can also use CMD alone, with something like CMD ["bash"]. With the example Dockerfile above, whatever is passed after the image name at run time replaces the CMD value, as sketched below.
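A hedged usage sketch, assuming the example Dockerfile above has been built and tagged as ping-example (the tag is a placeholder chosen here):

# Build the image from the example Dockerfile
docker build -t ping-example .

# Runs ENTRYPOINT with the default CMD argument: ping google.com
docker run --rm ping-example

# Arguments after the image name replace CMD, so this pings example.com instead
docker run --rm ping-example example.com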
Note: Not all of these parameters are required when creating a Dockerfile; you can use only the ones you need.
Apart from the commands discussed above, there are some other commands as well that can be used in the Dockerfile & that are mentioned below,
USER
With USER, we can define the user that will be used to execute a command, like USER dan. We can specify USER with RUN, CMD or with ENTRYPOINT as well; a short sketch follows.
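A minimal sketch of USER in practice (the user name and base image are illustrative assumptions):

FROM centos:7
# Create an unprivileged user
RUN useradd -m dan
# Instructions after this point, and the container's main process, run as dan instead of root
USER dan
CMD ["whoami"]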
ONBUILD
ONBUILD command lets you add a trigger that will be executed at a later time when the current image is being used as a base image for another. For example, we have added our own content for website using the dockerfile but we might not want it to be used for other docker images. So we will add ,
ONBUILD RUN rm -rf /var/www/html/*
This will remove the contents when the image is being re-purposed.
So these were all the commands that we can use with our Dockerfiles. Mentioned below are Dockerfile examples for Ubuntu & Fedora, for reference.
Ubuntu Dockerfile
# Get the base image
FROM ubuntu:16.04
# Install all packages
RUN \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y apache2
# adding some content for Apache server
RUN echo "This is a test docker" > /var/www/html/index.html
# Copying setting file & adding some content to be served by apache
COPY data/httpd.conf /etc/apache2/httpd.conf
# Defining a command to be run after the docker is up
ENTRYPOINT ["elinks"]
CMD ["localhost"]
Fedora Dockerfile
FROM docker.io/fedora
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Fedora system"
# Updating dependencies, installing Apache and cleaning dnf caches to reduce container size
RUN dnf -y update && \
dnf -y install httpd && \
dnf clean all && \
mkdir /data
# Copying apache configuration file & adding some content to be served by apache
COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/
# Adding a script & granting it execute permissions
ADD data/script.sh /data
RUN chmod +x /data/script.sh
# Open http port for apache
EXPOSE 80
# Set environment variables.
ENV HOME /root
# Defining a command to be run after the docker is up
CMD ["/data/script.sh"]
Now that we know how to create a Dockerfile, we will use this newly learned skill in our next tutorial to create a docker image & then upload it to DockerHub, the official Docker public image registry. A rough preview of those commands is sketched below.
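As a hedged preview (the image and account names are placeholders, not from the tutorial):

# Build an image from the Dockerfile in the current directory
docker build -t myaccount/centos-apache:latest .

# Log in to Docker Hub and push the image
docker login
docker push myaccount/centos-apache:latest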
If you think we might have missed something or have some query regarding this tutorial, please let us know using the comment box below.
Mar 07, 2019 | www.tuxfixer.com
Install and Configure KVM (Bridge Net Interface) on CentOS 7 / RHEL 7 Posted on July 1, 2016 January 29, 2019 by Grzegorz Juszczak
KVM (Kernel-based Virtual Machine) is a virtualization infrastructure for Linux which requires a processor with hardware virtualization extensions to be able to host guest systems. KVM is a convenient solution to test and try different operating systems if you don't have the possibility to purchase expensive and power-consuming physical hardware.
The tutorial below presents KVM (QEMU) installation and setup along with Linux Bridge configuration on the CentOS 7 / RedHat 7 operating system.
Steps:
1. Verify CPU Hardware Virtualization support
Our CPU must support hardware virtualization ( VT-x ) in order to become a KVM Hypervisor and host Virtual Machines (guest operating systems):
[root@tuxfixer ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Model name: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
Stepping: 7
CPU MHz: 800.000
BogoMIPS: 4988.58
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3
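As an additional quick check (a common alternative, not part of the original tutorial), you can count the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo; any non-zero result means the CPU advertises hardware virtualization:

[root@tuxfixer ~]# grep -c -E 'vmx|svm' /proc/cpuinfo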
2. Disable and stop NetworkManager
NetworkManager is known to cause problems when working with Linux Bridge, so for us it's better to disable it:
[root@tuxfixer ~]# systemctl stop NetworkManager
[root@tuxfixer ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
3. Install KVM related packages
[root@tuxfixer ~]# yum install qemu-kvm qemu-img libvirt libvirt-python libvirt-client virt-install virt-viewer virt-manager
4. Launch and enable libvirtd daemon
[root@tuxfixer ~]# systemctl enable libvirtd
[root@tuxfixer ~]# systemctl start libvirtd
5. Set system-wide privileges for KVM
We need to add our regular user tuxfixer to the kvm group to let him launch virt-manager:
[root@tuxfixer ~]# usermod -a -G kvm tuxfixer
We also need to set polkit (policy kit) rules for KVM.
Edit file 49-polkit-pkla-compat.rules :
[root@tuxfixer ~]# vim /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules
and add the following at the bottom:
polkit.addRule(function(action, subject) { if (action.id == "org.libvirt.unix.manage" && subject.isInGroup("kvm")) { return polkit.Result.YES; } });
6. Create KVM Linux Bridge (bridge KVM hypervisor host network interface with VM network interfaces)
In this tutorial we want Virtual Machines to obtain their IP addresses from the same network where the KVM Hypervisor host is connected, which is why we will bridge its main network interface ( em1 ) with the VM network interfaces. To do so, we need to create a Linux Bridge from the em1 interface on the KVM Hypervisor host.
Current Hypervisor network configuration (right after KVM installation):
[root@tuxfixer ~]# ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether d0:67:e5:33:15:3f brd ff:ff:ff:ff:ff:ff inet 192.168.2.3/24 brd 192.168.2.255 scope global dynamic em1 valid_lft 73193sec preferred_lft 73193sec inet6 fe80::d267:e5ff:fe33:153f/64 scope link valid_lft forever preferred_lft forever 3: wlp3s0: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 00:24:d7:f4:dc:e8 brd ff:ff:ff:ff:ff:ff 4: virbr0: mtu 1500 qdisc noqueue state DOWN link/ether 52:54:00:b7:22:b3 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 5: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500 link/ether 52:54:00:b7:22:b3 brd ff:ff:ff:ff:ff:ffifcfg-em1 config file (before KVM Linux Bridge creation):
[root@tuxfixer ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE="em1"
TYPE="Ethernet"
BOOTPROTO="none"
NAME="em1"
ONBOOT="yes"
HWADDR="D0:67:E5:33:15:3F"
IPADDR=192.168.2.3
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS="no"
NM_CONTROLLED="no"
For KVM networking configuration we will use the virt-manager application, which is a user-friendly GUI frontend for the KVM command line interface.
Note : virbr0 interface was created automatically along with KVM installation and represents virtual network existing "inside" KVM environment with NAT (Network Address Translation) enabled.
Since we don't need NAT inside KVM environment (we want to bridge Hypervisor interface), we can remove existing KVM virtual network based on virbr0 interface.
Launch virt-manager as root :
[root@tuxfixer ~]# virt-manager
The virt-manager window should appear:
Right click: QEMU/KVM -> Details -> Virtual Networks -> Disable network: "default" -> Delete network: "default" based on virbr0
Now we can bridge KVM Hypervisor interface ( em1 ):
Right click: QEMU/KVM -> Details -> Network Interfaces -> Add Interface :
Interface type: Bridge
Interface name: br-em1
Start mode: on boot
Activate now: enabled
IP settings: copy configuration from 'em1'
Bridge settings: STP on, delay 0.00 sec
Press Finish to override the existing configuration and create the KVM Linux Bridge.
Now we can verify the newly created Linux Bridge ( br-em1 ):
Check current IP configuration (IP is now assigned to br-em1 and em1 acts now as backend interface only):
[root@tuxfixer ~]# ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: mtu 1500 qdisc pfifo_fast master br-em1 state UP qlen 1000 link/ether d0:67:e5:33:15:3f brd ff:ff:ff:ff:ff:ff 3: wlp3s0: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 00:24:d7:f4:dc:e8 brd ff:ff:ff:ff:ff:ff 6: br-em1: mtu 1500 qdisc noqueue state UP link/ether d0:67:e5:33:15:3f brd ff:ff:ff:ff:ff:ff inet 192.168.2.3/24 brd 192.168.2.255 scope global br-em1 valid_lft forever preferred_lft forever inet6 fe80::d267:e5ff:fe33:153f/64 scope link valid_lft forever preferred_lft foreverVerify Linux Bridge configuration:
[root@tuxfixer ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br-em1          8000.d067e533153f       yes             em1

The KVM Linux Bridge is now configured.
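For reference, the virt-manager steps above end up writing ifcfg files roughly like the sketch below. This is an approximation assuming the same static addressing; your HWADDR, DNS and other options will differ:

[root@tuxfixer ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-em1
DEVICE="br-em1"
TYPE="Bridge"        # this file owns the IP address now
BOOTPROTO="none"
ONBOOT="yes"
IPADDR=192.168.2.3
PREFIX=24
GATEWAY=192.168.2.1
STP="on"
DELAY="0"

[root@tuxfixer ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE="em1"
TYPE="Ethernet"
ONBOOT="yes"
BRIDGE="br-em1"      # em1 is only enslaved to the bridge, no IP here
NM_CONTROLLED="no"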
7. Further steps – launching VMs
You can now proceed with Virtual Machine installation; you can also launch VMs from already created qcow2 images, if you have any.
If you need Kali Linux qcow2 images, you can check mine here. Images marked as non-ci have the root password configured and are suitable for KVM.
- JJ September 23, 2017 at 23:51 When I pressed Finish the tool rewrote the existing configuration, the whole network went down, and I lost my ifcfg-* file.
- Grzegorz Juszczak October 10, 2017 at 21:46 I encountered such a situation once on Debian; on CentOS/RHEL it never happened to me. It looks like you need to recreate the file manually.
- Dan December 29, 2017 at 15:51 When the bridge is configured, does the regular network interface of the host become unusable?
- Grzegorz Juszczak January 2, 2018 at 12:23 Hi Dan
The regular interface acts only as a backend device for the bridge, but it should stay enabled all the time. The IP is transferred from this interface to the bridge, but the interface itself keeps working; you can even capture packets from it using tcpdump/Wireshark.
- francis September 20, 2018 at 14:13 Now that the host interface has no IP, how do I SSH into the host machine?
- Grzegorz Juszczak September 20, 2018 at 22:16 After moving the IP address from the backend interface to the bridge, you simply connect to the bridge's address via SSH; the IP address in fact doesn't change.
- Luander September 26, 2018 at 15:03 Nice article. For me it works until I reboot the host machine. Is this configuration persistent? If not, how do I make it survive a reboot?
- Grzegorz Juszczak September 29, 2018 at 21:55 Hi Luander
This configuration is definitely persistent across reboots. There is no magic here, it's a simple Linux bridge.
- Air November 22, 2018 at 13:31 Can you bridge the wireless network in the same manner as well?
- Grzegorz Juszczak December 2, 2018 at 23:19 I have never tried bridging a wi-fi interface, but I guess it should be possible, in the same manner.
- GregM January 10, 2019 at 17:07 Does this solution allow the guest and host to communicate directly over the primary subnet? This isn't supported with macvtap.
- Grzegorz Juszczak January 20, 2019 at 21:35 Hi GregM
If by "primary subnet" you mean the management network, then yes, this solution allows it.
Jan 26, 2019 | techoverflow.net
Solution:
The error message tells you that your current user can't access the docker engine, because you're lacking permissions to access the unix socket to communicate with the engine.
As a temporary workaround, you can use sudo to run the failed command as root.
However, it is recommended to fix the issue properly by adding the current user to the docker group.
Run this command in your favourite shell and then completely log out of your account and log back in (if in doubt, reboot!):
sudo usermod -a -G docker $USER
After doing that, you should be able to run the command without any issues. Run docker run hello-world as a normal user in order to check that it works, and reboot if the issue still persists. Logging out and logging back in is required because the group change does not take effect until your session is closed.
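A quick way to confirm the fix, assuming the docker group was created by the package installation:

sudo usermod -a -G docker $USER    # add yourself to the docker group
# log out and back in, then:
id -nG | grep -w docker            # the docker group should now be listed
docker run hello-world             # should now work without sudo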
Control Docker Service
Now that you have Docker installed on your machine, start the Docker service in case it was not started automatically after the installation:
# systemctl start docker
# systemctl enable docker

Once the service is started, verify your installation by running the following command.
# docker run -it centos echo Hello-World

Let's see what happens when we run the " docker run " command. Docker starts a container with the centos base image; since we are running this centos container for the first time, the output will look like the following.
Unable to find image 'centos:latest' locally
Trying to pull repository docker.io/centos ...
0114405f9ff1: Download complete
511136ea3c5a: Download complete
b6718650e87e: Download complete
3d3c8202a574: Download complete
Status: Downloaded newer image for docker.io/centos:latest
Hello-World

Docker looks for the centos image locally, and since it is not found, it starts downloading the centos image from the Docker registry. Once the image has been downloaded, it starts the container and echoes " Hello-World " to the console, which you can see at the end of the output.
Feb 11, 2019 | bencane.com
Containers and Virtual Machines are often seen as competing technologies; however, this is often a misunderstanding.
Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact, containers are closer to BSD Jails and chroot'ed processes than to full virtual machines.
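A quick way to see that isolation in action, assuming a working Docker host with the stock centos image available (the exact image is just an example):

# docker run --rm centos sh -c 'echo "shell PID: $$"; hostname'
# The shell reports PID 1 -- it lives in its own PID namespace and cannot
# see host processes -- and the hostname printed is the container's own,
# not the host's.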
What Docker provides on top of containers
Docker itself is not a container runtime; in fact, Docker is container-technology agnostic, with efforts planned for Docker to support Solaris Zones and BSD Jails . What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines, they traditionally have not existed for most container solutions, and the ones that did exist were not as easy to use or as fully featured as Docker.
Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.
Starting with Installation
As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager.

# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
  btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
  git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
  aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

To check if any containers are running we can execute the docker command using the ps option.
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

The ps function of the docker command works similar to the Linux ps command. It will show available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.
Deploying a pre-built nginx Docker container
One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with yum or apt-get . To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, however, this time with the run option.
# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete

The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute docker run your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag.
By executing docker ps again we can see the nginx container running.
# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
f6d31ab01fc9        nginx:latest        nginx -g 'daemon off   4 seconds ago       Up 3 seconds        443/tcp, 80/tcp     desperate_lalande

In the above output we can see the running container desperate_lalande and that this container has been built from the nginx:latest image.
Docker Images
Images are one of Docker's key features and are similar to virtual machine images. Like a virtual machine image, a Docker image is a container that has been saved and packaged. Docker, however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories, which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with yum . To get a better understanding of how this works let's look back at the output of the docker run execution.
# docker run -d nginx
Unable to find image 'nginx' locally

The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to startup a container, a container based on an image named nginx . Since Docker is starting a container based on a specified image it needs to first find that image. Before checking any remote repository Docker first checks locally to see if there is a local image with the specified name.
Since this system is brand new there is no Docker image with the name nginx , which means Docker will need to download it from a Docker repository.
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete

This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs.
Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible, however, to deploy your own Docker repository; in fact it is as easy as docker run registry . For this article we will not be deploying a custom registry service.
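For the curious, a rough sketch of what that would look like; the registry image and port are the conventional ones, but treat the exact tag as an assumption:

# docker run -d -p 5000:5000 --name registry registry:2
# docker tag nginx localhost:5000/nginx     # retag an existing local image
# docker push localhost:5000/nginx          # push it to the private registry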
Stopping and Removing the Container
Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.
To start a container we executed docker with the run option; in order to stop this same container we simply need to execute docker with the kill option, specifying the container name.
# docker kill desperate_lalande
desperate_lalande

If we execute docker ps again we will see that the container is no longer running.
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

However, at this point we have only stopped the container; while it may no longer be running, it still exists. By default, docker ps will only show running containers; if we add the -a (all) flag it will show all containers, running or not.
# docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                           PORTS               NAMES
f6d31ab01fc9        5c82215b03d1        nginx -g 'daemon off   4 weeks ago         Exited (-1) About a minute ago                       desperate_lalande

In order to fully remove the container we can use the docker command with the rm option.
# docker rm desperate_lalande
desperate_lalande

While this container has been removed, we still have an nginx image available. If we were to re-run docker run -d nginx again the container would be started without having to fetch the nginx image again. This is because Docker already has a saved copy on our local system.
To see a full list of local images we can simply run the docker command with the images option.
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx               latest              9fab4090484a        5 days ago          132.8 MB

Building our own custom image
At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a Dockerfile .
With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.
Understanding the Application
Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.
The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named hamerkop . The generator is very simple and is more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules, and execute the hamerkop application. To serve the generated content we will use nginx , which means we will also need nginx to be installed.
So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile Syntax . To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor; vi in my case.
# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile

FROM - Inheriting a Docker image
The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image. This basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before.
If we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu:latest .
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

In addition to the FROM instruction, I also included a MAINTAINER instruction which is used to show the Author of the Dockerfile.
As Docker supports using # as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.
Running a test build
Since we inherited the nginx Docker image our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process.
In order to start the build from a Dockerfile we can simply execute the docker command with the build option.
# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Running in c97f36450343
 ---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194

In the above example I used the -t (tag) flag to "tag" the image as "blog". This essentially allows us to name the image; without specifying a tag the image would only be callable via an Image ID that Docker assigns. In this case the Image ID is 60a44f78d194 which we can see from the docker command's build success message.
In addition to the -t flag, I also specified the directory /root/blog . This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.
Now that we have run through a successful build, let's start customizing this image.
Using RUN to execute apt-get
The static site generator used to generate the HTML pages is written in Python and because of this the first custom task we should perform within this Dockerfile is to install Python . To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction.
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though python-dev and python-pip are being installed within the container, they are not being installed on the host itself. Or to put it more simply, inside the container the pip command will execute; outside the container, the pip command does not exist.
It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the RUN instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by RUN require user input.
Installing Python modules
With Python installed we now need to install some Python modules. To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt . In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this also happens to be the directory in which we created the Dockerfile . This is important as it means the contents of the Git repository are accessible to Docker during the build process.
When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process, files outside of that directory (outside of the build context), are inaccessible.
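Jumping slightly ahead to the COPY instruction used below, a tiny illustration of that restriction (the paths are hypothetical, assuming /root/blog is the build directory):

## Hypothetical example -- /root/blog is the build directory
COPY requirements.txt /build/     ## works: the file is inside the build context
COPY ../secret.txt /build/        ## fails: the path reaches outside the build context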
In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile .
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

Within the Dockerfile we added 3 instructions. The first instruction uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the COPY instruction which copies the requirements.txt file from the "build directory" ( /root/blog ) into the /build directory within the container. The third is using the RUN instruction to execute the pip command; installing all the modules specified within the requirements.txt file.
COPY is an important instruction to understand when building custom images. Without specifically copying the file within the Dockerfile this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated, unless specifically executed within a Dockerfile a container is not likely to include required dependencies.
Re-running a build
Now that we have a few customization tasks for Docker to perform, let's try another build of the blog image.
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Running in bde05cf1e8fe
 ---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
 ---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962

From the above build output we can see the build was successful, but we can also see another interesting message: ---> Using cache . What this message is telling us is that Docker was able to use its build cache during the build of this image.
Docker build cache
When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact we can see from the above output that after each "Step" Docker is creating a new image.

Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c

The last line from the above snippet is Docker informing us of the creation of a new image; it does this by printing the Image ID, cef11c3fb97c . The useful thing about this approach is that Docker is able to use these images as cache during subsequent builds of the blog image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above we can actually see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a build that executed the mkdir command, each subsequent step was executed.
The Docker build cache is a bit of a gift and a curse; the reason for this is that the decision to use cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands however, are another story. If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package; Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package it could be a problem if the installation was caching a package with a known vulnerability.
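A common way to soften this particular problem (a widely used pattern, not something the article itself prescribes) is to chain the update and install into a single RUN, so the two commands always share one cached layer and are re-run together:

## Chained form: one layer, one cache entry for both commands
RUN apt-get update && apt-get install -y python-dev python-pip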
For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify --no-cache=True when executing a Docker build.
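For the image built in this article, that would look something like:

# docker build --no-cache=True -t blog /root/blog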
Deploying the rest of the blog
With the Python packages and modules installed, this leaves us at the point of copying the required application files and running the hamerkop application. To do this we will simply use more COPY and RUN instructions.
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles

## Run Generator
RUN /build/hamerkop -c /build/config.yml

Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Using cache
 ---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
 ---> Using cache
 ---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Using cache
 ---> abab55c20962
Step 7 : COPY static /build/static
 ---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
 ---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
 ---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
 ---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
 ---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
 ---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
 ---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1

Running a custom container
With a successful build we can now start our custom container by running the docker command with the run option, similar to how we started the nginx container earlier.
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1

Once again the -d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is --name , which is used to give the container a user specified name. In the earlier example we did not specify a name and because of that Docker randomly generated one. The second new flag is -p ; this flag allows users to map a port from the host machine to a port within the container.
The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 from the host, to port 80 within the container. If we wished to map port 8080 from the host, to port 80 within the container we could do so by specifying the ports in the following syntax -p 8080:80 .
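As a concrete variant of that syntax, mapping host port 8080 instead of 80 would look like this; the curl check is just a quick sanity test against the host port:

# docker run -d -p 8080:80 --name=blog blog
# curl -I http://localhost:8080/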
The original docker run command appears to have started our container successfully; we can verify this by executing docker ps .
# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
d264c7ef92bd        blog:latest         nginx -g 'daemon off   3 seconds ago       Up 3 seconds        443/tcp, 0.0.0.0:80->80/tcp   blog

Wrapping up
At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article we have yet to discuss all the instructions. For a full list of Dockerfile instructions you can check out Docker's reference page , which explains the instructions very well.
Another good resource is their Dockerfile Best Practices page, which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to optimize steps that can be cached, as sketched below.
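A skeleton of that ordering idea, reusing the instructions from this article; the rarely changing layers sit at the top, the frequently changing articles directory stays last:

FROM nginx:latest
## Changes rarely -- cached across almost every build
RUN apt-get update && apt-get install -y python-dev python-pip
## Changes occasionally
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
## Changes most often -- keep it as the last COPY
COPY articles /build/articles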
In this article we covered how to start a pre-built container and how to build, then deploy a custom container. While there is quite a bit to learn about Docker this article should give you a good idea on how to get started. Of course, as always if you think there is anything that should be added drop it in the comments below.
Jan 27, 2019 | rancher.com
If you are using the Docker package supplied by Red Hat / CentOS, the package name is docker . You can check the installed package by executing:
rpm -q docker
If you are using the Docker package supplied by Red Hat / CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following:
{
  "group": "dockerroot"
}
Restart Docker after editing or creating the file. After restarting Docker, you can check the group permission of the Docker socket ( /var/run/docker.sock ), which should show dockerroot as group:
srw-rw----. 1 root dockerroot 0 Jul 4 09:57 /var/run/docker.sock
Add the SSH user you want to use to this group; this can't be the root user.
usermod -aG dockerroot <user_name>
To verify that the user is correctly configured, log out of the node, log in with your SSH user, and execute docker ps :
ssh <user_name>@node
$ docker ps
CONTAINER
Oct 15, 2018 | blog.aslanbrooke.com
Originally from Run Docker Without Sudo , Aslan Brooke's Blog.
Update /etc/docker/daemon.json as follows (root privileges required):
{
  "live-restore": true,
  "group": "dockerroot"
}
Add the user (replace <user name> ) to the "dockerroot" group using the command below and then restart the docker service:
usermod -aG dockerroot <user name>
restart docker service
Jan 26, 2019 | stackoverflow.com
Ephreal , Jun 19, 2016 at 9:29
Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull ?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker to get images, that is, Docker save/pull and other Docker supplied functions since it is blocked by a firewall.
I cannot get access to Docker Hub. I get "x509: certificate signed by unknown authority". My company is using zScaler as a man-in-the-middle firewall. Ephreal Jun 19 '16 at 10:38
erikbwork , Apr 25, 2017 at 13:54
Possible duplicate of: How to copy docker images from one host to another without via repository?
vikas027 , Dec 12, 2016 at 11:30
Just an alternative - this is what I did in my organization for a couchbase image where I was blocked by a proxy.
On my personal laptop (OS X):
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then I uploaded the compressed tar ball to Dropbox and downloaded it on my work machine. For some reason Dropbox was open :)
On my work laptop (CentOS 7):
$ docker load < couchbase.tar.xz
Ephreal , Dec 15, 2016 at 15:43
Thank you; I didn't know you could save an image into a tar ball. I will try this.
I just had to deal with this issue myself - downloading an image on a machine with Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site. With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][@digest] ...
The image can then be imported with tar and docker load :
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#

In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~

and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
Nov 02, 2016 | www.digitalocean.com
Introduction
Docker is an application that makes it simple and easy to run application processes in a container, which is like a virtual machine, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components .
There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.
In this tutorial, you'll learn how to install and use it on an existing installation of CentOS 7.
Prerequisites
- 64-bit CentOS 7 Droplet
- Non-root user with sudo privileges. A CentOS 7 server set up using Initial Setup Guide for CentOS 7 explains how to set this up.
Note: Docker requires a 64-bit version of CentOS 7 as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.
All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by
Step 1 -- Installing Dockersudo
. Initial Setup Guide for CentOS 7 explains how to add users and give them sudo access.The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.
But first, let's update the package database:
- sudo yum check-update
Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:
- curl -fsSL https://get.docker.com/ | sh
After installation has completed, start the Docker daemon:
- sudo systemctl start docker
Verify that it's running:
- sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Lastly, make sure it starts at every server reboot:
- sudo systemctl enable docker
Installing Docker now gives you not just the Docker service (daemon) but also the
Step 2 -- Executing Docker Command Without Sudo (Optional)docker
command line utility, or the Docker client. We'll explore how to use thedocker
command later in this tutorial.By default, running the
Output docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.docker
command requires root privileges -- that is, you have to prefix the command withsudo
. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run thedocker
command without prefixing it withsudo
or without being in the docker group, you'll get an output like this:If you want to avoid typing
sudo
whenever you run thedocker
command, add your username to the docker group:
- sudo usermod -aG docker $(whoami)
You will need to log out of the Droplet and back in as the same user to enable this change.
If you need to add a user to the
docker
group that you're not logged in as, declare that username explicitly using:
- sudo usermod -aG docker username
The rest of this article assumes you are running the
Step 3 -- Using the Docker Commanddocker
command as a user in the docker user group. If you choose not to, please prepend the commands withsudo
.With Docker installed and working, now's the time to become familiar with the command line utility. Using
docker
consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:
- docker [option] [command] [arguments]
To view all available subcommands, type:
- docker
As of Docker 1.11.1, the complete list of available subcommands includes:
Output attach Attach to a running container build Build an image from a Dockerfile commit Create a new image from a container's changes cp Copy files/folders between a container and the local filesystem create Create a new container diff Inspect changes on a container's filesystem events Get real time events from the server exec Run a command in a running container export Export a container's filesystem as a tar archive history Show the history of an image images List images import Import the contents from a tarball to create a filesystem image info Display system-wide information inspect Return low-level information on a container or image kill Kill a running container load Load an image from a tar archive or STDIN login Log in to a Docker registry logout Log out from a Docker registry logs Fetch the logs of a container network Manage Docker networks pause Pause all processes within a container port List port mappings or a specific mapping for the CONTAINER ps List containers pull Pull an image or a repository from a registry push Push an image or a repository to a registry rename Rename a container restart Restart a container rm Remove one or more containers rmi Remove one or more images run Run a command in a new container save Save one or more images to a tar archive search Search the Docker Hub for images start Start one or more stopped containers stats Display a live stream of container(s) resource usage statistics stop Stop a running container tag Tag an image into a repository top Display the running processes of a container unpause Unpause all processes within a container update Update configuration of one or more containers version Show the Docker version information volume Manage Docker volumes wait Block until a container stops, then print its exit codeTo view the switches available to a specific command, type:
- docker docker-subcommand --help
To view system-wide information, use:
Step 4 -- Working with Docker Images
- docker info
Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you'll need to run Docker containers have images that are hosted on Docker Hub.
To check whether you can access and download images from Docker Hub, type:
- docker run hello-world
The output, which should include the following, should indicate that Docker is working correctly:
Output Hello from Docker. This message shows that your installation appears to be working correctly. ...You can search for images available on Docker Hub by using the
docker
command with thesearch
subcommand. For example, to search for the CentOS image, type:
- docker search centos
The script will crawl Docker Hub and return a listing of all images whose name match the search string. In this case, the output will be similar to this:
Output NAME DESCRIPTION STARS OFFICIAL AUTOMATED centos The official build of CentOS. 2224 [OK] jdeathe/centos-ssh CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8... 22 [OK] jdeathe/centos-ssh-apache-php CentOS-6 6.7 x86_64 / Apache / PHP / PHP M... 17 [OK] million12/centos-supervisor Base CentOS-7 with supervisord launcher, h... 11 [OK] nimmis/java-centos This is docker images of CentOS 7 with dif... 10 [OK] torusware/speedus-centos Always updated official CentOS docker imag... 8 [OK] nickistre/centos-lamp LAMP on centos setup 3 [OK] ...In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identifed the image that you would like to use, you can download it to your computer using the
pull
subcommand, like so:
- docker pull centos
After an image has been downloaded, you may then run a container using the downloaded image with the
run
subcommand. If an image has not been downloaded whendocker
is executed with therun
subcommand, the Docker client will first download the image, then run a container using it:
- docker run centos
To see the images that have been downloaded to your computer, type:
- docker images
The output should look similar to the following:
Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              778a53015523        5 weeks ago         196.7 MB
hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B

As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded ( pushed is the technical term) to Docker Hub or other Docker registries.
Step 5 -- Running a Docker ContainerThe
hello-world
container you ran in the previous step is an example of a container that runs and exits, after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.As an example, let's run a container using the latest image of CentOS. The combination of the -i and -t switches gives you interactive shell access into the container:
- docker run -it centos
Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:
Output [root@59839a1b7de2 /]#Important: Note the container id in the command prompt. In the above example, it is
59839a1b7de2
.Now you may run any command inside the container. For example, let's install MariaDB server in the running container. No need to prefix any command with
sudo
, because you're operating inside the container with root privileges:Step 6 -- Committing Changes in a Container to a Docker Image
- yum install mariadb-server
When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the
docker rm
command, the changes will be lost for good.This section shows you how to save the state of a container as a new Docker image.
After installing MariaDB server inside the CentOS container, you now have a container running off an image, but the container is different from the image you used to create it.
To save the state of the container as a new image, first exit from it:
- exit
Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:
- docker commit -m "What did you do to the image" -a "Author Name" container-id repository / new_image_name
For example:
- docker commit -m "added mariadb-server" -a "Sunday Ogwu-Chinuwa" 59839a1b7de2 finid/centos-mariadb
Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so that it may be assessed and used by you and others.
After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:
- docker images
The output should be of this sort:
Output REPOSITORY TAG IMAGE ID CREATED SIZE finid/centos-mariadb latest 23390430ec73 6 seconds ago 424.6 MB centos latest 778a53015523 5 weeks ago 196.7 MB hello-world latest 94df4f0ce8a4 2 weeks ago 967 BIn the above example, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that MariaDB server was installed. So next time you need to run a container using CentOS with MariaDB server pre-installed, you can just use the new image. Images may also be built from what's called a Dockerfile. But that's a very involved process that's well outside the scope of this article. We'll explore that in a future article.
Step 7 -- Listing Docker ContainersAfter using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
- docker ps
You will see output similar to the following:
Output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f7c79cc556dd centos "/bin/bash" 3 hours ago Up 3 hours silly_spenceTo view all containers -- active and inactive, pass it the
-a
switch:
- docker ps -a
To view the latest container you created, pass it the
-l
switch:
- docker ps -l
Stopping a running or active container is as simple as typing:
- docker stop container-id
The
Step 8 -- Pushing Docker Images to a Docker Repositorycontainer-id
can be found in the output from thedocker ps
command.The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or other Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub.
To create an account on Docker Hub, register at Docker Hub . Afterwards, to push your image, first log into Docker Hub. You'll be prompted to authenticate:
- docker login -u docker-registry-username
If you specified the correct password, authentication should succeed. Then you may push your own image using:
- docker push docker-registry-username / docker-image-name
It will take sometime to complete, and when completed, the output will be of this sort:
Output The push refers to a repository [docker.io/finid/centos-mariadb] 670194edfaf5: Pushed 5f70bf18a086: Mounted from library/centos 6a6c96337be1: Mounted from library/centos ...After pushing an image to a registry, it should be listed on your account's dashboard, like that show in the image below.
Apr 07, 2018 | www.collabora.com
Last week, a new version of
docker.io
, the Docker package provided by Debian, was uploaded to Debian Unstable. Quickly afterwards, the package moved to Debian Testing. This is good news for Debian users, as before that the package was more or less abandoned in "unstable", and the future was uncertain.The most striking fact about this change: it's the first time in two years that docker.io has migrated to "testing". Another interesting fact is that, version-wise, the package is moving from
1.13.1
from early 2017 to version18.03
from March 2018: that's a one-year leap forward.Let me give you a very rough summary of how things came to be. I personally started to work on that early in 2018. I joined the Debian Go Packaging Team and I started to work on the many, many Docker dependencies that needed to be updated in order to update the Docker package itself. I could get some of this work uploaded to Debian, but ultimately I was a bit stuck on how to solve the circular dependencies that plague the Docker package. This is where another Debian Developer, Dmitry Smirnov, jumped in. We discussed the current status and issues, and then he basically did all the job, from updating the package to tackling all the long-time opened bugs.
That's it for the short story; let me now give you some more details.
The Docker package in Debian
To better understand why this update of the docker.io package is such good news, let's have a quick look at the current Debian offer:
rmadison -u debian docker.io
If you're running Debian 8 Jessie, you can install Docker 1.6.2, through backports. This version was released on May 14, 2015. That's 3 years old, but Debian Jessie is fairly old as well.
If you're running Debian 9 Stretch (ie. Debian stable), then you have no install candidate. No-thing. The current Debian doesn't provide any package for Docker. That's a bit sad.
What's even sadder is that for quite a while, looking into Debian unstable didn't look promising either. There used to be a package there, but it had bugs that prevented it from migrating to Debian testing. This package was stuck at version 1.13.1 , released on Feb 8, 2017. Looking at the git history, there was not much happening.
So packaging Docker is not for the faint of heart, and maybe it's too much of a burden for one developer alone. There was a docker-maint mailing list that suggested an attempt to coordinate the effort; however, this list was already dead by the time I found it. It looks like the people involved walked away.
That's what the next part is about!
Docker.io vs Docker-ce
You have two options to install Docker on Debian: you can get the package from docker.com (this package is named docker-ce ), or you can get it from the Debian repositories (this package is named docker.io ). You can rebuild both of these packages from source: for docker-ce you can fetch the source code with git (it includes the packaging files), and for docker.io you can just get the source package with apt , like for every other Debian package.
No suspense, straight answer: what differs is the build process, and mostly, the way dependencies are handled.
Docker is written in Go, and Golang comes with some tooling that allows applications to keep a local copy of their dependencies in their source tree. In Go-talk, this is called vendoring . Docker makes heavy use of that (like many other Go applications), which means that the code is more or less self-contained. You can build Docker without having to solve external dependencies, as everything needed is already in-tree.
That's how the docker-ce package provided by Docker is built, and that's what makes the packaging files for this package trivial. You can look at these files at https://github.com/docker/docker-ce/tree/master/components/packaging/deb . So everything is in-tree, there's almost no external build dependency, and hence it's really easy for Docker to provide a new package for 'docker-ce' every month.
On the other hand, the docker.io package provided by Debian takes a completely different approach: Docker is built against the libraries that are packaged in Debian, instead of using the local copies that are present in the Docker source tree. So if Docker is using libABC version 1.0, then it has a build dependency on libABC . You can have a look at the current build dependencies at https://salsa.debian.org/docker-team/docker/blob/master/debian/control .
It's quite an effort. And once again, why bother? For this part I'll quote Dmitry as he puts it better than me:
> Debian cares about reusable libraries, and packaging them individually allows to
> build software from tested components, as Golang runs no tests for vendored
> libraries. It is a mind blowing argument given that perhaps there is more code
> in "vendor" than in the source tree.
>
> Private vendoring have all disadvantages of static linking ,
> making it impossible to provide meaningful security support. On top of that, it
> is easy to lose control of vendored tree; it is difficult to track changes in
> vendored dependencies and there is no incentive to upgrade vendored components.That's about it, whether it matters is up to you and your use-case. But it's definitely something you should know about if you want to make an informed decision on which package you're about to install and use.
To finish with this article, I'd like to give more details on the packaging of docker.io , and what was done to get this new version in Debian.
Under the hood of the docker.io package
Let's have a brief overview of the difficulties we had to tackle while packaging this new version of Docker.
The most outstanding one is circular dependencies. It's especially present in the top-level dependencies of Docker: docker/swarmkit , docker/libnetwork , containerd ... All of these are Docker build dependencies, and all of these depend on Docker to build. Good luck with that ;)
To solve this issue, the new docker.io package leverages MUT (Multiple Upstream Tarball) to have these different components downloaded and built all at once, instead of being packaged separately. In this particular case it definitely makes sense, as we're really talking about different parts of Docker. Even if they live in different git repositories, these components are not standalone libraries, and there's absolutely no good reason to package them separately.
Another issue with Docker is "micro-packaging", ie. wasting time packaging small git repositories that, in the end, are only used by one application (Docker in our case). This issue is quite interesting, really. Let me try to explain.
Golang makes it extremely easy to split a codebase among several git repositories. It's so easy that some projects (Docker in our case) do it extensively, as part of their daily workflow. And in the end, at a first glance you can't really say if a dependency of Docker is really a standalone project (that would require a proper packaging), or only just a part of Docker codebase, that happens to live in a different git repository. In this second case, there's really no reason to package it independently of Docker.
As a packager, if you're not careful, you can easily fall into this trap and start packaging every single dependency without thinking: that's "micro-packaging". It's bad in the sense that it increases the maintenance cost in the long run and doesn't bring any benefit. As I said before, docker.io currently has 100+ dependencies, and probably a few of them fall into this category.
While working on this new version of docker.io , we decided to stop packaging such dependencies. The guideline is that if a dependency has no semantic versioning , and no consumer other than Docker, then it's not a library, it's just a part of Docker codebase.
Even though some tools like dh-make-golang make it very easy to package simple Go packages, it doesn't mean that everything should be packaged. Understanding that, and taking a bit of time to think before packaging, is the key to successful Go packaging!
Last words
I could go on for a while on the technical details, there's a lot to say, but let's not bore you to death, so that's it. I hope by now you understand that:
- There's now an up-to-date docker.io package in Debian.
- docker.io and docker-ce both give you a Docker binary, but through a very different build process.
- Maintaining the docker.io package is not an easy task.
If you care about having a Docker package in Debian, feel free to try it out, and feel free to join the maintenance effort!
Let's finish with a few credits. I've been working on this topic, on and off, for the last 4 months, thanks to the support of Collabora . As for Dmitry Smirnov, the work he did on the docker.io package represents three weeks of full-time effort, which was sponsored by Libre Solutions Pty Ltd .
I'd like to thank the Debian Go Packaging Team for their support, and also the reviewers of this article, namely Dmitry Smirnov and Héctor Orón Martínez.
Last but not least, I will attend DebConf18 in Taiwan, where I will give a talk on this topic. There's also a BoF on Go Packaging planned.
See you there!
Dec 20, 2018 | chiefio.wordpress.com
Sidebar on Containers: The basic idea is to isolate a bit of production application from all the rest of the system and make sure it has a consistent environment. So you package up your DNS server with the needed files and systems config and what-all and stick it in a container that runs under a host operating system.
It isn't a full Virtual Machine, so it avoids that overhead and inefficiency, but it does isolate your applications from "update and die" problems, most of the time. "Docker" is a big one.
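As a rough illustration (not from the original post), packaging one service into an isolated container looks like this with Docker; nginx is used here only as a stand-in for whatever daemon you actually ship:
$ docker run -d --name myservice --restart unless-stopped -p 8080:80 nginx
$ docker logs myservice       # the service's own logs, isolated from the host's
$ docker rm -f myservice      # throw the whole environment away when done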
Lately Red Hat et al. have been pushing for a strongly systemd-dependent Kubernetes instead.
The need to rapidly toss a VM into production and bring up a 'container' application on it drove (IMHO) much of the push to move all sorts of stuff into systemd to make booting very fast (even if it then doesn't work reliably /snarc;)
Much of the commercial world has moved to putting things in Docker or other container systems.
On BSD their equivalent is called "jails" as it keeps each application instance isolated from the system and from other applications. On "my Cray" we used a precursor tech of change root "chroot" to isolate things for security; but I got off that train before it reached the "jails" and "docker" station.
Dec 16, 2018 | www.quora.com
The main benefit of Docker is that it automatically solves the problems with versioning and cross-platform deployment, as the images can be easily recombined to form any version and can run in any environment where Docker is installed. "Run anywhere" meme...
James Lee , former Software Engineer at Google (2013-2016) Answered Jul 12 · Author has 106 answers and 258.1k answer views
There are many benefits of Docker. Firstly, I will mention the benefits of Docker and then let you know about the future of Docker. The content mentioned here is from my recent article on Docker.
Docker Benefits:
Docker is an open-source project based on Linux containers. It uses features of the Linux kernel, such as namespaces and control groups, to create containers. But are containers new? No, Google has been using them for years! They have their own container technology. There are some other Linux container technologies like Solaris Zones, LXC, etc.
These container technologies were already there before Docker came into existence. So why Docker? What difference did it make? Why is it on the rise? OK, I will tell you why!
Number 1: Docker offers ease of use
Taking advantage of containers wasn't an easy task with earlier technologies. Docker has made it easy for everyone: developers, system admins, architects, and more. Portable applications are easy to build and test. Anyone can package an application from their laptop and then run it unmodified on any public or private cloud, or on bare metal. The slogan is, "build once, run anywhere"!
Number 2: Docker offers speed
Being lightweight, containers are fast. They also consume fewer resources. One can easily start a Docker container in seconds. Virtual machines, on the other hand, usually take longer as they go through the whole process of booting up a complete virtual operating system every time.
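A quick way to see the difference yourself (these timings are illustrative and vary by host; they are not from the original answer):
$ time docker run --rm alpine /bin/true
# typically well under a second once the image is cached, versus the tens of
# seconds it takes to boot a full guest OS in a virtual machine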
Number 3: The Docker Hub
Docker offers an ecosystem known as the Docker Hub. You can consider it as an app store for Docker images. It contains many public images created by the community. These images are ready to use. You can easily search the images as per your requirements.
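For example, finding and fetching a ready-made image from Docker Hub takes two commands:
$ docker search nginx     # list public images matching "nginx"
$ docker pull nginx       # download the official image to the local cache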
Number 4: Docker gives modularity and scalability
It is possible to break down the application functionality into individual containers. Docker gives this freedom! It is easy to link containers together and create your application with Docker. One can easily scale and update components independently in the future.
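A minimal sketch of that modularity, assuming the official nginx and postgres images (the names appnet, web, and db are made up for the example):
$ docker network create appnet
$ docker run -d --name db  --network appnet -e POSTGRES_PASSWORD=example postgres
$ docker run -d --name web --network appnet -p 8080:80 nginx
Each piece can now be stopped, upgraded, or scaled on its own without touching the other.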
The Future
A lot of people ask me, "Will Docker eat up virtual machines?" I don't think so! Docker is gaining a lot of momentum, but this won't affect virtual machines, because virtual machines are better under certain circumstances. For example, if there is a requirement to run multiple applications on multiple servers, then virtual machines are a better choice. On the contrary, if there is a requirement to run multiple copies of a single application, Docker is a better choice.
Docker containers could create a problem when it comes to security because containers share the same kernel. The barriers between containers are quite thin. But I do believe that security and management improve with experience and exposure. Docker certainly has a great future! I hope that this Docker tutorial has helped you understand the basics of Containers, VM's, and Dockers. But Docker in itself is an ocean. It isn't possible to study Docker in just one article. For an in-depth study of Docker, I recommend this Docker course.
David Polstra , Person at ReactiveOps (2016-present) Updated Oct 5, 2017 · Author has 65 answers and 53.7k answer views
I work at ReactiveOps where we specialize in DevOps-as-a-Service and Kubernetes Consulting. One of our engineers, EJ Etherington , recently addressed this in a blog post:
"Docker is both a daemon (a process running in the background) and a client command. It's like a virtual machine but it's different in important ways. First, there's less duplication. With each extra VM you run, you duplicate the virtualization of CPU and memory and quickly run out resources when running locally. Docker is great at setting up a local development environment because it easily adds the running process without duplicating the virtualized resource. Second, it's more modular. Docker makes it easy to run multiple versions or instances of the same program without configuration headaches and port collisions. Try that in a VM!
With Docker, developers can focus on writing code without worrying about the system on which their code will run. Applications become truly portable. You can repeatably run your application on any other machine running Docker with confidence. For operations staff, Docker is lightweight, easily allowing the running and management of applications with different requirements side by side in isolated containers. This flexibility can increase resource use per server and may reduce the number of systems needed because of its lower overhead, which in turn reduces cost.
Docker has made Linux containerization technology easy to use.
There are a dozen reasons to use Docker. I'll focus here on three: consistency, speed and isolation. By consistency , I mean that Docker provides a consistent environment for your application from development all the way through production – you run from the same starting point every time. By speed , I mean you can rapidly run a new process on a server. Because the image is preconfigured and installed with the process you want to run, it takes the challenge of running a process out of the equation. By isolation , I mean that by default each Docker container that's running is isolated from the network, the file system and other running processes.
A fourth reason is Docker's layered file system. Starting from a base image, every change you make to a container or image becomes a new layer in the file system. As a result, file system layers are cached, reducing the number of repetitive steps during the Docker build process AND reducing the time it takes to upload and download similar images. It also allows you to save the container state if, for example, you need to troubleshoot why a container is failing. The file system layers are like Git, but at the file system level. Each Docker image is a particular combination of layers in the same way that each Git branch is a particular combination of commits."
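A tiny Dockerfile makes that layering visible; this is a generic sketch (the app.py file is hypothetical), and docker history then lists one entry per layer:
FROM alpine:3.8                     # base layer
RUN apk add --no-cache python3      # new layer: installed packages
COPY app.py /srv/app.py             # new layer: application code
CMD ["python3", "/srv/app.py"]
Rebuilds that only touch app.py reuse the cached base and package layers, which is exactly the caching behavior described above.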
I hope this was helpful. If you would like to learn more, you can read the entire post: Docker Is a Valuable DevOps Tool - One That's Worth Using
Bill William , M.C.A Software and Applications & Java, SRM University, Kattankulathur (2006) Answered Jan 5, 2018
Docker is the most popular file format for Linux-based container development and deployments. If you're using containers, you're most likely familiar with the container-specific toolset of Docker tools that enable you to create and deploy container images to a cloud-based container hosting environment.
This can work great for brand-new environments, but it can be a challenge to mix container tooling with the systems and tools you need to manage your traditional IT environments. And, if you're deploying your containers locally, you still need to manage the underlying infrastructure and environment.
Portability: let's suppose, in the case of Linux, you have your own customized Nginx container. You can run that Nginx container anywhere, whether it's a cloud, a data center, or even your own laptop, as long as you have a Docker engine running on a Linux OS.
Rollback: you can just run your previous build image and all changes will automatically roll back.
Image Simplicity: every image has a tree hierarchy, and all child images depend on their parent image. For example, suppose there is a vulnerability in a Docker container: you can identify and patch the parent image, and when you rebuild the child images, the vulnerability is automatically removed from them as well.
Container Registry: you can store all images in a central location, apply ACLs, and do vulnerability scanning and image signing (see the example after this list).
Runtime: even if you want to run thousands of containers, you can start them all within seconds.
Isolation: you can run hundreds of processes on one OS, all isolated from each other.
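As a sketch of the registry workflow mentioned above (the registry host and image names are made up for the example):
$ docker tag myapp:1.4 registry.example.com/team/myapp:1.4
$ docker push registry.example.com/team/myapp:1.4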
Dec 16, 2018 | www.quora.com
Ethen , Web Designer (2015-present) Answered Aug 30, 2018 · Author has 154 answers and 56.2k answer views
Docker is an open platform for developers, bringing them a large number of open source projects, including open source Docker tools and a management framework, along with more than 85,000 Dockerized applications. Docker is today considered to be something more than just an application platform. And the container ecosystem is continuing to grow so fast, with so many Docker tools available on the web, that it can feel like an overwhelming task just to understand the options laid out in front of you.
Disadvantages Of Docker
Containers don't run at bare-metal speeds.
The container ecosystem is fractured.
Persistent data storage is complicated.
Graphical applications don't work well.
Not all applications benefit from containers.
Advantages Of Docker
Swapnil Kulkarni , Engineering Lead at Persistent Systems (2018-present) Answered Nov 9, 2017 · Author has 58 answers and 24.9k answer views
- Continuous Deployment and Testing
- Multi-Cloud Platforms
- Environment Standardization and Version Control
- Isolation
- Security
From my personal experience, I think people just want to containerize everything without looking at how the architectural considerations change, which basically ruins the technology.
For example, how does someone benefit from creating fat container images the size of a VM, when the basic advantage of Docker is shipping lightweight images?
Nov 19, 2018 | enterprisersproject.com
Among growing container trends, here's an important one: As containers go, so goes container orchestration. That's because most organizations quickly realize that managing containers in production can get complicated in a hurry. Orchestration solves that problem, and while there are multiple options, Kubernetes has become the de facto leader. [ Want to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]
Kubernetes' star appeal does lead to some misunderstandings and outright myths, though. We asked a range of IT leaders and container experts to identify the biggest misconceptions about Kubernetes – and the realities behind each of them – to help people who are just getting going with the technology. Here are five important ones to know before you get your hands dirty.
Misunderstanding #1: Kubernetes is only for public cloud
Reality: Kubernetes is commonly referred to as a cloud-native technology, and for good reason. The project, which was first developed by a team at Google , currently calls the Cloud Native Computing Foundation home. ( Red Hat , one of the first companies to work with Google on Kubernetes, has become the second-leading contributor to the Kubernetes upstream project.)
"Kubernetes is cloud-native in the sense that it has been designed to take advantage of cloud computing architecture [and] to support scale and resilience for distributed applications," says Raghu Kishore Vempati, principal systems engineer at Aricent .
Just remember that "cloud-native" is not wholly synonymous with "public cloud."
"Kubernetes can run on different platforms, be it a personal laptop, VM, rack of bare-metal servers, public/private cloud environment, et cetera," Vempati says.
Notes Red Hat technology evangelist Gordon Haff , "You can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, and hybrid clouds ."
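For instance, assuming you have Minikube installed (this example is not from the article), a single laptop can run a one-node cluster:
$ minikube start
$ kubectl get nodes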
Misunderstanding #2: Kubernetes is a finished product
Reality: Kubernetes isn't really a product at all, much less a finished one.
"Kubernetes is an open source project, not a product," says Murli Thirumale, co-founder and CEO at Portworx . (Portworx co-founder and VP of product management Eric Han was the first Kubernetes product manager while at Google.)
New users should understand a fundamental reality here: The Kubernetes ecosystem moves very quickly. It's even been dubbed the fastest-moving project in open source history.
"Take your eyes off of it for only one moment, and everything changes," Frank Reno, senior technical product manager at Sumo Logic . "It is a fast-paced, highly active community that develops Kubernetes and the related projects. As it changes, it also changes the way you need to look at and develop things. It's all for the better, but still, much to keep up on."
Misunderstanding #3: Kubernetes is simple to run out of the box
Reality: It may be "easy" to get it up and running on a local machine, but it can quickly get more complicated from there. "For those new to Kubernetes, there's often an 'aha' moment as they realize it's not that easy to do right," says Amir Jerbi, co-founder and CTO at Aqua Security .
Jerbi notes that this is a key reason for the growth of commercial Kubernetes platforms on top of the open source project, as well as managed services and consultancies. "Setting up and managing K8s correctly requires time, knowledge, and skills, and the skill gap should not be underestimated," Jerbi says.
Some organizations are still going to learn that the hard way, drawn in by the considerable potential of Kubernetes and the table-stakes necessity of using a container management or orchestration tool for running containers at scale in a production environment.
"Kubernetes is a very popular and very powerful platform," says Wei Lien Dang, VP of products at StackRox . "Given the DIY mindset that comes along with open source software, users often think they should be working directly in the Kubernetes system itself. But this understanding is misguided."
Dang points to needs such as supporting high availability and resilience. Both, he says, become easier when using abstraction layers on top of the core Kubernetes platform, such as a UX layer to enable various end users to get the most value out of the technology.
"One of the major benefits of open source software is that it can be downloaded and used with no license cost – but very often, making this community software usable in a corporate environment will require a significant investment in technical effort to integrate [or] bundle with other technologies," says Andy Kennedy, managing director at Tier 2 Consulting . "For example, in order to provide a full set of orchestrated services, Kubernetes relies on other services provided by open source projects, such as registry, security, telemetry, networking, and automation."
Complete container application platforms, such as Red Hat OpenShift , eliminate the need to assemble those pieces yourself.
This gets back to the difference between the Kubernetes project and the maturing Kubernetes platforms built on top of that project.
"Do-it-yourself Kubernetes can work with some dedicated resources, but consider a more productized and supported [platform]," says Portworx's Thirumale. "These will help you go to production faster." Misunderstanding #4: Kubernetes is an all-encompassing framework for building and deploying applications
Reality: "By itself, Kubernetes does not provide any primitives for applications such as databases, middleware, storage, [and so forth]," says Aricent's Vempati.
Developers still need to include the necessary services and components for their respective applications, Vempati notes, yet some people overlook this.
"Kubernetes is a platform for managing containerized workloads and services with independent and composable processes," Vempati says. "How the applications and services are orchestrated on the platform is for the developers to define."
In a similar vein, some folks simply misunderstand what Kubernetes does in a more fundamental way. Jared Sikander, CTO at NetEnrich , encounters a key misconception in the marketplace that Kubernetes "provides containerization and microservices ." That's a misnomer. It's a tool for deploying and managing containers and containerized microservices. You can't just "lift and shift" a monolithic app into Kubernetes and say, boom, we have a microservices architecture now.
"In reality, you have to refactor your applications into microservices," Sikander says. "Kubernetes provides the platform to deploy and scale your microservices."
[ Want more advice? Read Microservices and containers: 5 pitfalls to avoid . ]
Misunderstanding #5: Kubernetes inherently secures your containers
Reality: Container security is one of the brave new worlds in the broader threat landscape. (That's evident in the growing number of container security firms, such as Aqua, StackRox, and others.)
Kubernetes does have critical capabilities for managing the security of your containers, but keep in mind it is not in and of itself a security platform, per se.
"Kubernetes has a lot of powerful controls built in for network policy enforcement, for example, but accessing them natively in Kubernetes means working in a YAML file," says Dang from StackRox. This also gets back to leveraging the right tools or abstraction layers on top of Kubernetes to make its security-oriented features more consumable.
It's also a matter of rethinking your old security playbook for containers and for hybrid cloud and multi-cloud environments in general.
[ Read our related article: Container security fundamentals: 5 things to know . ]
"As enterprises increasingly flock to Kubernetes, too many organizations are still making the dangerous mistake of relying on their previously used security measures – which really aren't suited to protecting Kubernetes and containerized environments," says Gary Duan, CTO at NeuVector . "While traditional firewalls and endpoint security are postured to defend against external threats, malicious threats to containers often grow and expand laterally via internal traffic, where more traditional tools have zero visibility."
Security, like other considerations with containers and Kubernetes, is also a very different animal when you're ready to move into production.
In part two of this series, we clear up some of the misconceptions about running Kubernetes in a production environment versus experimenting with it in a test or dev environment. The differences can be significant.
Nov 12, 2018 | opensource.com
Become a better container troubleshooter by using LXC to understand how they work.
Can you have Linux containers without Docker ? Without OpenShift ? Without Kubernetes ?
Yes, you can. Years before Docker made containers a household term (if you live in a data center, that is), the LXC project developed the concept of running a kind of virtual operating system, sharing the same kernel, but contained within defined groups of processes.
Docker built on LXC, and today there are plenty of platforms that leverage the work of LXC both directly and indirectly. Most of these platforms make creating and maintaining containers sublimely simple, and for large deployments, it makes sense to use such specialized services. However, not everyone's managing a large deployment or has access to big services to learn about containerization. The good news is that you can create, use, and learn containers with nothing more than a PC running Linux and this article. This article will help you understand containers by looking at LXC, how it works, why it works, and how to troubleshoot when something goes wrong.
Sidestepping the simplicity
If you're looking for a quick-start guide to LXC, refer to the excellent Linux Containers website.
Installing LXC
If it's not already installed, you can install LXC with your package manager.
On Fedora or similar, enter:
$ sudo dnf install lxc lxc-templates lxc-doc
On Debian, Ubuntu, and similar, enter:
$ sudo apt install lxc
Creating a network bridge
Most containers assume a network will be available, and most container tools expect the user to be able to create virtual network devices. The most basic unit required for containers is the network bridge, which is more or less the software equivalent of a network switch. A network switch is a little like a smart Y-adapter used to split a headphone jack so two people can hear the same thing with separate headsets, except instead of an audio signal, a network switch bridges network data.
You can create your own software network bridge so your host computer and your container OS can both send and receive different network data over a single network device (either your Ethernet port or your wireless card). This is an important concept that often gets lost once you graduate from manually generating containers, because no matter the size of your deployment, it's highly unlikely you have a dedicated physical network card for each container you run. It's vital to understand that containers talk to virtual network devices, so you know where to start troubleshooting if a container loses its network connection.
To create a network bridge on your machine, you must have the appropriate permissions. For this article, use the sudo command to operate with root privileges. (However, LXC docs provide a configuration to grant users permission to do this without using sudo .)
$ sudo ip link add br0 type bridge
Verify that the imaginary network interface has been created:
$ sudo ip addr show br0
7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default qlen 1000
link/ether 26:fa:21:5f:cf:99 brd ff:ff:ff:ff:ff:ff
Since br0 is seen as a network interface, it requires its own IP address. Choose a valid local IP address that doesn't conflict with any existing IP address on your network and assign it to the br0 device:
$ sudo ip addr add 192.168.168.168 dev br0
And finally, ensure that br0 is up and running:
$ sudo ip link set br0 up
Setting the container config
The config file for an LXC container can be as complex as it needs to be to define a container's place in your network and the host system, but for this example the config is simple. Create a file in your favorite text editor and define a name for the container and the network's required settings:
lxc.utsname = opensourcedotcom
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.168.1/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
Save this file in your home directory as mycontainer.conf .
The lxc.utsname is arbitrary. You can call your container whatever you like; it's the name you'll use when starting and stopping it.
The network type is set to veth , which is a kind of virtual Ethernet patch cable. The idea is that the veth connection goes from the container to the bridge device, which is defined by the lxc.network.link property, set to br0 . The IP address for the container is in the same network as the bridge device but unique to avoid collisions.
With the exception of the veth network type and the up network flag, you invent all the values in the config file. The list of properties is available from man lxc.container.conf . (If it's missing on your system, check your package manager for separate LXC documentation packages.) There are several example config files in /usr/share/doc/lxc/examples , which you should review later.
Launching a container shell
At this point, you're two-thirds of the way to an operable container: you have the network infrastructure, and you've installed the imaginary network cards in an imaginary PC. All you need now is to install an operating system.
However, even at this stage, you can see LXC at work by launching a shell within a container space.
$ sudo lxc-execute --name basic \
--rcfile ~/mycontainer.conf /bin/bash \
--logfile mycontainer.log
#
In this very bare container, look at your network configuration. It should look familiar, yet unique, to you.
# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state [...]
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
[...]
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> [...] qlen 1000
link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2003:db8:1:0:214:1234:fe0b:3596/64 scope global
valid_lft forever preferred_lft forever
[...]
Your container is aware of its fake network infrastructure and of a familiar-yet-unique kernel.
# uname -av
Linux opensourcedotcom 4.18.13-100.fc27.x86_64 #1 SMP Wed Oct 10 18:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Use the exit command to leave the container:
# exit
Installing the container operating system
Building out a fully containerized environment is a lot more complex than the networking and config steps, so you can borrow a container template from LXC. If you don't have any templates, look for a separate LXC template package in your software repository.
The default LXC templates are available in /usr/share/lxc/templates .
$ ls -m /usr/share/lxc/templates/
lxc-alpine, lxc-altlinux, lxc-archlinux, lxc-busybox, lxc-centos, lxc-cirros, lxc-debian, lxc-download, lxc-fedora, lxc-gentoo, lxc-openmandriva, lxc-opensuse, lxc-oracle, lxc-plamo, lxc-slackware, lxc-sparclinux, lxc-sshd, lxc-ubuntu, lxc-ubuntu-cloud
Pick your favorite, then create the container. This example uses Slackware.
$ sudo lxc-create --name slackware --template slackware
Watching a template being executed is almost as educational as building one from scratch; it's very verbose, and you can see that lxc-create sets the "root" of the container to /var/lib/lxc/slackware/rootfs and several packages are being downloaded and installed to that directory.
Reading through the template files gives you an even better idea of what's involved: LXC sets up a minimal device tree, common spool files, a file systems table (fstab), init files, and so on. It also prevents some services that make no sense in a container (like udev for hardware detection) from starting. Since the templates cover a wide spectrum of typical Linux configurations, if you intend to design your own, it's wise to base your work on a template closest to what you want to set up; otherwise, you're sure to make errors of omission (if nothing else) that the LXC project has already stumbled over and accounted for.
Once you've installed the minimal operating system environment, you can start your container.
$ sudo lxc-start --name slackware \
--rcfile ~/mycontainer.conf
You have started the container, but you have not attached to it. (Unlike the previous basic example, you're not just running a shell this time, but a containerized operating system.) Attach to it by name.
$ sudo lxc-attach --name slackware
#
Check that the IP address of your environment matches the one in your config file.
# /usr/sbin/ip addr show | grep eth
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] 1000
    link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
Exit the container, and shut it down.
# exit
$ sudo lxc-stop --name slackware
Running real-world containers with LXC
In real life, LXC makes it easy to create and run safe and secure containers. Containers have come a long way since the introduction of LXC in 2008, so use its developers' expertise to your advantage.
While the LXC instructions on linuxcontainers.org make the process simple, this tour of the manual side of things should help you understand what's going on behind the scenes.
Sep 05, 2018 | opensource.com
A sysadmin's guide to containers
What you need to know to understand how containers work. 27 Aug 2018 Daniel J Walsh (Red Hat)
The term "containers" is heavily overused. Also, depending on the context, it can mean different things to different people.
Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using resource constraints (control groups [cgroups]), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).
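To make this tangible (these commands are generic examples, not from the original article), you can inspect those constraints for any process straight from /proc:
$ cat /proc/self/cgroup        # the cgroups this shell belongs to
$ grep Cap /proc/self/status   # its capability sets
$ ls -l /proc/self/ns          # the namespaces it lives in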
If you boot a modern Linux system and take a look at any process with cat /proc/PID/cgroup, you see that the process is in a cgroup. If you look at /proc/PID/status, you see capabilities. If you look at /proc/self/attr/current, you see SELinux labels. If you look at /proc/PID/ns, you see the list of namespaces the process is in. So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, by definition every process on a Linux system is in a container. This is why we often say Linux is containers, containers are Linux. Container runtimes are tools that modify these resource constraints, security, and namespaces and launch the container.
Docker introduced the concept of a container image, which is a standard TAR file that combines:
- Rootfs (container root filesystem): A directory on the system that looks like the standard root (/) of the operating system. For example, a directory with /usr, /var, /home, etc.
- JSON file (container configuration): Specifies how to run the rootfs; for example, what command or entrypoint to run in the rootfs when the container starts, environment variables to set for the container, the container's working directory, and a few other settings.
Docker "
tar
's up" the rootfs and the JSON file to create the base image . This enables you to install additional content on the rootfs, create a new JSON file, andtar
the difference between the original image and the new image with the updated JSON file. This creates a layered image .The definition of a container image was eventually standardized by the Open Container Initiative (OCI) standards body as the OCI Image Specification .
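You can see this tar-plus-JSON structure directly; the exact file names vary by Docker version, but roughly:
$ docker pull alpine
$ docker save alpine -o alpine.tar
$ tar -tf alpine.tar    # one directory per layer (each with its own layer.tar and json), plus manifest.json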
Tools used to create container images are called container image builders . Sometimes container engines perform this task, but several standalone tools are available that can build container images.
Docker took these container images ( tarballs ) and moved them to a web service from which they could be pulled, developed a protocol to pull them, and called the web service a container registry .
Container engines are programs that can pull container images from container registries and reassemble them onto container storage . Container engines also launch container runtimes (see below).
Linux container internals. Illustration by Scott McCarty. CC BY-SA 4.0
Container storage is usually a copy-on-write (COW) layered filesystem. When you pull down a container image from a container registry, you first need to untar the rootfs and place it on disk. If you have multiple layers that make up your image, each layer is downloaded and stored on a different layer on the COW filesystem. The COW filesystem allows each layer to be stored separately, which maximizes sharing for layered images. Container engines often support multiple types of container storage, including overlay, devicemapper, btrfs, aufs, and zfs.
After the container engine downloads the container image to container storage, it needs to create a container runtime configuration. The runtime configuration combines input from the caller/user along with the content of the container image specification. For example, the caller might want to specify modifications to a running container's security, add additional environment variables, or mount volumes to the container.
The layout of the container runtime configuration and the exploded rootfs have also been standardized by the OCI standards body as the OCI Runtime Specification .
Finally, the container engine launches a container runtime that reads the container runtime specification; modifies the Linux cgroups, Linux security constraints, and namespaces; and launches the container command to create the container's PID 1. At this point, the container engine can relay stdin/stdout back to the caller and control the container (e.g., stop, start, attach).
Note that many new container runtimes are being introduced to use different parts of Linux to isolate containers. People can now run containers using KVM separation (think mini virtual machines) or they can use other hypervisor strategies (like intercepting all system calls from processes in containers). Since we have a standard runtime specification, these tools can all be launched by the same container engines. Even Windows can use the OCI Runtime Specification for launching Windows containers.
At a much higher level are container orchestrators. Container orchestrators are tools used to coordinate the execution of containers on multiple different nodes. Container orchestrators talk to container engines to manage containers. Orchestrators tell the container engines to start containers and wire their networks together. Orchestrators can monitor the containers and launch additional containers as the load increases.
Jul 30, 2018 | clusteringformeremortals.com
Your student could be the next Doogie Howser of Cloud Computing with free training and cloud computing resources July 30, 2018 August 1, 2018 daveberm Leave a comment
Students with any interest in Information Technology or Computer Science are going to be joining a world dominated by Cloud Computing . And of course the major cloud service providers (CSP) would all love to see the young people embrace their cloud platform to host the next big thing like Facebook, Instagram or SnapChat. The top three CSP all have free offerings for students, hoping to win their minds and hearts.
But before you jump right in to cloud computing, the novice student might want to start with some basic fundamentals of computer programming at one of the many free online resources, including Khan Academy.
Microsoft is offering free Azure services for students. There are two different offerings. The first is targeted at high school students ages 13+ and the second is geared towards college students 18+.
Microsoft Azure for Students Starter Offer is for those high school students that are interested in building applications in the cloud. While there are not as many free services or credits as being offered at the college level, there is certainly enough available for free to really get some hands on experience with some cutting edge technology for the self starter. How cool would it be for your high school to start a Cloud Computing Club, or to integrate this offering into some of the IT classes they may already be taking.
Azure for Students is targeted at the college level student and has many more features available for free. Any student in computer science or information technology should definitely get some hands on experience with these cutting edge cloud technologies and this is the perfect way to do it with no additional out of pocket expense.
A good way to get introduced to the Azure Cloud is to start with some free online training courses Microsoft delivers in partnership with Pluralsight.
AWS Educate . Not to be outdone, AWS also offers some free cloud services to students and educators. These seem to be in terms of free cloud credits, which if managed properly can go a long way. AWS also delivers an educational program that can be combined with an AP class in Computer Science if your high school wants to participate.
Google Cloud Platform (GCP) also has education grants available for computer science majors at accredited universities. These seem to be the most restrictive of the three as they are available for Computer Science Majors only at accredited universities.
GCP does also offer training, but from what I can find I don't see any free training offerings. If you want some hands-on training you will have to register for some classes . The plus side of this is that these classes all seem to be instructor led, either online or in an actual classroom. The downside is I don't think a lot of 13 year olds are going to shell out any money to start developing on the GCP when there are other free training opportunities available on AWS or Azure.
For the ambitious young student, the resources are certainly there for you to be the next Doogie Howser of Cloud Computing.
Jul 27, 2018 | www.tecmint.com
Virtualization and containers are hot topics in today's IT industry. In this article we will list the necessary tools to manage and configure both in Linux systems.
For many decades, virtualization has helped IT professionals to reduce operational costs and increase energy savings. A virtual machine (or VM for short) is an emulated computer system that runs on top of another system known as host.
VMs have limited access to the host's hardware resources (CPU, memory, storage, network interfaces, USB devices, and so forth). The operating system running on the virtual machine is often referred to as the guest operating system.
CPU ExtensionsBefore we proceed, we need to check if the virtualization extensions are enabled on our CPU(s). To do that, use the following command, where vmx and svm are the virtualization flags on Intel and AMD processors, respectively:
# grep --color -E 'vmx|svm' /proc/cpuinfo
No output means the extensions are either not available or not enabled in the BIOS. While you may continue without them, performance will be negatively impacted.
Install Virtualization Tools in LinuxTo begin, let's install the necessary tools. In CentOS you will need the following packages:
# yum install qemu-kvm libvirt libvirt-client virt-install virt-viewer
whereas in Ubuntu:
$ sudo apt-get install qemu-kvm qemu virt-manager virt-viewer libvirt-bin libvirt-dev
Next, we will download a CentOS 7 minimal ISO file for later use:
# wget http://mirror.clarkson.edu/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso
At this point we are ready to create our first virtual machine with the following specifications:
- RAM: 512 MB (Note that the host must have at least 1024 MB)
- 1 virtual CPU
- 8 GB disk
- Name: centos7vm
# virt-install --name=centos7vm --ram=1024 --vcpus=1 --cdrom=/home/user/CentOS-7-x86_64-Minimal-1804.iso --os-type=linux --os-variant=rhel7 --network type=direct,source=eth0 --disk path=/var/lib/libvirt/images/centos7vm.dsk,size=8
Depending on the computing resources available on the host, the above command may take some time to bring up the virtualization viewer. This tool will enable you to perform the installation as if you were doing it on a bare metal machine.
How to Manage Virtual Machines in Linux
After you have created a virtual machine, here are some commands you can use to manage it:
List all VMs:
# virsh list --all
Get info about a VM (centos7vm in this case):
# virsh dominfo centos7vm
Edit the settings of centos7vm in your default text editor:
# virsh edit centos7vm
Enable or disable autostart to have the virtual machine boot (or not) when the host does:
# virsh autostart centos7vm
# virsh autostart --disable centos7vm
Stop centos7vm:
# virsh shutdown centos7vm
Once it is stopped, you can clone it into a new virtual machine called centos7vm2 :
# virt-clone --original centos7vm --auto-clone --name centos7vm2
And that's it. From this point on, you may want to refer to the virt-install , virsh , and virt-clone man pages for further info.
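If you want to verify the clone, one simple follow-up (not part of the original article) is to boot it and attach to its text console with the same virsh tool; press Ctrl+] to leave the console:
# virsh start centos7vm2
# virsh console centos7vm2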
Feb 28, 2018 | www.datamation.com
By MikeOh Shark February 27 2018 09:44 PST
I just do an actual install to a flash drive. Format as ext4, reboot to the live media, and turn off journaling to save wear on the flash drive. Set /tmp, /var/log, /var/spool, and a few other frequently written directories to tmpfs; again to reduce wear on the flash drive. Turn off swap. I have been using a Linux on a flash drive for years and with prelink, ulatencyd, and preload, it runs as well as from a hard drive. I suppose the proper way would be to use an overlay filesystem and a persistence file but this worked for me. Just boot to USB. Another way would be to install to an external USB drive and put the boot loader on the external drive.
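One way to express the tmpfs mounts the commenter describes is a few /etc/fstab lines like these (a sketch; adjust paths and options to taste):
tmpfs  /tmp        tmpfs  defaults,noatime,mode=1777  0  0
tmpfs  /var/log    tmpfs  defaults,noatime            0  0
tmpfs  /var/spool  tmpfs  defaults,noatime            0  0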
Jan 20, 2018 | itsfoss.com
Wine 3.0 is out now with Direct3D 10, 11 support. You can run Windows software more effectively on Linux now.
Jul 16, 2017 | www.cyberciti.biz
How to install and setup LXC (Linux Container) on Fedora Linux 26
How do I install, create, and manage LXC (Linux Containers, an operating-system-level virtualization) on a Fedora Linux version 26 server?
LXC is an acronym for Linux Containers. It is an operating-system-level virtualization technology for running multiple isolated Linux distros (system containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on a Fedora Linux server.
Our sample setup
LXC is often described as a lightweight virtualization technology; you can think of it as a chroot jail on steroids. There is no guest operating system involved: you can only run Linux distros with LXC. You cannot run MS-Windows or *BSD or any other operating system with LXC, but you can run CentOS, Fedora, Ubuntu, Debian, Gentoo, or any other Linux distro. Traditional virtualization such as KVM/XEN/VMware and paravirtualization needs a full operating system image for each instance, and with traditional virtualization you can run any operating system.
Installation
Type the following dnf command to install lxc and related packages on Fedora 26:
$ sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl gpg
Start and enable needed services
First start the virtualization daemon named libvirtd and the lxc service using the systemctl command:
$ sudo systemctl start libvirtd.service
$ sudo systemctl start lxc.service
$ sudo systemctl enable lxc.service
Sample outputs:
Created symlink /etc/systemd/system/multi-user.target.wants/lxc.service → /usr/lib/systemd/system/lxc.service.
Verify that services are running:
$ sudo systemctl status libvirtd.service
Sample outputs:
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-07-13 07:25:30 UTC; 40s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3688 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─3688 /usr/sbin/libvirtd
           ├─3760 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─3761 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, sockets bound exclusively to interface virbr0
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: reading /etc/resolv.conf
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.11.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.13.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.14.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /etc/hosts - 3 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: read /var/lib/libvirt/dnsmasq/default.hostsfile
And:
$ sudo systemctl status lxc.service
Sample outputs:
● lxc.service - LXC Container Initialization and Autoboot Code
   Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2017-07-13 07:25:34 UTC; 1min 3s ago
     Docs: man:lxc-autostart
           man:lxc
 Main PID: 3830 (code=exited, status=0/SUCCESS)
      CPU: 9ms
Jul 13 07:25:34 nixcraft-f26 systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Jul 13 07:25:34 nixcraft-f26 systemd[1]: Started LXC Container Initialization and Autoboot Code.
LXC networking
To view configured networking interface for lxc, run:
$ sudo brctl show
Sample outputs:
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400293323       yes             virbr0-nic
You must set the default bridge to virbr0 in the file /etc/lxc/default.conf:
$ sudo vi /etc/lxc/default.conf
Sample config (replace lxcbr0 with virbr0 for lxc.network.link):
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
Save and close the file. To see the DHCP range used by containers, enter:
$ sudo systemctl status libvirtd.service | grep range
Sample outputs:
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
To check the current kernel for LXC support, enter:
$ lxc-checkconfig
Sample outputs:
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.11.9-300.fc26.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
How can I create a Ubuntu Linux container?
Type the following command to create an Ubuntu 16.04 LTS container:
$ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64
Sample outputs:
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)
To enable sshd, run: apt-get install openssh-server
For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
To set up the admin password, run:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd ubuntu
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Make sure the root account is locked out:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd
To start container run:
$ sudo lxc-start -n ubuntu-c1
To log in to the container named ubuntu-c1, use the ubuntu user and the password set earlier:
$ lxc-console -n ubuntu-c1
Sample outputs:
You can now install packages and configure your server. For example, to enable sshd, run apt-get command / apt command :
ubuntu@ubuntu-c1:~$ sudo apt-get install openssh-server
To exit from lxc-console, type Ctrl+a q to end the console session and return to the host.
How do I create a Debian Linux container?
Type the following command to create a Debian 9 ("stretch") container:
$ sudo lxc-create -t download -n debian-c1 -- -d debian -r stretch -a amd64
Sample outputs:
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created a Debian container (release=stretch, arch=amd64, variant=default)
To enable sshd, run: apt-get install openssh-server
For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
To set up the root account password, run:
$ sudo chroot /var/lib/lxc/debian-c1/rootfs/ passwd
Start the container and log in to it for management purposes:
$ sudo lxc-start -n debian-c1
$ lxc-console -n debian-c1
How do I create a CentOS Linux container?
Type the following command to create a CentOS 7 container:
$ sudo lxc-create -t download -n centos-c1 -- -d centos -r 7 -a amd64
Sample outputs:
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created a CentOS container (release=7, arch=amd64, variant=default)
To enable sshd, run: yum install openssh-server
For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/centos-c1/rootfs/ passwd
$ sudo lxc-start -n centos-c1
$ lxc-console -n centos-c1
How do I create a Fedora Linux container?
Type the following command to create a Fedora 25 container:
$ sudo lxc-create -t download -n fedora-c1 -- -d fedora -r 25 -a amd64
Sample outputs:
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created a Fedora container (release=25, arch=amd64, variant=default)
To enable sshd, run: dnf install openssh-server
For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/fedora-c1/rootfs/ passwd
$ sudo lxc-start -n fedora-c1
$ lxc-console -n fedora-c1
How do I create a CentOS 6 Linux container and store it on btrfs?
You need to create or format a disk as btrfs and use it:
# mkfs.btrfs /dev/sdb
# mount /dev/sdb /mnt/btrfs/
If you do not have /dev/sdb, create an image file using the dd or fallocate command as follows:
# fallocate -l 10G /nixcraft-btrfs.img
# losetup /dev/loop0 /nixcraft-btrfs.img
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs/
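Keep in mind that a loop device set up by hand like this will not survive a reboot. One way to make the mount persistent is an fstab entry that uses the loop option (a sketch using the image path from above; adjust paths to taste):
# /etc/fstab entry
/nixcraft-btrfs.img   /mnt/btrfs   btrfs   loop,defaults   0 0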
# btrfs filesystem show
Sample outputs:
Label: none  uuid: 4deee098-94ca-472a-a0b5-0cd36a205c35
        Total devices 1 FS bytes used 361.53MiB
        devid    1 size 10.00GiB used 3.02GiB path /dev/loop0
Now create a CentOS 6 LXC:
# lxc-create -B btrfs -P /mnt/btrfs/ -t download -n centos6-c1 -- -d centos -r 6 -a amd64
# chroot /mnt/btrfs/centos6-c1/rootfs/ passwd
# lxc-start -P /mnt/btrfs/ -n centos6-c1
# lxc-console -P /mnt/btrfs -n centos6-c1
# lxc-ls -P /mnt/btrfs/ -f
Sample outputs:
NAME        STATE    AUTOSTART  GROUPS  IPV4             IPV6
centos6-c1  RUNNING  0          -       192.168.122.145  -
How do I see a list of all available images?
Type the following command:
$ lxc-create -t download -n NULL -- --list
Sample outputs:
Setting up the GPG keyring
Downloading the image index
---
DIST RELEASE ARCH VARIANT BUILD
---
alpine 3.1 amd64 default 20170319_17:50
alpine 3.1 armhf default 20161230_08:09
alpine 3.1 i386 default 20170319_17:50
alpine 3.2 amd64 default 20170504_18:43
alpine 3.2 armhf default 20161230_08:09
alpine 3.2 i386 default 20170504_17:50
alpine 3.3 amd64 default 20170712_17:50
alpine 3.3 armhf default 20170103_17:50
alpine 3.3 i386 default 20170712_17:50
alpine 3.4 amd64 default 20170712_17:50
alpine 3.4 armhf default 20170111_20:27
alpine 3.4 i386 default 20170712_17:50
alpine 3.5 amd64 default 20170712_17:50
alpine 3.5 i386 default 20170712_17:50
alpine 3.6 amd64 default 20170712_17:50
alpine 3.6 i386 default 20170712_17:50
alpine edge amd64 default 20170712_17:50
alpine edge armhf default 20170111_20:27
alpine edge i386 default 20170712_17:50
archlinux current amd64 default 20170529_01:27
archlinux current i386 default 20170529_01:27
centos 6 amd64 default 20170713_02:16
centos 6 i386 default 20170713_02:16
centos 7 amd64 default 20170713_02:16
debian jessie amd64 default 20170712_22:42
debian jessie arm64 default 20170712_22:42
debian jessie armel default 20170711_22:42
debian jessie armhf default 20170712_22:42
debian jessie i386 default 20170712_22:42
debian jessie powerpc default 20170712_22:42
debian jessie ppc64el default 20170712_22:42
debian jessie s390x default 20170712_22:42
debian sid amd64 default 20170712_22:42
debian sid arm64 default 20170712_22:42
debian sid armel default 20170712_22:42
debian sid armhf default 20170711_22:42
debian sid i386 default 20170712_22:42
debian sid powerpc default 20170712_22:42
debian sid ppc64el default 20170712_22:42
debian sid s390x default 20170712_22:42
debian stretch amd64 default 20170712_22:42
debian stretch arm64 default 20170712_22:42
debian stretch armel default 20170711_22:42
debian stretch armhf default 20170712_22:42
debian stretch i386 default 20170712_22:42
debian stretch powerpc default 20161104_22:42
debian stretch ppc64el default 20170712_22:42
debian stretch s390x default 20170712_22:42
debian wheezy amd64 default 20170712_22:42
debian wheezy armel default 20170712_22:42
debian wheezy armhf default 20170712_22:42
debian wheezy i386 default 20170712_22:42
debian wheezy powerpc default 20170712_22:42
debian wheezy s390x default 20170712_22:42
fedora 22 amd64 default 20170216_01:27
fedora 22 i386 default 20170216_02:15
fedora 23 amd64 default 20170215_03:33
fedora 23 i386 default 20170215_01:27
fedora 24 amd64 default 20170713_01:27
fedora 24 i386 default 20170713_01:27
fedora 25 amd64 default 20170713_01:27
fedora 25 i386 default 20170713_01:27
gentoo current amd64 default 20170712_14:12
gentoo current i386 default 20170712_14:12
opensuse 13.2 amd64 default 20170320_00:53
opensuse 42.2 amd64 default 20170713_00:53
oracle 6 amd64 default 20170712_11:40
oracle 6 i386 default 20170712_11:40
oracle 7 amd64 default 20170712_11:40
plamo 5.x amd64 default 20170712_21:36
plamo 5.x i386 default 20170712_21:36
plamo 6.x amd64 default 20170712_21:36
plamo 6.x i386 default 20170712_21:36
ubuntu artful amd64 default 20170713_03:49
ubuntu artful arm64 default 20170713_03:49
ubuntu artful armhf default 20170713_03:49
ubuntu artful i386 default 20170713_03:49
ubuntu artful ppc64el default 20170713_03:49
ubuntu artful s390x default 20170713_03:49
ubuntu precise amd64 default 20170713_03:49
ubuntu precise armel default 20170713_03:49
ubuntu precise armhf default 20170713_03:49
ubuntu precise i386 default 20170713_03:49
ubuntu precise powerpc default 20170713_03:49
ubuntu trusty amd64 default 20170713_03:49
ubuntu trusty arm64 default 20170713_03:49
ubuntu trusty armhf default 20170713_03:49
ubuntu trusty i386 default 20170713_03:49
ubuntu trusty powerpc default 20170713_03:49
ubuntu trusty ppc64el default 20170713_03:49
ubuntu xenial amd64 default 20170713_03:49
ubuntu xenial arm64 default 20170713_03:49
ubuntu xenial armhf default 20170713_03:49
ubuntu xenial i386 default 20170713_03:49
ubuntu xenial powerpc default 20170713_03:49
ubuntu xenial ppc64el default 20170713_03:49
ubuntu xenial s390x default 20170713_03:49
ubuntu yakkety amd64 default 20170713_03:49
ubuntu yakkety arm64 default 20170713_03:49
ubuntu yakkety armhf default 20170713_03:49
ubuntu yakkety i386 default 20170713_03:49
ubuntu yakkety powerpc default 20170713_03:49
ubuntu yakkety ppc64el default 20170713_03:49
ubuntu yakkety s390x default 20170713_03:49
ubuntu zesty amd64 default 20170713_03:49
ubuntu zesty arm64 default 20170713_03:49
ubuntu zesty armhf default 20170713_03:49
ubuntu zesty i386 default 20170713_03:49
ubuntu zesty powerpc default 20170317_03:49
ubuntu zesty ppc64el default 20170713_03:49
ubuntu zesty s390x default 20170713_03:49
---
How do I list the containers existing on the system?
Type the following command:
$ lxc-ls -f
Sample outputs:
NAME       STATE    AUTOSTART  GROUPS  IPV4             IPV6
centos-c1  RUNNING  0          -       192.168.122.174  -
debian-c1  RUNNING  0          -       192.168.122.241  -
fedora-c1  RUNNING  0          -       192.168.122.176  -
ubuntu-c1  RUNNING  0          -       192.168.122.56   -
How do I query information about a container?
The syntax is:
$ lxc-info -n {container}
$ lxc-info -n centos-c1
Sample outputs:
Name:           centos-c1
State:          RUNNING
PID:            5749
IP:             192.168.122.174
CPU use:        0.87 seconds
BlkIO use:      6.51 MiB
Memory use:     31.66 MiB
KMem use:       3.01 MiB
Link:           vethQIP1US
 TX bytes:      2.04 KiB
 RX bytes:      8.77 KiB
 Total bytes:   10.81 KiB
How do I stop/start/restart a container?
The syntax is:
$ sudo lxc-start -n {container}
$ sudo lxc-start -n fedora-c1
$ sudo lxc-stop -n {container}
$ sudo lxc-stop -n fedora-c1
How do I monitor container statistics?
To display containers, updating every second, sorted by memory use:
$ lxc-top --delay 1 --sort m
To display containers, updating every second, sorted by cpu use:
$ lxc-top --delay 1 --sort c
To display containers, updating every second, sorted by block I/O use:
$ lxc-top --delay 1 --sort b
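For one-off queries instead of a live display, lxc-cgroup reads (or sets) individual cgroup values for a container. A short sketch, assuming the cgroup v1 controller names in use on hosts of this era:
$ sudo lxc-cgroup -n centos-c1 memory.usage_in_bytes             # current memory use
$ sudo lxc-cgroup -n centos-c1 cpuset.cpus                       # CPUs the container may use
$ sudo lxc-cgroup -n centos-c1 memory.limit_in_bytes 536870912   # cap memory at 512 MiB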
How do I destroy/delete a container?
The syntax is:
$ sudo lxc-destroy -n {container}
$ sudo lxc-stop -n fedora-c2
$ sudo lxc-destroy -n fedora-c2
If a container is running, you can stop and destroy it in one step:
$ sudo lxc-destroy -f -n fedora-c2
How do I create, list, and restore container snapshots?
The syntax for snapshot operations is as follows. Please note that you must use a snapshot-aware storage backend such as Btrfs, ZFS, or LVM.
Create a snapshot of a container:
$ sudo lxc-snapshot -n {container} -c "comment for snapshot"
$ sudo lxc-snapshot -n centos-c1 -c "13/July/17 before applying patches"
List snapshots of a container:
$ sudo lxc-snapshot -n centos-c1 -L -C
Restore a snapshot of a container:
$ sudo lxc-snapshot -n centos-c1 -r snap0
Destroy/delete a snapshot of a container:
$ sudo lxc-snapshot -n centos-c1 -d snap0
Posted by: Vivek Gite
The author is the creator of nixCraft and a seasoned sysadmin and trainer for the Linux operating system and Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter, Facebook, and Google+.
May 29, 2017 | news.softpedia.com
What's new in this release (see below for details):
- TCP and UDP connection support in WebServices.
- Various shader improvements for Direct3D 11.
- Improved support for high DPI settings.
- Partial reimplementation of the GLU library.
- Support for recent versions of OSMesa.
- Window management improvements on macOS.
+ Direct3D command stream runs asynchronously.
+ Better serial and parallel ports autodetection.
+ Still more fixes for high DPI settings.
+ System tray notifications on macOS.
- Various bug fixes.
- ... improved support for Warhammer 40,000: Dawn of War III, which will be ported to the Linux and SteamOS platforms by Feral Interactive on June 8. Wine 2.9 is here to introduce support for tessellation shaders in Direct3D, binary mode support in WebServices, RegEdit UI improvements, and clipboard changes detected through Xfixes.
...
The Wine 2.9 source tarball can be downloaded right now from our website if you fancy compiling it on your favorite GNU/Linux distribution, but please try to keep in mind that this is a pre-release version not suitable for production use. We recommend installing the stable Wine branch if you want to have a reliable and bug-free experience.
Wine 2.9 will also be installable from the software repos of your operating system in the coming days.
May 19, 2015 | ZDNet
Shuttleworth said, "LXD crushes traditional virtualisation for common enterprise environments, where density and raw performance are the primary concerns. Canonical is taking containers to the level of a full hypervisor, with guarantees of CPU, RAM, I/O and latency backed by silicon and the latest Ubuntu kernels."
So what is crushing? According to Shuttleworth, LXD runs guest machines 14.5 times more densely and with 57 percent less latency than KVM. So, for example, you can run 37 KVM Ubuntu VMs on a 16GB Intel server, or an amazing 536 LXD Ubuntu containers on the same hardware.
Shuttleworth also stated that LXD was far faster than KVM. For example, all 536 guests started with LXD in far less time than it took KVM to launch its 37 guests. "On average," he claimed, "LXD guests started in 1.5 seconds, while KVM guests took 25 seconds to start."
As for latency, Shuttleworth boasted that, "Without the overhead emulation of a VM, LXD avoids the scheduling latencies and other performance hazards. Using a sample 0MQ [a popular Linux high-performance asynchronous messaging library] workload, LXD guests had 57 percent less latency than KVM guests."
Thus, LXD should cut more than half of the latency for such latency-sensitive workloads as voice or video transcode. This makes LXD an important potential tool in the move to network function virtualisation (NFV) in telecommunications and media, and the convergence of cloud and high performance computing.
Indeed, Shuttleworth claimed that with LXD the Ubuntu containers ran at speeds so close to bare-metal that they couldn't see any performance difference. Now, that's impressive!
LXD, however, as Shuttleworth pointed out, is not a replacement for KVM or other hypervisor technologies such as Xen. Indeed, it can't replace them. In addition, LXD is not trying to displace Docker as a container technology.
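For readers who want to try the LXD workflow being described, the basic client commands look like the sketch below (it assumes an Ubuntu host with the lxd package available; note that LXD's client binary is called lxc, which is distinct from the lxc-* tools shown earlier, and web1 is just a made-up container name):
$ sudo apt install lxd
$ sudo lxd init                     # answer the storage/network prompts
$ lxc launch ubuntu:16.04 web1      # create and start a container from the ubuntu image server
$ lxc list
$ lxc exec web1 -- bash             # get a shell inside the container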
Virtually A Machine
No website about Xen can be considered complete without an opinion on this topic. KVM was included in the Linux kernel and is considered the right solution by most distributions and top Linux developers, including Linus Torvalds himself. This made many people think Xen is somehow inferior or on its way to decline. The truth is, these solutions differ both in terms of underlying technology and common applications.
How Xen works
Xen not only didn't make it into the main tree of the Linux kernel - it doesn't even run on Linux, although it looks like it does. It's a bare-metal hypervisor (or type 1 hypervisor): a piece of software that runs directly on hardware. If you install a Xen package on your normal Linux distribution, after rebooting you will see Xen messages first. It will then boot your existing system into a first, specially privileged virtual machine called dom0.
This makes the process quite complex. If you start experimenting with Xen and at the first attempt make your machine unbootable, don't worry - it has happened to many people, including yours truly. You can also download XenServer - a commercial but free distribution of Xen which comes with a simple-to-use installer, a specially tailored, minimal Linux system in dom0, and enterprise-class management tools. I'll write some more about the differences between XenServer and "community" Xen in a few days.
It also means you won't be able to manipulate VMs using ordinary Linux tools, e.g. stop them with kill and monitor them with top. However, Xen comes with some great management software, and even greater 3rd-party apps are available (be careful, some of them don't work with XenServer). They can fully utilize interesting features of Xen, like storing snapshots of VMs and live migration between physical servers.
Xen is also special for its use of a technology called paravirtualization. In short, it means that the guest operating system knows it runs on a virtualized system. There is an obvious downside: it needs to be specially modified, although with open source OSes that's not much of an issue. But there's also one very important advantage: speed. Xen delivers almost native performance. Other virtualization platforms use this approach in a very limited way, usually in the form of a driver package that you install on guest systems. This improves the speed compared to a completely non-paravirtualized system, but is still far from what can be achieved with Xen.
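To make this concrete, here is roughly what managing Xen guests looks like from dom0 (a sketch, not tied to any particular distribution; guest1, vg0 and xenbr0 are made-up names, and older Xen installs use the xm command where newer ones use xl):
# /etc/xen/guest1.cfg - a minimal paravirtualized guest definition
name = "guest1"
memory = 1024
vcpus = 2
disk = ['phy:/dev/vg0/guest1,xvda,w']
vif = ['bridge=xenbr0']
bootloader = "pygrub"
Then, from dom0:
# xl create /etc/xen/guest1.cfg    # start the guest
# xl list                          # shows Domain-0 plus any running guests
# xl console guest1                # attach to the guest's console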
How KVM works
KVM runs inside a Linux system, not above it - it's called a type 2, or hosted, hypervisor. This has several significant implications. From a technical point of view, it is easier to deploy and manage, with no need for special boot-time support; but it is also harder to deliver good performance. From a political point of view, Linux developers view it as superior to Xen because it's a part of the system, not an outside piece of software.
KVM requires a CPU with hardware virtualization support. Most newer server, desktop and laptop processors from Intel and AMD work with KVM. Older CPUs or low-power units for netbooks, PDAs and the like lack this feature. Hardware-assisted virtualization makes it possible to run an unmodified operating system at an adequate speed. Xen can do it too, although this feature is mostly used to run Windows or other proprietary guests. Even with hardware support, pure virtualization is still much slower than paravirtualization.
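A quick way to check whether a given machine can use KVM (a sketch; vmx is the Intel VT-x flag and svm the AMD-V flag):
$ egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means hardware virtualization is advertised
$ lsmod | grep kvm                      # shows whether the kvm / kvm_intel / kvm_amd modules are loaded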
Rest of the world
Some VMware server platforms and Microsoft Hyper-V are bare-metal hypervisors, like Xen. VMware's desktop solutions (Player, Workstation) are hosted, as are QEMU, VirtualBox, Microsoft Virtual PC and pretty much everything else. None of them employs full paravirtualization, although they sometimes offer drivers that improve the performance of guest systems.
KVM only runs on machines with hardware virtualization support. Some enterprise platforms have this requirement too. VirtualBox and desktop versions of VMware work on CPUs lacking virtualization support, but the performance is greatly reduced.
What should you choose?
For the server, grid or cloud
If you want to run Linux, BSD or Solaris guests, nothing beats the paravirtualized performance of Xen. For Windows and other proprietary operating systems, there's not much difference between the platforms. Performance and features are similar.
In the beginning KVM lacked live migration and good tools. Nowadays most open source VM management applications (like virt-manager on the screenshot) support both Xen and KVM. Live migration was added in 2007. The whole system is considered stable, although some people still have reservations and think it's not mature enough. Out of the box support in leading Linux distributions is definitely a good point.
VMware is the most widespread solution - as they proudly point out, it's used by all of the Fortune 100 companies. Its main disadvantage is poor support from the open source community. If the free management software from VMware is not enough for you, you usually have no choice but to buy a commercial solution - and they don't come cheap. Expect to pay several thousand dollars per server or even per CPU.
My subjective choice would be: 1 - Xen, 2 - KVM, 3 - VMware ESXi.
For the personal computer
While Xen is my first choice for the server, it would be very far down the list of "best desktop virtualization platforms". One reason is poor support for power management. It is slowly improving, but I still wouldn't install Xen on my laptop. Also, the installation method is fine for server platforms but inconvenient for the desktop.
KVM falls somewhere in the middle. As a hosted hypervisor, it's easier to run, and your Linux distribution probably already supports it. Yet it lacks some of the user-friendliness of true desktop solutions, and if your CPU doesn't have virtualization extensions, you're out of luck.
VMware Player (free of charge, but not open source) is extremely easy to use when you want to run VMs prepared by somebody else (hence the name Player - nothing to do with games). Creating a new machine requires editing a configuration file or using external software (e.g. this web-based VM creator). What I really like is the convenient hardware management (see screenshot) - just one click to decide whether your USB drive belongs to the host or the guest operating system, another to mount an ISO image as the guest's DVD-ROM. Another feature is easy file sharing between guest and host. Player's bigger brother is VMware Workstation (about $180). It comes with the ability to create new VMs as well as some other additions. Due to the number of features it is slightly harder to use, but still very user-friendly.
VMware offers special drivers for guest operating systems. They are bundled with Workstation; for Player they have to be downloaded separately (or you can borrow them from Workstation, even from the demo download - the license allows it). They are especially useful if you want to run a Windows guest; even on older CPUs without hardware assist it's quite responsive.
VirtualBox comes close to VMware. It also has the desktop look&feel and runs on non-hardware-assisted platforms. Bundled guest additions improve performance of virtualized systems. Sharing files and hardware is easy - but not that easy. Overall, in both speed and features, it comes second.
My subjective choice: 1 - VMware Player or Workstation, 2 - VirtualBox, 3 - KVM
EDIT: I later found out that the new version of VirtualBox is superior to VMware Player.
In this paper, we question whether hypervisors are really acting as a disruptive force in OS research, instead arguing that they have so far changed very little at a technical level. Essentially, we have retained the conventional Unix-like OS interface and added a new ABI based on PC hardware which is highly unsuitable for most purposes.
Despite commercial excitement, focus on hypervisor design may be leading OS research astray. However, adopting a different approach to virtualization and recognizing its value to academic research holds the prospect of opening up kernel research to new directions.
Because KVM virtual machines are regular processes, the standard memory conservation techniques apply. But unlike regular processes, KVM guests contain a nested operating system, which impacts memory overcommitment in two key ways. KVM guests can have greater memory overcommitment potential than regular processes, due to a large difference between minimum and maximum guest memory requirements caused by swings in utilization.
Capitalizing on this variability is central to the appeal of virtualization, but it is not always easy. While the host is managing the memory allocated to a KVM guest, the guest kernel is simultaneously managing the same memory. Lacking any form of collaboration between host and guest, neither the host nor the guest memory manager is able to make optimal decisions regarding caching and swapping, which can lead to less efficient use of memory and degraded performance.
Linux provides additional mechanisms to address memory overcommitment specific to virtualization; a short host-side example follows the list below.
- Memory ballooning is a technique in which the host instructs a cooperative guest to release some of its assigned memory so that it can be used for another purpose. This technique can help refocus memory pressure from the host onto a guest.
- Kernel Same-page Merging (KSM) uses a kernel thread that scans previously identified memory ranges for identical pages, merges them together, and frees the duplicates. Systems that run a large number of homogeneous virtual machines benefit most from this form of memory sharing.
- Other resource management features such as Cgroups have applications in memory overcommitment that can dynamically shuffle resources among virtual machines.
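As a concrete illustration of two of the mechanisms above, the following host-side sketch enables KSM through sysfs and asks libvirt to balloon a guest down to 1 GiB. It assumes a host where KSM is exposed under /sys/kernel/mm/ksm and guests are managed with virsh; the domain name vm1 is made up:
# echo 1 > /sys/kernel/mm/ksm/run
# grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing   # how much is being merged
# virsh setmem vm1 1048576 --live    # balloon the running guest to 1048576 KiB (1 GiB)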
Learn how to use the open source Clonezilla Live cloning software to convert your physical server to a virtual one. Specifically, see how to perform a physical-to-virtual system migration using an image-based method.
As mentioned, IBM has one virtualization type on their midrange systems, PowerVM, formerly referred to as Advanced Power Virtualization. IBM uses a type-1 hypervisor for its logical partitioning and virtualization, similar in some respects to Sun Microsystems' LDOMs and VMware's ESX server. Type-1 hypervisors run directly on the host's hardware, controlling the hardware and the guest operating systems; this is an evolution of IBM's classic original hypervisor, VM/CMS. Generally speaking, they are more efficient, more tightly integrated with hardware, better performing, and more reliable than other types of hypervisors. Figure 1 illustrates some of the fundamental differences between the different types of partitioning and hypervisor-based virtualization solutions. IBM LPARs and HP vPars fall into the first example -- hardware partitioning (through their logical partitioning products), while HP also offers physical partitioning through nPars.
Figure 1. Server virtualization approaches
IBM's solution, sometimes referred to as para-virtualization, embeds the hypervisor within the hardware platform. The fundamental difference with IBM is that there is one roadmap, strategy, and hypervisor, all integrated around one hardware platform: IBM Power Systems. Because of this clear focus, IBM can enhance and innovate, without trying to mix and match many different partitioning and virtualization models around different hardware types. Further, they can integrate their virtualization into the firmware, where HP simply cannot or chooses not to.
Intel Enterprise Virtualization and Consolidation Page
Comparison of virtual machines - Wikipedia, the free encyclopedia
VMware - Wikipedia, the free encyclopedia
Virtual machine - Wikipedia, the free encyclopedia
Computer Laboratory - Xen virtual machine monitor
Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research. For further details contact [email protected].
Microsoft Virtual PC Official Website
VMware Official Website
VMware Community WebForum
VMware's Back by Kenji Kato
Rob Bastiaansen VMware page
Virtual Machine Technology Online Training