What is a Container?
Containers are, in short, a technology that solves the problem of getting software that runs in one server environment to run reliably when it is moved to another server environment.
This might be from a developer’s laptop to a test environment, from a staging environment into production, or perhaps from a physical machine in a data center to a virtual machine in a private or public cloud. Docker creator Solomon Hykes says the problems arise when the supporting software environment is not identical. “You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen. Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed. You’ll run your tests on Debian and production will be on Red Hat, and all sorts of weird things happen.” And it is not just different software versions that can cause problems, he adds: “The network topology might be different, or the security policies and storage might be different, but the software has to run on all these varying conditions.”
How do containers solve this problem?
Simply put, a container consists of an entire runtime environment in a single package: the application, plus all of its dependencies, libraries and other binaries, and the configuration files needed to run it. By containerizing the application platform and its dependencies, differences in OS distributions and in the underlying infrastructure are abstracted away.
Here, runtime refers to the collection of software and hardware resources that enable a piece of software to run on a system; it is everything the program needs in order to execute, regardless of the programming language it was written in.
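To make this concrete, here is a minimal sketch in Python, assuming a locally running Docker Engine and the “docker” Python SDK (pip install docker); the image name python:3.12-alpine is only an example. Because the image ships its own Python runtime and libraries, the version printed is determined by the image, not by whatever happens to be installed on the host.

```python
# Minimal sketch: the container image bundles its own runtime, so the result
# does not depend on the host's Python installation (or the lack of one).
# Assumes a local Docker Engine and the "docker" Python SDK.
import docker

client = docker.from_env()

output = client.containers.run(
    "python:3.12-alpine",                                # example image
    ["python", "-c", "import sys; print(sys.version)"],
    remove=True,                                         # clean up after exit
)
print(output.decode())
```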
What is the difference between containers and virtual machines?
Virtualization is a technology that makes more efficient use of existing physical resources by emulating them as virtual servers. With this approach, however, each virtual machine runs its own operating system and the applications have to run on top of those operating systems, so a physical server running three virtual machines has a hypervisor and three separate operating systems on it.
In contrast, with containers, a server running three containerized applications runs a single operating system, and each container shares that operating system kernel with the others. The shared parts of the operating system are read-only, while each container has its own writable area for its data. This means that containers are much lighter and use far fewer resources than virtual machines.
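As a small illustration of the shared kernel, the hedged sketch below (assuming a Linux host, a local Docker Engine and the “docker” Python SDK) prints the kernel version seen on the host and inside an alpine container; the two match, because the container does not boot a guest kernel of its own.

```python
# Minimal sketch: on a Linux host, a container reports the host's kernel,
# because containers share the kernel rather than booting their own.
import platform
import docker

client = docker.from_env()

in_container = client.containers.run(
    "alpine", ["uname", "-r"], remove=True
).decode().strip()

print("host kernel     :", platform.release())
print("container kernel:", in_container)  # identical on a Linux host
```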
What other advantages do containers offer?
A container can be only tens of megabytes in size, while a virtual machine with its own entire operating system can be several gigabytes. A single server can therefore host many more containers than virtual machines.
Another important advantage is that virtual machines can take a few minutes to initialize their operating systems and run the applications they host; containers, on the other hand, can start up instantly when needed and can be deleted completely automatically when the need is over, allowing the resources in the system to be used much more dynamically.
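The difference in startup time is easy to see for yourself. The sketch below, assuming a local Docker Engine with the alpine image already pulled, times how long it takes to create a throwaway container, run a command in it and remove it again; it is usually a matter of a second or so, while a virtual machine would still be booting its operating system.

```python
# Minimal sketch: time how long a disposable container takes to start, run a
# trivial command and be removed. Assumes the "alpine" image is already pulled.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
client.containers.run("alpine", ["true"], remove=True)
elapsed = time.perf_counter() - start

print(f"container started, ran and was removed in {elapsed:.2f}s")
```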
A third benefit is that containers allow for greater modularity. Rather than running an entire complex application inside a single container, the application can be split into modules (the database, the front-end applications, and so on), with each module running in its own container; this is the microservices approach. Applications built this way are easier to manage, because each module is relatively simple and changes can be made to a module without having to rebuild the whole application. And because containers are so lightweight, individual modules (or microservices) can be launched only when they are needed and are available almost immediately, as sketched below.
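As a hedged sketch of that modularity (assuming a local Docker Engine and the “docker” Python SDK; the image and container names are purely illustrative), the example below starts a database and a front end as two separate containers on a shared network, so each module can be stopped, upgraded or scaled on its own.

```python
# Minimal sketch: two modules, each in its own container, on a shared network.
# Names and images are illustrative only.
import docker

client = docker.from_env()

network = client.networks.create("demo-net", driver="bridge")

db = client.containers.run(
    "postgres:16-alpine",
    name="demo-db",
    environment={"POSTGRES_PASSWORD": "example"},
    network="demo-net",
    detach=True,
)

frontend = client.containers.run(
    "nginx:alpine",
    name="demo-frontend",
    network="demo-net",        # can reach the database by the name "demo-db"
    ports={"80/tcp": 8080},
    detach=True,
)

# Each module can be stopped, upgraded or scaled independently of the other.
for c in (frontend, db):
    c.stop()
    c.remove()
network.remove()
```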
Is there a standard Container type?
Until 2015, Docker’s specification was the de facto standard in the container world, but that year a company called CoreOS published its own App Container Image (ACI) specification, which differed from Docker’s, and the Linux container world risked fragmenting into competing formats defined by different players.
In the same year, however, an initiative called the Open Container Project was announced; it was later renamed the Open Container Initiative (OCI). Operating under the auspices of the Linux Foundation, the OCI’s goal is to create and develop industry standards for a container format and runtime across all platforms. The initiative was initially based on Docker’s technology, and Docker donated about 5 percent of its codebase to get the project off the ground.
The sponsors of the project include AWS, Google, IBM, HP, Microsoft, VMware, Red Hat, Oracle and Twitter, as well as Docker and CoreOS.
Why are all these companies participating in the OCI Initiative?
The idea of OCI is to standardize the basic building blocks of container technology (such as the container format) so that everyone can benefit from them. This means that organizations can focus on developing the additional software needed to support standard container use in an enterprise or cloud environment, rather than spending resources on developing competing container technologies.
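That standardization is visible in the image format itself. The sketch below, assuming a local Docker Engine and the “docker” Python SDK, pulls a small public image and prints a few fields of its metadata (field names follow the output of docker image inspect); because the format is standardized, the same image can be consumed by any OCI-compliant runtime, not just Docker.

```python
# Minimal sketch: pull an image and print a few metadata fields. The attrs
# dictionary mirrors the output of "docker image inspect".
import docker

client = docker.from_env()

image = client.images.pull("alpine", tag="latest")
print("id          :", image.short_id)
print("architecture:", image.attrs.get("Architecture"))
print("os          :", image.attrs.get("Os"))
print("layers      :", len(image.attrs.get("RootFS", {}).get("Layers", [])))
```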
How secure are containers?
Many people believe that containers are less secure than virtual machines, on the assumption that if there is a vulnerability in the kernel of the host on which a container runs, the risk can extend to the other containers that share that kernel. The same is true of a hypervisor, but because a hypervisor provides far less functionality than a Linux kernel (which typically implements file systems, networking, process controls and so on), it presents a much smaller attack surface.
In the last few years, however, a great deal of effort has gone into improving container security, for example with digital signing systems to prevent unapproved containers from running, and with kernel hardening and updates. Managing all of this is difficult and complex, so software has been developed to handle it. Twistlock, for example, offers software that profiles a container’s expected behavior and “whitelists” processes, networking activity (such as source and destination IP addresses and ports) and even certain storage practices, so that any malicious or unexpected behavior can be flagged.
Another container security company, Polyverse, takes a different approach: to minimize the time an attacker has to exploit an application running in a container, it can relaunch containers every few seconds in a known good state, limiting the window for malicious access.
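Beyond third-party products, the container runtime itself exposes kernel-level hardening controls. The sketch below is not the Twistlock or Polyverse tooling described above, just a hedged example (assuming a local Docker Engine and the “docker” Python SDK) of reducing a container’s attack surface with a read-only root filesystem, no Linux capabilities and no privilege escalation.

```python
# Minimal sketch: run a container with a reduced attack surface.
# Not a substitute for the behavioral profiling tools described above.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine",
    ["sleep", "60"],
    read_only=True,                      # root filesystem cannot be modified
    cap_drop=["ALL"],                    # drop every Linux capability
    security_opt=["no-new-privileges"],  # block privilege escalation
    detach=True,
)
print(container.short_id, "is running with a hardened configuration")

container.stop()
container.remove()
```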
Will containers replace server virtualization?
This is unlikely in the foreseeable future, for a number of important reasons.
First, there is a widespread view that virtual machines offer better security than containers because of the increased level of isolation they provide.
Second, container management tools are not yet as comprehensive or as easy to use as the software used to manage virtualized infrastructure, such as VMware’s vCenter or Microsoft’s System Center, and companies that have made significant investments in such software are unlikely to abandon their virtualized infrastructure. In addition, the complexity of the container stack, the slow pace of standardization of the management tools at each layer, and the general immaturity of the tooling in this space also shape how companies approach the question.
Perhaps more importantly, virtualization and containers are seen as complementary technologies rather than competitors. This is because containers can be run on lightweight virtual machines to improve isolation and therefore security, and hardware virtualization makes it easier to manage the hardware infrastructure (networks, servers and storage) needed to support containers.
VMware encourages customers who have invested in its virtual machine management infrastructure to run containers on its Photon OS Linux distribution inside lightweight virtual machines that can then be managed from vCenter; this is VMware’s approach of complementing virtual machines with containers rather than replacing them.
Today, we are closely following VMware’s all-encompassing approach with Project Tanzu, which is now very close to maturity.
Both approaches have their benefits, but the key will be adapting to use containers within a virtualized infrastructure rather than treating them as a replacement for virtual machines.
Of course, perhaps the most decisive factor shaping these approaches will be how vendors handle their licensing models: companies will not want to keep being crushed under license fees in an ecosystem that has open-source technologies at its heart.
So who really stands to benefit from containers?
There is no definitive answer to this difficult question, but we do know that container architectures are now being tried, at least by development teams, in every sector that takes an “agile” approach to developing technology and the business models that depend on it: e-commerce companies, new internet-based startups, social platforms, e-government agencies, big-data analytics companies and so on. Many global e-commerce companies have already been building their systems on these platforms for some time. The transformation of companies that use and develop what we call monolithic applications, however, is still at the very beginning of the road. In short:
- If you need a system that automatically replicates itself and scales its services out under load,
- If you want to make your service layer more modular,
- If the services you provide need load balancing and high availability in different geographical locations,
- If you are a software company and need to manage the processes you use to build your applications in a much more streamlined and automated way,
- If you aim for your software to be platform-independent,
then container architectures may be a good fit for you. Beyond that, this is also a very good time to make the investments you have already made compatible with such systems, or to transform them.
Find out about the differences between Kubernetes and Docker: https://devopstipstricks.com/kubernetes-vs-docker/