Linux Containers and the Future Cloud
1 Linux Containers and the Future Cloud Rami Rosen
2 About me Bio: Rami Rosen, a Linux kernel expert, author of the recently published book Linux Kernel Networking - Implementation and Theory (648 pages, Apress).
3 Table of Contents: Namespaces; Namespaces implementation; UTS namespace; Network namespaces; PID namespaces; Cgroups; cgroups and kernel namespaces; cgroups VFS; devices cgroup controller; Linux containers; LXC management; Docker; Dockerfile; CRIU
4 General Lightweight process virtualization is not new: Solaris Zones, BSD jails, AIX WPARs (Workload Partitions), and various Linux-based container projects. Why now? Primarily because kernel support is now available (namespaces and cgroups), completed in kernel 3.8 (released in February 2013).
5 Scope and Agenda This lecture is a sequel to a lecture given in May 2013 in Haifux: Resource Management in Linux. Agenda: namespaces and cgroups, the Linux container building blocks; Linux-based containers (focusing on the LXC open source project, its implementation, and some hands-on examples); Docker, an engine for creating and deploying containers; CRIU (Checkpoint/Restore In Userspace). Scope: we will not deal in depth with the security aspects of containers, nor with containers in Android.
6 Namespaces There are currently 6 namespaces: mnt (mount points, filesystems), pid (processes), net (network stack), ipc (System V IPC), uts (hostname), user (UIDs). Eric Biederman talked about 10 namespaces at OLS. The security namespace and the security keys namespace will probably not be developed (replaced by user namespaces); a time namespace has also been proposed. An RFC for device namespaces was recently sent (Android; Cellrox).
7 6 New CLONE Flags 6 new flags to the clone() system call, with the capability required to use each:
CLONE_NEWNS - CAP_SYS_ADMIN
CLONE_NEWUTS - CAP_SYS_ADMIN
CLONE_NEWIPC - CAP_SYS_ADMIN
CLONE_NEWPID - CAP_SYS_ADMIN
CLONE_NEWNET - CAP_SYS_ADMIN
CLONE_NEWUSER (kernel 3.8) - no capability required
8 3 System calls for handling namespaces Three system calls are used for namespaces: clone() creates a new process and a new namespace; the process is attached to the new namespace. The process creation and termination methods, fork() and exit(), were patched to handle the new CLONE_NEW* flags. unshare() does not create a new process; it creates a new namespace and attaches the current process to it. The unshare() system call was added in 2005 (see "new system call, unshare"). setns(int fd, int nstype) is a new system call for joining an existing namespace, available only from kernel 3.0; see man 2 setns.
9 Namespaces do *not* have names in the kernel. With the ip netns subcommand we can assign names to network namespaces, but these are kept in userspace, not in the kernel. An inode is created for each namespace when the namespace is created: ls -al /proc/<pid>/ns
10 Namespaces implementation A member named nsproxy was added to the process descriptor, struct task_struct. nsproxy includes 5 inner namespaces: uts_ns, ipc_ns, mnt_ns, pid_ns, net_ns. Notice that the user namespace is missing from this list; it is a member of the credentials object (struct cred), which is itself a member of task_struct. This is better, in terms of performance, than having a separate object for each namespace and incrementing the reference counter of each when forking. A method named task_nsproxy(struct task_struct *tsk) gives access to the nsproxy of a specified process (include/linux/nsproxy.h). There is an initial, default instance of each of the 6 namespaces.
11 Userspace additions Userspace additions to support namespaces: the iproute2 package - additions like ip netns add/ip netns del, ip link set ... netns, and more (discussed in later slides); the util-linux package - the unshare utility, with support for all 6 namespaces, and nsenter, a wrapper around setns(); shadow/shadow-utils (for supporting user namespaces). Still not integrated in all distros.
12 UTS namespace The UTS (Unix Timesharing) namespace is very simple to implement. A member named uts_ns (a uts_namespace object) was added to the nsproxy structure. The uts_ns object includes a new_utsname object with these members: sysname, nodename, release, version, machine, domainname.
13 UTS namespace - contd. A method called utsname() was added to fetch the new_utsname object of the UTS namespace associated with the current process:
static inline struct new_utsname *utsname(void)
{
	return &current->nsproxy->uts_ns->name;
}
The new implementation of gethostname():
SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
{
	struct new_utsname *u;
	...
	u = utsname();
	if (copy_to_user(name, u->nodename, i))
		errno = -EFAULT;
	...
}
A similar approach was taken in the uname() and sethostname() system calls. Question: what is missing from the UTS implementation shown above?
14 UTS namespace - contd. Answer: the UTS procfs entries, like /proc/sys/kernel/hostname and /proc/sys/kernel/domainname. With the IPC namespace the principle is the same; it is simply a bigger namespace.
15 Network Namespaces A network namespace is logically another copy of the network stack, with its own routes, firewall rules, and network interfaces. A network namespace is represented by struct net (defined in include/net/net_namespace.h). struct net includes all the network stack ingredients, such as: the loopback device; SNMP stats (netns_mib); all network tables: routing, neighboring, etc.; all sockets; /proc and /sys entries.
16 Network Namespaces implementation A network interface belongs to exactly one network namespace. This addition was made to the net_device structure: struct net *nd_net - the network namespace this network device is inside. A method was added, dev_net(const struct net_device *dev), to access the nd_net namespace associated with the specified network device. A socket also belongs to exactly one network namespace: sk_net (also a pointer to struct net) was added to struct sock, for the network namespace the socket is inside, along with the sock_net() and sock_net_set() methods (get/set the network namespace of a socket).
17 Network Namespaces example Create two namespaces, called "myns1" and "myns2": ip netns add myns1 ip netns add myns2 (implementation: see netns_add() in ipnetns.c in iproute2). Deleting the myns1 network namespace is done by: ip netns del myns1 Notice that after deleting a namespace, all its migratable network devices are moved to the default network namespace. You can monitor addition/removal of network namespaces with: ip netns monitor - this prints one line for each addition/removal event that occurs.
18 You can list the network namespaces (which were added via ip netns add) with: ip netns list - this simply reads the namespaces under /var/run/netns. You can find the pid (or list of pids) in a specified network namespace with: ip netns pids <namespacename> You can find the network namespace of a specified pid with: ip netns identify <pid>
19 You can move the eth0 network interface to the myns1 network namespace with: ip link set eth0 netns myns1 This triggers changing the network namespace of the net_device to myns1; it is handled by the dev_change_net_namespace() method, net/core/dev.c. You can start a shell in the new namespace with: ip netns exec myns1 bash Running ifconfig -a in myns1 will now show eth0 and the loopback device.
20 You can move a network device back to the default, initial namespace with: ip link set eth0 netns 1 In a network namespace created by ip netns add, network applications that look for files under /etc will first look in /etc/netns/myns1/, and only then in /etc. For example, if we add a host entry to /etc/netns/myns1/hosts and ping that hostname from inside the namespace, the namespace-local hosts file is the one consulted.
21 PID namespaces A member named pid_ns (a pid_namespace object) was added to the nsproxy object. Processes in different PID namespaces can have the same process ID. When creating the first process in a new namespace, its PID is 1, and it behaves like the init process: when a process dies, all its orphaned children are reparented to the process with PID 1 (child reaping), and sending SIGKILL does not kill process 1, regardless of which namespace the signal was sent from (the initial namespace or another PID namespace).
22 PID namespaces - contd. PID namespaces can be nested, up to 32 nesting levels (MAX_PID_NS_LEVEL). See multi_pidns.c by Michael Kerrisk.
23 Cgroups (Control Groups) The cgroups (control groups) subsystem is a resource management solution providing a generic process-grouping framework. The work was started by developers from Google (primarily Paul Menage and Rohit Seth) in 2006 under the name "process containers"; in 2007 it was renamed to Control Groups. Maintainers: Li Zefan (Huawei) and Tejun Heo. The memory controller (memcg), which is probably the most complex, is maintained separately (4 maintainers). Namespaces provide a per-process resource isolation solution; cgroups, on the other hand, provide a resource management solution (handling groups of processes). Systemd is a replacement for SysV init scripts. It uses aggressive parallelization capabilities in order to start services, and is based on DBUS messages. It has been integrated into Fedora since Fedora 15, and Fedora's systemd uses cgroups. Ubuntu does not have systemd yet, but will have it in the future, as it was decided that Debian will use systemd. RHEL 7 will use systemd (RHEL 7 is to be released during 2014; it is based on Fedora 19). systemd-nspawn uses namespaces/cgroups to create containers (a tool for testing/debugging systemd components that should run inside a container; no config files needed, easy to use; not meant to compete with LXC, libvirt-lxc, or Docker).
24 Cgroups implementation The implementation of cgroups requires a few simple hooks into the rest of the kernel, none in performance-critical paths: in the boot phase (init/main.c), to perform various initializations; in the process creation and destruction methods, fork() and exit(); a new file system of type "cgroup" (VFS); additions to the process descriptor (struct task_struct); new procfs entries: per process, /proc/<pid>/cgroup, and system-wide, /proc/cgroups.
25 cgroups and kernel namespaces Note that cgroups is not dependent upon namespaces: you can build cgroups without namespace kernel support, and vice versa. There was an attempt in the past to add an "ns" subsystem (ns_cgroup, the namespace cgroup subsystem); with it, you could mount a namespace subsystem by: mount -t cgroup -o ns This code was removed in 2011 (by a patch from Daniel Lezcano), in favor of using user namespaces instead.
26 cgroups VFS Cgroups uses a virtual file system: all entries created in it are not persistent and are deleted after reboot. All cgroups actions are performed via filesystem actions (creating/removing directories, reading/writing files in them, mounting/mount options). Usually, cgroups are mounted on /sys/fs/cgroup (for example, by systemd). This is not mandatory; cgroups can be mounted on any path in the filesystem.
27 In order to use a cgroups filesystem (browse it, attach tasks to cgroups, etc.) it must be mounted. The control group filesystem can be mounted anywhere on the filesystem; systemd uses /sys/fs/cgroup. When mounting, we can specify with mount options (-o) which subsystems we want to use. There are 11 cgroup subsystems (controllers) as of April 2013; two can be built as modules. (All subsystems are instances of the cgroup_subsys struct.)
cpuset_subsys - defined in kernel/cpuset.c
freezer_subsys - defined in kernel/cgroup_freezer.c
mem_cgroup_subsys - defined in mm/memcontrol.c; aka memcg (memory control groups)
blkio_subsys - defined in block/blk-cgroup.c
net_cls_subsys - defined in net/sched/cls_cgroup.c (can be built as a kernel module)
net_prio_subsys - defined in net/core/netprio_cgroup.c (can be built as a kernel module)
devices_subsys - defined in security/device_cgroup.c
perf_subsys (perf_event) - defined in kernel/events/core.c
hugetlb_subsys - defined in mm/hugetlb_cgroup.c
cpu_cgroup_subsys - defined in kernel/sched/core.c
cpuacct_subsys - defined in kernel/sched/core.c
28 devices cgroup controller Also referred to as devcg (devices control group). The devices cgroup enforces restrictions on open and mknod operations on device files. It has 3 control files: devices.allow (can be considered a devices whitelist), devices.deny (a devices blacklist), and devices.list (the available devices). Each entry has 4 fields: type - a (all), c (char device), or b (block device), where "all" means all types of devices and all major and minor numbers; major number; minor number; access - a combination of 'r' (read), 'w' (write) and 'm' (mknod).
29 devices - example /dev/null has major number 1 and minor number 3 (you can fetch major/minor numbers from Documentation/devices.txt).
mkdir /sys/fs/cgroup/devices/0
By default, a new group has full permissions:
cat /sys/fs/cgroup/devices/0/devices.list
a *:* rwm
echo a *:* rwm > /sys/fs/cgroup/devices/0/devices.deny
This denies rwm access to all devices.
echo $$ > /sys/fs/cgroup/devices/0/tasks
echo "test" > /dev/null
bash: /dev/null: Operation not permitted
echo a *:* rwm > /sys/fs/cgroup/devices/0/devices.allow
This adds the 'a *:* rwm' entry back to the whitelist.
echo "test" > /dev/null
Now there is no error.
30 What will be the future cloud infrastructure? Two types of virtualization: running VMs (Xen, KVM), i.e. another OS instance via hardware virtualization/paravirtualization solutions; and lightweight, process-level virtualization (containers), aka Virtual Environment (VE) and Virtual Private Server (VPS). Will VMs disappear from cloud infrastructure? Probably not: with VMs you can run different kernels on different guests, while with containers you must use the same kernel; VMs may run non-Linux OSs such as Microsoft Windows, and you cannot run Windows in a container; and security (OpenVZ underwent a security audit, performed by Solar Designer in 2005). Container advantages: nesting is supported out of the box, though it was not tested thoroughly; start-up and shut-down time - with process-level virtualization, starting and shutting down a container is faster than with VMs like Xen/KVM (test: starting 4 Fedora containers as daemons took less than half a second on a laptop); density - you can deploy more containers on a host than VMs (see the density slides that follow). Take into account that twice the number of containers on a host leads to a higher overall profit. Still, "the hypervisor is here to stay".
31 LXCBENCH: based on the Phoronix Test Suite (PTS); a GENIVI Alliance organization project. You can have more containers on a host than KVM/Xen VMs. See: Lightweight Virtualization with Linux Containers (LXC), Jérôme Petazzoni, The 5th China Cloud Computing Conference, June 2013; and Containers and The Cloud: Do you need another virtual environment?, a lecture given at the Enterprise End User Summit, May 2013, by James Bottomley, CTO, Server Virtualization, Parallels. Thanks to James Bottomley from Parallels for giving explicit permission to show the following chart from his lecture slides. The benchmarks were performed on a Xeon server.
32 Chart of Densities (chart omitted from transcription)
33 Containerization is the new virtualization Containers are in use by many PaaS (Platform as a Service) companies, to mention a few: dotCloud (which later changed its name to Docker), Parallels, Heroku, Pantheon, OpenShift of Red Hat, and more.
34 What is a container? There is no spec specifying what a container should implement. There are several Linux-based container projects. The Google containers project (still in beta) is based on cgroups; adding namespaces usage is being considered for the future. VServer: there are no plans to upstream it; some of its interfaces were defined more than ten years ago.
35 The OpenVZ project OpenVZ (based on a modified kernel). Origins: a company called SWsoft released Virtuozzo, a proprietary virtualization product; in 2005 OpenVZ (Open Virtuozzo) was released (Linux only); in 2008 SWsoft merged into Parallels. OpenVZ uses cgroups (originally it used a different resource management mechanism called bean counters; using ulimit is not enough). OpenVZ is sponsored by Parallels, a hosting and cloud services company. vzctl is the userspace tool. OpenVZ is in production use. Some of its features, like vswap and ploop, are not in mainline yet.
36 The OpenVZ project OpenVZ (with the modified kernel) uses 5 namespaces out of the 6; the user namespace is not yet used. Cgroups are not used yet either; there is an effort to move to cgroups in the upcoming OpenVZ kernel (which will be RHEL7-based). The following plot was contributed by Kir Kolyshkin; it was plotted with gnuplot, using a script from the OpenVZ wiki. For comparison, kernel 3.14 has 1233 patches from Intel.
37 OpenVZ team kernel contribution stats (chart omitted from transcription)
38 LXC containers Background: a French company called Meiosys, which developed an HPC product called MetaCluster with checkpoint/restore, was bought by IBM. LXC was originally based on a kernel patch which was not merged, so it was rewritten from scratch in IBM's LTC (their CKRM resource management solution was rejected). An open source project with over 60 contributors; maintainers: Serge Hallyn (Ubuntu) and Stéphane Graber (Ubuntu). Linux-only, but runs on many platforms. Why now? User namespaces were integrated into mainline in the 3.8 kernel (February 2013), and CRIU 1.0 was released only recently. In the past, several projects refrained from using LXC since it was immature (for example, Nova of OpenStack: Using OpenVZ to build a PaaS with OpenStack's Nova, Daniel Salinas, Rackspace, LPC); this might change now. libvirt-lxc shares not a single line of code with LXC; it uses the virsh command-line tool and is maintained by Daniel P. Berrangé, Red Hat.
39 LXC containers - contd. There is no data structure in the kernel representing a container: a container is a userspace construct (the lxc_container struct). There are several patches in the kernel to support checkpoint/restore of processes in general (and these processes can also be containers). A container is a Linux userspace process, created by the clone() system call. A new container is always created with these two namespaces: the PID namespace (the CLONE_NEWPID flag is set) and the MNT namespace (the CLONE_NEWNS flag is set). If lxc.network.type = none in the container config file, then CLONE_NEWNET is not set in the flags passed to clone(); in that case the container shares the network namespace of the host, and sees the same network interfaces as the host that created it.
40 Templates There are templates for creating a container (under /usr/share/lxc/templates in Fedora). These templates are shell scripts. LXC 1.0 has 14 templates; LXC 0.9 had 11. The simplest container is lxc-busybox. Fedora templates: Michael H. Warfield, Stéphane Graber, Serge Hallyn, and others. Oracle template: Dwight Engen and others.
41 Templates - glitches and quirks With busybox containers, you must use -k (send SIGKILL) in order to terminate the container, or use busybox from git, where a patch to handle SIGPWR was applied. With Fedora, lxc-stop takes a long time; adding -k makes it quicker (but terminates ungracefully). In the past, creating Fedora containers prior to Fedora 15 (with the -r flag) had some issues. The names of the bridge interfaces differ between Fedora (virbr0) and Ubuntu (lxcbr0). Setting lxc.stopsignal to SIGKILL (or some other signal) in a Fedora container config file does not work.
42 Creating and Destroying a Container Example of creating a container: lxc-create -t fedora -n myfedoract The container will be created under /var/lib/lxc/myfedoract. (This is true when using a distro package, as with yum install lxc or apt-get install lxc; when building from source with prefix=/usr, the container will also be created in that path.) You can also create containers of older Fedora releases with the -r flag; for example, create a Fedora 18 container with: lxc-create -t fedora -n fedora18 -- -r 18
43 Creating and Destroying a Container - contd. Creating an Ubuntu release: lxc-create -t ubuntu -n ubuntuct -- -r raring Creating a busybox container: lxc-create -t busybox -n mybusyboxct Example of removing a container: lxc-destroy -n myfedoract /var/lib/lxc/myfedoract will be removed.
44 Example: host and containers (diagram omitted from transcription)
45 Unix socket commands There are 7 commands which can be sent over Unix domain sockets from the host to a container (but not vice versa): LXC_CMD_CONSOLE (console), LXC_CMD_STOP (stop), LXC_CMD_GET_STATE (get_state), LXC_CMD_GET_INIT_PID (get_init_pid), LXC_CMD_GET_CLONE_FLAGS (get_clone_flags), LXC_CMD_GET_CGROUP (get_cgroup), LXC_CMD_GET_CONFIG_ITEM (get_config_item).
46 LXC management Running a container named busyboxct: lxc-start -n busyboxct This in fact creates, with clone(), a new process (with at least a new PID and MNT namespace), and subsequently calls execvp() with /sbin/init of the busybox container. A short Python 3 script to start a container named fedoranew:
#!/usr/bin/python3
import lxc
container = lxc.Container("fedoranew")
container.start()
Stopping a container named busyboxct: lxc-stop -n busyboxct This sends a SIGPWR signal. You can also perform a non-graceful shutdown with SIGKILL: lxc-stop -n busyboxct -k
47 lxc-start -n <containername> -l INFO -o /dev/stdout shows log messages with log priority INFO. lxc-start is for system containers; lxc-execute is for application containers. Monitoring a container: lxc-monitor -n busyboxct 'busyboxct' changed state to [STOPPING] 'busyboxct' changed state to [STOPPED] lxc-ls --active shows all running containers.
48 lxc-clone for copy-on-write clones; lxc-snapshot for snapshots.
49 Freezing a container lxc-freeze -n <containername> writes FROZEN to /sys/fs/cgroup/freezer/lxc/<containername>/freezer.state. lxc-unfreeze -n <containername> writes THAWED to the same file.
50 Displaying info about a container is done with lxc-info -n bbct: Name: bbct State: RUNNING PID: 415 IP: The PID is the process ID as seen from the host.
51 LXC releases LXC 1.0 was released in February 2014, about 10 months after LXC 0.9 (April 2013). LXC 1.0 is included in Ubuntu 14.04 (Trusty Tahr) LTS (Long Term Support); Ubuntu LTS comes with 5 years of security and bug fix updates (end of life in April 2019).
52 LXC Templates These are the 14 shell templates available in LXC 1.0 (there were 11 in LXC 0.9.0):
lxc-busybox (only 376 lines)
lxc-centos
lxc-cirros
lxc-debian
lxc-download
lxc-fedora (1400 lines)
lxc-gentoo
lxc-openmandriva
lxc-opensuse
lxc-oracle
lxc-plamo
lxc-sshd
lxc-ubuntu
lxc-ubuntu-cloud
53 Containers configuration If lxc.utsname is not specified in the configuration file, the hostname will be the same as that of the host that created the container. When starting the container with the following entries:
lxc.network.type = phys
lxc.network.link = em1
lxc.network.name = eth3
the em1 network interface of the host is moved to the new container and renamed eth3; you will no longer see it on the host. After stopping the container, it returns to the host.
54 Setting Container Memory by cgroups Setting cgroups entries can be done from the host in one of three ways. For example, to set the container memory:
1) In the container configuration file: lxc.cgroup.memory.limit_in_bytes = 512M Then: cat /sys/fs/cgroup/memory/lxc/fedoract/memory.limit_in_bytes (instead of the default, which is effectively unlimited)
2) lxc-cgroup -n fedoract memory.limit_in_bytes 512M
3) echo 512M > /sys/fs/cgroup/memory/lxc/fedoract/memory.limit_in_bytes
55 lxc-cgroup -n fedora devices.list displays the devices allowed in a container called fedora:
c *:* m
b *:* m
c 1:3 rwm
c 1:5 rwm
c 1:7 rwm
c 5:0 rwm
c 1:8 rwm
c 1:9 rwm
c 136:* rwm
c 5:2 rwm
56 Disabling the out-of-memory killer You can disable the out-of-memory killer with memcg: echo 1 > /sys/fs/cgroup/memory/0/memory.oom_control This disables the OOM killer: cat /sys/fs/cgroup/memory/0/memory.oom_control oom_kill_disable 1 under_oom 0
57 Security Anatomy of a user namespaces vulnerability (March 2013), a CVE: it occurred when calling clone() with both CLONE_FS and CLONE_NEWUSER; the fix was to disallow that flag combination. RHEL 6 and 5 disabled user namespaces (CONFIG_USER_NS). In RHEL 7, user namespaces are enabled in the kernel (for ABI compatibility) but disabled in userspace. In Fedora 20 they are disabled, and will probably be enabled in Fedora 21 (Daniel J. Walsh, SELinux expert, Red Hat). From the Linux Security Summit (LSS), 2012: libvirt-sandbox (libvirt-lxc is too complex). "LXC is not yet secure. If I want real security I will use KVM." - Daniel P. Berrangé, September 2011.
58 Security - contd. seccomp (secure computing mode): the seccomp kernel layer was developed by Andrea Arcangeli. By setting lxc.seccomp to the path of a seccomp policy file, you can define a seccomp policy (i.e., specify the numbers of the system calls which are allowed). With seccomp2, system calls are specified by name. Capabilities: you can disable a capability in a container by setting lxc.cap.drop in the container config. For example, lxc.cap.drop = sys_module disables CAP_SYS_MODULE, so insmod from within the container will fail. See man 7 capabilities.
59 Security - contd. Config entries: by default, /proc is mounted read-write, so the following sequence will reboot the host:
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
Setting lxc.mount.auto = proc:mixed in the config file will mount /proc read-write, but remount /proc/sys and /proc/sysrq-trigger read-only. You can also use lxc.mount.auto = sys:ro to mount /sys read-only; there are other options. Unprivileged containers are containers created by a non-root user. They require kernel 3.13 or higher, with user namespaces enabled in the kernel. A lot of work was done in order to support unprivileged containers (see /etc/subuid and /etc/subgid).
60 Unprivileged containers - links: Introduction to unprivileged containers, post 7 of 10 in the LXC 1.0 blog post series by Stéphane Graber; Semi-Unprivileged Linux Containers (LXC) on Debian 7 Stable, by Assaf Gordon.
61 Cgmanager The cgroup manager is a daemon, developed by Serge Hallyn, which is started early and opens a UNIX domain socket on /sys/fs/cgroup/cgmanager/sock. In order to create cgroups, DBUS messages (DBUS method calls) are sent over this socket. Will systemd use the cgmanager?
62 Cgmanager Diagram
63 Cgmanager Example Example (based on lxc/src/tests/lxc-test-unpriv): cat /proc/cgroups lists the available subsystems (columns: subsys_name, hierarchy, num_cgroups, enabled): cpuset, cpu, cpuacct, memory, devices, freezer, net_cls, blkio, perf_event, hugetlb.
64 Create cgroups called test with DBUS messages - example:
#!/bin/sh
TUSER="test"
if [ -e /sys/fs/cgroup/cgmanager/sock ]; then
    for d in $(grep -v '^#' /proc/cgroups | awk '{print $1}'); do
        dbus-send \
            --address=unix:path=/sys/fs/cgroup/cgmanager/sock \
            --type=method_call /org/linuxcontainers/cgmanager \
            org.linuxcontainers.cgmanager0_0.create \
            string:$d string:$TUSER
    done
fi
65 Docker Docker is a Linux container engine: an open source project first released in March 2013 by dotCloud (a company which later changed its name to Docker Inc). It was written first in Python, and later in a programming language called Go (initially developed by Google).
66 Docker 0.8 was released in February 2014 (it includes Mac OS X support). Docker 0.9 was released in March 2014, with a new default driver: libcontainer. By default it does not use LXC at all to create containers, but uses cgroups/namespaces directly. It does not currently support user namespaces. In order to switch to working with LXC, you should run: docker -d -e lxc The following diagram is from the Docker website (thanks to Jerome Petazzoni from Docker Inc. for giving explicit permission to show it).
67 (diagram omitted from transcription)
68 Docker 1.0 Docker 1.0 is intended for Q2 of 2014, and is still probably not production-ready. Anyhow, according to Solomon Hykes (Docker CTO), non-privileged containers (using user namespaces) are not a prerequisite for Docker 1.0.
69 Docker and Red Hat collaboration In September 2013, Red Hat announced a collaboration with dotCloud. At the time there was a Docker package for Ubuntu, based on LXC.
70 Docker and Red Hat collaboration Initially, the plan was that Red Hat would add code to Docker for using the XML-based libvirt-lxc, which Red Hat also uses with Xen, KVM and other virtualization solutions. Dan Walsh recently (Red Hat Czech conference, 2014) suggested using systemd-nspawn containers instead of libvirt-lxc / LXC: systemd-nspawn is relatively short (3170 lines), and SELinux support was added to it by him.
71 Docker - examples On Fedora 20: yum install docker-io Start the docker daemon: systemctl start docker.service docker run -i -t ubuntu /bin/bash This runs a root Ubuntu container, which was prepared beforehand by the Docker team. You cannot modify the root container images in the Docker repository.
72 Docker examples (contd.) docker stop <container ID> stops a container (you can get the container ID with docker ps). Run docker help to get the list of available commands. Docker has some similarities to git, but as opposed to git, which is text-based, Docker deals with binaries, and there are no rebase or merge operations. For example, if you run from inside a container:
touch /x.txt
touch /y.txt
yum install net-tools
and then exit the container and enter again, you will not see these changes (as opposed to what happens when you work with containers via the lxc-start tool). docker diff <container ID> will show the changes you made (here b0814f648e7b is the container ID, obtained with docker ps).
73 First trial docker diff shows: 'A' -> Add, 'D' -> Delete, 'C' -> Change.
docker commit b0814f648e7b fedora
docker push fedora
2014/01/25 19:31:04 Impossible to push a "root" repository. Please rename your repository in <user>/<repo> (ex: <user>/fedora)
74 docker commit b0814f648e7b rruser/fedora You will need to enter credentials: Please login prior to push: Username: As in git, you can add a message with -m. You can create an account (free registration) on the Docker website; you use the username/password you get for pushing the image to the Docker repository. You can also install a private Docker registry server.
75 Docker public registry (screenshot omitted from transcription)
76 Dockerfile A Dockerfile is composed of a sequence of commands; it has a simple syntax for automating the building of images. Dockerfile entries start with an uppercase command. The first line in a Dockerfile must start with FROM, specifying a base image, like Fedora or Ubuntu. Example of a simple Dockerfile:
FROM fedora
MAINTAINER Rami Rosen
RUN yum install -y net-tools
We can use the docker build command to create images according to a specified Dockerfile: docker build -t rruser/fedora .
77 Docker and LXC Docker before 0.9 used LXC to create and manage containers (though not the LXC Go bindings). Docker 0.9 (March 2014) by default does not use LXC at all to create containers, but uses cgroups/namespaces directly via libcontainer, without using any of the LXC userspace commands. To switch to working with LXC, run the docker daemon thus: docker -d -e lxc Using libcontainer removes the burden of supporting many LXC versions.
78 From the Docker documentation: "Please note Docker is currently under heavy development. It should not be used in production (yet)."
79 CRIU Checkpoint/Restore for Linux in userspace (Pavel Emelyanov is the team leader); an OpenVZ project, licensed under GPLv2. Why do we need checkpoint/restore? Maintenance: installing a new kernel, hardware additions/fixes or hardware maintenance; load balancing; recovery from disaster by snapshots. In 2010 the kernel community rejected Oren Laadan's checkpoint/restore kernel patches. There was in fact a joint effort of OpenVZ (who had their own kernel implementation), IBM, Oren Laadan, and a couple of others; after it was not accepted, as it was too complicated (v19 of the patch set was a 27,000-line diff), Pavel Emelyanov started to push the hybrid approach: C/R mostly in userspace, with much less code in the kernel. Following this rejection, the OpenVZ team decided to stop work on checkpoint/restore in the kernel, and to open the CRIU project, for checkpoint/restore in userspace.
80 CRIU contd. More than 150 kernel patches (some of them also used outside of CRIU, for example for UNIX sockets and the ss tool), usually wrapped in #ifdef CONFIG_CHECKPOINT_RESTORE blocks. See: C/R tool, July 2012: the first release of the checkpoint-restore tool. C/R tool 0.2, September 2012: support for dumping and restoring a simple LXC container. C/R tool 1.0, November. C/R tool 1.1, January. C/R tool 1.2, February. p.haul (Process haul) git branch: migration of containers, and also of arbitrary processes. (U-Haul is a moving-equipment company in the USA...) 78
81 CRIU Example Checkpointing is done by: criu dump -t $PID -D images -D specifies the directory in which to put the generated image files. Restoring is done by: criu restore -D images $PID CRIU is intended to support checkpoint/restore of: OpenVZ containers, LXC containers, Docker containers, and more in the future. 79
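Putting the two commands together, an illustrative session (a sketch: it assumes criu is installed and run as root, and the sleep process and images directory name are just examples; with recent criu, restore re-reads the original pid from the image files, so some invocations omit the pid argument):

```
# Verify the running kernel has the features criu needs:
$ criu check

# Start a throwaway process to checkpoint:
$ sleep 1000 &
$ PID=$!

# Dump: kills the task and writes its image files into ./images.
# --shell-job is needed because the task shares the shell's terminal/session.
$ mkdir images
$ criu dump -t $PID -D images --shell-job

# Restore: recreates the process from the image files.
$ criu restore -D images --shell-job
```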
82 CRIU plugins Integrated into CRIU 1.1. Intended for external dependencies: external Unix sockets (test/unix-callback/), external bind mounts (test/mounts/ext/), external files (test/ext-links/), and more. See: CRIU currently doesn't support the cgroup, hugetlbfs, fusectl, configfs, mqueue, and debugfs file systems in containers. 80
83 Summary We have discussed the various ingredients of Linux-based containers: namespaces enable secure/functional separation; cgroups allow resource usage control; LXC automates their use with config files and userspace commands; the Docker engine simplifies creating and deploying containers into the cloud; CRIU adds the ability to save/restore processes, and containers in particular. 81
84 Tips How can I know that I am inside a container? Look at cat /proc/1/environ With LXC containers, this will show container=lxc If it is a Fedora container, systemd-detect-virt will show: lxc. 82
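A small sketch of the /proc/1/environ check as a script (a heuristic under assumptions: it relies on the container manager having set a container= variable in init's environment, and reading /proc/1/environ may require root):

```shell
#!/bin/sh
# /proc/1/environ is NUL-separated; turn it into lines and look for "container=".
# Errors are discarded: on the host, or without privileges, the read may fail.
marker=$(tr '\0' '\n' < /proc/1/environ 2>/dev/null | grep '^container=' || true)
if [ -n "$marker" ]; then
    echo "Inside a container: $marker"    # e.g. container=lxc under LXC
else
    echo "No container marker found (not a container, or environ unreadable)"
fi
```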
85 Thank you! 83