Ramnode OpenVZ machine limitations – KVM vs OpenVZ – venet vs veth – and simfs

INTRO

OpenVZ and KVM can both make VPSes (Virtual Private Servers). So what's a VPS? It's just a virtual machine: your own OS that you can access, but it sits on a computer that already exists. So one computer can have multiple operating systems: the main one, called the host OS, and then guest OSes, which are the VPS operating systems. The host OS is not a VPS; it's the guest OSes that become VPSes. A guest really becomes a VPS once it can be accessed somehow, most likely over the network via an IP. This is done either with a specialized kernel on the host OS, or a specialized kernel module (like a driver) on the host OS. OpenVZ uses a patched host kernel that all the containers share (conceptually similar to LXC, which uses the mainline kernel's namespaces and cgroups).

Great info: http://www.linux.org/threads/server-types-vps-vs-kvm.4776/

This web server, Ram.infotinks.com (this very site), runs on an OpenVZ container. Well, at least it did as of 5-1-2014, and I have no plans to move it. The OpenVZ container is managed by ramnode.com, hence the name RAM.infotinks.com.

Great service, ramnode.com: fast machines, and the disks feel as fast as RAM (jk), but really they are quick.

So ramnode sells VPSes (virtual private servers) either as KVM (kind of like regular VMs, and closer to the concept of VMware/XEN/HyperV than an OpenVZ container) or as OpenVZ containers (where the different VPSes sit inside the same host OS kernel, share that kernel, and live in "containers"; think "jails" or "chroots" if you're familiar with those). OpenVZ is faster than KVM because it has fewer layers to work through: an OpenVZ VPS shares the host kernel, so there are fewer layers between the VPS and the hardware, whereas KVM needs extra layers. But with KVM you get more control of your resources, and your resources aren't shared (well, they can be) but are more dedicated to the VPS. In OpenVZ everyone shares everything, although there is a mechanism for restricting each container to its own slice of the resources.

So which is better? For more control: KVM. For speed/performance: OpenVZ.
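If you are not sure which flavor you landed on, there are a couple of quick checks you can run from inside the guest. A minimal sketch (not ramnode-specific; systemd-detect-virt only exists on systemd distros, and exact outputs vary):

# OpenVZ containers expose this file; a KVM guest will not have it
ls /proc/user_beancounters

# on systemd distros this prints the virtualization type,
# e.g. "openvz" for a container or "kvm" for a KVM guest
systemd-detect-virt

# inside an OpenVZ container this shows the host's shared kernel
# (often a *stab* vzkernel); on KVM you run, and can change, your own kernel
uname -r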

Some limitations of an OpenVZ container:

* Linux only, since it shares the Linux kernel. So you can't run Windows as a VPS. Either way it's fine; I only use a VPS for servers, and Linux is my favorite server OS.

* Can't increase your swap space. OpenVZ containers are set up so that you can't increase swap; you're given a swap allowance and you're stuck with it. In my case: 512 MB RAM and 512 MB swap. Why? Because swap is heavy on reads and writes, and the host doesn't want one container bogging down the drive that the other guest VPSes use. Swap itself is only set up on the whole system (the host OS), not per container (VPS/guest OS); what each container sees is just a limit the host hands to it. You can read about it in many places (see also the quick check after these links):

1) http://forums.vpslink.com/linux/621-swap-space.html#post3915

2) http://lowendbox.com/blog/how-to-tell-your-openvz-vps-is-swapping/

3) http://www.maxwhale.com/how-to-read-cat-procuser_beancounters-result/
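A quick way to see your memory/swap limits and whether the container is hitting them, from inside the guest. A minimal sketch, assuming an OpenVZ guest that exposes /proc/user_beancounters (mine does); the exact counter names depend on the host kernel, vSwap kernels use physpages/swappages:

# memory and swap as the container sees them
free -m

# raw OpenVZ resource counters; a non-zero failcnt column means
# you have slammed into that limit at some point
cat /proc/user_beancounters

# just the memory/swap related rows (on a vSwap kernel)
grep -E 'physpages|swappages' /proc/user_beancounters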

* Also no Ethernet/MAC address; I only get an IP. The Ethernet/MAC operations are handled by the host machine (which I have no access to). I just get a sort of "subset" of that MAC address with my own IP address, so certain tools/programs won't work. My NICs are named "venet0", and that's how you know you will not have a MAC address. You can see below that my HWaddr is a string of zeros instead of a MAC. There is also another type of interface that OpenVZ supports, called "veth", but ramnode only gave me "venet". MORE NOTES BELOW LISTING MORE LIMITATIONS OF VENET AND VETH (the only 2 interface choices in OpenVZ)

* I don't really get filesystem information or devices; they are completely virtual and handled by the host OS that I have no access to. You can see my FS is called simfs and it claims to live at /dev/simfs, but there is no actual file called /dev/simfs (it lives on the host OS, I bet). MORE NOTES BELOW LISTING MORE LIMITATIONS (simfs is the only type of FS on OpenVZ)

OUTPUT:

root@ram:~# df -T -P -h
Filesystem Type Size Used Avail Use% Mounted on
/dev/simfs simfs 20G 7.4G 13G 37% /
tmpfs tmpfs 52M 56K 52M 1% /run
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 205M 0 205M 0% /run/shm

root@ram:~# free -tm
total used free shared buffers cached
Mem: 512 473 38 0 0 81
-/+ buffers/cache: 392 119
Swap: 512 138 373
Total: 1024 611 412

root@ram:~# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:542957 errors:0 dropped:0 overruns:0 frame:0
TX packets:542957 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1523307767 (1.4 GiB) TX bytes:1523307767 (1.4 GiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: 2604:180:1::2629:8351/128 Scope:Global
inet6 addr: 2604:180:1::29c9:d1a9/128 Scope:Global
inet6 addr: 2604:180:1::b62e:a3e7/128 Scope:Global
inet6 addr: 2604:180:1::8c17:278d/128 Scope:Global
inet6 addr: 2604:180:1::ea64:53ce/128 Scope:Global
inet6 addr: 2604:180:1::6215:e57e/128 Scope:Global
inet6 addr: 2604:180:1::48b3:95e/128 Scope:Global
inet6 addr: 2604:180:1::b31b:74db/128 Scope:Global
inet6 addr: 2604:180:1::2172:b306/128 Scope:Global
inet6 addr: 2604:180:1::89b2:e9ca/128 Scope:Global
inet6 addr: 2604:180:1::caf5:d9ec/128 Scope:Global
inet6 addr: 2604:180:1::928:521e/128 Scope:Global
inet6 addr: 2604:180:1::a7b3:d82f/128 Scope:Global
inet6 addr: 2604:180:1::f7de:b3d3/128 Scope:Global
inet6 addr: 2604:180:1::cd03:ef5b/128 Scope:Global
inet6 addr: 2604:180:1::a6f0:10e1/128 Scope:Global
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:2034030 errors:0 dropped:0 overruns:0 frame:0
TX packets:4371493 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:190657468 (181.8 MiB) TX bytes:5724243947 (5.3 GiB)

venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.249.61.185 P-t-P:192.249.61.185 Bcast:192.249.61.185 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

 OPENVZ VENET (ramnode uses venet) vs VETH:

– maybe not all ramnode machines use venet, but I assume that's their formula (so probably all ramnode containers use venet)

Below (after my own explanation) is a copy-paste from the OpenVZ wiki: http://openvz.org/Differences_between_venet_and_veth

My explanation of the difference between venet and veth:

Venet is faster and doesn't add an Ethernet Layer 2 of its own (it's faster because there are fewer layers). It keeps the Layer 2 information of the host OS and just adds another Layer 3 (your own IP). Veth, on the other hand, adds another Layer 2 (your own MAC address) and another Layer 3 (your own IP). So notice that with venet you don't get your own MAC address. The benefit of veth is that you get a MAC address at the cost of some speed, and you gain everything that comes with having your own MAC address: you can send out your own broadcasts, listen to traffic promiscuously (promiscuous mode), and be part of a bridge like a switch. However, you have less security with veth (your own MAC address), because that's just another way someone can listen to your traffic and reach you, whereas venet is just an IP that can easily be firewalled off. (A quick way to tell which one you have is shown right below.)
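A quick check from inside the container to tell which one you were given. A minimal sketch; venet0 and eth0 are just the usual default names, yours may differ:

# a venet device typically shows no real MAC ("link/void" in ip output,
# an all-zero HWaddr in ifconfig)
ip -o link show venet0

# a veth interface inside the CT usually appears as a normal eth0 with a
# real MAC ("link/ether")
ip -o link show eth0

# or just list any interface that actually has a MAC address
ip -o link | grep link/ether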

NOTE TO SELF: fewer layers = faster. More layers = more complex = slower. Hence why, in my experience, iSCSI can be slower than NFS: iSCSI has the whole block-device emulation layer and all that mumbo jumbo, whereas NFS is just simple NFS.

Differences between venet and veth

OpenVZ provides veth (Virtual ETHernet) or venet (Virtual NETwork) devices (or both) for in-CT networking. Here we describe the differences between those devices.

  • veth allows broadcasts in the CT, so you can even use a DHCP server inside a CT, or a Samba server with domain broadcasts, or other such stuff.
  • veth has some security implications. It is normally bridged directly to the host's physical ethernet device, and so must be treated with the same considerations as a real ethernet device on a standalone host. The CT users can access a veth device as they would a real ethernet interface; however, the CT root user is the only one with privileged access to the veth device.
  • With a venet device, only the OpenVZ host node administrator can assign an IP to a CT. With a veth device, network settings can be done fully on the CT side by the CT administrator: the CT sets up the correct gateway, IP/netmask, etc., and the node admin only chooses where your traffic goes. (See the host-side sketch after the summary table below.)
  • veth devices can be bridged together and/or with other devices. For example, on the host system the admin can bridge the veth devices from 2 CTs with some VLAN interface eth0.X; in that case, those 2 CTs will be connected to that VLAN.
  • A venet device is a bit faster and more efficient.
  • With veth devices, IPv6 auto-generates an address from the MAC.

The brief summary:

Differences between veth and venet

Feature                  veth      venet
MAC address              Yes       No
Broadcasts inside CT     Yes       No
Traffic sniffing         Yes       No
Network security         Low [1]   High [2]
Can be used in bridges   Yes       No
IPv6 ready               Yes       Yes
Performance              Fast      Fastest

  1.  Independent of host. Each CT must set up its own separate network security.
  2.  Controlled by host.
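To make the "who assigns the IP" difference concrete, here is roughly what the host node admin would run for each type. A hedged sketch of standard vzctl usage: the CT ID 101 is made up, the IP is just reused from my ifconfig output above, and all of this happens on the host node that ramnode customers like me never get to touch:

# venet: the host admin hands the CT an IP; inside the CT it simply shows up on venet0
vzctl set 101 --ipadd 192.249.61.185 --save

# veth: the host admin only creates the virtual ethernet pair (eth0 inside the CT);
# the CT admin then configures IP/netmask/gateway from inside the container
vzctl set 101 --netif_add eth0 --save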

SIMFS – What is it

This is a copy-paste from: http://anonexp.blogspot.com/2013/01/simfs-openvz-container-filesystem.html

My explanation, thanks to AnonExp:

Basically simfs is just a virtual filesystem; in reality it's a folder on the host OS that the guest OS pretends is its / root. It's basically the equivalent of a chroot, except now we give it a name and make it appear in df output so that the user of the VPS knows whether they are using up their quota. Also, since simfs has no device file backing it, there is no such thing as /dev/simfs, so you can't do anything that requires that file: filesystem checks, filesystem repairs, hexedits of the filesystem, or a dd/ddrescue clone of the full filesystem. (Quick check below.)
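You can confirm that from inside the container. A minimal sketch based on my own container; your output will differ:

# df claims the root filesystem device is /dev/simfs ...
df -T /

# ... but no such device node actually exists in /dev
ls -l /dev/simfs    # expect "No such file or directory"

# so anything that needs the real block device (fsck, a dd/ddrescue clone, etc.) is out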

AnonExp's Explanation:

simfs : openvz container filesystem
OpenVZ guests get a filesystem called “simfs” for the root filesystem.

simfs is a proxy filesystem. simfs is not an actual filesystem; it's a map to a directory on the host (by default /vz/private/). This filesystem makes it possible to isolate a particular CT from other CTs.

The /proc/mounts file in the guest VM looks like this


[root@centos32 /]# cat /proc/mounts
/dev/simfs / simfs rw,relatime 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
none /dev devtmpfs rw,relatime,mode=755 0 0
none /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
none /dev/shm tmpfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0


The df command displays the mounted partition as follows


[root@centos32 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 10G 1.2G 8.9G 12% /
none 128M 4.0K 128M 1% /dev
none 128M 0 128M 0% /dev/shm


Can we run fsck on the simfs filesystem?

No. fsck can only be run on filesystems that sit on block devices (such as /dev/sda, for example), and we cannot run fsck on a proxy filesystem such as simfs.
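For context, on the host node (which we as container customers never see) the whole container is really just a directory tree, so the host admin works on the real underlying filesystem, not on simfs. A hedged sketch assuming the default layout and a made-up CT ID of 101:

# on the OpenVZ host node the container's "disk" is just a directory
ls /vz/private/101/    # the container's root tree (bin, etc, home, var, ...)

# fsck, dd, ddrescue and friends are run by the host admin against the host's
# real block device (e.g. /dev/sda1), never against simfs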
