OVM (Oracle VM) for SPARC networking features are easier to use and better than PowerVM's

Of course, these are just my thoughts, and they are open to any comment.
My ideas and reasons are below.
Feel free to contact me at bulent.yucesoy@gmail.com

It may also be helpful to examine these figures (ovm-networking / powervm-networking / rhev-networking).

I   Solaris and OVM have more features

-       OVM has the DLMP (Data Link Multipathing) feature at Layer 2.
You can create a link aggregation even when the member ports are not on the same physical network switch.
It also has no dependency, configuration, or other requirement at the network-switch layer.
No such facility exists in PowerVM. In PowerVM, member ports must be on the same physical switch to create a Link Aggregation (LA). Putting two ports on the same physical network switch is a single point of failure (SPoF) and makes little sense, but it is an LA requirement.
The network admin must also make configurations at the network switch for the LA to work.
So DLMP is painless and, to me, perfect.
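
A minimal sketch of a DLMP aggregation on Solaris 11.1 or later (net0 and net1 are hypothetical datalink names):

    # Create a DLMP aggregation over two datalinks; no LACP or
    # port-channel configuration is needed on the switches, and the
    # links may terminate on different physical switches.
    dladm create-aggr -m dlmp -l net0 -l net1 aggr0

    # Verify the mode and the per-port state
    dladm show-aggr -x aggr0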


-       Solaris has great built-in network tools; see the sketches after this list:

o    QoS settings with the flowadm command

o    a load balancer with the ilbadm command

o    more options when creating bridges (e.g. you can create a bridge with the TRILL protocol on Solaris; you can't do it on AIX)
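
A few hedged sketches of these tools on Solaris 11 (the link names, flow name, server addresses, and VIP are all hypothetical):

    # QoS: cap HTTPS traffic on net0 at 100 Mbps
    flowadm add-flow -l net0 -a transport=tcp,local_port=443 httpsflow
    flowadm set-flowprop -p maxbw=100M httpsflow

    # Integrated Load Balancer: round-robin two web servers behind a VIP
    ilbadm create-servergroup -s server=192.168.1.11,192.168.1.12 websg
    ilbadm create-rule -e -i vip=10.0.0.10,port=80 \
        -m lbalg=rr,type=HALF-NAT -o servergroup=websg webrule

    # Bridging with TRILL instead of STP
    dladm create-bridge -P trill -l net0 -l net1 mybridge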


-       OVM has LINK-STATE tracking for virtual interfaces (linkprop=phys-state).
PowerVM has no link-state tracking for virtual interfaces; it has it only on physical interfaces. So you MUST enter a gateway IP and enable DGD (Dead Gateway Detection).
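
A minimal sketch of both approaches (the domain, device, and gateway values are hypothetical):

    # OVM for SPARC: make the virtual NIC report the physical link state
    ldm set-vnet linkprop=phys-state vnet0 ldom1

    # AIX: fall back to Dead Gateway Detection instead; either enable
    # passive DGD globally ...
    no -p -o passive_dgd=1
    # ... or add the default route with active DGD
    route add default 10.0.0.1 -active_dgd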

-       Solaris has IPMP inside LDOMs, and both virtual interfaces are active by default. (You can set the standby property to on with the set-ifprop command if you want to make an interface passive.)
AIX has NIB (Network Interface Backup) instead. In NIB only one virtual interface can be active; more than one is not allowed.
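
A minimal IPMP sketch inside an LDOM guest (net0 and net1 are hypothetical vnet datalinks):

    # Build an IPMP group with two active interfaces
    ipadm create-ip net0
    ipadm create-ip net1
    ipadm create-ipmp -i net0 -i net1 ipmp0
    ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4

    # Optionally demote net1 to a passive standby
    ipadm set-ifprop -p standby=on -m ip net1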

II     PowerVM's default virtual switch (ethernet0) at the hypervisor level creates extra complexity

-       PowerVM already has a single built-in virtual switch (VSW), called "ethernet0", at the hypervisor level.
Oracle VM (OVM), for example, has no such built-in virtual switch at the hypervisor level;
it has its virtual switches at the service-domain level. (OVM service domain = PowerVM VIO)

A virtual switch at the hypervisor level sometimes brings advantages and sometimes disadvantages, I think:
- Advantage: two VIOs can communicate at the hypervisor level (e.g. the SEA control channel).
- Disadvantage: it may sometimes require a more complex configuration.

When and why complex?

Because of ethernet0 inside the hypervisor, methods to prevent a network loop MUST come into play, and these mandatory methods are what create the complexity:
--- you MUST either use the PVID method (which rules out VLAN tagging), or
--- you MUST use the SEA-failover method (which rules out an active-active VIO configuration; see the sketch below).

You can't use this built-in ethernet0 switch in an active-active VIO configuration with VLAN tagging,
even though such a configuration is a normal and necessary desire.
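
A hedged sketch of the SEA-failover method on a VIOS (all device names, the PVID, and the control channel are hypothetical):

    # On each VIOS: ent0 = physical adapter, ent1 = trunk virtual
    # adapter, ent2 = control-channel adapter; ha_mode=auto enables
    # SEA failover between the two VIOs.
    mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 \
        -attr ha_mode=auto ctl_chan=ent2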

So, when active-active VIOs with VLAN tagging are needed, you MUST avoid ethernet0 and create your own virtual switch per VIO. These multiple virtual switches are still created inside the PowerVM hypervisor (not inside the VIOs), but you must create a separate VSW for each VIO. Once you have separate VSWs per VIO, the design is logically the same as the OVM design of creating VSWs inside service domains. (OVM service domain = PowerVM VIO)
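
A hedged sketch of creating per-VIO virtual switches from the HMC CLI (the managed-system and switch names are hypothetical):

    # One virtual switch per VIO server on managed system "p750"
    chhwres -m p750 -r virtualio --rsubtype vswitch -o a --vswitch vsw_vio1
    chhwres -m p750 -r virtualio --rsubtype vswitch -o a --vswitch vsw_vio2

    # Confirm both switches exist
    lshwres -m p750 -r virtualio --rsubtype vswitch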

You will also need AIX NIB at the LPAR level to manage the LPAR traffic (just as you need Solaris IPMP at the guest level when using OVM).
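
A hedged sketch of a NIB EtherChannel on an AIX LPAR (the adapter names and ping address are hypothetical):

    # ent0 = primary virtual adapter, ent1 = backup adapter; netaddr
    # is the address pinged to detect a dead path
    mkdev -c adapter -s pseudo -t ibm_ech \
        -a adapter_names=ent0 -a backup_adapter=ent1 \
        -a netaddr=10.0.0.1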


You can examine the issue in much more detail in the article below.

Using Virtual Switches in PowerVM to Drive Maximum Value of 10 Gb Ethernet

http://www.wmduszyk.com/wp-content/uploads/2012/01/PowerVM-VirtualSwitches-091010.pdf


III    You may need to specify different, unique PVID values to prevent loops and route traffic

If your networking configuration forces you to enter different, unique PVID values, you end up placing a strange reservation on the network admin: to avoid a duplicate, the admin can no longer create a real VLAN with the VLAN ID you used as a PVID. Separate PVID values may be needed to route traffic more reliably and to prevent loops, and the default virtual switch "ethernet0" is again the biggest actor here. If you never use the default virtual switch "ethernet0" and always create separate virtual switches, as in OVM, you will be better off. You have no issues like this in OVM.
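
A hedged sketch of the kind of PVID reservation this implies, using the HMC CLI (the system, partition, slot, and VLAN IDs are hypothetical):

    # Trunk adapter on VIO1: PVID 4000 is a dummy value that is now
    # "reserved" -- the network admin must never use VLAN 4000 for
    # real traffic anywhere on the fabric.
    chhwres -m p750 -r virtualio --rsubtype eth -o a -p vio1 -s 13 \
        -a 'ieee_virtual_eth=1,port_vlan_id=4000,"addl_vlan_ids=10,20",is_trunk=1,trunk_priority=1'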