work { solaris 10 + VCS 5.0 + solaris 9 branded zone, shared configuration

I had a problem where I needed to run a Solaris 9 image for an application group that required special permissions and a compiler. With budget cuts and nowhere else to put it, I decided to use a Solaris 9 branded zone inside our Solaris 10 VCS cluster, in a shared-zone configuration (i.e. the zone fails over from node to node).

The installation and configuration of the zone worked, up to a point.
Switchover didn't work because a branded zone, after migrating to
another node, has to run a physical-to-virtual (p2v) mapping to be
optimized for that node. I tried various things to no avail, preonline
triggers, custom scripts, but none worked reliably, or they were just too unwieldy to implement, especially on the short deadline I had.

So I decided to modify the VCS Zone Agent.

NOTE: This probably invalidates your support with Symantec, so use at your own risk.

This procedure has to be done on all nodes. The agent is written in Perl.


1. Make a backup of /opt/VRTSvcs/bin/Zone/online
2. Add the following to /opt/VRTSvcs/bin/Zone/online

to the "# Zone specific commands" section, add:

my $SOL9P2V = "/usr/lib/brand/solaris9/s9_p2v";

then change the following line:

$cmd = "$ZONEADM -z $ZoneName boot";

to:

# modified for use with Branded Solaris Zones - 1/29/2009
my $BrandedZone = `$ZONECFG -z $ZoneName info | $GREP brand`;
$cmd = "$ZONEADM -z $ZoneName boot";
if ($BrandedZone =~ m/brand: solaris9/i) {
    $cmd = "$SOL9P2V $ZoneName && $ZONEADM -z $ZoneName boot";
}

All the variables except for "$BrandedZone" and "$SOL9P2V" are defined elsewhere within the agent.

What this little snippet does is check whether the zone has a 'brand' defined. If it does, the boot command is changed to run the p2v mapping command first and then boot the zone; otherwise the agent does what it would normally do (just boot the zone).
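If you want to see what the agent's brand check is matching against, the same test can be reproduced by hand from a shell. This is just a sketch; the zone name and the sample zonecfg output below are hypothetical stand-ins for what a real `zonecfg -z <zone> info` would print on a cluster node:

```shell
# Hypothetical output of `zonecfg -z myzone info`, captured in a
# variable so the check can be demonstrated without a live zone.
INFO='zonename: myzone
zonepath: /zones/myzone
brand: solaris9
autoboot: false'

# Same test the modified agent performs: case-insensitive search for
# a "brand: solaris9" line. Non-branded zones have no such line.
if printf '%s\n' "$INFO" | grep -i 'brand: solaris9' >/dev/null; then
    echo "branded zone: run s9_p2v before booting"
else
    echo "native zone: boot normally"
fi
```

A native zone's info output simply lacks the brand line, so the grep fails and the boot command is left untouched.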

It does lengthen the boot time of the zone a little, and the output of the command will look odd, since s9_p2v also tries to update any patches needed to run on the node (something normally handled at branded-zone creation time).

Running a shared zone is fairly easy to set up:

1. Create the zone on one node.
2. Copy the appropriate .xml files from /etc/zones/ to the other nodes.
3. Add the appropriate line to /etc/zones/index on each node.
4. Make sure your mount points exist on every node.
5. Define the resources in VCS and enjoy.

This setup will let you have one zone that can float between multiple nodes.
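The copy-to-the-other-nodes part can be sketched roughly as below. The zone name, zonepath, and the target hostname are placeholders, and the index-line format shown (zonename:state:zonepath) is the classic /etc/zones/index layout; check the existing entries on your nodes before appending:

```shell
ZONE=myzone              # hypothetical zone name
ZONEPATH=/zones/$ZONE    # hypothetical zonepath on shared storage

# 1. After creating the zone on node A, copy its definition to node B
#    (hostname is a placeholder):
# scp /etc/zones/$ZONE.xml nodeb:/etc/zones/

# 2. On node B, add an index entry so the zone is known there; the
#    classic format is zonename:state:zonepath.
INDEX_LINE="$ZONE:installed:$ZONEPATH"
echo "$INDEX_LINE"
# echo "$INDEX_LINE" >> /etc/zones/index   # on node B, as root

# 3. Make sure the mount point exists on node B:
# mkdir -p $ZONEPATH
```

The actual scp/mkdir lines are commented out since they touch real hosts; the point is the shape of the index entry and which files travel.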

Note that this assumes the networking and storage are set up properly.

One other thing to note: Solaris 9 branded zones cannot use the "inherit-pkg-dir" functionality. This means you can't inherit /opt/VRTSvcs/bin into the zone, so some functionality will be different. However, this did not seem to impact the running or viability of the zone.
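For reference, creating the branded zone configuration looks roughly like the sketch below (zone name, zonepath, NIC, and address are all placeholders). Note the absence of any inherit-pkg-dir lines, since the solaris9 brand doesn't allow them:

```shell
# Hypothetical zonecfg session for a whole-root solaris9 branded zone.
# SUNWsolaris9 is the brand's template; adjust paths/NIC for your site.
zonecfg -z s9zone <<'EOF'
create -t SUNWsolaris9
set zonepath=/zones/s9zone
add net
set address=192.168.1.50
set physical=bge0
end
commit
EOF
```

After this you'd install the zone image with zoneadm as usual for the brand.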
