timsutton/osx-vm-templates

How to integrate with System Image Utility to create NetRestore/NetBoot image

cybertk opened this issue · 6 comments

I'm able to create a working VirtualBox VM with Packer. I'm wondering how I can convert the vmdk to a NetRestore/NetBoot image, so that I can deploy images to real hardware.

Currently I have tried mounting the vmdk with Paragon VMDK Mounter, but creating a NetRestore image from the mounted volume fails with both System Image Utility and an Automator workflow.

I also observed that when the creation process finishes, the vmdk disk is unmounted.

But I can create a NetBoot image successfully from the same mounted volume.

[screenshot: 2015-10-22 00:19:49]

From the following screenshot you can see that Image Source has been reset to Install OS X El Capitan because the vmdk was unmounted.

[screenshot: 2015-10-21 23:37:26]

I wouldn't recommend trying to create hardware images directly from this template. It's much, much more common to use a tool like AutoDMG to build images for block restores to physical Macs.

This works a bit similarly to NetRestore, where the OS installer is actually executed on your build machine but targeted to a new empty disk image. AutoDMG is purpose-built, however, and lets you add additional installer packages and applications to the process, and can incorporate available software updates.
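AutoDMG also has a command-line mode, which makes this scriptable. A rough sketch — the installer path, output path, and extra package are placeholders, and flags may vary by version, so check `AutoDMG --help` on your build machine:

```shell
# Build a never-booted OS image from the installer, layering on an extra
# package (path is a placeholder) and pulling in available Apple updates (-u).
/Applications/AutoDMG.app/Contents/MacOS/AutoDMG \
  build "/Applications/Install OS X El Capitan.app" \
  -o ~/elcapitan-base.dmg -u \
  ~/pkgs/example-tools.pkg
```

The resulting dmg is what you would block-restore to physical Macs.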

Thanks @timsutton. We need to create system images for both hardware and VMs, so I'm looking for a general solution that can handle both consistently, and I think osx-vm-templates is what I need. Do you have any ideas about that?

I think if you want consistency, you need an automatable process, and that's what AutoDMG was designed for: taking an OS X installer and layering on additional apps or packages. You can block-restore that image onto a physical Mac.

People usually go with a minimal approach here and only create a few base configurations, and let it connect to a client management system (Munki, Casper, Puppet, etc.) on boot.

vfuse is a tool that was designed to take the output from AutoDMG and convert it to a bootable VM, and optionally configure it for use with Packer. So you could take the same source you have for physical Macs and VMs, and use Packer to do the additional VM-specific configuration, Vagrant setup, and other bits you need to make the environment distributable internally as a VM.
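As a sketch of that conversion step — the image path and VM name are placeholders, and the flags are from my reading of vfuse's README, so verify against `vfuse -h`:

```shell
# Convert an AutoDMG never-booted image into a bootable VMware Fusion VM.
sudo vfuse -i ~/elcapitan-base.dmg -n elcapitan-vm

# Or drive it from a JSON template for repeatable, source-controlled builds:
sudo vfuse -t ~/templates/elcapitan.json
```

Either way, the physical Macs and the VM are built from the same AutoDMG output.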

It's a great solution anyway.

I still have some questions. As you said, AutoDMG is used to create a minimal base image, and additional apps/tools are then installed (provisioned) after first boot with CM tools. In our situation, the provisioning step is very time-consuming because it depends heavily on the network: almost all of our apps are installed from Homebrew. So I think I should create a base image that includes all the apps/tools we need, instead of downloading them on each client.

I read the AutoDMG documentation; it says AutoDMG mounts an image locally and runs the installer against that mounted disk, and that customized apps/tools have to be carried in hand-made pkg files that are installed during the build.

Packaging all of our apps/tools into pkgs like that would be a huge effort, so I'm wondering whether this is the right way to handle my situation.

It probably isn't worth the effort to "convert" the things you install with brew to install up front in your image, because those things will drift out of date over time. I'm looking at doing this for things like Xcode, which take ages either way, but in a system where we would be re-generating the images automatically anyway.

Homebrew is also really only designed to be run by the "default" admin user on a machine. Do you have a way to automatically install/provision brew/cask and your formulae upon deploying an image to a machine? I've seen people come up with ways to automate Brew installs, but it looks like a moving target. I'm not sure the Ruby install script has any support for a non-interactive mode that you could, for example, run as the default 501-uid user in order to pre-populate things. That user would also need to already have its home directory created, with at least a ~/Library directory, etc.
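For what it's worth, one way people pin down the formula set so it can at least be replayed automatically is a Brewfile consumed by `brew bundle` (the homebrew/bundle tap). A minimal sketch — the formula and cask names here are examples, not your actual list, and this still has to run as the console user:

```shell
# Brewfile (Ruby-ish DSL read by `brew bundle`); entries are examples only.
# tap 'homebrew/bundle'
# brew 'git'
# brew 'node'
# cask 'iterm2'

# Replay the whole set on a freshly deployed machine, as the admin user:
brew bundle --file=/Users/admin/Brewfile
```

That doesn't solve the non-interactive bootstrap of Homebrew itself, but it does make the per-machine formula installs a single reproducible step.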

The "connect to a client management system" workflow I mentioned does add significant per-client deployment time compared to how fast it is to block-restore an image to a machine (even over plain Ethernet; via Thunderbolt target disk mode onto a fast SSD it's measurable in seconds). An image restore plus additional post-config tasks might take 10-15 minutes over Gigabit Ethernet, versus 30 minutes or more when many things are installed after the restore.

But those who maintain software in a client management system are already spending time keeping that software up to date (with tools like AutoPkg), and the benefit is that when the machine is done deploying, everything is current. If you put everything into your image, that image will be out of date tomorrow, and you'll still want a way to keep things up to date. I don't mind that deploying a lab computer with 150 installer packages takes over an hour, because it's automated to the point where I don't have to do anything except click "start" on the deployment workflow.

Really depends on your environment's needs, though.

Thanks for the discussion! Closing this for now.