
Observations On Windows Server 2012 Essentials

Surprisingly I haven't seen too many people talking about their experiences with WS2012E in here, so I thought I'd throw a little something up to get the ball rolling.

I have been sitting on WHSv1 since it was first released back in 2007, having skipped WHS2011 due to the lack of native DE-like functionality. I've been primarily using it as a file storage tank and as a backup service for my various Windows machines, duties it has continued to perform well over the years.

However, as WHSv1 has been showing its age, it has come time to upgrade. My most pressing concern is that the lack of GPT disk support on the storage end has been rearing its ugly head. At this point the price/storage curve's sweet spot has been moving to 3TB drives, and there are practical limits to the number of disks my server can hold, so with my 8TB pool filling up I needed to be able to move to 3TB+ drives.

Meanwhile, on the client backup side, the lack of UEFI/GPT backup compatibility is about to become a problem. All of my current machines are BIOS, but once my Win8 laptop arrives I'm going to need a GPT-capable backup solution, which of course not even WHS2011 can currently provide.

Coupled with the fact that MS dropped official support for WHSv1 back on the 8th (with absolutely no guidance on whether WS2K3 security patches alone would be enough) I couldn't put this off any longer; it was time to upgrade. With Christmas (and more specifically, gift giving) out of the way I've finally had a chance to sit down and go ahead with my upgrade plans.

Planning The Upgrade

My WHSv1 system was built around an Athlon II X2 that was originally put together in early 2010 (in a hurry, after the previous Athlon X2 host failed). While WS2012E is not particularly CPU hungry, the Athlon II was already a cheap, low power/performance CPU in 2010, never mind 2013. More importantly, it didn't officially support more than 8GB of RAM, and I wanted something beefier to make up for the higher load of the OS and to make sure I didn't end up CPU limited when doing Storage Spaces parity writes.

Since my server has been relegated to living in a closet to keep from annoying the rest of the family, I've always built it out of low-power parts. This means low wattage CPUs coupled with 5400 RPM hard drives; components that won't generate a ton of heat given the enclosed space. Consequently I went the same route for my WS2012E box. Most of my hard drives, along with the case were pulled from the WHSv1 box, so I only needed to pick up a CPU/mobo, and some larger hard drives as part of my retirement plan.

I eventually went with:

Intel Core i3-3220 (55W)
Asus P8B75-M
2x WD Red 3TB

Officially the i3-3220 is rated for 55W, but in practice I'd say the power consumption is at least 15W lower than that, which is great since it's the principal heat source for the server. The iGPU is rubbish, but then I didn't need that; what I needed was a fast dual-core CPU, which the 3220 could easily deliver. That was paired with Asus's mATX P8B75-M board, a fairly standard B75 board featuring 4 DIMM slots and 6 SATA ports, among other things. B75 is Intel's low-end 7-series chipset and only has 1 SATA 6Gbps port, but since this is a file server the features it withholds are of no great importance.

On the storage side I was inheriting 1x WD Green 1.5TB, 2x WD Green 2TB, and 1x WD Red 2TB. The 3TB drives would serve first as temporary storage for my files, so that I could pull everything off of the old drives and prep those for addition to the server's Storage Spaces storage pool, before ultimately joining that pool themselves. Officially the Reds are WD's recommended 5400 RPM drives for NAS usage, and while I doubt I specifically needed their NASy functionality (I'm not running a proper RAID setup), they are WD's only 5400 RPM drives using 1TB platters (higher areal density = higher read/write rates), and they have a longer warranty than the Green drives, which given the Greens' very short warranty was going to be desirable regardless.

On a final note, since unlike WHSv1 WS2012E installs the OS to its own disk, I also needed a disk for the OS itself. I had an old OCZ Vertex 2 128GB SSD I pulled from my desktop a few months back when I made the jump to a 256GB SSD, so I went ahead and earmarked that for OS storage. Technically WS2012E has a minimum requirement of 160GB, but it installs on a 128GB drive just fine with plenty of space leftover. 128GB SSDs still aren't cheap these days, but the improvement in system responsiveness and boot times compared to WHSv1 is just incredible. This beats the pants off of WHSv1 grinding along because it's trying to do OS functions and file serving off of the same HDD.

As for the software itself, yikes. I eventually just capitulated and grabbed WS2012E from Newegg for $500. CDW lists it for cheaper, but there's also some confusion over OEM vs Educational, etc, so it was easier to grab it from a more consumer-friendly store like Newegg. $500 is incredibly expensive and to be honest I don't need most of the enterprise features (such as domains) that lead to that price tag, but as WHS2011 wasn't going to cut it, I didn't have a lot of choice.

At best I'm hopeful I can get a full 10 years out of this copy of Windows; $500 is much easier to swallow when amortized over 10 years at $50/year. The good news is that unlike WHSv1 and WHS2011, WS2012E doesn't have any obvious missing features such as GPT disk support (WHSv1) or GPT backup support (WHS2011) so I may just be able to make it 10 years. Someone hit me up in 2023 and let's find out. :P

Building The Server

There's not a lot to say here. This is mostly time consuming labor; pulling out the old mobo/CPU, pulling out the necessary hard drives, installing the 3TB hard drives (and SSD), and then finally installing the new mobo. The drive cage I'm using to mount 3.5" HDDs in the 5.25" bays is rubbish, so I spent far more time on this than I really wanted to. Buy a better drive cage now and you'll thank yourself later.

On a final note, though it wasn't part of my plan, I ultimately pulled the Intel GigE NIC from the WHSv1 rig and put it in here too. The Asus mobo uses a Realtek NIC, and while Realtek is fine 99% of the time, I have run into esoteric issues before (specifically, imploding when running a Minecraft server). Since I already had the NIC and it's still superior to Realtek in 2012, there was no reason not to throw it in.

Installing Windows Server 2012 Essentials

Like Windows 8, WS2012E is UEFI compatible out of the box, so I made sure to load the WS2012E installer onto a FAT32-formatted USB drive rather than an NTFS one so that it could install in UEFI mode. UEFI doesn't impart any massive benefits, but in my experience it boots a few seconds faster, which is always nice. Furthermore it allowed me to enable Secure Boot, which, while I doubt I'll ever really benefit from it, may prove useful in the future should the box ever get hacked.
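
For reference, prepping the stick is just a FAT32 format plus a straight copy of the ISO's contents. A minimal PowerShell sketch, assuming the USB drive is E: and the mounted ISO is D: (both placeholders for my setup):

    # Format the stick FAT32 so UEFI firmware can boot from it, then copy the install files over.
    Format-Volume -DriveLetter E -FileSystem FAT32 -NewFileSystemLabel "WS2012E" -Force
    Copy-Item -Path "D:\*" -Destination "E:\" -Recurse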

All things considered the installation process is uneventful, and Terry's previous articles already cover it in excruciating detail. It's basically the standard Windows installation process, after which you go through the WS2012E server setup wizard to create the necessary accounts and set up the domain. Follow the instructions and it will be difficult to screw this up.

File Serving/Storage Spaces

Once WS2012E was up and running, the first order of business was to get it sorted out as a file server. My plan from the start had been to put all 6 HDDs into a Storage Spaces storage pool, so the first thing I did with the 3TB Reds was create a temporary Simple storage space to fully utilize their capacity. Simple is a RAID 0-like mode, so throughput is very good, with the limit being the read rate from my older, somewhat fragmented WHSv1 drives. My file collection (without redundancy) is about 5TB, so with reads from the other hard drives varying between 50-100MB/sec, this took quite a while.
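
For the curious, the rough PowerShell equivalent of what the Storage Spaces control panel is doing here looks something like the following; the pool/space names and the 6TB figure are placeholders for my own setup, so treat it as a sketch rather than gospel:

    # Pool up whatever disks are eligible (at this point, just the two 3TB Reds).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "MainPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # A thinly provisioned Simple (RAID 0-like) space spanning those Reds.
    New-VirtualDisk -StoragePoolFriendlyName "MainPool" -FriendlyName "TempSimple" `
        -ResiliencySettingName Simple -ProvisioningType Thin -Size 6TB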

This was compounded by the fact that I needed to start the copy job from each disk as the previous one finished, along with handling the eventual folder merges and in-use system files that halt a copy job. The Explorer changes that went into Win8 also went into WS2012E, so some of these issues were deferred until the end of the copy job, but I found that some things (folder merges, I think?) had to be handled right away, which meant the server could spend some time doing nothing if the issue popped up when I wasn't around to babysit it. In retrospect I probably should have looked into Teracopy or another 3rd party copy utility.
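
Alternatively, robocopy is built in and handles this kind of bulk move unattended; something along these lines would probably have saved me the babysitting (the paths below are placeholders for one of the old WHSv1 drives and the temporary space):

    # /E copies subfolders (including empty ones), /R:1 /W:1 keeps robocopy from stalling
    # for ages on locked files, /XJ skips junction points, /LOG records anything skipped.
    robocopy "E:\shares\Videos" "T:\Videos" /E /R:1 /W:1 /XJ /LOG:"C:\Temp\videos-copy.log"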

In any case, after a day or so this was finally completed, allowing me to add the old drives into the storage pool, bringing it to its full capacity. At this point I decided to mimic the storage setup I had for WHSv1, creating a storage space for each share and using thin provisioning so that I don't run into capacity issues in the future. I'm using all 4 storage space modes here, with files I don't care about being stored on simple spaces, standard files being two-way mirrored, very important work files being three-way mirrored (note: this actually requires 5 drives for some reason), and finally the bulk of my files (Blu-ray rips, etc.) being stored on a parity space.

It's the parity space concept in particular that drew me towards WS2012E, as even 3rd party alternatives don't implement anything quite like it. It's essentially software RAID 5 with the ability to flexibly handle drives of different sizes and performance, though without much of RAID 5's performance. Previously these large files had been stored in a folder-duplicated WHSv1 share, which kept the files safe but ate a lot of space. By MS's own description parity spaces are ideal for archival storage, which is exactly what I was looking for, so this seemed like a good match.

According to some of MS's dev documentation the parity space has a lot of low-level configuration options, of particular importance the number of columns to stripe data across. But I ultimately went ahead and used the Storage Spaces control panel, with its default 3-column organization. 3 columns in this case means that for every 2 bytes of data, 1 byte of parity information is written, essentially requiring 1 parity hard disk for every 2 data hard disks, so the redundancy overhead is 50% of the data stored. A larger number of columns would have brought this overhead down, but with 6 HDDs a 3-column setup mapped well enough, plus using anything else requires banging around in PowerShell, which I'm still not intimately familiar with.
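
For anyone who does want the PowerShell route, the column count is exposed on New-VirtualDisk. A rough sketch with placeholder names and sizes:

    # NumberOfColumns is the knob the control panel doesn't expose;
    # 3 columns means each stripe is 2 data portions plus 1 parity portion.
    New-VirtualDisk -StoragePoolFriendlyName "MainPool" -FriendlyName "Archive" `
        -ResiliencySettingName Parity -NumberOfColumns 3 -ProvisioningType Thin -Size 10TB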

With the storage spaces set up, I decided to press my luck and try out ReFS, MS's new file system. Eventually this will presumably be the default file system for Windows, once it has had enough real-world testing that MS is satisfied it's stable, but for the time being it's limited to being used for storage volumes on Windows Server 2012. The advantage of ReFS here is that it can repair corruption without taking the affected volume offline entirely (more uptime, yay!), and it also stores checksums for a file's metadata and optionally the file data itself. From what I hear file checksumming is even turned on by default for mirrored spaces, but I cannot confirm this at this time. I'm going out on a limb here, but since I'm not doing anything terribly exotic I have enough confidence in MS to try ReFS and reap the benefits.
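
One way to bring a new space up as a ReFS volume from PowerShell, in case it's useful; the drive letter and label are placeholders. I believe Format-Volume also has a -SetIntegrityStreams switch for ReFS if you want to force checksumming of file data, though I left the defaults alone:

    # Initialize the new space, partition it, and format it as ReFS in one pipeline.
    Get-VirtualDisk -FriendlyName "Archive" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter R -UseMaximumSize |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel "Archive"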

Finally, I also created another simple space for the various WS2012E special folders. Client backups and File History are both stored on a simple space, since I'm not particularly concerned about client backups surviving a server disk failure (it's just more data to salvage).

With the various storage spaces set up, it was back to file copying, this time copying data out of the temporary simple space into its final home. Quite honestly this was even slower than getting it in, mostly due to the number of different copy operations required (a separate one for each share), and I think by the end it took 3 days until everything was exactly where I wanted it.

Of particular note, thinly provisioned storage spaces do not significantly shrink as data is removed from them. The temporary storage space I was using was still at around 90% of its maximum size even after I had moved all of the data off of it, meaning it was occupying a lot of very real storage pool space while holding absolutely no data. The reason I used a temporary storage space at all was in case this happened (otherwise quite a bit of my data only needed simple resiliency and could have stayed put), so I'm glad I went ahead as I did. Thinly provisioned storage spaces are best used for data sets that are going to be constant in size or will grow; if they'll shrink, it seems there's a good chance you'll lose that "empty" space.
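
You can see this for yourself by comparing each space's nominal size against what it's actually holding onto in the pool; AllocatedSize is the number that never seemed to shrink back down for me:

    # Size is what the space claims to be; AllocatedSize is the pool capacity it's consuming.
    Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName,
        @{n="SizeGB"; e={[math]::Round($_.Size / 1GB)}},
        @{n="AllocatedGB"; e={[math]::Round($_.AllocatedSize / 1GB)}}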

On another note, 71% is the magic number for selecting which disk to write a slab to. What I've found is that Storage Spaces will attempt to write out slabs equally across every disk (regardless of capacity) until they are 71% full. At that point it will favor disks below 71% unless it needs the fuller disks to complete a resiliency requirement (such as parity). Over the long run this means that your disks will reach similar utilization (on a percentage basis), but at the start they can be wildly different. This also means that if you try hard enough you can back yourself into a corner, since Storage Spaces does not rebalance existing files. E.g. if you somehow end up with 1 very full disk and 2 very empty disks while using a parity space, then you'll never be able to fill up those empty disks as the full disk will run out of space first.
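
If you want to keep an eye on those fill levels, the per-disk utilization is easy to pull out of PowerShell (the ~71% threshold is purely my own observation, not anything documented):

    # Per-disk fill level as a percentage of raw capacity.
    Get-PhysicalDisk | Select-Object FriendlyName,
        @{n="PercentUsed"; e={[math]::Round(100 * $_.AllocatedSize / $_.Size, 1)}}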

On a final note, parity is for archival storage, and for good reason. I didn't take any formal benchmarks here, but parity write performance on a thinly provisioned space is at best the speed of the slowest disk, if not slower. In my case that meant I was getting around 30-50MB/sec; nothing great, but since almost all of my writing to that space was a one-time operation it wasn't a huge problem. I wouldn't mind faster performance, but I don't have any real complaints. It's clearly disk bound (rather than being CPU bound from calculating parity data), and I suspect it's shooting itself in the foot by doing some operations that require random-ish writes (i.e. things that require seeks), but I can't currently prove this latter theory.

That said, for day-to-day use RAM helps here. The more RAM you have, the more data can be buffered in RAM to hide the slower write speeds of the parity space. The 16GB of RAM I have means that upwards of 8GB can be buffered without the user seeing the slower parity write speeds, though of course this also depends on the voodoo WS2012E uses when it comes to buffering, and what else the RAM is being used for at the time. I went with 16GB because it's what I had spare, but 8GB DIMMs are reaching prices so low that 32GB isn't unreasonable even for home use.

WS2012E's voodoo buffering is probably most obvious when it triggers a pause on the client upload. Once the buffer fills up, rather than slowing the client upload down to a rate equal to the write-out speed, it becomes bursty and temporarily halts the upload while letting upwards of a gigabyte write out. In the background WS2012E is still chugging along, but if you're used to seeing a consistent (but slow) upload from the client end of things, it will throw you off the first time you see it.

Client Backups

My other purpose for having a WS2012E box is Microsoft's fantastic Windows client backup functionality, particularly its compatibility with UEFI/GPT systems. Coming from WHSv1 I find that the process hasn't changed much, not that this is a bad thing.

If anything the exposed UI for client backups has become more spartan since WHSv1. All of the core functionality is still there such as the ability to set a backup window and repair the database. On the other hand the backup configuration wizard no longer shows you a list of folder sizes, so it's harder to audit the amount of space a backup is taking. Furthermore I've yet to figure out how to exclude single files from the backup set (WS2012E doesn't exclude Win8's swapfile.sys by default) or how to trigger a manual backup cleanup.

Other than that this is the same great backup functionality MS has had since WHSv1. On the client end backups are quiet and painless, and while I didn't do any formal timed tests, I feel that day-to-day (incremental) backups are faster than WHSv1. It seems to spend less time analyzing the disk before sending data to the server, although a detailed breakdown of the backup process is another thing that's no longer in the UI.

On the server end MS seems to have either improved their dedupe or their compression, or more likely both. It's hard to get exact numbers here since I'd be comparing my current backup database to an older database that had months of backups, but it seems like my backup database is a good 10-15% smaller than it was under WHSv1 in comparable scenarios.

Client restore performance is also better, I feel. The amount of time needed to mount a backup set is at least 20 seconds faster. I am not particularly enamored with the restore wizard though, as it's designed more for restoring a known file/directory than for browsing around. Thankfully the underpinnings of the process haven't changed, so in the background the backup is still mounted as a drive that you can browse and copy out of, so long as you leave the wizard open (closing the wizard unmounts the backup).

I haven't had a chance to try a bare metal restore yet. I need to get around to that, but I've been lazy/busy so far.

File History

One of the things MS did with Win8 was remove the Previous Versions feature (their Volume Shadow Copy-based Time Machine-like feature) in favor of File History. One of the things WS2012E does by default is take over managing File History on Win8 clients.

The management ability itself works just fine. You get to select the frequency, retention, and what to back up (some libraries, all libraries, or the entire user profile), which then gets pushed out to Win8 clients as it should, though I haven't figured out at what frequency these settings are pushed out.

The advantage of using File History is that it's far more frequent than a complete client backup. Instead of a daily backup you can run File History backups as often as every 10 minutes, which makes it possible to grab individual revisions of a document or file. I haven't had to do a File History restore yet, but much like Previous Versions I'm sure it will pull my fanny out of the fire sooner or later.

The problem with File History is that it eats up space extremely quickly because of a combination of flaws. First and foremost it lacks dedupe functionality, so any file change causes File History to make a complete copy of the file; for large files this gets especially bad. Second, it appears to be based solely on timestamps rather than contents, so anything that causes a timestamp change in effect causes File History to make a fresh backup.

The end result is that I'm running into a particularly annoying problem: File History is making a complete backup of my 500MB Outlook IMAP database several times a day. That adds up extremely quickly, to the point where I've had to disable File History on my Win8 machine for the moment. Adding insult to injury, if you use the central management function there's no way to exclude Outlook databases from File History, since you only have the 3 coarse options. If the client manages File History, then the client can exclude individual folders; it's just server management that's out of luck.

Dashboard

From a design perspective the Dashboard is significantly different from the WHSv1 client console, but functionally they're much more similar than they look. Once you get past the first-time setup tasks that the Dashboard wants you to go through I find that I spend most of my time looking at the Devices and Storage tabs.

The separation of storage configuration from the Dashboard takes a bit of getting used to. Whereas in WHSv1 you could set up folder duplication on a per-share basis, for DE-like functionality here you have to jump into the Storage Spaces control panel. This is easy enough once you get used to it, but it's one of those things that's clearly not the same as WHSv1. Now you have to create a storage space, then go back into the Dashboard and find that storage space to create the appropriate share(s) on.

This also means that the handy storage pool pie chart is no longer available, so it's not possible to figure out the breakdown between files/backups/duplication at a glance. This wouldn't be so bad if not for the fact that the Dashboard doesn't keep track of the amount of space occupied by a server folder (you can find out, but it has to count up the files).
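
In the meantime, totaling up a server folder by hand from PowerShell is quick enough (the path below is a placeholder for wherever the share actually lives):

    # Sum up every file under a server folder the hard way.
    $bytes = (Get-ChildItem "R:\ServerFolders\Videos" -File -Recurse -Force -ErrorAction SilentlyContinue |
        Measure-Object -Property Length -Sum).Sum
    "{0:N1} GB" -f ($bytes / 1GB)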

Otherwise I feel like I should have more to say about the Dashboard. It's one of those things that just works, and in fact it's critical to the entire WS2012E experience. I've poked around in the standard Windows Server Server Manager, and to be frank I couldn't imagine using Windows Server 2012 if that were my only interface. It's powerful, but I have neither the time nor the inclination to spend a week reading a book to better understand just what it can do. So like the console on WHSv1, the Dashboard on WS2012E is the essential magic that dumbs down server administration to the point that it's practical for my needs. That said, the fact that MS didn't throw out Server Manager is appreciated; in particular some of the advanced Storage Spaces functionality is only accessible in there, so it's nice to have it to fall back on when the Dashboard isn't enough.

With that said, I'm finding that the Dashboard takes a ridiculous amount of time to launch. Even with a fast processor and an SSD it takes a good 20 seconds for the Dashboard to come up, which is not frustrating so much as annoying. At this point I've resorted to leaving it open (in an inactive remote desktop session) at all times so that I can get to it quicker. I'm not sure what's going on in the background that makes it take so long to load; it seems excessive.

Finally, has anyone figured out how to better manage alerts? In particular, WS2012E keeps throwing off an alert whenever MS pushes out an important update. This would be all well and good if not for the fact that Windows Defender/MSSE definition updates are classified as important, which means I keep getting alerts for definition updates even though the problem solves itself within a couple of hours once the client actually gets around to installing the update. Ignoring the alert hasn't worked in the long term either, since the alert gets cleared when the client installs the update, only to come back with the next update, and so on.

Launchpad

There's not a lot to say here. Coming from WHSv1 it's a bit of a change-up since it's more of a small application than a notification tray app, but otherwise there's not much to it. The ability to view alerts locally (rather than having to load up the Dashboard) is a very welcome change though.

Meanwhile I'm not using the Domain functionality here at all, so I'll gloss over that. However, I wish MS would make domain joining a server-side option rather than a client-side one, so that I could skip it entirely. I accidentally joined my Win7 HTPC to the domain when I forgot to set the registry entry to skip it (sketched below), so I had to spend some time undoing that.
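
For reference, that client-side skip is a registry value set before running the Connector install. To the best of my knowledge this is the value the WS2012E connector checks, but double-check Microsoft's documentation before relying on it:

    # Run on the client *before* installing the Connector software.
    New-Item -Path "HKLM:\SOFTWARE\Microsoft\Windows Server\ClientDeployment" -Force | Out-Null
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Server\ClientDeployment" `
        -Name "SkipDomainJoin" -Type DWord -Value 1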

Mac Support

Speaking of the Launchpad, let's not confuse the wonderful Windows Launchpad with the Mac version. Mac support throughout WS2012E is rubbish. The Mac Launchpad looks mostly the same, but it's missing all the important features that you'd actually want to use it for: the Dashboard, and client backups.

Since only Windows supports the remote application functionality that the Launchpad uses to bring up the Dashboard on a client's desktop, that function is completely AWOL on the Mac. Using the Remote Desktop client to RD into the server solves that problem, but that's a separate download that doesn't integrate with the Launchpad.

Meanwhile the lack of client backups isn't entirely MS's fault, but they do a god-awful job explaining it. Apple hasn't supported using Time Machine to back up clients to anything other than AFP shares since 10.6.6, so you can't force Time Machine to back up to WS2012E's SMB shares. The problem is that not only does the Launchpad not tell you this, but for some reason MS still lets you tell the Launchpad to try (even though it hasn't been possible for 2+ years now), and it doesn't spit out any kind of useful error when it fails.

So what is the Mac Launchpad good for? It's good for monitoring alerts, and really that's it. You can't backup/restore on a Mac and you can't access the Dashboard, so the Launchpad serves no other useful purpose. MS badly needs to implement AFP support on WS2012 (even if it's just an experimental download) to make this worthwhile.

Remote Web Access/Media Streaming

I never made significant use of remote web access in WHSv1, and I doubt that will change here. Still, I do keep it enabled in case I need to remotely grab a file for whatever reason. It seems to work well enough, but I don't have much more insight than that.

I'm not planning on using the VPN functionality or media streaming either, so for the moment both of those features are turned off. It's nice to have them there, but I don't have a use for them right now.

I do wish Microsoft would make the homeserver.com domain available to WS2012E though. I get that it's meant for Home Server products, whereas remotewebaccess.com is for SBS/Essential products, but since Windows Home Server has been canceled it would be nice to be able to keep my homeserver.com subdomain.

XP Support Through VMware

While I don't have any Vista machines left, I unfortunately still have one XP machine on my network. Ideally I'd like to be rid of it entirely, but that's not my call; the damn thing will undoubtedly last until April 8th, 2014. As a result I needed an XP backup solution, since WS2012E can't do it on its own.

Ultimately I opted to go ahead with using VMware Workstation to run an instance of WHSv1 specifically for that XP machine. This ended up working out rather well; with WHSv1 doing nothing other than backing up that machine it consumes very little in the way of resources. As it stands the virtual disk for the system drive resides on the SSD, while a separate virtual disk that holds the client backups and file shares sits in one of my simple storage spaces. The space consumed is insignificant, and with 16GB of RAM on my server the 1GB of RAM allocated to VMware is pretty light. Note that I did need to use Bridged mode here, as the WHSv1 VM needs a real network connection and IP address for the XP client to find it.

This also means I can hold onto my homeserver.com subdomain for now, even though my WHSv1 box doesn't have any ports exposed to the Internet. The security certificate chain still works here too, so I can go to my homeserver.com subdomain and log into my WS2012E box just as well as if I went to remotewebaccess.com.

On a final note, I have yet to figure out how to automatically start the VM after a server reboot, but since this is just for XP backups and the server reboots so infrequently, it hasn't been an issue. A boot-time scheduled task along the lines below is on my to-try list.
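
The idea would be a scheduled task that calls VMware's vmrun tool at startup; this is untested on my end, and the paths are placeholders for my install and .vmx locations:

    # Register a boot-time task that starts the WHSv1 VM headless via vmrun.
    $exe     = "C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"
    $vmArgs  = '-T ws start "D:\VMs\WHSv1\WHSv1.vmx" nogui'
    $action  = New-ScheduledTaskAction -Execute $exe -Argument $vmArgs
    $trigger = New-ScheduledTaskTrigger -AtStartup
    Register-ScheduledTask -TaskName "Start WHSv1 VM" -Action $action -Trigger $trigger -User "SYSTEM"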

Other

This isn't really about any one feature, but rather a random list of interesting/useful things I've discovered thus far.
  • Metro is completely useless here. Not only is it not used for any of the server utilities, but Metro and Remote Desktop don't get along all that well. My server spends all of its time on the desktop, and somehow I doubt the Windows Server team really wanted to have Metro here at all.
  • Disable SMB signing. It's a good idea in theory, but if my LAN gets MITM'd I have bigger problems, and in the meantime it doesn't seem to get along well with things that aren't Win8 (one way to turn it off is sketched after this list).
  • I'm not sure if this has any practical uses at the moment, but the way Storage Spaces is implemented means that spaces show up as physical disks (i.e. PhysicalDisk objects); specifically, 4K sector disks. WHSv1 doesn't support 4K sectors so I wasn't able to use this to my advantage in my VMware instance, but it seems to me that this will help with performance when virtualizing newer OSes, since a VM could map directly against the space rather than operating a virtual disk inside what amounts to another virtual disk.
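
On the SMB signing note, this is roughly how to turn it off server-side from an elevated PowerShell prompt. Keep in mind that a WS2012E box is a domain controller, so Group Policy may flip it right back, in which case the policy itself is the thing to change:

    # Disable SMB signing on the server side.
    Set-SmbServerConfiguration -EnableSecuritySignature $false -RequireSecuritySignature $false -Force
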
Conclusion

Although I'm admittedly still trying to completely get over the sticker shock of WS2012E, I have so far been extremely pleased with the OS. The client backup functionality remains unrivaled, and the combination of Storage Spaces plus WS2012E's file server functionality has been meeting my file storage needs as well as, if not better than, WHSv1. Given the price tag I doubt I would have jumped into this had I not already been using WHSv1, but I'm here now and I'm happy with what I've gotten out of WS2012E.

That said, I still think it's a shame that MS discontinued Windows Home Server. A more stripped-down version of this as WHS2012 would have worked perfectly; exclude the Domain functionality, VPN, and a few other things and you would have the logical next step for WHS: the codebase brought up to Server 2012/Win8, with Storage Spaces thrown in.

Finally, I'm glad to see that the release cycle for WHS/Essentials has sped up dramatically. With WHSv1 and WHS/Essentials 2011 we essentially had to wait 3-4 years after the release of the base OS (Server 2003/2008) for these specialized versions to come along, whereas Essentials 2012 arrived mere months after the base OS, which means it's no longer lagging other Windows releases and missing important functionality in the process (e.g. GPT disk support, SMB 3.0, etc.). Now I realize that this is largely because MS reused various 2011 components such as the Launchpad and the Dashboard, but I would like to think they'll be able to keep up concurrent development like this in the future.
