
12/24/2016

Raid Types And Levels

In computer storage, the standard RAID levels comprise a basic set of RAID (redundant array of independent disks) configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 and its variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard.
While most RAID levels can provide good protection against and recovery from hardware defects or defective sectors/read errors (hard errors), they do not provide any protection against data loss due to catastrophic failures (fire, water) or soft errors such as user error, software malfunction, or malware infection. For valuable data, RAID is only one building block of a larger data loss prevention and recovery scheme; it cannot replace a backup plan.

RAID Levels

There are many different ways to organize data in a RAID array. These ways are called "RAID levels". Different RAID levels have different speed and fault tolerance properties. RAID level 0 is not fault tolerant. Levels 1, 5, 6, and 1+0 are fault tolerant to differing degrees: should one of the hard drives in the array fail, the data is still reconstructed on the fly and no access interruption occurs.
RAID levels 2, 3, and 4 are theoretically defined but not used in practice.
There are some more complex layouts: RAID 5E/5EE (integrating some spare space), RAID 50 and 60 (a combination of RAID 5 or 6 with RAID 0), and RAID DP. These are however beyond the scope of this reference.

RAID levels triangle

RAID levels comparison chart

                     | RAID 0 | RAID 1       | RAID 5          | RAID 6          | RAID 10
Min number of disks  | 2      | 2            | 3               | 4               | 4
Fault tolerance      | None   | 1 disk       | 1 disk          | 2 disks         | 1 disk
Disk space overhead  | None   | 50%          | 1 disk          | 2 disks         | 50%
Read speed           | Fast   | Fast         | Slow, see below | Slow, see below | Fast
Write speed          | Fast   | Fair         | Slow, see below | Slow, see below | Fair
Hardware cost        | Cheap  | High (disks) | High            | Very high      | High (disks)

Striping and blocks

Striping is a technique to store data on the disk array. The contiguous stream of data is divided into blocks, and blocks are written to multiple disks in a specific pattern. Striping is used with RAID levels 0, 5, 6, and 10.
Block size is selected when the array is created. Typically, blocks are from 32KB to 128KB in size.
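For illustration, here is a minimal sketch in PowerShell that maps a byte offset to the member disk and stripe row it would land on in a simple striped layout. The 64KB block size, the three-disk count, and the function name are assumptions for the example, not part of any real controller's interface.

    # Illustrative only: where does a given byte offset land in a simple striped layout?
    $blockSizeKB = 64      # typical block sizes are 32KB to 128KB
    $diskCount   = 3

    function Get-StripePosition {
        param([long]$ByteOffset)
        $block = [long][math]::Floor($ByteOffset / ($blockSizeKB * 1KB))
        [pscustomobject]@{
            Block = $block                                   # logical block number
            Disk  = ($block % $diskCount) + 1                # member disk holding the block
            Row   = [long][math]::Floor($block / $diskCount) # stripe (row) on that disk
        }
    }

    # Logical blocks 0,1,2,... land on disks 1,2,3,1,2,3,... matching the RAID0 diagram below.
    Get-StripePosition -ByteOffset 300KB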

RAID Level 0 (Stripe set)

Use RAID0 when you need performance but the data is not important.
In a RAID0, the data is divided into blocks, and blocks are written to disks in turn.
RAID0 provides the greatest speed improvement, especially for write speed, because read and write requests are evenly distributed across all the disks in the array. (RAID1, mirroring, can provide the same improvement for reads, but not for writes.) If a request comes in for, say, blocks 1, 2, and 3, each block is read from its own disk, so the data is read three times faster than from a single disk.
However, RAID0 provides no fault tolerance at all. Should any of the disks in the array fail, the entire array fails and all the data is lost.
RAID0 solutions are cheap, and RAID0 uses all the disk capacity.
If the RAID0 controller fails, you can do a RAID0 recovery relatively easily using RAID recovery software. However, keep in mind that if a disk fails, the data is lost irreversibly.
Disk 1 | Disk 2 | Disk 3
  1    |   2    |   3
  4    |   5    |   6
  7    |   8    |   9

RAID Level 1 (Mirror)

Use mirroring when you need reliable storage of relatively small capacity.
Mirroring (RAID1) stores two identical copies of data on two hard drives. Should one of the drives fail, all the data can be read from the other drive. Mirroring does not use blocks and stripes.
Read speed can be improved in certain implementations, because read requests are sent to the two drives in turn. Similar to RAID0, this should increase read speed by a factor of two. However, not all implementations take advantage of this technique.
Write speed on RAID1 is the same as the write speed of a single disk, because all the copies of the data must be updated.
RAID1 uses the capacity of one of its drives to maintain fault tolerance. This amounts to a 50% capacity loss for the array. E.g., if you combine two 500GB drives in RAID1, you'd only get 500GB of usable disk space.
If a RAID1 controller fails, you do not need to recover either the array configuration or the data from it. To get the data, just connect either of the drives to a known-good computer.
Disk 1 | Disk 2
  1    |   1
  2    |   2
  3    |   3

RAID Level 5 (Stripe with parity)

RAID5 is a good fit for large, reliable, relatively cheap storage.
RAID5 writes data blocks evenly to all the disks, in a pattern similar to RAID0. However, one additional "parity" block is written in each row. This additional parity, derived from all the data blocks in the row, provides redundancy. If one of the drives fails and thus one block in the row is unreadable, the contents of this block can be reconstructed using parity data together with all the remaining data blocks.
If all drives are OK, read requests are distributed evenly across drives, providing read speed similar to that of RAID0. For N disks in the array, RAID0 provides N times faster reads and RAID5 provides (N-1) times faster reads. If one of the drives has failed, the read speed degrades to that of a single drive, because all blocks in a row are required to serve the request.
Write speed of a RAID5 is limited by the parity updates. For each written block, its corresponding parity block has to be read, updated, and then written back. Thus, there is no significant write speed improvement on RAID5, if any at all.
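The parity itself is a simple bitwise XOR across the data blocks in a row, which is also what forces the read-modify-write penalty described above. A minimal PowerShell sketch, using tiny made-up byte arrays purely for illustration:

    # Illustrative only: RAID5-style parity with bitwise XOR.
    $blockA = [byte[]](0x11, 0x22, 0x33, 0x44)
    $blockB = [byte[]](0xA0, 0xB0, 0xC0, 0xD0)
    $blockC = [byte[]](0x05, 0x06, 0x07, 0x08)

    # Parity block: XOR of all data blocks in the row.
    $parity = for ($i = 0; $i -lt $blockA.Length; $i++) {
        $blockA[$i] -bxor $blockB[$i] -bxor $blockC[$i]
    }

    # If the disk holding block B fails, its contents are rebuilt from the
    # surviving blocks and the parity: B = A xor C xor P.
    $rebuiltB = for ($i = 0; $i -lt $parity.Count; $i++) {
        $blockA[$i] -bxor $blockC[$i] -bxor $parity[$i]
    }

    # Read-modify-write on a small update: new parity = old parity xor old data xor new data.
    # This is why every small write costs extra reads and writes.
    $newB0      = 0xFF
    $newParity0 = $parity[0] -bxor $blockB[0] -bxor $newB0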
The capacity of one member drive is used to maintain fault tolerance. E.g. if you have 10 drives 1TB each, the resulting RAID5 capacity would be 9TB.
If RAID5 controller fails, you can still recover data from the array with RAID 5 recovery software. Unlike RAID0, RAID5 is redundant and it can survive one member disk failure.
While the diagram below might seem simple enough, there is a variety of different layouts in practical use. Left/right and synchronous/asynchronous produce four possible combinations. Further complicating the issue, certain controllers implement delayed parity.
Disk 1 | Disk 2 | Disk 3
  1    |   2    |   P
  3    |   P    |   4
  P    |   5    |   6
  7    |   8    |   P

RAID Level 6 (Stripe with dual parity)

RAID6 provides large, highly reliable, relatively expensive storage.
RAID6 uses a block pattern similar to RAID5, but utilizes two different parity functions to derive two different parity blocks per row. If one of the drives fails, its contents are reconstructed using one set of parity data. If another drive fails before the array is recovered, the contents of the two missing drives are reconstructed by combining the remaining data and two sets of parity.
Read speed of the N-disk RAID6 is (N-2) times faster than the speed of a single drive, similar to RAID levels 0 and 5. If one or two drives fail in RAID6, the read speed degrades significantly because a reconstruction of missing blocks requires an entire row to be read.
There is no significant write speed improvement in RAID6 layout. RAID6 parity updates require even more processing than that in RAID5.
The capacity of two member drives is used to maintain fault tolerance. For an array of 10 drives 1TB each, the resulting RAID6 capacity would be 8TB.
The recovery of a RAID6 from a controller failure is fairly complicated.
Disk 1 | Disk 2 | Disk 3 | Disk 4
  1    |   2    |   P1   |   P2
  3    |   P1   |   P2   |   4
  P1   |   P2   |   5    |   6
  P2   |   7    |   8    |   P1

RAID Level 10 (Mirror over stripes)

RAID10 provides large, fast, reliable, but expensive storage.
RAID10 uses two identical RAID0 arrays to hold two identical copies of the content.
Read speed of the N-drive RAID10 array is N times faster than that of a single drive. Each drive can read its block of data independently, same as in RAID0 of N disks.
Writes are two times slower than reads, because both copies have to be updated. As far as writes are concerned, RAID10 of N disks is the same as RAID0 of N/2 disks.
Half the array capacity is used to maintain fault tolerance. In RAID10, the overhead increases with the number of disks, contrary to RAID levels 5 and 6, where the overhead is the same for any number of disks. This makes RAID10 the most expensive RAID type when scaled to large capacity.
If there is a controller failure in a RAID10, any subset of the drives forming a complete RAID0 can be recovered in the same way a RAID0 is recovered.
Similarly to RAID 5, several variations of the layout are possible in implementation.
Disk 1 | Disk 2 | Disk 3 | Disk 4
  1    |   2    |   1    |   2
  3    |   4    |   3    |   4
  5    |   6    |   5    |   6
  7    |   8    |   7    |   8

6/25/2016

How to Configure and Publish InfoPath Form to SharePoint 2013

We can use InfoPath to easily customize a list with a design environment for designing and publishing forms via commonly used Windows controls such as check boxes, text boxes, command buttons, and option buttons. In this post, I will show you how to configure and publish an InfoPath Form to SharePoint 2013.
Notes: To configure and publish InfoPath to SharePoint 2013, the following items are required:
1.  SharePoint Enterprise 2013
2.  Microsoft InfoPath Designer 2013
3.  An account to access the SharePoint 2013 Site
4.  Having the SharePoint Server Enterprise Site Collection feature activated
To activate the SharePoint Server Enterprise Site Collection feature, go to Site Settings --> under Site Collection Administration, select Site Collection Features --> activate SharePoint Server Enterprise Site Collection features
sp_01
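If you prefer to script this step, the same feature can be activated from the SharePoint 2013 Management Shell. This is only a sketch: the site URL is a placeholder, and "PremiumSite" is my assumption for the internal name of the Enterprise Site Collection features feature, so verify it with Get-SPFeature in your farm.

    # Run in the SharePoint 2013 Management Shell; the URL is a placeholder.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # "PremiumSite" is assumed to be the internal name of the
    # "SharePoint Server Enterprise Site Collection features" feature.
    Enable-SPFeature -Identity "PremiumSite" -Url "http://sharepoint/sites/demo"

    # Verify it is now active on the site collection.
    Get-SPFeature -Site "http://sharepoint/sites/demo" |
        Where-Object { $_.DisplayName -eq "PremiumSite" }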
Make sure the State Service is started.  Go to Central Administration --> click Application Management --> under the Service Applications section --> click Manage Service Applications
sp_02
If you don’t see State Service in the list, go to Central Administration --> click Configuration Wizards --> click Launch the Farm Configuration Wizard --> click Start the Wizard --> select State Service and click OK.
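As an alternative to the Farm Configuration Wizard, the State Service can also be provisioned from the SharePoint Management Shell. The names below are placeholders; this is a sketch of the commonly used sequence, not an official script.

    # Run in the SharePoint 2013 Management Shell; names are placeholders.
    $db  = New-SPStateServiceDatabase -Name "StateService_DB"
    $app = New-SPStateServiceApplication -Name "State Service" -Database $db
    New-SPStateServiceApplicationProxy -Name "State Service Proxy" -ServiceApplication $app -DefaultProxyGroup

    # Confirm the service application now exists (it should also appear under
    # Manage Service Applications in Central Administration).
    Get-SPStateServiceApplication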

Create the InfoPath form

Step 1: To create the InfoPath Form, open InfoPath Designer 2013 and under Available Form Templates, select Blank Form and click the Design Form button.
sp_03
The design form will appear as shown:
sp_04
Step 2: Design the Form.
Go to the Insert tab and in the tables section, double click the Two-Column 2 Heading table to insert the table into the Design Form.
sp_05
Click to add heading text and type "Contact Info".  Right click on the form, select Insert, and then Rows Below (repeat this 2 times to add 2 rows). The Form should look like this:
sp_06
In the Fields window on the right-hand side, right click the myFields folder, and click Add.
sp_07
This "Add Field or Group" form should show up:
sp_08
Repeat the steps above to add the remaining fields, using the "text" data type:
sp_09
Drag each field and add it to the form:
sp_10
Now we have a simple InfoPath form with the fields above.

Publish InfoPath to SharePoint 2013

To publish the InfoPath Form, click File, click Publish, and then click SharePoint Server to bring up the Publishing Wizard form.  On the first page of the Publish Wizard, type the URL of the SharePoint site where you would like the new form to be located and click Next.
sp_11
On this page of the Publishing Wizard, select the option corresponding to where you want to publish the InfoPath to:
sp_12
Publish InfoPath Form Template to Form Library
Select the Form Library option if you want to publish the new form to a SharePoint form library only.  Click Next to go to the next Publishing Wizard page.
sp_13
On this page, you have the option to publish to an existing form library or create a new form library.  If you select create a new form library and click Next, this page will appear:
sp_14
On this page, type the Name of the new library and click Next. I chose "InfoPath Library".  Here, you have the option to promote form fields into the columns of the library.  Click Add to add those columns and click Next.  These fields will be available as columns in the SharePoint Site.
sp_15
Then, click Publish to publish the InfoPath Form.  Select Open this form library … and click Close.
sp_16
The InfoPath Library on SharePoint is opened.
sp_17
Click New Document and the InfoPath Form will be open in edit mode.
sp_18
Fill out the form and click Save.  Type the name of the InfoPath Form and click Save.
sp_19
The InfoPath Form is in the SharePoint site as shown below:
sp_20
Publish InfoPath Form Template to Site Content Type
Select the Site Content Type (advanced) option if you want to publish the InfoPath form template that binds to a Site Content Type.  This option will allow the form template to be used in multiple libraries and sites.
sp_21
sp_22
Give the Content Type a name and click Next.
sp_23
Specify a location and file name for the form template.
sp_24
sp_25
sp_26

sp_27
To use InfoPath as a Content Type, go to the Library in which you want to create the form, click the Library tab --> click Library Settings --> click Advanced Settings --> under Allow management of content types, select Yes.
Under the Content Types section, click Add from existing site content types --> select Microsoft InfoPath from the "Select site content types from" drop-down list --> select the InfoPath content type from the Available Site Content Types box --> click Add, and then click OK.
sp_28
Go back to the Library and click the Files tab, click the down arrow at New Document, and click InfoPathCT.
sp_29
Publish InfoPath as Approved Form Template
Select the Administrator-approved form template and click Next. This option will make the form template available on all site collections.
sp_30
Specify a location and file name for the form template and click Next.
sp_31
sp_32
sp_33
sp_34
To re-use the InfoPath template on other site collections, follow these steps:
In Central Administration, click General Application Settings --> under InfoPath Forms Services, click Manage form templates.
sp_35
Click Upload form template, select the InfoPathSC file which we saved before, and click OK.
sp_36
Make sure the form template is uploaded successfully and click OK.
sp_37
Now the Form Template is uploaded to the Manage Form Template page.
sp_38
Click the down arrow beside the form template to activate the InfoPath Form Template for the site collection.
sp_39
Go to the other site collection to activate the feature (click Site Settings --> under the Site Collection Administration section, select Site Collection Features --> activate InfoPathSC).
sp_40
After that, go to the list in which you want to re-use the InfoPath Form and add the Content Type.
sp_41
Now, you can use the InfoPathSC Form in that site collection.

5/22/2016

Windows Server 2012 Hyper-V Live Migration

What Is Live Migration?
Live Migration is the equivalent of vMotion. The purpose of this feature is to move virtual machines from one location to another without any downtime. Well, that’s the perception of Live Migration and vMotion. As anyone who has ever used these features in a lab will know, there is actually some downtime when vMotion or Live Migration is used. A better definition would be: Live Migration (or vMotion) allows you to move virtual machines without losing service availability. That’s a very subtle difference in definitions, which we will explain later on in this article.
The purpose of Live Migration is flexibility. Virtual machines are abstracted from the hardware on which they run. This flexibility allows us to match our virtual machines to our resources and to replace hardware more easily. It makes IT and the business more agile and responsive – all without impacting the operations of the business.
Back to Basics
Often there is confusion between Live Migration and high availability (HA). This is due to the fact that Live Migration (and vMotion) historically required a host cluster with shared storage. But things have changed, and it’s important to understand the differences between Live Migration and HA.
Live Migration is a proactive operation. Maybe an administrator wants to power down a host and is draining it of virtual machines. The process moves the virtual machines, over a designated Live Migration network, with no drop in service availability. Maybe System Center wants to load balance virtual machines (VMM Dynamic Optimization). Live Migration is a planned and preventative action – virtual machines move with no downtime to service availability.
High availability, on the other hand, is reactive and unplanned. HA is the function of failover clustering in the Windows Server world. Hosts are clustered and virtual machines are marked as being highly available. Those virtual machines are stored on some shared storage, such as a SAN, a shared Storage Pool, or a common SMB 3.0 share. If a host fails, all of the virtual machines that were running on it stop. The other hosts in the cluster detect the failure via failed heartbeats. The remaining hosts failover the virtual machines that were on the now dead host. Those failed over virtual machines automatically power up. You’ll note that there is downtime.
Read those two paragraphs again. There was no mention of failover clustering when Live Migration was discussed as a planned operation. Windows Server 2012 Hyper-V Live Migration does not require failover clustering: You can do Live Migration without the presence of a cluster. However, HA is the reason that failover clustering exists.
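For example, two standalone Windows Server 2012 hosts can be prepared for Live Migration entirely from PowerShell. This is a sketch: the subnet is a placeholder, and Kerberos authentication additionally requires constrained delegation to be configured on the hosts' computer accounts in Active Directory.

    # Run on each standalone Hyper-V host; values are placeholders.
    Enable-VMMigration

    # Restrict incoming live migrations to a dedicated migration network.
    Add-VMMigrationNetwork "192.168.10.0/24"

    # Use Kerberos so migrations can be started remotely
    # (requires constrained delegation on the host computer accounts).
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

With that in place, a running virtual machine can be moved between the two hosts with Move-VM, as described later in this article.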
Promises of Live Migration
There are two very important promises made by Microsoft when it comes to Live Migration:
  • The virtual machine will remain running no matter what happens. Hyper-V Live Migration does not burn bridges. The source copy of a virtual machine and its files remain where they are until a move is completed and verified. If something goes wrong during the move, the virtual machine will remain running in the source location. Those who stress-tested Live Migration in the beta of Windows Server 2012 witnessed how this worked. It is reassuring to know that you can move mission critical workloads without risk to service uptime.
  • No new features will prevent Live Migration. Microsoft understands the importance of flexibility. All new features will be designed and implemented to allow Live Migration. Examples of features that have caused movement restrictions on other platforms are Single Root IO Virtualization (SR-IOV) and virtual fiber channel. There are no such restrictions with Hyper-V – you can quite happily move Hyper-V virtual machines with every feature enabled.
Live Migration Changes in Windows Server 2012 Hyper-V
Windows Server 2012 features a number of major changes to Live Migration, some of which shook up the virtualization industry when they were first announced.
  • Performance enhancements: Some changes were made to the memory synchronization algorithm to reduce page copies from the source host to the destination host.
  • Simultaneous Live Migration: You can perform multiple simultaneous Live Migrations across a network between two hosts, with no arbitrary limits.
  • Live Migration Queuing: A clustered host can queue up lots of Live Migrations so that virtual machines can take it in turn to move.
  • Storage Live Migration: We can move the files (all or some) of a virtual machine without affecting the availability of services provided by that virtual machine.
  • SMB 3.0 and Live Migration: The new Windows Server shared folder storage system is supported as shared storage for Live Migration with or without a Hyper-V cluster.
  • Shared Nothing Live Migration: We can move virtual machines between two non-clustered hosts, between a non-clustered host and a clustered host, and between two clustered hosts.
Performance Enhancements
Let’s discuss how Live Migration worked in Windows Server 2008 R2 Hyper-V before we look at how the algorithm was tuned. Say a virtual machine, VM01, is running on HostA. We decide we want to move the virtual machine to HostB via Live Migration. The process will work as follows:
  1. Hyper-V will create a copy of VM01’s specification and configure dependencies on HostB.
  2. The memory of VM01 is divided up into a bitmap that tracks changes to the pages. Each page is copied, from the first to the last, from HostA to HostB, and each page is marked as clean after it is copied.
  3. The virtual machine is running so memory is changing. Each changed page is marked as dirty in the bitmap.  Live Migration will copy the dirty pages again, marking them clean after the copy. The virtual machine is still running, so some of the pages will change again and be marked as dirty. The dirty copy process will repeat until (a) it has been done 10 times or (b) there is almost nothing left to copy.
  4. What remains of VM01 that has not been copied to HostB is referred to as the state. At this point VM01 is paused on HostA.
  5. The state is copied from HostA to HostB, thus completing the virtual machine copy.
  6. VM01 is resumed on HostB.
  7. If VM01 runs successfully on HostB then all trace of it is removed from HostA.
This process moves the memory and processor of the virtual machine from HostA to HostB, both in the same host cluster. The files of the virtual machine are on some shared storage (a SAN in Windows Server 2008 R2) that is used by the cluster.
It is between the pause in step 4 and the resume in step 6 that the virtual machine is actually offline. This is where a ping test drops a packet. Ping is a tool based on the ICMP diagnostic protocol and is designed to detect latency and reachability; that is exactly what it detects when a ping goes unanswered during Live Migration or vMotion: the virtual machine is briefly unavailable. Most applications are based on more tolerant protocols which allow servers several seconds to respond. Both vMotion and Live Migration take advantage of that during the switch-over of the virtual machine from the source to the destination host. That means your end users can be reading their email, using the CRM client, or connected to a Citrix XenApp server, and they might not notice anything other than a slight dip in performance for a second or two. That’s a very small price for a business-friendly feature like Live Migration or vMotion.
Aside from removing the cluster requirement, the other big change to this process in Windows Server 2012 is that the first memory copy from HostA to HostB has been tuned to reflect memory activity. The initial page copy is prioritized, with the least used memory being copied first and the most recently used memory being copied last. This should lead to fewer copy iterations and faster Live Migration of individual virtual machines.
Simultaneous Live Migration
In Windows Server 2008 R2, we could only perform one simultaneous Live Migration between any two hosts within a cluster. With host capacities growing (up to 4 TB RAM and 1,024 VMs on a host) we need to be able to move virtual machines more quickly. Imagine how long it would take to drain a host with 256 GB RAM over a 1 GbE link! Hosts of this capacity (or greater) should use 10 GbE networking for the Live Migration network. Windows Server 2008 R2 couldn’t make full use of this bandwidth – but Windows Server 2012 can. Combined with simultaneous Live Migration, Hyper-V can move lots of virtual machines very quickly, taking advantage of 10 Gbps, 40 Gbps, or even 56 Gbps networking! This makes large data center operations happen very quickly.
The default number of simultaneous Live Migrations is two, as you can see in the below screenshot. You can tune the host based on its capabilities. Running too many Live Migrations at once is expensive; not only does it consume the bandwidth of the Live Migration network (which might be converged with other networks) but it also consumes resources on the source and destination hosts. Don’t worry – Hyper-V will protect you from yourself. Hyper-V will only perform the number of concurrent Live Migrations that it can successfully do.

LM_01
A common question is this: My source host is configured to allow 20 concurrent Live Migrations and my destination host will allow five. How many Live Migrations will be done? The answer is simple: Hyper-V will respect every host’s maximum, so only five Live Migrations will happen at once between these two hosts.
You might also notice in the above screenshot that Storage (Live) Migration also has a concurrency limit, which defaults to two.
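Both limits can also be set from PowerShell on each host. A sketch, with example values rather than recommendations:

    # Allow up to four simultaneous live migrations and two storage migrations
    # on this host; the numbers are examples, tune them to the host's capability.
    Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 2

    # Check the current settings.
    Get-VMHost | Select-Object MaximumVirtualMachineMigrations, MaximumStorageMigrations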
Live Migration Queuing
Imagine you have a cluster with two nodes, HostA and HostB. Both nodes are configured to allow ten simultaneous Live Migrations. HostA is running 100 virtual machines and you want to place this host in maintenance mode. Failover Cluster manager will orchestrate the Live Migration of the virtual machines. All virtual machines will queue up, and up to ten (depending on host resources) will live migrate at the same time. As virtual machines leave HostA, other virtual machines will start to live migrate, and eventually all of the virtual machines will be running on HostB.
Storage Live Migration
A much sought-after feature for Hyper-V was the ability to relocate the files of a virtual machine without affecting service uptime. This is what Storage Live Migration gives us. The tricky bit is moving the active virtual hard disks because they are being updated. Here is how Microsoft made the process work:
  1. The running virtual machine is using its virtual hard disk which is stored on the source device.
  2. An administrator decides to move the virtual machine’s files and Hyper-V starts to copy the virtual hard disk to the destination device.
  3. The IO for the virtual hard disk continues as normal but now it is mirrored to the copy that is being built up in the destination device.
  4. Live Migration has a promise to live up to; the new virtual hard disk is verified as successfully copied.
  5. Finally, the files of the virtual machine can be removed from the source device.
LM_02

Storage Live Migration can move all of the files of a virtual machine as follows:
  • From one folder to another on the same volume
  • To another drive
  • From one storage device to another, such as from a local drive to an SMB 3.0 share
  • From one server to another
When using Storage Live Migration, you can choose to:
  • Move all files into a single folder for the virtual machine
  • Choose to only move some files
  • Scatter the various files of a virtual machine to different specified locations
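From PowerShell, Storage Live Migration corresponds to the Move-VMStorage cmdlet. A sketch covering the single-folder and scattered cases; the VM name and paths are placeholders.

    # Move all files of a running VM into one folder on another drive (placeholders).
    Move-VMStorage -VMName "VM01" -DestinationStoragePath "D:\VMs\VM01"

    # Or scatter the files to separate, specified locations.
    Move-VMStorage -VMName "VM01" `
        -VirtualMachinePath   "D:\VMs\VM01\Config" `
        -SnapshotFilePath     "D:\VMs\VM01\Snapshots" `
        -SmartPagingFilePath  "D:\VMs\VM01\SmartPaging" `
        -VHDs @(@{ SourceFilePath = "C:\VMs\VM01\Disk0.vhdx"; DestinationFilePath = "D:\VMs\VM01\Disk0.vhdx" })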
SMB 3.0 and Live Migration
Windows Server 2012 introduces SMB 3.0 – an economic, continuously available, and scalable storage strategy that is supported by Windows Server 2012 Hyper-V. Live Migration supports storing virtual machines on SMB 3.0 shared storage. This means that a virtual machine can be running on HostA and be quickly moved to run on HostB, without moving the files of the virtual machine. Scenarios include a failover cluster of hosts using a common SMB 3.0 share and a collection of non-clustered hosts that have access to a common SMB 3.0 share.
Shared-Nothing Live Migration
Thanks to Shared-Nothing Live Migration we can move virtual machines between any two Windows Server 2012 Hyper-V hosts that do not have any shared storage. This means we can move virtual machines:
  • From the local drive of a non-clustered host to another non-clustered host, storing the files on the destination host’s storage.
  • From a non-clustered host to a clustered host, with the files placed on the cluster’s shared storage. Then we can make the virtual machine highly available to add it to the cluster.
  • Remove the highly available attribute of a virtual machine and move it from a clustered host to a non-clustered host.
  • Remove the highly available attribute of a virtual machine and move it from a host in a source cluster to a host in a destination cluster, where the virtual machine will be made highly available again.
In other words, it doesn’t matter what kind of Windows Server 2012 or Hyper-V Server 2012 host you have. You can move that virtual machine.
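A Shared-Nothing move is a single Move-VM call. This sketch assumes both hosts have already been configured for live migration as shown earlier; the VM name, host name, and path are placeholders.

    # Move a running VM, including its storage, from this host to HostB (placeholders).
    Move-VM -Name "VM01" -DestinationHost "HostB" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"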
Hyper-V Is Breaking Down Barriers to IT Agility
You can easily move virtual machines from one host to another in Windows Server 2012 Hyper-V. The requirements for this are as follows.
  • You are running Windows Server 2012 Hyper-V on the source and destination hosts.
  • The source and destination hosts have the same processor family (all Intel or all AMD).
  • If there are mixed generations of processor then you might have to enable processor compatibility mode in the settings of the virtual machine. It is a good idea to always buy the newest processor when acquiring new hosts – this gives you a better chance of buying compatible host processors in 12-18 months’ time.
  • The hosts must be in the same domain.
  • You have at least 1 GbE of connectivity between the hosts. This bandwidth can be a share of a greater link that is guaranteed by the features of converged networking (such as QoS). Ideally, you will size the Live Migration network according to the amount of RAM in the hosts and the time it takes to drain the host for maintenance.
With all of those basic requirements configured, the only barrier remaining to Live Migration is the network that the virtual machine is running on. For example, if the virtual machine is running on 192.168.1.0/24 then you don’t want to live migrate it to a host that is connected to 10.0.1.0/24. You could do that, but the virtual machine would be unavailable to clients… unless the destination host was configured to use Hyper-V Network Virtualization, but that’s a whole other article.

5/21/2016

What’s new in Hyper-V in Windows Server 2016

The first release of Hyper-V shipped with Windows Server 2008. It was a solid and reliable product from the beginning, but with limited features compared to its competition, especially VMware.

Server 2012 R2 introduced Generation 2 VMs, which remove legacy hardware emulation such as BIOS, PCI bus and IDE controllers to improve performance and enable features like UEFI (Unified Extensible Firmware Interface) Secure Boot.
The scalability of Hyper-V VMs has also improved, so that since Server 2012 R2 you can now configure up to 64 virtual processors, 1TB of RAM, 64 TB virtual hard drives, and up to 256 virtual SCSI disks.

In Windows Server 2016 Microsoft is adding more features, and the changes are significant. Many of the changes are already available in Windows 10, for development and testing. The goal of Windows Server architect Jeffrey Snover is to make Windows a “cloud OS”, which includes the notion of on-demand compute resources, VMs that spin up or down as needed.
Improvements in Hyper-V are an immediate benefit to Microsoft’s Azure cloud platform and its users, as well as to those deploying Azure Stack, which offers a subset of Azure features for deployment on premises.

Two complementary Server 2016 features are also worth noting. The first is Nano Server, a stripped-down edition of Windows Server optimised for hosting Hyper-V, running in a VM, or running a single application. There is no desktop or even a local log-on, since it is designed to be automated with PowerShell.
The benefits include faster restarts, lower attack surface, and the ability to run more VMs on the same physical hardware. Fewer features also means fewer patches, and fewer forced reboots. In Server 2016, Microsoft recommends Nano Server as the default host for Hyper-V.
The second feature is containers. Using containers, both the application and its resources and dependencies are packaged, so that deployment is automated. Containers go hand in hand with microservices, the concept of decomposing applications into small units each of which runs separately.
Microsoft’s new operating system supports both Windows Server Containers, which use shared OS files and memory, and Hyper-V containers, which have their own OS kernel files and memory. The idea is that Hyper-V containers have greater isolation and security, at the expense of efficiency.
Top of the what’s new list is nested virtualisation, the ability to run VMs in VMs. This is a catch-up with competing hypervisors that already have this feature, but an essential one, since it allows Hyper-V to be used even when your server infrastructure is virtualised on the Azure cloud or elsewhere.
Hyper-V depends on CPU extensions, Intel VT-x or AMD-V, and nested virtualisation includes these extensions in the virtual CPU presented to the guest OS, enabling guests to run their own hardware-based hypervisor. The feature could also help developers working in a VM, since device emulators which use these extensions may work.
Nested Virtualisation works in the latest preview of Windows Server 2016 (currently Technical Preview 4) and in recent builds of Windows 10. You have to run PowerShell scripts to enable the feature in both the host and a VM. There are currently some limitations. Dynamic memory, live migration and checkpoints do not work on VMs which have the feature enabled, though they do work in the innermost guest VMs.
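The PowerShell step mentioned above boils down to exposing the virtualisation extensions to the VM's virtual processor; the VM must be powered off first, and "NestedHost" below is just a placeholder name. This is a sketch of the core settings, not Microsoft's full enablement script.

    # VM must be powered off first; "NestedHost" is a placeholder name.
    Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

    # Nested guests also need MAC address spoofing (or NAT) for networking.
    Get-VMNetworkAdapter -VMName "NestedHost" | Set-VMNetworkAdapter -MacAddressSpoofing On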
Shielded VMs
One of the disadvantages of cloud computing is that physical access to your infrastructure is in the hands of a third-party, with obvious security implications. The idea of Shielded VMs is to mitigate that by having VMs that cannot be accessed by host administrators.
Shielded VMs use Microsoft’s Bitlocker encryption, Secure Boot and virtual TPM (Trusted Platform Module), and require a new feature called the Host Guardian Service. Once configured, a Shielded VM will only run on designated hosts. The VM is encrypted, as is network traffic for features like Live Migration.
Running a Shielded VM has annoyances. You cannot access the VM from the Hyper-V manager, and you cannot mount its virtual disk drive from outside the VM. There is also, according to Microsoft, up to a 10 per cent performance impact because of the encryption.

ReFS recommended

Microsoft’s Resilient File System (ReFS) was introduced in Windows Server 2012, but its visibility has been limited since installations still use NTFS by default. However, according to Microsoft Program Manager Ben Armstrong, ReFS is recommended for Hyper-V hosts in Server 2016.
It is much faster for certain operations used by Hyper-V, including creating a fixed-size VHDX (Virtual Hard Drive) and performing a file merge, used by Hyper-V checkpoints and Hyper-V Replica. In Armstrong’s demonstration at a TechDay in Sweden, a merge performed in NTFS took 29 seconds, versus two seconds for ReFS.

Production Checkpoints

Production checkpoints in Hyper-V 2016
Checkpoints let you take a snapshot of a VM, with the ability to reset the VM back to that snapshot later. They are ideal for safe experimentation, or for troubleshooting, but Microsoft has never recommended them for production use, because of reliability issues. Microsoft has now introduced Production Checkpoints, which are supported. The difference is that Production Checkpoints use backup technology inside the guest rather than saving the state. Production Checkpoints do not save application state. Both types of checkpoint remain available.
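The checkpoint type is a per-VM setting; here is a sketch with a placeholder VM name and checkpoint name.

    # Use production checkpoints, falling back to standard ones if the guest
    # cannot service them; "VM01" is a placeholder.
    Set-VM -Name "VM01" -CheckpointType Production

    # Or insist on production checkpoints only.
    Set-VM -Name "VM01" -CheckpointType ProductionOnly

    # Take a checkpoint as usual.
    Checkpoint-VM -Name "VM01" -SnapshotName "Before maintenance"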

New VM binary configuration file format

Microsoft fell in love with XML some years back, and used it everywhere it could, including for VM configuration files. While its plain-text readability sounds a good idea, performance has turned out to be poor, because of the bloat of XML libraries invoked to parse the files. A new binary format with a .VMCX extension is used in Server 2016.

Compatibility between Hyper-V versions

In this release of Hyper-V, you can run VMs created with the previous release in compatibility mode, so you have the flexibility of moving both ways between old and new hosts. A VM is not upgraded until you specifically choose "Upgrade Configuration Version" or run the appropriate PowerShell command. This makes gradual migration between releases much easier.
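The PowerShell equivalent of "Upgrade Configuration Version" is Update-VMVersion; a sketch with a placeholder VM name. The upgrade is one-way, so it should only be run once all hosts are on the new release.

    # Show the current configuration version of each VM on this host.
    Get-VM | Select-Object Name, Version

    # Upgrade one VM; "VM01" is a placeholder. This is not reversible.
    Update-VMVersion -Name "VM01"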

Online configuration improvements

Hyper-V in Server 2016 supports more configuration changes while a VM is running than before. You can now add and remove network adapters, add and remove memory when dynamic memory is not configured, and add or remove drives from VMs that are replicated.

Rolling Hyper-V cluster upgrade

If you run Hyper-V on a failover cluster, you can upgrade to Server 2016 without downtime thanks to a feature called Rolling Hyper-V Cluster upgrade. You begin by adding a node running Server 2016, then gradually upgrade each node, moving VMs between nodes to avoid downtime. Finally, when all nodes are upgraded, you can upgrade the functionality of the cluster to the 2016 level.
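Once every node runs Windows Server 2016, that final step is a single PowerShell command; a sketch:

    # Check the current cluster functional level first.
    Get-Cluster | Select-Object Name, ClusterFunctionalLevel

    # Raise the cluster functional level to the 2016 level; do this only after
    # every node has been upgraded, because it cannot be rolled back.
    Update-ClusterFunctionalLevel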

Storage resiliency

Hyper-V VMs typically access storage on a SAN (Storage Area Network). If there is an intermittent network failure, and storage stops responding, then VMs on previous versions of Hyper-V crash. In Server 2016, VMs are paused instead, and will unpause when storage returns.

Guest Cluster improvements

A Guest Cluster is a failover cluster composed of two or more VMs. Microsoft intends that Guest Clusters will eventually have the same functionality as a standalone VM. A new feature in Server 2016 is the ability to resize a shared VHDX while online.

New backup infrastructure

One of the advantages of virtualisation is you can backup a VM from the host. In current versions of Hyper-V, this uses VSS (Volume Shadow Copy Service) to ensure data consistency. Server 2016 introduces a new “native change block API” that does not require VSS, or snapshots of SAN storage. Since backup failures are often caused by VSS failures, this should improve reliability.

PowerShell Direct

In Server 2016, Hyper-V administrators can open a PowerShell session inside a VM from the host operating system, without requiring a network or any remote management configuration. It is the same kind of direct connection that enables you to interact with the desktop or copy files to and from a VM, using only the Hyper-V administration tools. PowerShell Direct does require Windows 10 or Windows Server 2016 in the guest VM. You also need user credentials for the VM, and the feature will not work with Shielded VMs.
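A sketch of both forms of PowerShell Direct, with a placeholder VM name and interactively supplied guest credentials:

    # Interactive session into the guest, over the VMBus rather than the network.
    Enter-PSSession -VMName "VM01" -Credential (Get-Credential)

    # Or run a single command in the guest non-interactively.
    Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock {
        Get-Service | Where-Object Status -eq 'Running'
    }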

Hyper-V futures

Hyper-V is an integral part of Windows Server and critical to Microsoft’s strategy, so the company’s efforts to improve it are welcome. It is an excellent hypervisor, though issues remain when it comes to management and deployment tools. The standalone Hyper-V Manager works well, especially the latest version, which allows you to connect as another user, but it is designed for small-scale use.
If you are managing large numbers of hosts and VMs, or deploying a private cloud with self-service provisioning, Microsoft's solution is the intricacies of System Center, or the emerging Azure Stack, which is now in preview.
Of the two, Azure Stack looks more like the long-term direction since it uses the same portal and APIs (though with cut-down features) as Microsoft's public cloud.


5/20/2016

Transferring AD FSMO Roles Using NTDS utility

To transfer the FSMO roles using the Ntdsutil command-line tool, follow these steps:
Caution: Using the Ntdsutil utility incorrectly may result in partial or complete loss of Active Directory functionality.
  • On any domain controller, click Start, click Run, type Ntdsutil in the Open box, and then click OK.
Note: To see a list of available commands at any of the prompts in the Ntdsutil tool, type ?, and then press ENTER.
  • At the ntdsutil prompt, type roles, and then press ENTER to enter fsmo maintenance.
  • Type connections, and then press ENTER.
  • Type connect to server <servername>, where <servername> is the domain controller that will receive the role, and then press ENTER.
  • Type q, and then press ENTER to return to the fsmo maintenance prompt.
  • Type transfer <role>, and then press ENTER. For example, to transfer the RID Master role, you would type transfer rid master.
The roles that can be transferred this way are the schema master, domain naming master, RID master, PDC emulator, and infrastructure master.
  1. You will receive a warning window asking if you want to perform the transfer. Click on Yes.
  2. After you transfer the roles, type q and press ENTER until you quit Ntdsutil.exe.
  3. Restart the server and make sure you update your backup.
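Putting the prompts together, a typical session to transfer the RID Master role looks like the sketch below; DC02 is a placeholder for the domain controller that will receive the role.

    C:\> ntdsutil
    ntdsutil: roles
    fsmo maintenance: connections
    server connections: connect to server DC02
    server connections: q
    fsmo maintenance: transfer rid master
    fsmo maintenance: q
    ntdsutil: q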
