Simple and fast way to ensure a PowerShell script always runs “as Administrator”

22 Jan

Sooner or later when you are writing PowerShell scripts, you will face the situation where you want to ensure that a script runs with elevated user rights (aka “run as Administrator”). Often this is the case when the script makes configuration changes, or when some cmdlets used in the script only work with elevated user rights.

When you search the web you can find several solutions with functions or if statements that check the rights of the user under which the script is currently running and abort if they do not have admin rights.
But there is actually a simple, built-in way to ensure that the script only runs in a PowerShell session which was started with “run as Administrator”.
Simply add the following #requires statement (with the #) at the top of your script:
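```powershell
#Requires -RunAsAdministrator
```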

When the script is then started in a normal (not elevated) PowerShell session, it fails with the following, very clear, error message:

The script ‘yourscriptname.ps1’ cannot be run because it contains a “#requires” statement for running as Administrator. The current Windows PowerShell session is not running as Administrator. Start Windows PowerShell by using the Run as Administrator option, and then try running the script again.

This works with PowerShell 4.0 and later, and there are other ‘#Requires’ statements which can be used in scripts, for example to ensure that a specific version of a PowerShell module is installed.
A full reference can be found in the online PowerShell documentation.
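Such a module requirement could look like this (the module name and version below are just placeholders):

```powershell
#Requires -Modules @{ ModuleName = 'MyModule'; ModuleVersion = '2.0.0' }
```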

Azure Stack – The Azure in your data center

28 Aug

At this year’s Microsoft Inspire conference (formerly the Microsoft Worldwide Partner Conference – WPC) the long-awaited Microsoft Azure Stack became generally available and is now orderable from hardware vendors. But before you order your own Azure Stack instance, it’s important to know what exactly Azure Stack is and whether it even makes sense for you.

The continued Cloud OS Vision

(image source: Microsoft)

A long, long time ago 😉, about four years ago, together with the release of Windows Server and System Center 2012 R2, Microsoft came up with the vision of giving customers a platform which is consistent with Azure. The idea behind it: regardless of whether your application is running in Azure, in your on-premises data center or in the data center of a local service provider, the same platform is always underneath. But looking back, with the 2012 R2 suite and the Windows Azure Pack as the customer-facing self-service portal, this goal was not really reached. In the meantime, even the consistent experience of the self-service portal is gone: Azure Pack was based on the “old” Azure Management Portal, which in public Azure has now mostly been replaced by the new ARM-based Azure Portal.

Azure Stack – the successor of Windows Azure Pack?
Since the announcement of Azure Stack (now nearly two years ago, by the way) there has been ongoing confusion in the IT world. For many, Azure Stack seems to be the successor and replacement of System Center and Windows Azure Pack, or simply of everything Microsoft released for the data center before Azure Stack. But this is not what Azure Stack is supposed to be. Even though the Cloud OS vision is clearly still recognizable in Azure Stack, it is a completely new product category which Microsoft has never done in this form before. Moreover, it is not an alternative to Azure or a replacement for traditional virtualization infrastructures (based on System Center and Hyper-V, VMware or whatever). Azure Stack is much more Azure, or part of your Azure strategy. And therefore you must commit to Azure if you want to use Azure Stack.

(image source: Microsoft)

The integrated system experience – Or the Azure Stack Appliance
So, what does “a new product category” mean? It is relatively simple: Azure Stack is not delivered as software which you can set up on hardware of your own choosing and configure for your individual needs. Azure Stack is basically delivered as an appliance which is specified, built and updated by the hardware vendor of your choice together with Microsoft. In other words, Azure Stack is the equivalent of a SAN, a system which provides not storage but Azure services in your data center. For you as a customer this means more time to focus on running applications, providing value-added services to your customers and developing modern cloud applications, instead of keeping your virtualization infrastructure up and running.

New IT roles for operating the “Appliance”
In Microsoft’s eyes, running an IT infrastructure in this new “appliance” form leads to two new roles in IT: the Cloud Architect and the Cloud Administrator or Operator.

The Cloud Architect ensures that the Azure Stack “appliance” can be properly integrated into the existing IT infrastructure (network, monitoring systems, identity systems, etc.). They also plan the offerings on Azure Stack for internal or external customers. These are short-term tasks which can also perfectly well be done by an external partner.

After Azure Stack is integrated into your IT infrastructure, the Cloud Operator or Administrator is responsible for operating it. This is neither a very highly skilled role nor, probably, a very time-intensive task. Because of the appliance approach, Azure Stack is operated through a simple management web interface (like the Azure Portal) and not through complicated administrator consoles which require deep knowledge of the whole system. The Cloud Operator will mainly monitor the integrated Azure Stack system, and when a red light comes up he will either take simple remediation actions (e.g. restarting a service or applying an update) or contact the support which is provided jointly by Microsoft and the hardware vendors.

(image source: Microsoft)

Do I need Azure Stack?
Azure Stack is not, and will not be, the almighty platform for everyone and every use case. Azure Stack is for you when you want to adopt the cloud model and develop and run modern cloud applications which depend at most partially on (IaaS) VMs, but for various reasons you cannot go directly to Azure. Such reasons can be, for example, requirements for low latency, laws and regulations which restrict storing data outside a specific country, or poor or no internet connectivity. For all other use cases there are still Hyper-V, System Center and Windows Azure Pack. They will be fully supported and maintained by Microsoft for at least the next five years. Windows Azure Pack, for example, is compatible with Windows Server 2016 and will be supported until 2027.

So in short this means:

Azure Stack is for you when:

  • You want to adopt the cloud model and focus on delivering services instead of building and operating infrastructure (no DIY infrastructure)
  • You want to develop or run modern cloud applications based on Azure services
  • But you cannot go to Azure (because of regulations, latency, bad connectivity, etc.)

Azure Stack is not your platform when:

  • You need traditional virtualization or even physical servers
  • You do not want to, or cannot, adopt the cloud model and use public cloud or Azure at all
  • You have a lot of legacy applications which need old operating systems (2008, 2008 R2, 2012…)

So, I need one. Where can I get it and what does it cost?
First, you must select your preferred hardware vendor. Today you have the choice between HPE, Dell EMC and Lenovo; in the future, systems from Cisco and Huawei will also be available. Once you have selected a hardware vendor, you must decide which size of the integrated system you need. Currently, configurations with 4, 8 or 12 nodes are available, and they cannot be extended during the first 6 months. After that, Microsoft promises to deliver an update which adds the ability to extend Azure Stack integrated systems.

After you have chosen your preferred vendor and size, you order the integrated system (hardware) directly from the hardware vendor, and the hardware pricing is defined by the hardware vendor.

When it comes to licensing costs, Azure Stack works the same as Azure, which means you pay only for what you use (pay-as-you-use). Every service and every VM you provision on Azure Stack is billed on an hourly or per-transaction basis, exactly like in public Azure. However, the prices are a bit lower because you have already paid for hardware, power, connectivity, etc. For completely disconnected Azure Stack setups, Microsoft also offers a “capacity model” which allows you to license the whole capacity at once: you pay a fixed yearly fee based on the number of physical cores in your system. For more details about the prices, the pricing datasheet from Microsoft gives you a great overview.


Get insights about the performance of your Windows systems with Grafana

12 May

Ever dreamed of mission-control-like dashboards that give you quick insight into the performance of your Windows systems? 😊

If yes, then you probably like a view like this:

So here is how you get such a dashboard for your systems, in 6 simple steps and in under an hour:

Install a VM with Ubuntu Linux 16.04.2 LTS

Even though it is Linux, no rocket science is needed here 😊. Just download the ISO image from the Ubuntu website, attach it to your VM and boot from it. After that you get asked some simple questions about time zone, keyboard and partition settings. Most of them you can accept with the defaults, or simply choose your preferred languages etc. Quite easy.

Set time zone to UTC

Log in to your Ubuntu system and change the time zone to UTC. As InfluxDB (the backend) uses UTC internally, it is a good idea to set the time zone of the system to UTC as well.
To do so, run the following command, then choose “None of the above” > “UTC”.
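On Ubuntu 16.04 this is done with the stock tzdata tooling:

```sh
sudo dpkg-reconfigure tzdata
```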

Install InfluxDB

InfluxDB is the backend of the solution, where all data is stored. It is a database engine built from the ground up to store metric data and to do real-time analytics.
To install InfluxDB, run the following commands on the Linux VM:
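At the time of writing, setting up the InfluxData package repository for Ubuntu 16.04 (xenial) and installing InfluxDB looked roughly like this; double-check the repository details against the current InfluxDB documentation:

```sh
# Add the InfluxData repository and its signing key
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu xenial stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

# Install and start the InfluxDB service (listens on port 8086 by default)
sudo apt-get update
sudo apt-get install -y influxdb
sudo systemctl start influxdb
```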

Install Grafana

Grafana is the frontend which will generate your nice-looking dashboards from the data stored in InfluxDB. To install Grafana, run the following commands on the Linux VM:
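The setup follows the same pattern; the packagecloud repository below was the official source at the time of writing, so verify it against the current Grafana documentation:

```sh
# Add the Grafana repository and its signing key
curl -s https://packagecloud.io/gpg.key | sudo apt-key add -
echo "deb https://packagecloud.io/grafana/stable/debian/ jessie main" | sudo tee /etc/apt/sources.list.d/grafana.list

# Install Grafana and start it (web UI on port 3000 by default)
sudo apt-get update
sudo apt-get install -y grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
```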

Install Telegraf on your Windows system

Now we are ready to collect data from our systems with Telegraf, a small agent which can collect data from many different sources. One of these sources is Windows Perfmon counters, which we will use here.

1. Download the Windows version of the Telegraf agent
2. Copy the content of the zip file to C:\Program Files\telegraf on your systems
3. Replace the telegraf.conf with this one. -> telegraf.conf
This makes sure that all the Perfmon counters needed for the example dashboard in the last step get collected.
4. Also in the telegraf.conf, update the urls parameter so it points to the IP address of your Linux VM (see the snippet after this list)

5. Install Telegraf as a service and start it (example commands after this list)
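For step 4, the relevant output section of telegraf.conf should then look something like this (the IP address is just an example):

```toml
[[outputs.influxdb]]
  urls = ["http://192.168.1.50:8086"]  # IP of your Linux VM
```

And for step 5, assuming the files were copied to C:\Program Files\telegraf as in step 2, the agent can be registered and started from an elevated PowerShell prompt:

```powershell
& 'C:\Program Files\telegraf\telegraf.exe' --service install --config 'C:\Program Files\telegraf\telegraf.conf'
Start-Service -Name telegraf
```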

Create dashboards and have fun! 🙂

The last step is to create your nice dashboards in the Grafana web UI. A good starting point is the “Telegraf & Influx Windows Host Overview” dashboard, which can be imported directly from the repository.

Log in to the Grafana web UI -> http://<your linux VM IP>:3000 (Username: admin, Password: admin)

First, Grafana needs to know its data source. Click on the Grafana logo in the top left corner and select “Data Sources” in the menu. Then click on “+ Add data source“.

Define a name for the data source (e.g. InfluxDB-telegraf) and choose “InfluxDB” as the type.
The URL is http://localhost:8086, as we have installed InfluxDB locally. “Proxy” as the access type is correct.
The Telegraf agent will automatically create the database “telegraf”, so enter “telegraf” as the database name. As the user you can enter anything: InfluxDB does not require any credentials by default, but the Grafana interface wants you to enter something (otherwise you cannot save the data source).

Now go ahead and import your first dashboard. Select Dashboard > Import in the menu.

Enter “1902” and click on “Load”

Change the name if you like, select the data source just created in the step above (InfluxDB-telegraf) and then click on Import.

And tada! 🙂

Further steps

Now the Telegraf / InfluxDB setup is collecting performance data from your Windows machines, and with Grafana the collected data can be visualized in a meaningful way, so determining the health of your systems gets easy.

To further customize the data and visualization to your specific needs, you can, for example, adjust which Perfmon counters Telegraf collects in the telegraf.conf and build your own Grafana dashboards and panels on top of the collected data.

Script to build a stretched file server cluster with Storage Replica

7 Mar

One possible scenario for the use of Storage Replica in Windows Server 2016 is to build a stretched file server cluster based on two VMs in two different sites. With this configuration you can build a highly available file server across two sites without the need for a replicated SAN or similar. Instead, you can simply use the storage which is locally available at each site and leverage Storage Replica to replicate the data volumes inside the VMs. In case one of the sites fails, the file server role will automatically fail over to the second site, and the end users will probably not even notice it.

Recently I did some tests with such a setup in my home lab, where I needed to rebuild the whole environment quickly. Therefore I made a simple script with all the needed PowerShell commands.

You can get a copy of the Script at my GitHub Repository

The script is intended to run on a third machine, for example a management server which has the Windows Server 2016 RSAT tools installed. In particular, the Hyper-V, Failover Cluster and Storage Replica cmdlets are required.

After you have set the correct parameter values, and you are really sure everything is right 😉, you can run the script in one step. Or, probably the more interesting approach, you can open the script in the PowerShell ISE and run the individual steps one by one.
For this purpose the script has comments which mark the individual steps:
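To give a rough idea of what those steps look like, here is a heavily condensed sketch (server names, addresses and volumes are placeholders, not the actual values from the script):

```powershell
# --- Step: Create the two-node cluster across the sites ---
New-Cluster -Name 'SR-FSCLU' -Node 'FS-SITE-A','FS-SITE-B' -StaticAddress '10.0.1.50' -NoStorage

# --- Step: Add the disks and configure the file server role ---
# (cluster disk and role setup omitted in this sketch)

# --- Step: Enable Storage Replica between the data volumes ---
New-SRPartnership -SourceComputerName 'FS-SITE-A' -SourceRGName 'RG-SiteA' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' `
    -DestinationComputerName 'FS-SITE-B' -DestinationRGName 'RG-SiteB' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:'
```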

So have fun with PowerShell and Storage Replica. A very nice combination! 🙂

SCVMM: When the deployment of a new VM template suddenly fails

2 Mar

Recently I ran into a very strange behavior when deploying a VM template with Server 2016 through VMM 2012 R2. First of all, to enable full support for Windows Server 2016-based VMs in VMM 2012 R2 you need at least Update Rollup 11 Hotfix 1 installed. But even after installing the latest UR (UR12 in my case), the deployment of a Server 2016 VM failed.

The issue:
Every time a new VM is deployed from a Server 2016 VM template, the process fails in the specialize phase of sysprep. However, all other existing templates with Server 2012 were working as expected.

Because the domain join also happens in this phase, I decided to give it another try with a VM template which has no domain join configured. And tada, the VM was deployed successfully.

The root cause:
With this finding, my assumption was that when the VM template is configured for domain join, VMM adds something to the unattend.xml which Server 2016 does not like that much. So I inspected the unattend.xml file of a failed deployment, and there I found the following section, which looked a little bit strange (reproduced here from memory, so treat the exact XML as illustrative):
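```xml
<component name="Microsoft-Windows-UnattendedJoin">
  <Identification>
    <Credentials>
      <Domain></Domain>  <!-- empty: the domain part of the join account is missing -->
      <Password>********</Password>
      <Username>vm domain join</Username>
    </Credentials>
    <JoinDomain>mydomain.local</JoinDomain>
  </Identification>
</component>
```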

Somehow the domain of the domain join account was missing.

The solution:
So I checked the VMM Run As account which was specified as the domain join credential in the VM template. And as you can see, there is no domain information here either.

After changing the username to “domain\vm domain join”, the deployment went through smoothly, as it should. Inspecting the unattend.xml file showed that the domain is now also correctly filled in.

When the deployment of a new VM template in VMM suddenly fails at the domain join step, double-check the Run As account and make sure that the username field also contains the domain name.
In my case it was a template with Server 2016, but I think chances are good that the same could also happen with new VM templates with another guest OS.

Be aware of DSC pull server compatibility issues with WMF 5.0 and 5.1

20 Feb

Apparently, there are some incompatibilities when WMF 5.0 computers want to communicate with a DSC pull server running on WMF 5.1, or vice versa. This is especially the case when the “client” node and the pull server are not running the same OS version, for example when you have a DSC pull server running on Server 2012 R2 (with WMF 5.0) and some DSC nodes running on Server 2016 (which has WMF 5.1 built in).

So far I have experienced two issues:

  1. A DSC pull client running on WMF 5.1 cannot send status reports when the DSC pull server is still running on WMF 5.0. This is because WMF 5.1 introduced the new “AdditionalData” parameter in the status report. I have also reported this bug on GitHub.
  2. A DSC pull client running on WMF 5.0 cannot communicate at all with a DSC pull server running on WMF 5.1.

Solution / Workaround for issue 1:
As WMF 5.1 RTM is now (again) available, the simplest solution is to upgrade the server and/or the clients to WMF 5.1. However, when you upgrade the DSC pull server, you must create a new EDB file and re-register all clients. Otherwise the issue persists, because the “AdditionalData” field is still missing in the database.

Solution / Workaround for issue 2:
The root cause of this issue can be found in the release notes of WMF 5.1:
“Previously, the DSC pull client only supported SSL3.0 and TLS1.0 over HTTPS connections. When forced to use more secure protocols, the pull client would stop functioning. In WMF 5.1, the DSC pull client no longer supports SSL 3.0 and adds support for the more secure TLS 1.1 and TLS 1.2 protocols.”

So, starting with WMF 5.1, the DSC pull server no longer supports TLS 1.0, while a DSC pull client running on WMF 5.0 still uses TLS 1.0 and can therefore no longer connect to the DSC pull server.

The solution, without deploying WMF 5.1 to all pull clients, is to alter the behavior of the DSC pull server so that it accepts TLS 1.0 connections again. This can be done by changing the following registry key on the DSC pull server:

Change the value from 0x0 to 0x1 and reboot the DSC pull server.
Afterwards, DSC pull clients running on WMF 5.0 can connect to the DSC pull server again.

How to enable CredSSP for PowerShell Remoting through GPO

19 Oct

In a domain environment, CredSSP can easily be enabled through a GPO. To do so, there are three GPO settings to configure (plus an optional fourth for untrusted domains):

  1. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client > Allow CredSSP Authentication (Enable)
  2. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) >  WinRM Service > Allow CredSSP Authentication (Enable)
  3. Computer Configuration > Administrative Templates > System > Credential Delegation > Allow delegation of fresh credentials (add wsman/*.<FQDN of your domain>)
  4. If there are computers in your environment in another, untrusted AD domain to which you want to connect using explicit credentials and CredSSP, you also have to enable the following GPO setting:
    Computer Configuration > Administrative Templates > System > Credential Delegation > Allow delegation of fresh credentials with NTLM-only server authentication (add wsman/*.<FQDN of the other domain>)

Now you are ready to use CredSSP within your PowerShell remote sessions.
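For example, opening a remote session which authenticates with CredSSP then looks like this (the computer name is a placeholder):

```powershell
# Prompt for the credentials which will be delegated to the remote machine
$cred = Get-Credential

# Open the remote session using CredSSP authentication
Enter-PSSession -ComputerName 'server01.yourdomain.com' -Authentication CredSSP -Credential $cred
```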

And a final word of warning! 😉
When you use CredSSP, your credentials are transferred to the remote system, and your account then becomes a potential target for a pass-the-hash attack. In other words, an attacker could steal your credentials. So only use CredSSP with your PowerShell remote sessions if you really have a need for it!

Webinar “Azure Automation and PowerShell DSC” (German)

10 Oct

Tomorrow, on Tuesday, October 11, 2016 at 2pm (CEST), I will do a webinar in German about Azure Automation and PowerShell DSC. I will explain the basic concepts of Azure Automation, Automation runbooks and PowerShell DSC.

A main part of the webinar will be an example scenario to automatically deploy and configure a VM using Azure Automation runbooks and Azure Automation DSC. I will configure the whole scenario live during the webinar.


If you are interested in the scripts which I am using to configure the scenario, you can get them here.

If you would like to attend the webinar, you can still register here for free.

How to setup VMFleet to stress test your Storage Spaces Direct deployment

26 May

As an outcome of Microsoft’s demos about Storage Spaces Direct at the Ignite conference and the Intel Developer Forum, Microsoft recently published a bunch of PowerShell scripts known as “VMFleet“.

VMFleet is basically a collection of PowerShell scripts to easily create a bunch of VMs and run some stress tests in them (mostly with the DISKSPD tool), to test the performance of an underlying Storage Spaces Direct deployment or simply for demonstration purposes.

After you have downloaded the files, it is not quite obvious how to get started, as the included documentation does not give you simple step-by-step guidelines for getting everything up and running.

So I decided to write my own short guide with the needed steps to get the VMFleet up and running.

Update 03.01.2017: Since I wrote this post in May 2016, Microsoft has apparently extended VMFleet with some new scripts to set up the environment. So I have updated the setup steps below. Thanks to my colleague and coworker Patrick Mutzner for pointing that out!


  • A VHDX file with Windows Server 2012 R2 / 2016 Core installed and the password of the local Administrator account set (sysprep must not be run!).
    A fixed VHDX file is recommended to eliminate “warm-up” effects when starting the test runs.
    Note: The VHDX should be at least 20 GB, because the VMFleet scripts will create a 10 GB load test file inside the VMs.
  • A functional Storage Spaces Direct cluster (with an existing storage pool and no configured virtual disks)

VMFleet Setup:

  1. First you need to create one Volume/CSV per node to store the test VMs.
  2. Then create an additional CSV to store the VMFleet scripts and for the collection of the test results from the VMs.
  3. Extract the ZIP file with the VMFleet scripts on one of the cluster nodes.
    (For example extract the downloaded ZIP file to C:\Source on the first cluster node)
  4. Run the install-vmfleet.ps1 script. This will create the needed folder structure on the “collect” CSV (the volume created in step 2).
  5. Copy the DISKSPD.exe to C:\ClusterStorage\Collect\Control\Tools

    Note: Everything in this folder will later automatically be copied into every test VM. So you can also copy other files to this folder if you need them later inside the test VMs.

  6. Copy the template VHDX to C:\ClusterStorage\Collect
  7. Run the update-csv.ps1 script (under C:\ClusterStorage\Collect\Control) to rename the mount points of the CSV volumes and to distribute the CSVs evenly in the cluster, so that every node is the owner of one CSV.
  8. Now it’s time to create a bunch of test VMs on the four CSVs. This is done by the create-vmfleet.ps1 script (a sample invocation is shown after this list).

    As parameters you must specify the path to the template VHDX file, how many VMs per node you would like to create, the Administrator password of the template VHDX, and a username and password with access to the cluster nodes (HostAdmin in this example). This account will be used inside the VMs to connect back to the cluster, so that the scripts running inside the VMs get access to the collect CSV (C:\ClusterStorage\collect).
  9. Because create-vmfleet.ps1 creates the VMs with the default values of the New-VM cmdlet, you should now run the set-vmfleet.ps1 script to change the vCPU count and memory size of the test VMs to your desired values.
  10. (Optional) Before you begin with the tests, you can check your Storage Spaces Direct cluster configuration with the test-clusterhealth.ps1 script. The script checks all nodes for RDMA configuration errors and does some basic health checks of the Storage Spaces Direct setup.
  11. As the last preparation step, start the watch-cluster.ps1 script. This gives you a nice dashboard-like overview of the most interesting storage performance counters across all nodes in the cluster, so you get an overview of the storage load and performance.
  12. At this point you are ready to start your first test run.
    Simply start all test VMs manually or with the start-vmfleet.ps1 script. After the VMs have booted up, they will automatically look for the run.ps1 script on the collect CSV (at C:\ClusterStorage\Collect\Control). But by default the test run is in a paused state, so to start the actual load test simply run the clear-pause.ps1 script. This will kick off diskspd.exe in every VM, and you can observe how the numbers from watch-cluster.ps1 explode… 😉
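A call to create-vmfleet.ps1 (step 8) could look like the following sketch; the values are placeholders, and the parameter names should be checked against the VMFleet version you downloaded:

```powershell
.\create-vmfleet.ps1 -BaseVHD 'C:\ClusterStorage\Collect\template.vhdx' `
    -VMs 20 `
    -AdminPass 'P@ssw0rd!' `
    -ConnectUser 'MYDOMAIN\HostAdmin' `
    -ConnectPass 'P@ssw0rd!'
```

Note that the VM count is per node, so 20 VMs on a four-node cluster results in 80 test VMs in total.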

To change the test pattern, simply change the parameters in the run.ps1 script and either restart all test VMs (with the stop-vmfleet.ps1 and start-vmfleet.ps1 scripts) or pause and resume the tests with the set-pause.ps1 and clear-pause.ps1 scripts.

That’s it, VMFleet is ready. So have fun testing your S2D deployment and looking at (hopefully) some awesome IOPS numbers. 🙂

Replicate or migrate VMware VMs with a client OS to Azure with Azure Site Recovery

11 Apr

The official (not working 🙁 ) way:
To replicate VMware VMs to Azure you have to install the ASR Mobility Service in the VM. But what if the VM is running a client OS (Windows 7, 8.1, 10) instead of Windows Server? Officially this is not supported by Azure Site Recovery, and when you try to install the Mobility Service you get the following nice, or not so nice 😉, message:


The unofficial but working way:
However, besides the fact that a single VM in Azure does not qualify for an SLA guarantee and may have downtime, there is technically no reason why you cannot run a client OS in an Azure VM, especially if the VMs are used for dev/test scenarios. So why should it not be possible to replicate or migrate these VMs to Azure with ASR, you may ask? And you know what? With a little trick (installing the MSI directly from the command line) it is actually possible. Here are the steps needed to get the Mobility Service running on a client OS:

  1. Get the Mobility Service .exe file from your ASR process server and copy it to a temporary location on the VM which you want to replicate to Azure. You can find the setup file in the install folder of the process server under home\svsystems\pushinstallsvc\repository (e.g. D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository\Microsoft-ASR_UA_9.0.0.0_Windows_GA_31Dec2015_Release.exe)
  2. Run the exe and take note of the folder to which the installer extracts the files
  3. Keep the setup wizard open and copy the content of the folder from step 2 to a temporary location
  4. Now you can install the Mobility Service MSI directly with msiexec by executing the following command line (see the example after this list)
  5. Finally, start “C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\hostconfigwxcommon.exe” and enter the passphrase of the ASR process server to connect the agent to the ASR server.
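The msiexec call in step 4 could look like the following sketch; the MSI file name varies by ASR version, so take it from the files you copied in step 3 (the paths here are placeholders):

```powershell
# /i = install the given MSI, /qn = fully silent, /L*v = verbose log for troubleshooting
msiexec.exe /i 'C:\Temp\MobilityService\UnifiedAgentMSI.msi' /qn /L*v 'C:\Temp\MobSvc-install.log'
```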

That’s it. Now you can replicate and fail over the VM with ASR like any other Windows Server VM. Success! 🙂