Simple and fast way to ensure a PowerShell script always runs “as Administrator”

22 Jan

Sooner or later when writing PowerShell scripts you will face the situation where you want to ensure that a script runs with elevated user rights (aka “run as Administrator”). Often this is the case when the script makes configuration changes or uses cmdlets that only work with elevated user rights.

When you search the web you can find several solutions with functions or if statements that check the rights of the user under which the script is currently running and abort if they lack admin rights.
But there is actually a simpler, built-in way to ensure that the script only runs in a PowerShell session which was started with “run as Administrator”.
Simply add the following line (with the #) at the top of your script:
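    #Requires -RunAsAdministrator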


When the script is then started in a normal (not elevated) PowerShell session, it fails with the following, very clear, error message:

The script ‘yourscriptname.ps1’ cannot be run because it contains a “#requires” statement for running as Administrator. The current Windows PowerShell session is not running as Administrator. Start Windows PowerShell by using the Run as Administrator option, and then try running the script again.

This works with PowerShell 4.0 and later, and there are also other ‘#Requires’ statements which can be used in a script, for example to ensure that a specific version of a PowerShell module is installed.
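A hedged example of such statements (the module name and version below are placeholders; syntax as documented for newer Windows PowerShell versions):

    #Requires -Version 4.0
    #Requires -Modules @{ ModuleName = 'xWebAdministration'; ModuleVersion = '1.3.2' }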
A full reference can be found in the online PowerShell documentation.

Script to build stretched file server cluster with Storage Replica

7 Mar

One possible scenario for the use of Storage Replica in Windows Server 2016 is to build a stretched file server cluster based on two VMs on two different sites. With this configuration you can build a highly available file server across two sites without the need for a replicated SAN or similar. Instead you can simply use the storage which is locally available at each site and leverage Storage Replica to replicate the data volumes inside the VMs. In case one of the sites fails, the file server role will automatically fail over to the second site and the end users will probably not even notice it.

Recently I made some tests with such a setup in my home lab, where I needed to rebuild the whole environment quickly. Therefore I made a simple script with all the needed PowerShell commands.

You can get a copy of the script from my GitHub repository.

The script is intended to run on a third machine, for example a management server which has the Windows Server 2016 RSAT tools installed. In particular, the Hyper-V, Failover Cluster and Storage Replica cmdlets are required.
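Tying in with the #Requires statement from the first post above, such a dependency could be declared at the top of the script like this (a sketch, using the module names shipped with the RSAT tools; the actual script may handle this differently):

    #Requires -Modules Hyper-V, FailoverClusters, StorageReplica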

After you have set the correct parameter values and you are really sure everything is right 😉 , you can run the script in one step. Or, probably the more interesting approach, you can open the script in the PowerShell ISE and run the individual steps one by one.
For this purpose the script has comments which mark the individual steps:
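The excerpt below is hypothetical and only illustrates the pattern; the actual wording of the comments in the script differs:

    ### Step 1: Create the file server VMs on both sites ###
    # ...

    ### Step 2: Build the stretched cluster and enable Storage Replica ###
    # ...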

So have fun with PowerShell and Storage Replica. A very nice combination! 🙂

Be aware of DSC pull server compatibility issues with WMF 5.0 and 5.1

20 Feb

Apparently, there are some incompatibilities when WMF 5.0 computers want to communicate with a DSC pull server running on WMF 5.1, or vice versa. This is especially the case when the “client” node and the pull server are not running the same OS version, for example when you have a DSC pull server running on Server 2012 R2 (with WMF 5.0) and some DSC nodes running on Server 2016 (which has WMF 5.1 built in).

So far I have experienced two issues:

  1. A DSC pull client running on WMF 5.1 cannot send status reports when the DSC pull server is still running on WMF 5.0. This is because WMF 5.1 introduced the new “AdditionalData” parameter in the status report. I have also reported this bug on GitHub: https://github.com/PowerShell/PowerShell/issues/2921
  2. A DSC pull client running on WMF 5.0 cannot communicate at all with a DSC pull server running on WMF 5.1.

Solution / Workaround for issue 1:
As WMF 5.1 RTM is now (again) available, the simplest solution is to upgrade the server and/or the client to WMF 5.1. However, when you upgrade the DSC pull server you must create a new EDB file and reregister all clients. Otherwise the issue persists, because the “AdditionalData” field is still missing in the database.

Solution / Workaround for issue 2:
The root cause of this issue can be found in the release notes of WMF 5.1:
“Previously, the DSC pull client only supported SSL3.0 and TLS1.0 over HTTPS connections. When forced to use more secure protocols, the pull client would stop functioning. In WMF 5.1, the DSC pull client no longer supports SSL 3.0 and adds support for the more secure TLS 1.1 and TLS 1.2 protocols.”

So, starting with WMF 5.1, the DSC pull server no longer supports TLS 1.0, while a DSC pull client running on WMF 5.0 still uses TLS 1.0 and can therefore no longer connect to the DSC pull server.

The solution, without deploying WMF 5.1 to all pull clients, is to alter the behavior of the DSC pull server so that it accepts TLS 1.0 connections again. This can be done by changing the following registry key on the DSC pull server:

Change the value from 0x0 to 0x1 and reboot the DSC pull server.
Afterwards, DSC pull clients running on WMF 5.0 can connect to the DSC pull server again.

How to enable CredSSP for PowerShell Remoting through GPO

19 Oct

In a domain environment CredSSP can easily be enabled through a GPO. To do so, there are three GPO settings to configure:

  1. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client > Allow CredSSP Authentication (Enable)
  2. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Service > Allow CredSSP Authentication (Enable)
  3. Computer Configuration > Administrative Templates > System > Credentials Delegation > Allow delegation of fresh credentials (add wsman/*.<FQDN of your domain>)
  4. If there are computers in another, untrusted AD domain in your environment to which you want to connect using explicit credentials and CredSSP, you also have to enable the following GPO setting:
     Computer Configuration > Administrative Templates > System > Credentials Delegation > Allow delegation of fresh credentials with NTLM-only server authentication (add wsman/*.<FQDN of your other domain>)

Now you are ready to use CredSSP within your PowerShell remote sessions.
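For example, once the GPO settings have been applied, a remote session using CredSSP can be opened like this (the computer name and account are placeholders):

    # Placeholder names - replace with your own server and account
    $cred = Get-Credential -UserName 'CONTOSO\jdoe' -Message 'Credentials for the remote session'
    Enter-PSSession -ComputerName 'server01.contoso.com' -Authentication CredSSP -Credential $cred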

And a final word of warning! 😉
When you are using CredSSP your credentials are transferred to the remote system, and your account then becomes a potential target for a pass-the-hash attack. In other words, an attacker could steal your credentials. So only use CredSSP with your PowerShell remote sessions if you really need it!

How to setup VMFleet to stress test your Storage Spaces Direct deployment

26 May

As an outcome of Microsoft’s demos about Storage Spaces Direct at the Ignite conference and the Intel Developer Forum, Microsoft recently published a bunch of PowerShell scripts known as “VMFleet“.

VMFleet is basically a collection of PowerShell scripts to easily create a bunch of VMs and run stress tests in them (mostly with the DISKSPD tool), either to test the performance of an underlying Storage Spaces Direct deployment or simply for demonstration purposes.

After you have downloaded the files it is not quite obvious how to get started, as the included documentation does not give you simple step-by-step guidelines on how to get everything up and running.

So I decided to write my own short guide with the needed steps to get the VMFleet up and running.

Update 03.01.2017: Since I wrote this post in May 2016, Microsoft has apparently extended VMFleet with some new scripts to set up the environment, so I have updated the setup steps below. Thanks to my colleague and coworker Patrick Mutzner for pointing that out!

Prerequisites:

  • A VHDX file with Windows Server 2012 R2 / 2016 Core installed and the password of the local Administrator account set (sysprep must not be run!).
    A fixed VHDX file is recommended to eliminate “warmup” effects when starting the test runs.
    Note: The VHDX should be at least 20 GB in size because the VMFleet scripts will create a 10 GB load test file inside the VMs.
  • A functional Storage Spaces Direct Cluster (with existing Storage Pool and no configured Virtual Disks)

VMFleet Setup:

  1. First you need to create one volume/CSV per node to store the test VMs. (A consolidated command sketch for steps 1 to 9 follows after this list.)
  2. Then create an additional CSV to store the VMFleet scripts and for the collection of the test results from the VMs.
  3. Extract the ZIP file with the VMFleet scripts on one of the cluster nodes.
    (For example extract the downloaded ZIP file to C:\Source on the first cluster node)
  4. Run the install-vmfleet.ps1 script. This will create the needed folder structure on the “collect” CSV (the volume created in step 2).
  5. Copy the DISKSPD.exe to C:\ClusterStorage\Collect\Control\Tools

    Note: Everything in this folder will later be copied automatically into every test VM. So you can also copy other files to this folder if you need them later inside the test VMs.

  6. Copy the template VHDX to C:\ClusterStorage\Collect
  7. Run the update-csv.ps1 script (under C:\ClusterStorage\Collect\Control) to rename the mount points of the CSV volumes and to distribute the CSVs evenly across the cluster so that every node owns one CSV.
  8. Now it’s time to create a bunch of test VMs on the per-node CSVs. This is done with the create-vmfleet.ps1 script.
    As parameters you must specify the path to the template VHDX file, how many VMs per node you would like to create, the Administrator password of the template VHDX, and a username and password with access to the cluster nodes (HostAdmin in this example). This account will be used inside the VMs to connect back to the cluster so that the scripts running inside the VMs get access to the collect CSV (C:\ClusterStorage\Collect).
  9. Because create-vmfleet.ps1 creates the VMs with the default values of the New-VM cmdlet, you should now run the set-vmfleet.ps1 script to change the vCPU count and memory size of the test VMs to your desired values.
  10. (Optional) Before you begin with the tests, you can check your Storage Spaces Direct cluster configuration with the test-clusterhealth.ps1 script. It checks all nodes for RDMA configuration errors and does some basic health checks of the Storage Spaces Direct setup.
  11. As the last preparation step, start the watch-cluster.ps1 script. This gives you a nice dashboard-like overview of the most interesting storage performance counters across all nodes in the cluster, so you can see the storage load and performance at a glance.
  12. At this point you are ready to start your first test run. Simply start all test VMs manually or with the start-vmfleet.ps1 script. After the VMs have booted they will automatically look for the run.ps1 script on the collect CSV (at C:\ClusterStorage\Collect\Control). But by default the test run is in a paused state, so to start the actual load test simply run the clear-pause.ps1 script. This will kick off diskspd.exe in every VM and you can observe how the numbers from watch-cluster.ps1 explode… 😉
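To give a rough idea of steps 1 to 9, here is a hedged sketch for a hypothetical 4-node cluster. The volume sizes, paths, VM counts and passwords are illustrative assumptions, and the script parameters should be verified against the documentation included with VMFleet:

    # Steps 1 + 2: one CSV per node plus the 'collect' CSV (sizes are examples)
    1..4 | ForEach-Object {
        New-Volume -StoragePoolFriendlyName S2D* -FriendlyName "VMs0$_" -FileSystem CSVFS_ReFS -Size 1TB
    }
    New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Collect -FileSystem CSVFS_ReFS -Size 200GB

    # Step 4: create the folder structure on the collect CSV
    C:\Source\VMFleet\install-vmfleet.ps1 -Source C:\Source\VMFleet

    # Steps 5 + 6: copy DISKSPD and the template VHDX
    Copy-Item C:\Source\diskspd.exe C:\ClusterStorage\Collect\Control\Tools
    Copy-Item C:\Source\FleetImage.vhdx C:\ClusterStorage\Collect

    # Step 7: rename the CSV mount points and distribute them across the nodes
    C:\ClusterStorage\Collect\Control\update-csv.ps1

    # Step 8: create the test VMs (here: 20 VMs per node)
    C:\ClusterStorage\Collect\Control\create-vmfleet.ps1 -BaseVHD C:\ClusterStorage\Collect\FleetImage.vhdx `
        -VMs 20 -AdminPass 'P@ssw0rd' -ConnectUser 'HostAdmin' -ConnectPass 'P@ssw0rd'

    # Step 9: set vCPU count and memory of the test VMs
    C:\ClusterStorage\Collect\Control\set-vmfleet.ps1 -ProcessorCount 2 -MemoryStartupBytes 4GB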

To change the test pattern, simply change the parameters in the run.ps1 script and either restart all test VMs (with the stop-vmfleet.ps1 and start-vmfleet.ps1 scripts) or pause and resume the tests with the set-pause.ps1 and clear-pause.ps1 scripts.

That’s it, VMFleet is ready. So have fun testing your S2D deployment while looking at (hopefully) some awesome IOPS numbers. 🙂

Configuring Hyper-V Hosts with converged networking through PowerShell DSC

10 Dec

Lately I have had to rebuild the Hyper-V hosts in my home lab several times because of the releases of the different TPs of Windows Server 2016. This circumstance (and the fact that I am a great fan of PowerShell DSC 😉) gave me the idea to do the whole base configuration of the Hyper-V hosts, including the LBFO NIC teaming, vSwitch and vNICs for the converged networking configuration, through PowerShell DSC.
But soon I realized that the DSC Resource Kit from Microsoft provides DSC resources for only a subset of the needed configuration steps. The result was some PowerShell modules with my custom DSC resources.

My custom DSC resources for the approach:

cLBFOTeam: To create and configure the Windows built-in NIC Teaming
cVNIC: To create and configure virtual network adapters for Hyper-V host management
cPowerPlan: To set a desired power plan in Windows (e.g. the High Performance plan)

You can get the modules from the PowerShell Gallery (Install-Module) or from GitHub. They will hopefully help everyone who has a similar intent 😉
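A hypothetical configuration fragment to illustrate the idea (the property names below are assumptions; check the examples in the module repositories for the exact schema):

    Configuration HyperVHostNetworking
    {
        Import-DscResource -ModuleName cLBFOTeam, cVNIC

        Node 'HVHost01'
        {
            # Property names are illustrative assumptions
            cLBFOTeam MgmtTeam
            {
                TeamName    = 'Team1'
                TeamMembers = 'NIC1', 'NIC2'
                Ensure      = 'Present'
            }

            # (an xVMSwitch resource from the xHyper-V module would typically
            # create the vSwitch on top of the team here)

            cVNIC Management
            {
                Name       = 'Management'
                SwitchName = 'ConvergedSwitch'
                Ensure     = 'Present'
                DependsOn  = '[cLBFOTeam]MgmtTeam'
            }
        }
    }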

More to come:
Yes, I am not quite finished yet and have more in the pipeline. 😉
Currently I am also working on a fork of the xHyperV module with an adapted xVMSwitch resource that has a parameter to specify the MinimumBandwidth mode of the switch.
Furthermore, I am planning to add support for SET (Switch Embedded Teaming) in Windows Server 2016 to the xVMSwitch resource.

So you may soon read more about this topic here. In the meantime, happy DSCing! 🙂

Build a Windows Server 2016 TP3 VHD with minimal GUI

6 Sep

In TP3 the installation options were changed. Therefore, when you create a VHD(X) directly from the ISO with the Convert-WindowsImage.ps1 script, you have the choice of creating a VHD with Core Server or the full GUI with Desktop Experience, but nothing in between. To create a VHD with the Minimal Server Interface (Core Server with Server Manager and MMC GUIs) or the Server Graphical Shell (without Desktop Experience) you have to add the corresponding features with DISM.

This is how you add the minimal server interface to a VHD with the core server installation:

  1. Create a Core Server VHD with the Convert-WindowsImage.ps1 script.
  2. Mount the Windows Server 2016 TP3 ISO (double-click it).
  3. Mount the install.wim from the ISO with DISM (Index 4 for Datacenter, 2 for Standard Edition).
  4. Add the Server-Gui-Mgmt-Infra (for the Minimal Server Interface) or the Server-Gui-Shell (for the full Server Graphical Shell) feature to the VHD by specifying the mounted WIM as the source, as in the sketch below.
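A sketch of steps 3 and 4 using the DISM and Server Manager cmdlets (the drive letter, paths and index are example assumptions):

    # Assumptions: ISO mounted as D:, VHD from step 1 at C:\VHDs\WS2016TP3.vhdx,
    # an empty folder C:\Mount exists. Index 4 = Datacenter, 2 = Standard.

    # Step 3: mount the install.wim from the ISO read-only
    Mount-WindowsImage -ImagePath 'D:\sources\install.wim' -Index 4 -Path 'C:\Mount' -ReadOnly

    # Step 4: add the feature offline to the VHD, using the mounted WIM as source
    Install-WindowsFeature -Name Server-Gui-Mgmt-Infra -Vhd 'C:\VHDs\WS2016TP3.vhdx' -Source 'C:\Mount\Windows\WinSxS'

    # Unmount the WIM again
    Dismount-WindowsImage -Path 'C:\Mount' -Discard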
Update, 11/20/2015:
This does not work anymore with TP4, which is now publicly available, as the feature “Server-Gui-Mgmt-Infra” is gone. You can add the feature “Server-Gui-Mgmt” with DISM, which gives you a similar experience, but that feature is not even listed in PowerShell (Get-WindowsFeature), so it is probably far from supported.
In other words: no “Minimal Server Interface” in TP4 anymore.
 

PowerShell DSC resource to enable/disable Microsoft Update

16 Jun

Ever get tired of manually setting the check box for Microsoft Update in the Windows Update settings on a bunch of servers (e.g. in a new test lab setup)? Then this one is for you.

I recently wrote, mostly as an exercise, a PowerShell DSC module with a resource to enable (or disable) Microsoft Update.

I then published the module on GitHub, as another exercise. 😉
So if you are interested you can get the module from here:

https://github.com/J0F3/cMicrosoftUpdate

After you have the module, enabling the Microsoft Update setting will look like this:
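A minimal sketch of such a configuration (the resource and property names are assumptions here; check the examples in the repository for the exact schema):

    Configuration MicrosoftUpdate
    {
        Import-DscResource -ModuleName cMicrosoftUpdate

        Node 'localhost'
        {
            # Resource and property names are assumptions - see the repo's examples
            cMicrosoftUpdate EnableMicrosoftUpdate
            {
                Ensure = 'Present'
            }
        }
    }

    # Compile and apply the configuration
    MicrosoftUpdate
    Start-DscConfiguration -Path .\MicrosoftUpdate -Wait -Verbose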

Happy DSCing! 🙂

Query Terminal Services Profile Path of AD Users through PowerShell

9 Apr

If you want to query the Terminal Services or Remote Desktop Services profile path with PowerShell, you cannot use the Get-ADUser cmdlet alone. Instead you have to go through ADSI. The Scripting Guy has explained this in detail on his blog: http://blogs.technet.com/b/heyscriptingguy/archive/2008/10/23/how-can-i-edit-terminal-server-profiles-for-users-in-active-directory.aspx

This basically works very well for all user objects where the path for the Terminal Services profile is set, or was set sometime in the past and is now empty. But if you have a user object for which the Terminal Services settings in AD were never touched, you get a funky error message:
Exception calling “InvokeGet” with “1” argument(s): “The directory property cannot be found in the cache.”

If you do an ad hoc query this is not really a problem. But if you want to export the settings for all AD users into a CSV file, the error will probably bother you.
So what can we do? If you have a look at the properties of the ADUser object which the Get-ADUser cmdlet returns, you can see that there is a property with the name “userParameters” with a cryptic value. That is where the Terminal Services profile path is actually stored.


But if the user object never had a Terminal Services profile path set, this property simply does not exist.

Now, as a workaround, you can first check for the existence of the “userParameters” property before you query the Terminal Services profile path through ADSI. This could look like this:
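A sketch of such a check (the original snippet is not reproduced here; this version assumes the ActiveDirectory module is available):

    # Export the TS profile path of all AD users, skipping the ADSI call for
    # user objects where the userParameters attribute was never populated
    Import-Module ActiveDirectory

    Get-ADUser -Filter * -Properties userParameters | ForEach-Object {
        $tsProfilePath = $null
        if ($_.userParameters) {
            $adsiUser = [ADSI]"LDAP://$($_.DistinguishedName)"
            $tsProfilePath = $adsiUser.psbase.InvokeGet('TerminalServicesProfilePath')
        }
        [PSCustomObject]@{
            SamAccountName = $_.SamAccountName
            TSProfilePath  = $tsProfilePath
        }
    } | Export-Csv -Path .\TSProfilePaths.csv -NoTypeInformation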

 

How to delete a specific Recovery Point in DPM

10 Feb

There is no way to delete a specific recovery point (backup) in DPM via the GUI. But our good old friend PowerShell helps. 🙂

Here is an example: we want to delete the recovery point from the evening of February 8 from the backup of the file server FSSRV001 in the protection group “File Backup”. The DPM server name is “DPMSRV001”.

  1. First get the protection group to which the recovery point belongs.
  2. Then you need the data source from which the data in the recovery point comes.
  3. And finally you need to select the recovery point which you want to delete. You can identify the recovery point by date and time.
  4. Now we have all the needed information and we can delete the recovery point, as shown in the sketch below.
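The whole sequence could look like this (a sketch with the DPM cmdlets and the names from the example above; exact property names may vary between DPM versions):

    # Run in the DPM Management Shell on the DPM server

    # Step 1: get the protection group
    $pg = Get-DPMProtectionGroup -DPMServerName 'DPMSRV001' |
          Where-Object { $_.FriendlyName -eq 'File Backup' }

    # Step 2: get the data source of the protected file server
    $ds = Get-DPMDatasource -ProtectionGroup $pg |
          Where-Object { $_.Computer -like '*FSSRV001*' }

    # Step 3: select the recovery point by date and time (timestamp is an example)
    $rp = Get-DPMRecoveryPoint -Datasource $ds |
          Where-Object { $_.RepresentedPointInTime -eq [datetime]'2015-02-08 22:00' }

    # Step 4: delete the recovery point
    Remove-DPMRecoveryPoint -RecoveryPoint $rp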