Introducing the Horizon Golden Image Deployment Tool

Intro

Some of you might have seen previews on the socials, but for the last few months I have been working hard on a GUI-based tool to deploy golden images to VMware Horizon Instant Clone Desktop Pools and RDS Farms. This is because who isn’t sick and tired of having to go into each and every Pod admin interface and do a million clicks just to deploy a few new golden images?

The work started mid-December and, as usual, the first 75% was done pretty quickly, so by mid-January I had a first working version. While I expected a first version that would only support Desktop Pools to be available mid-February (VMUG Virtual EUC Day, anyone?), that build actually already had most of the parts in place to support both Desktop Pools and RDS Farms. After this first build my regular work for ControlUp started picking up again, so I had less time to fix the numerous bugs that I encountered. Well, bugs? Most were logic errors on my part, but most of them have been ironed out by now. All in all, the version that you can use today has cost me roughly 80 hours of building and testing, and many more hours thinking about solutions.

Is the tool perfect? Definitely not, but it works, it’s more than what we’ve had before, and I will be looking into improving it even more.

The Tool

I guess you got bored with the talking and would like to see the tool now. The content of this blog post might get its own page later for documentation purposes. The tool itself can be downloaded from this GitHub repository. Make sure to grab both the xaml and ps1 files, put them in a single folder and you should be good. The tool runs entirely on PowerShell and uses the WPF framework for the GUI.

Requirements

There are only 2 real requirements:

  • PowerShell 7.3 or later (for performance reasons I used some PS 7.3 features, so the tool WILL break on an older version; see the quick check below.)
  • Horizon 8 2206 or later (the reason for this is that otherwise secondary images wouldn’t be available, and that’s a feature I definitely wanted in there.)
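
As a minimal sketch (not part of the tool itself), this is the kind of check you could run in the same PowerShell session before starting the ps1 file:

# The tool relies on PowerShell 7.3 features, so bail out early on older versions.
if ($PSVersionTable.PSVersion -lt [version]'7.3') {
    throw "PowerShell 7.3 or later is required, this session runs $($PSVersionTable.PSVersion)"
}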

Deploying a new golden image

For normal usage you can just start the ps1 file.

This will bring you to this tab, but before the first use you need to go to the configuration page.

Fill in one of your connection servers and your credentials, and hit the test button. If you have a Cloud Pod setup the other pods will be detected automatically, and make sure to check the Ignore Certificate Errors checkbox in case that’s needed!

If the test was successful you are good to go to either the Desktop Pools or RDS Farms tab. Both are almost completely the same, so I will only show Desktop Pools.

Hit the connect button and all Instant Clone Pools or Farms from all pods will be auto-populated in the first pull-down menu.

The second pull-down menu allows you to select a new source VM.

And the third pull-down selects the snapshot (I guess you guessed that already; I might need to add labels, but they are so ugly in WPF).

To the right you have all kinds of options that you should recognize from the regular GUI, including the Add vTPM option that was added in Horizon 2206. This checkbox isn’t in the RDS Farms tab, as we currently simply don’t have the option to have Horizon add a vTPM there. If the options are valid (you can’t have more cores per socket than total cores, and the numbers have to work out: 3 cores with 2 cores per socket won’t fly), the Deploy Golden Image button becomes available. Hit this to start the deployment; you can check the status by hitting the refresh button (don’t tell anyone, but that function does exactly the same as a connect).
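
For reference, the validation boils down to something like this simplified sketch (not the actual tool code); $CPUs and $CoresPerSocket stand for the values selected in the GUI:

# The total number of CPUs has to be a multiple of the cores per socket,
# otherwise the Deploy Golden Image button stays disabled.
if (($CPUs % $CoresPerSocket) -eq 0) {
    Write-Host "Valid combination: $($CPUs / $CoresPerSocket) socket(s) with $CoresPerSocket core(s) each"
}
else {
    Write-Warning "Invalid combination: $CPUs CPUs cannot be divided over sockets with $CoresPerSocket cores each"
}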

Handling Secondary Images

By default a secondary image will not be pushed to any machines. Just select the Push As Secondary Image checkbox and hit the Deploy Golden Image button.

What you can also do is select one or more of the machines and deploy the golden image to those machines.

Once a secondary image has been deployed, the three other buttons become available.

From top to bottom you can cancel the secondary image completely, apply the golden image to more desktops (selecting ones that already have it won’t break anything, it will just deploy when possible) or promote the secondary image to the primary golden image for the pool. The latter will cause a rebuild of ALL machines, including the ones already running on the image. In the future I will also add a button to configure a machine to run the original image again.

Settings and logs location

All settings are stored in an xml file in %appdata%\HGIDTool. This includes the password configured in the Settings tab, stored as a regular encrypted PowerShell credentials object. If you didn’t hit save on the Settings tab, this is done automatically when closing the tool.
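
For those wondering what that looks like: the credential part follows the standard Export-Clixml/Import-Clixml pattern, roughly like the sketch below (the settings.xml file name is just an example, the tool’s actual file layout may differ):

# The SecureString is encrypted with DPAPI, so only the same user on the same machine can decrypt it.
$settingsfolder = Join-Path $env:APPDATA 'HGIDTool'
if (-not (Test-Path $settingsfolder)) { New-Item -ItemType Directory -Path $settingsfolder | Out-Null }
Get-Credential | Export-Clixml -Path (Join-Path $settingsfolder 'settings.xml')
# Reading it back later
$creds = Import-Clixml -Path (Join-Path $settingsfolder 'settings.xml')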

In case you want to borrow some of my code: the log files contain every API call that I do to get data, push an image or handle secondary images. This is the same output as you’d get when running in -Verbose mode, so that’s only needed when you’re troubleshooting the tool.
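
If you’re curious, the underlying pattern is nothing more than writing each call to the verbose stream right before executing it; a simplified sketch (not the tool’s actual code, and reusing the Get-HRHeader helper from the script further down):

$uri = "$ConnectionServerURL/rest/inventory/v1/desktop-pools"
Write-Verbose "GET $uri"
# The same line that goes to the verbose stream also ends up in the log file.
$pools = Invoke-RestMethod -Method Get -Uri $uri -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)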

[REST] Pushing a new image Horizon 8 2206 style

One of the new features in Horizon 8 2206 is that you can select a new number of CPUs and a new amount of memory when deploying a new image.

If you know me a bit you might understand that I want to know how we can do this using the APIs. As we’ve seen before, we needed to do a POST against /inventory/v1/desktop-pools/{id}/action/schedule-push-image; for build 2206 this was changed to /inventory/v2/desktop-pools/{id}/action/schedule-push-image.

Let’s compare the content of the body that we need to send.

v1:

{
  "add_virtual_tpm": false,
  "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
  "im_tag_id": "0103796c-102b-4ed3-953f-3dfe3d23e0fe",
  "logoff_policy": "WAIT_FOR_LOGOFF",
  "parent_vm_id": "vm-1",
  "snapshot_id": "snapshot-1",
  "start_time": 1587081283000,
  "stop_on_first_error": true
}

v2

{
  "add_virtual_tpm": false,
  "compute_profile_num_cores_per_socket": 1,
  "compute_profile_num_cpus": 4,
  "compute_profile_ram_mb": 4096,
  "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
  "im_tag_id": "0103796c-102b-4ed3-953f-3dfe3d23e0fe",
  "logoff_policy": "WAIT_FOR_LOGOFF",
  "machine_ids": [
    "816d44cb-b486-3c97-adcb-cf3806d53657",
    "414927f3-1a3b-3e4c-81b3-d39602f634dc"
  ],
  "parent_vm_id": "vm-1",
  "selective_push_image": true,
  "snapshot_id": "snapshot-1",
  "start_time": 1587081283000,
  "stop_on_first_error": true
}

So besides the CPU/memory changes we obviously also have something called selective_push_image. This has to do with the secondary image functionality that was added in Horizon 2111. While the example shows it as true, the data model makes clear that it is not required and defaults to false. The machine_ids array reflects the list of machines where the secondary image has to be applied.

compute_profile_num_cores_per_socket	integer($int32)
example: 1
minimum: 1
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the number of cores per socket for the CPU in the compute profile to be configured on clones.
If set, both compute_profile_num_cpus and compute_profile_ram_mb need to be set.

compute_profile_num_cpus	integer($int32)
example: 4
minimum: 1
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the number of CPUs in the compute profile to be configured on clones.
If set, this must be a multiple of compute_profile_num_cores_per_socket.

compute_profile_ram_mb	integer($int32)
example: 4096
minimum: 1024
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the RAM in MB in the compute profile to be configured on clones.

machine_ids	[
example: List [ "816d44cb-b486-3c97-adcb-cf3806d53657", "414927f3-1a3b-3e4c-81b3-d39602f634dc" ]
Set of machines from the desktop pool on which the new image is to be applied. This can be set when selective_push_image is set to true.

selective_push_image	boolean
example: true
Indicates whether selective push image is to be applied. If set to true, the new image will be applied to specified machine_ids in the desktop pool. The image published with this option will be held as a pending image, unless it is promoted or cancelled. The default value is false.
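
Putting the data model together, a minimal v2 body for a selective (secondary image) push could be built like this; all values below are placeholders:

$body = [ordered]@{
    add_virtual_tpm                      = $false
    compute_profile_num_cores_per_socket = 2
    compute_profile_num_cpus             = 4                 # must be a multiple of the cores per socket
    compute_profile_ram_mb               = 4096              # minimum is 1024
    logoff_policy                        = 'WAIT_FOR_LOGOFF'
    machine_ids                          = @('816d44cb-b486-3c97-adcb-cf3806d53657')
    parent_vm_id                         = 'vm-1'
    selective_push_image                 = $true             # required to be true when machine_ids is supplied
    snapshot_id                          = 'snapshot-1'
    stop_on_first_error                  = $true
} | ConvertTo-Json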

To be able to use this I have updated my previous image deployment script for Horizon 2206.

New arguments are:

  • AddVirtualTPM
    • Boolean to add a virtual TPM or not
  • SecondaryImage
    • Boolean to define the image as secondary (required if you also supply Machine_Ids)
  • Machine_Ids
    • Array with machine_ids to apply the secondary image to
  • CoresPerSocket
    • Int with the number of cores per socket
  • CPUs
    • Int with the total number of CPUs
  • MemoryinMB
    • Memory in MB, so 4096 or 6192 for example

 

<#
    .SYNOPSIS
    Pushes a new Golden Image to a Desktop Pool

    .DESCRIPTION
    This script uses the Horizon rest api's to push a new golden image to a VMware Horizon Desktop Pool

    .EXAMPLE
    .\Horizon_Rest_Push_Image.ps1 -ConnectionServerURL https://pod1cbr1.loft.lab -Credentials $creds -vCenterURL "https://pod1vcr1.loft.lab" -DataCenterName "Datacenter_Loft" -baseVMName "W21h1-2021-09-08-15-48" -BaseSnapShotName "Demo Snapshot" -DesktopPoolName "Pod01-Pool02"

    .PARAMETER Credentials
    Mandatory: No
    Type: PSCredential
    Object with credentials for the connection server with domain\username and password. If not supplied the script will ask for user and password.

    .PARAMETER ConnectionServerURL
    Mandatory: Yes
    Type: String
    URL of the connection server to connect to

    .PARAMETER vCenterURL
    Mandatory: Yes
    URL of the vCenter that holds the golden image i.e. https://vcenter.domain.lab

    .PARAMETER DataCenterName
    Mandatory: Yes
    Name of the datacenter to look in

    .PARAMETER BaseVMName
    Mandatory: Yes
    Name of the golden image VM

    .PARAMETER BaseSnapShotName
    Mandatory: Yes
    Name of the snapshot to use for the golden image

    .PARAMETER DesktopPoolName
    Mandatory: Yes
    Name of the desktop pool to push the image to

    .PARAMETER StoponError
    Mandatory: No
    Boolean to stop on error or not

    .PARAMETER logoff_policy
    Mandatory: No
    String FORCE_LOGOFF or WAIT_FOR_LOGOFF to set the logoff policy.

    .PARAMETER Scheduledtime
    Mandatory: No
    Time to schedule the image push in [DateTime] format.

    .PARAMETER AddVirtualTPM
    Mandatory: No
    Default: $False
    Boolean to add a virtual TPM to the clones or not.

    .PARAMETER SecondaryImage
    Mandatory: No (Yes if Machine_Ids is supplied)
    Default: $False
    Boolean to push the image as a secondary image.

    .PARAMETER Machine_Ids
    Mandatory: No
    Array of machine_ids to apply the secondary image to.

    .PARAMETER CoresPerSocket
    Mandatory: No (unless CPUs or MemoryinMB is supplied)
    Int with the number of cores per socket.

    .PARAMETER CPUs
    Mandatory: No (unless MemoryinMB or CoresPerSocket is supplied)
    Int with the total number of CPUs.

    .PARAMETER MemoryinMB
    Mandatory: No (unless CPUs or CoresPerSocket is supplied)
    Int with the new amount of memory in MB.

    .NOTES
    Minimum required version: VMware Horizon 8 2206
    Created by: Wouter Kursten
    First version: 03-11-2021
    Changes: 05-09-2022 -   Added resizing of cpu/memory
                        -   Added secondary image functionality
                        -   Added option to add Virtual TPM


    .COMPONENT
    Powershell Core
#>

[CmdletBinding(DefaultParameterSetName = 'Generic')]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credentials,

    [Parameter(Mandatory=$true,
    HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerURL,

    [parameter(Mandatory = $true,
    HelpMessage = "URL of the vCenter to look in i.e. https://vcenter.domain.lab")]
    [ValidateNotNullOrEmpty()]
    [string]$vCenterURL,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Datacenter to look in.")]
    [ValidateNotNullOrEmpty()]
    [string]$DataCenterName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Golden Image VM.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseVMName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Snapshot to use for the Golden Image.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseSnapShotName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Desktop Pool.")]
    [ValidateNotNullOrEmpty()]
    [string]$DesktopPoolName,

    [parameter(Mandatory = $false,
    HelpMessage = "True or false for stop on error.")]
    [ValidateNotNullOrEmpty()]
    [bool]$StoponError = $true,

    [parameter(Mandatory = $false,
    HelpMessage = "Use WAIT_FOR_LOGOFF or FORCE_LOGOFF.")]
    [ValidateSet('WAIT_FOR_LOGOFF','FORCE_LOGOFF', IgnoreCase = $false)]
    [string]$logoff_policy = "WAIT_FOR_LOGOFF",

    [parameter(Mandatory = $false,
    HelpMessage = "DateTime object for the moment of scheduling the image push.Defaults to immediately")]
    [datetime]$Scheduledtime,

    [parameter(Mandatory = $false,
    HelpMessage = "Bool for adding a Virtual TPM or not.")]
    [ValidateNotNullOrEmpty()]
    [bool]$AddVirtualTPM = $False,

    [parameter(Mandatory = $false,
    HelpMessage = "True or false to set this image as secondary image.")]
    [ValidateNotNullOrEmpty()]
    [bool]$SecondaryImage = $False,

    [parameter(Mandatory = $false,
    HelpMessage = "Array of machine_ids to apply the secondary image to.")]
    [ValidateNotNullOrEmpty()]
    [array]$Machine_Ids,

    [parameter(Mandatory = $false,
    HelpMessage = "New Number of cores per socket.")]
    [ValidateNotNullOrEmpty()]
    [int]$CoresPerSocket,

    [parameter(Mandatory = $false,
    HelpMessage = "New Number of CPU's.")]
    [ValidateNotNullOrEmpty()]
    [int]$CPUs,

    [parameter(Mandatory = $false,
    HelpMessage = "New amount of memory in MB.")]
    [ValidateNotNullOrEmpty()]
    [int]$MemoryinMB
)

if($Credentials){
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}
else{
    $credentials = Get-Credential
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}

$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($password) 
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

function Get-HRHeader(){
    param($accessToken)
    return @{
        'Authorization' = 'Bearer ' + $($accessToken.access_token)
        'Content-Type' = "application/json"
    }
}
function Open-HRConnection(){
    param(
        [string] $username,
        [string] $password,
        [string] $domain,
        [string] $url
    )

    $Credentials = New-Object psobject -Property @{
        username = $username
        password = $password
        domain = $domain
    }

    return invoke-restmethod -Method Post -uri "$url/rest/login" -ContentType "application/json" -Body ($Credentials | ConvertTo-Json)
}
function Close-HRConnection(){
    param(
        $accessToken,
        $ConnectionServerURL
    )
    return Invoke-RestMethod -Method post -uri "$ConnectionServerURL/rest/logout" -ContentType "application/json" -Body ($accessToken | ConvertTo-Json)
}

if($CPUs -AND $CoresPerSocket -AND $MemoryinMB){
    $resize = $true
}
elseif($CPUs -OR $CoresPerSocket -OR $MemoryinMB){
    throw "If either CPUs, CoresPerSOcket or MemoryinGB is supplied, all must be supplied."
}
else{
    $resize = $false
}

if($Machine_Ids -AND !($SecondaryImage)){
    throw "If either Machine_Ids is supplied SecondaryImage also needs to be supplied."
}


try{
    $accessToken = Open-HRConnection -username $username -password $UnsecurePassword -domain $Domain -url $ConnectionServerURL
}
catch{
    throw "Error Connecting: $_"
}

$vCenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/monitor/v2/virtual-centers" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$vcenterid = ($vCenters | where-object {$_.name -like "*$vCenterURL*"}).id
$datacenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/datacenters?vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$datacenterid = ($datacenters | where-object {$_.name -eq $DataCenterName}).id
$basevms = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-vms?datacenter_id=$datacenterid&filter_incompatible_vms=false&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basevmid = ($basevms | where-object {$_.name -eq $baseVMName}).id
$basesnapshots = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-snapshots?base_vm_id=$basevmid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basesnapshotid = ($basesnapshots | where-object {$_.name -eq $BaseSnapShotName}).id
$desktoppools = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/inventory/v1/desktop-pools" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$desktoppoolid = ($desktoppools | where-object {$_.name -eq $DesktopPoolName}).id

$datahashtable = [ordered]@{}
$datahashtable.add('add_virtual_tpm',$AddVirtualTPM)
if($resize){
    $datahashtable.add('compute_profile_num_cores_per_socket',$CoresPerSocket)
    $datahashtable.add('compute_profile_num_cpus',$CPUs)
    $datahashtable.add('compute_profile_ram_mb',$MemoryinMB)
}
$datahashtable.add('logoff_policy',$logoff_policy)
if($Machine_Ids){
    $datahashtable.add('machine_ids',$Machine_Ids)
}
$datahashtable.add('parent_vm_id',$basevmid)
if($SecondaryImage){
    $datahashtable.add('selective_push_image',$SecondaryImage)
}
$datahashtable.add('snapshot_id',$basesnapshotid)
if($Scheduledtime){
    $starttime = get-date $Scheduledtime
    $epoch = ([DateTimeOffset]$starttime).ToUnixTimeMilliseconds()
    $datahashtable.add('start_time',$epoch)
}
$datahashtable.add('stop_on_first_error',$StoponError)
$json = $datahashtable | convertto-json

Invoke-RestMethod -Method Post -uri "$ConnectionServerURL/rest/inventory/v2/desktop-pools/$desktoppoolid/action/schedule-push-image" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken) -body $json

As always the script is available on Github.

Usage:

I am using all the new arguments except the virtual tpm one.

D:\GIT\Various_Scripts\Horizon_Rest_Push_Image_VDI_2206.ps1 -ConnectionServerURL https://pod1cbr1.loft.lab -Credentials $creds -vCenterURL "https://pod1vcr1.loft.lab" -DataCenterName "Datacenter_Loft" -baseVMName "W21h1-gi-2022-08-19-09-36" -BaseSnapShotName "VM Snapshot 9%2f5%2f2022, 6:50:12 PM" -DesktopPoolName "Pod01-Pool03" -CPUs 2 -CoresPerSocket 1 -MemoryinMB 4096 -SecondaryImage $true -Machine_Ids $array

Horizon 8 API changelog pages now available

For several versions now, each VMware Horizon release notes page has contained this mention:

This links to this page: VMware Horizon REST APIs (84155). For a while that page was updated with the latest additions to the REST APIs, but I guess people didn’t want to do that anymore, so the page now simply links to the API Explorer:

Since I used this changelog to decide what I wanted to blog about, I thought it was time to create a changelog myself. What I did was download all the Swagger specifications and compare them. For reasons I had to do this in Excel, but at least I now have a nice source for this (ping me if you would like to have the Excel file). The result is a new menu item at the top of my blog with the changelog for every Horizon 8 version. Yes, I know the REST APIs have been available since 7.10, but I decided to start with 8.0. Please use the menu at the top or the links below to go to the changelog for the version that you would like to see. (A rough PowerShell alternative to the Excel comparison is sketched after the list of links.)

Horizon 8.0 (2006)

Horizon 8.1 (2103)

Horizon 8.2 (2106)

Horizon 8.3 (2109)

Horizon 8.4 (2111)

Horizon 8.5 (2203) – no changes

Horizon 8.6 (2206)
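
If you want to reproduce something similar without Excel, a rough PowerShell sketch could diff the path lists of two downloaded Swagger files (the file names below are placeholders for whatever you saved the specs as):

# Pull the list of REST paths out of each downloaded Swagger/OpenAPI file.
$oldpaths = (Get-Content .\horizon-2203-rest.json -Raw | ConvertFrom-Json).paths.PSObject.Properties.Name
$newpaths = (Get-Content .\horizon-2206-rest.json -Raw | ConvertFrom-Json).paths.PSObject.Properties.Name
# Anything only on the "new" side was added, anything only on the "old" side was removed.
Compare-Object -ReferenceObject $oldpaths -DifferenceObject $newpaths | ForEach-Object {
    [pscustomobject]@{
        Path   = $_.InputObject
        Change = if ($_.SideIndicator -eq '=>') { 'Added' } else { 'Removed' }
    }
}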

 

My recipe for a successful vExpert application

This was also posted on the vExpert blog here.

One of the questions that the vExpert PROs get is what people need to add to their vExpert application for it to be successful. While the things we do during the year are different for everyone, the first thing you need to do is write everything down as extensively as possible. I will use my own current application as an example of how you can write everything down. There is no single truth to creating your application, so see this as inspiration for creating your own.

One remark before you start editing or creating your application: make sure to save as often as possible. The vExpert portal seems to have a timeout, and you wouldn’t be the first person whose session times out before the progress has been saved, losing it all. I prefer to keep track of everything in a Word document.

Preparation:

  • During the year keep track of your activities, this way you won’t forget anything.

(Required) Ingredients for this recipe:

  • Qualification path
  • Write down activities
  • Add url’s to activities if available
  • Be as extensive as possible
  • Work with a vExpert PRO

Step 1, Qualification path:

You need to select a qualification path. I could try as a partner since I work for ControlUp, but I always use evangelist, as all of my work in the community is done on my own credentials. VCDXs are automatically vExperts, but these are always checked against the VCDX directory, so there’s no way to cheat there.

Step 2, Checkboxes:

If you created content make sure to select all the relevant checkboxes

Step 3 Other Media:

Give examples of the content you have created. There is no need to list every blog post, but I always add links to the ones that I think are the most useful for the community. While I don’t consider view counts relevant, I do add the total number of posts and views.

Step 4 Events and speaking:

Been an (online) event speaker or podcast guest? You can list those here; if possible, add a link as proof. For VMUG events I know a lot of the old links give a 404 these days, so make sure they work at the moment of submitting. If you can’t get a direct link to the event, maybe you can find tweets about you presenting, so add those (I would advise mentioning why you did that, though). If you are an organizer of something, like I am with the EMEA Breakfast events, those can also be listed here. I won’t hold it against you when voting if you don’t have an attendance count, but if you have it, even better.

<I would seriously consider hitting that save button here>

Step 5 Communities, Tools and Resources:

Tools & resources: Here is where you can link that nifty github repo where you share all your scripts. I for example list my Personal github profile but also the links to the vCheck for Horizon and the python module for Horizon as those are the main projects I worked on.

Communities: list each and every related community that you are active in; this includes Discord channels, Slack channels, forums etc., even if they are not in English. Make sure to add a link to your profile or post history so the people who are voting can easily find your activity. If there’s no link available you can at least post your username.

Step 6, VMware Programs:

This is where you can list all your vExpert titles but also things like VMware Champions, VMware{Code} CodeCoach, beta programs, work as certification SME, Tanzu heroes, VMware Influencers and what not.

<Hit that save button again>

Step 7 Other Activities:

Here’s where you can list the things that aren’t publicly available, so it can be very relevant for customers & partners. I list the work that I do internally at ControlUp, even if it is related to my job.

Step 8, References:

If you have a VMware employee that you worked a lot with and who can vouch for you, make sure to check that box and list their email address. The same applies to the vExpert PRO that you worked with.

You don’t need to be a community crazy person like I am to become a vExpert, but as said, the recipe for a successful application is to write everything down as extensively as possible. Do you have something that you don’t know will help? Just add it, it might be the thing that completes the picture for the PROs when we start voting.

If you want to reach out to a vExpert PRO you can find the directory here, and there are also Facebook and LinkedIn groups.

Horizon 8 2111 GA: What’s new in the rest api’s?

So I just found out that Horizon 8 2111 dropped today, and there have been some welcome changes to the REST APIs. Luckily VMware does have these covered by now in THIS kb article.

In short these are the changes:

Inventory : Desktop Pools
  • Create desktop pool
  • Update desktop pool
  • Delete desktop pool
  • v5 version of List
  • v5 version of Get

Inventory : Desktop Actions
  • Validate Installed Applications
  • Validate VM Names Info
  • Resume Task on Desktop pool
  • Pause task on Desktop pool

Inventory : Farms
  • v3 version of List
  • v3 version of Get
  • v2 version of Create
  • v2 version of Update

Inventory : Farm Actions
  • Add RDS servers to farm
  • Remove RDS servers from farm
  • Schedule Maintenance (and image management schedule maintenance)
  • Cancel Schedule Maintenance
  • Validate Installed Applications

Inventory : Global Application Entitlements
  • v2 version of List
  • v2 version of Get
  • Create Global Application Entitlements
  • Update Global Application Entitlements
  • Delete Global Application Entitlements
  • List Compatible Backup Global Application Entitlements

Inventory : Global Desktop Entitlements
  • v2 version of List
  • v2 version of Get
  • Create Global Desktop Entitlements
  • Update Global Desktop Entitlements
  • Delete Global Desktop Entitlements
  • List Compatible Backup Global Desktop Entitlements

Inventory : Global Sessions
  • List
  • Disconnect
  • Logoff
  • Reset
  • Restart
  • Send Message

So finally we’re able to create desktop pools using REST and to push a new image to RDS farms. I will make sure to update the Python module ®Soon and create some blog posts on the new options.
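
As a quick illustration, this is how one of the new calls could be tried from PowerShell, assuming the v5 list follows the same /rest/inventory/v{n}/desktop-pools pattern as the earlier versions and reusing the Get-HRHeader helper from my earlier posts:

# List desktop pools using the new v5 endpoint (path assumed from the existing version pattern).
$pools = Invoke-RestMethod -Method Get -Uri "$ConnectionServerURL/rest/inventory/v5/desktop-pools" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$pools | Select-Object id, name, type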

Horizon REST API + Powershell 7: pagination and filtering (with samples)

A while ago Robin Stolpe (Twitter) asked me if it was possible to find which machines a user is assigned to in a Horizon environment. To answer this I first started messing with the SOAP APIs and had a really hard time filtering for the user id with the various machine-related queries. Looking for the assignedUser property was no problem, but that has been deprecated and replaced by assignedUsers because of the added functionality for assigning multiple users to a machine. Instead of becoming too frustrated I decided to switch paths and use PowerShell with the REST APIs.

TLDR: I have defined a broadly usable function for just about all Horizon REST GET API calls, with or without filtering, that also works for GET calls without any additions and for GET calls that require an id. You can scroll down to the bottom to get that function and a script that uses it.

Warning: the sample code in the script & example function requires PowerShell 7!

 

Filtering

One of the things I hadn’t done before was filtering and pagination in a more useful way than just writing the entire url out. VMware has a guide available for filtering that can be found here. This was a good way to get started, but I found it easiest to skip the single searches entirely and always use the And or Or filtering types for chained filtering.

The method I am using to create the filter is to first define an ordered hashtable. Why ordered? The API calls require the name/value pairs in a certain order, and if you just add them to a regular hashtable this order will change.

$filterhashtable = [ordered]@{}

Next I add the first name/value pair for the filter type; this is either And or Or.

$filterhashtable.add('type', 'And')

Next I add another pair with the name filters and an array as value. I could use .add again or just set it like I do here:

$filterhashtable.filters = @()

The members of the filters array again need to be ordered hashtables (as you can see, I search for a user here):

$userfilter= [ordered]@{}
$userfilter.add('type','Equals')
$userfilter.add('name','name')
$userfilter.add('value',$User)

$domainfilter= [ordered]@{}
$domainfilter.add('type','Equals')
$domainfilter.add('name','domain')
$domainfilter.add('value',$Domain)

And I add both of them to the filters array:

$filterhashtable.filters+=$userfilter
$filterhashtable.filters+=$domainfilter

And let’s show what’s in the $filterhashtable:

To be able to use this within the Invoke-RestMethod URL I need to convert it to JSON and compress it to a single line:

$filterflat = $filterhashtable | ConvertTo-Json -Compress
$filterflat
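
To give an idea of the result: with user1 and loft.lab filled in (placeholder values), the compressed string looks roughly like this, and that is exactly what gets appended to the filter= part of the request URL:

{"type":"And","filters":[{"type":"Equals","name":"name","value":"user1"},{"type":"Equals","name":"domain","value":"loft.lab"}]}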

Pagination

For the pagination I needed the HAS_MORE_RECORDS property of the returned headers. If this is TRUE there are more records to be found. Sadly this is not available in the classic Invoke-RestMethod from PowerShell v5. With PowerShell 7 you can add -ResponseHeadersVariable responseheader to store the headers in a variable called $responseheader. With this variable you can easily create a do/until loop.

$urlstart= $ServerURL+"/rest/"+$RestMethod+"?page="
$results = [System.Collections.ArrayList]@()
$page = 1
$uri = $urlstart+$page+"&size=$pagesize"
$response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
$response.foreach({$results.add($_)}) | out-null
if ($responseheader.HAS_MORE_RECORDS -contains "TRUE") {
    do {
        $page++
        $uri = $urlstart+$page+"&size=$pagesize"
        $response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
        $response.foreach({$results.add($_)}) | out-null
    } until ($responseheader.HAS_MORE_RECORDS -notcontains "TRUE")
}
return $results

Please be advised that without some additional parameters this code isn’t usable yet; scroll down for something you can really use.

The function

To combine the above 2 items I have created a function that can use all of the above but is also able to do regular get calls and get calls that require an id in the url.

function Get-HorizonRestData(){
    [CmdletBinding(DefaultParametersetName='None')] 
    param(
        [Parameter(Mandatory=$true,
        HelpMessage='url to the server i.e. https://pod1cbr1.loft.lab' )]
        [string] $ServerURL,

        [Parameter(Mandatory=$true,
        ParameterSetName="filteringandpagination",
        HelpMessage='Array of ordered hashtables' )]
        [array] $filters,

        [Parameter(Mandatory=$true,
        ParameterSetName="filteringandpagination",
        HelpMessage='Type of filter Options: And, Or' )]
        [ValidateSet('And','Or')]
        [string] $Filtertype,

        [Parameter(Mandatory=$false,
        ParameterSetName="filteringandpagination",
        HelpMessage='Page size, default = 500' )]
        [int] $pagesize = 500,

        [Parameter(Mandatory=$true,
        HelpMessage='Part after the url in the swagger UI i.e. /external/v1/ad-users-or-groups' )]
        [string] $RestMethod,

        [Parameter(Mandatory=$true,
        HelpMessage='Access token object as returned by the login call (Open-HRConnection)' )]
        [PSCustomObject] $accessToken,

        [Parameter(Mandatory=$false,
        ParameterSetName="filteringandpagination",
        HelpMessage='$True for rest methods that contain pagination and filtering, default = False' )]
        [switch] $filteringandpagination,

        [Parameter(Mandatory=$false,
        ParameterSetName="id",
        HelpMessage='To be used with single id based queries like /monitor/v1/connection-servers/{id}' )]
        [string] $id
    )
    if($filteringandpagination){
        if ($filters){
            $filterhashtable = [ordered]@{}
            $filterhashtable.add('type',$filtertype)
            $filterhashtable.filters = @()
            foreach($filter in $filters){
                $filterhashtable.filters+=$filter
            }
            $filterflat=$filterhashtable | convertto-json -Compress
            $urlstart= $ServerURL+"/rest/"+$RestMethod+"?filter="+$filterflat+"&page="
        }
        else{
            $urlstart= $ServerURL+"/rest/"+$RestMethod+"?page="
        }
        $results = [System.Collections.ArrayList]@()
        $page = 1
        $uri = $urlstart+$page+"&size=$pagesize"
        $response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
        $response.foreach({$results.add($_)}) | out-null
        if ($responseheader.HAS_MORE_RECORDS -contains "TRUE") {
            do {
                $page++
                $uri = $urlstart+$page+"&size=$pagesize"
                $response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
                $response.foreach({$results.add($_)}) | out-null
            } until ($responseheader.HAS_MORE_RECORDS -notcontains "TRUE")
        }
    }
    elseif($id){
        $uri= $ServerURL+"/rest/"+$RestMethod+"/"+$id
        $results = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
    }
    else{
        $uri= $ServerURL+"/rest/"+$RestMethod
        $results = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
    }

    return $results
}

As you can see there are several arguments:

  • ServerURL
    • This is the url to the connection server i.e. https://server.domain
  • Filters
    • An Array of ordered hashtables as you can find in the filtering paragraph
  • filtertype
    • Sets the filter type, this needs to be And or Or
  • PageSize
    • This is optional if you want to change from the default 500 results that I have set
  • RestMethod
    • This is the RestMethod that you can copy from the Swagger URL or API Explorer.
  • AccessToken
    • This is the accesstoken you get as a result when using open-hrconnection from previous samples to authenticate (see the sample script below)
  • Filteringandpagination
    • Add this argument to use the filtering and/or pagination options
  • Id
    • Use this for REST API Get calls where an Id is required in the URI

Examples

Some usable examples would be:

Get-HorizonRestData -ServerURL $url -RestMethod "/monitor/connection-servers" -accessToken $accessToken 
Get-HorizonRestData -ServerURL $url -RestMethod "/monitor/connection-servers" -accessToken $accessToken -id $connectionserverid
Get-HorizonRestData -ServerURL $url -filteringandpagination -Filtertype "And" -filters $machinefilters -RestMethod "/inventory/v1/machines" -accessToken $accessToken

Sample Script

The script below (also available on GitHub here) asks for credentials if you don’t supply the object, takes the connection server FQDN (no url needed) plus the user and domain to search for, and returns an array of machines the user is assigned to. It uses the default functions Andrew Morgan created a long time ago and my function for the GET methods.

<#
    .SYNOPSIS
    Retrieves all machines a user is assigned to

    .DESCRIPTION
    This script uses the Horizon rest api's to query the Horizon database for all machines a user is assigned to.

    .EXAMPLE
    .\find_user_assigned_desktops.ps1 -Credential $creds -ConnectionServerFQDN pod2cbr1.loft.lab -UserName "User2"

    .PARAMETER Credential
    Mandatory: No
    Type: PSCredential
    Object with credentials for the connection server with domain\username and password

    .PARAMETER ConnectionServerFQDN
    Mandatory: Yes
    Type: String
    FQDN of the connection server to connect to

    .PARAMETER User
    Mandatory: Yes
    Username of the user to look for

    .PARAMETER Domain
    Mandatory: Yes
    Domain to look in

    .NOTES
    Created by: Wouter Kursten
    First version: 02-10-2021

    .COMPONENT
    Powershell Core

#>

[CmdletBinding()]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credential,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [parameter(Mandatory = $true,
    HelpMessage = "Username of the user to look for.")]
    [string]$User = $false,

    [parameter(Mandatory = $true,
    HelpMessage = "Domain where the user object exists.")]
    [string]$Domain = $false
)

function Get-HRHeader(){
    param($accessToken)
    return @{
        'Authorization' = 'Bearer ' + $($accessToken.access_token)
        'Content-Type' = "application/json"
    }
}
function Open-HRConnection(){
    param(
        [string] $username,
        [string] $password,
        [string] $domain,
        [string] $url
    )

    $Credentials = New-Object psobject -Property @{
        username = $username
        password = $password
        domain = $domain
    }

    return invoke-restmethod -Method Post -uri "$url/rest/login" -ContentType "application/json" -Body ($Credentials | ConvertTo-Json)
}

function Close-HRConnection(){
    param(
        $accessToken,
        $url
    )
    return Invoke-RestMethod -Method post -uri "$url/rest/logout" -ContentType "application/json" -Body ($accessToken | ConvertTo-Json)
}

function Get-HorizonRestData(){
    [CmdletBinding(DefaultParametersetName='None')] 
    param(
        [Parameter(Mandatory=$true,
        HelpMessage='url to the server i.e. https://pod1cbr1.loft.lab' )]
        [string] $ServerURL,

        [Parameter(Mandatory=$true,
        ParameterSetName="filteringandpagination",
        HelpMessage='Array of ordered hashtables' )]
        [array] $filters,

        [Parameter(Mandatory=$true,
        ParameterSetName="filteringandpagination",
        HelpMessage='Type of filter Options: And, Or' )]
        [ValidateSet('And','Or')]
        [string] $Filtertype,

        [Parameter(Mandatory=$false,
        ParameterSetName="filteringandpagination",
        HelpMessage='Page size, default = 500' )]
        [int] $pagesize = 500,

        [Parameter(Mandatory=$true,
        HelpMessage='Part after the url in the swagger UI i.e. /external/v1/ad-users-or-groups' )]
        [string] $RestMethod,

        [Parameter(Mandatory=$true,
        HelpMessage='Access token object as returned by the login call (Open-HRConnection)' )]
        [PSCustomObject] $accessToken,

        [Parameter(Mandatory=$false,
        ParameterSetName="filteringandpagination",
        HelpMessage='$True for rest methods that contain pagination and filtering, default = False' )]
        [switch] $filteringandpagination,

        [Parameter(Mandatory=$false,
        ParameterSetName="id",
        HelpMessage='To be used with single id based queries like /monitor/v1/connection-servers/{id}' )]
        [string] $id
    )
    if($filteringandpagination){
        if ($filters){
            $filterhashtable = [ordered]@{}
            $filterhashtable.add('type',$filtertype)
            $filterhashtable.filters = @()
            foreach($filter in $filters){
                $filterhashtable.filters+=$filter
            }
            $filterflat=$filterhashtable | convertto-json -Compress
            $urlstart= $ServerURL+"/rest/"+$RestMethod+"?filter="+$filterflat+"&page="
        }
        else{
            $urlstart= $ServerURL+"/rest/"+$RestMethod+"?page="
        }
        $results = [System.Collections.ArrayList]@()
        $page = 1
        $uri = $urlstart+$page+"&size=$pagesize"
        $response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
        $response.foreach({$results.add($_)}) | out-null
        if ($responseheader.HAS_MORE_RECORDS -contains "TRUE") {
            do {
                $page++
                $uri = $urlstart+$page+"&size=$pagesize"
                $response = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
                $response.foreach({$results.add($_)}) | out-null
            } until ($responseheader.HAS_MORE_RECORDS -notcontains "TRUE")
        }
    }
    elseif($id){
        $uri= $ServerURL+"/rest/"+$RestMethod+"/"+$id
        $results = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
    }
    else{
        $uri= $ServerURL+"/rest/"+$RestMethod
        $results = Invoke-RestMethod $uri -Method 'GET' -Headers (Get-HRHeader -accessToken $accessToken) -ResponseHeadersVariable responseheader
    }

    return $results
}

if($Credential){
    $creds = $credential
}
else{
    $creds = get-credential
}

$ErrorActionPreference = 'Stop'

$username=($creds.username).split("\")[1]
$domain=($creds.username).split("\")[0]
$password=$creds.password

$url = "https://$ConnectionServerFQDN"

$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($password) 
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

$accessToken = Open-HRConnection -username $username -password $UnsecurePassword -domain $Domain -url $url

$userfilters = @()
$userfilter= [ordered]@{}
$userfilter.add('type','Equals')
$userfilter.add('name','name')
$userfilter.add('value',$User)
$userfilters+=$userfilter
$domainfilter= [ordered]@{}
$domainfilter.add('type','Equals')
$domainfilter.add('name','domain')
$domainfilter.add('value',$Domain)
$userfilters+=$domainfilter

$userobject = Get-HorizonRestData -ServerURL $url -filteringandpagination -Filtertype "And" -filters $userfilters -RestMethod "/external/v1/ad-users-or-groups" -accessToken $accessToken

$machinefilters = @()
$machinefilter= [ordered]@{}
$machinefilter.add('type','Contains')
$machinefilter.add('name','user_ids')
$machinefilter.add('value',($userobject).id)
$machinefilters+=$machinefilter
$machines = Get-HorizonRestData -ServerURL $url -filteringandpagination -Filtertype "And" -filters $machinefilters -RestMethod "/inventory/v1/machines" -accessToken $accessToken

return $machines

I use it like this to display only the machine names:

(D:\GIT\Scripts\find_user_assigned_desktops.ps1 -Credential $creds -ConnectionServerFQDN "pod1cbr1.loft.lab" -User "user1" -Domain "loft.lab").name

You see some names in the 2*** range appear twice, but that is a desktop pool with multiple assignments.

Getting the full machine objects is also possible

D:\GIT\Scripts\find_user_assigned_desktops.ps1 -Credential $creds -ConnectionServerFQDN "pod1cbr1.loft.lab" -User "m_wouter" -Domain "loft.lab"

Script to cleanup desktops running on old snapshot

So last year Guy Leech asked if I had a script to identify machines running on an old snapshot. I created a script for that here. This week Madan Kumar asked for a script that finds these same VDI desktops but also cleans them up if needed. For this I have created the Horizon_cleanup_old_images.ps1 script (yes, I suck at making up names).

If you run a get-help for the script you’ll see this:

By default the script only requires a ConnectionServerFQDN and poolname, as it works on a per-pool level. It will try to give the users a graceful logoff and has options to force the logoff (in case their session is locked) or to delete the machine. And if you just want to have a preview there’s an option for that as well.

Optional arguments are:

-credential: this can be created with get-credential or retrieved from a stored credentials xml file, just make sure that it looks like domain\username plus password

-deletedesktops: if used, the script will forcefully try to log off the users and always deletes the desktops

-forcedlogoff: a normal logoff doesn’t work when the session is locked, so you might need to force it

-preview: no actions are taken, the information is just displayed on screen

Let’s use the script

d:\git\scripts\Horizon_cleanup_old_image.ps1 -Credential $creds -ConnectionServerFQDN pod2cbr1.loft.lab -poolname "Pod02-Pool02" -preview

Yes, I use write-host, but it’s all one-liners so it shouldn’t be too slow, and I like colors. As you can see, in preview mode it shows what would happen. One of these sessions is locked, so let’s see what happens when I log them off.

Yes, an error, but I think it’s clear why: the graceful logoff worked for two users but not for the third one, so I will add the forced option now.

That looks good and when I look at the desktop pool everything is fine there as well.

And that’s being confirmed by the script

Now I will use the delete option for my other desktop pool.

First again with the preview option

and without

and seen from the Horizon Admin

As linked above the script can be found on github but also below this line.

<#
    .SYNOPSIS
    Cleans up desktops running on an image that's not the default for a desktop pool

    .DESCRIPTION
    This script uses the Horizon soap api's to pull data about machines inside a desktop pool that are running on a snapshot or base vm that's not currently configured on the desktop pool. By default it logs off the users but there are options to forcefully log off the users or delete the machines.

    .EXAMPLE
    .\Horizon_cleanup_old_image.ps1 -Credential $creds -ConnectionServerFQDN pod2cbr1.loft.lab -poolname "Pod02 Pool02" -delete -preview

    .PARAMETER Credential
    Mandatory: No
    Type: PSCredential
    Object with credentials for the connection server with domain\username and password

    .PARAMETER ConnectionServerFQDN
    Mandatory: Yes
    Type: String
    FQDN of the connection server to connect to

    .PARAMETER Poolname
    Mandatory: Yes
    Type: string
    Display name of the Desktop Pool to check

    .PARAMETER Deletedesktops
    Mandatory: No
    Enables the deletion of the desktops, this includes an attempt to forcefully log off the users.

    .PARAMETER Forcedlogoff
    Mandatory: No
    Enables the forcefully logging off of the users.

    .PARAMETER Preview
    Mandatory: No
    Makes the script run in preview mode and not undertake any actions.

    .NOTES
    Created by: Wouter Kursten
    First version: 27-06-2021

    .COMPONENT
    VMWare PowerCLI

#>

[CmdletBinding()]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credential,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [parameter(Mandatory = $true,
    HelpMessage = "Display Name of the desktop pool to logoff the users.")]
    [string]$poolname = $false,

    [Parameter(Mandatory=$false, 
    HelpMessage='Deletes the desktops instead of forcing the logoff' )]
    [switch] $deletedesktops,

    [Parameter(Mandatory=$false, 
    HelpMessage='Gives a preview only, no action will be undertaken.' )]
    [switch] $preview,

    [Parameter(Mandatory=$false, 
    HelpMessage='Forcefully logs off the users in case the desktop is locked or disconnected.' )]
    [switch] $forcedlogoff
)

if($Credential){
    $creds = $credential
}
else{
    $creds = get-credential
}

$ErrorActionPreference = 'Stop'

# Preview info
if($preview){
    write-host "Running in preview mode no actions will be taken" -foregroundcolor Magenta
 }

 # Loading powercli modules
 Import-Module VMware.VimAutomation.HorizonView
 Import-Module VMware.VimAutomation.Core

$hvserver1=connect-hvserver $ConnectionServerFQDN -credential $creds

# --- Get Services for interacting with the Horizon API Service ---
$Services1= $hvServer1.ExtensionData

# --- Get Desktop pool
$poolqueryservice=new-object vmware.hv.queryserviceservice
$pooldefn = New-Object VMware.Hv.QueryDefinition
$pooldefn.queryentitytype='DesktopSummaryView'
$pooldefn.Filter= New-Object VMware.Hv.QueryFilterEquals -property @{'MemberName'='desktopSummaryData.displayName'; 'value'=$poolname}
try{
    $poolqueryResults = $poolqueryService.QueryService_Create($Services1, $pooldefn) 
    $poolqueryservice.QueryService_DeleteAll($services1)
    $results = $poolqueryResults.results
}
catch{
    write-error "There was an error retreiving details for $poolname"
}

# we need more details of the pool though and check if we even got one
if($results.count -eq 1){
    $pool = $Services1.Desktop.Desktop_Get($results.id)
}
else{
    write-host "No pool found with name $poolname" -foregroundcolor Red
    break
}

# Search for machine details
$queryservice=new-object vmware.hv.queryserviceservice
$defn = New-Object VMware.Hv.QueryDefinition
$defn.queryentitytype='MachineDetailsView'
$defn.filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'desktopData.id'; 'value' = $pool.id }
[array]$queryResults = $queryService.QueryService_Create($Services1, $defn)
$services1.QueryService.QueryService_DeleteAll()
# Process the results
if ($queryResults.results.count -ge 1){
    [array]$poolmachines=$queryResults.results
    [array]$wrongsnaps=$poolmachines | where-object {$_.managedmachinedetailsdata.baseimagesnapshotpath -notlike  $pool.automateddesktopdata.VirtualCenternamesdata.snapshotpath -OR $_.managedmachinedetailsdata.baseimagepath -notlike $pool.automateddesktopdata.VirtualCenternamesdata.parentvmpath}
    # If there are desktops on a wrong snapsot we need to do something with that info
    if($wrongsnaps.count -ge 1){
        if($deletedesktops){
            write-host "Removing:" $wrongsnaps.data.name -foregroundcolor yellow
            $deletespec = new-object vmware.hv.machinedeletespec
            $deletespec.DeleteFromDisk = $true
            $deletespec.ForceLogoffSession = $true
            if(!$preview){
                $Services1.Machine.Machine_DeleteMachines($wrongsnaps.id, $deletespec)
            }
        }
        else{
            write-host "Logging users off from:" $wrongsnaps.data.name -foregroundcolor yellow
            [array]$sessiondata = $wrongsnaps.sessiondata
            write-host "Users being logged off are:" $sessiondata.username -foregroundcolor yellow
            if(!$preview){
                if($forcedlogoff){
                    write-host "Forcefully logging off users" -foregroundcolor yellow
                    $services1.session.Session_LogoffSessionsForced($sessiondata.id)
                }
                else{
                    write-host "Gracefully logging off users" -foregroundcolor yellow
                    $services1.session.Session_LogoffSessions($sessiondata.id)
                }
            }
        }
    }
    else{
        write-host "No machines found on a wrong snapshot" -foregroundcolor Green
    }
}
else{
    write-host "No machines found in $poolname" -foregroundcolor red
}

 

vCheck for Horizon : small updates & running it (automatically) from ControlUp

Remark: as of the time I was writing this article the vCheck for Horizon SBA hasn’t been published yet. Contact me if you want me to send you the xml file.

Last week I made a few small updates to the vCheck for Horizon. The file with the encrypted password has been replaced by an xml file that holds all of the credentials and is usable directly for Horizon connections. I have been using this for my blog posts and demos for a while already. Also, I have renamed the security & composer plugins to .old by default, as those have been deprecated and I don’t want to have them active anymore. Further, I have done some small changes to other plugins.

Creating the xml file

Running the vCheck script from ControlUp

But but wasn’t the vCheck all about running it automagically?

Creating the xml file

This can easily be done like this:

$creds = get-credential
$creds | export-clixml filename.xml

As you might have seen during my demos, you can use this $creds object to make a connection:

$creds = import-clixml filename.xml
$server = connect-hvserver connectionserver.fq.dn -credential $creds

Make sure to save this file to a good location and edit the “00 Connection Plugin for View.ps1” file with the proper location. I haven’t completely removed the old ways yet; they’re still partially there.

Running the vCheck script from ControlUp

Before everything else: make sure the machine that will run the vCheck has the latest version of PowerCLI installed. Download it here and unpack the modules to “C:\Program Files\WindowsPowerShell\Modules” like this:
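
If the machine has internet access you can also skip the manual unpacking and pull PowerCLI straight from the PowerShell Gallery (this needs an elevated session so it lands in the AllUsers module path):

# Installs VMware PowerCLI for all users on the machine.
Install-Module -Name VMware.PowerCLI -Scope AllUsers -Force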

There are several things that I needed to do before I was able to run the vCheck from the ControlUp console or as an Automated Action (more on this later!). First I needed a way of providing credentials to the script. This is something I do using the Create Credentials for Horizon Scripts script action. If you already use some of the other Horizon-related SBAs or the Horizon Sync script you might already have this configured. What it essentially does is create an xml file, as described above, in a folder dedicated to ControlUp and usable by the service account that ran the SBA. More on this can be found here.

Next was a way to get the vCheck itself. I do this by downloading it from GitHub and unpacking the zip file to C:\ProgramData\ControlUp\ScriptSupport\vCheck-HorizonView-master. If downloads from the interwebz aren’t available you can download and unzip it yourself, just make sure it looks like this:
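
A rough manual equivalent of that download step could look like the sketch below; I’m assuming the master branch zip of the vCheck-HorizonView repository here, so adjust the URL if the repository lives elsewhere:

# Download the repository zip and unpack it; the zip extracts to a vCheck-HorizonView-master subfolder.
$zip = Join-Path $env:TEMP 'vCheck-HorizonView.zip'
Invoke-WebRequest -Uri 'https://github.com/vCheckReport/vCheck-HorizonView/archive/master.zip' -OutFile $zip
Expand-Archive -Path $zip -DestinationPath 'C:\ProgramData\ControlUp\ScriptSupport' -Force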

Next would be all the settings for email, connection server etcetera. This is all done through the Script Action arguments. What technically happens is that the Script Action removes both the globalvariables.ps1 and the “00 Connection Plugin for View.ps1” files and recreates them with those settings.

Once you have downloaded the vCheck you can run it manually, so right-click any random machine and select the vCheck for Horizon.

Then fill in all the details you need. If you set Send Email to false there is no need for the rest of the info.

Once you see this screen the script has completed. Make sure to read the last line for the correct status and where you saved the file to.

And on my d:\ I see the file (and some others I used for testing)

And even better I received it in my mail as well

But but wasn’t the vCheck all about running it automagically?

Well, yes it is, and you can do that starting with version 8.5 that we have announced today! For now it will need to run hourly, but in future releases we can also do this once a day. What you need to do for this is first change the defaults for the script action. Select that SBA and hit the modify button.

Under settings you can keep the execution context as console (this will use the monitor when you run it as an automated action) or select Other Machine to pick a scripting server. Also make sure to select a shared credential that has a Horizon credentials file created on this machine.

Now go to the arguments tab and edit the default settings to set them to what you need.

Before:

Editing the first one

and done, press ok after this.

Now click finalize so the monitors can also use them, go through all the steps (no need to share with the community) and make sure the ControlUp monitors have the permissions to run the SBA.

Next we go to Triggers > Add Trigger, select the new type, Scheduled, and click next.

Select the hourly schedule and a proper start and end time, and click next.

On the filter criteria make sure that you set the name to a single machine, otherwise the script will run for all machines in your environment. I recommend the same machine you used for the execution context. Keep in mind that the schedule functions just like any other trigger, so you need to filter properly for which machine it applies to; if you don’t, it will apply to all of them.

You can select the folder this machine resides in and/or select a schedule to run the script in.

At the follow-up actions click add and select Run an Action from the pull-down menu. Under script name select the vCheck for Horizon, click ok and next.

Make sure to give it a clear name and click finish.

The html should automatically end up in the export location and in your email.

 

The VMware Labs flings monthly for April 2021 – Easy deploy for AVI Loadbalancers & NSX Mobile!

So it feels like yesterday that I wrote the previous flings post, but April flew by and it’s almost summer, time for some more bad weather over here in The Netherlands. I see three new flings and ten that received an update. One of the new ones already has quite a long changelog with fixes!

New Releases

NSX Mobile

vRealize Orchestrator Parser

Easy Deploy for NSX Advanced Load Balancing

Updates

Virtual Machine Compute Optimizer

Community Networking Driver for ESXi

ESXi Arm Edition

vSphere HTML5 Web Client

Horizon Session Recording

vSphere Mobile Client

Workspace ONE Access Migration Tool

VMware OS Optimization Tool

SDDC Import/Export for VMware Cloud on AWS

VMware Event Broker Appliance

New Releases

[sta_anchor id=”nsxmob” /]

NSX Mobile

Are you an NSX admin? Do you spend a major part of your work monitoring the network and/or its security? Do you have the NSX web UI open on your laptop/desktop most of the day to make sure all the services are up and connectivity between systems is fine?

Carrying a laptop with you all the time can be quite a challenging task, especially in situations like the current pandemic. However, your smartphone will be on you most of the time. NSX Mobile brings the ease of monitoring the networking and security right from your phone.

NSX Mobile complements the full-fledged NSX-T web UI by providing monitoring capabilities on the go. If you find something wrong, you can use the conventional web UI or ask someone else to investigate the matter immediately. The focus of the app is to provide instant notifications when something goes wrong, alongside the ability to monitor the network & its security from a smartphone.

Features

  • Just install it & login with your NSX-T credentials to get started (make sure that the NSX version is 3.0+ and the NSX IP/domain name is reachable from your smartphone)
  • List and search all networking and security entities (e.g. Tier-0s, Network segments, Firewall Rules, etc.)
  • View alarms generated on NSX (e.g. CPU usage high OR a VPN failed to realize OR Intrusion detected in case if you have IDS firewall, etc.)
  • Push notifications – COMING SOON
  • Quick actions (enable/disable options wherever possible, actions on failures/alarms) – COMING SOON

[sta_anchor id=”vrop” /]

vRealize Orchestrator Parser

The vRealize Orchestrator Parser is a tool developed to extend the vRealize Build Tools Fling toolchain or to be used stand-alone with the Export Package to Folder option in native vRealize Orchestrator (vRO).

vRealize Orchestrator Parser parses vRO workflow XML files, extracts programming language code (JavaScript, Python, PowerShell, etc.) and stores it as discrete files that can then be checked into a source code control system and/or edited directly as discrete programming language source code in a traditional text-based source code editor, such as Visual Studio Code. These discrete files can also be consumed by other, third-party CI/CD systems like Maven and Jenkins. They can be edited, and they can be imported back into vRO workflow XML files. ‘Diffs’ and changes on the resulting code files are easily observed and tied to SCCS version numbers and releases, and can easily be merged and branched through normal software engineering development practices.

[sta_anchor id=”ednxcalb” /]

Easy Deploy for NSX Advanced Load Balancing

The Easy Deploy for NSX Advanced Load Balancer (formerly Avi Networks) fling is a virtual appliance that helps you deploy Avi in a handful of clicks! This enables you to leverage the power of a multi-cloud application services platform that includes load balancing, web application firewall, container ingress, and application analytics across any cloud. No extensive knowledge is required, as it’s meant to make demos, training and proof-of-concept (POC) setups easy!

Features

  • A familiar VMware Clarity User Interface
  • Automatically deploy an Avi Controller and Avi Service Engines
  • Seamless integration with your VMware Cloud on AWS environment (with AVS and GCVE support coming soon!)
  • Option to deploy sample app that leverages Avi load balancing

For more information, read this blog post: New Fling: Easy Deploy for NSX Load Balancing a.k.a EasyAvi

Changelog

1.2.5

  • New Avi release supported – 20.1.5

1.2.4

  • Fixed SDDC conflict – what if you want to redeploy on the same sddc with another EasyAvi
  • Fixes “Output link /avi at the end does not work” issue
  • Destroy.sh – avoid TF error when trying to delete CL
  • Fixed “Typo in the outputs Advanced pplication Private IP Address”
  • Check for vCenter API connectivity before starting TF
  • Hide the button “DFW – Update NSX exclusion list with SE(s)”
  • Hide Domain Name field in the UI

1.2.3

  • Changed getMypublic.sh by beforeTf.sh

1.2.2

  •  Fix typo in outputs

1.2.1

  • Hide Public IP in outputs if empty

1.2.0

  • MD5 Checksum
  • Remove cat sddc.json from logs
  • Auto Apply
  • Auto routing to Step 3

1.1.0

  • Minor fixes

1.0.0

  • First Release

Updates

[sta_anchor id=”vmco” /]

Virtual Machine Compute Optimizer

The Virtual Machine Compute Optimizer (VMCO) is a PowerShell script and module that uses the PowerCLI module to capture information about the hosts and VMs running in your vSphere environment, and reports back on whether the VMs are configured optimally based on the host CPU and memory. It will flag a VM as “TRUE” if it is optimized and “FALSE” if it is not. For non-optimized VMs, a recommendation is made that keeps the same number of vCPUs currently configured, with the optimal number of virtual cores and sockets.

Changelog

Version 3.0.0

  • Script will install or update the required modules (VMCO and PowerCLI). The script is now a single script that acts as the easy button to walk through the module installs, connecting to a vCenter, and exporting the results.

[sta_anchor id=”cndfesxi” /]

Community Networking Driver for ESXi

The Community Networking Driver for ESXi fling provides the user the ability to use network adapters that are usually not supported by ESXi.

Changelog

April 08, 2021 – v1.1

Net-Community-Driver_1.1.0.0-1vmw.700.1.0.15843807_17858744.zip
md5: 587d7d408184c90f6baf4204bb309171

  • Resolve issue when using Intel vPro which can cause ESXi PSOD

[sta_anchor id=”esxiae” /]

ESXi Arm Edition

ESXi on Arm-based systems, 'nuff said!

Changelog

April 02, 2021 – v1.3

Note: Upgrade is NOT possible, only fresh installation is supported. If you select “Preserve VMFS” option, you can re-register your existing Virtual Machines.

  • Improved hardware compatibility (various bug fixes/enhancements)
  • Add support for Experimental Ampere Altra (single socket systems only) (please see Requirements for more details)
  • ACPI support for virtual machines
  • NVMe and PVSCSI boot support in vEFI
  • Workaround for ISO boot on some Arm servers
  • Address VMM crash with newer guest OSes and Neoverse N1-based systems
  • Improved guest interrupt controller virtualization
  • Improved (skeletal) PMU virtualization
  • Improved big endian VM support

Build 17839012
VMware-VMvisor-Installer-7.0.0-17839012.aarch64.iso

[sta_anchor id=”vhtml5wc” /]

vSphere HTML5 Web Client

Even though we have had the HTML5 web client around for a while, they’re still using the HTML5 fling to test some new features!

Changelog

Fling 5.0 – build 15670023

Updated the instructions with the new location of some files and services for the HTML5 client fling v6.pdf

[sta_anchor id=”hsr” /]

Horizon Session Recording

The Horizon Session Recording tool allows for (on-demand) recording of Horizon sessions.

Changelog

Version 2.2.5

  • Added support for > Horizon 8.1

[sta_anchor id=”vmobc” /]

vSphere Mobile Client

Personally I don’t have a use for it but I do like the idea of being able to manage my vSphere from a mobile device using the vSphere Mobile Client.

Changelog

Version 2.2.0 Update:

New features:

  • Add filtering by severity options for Alarm and Events
  • Add windows key button in the virtual console keyboard for key combos

Improvements:

  • Improve VM console stability on device rotation
  • Add missing back button on the login pages
  • Update app logo icon and splash screens

Version 2.1.0 Update:

Improvements:

  • Compatibility with some ESXi versions using certain licenses has been improved. Operations should now work against those hosts.

Version 2.0.0 Update:

New features:

  • Introduction of VMware Cloud with VMware on AWS. Access your cloud vCenter servers from within the mobile app.
  • VM details page: navigation to related objects now possible

Improvements:

  • Virtual Machine details page now loads faster when it’s powered off
  • Fixed an issue where the app would show two spinners when navigating between views

[sta_anchor id=”wsoneamt” /]

Workspace ONE Access Migration Tool

Workspace ONE Access Migration Tool helps ease migration of apps from one WS1 Access tenant to another (on-premises to SaaS or SaaS to SaaS) and use cases that require mirroring one tenant to another (for setting up UAT from PROD or vice versa) by providing the capabilities listed below.

Changelog

Version 1.0.0.24

  • Migrate App Entitlements (groups only)
  • New Logo and UI branding
  • Bug fixes

[sta_anchor id=”osot” /]

VMware OS Optimization Tool

Optimize you must, use you should OSOT!

Changelog

April 5, b2003

  • Resolved bug where Windows Store Apps were being removed even though they were being selected to be kept. This included changing the filter condition for Remove All Windows built-in apps.

[sta_anchor id=”sddciefvcoaws” /]

SDDC Import/Export for VMware Cloud on AWS

The SDDC Import/Export for VMware Cloud on AWS tool enables you to save and restore your VMware Cloud on AWS (VMC) Software-Defined Data Center (SDDC) networking and security configuration.

Changelog

Version 1.4

  • New feature – on-prem NSX-T DFW configuration export, import into VMC on AWS
  • New feature – on-prem vCenter folder structure export, import into VMC on AWS
  • New feature – Indented JSON output for easier reading
  • Bugfix – Bumped minimum Python version to the actual requirement of 3.6
  • Bugfix – Fixed issue where the exception block of a try/except on GET calls errored

[sta_anchor id=”veba” /]

VMware Event Broker Appliance

The VMware Event Broker Appliance (VEBA) Fling enables customers to unlock the hidden potential of events in their SDDC to easily create event-driven automation based on vCenter Server Events, and take vCenter Server Events to the next level! Extending vSphere by easily triggering custom or prebuilt actions to deliver powerful integrations within your datacenter and across public cloud has never been easier.

Changelog

https://www.virtuallyghetto.com/2021/04/vmware-event-broker-appliance-veba-v0-6-is-now-available.html

Creating an RDS farm using the Python module for VMware Horizon

One of the goals and hopes I had with my 100DaysOfCode challenge (I am writing this on day 100!) was that the Horizon REST APIs to create desktop pools and RDS farms would be available at the end. Only half of that came true: with Horizon 8 2103 we can finally create an RDS farm using those REST APIs. I have decided to add this to the Python module based on a dictionary that the user sends to the new_farm method. I could still add a fully fledged function, but that would require a lot of arguments; using **kwargs is an option, but then the user would still need to find out what to use.

First I need to know what JSON data I actually need, so let’s have a look at the API Explorer page to get a grip on this:

{
  "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
  "automated_farm_settings": {
    "customization_settings": {
      "ad_container_rdn": "CN=Computers",
      "cloneprep_customization_settings": {
        "post_synchronization_script_name": "cloneprep_postsync_script",
        "post_synchronization_script_parameters": "p1 p2 p3",
        "power_off_script_name": "cloneprep_poweroff_script",
        "power_off_script_parameters": "p1 p2 p3",
        "priming_computer_account": "a219420d-4799-4517-8f78-39c74c7c4efc"
      },
      "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "reuse_pre_existing_accounts": false
    },
    "enable_provisioning": true,
    "max_session_type": "LIMITED",
    "max_sessions": 50,
    "min_ready_vms": 0,
    "nics": [
      {
        "network_interface_card_id": "c9896e51-48a2-4d82-ae9e-a0246981b473",
        "network_label_assignment_specs": [
          {
            "enabled": true,
            "max_label": 1,
            "max_label_type": "LIMITED",
            "network_label_name": "vm-network"
          }
        ]
      }
    ],
    "pattern_naming_settings": {
      "max_number_of_rds_servers": 5,
      "naming_pattern": "vm-{n}-sales"
    },
    "provisioning_settings": {
      "base_snapshot_id": "snapshot-1",
      "datacenter_id": "datacenter-1",
      "host_or_cluster_id": "domain-s425",
      "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "im_tag_id": "3d45b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "parent_vm_id": "vm-2",
      "resource_pool_id": "resgroup-1",
      "vm_folder_id": "group-v1"
    },
    "stop_provisioning_on_error": true,
    "storage_settings": {
      "datastores": [
        {
          "datastore_id": "datastore-1"
        }
      ],
      "replica_disk_datastore_id": "datastore-1",
      "use_separate_datastores_replica_and_os_disks": false,
      "use_view_storage_accelerator": false,
      "use_vsan": false
    },
    "transparent_page_sharing_scope": "VM",
    "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
  },
  "description": "Farm Description",
  "display_name": "ManualFarm",
  "display_protocol_settings": {
    "allow_users_to_choose_protocol": true,
    "default_display_protocol": "PCOIP",
    "grid_vgpus_enabled": true,
    "session_collaboration_enabled": false
  },
  "enabled": true,
  "load_balancer_settings": {
    "cpu_threshold": 10,
    "disk_queue_length_threshold": 15,
    "disk_read_latency_threshold": 10,
    "disk_write_latency_threshold": 15,
    "include_session_count": true,
    "memory_threshold": 10
  },
  "name": "ManualFarm",
  "rds_server_ids": [
    "5134796a-322g-5fe5-343f-4daa5d25ebfe",
    "2a43f96c-102b-4ed3-953f-35deg43d43b0ge"
  ],
  "server_error_threshold": 0,
  "session_settings": {
    "disconnected_session_timeout_minutes": 5,
    "disconnected_session_timeout_policy": "NEVER",
    "empty_session_timeout_minutes": 5,
    "empty_session_timeout_policy": "AFTER",
    "logoff_after_timeout": false,
    "pre_launch_session_timeout_minutes": 10,
    "pre_launch_session_timeout_policy": "AFTER"
  },
  "type": "MANUAL",
  "use_custom_script_for_load_balancing": false
}

This also includes some settings that are not required, so for my own farm I settled on the following JSON. This is for an Instant Clone farm.

{
    "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
    "automated_farm_settings": {
        "customization_settings": {
            "ad_container_rdn": "OU=Pod1,OU=RDS,OU=VMware,OU=EUC",
            "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
            "reuse_pre_existing_accounts": true
        },
        "enable_provisioning": false,
        "max_session_type": "LIMITED",
        "max_sessions": 50,
        "min_ready_vms": 1,
        "pattern_naming_settings": {
            "max_number_of_rds_servers": 2,
            "naming_pattern": "vm-{n}-sales"
        },
        "provisioning_settings": {
            "base_snapshot_id": "snapshot-1",
            "datacenter_id": "datacenter-1",
            "host_or_cluster_id": "domain-s425",
            "parent_vm_id": "vm-2",
            "resource_pool_id": "resgroup-1",
            "vm_folder_id": "group-v1"
        },
        "stop_provisioning_on_error": true,
        "storage_settings": {
            "datastores": [
                {
                    "datastore_id": "datastore-1"
                }
            ],
            "use_separate_datastores_replica_and_os_disks": false,
            "use_view_storage_accelerator": false,
            "use_vsan": false
        },
        "transparent_page_sharing_scope": "VM",
        "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
    },
    "description": "demo_farm",
    "display_name": "demo_farm",
    "display_protocol_settings": {
        "allow_users_to_choose_protocol": true,
        "default_display_protocol": "BLAST",
        "grid_vgpus_enabled": false,
        "session_collaboration_enabled": true
    },
    "enabled": false,
    "load_balancer_settings": {
        "cpu_threshold": 10,
        "disk_queue_length_threshold": 15,
        "disk_read_latency_threshold": 10,
        "disk_write_latency_threshold": 15,
        "include_session_count": true,
        "memory_threshold": 10
    },
    "name": "demo_farm",
    "server_error_threshold": 0,
    "session_settings": {
        "disconnected_session_timeout_minutes": 5,
        "disconnected_session_timeout_policy": "NEVER",
        "empty_session_timeout_minutes": 5,
        "empty_session_timeout_policy": "AFTER",
        "logoff_after_timeout": false,
        "pre_launch_session_timeout_minutes": 10,
        "pre_launch_session_timeout_policy": "AFTER"
    },
    "type": "AUTOMATED",
    "use_custom_script_for_load_balancing": false
}

As said, I send a dictionary to the method, so let’s import the data into a dict called data and print it to screen. The dictionary needs to follow this specific order of keys, which is why starting from a JSON file is very useful.

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)
print(data)

As you can see in both the JSON and the output, there are a lot of things we can change and some things that we need to change, like the IDs for all the components such as vCenter, base VM, base snapshot and more. First I need the access_group_id; this can be retrieved using the get_local_access_groups method. For each of these lookups I will also set the corresponding value in the dictionary.

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
data["access_group_id"] = local_access_group["id"]
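
One thing to keep in mind with this next() pattern: if nothing matches the name, it raises a bare StopIteration. A minimal sketch of a friendlier variant, giving next() a default value and checking it explicitly, could look like this:

# Variant of the lookup above: pass None as a default so a missing
# access group produces a clear error instead of a bare StopIteration.
local_access_group = next(
    (item for item in config.get_local_access_groups() if item["name"] == "Root"),
    None,
)
if local_access_group is None:
    raise ValueError("No local access group named 'Root' was found")
data["access_group_id"] = local_access_group["id"]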

Then it’s time for the Instant Clone admin account ID:

ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]

For the base VM and snapshot IDs I used the same method, but a bit differently as I had already used it in another script:

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid

Host or cluster id

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]

Resource Pool

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]

The VM folder is again a bit different, as I have to get the ID from one of the children objects:

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]

The datacenter and vCenter IDs I already had to grab for the base VM and base snapshot, so I can just add them:

data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["vcenter_id"] = vcid

Datastores are a bit more funky as there can be multiple, so I needed to create a list first and then populate it based on the names of the datastores I have:

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list

For my final script I put them in a slightly different order and decided to change a whole lot more options, but if you have your JSON perfected this shouldn’t always be required. Also note that where the JSON uses true/false, in Python I use True/False.

import requests, getpass, urllib, json

import vmware_horizon

requests.packages.urllib3.disable_warnings()
url = input("URL\n")

username = input("Username\n")

domain = input("Domain\n")

pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")

monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
config=vmware_horizon.Config(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")

data["access_group_id"] = local_access_group["id"]
data["automated_farm_settings"]["customization_settings"]["ad_container_rdn"] = "OU=Pod1,OU=RDS,OU=VMware,OU=EUC"
data["automated_farm_settings"]["customization_settings"]["reuse_pre_existing_accounts"] = True
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]
data["automated_farm_settings"]["enable_provisioning"] = False
data["automated_farm_settings"]["max_sessions"] = 50
data["automated_farm_settings"]["min_ready_vms"] = 3
data["automated_farm_settings"]["pattern_naming_settings"]["max_number_of_rds_servers"] = 4
data["automated_farm_settings"]["pattern_naming_settings"]["naming_pattern"] = "farmdemo-{n:fixed=3}"
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]
data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["stop_provisioning_on_error"] = True
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list
data["automated_farm_settings"]["transparent_page_sharing_scope"] = "GLOBAL"
data["automated_farm_settings"]["vcenter_id"] = vcid
data["description"] = "Python_demo_farm"
data["display_name"] = "Python_demo_farm"
data["display_protocol_settings"]["allow_users_to_choose_protocol"] = True
data["display_protocol_settings"]["default_display_protocol"] = "BLAST"
data["display_protocol_settings"]["session_collaboration_enabled"] = True
data["enabled"] = False
data["load_balancer_settings"]["cpu_threshold"] = 12
data["load_balancer_settings"]["disk_queue_length_threshold"] = 16
data["load_balancer_settings"]["disk_read_latency_threshold"] = 12
data["load_balancer_settings"]["disk_write_latency_threshold"] = 16
data["load_balancer_settings"]["include_session_count"] = True
data["load_balancer_settings"]["memory_threshold"] = 12
data["name"] = "Python_demo_farm"
data["session_settings"]["disconnected_session_timeout_minutes"] = 5
data["session_settings"]["disconnected_session_timeout_policy"] = "NEVER"
data["session_settings"]["empty_session_timeout_minutes"] = 6
data["session_settings"]["empty_session_timeout_policy"] = "AFTER"
data["session_settings"]["logoff_after_timeout"] = False
data["session_settings"]["pre_launch_session_timeout_minutes"] = 12
data["session_settings"]["pre_launch_session_timeout_policy"] = "AFTER"
data["type"] = "AUTOMATED"

inventory.new_farm(farm_data=data)

end=hvconnectionobj.hv_disconnect()
print(end)

How does this look? Actually, you don’t see a lot happening, but the farm will have been created.
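
If you want to double-check from Python itself, a minimal sketch could look like the snippet below. I am assuming here that the Inventory class also exposes a get_farms() method that returns the farms as dictionaries with a "name" key (check the module source for the exact name and return format):

# Assumption: inventory.get_farms() returns a list of farm dicts with a "name" key.
farms = inventory.get_farms()
print([farm["name"] for farm in farms if farm["name"] == "Python_demo_farm"])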

As always, the script can be found on my GitHub in the examples folder, together with the JSON file.

With this I am closing my 100DaysOfCode challenge, but I pledge to keep maintaining the Python module and will extend it when new REST API calls arrive for VMware Horizon.