[REST] Pushing a new image Horizon 8 2206 style

One of the new features in Horizon 8 2206 is that you can select a new number of CPUs and a new amount of memory when deploying a new image.

If you know me a bit you might understand that I want to know how we can do this using the APIs. As we’ve seen before we needed to do a POST against /inventory/v1/desktop-pools/{id}/action/schedule-push-image; with build 2206 this was changed to /inventory/v2/desktop-pools/{id}/action/schedule-push-image.

Let’s compare the content of the body that we need to send.

v1:

{
  "add_virtual_tpm": false,
  "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
  "im_tag_id": "0103796c-102b-4ed3-953f-3dfe3d23e0fe",
  "logoff_policy": "WAIT_FOR_LOGOFF",
  "parent_vm_id": "vm-1",
  "snapshot_id": "snapshot-1",
  "start_time": 1587081283000,
  "stop_on_first_error": true
}

v2:

{
  "add_virtual_tpm": false,
  "compute_profile_num_cores_per_socket": 1,
  "compute_profile_num_cpus": 4,
  "compute_profile_ram_mb": 4096,
  "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
  "im_tag_id": "0103796c-102b-4ed3-953f-3dfe3d23e0fe",
  "logoff_policy": "WAIT_FOR_LOGOFF",
  "machine_ids": [
    "816d44cb-b486-3c97-adcb-cf3806d53657",
    "414927f3-1a3b-3e4c-81b3-d39602f634dc"
  ],
  "parent_vm_id": "vm-1",
  "selective_push_image": true,
  "snapshot_id": "snapshot-1",
  "start_time": 1587081283000,
  "stop_on_first_error": true
}

So besides the CPU/memory changes we obviously also have something called selective_push_image. This has to do with the secondary image functionality that was added in Horizon 2111. While the example shows it as true, the data model makes clear that it is not required and defaults to false. The machine_ids array reflects the list of machines the secondary image has to be applied to.

compute_profile_num_cores_per_socket	integer($int32)
example: 1
minimum: 1
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the number of cores per socket for the CPU in the compute profile to be configured on clones.
If set, both compute_profile_num_cpus and compute_profile_ram_mb need to be set.

compute_profile_num_cpus	integer($int32)
example: 4
minimum: 1
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the number of CPUs in the compute profile to be configured on clones.
If set, this must be a multiple of compute_profile_num_cores_per_socket.

compute_profile_ram_mb	integer($int32)
example: 4096
minimum: 1024
exclusiveMinimum: false
exclusiveMaximum: false
Indicates the RAM in MB in the compute profile to be configured on clones.

machine_ids	array[string]
example: List [ "816d44cb-b486-3c97-adcb-cf3806d53657", "414927f3-1a3b-3e4c-81b3-d39602f634dc" ]
Set of machines from the desktop pool on which the new image is to be applied. This can be set when selective_push_image is set to true.

selective_push_image	boolean
example: true
Indicates whether selective push image is to be applied. If set to true, the new image will be applied to specified machine_ids in the desktop pool. The image published with this option will be held as a pending image, unless it is promoted or cancelled. The default value is false.
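One constraint from the data model above is easy to miss: compute_profile_num_cpus must be a multiple of compute_profile_num_cores_per_socket. The script below only checks that all three compute profile arguments are supplied together, so a minimal sketch of that extra check (hypothetical, reusing the script's parameter names) would be:

# Hypothetical pre-flight check matching the data model constraint
if($CPUs % $CoresPerSocket -ne 0){
    throw "CPUs needs to be a multiple of CoresPerSocket."
}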

To be able to use this I have updated my previous image deployment script for Horizon 2206.

New arguments are:

  • AddVirtualTPM
    • Boolean to add a virtual TPM or not
  • SecondaryImage
    • Boolean to define the image as secondary (required if you also supply machine_ids)
  • Machine_Ids
    • Array with machine_ids to supply the secondary image to
  • CoresPerSocket
    • Int with the number of cores per socket
  • CPUs
    • Int with the total number of CPUs
  • MemoryinMB
    • Int with the new amount of memory in MB, 4096 or 6192 for example

<#
    .SYNOPSIS
    Pushes a new Golden Image to a Desktop Pool

    .DESCRIPTION
    This script uses the Horizon rest api's to push a new golden image to a VMware Horizon Desktop Pool

    .EXAMPLE
    .\Horizon_Rest_Push_Image.ps1 -ConnectionServerURL https://pod1cbr1.loft.lab -Credentials $creds -vCenterURL "https://pod1vcr1.loft.lab" -DataCenterName "Datacenter_Loft" -baseVMName "W21h1-2021-09-08-15-48" -BaseSnapShotName "Demo Snapshot" -DesktopPoolName "Pod01-Pool02"

    .PARAMETER Credential
    Mandatory: No
    Type: PSCredential
    Object with credentials for the connection server with domain\username and password. If not supplied the script will ask for user and password.

    .PARAMETER ConnectionServerURL
    Mandatory: Yes
    Type: String
    URL of the connection server to connect to

    .PARAMETER vCenterURL
    Mandatory: Yes
    URL of the vCenter server the golden image lives in

    .PARAMETER DataCenterName
    Mandatory: Yes
    Name of the datacenter to look in

    .PARAMETER BaseVMName
    Mandatory: Yes
    Name of the golden image VM

    .PARAMETER BaseSnapShotName
    Mandatory: Yes
    Name of the snapshot to use for the golden image

    .PARAMETER DesktopPoolName
    Mandatory: Yes
    Name of the desktop pool to push the image to

    .PARAMETER StoponError
    Mandatory: No
    Boolean to stop on error or not

    .PARAMETER logoff_policy
    Mandatory: No
    String FORCE_LOGOFF or WAIT_FOR_LOGOFF to set the logoff policy.

    .PARAMETER Scheduledtime
    Mandatory: No
    Time to schedule the image push in [DateTime] format.

    .PARAMETER AddVirtualTPM
    Mandatory: No
    Default: $False
    Boolean to add a virtual TPM to the golden image or not.

    .PARAMETER SecondaryImage
    Mandatory: No (Yes if Machine_Ids is supplied)
    Default: $False
    Boolean to push the image as a secondary image.

    .PARAMETER Machine_Ids
    Mandatory: No
    Array of machine_ids to apply the secondary image to.

    .PARAMETER CoresPerSocket
    Mandatory: No (unless CPUs or MemoryinMB is supplied)
    Int with the number of cores per socket.

    .PARAMETER CPUs
    Mandatory: No (unless MemoryinMB or CoresPerSocket is supplied)
    Int with the total number of CPUs.

    .PARAMETER MemoryinMB
    Mandatory: No (unless CPUs or CoresPerSocket is supplied)
    Int with the new amount of memory in MB

    .NOTES
    Minimum required version: VMware Horizon 8 2206
    Created by: Wouter Kursten
    First version: 03-11-2021
    Changes: 05-09-2022 -   Added resizing of cpu/memory
                        -   Added secondary image functionality
                        -   Added option to add Virtual TPM


    .COMPONENT
    Powershell Core
#>

[CmdletBinding(DefaultParameterSetName = 'Generic')]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credentials,

    [Parameter(Mandatory=$true,
    HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerURL,

    [parameter(Mandatory = $true,
    HelpMessage = "URL of the vCenter to look in i.e. https://vcenter.domain.lab")]
    [ValidateNotNullOrEmpty()]
    [string]$vCenterURL,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Datacenter to look in.")]
    [ValidateNotNullOrEmpty()]
    [string]$DataCenterName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Golden Image VM.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseVMName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Snapshot to use for the Golden Image.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseSnapShotName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Desktop Pool.")]
    [ValidateNotNullOrEmpty()]
    [string]$DesktopPoolName,

    [parameter(Mandatory = $false,
    HelpMessage = "True or false for stop on error.")]
    [ValidateNotNullOrEmpty()]
    [bool]$StoponError = $true,

    [parameter(Mandatory = $false,
    HelpMessage = "Use WAIT_FOR_LOGOFF or FORCE_LOGOFF.")]
    [ValidateSet('WAIT_FOR_LOGOFF','FORCE_LOGOFF', IgnoreCase = $false)]
    [string]$logoff_policy = "WAIT_FOR_LOGOFF",

    [parameter(Mandatory = $false,
    HelpMessage = "DateTime object for the moment of scheduling the image push.Defaults to immediately")]
    [datetime]$Scheduledtime,

    [parameter(Mandatory = $false,
    HelpMessage = "Bool for adding a Virtual TPM or not.")]
    [ValidateNotNullOrEmpty()]
    [bool]$AddVirtualTPM = $False,

    [parameter(Mandatory = $false,
    HelpMessage = "True or false to set this image as secondary image.")]
    [ValidateNotNullOrEmpty()]
    [bool]$SecondaryImage = $False,

    [parameter(Mandatory = $false,
    HelpMessage = "Array of machine_ids to apply the secondary image to.")]
    [ValidateNotNullOrEmpty()]
    [array]$Machine_Ids,

    [parameter(Mandatory = $false,
    HelpMessage = "New Number of cores per socket.")]
    [ValidateNotNullOrEmpty()]
    [int]$CoresPerSocket,

    [parameter(Mandatory = $false,
    HelpMessage = "New Number of CPU's.")]
    [ValidateNotNullOrEmpty()]
    [int]$CPUs,

    [parameter(Mandatory = $false,
    HelpMessage = "New amount of memory in MB.")]
    [ValidateNotNullOrEmpty()]
    [int]$MemoryinMB
)

if($Credentials){
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}
else{
    $credentials = Get-Credential
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}

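# Convert the SecureString password back to plain text for the REST login body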
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($password) 
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

function Get-HRHeader(){
    param($accessToken)
    return @{
        'Authorization' = 'Bearer ' + $($accessToken.access_token)
        'Content-Type' = "application/json"
    }
}
function Open-HRConnection(){
    param(
        [string] $username,
        [string] $password,
        [string] $domain,
        [string] $url
    )

    $Credentials = New-Object psobject -Property @{
        username = $username
        password = $password
        domain = $domain
    }

    return invoke-restmethod -Method Post -uri "$ConnectionServerURL/rest/login" -ContentType "application/json" -Body ($Credentials | ConvertTo-Json)
}
function Close-HRConnection(){
    param(
        $accessToken,
        $ConnectionServerURL
    )
    return Invoke-RestMethod -Method post -uri "$ConnectionServerURL/rest/logout" -ContentType "application/json" -Body ($accessToken | ConvertTo-Json)
}

if($CPUs -AND $CoresPerSocket -AND $MemoryinMB){
    $resize = $true
}
elseif($CPUs -OR $CoresPerSocket -OR $MemoryinMB){
    throw "If either CPUs, CoresPerSOcket or MemoryinGB is supplied, all must be supplied."
}
else{
    $resize = $false
}

if($Machine_Ids -AND !($SecondaryImage)){
    throw "If either Machine_Ids is supplied SecondaryImage also needs to be supplied."
}


try{
    $accessToken = Open-HRConnection -username $username -password $UnsecurePassword -domain $Domain -url $ConnectionServerURL
}
catch{
    throw "Error Connecting: $_"
}

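# Resolve the vCenter, datacenter, golden image, snapshot and desktop pool names to their ids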
$vCenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/monitor/v2/virtual-centers" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$vcenterid = ($vCenters | where-object {$_.name -like "*$vCenterURL*"}).id
$datacenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/datacenters?vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$datacenterid = ($datacenters | where-object {$_.name -eq $DataCenterName}).id
$basevms = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-vms?datacenter_id=$datacenterid&filter_incompatible_vms=false&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basevmid = ($basevms | where-object {$_.name -eq $baseVMName}).id
$basesnapshots = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-snapshots?base_vm_id=$basevmid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basesnapshotid = ($basesnapshots | where-object {$_.name -eq $BaseSnapShotName}).id
$desktoppools = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/inventory/v1/desktop-pools" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$desktoppoolid = ($desktoppools | where-object {$_.name -eq $DesktopPoolName}).id

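# Build the request body as an ordered hashtable; optional keys are only added when the matching arguments were supplied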
$datahashtable = [ordered]@{}
$datahashtable.add('add_virtual_tpm',$AddVirtualTPM)
if($resize){
    $datahashtable.add('compute_profile_num_cores_per_socket',$CoresPerSocket)
    $datahashtable.add('compute_profile_num_cpus',$CPUs)
    $datahashtable.add('compute_profile_ram_mb',$MemoryinMB)
}
$datahashtable.add('logoff_policy',$logoff_policy)
if($Machine_Ids){
    $datahashtable.add('machine_ids',$Machine_Ids)
}
$datahashtable.add('parent_vm_id',$basevmid)
if($SecondaryImage){
    $datahashtable.add('selective_push_image',$SecondaryImage)
}
$datahashtable.add('snapshot_id',$basesnapshotid)
if($Scheduledtime){
    $starttime = get-date $Scheduledtime
    $epoch = ([DateTimeOffset]$starttime).ToUnixTimeMilliseconds()
    $datahashtable.add('start_time',$epoch)
}
$datahashtable.add('stop_on_first_error',$StoponError)
$json = $datahashtable | convertto-json

Invoke-RestMethod -Method Post -uri "$ConnectionServerURL/rest/inventory/v2/desktop-pools/$desktoppoolid/action/schedule-push-image" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken) -body $json

As always the script is available on Github.

Usage:

I am using all the new arguments except the virtual TPM one.

D:\GIT\Various_Scripts\Horizon_Rest_Push_Image_VDI_2206.ps1 -ConnectionServerURL https://pod1cbr1.loft.lab -Credentials $creds -vCenterURL "https://pod1vcr1.loft.lab" -DataCenterName "Datacenter_Loft" -baseVMName "W21h1-gi-2022-08-19-09-36" -BaseSnapShotName "VM Snapshot 9%2f5%2f2022, 6:50:12 PM" -DesktopPoolName "Pod01-Pool03" -CPUs 2 -CoresPerSocket 1 -MemoryinMB 4096 -SecondaryImage $true -Machine_Ids $array
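The $array with machine ids in that last argument I had built beforehand. A minimal sketch of one way to gather those ids, assuming the /rest/inventory/v1/machines endpoint returns the machines with their desktop_pool_id and reusing $accessToken, $desktoppoolid and Get-HRHeader from the script:

# Hypothetical: list all machines and keep the ids that belong to our desktop pool
$machines = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/inventory/v1/machines" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$array = ($machines | Where-Object {$_.desktop_pool_id -eq $desktoppoolid}).id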

My recipe for a successful vExpert application

This was also posted on the vExpert blog here.

One of the questions that the vExpert PROs get is what people need to add to their vExpert application for it to be successful. While the things we do during the year are different for everyone, the first thing you need to do is write everything down as extensively as possible. I will use my own current application as an example of how you can write everything down. There is no single truth to creating your application, so see this as inspiration for creating your own.

One remark before you start editing or creating your application: make sure to save as often as possible. The vExpert portal seems to have a timeout and you wouldn’t be the first person where it times out, the progress hasn’t been saved and is thus lost. I prefer to keep track of everything in a Word document.

Preparation:

  • During the year, keep track of your activities; this way you won’t forget anything.

(Required) Ingredients for this recipe:

  • Qualification path
  • Write down activities
  • Add URLs to activities if available
  • Be as extensive as possible
  • Work with a vExpert PRO

Step 1, Qualification path:

You need to select a qualification path. I could try as a partner as I work for ControlUp, but I always use evangelist as all of my work in the community is done on my own credentials. VCDXs are automatically vExperts, but these are always checked against the VCDX directory so there’s no way to cheat there.

Step 2, Checkboxes:

If you created content make sure to select all the relevant checkboxes.

Step 3 Other Media:

Give examples of the content you have created. There is no need to list every blog post, but I always add links to the ones that I think are the most useful for the community. While I don’t consider view counts relevant, I do add the total number of posts and views.

Step 3 Events and speaking:

Been an (online) event speaker or podcast guest? You can list those here; if possible add a link as proof. For VMUG events I know a lot of the old links give a 404 these days, so make sure they still work at the moment of submitting. If you can’t get a direct link to the event, maybe you can find tweets about you presenting, so add those (I would advise mentioning why you did that, though). If you are an organizer of something, like I am with the EMEA Breakfast events, those can also be listed here. I won’t hold it against you when voting if you don’t have an attendance count, but if you have it, that’s even better.

<I would seriously consider hitting that save button here>

Step 4 Communities, Tools and Resources:

Tools & resources: Here is where you can link that nifty github repo where you share all your scripts. I for example list my Personal github profile but also the links to the vCheck for Horizon and the python module for Horizon as those are the main projects I worked on.

Communities: list each and every related community that you are active in; this includes Discord channels, Slack channels, forums etc., even if they are not in English. Make sure to add a link to your profile or post history so the people who are voting can easily find your activity. If there’s no link available you can at least post your username.

Step 5, VMware Programs:

This is where you can list all your vExpert titles but also things like VMware Champions, VMware{Code} CodeCoach, beta programs, work as a certification SME, Tanzu Heroes, VMware Influencers and whatnot.

<Hit that save button again>

Step 6 Other Activities:

Here’s where you can list the things that aren’t publicly available, so it can be very relevant for customers & partners. I list the work that I do internally at ControlUp, even if it is related to my job.

Step 7, References:

If you have a VMware employee that you worked a lot with and who can vouch for you, make sure to check that box and list their email address. The same applies to the vExpert PRO that you worked with.

You don’t need to be a community crazy person like I am to become a vExpert, but as said, the recipe for a successful application is to write everything down as extensively as possible. Do you have something that you don’t know whether it helps? Just add it; it might be the thing that completes the picture for the PROs when we start voting.

If you want to reach out to a vExpert PRO you can find the directory here, and there are also Facebook and LinkedIn groups.

Creating desktop pools using the Horizon REST Api with Powershell

For years people have been asking: when can we start creating Horizon desktop pools using REST? For the first years the answer was that there was no REST API at all, and since VMware added REST in Horizon 7.10 the answer became: whenever VMware feels like adding this option. With the recent release of Horizon 8 2111 they have finally added the option to do this. I have created a script that uses a json file as its base; I grabbed this json from the API explorer, but the sample below is already edited down so I can create a simple desktop pool. You can grab my version below or here on Github.

{
  "access_group_id": "daef8c91-fa65-4a39-93aa-24d87d5aca94",
  "allow_multiple_user_assignments": false,
  "customization_settings": {
    "ad_container_rdn": "OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC",
    "customization_type": "CLONE_PREP",
    "do_not_power_on_vms_after_creation": false,
    "instant_clone_domain_account_id": "9f83ed94-4d44-4c9a-abd3-ffc23e7389de",
    "reuse_pre_existing_accounts": true
  },
  "description": "Desktop pool description",
  "display_assigned_machine_name": false,
  "display_machine_alias": false,
  "display_name": "instantclonepool",
  "display_protocol_settings": {
    "allow_users_to_choose_protocol": true,
    "default_display_protocol": "BLAST",
    "grid_vgpus_enabled": false,
    "renderer3d": "MANAGE_BY_VSPHERE_CLIENT",
    "session_collaboration_enabled": true
  },
  "enable_client_restrictions": false,
  "enable_provisioning": false,
  "enabled": false,
  "name": "demo_pool",
  "naming_method": "PATTERN",
  "pattern_naming_settings": {
    "max_number_of_machines": 5,
    "naming_pattern": "Pool-{n:fixed=2}",
    "number_of_spare_machines": 1,
    "provisioning_time": "UP_FRONT"
  },
  "provisioning_settings": {
    "add_virtual_tpm": false,
    "base_snapshot_id": "snapshot-1",
    "datacenter_id": "datacenter-1",
    "host_or_cluster_id": "domain-s425",
    "parent_vm_id": "vm-2",
    "resource_pool_id": "resgroup-1",
    "vm_folder_id": "group-v1"
  },
  "session_settings": {
    "allow_multiple_sessions_per_user": false,
    "allow_users_to_reset_machines": true,
    "delete_or_refresh_machine_after_logoff": "DELETE",
    "disconnected_session_timeout_minutes": 5,
    "disconnected_session_timeout_policy": "AFTER",
    "logoff_after_timeout": false,
    "power_policy": "ALWAYS_POWERED_ON",
  },
  "session_type": "DESKTOP",
  "source": "INSTANT_CLONE",
  "stop_provisioning_on_error": false,
  "storage_settings": {
    "datastores": [
      {
        "datastore_id": "datastore-1",
        "sdrs_cluster": false
      },
      {
        "datastore_id": "datastore-1",
        "sdrs_cluster": false
      }
    ],
    "reclaim_vm_disk_space": false,
    "use_separate_datastores_replica_and_os_disks": false,
    "use_vsan": false
  },
  "transparent_page_sharing_scope": "VM",
  "type": "AUTOMATED",
  "user_assignment": "FLOATING",
  "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
}

Essentially, using only the json file could be enough to create a pool, but I decided to create a powershell script that uses the json as a base and adds functionality to make it more flexible. You can supply names for things like the datacenter, desktop pool name, display name, description & many others. If you want you can hardcode these names or even the ids, but I have only done that in the json for the access_group_id and the instant_clone_domain_account_id. You can go as crazy as you like and define each and every option as a parameter, but I prefer a good base with the most often changed things as parameters.

This is a sample of how I start the script. There is no feedback, as the POST call doesn’t give any proper feedback on whether it ran well. I think the params are clear, but you need to be careful to use a Relative Distinguished Name and not a normal Distinguished Name (the DC= parts are missing, as you can see). For the datastores you always need to use an array.
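To illustrate the difference (the DC parts for my loft.lab domain serve as a hypothetical example):

Distinguished Name (will not work): OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC,DC=loft,DC=lab
Relative Distinguished Name (what the api expects): OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC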

D:\GIT\Various_Scripts\Horizon_Rest_create_Desktop_Pool.ps1 -Credentials $creds -ConnectionServerURL https://pod1cbr1.loft.lab -jsonfile 'D:\homelab\new-pool-rest.json' -vCenterURL pod1vcr1.loft.lab -DataCenterName "Datacenter_Loft" -ClusterName "Dell 620" -BaseVMName "W21h1-2021-11-05-13-00" -BaseSnapShotName "Created by Packer" -DatastoreNames ("vdi-200","vdi-500") -VMFolderPath "/Datacenter_Loft/vm" -DesktopPoolName "Rest_Pool_demo" -DesktopPoolDisplayName "Rest Display name" -DesktopPoolDescription "rest description" -namingmethod "Restdemo{n:fixed=2}" -ADOUrdn "OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC"

Now the script itself. I am still using the default rest functions and only a slight bit of error handling to check if the json exists and to import it. For the rest it’s a list of api calls that you have (partially) seen before in other scripts. The datastoreobjects array is filled with separate objects per datastore. The json is converted to a regular object at the beginning so it’s very easy to replace all the variables that need replacing. To convert the object back to usable json you need to raise the max depth, as this defaults to only 2 and that’s not enough for this json file. I set it to 100 just to keep things easy. If you want to download the script I would recommend grabbing it from Github.
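A minimal illustration of why that depth matters:

# ConvertTo-Json defaults to a depth of 2: anything nested deeper, like the
# entries inside storage_settings.datastores, gets flattened into a string
$json = $sourcejson | convertto-json
# Raising the depth keeps the full structure intact
$json = $sourcejson | convertto-json -Depth 100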

In a future blog post I will cover adding different VM Networks to the desktop pool. With this json file the default nic of the Golden Image is used.

<#
    .SYNOPSIS
    Creates a new Horizon Desktop Pool

    .DESCRIPTION
    This script uses the Horizon rest api's to create a new VMware Horizon Desktop Pool

    .EXAMPLE
    Horizon_Rest_create_Desktop_Pool.ps1 -Credentials $creds -ConnectionServerURL https://pod1cbr1.loft.lab -jsonfile 'D:\homelab\new-pool-rest.json' -vCenterURL pod1vcr1.loft.lab -DataCenterName "Datacenter_Loft" -ClusterName "Dell 620" -BaseVMName "W21h1-2021-11-05-13-00" -BaseSnapShotName "Created by Packer" -DatastoreNames  ("vdi-200","vdi-500") -VMFolderPath "/Datacenter_Loft/vm" -DesktopPoolName "Rest_Pool_demo2" -DesktopPoolDisplayName "Rest DIsplay name" -DesktopPoolDescription "rest description" -namingmethod "Rest-{n:fixed=2}" -ADOUrdn "OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC"

    .PARAMETER Credential
    Mandatory: No
    Type: PSCredential
    Object with credentials for the connection server with domain\username and password. If not supplied the script will ask for user and password.

    .PARAMETER ConnectionServerURL
    Mandatory: Yes
    Type: String
    URL of the connection server to connect to

    .PARAMETER vCenterURL
    Mandatory: Yes
    URL of the vCenter server

    .PARAMETER DataCenterName
    Mandatory: Yes
    Name of the datacenter

    .PARAMETER BaseVMName
    Mandatory: Yes
    Name of the Golden Image VM

    .PARAMETER BaseSnapShotName
    Mandatory: Yes
    Name of the Golden Image Snapshot

    .PARAMETER DesktopPoolName
    Mandatory: Yes
    Name of the Desktop Pool to create

    .PARAMETER jsonfile
    Mandatory: Yes
    Full path to the JSON file to use as base

    .PARAMETER ClusterName
    Mandatory: Yes
    Name of the vCenter Cluster to place the vm's in

    .PARAMETER DatastoreNames
    Mandatory: Yes
    Array of names of the datastores to use

    .PARAMETER VMFolderPath
    Mandatory: Yes
    Path to the folder where the folder with pool vm's will be placed including the datacenter with forward slashes so /datacenter/folder

    .PARAMETER DesktopPoolDisplayName
    Mandatory: Yes
    Display name of the desktop pool

    .PARAMETER DesktopPoolDescription
    Mandatory: Yes
    Description of the desktop pool

    .PARAMETER NamingMethod
    Mandatory: Yes
    Naming method of the vm's

    .PARAMETER ADOUrdn
    Mandatory: Yes
    Relative Distinguished Name for the OU where the vm's will be placed

    .NOTES
    Minimum required version: VMware Horizon 8 2111
    Created by: Wouter Kursten
    First version: 08-12-2021

    .COMPONENT
    Powershell Core
#>

<#
Copyright © 2021 Wouter Kursten
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, 
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#>

[CmdletBinding()]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credentials,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerURL,

    [parameter(Mandatory = $true,
    HelpMessage = "URL of the vCenter to look in i.e. https://vcenter.domain.lab")]
    [ValidateNotNullOrEmpty()]
    [string]$vCenterURL,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Datacenter to look in.")]
    [ValidateNotNullOrEmpty()]
    [string]$DataCenterName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Golden Image VM.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseVMName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Snapshot to use for the Golden Image.")]
    [ValidateNotNullOrEmpty()]
    [string]$BaseSnapShotName,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the Desktop Pool.")]
    [ValidateNotNullOrEmpty()]
    [string]$DesktopPoolName,

    [parameter(Mandatory = $true,
    HelpMessage = "Display Name of the Desktop Pool.")]
    [ValidateNotNullOrEmpty()]
    [string]$DesktopPoolDisplayName,

    [parameter(Mandatory = $true,
    HelpMessage = "Description of the Desktop Pool.")]
    [ValidateNotNullOrEmpty()]
    [string]$DesktopPoolDescription,

    [parameter(Mandatory = $true,
    HelpMessage = "Name of the cluster where the Desktop Pool will be placed.")]
    [ValidateNotNullOrEmpty()]
    [string]$ClusterName,

    [parameter(Mandatory = $true,
    HelpMessage = "Array of names for the datastores where the Desktop will be placed.")]
    [ValidateNotNullOrEmpty()]
    [array]$DatastoreNames,

    [parameter(Mandatory = $true,
    HelpMessage = "Path to the folder where the folder for the Desktop Pool will be placed i.e. /Datacenter_Loft/vm")]
    [ValidateNotNullOrEmpty()]
    [string]$VMFolderPath,

    [parameter(Mandatory = $true,
    HelpMessage = "Relative Distinguished Name for the OU where the vm's will be placed i.e. OU=Pool01,OU=VDI,OU=Pod1,OU=VMware,OU=EUC")]
    [ValidateNotNullOrEmpty()]
    [string]$ADOUrdn,

    [parameter(Mandatory = $true,
    HelpMessage = "Naming method for the VDI machines.")]
    [ValidateNotNullOrEmpty()]
    [string]$NamingMethod,

    [parameter(Mandatory = $true,
    HelpMessage = "Full path to the Json with Desktop Pool details.")]
    [ValidateNotNullOrEmpty()]
    [string]$jsonfile
)

# Test-Path returns $false instead of throwing, so check the result directly
if(!(Test-Path $jsonfile)){
    throw "Json file not found"
}
try{
    $sourcejson = get-content $jsonfile | ConvertFrom-Json
}
catch{
    throw "Error importing json file"
}

if($Credentials){
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}
else{
    $credentials = Get-Credential
    $username=($credentials.username).split("\")[1]
    $domain=($credentials.username).split("\")[0]
    $password=$credentials.password
}

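# Convert the SecureString password back to plain text for the REST login body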
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($password) 
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

function Get-HRHeader(){
    param($accessToken)
    return @{
        'Authorization' = 'Bearer ' + $($accessToken.access_token)
        'Content-Type' = "application/json"
    }
}
function Open-HRConnection(){
    param(
        [string] $username,
        [string] $password,
        [string] $domain,
        [string] $url
    )

    $Credentials = New-Object psobject -Property @{
        username = $username
        password = $password
        domain = $domain
    }

    return invoke-restmethod -Method Post -uri "$ConnectionServerURL/rest/login" -ContentType "application/json" -Body ($Credentials | ConvertTo-Json)
}
function Close-HRConnection(){
    param(
        $accessToken,
        $ConnectionServerURL
    )
    return Invoke-RestMethod -Method post -uri "$ConnectionServerURL/rest/logout" -ContentType "application/json" -Body ($accessToken | ConvertTo-Json)
}

try{
    $accessToken = Open-HRConnection -username $username -password $UnsecurePassword -domain $Domain -url $ConnectionServerURL
}
catch{
    throw "Error Connecting: $_"
}

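# Resolve the vCenter, datacenter, cluster, datastore, resource pool, folder, golden image and snapshot names to their ids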
$vCenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/monitor/v2/virtual-centers" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$vcenterid = ($vCenters | where-object {$_.name -like "*$vCenterURL*"}).id
$datacenters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/datacenters?vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$datacenterid = ($datacenters | where-object {$_.name -eq $DataCenterName}).id
$clusters = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/hosts-or-clusters?datacenter_id=$datacenterid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$clusterid = ($clusters | where-object {$_.details.name -eq $ClusterName}).id
$datastores = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/datastores?host_or_cluster_id=$clusterid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$datastoreobjects = @()
foreach ($datastoreName in $DatastoreNames){
    $datastoreid = ($datastores | where-object {$_.name -eq $datastoreName}).id
    [PSCustomObject]$dsobject=[ordered]@{
        datastore_id = $datastoreid
    }
    $datastoreobjects+=$dsobject

}
$resourcepools = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/resource-pools?host_or_cluster_id=$clusterid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$resourcepoolid = $resourcepools[0].id
$vmfolders = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/vm-folders?datacenter_id=$datacenterid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$vmfolderid = ($vmfolders | where-object {$_.path -eq $VMFolderPath}).id
$basevms = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-vms?datacenter_id=$datacenterid&filter_incompatible_vms=false&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basevmid = ($basevms | where-object {$_.name -eq $baseVMName}).id
$basesnapshots = Invoke-RestMethod -Method Get -uri "$ConnectionServerURL/rest/external/v1/base-snapshots?base_vm_id=$basevmid&vcenter_id=$vcenterid" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken)
$basesnapshotid = ($basesnapshots | where-object {$_.name -eq $BaseSnapShotName}).id
$sourcejson.provisioning_settings.base_snapshot_id = $basesnapshotid
$sourcejson.provisioning_settings.datacenter_id = $datacenterid
$sourcejson.provisioning_settings.host_or_cluster_id = $clusterid
$sourcejson.provisioning_settings.parent_vm_id = $basevmid
$sourcejson.provisioning_settings.vm_folder_id = $vmfolderid
$sourcejson.provisioning_settings.resource_pool_id = $resourcepoolid
$sourcejson.vcenter_id = $vcenterid
$sourcejson.storage_settings.datastores = $datastoreobjects
$sourcejson.name = $DesktopPoolName
$sourcejson.display_name = $DesktopPoolDisplayName
$sourcejson.description = $DesktopPoolDescription
$sourcejson.pattern_naming_settings.naming_pattern = $namingmethod

$json = $sourcejson | convertto-json -Depth 100

try{
    Invoke-RestMethod -Method Post -uri "$ConnectionServerURL/rest/inventory/v1/desktop-pools" -ContentType "application/json" -Headers (Get-HRHeader -accessToken $accessToken) -body $json
}
catch{
    throw $_
}

[API] New way to gather Horizon Events

A good bunch of my audience has probably already noticed it, but with Horizon 8 release 2106 VMware added a new method to gather Horizon events: the AuditEventSummaryView query. In this post I will describe how to consume this query using the SOAP API. I have been told by VMware specialists that this updated version of the EventSummaryView is actually safe to use and won’t put a burden on the connection servers.

A quick small script to consume this query could look like this:

[CmdletBinding()]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credential,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN
)

if($Credential){
    $creds = $credential
}
else{
    $creds = get-credential
}

$ErrorActionPreference = 'Stop'

# Loading powercli modules
Import-Module VMware.VimAutomation.HorizonView
Import-Module VMware.VimAutomation.Core

$hvserver1=connect-hvserver $ConnectionServerFQDN -credential $creds
$Services1= $hvServer1.ExtensionData

$queryservice=new-object vmware.hv.queryserviceservice
$defn = New-Object VMware.Hv.QueryDefinition
$defn.queryentitytype='AuditEventSummaryView'


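# Page through the query results: QueryService_Create returns the first page,
# QueryService_GetNext fetches the rest until remainingCount hits 0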
$eventlist = @()
$GetNext = $false
$queryResults = $queryservice.QueryService_Create($Services1, $defn)
do {
    if ($GetNext) {
        $queryResults = $queryservice.QueryService_GetNext($Services1, $queryResults.id) 
    }
    $eventlist += $queryResults.results
    $GetNext = $true
}
while ($queryResults.remainingCount -gt 0)
$queryservice.QueryService_Delete($Services1, $queryResults.id)
return $eventlist

I run it like this, showing the event count and the last event:

$creds = import-clixml d:\homelab\creds.xml
$events = D:\GIT\Scripts\get-horizon-audit-events.ps1 -ConnectionServerFQDN loftcbr01.loft.lab -Credential $creds
$events.count
$events | select-object -last 1

If you want to filter the data a bit more there are plenty of options for that. I have added some filtering options to the above script: if you supply the filtertype argument, the filterdata and filtervalue arguments are mandatory. Filtertype for now can be either Equals or Contains, filterdata can be any of the data fields of the AuditEventSummaryView and filtervalue is the value you’re going to filter on. To be honest, not all of the data fields worked when I was creating this post, but the message one actually did.

[CmdletBinding(DefaultParameterSetName='noFilter')]
param (
    [Parameter(Mandatory=$false,
    HelpMessage='Credential object as domain\username with password' )]
    [PSCredential] $Credential,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [Parameter(ParameterSetName='Filter',Mandatory=$true,HelpMessage = "Name of the data type to filter on.")]
    [Parameter(ParameterSetName='noFilter',Mandatory=$false,HelpMessage = "Name of the data type to filter on.")]
    [string]$filterdata,

    [Parameter(ParameterSetName='Filter',Mandatory=$true,HelpMessage = "Value to filter on.")]
    [Parameter(ParameterSetName='noFilter',Mandatory=$false,HelpMessage = "Value to filter on.")]
    [string]$filtervalue,

    [Parameter(ParameterSetName='Filter',HelpMessage = "FIltertype: Equals or Contains.")]
    [validateset("Equals","Contains")]
    [string]$filtertype

)

if($Credential){
    $creds = $credential
}
else{
    $creds = get-credential
}

$ErrorActionPreference = 'Stop'

# Loading powercli modules
Import-Module VMware.VimAutomation.HorizonView
Import-Module VMware.VimAutomation.Core

$hvserver1=connect-hvserver $ConnectionServerFQDN -credential $creds
$Services1= $hvServer1.ExtensionData

$queryservice=new-object vmware.hv.queryserviceservice
$defn = New-Object VMware.Hv.QueryDefinition
$defn.queryentitytype='AuditEventSummaryView'

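# Apply the requested filter type to the query definition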
if($filtertype){
    if($filtertype -eq "Contains"){
        $defn.Filter= New-Object VMware.Hv.QueryFilterContains -property @{'MemberName'=$filterdata; 'value'=$filtervalue}
    }
    else{
        $defn.Filter= New-Object VMware.Hv.QueryFilterEquals -property @{'MemberName'=$filterdata; 'value'=$filtervalue}
    }
}

$eventlist = @()
$GetNext = $false
$queryResults = $queryservice.QueryService_Create($Services1, $defn)
do {
    if ($GetNext) {
        $queryResults = $queryservice.QueryService_GetNext($Services1, $queryResults.id) 
    }
    $eventlist += $queryResults.results
    $GetNext = $true
}
while ($queryResults.remainingCount -gt 0)
$queryservice.QueryService_Delete($Services1, $queryResults.id)
return $eventlist

I run and check it like this:

$events = D:\GIT\Scripts\get-horizon-audit-events.ps1 -ConnectionServerFQDN loftcbr01.loft.lab -Credential $creds -filtertype Contains -filterdata message -filtervalue "has logged in"
$events | Select-Object message -last 10

The last version shown here can be downloaded from my github: Various_Scripts/get-horizon-audit-events.ps1 at master · Magneet/Various_Scripts (github.com)

How to add Rocky Linux (& other unsupported Linux flavors) to ControlUp

Disclaimer: I am a ControlUp employee but am posting this on my own. We officially only support the Linux flavors here: https://support.controlup.com/hc/en-us/articles/360001497917-Linux-Integration-with-ControlUp

Disclaimer 2: I am NOT a Linux expert, I consider myself a basic user for most Linux things.

With the recent rise of the truly community-driven (beta) releases of Rocky Linux I thought it was time to see what would be needed to add Rocky Linux to the ControlUp console. If you want to know more about the how and what of Rocky Linux I recommend reading my good friend Angelo Luciani’s blog post here. TL;DR: Rocky Linux is a project from one of the CentOS co-founders and is supposed to be enterprise grade once GA is reached.

Adding the linux machines to ControlUp

Fixing Linux to work with ControlUp

Adding linux machines to ControlUp

To compare the process with CentOS 7, CentOS 8 and Ubuntu 20.04 LTS I have deployed fresh VMs for all of these distributions. For Rocky and the CentOS VMs I selected Server with a GUI, for Ubuntu I picked the default installation. All of them got 1 CPU, 2GB RAM and 64GB disk space (overkill, yes I know). I also added the same user to all of them during the deployment. Network-wise I just gave them a proper host name and selected DHCP, as that was enough for this test.

The setup screen for Rocky Linux

To be able to add unsupported Linux flavors you need to enable this in Settings > Agents.

After deploying all four VMs I added them to the ControlUp console with the same Linux data collector.

I could use another credential that I had set up in the past for my Domoticz & UniFi Linux machines; after this, click Add to add the machines.

I entered the IP range to scan, hit Scan, selected the 4 Linux machines (note: CentOS 8 is not being reported as an unsupported flavor; I have reported a bug for this), clicked Add and hit OK twice.

By default the CentOS 7 machine is the only one working correctly.

If you see a weird negative number for CPU, make sure your locale settings are the same as the regional settings on your Console/Monitor.

Now let’s make sure we get proper metrics in the console 🙂

Fixing Linux machines to work with ControlUp

Rocky Linux needs the same fixes as CentOS 8; Ubuntu needs one extra step.

  1. Install gcc: for CentOS / Rocky
    sudo yum install gcc

    or for Ubuntu

    sudo apt-get install gcc
  2. run
    ps -V
    1. if the output has ps from procps you need to remember the number 6
    2. If the output has ps from procps-ng you need to remember the number 7
  3. Create a file called a.c
    1. run:
      nano a.c

      (or vi, whatever you like)

    2. Add:
      1. #include <stdio.h> 
        int main() 
        {
          printf("6\n");
          return 0;
        }

        replace the 6 with 7 if you had procps-ng in the previous step

    3. save this file
  4. run:
    gcc a.c
  5. run:
    sudo cp a.out /bin/rpm
  6. Install sysstat: For Rocky Linux / CentOS
    sudo yum install sysstat

    or for Ubuntu

    sudo apt-get install sysstat
  7. Reboot ( yes I needed this)

For Rocky Linux and CentOS you’re done

For Ubuntu you will also need to install ifconfig from net-tools before the reboot

  1. run:
    sudo apt-get install net-tools

For me this was enough to have all four of these machines show up properly in the ControlUp Console. Remember, we might not officially support this way of working, but it works well.

Creating an RDS farm using the Python module for VMware Horizon

One of the goals and hopes I had with my 100DaysOfCode challenge (I am writing this on day 100!) was that the Horizon REST APIs to create desktop pools and RDS farms would be available at the end. Only half of that came true: with Horizon 8 2103 we can finally create an RDS farm using those REST APIs. I have decided to add this to the Python module based on a dictionary that the user sends to the new_farm method. I could still add a fully fledged function, but that would require a lot of arguments; using **kwargs is an option, but then the user would still need to find out what to use.

First I need to know what json data I actually need, so let’s have a look at the API explorer page to get a grip on this:

{
  "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
  "automated_farm_settings": {
    "customization_settings": {
      "ad_container_rdn": "CN=Computers",
      "cloneprep_customization_settings": {
        "post_synchronization_script_name": "cloneprep_postsync_script",
        "post_synchronization_script_parameters": "p1 p2 p3",
        "power_off_script_name": "cloneprep_poweroff_script",
        "power_off_script_parameters": "p1 p2 p3",
        "priming_computer_account": "a219420d-4799-4517-8f78-39c74c7c4efc"
      },
      "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "reuse_pre_existing_accounts": false
    },
    "enable_provisioning": true,
    "max_session_type": "LIMITED",
    "max_sessions": 50,
    "min_ready_vms": 0,
    "nics": [
      {
        "network_interface_card_id": "c9896e51-48a2-4d82-ae9e-a0246981b473",
        "network_label_assignment_specs": [
          {
            "enabled": true,
            "max_label": 1,
            "max_label_type": "LIMITED",
            "network_label_name": "vm-network"
          }
        ]
      }
    ],
    "pattern_naming_settings": {
      "max_number_of_rds_servers": 5,
      "naming_pattern": "vm-{n}-sales"
    },
    "provisioning_settings": {
      "base_snapshot_id": "snapshot-1",
      "datacenter_id": "datacenter-1",
      "host_or_cluster_id": "domain-s425",
      "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "im_tag_id": "3d45b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "parent_vm_id": "vm-2",
      "resource_pool_id": "resgroup-1",
      "vm_folder_id": "group-v1"
    },
    "stop_provisioning_on_error": true,
    "storage_settings": {
      "datastores": [
        {
          "datastore_id": "datastore-1"
        }
      ],
      "replica_disk_datastore_id": "datastore-1",
      "use_separate_datastores_replica_and_os_disks": false,
      "use_view_storage_accelerator": false,
      "use_vsan": false
    },
    "transparent_page_sharing_scope": "VM",
    "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
  },
  "description": "Farm Description",
  "display_name": "ManualFarm",
  "display_protocol_settings": {
    "allow_users_to_choose_protocol": true,
    "default_display_protocol": "PCOIP",
    "grid_vgpus_enabled": true,
    "session_collaboration_enabled": false
  },
  "enabled": true,
  "load_balancer_settings": {
    "cpu_threshold": 10,
    "disk_queue_length_threshold": 15,
    "disk_read_latency_threshold": 10,
    "disk_write_latency_threshold": 15,
    "include_session_count": true,
    "memory_threshold": 10
  },
  "name": "ManualFarm",
  "rds_server_ids": [
    "5134796a-322g-5fe5-343f-4daa5d25ebfe",
    "2a43f96c-102b-4ed3-953f-35deg43d43b0ge"
  ],
  "server_error_threshold": 0,
  "session_settings": {
    "disconnected_session_timeout_minutes": 5,
    "disconnected_session_timeout_policy": "NEVER",
    "empty_session_timeout_minutes": 5,
    "empty_session_timeout_policy": "AFTER",
    "logoff_after_timeout": false,
    "pre_launch_session_timeout_minutes": 10,
    "pre_launch_session_timeout_policy": "AFTER"
  },
  "type": "MANUAL",
  "use_custom_script_for_load_balancing": false
}

This also includes some settings that are not required, so for my own farm I settled on the json below. This is for an instant clone farm.

{
    "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
    "automated_farm_settings": {
        "customization_settings": {
            "ad_container_rdn": "OU=Pod1,OU=RDS,OU=VMware,OU=EUC",
            "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
            "reuse_pre_existing_accounts": true
        },
        "enable_provisioning": false,
        "max_session_type": "LIMITED",
        "max_sessions": 50,
        "min_ready_vms": 1,
        "pattern_naming_settings": {
            "max_number_of_rds_servers": 2,
            "naming_pattern": "vm-{n}-sales"
        },
        "provisioning_settings": {
            "base_snapshot_id": "snapshot-1",
            "datacenter_id": "datacenter-1",
            "host_or_cluster_id": "domain-s425",
            "parent_vm_id": "vm-2",
            "resource_pool_id": "resgroup-1",
            "vm_folder_id": "group-v1"
        },
        "stop_provisioning_on_error": true,
        "storage_settings": {
            "datastores": [
                {
                    "datastore_id": "datastore-1"
                }
            ],
            "use_separate_datastores_replica_and_os_disks": false,
            "use_view_storage_accelerator": false,
            "use_vsan": false
        },
        "transparent_page_sharing_scope": "VM",
        "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
    },
    "description": "demo_farm",
    "display_name": "demo_farm",
    "display_protocol_settings": {
        "allow_users_to_choose_protocol": true,
        "default_display_protocol": "BLAST",
        "grid_vgpus_enabled": false,
        "session_collaboration_enabled": true
    },
    "enabled": false,
    "load_balancer_settings": {
        "cpu_threshold": 10,
        "disk_queue_length_threshold": 15,
        "disk_read_latency_threshold": 10,
        "disk_write_latency_threshold": 15,
        "include_session_count": true,
        "memory_threshold": 10
    },
    "name": "demo_farm",
    "server_error_threshold": 0,
    "session_settings": {
        "disconnected_session_timeout_minutes": 5,
        "disconnected_session_timeout_policy": "NEVER",
        "empty_session_timeout_minutes": 5,
        "empty_session_timeout_policy": "AFTER",
        "logoff_after_timeout": false,
        "pre_launch_session_timeout_minutes": 10,
        "pre_launch_session_timeout_policy": "AFTER"
    },
    "type": "AUTOMATED",
    "use_custom_script_for_load_balancing": false
}

As said, I send a dictionary to the method, so let’s import the json into a dict called data and print it to screen. The dictionary needs to follow this specific order of keys, which is why a json file is very useful to start with.

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)

As you can see in both the json and the output there are a lot of things we can change and some things that we need to change, like the ids for all the components: vCenter, base vm, base snapshot and more. First I need the access_group_id; this can be retrieved using the get_local_access_groups method. For all of these I will also directly set the required key in the dictionary.

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
data["access_group_id"] = local_access_group["id"]

Then it’s time for the Instant Clone admin id:

ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]

For the base vm and snapshot ids I used the same method, but a bit differently, as I had already used it in another script:

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid

Host or cluster id:

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]

Resource Pool:

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]

The VM folder again is a bit different, as I have to get the id from one of the children objects:

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]

The datacenter and vCenter ids I already had to grab for the base vm and base snapshot, so I can just add them:

data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["vcenter_id"] = vcid

Datastores are a bit more funky as there can be multiple, so I needed to create a list first and then populate it based on the names of the datastores I have:

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list

For my final script I put them in a slightly different order and I decided to change a whole lot more options, but if you have your json perfected this shouldn’t always be required. Also take note that where the json uses true/false, I use the True/False booleans from Python.

import requests, getpass, urllib, json

import vmware_horizon

requests.packages.urllib3.disable_warnings()
url = input("URL\n")

username = input("Username\n")

domain = input("Domain\n")

pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")

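# Instantiate the service classes of the module; each one wraps a different part of the api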
monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
config=vmware_horizon.Config(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if i["name"] in ("VDI-500", "VDI-200"):
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")

data["access_group_id"] = local_access_group["id"]
data["automated_farm_settings"]["customization_settings"]["ad_container_rdn"] = "OU=Pod1,OU=RDS,OU=VMware,OU=EUC"
data["automated_farm_settings"]["customization_settings"]["reuse_pre_existing_accounts"] = True
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]
data["automated_farm_settings"]["enable_provisioning"] = False
data["automated_farm_settings"]["max_sessions"] = 50
data["automated_farm_settings"]["min_ready_vms"] = 3
data["automated_farm_settings"]["pattern_naming_settings"]["max_number_of_rds_servers"] = 4
data["automated_farm_settings"]["pattern_naming_settings"]["naming_pattern"] = "farmdemo-{n:fixed=3}"
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]
data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["stop_provisioning_on_error"] = True
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list
data["automated_farm_settings"]["transparent_page_sharing_scope"] = "GLOBAL"
data["automated_farm_settings"]["vcenter_id"] = vcid
data["description"] = "Python_demo_farm"
data["display_name"] = "Python_demo_farm"
data["display_protocol_settings"]["allow_users_to_choose_protocol"] = True
data["display_protocol_settings"]["default_display_protocol"] = "BLAST"
data["display_protocol_settings"]["session_collaboration_enabled"] = True
data["enabled"] = False
data["load_balancer_settings"]["cpu_threshold"] = 12
data["load_balancer_settings"]["disk_queue_length_threshold"] = 16
data["load_balancer_settings"]["disk_read_latency_threshold"] = 12
data["load_balancer_settings"]["disk_write_latency_threshold"] = 16
data["load_balancer_settings"]["include_session_count"] = True
data["load_balancer_settings"]["memory_threshold"] = 12
data["name"] = "Python_demo_farm"
data["session_settings"]["disconnected_session_timeout_minutes"] = 5
data["session_settings"]["disconnected_session_timeout_policy"] = "NEVER"
data["session_settings"]["empty_session_timeout_minutes"] = 6
data["session_settings"]["empty_session_timeout_policy"] = "AFTER"
data["session_settings"]["logoff_after_timeout"] = False
data["session_settings"]["pre_launch_session_timeout_minutes"] = 12
data["session_settings"]["pre_launch_session_timeout_policy"] = "AFTER"
data["type"] = "AUTOMATED"

inventory.new_farm(farm_data=data)

end=hvconnectionobj.hv_disconnect()
print(end)

How does this look? Actually, you don't see a lot happening, but the farm will have been created.
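
If you want to confirm from the same script that the farm actually exists, a minimal sketch could look like this (assuming the Inventory class also exposes a get_farms() method that returns a list of dicts; check the module source for the exact name):

# Hypothetical verification step: list all farms and look for ours.
# get_farms() is an assumption; the exact method name may differ in the module.
farms = inventory.get_farms()
for farm in farms:
    if farm["name"] == "Python_demo_farm":
        print("Farm created with id", farm["id"])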

As always the script can be found on my github in the examples folder together with the json file.

With this I am closing my 100DaysOfCode challenge, but I pledge to keep maintaining the Python module and I will extend it when new REST API calls arrive for VMware Horizon.

The VMware Labs flings monthly for March 2021 – New Website

A day late, but better late than never: this is your monthly overview with all the latest and greatest VMware flings. The flings site received a fresh new look; head over to https://flings.vmware.com to have a look yourself. I am not sure if the (are they new?) tags for updated or new flings are working correctly yet, as one fling has been marked updated but seems to be new, while for one new fling I could find a blog post from last month that I missed in my overview, so I am not sure what happened there. Overall I see four new flings, and ten received an update.

New Releases

Configuration Wizard for Nuance PowerMic

vRealize Automation Code Stream CLI

Hillview: Distributed Data Visualization

SDDC Import/Export for VMware Cloud on AWS

Updates

Workspace ONE Mobileconfig Importer

Workspace One UEM Workload Migration Tool

vSAN Hardware Compatibility List Checker

Vmss2core

vRealize Build Tools

Horizon Cloud Pod Architecture Tools

Workspace ONE App Analyzer for macOS

App Volumes Packaging Utility

DoD Security Technical Implementation Guide(STIG) ESXi VIB

VMware OS Optimization Tool

New Releases

Configuration Wizard for Nuance PowerMic

Nuance PowerMics are the leading dictation device used in Healthcare today.
This handy standalone Fling will assist in determining the optimal PowerMic configuration for a specific customer environment.

To determine the configuration that works best for you, we need to know a few details about the customer environment:

  • Endpoint type
  • Endpoint vendor
  • Endpoint operating system
  • Single/nested mode
  • Horizon protocol

The PowerMic Configuration Wizard will then provide the specific settings for optimal PowerMic performance and accuracy in the customer's environment.

vRealize Automation Code Stream CLI

vRealize Automation Code Stream CLI is a command line tool written in Go to interact with the vRealize Automation Code Stream APIs. This Fling is written to help automate Code Stream and provide a simple way to migrate content between Code Stream instances and projects.

  • Import and Export Code Stream artefacts such as Pipelines, Variables, Endpoints
  • Perform CRUD operations on Code Stream artefacts such as Pipelines, Variables, Endpoints
  • Trigger Executions of Pipelines

Hillview: Distributed Data Visualization

Hillview is a simple cloud-based spreadsheet program for browsing large data collections. The data manipulated is read-only. Users can sort, find, filter, transform, query, zoom-in/out, and chart data. Operations are performed using direct manipulation in the GUI. Hillview is designed to work on very large data sets (billions of rows). Hillview can import data from a variety of sources: CSV files, ORC files, Parquet files, databases, parallel databases; new connectors can be added with relatively little effort. Hillview takes advantage of all the cores of the worker machines for fast visualizations.

Hillview is a distributed system, composed of two pieces:

  • A distributed set of one or many workers, which should be installed close to the data (e.g., on the machines that host the data).
  • A front-end service that runs a web server and aggregates data from all workers.

The source code of Hillview is available as an open-source project with an Apache-2 license from Hillview’s github repository. For any questions, feature requests or bug reports please file an issue on github.

SDDC Import/Export for VMware Cloud on AWS

The SDDC Import/Export for VMware Cloud on AWS tool enables you to save and restore your VMware Cloud on AWS (VMC) Software-Defined Data Center (SDDC) networking and security configuration.

There are many situations when customers want to migrate from an existing SDDC to a different one. While HCX addresses the data migration challenge, this tool offers customers the ability to copy the configuration from a source to a destination SDDC.

A few example migration scenarios are:

  • SDDC to SDDC migration from bare-metal (i3) to a different bare-metal type (i3en)
  • SDDC to SDDC migration from VMware-based org to an AWS-based org
  • SDDC to SDDC migration from one region (e.g. London) to a different region (e.g. Dublin).

Other use cases are:

  • Backups – save the entire SDDC configuration
  • Lab purposes – customers or partners might want to deploy SDDCs with a pre-populated configuration.
  • DR purposes – deploy a pre-populated configuration in conjunction with VMware Site Recovery or VMware Cloud Disaster Recovery

Detailed instructions can be found on the Instructions tab, in the README.md included in the Zip file or on Patrick’s blog (http://www.patrickkremer.com/sddc-import-export/).

Details about the use cases and origins of the project can be found on Nico’s blog (https://nicovibert.com/2021/02/08/fling-sddc-import-export-for-vmware-cloud-on-aws/).

Updates

Workspace ONE Mobileconfig Importer

The Workspace ONE mobileconfig Importer gives you the ability to import existing mobileconfig files directly into a Workspace ONE UEM environment as a Custom Settings profile, import app preference plist files in order to create managed preference profiles, and create new Custom Settings profiles from scratch. When importing existing configuration profiles, the tool will attempt to separate each PayloadContent dictionary into a separate payload for the Workspace ONE profile.

Changelog

Version 1.1

  • Support for Big Sur
  • Updated icon

Workspace One UEM Workload Migration Tool

Quite a few updates for this fling already!

The Workspace One UEM Workload Migration Tool allows a seamless migration of Applications and Device configurations between different Workspace One UEM environments. With the push of a button, workloads move from UAT to Production, instead of having to enter the information or upload files manually, decreasing the time it takes to move data from Dev/UAT environments to Production.

Changelog

Version 2.1.0

  • Fixed app upload issues for Workspace One UEM 1910+
  • Fixed profile search issue for Workspace One UEM 1910+
  • Added profile update support
  • Added template folder structure creation
  • Updated Mac app to support notarization for Catalina

Version 2.0.1

  • Fixed Baseline Migration issue
  • Fixed Profile Errors not displaying in the UI

Version 2.0.0

  • Baseline Migration Support
  • MacOS application
  • UI refactoring to make bulk migrations easier
  • Added support for script detection with Win32 applications

Version 1.0.1

  • Fixed issue with expired credentials.

vSAN Hardware Compatibility List Checker

The vSAN Hardware Compatibility List Checker is a tool that verifies all installed storage adapters against the vSAN supported storage controller list. The tool will verify if the model, driver and firmware version of the storage adapter are supported.

Changelog

Version 2.2

  • Support multi-platforms for Windows, Linux and MacOS
  • Bug fixed

Vmss2core

Vmss2core is a tool to convert VMware checkpoint state files into formats that third party debugger tools understand. It can handle both suspend (.vmss) and snapshot (.vmsn) checkpoint state files (hereafter referred to as a ‘vmss file’) as well as both monolithic and non-monolithic (separate .vmem file) encapsulation of checkpoint state data.

Changelog

Version 1.0.1

  • Fixed running out of memory issues
  • Added support for more versions of Windows 10/Windows 2016

vRealize Build Tools

vRealize Build Tools provides tools to development and release teams implementing solutions based on vRealize Automation (vRA) and vRealize Orchestrator (vRO). The solution targets Virtual Infrastructure Administrators and Solution Developers working in parallel on multiple vRealize-based projects who want to use standard DevOps practices.

Changelog

Version 2.12.5 Update

  • [vRBT] Package installer – Add support for installation on a standalone vRO (not embedded) version 8.x with basic authentication
  • [vRA] Added catalog entitlements and examples to the vra-ng archetype
  • [vRO] Support / in Workflow name or path, by substituting it with dash (-) character.
  • [vRBT] Added http / socket timeouts support in the installer
  • [ABX] Support for ABX actions
  • [vRO] Support of placeholders in workflow description
  • [vRA] Import vRA8 custom resources before blueprints
  • [vROPS] Fixed policy import / export problem with vROPs 8.2, maintaining backward compatibility
  • [MVN] Fixed issue with installer timeouts
  • [TS] vRO pkg – Adds support for slash in workflow path or name
  • [vRBT] Package installer – updated documentation, added checking of workflow input, writing of workflow error message to file, setting of installer exit code when executing of a workflow
  • [TS] Allow additional trigger events for policies triggered by the vcd mqtt plugin
  • [MVN] Fix Missing vRA Tenant After Successful package import
  • [MVN] Fix vROPS import fails on certain assets

Horizon Cloud Pod Architecture Tools

The Horizon Cloud Pod Architecture Tools mainly acts as a wrapper around the lmvutil for Horizon Cloud Pod Architecture.

Changelog

Version 1.1

Based on customer requests, a few more command line options have been added for CSV report generation and AD LDS data cleanup.

What’s New:

Adds support to clean up stale local pool global entitlement assignments from the ADAM DB.

Global AD LDS command:

  • adlds-analyzer.cmd --resolve-localpool-ga
    Scans the cloud pod database and resolves the stale entries of local pool global assignments.
    Note: this resolves deleted local pool conflicts of the current pod only. If the dashboard session data fails to load or a session search fails in a different pod, a scan and resolve has to be executed in that pod.

New commands added to Local AD LDS:

  • adlds-analyzer.cmd --export-machine
    All machine data exported as a CSV file. Compatibility: Horizon 7.10 and above, 8.x.
  • adlds-analyzer.cmd --export-machine -pool="DesktopPool1,DesktopPool2"
    All machine data exported as a CSV file. Use -pool= to filter machines by desktop pool name; spaces are not allowed in the -pool= argument. Compatibility: Horizon 7.10 and above, 8.x.
  • adlds-analyzer.cmd --export-session
    All local session data exported as a CSV file. Compatibility: Horizon 7.10 and above, 8.x.
  • adlds-analyzer.cmd --export-session -pool="PoolName1,PoolName2" -farm="FarmName1,FarmName2"
    All local session data exported as a CSV file. Use -pool= to filter sessions by desktop pool name and -farm= to filter sessions by RDS farm name; spaces are not allowed in the -pool= and -farm= arguments. Compatibility: Horizon 7.10 and above, 8.x.
  • adlds-analyzer.cmd --check-apps-integrity
    Scans and lists the stale application icons in the local AD LDS instance.
  • adlds-analyzer.cmd --check-apps-integrity -input="AbsoluteFilePath1","AbsoluteFilePath2"
    Reads the list of ADAM LDIF files in -input= for parsing and exports the stale application icons data to a file.
  • adlds-analyzer.cmd --export-named-lic-users
    Exports the utilized and un-utilized named license users list.

Workspace ONE App Analyzer for macOS

The Workspace ONE macOS App Analyzer will determine any Privacy Permissions, Kernel Extensions, or System Extensions needed by an installed macOS application, and can be used to automatically create profiles in Workspace ONE UEM to whitelist those same settings when deploying apps to managed devices.

Changelog

Version 1.2.1

  • Fixed bug that caused crash with certain System Extension configurations

App Volumes Packaging Utility

This App Volumes Packaging Utility helps to package applications. With this fling, packagers can add the necessary metadata to MSIX app attach VHDs so they can be used alongside existing AV format packages. The MSIX format VHDs will require App Volumes 4, version 2006 or later and Windows 10, version 2004 or later.

Changelog

Version 1.2 Update

  • Fixed bugs found in internal testing.

DoD Security Technical Implementation Guide(STIG) ESXi VIB

The DoD Security Technical Implementation Guide (‘STIG’) ESXi VIB is a Fling that provides a custom VMware-signed ESXi vSphere Installation Bundle (‘VIB’) to assist in remediating Defense Information Systems Agency STIG controls for ESXi. This VIB has been developed to help customers rapidly implement the more challenging aspects of the vSphere STIG. These include the fact that installation is time consuming and must be done manually on the ESXi hosts. In certain cases, it may require complex scripting, or even development of an in-house VIB that would not be officially digitally signed by VMware (and therefore would not be deployed as a normal patch would). The need for a VMware-signed VIB is due to the system level files that are to be replaced. These files cannot be modified at a community supported acceptance level. The use of the VMware-signed STIG VIB provides customers the following benefits:

  • The ability to use vSphere Update Manager (‘VUM’) to quickly deploy the VIB to ESXi hosts (you cannot do this with a customer created VIB)
  • The ability to use VUM to quickly check if all ESXi hosts have the STIG VIB installed and therefore are also in compliance
  • No need to manually replace and copy files directly on each ESXi host in your environment
  • No need to create complex shell scripts that run each time ESXi boots to re-apply settings

Changelog

Update March 2021

  • New ESXi 7.0 STIG VIB release
  • Updated sshd_config file to meet the ESXi 7.0 Draft STIG which is also now the default config in 7.0 U2 with the exception of permitting root user logins.
  • Removed /etc/vmware/welcome file from VIB since it can now be configured via the UI or PowerCLI without issue.
  • Draft ESXi content can be found here: https://github.com/vmware/dod-compliance-and-automation/tree/master/vsphere/7.0/docs
  • See the updated Overview and Installation guide included in the download.

VMware OS Optimization Tool

No comments needed, just use the OS Optimization Tool when creating your golden images!

Changelog

March 2021, b2002

  • Fixed issue where the theme file was being updated by a Generalize task and the previously selected optimizations including wallpaper color were being lost.
  • The administrator username used during Generalize was not getting passed through properly to the unattend answer file. This resulted in a mismatch when using some languages versions of Windows.
  • Removed legacy code that caused GPO Policy corruption
  • Removed CMD.exe box that displayed at logon.
  • Windows Store Apps were not being removed properly on Windows 10 version 20H2. Fixed the optimizations to cope with the differences introduced in this version.

Optimizations

Changed the step "Block all consumer Microsoft account user authentication" to be unselected by default. When disabled this was causing failures to log in to Edge and the Windows Store.

Changed the step "Turn off Thumbnail Previews in File Explorer" to be unselected by default. This was causing no thumbnails to show for store apps in search results.

Windows Update

On Non-Enterprise editions of Windows 10, KB4023057 installs a new application called Microsoft Update Health Tools: https://support.microsoft.com/en-us/topic/kb4023057-update-for-windows-10-update-service-components-fccad0ca-dc10-2e46-9ed1-7e392450fb3a. Added logic to ensure that the Windows Update Medic Service is disabled including after re-enabling and disabling Windows Updates using the Update tab.

Templates

Windows 8 and 8.1 templates have been removed from the list of built-in templates. To optimize these versions of Windows, use the separate download for version b1130.

Removed old Windows 10 templates from the Public Templates repository:

  • Windows 10 1809-2004-Server 2019
  • Windows 10 1507-1803-Server 2016

January 2021, b2001 Bug Fixes

  • All optimization entries have been added back into the main user template. This allows manual tuning and selection of all optimizations.
  • Fixed two hardware acceleration selections that were not previously controlled by the Common Option for Visual Effect to disable hardware acceleration.

Optimize

  • During an Optimize, the optimization selections are automatically exported to a default json file (%ProgramData%\VMware\OSOT\OptimizedTemplateData.json).

Analyze

  • When an Analyze is run, if the default json file exists (meaning that this image has already been optimized), this is imported and used to select the optimizations and the Common Options selections with the previous choices.
  • If the default selections are required, on subsequent runs of the OS Optimization Tool, delete the default json file, relaunch the tool and run Analyze.

Command Line

  • The OptimizedTemplateData.json file can also be used from the command line with the -applyoptimization parameter.

Optimizations

  • Changed entries for Hyper-V services to not be selected by default. These services are required for VMs deployed onto Azure. Windows installation sets these to manual (trigger) so they do not cause any overhead on vSphere when left at the default setting.

The VMware Labs flings monthly for February 2021 – Reach alert!

It’s been a busy month on the flings front, no less than 17(!!) new releases and updated flings. This is how my browser tabs look:

If I have them all correct there are 6 new releases and 10 updates (1 of which without a changelog, so a boo for that!), so this post is going to be a long one!

New Releases

Community NVMe Driver for ESXi

VMware Cloud Foundation Powernova

Workspace ONE Access Migration Tool

Sample Data Platform Deployment on Virtualized Cloud Infrastructure

Community Networking Driver for ESXi

Code Stream Concourse Integrator

Updates

ESXi Compatibility Checker

VMware Machine Learning Platform

Virtualized High Performance Computing Toolkit

Horizon Peripherals Intelligence

Workspace ONE App Analyzer for macOS

VMware OS Optimization Tool

Horizon Helpdesk Utility

HCIBench

Horizon Reach

Workspace ONE Discovery

App Volumes Migration Utility

New Releases

Community NVMe Driver for ESXi

This Fling is a collection of ESXi Native Drivers which enables ESXi to recognize and consume various NVMe-based storage devices. These devices are not officially on the VMware HCL and have been developed to enable and support the VMware Community.

Currently, this Fling provides an emulated NVMe driver for the Apple 2018 Intel Mac Mini 8,1 and the Apple 2019 Intel Mac Pro 7,1 allowing customers to use the local NVMe SSD with ESXi. This driver is packaged up as an Offline Bundle and is only activated when it detects ESXi has been installed on either an Apple Mac Mini or Apple Mac Pro.

VMware Cloud Foundation Powernova

VMware Cloud Foundation Powernova is a Fling built on top of VCF that provides users the ability to perform Power Operations (Power ON, Power OFF) seamlessly across the entire inventory. It has a sleek UI to visualize the entire VCF inventory (which is the first of its kind for VCF) across the domains of VCF.

The UI is easy to use and elucidates the current Health and Power State of each node in the VCF inventory. Powernova lets the user perform Power Operations on the components with domain-specific interdependencies automatically resolved.

Powernova also performs valid health checks on all nodes in the VCF inventory to ensure Power Operations are performed only on healthy nodes. Powernova takes minimal input (4 user defined inputs on their VCF system) and does all the magic for the users behind the scenes.

If any infrastructure maintenance activity, VCF migration activity, or power operations need to be performed only on specific domains in VCF, then Powernova is the one stop solution for all VCF users.

Workspace ONE Access Migration Tool

Workspace ONE Access Migration Tool helps ease the migration of apps from one WS1 Access tenant to another (on-premises to SaaS or SaaS to SaaS) and supports use cases that require mirroring one tenant to another (for setting up UAT from PROD or vice versa) by providing the capabilities listed below.

Features
  • Copying of App Categories
  • Migrating Weblinks (3rd party IDP), icons as is
  • Creating a link to federated apps and copying the icons (to maintain the same user experience)
  • Copying App Assignment to a Category mapping

Sample Data Platform Deployment on Virtualized Cloud Infrastructure

Data is king and your users need a sample data platform quickly.

With this Fling, you will leverage your VMware Cloud Foundation 4.0 with vRealize Automation deployment and stand up a sample data platform based on vSphere Virtual Machines in less than 20 minutes, comprising Kafka, Spark, Solr, and ELK.

You can also choose whether to deploy a wavefront proxy and configure the components to send data to the wavefront proxy or use your own.

Community Networking Driver for ESXi

This Fling is a collection of ESXi Native Drivers which enables ESXi to recognize and consume various PCIe-based network adapters (See Requirements for details). These devices are not officially on the VMware HCL and have been developed to enable and support the VMware Community.

Code Stream Concourse Integrator

The Code Stream Concourse Integrator (CSCI) Fling provides integration between vRealize Automation Code Stream and the Concourse CI tool, with which users can trigger Concourse CI pipelines from Code Stream pipelines without any additional tooling/scripting. This enables users to use the features of both tools flexibly and seamlessly as per their needs. This solution is built using Code Stream's extensibility feature named Custom Integration.

Updates

ESXi Compatibility Checker

The ESXi Compatibility Checker helps the vSphere admin out in checking if their environment will work with later versions of ESXi. [non-sponsored advertisement]Also check Runecast, they can run a simulation for you as well.[/non-sponsored advertisement]

Changelog

Build 20210219

  • Fix for ESX / VC 7.0 U1 Versioning issues
  • A new logo 😉

VMware Machine Learning Platform

Our goal is to provide an end-to-end ML platform for Data Scientists to perform their job more effectively by running ML workloads on top of VMware infrastructure.

Using vMLP allows you to:

  • Save costs by enabling efficient use of shared GPUs for ML workloads
  • Reduce the risks of broken Data Science workflows by leveraging well-tested and ready-to-use demos and project templates
  • Get ML models to market faster by utilizing end-to-end oriented tooling, including fast and easy model deployment and serving via a standardized REST API

Changelog

Version 0.4.1

  • Jupyter: R Kernel
  • Jupyter: BitFusion 2.5.0 Demo
  • Jupyter: MADlib/RTS4MADlib on Greenplum Demo
  • Multiple bug fixes

Virtualized High Performance Computing Toolkit

This toolkit is intended to facilitate managing the lifecycle of these special configurations by leveraging vSphere APIs. It also includes features that help vSphere administrators perform some common vSphere tasks that are related to creating such high-performing environments, such as VM cloning, setting Latency Sensitivity, and sizing vCPUs, memory, etc.

Changelog

Nope 🙁

Horizon Peripherals Intelligence

Horizon Peripherals Intelligence is an online self-service diagnosis service that can help increase satisfaction when using peripheral devices with Horizon, for both end users and admins. Currently, diagnosis is supported for the following device categories: USB storage devices, USB printers, USB scanners and cameras. More device categories will be covered in the future.

Changelog

Version 1.0

  • Add support for USB audio devices, speech mics, signature pads and barcode scanners
  • Add support for localization (L10n) of web pages in Simplified Chinese, Traditional Chinese and English
  • Add support for Windows 7 and Windows 2012 R2
  • Add support for 32-bit OSes
  • Add support for command-line installation

Workspace ONE App Analyzer for macOS

The Workspace ONE macOS App Analyzer will determine any Privacy Permissions, Kernel Extensions, or System Extensions needed by an installed macOS application, and can be used to automatically create profiles in Workspace ONE UEM to whitelist those same settings when deploying apps to managed devices.

Changelog

Version 1.2 

  • Added support for Big Sur
  • Updated icon

VMware OS Optimization Tool

Image optimize you must with osot!

Changelog

  • nope 🙁

Update: OSOT didn’t receive an update, someone only edited the page according to Hilko.

Horizon Helpdesk Utility

Besides ControlUp, the helpdesk fling is the best tool to help your users.

Changelog

Version 1.5.0.24

  • Added support for Horizon 8.1

HCIBench

HCIBench stands for “Hyper-converged Infrastructure Benchmark”. It’s essentially an automation wrapper around the popular and proven open source benchmark tools Vdbench and Fio that makes it easier to automate testing across an HCI cluster. HCIBench aims to simplify and accelerate customer POC performance testing in a consistent and controlled way. The tool fully automates the end-to-end process of deploying test VMs, coordinating workload runs, aggregating test results, performance analysis and collecting necessary data for troubleshooting purposes.

Changelog

Version 2.5.3

  • Fixed graphite permission issue which blocked vdbench/fio grafana display
  • Updated drop cache script to make it compatible with upcoming vSphere
  • md5sum: 622625cc7a551bd7bf07ff4f19a57a17 HCIBench_2.5.3.ova

Horizon Reach

Again, if you’re not a ControlUp customer, Reach is the next best thing to manage your Horizon environment.

Changelog

Version 1.3.1.2

  • Added support for Horizon 8.1
  • Bugfixes

Workspace ONE Discovery

VMware Workspace ONE UEM is used to manage Windows 10 endpoints, whether it be Certificate Management, Application Deployment or Profile Management. The Discovery Fling enables you to view these from the device point of view and review the Workspace ONE related services, which applications have been successfully deployed, use the granular view to see exactly what has been configured with Profiles, view User & Machine certificates and see which Microsoft Windows Updates have been applied.

Changelog

February 16, 2021 – Version 1.2

  • Replaced icon
  • New logo 🙂

App Volumes Migration Utility

App Volumes Migration Utility allows admins to migrate AppStacks managed by VMware App Volumes 2.18 to the new application package format of App Volumes 4. The format of these packages in App Volumes 4 has evolved to improve performance and help simplify application management.

Changelog

Version 1.0.7 Update

  • Migration fails if there are blacklisted registry entries containing embedded NULL chars.
  • File system migration fails if there are directories with a trailing DOT in the name (e.g. Microsoft.).

Filtering/Searching and pagination with the Python module for VMware Horizon

Yesterday I added the first method that makes use of filtering to the VMware Horizon Python module, while the day before that I added pagination. VMware{Code} has a document describing the available options for both, but let me give some explanation.

Pagination

Pagination is where you perform a query but by default only get a limited number of objects returned. The rest of the objects are available on the next page or pages. This is exactly what I ran into with the vmware.hv.helper PowerShell module a long time ago. With the REST APIs this is rather easy to handle, since if there are more pages/objects left the response headers will contain a key named HAS_MORE_RECORDS. For all the methods that I add where pagination is supported you don't need to handle this yourself, as I have built it into the method itself. What I did add was the option to change the maximum page size. I default to 100 and the maximum is 1000; if you supply an integer higher than 1000 this will be corrected to 1000.
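
Under the hood, the per-method loop boils down to something like this minimal sketch with requests (the page and size query parameter names and the exact header handling are assumptions here; the module's own methods take care of authentication and error handling):

import requests

def get_all_pages(url, headers, maxpagesize=100):
    # Clamp the page size, mirroring what the module does
    maxpagesize = min(maxpagesize, 1000)
    results = []
    page = 1
    while True:
        response = requests.get(url, headers=headers, verify=False,
                                params={"page": page, "size": maxpagesize})
        response.raise_for_status()
        results += response.json()
        # Keep fetching as long as the server signals more records
        if "HAS_MORE_RECORDS" not in response.headers:
            return results
        page += 1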

Filtering

Filtering needs some more work from the user of the module to be able to use it.

What options are there for filtering?

For the type we have: And, Or and Not

For the filters themselves there are: Equals, NotEquals, Contains, StartsWith and Between.

The formula is you pick one from the first row and combine that with one or more from the second row.

To apply these the document describes the base schema like this:

{
    "type": "And",
    "filter": <filter object>
}

and a filter object looks like this:

{
    "type":"Equals",
    "name":"domain",
    "value":"ad-example0"
}

or this for a range:

{
    "type":"Between",
    "name":"assignedUsers",
    "fromValue":"10",
    "toValue":"20"
}

Combining both into a single object looks like this:

{
    "type":"Not",
    "filter": {
        "type":"Equals",
        "name":"domain",
        "value":"ad-example0"
    }
}

This all looks like a dictionary with a nested dictionary when translating it to Python but when you have multiple filters it suddenly looks like this:

{
    "type":"And",
  "filters": [
        {
            "type":"Equals", 
            "name":"domain",
            "value":"ad-example0"
        },
        {
            "type":"StartsWith", 
            "name":"name",
            "value":"test"
        }
    ]
}

otherwise known as a dictionary with a list of dictionaries in it, and since the latter also works with a single dict inside the list, I have taken that route. The document also describes encoding and minifying the JSON so it works for a REST API call, but I have done all of that for you, so no need to worry about it: just build the dictionary and you are good!
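
For the curious, the minify-and-encode step amounts to something like this sketch (the filter query parameter name is an assumption; the module builds the final URL for you):

import json
import urllib.parse

filter_dict = {
    "type": "And",
    "filters": [
        {"type": "Contains", "name": "name", "value": "LP-00"}
    ]
}

# Minify: dump the dict without whitespace, then URL-encode it
minified = json.dumps(filter_dict, separators=(",", ":"))
encoded = urllib.parse.quote(minified)
print(encoded)  # ready to be appended to the request as ?filter=<encoded>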

Now let’s actually perform a search

First I create my base object with the type And and a list for the filters key:

filter_dict = {}
filter_dict["type"] = "And"
filter_dict["filters"] = []

Next I create the filter object where the type is Contains and I filter on the field name with the value LP-00:

filter1={}
filter1["type"] = "Contains"
filter1["name"] = "name"
filter1["value"] = "LP-00"

And now I add the filter1 object to the filter_dict filters list:

filter["filters"].append(filter1)

and I get the machines with a pagesize of 1 to show the pagination (the pool with these machines only has 2 😉 )

machines = obj.get_machines(maxpagesize=1, filter = filter_dict)

And this would be the entire Python script:

import requests, getpass, urllib, json
import vmware_horizon

requests.packages.urllib3.disable_warnings()

url="https://loftcbr01.loft.lab"
username = "m_wouter"
domain = "loft.lab"
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()

obj = vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

filter_dict = {}
filter_dict["type"] = "And"
filter_dict["filters"] = []
filter1={}
filter1["type"] = "Contains"
filter1["name"] = "name"
filter1["value"] = "LP-00"

filter["filters"].append(filter1)

machines = obj.get_machines(maxpagesize=1, filter = filter_dict)

for i in machines:
    print(i["name"])

hvconnectionobj.hv_disconnect()

And it shows this in Python:

Calling the EMEA #vCommunity for a Monthly Virtual #vBreakfast EMEA!

For years Fred Hofer has been organizing the vBreakfast for VMworld EMEA across the street from the Fira in a very nice place. A big thank you to Fred for doing this every year!! Due to the mess this world currently is in we couldn't do it in person this year, so we moved virtual. Big thanks to Runecast for organizing the virtual space we could use, though sadly we had to do it without the vWorld-famous Grumpy Waiter. I really liked the way it was set up, with separate tables and everything. The only issue was that it seemed to use whatever microphone, speaker and webcam were configured in Chrome, and switching was almost impossible.

During this meeting we came up with the idea to organize a monthly virtual vBreakfast for EMEA. Kev Johnson offered to do it again on the site that Runecast used, but I think it would be more flexible if we used something like Zoom. Luckily I can use a paid license, so it's no problem to make this a two-hour-long come-and-go-whenever-you-like session. The talk can be tech or non-tech, whatever you like!

When?

Every last Friday of the month (yes, we'll move the December one!)

What time?

7.30am to 9.30am UK time (8.30 to 10.30 Amsterdam, 9.30 to 11.30 Israel time, you get what I am saying)

How do I attend?

Drop me an email at vbreakfastemea at gmail.com and I will forward you the invite

But I don't live in EMEA, can I still attend?

Hell yeah, I don’t give a flying F..K where you are from, everyone is welcome and the more the better!

I don’t use VMware but Nutanix or Hyper-V or Citrix?

Didn’t I just say everyone is welcome? Yes I also don’t care what hypervisor or EUC platform you use!

I work for company XYZ and want to sponsor this!

I think the only way of sponsoring that could work is if you organize a real breakfast for everyone that signs up. If you are prepared to do that, we won't offer more in return than a mention and a thank you, as this is an organized unstructured meeting to meet up with friends.