Getting Horizon Events using the REST API

In a previous post I described how to configure the Horizon event database using the REST APIs. In this post I will describe how you can retrieve those events using a script that I have created. Getting the first few events is easy: just call the /external/v1/audit-events endpoint and you get the first batch of events in an unsorted fashion. The script that I have created gets the events since a certain date and, if you want, only the events with a certain severity.
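To give an idea of what the script wraps, below is a minimal sketch of the raw calls involved. The login and audit-events endpoints are listed in the API Explorer, but the exact pagination parameters are an assumption on my part, so verify them against your Horizon version.

$cs      = "pod1cbr1.loft.lab"
$body    = @{ username = "m_wouter"; domain = "loft.lab"; password = "P@ssw0rd" } | ConvertTo-Json
# -SkipCertificateCheck only if your lab uses self-signed certificates
$login   = Invoke-RestMethod -Uri "https://$cs/rest/login" -Method Post -Body $body -ContentType "application/json" -SkipCertificateCheck
$headers = @{ Authorization = "Bearer $($login.access_token)" }
# First batch of (unsorted) audit events; the page/size parameters are an assumption
$events  = Invoke-RestMethod -Uri "https://$cs/rest/external/v1/audit-events?page=1&size=500" -Headers $headers -SkipCertificateCheck
# Client-side severity filter, similar to what -AuditSeverityTypes does
$events | Where-Object { $_.severity -in "ERROR","AUDIT_FAIL" }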

The script is written for PowerShell 7 and has been tested with 7.3.4.

Parameters

I have built four parameters into this script: two are mandatory and two are optional.

  • Credential
    • This optional parameter needs to be a credential object from Get-Credential. If it is not supplied you will be asked to provide credentials as domain\username plus a password.
  • ConnectionServerFQDN
    • This mandatory parameter needs to be a string with the FQDN of the connection server to connect to, e.g. server.domain.dom.
  • SinceDate
    • This mandatory parameter needs to be a datetime object for the earliest date to get events for. For example, use (Get-Date).AddDays(-100) to get events up to 100 days old.
  • AuditSeverityTypes
    • This optional parameter needs to be an array of severity types to get events for. Allowed types are: INFO, WARNING, ERROR, AUDIT_SUCCESS, AUDIT_FAIL, UNKNOWN.

Usage

First I get my credentials using Get-Credential; you can also import them from an XML file using Import-Clixml creds.xml, for example.

$credentials = Get-Credential

Next I get all events for the last day using:

.\Horizon_Rest_Get_Events.ps1 -ConnectionServerFQDN pod1cbr1.loft.lab -sincedate (get-date).AddDays(-1) -Credential $credentials

Or just the ERROR and AUDIT_FAIL events using:

.\Horizon_Rest_Get_Events.ps1 -ConnectionServerFQDN pod1cbr1.loft.lab -sincedate (get-date).AddDays(-100) -Credential $credentials -auditseveritytypes "ERROR","AUDIT_FAIL"

Yes, I had to go back quite a few days further to find error events.

The Script

The script itself can be found on my GitHub.

My portable Lab

For a long time I have been wanting some kind of portable lab, but I had a few requirements that were limiting me:

  • Has KVM (for troubleshooting, as things always break for me)
  • Low power consumption
  • Doesn't require a dedicated suitcase for transportation

Not so long ago I got a newer laptop from ControlUp to replace my (not that old) annoyingly hot and noisy Dell Latitude 5401. I never could get that thing even a bit quiet without hurting performance. While I found it annoying to work on daily, I thought it might be a good match to turn this laptop into a portable lab.

The Host

One of the advantages was that the laptop has an Intel NIC built in, so I decided to see if I could get ESXi installed on it. After some tinkering I found out that the only thing required was to change the disk mode in the BIOS from RAID to AHCI (why, Dell? why? the thing has only one NVMe…) and after that change ESXi installed without issues.

Now, with the 16GB that my machine had I could run ESXi, but it was far from enough for what I was going to need, and neither was the 512GB of storage. I did have a 1TB Intel 660p NVMe drive in one of my servers that I honestly wasn't even using, so that was swapped in quickly. I also had a Gigabyte Brix box with 64GB of RAM that I had planned to run 24/7, but with the rising energy costs I never did and it was mostly gathering dust, as my regular homelab has more than enough CPU/memory. That 64GB consisted of two Lexar modules, and after another quick swap you might have seen this picture on Twitter.

Specs

Connectivity

Routing

So I had my host with built-in KVM & UPS; now I also needed connectivity. When using it at home it has its own dedicated VLAN, but on the go I needed some kind of router that can serve that same VLAN. I did not want to go for a virtual router, as I do not want to connect an ESXi host to a network unknown to me. Again I had a few requirements:

  • USB-powered (requiring a single power socket is already enough)
  • Small
  • Two ports (WAN and LAN side)
  • Gigabit
  • Wi-Fi (so my regular laptop can connect)
  • Bonus: USB tethering for my phone in case all other connection options fail

In the end I decided to order the GL.iNet GL-AR300M16. While there might have been cheaper options, there aren't that many that have multiple RJ45 ports and also run OpenWrt, which is a very flexible OS for routers like these. The only things I had to change on the router were the IP addresses that it uses, plus I had to disable DHCP as I wanted the domain controller to handle that. It switches very easily between wired and tethering, so it's checking all the boxes that I needed.

Storage

Yes, I know I said this machine is a mobile lab, but I did want an option to connect shared storage in case I want to do maintenance etc. I did not want to use the built-in NIC for this, so I went looking for a USB-to-RJ45 adapter that supports 2.5Gbps, as I have a QNAP TS-464 with 2.5Gbps connectivity. The first step was to check the USB NIC fling for which adapters will work. Now, this list doesn't include the speed, but a quick look at Amazon showed me that the Realtek 8156 will work, and the images (not the text!) for this CY USB-C to 2.5Gbps adapter on Amazon showed that it should work. So another quick Amazon order later I was able to prove it worked and I had connectivity to my NAS.

The Setup

The Host

At the host level I didn't need to make many changes besides the network configuration. The only things done so far were to install the USB NIC fling, configure NTP, add storage and, last but not least, enable TPS! For a lab this small any gain in memory availability is essential, so TPS is really needed.
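Since inter-VM TPS is salted off by default on modern ESXi, enabling it comes down to a single advanced setting. A minimal PowerCLI sketch, assuming you are already connected to the host (setting Mem.ShareForceSalting to 0 re-enables inter-VM page sharing):

# Re-enable inter-VM TPS by turning off page salting
Get-VMHost | Get-AdvancedSetting -Name "Mem.ShareForceSalting" |
    Set-AdvancedSetting -Value 0 -Confirm:$false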

vCenter

I went with the smallest vCenter option available, and for now it's running okay with 2 vCPUs and 14GB of RAM.

The domain

The first VM to be deployed was my domain controller, as that's needed for just about everything I do. I went for a domain called LoaL.lab, which is an abbreviation for Lab On A Laptop. As always I name my VMs with the environment name in them, so the first VM to be deployed was LoaLDC, which was quickly transformed into a domain controller, DNS and DHCP server.
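For reference, promoting such a DC is only a handful of PowerShell commands. A rough sketch as I would run it on the freshly deployed LoaLDC VM (names are from my lab; DHCP scope configuration is not shown):

# Promote LoaLDC to the first domain controller of LoaL.lab, with DNS and DHCP
Install-WindowsFeature AD-Domain-Services, DHCP -IncludeManagementTools
Install-ADDSForest -DomainName "LoaL.lab" -DomainNetbiosName "LOAL" -InstallDns
# After the reboot: authorize the DHCP server in AD
Add-DhcpServerInDC -DnsName "loaldc.loal.lab"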

VMs

The goal for this portable lab was to have something that can showcase ControlUp integrated with vSphere, Horizon, App Volumes & DEM, so I ended up with these VMs. As you can see, the connection server is more or less the workhorse of this setup and has multiple functions:

Name    | Function                                                   | CPU | Memory (GB)
LoaLVC  | vCenter (Tiny)                                             | 2   | 14
LoalDC  | Domain Controller                                          | 2   | 6
LoaLCS  | VMware Horizon Connection Server, SQL Server, File server  | 2   | 12
LoaLAPP | App Volumes Server                                         | 2   | 6
LoalCU  | ControlUp Monitor                                          | 2   | 6
LoalRec | VMware Horizon Session Recorder (needed for demo)          | 2   | 4
LoalW10 | Static management VM, also available via Horizon           | 2   | 8

Desktop Pools & RDS Farms

So besides the manual desktop pool for the management VM, I also have an Instant Clone pool and an RDS farm. For both I have an App Volume available with Notepad++, and one of the test users also has a writable volume. All of this still works while I have made the desktop and RDS machines as small as possible: 1 vCPU and 1GB of RAM! Don't expect to be able to do a lot, but I can log in and start Notepad++ and that's enough here. In the VDI pool there are two machines, while there is a single RDS host deployed. Both golden images were optimized to death with just about everything selected in the VMware OS Optimization Tool.

On the RDS host the App Volume is published using the per-user on-demand integration that was added in Horizon 2206. See this article: Revolutionize virtual apps by publishing apps on demand on generic RDSH servers – VMware End-User Computing Blog.

Notepad++ as on-demand App Volume
The login sequence
And as seen from App Volumes

Resource Consumption

So with 64GB of RAM this is clearly the bottleneck for this system, but how does it look after it has been running for a few hours? I am receiving a warning about memory consumption as the current usage is 58GB, but I don't see any swapping or ballooning, so that's nice. I also like the Shared Common metric.

Power Consumption

While it might be less relevant for a mobile lab, I am interested in the power consumption, and this is the graph for the current period. This data comes from a Blitzwolf smart plug connected to my Homey. The peak at boot time comes close to 80W, but it seems to stabilize after that.

The ControlUp setup

As a ControlUp employee, one of this lab's use cases was to show what we can do with ControlUp. This is a brand new CU environment that is mainly using Real-Time DX, but for fun I have also deployed the Edge DX agent to a few servers.

Introducing the Horizon Golden Image Deployment Tool

Intro

Some of you might have seen previews on the socials, but for the last few months I have been working hard on a GUI-based tool to deploy golden images to VMware Horizon Instant Clone desktop pools and RDS farms. This is because who isn't sick and tired of having to go into each and every pod admin interface and do a million clicks to deploy a few new golden images?

The work started mid-December and, as usual, the first 75% was done pretty quickly, so mid-January I had a first working version. While I expected a first version that would only support desktop pools to be available mid-February (VMUG Virtual EUC Day, anyone?), this build actually already had most of the parts in place to support both desktop pools and RDS farms. After this first build my regular work for ControlUp started picking up again, so I had less time to fix the numerous bugs that I encountered. Well, bugs? Most were logic errors on my part, but most of them have been ironed out by now. All in all, the version that you can use today has cost me ~80 hours in building & testing and many, many more hours thinking about solutions.

Is the tool perfect? Definitely not, but it works, it's more than what we had before, and I will be looking into improving it even more.

The Tool

I guess you got bored with the talking and would like to see the tool now. The content of this blog post might get its own page later for documentation purposes. The tool itself can be downloaded from this GitHub repository. Make sure to grab both the xaml and ps1 files, put them in a single folder and you should be good. The tool runs entirely on PowerShell and uses the WPF framework for the GUI.


Requirements

There are only two real requirements:

  • PowerShell 7.3 or later (for performance reasons I used some PS 7.3 options, so the tool WILL break with an older version; see the sketch after this list.)
  • Horizon 8 2206 or later (the reason for this is that otherwise the secondary images wouldn't be available, and that's a feature I definitely wanted in there.)
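If you want to fail fast on the PowerShell side, the standard guard looks like this (I'm not saying the tool uses this exact check; treat it as a suggestion):

#Requires -Version 7.3
# or, as a runtime check:
if ($PSVersionTable.PSVersion -lt [version]"7.3") {
    throw "This tool requires PowerShell 7.3 or later."
}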

Deploying a new Golden Image

For normal usage you can just start the ps1 file.

This will bring you to this tab. Before the first use, though, you need to go to the configuration page.

Fill in one of your connection servers plus credentials and hit the test button. If you have a Cloud Pod setup, the other pods will be detected automatically. Make sure to check the Ignore Certificate Errors checkbox in case that's needed!

If the test was successful you are good to go to either the Desktop Pools or RDS Farms tab. Both are almost completely the same, so I will only show desktop pools.

Hit the connect button and all Instant Clone pools or farms from all pods will be auto-populated into the first pull-down menu.

The second pull-down menu allows you to select a new source VM.

And the third pull-down selects the snapshot (I guess you guessed that already; I might need to add labels, but they are so ugly in WPF).

To the right you have all kinds of options that you should recognize from the regular GUI, including the Add vTPM option that came with Horizon 2206. This checkbox isn't in the RDS Farms tab, as we currently simply don't have the option to have Horizon add a vTPM there. If the options are valid (the core count and cores-per-socket numbers have to work together: you can't do 3 cores with 2 cores per socket!) the Deploy Golden Image button becomes available. Hit this to start the deployment; you can check the status by hitting the refresh button (don't tell anyone, but that function does exactly the same as a connect).
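That cores check boils down to a simple divisibility test. A sketch of the rule as I understand it (the variable and button names are mine, not the tool's):

# 4 cores with 2 cores per socket works (2 sockets); 3 with 2 does not
$valid = ($cores -ge $coresPerSocket) -and (($cores % $coresPerSocket) -eq 0)
$DeployButton.IsEnabled = $valid   # hypothetical WPF button object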

Handling Secondary Images

By default a secondary image will not be pushed to any machines. Just select the Push As Secondary Image checkbox and hit the Deploy Golden Image button.

What you can also do is select one or more of the machines and deploy the golden image to those machines.

Once a secondary image has been deployed, the three other buttons become available.

From top to bottom you can cancel the secondary image completely, apply the golden image to more desktops (selecting ones that already have it won't break anything; it will just deploy when possible) or promote the secondary image to the primary golden image for the pool. The latter will cause a rebuild of ALL machines, including the ones already running on the image. In the future I will also add a button to configure a machine to run the original image again.

Settings and logs location

All settings are stored in an XML file in %appdata%\HGIDTool. This includes the password configured in the Settings tab, stored as a regular encrypted PowerShell credentials object. If you didn't hit save on the Settings tab, this is done automatically when closing the tool.
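This is the standard Export-Clixml pattern, where PowerShell encrypts the password with DPAPI so only the same user on the same machine can decrypt it. A sketch of roughly what happens (the file name is illustrative, not necessarily what the tool uses):

# Save: the password inside the PSCredential is encrypted per user/machine
$settings = @{ Credential = Get-Credential; ConnectionServer = "pod1cbr1.loft.lab" }
$settings | Export-Clixml "$env:APPDATA\HGIDTool\settings.xml"
# Load: decryption only works for the same Windows user on the same machine
$settings = Import-Clixml "$env:APPDATA\HGIDTool\settings.xml"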

In case you want to borrow some of my code: the log files contain every API call that I do to get data, push an image or handle secondary images. This is the same output as you'd get when running with -Verbose, so that switch is only needed when you're troubleshooting the tool.

Horizon 8 (2209) REST API changelog

I have just added a page that lists the Horizon 8 2209 changes in the APIs. If you're interested in the overall release notes, please see this link.

Some of the changes that stand out are brand new calls related to RADIUS & SAML settings, but also options to bind to a domain and to set up unauthenticated users, licenses and pre-logon settings.

As always, a complete overview of all calls can be found in the API Explorer, but this is the list of changes.

/config/v1/admin-users-or-groups/permissions get
/config/v1/gssapi-authenticators get
/config/v1/gssapi-authenticators/{id} get
/config/v1/licenses get
/config/v1/pre-logon-settings get
/config/v1/radius-authenticators get
/config/v1/radius-authenticators/{id} get
/config/v1/saml-authenticators get
/config/v1/saml-authenticators/{id} get
/config/v1/true-sso-enrollment-servers get
/config/v1/true-sso-enrollment-servers/{id} get
/config/v1/unauthenticated-access-users get
/config/v1/unauthenticated-access-users post
/config/v1/unauthenticated-access-users/{id} delete
/config/v1/unauthenticated-access-users/{id} get
/config/v2/connection-servers get
/config/v2/connection-servers/{id} get
/config/v2/settings/security get
/config/v3/settings get
/config/v3/settings put
/config/v3/settings/general get
/config/v3/settings/general put
/external/v1/ad-domains/action/bind post
/external/v1/ad-domains/action/update-auxiliary-accounts post
/external/v1/domains get
/external/v2/ad-users-or-groups/action/hold post
/external/v2/ad-users-or-groups/action/release-hold post
/login get
/monitor/v3/gateways get
/monitor/v3/connection-servers get
/monitor/v2/connection-servers/{id} get
/monitor/v2/gateways/{id} get

Horizon 8 API changelog pages now available

For several versions now, each VMware Horizon release notes page has included this mention:

This links to this page: VMware Horizon REST APIs (84155). For a while this page was updated with the latest additions to the REST APIs, but I guess people didn't want to do that anymore, so that page now simply links to the API Explorer:

Since I used this changelog to decide what I wanted to blog about, I thought it was time to create a changelog myself. What I did was download all the swagger specifications and compare them. For reasons I had to do this in Excel, but at least I now have a nice source for this (ping me if you would like to have the Excel file). The result is a new menu item at the top of my blog with the changelog for every Horizon 8 version. Yes, I know the REST APIs have been available since 7.10, but I decided to start with 8.0. Please use the menu on top or the links below to go to the changelog for the version that you would like to see.
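The same diff can also be scripted if you prefer that over Excel. A PowerShell sketch, assuming you saved two swagger specifications locally (the file names are made up):

# List paths that exist in the 2209 spec but not in the 2206 spec
$old = Get-Content .\swagger-2206.json -Raw | ConvertFrom-Json
$new = Get-Content .\swagger-2209.json -Raw | ConvertFrom-Json
$oldPaths = $old.paths.PSObject.Properties.Name
$new.paths.PSObject.Properties | Where-Object { $_.Name -notin $oldPaths } | ForEach-Object {
    # print each new path with its methods (get/post/put/delete)
    "$($_.Name) $($_.Value.PSObject.Properties.Name -join ', ')"
}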

Horizon 8.0 (2006)

Horizon 8.1 (2103)

Horizon 8.2 (2106)

Horizon 8.3 (2109)

Horizon 8.4 (2111)

Horizon 8.5 (2203) – no changes

Horizon 8.6 (2206)


Creating a RDS farm using the Python module for VMware Horizon

One of the goals and hopes I had with my 100DaysOfCode challenge (I am writing this on day 100!) was that the Horizon REST APIs to create desktop pools and RDS farms would be available by the end. Only half of that came true: with Horizon 8 2103 we can finally create an RDS farm using those REST APIs. I have decided to add this to the Python module based on a dictionary that the user sends to the new_farm method. I could still add a fully fledged function, but that would require a lot of arguments; using **kwargs is an option, but then the user would still need to find out what to use.

First I need to know what JSON data I actually need, so let's have a look at the API Explorer page to get a grip on this:

{
  "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
  "automated_farm_settings": {
    "customization_settings": {
      "ad_container_rdn": "CN=Computers",
      "cloneprep_customization_settings": {
        "post_synchronization_script_name": "cloneprep_postsync_script",
        "post_synchronization_script_parameters": "p1 p2 p3",
        "power_off_script_name": "cloneprep_poweroff_script",
        "power_off_script_parameters": "p1 p2 p3",
        "priming_computer_account": "a219420d-4799-4517-8f78-39c74c7c4efc"
      },
      "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "reuse_pre_existing_accounts": false
    },
    "enable_provisioning": true,
    "max_session_type": "LIMITED",
    "max_sessions": 50,
    "min_ready_vms": 0,
    "nics": [
      {
        "network_interface_card_id": "c9896e51-48a2-4d82-ae9e-a0246981b473",
        "network_label_assignment_specs": [
          {
            "enabled": true,
            "max_label": 1,
            "max_label_type": "LIMITED",
            "network_label_name": "vm-network"
          }
        ]
      }
    ],
    "pattern_naming_settings": {
      "max_number_of_rds_servers": 5,
      "naming_pattern": "vm-{n}-sales"
    },
    "provisioning_settings": {
      "base_snapshot_id": "snapshot-1",
      "datacenter_id": "datacenter-1",
      "host_or_cluster_id": "domain-s425",
      "im_stream_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "im_tag_id": "3d45b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
      "parent_vm_id": "vm-2",
      "resource_pool_id": "resgroup-1",
      "vm_folder_id": "group-v1"
    },
    "stop_provisioning_on_error": true,
    "storage_settings": {
      "datastores": [
        {
          "datastore_id": "datastore-1"
        }
      ],
      "replica_disk_datastore_id": "datastore-1",
      "use_separate_datastores_replica_and_os_disks": false,
      "use_view_storage_accelerator": false,
      "use_vsan": false
    },
    "transparent_page_sharing_scope": "VM",
    "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
  },
  "description": "Farm Description",
  "display_name": "ManualFarm",
  "display_protocol_settings": {
    "allow_users_to_choose_protocol": true,
    "default_display_protocol": "PCOIP",
    "grid_vgpus_enabled": true,
    "session_collaboration_enabled": false
  },
  "enabled": true,
  "load_balancer_settings": {
    "cpu_threshold": 10,
    "disk_queue_length_threshold": 15,
    "disk_read_latency_threshold": 10,
    "disk_write_latency_threshold": 15,
    "include_session_count": true,
    "memory_threshold": 10
  },
  "name": "ManualFarm",
  "rds_server_ids": [
    "5134796a-322g-5fe5-343f-4daa5d25ebfe",
    "2a43f96c-102b-4ed3-953f-35deg43d43b0ge"
  ],
  "server_error_threshold": 0,
  "session_settings": {
    "disconnected_session_timeout_minutes": 5,
    "disconnected_session_timeout_policy": "NEVER",
    "empty_session_timeout_minutes": 5,
    "empty_session_timeout_policy": "AFTER",
    "logoff_after_timeout": false,
    "pre_launch_session_timeout_minutes": 10,
    "pre_launch_session_timeout_policy": "AFTER"
  },
  "type": "MANUAL",
  "use_custom_script_for_load_balancing": false
}

This also includes some settings that are not required, so for my own farm I settled on the JSON below. This is for an Instant Clone farm.

{
    "access_group_id": "6fd4638a-381f-4518-aed6-042aa3d9f14c",
    "automated_farm_settings": {
        "customization_settings": {
            "ad_container_rdn": "OU=Pod1,OU=RDS,OU=VMware,OU=EUC",
            "instant_clone_domain_account_id": "6f85b3a5-e7d0-4ad6-a1e3-37168dd1ed51",
            "reuse_pre_existing_accounts": true
        },
        "enable_provisioning": false,
        "max_session_type": "LIMITED",
        "max_sessions": 50,
        "min_ready_vms": 1,
        "pattern_naming_settings": {
            "max_number_of_rds_servers": 2,
            "naming_pattern": "vm-{n}-sales"
        },
        "provisioning_settings": {
            "base_snapshot_id": "snapshot-1",
            "datacenter_id": "datacenter-1",
            "host_or_cluster_id": "domain-s425",
            "parent_vm_id": "vm-2",
            "resource_pool_id": "resgroup-1",
            "vm_folder_id": "group-v1"
        },
        "stop_provisioning_on_error": true,
        "storage_settings": {
            "datastores": [
                {
                    "datastore_id": "datastore-1"
                }
            ],
            "use_separate_datastores_replica_and_os_disks": false,
            "use_view_storage_accelerator": false,
            "use_vsan": false
        },
        "transparent_page_sharing_scope": "VM",
        "vcenter_id": "f148f3e8-db0e-4abb-9c33-7e5205ccd360"
    },
    "description": "demo_farm",
    "display_name": "demo_farm",
    "display_protocol_settings": {
        "allow_users_to_choose_protocol": true,
        "default_display_protocol": "BLAST",
        "grid_vgpus_enabled": false,
        "session_collaboration_enabled": true
    },
    "enabled": false,
    "load_balancer_settings": {
        "cpu_threshold": 10,
        "disk_queue_length_threshold": 15,
        "disk_read_latency_threshold": 10,
        "disk_write_latency_threshold": 15,
        "include_session_count": true,
        "memory_threshold": 10
    },
    "name": "demo_farm",
    "server_error_threshold": 0,
    "session_settings": {
        "disconnected_session_timeout_minutes": 5,
        "disconnected_session_timeout_policy": "NEVER",
        "empty_session_timeout_minutes": 5,
        "empty_session_timeout_policy": "AFTER",
        "logoff_after_timeout": false,
        "pre_launch_session_timeout_minutes": 10,
        "pre_launch_session_timeout_policy": "AFTER"
    },
    "type": "AUTOMATED",
    "use_custom_script_for_load_balancing": false
}

As said, I send a dictionary to the method, so let's import the data into a dict called data (you can print it to check the contents). The dictionary needs to follow this specific order of keys, which is why starting from a JSON file is very useful.

import json

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)

As you can see in both the JSON and the output, there are a lot of things we can change and some things that we need to change, like the IDs for all the components: vCenter, base VM, base snapshot and more. First I need the access_group_id; this can be retrieved using the get_local_access_groups method. For each of these I will also set the required value in the dictionary.

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
data["access_group_id"] = local_access_group["id"]

Then it's time for the Instant Clone admin ID:

ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]

For the base VM and snapshot IDs I used the same approach, but a bit differently, as I had already used this method in another script:

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid

Host or cluster ID:

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]

Resource pool:

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]

The VM folder again is a bit different, as I have to get the ID from one of the children objects:

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]

The datacenter and vCenter IDs I already had to grab for the base VM and base snapshot, so I can just add them:

data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["vcenter_id"] = vcid

Datastores are a bit more funky, as there can be multiple, so I needed to create a list first and then populate it based on the names of the datastores I have:

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list

For my final script I put things in a slightly different order, and I decided to change a whole lot more options, but if you have your JSON perfected this shouldn't always be required. Also note that where the JSON has true/false, I use Python's True/False.

import requests, getpass, urllib, json

import vmware_horizon

requests.packages.urllib3.disable_warnings()
url = input("URL\n")

username = input("Username\n")

domain = input("Domain\n")

pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")

monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
config=vmware_horizon.Config(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)

base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])

base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")

snapid=base_snapshot["id"]

host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i

resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i

vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children=(i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i

datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)

local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")

data["access_group_id"] = local_access_group["id"]
data["automated_farm_settings"]["customization_settings"]["ad_container_rdn"] = "OU=Pod1,OU=RDS,OU=VMware,OU=EUC"
data["automated_farm_settings"]["customization_settings"]["reuse_pre_existing_accounts"] = True
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]
data["automated_farm_settings"]["enable_provisioning"] = False
data["automated_farm_settings"]["max_sessions"] = 50
data["automated_farm_settings"]["min_ready_vms"] = 3
data["automated_farm_settings"]["pattern_naming_settings"]["max_number_of_rds_servers"] = 4
data["automated_farm_settings"]["pattern_naming_settings"]["naming_pattern"] = "farmdemo-{n:fixed=3}"
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]
data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["stop_provisioning_on_error"] = True
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list
data["automated_farm_settings"]["transparent_page_sharing_scope"] = "GLOBAL"
data["automated_farm_settings"]["vcenter_id"] = vcid
data["description"] = "Python_demo_farm"
data["display_name"] = "Python_demo_farm"
data["display_protocol_settings"]["allow_users_to_choose_protocol"] = True
data["display_protocol_settings"]["default_display_protocol"] = "BLAST"
data["display_protocol_settings"]["session_collaboration_enabled"] = True
data["enabled"] = False
data["load_balancer_settings"]["cpu_threshold"] = 12
data["load_balancer_settings"]["disk_queue_length_threshold"] = 16
data["load_balancer_settings"]["disk_read_latency_threshold"] = 12
data["load_balancer_settings"]["disk_write_latency_threshold"] = 16
data["load_balancer_settings"]["include_session_count"] = True
data["load_balancer_settings"]["memory_threshold"] = 12
data["name"] = "Python_demo_farm"
data["session_settings"]["disconnected_session_timeout_minutes"] = 5
data["session_settings"]["disconnected_session_timeout_policy"] = "NEVER"
data["session_settings"]["empty_session_timeout_minutes"] = 6
data["session_settings"]["empty_session_timeout_policy"] = "AFTER"
data["session_settings"]["logoff_after_timeout"] = False
data["session_settings"]["pre_launch_session_timeout_minutes"] = 12
data["session_settings"]["pre_launch_session_timeout_policy"] = "AFTER"
data["type"] = "AUTOMATED"

inventory.new_farm(farm_data=data)

end=hvconnectionobj.hv_disconnect()
print(end)

How does this look? Actually, you don't see a lot happening, but the farm will have been created.

As always, the script can be found on my GitHub in the examples folder, together with the JSON file.

With this I am closing my 100DaysOfCode challenge, but I pledge to keep maintaining the Python module and I will extend it when new REST API calls arrive for VMware Horizon.

The VMware Labs flings monthly for March 2021 – New Website

A day late, but better late than never: this is your monthly overview with all the latest and greatest VMware flings. The flings site received a fresh new look; head over to https://flings.vmware.com to have a look yourself. I am not sure if the (are they new?) tags for updated or new flings are entirely accurate yet: one fling has been marked updated but seems to be new, while for another new one I could find a blog post from last month that I missed in my overview, so I'm not sure what happened there. Overall I see four new flings and ten that received an update.

New Releases

Configuration Wizard for Nuance PowerMic

vRealize Automation Code Stream CLI

Hillview: Distributed Data Visualization

SDDC Import/Export for VMware Cloud on AWS

Updates

Workspace ONE Mobileconfig Importer

Workspace One UEM Workload Migration Tool

vSAN Hardware Compatibility List Checker

Vmss2core

vRealize Build Tools

Horizon Cloud Pod Architecture Tools

Workspace ONE App Analyzer for macOS

App Volumes Packaging Utility

DoD Security Technical Implementation Guide(STIG) ESXi VIB

VMware OS Optimization Tool

New Releases


Configuration Wizard for Nuance PowerMic

Nuance PowerMics are the leading dictation device used in Healthcare today.
This handy standalone Fling will assist in determining the optimal PowerMic configuration for a specific customer environment.

To determine the configuration that works best for you, we need to know a few details about the customer environment:

  • Endpoint type
  • Endpoint vendor
  • Endpoint operating system
  • Single/nested mode
  • Horizon protocol

The PowerMic Configuration Wizard will then provide the specific settings for optimal PowerMic performance and accuracy in the customer's environment.


vRealize Automation Code Stream CLI

vRealize Automation Code Stream CLI is a command line tool written in Go to interact with the vRealize Automation Code Stream APIs. This Fling is written to help automate Code Stream and provide a simple way to migrate content between Code Stream instances and projects.

  • Import and Export Code Stream artefacts such as Pipelines, Variables, Endpoints
  • Perform CRUD operations on Code Stream artefacts such as Pipelines, Variables, Endpoints
  • Trigger Executions of Pipelines


Hillview: Distributed Data Visualization

Hillview is a simple cloud-based spreadsheet program for browsing large data collections. The data manipulated is read-only. Users can sort, find, filter, transform, query, zoom-in/out, and chart data. Operations are performed using direct manipulation in the GUI. Hillview is designed to work on very large data sets (billions of rows). Hillview can import data from a variety of sources: CSV files, ORC files, Parquet files, databases, parallel databases; new connectors can be added with relatively little effort. Hillview takes advantage of all the cores of the worker machines for fast visualizations.

Hillview is a distributed system, composed of two pieces:

  • A distributed set of one or many workers, which should be installed close to the data (e.g., on the machines that host the data).
  • A front-end service that runs a web server and aggregates data from all workers.

The source code of Hillview is available as an open-source project with an Apache-2 license from Hillview's GitHub repository. For any questions, feature requests or bug reports please file an issue on GitHub.


SDDC Import/Export for VMware Cloud on AWS

The SDDC Import/Export for VMware Cloud on AWS tool enables you to save and restore your VMware Cloud on AWS (VMC) Software-Defined Data Center (SDDC) networking and security configuration.

There are many situations when customers want to migrate from an existing SDDC to a different one. While HCX addresses the data migration challenge, this tool offers customers the ability to copy the configuration from a source to a destination SDDC.

A few example migration scenarios are:

  • SDDC to SDDC migration from bare-metal (i3) to a different bare-metal type (i3en)
  • SDDC to SDDC migration from VMware-based org to an AWS-based org
  • SDDC to SDDC migration from one region (e.g. London) to a different region (e.g. Dublin)

Other use cases are:

  • Backups – save the entire SDDC configuration
  • Lab purposes – customers or partners might want to deploy SDDCs with a pre-populated configuration.
  • DR purposes – deploy a pre-populated configuration in conjunction with VMware Site Recovery or VMware Cloud Disaster Recovery

Detailed instructions can be found on the Instructions tab, in the README.md included in the Zip file or on Patrick’s blog (http://www.patrickkremer.com/sddc-import-export/).

Details about the use cases and origins of the project can be found on Nico’s blog (https://nicovibert.com/2021/02/08/fling-sddc-import-export-for-vmware-cloud-on-aws/).

Updates


Workspace ONE Mobileconfig Importer

The Workspace ONE mobileconfig Importer gives you the ability to import existing mobileconfig files directly into a Workspace ONE UEM environment as a Custom Settings profile, import app preference plist files in order to created managed preference profiles, and to create new Custom Settings profiles from scratch. When importing existing configuration profiles, the tool will attempt to separate each PayloadContent dictionary into a separate payload for the Workspace ONE profile.

Changelog

Version 1.1

  • Support for Big Sur
  • Updated icon


Workspace One UEM Workload Migration Tool

Quite a few updates for this fling already!

The Workspace One UEM Workload Migration Tool allows a seamless migration of applications and device configurations between different Workspace One UEM environments. With the push of a button, workloads move from UAT to Production, instead of having to enter the information or upload files manually. This decreases the time needed to move data from Dev/UAT environments to Production.

Changelog

Version 2.1.0

  • Fixed app upload issues for Workspace One UEM 1910+
  • Fixed profile search issue for Workspace One UEM 1910+
  • Added profile update support
  • Added template folder structure creation
  • Updated Mac app to support notarization for Catalina

Version 2.0.1

  • Fixed Baseline Migration issue
  • Fixed Profile Errors not displaying in the UI

Version 2.0.0

  • Baseline Migration Support
  • MacOS application
  • UI refactoring to make bulk migrations easier
  • Added support for script detection with Win32 applications

Version 1.0.1

  • Fixed issue with expired credentials.


vSAN Hardware Compatibility List Checker

The vSAN Hardware Compatibility List Checker is a tool that verifies all installed storage adapters against the vSAN supported storage controller list. The tool will verify if the model, driver and firmware version of the storage adapter are supported.

Changelog

Version 2.2

  • Support multi-platforms for Windows, Linux and MacOS
  • Bug fixed


Vmss2core

Vmss2core is a tool to convert VMware checkpoint state files into formats that third party debugger tools understand. It can handle both suspend (.vmss) and snapshot (.vmsn) checkpoint state files (hereafter referred to as a ‘vmss file’) as well as both monolithic and non-monolithic (separate .vmem file) encapsulation of checkpoint state data.

Changelog

Version 1.0.1

  • Fixed running out of memory issues
  • Added support for more versions of Windows 10/Windows 2016


vRealize Build Tools

vRealize Build Tools provides tools to development and release teams implementing solutions based on vRealize Automation (vRA) and vRealize Orchestrator (vRO). The solution targets Virtual Infrastructure Administrators and Solution Developers working in parallel on multiple vRealize-based projects who want to use standard DevOps practices.

Changelog

Version 2.12.5 Update

  • [vRBT] Package installer – Add support for installation on a standalone vRO (not embedded) version 8.x with basic authentication
  • [vRA] Added catalog entitlements and examples to the vra-ng archetype
  • [vRO] Support / in Workflow name or path, by substituting it with dash (-) character.
  • [vRBT] Added http / socket timeouts support in the installer
  • [ABX] Support for ABX actions
  • [vRO] Support of placeholders in workflow description
  • [vRA] Import vRA8 custom resources before blueprints
  • [vROPS] Fixed policy import / export problem with vROPs 8.2, maintaining backward compatibility
  • [MVN] Fixed issue with installer timeouts
  • [TS] vRO pkg – Adds support for slash in workflow path or name
  • [vRBT] Package installer – updated documentation, added checking of workflow input, writing of workflow error message to file, setting of installer exit code when executing of a workflow
  • [TS] Allow additional trigger events for policies triggered by the vcd mqtt plugin
  • [MVN] Fix Missing vRA Tenant After Successful package import
  • [MVN] Fix vROPS import fails on certain assets


Horizon Cloud Pod Architecture Tools

The Horizon Cloud Pod Architecture Tools mainly act as a wrapper around lmvutil for Horizon Cloud Pod Architecture.

Changelog

Version 1.1

Based on customer requests, a few more command-line options have been added for CSV report generation and AD LDS data cleanup.

What’s New:

Adds support to clean up stale local pool global entitlement assignments from the ADAM DB.

  • Global AD LDS Command:
    adlds-analyzer.cmd --resolve-localpool-ga
  • Scans the cloud pod database and resolves the stale entries of local pool global assignments.
    Note: This resolves deleted local pool conflicts of the current pod only. If a dashboard session data load error or session search failure occurs in a different pod, a scan and resolve has to be executed in that pod.
  • List of new commands added to Local AD LDS:
    adlds-analyzer.cmd --export-machine
  • All machine data exported as CSV file. Compatibility: Horizon 7.10 and above, 8.x
    adlds-analyzer.cmd --export-machine -pool="DesktopPool1,DesktopPool2"
  • All machine data exported as CSV file. Use -pool= to filter machines by desktop pool name. Compatibility: Horizon 7.10 and above, 8.x
  • Spaces are not allowed in the -pool= optional argument.
    adlds-analyzer.cmd --export-session
  • All local session data exported as CSV file. Compatibility: Horizon 7.10 and above, 8.x
    adlds-analyzer.cmd --export-session -pool="PoolName1,PoolName2" -farm="FarmName1,FarmName2"
  • All local session data exported as CSV file. Use -pool= to filter sessions by desktop pool name and -farm= to filter sessions by RDS farm name. Compatibility: Horizon 7.10 and above, 8.x
  • Spaces are not allowed in the -pool= and -farm= optional arguments.
    adlds-analyzer.cmd --check-apps-integrity
  • Scans and lists the stale application icons in the local AD LDS instance.
    adlds-analyzer.cmd --check-apps-integrity -input="AbsoluteFilePath1","AbsoluteFilePath2"
  • Reads the list of ADAM LDIF files in "-input=" for parsing and exports the stale application icons data to a file.
    adlds-analyzer.cmd --export-named-lic-users
  • Exports the utilized and un-utilized named license users list information.


Workspace ONE App Analyzer for macOS

The Workspace ONE macOS App Analyzer will determine any Privacy Permissions, Kernel Extensions, or System Extensions needed by an installed macOS application, and can be used to automatically create profiles in Workspace ONE UEM to whitelist those same settings when deploying apps to managed devices.

Changelog

Version 1.2.1

  • Fixed bug that caused crash with certain System Extension configurations


App Volumes Packaging Utility

This App Volumes Packaging Utility helps to package applications. With this fling, packagers can add the necessary metadata to MSIX app attach VHDs so they can be used alongside existing AV format packages. The MSIX format VHDs will require App Volumes 4, version 2006 or later and Windows 10, version 2004 or later.

Changelog

Version 1.2 Update

  • Fixed bugs found in internal testing.


DoD Security Technical Implementation Guide(STIG) ESXi VIB

The DoD Security Technical Implementation Guide (‘STIG’) ESXi VIB is a Fling that provides a custom VMware-signed ESXi vSphere Installation Bundle (‘VIB’) to assist in remediating Defense Information Systems Agency STIG controls for ESXi. This VIB has been developed to help customers rapidly implement the more challenging aspects of the vSphere STIG. These include the fact that installation is time consuming and must be done manually on the ESXi hosts. In certain cases, it may require complex scripting, or even development of an in-house VIB that would not be officially digitally signed by VMware (and therefore would not be deployed as a normal patch would). The need for a VMware-signed VIB is due to the system level files that are to be replaced. These files cannot be modified at a community supported acceptance level. The use of the VMware-signed STIG VIB provides customers the following benefits:

  • The ability to use vSphere Update Manager (‘VUM’) to quickly deploy the VIB to ESXi hosts (you cannot do this with a customer created VIB)
  • The ability to use VUM to quickly check if all ESXi hosts have the STIG VIB installed and therefore are also in compliance
  • No need to manually replace and copy files directly on each ESXi host in your environment
  • No need to create complex shell scripts that run each time ESXi boots to re-apply settings

Changelog

Update March 2021

  • New ESXi 7.0 STIG VIB release
  • Updated sshd_config file to meet the ESXi 7.0 Draft STIG which is also now the default config in 7.0 U2 with the exception of permitting root user logins.
  • Removed /etc/vmware/welcome file from VIB since it can now be configured via the UI or PowerCLI without issue.
  • Draft ESXi content can be found here: https://github.com/vmware/dod-compliance-and-automation/tree/master/vsphere/7.0/docs
  • See the updated Overview and Installation guide included in the download.


VMware OS Optimization Tool

No comments needed, just use the OS Optimization Tool when creating your golden images!

Changelog

March 2021, b2002

  • Fixed issue where the theme file was being updated by a Generalize task and the previously selected optimizations including wallpaper color were being lost.
  • The administrator username used during Generalize was not being passed through properly to the unattend answer file. This resulted in a mismatch when using some language versions of Windows.
  • Removed legacy code that caused GPO policy corruption
  • Removed CMD.exe box that displayed at logon.
  • Windows Store Apps were not being removed properly on Windows 10 version 20H2. Fixed the optimizations to cope with the differences introduced in this version.

Optimizations

Changed the step Block all consumer Microsoft account user authentication to be unselected by default. When disabled, this was causing failures to log in to Edge and the Windows Store.

Changed the step Turn off Thumbnail Previews in File Explorer to be unselected by default. This was causing no thumbnails to show for store apps in search results.

Windows Update

On Non-Enterprise editions of Windows 10, KB4023057 installs a new application called Microsoft Update Health Tools: https://support.microsoft.com/en-us/topic/kb4023057-update-for-windows-10-update-service-components-fccad0ca-dc10-2e46-9ed1-7e392450fb3a. Added logic to ensure that the Windows Update Medic Service is disabled including after re-enabling and disabling Windows Updates using the Update tab.

Templates

Windows 8 and 8.1 templates have been removed from the list of built-in templates. To optimize these versions of Windows, use the separate download for version b1130.

Removed old Windows 10 templates from the Public Templates repository:

  • Windows 10 1809-2004-Server 2019
  • Windows 10 1507-1803-Server 2016

January 2021, b2001 Bug Fixes

  • All optimization entries have been added back into the main user template. This allows manual tuning and selection of all optimizations.
  • Fixed two hardware acceleration selections that were not previously controlled by the Common Option for Visual Effects to disable hardware acceleration.

Optimize

  • During an Optimize, the optimization selections are automatically exported to a default json file (%ProgramData%\VMware\OSOT\OptimizedTemplateData.json).

Analyze

  • When an Analyze is run, if the default json file exists (meaning that this image has already been optimized), this is imported and used to select the optimizations and the Common Options selections with the previous choices.
  • If the default selections are required, on subsequent runs of the OS Optimization Tool, delete the default json file, relaunch the tool and run Analyze.

Command Line

  • The OptimizedTemplateData.json file can also be used from the command line with the -applyoptimization parameter.

Optimizations

  • Changed entries for Hyper-V services to not be selected by default. These services are required for VMs deployed onto Azure. Windows installation sets these to manual (trigger start), so these do not cause any overhead on vSphere when left with the default setting.

PowerCLI script to assign a dedicated Horizon machine to multiple users

Yesterday Robin Stolpe reached out again because he was having issues assigning multiple accounts to the same dedicated machine. He couldn't get this working with the vmware.hv.helper, and looking at how it is implemented now, it probably never will. I decided to put together some of the functions I have used for ControlUp script-based actions and some of my other work into the following script (which can be found on GitHub here).

[CmdletBinding()]
Param
(
    [Parameter(Mandatory=$False,
    ParameterSetName="separatecredentials",
    HelpMessage='Enter a username' )]
    [ValidateNotNullOrEmpty()]
    [string] $Username,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Domain i.e. loft.lab' )]
    [string] $Domain,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Password in plain text' )]
    [string] $Password,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [Parameter(Mandatory=$false,
    ParameterSetName="credsfile",
    HelpMessage='Path to credentials xml file' )]
    [ValidateNotNullOrEmpty()]
    [string] $Credentialfile,

    [Parameter(Mandatory=$false,  HelpMessage='Username(s) of the user(s) to assign (i.e. user1)')]
    [ValidateNotNullOrEmpty()]
    [string[]] $TargetUsers,

    [Parameter(Mandatory=$false, HelpMessage='Name of the desktop pool the machine belongs to')]
  [string] $TargetPool,

    [Parameter(Mandatory=$false, HelpMessage='DNS name of the machine the user is on, i.e. lp-002.loft.lab')]
  [string] $TargetMachine,

    [Parameter(Mandatory=$false, HelpMessage='domain for the target users')]
  [string] $TargetDomain
)

if($Credentialfile -and ((test-path $Credentialfile) -eq $true)){
    try{
        write-host "Using credentialsfile"
        $credentials=Import-Clixml $Credentialfile
        $username=($credentials.username).split("\")[1]
        $domain=($credentials.username).split("\")[0]
        $secpw=$credentials.password
        $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secpw)
        $password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
    }
    catch{
        write-error -Message "Error importing credentials"
        break
    }
}
elseif($Credentialfile -and ((test-path $Credentialfile) -eq $false)){
    write-error "Invalid Path to credentials file"
    break
}
elseif($username -and $Domain -and $Password){
    write-host "Using separate credentials"
}


function Get-HVDesktopPool {
    param (
        [parameter(Mandatory = $true,
        HelpMessage = "Displayname of the Desktop Pool.")]
        [string]$HVPoolName,
        [parameter(Mandatory = $true,
        HelpMessage = "The Horizon View Connection server object.")]
        [VMware.VimAutomation.HorizonView.Impl.V1.ViewObjectImpl]$HVConnectionServer
    )
    # Try to get the Desktop pools in this pod
    try {
        # create the service object first
        [VMware.Hv.QueryServiceService]$queryService = New-Object VMware.Hv.QueryServiceService
        # Create the object with the definiton of what to query
        [VMware.Hv.QueryDefinition]$defn = New-Object VMware.Hv.QueryDefinition
        # entity type to query
        $defn.queryEntityType = 'DesktopSummaryView'
        # Filter on the correct displayname
        $defn.Filter = New-Object VMware.Hv.QueryFilterEquals -property @{'memberName'='desktopSummaryData.displayName'; 'value' = "$HVPoolname"}
        # Perform the actual query
        [array]$queryResults= ($queryService.queryService_create($HVConnectionServer.extensionData, $defn)).results
        # Remove the query
        $queryService.QueryService_DeleteAll($HVConnectionServer.extensionData)
        # Return the results
        if (!$queryResults){
            write-host "Can't find $HVPoolName, exiting."
            exit
        }
        else {
            return $queryResults
        }
    }
    catch {
        write-host 'There was a problem retrieving the Horizon View Desktop Pool.'
    }
}

function Get-HVDesktopMachine {
    param (
        [parameter(Mandatory = $true,
        HelpMessage = "ID of the Desktop Pool.")]
        [VMware.Hv.DesktopId]$HVPoolID,
        [parameter(Mandatory = $true,
        HelpMessage = "Name of the Desktop machine.")]
        [string]$HVMachineName,
        [parameter(Mandatory = $true,
        HelpMessage = "The Horizon View Connection server object.")]
        [VMware.VimAutomation.HorizonView.Impl.V1.ViewObjectImpl]$HVConnectionServer
    )

    try {
        # create the service object first
        [VMware.Hv.QueryServiceService]$queryService = New-Object VMware.Hv.QueryServiceService
        # Create the object with the definiton of what to query
        [VMware.Hv.QueryDefinition]$defn = New-Object VMware.Hv.QueryDefinition
        # entity type to query
        $defn.queryEntityType = 'MachineDetailsView'
        # Filter so we get the correct machine in the correct pool
        $poolfilter = New-Object VMware.Hv.QueryFilterEquals -property @{'memberName'='desktopData.id'; 'value' = $HVPoolID}
        $machinefilter = New-Object VMware.Hv.QueryFilterEquals -property @{'memberName'='data.name'; 'value' = "$HVMachineName"}
        $filterlist = @()
        $filterlist += $poolfilter
        $filterlist += $machinefilter
        $filterAnd = New-Object VMware.Hv.QueryFilterAnd
        $filterAnd.Filters = $filterlist
        $defn.Filter = $filterAnd
        # Perform the actual query
        [array]$queryResults= ($queryService.queryService_create($HVConnectionServer.extensionData, $defn)).results
        # Remove the query
        $queryService.QueryService_DeleteAll($HVConnectionServer.extensionData)
        # Return the results
        if (!$queryResults){
            write-host "Can't find $HVMachineName, exiting."
            exit
        }
        else{
            return $queryResults
        }
    }
    catch {
        write-host 'There was a problem retrieving the Horizon View machine.'
    }
}

function Get-HVUser {
    param (
        [parameter(Mandatory = $true,
        HelpMessage = "User loginname..")]
        [string]$HVUserLoginName,
        [parameter(Mandatory = $true,
        HelpMessage = "Name of the Domain.")]
        [string]$HVDomain,
        [parameter(Mandatory = $true,
        HelpMessage = "The Horizon View Connection server object.")]
        [VMware.VimAutomation.HorizonView.Impl.V1.ViewObjectImpl]$HVConnectionServer
    )

    try {
        # create the service object first
        [VMware.Hv.QueryServiceService]$queryService = New-Object VMware.Hv.QueryServiceService
        # Create the object with the definition of what to query
        [VMware.Hv.QueryDefinition]$defn = New-Object VMware.Hv.QueryDefinition
        # entity type to query
        $defn.queryEntityType = 'ADUserOrGroupSummaryView'
        # Filter to get the correct user
        $userloginnamefilter = New-Object VMware.Hv.QueryFilterEquals -property @{'memberName'='base.loginName'; 'value' = $HVUserLoginName}
        $domainfilter = New-Object VMware.Hv.QueryFilterEquals -property @{'memberName'='base.domain'; 'value' = "$HVDomain"}
        $filterlist = @()
        $filterlist += $userloginnamefilter
        $filterlist += $domainfilter
        $filterAnd = New-Object VMware.Hv.QueryFilterAnd
        $filterAnd.Filters = $filterlist
        $defn.Filter = $filterAnd
        # Perform the actual query
        [array]$queryResults= ($queryService.queryService_create($HVConnectionServer.extensionData, $defn)).results
        # Remove the query
        $queryService.QueryService_DeleteAll($HVConnectionServer.extensionData)
        # Return the results
        if (!$queryResults){
            write-host "Can't find user $HVUserLoginName in domain $HVDomain, exiting."
            exit
        }
        else {
            return $queryResults
        }
    }
    catch {
        write-host 'There was a problem retrieving the user.'
    }
}

$hvserver1=connect-hvserver $ConnectionServerFQDN -user $username -domain $domain -password $password
$Services1= $hvServer1.ExtensionData

$desktop_pool=Get-HVDesktopPool -hvpoolname $TargetPool -HVConnectionServer $hvserver1

$poolid=$desktop_pool.id

$machine = get-hvdesktopmachine -HVConnectionServer $hvserver1 -HVMachineName $TargetMachine -HVPoolID $poolid
$machineid = $machine.id
$useridlist=@()

foreach ($targetuser in $TargetUsers){
    $user = Get-HVUser -HVConnectionServer $hvserver1 -hvdomain $TargetDomain -HVUserLoginName $targetUser
    $useridlist+=$user.id
}

$Services1.Machine.Machine_assignUsers($machineid, $useridlist)

So first I have 3 functions to get the pool, the machine and the users. With a foreach over the $TargetUsers list I build the list of user IDs that is required for the Machine_assignUsers function of the machine service.
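
To put it all together: a run of the script could look something like this (the script name and all values are just examples; the parameter names match the variables used above):

.\Horizon_Assign_Users.ps1 -ConnectionServerFQDN pod1cbr1.loft.lab -TargetPool "Pod01-Pool01" -TargetMachine "lp-001.loft.lab" -TargetUsers "user1","user2" -TargetDomain loft.lab -Username m_wouter -Domain loft.lab -Password "secret"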

Powercli script to (forcefully) log off Horizon users

A long time ago in a galaxy far far away I wrote this blog post to log a user off from their VDI session. Today I got an inquiry from Robin Stolpe: he was trying to turn it into a script with arguments instead of the menus but was having some issues with that. This gave me the chance to make it a bit nicer of a script, with the option to use username/domain/password as credentials but also a credential file, with optional forceful logoff, and with Robin's requirement of being able to provide the exact username and the machine that user is working on.

The script can be run like this:

D:\GIT\Scripts\logoff_user.ps1 -Credentialfile "D:\homelab\creds.xml" -TargetUser "loft.lab\m_wouter" -TargetMachine "lp-001.loft.lab" -ConnectionServerFQDN loftcbr01.loft.lab

or with credentials and the -force parameter

D:\GIT\Scripts\logoff_user.ps1 -TargetUser "loft.lab\m_wouter" -TargetMachine "lp-001.loft.lab" -ConnectionServerFQDN loftcbr01.loft.lab -Username m_wouter -domain loft.lab -password "HAHAHAHA" -force

Now let’s have a look at how the script is built.

So I started with the parameters, and I included 2 parameter sets so you can choose to either supply separate credentials or use a credentials file.

[CmdletBinding()]
Param
(
    [Parameter(Mandatory=$False,
    ParameterSetName="separatecredentials",
    HelpMessage='Enter a username' )]
    [ValidateNotNullOrEmpty()]
    [string] $Username,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Domain i.e. loft.lab' )]
    [string] $Domain,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Password in plain text' )]
    [string] $Password,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [Parameter(Mandatory=$false,
    ParameterSetName="credsfile",
    HelpMessage='Path to credentials xml file' )]
    [ValidateNotNullOrEmpty()]
    [string] $Credentialfile,

    [Parameter(Mandatory=$false, HelpMessage='Forcefully log off the user' )]
    [switch] $Force,

    [Parameter(Mandatory=$false,  HelpMessage='username of the user to logoff (domain\user i.e. loft.lab\user1)' )]
    [ValidateNotNullOrEmpty()]
    [string] $TargetUser,

    [Parameter(Mandatory=$false, HelpMessage='dns name of the machine the user is on i.e. lp-002.loft.lab')]
    [string] $TargetMachine
)
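
A nice side effect of the two parameter sets is that PowerShell itself will refuse a call that mixes both credential methods, so something like this (values are just examples) fails before any of the script logic runs:

D:\GIT\Scripts\logoff_user.ps1 -ConnectionServerFQDN loftcbr01.loft.lab -Credentialfile "D:\homelab\creds.xml" -Username m_wouter

with the error "Parameter set cannot be resolved using the specified named parameters."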

Then I check if a credential file was supplied and if I can actually import it

if($Credentialfile -and ((test-path $Credentialfile) -eq $true)){
    try{
        write-host "Using credentialsfile"
        $credentials=Import-Clixml $Credentialfile
        $username=($credentials.username).split("\")[1]
        $domain=($credentials.username).split("\")[0]
        $secpw=$credentials.password
        $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secpw)
        $password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
    }
    catch{
        write-error -Message "Error importing credentials"
        break
    }
}
elseif($Credentialfile -and ((test-path $Credentialfile) -eq $false)){
    write-error "Invalid Path to credentials file"
    break
}
elseif($username -and $Domain -and $Password){
    write-host "Using separate credentials"
}

The file doesn’t exist:

Or an error importing the xml (duh, what do you think happens when you use a json instead of an xml, fool!)
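
By the way, if you don’t have a credentials file yet, creating one is a matter of (the path is just an example):

get-credential | export-clixml -path "D:\homelab\creds.xml"

Import-Clixml then gives you back the username and the password as a securestring, which is exactly what the code above expects.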

Then it’s a matter of logging in, performing a query and checking if there’s really a session for this user. As you can see I am using machineOrRDSServerDNS so it should also work for RDS sessions.

$hvserver1=connect-hvserver $ConnectionServerFQDN -user $username -domain $domain -password $password
$Services1= $hvServer1.ExtensionData

$queryService = New-Object VMware.Hv.QueryServiceService
$sessionfilterspec = New-Object VMware.Hv.QueryDefinition
$sessionfilterspec.queryEntityType = 'SessionLocalSummaryView'
$sessionfilter1= New-Object VMware.Hv.QueryFilterEquals
$sessionfilter1.membername='namesData.userName'
$sessionfilter1.value=$TargetUser
$sessionfilter2= New-Object VMware.Hv.QueryFilterEquals
$sessionfilter2.membername='namesData.machineOrRDSServerDNS'
$sessionfilter2.value=$TargetMachine
$sessionfilter=new-object vmware.hv.QueryFilterAnd
$sessionfilter.filters=@($sessionfilter1, $sessionfilter2)
$sessionfilterspec.filter=$sessionfilter
$session=($queryService.QueryService_Create($Services1, $sessionfilterspec)).results
$queryService.QueryService_DeleteAll($services1)
if($session.count -eq 0){
    write-host "No session found for $targetuser on $targetmachine"
    break
}

And last but not least, logging the user off with or without the -force option

if($Force){
    write-host "Forcefully logging off $targetUser from $targetmachine"
    $Services1.Session.Session_Logoffforced($session.id)
}
else{
    write-host "Logging off $targetUser from $targetmachine"
    try{
        $Services1.Session.Session_Logoff($session.id)
    }
    catch{
        write-error "error logging the user off, maybe the sessions was locked. Try with -force"
    }
}

This session was locked

So let’s force that thing

And here’s the entire script, but you can also find it on my github.

[CmdletBinding()]
Param
(
    [Parameter(Mandatory=$False,
    ParameterSetName="separatecredentials",
    HelpMessage='Enter a username' )]
    [ValidateNotNullOrEmpty()]
    [string] $Username,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Domain i.e. loft.lab' )]
    [string] $Domain,

    [Parameter(Mandatory=$false,
    ParameterSetName="separatecredentials",
    HelpMessage='Password in plain text' )]
    [string] $Password,

    [Parameter(Mandatory=$true,  HelpMessage='FQDN of the connectionserver' )]
    [ValidateNotNullOrEmpty()]
    [string] $ConnectionServerFQDN,

    [Parameter(Mandatory=$false,
    ParameterSetName="credsfile",
    HelpMessage='Path to credentials xml file' )]
    [ValidateNotNullOrEmpty()]
    [string] $Credentialfile,

    [Parameter(Mandatory=$false, HelpMessage='Forcefully log off the user' )]
    [switch] $Force,

    [Parameter(Mandatory=$false,  HelpMessage='username of the user to logoff (domain\user i.e. loft.lab\user1)' )]
    [ValidateNotNullOrEmpty()]
    [string] $TargetUser,

    [Parameter(Mandatory=$false, HelpMessage='dns name of the machine the user is on i.e. lp-002.loft.lab')]
    [string] $TargetMachine
)

if($Credentialfile -and ((test-path $Credentialfile) -eq $true)){
    try{
        write-host "Using credentialsfile"
        $credentials=Import-Clixml $Credentialfile
        $username=($credentials.username).split("\")[1]
        $domain=($credentials.username).split("\")[0]
        $secpw=$credentials.password
        $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secpw)
        $password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
    }
    catch{
        write-error -Message "Error importing credentials"
        break
    }
}
elseif($Credentialfile -and ((test-path $Credentialfile) -eq $false)){
    write-error "Invalid Path to credentials file"
    break
}
elseif($username -and $Domain -and $Password){
    write-host "Using separate credentials"
}


$hvserver1=connect-hvserver $ConnectionServerFQDN -user $username -domain $domain -password $password
$Services1= $hvServer1.ExtensionData

$queryService = New-Object VMware.Hv.QueryServiceService
$sessionfilterspec = New-Object VMware.Hv.QueryDefinition
$sessionfilterspec.queryEntityType = 'SessionLocalSummaryView'
$sessionfilter1= New-Object VMware.Hv.QueryFilterEquals
$sessionfilter1.membername='namesData.userName'
$sessionfilter1.value=$TargetUser
$sessionfilter2= New-Object VMware.Hv.QueryFilterEquals
$sessionfilter2.membername='namesData.machineOrRDSServerDNS'
$sessionfilter2.value=$TargetMachine
$sessionfilter=new-object vmware.hv.QueryFilterAnd
$sessionfilter.filters=@($sessionfilter1, $sessionfilter2)
$sessionfilterspec.filter=$sessionfilter
$session=($queryService.QueryService_Create($Services1, $sessionfilterspec)).results
$queryService.QueryService_DeleteAll($services1)
if($session.count -eq 0){
    write-host "No session found for $targetuser on $targetmachine"
    break
}

if($Force){
    write-host "Forcefully logging off $targetUser from $targetmachine"
    $Services1.Session.Session_Logoffforced($session.id)
}
else{
    write-host "Logging off $targetUser from $targetmachine"
    try{
        $Services1.Session.Session_Logoff($session.id)
    }
    catch{
        write-error "error logging the user off, maybe the sessions was locked. Try with -force"
    }
}

Pushing a new image using the VMware Horizon Python Module

One of the REST API calls that were added for Horizon 8 2012 was the ability to push images to desktop pools (sadly not for farms yet). This week I added that functionality to the VMware Horizon Python Module. Looking at the Swagger UI these are the needed arguments:

So the source can either be streams from Horizon Cloud or a regular vm/snapshot combo. For the start time you will need to supply some moment in epoch (seconds since 1-1-1970). For the optional items to add a virtual tpm and to stop on the first error I have set the defaults to the values that are listed, and as logoff policy I have chosen WAIT_FOR_LOGOFF as the default.

For this blog post I will go with the vm/snapshot combo as I don’t have streams set up at the moment. First I need to connect:

import requests, getpass, time
import vmware_horizon


requests.packages.urllib3.disable_warnings()
url="https://pod2cbr1.loft.lab"
username = input("Username\n")
domain = input("Domain\n")
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")

Then I instantiate the classes I will be using

monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

Now let’s look at what the desktop_pool_push_image method needs: the desktop_pool_id, a parent_vm_id and snapshot_id, a start_time in epoch, and optionally add_virtual_tpm, stop_on_first_error and logoff_policy.

First I will grab the correct desktop pool; I will use Pod02-Pool02 this time. There are several ways to get the correct pool but I have chosen this one.

desktop_pools=inventory.get_desktop_pools()
desktop_pool = next(item for item in desktop_pools if item["name"] == "Pod02-Pool02")
poolid=desktop_pool["id"]

To get the VM and snapshot IDs I first need to get the vCenter and datacenter IDs

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

I created a new golden image last Friday and it has this name: W10-L-2021-03-19-17-27, so I need to get the compatible base VMs and grab the id for this one

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)
base_vm = next(item for item in base_vms if item["name"] == "W10-L-2021-03-19-17-27")
basevmid=base_vm["id"]

I had Packer create a snapshot and I can get that in a similar way

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])
base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")
snapid=base_snapshot["id"]

I get the current time in epoch using the time module (a moment in the future is just the current epoch time plus the number of seconds you want to wait)

current_time = time.time()
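
And for example to schedule the push for half an hour from now (just an illustration):

# epoch timestamp for 30 minutes in the future
future_time = time.time() + 30 * 60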

For this example I add all the arguments, but if you don’t change the defaults that’s not needed

inventory.desktop_pool_push_image(desktop_pool_id=poolid,parent_vm_id=basevmid,snapshot_id=snapid, start_time=current_time, add_virtual_tpm=False, stop_on_first_error=False, logoff_policy="FORCE_LOGOFF")
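
Note that in this call I override the logoff policy with FORCE_LOGOFF; with the WAIT_FOR_LOGOFF default the new image would, as the name suggests, only be applied to a machine after its user has logged off.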

And closing the connection

end=hvconnectionobj.hv_disconnect()
print(end)

And when I now look at my desktop pool, it’s pushing the new image

I have created a new folder on Github for examples, and the script to deploy new images is the first one. I moved a couple of the names to variables to make it more reusable. You can find it here, or see the code below.

import requests, getpass, time
import vmware_horizon

requests.packages.urllib3.disable_warnings()

url                     = "https://pod2cbr1.loft.lab"
desktop_pool_name       = "Pod02-Pool01"
base_vm_name            = "W10-L-2021-03-19-17-27"
snapshot_name           = "Snap_2"

username = input("Username\n")
domain = input("Domain\n")
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")
monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

desktop_pools=inventory.get_desktop_pools()
desktop_pool = next(item for item in desktop_pools if item["name"] == desktop_pool_name)
poolid=desktop_pool["id"]

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]

base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)
base_vm = next(item for item in base_vms if item["name"] == base_vm_name)
basevmid=base_vm["id"]

base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])
base_snapshot = next(item for item in base_snapshots if item["name"] == snapshot_name)
snapid=base_snapshot["id"]

current_time = time.time()
inventory.desktop_pool_push_image(desktop_pool_id=poolid,parent_vm_id=basevmid,snapshot_id=snapid, start_time=current_time)

end=hvconnectionobj.hv_disconnect()
print(end)