Remark: at the time of writing, the vCheck for Horizon SBA hasn't been published yet. Contact me if you want me to send you the XML file.
Last week I made a few small updates to the vCheck for Horizon. The file with the encrypted password has been replaced by an XML file that holds all of the credentials and is usable directly for Horizon connections; I have been using this for my blog posts and demos for a while already. Also, I have renamed the Security & Composer plugins to old by default, as those features have been deprecated and I don't want to have them active by default anymore. Further, I have made some small changes to other plugins.
Make sure to save this file to a good location and edit the “00 Connection Plugin for View.ps1” file with the proper location. I haven't completely removed the old ways yet; they're still partially there.
Running the vCheck script from ControlUp
Before everything else: make sure the machine that will run the vCheck has the latest version of PowerCLI installed. Download it here, and unpack the modules to “C:\Program Files\WindowsPowerShell\Modules” like this:
There are several things that I needed to do before I was able to run the vCheck from the ControlUp console or as an Automated Action (more on this later!). First I needed a way of providing credentials to the script. This is something I do using the Create Credentials for Horizon Scripts script action. If you already use some of the other Horizon-related SBAs or the Horizon Sync script you might already have this configured. What it essentially does is create an XML file as described above, in a folder dedicated to ControlUp and usable by the service account that ran the SBA. More on this can be found here.
Next was a way to get the vCheck itself. I do this by downloading it from GitHub and unpacking the zip file to C:\ProgramData\ControlUp\ScriptSupport\vCheck-HorizonView-master. If downloads from the interwebz aren't available you can download and unzip it yourself, just make sure it looks like this:
Next would be all the settings for email, connection server etcetera. This is all done through the Script Action arguments. What technically happens is that the Script Action removes both the globalvariables.ps1 and the “00 Connection Plugin for View.ps1” files and recreates them with those settings.
Once you have downloaded the vCheck you can run it manually: right-click any random machine and select the vCheck for Horizon.
Fill in all the details you need. If you set Send Email to false there is no need for the rest of the info.
Once you see this screen the script has completed. Make sure to read the last line for the correct status and where the file was saved to.
And on my d:\ I see the file (and some others I used for testing)
And even better I received it in my mail as well
But wasn't the vCheck all about running it automagically?
Well yes it is, and you can do that starting with version 8.5 that we have announced today! For now it will need to run hourly, but in future releases we can also do this once a day. What you need to do first is change the defaults for the script action: select the SBA and hit the Modify button.
Under settings you can keep the execution context as console (this will use the monitor when you run it as an automated action) or select another machine to pick a scripting server. Also make sure to select a shared credential that has a Horizon credentials file created on this machine.
Now go to the arguments tab and edit the default settings to set them to what you need.
Before:
Editing the first one
and done, press ok after this.
Now click Finalize so the monitors can also use them, go through all the steps (no need to share with the community) and make sure the ControlUp Monitors have the permissions to run the SBA.
Next we go to Triggers > Add Trigger, select the new type Scheduled and click Next.
Select the hourly schedule and a proper start and end time and click Next.
On the filter criteria make sure that you set the name to a single machine, otherwise the script will run for all machines in your environment. I recommend using the same machine you used for the execution context. Keep in mind that the schedule functions just like any other trigger, so you need to filter properly for the machines it applies to; if you don't, it will apply to all of them.
You can select the folder this machine resides in and/or select a schedule to run the script in.
At the follow-up actions click Add and select Run an action from the pulldown menu. Under script name select the vCheck for Horizon, click OK and Next.
Make sure to give it a clear name and click Finish.
The html should automatically end up in the export location and in your email.
One of the goals and hopes I had with my 100DaysOfCode (I am writing this on day 100!) was that the Horizon REST APIs to create desktop pools and RDS farms would be available at the end. Only half of that came true: with Horizon 8 2103 we can finally create an RDS farm using those REST APIs. I have decided to add this to the Python module based on a dictionary that the user sends to the new_farm method. I could still add a fully fledged function, but that would require a lot of arguments, and using **kwargs is an option but then the user would still need to find out what to use.
First I will need to know what JSON data I actually need, so let's have a look at the API explorer page to get a grip on this.
As said, I send a dictionary to the method, so let's import the data into a dict called data and print it to screen. The dictionary needs to follow this specific structure, which is why a JSON file is very useful to start with.
with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)
As you can see in both the JSON and the output there are a lot of things we can change, and some things that we need to change, like the IDs for all the components: vCenter, base VM, base snapshot and more. First I need the access_group_id; this can be retrieved using the get_local_access_groups method. For all of these I will also set the corresponding variable in the dictionary.
local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
data["access_group_id"] = local_access_group["id"]
Then it's time for the Instant Clone Admin ID.
ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]
For the base VM and snapshot IDs I used the same method, but a bit differently as I had already used this method in another script.
vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]
base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)
base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]
base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])
base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")
snapid=base_snapshot["id"]
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid
Host or cluster id
host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]
Resource Pool
resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]
The VM folder again is a bit different, as I have to get the ID from one of the children objects.
vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
children=(i["children"])
for ii in children:
# print(ii["name"])
if (ii["name"]) == "Pod1":
vm_folder = i
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]
The datacenter and vCenter IDs I already had to grab for the base VM and base snapshot, so I can just add them.
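These are the same two assignments that also show up in the full script further down:

data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["vcenter_id"] = vcid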
Datastores are a bit more funky, as there can be multiple, so I needed to create a list first and then populate it based on the names of the datastores I have.
datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list
For my final script I put them in a slightly different order and I decided to change a whole lot more options, but if you have your JSON perfected this shouldn't always be required. Also take note that for true/false in the JSON I use the True/False from Python.
import requests, getpass, urllib, json
import vmware_horizon
requests.packages.urllib3.disable_warnings()
url = input("URL\n")
username = input("Username\n")
domain = input("Domain\n")
pw = getpass.getpass()
hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")
monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
config=vmware_horizon.Config(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
with open('/mnt/d/homelab/farm.json') as f:
    data = json.load(f)
vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]
base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)
base_vm = next(item for item in base_vms if item["name"] == "srv2019-p1-2020-10-13-08-44")
basevmid=base_vm["id"]
base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])
base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")
snapid=base_snapshot["id"]
host_or_clusters = external.get_hosts_or_clusters(vcenter_id=vcid, datacenter_id=dcid)
for i in host_or_clusters:
    if (i["details"]["name"]) == "Cluster_Pod1":
        host_or_cluster = i
resource_pools = external.get_resource_pools(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in resource_pools:
    # print(i)
    if (i["type"] == "CLUSTER"):
        resource_pool = i
vm_folders = external.get_vm_folders(vcenter_id=vcid, datacenter_id=dcid)
for i in vm_folders:
    children = (i["children"])
    for ii in children:
        # print(ii["name"])
        if (ii["name"]) == "Pod1":
            vm_folder = i
datastore_list = []
datastores = external.get_datastores(vcenter_id=vcid, host_or_cluster_id=host_or_cluster["id"])
for i in datastores:
    # print(i)
    if (i["name"] == "VDI-500") or i["name"] == "VDI-200":
        ds = {}
        ds["datastore_id"] = i["id"]
        datastore_list.append(ds)
local_access_group = next(item for item in (config.get_local_access_groups()) if item["name"] == "Root")
ic_domain_account = next(item for item in (config.get_ic_domain_accounts()) if item["username"] == "administrator")
data["access_group_id"] = local_access_group["id"]
data["automated_farm_settings"]["customization_settings"]["ad_container_rdn"] = "OU=Pod1,OU=RDS,OU=VMware,OU=EUC"
data["automated_farm_settings"]["customization_settings"]["reuse_pre_existing_accounts"] = True
data["automated_farm_settings"]["customization_settings"]["instant_clone_domain_account_id"] = ic_domain_account["id"]
data["automated_farm_settings"]["enable_provisioning"] = False
data["automated_farm_settings"]["max_sessions"] = 50
data["automated_farm_settings"]["min_ready_vms"] = 3
data["automated_farm_settings"]["pattern_naming_settings"]["max_number_of_rds_servers"] = 4
data["automated_farm_settings"]["pattern_naming_settings"]["naming_pattern"] = "farmdemo-{n:fixed=3}"
data["automated_farm_settings"]["provisioning_settings"]["base_snapshot_id"] = snapid
data["automated_farm_settings"]["provisioning_settings"]["parent_vm_id"] = basevmid
data["automated_farm_settings"]["provisioning_settings"]["host_or_cluster_id"] = host_or_cluster["id"]
data["automated_farm_settings"]["provisioning_settings"]["resource_pool_id"] = resource_pool["id"]
data["automated_farm_settings"]["provisioning_settings"]["vm_folder_id"] = vm_folder["id"]
data["automated_farm_settings"]["provisioning_settings"]["datacenter_id"] = dcid
data["automated_farm_settings"]["stop_provisioning_on_error"] = True
data["automated_farm_settings"]["storage_settings"]["datastores"] = datastore_list
data["automated_farm_settings"]["transparent_page_sharing_scope"] = "GLOBAL"
data["automated_farm_settings"]["vcenter_id"] = vcid
data["description"] = "Python_demo_farm"
data["display_name"] = "Python_demo_farm"
data["display_protocol_settings"]["allow_users_to_choose_protocol"] = True
data["display_protocol_settings"]["default_display_protocol"] = "BLAST"
data["display_protocol_settings"]["session_collaboration_enabled"] = True
data["enabled"] = False
data["load_balancer_settings"]["cpu_threshold"] = 12
data["load_balancer_settings"]["disk_queue_length_threshold"] = 16
data["load_balancer_settings"]["disk_read_latency_threshold"] = 12
data["load_balancer_settings"]["disk_write_latency_threshold"] = 16
data["load_balancer_settings"]["include_session_count"] = True
data["load_balancer_settings"]["memory_threshold"] = 12
data["name"] = "Python_demo_farm"
data["session_settings"]["disconnected_session_timeout_minutes"] = 5
data["session_settings"]["disconnected_session_timeout_policy"] = "NEVER"
data["session_settings"]["empty_session_timeout_minutes"] = 6
data["session_settings"]["empty_session_timeout_policy"] = "AFTER"
data["session_settings"]["logoff_after_timeout"] = False
data["session_settings"]["pre_launch_session_timeout_minutes"] = 12
data["session_settings"]["pre_launch_session_timeout_policy"] = "AFTER"
data["type"] = "AUTOMATED"
inventory.new_farm(farm_data=data)
end=hvconnectionobj.hv_disconnect()
print(end)
How does this look? Actually, you don't see a lot happening, but the farm will have been created.
As always the script can be found on my GitHub in the examples folder, together with the JSON file.
With this I am closing my 100DaysOfCode challenge, but I pledge to keep maintaining the Python module and I will extend it when new REST API calls arrive for VMware Horizon.
One of the REST API calls that were added for Horizon 8 2012 was the ability to push images to desktop pools (sadly not for farms yet). This week I added that functionality to the VMware Horizon Python Module. Looking at the Swagger UI these are the needed arguments:
So the source can be either a stream from Horizon Cloud or a regular VM/snapshot combo. For the start time you will need to supply a moment in epoch. For the optional items (adding the virtual TPM and stop on error) I have set the defaults to what they are listed as. As logoff policy I have chosen WAIT_FOR_LOGOFF as the default.
For this blog post I have to go with the VM/snapshot combo as I don't have streams set up at the moment. First I need to connect:
Now let’s look at what the desktop_pool_push_image method needs
First I will grab the correct desktop pool; I will use Pod02-Pool02 this time. There are several ways to get the correct pool, but I have chosen to use this one.
desktop_pools=inventory.get_desktop_pools()
desktop_pool = next(item for item in desktop_pools if item["name"] == "Pod02-Pool02")
poolid=desktop_pool["id"]
To get the VM and snapshot I first need to get the vCenter and datacenter IDs.
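That lookup is the same as in the farm example above:

vcenters = monitor.virtual_centers()
vcid = vcenters[0]["id"]
dcs = external.get_datacenters(vcenter_id=vcid)
dcid = dcs[0]["id"]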
I created a new golden image last Friday and it has this name: W10-L-2021-03-19-17-27, so I need to get the compatible base VMs and grab the ID for this one.
base_vms = external.get_base_vms(vcenter_id=vcid,datacenter_id=dcid,filter_incompatible_vms=True)
base_vm = next(item for item in base_vms if item["name"] == "W10-L-2021-03-19-17-27")
basevmid=base_vm["id"]
I had Packer create a snapshot and I can get that in a similar way
base_snapshots = external.get_base_snapshots(vcenter_id=vcid, base_vm_id=base_vm["id"])
base_snapshot = next(item for item in base_snapshots if item["name"] == "Created by Packer")
snapid=base_snapshot["id"]
I get the current time in epoch using the time module (Google is your best friend if you need to define a moment in the future in epoch).
import time
current_time = time.time()
For this example I add all the arguments, but if you don't change anything from the defaults that's not needed.
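Roughly, the call then looks like the sketch below. The keyword names here follow the Swagger arguments described above and are assumptions on my part, so double-check them against the desktop_pool_push_image signature in the module before copying this.

# Keyword names assumed from the Swagger UI arguments; verify against the module source
inventory.desktop_pool_push_image(
    desktop_pool_id=poolid,
    parent_vm_id=basevmid,
    snapshot_id=snapid,
    start_time=current_time,
    logoff_policy="WAIT_FOR_LOGOFF",
    stop_on_first_error=True,
    add_virtual_tpm=False,
)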
and when I now look at my desktop pool it’s pushing the new image
I have created a new folder on GitHub for examples, and the script to deploy new images is the first example. I did move a couple of the names to variables to make it better usable. You can find it here, or see the code below this.
One of the challenges with the Horizon REST APIs is that they are not feature complete yet, and if you ain't on the latest version you need to scroll through the API explorer or Swagger UI to find out if the URL you need is available. I have created a short script for both Python and PowerShell that will show all the available URLs.
If you’ve taken a good look at the Swagger page you’ll see there’s a link to the api docs almost at the top
If you open this you get something that looks like a json but it’s not readable (yet!)
and with another select -expandproperty you see all the details
$json.paths | select -expandproperty "/inventory/v1/rds-servers/{id}" | select -ExpandProperty get
With Python you can start with something similar
import json,requests,urllib
requests.packages.urllib3.disable_warnings()
response = requests.get("https://pod2cbr1.loft.lab/rest/v1/api-docs?group=Default" , verify=False)
data = response.json()
for i in data["paths"]:
    print(i)
but this will just give the URLs.
To be able to drill down I decided to bring the URL, the method and the description into a list and print that as needed. This example prints just the method and URL, but you can add the description as well. The list is there to make it easier to filter on.
import json,requests,urllib
requests.packages.urllib3.disable_warnings()
response = requests.get("https://pod2cbr1.loft.lab/rest/v1/api-docs?group=Default" , verify=False)
data = response.json()
url_list = []
paths = data["paths"]
for i in paths:
    for method in paths[i]:
        obj = {}
        obj["method"] = method
        obj["url"] = i
        obj["description"] = paths[i][method]
        url_list.append(obj)
for i in url_list:
    print(i["method"], i["url"])
Yesterday I added the first method to the VMware Horizon Python module that makes use of filtering, while the day before that I added pagination. VMware {code} has a document describing the available options for both, but let me give some explanation.
Pagination
Pagination is where you perform a query but only get an x amount of objects returned by default. The rest of the objects are available on the next page or pages. This is exactly what I ran into with the vmware.hv.helper PowerShell module a long time ago. With the REST APIs this is rather easy to handle since, if there are more pages/objects left, the headers will contain a key named HAS_MORE_RECORDS. For all the methods that I add where pagination is supported you don't need to handle this yourself though, as I have added it to the method itself. What I did add was the option to change the maximum page size. I default to 100 and the maximum is 1000; if you supply an integer higher than 1000 this will be corrected to 1000.
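In practice that means you only pick a page size and the module loops over the pages for you. As a minimal sketch (using get_machines here; the maxpagesize parameter name is how I remember the module exposing it, so verify it against the version you have):

# maxpagesize defaults to 100 and is capped at 1000 by the module itself
machines = inventory.get_machines(maxpagesize=1000)
print(len(machines))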
Filtering
Filtering needs some more work from the user of the module to be able to use it.
What options are there for filtering?
For the type we have: And, Or and Not
For the filters themselves there are: Equals, NotEquals, Contains, StartsWith and Between.
The formula is that you pick one from the first list (the type) and combine it with one or more from the second list (the filters).
To apply these the document describes the base schema like this:
This all looks like a dictionary with a nested dictionary when translating it to Python, but when you have multiple filters it suddenly looks like this:
otherwise known as a dictionary with a list of dictionaries in it, and since the latter also works with a single dict inside the list I have taken that route. The document also describes encoding and minifying the code so it works for a REST API call, but I have done all of that for you, so no need to worry about it: just build the dictionary and you are good!
Now let’s actually perform a search
First I create my base object with the type And and a list for the filters key.
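A sketch of what that can look like, with a single Equals filter on a machine name as the example (the field name, the value and the get_machines call are illustrative assumptions here; adjust them to whatever you are actually searching for):

# Base object: the type (And) plus a list that will hold the filters
filter_dict = {}
filter_dict["type"] = "And"
filters = []

# One filter: Equals on the machine name (field and value are just examples)
equalsfilter = {}
equalsfilter["type"] = "Equals"
equalsfilter["name"] = "name"
equalsfilter["value"] = "LP-002"
filters.append(equalsfilter)
filter_dict["filters"] = filters

# The module takes care of encoding and minifying the dictionary for the REST call
machines = inventory.get_machines(filter=filter_dict)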
Until now all of my blogging about the Horizon APIs was about consuming the SOAP API using PowerCLI. Since a couple of releases Horizon also has a REST API, and since 7.12 we are also able to change some settings using it. So now it's time for me to dive into the Horizon REST APIs. I will consume them using PowerShell since I am the most comfortable with that, but you can use whatever method you prefer.
First of all we need to create an access token. We can do this by using some code that I simply stole from Andrew Morgan, because why would I re-invent the wheel? From his git repository I grabbed three basic functions: get-HRHeader, Open-HRConnection and close-hrconnection. There's also a refresh-hrconnection but I won't need that for now.
(I am grabbing it from the command line here but when I run the scripts I have my creds hardcoded to make my life for the duration of this blog post a bit easier)
Next up is actually getting some data. The first thing that I will do is show the connection servers. This can be done with the following API call. The part after -uri “$url/rest/ is what you can find in the API explorer, and the method is the method also shown in the API explorer.
This works, but I can't say that it's really usable. Now, this is not the first time I do something with REST APIs (haven't done a lot with them though, to be honest), so I know this can easily be converted to JSON to make it readable. What I will do is put it in a variable first.
Now this DOES look usable! Let's take a look at what is under general_settings.
$settings.general_settings
Let’s say I want to change the forced logoff message
$settings.general_settings.forced_logoff_message="Get lost, the Bastard Operator From Hell is here."
Now my variable has the change, but I need to send this to the server. This can be done using a PUT method, and the settings variable has to be added as JSON. The second line pulls the new settings from my connection server, showing them directly in JSON format.
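If you would rather do this from Python instead of PowerShell, the same flow with the requests module (as used in the api-docs example above) looks roughly like the sketch below. The /rest/login and /rest/config/v1/settings paths are taken from the API explorer and should be treated as assumptions; verify them for your Horizon version.

import requests, getpass

requests.packages.urllib3.disable_warnings()
url = "https://pod2cbr1.loft.lab"

# Log in to get an access token (username/domain are placeholders)
credentials = {"username": "administrator", "domain": "loft.lab", "password": getpass.getpass()}
session = requests.post(f"{url}/rest/login", json=credentials, verify=False).json()
headers = {"Authorization": "Bearer " + session["access_token"]}

# GET the current settings into a variable and change the forced logoff message
settings = requests.get(f"{url}/rest/config/v1/settings", headers=headers, verify=False).json()
settings["general_settings"]["forced_logoff_message"] = "Get lost, the Bastard Operator From Hell is here."

# PUT the changed settings back as JSON
requests.put(f"{url}/rest/config/v1/settings", headers=headers, json=settings, verify=False)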