[HorizonAPI] Disabling Provisioning and/or disabling entire Desktop Pools and RDS Farms

Today I saw a question on the VMware{Code} Slack channel asking if anyone had ever managed to disable Desktop Pools using PowerCLI. I was like: yeah, I have done that, and you might need to use the helper service for it. I offered to create a quick blog post about it, so here we go.

First, as always, I connect to my Connection Server and use a query to retrieve the pool that I am going to disable:

$creds=import-clixml creds.xml
$hvserver=connect-hvserver pod1cbr1.loft.lab -Credential $creds
$hvservice=$hvserver.ExtensionData
$poolqueryservice=new-object vmware.hv.queryserviceservice
$pooldefn = New-Object VMware.Hv.QueryDefinition
$filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'desktopSummaryData.name'; 'value' = "Pod01_Pool01" }
$pooldefn.filter=$filter
$pooldefn.queryentitytype='DesktopSummaryView'
$pool = ($poolqueryService.QueryService_Create($hvservice, $pooldefn)).results

With this object I can show you the details of the desktop pool

($hvservice.Desktop.Desktop_Get($pool.id)).base
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

Like I said, to actually change things I need the helper service, so this is how you initialize that:

$desktopservice=new-object vmware.hv.DesktopService
$desktophelper=$desktopservice.read($HVservice, $pool.id)
$desktophelper.getdesktopsettingshelper() | gm

As we saw in the second screenshot, I need desktopsettings and then Enabled:

$desktophelper.getdesktopsettingshelper().getenabled()

To change the setting in the helper I need to use the setter, setEnabled($False):

$desktophelper.getdesktopsettingshelper().setEnabled($False)

Now this has not been changed on the desktop pool itself yet; to do that we need to use desktopservice.update. I also show the result of the change:

$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

And to reverse this

$desktophelper.getdesktopsettingshelper().setEnabled($True)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

Disabling provisioning uses the same methodology just in another spot.

To disable provisioning (the | gm is not needed, it’s just there to show you what’s in there):

($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper() | gm
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().getenableprovisioning()
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().setenableprovisioning($False)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings

And to revert it

$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().setenableprovisioning($True)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings

For RDSH farms the process is similar; just some of the naming is different. First, to get the farm object:

$farmqueryservice=new-object vmware.hv.queryserviceservice
$farmdefn = New-Object VMware.Hv.QueryDefinition
$filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'data.name'; 'value' = "Pod01-Farm01" }
$farmdefn.filter=$filter
$farmdefn.queryentitytype='FarmSummaryView'
$farm = ($farmqueryservice.QueryService_Create($hvservice, $farmdefn)).results
($hvservice.Farm.farm_get($farm.id)).data

And to create the helper and disable the farm

$farmservice=New-Object VMware.Hv.FarmService
$farmhelper=$farmservice.read($hvservice,$farm.id)
$farmhelper.getDataHelper().setenabled($False)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).data

And in reverse 🙂

$farmhelper.getDataHelper().setenabled($True)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).data

And now the provisioning part

($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings
$farmhelper.getAutomatedFarmDataHelper().getvirtualcenterprovisioningsettingshelper().setenableprovisioning($False)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings

Guess what?

$farmhelper.getAutomatedFarmDataHelper().getvirtualcenterprovisioningsettingshelper().setenableprovisioning($True)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings

[HorizonAPI] Using the Datastore service (incl. sizing calculation!)

I was looking on my blog for information about using datastore information through the Horizon APIs but couldn’t find any, so here’s a post on that.

This post uses the SOAP APIs; next time I’ll see what we can do with the REST API.

First I will make a connection like I always do
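A minimal sketch of that connection (the connection server name and credentials file are from my lab, as in the previous post):

# Import the credentials saved earlier with Export-Clixml and connect
$creds=Import-Clixml creds.xml
$hvserver=Connect-HVServer pod1cbr1.loft.lab -Credential $creds
# ExtensionData exposes the API services
$hvservice=$hvserver.ExtensionData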

Now let’s see what methods are available under the Datastore service
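A quick way to list them, sketched with Get-Member:

# List the methods on the Datastore service
$hvservice.Datastore | Get-Member -MemberType Method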

Let’s start with the three easy ones first, aka the bottom ones.

Datastore_ListDatastoresByHostOrCluster

The name says enough: with Datastore_ListDatastoresByHostOrCluster you are able to list datastores using a HostOrClusterId.

I am cutting some corners here on how to find this out, but to get this HostOrClusterId we need the DatacenterId, and to get that we’ll need the VirtualCenterId.

To get all vCenters in a pod you need to use VirtualCenter_List(), and what I do in this example is list them first and then put the first vCenter in a variable.

$hvservice.VirtualCenter.VirtualCenter_List()
$VC=$hvservice.VirtualCenter.VirtualCenter_List() | Select-Object -first 1

And the same for the datacenter, using the VirtualCenterId:

$hvservice.Datacenter.Datacenter_List($vc.id)
$DC=$hvservice.Datacenter.Datacenter_List($vc.id) | Select-Object -first 1

With the DatacenterId I’ll retrieve the info under HostOrCluster and store it in a variable.

$hvservice.HostOrCluster.HostOrCluster_GetHostOrClusterTree($dc.id)
$tree=$hvservice.HostOrCluster.HostOrCluster_GetHostOrClusterTree($dc.id)

Let’s browse this object and see what we can find
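A sketch of browsing it (the exact tree layout obviously depends on your own datacenter):

# The root container holds the hosts and clusters as children
$tree.TreeContainer
# The info objects of the children hold the names and ids
$tree.TreeContainer.Children.info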

We can clearly see the names here, and as I need Cluster_Pod2 I am putting that one in a variable.

$pod2cluster=$tree.TreeContainer.Children.info | select-object -last 1
$pod2cluster

And with this object I can get to my datastores, which I again store in a variable.

$hvservice.Datastore.Datastore_ListDatastoresByHostOrCluster($pod2cluster.id)
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByHostOrCluster($pod2cluster.id)

Let’s see what’s in there
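A sketch of looking inside; as in the later examples, the interesting fields live under datastoredata:

# Name, capacity and free space for each datastore
$datastores.datastoredata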

So we see most of the basic info we might need in here, including name, capacity and free space. Not sure why numberOfVMs is empty, as all of these datastores do have VMs.

Datastore_ListDatastoresByDesktopOrFarm

Let’s see what we need for this one

So an object of the type VMware.Hv.DatastoreSpec is needed; let’s define the object and see what’s in it.
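Defining it is a one-liner:

# Create an empty DatastoreSpec and show its properties
$spec=New-Object VMware.Hv.DatastoreSpec
$spec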

As I am not 100% sure which fields are required and what might break, I’ll have a look at the API Explorer article for this.

So it requires either a DesktopId OR a FarmId, while you can also provide the HostOrClusterId, but that will be populated if you don’t provide one.

I am not going to build the query here to get a desktop pool, so I’ll just use Get-HVPool and Get-HVFarm from the VMware.Hv.Helper PowerShell module.
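A sketch of that, with pool and farm names from my lab as examples:

# Get-HVPool and Get-HVFarm come from the VMware.Hv.Helper module
$pool=Get-HVPool -PoolName 'Pod01_Pool01'
$farm=Get-HVFarm -FarmName 'Pod01-Farm01'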

Next I put the $pool.id in the spec and get the details

$spec.DesktopId=$pool.id
$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores.datastoredata

So this lists all the datastores that I have available in this cluster. I know this for sure, as the ISO datastore is a read-only datastore that doesn’t hold any desktops.

Let’s do the same using the farmId

$spec.DesktopId=$null
$spec.FarmId=$farm.id
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores

Same number of datastores, so the same result.

Datastore_ListDatastoreClustersByHostOrCluster

As I don’t have any datastore clusters in my lab I cannot show it, but you’ll need the same HostOrClusterId as we used for Datastore_ListDatastoresByHostOrCluster.

Datastore_GetUsage

This method shows which desktop pools are using a particular datastore. A dry run shows that a DatastoreId is needed.

I will use one of the items that I still have stored in my $datastores variable

$datastore=$datastores |Select-Object -last 1
$datastore.DatastoreData
$hvservice.Datastore.Datastore_GetUsage($datastore.id)

So this is a rather boring datastore, as it only has one pool configured to use it (and it doesn’t even have any VMs from this pool on it), but you’ll see that there is another datastore configured for this pool as well. I do have a busier datastore though, on a local NVMe drive.

$datastore=$datastores | where {$_.datastoredata.name -like "*nvme*"}
$hvservice.Datastore.Datastore_GetUsage($datastore.id)

As you can see it shows the desktop pools, and even the single farm I have, that use this datastore, each with their own disk usage.

Datastore_GetDatastoreRequirements

The Datastore_GetDatastoreRequirements method does a calculation of what disk space might be needed for a desktop pool.

So let’s see what we need

$reqspec=new-object VMware.Hv.DatastoreRequirementSpec
$reqspec

That’s a lot, and as a screenshot wouldn’t fit, here is the link to the API Explorer page on it: here

To fill these things I will use the $pool variable that I still have stored.

$reqspec.DesktopId=$pool.id
$reqspec.Source="INSTANT_CLONE_ENGINE"
$reqspec.VmId=$pool.AutomatedDesktopData.VirtualCenterProvisioningSettings.VirtualCenterProvisioningData.parentvm
$reqspec.SnapshotId=$pool.AutomatedDesktopData.VirtualCenterProvisioningSettings.VirtualCenterProvisioningData.Snapshot
$reqspec.PoolSize=30
$hvservice.Datastore.Datastore_GetDatastoreRequirements($reqspec)

And when I change the poolsize
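For example, doubling it (a sketch; the returned requirements should scale with the pool size):

$reqspec.PoolSize=60
$hvservice.Datastore.Datastore_GetDatastoreRequirements($reqspec)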

Calling the EMEA #vCommunity for a Monthly Virtual #vBreakfast EMEA!

For years Fred Hofer has been organizing the vBreakfast for VMworld EMEA across the street from the Fira, in a very nice place. A big thank you to Fred for doing this every year!! Due to the mess this world is currently in, we couldn’t do it in person this year, so we moved it virtual. Big thanks to Runecast for organizing the virtual space we could use, but sadly we had to do it without the vWorldfamous Grumpy Waiter. I really liked the way it was set up, with separate tables and everything. The only issue was that it seemed to use whatever microphone, speakers and webcam were configured in Chrome, and switching was almost impossible.

During this meeting we came up with the idea to organize a monthly virtual vBreakfast for EMEA. Kev Johnson offered to host it again on the site that Runecast used, but I think it would be more flexible if we used something like Zoom. Luckily I can use a paid license, so it’s no problem to make this a two-hour, come-and-go-whenever-you-like session. The talk can be tech, it can be non-tech, whatever you like!

When?

Every last Friday of the month (yes, we’ll move the December one!)

What time?

7.30am UK time to 9.30am (8.30 Amsterdam to 10.30, 9.30 Israel time to 11.30, you get what I am saying)

How do I attend?

Drop me an email at vbreakfastemea at gmail.com and I will forward you the invite

But I don’t live in EMEA can I still attend?

Hell yeah, I don’t give a flying F..K where you are from, everyone is welcome and the more the better!

I don’t use VMware but Nutanix or Hyper-V or Citrix?

Didn’t I just say everyone is welcome? Yes I also don’t care what hypervisor or EUC platform you use!

I work for company XYZ and want to sponsor this!

I think the only way of sponsoring that could work is if you organize a real breakfast for everyone who signs up. If you are prepared to do that, we won’t offer more in return than a mention and a thank you, as this is an organized but unstructured meeting to meet up with friends.

My Golden Image build using HashiCorp Packer

After a tweet last week by former colleague and fellow vExpert Jeroen Buren, my reaction to that, and another question that we got, I decided to finally make some time and document how my Packer Golden Image build works. To be honest I don’t think it’s anything spectacular, and most of it has been borrowed from either Mark Brookfield or someone else I forgot, sorry for that! (if you recognize your work, send me a note and I’ll update this piece) While my templates aren’t really complicated, I am happy with them and they are exactly what I need in my lab. Things can definitely be done better, but it’s enough for me.

I use two main files: one with the generic settings for the type of image, and one with the variables for the vCenter where the image will be created. The latter looks like this:

{
    "vm_name":"W10-p2-{{isotime \"2006-01-02-15-04\"}}",
    "vcenter_server":"pod1vcr1.loft.lab",
    "username":"administrator@vsphere.local",
    "password":"hahahahanope!",
    "datastore":"NVME1TB (1)",
    "datastore_iso":"ISO",
    "cluster": "Cluster_Pod2",
    "network": "dpg_loft_102",
    "winrm_username": "Administrator",
    "winrm_password": "VMware1!"
}

The VM name is W10-p2-dateandtime; the isotime function combined with that reference layout makes sure I get the date and time of the script run. For more information see this page: https://www.packer.io/guides/workflow-tips-and-tricks/isotime-template-function. I have separate datastores for ISOs and for where the VM will be created, while the port group is on a dvSwitch.

The second file is slightly more complicated:

{
    "builders": [
    {
        "type": "vsphere-iso",
        "vcenter_server":      "{{user `vcenter_server`}}",
        "username":            "{{user `username`}}",
        "password":            "{{user `password`}}",
        "insecure_connection": "true",
 
        "vm_name": "{{user `vm_name`}}",
        "datastore": "{{user `datastore`}}",
    "Notes": "Windows 10 1909 Instant Clone Image build using Packer {{isotime \"2006-01-02-15-04\"}}",
        "create_snapshot": true,
        "cluster": "{{user `cluster`}}",
        "network": "{{user `network`}}",
        "boot_order": "disk,cdrom",
 
        "vm_version":       15,  
        "guest_os_type": "windows9_64Guest",
    "firmware":	"bios",
 
        "communicator": "winrm",
        "winrm_username": "{{user `winrm_username`}}",
        "winrm_password": "{{user `winrm_password`}}",
    "winrm_timeout": "5h",
 
        "CPUs":             2,
        "RAM":              6064,
        "RAM_reserve_all":  false,
    "video_ram": 128000,
    
    "remove_cdrom": true,
 
        "disk_controller_type":  "pvscsi",
        "disk_size":        51200,
        "disk_thin_provisioned": true,
    
    "configuration_parameters": {
      "svga.autodetect" : "FALSE",
      "svga.numDisplays" : "2"
    },
 
        "network_card": "vmxnet3",
 
        "iso_paths": [
        "[{{user `datastore_iso`}}] Windows_10_1909_enterprise.iso",
        "[{{user `datastore_iso`}}] VMware-Tools-windows-11.0.5-15389592.iso"
        ],
 
        "floppy_files": [
            "{{template_dir}}/setup/"
        ],
        "floppy_img_path": "[{{user `datastore_iso`}}] floppy/pvscsi-Windows8.flp"
    }
    ],
 
    "provisioners": [
    {
            "type": "windows-shell",
      "script": "{{template_dir}}/setup/onedrive.cmd"
        },
    {
      "type": "windows-update",
      "search_criteria": "IsInstalled=0",
      "filters": [
        "exclude:$_.Title -like '*Preview*'",
        "include:$true"
      ],
      "update_limit": 25
    },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
        {
            "type": "powershell",
            "inline": [
        "Set-TimeZone -Id 'W. Europe Standard Time'",
                "Get-AppXPackage -AllUsers | Where {($_.name -notlike \"Photos\") -and ($_.Name -notlike \"Calculator\") -and ($_.Name -notlike \"Store\")} | Remove-AppXPackage -ErrorAction SilentlyContinue",
                "Get-AppXProvisionedPackage -Online | Where {($_.DisplayName -notlike \"Photos\") -and ($_.DisplayName -notlike \"Calculator\") -and ($_.DisplayName -notlike \"Store\")} | Remove-AppXProvisionedPackage -Online -ErrorAction SilentlyContinue"     
            ]
        },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
        {
            "type": "powershell",
            "scripts": [
                "{{template_dir}}/setup/Horizon_Agent_IC.ps1",
                "{{template_dir}}/setup/appvolumes.ps1",
                "{{template_dir}}/setup/dem.ps1",
                "{{template_dir}}/setup/fslogix.ps1",
                "{{template_dir}}/setup/CU.ps1"
            ]
        },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
    {
            "type": "powershell",
            "scripts": [
        "{{template_dir}}/setup/osot.ps1"
            ]
        }
    ]
}

Some specifics: to mark my GIs I always create a note with the type and build date, again using isotime.

"Notes": "Windows 10 1909 Instant Clone Image build using Packer {{isotime \"2006-01-02-15-04\"}}",

And as I am very lazy, I also have it create a snapshot for me:

"create_snapshot": true,

These make sure I have more than the default RAM and more memory for the built-in graphics adapter:

"RAM":              6064,
"RAM_reserve_all":  false,
"video_ram": 128000,


"configuration_parameters": {
  "svga.autodetect" : "FALSE",
  "svga.numDisplays" : "2"
},

Some versions of Packer had an issue with ejecting the CD-ROMs, but that has been fixed now:

"remove_cdrom": true,

There are several optimizations that take place, like the App Volumes-related script at the beginning (onedrive.cmd) and the VMware OS Optimization Tool at the end (osot.ps1).

All the agents are shared from a webserver, and this is one of the .ps1 scripts that starts the installation, the Horizon Agent in this case:

$ErrorActionPreference = "Stop"
 
$webserver = "loftfls01.loft.lab"
$url = "http://" + $webserver
$installer = "VMware-Horizon-Agent-x86_64-8.0.0-16530789.exe"
$listConfig = "/s /v ""/qn VDM_VC_MANAGED_AGENT=1 ADDLOCAL=Core,ClientDriveRedirection,RTAV,TSMMR,VmwVaudio,USB,NGVC,PerfTracker,HelpDesk"""
 
# Verify connectivity
Test-Connection $webserver -Count 1
 
# Get Horizon Agent
Invoke-WebRequest -Uri ($url + "/" + $installer) -OutFile C:\$installer
 
# Unblock installer
Unblock-File C:\$installer -Confirm:$false -ErrorAction Stop
 
# Install Horizon Agent
Try 
{
   Start-Process C:\$installer -ArgumentList $listConfig -PassThru -Wait -ErrorAction Stop
}
Catch
{
   Write-Error "Failed to install the Horizon Agent"
   Write-Error $_.Exception
   Exit -1 
}
 
# Cleanup on aisle 4...
Remove-Item C:\$installer -Confirm:$false

and the osot.ps1 looks like this

$ErrorActionPreference = "Stop"
 
$webserver = "loftfls01.loft.lab"
$url = "http://" + $webserver
$osot = "VMwareOSOptimizationTool.exe"
$osotConfig = "VMwareOSOptimizationTool.exe.config"
 
# Verify connectivity
Test-Connection $webserver -Count 1
 
# Get Files
ForEach ($file in $osot,$osotConfig) {
   Invoke-WebRequest -Uri ($url + "/" + $file) -OutFile C:\$file
}
 
# Run OSOT
C:\VMwareOSOptimizationTool.exe -o -t "VMware Templates\Windows 10 and Server 2016 or later" -f all
 
# Sleep before cleanup
Start-Sleep -Seconds 180
 
# Cleanup on aisle 4...
ForEach ($file in $osot,$osotConfig) {
   Remove-Item C:\$file -Confirm:$false
}

I have even created a simple PowerShell script that starts the build with a couple of extra options: -timestamp-ui shows a timestamp on every line of output, while -force isn’t needed anymore as each build has its own name, but I keep it in there anyway.

[CmdletBinding()]
param(
[Parameter(Mandatory)]
[string]$environmentfile,
[Parameter(Mandatory)]
[string]$buildfile
)

c:\software\packer\packer.exe build -force -timestamp-ui -var-file $environmentfile $buildfile

So how does this look?

I understand that this is far from a full explanation of all the options in the JSON files, but I think most things are rather generic, apart from the few things I have highlighted.

Total running time in my lab highly depends on which host I use (core speed) and which ISO is used, as I also install Windows updates. The Server 2019 ISO updated in September 2020 takes 40 minutes, while Windows 10 1909 without extra patches takes just over an hour.

Jon Howe also did a nice write-up with some more explanation: https://www.virtjunkie.com/vmware-template-packer/#Packer_Template_File_User_Variables

Adding Nutanix CE CVM nodes & Prism Central VMs to ControlUp

Take note: at the moment of writing I am preparing to upgrade my Nutanix Community Edition to the latest version, so I am still at version 20191030.415 of AHV. Also be aware that I can’t make any promises about whether what I am doing is supported on your production Nutanix systems, but as long as you don’t install any extra packages it’s just an SSH session that pulls some data.

Today I had a call with Samuel Legrand and a potential ControlUp customer where they had some questions about Nutanix CVM resource usage. Because of that I tried to add the CVM of the Nutanix Community Edition in my lab, and that resulted in this tweet:

So yes, it’s possible to add the CVM as a monitored Linux machine in ControlUp, and it’s rather easy to do so. Let me show you what to do. To make sure that I don’t mess with any CentOS packages, I disable the installation of missing packages on Linux machines in the ControlUp console.

Next up is defining a (shared) credential. I chose a shared credential so I get to see the data in Insights as well. You do this under Monitor Settings > Domain Identity.

Click Add Credentials, use .\nutanix as the username (so it’s seen as a local account and not a domain account), fill in the password and make sure the friendly name makes clear what it is.

Next up we’ll actually add the machines by configuring a Linux Data Collector. Click Linux Data Collector in the ribbon, make up a name and select the credential you just created.

After this, click Add and use an IP range to scan or work with single IPs, press Scan, click Add for the correct systems and hit OK.

It is strongly advisable to select a dedicated data collector instead of the Console/Monitor; this can be any system that has the ControlUp agent installed. In the Linux Data Collector screen click the arrow for the Connector Options, select ControlUp Console / Monitor and click Remove.

Now click Add, select the machine you want to use as data collector and click OK twice.

The machines will now be added to the bottom of the tree

Before the next screenshot I moved them into a folder, but you can clearly see that things like OS and memory utilization are working. CPU is not, and when we tried it with the customer’s production systems they actually gave an error; this is most probably caused by the fact that not all required RPMs are installed. It’s better than nothing though!

When drilling down on the machines to the processes, it’s clear which processes are the biggest consumers on the CVM.

And for Prism Central CE

So with this we could monitor the Nutanix pieces and create triggers for if or when one of the processes becomes unavailable or the entire machine goes down.

The VMware Labs flings monthly for September 2020

[Disclaimer] any typos in this blog post are Brian Madden’s fault, since I am listening to his VMworld session with Carl Webster [/disclaimer]

Yes, this blog post is really being created during that session. Hopefully you were also able to join VMworld, because even though I hate virtual events and miss the live interaction with people, I am still kind of liking it. This month four new flings have been released, while another four received updates.

New

vRealize Network Insight and HCX Integration

Python Utility for VMware Cloud Foundation Management

VMCA Certificate Generator

SQL30 – An ORM for SQLITE on ESX

Updates

Supernova – Accelerating Machine Learning Inference

VMware Machine Learning Platform

Horizon Session Recording

App Volumes Entitlement Sync

New Flings

vRealize Network Insight and HCX Integration

This integration script between vRealize Network Insight (vRNI) and VMware HCX allows you to streamline the application migration process.

First, use the application discovery methods within vRNI to discover the application boundaries, including the VMs (or other workloads), to form application constructs within vRNI. This integration script then synchronizes the vRNI application constructs into HCX Mobility Groups, saving you the time that it would’ve taken to do this manually. After the sync, you can pick up the migration process and execute the migration.

Python Utility for VMware Cloud Foundation Management

Python Utility for VMware Cloud Foundation Management is a lightweight python-based command line VCF administration tool. It offers an interactive shell interface to interact with the VMware Cloud Foundation (SDDC Manager) public API.

After the utility is installed, a vcfkite binary is available on the machine, which can be used to launch the interactive shell interface to interact with SDDC Manager (it uses the SDDC Manager public APIs).

This utility is platform independent (tested on Linux CentOS / Windows W2K12) and is distributed as a Python wheel package, which can be easily installed using Python’s native pip3 command (for detailed information check the Instructions and Requirements sections).

The purpose of this utility is to make the VMware Cloud Foundation APIs more accessible to users/customers and to work towards the adoption of the VMware Cloud Foundation API & VMware Cloud Foundation in general.

VMCA Certificate Generator

The VMware Certificate Authority (VMCA) Certificate Generator gives you the ability to simply retrieve certificates signed by the VMware Certificate Authority (VMCA) running on vCenter / PSC.

This can be useful when you don’t have access to a company wide Certificate Authority (e.g. small-business or running in a lab), but you want to have valid certificates for your services.
The certificates can be used for other VMware products like vRealize Suite, NSX as well as 3rd party services.
Once you trust the VMCA root certificate (to be retrieved by the vCenter URL or over this tool), you trust all services with the new certificates.

The validity of the certificates is not changeable and depends on the vCenter version. With vCenter 7.0 you’ll get certificates valid for 2 years.

SQL30 – An ORM for SQLITE on ESX

Persistence is an integral part of any real-world software application. A widely used method of achieving (file based) persistence is through a SQLite database. Python natively provides support for interacting with SQLite with the help of the module “sqlite” (sqlite3 in Python 3). SQLAlchemy is a popular ORM used for interacting with a SQLite database in Python. However, this and many other similar ORMs do not work on the ESX hypervisor as-is, because these packages depend on further packages which are not supported / installable on the ESX hypervisor.

In this fling, we present SQL30, a zero-weight ORM for the SQLite database written using only native Python constructs. This Python package has no dependency on any external Python package and works as-is on ESX version 6.5 and above.

SQL30 is useful as:

  • It helps developers achieve persistence in Python applications even on platforms such as ESX which have limited / trailing support for Python.
  • It improves productivity, as developers can write (SQL based) database applications without having to learn SQL itself.

Updated Flings

Supernova – Accelerating Machine Learning Inference

The Supernova – Accelerating Machine Learning Inference fling is made to enable Machine learning on the edge.

Changelog

Version 1.0 Update

Compared to 0.0.1, this release supports:

  1. New HW accelerators – AMD GPU + Xilinx FPGA
  2. CPU Accelerations with OpenVINO
  3. Basic K8S deployment
  4. Versatile APIs
  5. vSphere Bitfusion support
  6. More use cases like facial mask

VMware Machine Learning Platform

The VMware Machine Learning Platform helps Data Scientists perform their magic ML workloads on VMware infrastructure.

Changelog

Version 0.4.0 Update

  • Federated Data and Model Manager (FDMM)
  • Docs Service accessible at /vmlp-docs/
  • Better LDAP deployment configuration
  • Image “prepull” tool
  • Added BitFusion 2.0 support to Keras/Pytorch examples
  • Lots of bug fixes

Includes contributions from: Jiahao “Luke” Chen (Federated Data and Model Manager)

Horizon Session Recording

The Horizon Session Recording fling can assist in troubleshooting a desktop by not having the user repeat their steps time after time.

Changelog

Version 2.1.50 Update

  • Fixed multiple deadlocks in the agent recording.
  • Added support for Horizon 8 (2006).

App Volumes Entitlement Sync

The App Volumes Entitlement Sync fling is a great help when you have multiple App Volumes environments and want to make sure everyone gets the same apps everywhere.

Changelog

Version 4.3 Update

  • Fixes Typo in Secondary Site Server Label
  • No longer shows non-replicated applications as errors in the Fix Apps dialog; much easier to read what will be fixed.

[HorizonAPI] Finding VDI or RDS machines based on a wrong/old Golden Image

One of the first things I did years ago when I first learned of the Horizon APIs was to start working on the vCheck for Horizon, as at that point I was managing a Horizon 6.2.x environment with lots of pools and lots of issues. With the vCheck I didn’t need to log into all pods anymore, nor did I need to check each and every pool after a recompose to see if all desktops had the correct image. Last week Guy Leech asked me if there was a script that could do this for RDS farms, as he was working on a script that has to do with App Volumes & RDS hosts. I was like: hell yeah, we have that, but when I looked at the vCheck I had to admit that it was actually a hell no.

So after creating a new RDS image that could be used with Instant Clones this week, it was time to create that check this morning, and I even squashed a bug in the VDI wrong snapshot check for when a Desktop Pool doesn’t have any machines in it. This led to this tweet that you might have seen:

So what is actually the magic behind these checks? To be honest it is rather simple, as the names of both the VM and the snapshot in use are embedded in objects both at pool/farm level and in the machine objects themselves.

First I connect to the Connection Server (we’ll use a credentials file), and I also define two empty arrays that we will use later:

$hvconserver="pod2cbr1.loft.lab"
$credsfile="D:\homelab\creds.xml"

$creds=Import-Clixml $credsfile

$hvserver=connect-hvserver -Server $hvconserver -Credential $creds
$hvservice=$hvserver.ExtensionData
$wrongsnapdesktops=@()
$wrongsnaphosts=@()

After this I use two queries to gather pool and farm information. The summary views don’t contain the needed information, so I have to use desktop_get and farm_get with the id to get what we need.

# --- Get Desktop pools
$poolqueryservice=new-object vmware.hv.queryserviceservice
$pooldefn = New-Object VMware.Hv.QueryDefinition
$pooldefn.queryentitytype='DesktopSummaryView'
$poolqueryResults = $poolqueryService.QueryService_Create($hvservice, $pooldefn)
$pools = foreach ($poolresult in $poolqueryResults.results){$hvservice.desktop.desktop_get($poolresult.id)}
$poolqueryservice.QueryService_DeleteAll($hvservice)
# --- Get RDS Farms

$Farmqueryservice=new-object vmware.hv.queryserviceservice
$Farmdefn = New-Object VMware.Hv.QueryDefinition
$Farmdefn.queryentitytype='FarmSummaryView'
$FarmqueryResults = $FarmqueryService.QueryService_Create($hvservice, $Farmdefn)
$farms = foreach ($farmresult in $farmqueryResults.results){$hvservice.farm.farm_get($farmresult.id)}
$Farmqueryservice.QueryService_DeleteAll($hvservice)

So how does this look?

And inside the automateddesktopdata and automatedfarmdata we find a property called virtualcenternamesdata that has what we need.
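A quick sketch of looking at it for the first pool and farm from the queries above:

# Golden Image (parent VM) and snapshot paths as configured on pool and farm
$pools[0].automateddesktopdata.virtualcenternamesdata
$farms[0].automatedfarmdata.virtualcenternamesdata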

Next I will create objects for both the first farm and the first pool to show where we need to look in the VDI/RDS machine objects.

$queryservice=new-object vmware.hv.queryserviceservice
$defn = New-Object VMware.Hv.QueryDefinition
$defn.queryentitytype='MachineSummaryView'
$defn.filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'base.desktop'; 'value' = $pool.id }
$queryResults = $queryService.QueryService_Create($hvservice, $defn)
$poolmachines=$hvservice.machine.machine_getinfos($queryResults.results.id)


$queryservice=new-object vmware.hv.queryserviceservice
$defn = New-Object VMware.Hv.QueryDefinition
$defn.queryentitytype='RDSServerInfo'
$defn.filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'base.farm'; 'value' = $farm.ID }
$queryResults = $queryService.QueryService_Create($hvservice, $defn)
$farmmachines=$queryresults.results

As you can see I take an extra step for the desktops, as the information that we need is not visible in the MachineSummaryView and the MachineDetailsView is a mess to run queries against. The VDI machines have the GI and snapshot data stored in managedmachinedata.viewcomposerdata (yes, also for Instant Clones), while the RDS hosts have it stored in rdsservermaintenancedata.
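To make that concrete, a sketch comparing a single desktop against its pool, assuming $pool holds one of the pools gathered above (the full script below uses exactly these property paths):

# What the machine itself reports...
$poolmachines[0].managedmachinedata.viewcomposerdata | Select-Object baseimagepath, baseimagesnapshotpath
# ...versus what the pool is configured with
$pool.automateddesktopdata.virtualcenternamesdata | Select-Object parentvmpath, snapshotpath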

After this it’s a matter of combining that information into a nice script that will grab it all for you.

$hvconserver="pod2cbr1.loft.lab"
$credsfile="D:\homelab\creds.xml"

$creds=Import-Clixml $credsfile

$hvserver=connect-hvserver -Server $hvconserver -Credential $creds
$hvservice=$hvserver.ExtensionData
$wrongsnapdesktops=@()
$wrongsnaphosts=@()

# --- Get Desktop pools
$poolqueryservice=new-object vmware.hv.queryserviceservice
$pooldefn = New-Object VMware.Hv.QueryDefinition
$pooldefn.queryentitytype='DesktopSummaryView'
$poolqueryResults = $poolqueryService.QueryService_Create($hvservice, $pooldefn)
$pools = foreach ($poolresult in $poolqueryResults.results){$hvservice.desktop.desktop_get($poolresult.id)}
$poolqueryservice.QueryService_DeleteAll($hvservice)
# --- Get RDS Farms

$Farmqueryservice=new-object vmware.hv.queryserviceservice
$Farmdefn = New-Object VMware.Hv.QueryDefinition
$Farmdefn.queryentitytype='FarmSummaryView'
$FarmqueryResults = $FarmqueryService.QueryService_Create($hvservice, $Farmdefn)
$farms = foreach ($farmresult in $farmqueryResults.results){$hvservice.farm.farm_get($farmresult.id)}
$Farmqueryservice.QueryService_DeleteAll($hvservice)




foreach ($pool in $pools){
  $poolname=$pool.base.name

  if ($pool.type -like "*automated*"){
    $queryservice=new-object vmware.hv.queryserviceservice
    $defn = New-Object VMware.Hv.QueryDefinition
    $defn.queryentitytype='MachineSummaryView'

    $defn.filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'base.desktop'; 'value' = $pool.id }

        $queryResults = $queryService.QueryService_Create($hvservice, $defn)

    if ($queryResults.results.count -ge 1){
      $poolmachines=$hvservice.machine.machine_getinfos($queryResults.results.id)
      $wrongsnaps=$poolmachines | where {$_.managedmachinedata.viewcomposerdata.baseimagesnapshotpath -notlike  $pool.automateddesktopdata.VirtualCenternamesdata.snapshotpath -OR $_.managedmachinedata.viewcomposerdata.baseimagepath -notlike $pool.automateddesktopdata.VirtualCenternamesdata.parentvmpath}
      if ($wrongsnaps){
        foreach ($wrongsnap in $wrongsnaps){
          $wrongsnapdesktops+= New-Object PSObject -Property @{
            "VM Name" = $wrongsnap.base.name;
            "VM Snapshot" = $wrongsnap.managedmachinedata.viewcomposerdata.baseimagesnapshotpath;
            "VM GI" = $wrongsnap.managedmachinedata.viewcomposerdata.baseimagepath;
            "Pool Snapshot" = $pool.automateddesktopdata.VirtualCenternamesdata.snapshotpath;
            "Pool GI" = $pool.automateddesktopdata.VirtualCenternamesdata.parentvmpath;
          }
        }
      }
    }
    $hvservice.QueryService.QueryService_DeleteAll()
  }
}

foreach ($farm in $farms){
  $farmname=$farm.data.name

  if ($farm.type -like "*automated*"){
    $queryservice=new-object vmware.hv.queryserviceservice
    $defn = New-Object VMware.Hv.QueryDefinition
    $defn.queryentitytype='RDSServerInfo'

    $defn.filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'base.farm'; 'value' = $farm.ID }

    $queryResults = $queryService.QueryService_Create($hvservice, $defn)
    $farmmachines=$queryResults.Results
    if ($farmmachines.count -ge 1){
      $wrongsnaps=$farmmachines | where {$_.rdsservermaintenancedata.baseimagesnapshotpath -notlike  $farm.automatedfarmdata.VirtualCenternamesdata.snapshotpath -OR $_.rdsservermaintenancedata.baseimagepath -notlike $farm.automatedfarmdata.VirtualCenternamesdata.parentvmpath}
      if ($wrongsnaps){
        foreach ($wrongsnap in $wrongsnaps){
          $wrongsnaphosts+= New-Object PSObject -Property @{
            "RDS Name" = $wrongsnap.base.name;
            "VM Snapshot" = $wrongsnap.rdsservermaintenancedata.baseimagesnapshotpath;
            "VM GI" = $wrongsnap.rdsservermaintenancedata.baseimagepath;
            "Farm Snapshot" = $farm.automatedfarmdata.VirtualCenternamesdata.snapshotpath;
            "Farm GI" = $farm.automatedfarmdata.VirtualCenternamesdata.parentvmpath;
          }
        }
      
      }
      $hvservice.QueryService.QueryService_DeleteAll()
    }
  }
}
$wrongsnaphosts
$wrongsnapdesktops

Yes this is the same idea as what I use in the vCheck and what I will be using in the ControlUp Script Based Action that I will be creating soon.

The VMware Labs flings monthly for August 2020 – Time for a new OSOT

The schedule builder for VMworld is open, but we should have been at VMworld US around this time if only that stupid virus had stayed away. In August there were three new fling releases, while eight flings got one or more updates.

New

Software-Defined Data Center Skywalk

Federated Machine Learning on Kubernetes

VMware Container For Folding@Home

Updates

FlowGate

VMware Machine Learning Platform

Demo Appliance for Tanzu Kubernetes Grid

Infrastructure Deployer for vCloud NFV

Workspace ONE UEM SCIM Adapter

VMware OS Optimization Tool

App Volumes Migration Utility

USB Network Native Driver for ESXi

New Releases

Software-Defined Data Center Skywalk

Even with the description it’s not always clear what Software-Defined Data Center Skywalk does, but apparently it helps in building VPNs between VMC and on-prem datacenters.

The current API/UI workflow requires multiple operations in different VMC Software-Defined Data Centers (SDDCs), either using APIs or the UI. We are solving the problem to auto-register, discover and connect VPNs between VMC SDDCs on a single click event. The Distributed Firewall (DFW) policies are also mapped based on user inputs from on-premises to the VMC SDDC using this interface.

Federated Machine Learning on Kubernetes

Federated Machine Learning (FML) is one of the most promising machine learning technologies to solve data silos and strengthen data privacy and security, and it is accepted by more and more financial organizations. FATE is an open-source project hosted by the Linux Foundation to provide a federated learning framework. FATE has been used to increase the performance of predictions in credit reporting, insurance and other financial areas, as well as surveillance and visual detection projects. It helps organizations comply with strict privacy regulations and laws such as GDPR and CCPA.

This Fling provides a tool to quickly deploy and manage a FATE cluster by either Docker-compose or Kubernetes. Its features include:

  • Test and develop models in Jupyter using Federated Machine Learning technologies;
  • Build a FATE cluster with full life-cycle management of the federated learning platform.

In the Fling, a command line tool talks to Kubernetes to initiate an entire FATE cluster. The Fling includes a sample configuration which can be used to quickly deploy and try out federated learning. The configuration can be customized based on actual requirements.

VMware Container For Folding@Home

VMware Container For Folding@Home is a Docker container for running the Folding@Home client. This container is supported on both standalone Docker clients and on a Kubernetes cluster. An optional command line toggle turns GPU support on or off, and all other common FAH client command line inputs are supported as well.

The Folding@Home container is configured to automatically join Team VMware ID 52737. Everyone is welcome to join! Check out http://vmwa.re/fah for team and individual statistics.

Updated flings

FlowGate

In enterprise data centers, IT infrastructure and facilities are generally managed separately, which leads to information gaps. Collaboration between facility and IT infrastructure systems is limited or manual, and virtualization adds more complexity.

The goal of FlowGate is to build facility awareness into IT management systems and improve IT operations management and automation in terms of high availability, cost saving and sustainability, with more information on power, cooling, environment (e.g. humidity, temperature) and security.

Changelog

Version 1.1.2 Update

  • Add Chassis support in API
  • Add PDU phase data.
  • Upgrade Springboot from 1.4.7 to 2.3.7

VMware Machine Learning Platform

The goal of VMLP is to provide an end-to-end ML platform for Data Scientists to perform their jobs more effectively by running ML workloads on top of VMware infrastructure.

Changelog

Version 0.3.0

  • Federated ML based on FATE
  • Istio 1.4.9
  • Horovod 0.19.2
  • Upgraded major components (MLflow 1.10.0, Pandas 1.0.3 and others)
  • Important stability bug fixes
  • Added documentation

Includes contributions from: Jiahao “Luke” Chen (bug fixes and Federated ML/FATE integration),
Shan Lahiri (Getting Started Guide), Jason Hutson (relentlessly debugging Kubernetes on VMware
infra), Nick Ford (sorting out VMware NSX Advanced Load Balancer/AVI Networks configuration and issues)

Demo Appliance for Tanzu Kubernetes Grid

A Virtual Appliance that pre-bundles all required dependencies to help customers in learning and deploying standalone Tanzu Kubernetes Grid (TKG) clusters running on either VMware Cloud on AWS and/or vSphere 6.7 Update 3 environment for Proof of Concept, Demo and Dev/Test purposes.

Changelog

Aug 10, 2020 – v1.1.3

  • Support for latest TKG 1.1.3 release
  • Support for TKG Workload Cluster upgrade workflow from K8s 1.17.9 to 1.18.6
  • TKG Crash Diagnostic utility (crash-diagnostics) included in appliance
  • Helm (3.2.4) included in appliance
  • Updated to latest version of Harbor (1.10.3), Docker Compose (1.26.2), Kubectl (1.18.6), Octant (0.14.1) and TMC (d11404fb) CLI in appliance
  • PowerCLI script to automate 100% of pre-req for running on TKG on VMware Cloud on AWS

TKG-Demo-Appliance-1.1.3.ova
MD5: 86ce0c263ebcb6d20addcb6e1767e55a

Infrastructure Deployer for vCloud NFV

Infrastructure Deployer for vCloud NFV is an automation-based deployment tool used for setting up the VMware vCloud NFV platform

Changelog

Version 3.3 Update

  • Updated RAID version from 3.2.1 vCloud NFV VCD to 3.3 vCloud NFV OSE (OpenStack Edition)

Workspace ONE UEM SCIM Adapter

Workspace ONE UEM SCIM Adapter provides SCIM user/group management capabilities to Workspace ONE UEM. The middleware translates the System for Cross-Domain Identity Management, SCIM, to a CRUD REST framework that Workspace ONE UEM can interpret. This capability allows Workspace ONE UEM to synchronize cloud-based identity resources (users/groups/entitlements) without the need for an LDAP endpoint (service to service model). Examples include Azure AD, Okta, and Sailpoint.

Changelog

20.08 Release Notes & Update:

**Please Note:** If you have already set up the WS1 SCIM Adapter, it is possible that moving to 20.08 will create new accounts. Please consider resetting the Directory Services configuration for the OG you are connecting to.

New Features:

  • Deployments now exclusively supported on Docker. See install instructions for more details on how to orchestrate the deployment using the included Helm chart.

Bugs Fixed:

  • createGroup returns unexpected error due to missing payload return

Other Notes:

  • Bitnami deployment script introduced in 20.03 has been deprecated. Although it is still possible to deploy on Appliance form-factors, future development will be exclusively supported on Docker.

VMware OS Optimization Tool

I have read in plenty of places that people managed to mess up their image with OSOT, that they’re never going to use it anymore and, even worse, will accept unoptimized images in production. That is the wrong choice in my opinion: please use OSOT or other ways to optimize your image, but think about what you need to optimize and test it!

Changelog

August, 2020, b1171 Version Update

Optimizations

  • Disable Passive Polling is no longer selected by default, as this was shown to cause issues with some applications thinking they did not have internet connectivity. Note that this optimization entry was previously incorrectly named Enable Passive Polling.
  • Added a new setting to use the WDDM graphics display driver for Remote Desktop Connections.

UI Improvements

  • Brand new interface functionality to allow searching the optimizations to find specific entries. This is available on both the Optimize and My Templates tabs and allows you to find and view settings based on what you type in.
  • Added a grid splitter to extend the area of the left tree view under My Templates.

Common Options

New controls to simplify keeping Cortana and search, and how the search box appears in the taskbar.

Generalize

  • New option to specify the Administrator account to use after running Sysprep. This defaults to the current user account. The account specified is also added to the Administrators and Remote Desktop Users groups.
  • New option to perform an automatic restart after the Generalize task has completed.

Bug Fixes

  • Common Options settings were reset after an optimization. These should now be retained.
  • Changed the way the default profile was used to ensure that this works when OSOT is run using the system account.
  • Windows Syspart Repair was being prevented from being disabled properly.
  • Windows Superfetch was being prevented from being disabled properly.
  • Windows Update was sometimes not disabled properly after running a generalize.
  • Updated templates were saved to the wrong location.

August, 2020, b1170 Update

Templates

New combined template for all versions of Windows 10 and Windows Server 2016 and 2019. Optimizations can have optional parameters to filter the version that a setting is applied to.

Optimizations

Turn off NCSI is no longer selected by default as this was shown to cause issues with some applications thinking they did not have internet connectivity.

New optimizations were added and some removed; for details see: https://techzone.vmware.com/resource/vmware-operating-system-optimization-tool-guide#Template_Updates

Bug Fixes

  • Fixed issues with re-enabling Windows Update functionality on Server 2016 and 2019.
  • Fixed issue that was preventing Windows Antimalware from being disabled properly.

Common Options

Changed interface and language on the Common Options page for Windows Update to remove confusion. This option can only be used to disable Windows Update as part of an optimization task. To re-enable Windows Update functionality, use the Update button on the main menu ribbon.

Guides

Updated OSOT user guide: VMware Operating System Optimization Tool Guide.

App Volumes Migration Utility

App Volumes Migration Utility allows admins to migrate AppStacks managed by VMware App Volumes 2.18 to the new application package format of App Volumes 4. The format of these packages in App Volumes 4 has evolved to improve performance and help simplify application management.

Changelog

1.0.4 Version Update

  1. Fix for “AppVolumes Manager is invalid” error shown in the UI when connecting to App Volumes Manager 4 version 2006.
  2. Fix for the bug “failed to get old appID from YML entries” in the AppCapture.log during migration of appstacks.

USB Network Native Driver for ESXi

USB has become one of the most widely adopted connection types in the world, and USB network adapters are also popular among edge computing platforms. In some platforms there are either limited or no PCI/PCIe slots for I/O expansion, and in some cases an Ethernet port is not even available. Another advantage of a USB-based network adapter is that it can be hot-plugged into a system without a reboot, which means no impact to the workload; the same is true for hot-remove.

This Fling supports the most popular USB network adapter chipsets found in the market: the ASIX USB 2.0 gigabit ASIX88178a, ASIX USB 3.0 gigabit ASIX88179, Realtek USB 3.0 gigabit RTL8152/RTL8153 and Aquantia AQC111U. These are relatively inexpensive devices that many of our existing vSphere customers are already using and are familiar with.

Changelog

Aug 24, 2020 – v1.6

  • Add support for Aquantia and Trendnet AQC111U (0xe05a:0x20f4)
  • Add support for Realtek RTL8153 (0x045e:0x07c6)
  • Add support for Realtek RTL8156 (0x0bda:0x8156)
  • Support for persistent VMkernel to USB NIC MAC Address mappings
  • Simplified USB NIC persistency
  • Resolved link speed issue for RTL8153 chipsets

Note 1: There are known issues when using Jumbo Frame 9K for RTL* chipset, this is still being investigated. For now, only up to 4K is supported.

Note 2: This will be the last release which will include support for ESXi 6.5

ESXi700-VMKUSB-NIC-FLING-39035884-component-16770668.zip
ESXi670-VMKUSB-NIC-FLING-39203948-offline_bundle-16780994.zip
ESXi650-VMKUSB-NIC-FLING-39176435-offline_bundle-16775917.zip

Get your code ready for the first VMware{Code} CodeCon scripting contest by ControlUp (Including a getting started)

When the idea was pitched to the VMware{Code} Code Coaches for a Code Connect event, I thought it would be fun to have a small scripting contest alongside the main Hackathon. Since I am employed by ControlUp, the idea is to combine this with the script actions that are available in our very own Real-Time Console. I will show a bit later how this works, but for me this functionality is super cool and was one of the main reasons I wanted to work with ControlUp.

The goal is to create one script (more if you have time) that can be run by right-clicking an object in the Console and performing whatever action you want the script to take. In our community library (or on GitHub) there are over 300 tested and ready-to-use scripts that have proven themselves for our customers. You can go as crazy as you want. Want to start a Docker container doing a full file index when CPU load goes over 50%? No idea why you would want to (to each their own), but if you can script it, you can enter it in the contest. Keep in mind, though, that we don’t have support for containers at the moment.

Sounds great! But what is ControlUp?

ControlUp is way more than the EUC monitoring tool you might think it is. With connections to all major hypervisors available it’s the perfect tool for managing your virtual infrastructure. With features like Automated Actions, controllers for registry, file system and services an admin’s life can be made infinitely easier. It’s time to do away with all those annoying tickets and focus more on the exciting parts of life.

Getting started

Note: if you already have an existing ControlUp environment that you can use there’s no need to do this again.

The easiest way to get started (and I know you’re not going to believe this) would be to point to our quick-start guide. But you know what? I don’t just want to do that. For this contest it’s sufficient to have a single domain-joined desktop and whatever environment you want to develop the script for. When you have this, head over to ControlUp.com to start your free 21-day trial. Enter your email and in just a few minutes you’ll have a download link in your inbox that includes a short video on how to register. Keep in mind that the trial is time-limited, so don’t start too far in advance of the contest or you will be set back to the basic license level, which doesn’t include some of the features you might need.

Note: if this happens to you, please contact me and I’ll see what I can do for you.

The zip file that you’ll download has the ControlUp Console in it, so there’s no need to install anything at first. Just start the Console, create a new account (if you don’t already have one), and define an environment name. Make up an organization name and you’re done with the first part (yes, it really is that easy).

Next up is creating folders and adding computers. If you want, you can add your vCenter server and EUC environments. With this you’re set to get started on your script. Add at least the computer you are running the console from so you have something to work with.

Script Actions

Adding scripts

So, how do we use script actions? First, you’ll need to add some to your local library by downloading them or creating your own. Start with clicking the Script actions button.

Once you’ve done that, you can download existing scripts from our repository by clicking Add Script when you find something that interests you. These scripts will be available under the Organizational Scripts tab. From the My Draft Scripts tab you can create your own scripts (and, not going to lie: it’s really fun).

Editing Scripts

When you create or edit a script, you will be served a page where you need to define a name and description. If you want to submit scripts to the ControlUp Script Library, be sure to choose these carefully.

Next, you can select what you want to run this script against; the options include Computer, Host, Session, Process, and scores of others. Select Advanced if you want to run it against multiple types of metrics. What I mean by “running against” will be explained in the Using Scripts chapter.

Execution context is where the script will run. This can be the machine where the console is running, the target machine, or some other machine. A great option would be a machine that’s available in the console that has some PowerShell modules installed. Security context is the account under which the script will run. If we’re talking about a target machine, the script will run under the account that is used for the agent (most likely Network Service). If you want to create a specific account to run your scripts under, you’ll need to add a monitor and create a shared credential; both will be explained in the chapter on bonus points.

Now we’re cooking with gas! The next screen is where you actually create the script; I would recommend using your favorite editor and copy/pasting it here. Under “script type” you can select cmd/VBS/PowerShell as the type of script you want to run. If you want to use metrics from the Console as arguments in your PowerShell scripts, you can define them like this:

[string]$HVConnectionServerFQDN = $args[0]
# username
[string]$HVUsername = $args[1]
# Name of the Desktop pool
[string]$HVDesktopPoolname = $args[2]
# The message as a string
[string]$HVMessage = $args[3]
# Message severity
[string]$HVMessageSeverity = $args[4]

In the end it might look like this:

As always, I recommend documenting your solution in the script itself so it’s always visible. This applies even more for the scripting contest since it’s quite handy to know what solution you wanted to tackle.

On the last screen you can add arguments to be used in the script. These can come from metrics that we provide in the grid or from manual input by the user, with or without a default. The choice that you made on the second screen determines the available metrics that you can select from the grid here.

When you’re done, hit Finish to be able to use the script.

Using scripts

To use a script, right-click an object that you want to run the script against, optionally filter on the script name in the search box, and select the script.

Depending on the arguments you’ve defined, you might see a box that prompts you to enter the details that were configured as arguments (or you’ll directly get the results of the script).

Possible bonus points when using Automated Actions

By deploying a so-called monitor from ControlUp it’s possible to add some very interesting functionality. To make things a little more challenging (it’s actually very easy to set up a monitor), I’ll let you find out for yourself how to do it.

Triggers are, in essence, tasks that get started when a metric goes over or under a defined threshold. These tasks can be sending emails, recording event logs, or playing a sound, but they can also run an action, aka a script. If you explain in your script how to use it as an automated action to solve a problem, you might just get some bonus points for it. Popular examples are: paging out all active memory to the swap file when a user disconnects to free up memory, or limiting CPU usage for certain processes if they’ve gone haywire.

Submitting your entry

Just before the event there will be a breakout session with details on how you can submit your entry.

Judging

The judges will be announced during the event. Due to this being a virtual event, they can’t be bribed with booze, beer, stroopwafels (yum!) or other good food, because we can’t live on good promises.

Prizes

The winner will receive a prize, an interview with ControlUp, and will be able to present their script as the next ControlUp Script of the Month (yes, there WILL be a blog)! Maybe, just maybe, we might even throw something in for all contestants. Stay tuned!

Using the Horizon 8 swagger page

A couple of weeks back, when Horizon 8 was released, they also made us happy with a Swagger page to browse the REST API methods. One thing it lacks, though, is a way to easily authenticate so you can actually try them. There is an Authorize button, but I couldn’t find any information on what it actually needs. While creating my previous blog post I was messing around and actually found a way to authenticate. First I tried to authenticate using the actual API method for that, but any call afterwards still showed me as not authenticated. You can copy/paste the access token though, and you’ll see in the script how that works, or check the third screenshot.

The Swagger page can be found like this: https://loftcbr01.loft.lab/rest/swagger-ui.html

Let’s have a look at the script.

$url = read-host "url for connectionserver"

$username=read-host "Username"
$domain=Read-host "Domain"
$password=read-host "Password" -AsSecureString

$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($password) 
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

function Get-HRHeader(){
    param($accessToken)
    return @{
        'Authorization' = 'Bearer ' + $($accessToken.access_token)
        'Content-Type' = "application/json"
    }
}
function Open-HRConnection(){
    param(
        [string] $username,
        [string] $password,
        [string] $domain,
        [string] $url
    )

    $Credentials = New-Object psobject -Property @{
        username = $username
        password = $password
        domain = $domain
    }

    return invoke-restmethod -Method Post -uri "$url/rest/login" -ContentType "application/json" -Body ($Credentials | ConvertTo-Json)
}

function Close-HRConnection(){
    param(
        $accessToken,
        $url
    )
    return Invoke-RestMethod -Method post -uri "$url/rest/logout" -ContentType "application/json" -Body ($accessToken | ConvertTo-Json)
}
try{
$accessToken = Open-HRConnection -username $username -password $UnsecurePassword -domain $Domain -url $url
Set-Clipboard (Get-HRHeader -accessToken $accessToken).Authorization
}
catch{
    write-host "Error while authenticating"
}

To make it directly usable I have chosen to ask for the web address of the connection server, the username, the domain and the password, and at the end I copy the token you need to the clipboard. Let’s have a look at it.

There is no further output, but I can now paste what is in the clipboard into the Authorize field on the Swagger page, hit Authorize and close.

And now I can try API calls, pulling machines from the inventory for example.
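For example, something like this from PowerShell (a sketch; I am assuming the inventory machines endpoint of the Horizon 8 REST API here, and Get-HRHeader comes from the script above):

# List the machines through the REST API using the token from the login call
Invoke-RestMethod -Method Get -Uri "$url/rest/inventory/v1/machines" -Headers (Get-HRHeader -accessToken $accessToken)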

So that’s how we can actually use the Swagger page to try API calls.