[HorizonAPI] Disabling Provisioning and/or disabling entire Desktop Pools and RDS Farms

Today I saw a question on the VMware{Code} Slack channel asking if anyone had ever managed to disable Desktop Pools using PowerCLI. I was like: yeah, I have done that, and you might need to use the helper service for it. I offered to create a quick blog post about it, so here we go.

First, as always, I connect to my Connection Server and use a query to retrieve the pool that I am going to disable.

# Connect to the Connection Server and expose the API service object
$creds=Import-Clixml creds.xml
$hvserver=Connect-HVServer pod1cbr1.loft.lab -Credential $creds
$hvservice=$hvserver.ExtensionData

# Query for the desktop pool by name
$poolqueryservice=New-Object VMware.Hv.QueryServiceService
$pooldefn = New-Object VMware.Hv.QueryDefinition
$filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'desktopSummaryData.name'; 'value' = "Pod01_Pool01" }
$pooldefn.filter=$filter
$pooldefn.queryentitytype='DesktopSummaryView'
$pool = ($poolqueryservice.QueryService_Create($hvservice, $pooldefn)).results

With this object I can show you the details of the desktop pool

($hvservice.Desktop.Desktop_Get($pool.id)).base
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

As I said, to actually change things I need the helper service, so this is how to initialize it:

$desktopservice=New-Object VMware.Hv.DesktopService
$desktophelper=$desktopservice.read($hvservice, $pool.id)
$desktophelper.getdesktopsettingshelper() | gm

As we saw in the second screenshot, I need the desktop settings and then Enabled:

$desktophelper.getdesktopsettingshelper().getenabled()

To change the setting in the helper I need to use setEnabled($False):

$desktophelper.getdesktopsettingshelper().setEnabled($False)

Now this has not been changed on the desktop pool itself yet; to do that we need to use desktopservice.update. I also show the result of the change:

$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

And to reverse this

$desktophelper.getdesktopsettingshelper().setEnabled($True)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).desktopsettings

Disabling provisioning uses the same methodology, just in another spot.

To disable provisioning (the | gm is not needed, it's just there to show you what's in there):

($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper() | gm
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().getenableprovisioning()
$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().setenableprovisioning($False)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings

And to revert it

$desktophelper.getAutomatedDesktopDataHelper().getVirtualCenterProvisioningSettingsHelper().setenableprovisioning($True)
$desktopservice.update($hvservice, $desktophelper)
($hvservice.Desktop.Desktop_Get($pool.id)).automateddesktopdata.virtualcenterprovisioningsettings

For RDSH farms the process is similar; only some of the naming is different. First, to get the farm object:

$farmqueryservice=new-object vmware.hv.queryserviceservice
$farmdefn = New-Object VMware.Hv.QueryDefinition
$filter = New-Object VMware.Hv.QueryFilterEquals -Property @{ 'memberName' = 'data.name'; 'value' = "Pod01-Farm01" }
$farmdefn.filter=$filter
$farmdefn.queryentitytype='FarmSummaryView'
$farm = ($farmqueryservice.QueryService_Create($hvservice, $farmdefn)).results
($hvservice.Farm.farm_get($farm.id)).data

And to create the helper and disable the farm

$farmservice=New-Object VMware.Hv.FarmService
$farmhelper=$farmservice.read($hvservice,$farm.id)
$farmhelper.getDataHelper().setenabled($False)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).data

And in reverse 🙂

$farmhelper.getDataHelper().setenabled($True)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).data

And now the provisioning part

($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings
$farmhelper.getAutomatedFarmDataHelper().getvirtualcenterprovisioningsettingshelper().setenableprovisioning($False)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings

Guess what?

$farmhelper.getAutomatedFarmDataHelper().getvirtualcenterprovisioningsettingshelper().setenableprovisioning($True)
$farmservice.update($hvservice,$farmhelper)
($hvservice.Farm.farm_get($farm.id)).automatedfarmdata.virtualcenterprovisioningsettings

[HorizonAPI] Using the Datastore service (incl. sizing calculation!)

I was looking on my blog for information about using the datastore service in the Horizon APIs but couldn't find it, so here's a post on that.

This post uses the SOAP APIs; next time I'll see what we can do with the REST API.

First I will make a connection like I always do.
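
For reference, this is the same connection snippet as used in the first post:

$creds=Import-Clixml creds.xml
$hvserver=Connect-HVServer pod1cbr1.loft.lab -Credential $creds
$hvservice=$hvserver.ExtensionData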

Now let’s see what methods are available under the Datastore service
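
A quick way to list them (Get-Member is standard PowerShell; $hvservice.Datastore is the same service object used throughout this post):

$hvservice.Datastore | Get-Member -MemberType Method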

Let's start with the three easy ones first, aka the bottom ones.

Datastore_ListDatastoresByHostOrCluster

The name says enough: with Datastore_ListDatastoresByHostOrCluster you are able to list datastores using the HostOrClusterId.

I am cutting some corners here on how to find this out, but to get this HostOrClusterId we need the DatacenterId, and to get that we'll need the VirtualCenterId.

To get all the vCenter servers in a pod you need to use VirtualCenter_List(), and what I do in this example is list them first and then put the first vCenter in a variable.

$hvservice.VirtualCenter.VirtualCenter_List()
$VC=$hvservice.VirtualCenter.VirtualCenter_List() | Select-Object -first 1

And the same for the datacenter, using the VirtualCenterId:

$hvservice.Datacenter.Datacenter_List($vc.id)
$DC=$hvservice.Datacenter.Datacenter_List($vc.id) | Select-Object -first 1

With the DatacenterId I'll retrieve the info under HostOrCluster and store it in a variable.

$hvservice.HostOrCluster.HostOrCluster_GetHostOrClusterTree($dc.id)
$tree=$hvservice.HostOrCluster.HostOrCluster_GetHostOrClusterTree($dc.id)

Let’s browse this object and see what we can find
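
A minimal sketch of that browsing, using the same property path as the snippet below:

$tree.TreeContainer
$tree.TreeContainer.Children.info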

We can clearly see the name here, and as I need Cluster_Pod2 I am putting that one in a variable:

$pod2cluster=$tree.TreeContainer.Children.info | select-object -last 1
$pod2cluster

And with this object I can get to my datastores, and again I store them in a variable:

$hvservice.Datastore.Datastore_ListDatastoresByHostOrCluster($pod2cluster.id)
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByHostOrCluster($pod2cluster.id)

Let’s see what’s in there

So we see most of the basic info that we might need in here, including name, capacity and free space. I am not sure why numberOfVMs is empty, as all of them have VMs.

Datastore_ListDatastoresByDesktopOrFarm

Let’s see what we need for this one

So an object of the type VMware.Hv.DatastoreSpec is needed; let's define the object and see what's in it.
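
Something like this, with New-Object as the standard way to create the spec (the empty spec itself was originally shown in a screenshot):

$spec = New-Object VMware.Hv.DatastoreSpec
$spec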

As I am not 100% sure whether all of them are required and what might break, I'll have a look at the API Explorer article for this.

So it requires either a DesktopId OR a FarmId; you can provide the HostOrClusterId, but it will be populated automatically if you don't.

I am not going to build the query here to get a desktop pool, so I'll just use Get-HVPool and Get-HVFarm from the VMware.Hv.Helper PowerShell module.
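
For example (the pool and farm names are the ones used earlier in this lab, adjust to your own):

$pool = Get-HVPool -PoolName "Pod01_Pool01"
$farm = Get-HVFarm -FarmName "Pod01-Farm01"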

Next I put the $pool.id in the spec and get the details

$spec.DesktopId=$pool.id
$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores.datastoredata

So this lists all the datastores that I have available in this cluster. I know this 100% for sure, as the ISO datastore is a read-only datastore that doesn't have any desktops.

Let’s do the same using the farmId

$spec.DesktopId=$null
$spec.FarmId=$farm.id
$datastores=$hvservice.Datastore.Datastore_ListDatastoresByDesktopOrFarm($spec)
$datastores

The same number of datastores, so the same result.

Datastore_ListDatastoreClustersByHostOrCluster

As I don't have any datastore clusters in my lab I cannot show it, but you'll need the same HostOrClusterId as we used for Datastore_ListDatastoresByHostOrCluster.
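
I can't verify it without datastore clusters, but the call should look like this, reusing the HostOrClusterId from earlier:

$hvservice.Datastore.Datastore_ListDatastoreClustersByHostOrCluster($pod2cluster.id)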

Datastore_GetUsage

This method shows what desktop pools are using a particular datastore. When doing a dry run it shows that a DatastoreId is needed.

I will use one of the items that I still have stored in my $datastores variable

$datastore=$datastores |Select-Object -last 1
$datastore.DatastoreData
$hvservice.Datastore.Datastore_GetUsage($datastore.id)

So this is a rather boring datastore, as it only has one pool configured to use it (and it doesn't even have any VMs from this pool on it), but you'll see that there is another datastore configured for this pool as well. I do have a more heavily used datastore on a local NVMe drive though.

$datastore=$datastores | where {$_.datastoredata.name -like "*nvme*"}
$hvservice.Datastore.Datastore_GetUsage($datastore.id)

As you can see it shows the desktop pools and even the single farm I have that use this datastore each with their own disk usage.

Datastore_GetDatastoreRequirements

The Datastore_GetDatastoreRequirements method does a calculation of what disk space might be needed for a desktop pool.

So let’s see what we need

$reqspec=new-object VMware.Hv.DatastoreRequirementSpec
$reqspec

That's a lot, and as a screenshot wouldn't fit, here is the link to the API Explorer page on it: here

To fill these things I will use the $pool variable that I still have stored.

$reqspec.DesktopId=$pool.id
$reqspec.Source="INSTANT_CLONE_ENGINE"
$reqspec.VmId=$pool.AutomatedDesktopData.VirtualCenterProvisioningSettings.VirtualCenterProvisioningData.parentvm
$reqspec.SnapshotId=$pool.AutomatedDesktopData.VirtualCenterProvisioningSettings.VirtualCenterProvisioningData.Snapshot
$reqspec.PoolSize=30
$hvservice.Datastore.Datastore_GetDatastoreRequirements($reqspec)

And when I change the pool size:
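
A sketch of that, where 60 is just an arbitrary bigger pool size:

$reqspec.PoolSize=60 # arbitrary example value
$hvservice.Datastore.Datastore_GetDatastoreRequirements($reqspec)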

Calling the EMEA #vCommunity for a Monthly Virtual #vBreakfast EMEA!

For years Fred Hofer has been organizing the vBreakfast for VMworld EMEA across the street from the Fira in a very nice place. A big thank you to Fred for doing this every year!! Due to the mess this world is currently in we couldn't do it in person this year, so we moved it virtual. Big thanks to Runecast for organizing the virtual space we could use, but sadly we had to do it without the vWorldfamous Grumpy Waiter. I really liked the way it was set up, with separate tables and everything. The only issue was that it seemed to use whatever microphone, speaker and webcam were configured in Chrome, and switching was almost impossible.

During this meeting we came up with the idea to organize a monthly virtual vBreakfast for EMEA. Kev Johnson offered to do it again on the site that Runecast used, but I think it would be more flexible if we used something like Zoom. Luckily I can use a paid license, so it's no problem to make this a two-hour, come-and-go-whenever-you-like session. The talk can be tech, it can be non-tech, whatever you like!

When?

Every last Friday of the month (yes, we'll move the December one!)

What time?

7.30am UK time to 9.00 (8.30 Amsterdam to 10.00, 9.30 Israel time to 11.00, you get what I am saying)

How do I attend?

Drop me an email at vbreakfastemea at gmail.com and I will forward you the invite

But I don't live in EMEA, can I still attend?

Hell yeah, I don’t give a flying F..K where you are from, everyone is welcome and the more the better!

I don’t use VMware but Nutanix or Hyper-V or Citrix?

Didn’t I just say everyone is welcome? Yes I also don’t care what hypervisor or EUC platform you use!

What do you talk about?

The question is: what do we NOT talk about? Sometimes it's tech, sometimes it's the upcoming weekend, or how to become a vExpert, etc. etc.

I work for company XYZ and want to sponsor this!

I think the only way of sponsoring that could work is if you organize a real breakfast for everyone that signs up. If you are prepared to do that, we won't offer more in return than a mention and a thank you, as this is an organized unstructured meeting to meet up with friends.

My Golden Image build using HashiCorp Packer

After a tweet last week by former colleague and fellow vExpert Jeroen Buren, my reaction to it, and another question that we got, I decided to finally make some time to document how my Packer Golden Image build works. To be honest, I don't think it's anything spectacular, and most of it has been borrowed from either Mark Brookfield or someone else whose name I forgot, sorry for that! (If you recognize your work, send me a note and I'll update this piece.) While my templates aren't really complicated, I am happy with them and they are exactly what I need in my lab. Things can definitely be done better, but it's enough for me.

I use two main files: one with the generic settings for the type of image, and one with the variables for the vCenter where it will be created. The latter looks like this:

{
    "vm_name":"W10-p2-{{isotime \"2006-01-02-15-04\"}}",
    "vcenter_server":"pod1vcr1.loft.lab",
    "username":"administrator@vsphere.local",
    "password":"hahahahanope!",
    "datastore":"NVME1TB (1)",
    "datastore_iso":"ISO",
    "cluster": "Cluster_Pod2",
    "network": "dpg_loft_102",
    "winrm_username": "Administrator",
    "winrm_password": "VMware1!"
}

The VM name is W10-p2-dateandtime: the isotime function with that reference time format makes sure I get the date and time of the moment the script runs (for example, a build started on 30 October 2020 at 09:15 becomes W10-p2-2020-10-30-09-15). For more information see this page: https://www.packer.io/guides/workflow-tips-and-tricks/isotime-template-function. I have separate datastores for ISOs and for where the VM will be created, while the port group is on a dvSwitch.

The 2nd file is slightly more complicated:

{
    "builders": [
    {
        "type": "vsphere-iso",
        "vcenter_server":      "{{user `vcenter_server`}}",
        "username":            "{{user `username`}}",
        "password":            "{{user `password`}}",
        "insecure_connection": "true",
 
        "vm_name": "{{user `vm_name`}}",
        "datastore": "{{user `datastore`}}",
    "Notes": "Windows 10 1909 Instant Clone Image build using Packer {{isotime \"2006-01-02-15-04\"}}",
        "create_snapshot": true,
        "cluster": "{{user `cluster`}}",
        "network": "{{user `network`}}",
        "boot_order": "disk,cdrom",
 
        "vm_version":       15,  
        "guest_os_type": "windows9_64Guest",
    "firmware":	"bios",
 
        "communicator": "winrm",
        "winrm_username": "{{user `winrm_username`}}",
        "winrm_password": "{{user `winrm_password`}}",
    "winrm_timeout": "5h",
 
        "CPUs":             2,
        "RAM":              6064,
        "RAM_reserve_all":  false,
    "video_ram": 128000,
    
    "remove_cdrom": true,
 
        "disk_controller_type":  "pvscsi",
        "disk_size":        51200,
        "disk_thin_provisioned": true,
    
    "configuration_parameters": {
      "svga.autodetect" : "FALSE",
      "svga.numDisplays" : "2"
    },
 
        "network_card": "vmxnet3",
 
        "iso_paths": [
        "[{{user `datastore_iso`}}] Windows_10_1909_enterprise.iso",
        "[{{user `datastore_iso`}}] VMware-Tools-windows-11.0.5-15389592.iso"
        ],
 
        "floppy_files": [
            "{{template_dir}}/setup/"
        ],
        "floppy_img_path": "[{{user `datastore_iso`}}] floppy/pvscsi-Windows8.flp"
    }
    ],
 
    "provisioners": [
    {
            "type": "windows-shell",
      "script": "{{template_dir}}/setup/onedrive.cmd"
        },
    {
      "type": "windows-update",
      "search_criteria": "IsInstalled=0",
      "filters": [
        "exclude:$_.Title -like '*Preview*'",
        "include:$true"
      ],
      "update_limit": 25
    },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
        {
            "type": "powershell",
            "inline": [
        "Set-TimeZone -Id 'W. Europe Standard Time'",
                "Get-AppXPackage -AllUsers | Where {($_.name -notlike \"Photos\") -and ($_.Name -notlike \"Calculator\") -and ($_.Name -notlike \"Store\")} | Remove-AppXPackage -ErrorAction SilentlyContinue",
                "Get-AppXProvisionedPackage -Online | Where {($_.DisplayName -notlike \"Photos\") -and ($_.DisplayName -notlike \"Calculator\") -and ($_.DisplayName -notlike \"Store\")} | Remove-AppXProvisionedPackage -Online -ErrorAction SilentlyContinue"     
            ]
        },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
        {
            "type": "powershell",
            "scripts": [
                "{{template_dir}}/setup/Horizon_Agent_IC.ps1"
                "{{template_dir}}/setup/appvolumes.ps1",
                "{{template_dir}}/setup/dem.ps1",
        "{{template_dir}}/setup/fslogix.ps1",
                ,
        "{{template_dir}}/setup/CU.ps1"
            ]
        },
    {
      "type": "windows-restart",
      "restart_timeout": "15m",
      "restart_check_command": "powershell -command \"& {Write-Output 'restarted.'}\""
    },
    {
            "type": "powershell",
            "scripts": [
        "{{template_dir}}/setup/osot.ps1"
            ]
        }
    ]
}

Some specifics: to mark my golden images I always create a note with the type and build date, again using isotime:

"Notes": "Windows 10 1909 Instant Clone Image build using Packer {{isotime \"2006-01-02-15-04\"}}",

And as I am very lazy, I also have it create a snapshot for me:

"create_snapshot": true,

These make sure I have more than the default amount of RAM for the built-in graphics adapter:

"RAM":              6064,
"RAM_reserve_all":  false,
"video_ram": 128000,


"configuration_parameters": {
  "svga.autodetect" : "FALSE",
  "svga.numDisplays" : "2"
},

Some versions of Packer had an issue with ejecting the CD-ROMs, but that has been fixed now:

"remove_cdrom": true,

There are several optimizations that take place, like the App Volumes-related OneDrive script at the beginning (onedrive.cmd) and the VMware OS Optimization Tool at the end (osot.ps1).

All the agents are shared from a web server, and this is one of the ps1 scripts that starts an installation, the Horizon Agent in this case:

$ErrorActionPreference = "Stop"
 
$webserver = "loftfls01.loft.lab"
$url = "http://" + $webserver
$installer = "VMware-Horizon-Agent-x86_64-8.0.0-16530789.exe"
$listConfig = "/s /v ""/qn VDM_VC_MANAGED_AGENT=1 ADDLOCAL=Core,ClientDriveRedirection,RTAV,TSMMR,VmwVaudio,USB,NGVC,PerfTracker,HelpDesk"""
 
# Verify connectivity
Test-Connection $webserver -Count 1
 
# Get Horizon Agent
Invoke-WebRequest -Uri ($url + "/" + $installer) -OutFile C:\$installer
 
# Unblock installer
Unblock-File C:\$installer -Confirm:$false -ErrorAction Stop
 
# Install Horizon Agent
Try 
{
   Start-Process C:\$installer -ArgumentList $listConfig -PassThru -Wait -ErrorAction Stop
}
Catch
{
   Write-Error "Failed to install the Horizon Agent"
   Write-Error $_.Exception
   Exit -1 
}
 
# Cleanup on aisle 4...
Remove-Item C:\$installer -Confirm:$false

And the osot.ps1 looks like this:

$ErrorActionPreference = "Stop"
 
$webserver = "loftfls01.loft.lab"
$url = "http://" + $webserver
$osot = "VMwareOSOptimizationTool.exe"
$osotConfig = "VMwareOSOptimizationTool.exe.config"
 
# Verify connectivity
Test-Connection $webserver -Count 1
 
# Get Files
ForEach ($file in $osot,$osotConfig) {
   Invoke-WebRequest -Uri ($url + "/" + $file) -OutFile C:\$file
}
 
# Run OSOT
C:\VMwareOSOptimizationTool.exe -o -t "VMware Templates\Windows 10 and Server 2016 or later" -f all
 
# Sleep before cleanup
Start-Sleep -Seconds 180
 
# Cleanup on aisle 4...
ForEach ($file in $osot,$osotConfig) {
   Remove-Item C:\$file -Confirm:$false
}

I have even created a simple PowerShell script that starts the build with a couple of extra options: -timestamp-ui to show the timestamp, while the -force isn't needed anymore as each build has its own name, but I keep it in there.

[CmdletBinding()]
param(
[Parameter(Mandatory)]
[string]$environmentfile,
[Parameter(Mandatory)]
[string]$buildfile
)

c:\software\packer\packer.exe build -force -timestamp-ui -var-file $environmentfile $buildfile
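
Saved as, say, startbuild.ps1 (the file names here are hypothetical examples), a build is then started like this:

.\startbuild.ps1 -environmentfile .\pod1.json -buildfile .\w10.json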

So how does this look?

I understand that this is far from a full explanation of all the options in the JSON files, but I think most things are rather generic, apart from the few things I have highlighted.

Total running time in my lab highly depends on which host I use (core speed) and which ISO is used, as I also install Windows updates. The Server 2019 ISO updated in September 2020 takes 40 minutes, while Windows 10 1909 without extra patches takes just over an hour.

Jon Howe also did a nice write-up with some more explanation: https://www.virtjunkie.com/vmware-template-packer/#Packer_Template_File_User_Variables

Adding Nutanix CE CVM nodes & Prism Central VMs to ControlUp

Take note: at the moment of writing I am preparing to upgrade my Nutanix Community Edition to the latest version, so I am still at version 20191030.415 of AHV. Also be aware that I can't make any promises about whether what I am doing is supported on your production Nutanix systems, but as long as you don't install any extra packages it's just an SSH session that pulls some data.

Today I had a call with Samuel Legrand and a potential ControlUp customer where they had some questions about Nutanix CVM resource usage. Because of that I tried to add the CVM of the Nutanix Community Edition in my lab, and that resulted in this tweet:

So yes, it's possible to add the CVM as a monitored Linux machine in ControlUp, and it's rather easy to do. Let me show you what to do. To make sure that I don't mess with any CentOS packages, I disable the installation of missing packages on Linux machines in the ControlUp console.

Next up is defining a (shared) credential. I chose a shared credential so I get to see the data in Insights as well. You do this under Monitor Settings > Domain Identity.

Click Add Credentials, use .\nutanix as the username (so it's seen as a local account and not a domain account), fill in the password, and make sure the friendly name makes clear what it is.

Next up we’ll actually add the machines by configuring a Linux Data Collector. Click Linux Data Collector in the ribbon, make up a name and select the credential you just created.

After this, click Add and use an IP range to scan or work with single IPs, press Scan, click Add for the correct systems, and hit OK.

It is strongly advisable to select a dedicated data collector instead of the Console/Monitor; this can be any system that has the ControlUp agent installed. In the Linux Data Collector screen, click the arrow for the Connector Options, select ControlUp Console / Monitor, and click Remove.

Now click Add, select the machine you want to use as data collector, and click OK twice.

The machines will now be added to the bottom of the tree

Before the next screenshot I moved them into a folder, but you can clearly see that things like OS and memory utilization are working. CPU is not, and when we tried it with the customer's production systems they actually gave an error; this is most probably caused by the fact that not all required RPMs are installed. It's better than nothing though!

When drilling down on the machines to the processes, it's clear which processes are the biggest consumers on the CVM.

And for Prism Central CE

So with this we could monitor the Nutanix pieces and create triggers for if or when one of the processes would become unavailable or when the entire machine goes down.

The VMware Labs flings monthly for September 2020

[Disclaimer] any typos in this blog post are Brian Madden's fault, since I am listening to his VMworld session with Carl Webster [/disclaimer]

Yes, this blog post is really being created during that session. Hopefully you were also able to join VMworld, because even though I hate virtual events and miss the live interaction with people, I am still kind of liking the event. This month 4 new flings have been released, while another 4 received updates.

New

vRealize Network Insight and HCX Integration

Python Utility for VMware Cloud Foundation Management

VMCA Certificate Generator

SQL30 – An ORM for SQLITE on ESX

Updates

Supernova – Accelerating Machine Learning Inference

VMware Machine Learning Platform

Horizon Session Recording

App Volumes Entitlement Sync

New Flings

vRealize Network Insight and HCX Integration

This integration script between vRealize Network Insight (vRNI) and VMware HCX allows you to streamline the application migration process.

First, use the application discovery methods within vRNI to discover the application boundaries, including the VMs (or other workloads), to form application constructs within vRNI. This integration script then synchronizes the vRNI application constructs into HCX Mobility Groups, saving you the time that it would’ve taken to do this manually. After the sync, you can pick up the migration process and execute the migration.

Python Utility for VMware Cloud Foundation Management

Python Utility for VMware Cloud Foundation Management is a lightweight python-based command line VCF administration tool. It offers an interactive shell interface to interact with the VMware Cloud Foundation (SDDC Manager) public API.

After the utility is installed, a vcfkite binary is available on the machine, which can be used to launch the interactive shell interface to interact with SDDC Manager (using the SDDC Manager public APIs).

This utility is platform independent (tested on Linux CentOS/Windows W2K12) and is distributed as a Python wheel package which can be easily installed using Python's native pip3 command (for detailed information check the Instructions and Requirements sections).

The purpose of this utility is to make the VMware Cloud Foundation APIs more accessible to users/customers and to work towards the adoption of the VMware Cloud Foundation API & VMware Cloud Foundation in general.

VMCA Certificate Generator

The VMware Certificate Authority (VMCA) Certificate Generator gives you the ability to simply retrieve certificates signed by the VMware Certificate Authority (VMCA) running on vCenter / PSC.

This can be useful when you don't have access to a company-wide Certificate Authority (e.g. a small business or a lab), but you still want valid certificates for your services.
The certificates can be used for other VMware products like the vRealize Suite and NSX, as well as for 3rd-party services.
Once you trust the VMCA root certificate (retrieved via the vCenter URL or through this tool), you trust all services with the new certificates.

The validity of the certificates is not changeable and depends on the vCenter version. With vCenter 7.0 you’ll get certificates valid for 2 years.

SQL30 – An ORM for SQLITE on ESX

Persistence is an integral part of any real-world software application. A widely used method of achieving (file based) persistence is through a SQLITE database. Python natively provides support for interacting with SQLITE via the module "sqlite" (sqlite3 in Python 3). SQLAlchemy is a popular ORM used for interacting with a sqlite database in Python. However, this and many other similar ORMs do not work on the ESX hypervisor as is, because these packages depend on other packages which are not supported / installable on the ESX hypervisor.

In this fling, we present SQL30, a ZERO weight ORM for the SQLITE database written using only native Python constructs. This Python package has no dependency on any external Python package and works as is on ESX version 6.5 and above.

SQL30 is useful because:

  • It helps developers achieve persistence in Python applications even on platforms such as ESX which have limited / trailing support for Python.
  • It improves productivity, as developers can write (SQL based) database applications without having to learn SQL itself.

Updated Flings

Supernova – Accelerating Machine Learning Inference

The Supernova – Accelerating Machine Learning Inference fling is made to enable machine learning on the edge.

Changelog

Version 1.0 Update

Compared to 0.0.1, this release supports:

  1. New HW accelerators – AMD GPU + Xilinx FPGA
  2. CPU Accelerations with OpenVINO
  3. Basic K8S deployment
  4. Versatile APIs
  5. vSphere Bitfusion support
  6. More use cases like facial mask

VMware Machine Learning Platform

The VMware Machine Learning Platform helps Data Scientists perform their magic ML workloads on VMware infrastructure.

Changelog

Version 0.4.0 Update

  • Federated Data and Model Manager (FDMM)
  • Docs Service accessible at /vmlp-docs/
  • Better LDAP deployment configuration
  • Image “prepull” tool
  • Added BitFusion 2.0 support to Keras/Pytorch examples
  • Lots of bug fixes

Includes contributions from: Jiahao “Luke” Chen (Federated Data and Model Manager)

Horizon Session Recording

The Horizon Session Recording fling can assist in troubleshooting the desktop by not having the user repeat their steps time after time.

Changelog

Version 2.1.50 Update

  • Fixed multiple deadlocks in the agent recording.
  • Added support for Horizon 8 (2006).

App Volumes Entitlement Sync

The App Volumes Entitlement Sync fling is a great help when you have multiple App Volumes environments, to make sure everyone gets the same apps everywhere.

Changelog

Version 4.3 Update

  • Fixes Typo in Secondary Site Server Label
  • No longer shows non-replicated applications as errors in the Fix Apps dialog. Much easier to read what will be fixed.