The VMware Labs flings monthly for February 2021 – Reach alert!

It’s been a busy month on the flings front, no less than 17(!!) new releases and updated flings. This is how my browser tabs look:

If I have them all correct there are 6 new releases and 11 updates (2 of which were updated without a changelog, so a boo for that!), so this post is going to be a long one!

New Releases

Community NVMe Driver for ESXi

VMware Cloud Foundation Powernova

Workspace ONE Access Migration Tool

Sample Data Platform Deployment on Virtualized Cloud Infrastructure

Community Networking Driver for ESXi

Code Stream Concourse Integrator

Updates

ESXi Compatibility Checker

VMware Machine Learning Platform

Virtualized High Performance Computing Toolkit

Horizon Peripherals Intelligence

Workspace ONE App Analyzer for macOS

VMware OS Optimization Tool

Horizon Helpdesk Utility

HCIBench

Horizon Reach

Workspace ONE Discovery

App Volumes Migration Utility

New Releases

[sta_anchor id=”cndfe” /]

Community NVMe Driver for ESXi

This Fling is a collection of ESXi Native Drivers which enables ESXi to recognize and consume various NVMe-based storage devices. These devices are not officially on the VMware HCL and have been developed to enable and support the VMware Community.

Currently, this Fling provides an emulated NVMe driver for the Apple 2018 Intel Mac Mini 8,1 and the Apple 2019 Intel Mac Pro 7,1 allowing customers to use the local NVMe SSD with ESXi. This driver is packaged up as an Offline Bundle and is only activated when it detects ESXi has been installed on either an Apple Mac Mini or Apple Mac Pro.

[sta_anchor id=”vcfp” /]

VMware Cloud Foundation Powernova

VMware Cloud Foundation Powernova is a Fling built on top of VCF that provides the users the ability to perform Power Operations (Power ON, Power OFF) seamlessly across the entire inventory. It has a sleek UI to visualize the entire VCF inventory (which is the first of its kind for VCF) across the domains of VCF.

The UI is easy to use and shows the current Health and Power State of each node in the VCF inventory. Powernova lets the user perform Power Operations on the components, with domain-specific interdependencies automatically resolved.

Powernova also performs valid health checks on all nodes in the VCF inventory to ensure Power Operations are performed only on healthy nodes. Powernova takes minimal input (4 user defined inputs on their VCF system) and does all the magic for the users behind the scenes.

If any infrastructure maintenance activity, VCF migration activity, or power operations need to be performed only on specific domains in VCF, then Powernova is the one stop solution for all VCF users.

[sta_anchor id=”wsoamt” /]

Workspace ONE Access Migration Tool

Workspace ONE Access Migration Tool helps ease migration of apps from one WS1 Access tenant to another (on-premises to SaaS or SaaS to SaaS) and supports use cases that require mirroring one tenant to another (for setting up UAT from PROD or vice versa) by providing the capabilities listed below.

Features
  • Copying of App Categories
  • Migrating Weblinks (3rd party IDP), icons as is
  • Creating a link to federated apps and copying the icons (to maintain the same user experience)
  • Copying App Assignment to a Category mapping

[sta_anchor id=”sdpdovci” /]

Sample Data Platform Deployment on Virtualized Cloud Infrastructure

Data is king and your users need a sample data platform quickly.

With this Fling, you will leverage your VMware Cloud Foundation 4.0 with vRealize Automation deployment and stand up a sample data platform based on vSphere Virtual Machines in less than 20 minutes, comprising Kafka, Spark, Solr, and ELK.

You can also choose whether to deploy a Wavefront proxy and configure the components to send data to it, or to use your own proxy.

[sta_anchor id=”cndfe” /]

Community Networking Driver for ESXi

This Fling is a collection of ESXi Native Drivers which enables ESXi to recognize and consume various PCIe-based network adapters (See Requirements for details). These devices are not officially on the VMware HCL and have been developed to enable and support the VMware Community.

[sta_anchor id=”csci” /]

Code Stream Concourse Integrator

The Code Stream Concourse Integrator (CSCI) Fling provides integration between vRealize Automation Code Stream and Concourse CI, with which users can trigger Concourse CI pipelines from Code Stream pipelines without any additional tooling or scripting. This enables users to use the features from both tools flexibly and seamlessly as per their needs. This solution is built using Code Stream’s extensibility feature named Custom Integration.

Updates

[sta_anchor id=”ecc” /]

ESXi Compatibility Checker

The ESXi Compatibility Checker helps vSphere admins check whether their environment will work with later versions of ESXi. [non-sponsored advertisement]Also check Runecast, they can run a simulation for you as well.[/non-sponsored advertisement]

Changelog

Build 20210219

  • Fix for ESX / VC 7.0 U1 Versioning issues
  • A new logo 😉

[sta_anchor id=”vmlp” /]

VMware Machine Learning Platform

Our goal is to provide an end-to-end ML platform for Data Scientists to perform their job more effectively by running ML workloads on top of VMware infrastructure.

Using vMLP allows you to:

  • Save costs by enabling efficient use of shared GPUs for ML workloads
  • Reduce the risk of broken Data Science workflows by leveraging well-tested and ready-to-use demos and project templates
  • Get ML models to market faster by utilizing end-to-end tooling, including fast and easy model deployment and serving via a standardized REST API

Changelog

Version 0.4.1

  • Jupyter: R Kernel
  • Jupyter: BitFusion 2.5.0 Demo
  • Jupyter: MADlib/RTS4MADlib on Greenplum Demo
  • Multiple bug fixes

[sta_anchor id=”vhpct” /]

Virtualized High Performance Computing Toolkit

This toolkit is intended to facilitate managing the lifecycle of these special configurations by leveraging vSphere APIs. It also includes features that help vSphere administrators perform some common vSphere tasks that are related to creating such high-performing environments, such as VM cloning, setting Latency Sensitivity, and sizing vCPUs, memory, etc.

Changelog

Nope 🙁

[sta_anchor id=”hpi” /]

Horizon Peripherals Intelligence

Horizon Peripherals Intelligence is an online self-service diagnosis service that can help increase satisfaction for both end users and admins when using peripheral devices with Horizon. Currently, we support diagnosis for the following device categories: USB storage devices, USB printers, USB scanners and cameras. We will continue to cover more device categories in the future.

Changelog

Version 1.0

  • Add support for USB audio devices, speech mics, signature pads and barcode scanners
  • Add support for L10n of the web pages in Simplified Chinese, Traditional Chinese and English
  • Add support for Windows 7 and Windows 2012 R2
  • Add support for 32-bit operating systems
  • Add support for command-line installation

[sta_anchor id=”woaafm” /]

Workspace ONE App Analyzer for macOS

The Workspace ONE macOS App Analyzer will determine any Privacy Permissions, Kernel Extensions, or System Extensions needed by an installed macOS application, and can be used to automatically create profiles in Workspace ONE UEM to whitelist those same settings when deploying apps to managed devices.

Changelog

Version 1.2 

  • Added support for Big Sur
  • Updated icon

[sta_anchor id=”osot” /]

VMware OS Optimization Tool

Image optimize you must with osot!

Changelog

  • nope 🙁

Update: OSOT didn’t receive an update, someone only edited the page according to Hilko.

[sta_anchor id=”hhu” /]

Horizon Helpdesk Utility

Besides ControlUp, the Helpdesk fling is the best tool to help your users.

Changelog

Version 1.5.0.24

  • Added support for Horizon 8.1

[sta_anchor id=”hcibench” /]

HCIBench

HCIBench stands for “Hyper-converged Infrastructure Benchmark”. It’s essentially an automation wrapper around the popular and proven open-source benchmark tools Vdbench and Fio, making it easier to automate testing across an HCI cluster. HCIBench aims to simplify and accelerate customer POC performance testing in a consistent and controlled way. The tool fully automates the end-to-end process of deploying test VMs, coordinating workload runs, aggregating test results, performance analysis and collecting necessary data for troubleshooting purposes.

Changelog

Version 2.5.3

  • Fixed graphite permission issue which blocked vdbench/fio grafana display
  • Updated drop cache script to make it compatible with upcoming vSphere
  • md5sum: 622625cc7a551bd7bf07ff4f19a57a17 HCIBench_2.5.3.ova

[sta_anchor id=”reach” /]

Horizon Reach

Again, if you’re not a ControlUp customer, Reach is the next best thing to manage your Horizon environment.

Changelog

Version 1.3.1.2

  • Added support for Horizon 8.1
  • Bugfixes

[sta_anchor id=”wsod” /]

Workspace ONE Discovery

VMware Workspace ONE UEM is used to manage Windows 10 endpoints, whether it be Certificate Management, Application Deployment or Profile Management. The Discovery Fling enables you to view these from the device point of view and review the Workspace ONE related services, which applications have been successfully deployed, use the granular view to see exactly what has been configured with Profiles, view User & Machine certificates and see which Microsoft Windows Updates have been applied.

Changelog

February 16, 2021 – Version 1.2

  • Replaced icon
  • New logo 🙂

[sta_anchor id=”avmu” /]

App Volumes Migration Utility

App Volumes Migration Utility allows admins to migrate AppStacks managed by VMware App Volumes 2.18 to the new application package format of App Volumes 4. The format of these packages in App Volumes 4 has evolved to improve performance and help simplify application management.

Changelog

Version 1.0.7 Update

  • Migration fails if there are blacklisted registry entries containing embedded NULL chars.
  • File system migration fails if there are directories with a trailing DOT in the name (e.g. Microsoft.).

Managing application pools using the VMware Horizon Python Module

Earlier this week I added several methods to the VMware Horizon Python Module that are centered around application pools, and I promised a blog post, so here it is 🙂 For application pools the Inventory class now has the following methods: get_application_pools, get_application_pool, new_application_pool, update_application_pool and delete_application_pool.

Preparation

In order to use the methods I am using this as the standard configuration in my script:

import requests, getpass, urllib, json, operator
import vmware_horizon
requests.packages.urllib3.disable_warnings()

url="https://loftcbr01.loft.lab"
username = "m_wouter"
domain = "loft.lab"
pw = getpass.getpass()


hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()
print("connected")
monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
external=vmware_horizon.External(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
inventory=vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)
entitlements=vmware_horizon.Entitlements(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

All of the objects created at the bottom are there so I don't need to think about creating them when I need them while testing.

I end with

end=hvconnectionobj.hv_disconnect()
print(end)

Both the connected and end prints aren’t required at all but give me feedback about the status of the connection.

[sta_anchor id=”get_application_pools” /]

get_application_pools

This is the easiest method to use as it doesn't require anything. It does allow for setting the page size and filtering if needed. See this article if you want to know more about filtering: https://www.retouw.nl/2021/02/14/filtering-searching-and-pagination-with-the-python-module-for-vmware-horizon/ The method returns a list of dicts; for the first example I will show only the names of the items.

ap = inventory.get_application_pools(maxpagesize=100)
for i in ap:
    print(i["name"])

Or just with the entire list returned

ap = inventory.get_application_pools(maxpagesize=100)
print(ap)

[sta_anchor id=”get_application_pool” /]

get_application_pool

To get a single application pool you can use get_application_pool. It requires an application_pool_id; I will use the first one from the list of application pools to show it.

ap = inventory.get_application_pools(maxpagesize=100)
firstap=ap[0]
print(inventory.get_application_pool(application_pool_id=firstap["id"]))

[sta_anchor id=”delete_application_pool” /]

delete_application_pool

To delete an application pool we again only need the application_pool_id. I will combine both get methods to show all application pools before and after the deletion (with some prints that aren't relevant to the code, so I won't show them below).

ap = inventory.get_application_pools(maxpagesize=100)
for i in ap:
    print(i["name"])
firstap=ap[0]

print(inventory.get_application_pool(application_pool_id=firstap["id"]))

inventory.delete_application_pool(application_pool_id=firstap["id"])

ap = inventory.get_application_pools(maxpagesize=100)
for i in ap:
    print(i["name"])

[sta_anchor id=”new_application_pool” /]

new_application_pool

Since I just deleted my Firefox pool I will need to recreate it. The new_application_pool method requires a dict with quite a lot of values. This is the standard list that the Swagger UI gives you:

{
  "anti_affinity_data": {
    "anti_affinity_count": 10,
    "anti_affinity_patterns": [
      "*pad.exe",
      "*notepad.???"
    ]
  },
  "category_folder_name": "dir1\\dir2\\dir3\\dir4",
  "cs_restriction_tags": [
    "Internal",
    "External"
  ],
  "description": "string",
  "desktop_pool_id": "0103796c-102b-4ed3-953f-3dfe3d23e0fe",
  "display_name": "Firefox",
  "enable_client_restrictions": false,
  "enable_pre_launch": false,
  "enabled": true,
  "executable_path": "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Firefox.lnk",
  "farm_id": "855ea6c5-720a-41e1-96f4-958c90e6e424",
  "max_multi_sessions": 5,
  "multi_session_mode": "DISABLED",
  "name": "Firefox",
  "parameters": "-p myprofile",
  "publisher": "Mozilla Corporation",
  "shortcut_locations": [
    "START_MENU"
  ],
  "start_folder": "string",
  "supported_file_types_data": {
    "enable_auto_update_file_types": true,
    "enable_auto_update_other_file_types": true,
    "file_types": [
      {
        "description": "Firefox Document",
        "type": ".html"
      }
    ],
    "other_file_types": [
      {
        "description": "Firefox URL",
        "name": "https",
        "type": "URL"
      }
    ]
  },
  "version": "72.0.2"
}

This does not mean that all of these are required. What I have found to be an easy way to find out what the minimum is, is to create an application pool with a single key-value pair. display_name is always required so I will use that one. Experience has taught me that this might require several tries, so let's go.

new_app_pool = {}
new_app_pool["display_name"] = "Firefox"

inventory.new_application_pool(application_pool_data=new_app_pool)

So the first hard requirements are display_name, executable_path and name; let's add these and see what happens.

new_app_pool = {}
new_app_pool["display_name"] = "Firefox"
new_app_pool["name"] = "Firefox"
new_app_pool["executable_path"] = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Firefox.lnk"

inventory.new_application_pool(application_pool_data=new_app_pool)

It looks like we actually need some more: at least a desktop_pool_id or a farm_id. Since I am doing this against a connection server with no farms, I'll use a desktop pool.

desktop_pools = inventory.get_desktop_pools()
firstpool = desktop_pools[0]

new_app_pool = {}
new_app_pool["display_name"] = "Firefox"
new_app_pool["name"] = "Firefox"
new_app_pool["executable_path"] = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Firefox.lnk"
new_app_pool["desktop_pool_id"] = firstpool["id"]

inventory.new_application_pool(application_pool_data=new_app_pool)

No errors, and a peek in the admin console shows me that I again have a Firefox application.

[sta_anchor id=”update_application_pool” /]

update_application_pool

To update a pool we need the application_pool_id and again a dict; this time the dict needs the things we want to update. Experience again taught me that there are a few required key-value pairs, while the example in the Swagger UI shows lots, so let's find those. I am going to use my new Firefox app as the source for this. What I actually am going to try to change is the display_name, so I will use that as the first key-value pair.

filter = {}
filter["type"] = "And"
filter["filters"] = []
filter1={}

filter1["type"] = "Equals"
filter1["name"] = "name"
filter1["value"] = "Firefox"
filter["filters"].append(filter1)
ap = (inventory.get_application_pools(filter=filter))[0]
appid = ap["id"]
update_app = {}
update_app["display_name"] = "FF2"
inventory.update_application_pool(application_pool_id=appid, application_pool_data=update_app)

So here different key-value pairs are required than when creating a new application pool. Strange, but there is nothing I can do about it! I will add these from the ap object I retrieved earlier in the script.

aps = inventory.get_application_pools(maxpagesize=100)
for i in aps:
    print(i["display_name"])
filter = {}
filter["type"] = "And"
filter["filters"] = []
filter1={}

filter1["type"] = "Equals"
filter1["name"] = "name"
filter1["value"] = "Firefox"
filter["filters"].append(filter1)
ap = (inventory.get_application_pools(filter=filter))[0]
appid = ap["id"]
update_app = {}
update_app["display_name"] = "FF2"
update_app["executable_path"] = ap["executable_path"]
update_app["multi_session_mode"] = ap["multi_session_mode"]
update_app["enable_pre_launch"] = ap["enable_pre_launch"]

inventory.update_application_pool(application_pool_id=appid, application_pool_data=update_app)

aps = inventory.get_application_pools(maxpagesize=100)
for i in aps:
    print(i["display_name"])

So with that you have the basics to retrieve, create, update and delete application pools using Python.

Filtering/Searching and pagination with the Python module for VMware Horizon

Yesterday I added the first method to the VMware Horizon Python module that makes use of filtering, while the day before that I added pagination. VMware{Code} has a document describing the available options for both, but let me give some explanation.

Pagination

Pagination is where you perform a query but only get a certain number of objects returned by default; the rest of the objects are available on the next page or pages. This is exactly what I ran into with the vmware.hv.helper PowerShell module a long time ago. With the REST APIs this is rather easy to handle since, if there are more pages/objects left, the response headers will contain a key named HAS_MORE_RECORDS. For all the methods that I add where pagination is supported you don't need to handle this yourself, as I have added it to the method itself. What I did add is the option to change the maximum page size. I default to 100 and the maximum is 1000; if you supply an integer higher than 1000 this will be corrected to 1000.
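
For illustration, here is a minimal sketch of what such a pagination loop could look like. Only the HAS_MORE_RECORDS header name comes from the description above; the endpoint, the page/size parameter names, the function name get_all_machines and the header value check are assumptions, as the real handling lives inside the module's methods.

import requests

# Minimal pagination sketch (not the module's actual code): keep requesting
# pages until the HAS_MORE_RECORDS response header no longer says TRUE.
# The endpoint and the page/size query parameter names are assumptions.
def get_all_machines(url, access_token, maxpagesize=100):
    maxpagesize = min(maxpagesize, 1000)   # anything higher gets corrected to 1000
    page = 1
    results = []
    while True:
        response = requests.get(
            f"{url}/rest/inventory/v1/machines",
            verify=False,
            headers=access_token,
            params={"page": page, "size": maxpagesize},
        )
        results += response.json()
        if response.headers.get("HAS_MORE_RECORDS", "FALSE") != "TRUE":
            break
        page += 1
    return results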

Filtering

Filtering needs some more work from the user of the module to be able to use it.

What options are there for filtering?

For the type we have: And, Or and Not

For the filters themselves there are: Equals, NotEquals, Contains, StartsWith and Between.

The formula is you pick one from the first row and combine that with one or more from the second row.

To apply these the document describes the base schema like this:

{
    "type": "And",
    "filter": <filter object>
}

and a filter object looks like this:

{
    "type":"Equals",
    "name":"domain",
    "value":"ad-example0"
}

or this for a range:

{
    "type":"Between",
    "name":"assignedUsers",
    "fromValue":"10",
    "toValue":"20"
}

Combining both into a single object looks like this:

{
    "type":"Not",
    "filter": {
        "type":"Equals",
        "name":"domain",
        "value":"ad-example0"
    }
}

This all looks like a dictionary with a nested dictionary when translating it to Python but when you have multiple filters it suddenly looks like this:

{
    "type":"And",
  "filters": [
        {
            "type":"Equals", 
            "name":"domain",
            "value":"ad-example0"
        },
        {
            "type":"StartsWith", 
            "name":"name",
            "value":"test"
        }
    ]
}

otherwise known as a dictionary with a list of dictionaries in it, and since the latter also works with a single dict inside the list I have taken that route. The document also describes encoding and minifying the filter so it works for a REST API call, but I have done all of that for you, so no need to worry about it: just build the dictionary and you are good!
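
As a rough illustration of what that encoding and minifying boils down to (the exact query parameter name and the encoding the module uses are assumptions here), the dictionary is dumped to compact JSON and then URL-encoded before it ends up in the request:

import json
import urllib.parse

# The same single-filter dictionary as described above
filter_dict = {
    "type": "And",
    "filters": [{"type": "Equals", "name": "domain", "value": "ad-example0"}],
}

minified = json.dumps(filter_dict, separators=(",", ":"))  # compact JSON, no whitespace
encoded = urllib.parse.quote(minified)                     # URL-encode for the query string
print(f"?filter={encoded}")                                # illustrative query string only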

Now let’s actually perform a search

First I create my base object with the type AND and a list for the filters key

filter_dict = {}
filter_dict["type"] = "And"
filter_dict["filters"] = []

Next I create the filter object where the type is Contains and I filter on the field name with the value LP-00

filter1={}
filter1["type"] = "Contains"
filter1["name"] = "name"
filter1["value"] = "LP-00"

And now I add the filter1 object to the filter_dict filters list

filter_dict["filters"].append(filter1)

and I get the machines with a pagesize of 1 to show the pagination (the pool with these machines only has 2 😉 )

machines = obj.get_machines(maxpagesize=1, filter = filter_dict)

And this would be the entire python script

import requests, getpass, urllib, json
import vmware_horizon

requests.packages.urllib3.disable_warnings()

url="https://loftcbr01.loft.lab"
username = "m_wouter"
domain = "loft.lab"
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()

obj = vmware_horizon.Inventory(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

filter_dict = {}
filter_dict["type"] = "And"
filter_dict["filters"] = []
filter1={}
filter1["type"] = "Contains"
filter1["name"] = "name"
filter1["value"] = "LP-00"

filter_dict["filters"].append(filter1)

machines = obj.get_machines(maxpagesize=1, filter = filter_dict)

for i in machines:
    print(i["name"])

hvconnectionobj.hv_disconnect()

And it shows this in python:

My #100DaysOfCode #Python Challenge == VMware_Horizon Module

So after 5 weeks of following the #Python training for my 100DaysOfCode challenge I have decided that my main goal for the challenge itself will be to work on the Horizon Python Module. Some things in the course I find really boring and I need a real target to really learn things, instead of just repeating what someone else is doing.

I will still do some of the fun parts of it in time, like databases and such, when I need them, but for now I will focus on the module. This weekend I added handling of the Instant Clone domain accounts to the module and also added documentation, both in the module and the GitHub repository. I know I will still learn heaps because almost all of it is still rather new and repetition works best for me. A short usage sketch of the new methods follows the list below.

Added Methods to the module

  • External Class
    • get_ad_domains
  • Settings class
    • get_ic_domain_accounts
    • get_ic_domain_account
    • new_ic_domain_account
    • update_ic_domain_account
    • delete_ic_domain_account
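
For reference, here is a hypothetical usage sketch of the new Instant Clone domain account methods. It assumes the Settings class is instantiated like the other classes in the module (with url and access_token) and that get_ic_domain_accounts takes no required arguments; check the documentation in the module and on GitHub for the real signatures.

import requests, getpass
import vmware_horizon

requests.packages.urllib3.disable_warnings()

url = "https://loftcbr01.loft.lab"
username = "m_wouter"
domain = "loft.lab"
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()

# Assumed pattern: a Settings object created from the connection's url and access token
settings = vmware_horizon.Settings(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

# List the configured Instant Clone domain accounts (assumed to need no arguments)
for account in settings.get_ic_domain_accounts():
    print(account)

hvconnectionobj.hv_disconnect()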

The VMware Labs flings monthly for January 2021 – it’s OSOT time again!

January was a good month for me as I started the Python 100DaysOfCode challenge, but the VMware engineers also did plenty of work. Seven flings received an update and it looks like we have a single new release.

New Release

Desktop Container Tools

Updates

Demo Appliance for Tanzu Kubernetes Grid

Power vRA Cloud

DRS Dump Insight

vSphere Software Asset Management Tool

VMware OS Optimization Tool

Workspace ONE Discovery

Python Client for VMC on AWS

New Release

[sta_anchor id=”dct” /]

Desktop Container Tools

Desktop Container Tools is a free tool that allows you to do basic management of vctl (a CLI tool shipped with VMware Fusion) container engine on macOS for running containers and Kubernetes clusters.

Features

  • Easy Access

Handy management of vctl container engine through the user interface and Touch Bar. Configure your virtual machines for containers and Kubernetes cluster without CLI.

  • Multi-language Support

Currently support English & Simplified Chinese. More languages are underway.

  • Light & Free

It’s light and it’s free.

Updated Flings

[sta_anchor id=”daftkg” /]

Demo Appliance for Tanzu Kubernetes Grid

The Demo Appliance for Tanzu Kubernetes Grid is a sample appliance to help customers to learn and deploy Tanzu Kubernetes Grid.

Changelog

Jan 05, 2021 – v1.2.1

  • Support for latest TKG 1.2.1 release
  • Support for TKG Workload Cluster upgrade workflow from K8s 1.18.10 to 1.19.3
  • Updated embedded Harbor to use self-sign TLS certificate (new feature of TKG 1.2.1)
  • Updated to latest version of Harbor (2.1.2)

Known Issue:

DNS resolution issue when installing TKG Extensions. Workaround is to add the following snippet to kapp-controller.yaml

volumeMounts:
- mountPath: /etc/hosts
  name: etc
  subPath: hosts
volumes:
- name: etc
  hostPath:
    path: /etc

[sta_anchor id=”pvrac” /]

Power vRA Cloud

PowervRA Cloud is a PowerShell module that abstracts the VMware vRealize Automation Cloud APIs to a set of easily used PowerShell functions. This tool provides a comprehensive command line environment for managing your VMware vRealize Automation Cloud environment.

Changelog

Version 1.3

  • 4 x New Cmdlets for VMC
  • 5 x New Cmdlets for AWS
  • Powershell 7 on Windows Support
  • Bugfixes

[sta_anchor id=”drsdi” /]

DRS Dump Insight

DRS Dump Insight is a portal that vSphere administrators can use to analyze why DRS performed the actions it did.

Changelog

Version 2.0

  • Added support for 7.0 and 7.0U1 dumps.
  • Toggle added for selective analysis of all full dumps.
  • Bug fixes and backend improvements

[sta_anchor id=”vsamt” /]

vSphere Software Asset Management Tool

vSphere Software Asset Management (vSAM) is a tool that collects and summarizes vSphere product deployment information. It calls vSphere APIs for deployment data and produces a PDF report that the customer can consult as part of their infrastructure review and planning process. This lightweight Java application runs on Windows, Linux or macOS.

Changelog

Version 1.3 Update

  • Show Tanzu products in the report.
  • Bug fixes.

[sta_anchor id=”osot” /]

VMware OS Optimization Tool

Building a new golden image? Use the OS Optimization Tool to make it perform better, but please test, test, test that all your apps are still working.

Changelog

January 2021, b2001 Bug Fixes

  • All optimization entries have been added back into the main user template. This allows manual tuning and selection of all optimizations.
  • Fixed two hardware acceleration selections that were not previously controlled by the Common Option for Visual Effects to disable hardware acceleration.

Optimize

  • During an Optimize, the optimization selections are automatically exported to a default json file (%ProgramData%\VMware\OSOT\OptimizedTemplateData.json).

Analyze

  • When an Analyze is run, if the default json file exists (meaning that this image has already been optimized), this is imported and used to select the optimizations and the Common Options selections with the previous choices.
  • If the default selections are required, on subsequent runs of the OS Optimization Tool, delete the default json file, relaunch the tool and run Analyze.

Command Line

  • The OptimizedTemplateData.json file can also be used from the command line with the -applyoptimization parameter.

Optimizations

  • Changed entries for Hyper-V services to not be selected by default. These services are required for VMs deployed onto Azure. The Windows installation sets these to manual (trigger) so they do not cause any overhead on vSphere when left with the default setting.

[sta_anchor id=”wsoned” /]

Workspace ONE Discovery

VMware Workspace ONE UEM is used to manage Windows 10 endpoints, whether it be Certificate Management, Application Deployment or Profile Management. The Discovery Fling enables you to view these from the device point of view and review the Workspace ONE related services, which applications have been successfully deployed, use the granular view to see exactly what has been configured with Profiles, view User & Machine certificates and see which Microsoft Windows Updates have been applied.

Changelog

January 14, 2021 – Version 1.1

  • Updated application icon (ICO)
  • Monitoring the VMware Horizon Client, VMware Digital Experience Telemetry and VMware Hub Health services

[sta_anchor id=”pcfvrmcoaws” /]

Python Client for VMC on AWS

Python Client for VMware Cloud on AWS is an open-source Python-based tool. Written in Python, the tool enables VMware Cloud on AWS users to automate the consumption of their VMware Cloud on AWS SDDC.

Changelog

Version 1.2

  • Added a Dockerfile to build a Docker image to run PyVMC
  • Added Egress counters visibility
  • Added routing table visibility
  • Added L2VPN support
  • Added Nested Group support

Updates to the VMware Horizon Python Module

I have just pushed some changes to the Horizon Python module. With these changes I comply more with the Python coding standards by requiring you to instantiate an object before being able to use the functions inside a class. I also added a bunch of the API calls available in the monitor section.

To connect you now start like this:

import requests, getpass
import vmware_horizon

requests.packages.urllib3.disable_warnings()
url = input("URL\n")
username = input("Username\n")
domain = input("Domain\n")
pw = getpass.getpass()

hvconnectionobj = vmware_horizon.Connection(username = username,domain = domain,password = pw,url = url)
hvconnectionobj.hv_connect()

So technically you first instantiate a Connection class object and then you use the hv_connect function inside that class, after which the access token is stored inside the object itself.

Now to use the monitors for example you create an object for this.

monitor = vmware_horizon.Monitor(url=hvconnectionobj.url, access_token=hvconnectionobj.access_token)

To see what functions are available you can combine print with dir.

print(dir(monitor))

and the full list, the ones with (id) require an id:

  • ad_domain
  • connection_servers
  • connection_server(id)
  • event_database
  • farms
  • farm(id)
  • gateways
  • gateway(id)
  • rds_servers
  • rds_server(id)
  • saml_authenticators
  • saml_authenticator(id)
  • view_composers
  • view_composer(vcId)
  • virtual_centers
  • virtual_center(id)
  • remote_pods
  • remote_pod(id)
  • true_sso

As you can see I had to work with underscores instead of hyphens, as Python doesn't allow hyphens in function names.

As said, some of these might require an id, but connection_servers for example works without one:
print(monitor.connection_servers())

Todo: Error handling for wrong passwords, documentation

My #100DaysOfCode challenge: week 3

This week I learned about Object Oriented Programming, classes, modules, tuples and other things. I decided to skip some days of the course because my goal for the 100 days is to learn new techniques to use with Python. Some of the days are more about how to think when solving challenges than about new techniques, so I did watch the videos where it was clear that new stuff was being taught, but I do my own thing for the rest. I have also created a first version of a module to use the VMware Horizon REST APIs and blogged about it yesterday. On the positive side I learned that even though I was making long evenings with ControlUp’s yearly SKO, I was still able to take in new information during my morning 100DaysOfCode ritual.

Learning points:

  • Even when making long days I can still take in new information during the mornings
  • I can’t be arsed to create code that I don’t care about
  • I still think this is a fun challenge
  • Python is cool

 

Using the Horizon REST API’s with Python

As you probably have seen from my tweets, for the last three weeks I have been doing the 100DaysOfCode challenge, specifically for Python. Today I was actually a bit bored with the task we got (sorry, I hate creating games) so I decided to check if I was actually able to consume the Horizon APIs from Python. This was something entirely new for me, so it was a boatload of trial & error until I got it working with this script:

import requests,json, getpass

requests.packages.urllib3.disable_warnings()

pw = getpass.getpass()
domain = input("Domain")
username = input("Username")
url = input("URL")



headers = {
    'accept': '*/*',
    'Content-Type': 'application/json',
}

data = {"domain": domain, "password": pw, "username": username}
json_data = json.dumps(data)

response = requests.post(f'{url}/rest/login', verify=False, headers=headers, data=json_data)
data = response.json()

access_token = {
    'accept': '*/*',
    'Authorization': 'Bearer ' + data['access_token']
}

response = requests.get(f'{url}/rest/inventory/v1/desktop-pools', verify=False,  headers=access_token)
data = response.json()
for i in data:
    print(i['name'])

First I import the requests, json and getpass modules. The requests module does the web requests, json is used to transform the data into something usable and getpass is used to get my password without showing it. After this I add a line to get rid of the warnings that my certificates aren't to be trusted (it's a homelab, duh!).

The most important part is that for the authentication I send the username, password and domain as JSON data in the body, while the headers contain the content type. The response gets converted to JSON data and I use that to build the access token. For future requests I only need to pass the access token for authentication.

Now this looks fun, but wouldn't it be better if I created a module for it? Yes it would, and that's what I have done; I have even added a simple function to list desktop pools.

import json, requests, ssl

class Connection:
    def hv_connect(username, password, domain, url):
        headers = {
            'accept': '*/*',
            'Content-Type': 'application/json',
        }

        data = {"domain": domain, "password": password, "username": username}
        json_data = json.dumps(data)

        response = requests.post(f'{url}/rest/login', verify=False, headers=headers, data=json_data)
        data = response.json()

        access_token = {
            'accept': '*/*',
            'Authorization': 'Bearer ' + data['access_token']
        }
        return access_token

    def hv_disconnect(url, access_token):
        requests.post(f'{url}/rest/logout', verify=False, headers=access_token)

class Pools:
    def list_hvpools(url,access_token):
        response = requests.get(f'{url}/rest/inventory/v1/desktop-pools', verify=False,  headers=access_token)
        return response.json()



And with a simple script I consume this module to show the display name of the first pool.

import requests, getpass
import vmware_horizon

requests.packages.urllib3.disable_warnings()
url = input("URL\n")
username = input("Username\n")
domain = input("Domain\n")
pw = getpass.getpass()


at = vmware_horizon.Connection.hv_connect(username=username,password=pw,url=url,domain=domain)


pools = vmware_horizon.Pools.list_hvpools(url=url, access_token=at)
print(f'The first Desktop pool is {pools[0]["display_name"]}')

vmware_horizon.Connection.hv_disconnect(url=url, access_token=at)

The module is far from ready and I need to find a better way to make it optional to ignore the certificate errors, but if you want to follow the progress of the module it can be found on my GitHub.

 

 

Week 2 of my #100DaysOfCode challenge

And with a higher-or-lower game I finished the second week of my 100DaysOfCode #Python challenge. Sometimes when I don't see the solution for something it takes me ages to get on the right path, but when I do see it I finish the project pdq! The debugging that I have been applying to my PowerShell code for years also seems to apply to Python, and Google is a coder's best friend 🙂 Besides the Python course I actually didn't do a whole lot of coding in PowerShell, maybe a couple of hours spread over the week, and my “big project” is finally finished and ready to be published.

Good things:

  • focus
  • when it works it works

Bad things:

  • when I start thinking in the wrong direction it will keep going wrong
  • I still sometimes think too difficult

 

My 100 Days of Code Challenge: done with the first week!

So the first seven days of coding for my 100 Days of Code Challenge have passed. I have already learned heaps from the Python course that I follow for the challenge, but I have also run into some walls where my thinking process took me in the wrong direction. What I also notice is that I go a bit more advanced than the level the course is currently at, because I google for a solution and try to understand it, while there might be a simpler solution available that just costs a few more lines of code. I do try to make sure that I understand what I use, otherwise it doesn't make sense to copy/paste some solution, see it work and have no idea of the why or how.

Not directly related to the code, but the decision to do the course early in the morning works very well for me. It sharpens my senses for the rest of the day and when I sit behind my work laptop I am fully ready to go, while normally I still had to get into the ‘production’ groove at that point. Sometimes I do need to finish the daily project after dinner but I don't mind doing that. I always create the daily update page at that moment as well, so it's a good combination to refresh what I have learned that day.

One thing that is related to code, and that I have to get used to, is that my coding during the day is usually 99% in PowerShell and I sometimes tend to confuse the two languages on how to do things.

Good points:

  • I got a streak of 7 days
  • learned a lot
  • It’s fun
  • sharper for the rest of the day

Less good things:

  • confusing PS and Python code
  • I tend to over complicate things
  • I hate doing workflows on my pc, need to use my whiteboard