EUC Performance Benchmarking Best Practices

Design

Build a target environment and select automated workloads that simulate virtual desktop users doing realistic work.

Perform

Orchestrate automated test runs, capture client-side session screen videos and collect host-side telemetry data.

Analyze

Analyze collected results by combining screen videos of the primary user session with telemetry data collected on the host machine.

Overview

When interacting with virtual desktops or remote applications, seven usability and performance aspects matter most to users: fast user logon, short application load times, a high refresh rate of the endpoint display, great graphics quality, unnoticeable user interface response delays, support for all common media formats, and high session reliability. Only systems that come close to this ideal allow users to immerse themselves naturally in the digital workspace across a range of endpoint devices with different capabilities and form factors, including head-mounted displays.

Unfortunately, it's hard to measure or score perceived user experience in virtual desktops or remote applications. To date, there has been no commonly accepted benchmarking methodology and service offering whose primary focus is on what a remote user actually sees on the screen while the time-correlated load patterns generated on the host platform are measured. As a consequence, there are no adequate metrics to define, measure and compare the quality of perceived remote user experience. To fill this gap, it's our ambition to introduce you to commonly accepted best practices and "recipes" for benchmarking, quality management and performance optimization in End-User Computing (EUC) environments.

A typical EUC performance benchmarking project flow can be separated into three phases:

  1. Design: Pose a testable question, state a hypothesis, set up an EUC test lab and design an EUC experiment (see the sketch after this list).
  2. Perform: Perform controlled and reproducible EUC experiments and collect test data.
  3. Analyze: Review test results, visualize data, draw conclusions and publish findings.
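
To make the Design phase concrete, the sketch below shows one possible way to write down a testable question, a hypothesis and the planned test matrix in machine-readable form before any test run is started. It is a minimal illustration in Python; the field names (question, hypothesis, scenarios, secondary_sessions, repetitions) are hypothetical and not part of any EUC Score tooling.

    from dataclasses import dataclass, field

    @dataclass
    class EucExperiment:
        """Hypothetical experiment definition for one EUC benchmarking project."""
        question: str                   # the testable question
        hypothesis: str                 # expected outcome, stated up front
        target_machine: str             # host or VM under test
        scenarios: list = field(default_factory=list)  # simulated workloads to run
        secondary_sessions: int = 0     # "noisy neighbor" sessions per run
        repetitions: int = 3            # identical runs per scenario for reproducibility

    experiment = EucExperiment(
        question="Does enabling GPU acceleration reduce user interface response times?",
        hypothesis="GPU-enabled sessions show visibly shorter response times under load.",
        target_machine="lab-vm-01",
        scenarios=["office-workload", "video-playback"],
        secondary_sessions=8,
    )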

Our benchmarking approach strictly follows best practices established in international research projects. The underlying test methodology is based on a range of pre-selected and adaptable application scenarios representing typical use cases. Both fully automated and manual test sequences are executed in a closely controlled way and recorded as screen videos. These videos are correlated with relevant feature and performance factors, using metrics closely tied to the actual remote end-user experience on popular client devices.

It is important to note that our EUC performance benchmarking methodology works across the boundaries of on-premises and cloud environments: it doesn't matter where the different test components are located.

Design a Test Environment

Our EUC benchmarking approach is based on valid experimental evidence and rational discussion. Lab experiments conducted with the modules recommended in this article allow you to test EUC theories and provide the basis for scientific knowledge. In other words, the findings from a properly designed EUC experiment can make the difference between guesswork and solid facts. Our approach can also call for a new EUC theory, either by showing that an accepted theory is incorrect, or by exhibiting a new phenomenon that is in need of explanation. EUC test engineers may also want to investigate a phenomenon simply because it looks interesting. It is critically important to pose a testable EUC question, state a hypothesis or theory, and then design an EUC experiment with the goal of answering the question or refining the theory. It is our strong belief that the lessons learned from EUC experiments allow us to deliver a better remote end-user experience for consumers and business users.

Building the EUC test lab environment is an important phase of each EUC benchmarking project. Setting up the test lab includes the installation and configuration of one or multiple host systems, guest VMs, endpoint devices and network components. In most cases it is necessary to add a domain controller and a file server to the test infrastructure. Predefined simulated workloads must be installed on the selected guest VMs that provide users with access to remote Windows sessions and applications. In addition, a screen video capture device and telemetry data collection mechanisms must be added to the test environment.

Labs for EUC Score projects are built by EUC benchmarking experts or test engineers to accomplish a set of predefined tasks. Each test lab consists of multiple component categories:

  • Endpoint Device: This is a physical or virtual endpoint where a real or synthetic user interacts with a remote session. A prerequisite is installed remoting client software (such as Microsoft Remote Desktop Connection, Microsoft WVD Client, Citrix Workspace App, or VMware Horizon Client). Connecting an endpoint device's video output to the input of a video capture device is required for recording the screen output of simulated workloads.
  • Host Systems with Target Machine: Virtual desktops or remote sessions connected to endpoint devices are delivered by one or multiple target machines. Each target machine runs on a host system, which can be a hypervisor or a Windows operating system installed on physical server hardware. User sessions delivered by a target machine are connected to endpoint devices over LAN or WAN. The required simulated workloads and tools must be installed on the target machine hosting the interactive remote sessions used for testing. Installing telemetry data collectors on target machines is optional.
  • Telemetry Data Collector: Installed on the host server and initiated from the command line in the primary user session, the telemetry data collector archives server-side telemetry data during each test run (a minimal collector sketch follows this list).
  • Load Generators: Secondary users that generate load on the test host server require session host machines from which those sessions to the test host server are launched. Running a simulated persona from the command line or via group policy allows you to launch secondary user sessions.
  • Side-by-Side Player: The purpose of such a visualization tool is to analyze and present previously collected screen videos and performance data sets in a way that is easy to understand and interpret.
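
As a hedged sketch of the telemetry data collector idea, and not the actual EUC Score collector, the following Python script samples host-side CPU, memory and disk counters at a fixed interval and archives them to a CSV file for later correlation with the screen videos. It assumes the third-party psutil package is available on the host; the file name, interval and duration are arbitrary illustrative values.

    import csv
    import time

    import psutil  # third-party package, assumed to be installed on the host


    def collect_telemetry(outfile="telemetry.csv", interval_s=1.0, duration_s=600):
        """Sample basic host counters once per interval and archive them as CSV."""
        with open(outfile, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "cpu_percent", "mem_percent",
                             "disk_read_mb", "disk_write_mb"])
            end = time.time() + duration_s
            while time.time() < end:
                disk = psutil.disk_io_counters()
                writer.writerow([
                    time.time(),                        # epoch timestamp for later correlation
                    psutil.cpu_percent(interval=None),  # CPU load since the previous call
                    psutil.virtual_memory().percent,    # memory utilization
                    disk.read_bytes / 1e6,              # cumulative disk reads in MB
                    disk.write_bytes / 1e6,             # cumulative disk writes in MB
                ])
                time.sleep(interval_s)


    if __name__ == "__main__":
        collect_telemetry()

Because the real collector is initiated from the command line in the primary user session, the sketch is kept callable as a standalone script.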

An important component of our EUC performance benchmarking approach is a frame grabber or video capture device connected to a physical client device's video output, or screen recorder software installed in a virtual client. Physical endpoints can be Windows PCs, Macs, Android devices, tablets or smartphones. If a video capture device is connected to a physical client, screen recording software is required on a separate PC or laptop to record and store the resulting video data stream.
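
Where screen recorder software inside a (virtual) client is used instead of a hardware frame grabber, the recording can be scripted. The sketch below shells out to the ffmpeg command-line tool and its gdigrab desktop-capture input on a Windows client; ffmpeg must be installed separately, and the frame rate, duration and output file name are illustrative values only.

    import subprocess


    def record_screen(outfile="session.mp4", seconds=120, fps=30):
        """Record the local Windows desktop with ffmpeg's gdigrab input."""
        cmd = [
            "ffmpeg",
            "-y",                    # overwrite an existing output file
            "-f", "gdigrab",         # Windows desktop capture input device
            "-framerate", str(fps),  # requested capture frame rate
            "-i", "desktop",         # grab the whole primary desktop
            "-t", str(seconds),      # stop after the given duration
            "-pix_fmt", "yuv420p",   # widely compatible pixel format
            outfile,
        ]
        subprocess.run(cmd, check=True)


    if __name__ == "__main__":
        record_screen()

With a physical endpoint and a hardware capture device, the same kind of recording runs on the separate capture PC against the capture device's input instead of the local desktop.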


Perform Test Runs

This project phase focuses on performing controlled and reproducible EUC experiments. Selected test sequences are executed in a closely controlled way and recorded as screen videos using a video capture device or frame grabber connected to a physical client device's video output. As an alternative, screen recorder software installed on a (virtual) remoting client can be used. The resulting videos are correlated with relevant feature and performance factors, using metrics closely tied to the actual remote end-user experience on popular client endpoint devices.

From a machine called the "lab controller" and from within the primary user session, you control the launch of test workload sequences, record screen videos and collect telemetry data. During each test sequence, secondary users ("noisy neighbors") can optionally create a predefined base load designed to mimic certain types of workers based on industry-standard definitions.
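
As a rough illustration of what the lab controller does during a run, and not the EUC Score orchestration itself, the sketch below launches optional "noisy neighbor" sessions, starts telemetry collection and screen recording, and then triggers the workload sequence in the primary user session. Every command, host name and script name shown is a placeholder for whatever the lab actually uses.

    import subprocess
    import time

    # Placeholder commands; replace them with the lab's actual tools and paths.
    TELEMETRY_CMD = ["python", "collect_telemetry.py"]              # host-side telemetry collector
    RECORDER_CMD = ["python", "record_screen.py"]                   # client-side screen recorder
    WORKLOAD_CMD = ["python", "run_workload.py", "office-workload"]
    NOISY_CMD = ["mstsc", "/v:target-host"]                         # one secondary RDP session


    def run_test(secondary_sessions=4, settle_s=30):
        """Orchestrate one controlled, reproducible test run from the lab controller."""
        # 1. Create the base load with secondary "noisy neighbor" sessions.
        noisy = [subprocess.Popen(NOISY_CMD) for _ in range(secondary_sessions)]
        time.sleep(settle_s)  # let the base load settle before measuring

        # 2. Start telemetry collection and screen recording alongside the workload.
        telemetry = subprocess.Popen(TELEMETRY_CMD)
        recorder = subprocess.Popen(RECORDER_CMD)

        # 3. Run the automated workload sequence in the primary user session.
        subprocess.run(WORKLOAD_CMD, check=True)

        # 4. Stop the collectors; the secondary sessions are torn down separately.
        telemetry.terminate()
        recorder.terminate()
        return noisy


    if __name__ == "__main__":
        run_test()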


Analyze Test Results

Feeding the results collected in the previous phase into a side-by-side video player with data overlay presents screen videos and performance data in a way that is easy to understand and to interpret. This allows the analysis of the most important remote end-user experience factors, such as user interface response times, screen refresh cycles (frame rates), supported graphics formats and media types, perceived graphics and media performance, media synchronism ("lip sync"), noticeable distortion of media caused by codecs, and performance under varying network conditions.
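
The Side-by-Side Player does this correlation visually, but the underlying idea can be sketched programmatically: align the telemetry samples with the screen video's timeline so that host load can be overlaid on individual frames. The snippet below relies on the third-party pandas package, assumes the column names produced by the collector sketch above, and assumes that host and capture clocks are synchronized (for example via NTP).

    import pandas as pd


    def correlate(telemetry_csv="telemetry.csv", video_start_epoch=0.0, fps=30):
        """Map each telemetry sample onto the video frame visible at that moment."""
        df = pd.read_csv(telemetry_csv)
        # Offset of each sample from the start of the screen recording.
        df["video_time_s"] = df["timestamp"] - video_start_epoch
        df = df[df["video_time_s"] >= 0]
        # Frame number shown at the time each sample was taken.
        df["frame"] = (df["video_time_s"] * fps).astype(int)
        return df[["frame", "video_time_s", "cpu_percent", "mem_percent"]]

The resulting table can then drive a simple data overlay alongside the recorded session video.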

After analyzing the data collected during the EUC experiments, you can draw your conclusions and publish your findings. Check out sample results visualized by the EUC Score Side-by-Side Player.
