EUC Score Test Methodology

Design

Build a target environment and select automated workloads that simulate virtual desktop users doing realistic work.

Perform

Orchestrate automated test runs, capture client-side session screen videos and collect host-side telemetry data.

Analyze

Analyze the results by combining screen videos of the primary user session with telemetry data collected on the host machine.

Overview

When interacting with virtual desktops or remote applications, there are seven usability and performance aspects with significant relevance for most users: fast user logon, short application load times, a high refresh rate of the endpoint display, great graphics quality, unnoticeable user interface response time delays, support for all common media formats, and high session reliability. Only systems that come close to this ideal allow users to immerse themselves naturally in the digital workspace through a range of endpoint devices with different capabilities and form factors, including head-mounted displays.

Unfortunately, it's hard to measure or score perceived user experience in virtual desktops or remote applications. To date, there has been no commonly accepted benchmarking methodology or service offering whose primary focus is what a remote user really sees on the screen, measured together with the time-correlated load patterns generated on the host platform. As a consequence, there are no adequate metrics to define, measure and compare the quality of the perceived remote user experience. To fill this gap, it's our ambition to introduce you to commonly accepted best practices and "recipes" for benchmarking, quality management and performance optimization in End-User Computing (EUC) environments.

A typical EUC performance benchmarking project flow can be separated into three phases:

  1. Design: Pose a testable question, state a hypothesis, set up an EUC test lab and design an EUC experiment.
  2. Perform: Perform controlled and reproducible EUC experiments and collect test data.
  3. Analyze: Review test results, visualize data, draw conclusions and publish findings.

Our benchmarking approach strictly follows best practices established in international research projects. The underlying test methodology is based on a range of pre-selected and adaptable application scenarios representing typical use cases. Both fully automated and manual test sequences are executed in a closely controlled way and recorded as screen videos. These videos are correlated to relevant feature and performance factors, using metrics closely tied to the actual remote end-user experience on popular client devices.

It is important to note that our EUC performance benchmarking methodology works across the boundaries of on-premises and cloud environments: it doesn't matter where the different test components are located.

Design a Test Environment

Our EUC benchmarking approach is based on valid experimental evidence and rational discussion. Lab experiments conducted with the modules recommended in this article allow you to test EUC theories and provide the basis for scientific knowledge. In other words, the findings from a properly designed EUC experiment can make the difference between guesswork and solid facts. Our approach can also call for a new EUC theory, either by showing that an accepted theory is incorrect or by exhibiting a new phenomenon that is in need of explanation. EUC test engineers may also want to investigate a phenomenon simply because it looks interesting. It is critically important to pose a testable EUC question, state a hypothesis or theory, and then design an EUC experiment with the goal of answering the question or refining the theory. It is our strong belief that the lessons learned from EUC experiments allow us to deliver a better remote end-user experience for consumers and business users.

Building the EUC test lab environment represents an important phase in each EUC benchmarking project. Setting up the test lab includes the installation and configuration of one or multiple host systems, guest VMs, endpoint devices and network components. In most cases it is necessary to include Active Directory and file server resources in the test infrastructure. Predefined simulated workloads must be added to selected guest VMs that provide test users with access to remote Windows sessions and applications. In addition, a screen video capture device and telemetry data collection mechanisms must be added to the test environment.
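
To give a concrete impression of what such a collector mechanism can look like, the following Python sketch samples basic host metrics into a timestamped CSV file. It is illustrative only and assumes the psutil package is available; the sampling interval, metric set and file name are arbitrary choices for this example, not part of the EUC Score tooling.

```python
# Minimal host-side telemetry collector sketch (illustrative only).
# Assumes the psutil package is installed; interval, metrics and file name
# are arbitrary choices, not the EUC Score collector format.
import csv
import time
from datetime import datetime, timezone

import psutil

SAMPLE_INTERVAL_S = 1.0          # assumed sampling rate
OUTPUT_FILE = "telemetry.csv"    # hypothetical output file name


def collect(duration_s: float = 300.0) -> None:
    """Write timestamped CPU, memory and disk samples for one test run."""
    with open(OUTPUT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_utc", "cpu_percent", "mem_percent", "disk_percent"])
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                psutil.cpu_percent(interval=None),
                psutil.virtual_memory().percent,
                psutil.disk_usage("/").percent,
            ])
            time.sleep(SAMPLE_INTERVAL_S)


if __name__ == "__main__":
    collect()
```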

Labs for EUC Score projects are built by EUC benchmarking experts or test engineers to accomplish a set of predefined tasks. Each test lab consists of multiple component categories:

  • Endpoint Device: This is a physical or virtual endpoint where a real or synthetic primary user interacts with a remote session. Physical endpoints can be Windows PCs, Macs, Android devices, tablets or smartphones. A prerequisite is installed remoting client software (such as Microsoft Remote Desktop Connection, Microsoft AVD Client, Citrix Workspace App, or VMware Horizon Client). Recording the screen output of simulated workloads requires connecting the endpoint device's video output to the input of a video capture or frame grabber device. Running screen recorder software installed on a virtual client device is also an option.
  • Target System: Virtual desktops or remote sessions connected to endpoint devices are delivered by one or multiple target machines. Each target machine runs on a host system, which can be a hypervisor or a Windows operating system installed on physical hardware. User sessions delivered by a target machine are connected to endpoint devices over LAN or WAN. The selected simulated workloads and controller tools must be present on the target machine hosting the interactive remote sessions used for testing. Adding telemetry data collectors that archive telemetry data during each test run is recommended.
  • Lab Controller: A sufficiently powerful PC or laptop connected to the output of the video capture device and capable of running video recording software such as OBS Studio. Video resolution must be Full HD or higher at 60 frames per second.
  • Load Generator: An optional separate system hosting secondary user client sessions ("noisy neighbors") and running persona workloads that generate pre-defined and consistent load on a multi-user target system. Depending on the test cases, it may be necessary to launch the secondary user sessions from the command line or by group policy, as sketched after this list.
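
As an illustration of a command-line launch of secondary user sessions, the following Python sketch starts one Remote Desktop session per prepared .rdp connection file. The folder path, launch delay and use of mstsc.exe are assumptions made for this example; any scriptable remoting client can take its place.

```python
# Illustrative sketch: launching "noisy neighbor" sessions from the command line.
# The .rdp folder and the pause between launches are assumptions; credentials are
# expected to be pre-configured in the connection profiles or the credential store.
import subprocess
import time
from pathlib import Path

RDP_PROFILE_DIR = Path(r"C:\EUC-Lab\noisy-neighbors")  # hypothetical folder of prepared .rdp files
LAUNCH_DELAY_S = 10  # assumed stagger so that logons do not overlap


def launch_secondary_sessions() -> None:
    """Start one Remote Desktop session per prepared connection profile."""
    for rdp_file in sorted(RDP_PROFILE_DIR.glob("*.rdp")):
        # mstsc.exe accepts a saved connection file as its first argument.
        subprocess.Popen(["mstsc.exe", str(rdp_file)])
        time.sleep(LAUNCH_DELAY_S)


if __name__ == "__main__":
    launch_secondary_sessions()
```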

For more details on the lab equipment, please check out the Lab Equipment page.


Perform Test Runs

This project phase focuses on performing controlled and reproducible EUC experiments. Selected test sequences are executed in a closely controlled way and recorded as screen videos using a video capture device or frame grabber connected to a physical client device's video output. As an alternative, screen recorder software installed on a (virtual) remoting client can be used. The resulting videos are correlated to relevant feature and performance factors, using metrics closely tied to the actual remote end-user experience on popular client endpoint devices.

From the Lab Controller machine and the primary user session, you control the launch of test workload sequences, record screen videos and collect telemetry data. During each test sequence, secondary users ("noisy neighbors") optionally create a pre-defined base load designed to mimic certain types of workers based on industry-standard definitions.
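
The EUC Score tooling provides its own controller utilities; as a rough sketch of the orchestration idea, the following Python example starts a screen recording, runs a simulated workload and writes the start and end timestamps needed to correlate the video with telemetry data later. It uses ffmpeg's gdigrab capture as a simple stand-in for the capture-device-plus-OBS-Studio setup described above, and the workload command and file names are placeholders.

```python
# Illustrative test-run orchestration sketch for the Lab Controller (Windows).
# ffmpeg's gdigrab screen capture stands in for the capture-device + OBS Studio
# recording described above; workload command, file names and timings are assumptions.
import json
import subprocess
from datetime import datetime, timezone

WORKLOAD_CMD = [r"C:\EUC-Lab\workloads\run-workload.exe"]   # hypothetical workload launcher
VIDEO_FILE = "testrun.mkv"
RUNLOG_FILE = "testrun.json"


def utc_now() -> str:
    return datetime.now(timezone.utc).isoformat()


def run_test_sequence() -> None:
    """Record the primary session screen while a simulated workload executes."""
    recorder = subprocess.Popen(
        ["ffmpeg", "-y", "-f", "gdigrab", "-framerate", "60", "-i", "desktop",
         "-c:v", "libx264", "-preset", "ultrafast", VIDEO_FILE],
        stdin=subprocess.PIPE,
    )
    started = utc_now()
    workload = subprocess.run(WORKLOAD_CMD)   # blocks until the workload finishes
    finished = utc_now()
    recorder.communicate(input=b"q")          # ask ffmpeg to finalize the video

    # Persist the timestamps so video frames and telemetry samples can be correlated later.
    with open(RUNLOG_FILE, "w") as f:
        json.dump({"started_utc": started, "finished_utc": finished,
                   "workload_exit_code": workload.returncode,
                   "video_file": VIDEO_FILE}, f, indent=2)


if __name__ == "__main__":
    run_test_sequence()
```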


Analyze Test Results

The results collected in the previous phase are fed into a side-by-side video player with data overlay, called the "Sync Player", which presents screen videos and performance data in a way that is easy to understand and interpret. This allows the analysis of the most important remote end-user experience factors, such as user interface response times, screen refresh cycles (frame rates), supported graphics formats and media types, perceived graphics and media performance, media synchronism ("lip sync"), noticeable distortion of media caused by codecs, and performance under varying network conditions.
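
The Sync Player itself is part of the EUC Score tooling. As a rough illustration of the correlation it performs, the Python sketch below attaches the nearest telemetry sample to each video frame, assuming a 60 fps recording and the hypothetical CSV layout used in the collector sketch above; column names and file formats are assumptions, not the Sync Player's actual input format.

```python
# Illustrative sketch of correlating telemetry samples with video frames.
# Column names, the 60 fps assumption and the CSV layout are illustrative only.
import pandas as pd

FPS = 60  # assumed recording frame rate


def align_telemetry(telemetry_csv: str, run_start_utc: str, frame_count: int) -> pd.DataFrame:
    """Attach the nearest preceding telemetry sample to every video frame."""
    telemetry = pd.read_csv(telemetry_csv, parse_dates=["timestamp_utc"])
    start = pd.Timestamp(run_start_utc)

    # One row per video frame, timestamped relative to the recorded run start.
    frames = pd.DataFrame({
        "frame": range(frame_count),
        "timestamp_utc": start + pd.to_timedelta([i / FPS for i in range(frame_count)], unit="s"),
    })

    # merge_asof picks the last telemetry sample taken at or before each frame time.
    return pd.merge_asof(
        frames.sort_values("timestamp_utc"),
        telemetry.sort_values("timestamp_utc"),
        on="timestamp_utc",
        direction="backward",
    )
```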

After analyzing the data collected during the previous phase, you can draw your conclusions and publish your findings. Check out sample results visualized by the EUC Score Sync Player.
