When interacting with virtual desktops or remote applications, seven usability and performance aspects are particularly relevant for most users: fast user logon, short application load times, a high refresh rate of the endpoint display, great graphics quality, unnoticeable user interface response delays, support for all common media formats, and high session reliability. Only systems that come close to this ideal allow users to immerse themselves naturally in the digital workspace through a range of endpoint devices with different capabilities and form factors, including head-mounted displays.
Unfortunately, perceived user experience in virtual desktops or remote applications is hard to measure or score. To date, there has been no commonly accepted benchmarking methodology or service offering whose primary focus is what a remote user actually sees on the screen, measured alongside the time-correlated load patterns generated on the host platform. As a consequence, there are no adequate metrics to define, measure and compare the quality of perceived remote user experience. To fill this gap, our ambition is to introduce you to commonly accepted best practices and "recipes" for benchmarking, quality management and performance optimization in End-User Computing (EUC) environments.
A typical EUC performance benchmarking project flow can be separated into three phases: building the test lab, running controlled test sequences, and analyzing and publishing the results.
Our benchmarking approach strictly follows best practices established in international research projects. The underlying test methodology is based on a range of pre-selected and adaptable application scenarios representing typical use cases. Both fully automated and manual test sequences are executed in a closely controlled way and recorded as screen videos. These videos are correlated to relevant feature and performance factors, using metrics closely tied to the actual remote end user experience on popular client devices.
It is important to note that our EUC performance benchmarking methodology works across the boundaries of on-premises and cloud environments: it doesn't matter where the different test components are located.
Our EUC benchmarking approach is based on valid experimental evidence and rational discussion. Lab experiments conducted with the modules recommended in this article allow you to test EUC theories and provide the basis for scientific knowledge. In other words, the findings from a properly designed EUC experiment can make the difference between guesswork and solid facts. Our approach can also call for a new EUC theory, either by showing that an accepted theory is incorrect, or by exhibiting a new phenomenon that needs explanation. EUC test engineers may also want to investigate a phenomenon simply because it looks interesting. It is critically important to pose a testable EUC question, state a hypothesis or theory, and then design an EUC experiment with the goal of answering the question or refining the theory. It is our strong belief that the lessons learned from EUC experiments allow us to deliver a better remote end-user experience for consumers and business users.
Building the EUC test lab environment represents an important phase in each EUC benchmarking project. Setting up the test lab includes the installation and configuration of one or multiple host systems, guest VMs, endpoint devices and network components. In most cases it is necessary to include Active Directory and file server resources in the test infrastructure. Predefined simulated workloads must be added to selected guest VMs that provide test users with access to remote Windows sessions and applications. In addition, a screen video capture device and telemetry data collection mechanisms must be added to the test environment.
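As a loose illustration, the component categories listed above can be kept in a machine-readable inventory and sanity-checked before any test run. The category names and the helper below are illustrative assumptions, not part of EUC Score:

```python
# Sketch: verify that all lab component categories mentioned above are
# present before testing begins. Names are illustrative placeholders.

REQUIRED = {
    "host_system", "guest_vms", "endpoint_device", "network",
    "active_directory", "file_server", "workloads",
    "video_capture", "telemetry_collector",
}

def missing_components(inventory):
    """Return the required component categories absent from the inventory."""
    return sorted(REQUIRED - set(inventory))

# Example lab that forgot two infrastructure pieces
lab = {"host_system", "guest_vms", "endpoint_device", "network",
       "workloads", "video_capture", "telemetry_collector"}
print(missing_components(lab))  # ['active_directory', 'file_server']
```

A check like this makes it obvious early on when a lab build is incomplete, before any screen videos are recorded.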
Labs for EUC Score projects are built by EUC benchmarking experts or test engineers to accomplish a set of predefined tasks. Each test lab consists of multiple component categories.
For more details on the lab equipment, please check out the Lab Equipment page.
This project phase focuses on performing controlled and reproducible EUC experiments. Selected test sequences are executed in a closely controlled way and recorded as screen videos, using a video capture device or frame grabber connected to a physical client device's video output. As an alternative, screen recorder software installed on a (virtual) remoting client can be used. The resulting videos are correlated with relevant feature and performance factors, using metrics closely tied to the actual remote end-user experience on popular client devices.
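One of the simplest metrics derived from a captured video is the effective frame rate delivered to the endpoint. A minimal sketch, assuming frame timestamps in seconds as they would come from a frame grabber or screen recorder (the values below are illustrative):

```python
# Sketch: derive the effective frame rate from captured frame timestamps.

def effective_fps(frame_timestamps):
    """Average frames per second over the whole capture interval."""
    if len(frame_timestamps) < 2:
        return 0.0
    duration = frame_timestamps[-1] - frame_timestamps[0]
    return (len(frame_timestamps) - 1) / duration

# Example: 61 frames spaced 1/30 s apart over two seconds
timestamps = [i / 30 for i in range(61)]
print(round(effective_fps(timestamps), 1))  # 30.0
```

In a real capture the spacing between frames is irregular, so the same timestamp list can also be scanned for unusually large gaps to detect dropped frames.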
From the Lab Controller machine and the primary user session, you control the launch of test workload sequences, record screen videos and collect telemetry data. During each test sequence, secondary users ("noisy neighbors") can optionally create a predefined base load designed to mimic certain types of workers based on industry-standard definitions.
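The controlled execution described above can be sketched as a small sequence runner that timestamps every workload step, so the log can later be correlated with the captured screen video. The step names and the runner itself are illustrative assumptions, not EUC Score APIs:

```python
import time

# Sketch: run workload steps in a controlled order and timestamp each one,
# producing a log that can be aligned with the screen video afterwards.

def run_sequence(steps):
    """Run (name, callable) pairs in order; return time-stamped results."""
    results = []
    t0 = time.monotonic()
    for name, action in steps:
        start = time.monotonic() - t0
        action()
        end = time.monotonic() - t0
        results.append({"step": name, "start_s": start, "duration_s": end - start})
    return results

# Illustrative stand-ins for real workload actions such as opening a
# document or scrolling a page in the remote session
steps = [
    ("open_document", lambda: time.sleep(0.05)),
    ("scroll_page",   lambda: time.sleep(0.02)),
]
for r in run_sequence(steps):
    print(r["step"], round(r["duration_s"], 2))
```

Because every step carries a start time relative to the same clock, the noisy-neighbor base load and the primary session's actions can be placed on one common timeline.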
The results collected in the previous phase are fed into a side-by-side video player with data overlay, called the "Sync Player", which presents screen videos and performance data in a way that is easy to understand and interpret. This allows the analysis of the most important remote end-user experience factors, such as user interface response times, screen refresh cycles (frame rates), supported graphics formats and media types, perceived graphics and media performance, media synchronism ("lip sync"), noticeable media distortion caused by codecs, and performance under varying network conditions.
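Overlaying performance data on a video boils down to aligning two time series: for each video frame, find the telemetry sample recorded closest to the moment the frame was captured. A minimal sketch of that alignment, with illustrative sample data and field names:

```python
import bisect

# Sketch: align telemetry samples (e.g. host CPU load) to video frames by
# timestamp, so each frame can be overlaid with the nearest measurement.

def nearest_sample(frame_ts, sample_times, samples):
    """Return the sample whose timestamp is closest to frame_ts.

    sample_times must be sorted ascending and parallel to samples.
    """
    i = bisect.bisect_left(sample_times, frame_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(samples)]
    best = min(candidates, key=lambda j: abs(sample_times[j] - frame_ts))
    return samples[best]

sample_times = [0.0, 1.0, 2.0, 3.0]            # telemetry sampled at 1 Hz
samples = [{"cpu": 10}, {"cpu": 55}, {"cpu": 70}, {"cpu": 40}]

print(nearest_sample(1.4, sample_times, samples))  # {'cpu': 55}
print(nearest_sample(2.6, sample_times, samples))  # {'cpu': 40}
```

With this mapping in place, every frame of the screen video can be annotated with the host-side load that was measured at (nearly) the same instant.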
After analyzing the data collected during the previous phase, you can draw your conclusions and publish your findings. Check out sample results visualized by the EUC Score Sync Player.