EUC Score Terminology
Technical or special terms used in the context of EUC Score experiments.
[Installation] [Experiments] [Simloads]
[Dataset Analysis] [Persona Types] [Screen Artifacts]
[Test Result Naming] [Test Setups] [Acronyms]
- An EUC Score installation package (developed with Inno Setup) contains everything needed to run
the setup user interface, to install or uninstall selected EUC Score components, to copy or delete media files,
and to create, modify or remove registry keys.
- The installation scope is defined when installing the EUC Score Base Package. It can be installed
just for the current user, which does not require administrator privileges and creates all necessary registry entries
in HKEY_CURRENT_USER. In multi-user environments, it must be installed for all users of the machine,
which requires local administrator privileges and creates all necessary registry entries in HKEY_LOCAL_MACHINE
(see the sketch below).
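The exact registry layout of the Base Package is not spelled out in this glossary. Purely as a minimal sketch, assuming a hypothetical key name, the installation scope could be detected in Python like this:

```python
# Minimal sketch: detect the EUC Score installation scope by probing both
# registry hives. The key path "Software\EUCScore" is a hypothetical
# placeholder; the real key name is defined by the Base Package installer.
import winreg

HYPOTHETICAL_KEY = r"Software\EUCScore"

def installation_scope() -> str:
    """Return 'per-user', 'all-users', or 'not installed'."""
    for hive, scope in ((winreg.HKEY_CURRENT_USER, "per-user"),
                        (winreg.HKEY_LOCAL_MACHINE, "all-users")):
        try:
            with winreg.OpenKey(hive, HYPOTHETICAL_KEY):
                return scope
        except FileNotFoundError:
            continue
    return "not installed"

if __name__ == "__main__":
    print(installation_scope())
```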
- An EUC Score experiment is organized and conducted by an experienced test engineer (or QA engineer)
with the goal of collecting performance data and supporting or refuting a hypothesis.
- An EUC Score test engineer designs test plans, scenarios, scripts, or procedures and conducts EUC Score
experiments. An important aspect of this role is to provide well-founded feedback to system architects and developers
on system performance and perceived user experience. The test engineer should be good at disciplined work, problem sensitivity,
reasoning, systematic reviews, written expression, and information ordering.
- A system under test (SUT) is a host, a virtual machine or a DaaS environment acting as the target system
in an EUC Score experiment. User sessions delivered by a target system are connected to endpoint devices over LAN or WAN.
Running synthetic or simulated workloads in a user session hosted on the SUT is considered host-side testing, while the screen
output is delivered to the endpoint device. Adding collectors that archive telemetry data during each test run is recommended.
- A hypervisor can be the host for a system under test. The hypervisor or virtual machine monitor (VMM) is a
piece of software that allows a physical machine to share its resources among virtual machines (VMs) running on top of
that physical hardware. The hypervisor creates, runs and manages VMs.
- An endpoint device is a physical or virtual machine from which a real or synthetic primary user interacts
with a remote session. Physical endpoints can be Windows PCs, Macs, Android devices, tablets or smartphones.
A prerequisite is installed remoting client software (such as Microsoft Remote Desktop Connection, Microsoft AVD Client,
Citrix Workspace App, or VMware Horizon Client). Connecting the endpoint device's video output to the
input of a video capture or frame grabber device is required for recording the screen output of simulated workloads.
Running screen recorder software installed on a virtual endpoint device is also an option.
- A load generator is an optional separate system hosting virtual endpoints for secondary user sessions ("noisy neighbors") and
running persona workloads that generate predefined and consistent load on a multi-user target system. Depending on the test cases,
it may be necessary to launch the secondary user sessions from the command line or via Group Policy, as sketched below.
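As one hedged illustration of a command-line launch, assuming Microsoft's RDP client and placeholder host and session values (credential handling and logon automation are out of scope here):

```python
# Minimal sketch: launch a batch of secondary user sessions ("noisy
# neighbors") via Microsoft's RDP client. TARGET_HOST and SESSION_COUNT are
# placeholders; real setups may use saved .rdp files, other remoting
# clients, or Group Policy instead.
import subprocess
import time

TARGET_HOST = "sut.example.local"  # placeholder SUT address
SESSION_COUNT = 5                  # number of secondary sessions

for _ in range(SESSION_COUNT):
    # mstsc.exe /v:<host> connects to the specified remote computer.
    subprocess.Popen(["mstsc.exe", f"/v:{TARGET_HOST}"])
    time.sleep(10)  # stagger the logons to avoid a logon storm
```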
- A video capture device is a hardware device, typically connected over USB, that converts the video signal from the output
of an endpoint device into a digital video feed that a recorder computer can process.
- A screen video recorder is a sufficiently powerful PC or laptop connected to the output of a video capture device
and capable of running video recording software, such as OBS Studio. The screen video resolution must be full HD or higher at 60
frames per second. The resulting digital data are referred to as a digital video stream and can be recorded as a video file.
- Metrics are measures of quantitative assessment, such as telemetry data, commonly used for assessing, comparing,
and tracking performance or production. Typically, a group of metrics is used to build a dashboard that management
or analysts review on a regular basis to maintain performance assessments, opinions, and business strategies.
- A benchmark dataset is a collection of host and/or endpoint metrics, test infrastructure settings and capabilities,
(synthetic) user activities, session states, screen videos, and log files, used to analyze perceived user performance and create
visualizations.
- A Simload (= simulated workload) mimics (synthetic) user activities representing typical interactive
digital work patterns. Technically, a Simload is a compiled AutoIt executable that can be registered and launched on
a guest VM or a physical Windows workstation. A Simload opens one or multiple applications, starts logging timestamped
events and information, simulates user interactions by controlling the applications, and after a predetermined time terminates
all applications under control and stops logging (see the lifecycle sketch below).
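Real Simloads are compiled AutoIt executables; purely to illustrate the lifecycle just described (launch, log, interact, terminate), here is a hypothetical Python sketch with placeholder application and timing values:

```python
# Illustrative Simload lifecycle. The application, logging format, runtime,
# and interaction steps are hypothetical placeholders.
import subprocess
import time
from datetime import datetime

RUNTIME_SECONDS = 60  # predetermined runtime (placeholder)

def log(event: str) -> None:
    """Log a timestamped event, as a Simload does during a test run."""
    print(f"{datetime.now().isoformat()} {event}")

log("SIMLOAD START")
app = subprocess.Popen(["notepad.exe"])  # open one or multiple applications
log("APP LAUNCHED notepad.exe")

deadline = time.time() + RUNTIME_SECONDS
while time.time() < deadline:
    # A real Simload drives the application's UI here (keystrokes, mouse
    # clicks, menu commands) via AutoIt automation functions.
    log("USER ACTIVITY simulated interaction step")
    time.sleep(5)

app.terminate()  # terminate all applications under control
log("SIMLOAD END")
```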
- A test step represents a single atomic Simload function, action or user activity within a test scenario.
Examples are launching an application or pressing a button.
- A test scenario is a sequence or set of test steps forming an individual Simload.
- A primary Simload (= type 1 simulated workload) is an individual test scenario for the "primary user"
(= the user in focus). The test scenario of each type 1 Simload highlights a specific graphic or multimedia format
(GDI, DirectX, OpenGL or video), with some scenarios requiring a pre-installed application. Runtime is 30 to 180 seconds,
but technically longer times can be configured.
- A blended Simload or Persona (= type 2 simulated workload) is a sequence of chained
or overlaid user activities, orchestrated in such a way that they generate the characteristic behavior and consistent load
pattern of a predefined interactive user type. Such a user type can be used to simulate secondary users (= users not in focus)
or "noisy neighbors". Examples of personas are Author, Knowledge Worker, CAD/CAM Professional, Power User or Media Designer.
Most type 2 Simloads launch and control multiple applications simultaneously, with a chain of subsequent foreground applications
running in predefined or random order and multiple background applications running in parallel. The typical runtime is
60 to 90 minutes, but technically both shorter and longer times can be configured.
- A score Simload (= type 3 simulated workload) runs test applications and measures predefined system metrics
used to produce a number (= score) that represents the performance. Typically, each score Simload is associated with a specific
theme, such as graphics output performance or application load time.
Under the same test conditions, the precision of a score Simload is usually better than 5%: running a score Simload
repeatedly on a consistent target system will produce scores that fall within a 5% range.
The individual scores collected for each theme can be used to calculate an overall score, as the sketch below illustrates.
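As a sketch of both statements, the 5% precision claim can be checked and per-theme scores combined; the geometric-mean aggregation below is an assumption for illustration, not the documented EUC Score method:

```python
# Minimal sketch: verify run-to-run precision of a score Simload and combine
# per-theme scores into an overall score. All numbers are hypothetical, and
# the geometric mean is only one plausible aggregation choice.
from statistics import geometric_mean

def within_precision(scores: list[float], tolerance: float = 0.05) -> bool:
    """True if repeated scores fall within the given relative range."""
    return (max(scores) - min(scores)) / min(scores) <= tolerance

# Hypothetical repeated runs of one score Simload on a consistent SUT:
graphics_runs = [412.0, 405.5, 418.2]
print(within_precision(graphics_runs))  # True: spread is about 3%

# Hypothetical per-theme scores combined into an overall score:
theme_scores = {"graphics": 410.0, "app_load_time": 375.0, "storage": 440.0}
print(round(geometric_mean(theme_scores.values()), 1))
```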
- A system Simload (= type 0 simulated workload) collects system settings and capabilities, such as operating system,
screen resolution, maximum frame rate, CPU type, number of CPU cores, available memory, storage capacity and GPU capabilities.
- A test run is the execution of one individual Simload while collecting telemetry data, logging session states
and user activities, and recording a screen video.
- The test setup is the setting of the test environment with all necessary specification details.
This includes the selection of Simloads used for benchmarking.
- A test sequence, test iteration or run collection is a predefined list or
set of simulated workloads that are executed one after another.
All Simloads in such a test sequence are executed under identical environment settings and conditions (test setup) while
collecting the benchmark dataset (= collecting telemetry data, logging session states and user activities, and recording screen videos).
- A runbook or playbook runs a customizable sequence of Simloads over a configurable time period, as the sketch below illustrates.
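A minimal sketch of such a runbook, assuming each Simload is available as a registered executable (names, order and the overall period are placeholders; real runbooks also coordinate telemetry collection and screen recording):

```python
# Minimal runbook sketch: execute a customizable sequence of Simloads over a
# configurable time period. Simload names and timing are placeholders.
import subprocess
import time

RUNBOOK = ["SimloadGDI.exe", "SimloadDirectX.exe", "SimloadVideo.exe"]
TOTAL_MINUTES = 30  # configurable overall time period

deadline = time.time() + TOTAL_MINUTES * 60
while time.time() < deadline:
    for simload in RUNBOOK:
        if time.time() >= deadline:
            break
        # Each iteration is one test run of an individual Simload.
        subprocess.run([simload], check=False)
```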
- A Simload phalanx is a group of user sessions simultaneously running a blended Simload, typically to create secondary
"noisy neighbor" load while the "primary user" session is under test observation.
- A persona is a fictional character created to represent a group of users who use a digital workspace in a similar way.
- Data visualization is the graphical representation of information and data. By using visual elements like videos,
text output panels, charts, graphs, and color-coded maps, data visualization tools provide an accessible way to see and understand
trends, outliers, and patterns in data.
- The Sync Player (fka Side-by-Side Player) is an HTML5 data visualization and analytics tool
used for the visual representation and time-correlated animation of individual EUC Score benchmark datasets.
The Sync Player client area is divided into two foreground media output tiles at the top (each including a title bar),
a numerical data visualization area at the bottom, and a control and status bar in the footer.
- A Sync Player quadrant or tile is a visualization element found in the Sync Player user interface.
It's a rectangular box that contains one or multiple videos, animations, charts, text fields or other data visualization elements.
- A Sync Player clip is defined by an HTML file with links to one or two correlated video files
for synchronized playback of individual EUC Score test results. The rule of thumb is to keep Sync Player clips under one minute,
but length may vary depending on where it's shared or embedded.
- A Sync Player report is an HTML file including test setup information and findings that can be opened in the
Sync Player user interface as an overlay of the two tiles that represent the numerical data visualization area.
- Build scripts and the Build Wizard are used to create Sync Player clips by reading
test data from EUC Score benchmark datasets, referencing screen video files and producing stand-alone HTML5 apps
(see the sketch below).
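As a heavily simplified sketch of what such a build script does, assuming a hypothetical dataset folder layout and a bare-bones HTML template (the real Build Wizard emits a much richer HTML5 app):

```python
# Minimal build-script sketch: reference one or two screen video files from
# a benchmark dataset and emit a stand-alone HTML file. Folder layout, file
# names, and the template are hypothetical placeholders.
from pathlib import Path

DATASET = Path("dataset")                   # placeholder dataset folder
videos = sorted(DATASET.glob("*.mp4"))[:2]  # one or two correlated videos

tiles = "\n".join(
    f'<video src="{v.name}" muted controls></video>' for v in videos
)
html = f"<!DOCTYPE html>\n<html><body>\n{tiles}\n</body></html>\n"
Path("clip.html").write_text(html, encoding="utf-8")
```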
- The Sync Studio is an interactive GUI tool to create Sync Player clips using EUC Score benchmark datasets.
- The EUC SUX Factor (EUC Score User Experience Factor) is a number representing the user experience,
with 0 being the best result and 10 being the worst.
- Kiosk User (aka Deskless Worker) [5-50% of IT workforce]: This persona type shares a desktop that is
placed in a public location. Examples include students using a shared computer in a classroom, nurses at nursing stations, and
computers used for job placement and recruiting. These desktops require automatic login. Authentication can be done through
certain applications if necessary.
- Task Worker (aka Data Entry Worker, Transaction Worker) [25-80% of IT workforce]: This persona type describes
users who perform well-defined, repetitive, and delineated tasks. These tasks are performed using a limited number of applications,
and the users may share devices with other users. They need to be able to create simple documents, fill in spreadsheets, and draft reports.
Examples of this persona include administrators, receptionists, bank tellers, call center workers and warehouse workers.
- Information Worker (aka Office Workers, Firstline Worker, Structured Task Worker, Knowledge Consumer, Integration Worker)
[25-80% of IT workforce]: This persona type tends to work only with data and information, not ideas. They create and consume, but
don't transform or manage information. They need to be able to find facts quickly, prepare presentations, and edit, write and process information.
Examples are bank clerks, call center operators, medical staff, sales associates, factory workers and customer service agents.
Information workers in supervisor roles include shop managers, bank managers and nursing supervisors.
- Knowledge Worker (aka Knowledge Broker, Expert, Collaboration Worker) [10-50% of IT workforce]: This persona type describes
people with a focus on activities around research, creation or re-use of knowledge by creating complex documents, presentations, and spreadsheets.
They want to be able to develop and improve processes and forms, encourage collaboration and create workspace environments.
In addition, they need to create, consume, transform and analyze data. Examples are middle/senior managers, accountants, sales managers,
consultants, research analysts, teachers and financial analysts.
- Power User (aka Developer, Data Analyst, Knowledge Generator) [5-50% of IT workforce]: This persona type describes people
whose activities are very compute-intensive and include the utilization of many applications at the same time.
Applications used in this scenario consume significant amounts of CPU, disk, memory, network resources, and (shared) GPUs at a given time.
The typical Power User work pattern may include visualizing models created by CAD/CAM Professionals or marketing materials produced by
Media Designers, and applying small changes, but without necessarily re-rendering the full model.
Examples are engineers, scientists, architects, educators, researchers, product marketing managers, traders in the financial industry,
software developers, data analysts and statisticians.
- CAD/CAM Professional [5-25% of IT workforce]: This persona type uses graphically intense applications for
computer-aided design (CAD) and computer-aided manufacturing (CAM). The scenario includes the utilization of individual CAD/CAM
applications that can handle 3D models with millions of 3D elements, consuming significant amounts of CPU, disk, memory or GPU resources
at a given time.
- Media Designer [5-50% of IT workforce]: This persona type creates the overall look and feel of a wide range of interactive
communication products and often uses text, data, graphics, sound, animation and other digital and visual effects. Media Designers plan,
visualize and create marketing materials based on business needs and guidance from team members in advertising, product development, sales,
promotions or other departments within a company. Examples are artists, graphic designers, web designers, animators and video editors.
- Mobile Worker (aka Work-from-Home User) [5-80% of IT workforce]: This persona type is generally applicable in combination
with one of the other personas. It concentrates on users who work from numerous locations on a given day and typically use multiple
devices during this time span. Examples include medical personnel, pilots and cabin crew, field sales professionals, railway staff and executives.
- A compression artifact is a noticeable distortion of the screen output caused by lossy data compression.
If the compressor cannot store enough video data in the compressed version or the network cannot provide enough bandwidth to deliver
the full video data stream, the result is a loss of quality, or introduction of artifacts.
- Block boundary artifacts are caused by the block-based discrete cosine transform (DCT)
compression algorithm used in digital video standards, such as JPEG and MPEG.
The resulting squares on the screen represent packets of data which have not been received or were lost in transmission.
Other names for this spatial effect are blocking, mosaicing, pixelating, quilting, and checkerboarding.
- Tiling is a spatial artifact caused by the process of subdividing a computer graphics image by a regular
grid in optical space and rendering each section of the grid, or tile, separately.
- Grime, smudge, or smear artifacts move with the optic flow of an object in a video.
The result looks like an airbrush or painting effect, as if the painter were rubbing across a pastel drawing.
- A blurry video makes discrete objects appear out of focus. This may be caused by the lack of resolution and
definition of the original video source. Blurring is also referred to as fuzziness or unsharpness.
This effect is a result of loss of high spatial frequency image detail, typically at sharp edges.
- Motion blur is the apparent streaking of moving objects in a sequence of video frames.
- Color artifacts, such as false colors or a temporarily wrong color lookup table (CLUT), are caused by
video decoding problems.
- Color bleeding occurs when the edges of one color in the image unintentionally bleed or overlap into another color.
- Mosquito noise or edge busyness gets its name from resembling a mosquito flying around a person's
head and shoulders. It's typified as haziness or shimmering around sharp transitions between foreground entities and the background.
- Ringing is also known as echoing or ghosting. It takes the form of a “halo,” band, or “ghost” near sharp edges.
During image decompression, there is insufficient data to form as sharp an edge as in the original.
- A choppy, laggy, jumpy or jerky video is either out-of-sync
or has glitches when you play it. This is a temporal artifact, resulting in uneven or wobbly motion.
- Floating refers to illusory motion in certain regions while the surrounding areas remain static.
This is the result of the video decoder erroneously skipping key frames or predictive frames.
- Jitter is the loss of transmitted data between network devices or a time delay in the sending of
video data packets over a network connection.
This may result in glitchy horizontal lines when video image frames are randomly displaced due to the corruption of synchronization signals.
- Flickering generally refers to frequent luminance or chrominance changes over time, similar to a candle's flame.
It is often broken out as fine-grain flickering and coarse-grain flickering.
- Slow motion means that the rendered frames of a remote desktop are shown more slowly than they should be.
- Video stuttering means that a video sometimes pauses or "buffers". This is a rendering anomaly that occurs when
the time between frames varies noticeably.
- A lag is a slow response by the computer, network, or any application.
- The response time is the time taken to transmit a user's request, process it by the computer,
transmit the response back to the endpoint device, and change the content on the display. Response time is frequently used
as a measure of the performance of an interactive system (see the worked sketch below).
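The four phases named in this definition add up; a worked sketch with placeholder millisecond values:

```python
# Minimal sketch: response time as the sum of the four phases named above.
# All millisecond values are placeholders for illustration only.
uplink_ms = 15      # transmit the user's request to the host
processing_ms = 40  # process the request on the computer
downlink_ms = 20    # transmit the response back to the endpoint device
render_ms = 16      # change the content on the display (~one 60 Hz frame)

response_time_ms = uplink_ms + processing_ms + downlink_ms + render_ms
print(response_time_ms)  # 91 ms end to end
```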
- A freeze frame is a paused video frame that halts the movement within the video. It essentially converts the
moving picture into a still shot for a given period.
- If a video codec is missing or not supported, it results in an error message or a black screen.
- Score: A number correct out of a total. Scoring is a fairly mechanical process.
- Grade: A general assessment based to an extent on subjective judgment, perhaps involving a number of
different aspects of performance. It could be a number, a letter or even a color. A grade is used specifically for
letters or numbers that recognize particular levels of achievement.
- Mark: A number, typically xx%. A mark is something you get in a test or exam or even on your homework.
It is used to evaluate academic work according to a scale of letters or numbers. A mark is defined as a number, letter,
or symbol used to indicate various grades of academic achievement.
- Point: Unit of measurement, achieved result on a test. A point is a numerical unit of academic achievement
equal to a letter grade.
- Capabilities: How good is the ability to execute a specified course of action or display a basic set of
graphics formats. The components and formats rated in this context are processor (CPU), primary disk and mounted drives (IOPS),
network (kbps), 2D graphics (GDI, GDI+), video (WMV, MP4) and 3D graphics (DirectX, OpenGL, OpenCL, WebGL, Vulkan).
- Baseline Performance: How good is the perceived user experience and a correlated telemetry data set in a
single user session - bare metal, remote session and virtual desktop. This is used to evaluate an environment's individual areas
in terms of supported media formats, speed and responsiveness. The typical timespan for a simulation sequence testing one particular
media format is 45 to 90 seconds, running all tests for the entire spectrum of media may take up to 30 minutes.
- Standard Load: How good is the perceived user experience and a correlated telemetry data set in a monitored
primary user session while a fixed number of secondary user sessions are generating a predefined background load. Test sequences
with different load profiles for the secondary sessions reflect common usage patterns. This subjects the environment to the kind
of access and usage rate expected in routine operation. The typical runtime of a secondary user sequence is 60 minutes, allowing
a series of individual 45 to 90-second test sequences to be conducted and recorded in the primary user session.
- Endurance: An endurance test simulates the behavior of a system under a sustained load from several hours to
several days. This helps bring to light memory leaks and buffer overflows.
- Scalability and Stress: How good is the perceived user experience and a correlated telemetry data set in
relation to access times when the number of users increases constantly until the system reaches a point of saturation
(= with unacceptable user interface response times or system errors).
The most extreme variant is simulating logon storms that generate excessive loads in an environment (= benign denial-of-service attacks).
- API: Application Programming Interface.
- AVD: Azure Virtual Desktop.
- AWS: Amazon Web Services.
- BYOD: Bring Your Own Device.
- CDN: Content Delivery Network.
- CSP: Cloud Service Provider.
- CVAD: Citrix Virtual Apps and Desktops.
- DaaS: Desktop as a Service (sometimes Device as a Service).
- DCV: Desktop Cloud Visualization (as in AWS NICE DCV).
- DEX: Digital Employee Experience.
- EUC: End User Computing.
- FRP: Frame Remoting Protocol.
- GCP: Google Cloud Platform.
- GPU: Graphics Processing Unit.
- GUI: Graphical User Interface.
- HDD: Hard Disk Drive.
- HDX: Citrix High Definition User Experience Protocol (on top of ICA).
- IaaS: Infrastructure as a Service.
- ICA: Citrix Independent Computing Architecture Protocol.
- IOPS: Input/Output Operations per Second.
- LAN: Local Area Network.
- MSP: Managed Service Provider.
- PaaS: Platform as a Service.
- PCoIP: Teradici PC-over-IP Protocol.
- QA: Quality Assurance.
- RDP: Microsoft Remote Desktop Protocol.
- RDS: Microsoft Remote Desktop Services.
- RUM: Real User Monitoring.
- SaaS: Software as a Service.
- SDK: Software Development Kit.
- SLA: Service Level Agreement.
- SSD: Solid State Drive.
- SUT: System Under Test.
- UX: User Experience.
- VDA: Citrix Virtual Delivery Agent.
- VDI: Virtual Desktop Infrastructure.
- VM: Virtual Machine.
- WAN: Wide Area Network.
- WFH: Work From Home.