stress-ng will stress test a computer system in various selectable ways. It was designed to exercise various physical subsystems of a computer as well as the various operating system kernel interfaces. Stress-ng features:
- 310+ stress tests
- 80+ CPU specific stress tests that exercise floating point, integer, bit manipulation and control flow
- 20+ virtual memory stress tests
- 40+ file system stress tests
- 30+ memory/CPU cache stress tests
- portable: builds on Linux (Debian, Devuan, RHEL, Fedora, Centos, Slackware, OpenSUSE, Ubuntu, etc.), Solaris, FreeBSD, NetBSD, OpenBSD, DragonFlyBSD, Minix, Android, MacOS X, Serenity OS, GNU/Hurd, Haiku, Windows Subsystem for Linux and SunOS/Dilos/Solaris, with gcc, musl-gcc, clang, icc, icx, tcc and pcc.
- tested on alpha, armel, armhf, arm64, hppa, i386, m68k, mips32, mips64, power32, ppc64el, risc-v, sh4, s390x, sparc64, x86-64
stress-ng was originally intended to make a machine work hard and trip hardware issues such as thermal overruns as well as operating system bugs that only occur when a system is being thrashed hard. Use stress-ng with caution as some of the tests can make a system run hot on poorly designed hardware and also can cause excessive system thrashing which may be difficult to stop.
stress-ng can also measure test throughput rates; this can be useful to observe performance changes across different operating system releases or types of hardware. However, it has never been intended to be used as a precise benchmark test suite, so do NOT use it in this manner.
Running stress-ng with root privileges will adjust out-of-memory settings on Linux systems to make the stressors unkillable in low-memory situations, so use this judiciously. With the appropriate privilege, stress-ng can also adjust the ionice class and ionice level of its stressors; again, this should be used with care.
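As a sketch of the latter (option names taken from the stress-ng manual; the guard makes this a no-op on systems where stress-ng is not installed), an I/O stressor can be demoted to the lowest best-effort I/O priority so it only consumes disk bandwidth nothing else wants:

```shell
# Run one iomix stressor for 5 seconds at the lowest best-effort
# I/O priority (class "besteffort", level 7).
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --iomix 1 --ionice-class besteffort --ionice-level 7 --timeout 5s
else
    echo "stress-ng not installed; skipping"
fi
```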
Tarballs of each version of stress-ng can be downloaded using the URL:
https://github.com/ColinIanKing/stress-ng/tarball/version
where version is the relevant version number, for example:
https://github.com/ColinIanKing/stress-ng/tarball/V0.13.05
Alternatively, stress-ng can be run from a container image without installing it:
docker run --rm ghcr.io/colinianking/stress-ng --help
or
docker run --rm colinianking/stress-ng --help
Recent versions of stress-ng are available in the Ubuntu stress-ng ppa for various Ubuntu releases:
https://launchpad.net/~colin-king/+archive/ubuntu/stress-ng
sudo add-apt-repository ppa:colin-king/stress-ng
sudo apt update
sudo apt install stress-ng
To build, the following packages will ensure a fully functional stress-ng build (note that libattr is not required for more recent distro releases):
Debian, Ubuntu:
- gcc
- g++
- libaio-dev
- libapparmor-dev
- libatomic1
- libattr1-dev
- libbsd-dev
- libcap-dev
- libeigen3-dev
- libgbm-dev
- libgcrypt-dev
- libglvnd-dev
- libipsec-mb-dev
- libjpeg-dev
- libjudy-dev
- libkeyutils-dev
- libkmod-dev
- libmd-dev
- libmpfr-dev
- libsctp-dev
- libxxhash-dev
- zlib1g-dev
RHEL, Fedora, Centos:
- gcc
- g++
- eigen3-devel
- Judy-devel
- keyutils-libs-devel
- kmod-devel
- libaio-devel
- libatomic
- libattr-devel
- libbsd-devel
- libcap-devel
- libgbm-devel
- libgcrypt-devel
- libglvnd-core-devel
- libglvnd-devel
- libjpeg-devel
- libmd-devel
- mpfr-devel
- libX11-devel
- libXau-devel
- libxcb-devel
- lksctp-tools-devel
- xorg-x11-proto-devel
- xxhash-devel
- zlib-devel
RHEL, Fedora, Centos (static builds):
- gcc
- g++
- eigen3-devel
- glibc-static
- Judy-devel
- keyutils-libs-devel
- libaio-devel
- libatomic-static
- libattr-devel
- libbsd-devel
- libcap-devel
- libgbm-devel
- libgcrypt-devel
- libglvnd-core-devel
- libglvnd-devel
- libjpeg-devel
- libmd-devel
- libX11-devel
- libXau-devel
- libxcb-devel
- lksctp-tools-devel
- mpfr-devel
- xorg-x11-proto-devel
- xxhash-devel
- zlib-devel
SUSE:
- gcc
- gcc-c++
- eigen3-devel
- keyutils-devel
- libaio-devel
- libapparmor-devel
- libatomic1
- libattr-devel
- libbsd-devel
- libcap-devel
- libgbm-devel
- libglvnd-devel
- libjpeg-turbo
- libkmod-devel
- libmd-devel
- libseccomp-devel
- lksctp-tools-devel
- mpfr-devel
- xxhash-devel
- zlib-devel
ClearLinux:
- devpkg-eigen
- devpkg-Judy
- devpkg-kmod
- devpkg-attr
- devpkg-libbsd
- devpkg-libgcrypt
- devpkg-libjpeg-turbo
- devpkg-libsctp
- devpkg-mesa
Alpine Linux:
- build-base
- eigen-dev
- jpeg-dev
- judy-dev
- keyutils-dev
- kmod-dev
- libaio-dev
- libatomic
- libattr
- libbsd-dev
- libcap-dev
- libgcrypt-dev
- libmd-dev
- libseccomp-dev
- lksctp-tools-dev
- mesa-dev
- mpfr-dev
- xxhash-dev
- zlib-dev
NOTE: the build will try to detect build dependencies and will build an image with the corresponding functionality disabled if the support libraries are not installed.
At build-time stress-ng will detect kernel features that are available on the target build system and enable stress tests appropriately. Stress-ng has been build-tested on Ubuntu, Debian, Debian GNU/Hurd, Slackware, RHEL, SLES, Centos, kFreeBSD, OpenBSD, NetBSD, FreeBSD, Debian kFreeBSD, DragonFly BSD, OS X, Minix, Solaris 11.3, OpenIndiana and Haiku. Ports to other POSIX/UNIX-like operating systems should be relatively easy.
NOTE: ALWAYS run make clean after fetching changes from the git repository to force the build to regenerate the build configuration file. Parallel builds using make -j are supported.
To build on BSD systems, one requires gcc and GNU make:
CC=gcc gmake clean
CC=gcc gmake
To build on OS X systems, just use:
make clean
make -j
To build on MINIX, gmake and clang are required:
CC=clang gmake clean
CC=clang gmake
To build on SunOS, one requires GCC and GNU make, build using:
CC=gcc gmake clean
CC=gcc gmake
To build on Dilos, one requires GCC and GNU make, build using:
CC=gcc gmake clean
CC=gcc gmake
To build on Haiku Alpha 4:
make clean
make
To build a static image (example, for Android), use:
make clean
STATIC=1 make
To build with full warnings enabled:
make clean
PEDANTIC=1 make
To build with the Tiny C compiler:
make clean
CC=tcc make
To build with the PCC portable C compiler use:
make clean
CC=pcc make
To build with the musl C library:
make clean
CC=musl-gcc make
To build with the Intel C compiler icc use:
make clean
CC=icc make
To build with the Intel C compiler icx use:
make clean
CC=icx make
To perform a cross-compilation using gcc, use a static build and specify the toolchain (both CC and CXX). For example, a mips64 cross build:
make clean
STATIC=1 CC=mips64-linux-gnuabi64-gcc CXX=mips64-linux-gnuabi64-g++ make -j $(nproc)
Send patches to colin.i.king@gmail.com or merge requests at https://github.com/ColinIanKing/stress-ng
The Ubuntu stress-ng reference guide contains a brief overview and worked examples.
Run 4 CPU, 2 virtual memory, 1 disk and 8 fork stressors for 2 minutes and print measurements:
stress-ng --cpu 4 --vm 2 --hdd 1 --fork 8 --timeout 2m --metrics
stress-ng: info: [573366] setting to a 120 second (2 mins, 0.00 secs) run per stressor
stress-ng: info: [573366] dispatching hogs: 4 cpu, 2 vm, 1 hdd, 8 fork
stress-ng: info: [573366] successful run completed in 123.78s (2 mins, 3.78 secs)
stress-ng: info: [573366] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s CPU used per
stress-ng: info: [573366] (secs) (secs) (secs) (real time) (usr+sys time) instance (%)
stress-ng: info: [573366] cpu 515396 120.00 453.02 0.18 4294.89 1137.24 94.42
stress-ng: info: [573366] vm 2261023 120.01 223.80 1.80 18840.15 10022.27 93.99
stress-ng: info: [573366] hdd 367558 123.78 10.63 11.67 2969.49 16482.42 18.02
stress-ng: info: [573366] fork 598058 120.00 68.24 65.88 4983.80 4459.13 13.97
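The two bogo ops/s columns in the table above are simply the bogo op count divided by wall-clock time and by CPU (usr+sys) time respectively. For the cpu stressor row this works out as follows (the tiny difference against the printed real-time figure comes from unrounded internal timings):

```shell
# Recompute the cpu row's rate columns: ops / real time and
# ops / (usr + sys) CPU time.
awk 'BEGIN {
    ops = 515396; real = 120.00; usr = 453.02; sys = 0.18
    printf "bogo ops/s (real time):    %.2f\n", ops / real
    printf "bogo ops/s (usr+sys time): %.2f\n", ops / (usr + sys)
}'
```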
Run matrix stressor on all online CPUs for 60 seconds and measure temperature:
stress-ng --matrix -1 --tz -t 60
stress-ng: info: [1171459] setting to a 60 second run per stressor
stress-ng: info: [1171459] dispatching hogs: 8 matrix
stress-ng: info: [1171459] successful run completed in 60.01s (1 min, 0.01 secs)
stress-ng: info: [1171459] matrix:
stress-ng: info: [1171459] acpitz0 75.00 C (348.15 K)
stress-ng: info: [1171459] acpitz1 75.00 C (348.15 K)
stress-ng: info: [1171459] pch_skylake 60.17 C (333.32 K)
stress-ng: info: [1171459] x86_pkg_temp 62.72 C (335.87 K)
Run a mix of 4 I/O stressors and check for changes in disk S.M.A.R.T. metadata:
sudo stress-ng --iomix 4 --smart -t 30s
stress-ng: info: [1171471] setting to a 30 second run per stressor
stress-ng: info: [1171471] dispatching hogs: 4 iomix
stress-ng: info: [1171471] successful run completed in 30.37s
stress-ng: info: [1171471] Device ID S.M.A.R.T. Attribute Value Change
stress-ng: info: [1171471] sdc 01 Read Error Rate 88015771 71001
stress-ng: info: [1171471] sdc 07 Seek Error Rate 59658169 92
stress-ng: info: [1171471] sdc c3 Hardware ECC Recovered 88015771 71001
stress-ng: info: [1171471] sdc f1 Total LBAs Written 481904395 877
stress-ng: info: [1171471] sdc f2 Total LBAs Read 3768039248 5139
stress-ng: info: [1171471] sdd be Temperature Difference 3670049 1
Benchmark system calls using the VDSO:
stress-ng --vdso 1 -t 5 --metrics
stress-ng: info: [1171584] setting to a 5 second run per stressor
stress-ng: info: [1171584] dispatching hogs: 1 vdso
stress-ng: info: [1171585] stress-ng-vdso: exercising vDSO functions: clock_gettime time gettimeofday getcpu
stress-ng: info: [1171585] stress-ng-vdso: 9.88 nanoseconds per call (excluding 1.73 nanoseconds test overhead)
stress-ng: info: [1171584] successful run completed in 5.10s
stress-ng: info: [1171584] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s CPU used per
stress-ng: info: [1171584] (secs) (secs) (secs) (real time) (usr+sys time) instance (%)
stress-ng: info: [1171584] vdso 430633496 5.10 5.10 0.00 84375055.96 84437940.39 99.93
stress-ng: info: [1171584] vdso 9.88 nanoseconds per call (average per stressor)
Generate and measure branch misses using perf metrics:
sudo stress-ng --branch 1 --perf -t 10 --stdout | grep Branch
stress-ng: info: [1171714] 604,703,327 Branch Instructions 53.30 M/sec
stress-ng: info: [1171714] 598,760,234 Branch Misses 52.77 M/sec (99.02%)
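The percentage in parentheses is just the miss count divided by the branch instruction count:

```shell
# Branch miss rate = misses / branch instructions, as a percentage.
awk 'BEGIN { printf "%.2f%%\n", 100 * 598760234 / 604703327 }'
```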
stress-ng has found kernel and QEMU bugs/regressions, and appropriate fixes have been landed to address these issues:
2015:
- KEYS: ensure we free the assoc array edit if edit is valid
- proc: fix -ESRCH error when writing to /proc/$pid/coredump_filter
- SMP divide error
2016:
- fs/locks.c: kernel oops during posix lock stress test
- sched/core: Fix a race between try_to_wake_up() and a woken up task
- devpts: fix null pointer dereference on failed memory allocation
- arm64: do not enforce strict 16 byte alignment to stack pointer
2017:
- ARM: dts: meson8b: add reserved memory zone to fix silent freezes
- ARM64: dts: meson-gx: Add firmware reserved memory zones
- ext4: lock the xattr block before checksuming it
- rcu_preempt detected stalls on CPUs/tasks
- BUG: unable to handle kernel NULL pointer dereference
- WARNING: possible circular locking dependency detected
2018:
- Illumos: ofdlock(): assertion failed: lckdat->l_start == 0
- debugobjects: Use global free list in __debug_check_no_obj_freed()
- ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap
- virtio/s390: fix race in ccw_io_helper()
2019:
- mm/page_idle.c: fix oops because end_pfn is larger than max_pfn
- mm: compaction: avoid 100% CPU usage during compaction when a task is killed
- mm/vmalloc.c: preload a CPU with one object for split purpose
- perf evlist: Use unshare(CLONE_FS) in sb threads to let setns(CLONE_NEWNS) work
- riscv: reject invalid syscalls below -1
2020:
- RISC-V: Don't allow write+exec only page mapping request in mmap
- riscv: set max_pfn to the PFN of the last page
- crypto: hisilicon - update SEC driver module parameter
- net: atm: fix update of position index in lec_seq_next
- sched/debug: Fix memory corruption caused by multiple small reads of flags
- ocfs2: ratelimit the 'max lookup times reached' notice
- using perf can crash kernel with a stack overflow
- stress-ng on gcov enabled focal kernel triggers OOPS
- kernel bug list_del corruption on s390x from stress-ng mknod and stress-ng symlink
2021:
- sparc64: Fix opcode filtering in handling of no fault loads
- opening a file with O_DIRECT on a file system that does not support it will leave an empty file
- locking/atomic: sparc: Fix arch_cmpxchg64_local()
- btrfs: fix exhaustion of the system chunk array due to concurrent allocations
- btrfs: rework chunk allocation to avoid exhaustion of the system chunk array
- btrfs: fix deadlock with concurrent chunk allocations involving system chunks
- locking/atomic: sparc: Fix arch_cmpxchg64_local()
- pipe: do FASYNC notifications for every pipe IO, not just state changes
- io-wq: remove GFP_ATOMIC allocation off schedule out path
- mm/swap: consider max pages in iomap_swapfile_add_extent
- block: loop: fix deadlock between open and remove
- tmpfs: O_DIRECT | O_CREAT open reports open failure but actually creates a file
2022:
- copy_process(): Move fd_install() out of sighand->siglock critical section
- minix: fix bug when opening a file with O_DIRECT
- arch/arm64: Fix topology initialization for core scheduling
- running stress-ng on Minix 3.4.0-RC6 on amd64 assert in vm/region.c:313
- unshare test triggers unhandled page fault
- request_module DoS
- NUMA Benchmark Regression In Linux 5.18
- Underflow in mas_spanning_rebalance() and test
- mm/huge_memory: do not clobber swp_entry_t during THP split
- AppArmor: -42.5% regression of stress-ng.kill.ops_per_sec due to commit
- clocksource: Suspend the watchdog temporarily when high read latency detected
2023:
- qemu-system-m68k segfaults on opcode 0x4848
- rtmutex: Ensure that the top waiter is always woken up
- mm/swap: fix swap_info_struct race between swapoff and get_swap_pages()
- block, bfq: Fix division by zero error on zero wsum
- riscv: mm: Ensure prot of VM_WRITE and VM_EXEC must be readable
- Revert "mm: vmscan: make global slab shrink lockless"
- crash/hang in mm/swapfile.c:718 add_to_avail_list when exercising stress-ng
- mm: fix zswap writeback race condition
- x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4
- kernel/fork: beware of __put_task_struct() calling context
- arm64: dts: ls1028a: add l1 and l2 cache info
- filemap: add filemap_map_order0_folio() to handle order0 folio
- mm: shrinker: add infrastructure for dynamically allocating shrinker
- mm: shrinker: make global slab shrink lockless
- bcachefs: Clear btree_node_just_written() when node reused or evicted
stress-ng has also been used to develop and test the following kernel performance improvements:
2020:
- selinux: complete the inlining of hashtab functions
- selinux: store role transitions in a hash table
- sched/rt: Optimize checking group RT scheduler constraints
- sched/fair: handle case of task_h_load() returning 0
- sched/deadline: Unthrottle PI boosted threads while enqueuing
- mm: fix madvise WILLNEED performance problem
- powerpc/dma: Fix dma_map_ops::get_required_mask
- stress-ng close causes kernel oops(es) v5.6-rt and v5.4-rt
2021:
- Revert "mm, slub: consider rest of partial list if acquire_slab() fails"
- mm: memory: add orig_pmd to struct vm_fault
- selftests/powerpc: Add test of mitigation patching
- dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()
- mm/migrate: optimize hotplug-time demotion order updates
- powerpc/rtas: rtas_busy_delay() improvements
2022:
- sched/core: Accounting forceidle time for all tasks except idle task
- ipc/mqueue: use get_tree_nodev() in mqueue_get_tree()
2023:
- mm/swapfile: add cond_resched() in get_swap_pages()
- module: add debug stats to help identify memory pressure
- module: avoid allocation if module is already present and ready
- sched: Interleave cfs bandwidth timers for improved single thread performance at low utilization
Presentations about stress-ng:
- Stress-ng presentation at ELCE 2019 Lyon
- Video of the above presentation
- Linux Foundation Mentoring Session, May 2022
- Kernel Recipes presentation, Sept 2023
Citations of stress-ng in published research:
2015:
- Enhancing Cloud energy models for optimizing datacenters efficiency
- Tejo: A Supervised Anomaly Detection Scheme for NewSQL Databases
- CoMA: Resource Monitoring of Docker Containers
- An Investigation of CPU utilization relationship between host and guests in a Cloud infrastructure
2016:
- Increasing Platform Determinism PQOS DPDK
- Towards Energy Efficient Data Management in HPC: The Open Ethernet Drive Approach
- CPU and memory performance analysis on dynamic and dedicated resource allocation using XenServer in Data Center environment
- How Much Power Does your Server Consume? Estimating Wall Socket Power Using RAPL Measurements
- DevOps for IoT Applications using Cellular Networks and Cloud
- A Virtual Network Function Workload Simulator
- Characterizing and Reducing Cross-Platform Performance Variability Using OS-level Virtualization
- How much power does your server consume? Estimating wall socket power using RAPL measurements
- UIE: User-centric Interference Estimation for Cloud Applications
2017:
- Auto-scaling of Containers: the impact of Relative and Absolute Metrics
- Testing the Windows Subsystem for Linux
- Practical analysis of the Precision Time Protocol under different types of system load
- Towards Virtual Machine Energy-Aware Cost Prediction in Clouds
- Algorithms and Architectures for Parallel Processing
- Advanced concepts and tools for renewable energy supply of Data Centres
- Monitoring and Modelling Open Compute Servers
- Experimental and numerical analysis for potential heat reuse in liquid cooled data centres
- Modeling and Analysis of Performance under Interference in the Cloud
- Effectively Measure and Reduce Kernel Latencies for Real time Constraints
- Monitoring and Analysis of CPU load relationships between Host and Guests in a Cloud Networking Infrastructure
- Measuring the impacts of the Preempt-RT patch
- Reliable Library Identification Using VMI Techniques
- Elastic-PPQ: A two-level autonomic system for spatial preference query processing over dynamic data stream
- OpenEPC integration within 5GTN as an NFV proof of concept
- Time-Aware Dynamic Binary Instrumentation
- Experience Report: Log Mining using Natural Language Processing and Application to Anomaly Detection
- Mixed time-criticality process interferences characterization on a multicore Linux system
- Cloud Orchestration at the Level of Application
2018:
- Multicore Emulation on Virtualised Environment
- Stress-SGX : Load and Stress your Enclaves for Fun and Profit
- quiho: Automated Performance Regression Testing Using Inferred Resource Utilization Profiles
- Hypervisor and Virtual Machine Memory Optimization Analysis
- Real-Time testing with Fuego
- FECBench: An Extensible Framework for Pinpointing Sources of Performance Interference in the Cloud-Edge Resource Spectrum
- Quantifying the Interaction Between Structural Properties of Software and Hardware in the ARM Big.LITTLE Architecture
- RAPL in Action: Experiences in Using RAPL for Power Measurements
2019:
- Performance Isolation of Co-located Workload in a Container-based Vehicle Software Architecture
- Analysis and Detection of Cache-Based Exploits
- kMVX: Detecting Kernel Information Leaks with Multi-variant Execution
- Scalability of Kubernetes Running Over AWS
- A study on performance measures for auto-scaling CPU-intensive containerized applications
- Scavenger: A Black-Box Batch Workload Resource Manager for Improving Utilization in Cloud Environments
- Estimating Cloud Application Performance Based on Micro-Benchmark Profiling
2020:
- Performance and Energy Trade-Offs for Parallel Applications on Heterogeneous Multi-Processing Systems
- C-Balancer: A System for Container Profiling and Scheduling
- Modelling VM Latent Characteristics and Predicting Application Performance using Semi-supervised Non-negative Matrix Factorization
- Semi-dynamic load balancing: efficient distributed learning in non-dedicated environments
- A Performance Analysis of Hardware-assisted Security Technologies
- Green Cloud Software Engineering for Big Data Processing
- Real-Time Detection for Cache Side Channel Attack using Performance Counter Monitor
- Subverting Linux’ Integrity Measurement Architecture
- Real-time performance assessment using fast interrupt request on a standard Linux kernel
- Low Energy Consumption on Post-Moore Platforms for HPC Research
- Managing Latency in Edge-Cloud Environment
- Demystifying the Real-Time Linux Scheduling Latency
2021:
- Streamline: A Fast, Flushless Cache Covert-Channel Attack by Enabling Asynchronous Collusion
- Experimental Analysis in Hadoop MapReduce: A Closer Look at Fault Detection and Recovery Techniques
- Performance Characteristics of the BlueField-2 SmartNIC
- Evaluating Latency in Multiprocessing Embedded Systems for the Smart Grid
- Work-in-Progress: Timing Diversity as a Protective Mechanism
- Sequential Deep Learning Architectures for Anomaly Detection in Virtual Network Function Chains
- WattEdge: A Holistic Approach for Empirical Energy Measurements in Edge Computing
- PTEMagnet: Fine-Grained Physical Memory Reservation for Faster Page Walks in Public Clouds
- The Price of Meltdown and Spectre: Energy Overhead of Mitigations at Operating System Level
- An Empirical Study of Thermal Attacks on Edge Platforms
- Sage: Practical & Scalable ML-Driven Performance Debugging in Microservices
- A Generalized Approach For Practical Task Allocation Using A MAPE-K Control Loop
- Towards Independent Run-time Cloud Monitoring
- FIRESTARTER 2: Dynamic Code Generation for Processor Stress Tests
- Performance comparison between a Kubernetes cluster and an embedded system
- Performance Exploration of Virtualization Systems
2022:
- A general method for evaluating the overhead when consolidating servers: performance degradation in virtual machines and containers
- FedComm: Understanding Communication Protocols for Edge-based Federated Learning
- Achieving Isolation in Mixed-Criticality Industrial Edge Systems with Real-Time Containers
- Design and Implementation of Machine Learning-Based Fault Prediction System in Cloud Infrastructure
- The TSN Building Blocks in Linux
- uKharon: A Membership Service for Microsecond Applications
- Evaluating Secure Enclave Firmware Development for Contemporary RISC-V Workstations
- Evaluation of Real-Time Linux on RISC-V processor architecture
- Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86
- Don’t Mesh Around: Side-Channel Attacks and Mitigations on Mesh Interconnects
2023:
- Fight Hardware with Hardware: System-wide Detection and Mitigation of Side-Channel Attacks using Performance Counters
- Introducing k4.0s: a Model for Mixed-Criticality Container Orchestration in Industry 4.0
- A Comprehensive Study on Optimizing Systems with Data Processing Units
- Estimating Cloud Application Performance Based on Micro-Benchmark Profiling
- PSPRAY: Timing Side-Channel based Linux Kernel Heap Exploitation Technique
- Robust and accurate performance anomaly detection and prediction for cloud applications: a novel ensemble learning-based framework
- Feasibility Study for a Python-Based Embedded Real-Time Control System
- Adaptation of Parallel SaaS to Heterogeneous Co-Located Cloud Resources
- A Methodology and Framework to Determine the Isolation Capabilities of Virtualisation Technologies
- Data Station: Delegated, Trustworthy, and Auditable Computation to Enable Data-Sharing Consortia with a Data Escrow
- An Empirical Study of Resource-Stressing Faults in Edge-Computing Applications
- Finding flaky tests in JavaScript applications using stress and test suite reordering
- The Power of Telemetry: Uncovering Software-Based Side-Channel Attacks on Apple M1/M2 Systems
- A Performance Evaluation of Embedded Multi-core Mixed-criticality System Based on PREEMPT RT Linux
- Data Leakage in Isolated Virtualized Enterprise Computing Systems
- Considerations for Benchmarking Network Performance in Containerized Infrastructures
- EnergAt: Fine-Grained Energy Attribution for Multi-Tenancy
- Quantifying the Security Profile of Linux Applications
- Gotham Testbed: a Reproducible IoT Testbed for Security Experiments and Dataset Generation
- Profiling with Trust: System Monitoring from Trusted Execution Environments
- Thermal-Aware on-Device Inference Using Single-Layer Parallelization with Heterogeneous Processors
- Towards Fast, Adaptive, and Hardware-Assisted User-Space Scheduling
- Heterogeneous Anomaly Detection for Software Systems via Semi-supervised Cross-modal Attention
- Green coding: an empirical approach to harness the energy consumption of software services
- Enhancing Empirical Software Performance Engineering Research with Kernel-Level Events: A Comprehensive System Tracing Approach
- Cloud White: Detecting and Estimating QoS Degradation of Latency-Critical Workloads in the Public Cloud
- Dynamic Resource Management for Cloud-native Bulk Synchronous Parallel Applications
- Towards Serverless Optimization with In-place Scaling
- A Modular Approach to Design an Experimental Framework for Resource Management Research
- Targeted Deanonymization via the Cache Side Channel: Attacks and Defenses
- Validating Full-System RISC-V Simulator: A Systematic Approach
I am keen to add to the stress-ng project page any citations to research or projects that use stress-ng. I also appreciate information concerning kernel bugs or performance regressions found with stress-ng.
Many thanks to the following contributors to stress-ng (in alphabetical order):
Abdul Haleem, Aboorva Devarajan, Adriand Martin, Adrian Ratiu, Aleksandar N. Kostadinov, Alexander Kanavin, Alexandru Ardelean, Alfonso Sánchez-Beato, Allen H, Andrey Gelman, André Wild, Anisse Astier, Anton Eliasson, Arjan van de Ven, Baruch Siach, Bryan W. Lewis, Camille Constans, Carlos Santos, Christian Ehrhardt, Christopher Brown, Chunyu Hu, Danilo Krummrich, Davidson Francis, David Turner, Dominik B Czarnota, Dorinda Bassey, Eder Zulian, Eric Lin, Erik Stahlman, Erwan Velu, Fabien Malfoy, Fabrice Fontaine, Fernand Sieber, Florian Weimer, Francis Laniel, Guilherme Janczak, Hui Wang, Hsieh-Tseng Shen, Iyán Méndez Veiga, James Hunt, Jan Luebbe, Jianshen Liu, John Kacur, Jules Maselbas, Julien Olivain, Kenny Gong, Khalid Elmously, Khem Raj, Luca Pizzamiglio, Luis Chamberlain, Luis Henriques, Matthew Tippett, Mauricio Faria de Oliveira, Maxime Chevallier, Max Kellermann, Maya Rashish, Mayuresh Chitale, Meysam Azad, Mike Koreneff, Nick Hanley, Paul Menzel, Piyush Goyal, Ralf Ramsauer, Rosen Penev, Rulin Huang, Siddhesh Poyarekar, Shoily Rahman, Thadeu Lima de Souza Cascardo, Thia Wyrod, Thinh Tran, Tim Gardner, Tim Gates, Tim Orling, Tommi Rantala, Witold Baryluk, Yong-Xuan Wang, Zhiyi Sun.