(RFE?) Hyperthread NUMA topology not presented
lcarstensen opened this issue · 4 comments
Consider, on a 2017 MacBook Pro running macOS Sierra:
$ sysctl -n machdep.cpu.brand_string
Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
$ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 15
Server Version: 17.06.2-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.41-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.952GiB
Name: moby
ID: V7IC:5SQA:ZEIA:GNI3:QAON:ZM2I:7UYA:VFXS:3547:YUZG:VR2S:XWYE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 26
Goroutines: 46
System Time: 2017-09-20T18:36:38.74907427Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
$ docker run -it --rm centos /usr/bin/lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
Stepping: 9
CPU MHz: 2300.000
BogoMIPS: 4608.00
Hypervisor vendor: vertical
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 4096K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 hle avx2 bmi2 erms rtm xsaveopt arat
The host part is a single-socket, 2-core, 4-thread CPU, and the two vCPUs are host threads; yet the guest presents a 2-socket system with one single-threaded core per socket. Software that uses the reported topology to place threads for effective L1/L2 cache sharing will make the wrong decisions under this runtime, as the sketch below illustrates. Are there configuration options available in hyperkit/xhyve that would allow us to present the CPU/NUMA topology correctly, or is this an RFE?
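For concreteness, here is a minimal sketch (plain C, reading the standard Linux sysfs topology files; hypothetical, not taken from any particular application) of how topology-aware software discovers the layout. In this guest it reports two packages with one single-threaded core each, so there is never a sibling pair to co-schedule on a shared cache:

#include <stdio.h>
#include <string.h>

/* Print the package/core/sibling layout one CPU at a time, from the
 * same sysfs files that hwloc and friends parse. */
static void show_cpu(int cpu)
{
    const char *fields[] = { "physical_package_id", "core_id",
                             "thread_siblings_list" };
    char path[128], buf[64];

    printf("cpu%d:", cpu);
    for (size_t i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/%s",
                 cpu, fields[i]);
        FILE *f = fopen(path, "r");
        if (f == NULL)
            continue;
        if (fgets(buf, sizeof(buf), f) != NULL) {
            buf[strcspn(buf, "\n")] = '\0';
            printf(" %s=%s", fields[i], buf);
        }
        fclose(f);
    }
    printf("\n");
}

int main(void)
{
    for (int cpu = 0; cpu < 2; cpu++) /* the guest above has CPUs 0-1 */
        show_cpu(cpu);
    return 0;
}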
I am not sure this would be helpful: the virtual CPUs are not pinned by default to any particular host core/hyperthread, so the topology is not even constant; the guest's "cores" are host threads, not host cores.
I suspected that. I'm a new xhyve/HyperKit user, so apologies in advance, but under Linux KVM/libvirt/etc. what's required is CPU pinning (e.g. https://www.intel.com/content/www/us/en/communications/smarter-cpu-pinning-openstack-nova-brief.html). Is anything similar available here?
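For context on what "CPU pinning" means mechanically: under KVM, libvirt's <vcpupin> ultimately sets the affinity mask of the vCPU thread. A minimal sketch of that Linux primitive, with a made-up helper name, and not anything HyperKit currently exposes:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread, e.g. a vCPU thread, to a single host CPU.
 * pin_self_to() is a hypothetical name used only for illustration. */
static int pin_self_to(int host_cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(host_cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set); /* 0 = calling thread */
}

int main(void)
{
    if (pin_self_to(1) != 0)
        perror("sched_setaffinity");
    return 0;
}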
There is a thread affinity API for macOS, it seems: https://developer.apple.com/library/content/releasenotes/Performance/RN-AffinityAPI/#//apple_ref/doc/uid/TP40006635-CH1-DontLinkElementID_2
So it should be possible to do something here. I don't know whether bhyve has done this on FreeBSD; if it has, we should be able to reuse any NUMA table setup code.
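For anyone exploring that route, a minimal sketch of the Mach affinity API from those release notes. Note that it is a scheduler hint grouping threads by cache affinity rather than hard pinning, and set_affinity_tag() is a made-up helper name:

#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <stdio.h>

/* Tag the calling thread with an affinity set. Threads sharing a
 * non-zero tag are hinted to run on cores that share a cache; distinct
 * tags hint separation. This is advisory, unlike Linux
 * sched_setaffinity(). */
static kern_return_t set_affinity_tag(integer_t tag)
{
    thread_affinity_policy_data_t policy = { tag };

    return thread_policy_set(mach_thread_self(),
                             THREAD_AFFINITY_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_AFFINITY_POLICY_COUNT);
}

int main(void)
{
    /* Hypothetically, a VMM could give each vCPU thread its own tag so
     * the scheduler keeps them on separate cache domains. */
    if (set_affinity_tag(1) != KERN_SUCCESS)
        fprintf(stderr, "thread_policy_set failed\n");
    return 0;
}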
bhyve does support pinning vCPUs, but there is no NUMA table setup behind it. The socket-vs-core presentation can be set via global tunables (see the sketch below); it's mainly useful for desktop Windows guests, which are capped at 1 or 2 CPU sockets but allow a much larger number of cores per socket.
(There is a review open to support a QEMU-style sockets/cores/threads configuration in bhyve: https://reviews.freebsd.org/D9930.)
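For reference, a sketch of probing those global tunables from the FreeBSD host. The sysctl names are my recollection of vmm(4) and should be treated as assumptions; check the man page before relying on them:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Read one integer vmm(4) topology tunable by name. The names passed
 * in main() below are assumed, not verified against this system. */
static void show_tunable(const char *name)
{
    int val;
    size_t len = sizeof(val);

    if (sysctlbyname(name, &val, &len, NULL, 0) == 0)
        printf("%s = %d\n", name, val);
    else
        printf("%s: not present on this system\n", name);
}

int main(void)
{
    show_tunable("hw.vmm.topology.cores_per_package");
    show_tunable("hw.vmm.topology.threads_per_core");
    return 0;
}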