Unexpected Quit/Crash while dimmed
artifishvr opened this issue · 12 comments
Using sleep mode: I activated sleep mode and, after a while, the display dimmed. OyasumiVR then quit in the background and the dimming snapped back to off. This has happened 3 times now.
Log - OyasumiVR_2023-12-22_03-19-00.log
Status Information
{
"OyasumiVR Application": {
"Version": "1.11.1-STANDALONE",
"Build ID": "4b630b2",
"Elevated Sidecar": "Not running",
"Overlay Sidecar": "Running"
},
"OpenVR": {
"SteamVR": "Running",
"Devices": "64"
},
"OSC & OSCQuery": {
"OSC Host": "127.0.0.1",
"OSC Port": "64716",
"OSCQuery Host": "192.168.0.13",
"OSCQuery Port": "55576"
},
"HTTP & gRPC": {
"Core HTTP Port": "55568",
"Core gRPC Port": "55569",
"Core gRPC Web Port": "55570",
"Elevated Sidecar gRPC Port": "Unknown",
"Elevated Sidecar gRPC Web Port": "Unknown",
"Overlay Sidecar gRPC Port": "55600",
"Overlay Sidecar gRPC Web Port": "55601"
},
"VRChat": {
"Client": "Running",
"Login Status": "Logged in",
"User": "ArtificialVR",
"OSC Host": "100.109.118.69",
"OSC Port": "9000",
"OSCQuery Host": "127.0.0.1",
"OSCQuery Port": "51208"
}
}
lemme know what else i can provide :D
Hi, thanks for the report! I can tell you you're not the only one experiencing this, I'm trying to track down the problem.
I notice the log file you provided has its most recent logs from the 13th. Do you happen to have any log files that have more recent entries that I could look at?
1.11.2 will be rolling out in a few minutes. While it won't fix the issue, it will include some crash reporting that might help me figure out what's going on.
> Do you happen to have any log files that have more recent entries that I could look at?
Sorry about that! I was sleepy and totally sent the wrong file, then passed out shortly after. Here's the logfile I should have sent.
Here's brand new logs from the crash on 1.11.2.
I see a couple of errors in your log, though there's a good chance they don't actually have anything to do with your problem. The one related to the Windows power policy definitely doesn't, but I'll quickly fix that issue anyway. The stream of network response errors at the end worries me a bit, though. It's likely related to the new OSCQuery functionality. Sadly, I don't currently have an option to fully turn that off, so right now we can't test whether it's actually the cause behind your crashes, but I'll be adding an option for it so that we can.
In the meantime one question: When enabling the sleep mode, the relevant errors start about 10 minutes later, followed by the end of the log. Does the crash happen at the same time?
Edit: (1.11.3 is rolling out with various fixes, I doubt it but maybe it has some effect)
The crash happened almost exactly 10 minutes after starting the transition from prepare sleep mode to sleep mode. Interestingly, the logs don't appear to include any of the network errors this time. I'll try to recreate on 1.11.3 soon.
Latest OyasumiVR.log
Also, an interesting note is that in the previous logs, the IP in the errors is always the one associated with my PC on my Tailscale Network. Not sure if that helps.
Hmm gotcha. I see in those latest logs that the brightness transition finishes over a minute before it crashes, so I feel like that can't be related.
As for the network errors, they're internal to the `mdns-sd` crate. I'm not going to pretend I fully understand what's going on there internally; however, them appearing right at the same time as the crash is kind of suspicious. I'll be adding a debugging option in a future patch version to completely disable mDNS functionality, so we can verify whether it's actually related or not.
However, if that were to be the cause, it would likely not be connected to the sleep mode. I wonder if this also eventually occurs while the sleep mode is still disabled?
As a sidenote, 1.11.3 also dumps information about any panic to a `panic.log`. Not in the log folder, but right next to the `OyasumiVR.exe` executable. I don't expect it to show up, since I don't seem to be receiving any related crash reports either, but it might be worth checking whether it's there the next time this happens to you.
> I wonder if this also eventually occurs while the sleep mode is still disabled?
Yep! I started Oyasumi and just left it without activating sleep mode, and it did crash after some time.
OyasumiVR.log
No `panic.log` to send.
Alright, I just released 1.11.4. There's a couple of things:

- It's downgraded back to Tauri v1.4. v1.5 seemed to have some issues that were causing quiet (as well as non-quiet) crashes for some people. Because of this, it could be that v1.11.4 already resolves your issue, so please test it before trying the following options!
- In case the crashing is caused by `mdns-sd`, I've now added a flag to completely disable mDNS functionality. Next to `OyasumiVR.exe`, you'll find a new file named `flags.toml`. If you set the `DISABLE_MDNS` flag in this file to `true` and restart OyasumiVR, you'll have disabled mDNS functionality completely. Note that if you do this, all OSC and OSCQuery related functionality will mostly cease to function. Not very practical, but if OyasumiVR stays alive after enabling this flag, it's a pretty good indicator that the crashes are actually caused by something mDNS related.
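Assuming the flag uses standard TOML boolean syntax (I haven't confirmed the exact contents of the generated file), the relevant entry would look something like this:

```toml
# flags.toml — next to OyasumiVR.exe
# Disables all mDNS functionality (and with it, most OSC/OSCQuery features).
DISABLE_MDNS = true
```

Restart OyasumiVR after saving the file for the flag to take effect.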
- You can now also disable all OSC and OSCQuery functionality with a toggle in the advanced settings. If the above two options don't work, this is something you can try disabling to see if it helps. Note that you should restart OyasumiVR after turning this option off, to make sure the mDNS service goes down with it too.
Version 1.11.4 did not fix the crash on its own. Enabling `DISABLE_MDNS` in the flags file and disabling the toggle in advanced settings may have fixed it. After 3 hours, Oyasumi was still open. I'm continuing to test it.
I think I finally have something a bit more concrete, and reproducible. tl;dr: I've started working on implementing some optimizations that might mitigate this; you'll find the first attempt at this on the beta branch on Steam. You'll know you have the right build if the build ID is `0d00f5d`. Edit: This has now also been released in v1.11.5.
More details
It seems Tauri currently has an issue where applications built with it crash when the calling rates for its commands and events (communication between the Rust and JS sides) get too high. This is something I've been able to reproduce, and the error code I end up with seems consistent with other reports I've been receiving.
The heaviest users of events are currently:
- Updating SteamVR device properties
- Sending over positional data (for sleep pose tracking)
- Sending over received OSC data
This issue being the cause would explain a few things:
- Why it only happens when SteamVR and/or VRChat are active
- Why disabling MDNS keeps things more stable for you. (You're not receiving any OSC data)
- Why only some people are getting it and others aren't:
- The amount of OSC data received varies from avatar to avatar
- The amount of device data communicated depends on the amount of active SteamVR devices
- Multiple instances of VRChat active on the network could result in a lot more data being received
In the new beta version mentioned above, I've already heavily brought down the number of events being transmitted, which, if nothing else, saves some CPU time. If you could test whether this solves your issue, that would be fantastic.
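To illustrate the kind of reduction described above: one common way to bring down event rates is to batch high-frequency updates and only emit them across the Rust/JS boundary at a fixed interval. This is a self-contained sketch of that idea, not OyasumiVR's actual code; `EventBatcher` and its API are hypothetical names.

```rust
use std::time::{Duration, Instant};

/// Collects events and only releases them as a batch once a minimum
/// interval has elapsed, cutting down the number of emissions.
struct EventBatcher<T> {
    buffer: Vec<T>,
    last_flush: Instant,
    interval: Duration,
}

impl<T> EventBatcher<T> {
    fn new(interval: Duration) -> Self {
        Self {
            buffer: Vec::new(),
            last_flush: Instant::now(),
            interval,
        }
    }

    /// Queue an event; returns a batch to emit if the interval has elapsed,
    /// or None if the event was merely buffered.
    fn push(&mut self, event: T) -> Option<Vec<T>> {
        self.buffer.push(event);
        if self.last_flush.elapsed() >= self.interval {
            self.last_flush = Instant::now();
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}

fn main() {
    let mut batcher = EventBatcher::new(Duration::from_millis(100));
    // First event is buffered: the interval hasn't elapsed yet.
    assert_eq!(batcher.push("osc-message-1"), None);
    std::thread::sleep(Duration::from_millis(120));
    // After the interval, the next push flushes everything buffered so far.
    let batch = batcher.push("osc-message-2");
    assert_eq!(batch, Some(vec!["osc-message-1", "osc-message-2"]));
    println!("flushed batch of {} events", batch.unwrap().len());
}
```

Instead of one emission per OSC message or device update, the frontend then receives one batched event per interval, regardless of how fast data arrives.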
For now, I'll be focusing on:
- Profiling commands, to see if any further optimizations can be made to reduce the amount of calls being made (I can think of a couple)
- Rewriting the sleeping pose detection in Rust, so positional data no longer has to be sent over at all
- Adding a filtering option for OSC messages, so OyasumiVR will only parse OSC messages from 1 VRChat instance on the network at max.
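For the OSC filtering idea in the last bullet, one conceivable approach is to pin OSC input to a single source address and drop packets from any other instance on the network. The sketch below is purely illustrative; `OscSourceFilter` is a hypothetical name, not OyasumiVR's implementation.

```rust
use std::net::SocketAddr;

/// Pins OSC input to the first source that sends a packet,
/// dropping packets from any other instance on the network.
struct OscSourceFilter {
    allowed: Option<SocketAddr>,
}

impl OscSourceFilter {
    fn new() -> Self {
        Self { allowed: None }
    }

    /// Returns true if a packet from `source` should be parsed.
    fn accept(&mut self, source: SocketAddr) -> bool {
        match self.allowed {
            None => {
                // The first instance seen becomes the pinned source.
                self.allowed = Some(source);
                true
            }
            Some(pinned) => pinned == source,
        }
    }
}

fn main() {
    let mut filter = OscSourceFilter::new();
    let a: SocketAddr = "127.0.0.1:9000".parse().unwrap();
    let b: SocketAddr = "192.168.0.20:9000".parse().unwrap();
    assert!(filter.accept(a)); // first instance gets pinned
    assert!(!filter.accept(b)); // a second instance is ignored
    assert!(filter.accept(a)); // the pinned instance is still accepted
    println!("only packets from {} are parsed", a);
}
```

A real implementation would also need a way to un-pin the source when the pinned instance goes away, but the core idea is just this comparison per incoming packet.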
One more thing: if you do run into another crash after this, would you be able to check the Windows Event Viewer and pass along some details of the crash? You can find it under `Event Viewer (Local) -> Windows Logs -> Application`. There you can Ctrl+F for "OyasumiVR" to find the right logs.
Since the release of 1.11.5, the reports of any crashes seem to have stopped coming. Therefore I'm going to assume the fixes in that update have resolved this issue, so I'll close this issue. If someone still runs into this problem, please feel free to open a new issue.