zorgiepoo/Bit-Slicer

debugger defaults to ARM on x86 executable

notahacker8 opened this issue · 7 comments

I'm running an Intel application on Apple Silicon and Bit Slicer automatically assumes the app is using the ARM architecture.

In what way / context exactly?

Rosetta 2 translates the x86_64 executable to an arm64 one ahead of time, so the actual code that gets executed and can be debugged is arm64. Bit Slicer does not support 'mapping back' to the x86_64 instruction set like lldb can (and I don't know how to implement such a thing).
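As an aside, macOS does expose whether a process is running under Rosetta 2 translation. A minimal sketch (not Bit Slicer's actual code) querying the `sysctl.proc_translated` sysctl for the current process via `ctypes`:

```python
import ctypes
import ctypes.util
import sys

def process_is_translated():
    """Return True if the current process runs under Rosetta 2,
    False if it runs natively, or None if undetermined
    (non-macOS, or a macOS release without this sysctl)."""
    if sys.platform != "darwin":
        return None
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    value = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(value))
    ret = libc.sysctlbyname(b"sysctl.proc_translated",
                            ctypes.byref(value), ctypes.byref(size),
                            None, 0)
    if ret != 0:
        return None  # sysctl not present; process is native
    return bool(value.value)
```

For another process (the debug target), the equivalent check is reading the target's `kinfo_proc` via `kern.proc.pid` and testing the `P_TRANSLATED` flag, which needs more plumbing than shown here.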

Let's say I find a function in memory. In x86_64, the first bytes are usually 55 48 89 e5..., which represent x86_64 instructions. Bit Slicer, however, assumes it is an arm64 executable and disassembles completely wrong instructions.
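To illustrate the mismatch at the byte level: 55 48 89 e5 is the classic x86_64 prologue (`push rbp; mov rbp, rsp`), but an arm64 disassembler reads those same four bytes as a single little-endian 32-bit instruction word. A small sketch of a prologue heuristic (hypothetical helper, not part of Bit Slicer):

```python
import struct

# Canonical x86_64 function prologue: push rbp; mov rbp, rsp
X86_64_PROLOGUE = bytes([0x55, 0x48, 0x89, 0xE5])

def looks_like_x86_64_function(code: bytes) -> bool:
    """Crude heuristic: does this memory start with the common
    x86_64 prologue byte sequence?"""
    return code.startswith(X86_64_PROLOGUE)

# An arm64 disassembler would instead decode the same bytes as the
# single 32-bit little-endian word 0xE5894855 -- an unrelated (or
# invalid) arm64 instruction, hence the garbage disassembly.
word, = struct.unpack("<I", X86_64_PROLOGUE)
```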

You might be looking at the untranslated code. How are you "finding" the function, by doing a scan of "known bytes" or by setting a watchpoint (watching for accesses) or debugger breakpoint?

I suppose Bit Slicer can switch between disassembling x86_64 and arm64 depending on what region is being viewed. I don't know how tricky that is. Barring implementing that, arm64 is the right default still for debugging running code.

On another note, all of this will go away when games become Apple Silicon native. Rosetta 2 support isn't the best. The same thing happened with Rosetta 1 support back in the day.

It would be nice if there was an option to select an architecture to debug with.

I think the correct way is to use the x86_64 disassembler if the instructions are in a mapped Mach-O segment.
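One way to decide is to inspect the Mach-O header of the mapped image: the `cputype` field right after the magic says which architecture the segment's code was compiled for. A minimal sketch (constants from `<mach-o/loader.h>` and `<mach/machine.h>`; the function name is illustrative):

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF       # 64-bit Mach-O, host-endian
CPU_TYPE_X86_64 = 0x01000007   # CPU_TYPE_X86 | CPU_ARCH_ABI64
CPU_TYPE_ARM64 = 0x0100000C    # CPU_TYPE_ARM | CPU_ARCH_ABI64

def arch_from_mach_header(header: bytes):
    """Read the first 8 bytes of a mapped Mach-O image
    (uint32 magic, int32 cputype, little-endian on macOS)
    and return 'x86_64', 'arm64', or None if unrecognized."""
    if len(header) < 8:
        return None
    magic, cputype = struct.unpack_from("<Ii", header)
    if magic != MH_MAGIC_64:
        return None
    if cputype == CPU_TYPE_X86_64:
        return "x86_64"
    if cputype == CPU_TYPE_ARM64:
        return "arm64"
    return None
```

The harder part is mapping an arbitrary memory address back to the image it belongs to; regions holding Rosetta-generated arm64 code won't sit inside the original x86_64 image's segments.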

However, this doesn't sound extremely useful to me anyway. Most likely, any changes there will have no effect since the real arm64 code being executed is somewhere else.

I will address this in #80. First the "automatic" mode should be smarter about picking the right architecture depending on the memory address being looked at. Then there's an option to change disassembly interpretation in the Debug menu to either Intel or ARM.