My first step in digging more into the problem was to analyze the Ruby binary itself, which is found at /usr/bin on a macOS system:

```
~ ruby -v
ruby 2.6.3p62 (revision 67580)
```

As usual with the preinstalled version from Apple, it was an old version but, interestingly, a universal binary instead of an emulated-x86 one.

For analyzing universal binaries, Apple provides Lipo, a command-line program to create or operate on universal files (more information).

I dumped detailed information about the binary:

```
~ lipo -detailed_info /usr/bin/ruby
Fat header in: /usr/bin/ruby
fat_magic 0xcafebabe
nfat_arch 2
architecture x86_64
    cputype CPU_TYPE_X86_64
    cpusubtype CPU_SUBTYPE_X86_64_ALL
    capabilities 0x0
    offset 16384
    size 56624
    align 2^14 (16384)
architecture arm64e
    cputype CPU_TYPE_ARM64
    cpusubtype CPU_SUBTYPE_ARM64E
    capabilities PTR_AUTH_VERSION USERSPACE 0
    offset 81920
    size 56432
    align 2^14 (16384)
```

So it seemed we had a normal universal binary that'd automatically choose the correct binary for the system's architecture - in my case, the one for ARM64. But running:

```
~ ruby -e 'require "rbconfig"; pp RbConfig::CONFIG' | grep "host"
"host_os"=>"darwin20",
"host_vendor"=>"apple",
"host_cpu"=>"x86_64",
"host"=>"x86_64-apple-darwin20",
"host_alias"=>"",
```

Everything seemed fine, except we'd expect arm64 or arm64e in the field host_cpu on line 3 of the listing, as we're running the program on an ARM processor.

Note: A lot of programs seem to use and rely on the RbConfig module to fetch information about the system they're running on.

After that, it was quite clear that the problem was not the FFI library itself but something deeper inside the Ruby installation provided by Apple.

With these new insights, I suspected the universal binary wasn't choosing the correct binary part for the architecture it was running on. To verify that, I wanted to make sure my test was running with an ARM Ruby binary - but how do you do that?
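One way is to ask the kernel instead of Ruby. As a sketch (my own addition, not a step from the original investigation): macOS exposes a sysctl key, sysctl.proc_translated, that reports whether the calling process is running under Rosetta 2 translation.

```sh
# Run the check from inside the Ruby process itself: the key reports 1
# when the process is translated by Rosetta 2 and 0 when it runs
# natively; -i keeps sysctl quiet on Intel Macs, where the key is absent.
ruby -e 'case `sysctl -in sysctl.proc_translated`.strip
         when "1" then puts "translated (x86_64 slice via Rosetta 2)"
         when "0" then puts "native (arm64/arm64e slice)"
         else          puts "no translation info (likely an Intel Mac)"
         end'
```

This asks the kernel directly, so the answer doesn't depend on anything RbConfig claims about the host.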
The Lipo tool has another interesting feature: you can thin a universal binary into its architectural binary parts. From the manual: "Take one input file and create a thin output file with the specified arch_type. This command requires the -output option."

So I thinned the binary from Apple and sliced out the ARM part to make sure I was using ARM code only. Running this:

```
~ lipo -thin arm64e -output ruby-thin /usr/bin/ruby
```

…resulted in a new file (ruby-thin), which seemed to be pure ARM-binary code:

```
~ lipo -archs ruby-thin
arm64e
```

But running the same test from earlier again with this new binary:

```
~ ./ruby-thin -e 'require "rbconfig"; pp RbConfig::CONFIG' | grep "host"
```

got me no further. I'm no Ruby developer and have little experience with the language itself, which only made things harder and left me more clueless.

The idea of compiling Ruby myself came into my head, as it should have been a definitive way to get an ARM binary, so I downloaded the source and started making my own Ruby binary. I didn't want the resulting binary to be installed on the system, so I only ran:

```
~ ./configure && make
```

Analyzing the binary showed that it was a pure ARM64 binary:

```
~ ./ruby -v
ruby 3.0.0p0 (revision 95aff21468)
~ lipo -detailed_info ruby
input file ruby is not a fat file
Non-fat file: ruby is architecture: arm64
```

Unfortunately, running this binary wasn't as easy as I thought and resulted in an error:

```
~ ./ruby -e 'require "rbconfig"; pp RbConfig::CONFIG' | grep "host"
`RubyGems' were not loaded.
-e:1:in `require': cannot load such file -- rbconfig (LoadError)
	from -e:1:in `<main>'
```

Ruby was more complicated to run and needed to be installed in a way that I wanted to avoid.
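That LoadError happens because the uninstalled interpreter can't find its generated standard-library files when launched straight out of the build tree. As a hedged sketch of two workarounds (my own suggestions, not steps from this walkthrough): Ruby's build tree provides a runruby make target for running the uninstalled interpreter, and configure accepts a throwaway --prefix so a full install never touches the system.

```sh
# Two ways around the LoadError, assuming we are still inside the Ruby
# build directory (the prefix path below is illustrative):

# 1. Let the build tree wire up load paths for the uninstalled binary.
make runruby

# 2. Or install into a disposable prefix instead of the system.
./configure --prefix="$HOME/.ruby-arm64" && make && make install
"$HOME/.ruby-arm64/bin/ruby" -e 'require "rbconfig"; puts RbConfig::CONFIG["host_cpu"]'
```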