Post by Denys Vlasenko
[Requesting program interpreter: /lib/ld-uClibc.so.0]
You'll notice that's a hardwired absolute path. If you check all the other
binaries on your system (including the ones your host came with), you'll
notice they have hardwired absolute paths for this too.
Post by Denys Vlasenko
I can copy or symlink them to ones in cross-compiler-x86_64/lib
and it will start working.
Unfortunately that's just the way the dynamic linker works.
Post by Denys Vlasenko
But I can't make it for more than one
cross-compiling toolchain at once, right?
Each dynamic binary needs an absolute path to its dynamic linker. The kernel
loads this directly, so it doesn't have a search path, in much the same way
/sbin/hotplug hasn't got a search path when the kernel launches that. Such a
search path would be putting policy into the kernel.
There's an online book on linking that covers this:
http://www.iecc.com/linker/linker10.html
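You can see the hardwired path for yourself with readelf from binutils (the binary name here is just an example; any dynamically linked executable on your system should do):

```shell
# Print the PT_INTERP program header, which stores the absolute path
# to the dynamic linker the kernel loads for this binary.
readelf -l /bin/sh | grep interpreter
```

On a glibc x86-64 host this typically shows something like /lib64/ld-linux-x86-64.so.2; a uClibc target shows /lib/ld-uClibc.so.0 instead.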
Post by Denys Vlasenko
I can use either cross-compiler-i686 or cross-compiler-x86_64,
but not both at once. But that would be useful.
The wrapper I use is actually run-time configurable with environment variables.
When compiling stuff:
export CCWRAP_DYNAMIC_LINKER=/lib-uClibc-x86_64/ld-uClibc.so.0
Then copy all the appropriate shared libraries to that directory (or whatever
name you prefer to use). The uClibc dynamic linker will look in the directory
the dynamic linker is installed in as one of its default locations, see line
286 of uClibc/ldso/ldso/dl-elf.c:
	/* Look for libraries wherever the shared library loader
	 * was installed */
	_dl_if_debug_dprint("\tsearching ldso dir='%s'\n", _dl_ldsopath);
	if ((tpnt1 = search_for_named_library(libname, secure, _dl_ldsopath,
			rpnt)) != NULL)
	{
		return tpnt1;
	}
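Putting that together, a session might look something like this (the directory name and the wrapped compiler name are examples, not anything the build hardwires for you):

```shell
# Sketch: give the x86_64 toolchain its own dynamic linker directory.
# /lib-uClibc-x86_64 is an arbitrary absolute path (creating it under /
# needs root), and x86_64-cc stands in for whatever your wrapped cross
# compiler is actually called.
mkdir -p /lib-uClibc-x86_64
cp cross-compiler-x86_64/lib/*.so* /lib-uClibc-x86_64/
export CCWRAP_DYNAMIC_LINKER=/lib-uClibc-x86_64/ld-uClibc.so.0
x86_64-cc -o hello hello.c
readelf -l hello | grep interpreter  # should now show the new path
```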
The downside is of course that it'll fall back to looking in /lib if it
doesn't find the library it's looking for. The bane of cross compiling is
falling back to default locations at which the host headers and libraries
live. Making it _NOT_ do that is 95% of the game of whack-a-mole you wind up
playing trying to make this crap work.
But in the case of busybox, that shouldn't be too big an issue. You don't
have the "my cross compiler didn't have zlib installed so it found the host
library" issue because we don't use random external dependencies. You can't
leak random external dependencies if you don't _use_ them.
If you did want to hard-wire in both of these changes, you could change the
default path to the dynamic linker in ccwrap.c on line 197:
http://impactlinux.com/hg/hgwebdir.cgi/firmware/file/f3b242456ff7/sources/toys/ccwrap.c
And then you could fix the dynamic linker library search path fallback problem
by rebuilding ld-uClibc.so.0 with a different UCLIBC_RUNTIME_PREFIX, although
if you're going to delve into the horror that is uClibc's path logic, read
this first:
http://ibot.rikers.org/%23uclibc/20081210.html.gz
And then probably give up and just hardwire what you want into dl-elf.c line
299 or so, because it's going to add a hardwired "usr/lib" after the path you
give it, whether you want it to or not. (I believe I convinced bernhard to
stop doing this in current -git, but I haven't gotten around to testing the
new release candidate yet.)
Post by Denys Vlasenko
For example, in order to run randomconfig tests for both 32
and 64 bits in parallel overnight.
Build statically and it'll work fine? That's the easy way...
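For busybox specifically, one common recipe for that is an LDFLAGS override (the cross compiler prefix below is an example; check your tree's docs for the current knobs):

```shell
# Sketch: build a static busybox with a cross toolchain on the PATH.
# CROSS_COMPILE=x86_64- is illustrative; use your toolchain's prefix.
make defconfig
LDFLAGS="--static" make CROSS_COMPILE=x86_64-
file busybox  # a static binary has no "program interpreter" line at all
```

A static binary skips the dynamic linker entirely, so none of the path games above apply to it.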
The thing is, I'm not treating x86 or x86-64 specially. I'm treating 'em the
exact same way I treat mips and arm and such. Those won't run on your host,
you need to use the emulator to run them. I go ahead and use the emulator to
test i486 and such too, because the fact that i486 runs on the host doesn't
mean it'll run on a real 486, and yes some low-power embedded chips emulate a
486 but not the pentium instructions:
http://impactlinux.com/hg/hgwebdir.cgi/firmware/rev/1004
So I generally use a system image, or a chroot with application emulation and
dynamic linking, or I build statically and use application emulation.
A prominent design goal of these toolchains is to get all the architectures to
behave as similarly as possible. Having them use the same dynamic linker name
is part of that. When I build an x86-64 image it's fully 64-bit, with no 32
bit support. (Same as mips64, or the upcoming ppc64 I'm poking at.)
That's also why they don't multilib: you build with this toolchain, it should
produce the right output by default. If you want a different type of output,
use a different toolchain. (If I could get one toolchain to support all
targets, I'd build one and use wrapper scripts to feed in target flags. But
gcc wasn't designed with that in mind. You'll find "gcc wasn't designed with
that in mind" crops up a LOT when you start playing with it, it's their
unofficial motto, I think...)
It's not actually that hard to support "32 bit on 64 bit" sort of things. The
dynamic linker and default library search path are the main things. But I'm
trying to keep down the complexity and having each toolchain and each system
image support exactly one target is a big part of that. Not having two
contexts that can get confused with each other, thus no cross compiling
issues.
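If you did want to wire up that sort of thing by hand, the relevant linker flag is --dynamic-linker, plus an rpath so the matching libraries get found first (the paths here are made up for illustration, and this needs a toolchain that can actually target 32 bit):

```shell
# Sketch: aim a 32 bit build at its own dynamic linker and library
# directory, so it can coexist with 64 bit binaries on the same box.
# /lib-uClibc-i686 is a hypothetical path, not a real default.
gcc -m32 -Wl,--dynamic-linker=/lib-uClibc-i686/ld-uClibc.so.0 \
    -Wl,-rpath,/lib-uClibc-i686 -o hello hello.c
```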
Post by Denys Vlasenko
I know that various distros use different names, like /lib and /lib64,
to make it possible. How do they do it?
They hardwire a different path to the dynamic linker into each executable the
toolchain creates (which in this instance is controlled by ccwrap, see above).
And then they teach that dynamic linker to look for libraries in /lib64 by
default, instead of in /lib. (And then to make themselves feel better they
move the 32 bit libraries to lib32 and symlink /lib to /lib32, even though
nothing anywhere ever uses lib32 directly as a path. Presumably it helps them
sleep at night to pretend they haven't special-cased 64 bit out the wazoo to
work around legacy 32 bit binaries. "See, we abused 32 bit in the same way,
we just made sure it didn't matter in the slightest by symlinking all the
paths that are actually _used_ by the legacy binaries we're supporting to
point to the place we moved it." Makes me want to pat 'em on the head and go
"there there, lay on this couch, tell me about your mother"...)
Post by Denys Vlasenko
And do you think it might make sense for you
to use /lib-$CROSS instead of /lib for every (cross-)compiler,
making it possible to run many dynamically linked programs
against different sub-arches on the same machine?
I could, sure. But you'd still need to use the emulator to run 'em, at which
point running 'em in a chroot or via a system image makes about as much sense.
Post by Denys Vlasenko
This will be an overkill for the case when one runs just a plain
one-subarch, but it will still work for that case too, right?
It would work, yes.
Let's talk over the design issues at CELF next week. If you're serious about
this use case I can put a config option into my build to automate it for you,
but I'd like to demonstrate scriptable system images to you first. I think
they're a better way to do this sort of thing.
System images are nicely self contained, and don't require root access to run.
Adding stuff on the host is not self-contained, requires root access, tends to
bit rot, bypasses your distro's normal package tracking mechanisms (and even
if it didn't, packages are never tagged as "needed for this project" so
reproducing the setup on another machine is a pain). And application emulation
is inherently more brittle than system emulation anyway so you'll spend lots
of your time finding bugs in the _emulator_, not in busybox. (Less so now than
2 years ago, but still. In system emulation it generally either works
completely or not at all, no strange buggy halfway working states.)
Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds