
    using Freescale's sources to build an ltib image for Sabre Lite fails

    synthnassizer

      Hello everyone,

      I am following https://community.freescale.com/message/312022 . I have an Element14 Sabre Lite instead of one of the three boards that Freescale offers.

      I have downloaded the sources and docs and I am following the guide (found from L3.0.35_1.1.0_docs.tar.gz -> L3.0.35_1.1.0_docs -> L3.0.35_1.1.0_BSP_Documents -> SABRE_SD_User_Guide_L3.0.35_1.1.0.pdf )

      I have extracted L3.0.35_1.1.0_121218_source.tar.gz and installed LTIB by running ./install.

      I then move on to running:

      cd ltib

      ./ltib -m config

       

      However, the process fails while building wget (from host_config.log):

       

      gcc -O2 -Wall -Wno-implicit -o wget cmpt.o connect.o convert.o cookies.o ftp.o ftp-basic.o ftp-ls.o ftp-opie.o hash.o headers.o host.o html-parse.o html-url.o http.o init.o log.o main.o gen-md5.o netrc.o progress.o rbuf.o recur.o res.o retr.o safe-ctype.o snprintf.o gen_sslfunc.o url.o utils.o version.o -lssl -lcrypto -ldl

      gen_sslfunc.o: In function `init_ssl':

      gen_sslfunc.c:(.text+0x343): undefined reference to `SSLv2_client_method'

      collect2: ld returned 1 exit status

      make[1]: *** [wget] Error 1

      make[1]: Leaving directory `/opt/freescale/ltib/usr/src/rpm/BUILD/wget-1.9.1/src'

      make: *** [src] Error 2

      error: Bad exit status from /opt/slackaloo_nfs/freescale/ltib/tmp/rpm-tmp.73496 (%build)

       

       

      RPM build errors:

          Bad exit status from /opt/slackaloo_nfs/freescale/ltib/tmp/rpm-tmp.73496 (%build)

      Build time for wget: 14 seconds

       

      Failed building wget

      Died at ./ltib line 1392.

      traceback:

      main::build_host_rpms:1392

        main::host_checks:1447

         main:554

       

      The host is Linux Mint 12.04.

      I found that LTIB doesn't like Ubuntu 12.04 ( https://community.freescale.com/docs/DOC-93454 ), so I also tried removing the ltib directory, reinstalling, applying the patch and running ./ltib -m config once again. Still no luck.

       

      Has anyone seen this?

      I read https://community.freescale.com/message/289362#289362 , which refers to Ubuntu 11.10 and the i.MX53, so I was hoping the problem had been debugged since Ubuntu 11.10...

       

      Thank you for your help.

       

      EDIT: actually LTIB fails in other stages too. I deleted everything again, re-extracted from the archive, patched again for Ubuntu 12.04, and now it fails at:

       

      Processing: libtool

      =====================

       

      Processing: lkc

      =================

      Build path taken because: no prebuilt rpm,

      Testing network connectivity

      OK GPP:

       

      Try lkc-1.4-lib.patch.md5 from the GPP

      http://bitshrine.org/gpp//lkc-1.4-lib.patch.md5:

      2013-05-09 17:35:38 ERROR 404: Not Found.

      Try lkc-1.4-lib.patch from the GPP

      http://bitshrine.org/gpp//lkc-1.4-lib.patch:

      2013-05-09 17:35:38 ERROR 404: Not Found.

      Can't get: lkc-1.4-lib.patch at ./ltib line 802.

      Died at ./ltib line 1392.

      traceback:

      main::build_host_rpms:1392

        main::host_checks:1447

         main:554

        • Re: using Freescale's sources to build an ltib image for Sabre Lite fails

          nass sil wrote:

           

          Has anyone seen this?

          I read https://community.freescale.com/message/289362#289362 , which refers to Ubuntu 11.10 and the i.MX53, so I was hoping the problem had been debugged since Ubuntu 11.10...

          I don't use LTIB or Ubuntu, so I may not be able to help much. It does seem that your problem is almost exactly what's described in your last link: either wget is being built before openssl, or the openssl build went wrong somehow.

          The simple fix is to edit the build script and add --without-ssl to get you going. You can always rebuild openssl & wget on the sabre-lite manually later if you need https from wget.
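
          Something along these lines might do it (a sketch, assuming the wget build is driven by a spec file under dist/lfs-5.1/wget/ in the LTIB tree; the exact location may differ in your copy):

          grep -n configure dist/lfs-5.1/wget/wget.spec
          # add --without-ssl to that ./configure line, e.g.
          #   ./configure --prefix=%{_prefix} --without-ssl
          ./ltib -m config        # then retry the build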

           

          Otherwise you need to debug the LTIB process to find out what's happened. SSLv2_client_method looks like it should be in libssl, and you do have -lssl on the command line, so either libssl isn't present (not built yet, or failed to build) or it's not on a path where the linker is looking. You'll need some more verbose debug output to find out where it's looking, and then check whether a libssl that defines the symbol is available on that path.
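
          For example (library locations differ between distros, so adjust the paths to whatever your system reports):

          ldconfig -p | grep libssl                                       # which libssl the system knows about, and where
          nm -D /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 | grep SSLv2    # use the path reported above
          gcc -print-search-dirs | grep ^libraries                        # directories the compiler driver hands to the linker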

           

          I take it you're cross compiling using a (faster) x86 PC? Debian had (not sure if they still do) a policy of always compiling on the target, as it avoids various problems. Since this one is seemingly a problem of the host, not the target, the Debian policy seems to be a good one (lots of Google results for that wget build failure on Ubuntu).

           

          Out of curiosity, what versions of openssl and wget does LTIB use? There are various references to openssl 1.0.0[n] having disabled SSLv2 completely, so with a new enough openssl and an older wget you should probably expect this problem.
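
          A quick check of the host side (LTIB records its own package versions in its spec files, so this only tells you about the system copies):

          openssl version              # distro builds of 1.0.0 and later typically disable SSLv2
          wget --version | head -1     # the system wget, for comparison with the 1.9.1 that LTIB is building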

           

          There seems to be a fix for this here: http://git.savannah.gnu.org/cgit/wget.git/commit/?id=cbe8eb725b7cd4e78d8c2fe2d41aadd802308537 . It's almost two years old, which suggests you need wget v1.13 or newer.

           

           

          Congratulations, you've done a nice job of reinforcing my opinion that black box build systems like LTIB should never be used

            • Re: using Freescale's sources to build an ltib image for Sabre Lite fails
              synthnassizer

               

              I take it you're cross compiling using a (faster) x86 PC? Debian had (not sure if they still do) a policy of always compiling on the target, as it avoids various problems. Since this one is seemingly a problem of the host, not the target, the Debian policy seems to be a good one (lots of Google results for that wget build failure on Ubuntu).

              Hi selsinork,

              I am indeed on an x86_64 host, and that causes a certain number of problems (I guess). However, I need to compile a rootfs with hardfloat (and VFP or NEON) support. Additionally, I need to use the GPU acceleration that is available (as far as I know, none is available as hardfloat at the moment, but it is underway).

              LTIB seemed like a good option as it is supported by Freescale and you can actually set the gcc options (to enable hardfloat compilation) through an ncurses interface.

               

              Hm, while at it I do have a question. If I can set gcc options to compile hardfloat+(VFP/NEON) binaries, why are there two types of toolchains offered around (one is armel and the other armhf)? Have I understood something wrong?

               

              Congratulations, you've done a nice job of reinforcing my opinion that black box build systems like LTIB should never be used

               

              I am more than willing to go another way. I guess using the L3.0.35_1.1.1 Linux kernel from Boundary Devices is the way forward, and I am not changing U-Boot as of yet. But how would you advise compiling the rootfs?

               

              Thank you for your help

                • Re: using Freescale's sources to build an ltib image for Sabre Lite fails

                  nass sil wrote:

                   

                  Hm, while at it I do have a question. If I can set gcc options to compile hardfloat+(VFP/NEON) binaries, why are there two types of toolchains offered around (one is armel and the other armhf)? Have I understood something wrong?

                  Don't think so.. armel = soft-float, armhf = hard-float, but that's probably a little simplistic. Various versions of Arm will have different combinations of vfp[123] and neon, so the toolchain can be tweaked accordingly. The classic example is the Raspberry Pi, which has the unusual combination of an ARMv6 with a VFP and is why Raspbian is an almost ground-up recompile.
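
                  To make that concrete, the split is mostly about the float ABI and FPU flags the compiler and its libraries default to. Roughly (a sketch, assuming Linaro/Debian-style cross compilers on the PATH; triplet names and -mfpu values vary between toolchain builds):

                  # armel ABI: FP arguments are passed in integer registers (soft or softfp)
                  arm-linux-gnueabi-gcc -mfloat-abi=softfp -mfpu=vfp hello.c -o hello-armel
                  # armhf ABI: FP arguments are passed in VFP registers
                  arm-linux-gnueabihf-gcc -mfloat-abi=hard -mfpu=neon hello.c -o hello-armhf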

                   

                  The soft-float toolchain could be built conservatively enough that its code runs on just about anything (with the caveat that it looks like the FP emulator is about to be pulled out of the kernel, at which point it won't run at all). I've taken a Debian (IIRC) armel filesystem off a Raspberry Pi and put it onto an i.MX53QSB without problems.

                   

                  The one obvious problem is that you can't easily mix the pieces - you can't have an armhf app linked to an armel library due to differences in calling conventions. This also makes bootstrapping to a non-native arch rather more interesting.
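
                  If you're ever unsure which ABI a binary or library uses, readelf will tell you (the path below is just an example from a Debian armhf system):

                  readelf -A /lib/arm-linux-gnueabihf/libc.so.6 | grep Tag_ABI_VFP_args
                  # "Tag_ABI_VFP_args: VFP registers" means hard-float; no such tag usually means armel

                  Mixing the two typically fails at link time with "uses VFP register arguments" errors, so at least it fails loudly.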

                   

                  I am more than willing to go another way. I guess using the L3.0.35_1.1.1 Linux kernel from Boundary Devices is the way forward, and I am not changing U-Boot as of yet. But how would you advise compiling the rootfs?

                  That's a good question, and one I probably don't have an answer for. If you need the GPU acceleration you may have no choice but to go with LTIB, or at least with something built with a toolchain configuration and version compatible with whatever parts of the GPU code you don't have source for.

                   

                  I've been using Robert Nelson's pre-built Debian armhf filesystems to allow me to start building what I need in a native environment. I'm avoiding some of the issues that way, but then again, I'm not going to be using the GPU so I don't have to factor that in.

                   

                  If I worked out the cause properly - new openssl, old wget - then you can probably get past your current problem by somehow telling LTIB to use a newer wget.
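
                  If you want to try that, one possible route looks roughly like this; all paths here are guesses about a typical LTIB tree, so check yours first:

                  # hypothetical sketch: give LTIB a newer wget tarball and bump the spec to match
                  cp wget-1.14.tar.gz /opt/freescale/pkgs/     # or wherever your LTIB package cache lives
                  # then edit dist/lfs-5.1/wget/wget.spec, change its Version: line to 1.14,
                  # and re-run ./ltib -m config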

                    • Re: using Freescale's sources to build an ltib image for Sabre Lite fails
                      synthnassizer

                      hi selsinork,

                      thank you for the info.

                      I am not entirely certain I understood your explanation about the different toolchains.

                      I mean, say I have an armel toolchain, and using this toolchain I append the gcc option "-mfloat-abi=hard" to my build.

                      Will I not get a hardfloat binary executable in the end?

                      If yes, then why is there an armhf toolchain?

                      If no, why am I allowed to use the option -mfloat-abi=hard with a soft-float toolchain (armel)?

                       

                      I have downloaded several toolchains in the past, including Linaro's armel and armhf ones. I noticed under their directory trees that there are subfolders containing softfp, neon, hard, etc. versions of the libraries...

                       

                      so I am fuzzy about the different toolchains.

                        • Re: using Freescale's sources to build an ltib image for Sabre Lite fails

                          hmm.. toolchain issues are fun

                           

                          Three things to be aware of:

                          1. host architecture
                          2. default target architecture
                          3. supported target architectures

                           

                          When the host and target differ you're talking about cross compilers. There's also usually a range of targets that a single toolchain can build. For example, in the x86 world a single toolchain can normally build for i386, i486, i586 and i686 (all 32-bit) and for x86_64 (64-bit). The same will be true of the Arm versions, but I'm less familiar with the options there.
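
                          You can see the host/target distinction on any machine with a cross compiler installed (the arm-linux-gnueabihf name below is just an example):

                          gcc -dumpmachine                        # e.g. x86_64-linux-gnu: native, host == target
                          arm-linux-gnueabihf-gcc -dumpmachine    # runs on x86 but generates Arm code
                          gcc -m32 hello.c -o hello32             # the same native gcc can often also emit 32-bit x86 code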

                           

                          However, even on Arm, if your host OS is armel then you wouldn't be able to run a toolchain which is itself compiled to run on armhf, even though that toolchain could produce code for armel.

                          So the three points above are quite separate.

                           

                          When you use an armel toolchain running on an armel host (i.e. a native toolchain) and pass -mfloat-abi=hard, you're likely to have problems linking the result, as the libraries on your system are armel, not armhf.

                           

                          Cross compiling on x86 using a compiler that's capable of generating either armhf or armel is yet another issue: here you can't link against the native x86 libraries, and you need a directory somewhere with a set of libraries that match the particular target. The cross toolchain is normally built to look in a different directory tree when linking because of this.
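
                          You can ask a cross compiler where that tree lives (again the compiler name is just an example; -print-sysroot prints nothing if the toolchain wasn't built with a sysroot):

                          arm-linux-gnueabihf-gcc -print-sysroot                        # root of the target headers and libraries
                          arm-linux-gnueabihf-gcc -print-search-dirs | grep ^libraries  # where the driver tells the linker to look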

                           

                          Cross compiling leads to various other problems: some applications include the source code for a helper that's then used to further process something in the source tree. This usually leads to the helper being built for the target arch, say armhf, but that code obviously can't run on the x86 host. So some toolchain setups (not all) include a version of QEMU (an emulator) so that the armhf code can be run in an emulated environment on the x86 host.
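
                          For a standalone helper that's just user-mode emulation; a sketch, with the sysroot path and helper name made up for illustration:

                          # -L points QEMU at the armhf tree so the dynamic linker and libraries are found
                          qemu-arm -L /usr/arm-linux-gnueabihf ./build-helper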

                           

                          For a variety of these reasons, and awkward combinations of some of them, folks like Debian will cross compile the minimal environment and toolchain that they can, so as to be able to do the rest in a native environment. This is a two-step process though: 1. build a cross toolchain targeting armhf; 2. use that toolchain to build an armhf native toolchain.

                           

                          When talking about the gcc/glibc toolchain, these are much more aware of the pitfalls due to a long history of bootstrapping new arch code with cross compilers and are much better behaved in this area.

                           

                          Don't know if I'm helping here or just adding more confusion

                            • Re: using Freescale's sources to build an ltib image for Sabre Lite fails
                              synthnassizer

                              what you say makes perfect sense.

                              it is rational and I was thinking that (before I got completely mixed up).

                              I guess the excessive amount of information out there is the cause of that.

                              The dir tree of my extracted toolchain messes with my head too.

                              In the toolchain there is usually:

                              <toolchain root>/bin/arm-[none-]linux-gnueabi-(gcc|ld|g++|etc.)

                              <toolchain root>/<some arm* dir>/bin/(gcc|ld|g++|etc.)

                              ...<other subdirs with libs, header files, gcc internals and other tools>

                               

                              In one bin folder there are Intel 80386 gcc build tools.

                              In the other bin folder there are native Arm gcc tools.
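
                              Running file on them shows which is which (the paths are placeholders for my extracted toolchain):

                              file <toolchain root>/bin/arm-linux-gnueabi-gcc     # "Intel 80386" or "x86-64" means it runs on the PC host
                              file <toolchain root>/<some arm* dir>/bin/gcc       # "ARM" means it only runs on an Arm target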

                               

                               

                              So from what you say, downloading an armel or an armhf toolchain on my x86_64 host is irrelevant as far as building on the host is concerned, as the (armel/armhf) part denotes the arch on which the toolchain will run natively.

                              I presume I can get the armhf toolchain, build an armhf rootfs with it (still struggling with that), and copy the native armhf gcc tools into that rootfs.

                              I presume I also need to copy the armhf libraries (non-stripped!) from the toolchain dir tree. Then I SHOULD be able to build applications directly on my Sabre Lite, correct?
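
                              Assuming gcc, binutils and the libraries all make it across, I guess a quick sanity check on the board would be something like (just a sketch):

                              echo 'int main(void){return 0;}' > t.c
                              gcc t.c -o t && ./t && echo "native toolchain works"
                              readelf -A t | grep Tag_ABI_VFP_args     # "VFP registers" confirms a hard-float binary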

                               

                              As a long-time Slackware user, and since LTIB is playing with my nerves, I may actually try Alien Bob's methods & scripts for building an armhf system (they target a Chrome laptop, but using the Boundary Devices kernel I shouldn't have much trouble)...