[lug] Disable Autoconf Version Check?

stimits at comcast.net
Sat Jul 2 13:56:41 MDT 2016


On Sat, 2016-07-02 at 04:24 +0000, stimits at comcast.net wrote:
>> It just says my version of make is too old. It wants something like
>> 3.7 minimum, but I'm using 4.0...it's mistakenly thinking 4.0 is old
>> when it is really too new.

> Try
>   autoreconf -i

> to regenerate configure from configure.in.  If the message comes from
> running configure, edit config.log, go to the end of the file and then
> back up until you see the message configure failed on.  It will tell
> you what line in configure caused the error, which you can use to
> backtrack to either configure.in mods or possibly Makefile.in.
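
For anyone else following along, that advice boils down to roughly this (a rough sketch; the file names are the usual autotools ones):

    autoreconf -i                  # regenerate configure from configure.in/configure.ac
    ./configure 2>&1 | tee configure.out
    # on failure, work backward from the end of config.log to find the
    # test that actually failed:
    tail -n 100 config.log
    grep -n 'error' config.log | tail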

I poked around using edits plus the suggested autoreconf -i, and discovered there was a flaw in the shell script used to configure and build several related packages. It turns out that autoconf was required to be exactly version 2.68, but I had 2.69, and the error was never reported and never caused an abort (the message was redirected and not visible). Some of the autoconf files were never regenerated, so the build kept using the old files and my changes never updated anything.
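
For reference, the kind of construct that can hide a failure like that looks something like this (a hypothetical sketch, not the actual script):

    # hypothetical version check: the complaint goes to a log file and the
    # script never aborts, so the build silently keeps using the old,
    # previously generated autoconf output
    required="2.68"
    found=$(autoconf --version | head -n 1 | awk '{print $NF}')
    if [ "$found" != "$required" ]; then
        echo "autoconf $required required, found $found" >> build.log
        # missing: exit 1
    fi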
 
After getting that fixed, I discovered there is a "sed" line designed to filter out the version of "make", and for a reason I still do not know, the decimal part is truncated when it is ".0". My version of make is "4.0", the scripts were comparing against "4.", and since that comparison is textual rather than numeric, it failed. I hacked away at some of the files (I'm sure it was a bad way of doing it) and got past both the "4." truncation and the autoconf version issue.
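
I never tracked down exactly which expression did it, but as a hypothetical illustration of how a ".0" can get eaten:

    # hypothetical illustration: a sed expression that strips trailing
    # zeros mangles "4.0" but leaves older version strings alone
    echo "4.0"  | sed 's/0*$//'    # prints "4."
    echo "3.81" | sed 's/0*$//'    # prints "3.81"
    # the later check then compares the string "4." against the minimum
    # version as text, and the mangled value fails the test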
 
@Davide Del Vento:
A big part of why I needed this older tool chain is that there is a patch which needs to be tested quickly. The vendor normally uses this older compiler for kernels, and the first priority is to get a working patch out. The second priority is to document what is required to start using newer compilers.
 
So far as compilers and invalid code go, I think all of the compilers are putting out valid code, and that the real issue is how the boot loader is loading the kernel. Historically, ARMv7-a code is all 32-bit. Some of the newer ARM processors now support 64-bit as ARMv8-a. ARMv8-a also supports a backwards-compatible 32-bit mode in which essentially anything built for ARMv7-a will run...but the processor has to be in that 32-bit mode to do so. Once in 32-bit mode, some of the registers expect their values in the lower 32 bits. I don't have a JTAG debugger so I can't actually see what is there (kgdboc won't work; this is too early in the load), but I think the 32 bits ended up misaligned and in the wrong part of the 64-bit register.
 
The reason this is plausible is that the kernel is being migrated from 32-bit to 64-bit. Going from a 32-bit user space and a 32-bit kernel, the next stage was 32-bit user space plus a 64-bit kernel...this worked as expected across a large range of ARM tool chains (a kernel compile there requires both a 32-bit armhf compiler and a 64-bit aarch64 compiler). The issue showed up at the transition from 32-bit user/64-bit kernel to 64-bit user/64-bit kernel. So far as I know, the boot loader never changed and has only ever supported 32-bit. I think some tiny detail has resulted in the very first 32-bit assembler instruction being off by 32 bits. I want the working older version so I can compare byte-by-byte at the load address without a JTAG debugger (JTAG would make things so much easier). I want to see if the image matches up until that first failed instruction and then has a 32-bit offset that a 32-bit boot loader may not have accounted for.
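
For the comparison itself, nothing fancier than something like this should do (the image names here are placeholders):

    # compare two uncompressed kernel images byte by byte; cmp -l lists
    # the offset and the differing byte values for every mismatch
    cmp -l Image.old-toolchain Image.new-toolchain | head
    # or hex-dump the first part of each image and diff the dumps
    xxd Image.old-toolchain | head -n 64 > old.hex
    xxd Image.new-toolchain | head -n 64 > new.hex
    diff old.hex new.hex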

