So I've had a number of conversations with Dennis Gilmore and folks from other ARM distro ports about v7 support, and particularly with respect to hardware math. (In addition, one of the Seneca students is currently investigating v5 vs. v7 support in an attempt to figure out how much of the Fedora universe needs to be recompiled for optimal benefit).
Regarding hardfp, though, things are quite unclear. My understanding of soft/softfp/hardfp was initially wrong. As I understand it now:
- soft does all the math in software. Function values are passed in CPU registers where appropriate.
- softfp enables the use of FPU instructions, but continues to pass function arguments in CPU registers. This mode enables hardware acceleration of math and interoperability with soft, at the cost of a CPU->FPU register move in some cases.
- hardfp enables the use of FPU instructions, and function values are passed in FPU registers where appropriate. This mode is incompatible with soft and softfp, and cannot be used on CPUs that have an FPU. According to gcc (docs + error messages), it is also incompatible with CPUs that use a "vfp" math unit, such as OMAP3xxx CPUs (BeagleBoard) and (I think) the CPU used in the XO1.75. I'm unclear on which hardware math units it is compatible with.
In terms of hardware support, I think we definitely want to continue to support armv5tel with software floating-point, since that's what is used on many current Marvell CPUs, including those used in the SheevaPlug/GuruPlug/OpenRD.
hardfp would break compatibility with all of the existing binary packages, and gcc can't compile hardfp code for any of the CPUs I have got my hands on so far.
Thus, I recommend that we aim to support armv7l softfp as an arch alongside armv5tel.
Comments?
-Chris
On Saturday, 27 November 2010 at 18:16:05, Chris Tyler wrote:
So I've had a number of conversations with Dennis Gilmore and folks from other ARM distro ports about v7 support, and particularly with respect to hardware math. (In addition, one of the Seneca students is currently investigating v5 vs. v7 support in an attempt to figure out how much of the Fedora universe needs to be recompiled for optimal benefit).
Regarding hardfp, though, things are quite unclear. My understanding of soft/softfp/hardfp was initially wrong. As I understand it now:
- soft does all the math in software. Function values are passed in CPU
registers where appropriate.
- softfp enables the use of FPU instructions, but continues to pass
function arguments in CPU registers. This mode enables hardware acceleration of math and interoperability with soft, at the cost of a CPU->FPU register move in some cases.
- hardfp enables the use of FPU instructions, and function values are
passed in FPU registers where appropriate. This mode is incompatible with soft and softfp, and cannot be used on CPUs that have an FPU.
... and cannot be used on CPUs that have _no_ FPU (probably same for specialized/external FPU).
According to gcc (docs + error messages), it is also incompatible with CPUs that use a "vfp" math unit, such as OMAP3xxx CPUs (BeagleBoard) and (I think) the CPU used in the XO1.75. I'm unclear on which hardware math units it is compatible with.
Hmm, really? There are different vfp versions (vfp/vfp2/vfp3, with vfp4 upcoming). From what I see and what was discussed at the MeeGo Conference, the least common denominator is hardfp with vfpv3-d16. See:
* http://lists.meego.com/pipermail/meego-sdk/2010-November/000449.html
* http://wiki.debian.org/ArmHardFloatPort
* https://wiki.linaro.org/Linaro-arm-hardfloat
In terms of hardware support, I think we definitely want to continue to support armv5tel with software floating-point, since that's what is used on many current Marvell CPUs, including those used in the SheevaPlug/GuruPlug/OpenRD.
hardfp would break compatibility with all of the existing binary packages, and gcc can't compile hardfp code for any of the CPUs I have got my hands on so far.
Thus, I recommend that we aim to support armv7l softfp as an arch alongside armv5tel.
I still think a armv7 hardfloat little with vfpv3-d16 would be a good baseline.
Best, Jan-Simon
On Sat, Nov 27, 2010 at 5:16 PM, Chris Tyler chris@tylers.info wrote:
So I've had a number of conversations with Dennis Gilmore and folks from other ARM distro ports about v7 support, and particularly with respect to hardware math. (In addition, one of the Seneca students is currently investigating v5 vs. v7 support in an attempt to figure out how much of the Fedora universe needs to be recompiled for optimal benefit).
From my interpretation of the gcc notes on float it looks like the two
are mutually exclusive. From the notes: "Note that the hard-float and soft-float ABIs are not link-compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries."
http://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
From some of the Linaro "next 6 months" tasks it looks more like there
will be NEON optimised packages that can be added to ARMv7 packages as opposed to ARMv7 added to 5tel.
https://wiki.linaro.org/Linaro-arm-hardfloat
Regarding hardfp, though, things are quite unclear. My understanding of soft/softfp/hardfp was initially wrong. As I understand it now:
- soft does all the math in software. Function values are passed in CPU
registers where appropriate.
- softfp enables the use of FPU instructions, but continues to pass
function arguments in CPU registers. This mode enables hardware acceleration of math and interoperability with soft, at the cost of a CPU->FPU register move in some cases.
- hardfp enables the use of FPU instructions, and function values are
passed in FPU registers where appropriate. This mode is incompatible with soft and softfp, and cannot be used on CPUs that have an FPU. According to gcc (docs + error messages), it is also incompatible with CPUs that use a "vfp" math unit, such as OMAP3xxx CPUs (BeagleBoard) and (I think) the CPU used in the XO1.75. I'm unclear on which hardware math units it is compatible with.
My understanding is that all the ARMv7 chips have vfp3, but there are two different versions (16 and 32), which means that all the A8 chips should support that; on the A9 it seems to be optional.
http://www.arm.com/products/processors/technologies/vector-floating-point.ph...
In terms of hardware support, I think we definitely want to continue to support armv5tel with software floating-point, since that's what is used on many current Marvell CPUs, including those used in the SheevaPlug/GuruPlug/OpenRD.
We most definitely need to support 5tel, but it would also be good to choose an ARMv7 level to support as well. I'm not sure if it's worth seeing what Linaro/MeeGo/Ubuntu/Debian support to try and be aligned with them, as it would no doubt help in terms of upstream bugs with glibc/gcc etc.
hardfp would break compatibility with all of the existing binary packages, and gcc can't compile hardfp code for any of the CPUs I have got my hands on so far.
It would seem that it's a definite second repository, like i686/x86-64, but it also seems that there are patches pending for gcc 4.5.1, so it might be an F-15 target unless we can get an updated gcc for F-14; the BeagleBoard and n900 should both support some version of hardfp.
Thus, I recommend that we aim to support armv7l softfp as an arch alongside armv5tel.
Comments?
Peter
On 11/28/2010 11:14 AM, Peter Robinson wrote:
From my interpretation of the gcc notes on float it looks like the two
are mutually exclusive. From the notes: "Note that the hard-float and soft-float ABIs are not link-compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries."
The reason is the entirely different calling convention. With hard-float linkage, arguments and return values are passed in VFP registers; you have to be sure that the target contains a VFP. With soft-float linkage, arguments and return values are passed in ARM registers; it is slower, as data has to be transferred to the VFP.
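One way to check which calling convention a compiled object actually uses (a sketch; the object name is made up) is the ARM build-attributes section that readelf can print, where a hard-float object carries a Tag_ABI_VFP_args entry:

```
$ readelf -A some_package.o | grep -i vfp
  Tag_FP_arch: VFPv3-D16
  Tag_ABI_VFP_args: VFP registers    <- present only for the hardfp ABI
```

The output shown is illustrative; the key point is that the linker can use these attributes to refuse to mix the two ABIs.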
My understanding is that all the ARMv7 chips have vfp3, but there are two different versions (16 and 32), which means that all the A8 chips should support that; on the A9 it seems to be optional.
Choice of vfp3-16 is the best option right now, otherwise the number of possible targets would be limited.
I vote for vfp3-16 + hardfp + separate repository ;-)
Vaclav
On Sun, 2010-11-28 at 11:14 +0000, Peter Robinson wrote:
On Sat, Nov 27, 2010 at 5:16 PM, Chris Tyler chris@tylers.info wrote:
hardfp would break compatibility with all of the existing binary packages, and gcc can't compile hardfp code for any of the CPUs I have got my hands on so far.
The key point here, as I sort this out, is that *the gcc we currently have in Fedora* does not support hardfp on vfp-equipped CPUs.
It would seem that it's a definite second repository, like i686/x86-64, but it also seems that there are patches pending for gcc 4.5.1, so it might be an F-15 target unless we can get an updated gcc for F-14; the BeagleBoard and n900 should both support some version of hardfp.
Agreed, this really is boiling down to the question:
(a) Do we use a different compiler than is currently included in Fedora, or
(b) Do we wait until a version of gcc that supports hardfp on vfp is available in Fedora (4.5.x, likely in the F15 timeframe).
I don't want to wait for (b) but I think it's a better option than (a).
-Chris
On Mon, Nov 29, 2010 at 4:04 AM, Chris Tyler chris@tylers.info wrote:
On Sun, 2010-11-28 at 11:14 +0000, Peter Robinson wrote:
On Sat, Nov 27, 2010 at 5:16 PM, Chris Tyler chris@tylers.info wrote:
hardfp would break compatibility with all of the existing binary packages, and gcc can't compile hardfp code for any of the CPUs I have got my hands on so far.
The key point here, as I sort this out, is that *the gcc we currently have in Fedora* does not support hardfp on vfp-equipped CPUs.
It would seem that it's a definite second repository, like i686/x86-64, but it also seems that there are patches pending for gcc 4.5.1, so it might be an F-15 target unless we can get an updated gcc for F-14; the BeagleBoard and n900 should both support some version of hardfp.
Agreed, this really is boiling down to the question:
(a) Do we use a different compiler than is currently included in Fedora, or
(b) Do we wait until a version of gcc that supports hardfp on vfp is available in Fedora (4.5.x, likely in the F15 timeframe).
I don't want to wait for (b) but I think it's a better option than (a).
I think b is the better option. The other thing is that we are still currently well behind Fedora mainline, with F-12 being the current working release. To try and get F-13 and F-14 on two arches working is a lot of work when we're already far behind. I personally would almost skip F-13 and go for F-14 with the current arm5, and aim for F-15 with both. I'm not sure what the implications are for skipping a release within the koji-shadow etc. infra, nor do I know what OLPC's plans are (as they are the only ones currently wanting to use Fedora ARM for an actual project), but they seem to be aiming to jump from F-11 to F-14 for their next release; I don't know if that includes the XO-1.75.
Peter
I think going for F14 (arm5tel) and postponing anything else (armv7*) to F15+ is a good plan, as manpower is limited and (at least I) would prefer one stable arch rather than two wacky sub-architectures.
Bernhard
First: the Marvell Feroceon core, which is what the plug computers are based on, is supposedly Cortex-A8 compatible, which is armv7. I am assuming this is using the sheeva "tri-core" technology, which means it has an arm5tel-, armv6- and armv7-compatible core (not 3 cores). (And I am =guessing= Marvell really didn't want to pay a bigger license fee to the ARM group for claiming more than arm5tel; XScale did it for years.)
Second: I understand we probably will have logistic issues releasing a v7 arch for F14 since it has already been released (for x86). I assume it isn't trivial to add compiler flags for the 13k packages in both F14 and rawhide (F15); that sounds like a lot of work. It is easier to put them directly into rawhide rather than in both places, so they are there moving forward (still a lot of work, but it only needs to be done once and you probably can easily script it).
We could branch out a cortex or a v7 release, but that is more logistic issues; and honestly, by dropping arm5tel support I don't think we are dropping much hardware that people are actually interested in running Fedora on, especially by the F15 release. Tablets, laptops and embedded servers would be more realistic, and =really= we need to be getting ready for the Cortex-A15, which is designed to be in low-power servers; Fedora is typically an excellent distribution for early adopters and could be in this instance also.
The only exception I can possibly think of would be Qemu/libvirtd, which doesn't have that great support for arm, but it does make a decent VM for testing. And we could just default libvirtd to v7 in F15 right off the bat instead of defaulting to arm5tel.
As far as the FPUs moving forward: instead of supporting 8-bit FPU instructions, can we convert them to 16-bit instructions, so that when the 8-bit FPU math is dropped in later releases of the ARM processors we are still compatible?
Another sticky spot is going to be the vectorization routines, where Marvell supports MMX and Samsung/Apple all support NEON. I'm not sure if the cortex spec moving forward will include a spec for a vector unit as well or not. Can these be taken care of by replacing, say, glibc so we don't run into issues?
As far as actually moving forward...
If it is possible to cross-compile RPMs and get sane results, it would be in our best interest to start cross-compiling F15-rawhide to look for and fix bugs. I understand having to recompile the whole dist on the actual arm hardware, but if we can catch 90% of the issues on fast systems before hitting the arm hardware buildbot, then we should be doing that (if we aren't). Are there instructions on the Wiki?
I'm not sure all of my assumptions are correct, so go ahead and flame me if I made an error. :)
sean
Quoting Bernhard Schuster schuster.bernhard@googlemail.com:
I think going for F14 (arm5tel) and postponing anything else (armv7*) to F15+ is a good plan, as manpower is limited and (at least I) would prefer one stable arch rather than two wacky sub-architectures.
Bernhard
On 11/30/10 19:49, Somebody in the thread at some point said:
Hi -
I understand we probably will have logistic issues releasing a v7 arch for F14 since it has already been released (for x86). I assume it isn't trivial to add compiler flags for the 13k packages in both F14 and rawhide (F15); that sounds like a lot of work. It is easier to put them directly into rawhide rather than in both places, so they are there moving forward (still a lot of work, but it only needs to be done once and you probably can easily script it).
Compiler flags and so on are mainly handled by rpmbuild based on the macros for the architecture it's building on. So it's not like patching thousands of packages.
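For anyone curious where those per-arch flags live, the mechanism is roughly the sketch below; the values shown are illustrative guesses for this thread's proposal, not copied from Fedora's actual rpmrc:

```
# Illustrative only -- the exact values Fedora ships may differ.
# rpm's per-arch configuration (rpmrc / platform macros) carries
# optflags, which rpmbuild injects into every package build as
# %{optflags} / $RPM_OPT_FLAGS:
optflags: armv5tel -O2 -g -march=armv5te
optflags: armv7l   -O2 -g -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16

# A packager can check what a build would get on the current arch with:
#   rpm --eval '%{optflags}'
```

So adding a new sub-arch is mostly a matter of defining one set of flags centrally, not touching 13k spec files.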
We could branch out a cortex or a v7 release, but that is more logistic issues; and honestly, by dropping arm5tel support I don't think we are dropping much hardware that people are actually interested in running Fedora on, especially by the F15 release.
I am very interested in running Fedora on armv5tel as we can today.
Tablets, laptops, embedded servers would be more realistic, and
There is quite a wide spread of ARM hardware about; it is not going to be the case that suddenly everything is Cortex. For example, these last days I have been using ARM Fedora on the NXP LPC3250, which is a new, cheap chip based on the ARM926EJ core, which is armv5; Fedora is working great on SD card. The last thing I worked on uses Fedora on an iMX31 CPU, which is ARM1136 / armv6.
If it makes a big difference to build for high-end Cortex specifically, then I hope we're able to keep armv5 while the chips are still current and being designed into things, along the lines of i386 / x86_64.
As far as actually moving forward...
If it is possible to cross-compile RPMS, and get sane results, it
I think trying to make Fedora build cross is a whole other issue.
Building stuff cross is a trickier business than you might think. Many packages with recent autotools can build cross OK, plus or minus some magic needed to work with rpmbuild like that, but there is no point doing all that work if there are fast high-end ARM machines available that can build them native. Surely high-end ARM machines are going to approach x86 kinds of speed in the next years anyway, reducing any payback from the effort of going cross.
-Andy
On Tue, Nov 30, 2010 at 8:52 PM, Andy Green andy@warmcat.com wrote:
On 11/30/10 19:49, Somebody in the thread at some point said:
Hi -
I understand we probably will have logistic issues releasing a v7 arch for F14 since it has already been released (for x86). I assume it isn't trivial to add compiler flags for the 13k packages in both F14 and rawhide (F15); that sounds like a lot of work. It is easier to put them directly into rawhide rather than in both places, so they are there moving forward (still a lot of work, but it only needs to be done once and you probably can easily script it).
Compiler flags and so on are mainly handled by rpmbuild based on the macros for the architecture it's building on. So it's not like patching thousands of packages.
We could branch out a cortex or a v7 release, but that is more logistic issues; and honestly, by dropping arm5tel support I don't think we are dropping much hardware that people are actually interested in running Fedora on, especially by the F15 release.
I am very interested in running Fedora on armv5tel as we can today.
Tablets, laptops, embedded servers would be more realistic, and
There is quite a wide spread of ARM hardware about; it is not going to be the case that suddenly everything is Cortex. For example, these last days I have been using ARM Fedora on the NXP LPC3250, which is a new, cheap chip based on the ARM926EJ core, which is armv5; Fedora is working great on SD card. The last thing I worked on uses Fedora on an iMX31 CPU, which is ARM1136 / armv6.
If it makes a big difference to build for high-end Cortex specifically, then I hope we're able to keep armv5 while the chips are still current and being designed into things, along the lines of i386 / x86_64.
I think that is most definitely the plan and I believe the best way to go. The perf improvements that hw FP and NEON (the equivalent of SSE on Intel) bring are great, and it's something worth optimising for and the way distros are tending to go; but there are 1000s of ARMv5 devices out there as well, and I don't think the chipset is going anywhere soon.
Peter
Quoting Andy Green andy@warmcat.com:
On 11/30/10 19:49, Somebody in the thread at some point said:
Hi -
I understand we probably will have logistic issues releasing a v7 arch for F14 since it has already been released (for x86). I assume it isn't trivial to add compiler flags for the 13k packages in both F14 and rawhide (F15); that sounds like a lot of work. It is easier to put them directly into rawhide rather than in both places, so they are there moving forward (still a lot of work, but it only needs to be done once and you probably can easily script it).
Compiler flags and so on are mainly handled by rpmbuild based on the macros for the architecture it's building on. So it's not like patching thousands of packages.
We could branch out a cortex or a v7 release, but that is more logistic issues; and honestly, by dropping arm5tel support I don't think we are dropping much hardware that people are actually interested in running Fedora on, especially by the F15 release.
I am very interested in running Fedora on armv5tel as we can today.
Tablets, laptops, embedded servers would be more realistic, and
There is quite a wide spread of ARM hardware about; it is not going to be the case that suddenly everything is Cortex. For example, these last days I have been using ARM Fedora on the NXP LPC3250, which is a new, cheap chip based on the ARM926EJ core, which is armv5; Fedora is working great on SD card. The last thing I worked on uses Fedora on an iMX31 CPU, which is ARM1136 / armv6.
If it makes a big difference to build for high-end Cortex specifically, then I hope we're able to keep armv5 while the chips are still current and being designed into things, along the lines of i386 / x86_64.
As far as actually moving forward...
If it is possible to cross-compile RPMS, and get sane results, it
I think trying to make Fedora build cross is a whole other issue.
Building stuff cross is a trickier business than you might think. Many packages with recent autotools can build cross OK, plus or minus some magic needed to work with rpmbuild like that, but there is no point doing all that work if there are fast high-end ARM machines available that can build them native. Surely high-end ARM machines are going to approach x86 kinds of speed in the next years anyway, reducing any payback from the effort of going cross.
I don't believe ARM is going to reach top-end x86 speeds in the next couple of years; I think MIPS64 has a better chance. I do believe it can be a cost-effective, energy-efficient way to replace lower-end systems like desktops, and can get some traction in the data centers as a replacement for low-end systems and caching-type servers.
I don't disagree with a split, but what concerns me is we don't have enough resources to get F13-ARM out the door, much less two versions of the distro. We have neither the people nor the hardware to pull it off.
If you can cross-compile and knock out 50% of the bugs in a "pre-build" system before they hit the actual build system, it increases the overall speed of development. I am fully aware it isn't going to be simpler than using real hardware, but I was wondering if it would be simpler than trying to get qemu-arm with virtio and the plan9 layer working (I failed the first time).
Distributing a VM "image" with all the build tools set up and everything configured is a lot simpler than setting up a full-blown dev environment.
On 12/01/10 18:37, Somebody in the thread at some point said:
Hi -
I don't disagree with a split, but what concerns me is we don't have enough resources to get F13-ARM out the door, much less two versions of the distro. We don't have enough people nor the hardware to pull it off.
Infrastructure evidently already exists to knock out armv5 packages.
If you can cross-compile and knock out 50% of the bugs in a "pre-build" system before they hit the actual build system, it increases
Several years ago I made my own rpm-based cross-build system similar to this. You meet packages like perl, which as part of its build process creates a "miniperl" that it then runs to complete the build. Except when you build cross, the miniperl executable is an ARM executable on an x86_64 box and it can't complete the build process.
Most packages are not that tough but there are still enough funnies that instead of knocking bugs out in some awesome fast cross environment, you are running around discovering and solving cross-specific bugs.
the overall speed of development. I am fully aware it isn't going to be simpler than using real hardware, but I was wondering if it would be simpler than trying to get qemu-arm with virtio and the plan9 layer working (I failed the first time).
qemu-arm is a dead loss for mass builds; it's a fraction of the speed of native execution on even a weak ARM.
Distributing a VM "image" with all the build tools set up and everything configured is a lot simpler than setting up a full-blown dev environment.
If it was the case that arm will never approach x86 speeds, then cross would be worth worrying about, because it would solve years of speed differential with a large effort now.
But that is not the case, fast arm native platforms are already here and will only get faster in the future. Cross and arm emulation are not the solution and won't become the solution either.
-Andy
Quoting Andy Green andy@warmcat.com:
On 12/01/10 18:37, Somebody in the thread at some point said:
Hi -
I don't disagree with a split, but what concerns me is we don't have enough resources to get F13-ARM out the door, much less two versions of the distro. We don't have enough people nor the hardware to pull it off.
Infrastructure evidently already exists to knock out armv5 packages.
If you can cross-compile and knock out 50% of the bugs in a "pre-build" system before they hit the actual build system, it increases
Several years ago I made my own rpm-based cross-build system similar to this. You meet packages like perl, which as part of its build process creates a "miniperl" that it then runs to complete the build. Except when you build cross, the miniperl executable is an ARM executable on an x86_64 box and it can't complete the build process.
Most packages are not that tough but there are still enough funnies that instead of knocking bugs out in some awesome fast cross environment, you are running around discovering and solving cross-specific bugs.
yuck. :P
the overall speed of development. I am fully aware it isn't going to be simpler than using real hardware, but I was wondering if it would be simpler than trying to get qemu-arm with virtio and the plan9 layer working (I failed the first time).
qemu-arm is a dead loss for mass builds; it's a fraction of the speed of native execution on even a weak ARM.
Distributing a VM "image" with all the build tools set up and everything configured is a lot simpler than setting up a full-blown dev environment.
If it was the case that arm will never approach x86 speeds, then cross would be worth worrying about, because it would solve years of speed differential with a large effort now.
But that is not the case, fast arm native platforms are already here and will only get faster in the future. Cross and arm emulation are not the solution and won't become the solution either.
I was looking at where things stand today: the project has a handful of builders. How does the project get on track to appear to be a viable solution, rather than a secondary arch that is 2 releases behind, to someone unfamiliar with the project? What is the easiest way to get there today?
On Wed 2010-12-01 at 14:37 -0500, omalleys@msu.edu wrote:
I was looking at where things stand today: the project has a handful of builders. How does the project get on track to appear to be a viable solution, rather than a secondary arch that is 2 releases behind, to someone unfamiliar with the project? What is the easiest way to get there today?
The first step is to massage the builders until F13, F14 and finally rawhide can be shadowed automatically, and we can focus on actual arm issues more than stupid build issues.
This is making very good progress and should, from what I understand, settle in a couple of months. At the moment the build farm is working hard on catching up.
Regards Henrik
On Wed, 2010-12-01 at 23:20 +0100, Henrik Nordström wrote:
On Wed 2010-12-01 at 14:37 -0500, omalleys@msu.edu wrote:
I was looking at where things stand today: the project has a handful of builders. How does the project get on track to appear to be a viable solution, rather than a secondary arch that is 2 releases behind, to someone unfamiliar with the project? What is the easiest way to get there today?
The first step is to massage the builders until F13, F14 and finally rawhide can be shadowed automatically, and we can focus on actual arm issues more than stupid build issues.
This is making very good progress and should, from what I understand, settle in a couple of months. At the moment the build farm is working hard on catching up.
That's how I'd categorize the current state of things too.
In terms of hardware: At the moment we have 17 active builders, which are armv5 512M machines. There are also a few others that are currently being used for testing and will be inserted back into the build farm soon; the full hardware list is 21 GuruPlugs, 1 OpenRD, 1 SheevaPlug, 1 BB-C4, and 1 BB-xM.
There are also 15 PandaBoards on order, which should at least double the farm capacity because they should outperform the plugs (dual core, 1GB); if DigiKey's ship dates are to be believed, we're looking at having these online in early January.
The main issue isn't so much build capacity as the manual chore of clearing up build failures. Most of this work is being done daily by Paul Whalen, and he's had some great help on key packages by Rick Mattes and others. That's definitely the place to put additional energy if we're going to accelerate things.
-Chris
On Wed, Dec 01, 2010 at 11:20:28PM +0100, Henrik Nordström wrote:
On Wed 2010-12-01 at 14:37 -0500, omalleys@msu.edu wrote:
I was looking at where things stand today: the project has a handful of builders. How does the project get on track to appear to be a viable solution, rather than a secondary arch that is 2 releases behind, to someone unfamiliar with the project? What is the easiest way to get there today?
The first step is to massage the builders until F13, F14 and finally rawhide can be shadowed automatically, and we can focus on actual arm issues more than stupid build issues.
This is making very good progress and should, from what I understand, settle in a couple of months. At the moment the build farm is working hard on catching up.
AFAICS there are already lots of high-level packages successfully built in koji [0] for F13, therefore I assume the base comps packages are probably all built already, too. So it should already be possible to install F13 on an ARM box, or is there some necessary package missing? It seems that the major problem is that there is no Fedora 13 ARM repo; e.g. the wiki [1] suggests that the packages are available at http://ftp.linux.org.uk/pub/linux/arm/fedora/
but I cannot find F13 packages there:
http://ftp.linux.org.uk/pub/linux/arm/fedora/pub/fedora/linux/releases/
Regards Till
[0] http://arm.koji.fedoraproject.org/koji/builds
[1] http://fedoraproject.org/wiki/Architectures/ARM/Using
On 12/04/2010 06:33 AM, Till Maas wrote:
AFAICS there are already lots of high-level packages successfully built in koji [0] for F13, therefore I assume the base comps packages are probably all built already, too. So it should already be possible to install F13 on an ARM box, or is there some necessary package missing? It seems that the major problem is that there is no Fedora 13 ARM repo; e.g. the wiki [1] suggests that the packages are available at http://ftp.linux.org.uk/pub/linux/arm/fedora/
but I cannot find F13 packages there:
http://ftp.linux.org.uk/pub/linux/arm/fedora/pub/fedora/linux/releases/
Regards Till
I'd be happy with any updates to F12, since I see some packages have been re-built for that.
Jeff
On Sat 2010-12-04 at 07:38 -0500, Jeff Voskamp wrote:
I'd be happy with any updates to F12, since I see some packages have been re-built for that.
The F12 repository on the arm koji seems a bit messed up if you ask me. Many of the packages have an fc13 release tag in their rpm name, which I doubt can be right.
Regards Henrik
On Sat 2010-12-04 at 12:33 +0100, Till Maas wrote:
AFAICS there are already lots of high-level packages successfully built in koji [0] for F13, therefore I assume the base comps packages are probably all built already, too. So it should already be possible to install F13 on an ARM box, or is there some necessary package missing?
The current build run is a "build-previous" build, building packages one package release before the package in F13, in preparation for the actual F13 release build. This is to shake out most of the dependencies before F13 is built. There is some manual wrangling needed to handle this massive hop directly from F12 to F13, due to a number of circular build dependencies and other ugly things.
There probably is a quicker way of preparing the F13 release proper by carefully analyzing dependencies etc., but this is the tool currently available for the job.
It seems that the major problem is, that there is no Fedora 13 arm repo, e.g. the wiki [1] suggests that the packages are available at http://ftp.linux.org.uk/pub/linux/arm/fedora/
The current repository is koji-only and not pushed to any public mirrors, and should be regarded as a scratch repository due to the wildly changing dependencies caused by all packages rebuilt in one go.
Regards Henrik
On Tue 2010-11-30 at 20:52 +0000, Andy Green wrote:
I think trying to make Fedora build cross is a whole other issue.
Indeed.
distcc from an arm build host to hosts using cross compilers is a more feasible alternative, and only requires an arm cross compiler build which is reasonably synchronized with the native compiler version used on the arm build host.
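A rough sketch of that arrangement (the hostnames and compiler triplet below are invented for illustration): the ARM host keeps preprocessing and linking local, while distcc ships the compile jobs to x86 boxes that have a matching cross compiler installed under the same name:

```
# On the slow ARM build host -- illustrative only:
export DISTCC_HOSTS="x86-box1 x86-box2 localhost"
make -j6 CC="distcc armv5tel-redhat-linux-gnueabi-gcc"

# Each x86 box runs distccd and has a cross gcc named
# armv5tel-redhat-linux-gnueabi-gcc on its PATH, built from a
# version closely matching the ARM host's native gcc.
```

Since only plain compile steps go remote, this sidesteps most of the cross-build "funnies" (miniperl-style build-time executables still run natively on the ARM host).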
Regards Henrik