Hello,
Anyone know why there does not seem to be a Fedora 20 release for the Raspberry Pi? Pidora seems to be stuck at Fedora 18.
Dave
19 is pending, and Seneca is 6 months behind mainstream Fedora.
Adrian ... vk4tux
-----Original Message-----
From: David Cook d.cook@sheffield.ac.uk
To: arm@lists.fedoraproject.org
Subject: [fedora-arm] Fedora 20 for Raspberry Pi????
Date: Mon, 23 Dec 2013 15:30:11 +0000
Hello ,
Anyone know why there does not seem to be a Fedora 20 release for
Raspberry Pi? Pidora seems to be stuck at fedora 18.
Dave
_______________________________________________
arm mailing list
arm@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/arm
Out of curiosity: why is there such a difference in how the RPi is handled and how other ARM boards are handled (BBB, etc.) in Fedora?
Regards, Geert
On Mon, Dec 23, 2013 at 4:45 PM, Adrian vk4tux@bigpond.com wrote:
19 is pending and Seneca is 6 months behind mainstream fedora.
Adrian ... vk4tux
On Mon, Dec 23, 2013 at 5:32 PM, Geert Jansen geertj@gmail.com wrote:
Out of curiosity: why is there such a difference in how the RPi is handled and how other ARM boards are handled (BBB, etc.) in Fedora?
Not so much boards as ARM architectures. The BBB and other boards that we support are ARMv7; the Raspberry Pi is ARMv6.
Peter
On Mon, Dec 23, 2013 at 06:32:47PM +0100, Geert Jansen wrote:
Out of curiosity: why is there such a difference in how the RPi is handled and how other ARM boards are handled (BBB, etc.) in Fedora?
Fedora has decided to support only armv7 (& v8, but you can't buy that hardware). The RPi's ARM chip is armv6. As a result Fedora isn't building for it; it's handed off to someone else to do a remix. AIUI they have to recompile everything, not just the kernel, so it takes a while.
Rich.
To make a comparison:
- Fedora does not offer support for older i486 processors.
- The Raspberry Pi is like that, using an old (outdated) ARMv6 CPU.
Hope that helps clarify. The Raspberry Pi is based on obsolete technology.
Thanks, -Jon
On Mon, Dec 23, 2013 at 11:51:13AM -0600, Jon wrote:
To make a comparison:
- Fedora does not offer support for older i486 processors.
- The Raspberry Pi is like that, using an old (outdated) ARMv6 CPU.
Hope that helps clarify. The Raspberry Pi is based on obsolete technology.
The RPi works fine for many people. 2.3 million have been sold.
I think we should be honest about the real reason: Either we have to maintain two sets of packages or we have to make everyone on the newer and faster armv7 suffer with unoptimized binaries, and we don't want to do either of those things.
Rich.
Is there a hardware list of ARMv7 systems that people have gotten Fedora on? I did a quick search, but I've rarely been good at effective searches...
On Mon, Dec 23, 2013 at 04:24:42PM -0500, Robert Moskowitz wrote:
Is there a hardware list of ARMv7 systems that people have gotten Fedora on? I did a quick search, but I've rarely been good at effective searches...
This is the list of what is supported by Fedora:
https://fedoraproject.org/wiki/Architectures/ARM
Remixes are covered here:
https://fedoraproject.org/wiki/Architectures/ARM/F19/Remixes
But if you mean what ARM hardware has anyone ever tried to run Fedora on and maybe got something working by hacking around, I don't think that exists.
Rich.
On 12/23/2013 04:31 PM, Richard W.M. Jones wrote:
But if you mean what ARM hardware has anyone ever tried to run Fedora on and maybe got something working by hacking around, I don't think that exists.
It would be nice for a place where people can report what they have gotten working. And how.
Right now all I have is a pogoplug that I hope to get back to and play with. But I have colleagues at Freescale, maybe I can get some units from them...
On Tue, Dec 24, 2013 at 12:02 AM, Robert Moskowitz rgm@htt-consult.com wrote:
It would be nice for a place where people can report what they have gotten working. And how.
That's what this list is for.... and it's used quite regularly as such.
Peter
On 12/23/2013 07:26 PM, Peter Robinson wrote:
That's what this list is for.... and it's used quite regularly as such.
Oh, yes. But then one has to read through all the messages and gather the 'lore' of the list to learn what has been done.
You see, my day job is developing standards. I work in the IETF and IEEE on standards. Particularly in the IETF we have noted that there is tremendous lore in the lists that we MUST capture, so that someone coming along later will get it right without having to plow through sometimes 1000s of emails.

I got burned with RFC 2410; we had fun writing it. Everyone at the time knew what we meant about a cipher with a key length of zero and what that meant in ISAKMP. 5 years later we had a few companies whose programmers had not grown up with English and definitely did not know about things like ITAR. The result was they did not interoperate with other IPsec implementations. Ouch. (They included the ISAKMP key length payload, which you are not supposed to do if the cipher's key length is a constant, and if you get an ISAKMP payload you are not expecting you are supposed to reject the exchange. All we had to do was include that little point in the RFC, but we did not.)

So for the work I am in charge of, I try to capture the list lore and see that someone gets it consolidated.
Just my 1 cent worth.
Back to looking at what hardware I want to get sometime soonish.
On 24/12/13 11:55, Robert Moskowitz wrote:
Oh, yes. But then one has to read through all the messages and gather the 'lore' of the list to learn what has been done.
You see, my day job is developing standards. I work in the IETF and IEEE on standards. Particularly in the IETF we have noted that there is tremendous lore in the lists that we MUST capture so that someone coming along later will get it right without having to plow through sometimes 1000s of emails.
Sounds to me like there needs to be a standard format for discussing such stuff on an email list which can then be hoovered up by automated systems and spewed out the other side with all the JSON one can throw at it! Such a standard would have to take into consideration every email list format and every possible topic (as well as some majicks for the edge cases).
Or alternatively, as per time immemorial, users can set up their own systems to do this for themselves and then offer to the world the 'discovered' information and contribute back, if they feel the need to do so.
RFCs 3501 and 3028 would be my starting point, but I feel I'm drifting a little OT.
Pete.
Oh, yes. But then one has to read through all the messages and gather the 'lore' of the list to learn what has been done.
It's called google :-)
You see, my day job is developing standards. I work in the IETF and IEEE on
Off topic for the list.
On Mon, Dec 23, 2013 at 07:02:37PM -0500, Robert Moskowitz wrote:
It would be nice for a place where people can report what they have gotten working. And how.
Luckily it's a wiki and anyone can edit it.
Rich.
On 12/24/2013 05:31 AM, Richard W.M. Jones wrote:
This is the list of what is supported by Fedora:
https://fedoraproject.org/wiki/Architectures/ARM
Don't put too much faith in that list of supported hardware. For example, right now it says the BeagleBone Black is a supported device, and yet the Fedora 20 distribution is currently unusable on one of these boards, due to a kernel oops problem. The betas were OK, and hopefully the issue will soon be fixed, but the standard distribution will just make you sad.
When most people see the term "supported" they expect the stuff has been checked out reasonably well, and that most things are known to work OK. In the context of ARM Fedora it seems to mean that at least one person is working with the platform, even if it's still very much a work in progress.
Regards, Steve
Don't put too much faith in that list of supported hardware. For example, right now it says the BeagleBone Black is a supported device, and yet the Fedora 20 distribution is currently unusable on one of these boards, due to a kernel oops problem. The betas were OK, and hopefully the issue will soon be fixed, but the standard distribution will just make you sad.
It will be fixed in whatever the next kernel update after 3.12.6-300 is, or if you want to try a scratch build, use the 3.12.6-300 at this link.
http://pbrobinson.fedorapeople.org/arm-kernel/
When most people see the term "supported" they expect the stuff has been checked out reasonably well, and that most things are known to work OK. In the context of ARM Fedora it seems to mean that at least one person is working with the platform, even if it's still very much a work in progress.
It's a matter of what can be tested and supported with the limited resources we have. We have a fairly strict rule of "only in the upstream kernel" to ensure we don't need to manage 100s of patchsets as we don't have the resources to deal with that and it's very much not Fedora.
As a result of this and the ongoing upstream changes it does mean that things do break at times. The omap4 devices such as pandaboard are a good example here.
In the case of the BeagleBone Black, its support still isn't all upstream. We've been working with the maintainers and manufacturers of the device to get this resolved, but that has only happened recently, and while they are working to get things upstream there's still a patch of 3100+ lines that we need to carry and manually rebase with each release.
I do it as I get time and can test, but unfortunately at times my $dayjob and real life take over. If you wish to step up and assist with this process, the help would be greatly appreciated.
The main Fedora kernel, of which we are a part, has never held back an entire update for a single piece of hardware. This is the case with x86 devices as well, so we're no different here.
Peter
On 23/12/2013 21:12, Richard W.M. Jones wrote:
The RPi works fine for many people. 2.3 million have been sold.
I think we should be honest about the real reason: Either we have to maintain two sets of packages or we have to make everyone on the newer and faster armv7 suffer with unoptimized binaries, and we don't want to do either of those things.
From what I recall, any significant difference in performance for 99% of applications in a typical Linux distro (desktop or server) between armv5tel and armv7hl has been debunked plenty of times. Yes, there are a handful of applications that benefit, but they are relatively few and generally limited to multimedia and gaming tasks, i.e. things that ARM is not really ideally suited for yet anyway.
From what I can tell, the much bigger obstacle to supporting older ARM platforms is twofold: few upstream developers care about ARM, and even fewer care about their code even compiling on older ARM targets. For example (but not limited to this), a large number of packages make little or no effort to ensure memory accesses are aligned - including the likes of e2fsprogs - and transparent alignment fixup in hardware is only available on armv7 and later.
The problem is upstream of the distro, IMO.
Gordan
Gordan Bobic gordan@bobich.net writes:
(e.g. (but not limited to) a large number of packages make little or no effort to ensure memory accesses are aligned - including the likes of e2fsprogs, and transparent alignment fixup in hardware is only available on armv7 and later).
I'm surprised that Ted isn't willing to fix issues in e2fsprogs.
If you can point me to the upstream bug reports I can ping him to see what's up?
-derek
On 12/26/2013 02:20 PM, Derek Atkins wrote:
I'm surprised that Ted isn't willing to fix issues in e2fsprogs.
If you can point me to the upstream bug reports I can ping him to see what's up?
Take a look here: http://comments.gmane.org/gmane.comp.file-systems.ext4/33324
As has been mentioned before, there is a whole shedload of packages that have similar issues - I have seen literally thousands of alignment faults get reported (I have the alignment set to fix+warn on my armv5tel builders) in various packages during build and test stages. Once upon a time I planned to collate the data and get the issue reported to all upstream maintainers, but that is a mammoth task just to report, let alone fix, and I have very little faith there is enough will among the developers to fix all the affected packages and ensure they write code that isn't affected by this problem going forward.
Gordan
On 12/27/2013 03:49 AM, Gordan Bobic wrote:
As has been mentioned before, there is a whole shedload of packages that have similar issues - I have seen literally thousands of alignment faults get reported (I have the alignment set to fix+warn on my armv5tel builders) in various packages during build and test stages. Once upon a time I planned to collate the data and get the issue reported to all upstream maintainers, but that is a mammoth task just to report, let alone fix, and I have very little faith there is enough will among the developers to fix all the affected packages and ensure they write code that isn't affected by this problem going forward.
This is hardly surprising, as you can't really fix what you can't test.
15 years ago my Linux/Unix code was always clear of alignment issues, as I developed on a mix of Alpha and x86 machines, and the Alphas didn't like misalignment. I think enough people developed on a mixture of machines to flush out most of the alignment issues back then.
We entered a period of such x86 dominance that few people saw alignment issues any more, and I suspect the number of issues grew rapidly. Now non-x86 work is growing again, but it's mostly on ARMv7 devices, so I think it's unlikely the frequency of alignment issues will ever go down again.
Regards, Steve
On Thu, Dec 26, 2013 at 07:49:58PM +0000, Gordan Bobic wrote:
As has been mentioned before, there is a whole shedload of packages that have similar issues - I have seen literally thousands of alignment faults get reported (I have the alignment set to fix+warn on my armv5tel builders) in various packages during build and test stages. Once upon a time I planned to collate the data and get the issue reported to all upstream maintainers, but that is a mammoth task just to report, let alone fix, and I have very little faith there is enough will among the developers to fix all the affected packages and ensure they write code that isn't affected by this problem going forward.
I disagree that it is even a problem, except in a very small number of cases where it causes a measurable slowdown. Is there a way to find out if a program is doing an excessive number of alignment fixups?
Basically this is an architectural problem in ARM, and not something developers should go through hoops to fix except in the tiny number of cases where it causes an actual, measurable problem.
Rich.
On 12/27/2013 08:32 AM, Richard W.M. Jones wrote:
I disagree that it is even a problem; except in a very small number of cases where it causes a measurable slowdown.
It's a more philosophical issue - since alignment issues arguably arise from poor programming practice in the first place, should there be pressure to not produce code that suffers from such issues?
Is there a way to find out if a program is doing an excessive number of alignment fixups?
Mostly - run with fix+warn (# echo 3 > /proc/cpu/alignment) and keep an eye on syslog (or get logwatch to do so for you).
If you have a relatively simple setup (basic LAMP server and little else), hardly anything gets logged. If you have a koji/mock farm, the syslog is flooded with the warnings. I haven't actually measured this, but my impression is that a non-trivial fraction of the alignment warnings actually occur in the test suites for various packages that support them.
Basically this is an architectural problem in ARM, and not something developers should go through hoops to fix except in the tiny number of cases where it causes an actual, measurable problem.
There are always performance drawbacks to not paying attention to alignment, since unaligned accesses can also end up straddling cache lines, which comes with a performance hit even when there is transparent alignment auto-fixup in hardware.
IIRC, SPARC and Itanium also have, or at least had, issues with unaligned accesses. I don't know if they have introduced transparent fixup in hardware since.
It's also not a case of jumping through hoops, it's a matter of not using poor practices such as allocating arrays of char for buffers and then casting them into structs.
Gordan
On 12/27/2013 05:23 PM, Gordan Bobic wrote:
I disagree that it is even a problem; except in a very small number of cases where it causes a measurable slowdown.
It's a more philosophical issue - since alignment issues arguably arise from poor programming practice in the first place, should there be pressure to not produce code that suffers from such issues?
If you make an x86 machine work through alignment issues in software, rather than let the hardware sort it out, you will pay a speed penalty. In a lot of protocol code misalignment is forced upon you by the protocol, and sorting it out in software can lead to a *considerable* speed penalty.
Regards, Steve
On 12/27/2013 09:48 AM, Steve Underwood wrote:
If you make an x86 machine work through alignment issues in software, rather than let the hardware sort it out, you will pay a speed penalty.
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
In a lot of protocol code misalignment is forced upon you by the protocol, and sorting it out in software can lead to a *considerable* speed penalty.
Fair, I can see this being one case where alignment auto-fixup is beneficial, but how many commonly used protocols are there where this is actually a problem? Has anyone ever assessed this comprehensively? Is there a list somewhere?
Gordan
On Fri, Dec 27, 2013 at 09:53:54AM +0000, Gordan Bobic wrote:
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
You can have structs straddling cache lines and causing performance problems without alignment issues, or structs being packed too close together causing false sharing, again without alignment being involved.
If alignment problems cause performance issues, then we should deal with those performance problems. If they don't, we shouldn't worry about them.
Rich.
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
On 12/27/2013 04:02 PM, Richard W.M. Jones wrote:
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
I'm very glad you mentioned compilers - this is in fact easily fixable at compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 byte, IIRC). If GCC were to implement such a feature the problem could be made to go away without actually addressing the underlying cause of the problem. It might be a bodge, but since complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
Gordan
On Friday, December 27, 2013 11:27 AM, Gordan Bobic gordan@bobich.net wrote:
I'm very glad you mentioned compilers - this is in fact easily fixable at compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 byte, IIRC). If GCC were to implement such a feature the problem could be made to go away without actually addressing the underlying cause of the problem. It might be a bodge, but since complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
I agree a good bodge would be better than nothing. It helps a multitude of platforms besides ARM (MIPS, PPC, etc.) and makes all of them run faster, which is why Intel does it in their compiler. It looks like LLVM fixes it as well.
Here is one of the best articles I have seen written on the subject: http://www.ibm.com/developerworks/library/pa-dalign/
I am NOT trying to start a compiler war, but it looks like clang -O3 actually fixes them*. It seems like it would be possible to recompile using clang, edit clang/LLVM to log when it emits fixup code, use that in Koji, and automatically file bugs with the log files (an easier route for finding them than adding a bunch of code to GCC, which would take some time, unless I missed something in GCC).
Project SAFECode (which also uses LLVM) actually injects code for better reporting of dangling pointers etc., which could be fed through abrt. But you have to execute the code and actually hit the affected section, so that is probably not a great solution since it restarts the compiler war, unless it can be done with the LLVM optimizer via the GCC dragonegg plugin.
I haven't looked in a while, are the atomics for armv5/6 supported in gcc or llvm now?
*(I tried http://en.wikipedia.org/wiki/Segmentation_fault#Unaligned_access (the bus error example) with both gcc and clang; neither worked, but with the -O3 flag clang didn't give a bus error when executed. Whether or not it is doing the right thing is another story, but it does mean it is hitting fixup code. Maybe gcc has a similar flag I haven't found?)
Sean
Gordan Bobic gordan@bobich.net wrote:
I'm very glad you mentioned compilers - this is in fact easily fixable at compiler level. Intel's ICC has an option to make all arrays and
No, if your code takes the approach to cast the struct pointer into a byte stream, the struct pointer itself can be unaligned.
Your compiler can do nothing about that, you have to touch the members using bytewise accessors to be compatible with SoCs that don't fix up unaligned access properly.
structs always aligned to a boundary (up to 16 byte, IIRC). If GCC were
to implement such a feature the problem could be made to go away without actually addressing the underlying cause of the problem. It might be a bodge, but since complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
What's wrong with you sending patches to the upstream?
-Andy
On 12/30/2013 09:58 AM, Andy Green wrote:
No, if your code takes the approach to cast the struct pointer into a byte stream, the struct pointer itself can be unaligned.
It won't fix all cases, but it will fix a large chunk of them - perhaps enough of them to make fixing the remainder a tractable problem.
Your compiler can do nothing about that, you have to touch the members using bytewise accessors to be compatible with SoCs that don't fix up unaligned access properly.
What's wrong with you sending patches to the upstream?
Nothing apart from the amount of man-months it would take to investigate all of them, write patches, and chase the upstream through to accepting them (if they are even accepted).
Gordan
Gordan Bobic gordan@bobich.net wrote:
It won't fix all cases, but it will fix a large chunk of them - perhaps enough of them to make fixing the remainder a tractable problem.
It's already tractable, you're choosing not to engage with solving it upstream.
Nothing apart from the amount of man-months it would take to investigate all of them, write patches, and chase the upstream through to accepting them (if they are even accepted).
Nonsense... a few years ago I made my own cross distro for an ARM9 device without hardware fixup, entirely from source tarballs, and there were almost no alignment issues in the major projects.
I guess they will tend to start to increase since more people are using newer ARM SoC which all have hardware fixup - but you are the backpressure against that by providing patches for the real problems you found... at least if you care about it, you should be.
-Andy
On 12/30/2013 11:54 AM, Andy Green wrote:
It's already tractable, you're choosing not to engage with solving it upstream.
I'll enumerate the instances of this next time I'm doing a RedSleeve rebuild (might start this week when I resurrect my Koji farm of armv5tel devices). Last time I checked the number of instances logged was in the hundreds - sufficiently high that I just gave up.
Nonsense... a few years ago I made my own cross distro for an ARM9 device without hardware fixup, entirely from source tarballs, and there were almost no alignment issues in the major projects.
I did the same 18 months ago, and my experience was distinctly different. Thankfully, with the kernel-level alignment fixup at least building the distro was tractable.
I guess they will tend to start to increase since more people are using newer ARM SoC which all have hardware fixup - but you are the backpressure against that by providing patches for the real problems you found... at least if you care about it, you should be.
A fair point well made, but I don't think we entirely agree on the scale of the problem.
Gordan
Gordan Bobic gordan@bobich.net wrote:
I'll enumerate the instances of this next time I'm doing a RedSleeve rebuild (might start this week when I resurrect my Koji farm of armv5tel devices). Last time I checked the number of instances logged was in the hundreds - sufficiently high that I just gave up.
Yeah, but is that hundreds of instances of one bug in one package, or one instance each of a hundred bugs in different packages? If it's in a library it might show up in a few different processes but still be one bug.
If it's in glibc it might show up many times in one session different ways but still be one issue.
Did you try catching the sigbus or whatever you're getting in gdb?
-Andy
On 12/30/2013 07:54 PM, Andy Green wrote:
I guess they will tend to start to increase since more people are using newer ARM SoC which all have hardware fixup - but you are the backpressure against that by providing patches for the real problems you found... at least if you care about it, you should be.
-Andy
In recent years I think most instances of misalignment in packages have been picked up by OpenWrt/DD-WRT/Tomato etc. users, as most routers have MIPS processors, and most of those can't correct misalignment. So far routers seem to be sticking with MIPS cores; if they move to ARM I can't think of anything that will keep the number of misalignment issues down.
You can't blame programmers for leaving alignment issues in their code. I try to keep my code alignment-safe, but without a test platform where alignment matters I just can't tell if I have missed something. You can blame programmers if they won't make a real effort to flush out those problems when they are reported. Things like autoconf tests make it easy to use special handling of misalignment where it is needed and let the hardware handle it where it can, but it is hard to ensure you have caught every instance where the optional processing needs to occur.
Regards, Steve
Steve Underwood steveu@coppice.org wrote:
In recent years i think most instances of misalignment in packages has been picked up by openwrt/ddwrt/tomato/etc users, as most routers have MIPS processors, and most of these can't correct misalignment. So far the routers seem to be continuing with MIPS cores. If they move to ARM I can't think of anything which will keep the number of misalignment issues down.
Yes I think those guys and older arm arch guys have been responsible for keeping everything almost clean until now.
You can't blame programmers for leaving alignment issues in their code. I try to keep my code alignment safe, but without a test platform where alignment matters I just can't tell if I have missed something. You can blame programmers if they won't make a real effort to flush out those problems when they are reported.
Yeah I think until you realize why and how it's a problem (in other words, you got bitten) most programmers wouldn't particularly think to defend against it because the code is c-legal and works on x86.
-Andy
On Mon, Dec 30, 2013 at 10:27:38PM +0800, Andy Green wrote:
Yeah I think until you realize why and how it's a problem (in other words, you got bitten) most programmers wouldn't particularly think to defend against it because the code is c-legal and works on x86.
That's because it *isn't* a problem.
We shouldn't worry about misalignment problems in Fedora ARM unless you can demonstrate with hard numbers that a particular misalignment causes a performance issue.
Set alignment to fixup, and forget about it.
Rich.
On 01/01/2014 10:18 AM, Richard W.M. Jones wrote:
Set alignment to fixup, and forget about it.
How dare anyone suggest the developers be educated and the problem be fixed rather than worked around.
Gordan
On Wed, Jan 01, 2014 at 10:22:29AM +0000, Gordan Bobic wrote:
How dare anyone suggest the developers be educated and the problem be fixed rather than worked around.
There's nothing to educate about. It's a non-problem except in a narrow performance case.
Most developers don't need to worry about swap, for precisely the same reason. Alignment fixups, swapping, cache lines, TLBs, huge pages, ... are details of the implementation, and you don't need to think about them unless you're chasing a performance problem.
Rich.
On Wednesday, January 1, 2014 6:52 AM, Richard W.M. Jones rjones@redhat.com wrote:
How dare anyone suggest the developers be educated and the problem be fixed rather than worked around.
There's nothing to educate about. It's a non-problem except in a narrow performance case.
Most developers don't need to worry about swap, for precisely the same reason. Alignment fixups, swapping, cache lines, TLBs, huge pages, ... are details of the implementation, and you don't need to think about them unless you're chasing a performance problem.
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platforms for alignment errors; they just fix them up in hardware so it isn't as big a performance hit. It might be 5 cycles instead of 20.
C is a language where the programmer has to know what they are doing. You don't depend on the compiler to do the "right thing". In other words, you get the keys; if you decide to drive into a brick wall at 100 mph, it will let you. It doesn't care. You fsck'd up.
You aren't a good programmer if you are knowingly introducing errors into your program, relying on hardware to fix them, and then making the excuse that since the hardware fixes the problem it isn't a problem. That is just lame.
It used to be that the trick was identifying the issue, and most projects would fix it up sooner rather than later, because it is rather embarrassing.
On Wed, Jan 01, 2014 at 12:21:30PM -0800, Sean Omalley wrote:
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platform for alignment errors, they just fix them up in hardware so it isn't as big of a performance hit. It might be 5 cycles instead of 20.
On Intel Sandybridge and up there is no penalty:
http://www.agner.org/optimize/blog/read.php?i=142&v=t
On earlier Intel processors it's not significant:
http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-r...
Anyway, you are optimizing far too early. If there's a performance problem, run 'perf', find out that it's caused by X where X might be the big misalignment penalty on ARM or many other things, then fix that.
There's no need to go on a huge crusade to fix every last mis- alignment, because that will involve vast hours of programmer effort for no measurable gain.
Rich.
On 01/01/2014 09:09 PM, Richard W.M. Jones wrote:
On Wed, Jan 01, 2014 at 12:21:30PM -0800, Sean Omalley wrote:
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platform for alignment errors, they just fix them up in hardware so it isn't as big of a performance hit. It might be 5 cycles instead of 20.
On Intel Sandybridge and up there is no penalty:
http://www.agner.org/optimize/blog/read.php?i=142&v=t
On earlier Intel processors it's not significant:
http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-r...
On ARM without hardware fixup it's huge. Using the test in the second link on Kirkwood (which is much more advanced and faster than standard armv5tel, thanks to Intel's improvements before they sold the division to Marvell), the results are:
# ./test
processing word of size 4
offset = 0
average time for offset 0 is 207.45
offset = 1
average time for offset 1 is 7511.55
offset = 2
average time for offset 2 is 7511.35
offset = 3
average time for offset 3 is 7511.55

processing word of size 8
offset = 0
average time for offset 0 is 414.8
offset = 1
average time for offset 1 is 12340.2
offset = 2
average time for offset 2 is 12338.5
offset = 3
average time for offset 3 is 12343.8
offset = 4
average time for offset 4 is 414.2
offset = 5
average time for offset 5 is 12337.5
offset = 6
average time for offset 6 is 12339.9
offset = 7
average time for offset 7 is 12337.4
That's a 36x and 29x slowdown with: echo 2 > /proc/cpu/alignment
If you use 3 (fixup+warn) it gets an order of magnitude worse because syslog eats all the CPU logging the warnings.
I suspect the numbers on the Pi would be similarly bad, but I don't have one so can't test that. I'll get some numbers for an ARMv7 machine later.
Anyway, you are optimizing far too early.
It is better to optimize too early than it is to pre-emptively code in blissful ignorance of what goes on underneath.
If there's a performance problem, run 'perf', find out that it's caused by X where X might be the big misalignment penalty on ARM or many other things, then fix that.
There's no need to go on a huge crusade to fix every last misalignment, because that will involve vast hours of programmer effort for no measurable gain.
Maybe so, but that doesn't mean that past errors should be considered as a precedent and all code henceforth should also be written without any alignment consideration, especially considering it has the potential to be dangerous (e.g. on hardware without alignment auto-fix up with kernels that don't default to auto-fixing alignment).
Gordan
On 2014-01-01 21:09, Richard W.M. Jones wrote:
On Wed, Jan 01, 2014 at 12:21:30PM -0800, Sean Omalley wrote:
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platform for alignment errors, they just fix them up in hardware so it isn't as big of a performance hit. It might be 5 cycles instead of 20.
On Intel Sandybridge and up there is no penalty:
http://www.agner.org/optimize/blog/read.php?i=142&v=t
On earlier Intel processors it's not significant:
http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-r...
Anyway, you are optimizing far too early. If there's a performance problem, run 'perf', find out that it's caused by X where X might be the big misalignment penalty on ARM or many other things, then fix that.
I have just run the test on my Samsung Chromebook (A15) and the results are concerning:
processing word of size 8
offset = 0
ignore this: average time for offset 0 is 77.95
offset = 1
ignore this: average time for offset 1 is 3465.2
offset = 2
ignore this: average time for offset 2 is 3454.25
offset = 3
ignore this: average time for offset 3 is 3451.2
That is 44x slower.
More concerningly, the counters in /proc/cpu/alignment are counting the misalignments (set to 2 - single fixup), which I thought wasn't supposed to happen on ARMv7 since the fixup is transparently happening in hardware without visibility further up.
Note: /proc/cpu/alignment is not settable to 0 (ignore) - forcing it to 0 still results in setting of 2 (fixup), and setting it to 1 (warn) results in setting of 3 (fixup+warn).
This could be a feature of the 3.4.0 ChromeOS kernel, but if it isn't, that would imply that although the alignment fixup does happen in hardware (and cannot be disabled), there is still a massive performance hit.
Is this a Samsung Exynos 5250 related bug? Or is this the expected behaviour?
I'll try to dig out a Tegra2 machine and see how that compares, but thus far it is not looking good at all.
There's no need to go on a huge crusade to fix every last misalignment, because that will involve vast hours of programmer effort for no measurable gain.
I am currently doing a mass rebuild and will compile a list of all packages I find that are exhibiting unaligned accesses. So far they include some important ones, such as nss.
Gordan
On Mon, Jan 20, 2014 at 02:45:59PM +0000, Gordan Bobic wrote:
On 2014-01-01 21:09, Richard W.M. Jones wrote:
On Wed, Jan 01, 2014 at 12:21:30PM -0800, Sean Omalley wrote:
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platform for alignment errors, they just fix them up in hardware so it isn't as big of a performance hit. It might be 5 cycles instead of 20.
On Intel Sandybridge and up there is no penalty:
http://www.agner.org/optimize/blog/read.php?i=142&v=t
On earlier Intel processors it's not significant:
http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-r...
Anyway, you are optimizing far too early. If there's a performance problem, run 'perf', find out that it's caused by X where X might be the big misalignment penalty on ARM or many other things, then fix that.
I have just run the test on my Samsung Chromebook (A15) and the results are concerning:
processing word of size 8
offset = 0
ignore this: average time for offset 0 is 77.95
offset = 1
ignore this: average time for offset 1 is 3465.2
offset = 2
ignore this: average time for offset 2 is 3454.25
offset = 3
ignore this: average time for offset 3 is 3451.2
That is 44x slower.
Is this a synthetic benchmark, or is some actual running code from Fedora 44x slower?
I never said that fixups were free; obviously going in and out of the kernel to emulate an instruction is going to take some time. The question is whether it noticeably affects any code.
Rich.
On 2014-01-20 14:51, Richard W.M. Jones wrote:
On Mon, Jan 20, 2014 at 02:45:59PM +0000, Gordan Bobic wrote:
On 2014-01-01 21:09, Richard W.M. Jones wrote:
On Wed, Jan 01, 2014 at 12:21:30PM -0800, Sean Omalley wrote:
They are a problem. It is a performance issue at the very least on =ALL= platforms. There is a cost even on Intel's platform for alignment errors, they just fix them up in hardware so it isn't as big of a performance hit. It might be 5 cycles instead of 20.
On Intel Sandybridge and up there is no penalty:
http://www.agner.org/optimize/blog/read.php?i=142&v=t
On earlier Intel processors it's not significant:
http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-r...
Anyway, you are optimizing far too early. If there's a performance problem, run 'perf', find out that it's caused by X where X might be the big misalignment penalty on ARM or many other things, then fix that.
I have just run the test on my Samsung Chromebook (A15) and the results are concerning:
processing word of size 8
offset = 0
ignore this: average time for offset 0 is 77.95
offset = 1
ignore this: average time for offset 1 is 3465.2
offset = 2
ignore this: average time for offset 2 is 3454.25
offset = 3
ignore this: average time for offset 3 is 3451.2
That is 44x slower.
Is this a synthetic benchmark, or is some actual running code from Fedora 44x slower?
This is based on the test on the page you posted a link to above.
I never said that fixups were free, obviously going in and out of the kernel to emulate an instruction is going to take some time.
You seemed to imply it above by saying that the penalty on recent x86 is non-existent on Sandy Bridge and insignificant on slightly less recent x86 CPUs.
The question is whether it noticeably affects any code.
It certainly seems to affect the nss build process quite badly, specifically the test stage (which, concerningly, actually fails some tests on ARM). Whether it affects the runtime I don't know - I don't think I use it; the only crypto-related packages I use are OpenSSH and mod_ssl, both of which, AFAIK, link against OpenSSL rather than nss.
Gordan
On Mon, Jan 20, 2014 at 03:17:43PM +0000, Gordan Bobic wrote:
On 2014-01-20 14:51, Richard W.M. Jones wrote:
I never said that fixups were free, obviously going in and out of the kernel to emulate an instruction is going to take some time.
You seemed to imply it above by saying that the penalty on recent x86 is non-existent on Sandy Bridge and insignificant on slightly less recent x86 CPUs.
I'm failing to see what Intel Sandybridge has to do with the ARM Cortex-A15 chips in Chromebooks, but anyway ...
The question is whether it noticeably affects any code.
It certainly seems to affect the nss build process quite badly, specifically the test stage (which, concerningly, actually fails some tests on ARM). Whether it affects the runtime I don't know - I don't think I use it; the only crypto-related packages I use are OpenSSH and mod_ssl, both of which, AFAIK, link against OpenSSL rather than nss.
OK, sounds like nss needs to be fixed.
Rich.
"Richard W.M. Jones" rjones@redhat.com wrote:
On Mon, Dec 30, 2013 at 10:27:38PM +0800, Andy Green wrote:
Yeah I think until you realize why and how it's a problem (in other words, you got bitten) most programmers wouldn't particularly think to defend against it because the code is c-legal and works on x86.
That's because it *isn't* a problem.
We shouldn't worry about misalignment problems in Fedora ARM unless you can demonstrate with hard numbers that a particular misalignment causes a performance issue.
Set alignment to fixup, and forget about it.
Since Fedora ARM made the decision to only target ARMv7+, that's reasonable from that perspective. I think everything in that class fixes them up in hardware anyway.
There are plenty of chips outside that though and if those users are bothered by the fixups they can send patches.
-Andy
Rich.
On 12/28/2013 12:27 AM, Gordan Bobic wrote:
On 12/27/2013 04:02 PM, Richard W.M. Jones wrote:
On Fri, Dec 27, 2013 at 09:53:54AM +0000, Gordan Bobic wrote:
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
You can have structs straddling cache lines and causing performance problems without alignment issues, or structs being packed too close together causing false sharing again w/o alignment being involved.
If alignment problems cause performance issues, then we should deal with those performance problems. If they don't, we shouldn't worry about them.
Rich.
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
I'm very glad you mentioned compilers - this is in fact easily fixable at the compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 bytes, IIRC). If GCC were to implement such a feature, the problem could be made to go away without actually addressing its underlying cause. It might be a bodge, but since a complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
Gordan
How is this in any way related to the alignment problem? All C compilers align arrays and structures in sensible ways by default. They have to; it's a requirement of the C language. Problems come from things like pointing directly at elements in communication structures, which may not be naturally aligned. They can also come from overriding the default alignment of arrays and structures, which most compilers permit these days, with a variety of constructs like "#pragma pack(1)"
Regards, Steve
On 12/30/2013 10:08 AM, Steve Underwood wrote:
On 12/28/2013 12:27 AM, Gordan Bobic wrote:
On 12/27/2013 04:02 PM, Richard W.M. Jones wrote:
On Fri, Dec 27, 2013 at 09:53:54AM +0000, Gordan Bobic wrote:
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
You can have structs straddling cache lines and causing performance problems without alignment issues, or structs being packed too close together causing false sharing again w/o alignment being involved.
If alignment problems cause performance issues, then we should deal with those performance problems. If they don't, we shouldn't worry about them.
Rich.
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
I'm very glad you mentioned compilers - this is in fact easily fixable at the compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 bytes, IIRC). If GCC were to implement such a feature, the problem could be made to go away without actually addressing its underlying cause. It might be a bodge, but since a complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
Gordan
How is this in any way related to the alignment problem? All C compilers align arrays and structures in sensible ways by default. They have to; it's a requirement of the C language. Problems come from things like pointing directly at elements in communication structures, which may not be naturally aligned. They can also come from overriding the default alignment of arrays and structures, which most compilers permit these days, with a variety of constructs like "#pragma pack(1)"
It's related because char[] is byte aligned rather than word-aligned. In some cases (e.g. e2fsprogs) a buffer is declared as char[4096], which means that when it's cast into a struct, struct elements won't be suitably aligned. If the compiler were to align all arrays to a 16-byte boundary, this wouldn't be an issue.
I accept that in some cases, having arrays and structures unaligned may be useful (e.g. wire protocol packet parsing), but in most cases it seems to be just lazy or uninformed programming. In those cases the compiler aligning everything to a 16-byte boundary would help.
Gordan
On 12/30/2013 07:41 PM, Gordan Bobic wrote:
On 12/30/2013 10:08 AM, Steve Underwood wrote:
On 12/28/2013 12:27 AM, Gordan Bobic wrote:
On 12/27/2013 04:02 PM, Richard W.M. Jones wrote:
On Fri, Dec 27, 2013 at 09:53:54AM +0000, Gordan Bobic wrote:
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
You can have structs straddling cache lines and causing performance problems without alignment issues, or structs being packed too close together causing false sharing again w/o alignment being involved.
If alignment problems cause performance issues, then we should deal with those performance problems. If they don't, we shouldn't worry about them.
Rich.
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
I'm very glad you mentioned compilers - this is in fact easily fixable at the compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 bytes, IIRC). If GCC were to implement such a feature, the problem could be made to go away without actually addressing its underlying cause. It might be a bodge, but since a complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
Gordan
How is this in any way related to the alignment problem? All C compilers align arrays and structures in sensible ways by default. They have to; it's a requirement of the C language. Problems come from things like pointing directly at elements in communication structures, which may not be naturally aligned. They can also come from overriding the default alignment of arrays and structures, which most compilers permit these days, with a variety of constructs like "#pragma pack(1)"
It's related because char[] is byte aligned rather than word-aligned. In some cases (e.g. e2fsprogs) a buffer is declared as char[4096], which means that when it's cast into a struct, struct elements won't be suitably aligned. If the compiler were to align all arrays to a 16-byte boundary, this wouldn't be an issue.
I accept that in some cases, having arrays and structures unaligned may be useful (e.g. wire protocol packet parsing), but in most cases it seems to be just lazy or uninformed programming. In those cases the compiler aligning everything to a 16-byte boundary would help.
Gordan
Do you know a compiler for anything that isn't really tiny (like an 8 bit MCU) which doesn't align character arrays to start at a multiple of something meaningful (e.g. 4, 8, or 16)? It helps a lot with character 0, but it doesn't help a whole lot with character 1.
I think it's actually quite rare to find a misalignment problem that isn't related to working with multi-byte values in a buffer which is an image of something external, like a comms buffer or an image of something from disk.
Regards, Steve
On 12/30/2013 11:56 AM, Steve Underwood wrote:
On 12/30/2013 07:41 PM, Gordan Bobic wrote:
On 12/30/2013 10:08 AM, Steve Underwood wrote:
On 12/28/2013 12:27 AM, Gordan Bobic wrote:
On 12/27/2013 04:02 PM, Richard W.M. Jones wrote:
On Fri, Dec 27, 2013 at 09:53:54AM +0000, Gordan Bobic wrote:
How is transparent alignment fixup going to give you back the performance you lose from accesses straddling cache lines?
You can have structs straddling cache lines and causing performance problems without alignment issues, or structs being packed too close together causing false sharing again w/o alignment being involved.
If alignment problems cause performance issues, then we should deal with those performance problems. If they don't, we shouldn't worry about them.
Rich.
ObHack: I once worked with an architecture [68k-based VME hardware] that not only faulted on unaligned access, but also on accesses of the wrong *size* (eg. using a short-sized read instruction instead of a word-sized read instruction). Dealing with that nonsense involved a lot of compiler-specific massaging of code and some inline assembly ...
I'm very glad you mentioned compilers - this is in fact easily fixable at the compiler level. Intel's ICC has an option to make all arrays and structs always aligned to a boundary (up to 16 bytes, IIRC). If GCC were to implement such a feature, the problem could be made to go away without actually addressing its underlying cause. It might be a bodge, but since a complete fix of the underlying problem isn't going to happen anyway, a good bodge would be a lot better than doing nothing.
Gordan
How is this in any way related to the alignment problem? All C compilers align arrays and structures in sensible ways by default. They have to; it's a requirement of the C language. Problems come from things like pointing directly at elements in communication structures, which may not be naturally aligned. They can also come from overriding the default alignment of arrays and structures, which most compilers permit these days, with a variety of constructs like "#pragma pack(1)"
It's related because char[] is byte aligned rather than word-aligned. In some cases (e.g. e2fsprogs) a buffer is declared as char[4096], which means that when it's cast into a struct, struct elements won't be suitably aligned. If the compiler were to align all arrays to a 16-byte boundary, this wouldn't be an issue.
I accept that in some cases, having arrays and structures unaligned may be useful (e.g. wire protocol packet parsing), but in most cases it seems to be just lazy or uninformed programming. In those cases the compiler aligning everything to a 16-byte boundary would help.
Gordan
Do you know a compiler for anything that isn't really tiny (like an 8 bit MCU) which doesn't align character arrays to start at a multiple of something meaningful (e.g. 4, 8, or 16)? It helps a lot with character 0, but it doesn't help a whole lot with character 1.
I think it's actually quite rare to find a misalignment problem that isn't related to working with multi-byte values in a buffer which is an image of something external, like a comms buffer or an image of something from disk.
GCC 4.4.x didn't seem to, otherwise some of the errors I was looking at wouldn't have come up. malloc() seems to allocate 4-byte aligned on ARM, but char[] seems to be byte aligned.
Gordan
On 12/23/2013 01:12 PM, Richard W.M. Jones wrote:
I think we should be honest about the real reason: Either we have to maintain two sets of packages or we have to make everyone on the newer and faster armv7 suffer with unoptimized binaries, and we don't want to do either of those things.
That's true, but it's not the only reason. There is also the issue of the kernel not being upstream and the firmware not being Fedora-compatible (This latter point might have been remedied, haven't looked in a while). There are likely other reasons as well.