On 2016-04-28 19:49, Jon Masters wrote:
Hi Gordan, Peter, all,
On 04/27/2016 03:39 PM, Gordan Bobic wrote:
> On 2016-04-27 19:12, John Dulaney wrote:
>> On Wed, Apr 27, 2016 at 05:04:38PM +0100, Gordan Bobic wrote:
>>> > Maybe that's something that CentOS have added (don't know, haven't
>>> > looked), RHELSA doesn't support it that I'm aware of, and is
>>> > definitely 64K page size only. The biggest change is in rpm and the
>>> > arch mappings there.
>>> They might not support it, but it most certainly works. There are no
>>> changes specific to this that I can find in CentOS. All I changed was
>>> rebuilding the host kernel with 4KB pages and ARM32 support (still an
>>> aarch64 kernel). The C7 armv7hl guest is completely unmodified apart
>>> from the /etc/rpm/platform being set explicitly.
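For reference, a minimal sketch of the setup described above. The Kconfig
option names are from mainline arm64, and the platform string is a guess
for a C7 armv7hl guest — both are assumptions, not taken from the
original mail:

```
# aarch64 host kernel .config changes:
CONFIG_ARM64_4K_PAGES=y    # instead of CONFIG_ARM64_64K_PAGES
CONFIG_COMPAT=y            # enable 32-bit (AArch32) userspace support

# inside the armv7hl guest, pin the rpm platform explicitly:
# /etc/rpm/platform
armv7hl-redhat-linux-gnu
```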
First of all, Jon, thank you for your thoughts on this matter.
Allow me to add a few thoughts. I have been working with ARM (as well
as the ARM Architecture Group) since before the architecture was
announced, and the issue of page size and 32-bit backward compatibility
came up in the earliest days. I am speaking from a Red Hat perspective
and NOT dictating what Fedora should or must do, but I do strongly
encourage Fedora not to make a change to something like the page size
simply to support a (relatively) small number of corner cases.
IMO, the issue of backward compatibility is completely secondary to
the issue of memory efficiency (fragmentation and occupancy) when it
comes to 64KB pages. And that isn't a corner case, it is the
overwhelmingly common case.
It is better to focus on the longer term trajectory, which the
handset market demonstrates: the transition to 64-bit computing
will be much faster than people thought, and we don't need to build a
legacy (we don't have a 32-bit app store filled with things that can't
be rebuilt, and all of them have been anyway).
I think going off on a tangent about mobile devices needlessly
muddies the water here. 64-bitness is completely independent of memory
page size and the pros and cons of different sizes. If anything, on
mobile devices where memory is scarcer, smaller pages will result in
lower fragmentation and less wasted memory.
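To put rough numbers on the fragmentation point: if allocation tails
are uniformly distributed, each private mapping wastes about half a
page of internal fragmentation on average. The mapping count below is
an illustrative assumption, not a measurement:

```python
# Sketch: expected internal fragmentation for two page granules,
# assuming allocation tails are uniformly distributed so each mapping
# wastes ~half a page on average. 1000 mappings is illustrative only.

def avg_waste_bytes(page_size, n_mappings):
    """Average internal fragmentation: roughly half a page per mapping."""
    return (page_size // 2) * n_mappings

waste_4k = avg_waste_bytes(4096, 1000)     # roughly 2 MB wasted
waste_64k = avg_waste_bytes(65536, 1000)   # roughly 33 MB wasted
print(waste_4k, waste_64k, waste_64k // waste_4k)
```

Under these assumptions a 64K granule wastes 16x as much memory to
internal fragmentation as a 4K granule, which matters most exactly
where memory is scarce.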
That doesn't mean we shouldn't love 32-bit ARM devices, which we do. In
fact, there will be many more 32-bit ARM devices over coming years. This
is especially true for IoT clients. But there will also be a large (and
rapidly growing) number of very high performance 64-bit systems. Many of
those will not have any 32-bit backward compatibility, or will disable
it in the interest of reducing the amount of validation work. Having an
entirely separate set of ISAs just for the fairly nonexistent field of
proprietary non-recompilable third party 32-bit apps doesn't really make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.
Except where there's no choice, such as closed source applications
(Plex comes to mind) or libraries without appropriate ARM64 support,
such as Mono. I'm sure pure aarch64 will be supported by it all at
some point, but the problem is real today.
But OK, for the sake of this discussion let's completely ignore the
32-bit support to simplify things.
The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces, huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In the
server specifications, we don't compel Operating Systems to use 64K, but
everything is written with that explicitly in mind. By using 64K early
on, we ensure that it is possible to do so in a very clean way, and then
if (over the coming years) the deployment of sufficient real systems
shows that this was a premature decision, we still have 4K.
The real question is how much code will bit-rot due to not being
tested with 4KB pages, and how difficult it will be to subsequently
push through patches all the way from upstream projects down to the
level of the distros we are all fans of here. And even then, the
consequence will be software that is broken for anyone who has a
need to do anything but the straight-and-narrow case that the
distro maintainers envisaged.
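One concrete defence against this kind of bit-rot is to never hardcode
the page size. A minimal sketch of the portable pattern, querying the
granule at runtime (the 100,000-byte length is an arbitrary example):

```python
# Sketch: code meant to survive a granule change should query the page
# size at runtime rather than assuming 4096 -- hardcoded 4K assumptions
# are exactly the kind of latent breakage described above.
import os

page_size = os.sysconf("SC_PAGE_SIZE")  # 4096 on a 4K kernel, 65536 on 64K

# mmap lengths and file offsets must be aligned to the real page size:
want = 100_000
aligned = (want + page_size - 1) // page_size * page_size
print(page_size, aligned)
```

The same principle applies in C via sysconf(_SC_PAGESIZE); software
that bakes in 4096 is what rots when a distro ships a 64K kernel.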
The choices for preferred page size were between 4K and 64K. In the
interest of transparency, I pushed from the RH side in the earliest days
(before public disclosure) to introduce an intentional break with the
past and support only 64K on ARMv8.
Breaking with the past is all well and good, but I am particularly
interested in the technical reasons for doing so. What benefits exceed
the drawbacks of the significant increase in fragmentation in the
general case (apart from databases - and for those we have huge pages
anyway)?
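The huge-page aside can be quantified: with 8-byte page-table
descriptors, a next-level block mapping covers page_size**2 / 8 bytes,
which is why the granule choice changes what "huge" means on ARMv8:

```python
# Sketch: size of a level-2 block mapping ("huge page") per granule.
# Each table page holds page_size / 8 descriptors of 8 bytes, and each
# descriptor at the next level up maps one page, so:
#   block size = (page_size / 8) * page_size = page_size**2 / 8

def block_size_mib(page_size):
    return page_size * page_size // 8 // (1024 * 1024)

print(block_size_mib(4096))    # 2 MiB huge pages with a 4K granule
print(block_size_mib(16384))   # 32 MiB with a 16K granule
print(block_size_mib(65536))   # 512 MiB with a 64K granule
```

So under a 4K granule the familiar 2 MiB huge page is available, while
a 64K granule jumps straight from 64K pages to 512 MiB blocks (plus the
contiguous-hint sizes in between).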
I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have indeed
omitted it after we discussed the needs early on), and am aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes than
4K should generally be adopted for a new arch. Ideally that would have
been 16K (which architectures other than x86 also went with), but that
was optional. Optional necessarily means "does not exist". My advice
when Red Hat began internal work on ARMv8 was to listen to the experts.
Linus is not an expert?
I am well aware of Linus's views on the topic and I have seen the
threads on G+ and elsewhere. I am completely willing to be wrong (there
is not enough data yet) about moving to 64K too soon, and if it
ultimately proves premature, to see things like RHELSA on the Red Hat
side switch back to 4K.
My main concern is around how much code elsewhere will rot and need
attention should this ever happen.
Fedora is its own master, but I strongly encourage retaining the use of
64K granules at this time, and letting it play out rather than
responding to one or two corner use cases and changing course. There are
very many design optimizations that can be done when you have a 64K page
size, from the way one can optimize cache lookups and hardware page
table walker caches to the reduction of TLB pressure (though I accept
that huge pages are an answer for this under a 4K granule regime as
well). It would be nice to blaze a trail rather than take the safe
default.
While I agree with the sentiment, I think something like this is
better decided on carefully considered merit, assessed through actual
measurement.
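The TLB-pressure argument above comes down to simple arithmetic: TLB
reach is the entry count times the page size. The 512-entry figure
below is an illustrative assumption, not a number from any particular
core:

```python
# Sketch: "TLB reach" -- how much address space a fully populated TLB
# covers -- for two granules. 512 entries is an assumed, illustrative
# TLB size.

def tlb_reach_mib(entries, page_size):
    return entries * page_size // (1024 * 1024)

print(tlb_reach_mib(512, 4096))    # 2 MiB of reach with a 4K granule
print(tlb_reach_mib(512, 65536))   # 32 MiB of reach with a 64K granule
```

A 16x difference in reach for the same hardware budget is the core of
the 64K performance case; huge pages recover much of it under 4K, at
the cost of extra management.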
My own opinion is that (in the longer term, beginning with server) ARM
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to it,
containers running 32-bit applications with 4K pages exposed to them -
such an implementation would be a bit like "Clear" containers today) to
run 32-bit applications on 64-bit without having to do nasty hacks (such
as multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and we have the
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model equally applies to
ILP32, which is another 32-bit ABI that some folks like, in that a third
party group is welcome to do all of the lifting required.
This again conflates 32-bit support with page size. If there is no
32-bit support in the CPU, I am reasonably confident that QEMU
emulation of it will be unusably slow for just about any serious
use case (you might as well run QEMU emulation of ARM32 on x86
in that case and not even touch aarch64).
>>> The main point being that the original assertion that making this
>>> work would require rpm, yum, packagekit, mock and other code changes
>>> doesn't seem to be correct based on empirical evidence.
>> It may work with rpm, but, as per the original post, dnf does not
>> support it, and dnf should not support it as long as Fedora
>> does not support a 32 bit userspace on aarch64.
It's a lot of lifting to support validating a 32-bit userspace for a
brand new architecture that doesn't need to have that legacy. Sure, it's
convenient, and you're obviously more than capable of building a kernel
with a 4K page size and doing whatever you need for yourself. That's the
beauty of open source. It lets you have a 32-bit userspace on a 64-bit
device without needing to support that for everyone else.
Sure, that is the beauty of open source. But will Fedora accept
patches for fixing things that break during such independent
validation? My experience with Fedora patch submissions has
been very poor in the past - the typical outcome being that the
bug will sit and rot in bugzilla until the distro goes EOL and
the bug zapper closes it. That is hugely demotivating.
> 2) Nobody has yet pointed at ARM's own documentation (I did
> earlier) that says that 4KB memory page support is optional
> rather than mandatory.
Nobody said this was a requirement. I believe you raised this as some
kind of logical fallacy to reinforce the position that you have taken.
I'm afraid you got that backwards. I believe it was Peter that
said that Seattle didn't support 4KB pages, seemingly implied
as a means of justifying the use of 64KB pages:
> And if 4KB support is in fact mandatory, then arguably the
> decision to opt for 64KB for the sake of supporting Seattle was
> based on wanting to support broken hardware that turned out to
> be too little too late anyway.
Seattle was incredibly well designed by a very talented team of
engineers at AMD, who know how to make servers. They did everything
fully in conformance with the specifications we coauthored for v8. It is
true that everyone would have liked to see low cost mass market Seattle
hardware in wide distribution. For the record, last week I received one
of the preproduction "Cello" boards ($300), for which a few kinks are
being resolved before it goes into mass production soon.
If Seattle does in fact support the spec-mandatory 4KB memory
pages, then that specific SoC is no longer relevant to this
discussion.
> missing 32-bit support doesn't have to be fully emulated in
> software, or the entire argument being made for VMs instead
> of chroots is entirely erroneous.
Nobody said there wasn't a performance hit using virtualization.
Depending upon how you measure it, it's about 3-10% overhead or so
to use KVM (or Xen for that matter) on ARMv8. That doesn't make it an
erroneous argument that running a VM is an easier exercise in
distribution validation and support: you build one 64-bit distro, you
build one 32-bit distro. You don't have to support a mixture. In a few
years, we'll all be using 64-bit ARM SoCs in every $10 device, only
running native 64-bit ARMv8 code, and wondering why it was ever an
issue that we might want multilib. We'll have $1-$2 IoT widgets that are
32-bit, but that's another matter. There's no legacy today, so let's
concentrate on not building one and learning from history.
I am talking about the specific case of using
armv7hl (or armv5tel) VMs on aarch64 hardware that doesn't
implement 32-bit ARM support (and you suggest above that not
supporting ARM32 on ARM64 hardware may be a good thing). But that
also means that there is no advantage to running an armv7hl distro
on aarch64 hardware without legacy support, so the whole VM notion
is out of scope since it isn't virtualization, it is emulation. And
at that point there is no advantage to running an emulator on an
aarch64 machine over an x86-64 machine.
The point being that if there's no legacy 32-bit support in
hardware, it's not going to be workable anyway. If there is
legacy 32-bit support in hardware, running it in a chroot
or in a docker container might not be outright supported (I
get it, there are only so many maintainers and testers) but
at the very least external, user provided validation, patches
and questions and bug reports should be treated with something
other than contempt.