On Fri, 9 Jan 2009 16:45:11 -0500 Bill Nottingham notting@redhat.com wrote:
Here's some proposed config changes.
-CONFIG_HAMRADIO=y
+# CONFIG_HAMRADIO is not set
hamradio was enabled because of a user request.
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
thanks, Gerd
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes. Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Dave
Dave Jones wrote:
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes.
Do you have bug numbers at hand? Searching for CONFIG_DMAR gives me "Zarro Boogs found".
Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Well, it works for me. Single test box only though. As the machine in question actually has memory above 4G it is a nice speedup for kernel builds (sys time going down from ~15min to ~5min).
cheers, Gerd
On Tue, Jan 13, 2009 at 10:21:52PM +0100, Gerd Hoffmann wrote:
Dave Jones wrote:
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes.
Do you have bug numbers at hand? Searching for CONFIG_DMAR gives me "Zarro Boogs found".
Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Well, it works for me. Single test box only though. As the machine in question actually has memory above 4G it is a nice speedup for kernel builds (sys time going down from ~15min to ~5min).
I've just enabled it (and GFX_WA and FLOPPY_WA on x86_64.)
(I'm surprised it's x86_64 only; surely virtualization-enabled i386 could benefit from DMA protection?)
* Dave Jones (davej@redhat.com) wrote:
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes. Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
thanks, -chris
Chris Wright wrote:
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
Like the attached patch? Disclaimer: untested, build still running ...
thanks, Gerd
diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index 235fb7a..ddd0f31 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -268,7 +268,8 @@ static long list_size;
 
 static void domain_remove_dev_info(struct dmar_domain *domain);
 
-int dmar_disabled;
+/* default-off for now because it blows up on some machines due to bios bugs */
+int dmar_disabled = 1;
 static int __initdata dmar_map_gfx = 1;
 static int dmar_forcedac;
 static int intel_iommu_strict;
@@ -287,6 +288,9 @@ static int __init intel_iommu_setup(char *str)
 	if (!strncmp(str, "off", 3)) {
 		dmar_disabled = 1;
 		printk(KERN_INFO"Intel-IOMMU: disabled\n");
+	} else if (!strncmp(str, "on", 2)) {
+		dmar_disabled = 0;
+		printk(KERN_INFO"Intel-IOMMU: enabled\n");
 	} else if (!strncmp(str, "igfx_off", 8)) {
 		dmar_map_gfx = 0;
 		printk(KERN_INFO
On Tue, Jan 13, 2009 at 01:39:13PM -0800, Chris Wright wrote:
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
Do you know enough about the failure cases to DMI quirk it, plus an opt-out command line option, with it enabled as default?
Thanks, Matt
* Matt Domsch (Matt_Domsch@dell.com) wrote:
On Tue, Jan 13, 2009 at 01:39:13PM -0800, Chris Wright wrote:
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
Do you know enough about the failure cases to DMI quirk it, plus an opt-out command line option, with it enabled as default?
There is opt-out already. As Kyle pointed out, that particular issue has been fixed (as was another one w/ busted BIOS about a week ago). It's just an arms race ;-)
FWIW, I'm in favor of enabling it (particularly to help aid KVM device assignment in rawhide).
thanks, -chris
On Tue, Jan 13, 2009 at 01:39:13PM -0800, Chris Wright wrote:
* Dave Jones (davej@redhat.com) wrote:
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes. Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
A patch from Dirk Hohndel fixes this (looks like it got merged Sundayish, after floating around linux-pci for a week.) AFAIK, at least.
regards, Kyle
* Kyle McMartin (kyle@infradead.org) wrote:
On Tue, Jan 13, 2009 at 01:39:13PM -0800, Chris Wright wrote:
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
A patch from Dirk Hohndel fixes this (looks like it got merged Sundayish, after floating around linux-pci for a week.) AFAIK, at least.
Yeah, that's the case I was referring to. Mostly thinking that any feature like VT-d relying on BIOS will take some time to get defensive enough for all the ways BIOS can screw up the feature.
thanks, -chris
On Tue, Jan 13, 2009 at 02:47:18PM -0800, Chris Wright wrote:
* Kyle McMartin (kyle@infradead.org) wrote:
On Tue, Jan 13, 2009 at 01:39:13PM -0800, Chris Wright wrote:
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
A patch from Dirk Hohndel fixes this (looks like it got merged Sundayish, after floating around linux-pci for a week.) AFAIK, at least.
Yeah, that's the case I was referring to. Mostly thinking that any feature like VT-d relying on BIOS will take some time to get defensive enough for all the ways BIOS can screw up the feature.
No doubt it's going to end up like MSI. :(
regards, Kyle
On Tue, 2009-01-13 at 13:39 -0800, Chris Wright wrote:
* Dave Jones (davej@redhat.com) wrote:
On Tue, Jan 13, 2009 at 09:55:27PM +0100, Gerd Hoffmann wrote:
Bill Nottingham wrote:
Here's some proposed config changes.
Jumping on the wagon ;)
Can we enable CONFIG_DMAR please? This turns on the IOMMU on intel boxes, using VT-d. Called "DMA Remapping" in intel speak, this is where the config option name comes from. Advantages:
(1) 32bit PCI devices can DMA to memory above 4G, thus the need for swiotlb (i.e. bounce buffers) is gone. (2) It allows kvm to pass through PCI devices to guests securely.
The last time we tried this, it blew up a lot due to broken BIOSes. Maybe it's been improved enough to tolerate them, so we can probably give it a spin in rawhide for a while to see what happens.
Upstream was still broken as recently as Friday for bad BIOSes (x200s in this case). Wonder if opt-in via cmdline would be helpful?
This is now believed to be fixed, and I've re-enabled DMAR in rawhide.
I see Chuck has applied the fix to F-10 already, and we'll look at the question of re-enabling DMAR there later. For now you still need to boot F-10 with 'intel_iommu=on'.
"DJ" == Dave Jones davej@redhat.com writes:
DJ> The last time we tried this, it blew up a lot due to broken BIOSes.
Yes, this used to kill a bunch of my machines dead. It may be better now; I'm quite willing to boot test kernels if someone wants to point me at one.
- J<
kernel@lists.fedoraproject.org