Bug 12222
| Summary: | kernel BUG at drivers/pci/intel-iommu.c:1373! | | |
|---|---|---|---|
| Product: | IO/Storage | Reporter: | John Blbec (john.blbec) |
| Component: | SCSI | Assignee: | David Woodhouse (dwmw2) |
| Status: | RESOLVED DUPLICATE | | |
| Severity: | normal | CC: | anil.s.keshavamurthy, john.blbec |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | 2.6.26-gentoo-r4 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
John Blbec
2008-12-14 09:35:34 UTC
The photo of the bug was taken at the console, so the nvidia module had already been removed, even though it still appears in my `cat /proc/modules` output. All tests were run without the nvidia module, of course.

Reply-To: fujita.tomonori@lab.ntt.co.jp

On Sun, 14 Dec 2008 09:35:38 -0800 (PST) bugme-daemon@bugzilla.kernel.org wrote:
> http://bugzilla.kernel.org/show_bug.cgi?id=12222
>
> Summary: kernel BUG at drivers/pci/intel-iommu.c:1373!
> Product: IO/Storage
> Version: 2.5
> KernelVersion: 2.6.26-gentoo-r4
> Platform: All
> OS/Version: Linux
> Tree: Mainline
> Status: NEW
> Severity: normal
> Priority: P1
> Component: SCSI
> AssignedTo: linux-scsi@vger.kernel.org
> ReportedBy: john.blbec@centrum.cz
> CC: anil.s.keshavamurthy@intel.com
>
> Latest working kernel version: unknown

Probably this is a VT-d bug; I saw the same bug report before: http://lkml.org/lkml/2008/9/8/138

It's worth trying the latest kernel, but I guess the problem still exists. As Mark suggested, it's also worth trying the 'intel_iommu=strict' kernel boot option, I think. If that doesn't work, you can disable VT-d with the 'intel_iommu=off' kernel boot option as a workaround.

Thanks for the answer :o) Results:

1) intel_iommu=strict ... it does not solve the issue
2) intel_iommu=off ...... yes, it is a workaround; bonnie++ finished correctly

Well, I have two questions: what performance impact should I expect, and is there any chance the bug will be fixed in the next Linux kernel version?

In the 2.6.28-rc8 kernel, the BUG_ON is now at line 1276. The BUG_ON simply indicates that the IOMMU page table entry is already (or still) in use. My guess is that either the IOMMU space allocator is buggy *or* the unmap code isn't clearing dma_pte_addr() (off by one?). Perhaps there needs to be a wmb() in intel_unmap_sg() between dma_pte_clear_range() and the later __free_iova() call.

intel_iommu=off means no IOMMU will be used. For normal workloads with modern PCIe devices (which are all 64-bit, right?), there would be no performance impact. It only starts to matter once you want better isolation for virtual guest OSes, or you use a device driver that only offers 32-bit DMA support.

*** Bug 12223 has been marked as a duplicate of this bug. ***

I understand. Thanks for the answer.

Should be fixed in 2.6.31, and queued for -stable too.

*** This bug has been marked as a duplicate of bug 13584 ***

Great! Thanks, David ;o)
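
For reference, the intel_iommu=strict and intel_iommu=off options discussed above are plain kernel command-line parameters. A minimal GRUB legacy entry for a Gentoo system of that era is sketched below; the partition, root= device and kernel path are placeholders, not values taken from this report.

```
# /boot/grub/grub.conf (GRUB legacy) -- illustrative entry only
title Gentoo Linux 2.6.26-gentoo-r4
root (hd0,0)
# append intel_iommu=strict (or intel_iommu=off) to the existing kernel line
kernel /boot/vmlinuz-2.6.26-gentoo-r4 root=/dev/sda3 intel_iommu=strict
```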
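
The BUG_ON described above fires when a page-table slot that is about to be mapped still holds an old translation. The toy userspace model below is only an illustration of that diagnosis, assuming it is correct; all names here (toy_map, toy_unmap, iova_alloc, the pte[] array) are invented and are not the kernel's intel-iommu code. Changing the `buggy` argument to 1 models an unmap path that fails to clear the entry: the assert, standing in for the kernel's BUG_ON, then fires as soon as the allocator reuses the slot.

```c
/* Toy model only -- not kernel code.  Illustrates why a stale page table
 * entry trips the "entry already in use" check when an IOVA is reused. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NR_SLOTS 8

static uint64_t pte[NR_SLOTS];   /* model of the IOMMU page table  */
static int next_iova;            /* trivial "space allocator"      */

static int iova_alloc(void)      /* hand out slots round-robin     */
{
	int slot = next_iova;

	next_iova = (next_iova + 1) % NR_SLOTS;
	return slot;
}

static void toy_map(int slot, uint64_t phys)
{
	/* stands in for the kernel's BUG_ON: the slot must be free
	 * before a new translation is written into it */
	assert(pte[slot] == 0);
	pte[slot] = phys;
}

static void toy_unmap(int slot, int buggy)
{
	if (!buggy)
		pte[slot] = 0;   /* analogue of clearing the PTE range */
	/* in the real code, a write barrier before the IOVA range is
	 * returned to the allocator would keep the "cleared" store from
	 * being reordered after the "range is free again" bookkeeping */
}

int main(void)
{
	for (int i = 0; i < 2 * NR_SLOTS; i++) {
		int slot = iova_alloc();

		toy_map(slot, 0x1000u * (i + 1));
		toy_unmap(slot, /*buggy=*/0);   /* set to 1 to trip the assert */
	}
	puts("all slots reused cleanly");
	return 0;
}
```

The wmb() suggested above corresponds to the comment in toy_unmap(): the entries must be visibly cleared before the IOVA range is handed back to the allocator for reuse.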
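
On the performance question: a PCI driver tells the kernel how many DMA address bits its device can handle, and only devices limited to the 32-bit mask would notice the IOMMU being turned off, since they then fall back to bounce buffering for memory above 4 GB instead of IOMMU remapping. The probe sketch below is generic and hedged; example_probe is a made-up name, not code from any driver involved in this report.

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Illustrative only: advertise 64-bit DMA, fall back to 32-bit if the
 * device or platform cannot do it.  (Kernels of the 2.6.26 era also
 * accept the older DMA_64BIT_MASK / DMA_32BIT_MASK constants.) */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0)
		return 0;       /* 64-bit capable: losing the IOMMU costs nothing */

	/* 32-bit only: without an IOMMU, high buffers must be bounced */
	return pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
}
```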