Bug 218866

Summary: Extra /dev/sd.. entries for a fake raid when more than 15 partitions
Product: IO/Storage
Reporter: Marc Debruyne (marc_debruyne)
Component: MD
Assignee: linux-scsi (linux-scsi)
Status: RESOLVED INVALID
Severity: normal
CC: marc_debruyne, mkp, phill
Priority: P3
Hardware: Intel
OS: Linux
Kernel Version: 6.9.1
Subsystem:
Regression: No
Bisected commit-id:
Attachments: dmesg

Description Marc Debruyne 2024-05-21 09:22:50 UTC
Gentoo

Kernel: linux-6.8.3-gentoo

Fake RAID 1 via "LSI Software RAID"
***************  
ls /dev/sd* + info  

/dev/sda /dev/sda1 
/dev/sdb /dev/sdb1
/dev/sdc /dev/sdc1  raid 5 /dev/md125
/dev/sdf /dev/sdf1
/dev/sdg /dev/sdg1
/dev/sdi /dev/sdi1

/dev/sdh /dev/sdh1 SWAP

/dev/sdd    
/dev/sdd16         16 to 20 are the extra entries
/dev/sdd17
/dev/sdd18
/dev/sdd19
/dev/sdd20          raid 1 with 20 partitions
/dev/sde
/dev/sde16
/dev/sde17
/dev/sde18
/dev/sde19
/dev/sde20

/dev/sdj           ?
**********************
lsblk

NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda            8:0    0  16.4T  0 disk  
└─sda1         8:1    0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdb            8:16   0  16.4T  0 disk  
└─sdb1         8:17   0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdc            8:32   0  16.4T  0 disk  
└─sdc1         8:33   0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdd            8:48   0 931.5G  0 disk  
├─md126        9:126  0 930.4G  0 raid1 
│ ├─md126p1  259:18   0   450M  0 part  
│ ├─md126p2  259:19   0   100M  0 part  
│ ├─md126p3  259:20   0    16M  0 part  
│ ├─md126p4  259:21   0 117.2G  0 part  
│ ├─md126p5  259:22   0  97.7G  0 part  /
│ ├─md126p6  259:23   0    99M  0 part  /boot/efi
│ ├─md126p7  259:24   0  97.7G  0 part  
│ ├─md126p8  259:25   0  97.7G  0 part  
│ ├─md126p9  259:26   0   6.8G  0 part  
│ ├─md126p10 259:27   0  54.8G  0 part  
│ ├─md126p11 259:28   0  55.9G  0 part  
│ ├─md126p12 259:29   0  43.2G  0 part  
│ ├─md126p13 259:30   0     1G  0 part  
│ ├─md126p14 259:31   0  40.8G  0 part  
│ ├─md126p15 259:32   0   100M  0 part  
│ ├─md126p16 259:33   0    16M  0 part  
│ ├─md126p17 259:34   0   293G  0 part  
│ ├─md126p18 259:35   0   606M  0 part  
│ ├─md126p19 259:36   0   6.8G  0 part  
│ └─md126p20 259:37   0   6.8G  0 part  
├─md127        9:127  0     0B  0 md    
├─sdd16      259:8    0    16M  0 part  
├─sdd17      259:9    0   293G  0 part  
├─sdd18      259:10   0   606M  0 part  
├─sdd19      259:11   0   6.8G  0 part  
└─sdd20      259:12   0   6.8G  0 part  
sde            8:64   0 931.5G  0 disk  
├─md126        9:126  0 930.4G  0 raid1 
│ ├─md126p1  259:18   0   450M  0 part  
│ ├─md126p2  259:19   0   100M  0 part  
│ ├─md126p3  259:20   0    16M  0 part  
│ ├─md126p4  259:21   0 117.2G  0 part  
│ ├─md126p5  259:22   0  97.7G  0 part  /
│ ├─md126p6  259:23   0    99M  0 part  /boot/efi
│ ├─md126p7  259:24   0  97.7G  0 part  
│ ├─md126p8  259:25   0  97.7G  0 part  
│ ├─md126p9  259:26   0   6.8G  0 part  
│ ├─md126p10 259:27   0  54.8G  0 part  
│ ├─md126p11 259:28   0  55.9G  0 part  
│ ├─md126p12 259:29   0  43.2G  0 part  
│ ├─md126p13 259:30   0     1G  0 part  
│ ├─md126p14 259:31   0  40.8G  0 part  
│ ├─md126p15 259:32   0   100M  0 part  
│ ├─md126p16 259:33   0    16M  0 part  
│ ├─md126p17 259:34   0   293G  0 part  
│ ├─md126p18 259:35   0   606M  0 part  
│ ├─md126p19 259:36   0   6.8G  0 part  
│ └─md126p20 259:37   0   6.8G  0 part  
├─md127        9:127  0     0B  0 md    
├─sde16      259:13   0    16M  0 part  
├─sde17      259:14   0   293G  0 part  
├─sde18      259:15   0   606M  0 part  
├─sde19      259:16   0   6.8G  0 part  
└─sde20      259:17   0   6.8G  0 part  
sdf            8:80   0  16.4T  0 disk  
└─sdf1         8:81   0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdg            8:96   0  16.4T  0 disk  
└─sdg1         8:97   0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdh            8:112  0 465.8G  0 disk  
└─sdh1         8:113  0 465.8G  0 part  [SWAP]
sdi            8:128  0  16.4T  0 disk  
└─sdi1         8:129  0  16.4T  0 part  
  └─md125      9:125  0  81.9T  0 raid5 /srv/nfs/home
                                        /home
sdj            8:144  1     0B  0 disk  
sr0           11:0    1     4G  0 rom   
nvme0n1      259:0    0 953.9G  0 disk  
├─nvme0n1p1  259:38   0   100M  0 part  
├─nvme0n1p2  259:39   0    16M  0 part  
├─nvme0n1p3  259:40   0   293G  0 part  
├─nvme0n1p4  259:41   0   606M  0 part  
├─nvme0n1p5  259:42   0  39.1G  0 part  
├─nvme0n1p6  259:43   0  37.3G  0 part  
└─nvme0n1p7  259:44   0  93.1G  0 part
******************
blkid

/dev/md125: UUID="05eeba2c-9ab4-4bb2-93ca-85c29bc9d852" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md126p1: LABEL="Recovery" BLOCK_SIZE="512" UUID="46B0C283B0C27947" TYPE="ntfs" PARTUUID="b5ea8385-bd48-bc40-a37d-f78aa81a9b32"
/dev/md126p2: LABEL="SYSTEM" UUID="9030-2F6D" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="a3c1293b-522c-1c4b-a7a8-acb367aaccd1"
/dev/md126p4: LABEL="Windows_10" BLOCK_SIZE="512" UUID="109ED0469ED025CE" TYPE="ntfs" PARTUUID="2759f468-2220-584d-ab3e-62a5ddb5f47c"
/dev/md126p5: LABEL="Gentoo 1" UUID="ceffaa4b-695c-4490-9f9a-38aed1bc40a8" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9df78387-1113-d941-a6e5-942008bb5c4a"
/dev/md126p6: LABEL_FATBOOT="EFI_Gentoo1" LABEL="EFI_Gentoo1" UUID="3883-62B1" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="68f07cf1-f9af-534b-9eb2-ff3d5100ad7c"
/dev/md126p7: LABEL="Gentoo 2" UUID="c9247467-374b-41c2-a72f-57a37d3e6d7b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3f8de685-9ea2-9d40-abea-97d2c736c8c6"
/dev/md126p8: LABEL="Gentoo 3" UUID="52118bfa-440a-48eb-9ad8-99749988c7fa" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="71d75852-4eed-d74f-b27a-8537d4a32581"
/dev/md126p9: LABEL="LFS sysd 12.0" UUID="e6001dff-27e9-4388-b622-26c59ce80085" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a613bf75-5047-8642-b545-ad0e7fe57d93"
/dev/md126p10: LABEL="Ubuntu 24.04" UUID="dd12ff4e-9ed6-4d5b-a391-cce9bdcb264d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c17dede0-d4f6-5043-a0a7-bd9a4d6dba9c"
/dev/md126p11: LABEL="Ubuntu 22.04" UUID="036f6f20-6b01-4d5e-8f42-e55ab46a9009" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="6ff7d372-c4bc-1b4c-8177-53cce534a39e"
/dev/md126p12: LABEL="ISO Sytems" UUID="d2290b79-34a7-4251-80b7-9f8b024348c2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="597e090b-f42a-4b4c-aaf0-454ee974ff7b"
/dev/md126p13: LABEL="Clonezilla" UUID="e102e1d8-37ee-48c9-966e-4dc815b24ee1" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="158e6974-b5cb-3945-b734-2ed522900a72"
/dev/md126p14: LABEL="Ubuntu Home" UUID="a6ebbfa2-b4f0-4701-9b69-5e680a245dcc" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="646cdd48-41cb-49aa-b2ee-9a08f7b7ca1e"
/dev/md126p15: LABEL_FATBOOT="EFI_W11" LABEL="EFI_W11" UUID="62D0-9530" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="5ed7b3c5-dbb6-9143-954f-2bf8022abe76"
/dev/md126p17: BLOCK_SIZE="512" UUID="D66A58B46A58935B" TYPE="ntfs" PARTUUID="1cb07353-a2af-7443-9c23-dd34c1d482cb"
/dev/md126p18: BLOCK_SIZE="512" UUID="802EB1582EB14844" TYPE="ntfs" PARTUUID="5643e2c3-e229-b449-8453-e0e82c615c45"
/dev/md126p19: LABEL="LFS sysd 12.0 DE" UUID="6342936d-140d-4051-ad74-fed61623178a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9eb8029e-c372-47bb-8296-7989fa1cc382"
/dev/md126p20: LABEL="LFS sysd 12.0 NE" UUID="6aff57b4-bc71-4e93-8807-db85c7e72d92" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="8fc6d76b-727a-42b2-be23-28467132a659"
/dev/nvme0n1p1: UUID="E256-F414" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="fa0e08f9-fae7-43c8-b0f1-44ff1393d63c"
/dev/nvme0n1p3: BLOCK_SIZE="512" UUID="D66A58B46A58935B" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="e4675753-429a-4a0c-a431-2107e592e315"
/dev/nvme0n1p4: BLOCK_SIZE="512" UUID="802EB1582EB14844" TYPE="ntfs" PARTUUID="266e4a07-4320-4fd0-958c-19dba39fccfb"
/dev/nvme0n1p5: LABEL="Ubuntu Home" UUID="a6ebbfa2-b4f0-4701-9b69-5e680a245dcc" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="93f66863-e60a-fb43-9e8e-d8de36e532a3"
/dev/nvme0n1p6: UUID="37424a31-ede7-4e94-8636-4d5cfd37785a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="45999b8c-ac59-4912-a4fc-a4986caad943"
/dev/nvme0n1p7: UUID="4c886474-e36e-4922-a219-76b290862e0d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f891ad39-0a04-40c5-9bf7-97c57a7879fa"
/dev/sda1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="4ae0e931-99b5-a0aa-a50e-c9a1245047c2" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="9469133c-8706-514c-b2af-8ee0d7be61e9"
/dev/sdb1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="c9eda85e-a908-a537-d48a-2de4f3385145" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="3447f3d9-83fe-3843-9555-a85fead987bd"
/dev/sdc1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="2b3474dc-cb07-d17b-332c-c17dbebfacf9" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="db7fc402-43cc-d544-a376-a3724ecf1ef8"
/dev/sdd17: BLOCK_SIZE="512" UUID="D66A58B46A58935B" TYPE="ntfs" PARTUUID="1cb07353-a2af-7443-9c23-dd34c1d482cb"
/dev/sdd18: BLOCK_SIZE="512" UUID="802EB1582EB14844" TYPE="ntfs" PARTUUID="5643e2c3-e229-b449-8453-e0e82c615c45"
/dev/sdd19: LABEL="LFS sysd 12.0 DE" UUID="6342936d-140d-4051-ad74-fed61623178a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9eb8029e-c372-47bb-8296-7989fa1cc382"
/dev/sdd20: LABEL="LFS sysd 12.0 NE" UUID="6aff57b4-bc71-4e93-8807-db85c7e72d92" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="8fc6d76b-727a-42b2-be23-28467132a659"
/dev/sde17: BLOCK_SIZE="512" UUID="D66A58B46A58935B" TYPE="ntfs" PARTUUID="1cb07353-a2af-7443-9c23-dd34c1d482cb"
/dev/sde18: BLOCK_SIZE="512" UUID="802EB1582EB14844" TYPE="ntfs" PARTUUID="5643e2c3-e229-b449-8453-e0e82c615c45"
/dev/sde19: LABEL="LFS sysd 12.0 DE" UUID="6342936d-140d-4051-ad74-fed61623178a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9eb8029e-c372-47bb-8296-7989fa1cc382"
/dev/sde20: LABEL="LFS sysd 12.0 NE" UUID="6aff57b4-bc71-4e93-8807-db85c7e72d92" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="8fc6d76b-727a-42b2-be23-28467132a659"
/dev/sdf1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="6179a5c6-670b-5267-93d9-d91cdf412ddb" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="89b4c397-56cd-324b-8087-1a97546bad3e"
/dev/sdg1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="916713bb-f883-c8d8-b30d-ec24a6a0e378" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="76b4d0ad-a65b-5c40-8583-90b9cca050e4"
/dev/sdh1: UUID="b4b1cbec-4e1f-4da7-88d7-5ebc50492090" TYPE="swap" PARTUUID="77092f13-01"
/dev/sdi1: UUID="a6f4693e-1554-697b-5c93-edf466ac4f07" UUID_SUB="e9b7922a-6993-8be3-6225-e7df4256211c" LABEL="marc:124" TYPE="linux_raid_member" PARTUUID="1e763663-6a11-a448-aaac-60ffe20c2cf8"
/dev/nvme0n1p2: PARTLABEL="Microsoft reserved partition" PARTUUID="3cf19ba3-119b-430b-a26e-8a6b2893192d"
/dev/sdd16: PARTUUID="90c8e6a3-789b-8442-8592-442fe3d69e88"
/dev/sde16: PARTUUID="90c8e6a3-789b-8442-8592-442fe3d69e88"
/dev/md126p3: PARTUUID="f876f727-21e7-f543-a105-5b4180bebed0"
/dev/md126p16: PARTUUID="90c8e6a3-789b-8442-8592-442fe3d69e88"

One of the results is that grub-mkconfig generates extra boot entries for the extra /dev entries.
Comment 1 Marc Debruyne 2024-05-22 09:06:37 UTC
Upgraded kernel to 6.9.1-gentoo-x86_64 (6.9.1)

same result.
Comment 2 Marc Debruyne 2024-05-22 19:21:03 UTC
Created attachment 306323 [details]
dmesg

dmesg contains:

[   32.827175] /dev/sde20: Can't open blockdev
[   32.846921] /dev/sde19: Can't open blockdev
[   32.878496] ntfs3: Max link count 4000
[   32.878839] /dev/sde17: Can't open blockdev
[   32.884543] /dev/sdd20: Can't open blockdev
[   32.905281] /dev/sdd19: Can't open blockdev
[   32.922129] /dev/sdd17: Can't open blockdev
Comment 3 The Linux kernel's regression tracker (Thorsten Leemhuis) 2024-05-24 09:39:47 UTC
Is this a regression? IOW: is this something that did not happen with, say, 6.6.y or 6.7.y? And are you using a vanilla kernel, or something close to it?
Comment 4 Martin K. Petersen 2024-05-24 10:23:07 UTC
Looks like the partition table is invalid:

[    3.037149] GPT:Primary header thinks Alt. header is not at the end of the disk.
[    3.037151] GPT:1951170559 != 1953525167
[    3.037153] GPT:Alternate GPT header not at the end of the disk.
[    3.037154] GPT:1951170559 != 1953525167
[    3.037156] GPT: Use GNU Parted to correct GPT errors.
[    3.037168]  sdd: sdd1 sdd2 sdd3 sdd4 sdd5 sdd6 sdd7 sdd8 sdd9 sdd10 sdd11 sdd12 sdd13 sdd14 sdd15 sdd16 sdd17 sdd18 sdd19 sdd20
[    3.037441] GPT:Primary header thinks Alt. header is not at the end of the disk.
[    3.037443] GPT:1951170559 != 1953525167
[    3.037445] GPT:Alternate GPT header not at the end of the disk.
[    3.037446] GPT:1951170559 != 1953525167
[    3.037448] GPT: Use GNU Parted to correct GPT errors.
[    3.037459]  sde: sde1 sde2 sde3 sde4 sde5 sde6 sde7 sde8 sde9 sde10 sde11 sde12 sde13 sde14 sde15 sde16 sde17 sde18 sde19 sde20
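
For reference, that mismatch (the primary header pointing at a backup header at sector 1951170559 instead of the last sector, 1953525167) can be inspected with standard GPT tools; a minimal sketch, reusing the device names from this report:

# verify the GPT; sgdisk reports where it thinks the backup header should be
sgdisk --verify /dev/sdd
# list the table; gdisk also warns when the backup header is not at the end of the disk
gdisk -l /dev/sdd

Since the firmware RAID keeps its metadata at the end of the disk, "correcting" the GPT with these tools would risk overwriting that metadata, so treat this as inspection only.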
Comment 5 Marc Debruyne 2024-05-24 13:00:41 UTC
(In reply to The Linux kernel's regression tracker (Thorsten Leemhuis) from comment #3)
> Is this a regression? IOW: is this something that did not happen with, say,
> 6.6.y or 6.7.y? And are you using a vanilla kernel, or something close

The Gentoo kernels are very close to a vanilla kernel.

On my Linux From Scratch system, on another partition of the same machine, with a vanilla 6.4.12 kernel, the same thing is occurring.

On Ubuntu 22.04, also on the same machine, with kernel 6.2.0-33: same result.

I will compile kernel 6.6.30 and post the result.
Comment 6 Marc Debruyne 2024-05-24 13:13:48 UTC
(In reply to Martin K. Petersen from comment #4)
> Looks like the partition table is invalid:
> 
> [    3.037149] GPT:Primary header thinks Alt. header is not at the end of
> the disk.
> [    3.037151] GPT:1951170559 != 1953525167
> [    3.037153] GPT:Alternate GPT header not at the end of the disk.
> [    3.037154] GPT:1951170559 != 1953525167
> [    3.037156] GPT: Use GNU Parted to correct GPT errors.
> [    3.037168]  sdd: sdd1 sdd2 sdd3 sdd4 sdd5 sdd6 sdd7 sdd8 sdd9 sdd10
> sdd11 sdd12 sdd13 sdd14 sdd15 sdd16 sdd17 sdd18 sdd19 sdd20
> [    3.037441] GPT:Primary header thinks Alt. header is not at the end of
> the disk.
> [    3.037443] GPT:1951170559 != 1953525167
> [    3.037445] GPT:Alternate GPT header not at the end of the disk.
> [    3.037446] GPT:1951170559 != 1953525167
> [    3.037448] GPT: Use GNU Parted to correct GPT errors.
> [    3.037459]  sde: sde1 sde2 sde3 sde4 sde5 sde6 sde7 sde8 sde9 sde10
> sde11 sde12 sde13 sde14 sde15 sde16 sde17 sde18 sde19 sde20

/dev/sdd and /dev/sde are members of a RAID 1 array.
This array is a fake RAID: an LSI Software RAID, which comes standard on the ASUS Z10PE-D16 WS motherboard. The firmware uses the end part of the disk.

Note that partitions 1 to 15 do not have this problem.
Partitions 16 to 20 contain a Windows 11 system and 2 LFS systems, which are fully operational. (In Windows, only the RAID partitions are visible.)
Comment 7 Marc Debruyne 2024-05-24 16:32:47 UTC
Had to install kernel 6.6.21 on an old Gentoo partition on the same RAID array.
Same result, so we can assume there is no regression.
Maybe this is the first time someone has used so many partitions on a fake RAID.
Comment 8 Marc Debruyne 2024-05-26 16:48:58 UTC
on https://docs.kernel.org/admin-guide/devices.html

************************************************************************
   8 block      SCSI disk devices (0-15)
                  0 = /dev/sda          First SCSI disk whole disk
                 16 = /dev/sdb          Second SCSI disk whole disk
                 32 = /dev/sdc          Third SCSI disk whole disk
                    ...
                240 = /dev/sdp          Sixteenth SCSI disk whole disk

                Partitions are handled in the same way as for IDE
                disks (see major number 3) except that the limit on
                partitions is 15.
************************************************************************

Is the number of partitions really limited to 15?

Surprisingly, the extra /dev entries start at 16.
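
The lsblk output above already hints at where the extra nodes come from: major 8 only reserves minors for 15 partitions per disk, and partitions beyond that get dynamically allocated ("extended") device numbers, which is why sdd16..sdd20 appear under major 259. A quick way to confirm, assuming the device names from this report:

# extended partitions are not in the sd major (8) range
cat /sys/class/block/sdd16/dev      # 259:8 on this system, per lsblk above
cat /sys/class/block/md126p16/dev   # 259:33

So the 15-partition limit only applies to the classic major-8 numbering, not to the number of partitions the kernel can handle.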
Comment 9 Phillip Susi 2024-05-28 18:28:59 UTC
This is no bug.  Fake raids are, well, fake.  The kernel sees both individual disks and since they are mirror images of each other, both have the same partitions on them.

The dmraid utility can be used to recognize the fake raid metadata and configure the kernel device mapper to access the raid, and with the -Z switch, it will remove the partitions from the underlying individual disks from the kernel's view.
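
For reference, a minimal sketch of that dmraid workflow (untested on this system; assumes the dmraid package is installed):

# list the fake-raid sets found in the on-disk metadata
dmraid -s
# activate all sets via device-mapper and remove the partition nodes
# of the underlying member disks from the kernel's view
dmraid -ay -Z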
Comment 10 Marc Debruyne 2024-06-21 08:15:04 UTC
Changed "Component" to MD
Comment 11 Marc Debruyne 2024-06-22 07:52:19 UTC
(In reply to Phillip Susi from comment #9)
> This is no bug.  Fake raids are, well, fake.  The kernel sees both
> individual disks and since they are mirror images of each other, both have
> the same partitions on them.
> 
> The dmraid utility can be used to recognize the fake raid metadata and
> configure the kernel device mapper to access the raid, and with the -Z
> switch, it will remove the partitions from the underlying individual disks
> from the kernel's view.

dmraid is not used; mdadm is used

********************

cat /proc/mdstat

--------

Personalities : [raid1] [raid6] [raid5] [raid4] 
md125 : active raid5 sdh1[4] sdb1[2] sdi1[6] sda1[0] sdc1[1] sdf1[3]
      87890972160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 5/131 pages [20KB], 65536KB chunk

md126 : active raid1 sde[1] sdd[0]
      975585280 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive sde[1](S) sdd[0](S)
      2354608 blocks super external:ddf
       
unused devices: <none>
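
For completeness, the external DDF (LSI) metadata that mdadm assembled into the md127 container can be inspected directly; a short sketch using the member devices from this report:

# details of the DDF container and the RAID 1 set inside it
mdadm --detail /dev/md127
mdadm --detail /dev/md126
# raw fake-raid metadata as seen on a member disk
mdadm --examine /dev/sdd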
Comment 12 Phillip Susi 2024-06-25 12:44:47 UTC
(In reply to Marc Debruyne from comment #11)
> dmraid is not used; mdadm is used

Ahh yes, they did add support for the Intel fake raids to mdadm.  In that case, mdadm needs to learn to remove the partitions from the underlying disk, but that's a user space issue, not in the kernel.
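
Until mdadm (or a udev rule shipped with it) does that, a possible user-space workaround, untested here and assuming partx from util-linux, is to drop the stray partition nodes from the member disks once the array is up:

# remove the kernel's partitions 16-20 on the raw member disks;
# the md126p* nodes on top of the array are left untouched
partx -d --nr 16:20 /dev/sdd
partx -d --nr 16:20 /dev/sde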