Bug 105241 - LVM raid1: sync does not start (stuck at 0%)
Status: NEW
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: LVM2/DM
Hardware: x86-64 Linux
Importance: P1 normal
Assignee: Alasdair G Kergon
Reported: 2015-09-29 12:23 UTC by Sami Liedes
Modified: 2016-05-31 12:37 UTC
CC: 1 user

Kernel Version: 4.2.1, 4.0.8
Regression: No

Description Sami Liedes 2015-09-29 12:23:41 UTC
I created a new LVM volume group and a linear volume on it. After converting the volume to raid1, the kernel never starts syncing it. I have observed this behavior on both kernels 4.2.1 and 4.0.8; the output pasted below is from 4.2.1.

FWIW, this is the same computer as in Bug 100491, another dm-raid1-related bug. However, I created a completely new volume group and a new logical volume for this report, so I don't know how closely related the two bugs are.

What I did:

1. pvcreate /dev/sdb5 /dev/sdb6 /dev/sdc2
2. vgcreate vg /dev/sdb5 /dev/sdb6 /dev/sdc2
3. lvcreate -n root -L 900G vg /dev/sdb5
[... steps to set up a dm-crypt mapping on /dev/mapper/vg-root and create a filesystem ...]
4. lvconvert -m1 vg/root /dev/sdc2
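
For reference, the whole sequence as a single copy-pasteable sketch. The cryptsetup/mkfs lines are illustrative stand-ins for the elided dm-crypt step (the mapping name "cryptroot" and the exact parameters are hypothetical and should not matter for the bug):

------------------------------------------------------------
pvcreate /dev/sdb5 /dev/sdb6 /dev/sdc2
vgcreate vg /dev/sdb5 /dev/sdb6 /dev/sdc2
lvcreate -n root -L 900G vg /dev/sdb5

# Hypothetical stand-ins for the elided dm-crypt/filesystem setup
cryptsetup luksFormat /dev/mapper/vg-root
cryptsetup luksOpen /dev/mapper/vg-root cryptroot
mkfs.ext4 /dev/mapper/cryptroot

# Convert the linear LV to raid1, placing the new leg on /dev/sdc2
lvconvert -m1 vg/root /dev/sdc2
------------------------------------------------------------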

Now, lvconvert succeeds and I get this in dmesg:

------------------------------------------------------------
[ 3505.979470] device-mapper: raid: Superblocks created for new array
[ 3505.980026] md/raid1:mdX: not clean -- starting background reconstruction
[ 3505.980028] md/raid1:mdX: active with 2 out of 2 mirrors
[ 3505.980030] Choosing daemon_sleep default (5 sec)
[ 3505.980050] created bitmap (900 pages) for device mdX
[ 3505.996918] mdX: bitmap file is out of date, doing full recovery
[ 3512.853779] mdX: bitmap initialized from disk: read 57 pages, set 1843200 of 1843200 bits
------------------------------------------------------------

However, nothing else happens. Specifically, there is no disk activity on /dev/sdc2, and in lvs output Cpy%Sync is perpetually stuck at 0.00.

------------------------------------------------------------
# lvs -o +seg_pe_ranges --all
  LV              VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert PE Ranges
  backup          rootvg -wi-ao---- 800.00g                                                     /dev/sdc4:0-118989
  backup          rootvg -wi-ao---- 800.00g                                                     /dev/sdc3:18024-103833
  root            rootvg -wi-ao----   1.32t                                                     /dev/sda2:0-345129
  root            vg     rwi-aor--- 900.00g                                    0.00             root_rimage_0:0-230399 root_rimage_1:0-230399
  [root_rimage_0] vg     Iwi-aor--- 900.00g                                                     /dev/sdb5:0-230399
  [root_rimage_1] vg     Iwi-aor--- 900.00g                                                     /dev/sdc2:1-230400
  [root_rmeta_0]  vg     ewi-aor---   4.00m                                                     /dev/sdb5:230400-230400
  [root_rmeta_1]  vg     ewi-aor---   4.00m                                                     /dev/sdc2:0-0
------------------------------------------------------------
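
In case it helps triage, this is roughly how I would inspect the array state from userspace; the commands and lvs fields exist in this LVM version, but the exact status output depends on the dm-raid target version, so treat the comments as a sketch:

------------------------------------------------------------
# Device-mapper view of the raid1 target; the status line should
# show per-leg health characters and the in-sync sector ratio.
dmsetup table vg-root
dmsetup status vg-root

# LVM's view of the current sync action (idle, resync, recover, ...)
lvs -o +raid_sync_action,raid_mismatch_count vg/root
------------------------------------------------------------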

------------------------------------------------------------
# vgcreate --version
  LVM version:     2.02.127(2) (2015-08-10)
  Library version: 1.02.104 (2015-08-10)
  Driver version:  4.33.0
------------------------------------------------------------
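
For completeness, these are the knobs I would try next to kick the array; both options exist in lvchange, but I have not verified that either actually helps here:

------------------------------------------------------------
# Reload the LV's device-mapper tables; sometimes enough to restart
# a stalled recovery after a conversion.
lvchange --refresh vg/root

# Force a complete resynchronization; lvchange wants the LV inactive
# for this, so it is not an option while the filesystem is mounted.
lvchange --resync vg/root
------------------------------------------------------------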
