Bug 196383
Summary: | Regression in md/raid1 with write behind | |
---|---|---|---
Product: | IO/Storage | Reporter: | Markus (m4rkusxxl)
Component: | MD | Assignee: | io_md
Status: | RESOLVED CODE_FIX | |
Severity: | normal | CC: | shli
Priority: | P1 | |
Hardware: | All | |
OS: | Linux | |
Kernel Version: | 4.12.2 | Subsystem: |
Regression: | Yes | Bisected commit-id: |
Attachments: | debug patch (attachment 257573); patch for the bug (attachment 257575); patch for the bug (attachment 257623) | |
Description
Markus 2017-07-15 21:51:04 UTC

Created attachment 257573: debug patch
Can you please check if the patch works?
Created attachment 257575: patch for the bug
Alright, this is the patch I'll push upstream. Could you please check?
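For context, a minimal sketch of how such a patch might be applied and tested on top of 4.12.2, the kernel version reported above; the directory, file name, and build steps are assumptions, not taken from the report:

```sh
# Hypothetical workflow for testing the attached patch against 4.12.2 sources.
cd linux-4.12.2
patch -p1 < ../raid1-writebehind-fix.patch   # attachment saved under an arbitrary local name
make olddefconfig                            # or copy in the distro .config first
make -j"$(nproc)"
sudo make modules_install install
```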
I just tested your "patch for the bug". Unfortunately it still fails. Have you tried it? My raid is a RAID1 over three HDDs (one is a spare) and one SSD, with write-behind enabled.

I did test it in a virtual machine and it fixes one bug, though maybe not exactly the one you saw. Do you have a kernel log with the patch applied?

Yes. The error is the same (I haven't compared the numbers, though). I applied it on top of 4.12.2. Is there a way to save that error? Taking a photo of the screen feels wrong, and comparing by hand is error prone.

I have tried to emulate your setup in a virtual machine: 3 AHCI hard disks and one fast NVMe. Without the patch I did see a crash, and the patch fixes it, though the crash I saw is in a different place. I think a photo is the only way if you don't have a serial console. Can you give more details of your raid setup (with mdadm -Q --detail /dev/md0)? Are there any I/O errors on the disks?

I haven't seen any I/O errors.

```
# mdadm -Q --detail /dev/md11
/dev/md11:
        Version : 0.90
  Creation Time : xxx
     Raid Level : raid1
     Array Size : 27262912 (26.00 GiB 27.92 GB)
  Used Dev Size : 27262912 (26.00 GiB 27.92 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 11
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Jul 20 19:33:38 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

           UUID : xxx
         Events : 0.1555

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync writemostly   /dev/sdb5
       1       8       53        1      active sync writemostly   /dev/sdd5
       2       8        5        2      active sync   /dev/sda5

       3       8       37        -      spare   /dev/sdc5

# for i in /dev/sd{a,b,d}5 ; do mdadm -X $i ; done
        Filename : /dev/sda5
           Magic : 6d746962
         Version : 4
            UUID : xxx
          Events : 1555
  Events Cleared : 1555
           State : OK
       Chunksize : 64 MB
          Daemon : 5s flush period
      Write Mode : Allow write behind, max 256
       Sync Size : 27262912 (26.00 GiB 27.92 GB)
          Bitmap : 416 bits (chunks), 0 dirty (0.0%)
        Filename : /dev/sdb5
           Magic : 6d746962
         Version : 4
            UUID : xxx
          Events : 1555
  Events Cleared : 1555
           State : OK
       Chunksize : 64 MB
          Daemon : 5s flush period
      Write Mode : Allow write behind, max 256
       Sync Size : 27262912 (26.00 GiB 27.92 GB)
          Bitmap : 416 bits (chunks), 0 dirty (0.0%)
        Filename : /dev/sdd5
           Magic : 6d746962
         Version : 4
            UUID : xxx
          Events : 1555
  Events Cleared : 1555
           State : OK
       Chunksize : 64 MB
          Daemon : 5s flush period
      Write Mode : Allow write behind, max 256
       Sync Size : 27262912 (26.00 GiB 27.92 GB)
          Bitmap : 416 bits (chunks), 0 dirty (0.0%)
```
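For reference, a minimal sketch of how a comparable array could be created with mdadm; the device names and slot order are examples inferred loosely from the output above, not the reporter's actual creation command:

```sh
# Hypothetical recreation of a similar layout: RAID1 with 0.90 metadata,
# an internal write-intent bitmap, write-behind of 256, one fast SSD member,
# two write-mostly HDD members, and one spare. Device names are examples only.
mdadm --create /dev/md11 --level=1 --metadata=0.90 \
      --raid-devices=3 --spare-devices=1 \
      --bitmap=internal --write-behind=256 \
      /dev/sda5 --write-mostly /dev/sdb5 /dev/sdd5 /dev/sdc5
```

Write-behind only applies to members flagged write-mostly and requires a write-intent bitmap, which is consistent with the "Write Mode : Allow write behind, max 256" lines in the mdadm -X output above.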
Created attachment 257623: patch for the bug
Alright, I got it: it's the trim request. This one should fix all the issues.
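A hedged reproduction sketch, assuming from the comment above that the crash is hit when a discard (trim) request goes down the write-behind path; the filesystem, mount point, and options are illustrative, not taken from the report:

```sh
# WARNING: mkfs destroys data; use a disposable test array only.
mkfs.ext4 /dev/md11                    # filesystem choice is an assumption
mount -o discard /dev/md11 /mnt/test   # online discard: deletes issue trim requests
fstrim -v /mnt/test                    # or trigger trims explicitly on the mounted fs
```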
This patch worked! The system boots without an error. Thanks! Will you commit that fix?

Yep, it's already in upstream. Please close this bug.