For your bonus question:
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
The (S) label means the disk is regarded as a "spare". You should try stopping and re-starting the array:
mdadm --stop /dev/md0
mdadm --assemble --scan
to re-assemble the array. If that doesn't work, you may need to update your mdadm.conf; see for example this question for details on how to do that.
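A minimal sketch of refreshing that config, assuming a Debian/Ubuntu layout (the file lives elsewhere on other distributions, e.g. /etc/mdadm.conf):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
Review the file afterwards and remove any duplicate ARRAY lines; the second command is Debian/Ubuntu specific and makes sure the updated array definition is also available in the initramfs at boot.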
This is not a RAID problem but rather a permission issue. If you check the output of cat /proc/mdstat, you can see that the raid5 array md0 is assembled perfectly, as indicated by the [UUUU] in its line:
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sde[4] sdb[0] sdd[2] sdc[1]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>
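If you want to double-check the partition table and the filesystem yourself before mounting, something like the following should do (read-only queries; the exact output formatting varies by version):
lsblk -f /dev/md0
sudo parted -s /dev/md0 print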
Also the partition table (loop, as for a loop device) and the filesystem (ext4) are fine. After mounting the device with mount /dev/md0 /mnt you should see something like:
$ mount /dev/md0 /mnt
$ ls -la /mnt
total 12
drwxr-xr-x 3 root root 4096 Jun 29 2016 .
drwxr-xr-x 10 root root 4096 Dec 7 10:42 ..
drwxr-xr-x 3 1001 2001 4096 Feb 6 2017 oldusername
Now you need to change the owner of the old user's home directory to your current user:
$ sudo chown $(stat -c '%u:%g' ~/) /mnt/oldusername
$ ls -la /mnt
total 12
drwxr-xr-x 3 root root 4096 Jun 29 2016 .
drwxr-xr-x 10 root root 4096 Dec 7 10:42 ..
drwxr-xr-x 3 currentuser currentgroup 4096 Feb 6 2017 oldusername
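If everything inside the old home directory should belong to your current user as well, not just the top-level directory itself, the same change can be applied recursively; this is just a sketch reusing the /mnt/oldusername path from above:
sudo chown -R $(stat -c '%u:%g' ~/) /mnt/oldusername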
Last time I checked, mdadm won't let you --grow raid10. I glanced over mdadm's manpage just now and it still says: Currently supported growth options including changing the active size of component devices and changing the number of active devices in RAID levels 1/4/5/6, changing the RAID level between 1, 5, and 6, changing the chunk size and layout for RAID5 and RAID6, as well as adding or removing a write-intent bitmap.
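For the levels where --grow is supported, the usual sequence after replacing every member with a larger disk is roughly the following; a sketch only, assuming the array is /dev/md0 and carries an ext4 filesystem:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0
The first command lets the array use the full capacity of its (now larger) members; the second grows the filesystem to match.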
I don't think you can do what you want: replacing only some disks will not enable you to grow the array (to use the additional "blank" space). Moreover, RAID5 with such high-capacity disks is very risky.
For these reasons, I strongly suggest you create a new RAID6 or RAID10 array (with 4x 8TB disks) and migrate your data to the new array. While painful, this will prevent probable data loss in the future.
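As a rough sketch of that migration, assuming the four new 8 TB disks appear as /dev/sdb through /dev/sde, the new array becomes /dev/md1, and /mnt/newarray and /path/to/old/array are just placeholders for the new mount point and wherever the old RAID5 is mounted:
sudo mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md1
sudo mkdir -p /mnt/newarray
sudo mount /dev/md1 /mnt/newarray
sudo rsync -aHAX /path/to/old/array/ /mnt/newarray/
Adjust the device names and paths to your setup before running anything.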
The standard procedures are:
You don't seem to have taken this seriously. Try to recover what's still there now. Trying to rebuild that nearly failed array might cause more damage than you expect.
If the data is valuable enough, find a trustworthy and capable data recovery service. Put aside a four- to five-digit amount of cash. Otherwise, rinse and repeat: replace disks, reformat, reinstall, and take the standard procedures more seriously.
If there have been few or no changes to the data on the array since you failed the disk, you might be able to use --re-add. You can check the number of events on each drive:
mdadm --examine /dev/sd[e-g]1 | egrep 'Event|/dev/sd'
If the number of events is not too far behind (and you have a write-intent bitmap enabled), you can re-add the drive:
mdadm /dev/md3 --re-add /dev/sdf1
If that doesn't work, you will need to add the disk again (this might trigger a full rebuild):
mdadm /dev/md3 -a /dev/sdf1
According to the documentation, mdadm will try a re-add first when you issue the add (-a, --add) command. Running --re-add explicitly is useful if you want to try adding the drive without a resync and not have it fall back to a full resync right away if the re-add doesn't work.
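In either case you can keep an eye on what md is doing; a quick sketch, assuming the same /dev/md3 as above:
watch -n 5 cat /proc/mdstat
sudo mdadm --detail /dev/md3
The first shows recovery/resync progress (if any); the second shows the array state and per-device roles after the (re-)add.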