Recovering Windows Spanned Disks (LDM) using Linux?

Can I read LDM partitions from Windows 2008 on Linux?

We have five 512GB LUNs exported via iSCSI to a now-dead Windows 2008 box, and that box no longer wants them: Windows considers them raw devices now ... So I would like to read the partitions from Linux. I am using the latest Ubuntu to try to save at least some of the data. The problem is that all the documentation I have found so far seems outdated (it mostly talks about the w2k or XP Logical Disk Manager (LDM)), and I suspect the format has changed in 2008.

testdisk [0] gives me the following output:

 # testdisk /list LUN01
 TestDisk 6.11, Data Recovery Utility, April 2009
 Christophe GRENIER <grenier@cgsecurity.org>
 http://www.cgsecurity.org
 Please wait...
 Disk LUN01 - 536 GB / 500 GiB - CHS 65271 255 63, sector size=512
 Disk LUN01 - 536 GB / 500 GiB - CHS 65271 255 63
      Partition                  Start        End    Size in sectors
  1 P MS LDM MetaData               34       2081       2048 [LDM metadata partition]
  No FAT, NTFS, EXT2, JFS, Reiser, cramfs or XFS marker
  2 P MS Reserved                 2082     262177     260096 [Microsoft reserved partition]
  3 P MS LDM Data               262178 1048576966 1048314789 [LDM data partition]

Note: each of the 5 LUNs has the same partition table.

Many documents [1] refer to ldminfo, but it does not return any useful information. I suspect it is outdated by now, simply because it was very difficult to find :) And since it does not work, I think Windows 2008 uses a different format.

 # ldminfo LUN01
 Something went wrong, skipping device 'LUN01'
 # losetup /dev/loop1 LUN01
 # losetup -a
 /dev/loop1: [fd00]:14 (/mnt/LUN01)
 # ldminfo /dev/loop1
 Something went wrong, skipping device '/dev/loop1'

Then I tried to assemble them with dmsetup, but again no luck. This is how I used dmsetup:

 # losetup /dev/loop1 LUN01
 # losetup /dev/loop2 LUN02
 # losetup /dev/loop3 LUN03
 # losetup /dev/loop4 LUN04
 # losetup /dev/loop5 LUN05
 # blockdev --getsize /dev/loop1
 1048577000
 # cat > w2008.mapping
 # Offset into  Size of this  Raid type  Device      Start sector
 # volume       device                               of device
 0          1048577000 linear /dev/loop1 0
 1048577000 1048577000 linear /dev/loop2 0
 2097154000 1048577000 linear /dev/loop3 0
 3145731000 1048577000 linear /dev/loop4 0
 4194308000 1048577000 linear /dev/loop5 0
 # dmsetup create myfs w2008.mapping
 # mount -t ntfs /dev/mapper/myfs /mnt/final
 NTFS signature is missing.
 Failed to mount '/dev/loop1': Invalid argument
 The device '/dev/loop1' doesn't seem to have a valid NTFS.
 Maybe the wrong device is used? Or the whole disk instead of a partition
 (eg /dev/sda, not /dev/sda1)? Or the other way around?
 # echo Poo.

So the NTFS file system is still missing :)

Does anyone have any ideas on how I can extract the data or give me some pointers?

3 answers

Well, I will answer my own question, to spare others the same pain.

0. WARNING

If you are doing a recovery, ALWAYS COPY YOUR DATA and work on the copy. DO NOT modify the original “broken” data. That said, keep reading.

1. What your partitions look like

Install The Sleuth Kit and testdisk. Hopefully there are packages for your distribution :)

 # mmls -t gpt LUN01
 GUID Partition Table (EFI)
 Offset Sector: 0
 Units are in 512-byte sectors

      Slot    Start        End          Length       Description
 00:  Meta    0000000000   0000000000   0000000001   Safety Table
 01:  -----   0000000000   0000000033   0000000034   Unallocated
 02:  Meta    0000000001   0000000001   0000000001   GPT Header
 03:  Meta    0000000002   0000000033   0000000032   Partition Table
 04:  00      0000000034   0000002081   0000002048   LDM metadata partition
 05:  01      0000002082   0000262177   0000260096   Microsoft reserved partition
 06:  02      0000262178   1048576966   1048314789   LDM data partition
 07:  -----   1048576967   1048576999   0000000033   Unallocated

Note: testdisk will give you the same information with less detail: # testdisk /list LUN01

2. Extracting disk metadata

All the information about disk order, data sizes and other partition attributes is encoded in the LDM metadata partition. W2k8 has not changed much since this document [2] was written, although some sizes are different and some attributes are new (and obviously unknown) ...

 # dd if=LUN01 skip=33 count=2048 | xxd -a > lun01.metadata
 # less lun01.metadata

At offset 0002410 you should see the server name. Encouraging, isn't it? But what we are after is the disk ordering and the disk IDs. Scroll down.

2.1. Disk ordering

At offset 0003210 you will see “Disk1”, followed by a long string.

 0003200: 5642 4c4b 0000 001c 0000 0006 0000 0001  VBLK............
 0003210: 0000 0034 0000 003a 0102 0544 6973 6b31  ...4...:...Disk1
 0003220: 2437 3965 3830 3239 332d 3665 6231 2d31  $79e80293-6eb1-1
 0003230: 3164 662d 3838 6463 2d30 3032 3662 3938  1df-88dc-0026b98
 0003240: 3335 6462 3300 0000 0040 0000 0000 0000  35db3....@......
 0003250: 0048 0000 0000 0000 0000 0000 0000 0000  .H..............

This means the first disk of this volume is identified by a unique identifier (UID): 79e80293-6eb1-11df-88dc-0026b9835db3 (the $ is a delimiter). But at this point we do not know which physical drive carries this UID! So move on to the Disk2 record and note its UID, and so on for every disk that was part of your volume. Note: in my experience only the first 8 characters change, the rest stay the same; indeed, W2k8 seems to increment the ID by 6. (A quick grep shortcut for pulling these records out of the dump is sketched after the example below.)

E.g.:

 Windows Disk1 UID : 79e80293-6eb1-11df-88dc-0026b9835db3
 Windows Disk2 UID : 79e80299-...
 Windows Disk3 UID : 79e8029f-...
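If you do not feel like scrolling through the whole dump, a rough shortcut is to grep the ASCII column of the xxd output for the disk records. This is only a sketch, assuming the dump file created above; the $-delimited UID continues on the lines after each match, hence the -A option:

 # List the VBLK disk records and the following lines, which carry the UID strings.
 grep -A 3 -E 'Disk[0-9]' lun01.metadata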

2.2. Find Disk ID

Go to offset 00e8200 (in lun01.metadata). You should find "PRIVHEAD".

 00e8200: 5052 4956 4845 4144 0000 2c41 0002 000c  PRIVHEAD..,A....
 00e8210: 01cc 6d37 2a3f c84e 0000 0000 0000 0007  ..m7*?.N........
 00e8220: 0000 0000 0000 07ff 0000 0000 0000 0740  ...............@
 00e8230: 3739 6538 3032 3939 2d36 6562 312d 3131  79e80299-6eb1-11
 00e8240: 6466 2d38 3864 632d 3030 3236 6239 3833  df-88dc-0026b983
 00e8250: 3564 6233 0000 0000 0000 0000 0000 0000  5db3............
 00e8260: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 00e8270: 3162 3737 6461 3230 2d63 3731 372d 3131  1b77da20-c717-11
 00e8280: 6430 2d61 3562 652d 3030 6130 6339 3164  d0-a5be-00a0c91d
 00e8290: 6237 3363 0000 0000 0000 0000 0000 0000  b73c............
 00e82a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 00e82b0: 3839 3164 3065 3866 2d64 3932 392d 3131  891d0e8f-d929-11
 00e82c0: 6530 2d61 3861 372d 3030 3236 6239 3833  e0-a8a7-0026b983
 00e82d0: 3564 6235 0000 0000 0000 0000 0000 0000  5db5............
 00e82e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

What we need is the disk ID of this disk. Here we see:
- Disk ID: 79e80299-6eb1-11df-88dc-0026b9835db3
- Host ID: 1b77da20-c717-11d0-a5be-00a0c91db73c
- Disk group ID: 891d0e8f-d929-11e0-a8a7-0026b9835db5

So this disk with UID 79e80299-... is Windows Disk2, even though for us it was physical disk 1. Simply look this UID up in the disk ordering found above. Note: there is no logical order. I mean, Windows does not document how it assigns the disk order, so there is no human logic and you should not expect your first physical disk to be Disk1.

I recommend that you go through the LDM metadata on all of your disks and extract their UIDs. (You can use the following command to dump the PRIVHEAD information: dd if=LUNXX skip=1890 count=1 | xxd -a. A small loop that does this for every LUN is sketched after the example below.)

eg:

 (Windows) Disk1 : 79e80293-... == Physical disk 2
 (Windows) Disk2 : 79e80299-... == Physical disk 1
 (Windows) Disk3 : 79e8029f-... == Physical disk 3
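Here is the small loop mentioned above, as a sketch. It assumes the LUNs are dd images named LUN01..LUN05 in the current directory and that they use the same layout as this example (PRIVHEAD at sector 1890, i.e. sector 33 + 0xe8200/512):

 # Dump the PRIVHEAD sector of each LUN; the first GUID shown is that disk's UID.
 for lun in LUN01 LUN02 LUN03 LUN04 LUN05; do
     echo "=== $lun ==="
     dd if="$lun" skip=1890 count=1 2>/dev/null | xxd -a | head -n 12
 done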

I am sure that somewhere in the LDM metadata you can also find the volume type (spanned, RAID0, RAIDX and the corresponding stripe sizes). However, I did not dig for it; I used trial and error to find my layout. So if you know how the volume was configured before the drama, you will save a lot of time.

3. Find the NTFS file system and your data

Now we are interested in the big chunk of data we want to recover. In my case that is ~512 GB per disk, so we will not convert everything to ASCII. I did not really look into how Windows finds the start of its NTFS partition, but I found that it logically starts with the following marker: .R.NTFS (the NTFS boot sector). Find it, and you have the offset we will need to apply later to see our NTFS file system.

 06: 02 0000262178 1048576966 1048314789 LDM data partition 

In this example, the data starts at sector 262178 and is 1048314789 sectors long.

We found above that Disk1 (of the volume group) is actually the second physical disk. Let's extract a bit of it to find where the NTFS partition begins.

 # dd if=LUN02 skip=262178 count=4096 | xxd -a > lun02.DATASTART-4k
 # less lun02.DATASTART-4k
 0000000: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 *
 00fbc00: eb52 904e 5446 5320 2020 2000 0208 0000  .R.NTFS    .....
 00fbc10: 0000 0000 00f8 0000 3f00 ff00 0008 0400  ........?.......
 00fbc20: 0000 0000 8000 8000 ffaf d770 0200 0000  ...........p....

Here we see that NTFS starts at offset 00fbc00. Knowing that, we can start extracting data at sector 262178 plus 00fbc00 bytes. Let's do a bit of hex-to-decimal and bytes-to-sectors conversion.

0xfbc00 bytes = 1031168 bytes = 1031168/512 sectors = 2014 sectors

So our NTFS partition starts at sector 262178 + 2014 = 264192. This value is the offset we will apply later on all drives; let's call it the NTFS offset. Obviously the overall size shrinks by that offset, so the new size is 1048314789 - 2014 = 1048312775 sectors.
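The same arithmetic in shell, if you prefer not to do it by hand (a small sketch using the numbers from this example):

 # Convert the byte offset of the NTFS boot sector into sectors, then derive
 # the NTFS start sector and the shrunken size.
 LDM_DATA_START=262178          # start of the LDM data partition (from mmls)
 LDM_DATA_SIZE=1048314789       # length of the LDM data partition in sectors
 NTFS_BYTE_OFFSET=0xfbc00       # where ".R.NTFS" appeared in the dump
 NTFS_OFFSET_SECTORS=$(( NTFS_BYTE_OFFSET / 512 ))       # 2014
 NTFS_START=$(( LDM_DATA_START + NTFS_OFFSET_SECTORS ))  # 264192
 NTFS_SIZE=$(( LDM_DATA_SIZE - NTFS_OFFSET_SECTORS ))    # 1048312775
 echo "NTFS starts at sector $NTFS_START, size $NTFS_SIZE sectors"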

4. Try to mount / see the data

From here on, it will either work out of the box because your NTFS partition is healthy, or it will not, because you are doing this to recover some data in the first place. The process is the same either way. All of the following is based on [1] (see references below).

A spanned volume fills one disk after another, whereas a striped (RAID0) volume spreads chunks of data across all the disks (i.e. a file ends up distributed over several disks). In my case I did not know whether the volume was spanned or striped. The easiest way to find out is to check whether you have a lot of zeros at the end of all of your disks: if so, it is striped, because a spanned volume fills the first disk, then the second, and so on, so only the last one has free space at the end. I am not 100% sure of this, but it is what I observed. So dump a bunch of sectors from the end of the LDM data partition of each disk and have a look.
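One way to do that check (just a sketch, using the partition boundaries from this example) is to dump the tail of the LDM data partition on each disk and see whether xxd collapses it to zeros:

 # Dump the last ~10000 sectors of the LDM data partition of each disk.
 # xxd -a collapses runs of identical lines into '*', so an all-zero tail is obvious.
 LDM_DATA_END=1048576966        # last sector of the LDM data partition (from mmls)
 for lun in LUN01 LUN02 LUN03 LUN04 LUN05; do
     echo "=== $lun ==="
     dd if="$lun" skip=$(( LDM_DATA_END - 10000 )) count=10000 2>/dev/null | xxd -a | tail -n 5
 done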

4.0 Preparing to access your data

First, attach your dd image (or device) to a loopback device, using the NTFS offset and size we calculated above. Note that losetup wants the offset and size in bytes, not sectors:

offset = 264192 * 512 = 135266304
size = 1048312775 * 512 = 536736140800

 # losetup /dev/loop2 DDFILE_OR_DEVICE -o 135266304 --size 536736140800
 # blockdev --getsize /dev/loop2
 1048312775    <---- total size in sectors, same number as before

Note: you can add '-r' to set up the loop device read-only.

Do the above for every physical disk that is part of your volume, and check the result with: losetup -a. Note: if you do not have enough loop devices, you can easily create more: # mknod -m0660 /dev/loopNUMBER b 7 NUMBER && chown root.disk /dev/loopNUMBER
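If all your disks share the same NTFS offset and size (as in this example), a loop saves some typing. This is only a sketch: depending on your losetup version the size option is spelled --size or --sizelimit, and the loop-device numbering here is arbitrary:

 # Attach every LUN read-only at the NTFS offset computed above.
 OFFSET_BYTES=$(( 264192 * 512 ))        # 135266304
 SIZE_BYTES=$(( 1048312775 * 512 ))      # 536736140800
 i=1
 for lun in LUN01 LUN02 LUN03 LUN04 LUN05; do
     losetup -r -o "$OFFSET_BYTES" --sizelimit "$SIZE_BYTES" "/dev/loop$i" "$lun"
     i=$(( i + 1 ))
 done
 losetup -a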

Check the alignment by looking at the first disk of the volume group (in this example physical disk 2, i.e. /dev/loop2) and verifying that the first line shows .R.NTFS. If it does not, your alignment is wrong: double-check your calculations above and try again, or you are not looking at the first Windows disk.

eg:

 First disk of the volume has been mounted on /dev/loop2
 # xxd /dev/loop2 | head
 0000000: eb52 904e 5446 5320 2020 2000 0208 0000  .R.NTFS    .....
 0000010: 0000 0000 00f8 0000 3f00 ff00 0008 0400  ........?.......

Things look good. Let's move on to the annoying part :)

4.1 Spanned

Spanned disks are really just a chain of disks: you fill the first one, then use the second, and so on. Create a mapping file that looks like this:

 # Offset into  Size of this  Raid type  Device      Start sector
 # volume       device                               of device
 0          1048312775 linear /dev/loop2 0
 1048312775 1048312775 linear /dev/loop1 0
 2096625550 1048312775 linear /dev/loop3 0

Notes:
- Remember to use the correct disk order (the one you discovered earlier): in this example physical disk 2, then physical disk 1, then physical disk 3.
- 2096625550 = 2 * 1048312775; obviously, if you had a fourth disk its offset would be 3 times the per-disk size.
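Since the offsets are just multiples of the per-disk size, you can also generate this mapping file instead of typing it. A sketch, assuming the per-disk size from this example and the loop devices listed in the Windows disk order you worked out:

 # Generate a linear (spanned) dmsetup table, one line per disk, in Windows order.
 SIZE=1048312775                             # NTFS size per disk, in sectors
 DEVICES="/dev/loop2 /dev/loop1 /dev/loop3"  # physical disk 2, then 1, then 3
 OFFSET=0
 for dev in $DEVICES; do
     echo "$OFFSET $SIZE linear $dev 0"
     OFFSET=$(( OFFSET + SIZE ))
 done > spanned.mapping
 cat spanned.mapping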

4.2 Striped

The problem with striped mode (aka RAID0) is that you need to know the stripe size. Apparently the default is 64k (in my case it was 128k, but I don't know whether the Windows sysadmin configured it that way :). In any case, if you do not know it, just try all the standard values and see which one gives you a viable NTFS.

Create a file as shown below for 3 disks with a chunk size of 128k:

                      .---+--> 3 chunks of 128k
 0 3144938240 striped 3 128 /dev/loop2 0 /dev/loop3 0 /dev/loop1 0
   `---> total size of the volume   `----------+-----------+---> disk order

/!\ The volume size is not exactly the size we calculated earlier: dmsetup needs the volume size to be divisible by the chunk size (aka stripe size) and the number of disks in the volume. In our case we have 3 disks of 1048312775 sectors each, so the "natural" size is 1048312775 * 3 = 3144938325 sectors, but because of the above constraint we recalculate and round it down: # echo "3144938325/128*128" | bc gives 3144938240 sectors.

So 3144938240 is the size of your volume in a striped scenario with 3 disks and a chunk (aka stripe) size of 128.
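The rounding can also be scripted. A sketch using the numbers from this example; it rounds down to a multiple of disks * chunk size, which is slightly stricter than the bc one-liner above but should always satisfy dmsetup's divisibility checks:

 # Compute a striped-table size that is a multiple of both the chunk size and
 # the number of disks, then emit the one-line dmsetup table.
 PER_DISK=1048312775        # NTFS size per disk, in sectors
 DISKS=3
 CHUNK=128                  # chunk size parameter used in the example above
 TOTAL=$(( PER_DISK * DISKS ))
 ROUNDED=$(( TOTAL / (DISKS * CHUNK) * (DISKS * CHUNK) ))
 echo "0 $ROUNDED striped $DISKS $CHUNK /dev/loop2 0 /dev/loop3 0 /dev/loop1 0" > striped.mapping
 cat striped.mapping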

4.3 Mount it

Now let's aggregate everything with dmsetup:

 # dmsetup create myldm /path/myconfigfile
 # dmsetup ls
 myldm   (253, 1)
 # mount -t ntfs -o ro /dev/mapper/myldm /mnt

If it does not mount, you can use testdisk:

 # testdisk /dev/mapper/myldm
 --> Analyse
 ----> Quick search
 ------> You should see the volume name (if any). If not it seems compromised :)
 --------> Press 'P' to see files and copy them with 'c'

5. Conclusion

The above worked for me; your mileage may vary, and there may well be a better and easier way to do this. If so, please share it so nobody else has to go through this hassle :) Also, it may look hard, but it is not: as long as you are working on a copy of your data, just try and try again until you see something. It took me 3 days to figure out how to put all the bits together. Hopefully this saves you those 3 days.

Note: all of the examples above were reconstructed after the fact, so there may be some inconsistencies between them despite my best efforts ;)

Good luck.

6. References


Here is a (much easier) answer, now that ldmtool exists. ldmtool reads LDM metadata (aka Windows Dynamic Disks) and, among other things, creates device-mapper entries for the corresponding disks, partitions and RAID arrays, which lets you access and mount them like any other block device on Linux.

The program has a few limitations, mostly stemming from the fact that it never modifies the LDM metadata. Therefore you cannot create LDM disks on Linux (use Windows for that), and you must not mount degraded RAID volumes (ones with missing disks) read-write: ldmtool will not update the metadata to reflect the missing disk, and the next time Windows assembles the RAID array there will be problems because the drives will be out of sync.

The following are the steps to be taken:

  • Install ldmtool. On Debian and Ubuntu systems, apt-get install ldmtool will do; on most other recent Linux distributions it should be just as easy.
  • Run ldmtool create all.
  • You should now have a number of new entries in /dev/mapper. Find the right one (in my case a RAID1 array, hence /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2) and simply mount it on a directory of your choice with something like mount -t ntfs /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2 (a sample session is sketched below).
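A possible session might look like the sketch below. It assumes the scan and create subcommands of ldmtool and uses a placeholder volume name; mounting read-only first is a sensible precaution given the RAID caveat above:

 ldmtool scan                      # list the GUIDs of the LDM disk groups found
 ldmtool create all                # create device-mapper devices for all volumes
 ls /dev/mapper/ldm_vol_*          # the assembled volumes appear here
 mkdir -p /mnt/ldm
 mount -t ntfs -o ro /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2 /mnt/ldm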

To do this automatically at boot time, you will most likely need to insert a call to ldmtool create all at a suitable point in the boot sequence, before the contents of /etc/fstab are mounted. A good way to make the call would be:

 [ -x /usr/bin/ldmtool ] && ldmtool create all >/dev/null || true 

How to get this snippet to run at the right moment during boot differs quite a bit depending on the distribution you use. For Ubuntu 13.10, I inserted the line above into /etc/init/mountall.conf, right before the exec mountall ... call at the end of the script section. Now I can mount the Windows LDM RAID1 partition from /etc/fstab. Enjoy!
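Once that call is in place, a hypothetical /etc/fstab entry could look like the line below (device name as above; the mount point and options are only an example):

 # /etc/fstab entry for the LDM volume (example only)
 /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2  /mnt/windows  ntfs  defaults,nofail  0  0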


Another data point: a Windows dynamic volume spanned across 5 disks, roughly 8TB in total.

This is what I pieced together from the answer above, plus references [1] and [2].

I found that the metadata partition contains more than just the disk GUIDs. There is a clear structure holding the size, the start offset, and the offset within the spanned volume.

Use the above sections {2.1} and {2.2} to determine the order of the drives.

My LDM disks are exported as 4x 2TB chunks plus 1 smaller chunk, all carved from a single RAID5 array on a 3ware 9650SE controller. Each disk is laid out as:

 /dev/sdX1 = LDM metadata partition (~1mb)
 /dev/sdX2 = Microsoft reserved partition (~100mb)
 /dev/sdX3 = LDM data partition (~1.99TB; the last chunk is smaller)

From 'xxd -a -l 65535 /dev/sdd1 | more' I get:

 0002800: 5642 4c4b 0000 000c 0000 000e 0000 0001  VBLK............
 0002810: 0000 4033 0000 0031 0109 0844 6973 6b31  ..@3...1...Disk1
 0002820: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
 0002830: 0000 0000 0007 de00 0000 0000 0000 0004  ................
                     ^---^ Note 07 de (offset)
 0002840: fffb f000 0108 0102 0000 0000 0000 0000  ................
          ^-------^ Note fffb f000 (size)
 0002850: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 *
 0002880: 5642 4c4b 0000 000d 0000 000f 0000 0001  VBLK............
 0002890: 0000 4033 0000 0031 010a 0844 6973 6b32  ..@3...1...Disk2
 00028a0: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
 00028b0: 0000 0000 0007 de00 0000 00ff fbf0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
 00028c0: fffb f000 0108 0103 0000 0000 0000 0000  ................
          ^-------^ note size again!
 00028d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 *
 0002900: 5642 4c4b 0000 000e 0000 0010 0000 0001  VBLK............
 0002910: 0000 4033 0000 0031 010b 0844 6973 6b33  ..@3...1...Disk3
 0002920: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
 0002930: 0000 0000 0007 de00 0000 01ff f7e0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
 0002940: fffb f000 0108 0104 0000 0000 0000 0000  ................
          ^-------^ note size again!
 0002950: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 *
 0002980: 5642 4c4b 0000 000f 0000 0011 0000 0001  VBLK............
 0002990: 0000 4033 0000 0031 010c 0844 6973 6b34  ..@3...1...Disk4
 00029a0: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
 00029b0: 0000 0000 0007 de00 0000 02ff f3d0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
 00029c0: fffb f000 0108 0105 0000 0000 0000 0000  ................
          ^-------^ note size again!
 00029d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
 *
 0002a00: 5642 4c4b 0000 0010 0000 0012 0000 0001  VBLK............
 0002a10: 0000 4033 0000 0031 010d 0844 6973 6b35  ..@3...1...Disk5
 0002a20: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
 0002a30: 0000 0000 0007 de00 0000 03ff efc0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
 0002a40: 17b7 d000 0108 0106 0000 0000 0000 0000  ................
          ^-------^ And my final drive is the smallest
 0002a50: 0000 0000 0000 0000 0000 0000 0000 0000  ................

So from the above you can clearly see the size of the data extent, the offset within the partition, and the offset within the volume. Let's do the math:

 Disk1: Size of block    = fffb f000    = 4294701056
        Start offset     = 07 de        = 2014
        Partition offset = 00 0000 00   = 0
 Disk2: Size of block    = fffb f000    = 4294701056
        Start offset     = 07 de        = 2014
        Partition offset = 00ff fbf0 00 = 4294701056
 Disk3: Size of block    = fffb f000    = 4294701056
        Start offset     = 07 de        = 2014
        Partition offset = 01ff f7e0 00 = 8589402112
 Disk4: Size of block    = fffb f000    = 4294701056
        Start offset     = 07 de        = 2014
        Partition offset = 02ff f3d0 00 = 12884103168
 Disk5: Size of block    = 17b7 d000    = 397922304
        Start offset     = 07 de        = 2014
        Partition offset = 03ff efc0 00 = 17178804224

 *Note: Use Excel's hex2dec() function*
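If you do not have Excel handy, printf in any shell does the same hex-to-decimal conversion (values taken from the table above):

 printf '%d\n' 0xfffbf000      # 4294701056  (size of block)
 printf '%d\n' 0x7de           # 2014        (start offset)
 printf '%d\n' 0x00fffbf000    # 4294701056  (partition offset of Disk2)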

This translates into the following device-mapper (dmsetup) table:

 # File /etc/ntfsvolume
 # Offset into  Size of this  Raid    Device     Start sector
 # volume       device        type               in volume
 0           4294701056 linear /dev/sdd3 2014
 4294701056  4294701056 linear /dev/sdc3 2014
 8589402112  4294701056 linear /dev/sdf3 2014
 12884103168 4294701056 linear /dev/sde3 2014
 17178804224 397922304  linear /dev/sdg3 2014

which can then be assembled and mounted directly via:

 $ dmsetup create myvolume /etc/ntfsvolume
 $ sudo mkdir /media/volume/
 $ mount -t ntfs-3g /dev/mapper/myvolume /media/volume
 $ sudo mount -t ntfs-3g -o ro /dev/mapper/myvolume /media/volume    (mount read-only)

which requires modules:

 dmraid ntfs-3g 

ATTENTION!

Before mounting read-write, be absolutely sure that you have all the offsets, disk sizes and span offsets right. ntfs-3g will happily mount even if the offsets are wrong, and the contents of your files will then be incorrect.

A good double check is to run Windows chkdsk and look at the extra information at the end. Take the total number of allocation units, multiply by the allocation unit (block) size (mine was 4096), then divide by 512 (the size of a regular sector); this should correspond to the volume size you assembled from the metadata.

My computed partition size was off, 4096 bytes smaller than the size given in the metadata tables above; I assume the partition size is rounded to an even number. I compute 2197090816, Windows says 2197090815 (4096-byte blocks).

References

