[lug] disk image with dd

Gary Hodges Gary.Hodges at noaa.gov
Wed Jul 15 08:34:32 MDT 2009


OK, the job finished sometime last night and it looks like it went well. 
I've spent some time trying to learn how dd_rhelp works, but I'm 
unsure about one point.  Here is the final output (I have changed some 
units to GB to make it easier to read):

=== parsing at 0k, for 0k, max continuous err: 2.5k >>> ===
dd_rescue: (info): ipos: 127GB, opos: 127GB, xferd: 127GB
dd_rescue: (info): ipos: 244GB, opos: 244GB, xferd: 244GB
                    errs:   0, errxfer:     0.0k, succxfer: 244GB
              +curr.rate:   1747kB/s, avg.rate:  1791kB/s, avg.load: -0.4%

dd_rescue: (info): /dev/hdc (244GB): EOF
Summary for /dev/hdc -> /dev/hdd:
dd_rescue: (info): ipos: 244GB, opos: 244GB, xferd: 244GB
                    errs:   0, errxfer:     0.0k, succxfer: 244GB
              +curr.rate:     1200kB/s, avg.rate:     1791kB/s, avg.load: -0.4%

There were no errors, which is a pleasant surprise, so I should have a 
duplicate copy of all the data on hdd.  Looking at the web site for 
dd_rhelp, it says "The gaps that aren't already parsed with dd_rescue 
are filled with zeroes."  I suspect I may have some deleted files on hdc 
that I'd like to try to recover.  Am I reading this correctly, that I 
won't be able to recover any deleted files on the destination drive, 
hdd, since all gaps will be written with zeros?
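
In the meantime, I figure I can sanity-check that hdd really is a 
sector-for-sector duplicate before poking at it.  Something like the 
following is what I have in mind (assuming both drives are unmounted 
and report the same size; I realize a full read will take a while):

   # compare the two drives byte for byte
   cmp /dev/hdc /dev/hdd

   # or checksum each one and compare the results
   md5sum /dev/hdc /dev/hdd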

Now that I know there are no errors on the drive, will dd create an 
exact image that will allow me to search for deleted files?
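
To be concrete, here is roughly what I have in mind (the image path and 
mount point are just placeholders, and I haven't run any of this yet):

   # with no read errors, plain dd should give a sector-for-sector image
   dd if=/dev/hdc of=/path/to/hdc.img bs=1M

   # mount the first NTFS partition from the image read-only to look around;
   # offset is the partition's start sector times 512 -- I'd confirm the
   # start sector with "fdisk -lu /dev/hdc" first
   mount -o ro,loop,offset=$((63*512)) /path/to/hdc.img /mnt/hdc1

For the deleted files themselves I gather I'd point a tool like 
ntfsundelete or photorec at the image (or at hdd directly) rather than 
at a normal mount, but I'd welcome pointers there.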

Gary




Gary Hodges wrote:
> That's the ticket.  Off and running!
> 
> Ben wrote:
>> I believe the syntax is slightly different from dd's -- man dd_rhelp for 
>> more info, but I believe what you want is
>>
>> dd_rhelp /dev/hdc /dev/hdd
>>
>>
>> Ben
>>
>> Gary Hodges wrote:
>>> I must be doing something wrong...
>>>  >./dd_rhelp if=/dev/hdc of=/dev/hdd
>>>
>>> Results in:
>>> dd_rhelp: error: Please specify a usable file as first argument.
>>>
>>> The following drives are seen during boot:
>>> hda: IBM-DHEA-36480, ATA DISK drive
>>> hdb: ST340014A, ATA DISK drive
>>> hdc: ST3250824A, ATA DISK drive
>>> hdd: WDC WD2500PB-55FBA0, ATA DISK drive
>>>
>>> hdc is the disk I want to copy from.  hdd is the disk that I want to 
>>> copy to.
>>>
>>>  >fdisk /dev/hdc
>>> Command (m for help): p
>>>
>>> Disk /dev/hdc: 250.0 GB, 250059350016 bytes
>>> 255 heads, 63 sectors/track, 30401 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>>     Device Boot      Start         End      Blocks   Id  System
>>> /dev/hdc1   *           1        5099    40957686    7  HPFS/NTFS
>>> /dev/hdc2            5100       30401   203238315    7  HPFS/NTFS
>>>
>>>
>>>
>>> Ben wrote:
>>>   
>>>> If you have any errors on the hard drive, dd will just stop. I'm a big 
>>>> fan of dd_rhelp (I think it is just a wrapper to dd_rescue, which is a 
>>>> wrapper to dd) as it jumps over bad regions (fills them with 0's) and 
>>>> comes back and figures out the size of the bad region smartly. You 
>>>> can start/stop it many times and it picks up where it left off. 
>>>> Basically it spends its time getting the 'easy' data and then 
>>>> progressively works harder on the data near bad regions. But otherwise, 
>>>> you are right about the mount command and making a disk image.
>>>>
>>>> Gary Hodges wrote:
>>>>     
>>>>> Hi.  I have a drive from a Windows machine (NTFS) that I think may have 
>>>>> some problems.  I'd like to make a copy to play with so I can leave the 
>>>>> original undisturbed for now.  Is this what I should do?
>>>>>    dd if=/dev/hdx of=/dev/hdy   # where x=original, y=copy
>>>>>
>>>>> Along those lines, if I did the following
>>>>>    dd if=/dev/hdx of=/path/to/image
>>>>>
>>>>> would I be able to mount the image with something like?
>>>>>    mount -o loop /path/to/image /mnt/mountpoint



