EVMS during installation - DATA CORRUPTION

Collapse
This topic is closed.
X
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

    EVMS during installation - DATA CORRUPTION

    Some time after posting the first version of this message, I detected corrupted files in all my logical volumes.
    Since this seems to be a serious problem, I am rewriting the post with as much as I can remember of my installation process, to help the developers understand what could have happened.

    Maybe it was just a coincidence and something went wrong with my hardware, but I have been using this system for more than six months without any problem, and it is running perfectly now. Besides, the Kubuntu ISO file was good before I burned the CD (I checked the md5sum), yet it too was corrupted by the time I detected the corruption problems.
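
    To be concrete about the ISO check, something like this minimal Python sketch is what I mean (the path and expected md5sum below are placeholders, not my real values; the real sum comes from the release's MD5SUMS file):

    Code:
    import hashlib

    # Placeholder values -- the real expected sum comes from the release's MD5SUMS file.
    EXPECTED_MD5 = "0123456789abcdef0123456789abcdef"
    ISO_PATH = "kubuntu.iso"

    md5 = hashlib.md5()
    with open(ISO_PATH, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            md5.update(chunk)

    print("OK" if md5.hexdigest() == EXPECTED_MD5 else "MISMATCH", md5.hexdigest())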

    I have two SATA HDs with 6 partitions each.
    The last partition of each HD is managed by LVM2 (not EVMS) as follows:
    /dev/sda8 -> Volume group VSDA
    /dev/sdb8 -> Volume group VSDB

    Each VG has several logical volumes, and VSDB is basically a mirror (backup) of VSDA. All LVs use the reiserfs filesystem.

    /dev/sda7 -> Swap
    /dev/sdb6 -> Where I installed kubuntu

    The first time I tried the installation, I chose manual partitioning and only chose to format /dev/sdb6 as root and /dev/sda7 as swap.

    Somehow (I cannot remember exactly how) I entered an option that seemed to configure logical volumes; I thought it was for assigning mount points to logical volumes. Immediately after entering it, I saw some error messages and decided to cancel the rest of the installation.

    Trying the install process again, I did not see any LVM-related option!

    When I again requested to format /dev/sdb6 as root and /dev/sda7 as swap, I got an error message I cannot remember now. I decided to proceed anyway.

    The rest of the installation seemed to run OK, with some "normal" problems for a test release; I have posted some of them in this forum.

    The first time I booted, I got an error message apparently from EVMS.
    I could not reproduce it, but the log file always contains the message I post below.

    Then I booted into my usual Gentoo Linux and ran reiserfsck on all LVs. No filesystem corruption was detected.

    Only later, when I needed to use a big file (~100 MB), did I see that it was corrupted.

    Because I have a program that checksums the files in directories, and I use it very frequently, I could see exactly which files were corrupted.
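
    For reference, here is a rough Python sketch of the kind of checksum tool I mean (this is not my actual program, just the idea): it walks a directory tree and records an md5 per file, so the output from two copies or two runs can be compared.

    Code:
    import hashlib
    import os
    import sys

    def checksum_tree(root):
        """Return {relative path: md5 hex digest} for every file under root."""
        sums = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                md5 = hashlib.md5()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1024 * 1024), b""):
                        md5.update(chunk)
                sums[os.path.relpath(path, root)] = md5.hexdigest()
        return sums

    if __name__ == "__main__":
        # Print "digest  path" lines; diff the output of two runs to spot corruption.
        for rel, digest in sorted(checksum_tree(sys.argv[1]).items()):
            print(digest, rel)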

    Fortunately, the files on VSDA were corrupted in different places than their backups on VSDB, with one exception: an old, unneeded file that was corrupted in the same block as its backup.

    Because of this, I wrote a Python program that allowed me to recover all the files (except that one).
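
    A very simplified sketch of the recovery idea (again, not my original program; the block size and the use of md5 are assumptions): compare the two copies block by block and, where they differ, try both candidates until the reassembled file matches the checksum recorded before the corruption. Since only a few sectors differ, the number of combinations stays small; if both copies are bad in the same block, as with that one old file, no combination will match.

    Code:
    import hashlib
    import itertools

    BLOCK = 4096  # assumed block size

    def read_blocks(path):
        with open(path, "rb") as f:
            data = f.read()
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    def recover(copy_a, copy_b, known_md5, out_path):
        """Rebuild a file from two differently-corrupted copies of equal size."""
        a, b = read_blocks(copy_a), read_blocks(copy_b)
        assert len(a) == len(b), "copies must be the same size"
        diffs = [i for i in range(len(a)) if a[i] != b[i]]
        # Only a few blocks differ, so trying every combination is cheap.
        for choice in itertools.product((0, 1), repeat=len(diffs)):
            blocks = list(a)
            for idx, pick in zip(diffs, choice):
                blocks[idx] = (a[idx], b[idx])[pick]
            candidate = b"".join(blocks)
            if hashlib.md5(candidate).hexdigest() == known_md5:
                with open(out_path, "wb") as out:
                    out.write(candidate)
                return True
        return False  # both copies were bad in the same block(s)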

    Only the data areas of the LVs were corrupted, and only in a very few sectors. The filesystems seemed to be coherent; reiserfsck still did not report any corruption.

    I have been using LVM2 for a long time without any problems.

    I think that, at the very least, an installation option should be included to disallow all access to any partitions or LVM volumes unless explicitly requested. The same goes for the boot process (fstab mount points and/or EVMS/LVM).

    BTW, I am not sure this is the right place to post this. If it is not, please tell me where it should go. Thank you as well for any comments on this.

    Regards.
    Paulo

    ___________________________________________________
    Message in the EVMS log:

    Feb 25 00:21:02 Gandalf _0_ Engine: plugin_user_message: Message is: LVM2: Object sda8 has an LVM2 PV label and header, but the recorded size of the object (385982848 sectors) does not match the actual size (385993692 sectors). Please indicate whether or not sda8 is an LVM2 PV.

    If your container includes an MD RAID region, it's possible that LVM2 has found the PV label on one of that region's child objects instead of on the MD region itself. If this is the case, then object sda8 is most likely NOT one of the LVM2 PVs.

    Choosing "no" here is the default, and is always safe, since no changes will be made to your configuration. Choosing "yes" will modify your configuration, and will cause problems if it's not the correct choice. The only time you would really need to choose "yes" here is if you are converting an existing container from using the LVM2 tools to using EVMS, and the container is NOT created from an MD RAID region. If you created and manage your containers only with EVMS, you should always be able to answer "no".