    Filesystem check or mount failed. fsck not helping

    Hi,

    For some reason, Skype crashed yesterday with some I/O error and told me to reinstall it. I did that, and then my PC restarted, showing the following error at startup:

    Filesystem check or mount failed.
    A maintenance shell will now be started.
    CONTROL-D will terminate this shell and continue booting after re-trying filesystems. Any further errors will be ignored.
    I ran fsck -f and it found a few errors and fixed them. I then tried hitting CONTROL-D, but all I get immediately is the same message.
    I also tried the reboot and shutdown commands, but the PC would just hang at "shutting down processes" or something like that. If I restart the PC with the reset button, I get the same message again.

    I have now booted from a live CD and I can see my partitions and hard drives normally.
    Here is the fdisk -l output.

    ubuntu@ubuntu:~$ sudo fdisk -l

    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x077de6f1

    Device Boot Start End Blocks Id System
    /dev/sda1 * 63 100358054 50178996 7 HPFS/NTFS/exFAT
    /dev/sda2 100358055 312576704 106109325 7 HPFS/NTFS/exFAT

    Disk /dev/sdb: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000402ca

    Device Boot Start End Blocks Id System
    /dev/sdb1 * 2048 499711 248832 83 Linux
    /dev/sdb2 501758 156301311 77899777 5 Extended
    /dev/sdb5 501760 156301311 77899776 8e Linux LVM

    Disk /dev/mapper/Voyager-root: 76.5 GB, 76546048000 bytes
    255 heads, 63 sectors/track, 9306 cylinders, total 149504000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/Voyager-root doesn't contain a valid partition table

    Disk /dev/mapper/Voyager-swap_1: 3221 MB, 3221225472 bytes
    255 heads, 63 sectors/track, 391 cylinders, total 6291456 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/Voyager-swap_1 doesn't contain a valid partition table
    Here is my fstab:
    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    /dev/mapper/Voyager-root / ext4 errors=remount-ro 0 1
    # /boot was on /dev/sda1 during installation
    UUID=5826e790-9944-4817-9e9c-3b14bf502149 /boot ext2 defaults 0 2
    /dev/mapper/Voyager-swap_1 none swap sw 0 0
    /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
    and here is the blkid output

    ubuntu@ubuntu:~$ sudo blkid
    /dev/loop0: TYPE="squashfs"
    /dev/sda1: LABEL="Igre" UUID="1E2CFD042CFCD82B" TYPE="ntfs"
    /dev/sda2: LABEL="Storage" UUID="B84C13C44C137BF6" TYPE="ntfs"
    /dev/sdb1: UUID="5826e790-9944-4817-9e9c-3b14bf502149" TYPE="ext2"
    /dev/sdb5: UUID="2cgDDQ-CkWY-hAY9-ST3i-Su5m-bA1u-TDgWbf" TYPE="LVM2_member"
    /dev/sr0: LABEL="Ubuntu 13.04 i386" TYPE="iso9660"
    /dev/mapper/Voyager-root: UUID="c94261b0-82c6-4f2c-80c7-b53ba7c56680" TYPE="ext4"
    /dev/mapper/Voyager-swap_1: UUID="96870738-8c2f-4e37-a751-d30bd1212c15" TYPE="swap"
    I never touched the fstab file, though. From googling around, I saw that almost everyone fixed this by running fsck, but for some reason it doesn't work for me. Here are some screenshots from the Disk Utility on the live CD.
    http://www.pohrani.com/f/2o/Sv/Hh7Pc...m-2013-05-.png
    http://www.pohrani.com/f/3m/vu/1C8mR...m-2013-05-.png
    http://www.pohrani.com/f/47/5Y/3lUGj...m-2013-05-.png
    http://www.pohrani.com/f/2G/KL/n8sVH...m-2013-05-.png

    Also, everything is read-only, whether I boot from the live CD or try the recovery console.


    I really don't understand what went wrong here. I've never had problems like this before... but it seems there's a first time for everything. And right at the wrong moment, since I have an exam tomorrow and I need my PC today :/

    P.S.
    I'm using the latest version of Kubuntu.

    #2
    This is one of the reasons I advise against using fakeRAID: it's too hard to troubleshoot. You didn't say whether your files are accessible when you boot the live CD. If they are, I'd spend my time making a backup of any important data.

    What does this show?

    sudo dmraid -r

    You can try to deactivate and reactivate it and then try running fsck again:

    sudo dmraid -an
    sudo dmraid -ay


    Since the drive capacity is 80 GB, I assume it's fairly old. Any chance the drive is simply dying?
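
    If you want to check, something like this from the live CD should show the drive's health (assuming smartmontools is available on the live session; the device name comes from your fdisk output):

    sudo smartctl -H /dev/sdb    # overall health self-assessment
    sudo smartctl -a /dev/sdb    # full SMART attributes; watch the reallocated and pending sector counts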



      #3
      Hi,

      The output for all 3 commands is

      no raid disks

      ... and I don't remember ever setting up anything like this.

      The hard drive itself is around 4 years old. I've checked the SMART data and everything is OK. No bad sectors, no errors.

      Edit:
      Yes, my files are accessible when I use the live CD. I can mount each partition normally and access the data.
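
      For example, mounting the root LV from the live session goes roughly like this (the mount point is just what I happened to use):

      sudo mkdir -p /mnt/root
      sudo mount /dev/mapper/Voyager-root /mnt/root
      ls /mnt/root/home    # my data is all there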



        #4
        Well, AFAIK /dev/mapper is only used by dmraid to symlink to fakeRAID partitions, so somehow your system got installed that way. Maybe it's left over from some previous install?

        If it were my computer, I'd copy my /home directory to another disk/partition (a rough sketch is below), delete the RAID partitions, and re-install the normal way. Then restore your /home and you should be good to go.

        Really odd that your install ended up on /dev/mapper partitions without you directing it.
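
        For the backup, from the live CD something along these lines should work (the destination device and paths are just placeholders; point them at whatever disk you're backing up to):

        sudo mkdir -p /mnt/root /mnt/backup
        sudo mount /dev/mapper/Voyager-root /mnt/root    # your current root filesystem
        sudo mount /dev/sdXY /mnt/backup                 # the backup disk/partition (placeholder)
        sudo rsync -a /mnt/root/home/ /mnt/backup/home-backup/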



          #5
          I just noticed LVM in your previous post. You must have accidentally selected LVM when you initially installed. So they're not RAID partitions, they're LVM partitions. I'm not experienced with LVM. Hopefully someone who is will come along in a minute.



            #6
            Hi,

            Yes, I did use LVM when I initially installed. But that was a long time ago, and everything worked fine until yesterday. The only thing I remember doing a few days ago was upgrading from the previous Kubuntu version to the latest one. Although, if that were the cause, it would have broken immediately, wouldn't it?

            Oh, and one more detail I remembered. After I got the strange I/O error from Skype, I wanted to lock my screen because I needed to leave my room for a second. But when I tried to lock it, I got something like "can't use this and this because your disk is read-only", and then everything froze and the PC restarted.

            Here is some LVM data:

            lvm> pvscan
            PV /dev/sdb5 VG Voyager lvm2 [74,29 GiB / 0 free]
            Total: 1 [74,29 GiB] / in use: 1 [74,29 GiB] / in no VG: 0 [0 ]
            lvm> vgscan
            Reading all physical volumes. This may take a while...
            Found volume group "Voyager" using metadata type lvm2
            lvm> lvscan
            ACTIVE '/dev/Voyager/root' [71,29 GiB] inherit
            ACTIVE '/dev/Voyager/swap_1' [3,00 GiB] inherit
            lvm> lvdisplay
            --- Logical volume ---
            LV Path /dev/Voyager/root
            LV Name root
            VG Name Voyager
            LV UUID qq1qtw-VwQn-5Uwf-AF7O-HFa2-9sSO-PI469L
            LV Write Access read/write
            LV Creation host, time ,
            LV Status available
            # open 1
            LV Size 71,29 GiB
            Current LE 18250
            Segments 1
            Allocation inherit
            Read ahead sectors auto
            - currently set to 256
            Block device 252:0

            --- Logical volume ---
            LV Path /dev/Voyager/swap_1
            LV Name swap_1
            VG Name Voyager
            LV UUID bjB2Ym-Zxbn-dbpB-G2ql-FQVD-G2XE-Tk1gAJ
            LV Write Access read/write
            LV Creation host, time ,
            LV Status available
            # open 0
            LV Size 3,00 GiB
            Current LE 768
            Segments 1
            Allocation inherit
            Read ahead sectors auto
            - currently set to 256
            Block device 252:1

            lvm>
            Edit:

            I also tried the recovery menu, but I get:

            Recovery menu (filesystem state: read-only)
            resume Resume normal boot
            OK

            which takes me back to the first error.



              #7
              If you go through the recovery menu with the filesystem check option, it should offer you the recovery menu a second time after the check passes. You can then re-select the root prompt and it will mount the filesystem read/write. I don't know whether that will help or not.
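
              I'm not an LVM user, but I believe the equivalent from the live CD would be roughly this (your volume group is named Voyager according to your lvm output, and root is ext4, so e2fsck is the right tool):

              sudo vgchange -ay Voyager                  # make sure the LVM volumes are activated
              sudo e2fsck -f /dev/mapper/Voyager-root    # check the root LV while it is unmounted
              sudo e2fsck -f /dev/sdb1                   # and the ext2 /boot partition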



                #8
                Hi,

                Can you be more specific, please?

                At the GRUB menu I select the recovery option. Then it starts to load, but I get:

                Recovery menu (filesystem state: read-only)
                resume Resume normal boot
                OK
                I can only click on OK or Resume normal boot.

                Both of those options throw me back to:

                Filesystem check or mount failed.
                A maintenance shell will now be started.
                CONTROL-D will terminate this shell and continue booting after re-trying filesystems. Any further errors will be ignored.



                  #9
                  Again, I'm not experienced with LVM, but is it supposed to report no free space?

                  [74,29 GiB / 0 free]
                  Could you just have a full partition?



                    #10
                    I checked with df -h now and it says I have 22G free on the root partition. So that "0 free" doesn't seem to refer to free space inside the filesystem; it probably means there's no space in the volume group that isn't already allocated to a logical volume.
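
                    For anyone checking the same thing, vgdisplay should show this explicitly (I'm going from the standard LVM tools here, so treat the exact field names as approximate):

                    sudo vgdisplay Voyager    # compare "Alloc PE / Size" with "Free  PE / Size"
                    sudo lvs                  # sizes of the individual logical volumes
                    df -h /mnt/root           # filesystem-level free space, after mounting the root LV (mount point is an example)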



                    Edit:
                    In case someone needs it, here is the /proc/partitions info.

                    ubuntu@ubuntu:~$ cat /proc/partitions
                    major minor #blocks name

                    7 0 775440 loop0
                    11 0 813056 sr0
                    8 0 156290904 sda
                    8 1 50178996 sda1
                    8 2 106109325 sda2
                    8 16 78150744 sdb
                    8 17 248832 sdb1
                    8 18 1 sdb2
                    8 21 77899776 sdb5
                    252 0 74752000 dm-0
                    252 1 3145728 dm-1
