Today I accidentally deleted an important markdown file on my SSD and had to recover it.

All I did was this:


1. Run sudo systemctl status fstrim.timer to check how often TRIM runs on my system (apparently it runs weekly, and the next scheduled run was in 3 days).

2. Run sudo pacman -S testdisk.

3. Run sudo photorec.

4. Choose the correct partition where the files were deleted.

5. Choose the filesystem type (ext4).

6. Choose a destination folder where the recovered files will be saved.

7. Start the recovery. 10-15 minutes and it's done.

8. Open nvim in the parent folder and grep for content in the file that I remember adding today.
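
Roughly, that last step looks like this (a sketch: photorec saves recovered files into recup_dir.N folders with generic names like f0123456.txt, so grepping for a phrase you remember is the quickest way to find yours):

grep -rl "phrase I remember adding" recup_dir.*/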


That’s it - the whole process was so fast. No googling through 10 different sites with their shitty flashy UIs promising “free recovery,” wondering whether this is even trustworthy to install on your machine, dealing with installers that’ll sneak in annoying software if you click too fast, only to have them ask for payment later. No navigating complex GUIs either.

I was so thankful for this I actually donated to the maintainers of the software. Software done right.

  • redjard@lemmy.dbzer0.com · 13 hours ago

    Last time I deleted a plaintext file I just grepped for it.
    cat /dev/nvme0n1 | strings | grep -n "text I remember"

    Had to home in on the location with head -c and tail -c after I found it, then simply did a cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec and trimmed the filesystem garbage from the ends manually.
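
    In hindsight, strings can also print the byte offset of each match directly, which would have made the homing-in step easier (a variant assuming GNU strings; not what I actually ran):

    strings -t d /dev/nvme0n1 | grep "text I remember"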

        • redjard@lemmy.dbzer0.com · 9 hours ago

          It makes the command easier to edit here. Put the various forms I used next to each other and it becomes apparent:

          cat /dev/nvme0n1 | strings | grep -n "text I remember"
          cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
          cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec

          compare that to

          strings /dev/nvme0n1 | grep -n "text I remember"
          tail /dev/nvme0n1 -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
          tail /dev/nvme0n1 -c -123456789012 | head -c 3000 > filerec

          where I have to weave the long and visually distracting partition name between the active parts of the command.
          The cat here is a result of experiencing what happens when not using it.

          Worse, some commands take input file arguments in weird ways or only allow them after the options, so when taking that into account the generic style people use becomes

          strings /dev/nvme0n1 | grep -n "text I remember"
          tail -c -100000000000 /dev/nvme0n1 | head -c 50000000000 | strings | grep -n "text I remember"
          tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec

          This is what I’d expect to run across in the wild, and also for example what AI spits out when asked how to do this. You’ll take my stylistic cats over my dead body.

          • thingsiplay@beehaw.org · 4 hours ago

            In that case I would prefer using variables for the filename:

            file="/dev/nvme0n1"
            text="text I remember"
            strings "${file}" | grep -n "${text}"
            tail -c -100000000000 "${file}" | head -c 50000000000 | strings | grep -n "${text}"
            tail -c -123456789012 "${file}" | head -c 3000 > filerec
            

            Even if it’s in the terminal, a temporary variable helps a lot. And for a series of commands I would probably end up writing a simple script or Bash function to share.
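
            Something like this, as a sketch (recgrep is a made-up name):

            recgrep() {
                # search a raw device or file for remembered text, with line numbers
                strings "${1}" | grep -n "${2}"
            }
            recgrep /dev/nvme0n1 "text I remember"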

            • redjard@lemmy.dbzer0.com · 4 hours ago

              I could do a script if I knew what I was gonna do ahead of time, or would write one later if I was gonna do it more often.

              A variable in the shell is fine, but I still have to skip over it to change the first command; it still breaks up the flow a bit more than not having that "$file" in there at all.
              Also if I interrupt the work (or in this case have to let it run for a while), or if I wanna share this with others for whatever reason, I don’t have to hunt for the variable definition, and don’t run any risk of fetching the wrong one if I changed it. Getting by without variables makes the command self-contained.

              And it still maintains the flow of left to right: it’s simply easier to take the tiny well-known packet of cat file and from that point pipe the information ever rightwards, than to see a tail, then read the options, and only then see the far more important start of where the information comes from, before continuing on with the next processing step.
              Any procedural language is always as left to right as possible.

              If you really want to avoid the cat, I have yet another different option for you:
              < /dev/nvme0n1 strings | grep -n "text I remember"
              < /dev/nvme0n1 tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
              < /dev/nvme0n1 tail -c -123456789012 | head -c 3000 > filerec

              This ofc you can again extend with ${infile} and ${recfile} if the context makes it appropriate.
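
              For example, combining the two (a sketch, reusing the offsets from above):

              infile="/dev/nvme0n1"
              recfile="filerec"
              < "${infile}" tail -c -123456789012 | head -c 3000 > "${recfile}"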

              • thingsiplay@beehaw.org · 3 hours ago

                I understand the reason why you do it this way. Hardcoding and being explicit has its advantages (but also disadvantages). I was just saying that I personally prefer using variables in a case like this, especially when sharing, because the user only needs to edit a single place. Variables also have the advantage of being a bit more flexible and easier to read and change across many commands. But that’s coming from someone who loves writing aliases, scripts and little programs. It’s just a different mindset, and neither way is wrong or right. It’s probably not worth complicating stuff for one-off commands, though.

                And for the cat thing, I am not that concerned about it and just took the last example in your post (because it seemed to be the most troublesome). I personally avoid cat when I think about it, but won’t go out of my way to hunt it down. I only do so if performance is in any way a critical issue (like in a loop).

      • redjard@lemmy.dbzer0.com · 10 hours ago

        I don’t like to use < in combination with pipes; I find it harder to read. One is left to right, the other right to left, and < is also just plain weird in its specifics.
        cat is a stylistic choice avoiding needless notational complexity.

        • MonkderVierte@lemmy.zip · 9 hours ago

          strings /dev/nvme0n1 | grep -n "text I remember"

          tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec

          No need for < complexity either.

          • redjard@lemmy.dbzer0.com · 9 hours ago

            Oh right, I misunderstood.
            I didn’t do that because I was planning to switch out strings in that line: first inserting the tail and head before it to home in on the position, then removing it entirely so as not to delete “non-string” parts of my file like empty newlines.

            cat /dev/nvme0n1 | strings | grep -n "text I remember"
            cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
            cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec

            This would be the loose chain of commands I went through, editing one into the next. It’s nice keeping the “constants” like the drive device that are hard to type static. That way mentally for me the command starts only after the first pipe.

  • dellish@lemmy.world · 12 hours ago

    Just wondering, is there an inverse of this? Find files on disk flagged as OK to overwrite and have all bits set to 0?
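
    Something like filling the free space with zeros and then deleting the filler file, maybe (a guess at one approach, untested)?

    dd if=/dev/zero of=zerofill bs=1M   # runs until the filesystem is full
    rm zerofill && sync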

    • frongt@lemmy.zip · 1 day ago

      Yeah I’ve done almost this exact same process on Windows too. I just took the system offline to ensure no writes, and so that permissions weren’t an issue.

  • TachyonTele@piefed.social · 1 day ago

    Question. Where do you learn commands? There’s no way you thought “let’s try sudo” out of nowhere.

    Sincerely,
    New Linux User

    Edit: Great responses everyone, thank you!

    • bitwolf@sh.itjust.works · 2 hours ago

      It takes some time to understand the manpage format, but it’s worth it because manpages are always available.

      When I first started, I used the website explainshell to help me get an idea of how the most common commands worked.

      This has the benefit of referencing the man pages directly so you can better learn how to interpret them.
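
      For example (man -k, also known as apropos, searches the manpage summaries when you don’t know a command’s name yet):

      man fstrim
      man -k trim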

    • puttputt@beehaw.org · 1 day ago

      In another comment, OP said they checked the arch wiki, specifically https://wiki.archlinux.org/title/File_recovery. The arch wiki is a great resource; most of the information is not arch-specific and is useful for linux in general.

      Regarding “let’s try sudo”: you should get familiar with sudo because it’s one of the most important linux commands. It runs a command with elevated privileges (it originally stood for “super-user do”). That means sudo isn’t actually the important part of the commands; it just means that the following commands (pacman and photorec) need elevated privileges. pacman deals with systemwide package management and photorec needs access to the raw storage device objects in order to recover files.
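
      A quick way to see it in action:

      whoami        # prints your regular user
      sudo whoami   # prints root, because the command ran with elevated privileges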

      • thingsiplay@beehaw.org · 1 day ago

        Acktually sudo is there to run a command as another user; it does not need to be “superuser” (also known as root). “superuser” is just the default. To be honest, I never used the option to run as another user, because my computers are single user only. There are so many more options. One should look into man sudo to see what’s possible, it’s incredible!
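
        For example (assuming a www-data user exists on the system):

        sudo -u www-data whoami   # runs whoami as www-data instead of root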

        However, there are alternatives to sudo, such as doas (ported over from OpenBSD) and run0 from the evil SystemD. Their authors find sudo to be complicated and bloated.

        Also a quick tip: sudoedit (same as sudo -e or sudo --edit) instead of sudo vim to edit files with elevated privileges while using your personal configuration. Whether you want to do that is up to you; I want to use my Vim configuration while editing files with sudo rights.
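
        For example (sudoedit hands the file to your own editor via SUDO_EDITOR, falling back to VISUAL/EDITOR):

        SUDO_EDITOR=nvim sudoedit /etc/fstab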

        • towerful@programming.dev · 11 hours ago

          Yeh, I think it’s actually “switch user do”.
          Like “su” is “switch user”.

          The default being root is handy.

          • thingsiplay@beehaw.org · 7 hours ago

            Yes and no. The original design of sudo stands for super user do, and it could only run commands with super user privileges. The run-as-another-user feature was added later, and then it was renamed to substitute user do. I even looked it up to get that fact right, and I always forget it’s “substitute” and not “switch”, but I also think of sudo as switch user do.^^

    • styanax@lemmy.world · 1 day ago

      In the old days, we would ls /usr/bin/ (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a --help option to figure out what they did. At best we maybe had an O’Reilly book (the ones with animals on the covers) or friends to ask. You can still do that today instead of reading blog posts or websites, just look, be curious and be willing to break something by accident. :)
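
      The same loop still works today:

      ls /usr/bin | less   # browse what's installed
      man fstrim           # read the manual page, if it exists
      fstrim --help        # or ask the command itself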

      Part of the Linux journey is to be inquisitive and break some stuff so you can learn to fix it. Unlike, say, Windows, on a Unix-style system the filesystem is laid out in a very specific way (there’s a specification [1]), so one always knows where “things” are - docs go here, icons go there, programs go here, configs go there… - lending itself to just poking around and seeing what something does when you run it.

      After a while your brain adjusts and starts to see all the beautiful patterns in the design of the typical Linux OS/distro, because it’s all laid out in a logical manner and documented how it’s supposed to work if you play the game correctly.

      [1] https://refspecs.linuxfoundation.org/fhs.shtml

      • AnUnusualRelic@lemmy.world · 10 hours ago

        In the old days, we would ls /usr/bin/ (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a --help option to figure out what they did

        I confirm, that’s exactly what I did in the 90s.

      • balsoft@lemmy.ml · 1 day ago

        In the old days, your OS would come with a paper manual describing all the commands in great detail. Nowadays the OS is so complex that you can’t be expected (and don’t really need) to know all the commands that are there. But getting one of those old UNIX/early Linux manuals and reading through it would be a great start.

        • styanax@lemmy.world · 2 hours ago

          Two kinds of nerds: the read the manuals, follow the instructions folks… and then my people, the plug it in turn it on push the button first crew. :)

        • AnUnusualRelic@lemmy.world · 8 hours ago

          It would come with a fucking wall full of manuals. Only toy systems (like MS-DOS) came with a single manual.

    • nope@jlai.lu · 1 day ago

      A very nice terminal-based cheatsheet is named tldr (or tealdeer).
      It gives you a very short explanation of what a program does, and then lists common uses of that program and explains them.
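
      For example:

      tldr tar   # prints a handful of annotated, common tar invocations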

    • balsoft@lemmy.ml · 1 day ago

      I’ll be real with you. What you need to do is, whenever faced with a task that sounds like it needs the CLI, go search Stack Overflow for that task. It probably has something slightly relevant to what you need; take the commands from that answer and read their manuals.

      • thingsiplay@beehaw.org · 1 day ago

        At least do not ask the AI for sudo stuff. While Stack Overflow nowadays also includes AI answers, at least those are monitored online and checked by humans.

    • AllHailTheSheep@sh.itjust.works · 1 day ago

      there are lots of cheatsheets out there but the best way to learn commands is practice. different people will use different commands, so you may not need to spend time learning ffmpeg syntax whereas others find it invaluable. Google is your friend while learning. if you have a Linux question, chances are someone else has had the same question and posted about it online. as far as basics go, spend some time learning about grep and find, they are probably the two most valuable basic commands imo outside of the common ls/mkdir/etc.
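
      a couple of starters (the paths are made up):

      find ~ -name '*.md' -mtime -1   # markdown files changed within the last day
      grep -rn 'TRIM' ~/notes/        # search a folder recursively, with line numbers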

      as for sudo, it’s just “superuser do” so it’s essentially the same as hitting run as admin in windows. lots of times if you try to run a command without sudo that needs it, you’ll get a permission error which reminds you to run as superuser. it eventually becomes second nature to say “ah, this command needs direct access to a hardware device or system files” which means it’ll need to be run with sudo.

    • z3rOR0ne@lemmy.ml · 1 day ago

      Becoming familiar with the essentials, grep, awk, sed is a good start. Manual pages can of course be accessed using man followed by the command/program you want to know more about. Understanding operators like |, >, >>, etc. can help as well. And reading the entirety of man bash is a good way to dive in further.
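
      A single line exercising most of those (file names are made up):

      grep -i 'error' app.log | awk '{print $1}' | sed 's/^/host: /' >> errors.txt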

    • Labototmized@lemmy.world · 1 day ago

      I am also a novice, so take that into account. But it seems to me it’s something you learn over time: what commands do what, and when to use them. I think it’s kind of like knowing what folders or settings to navigate to in other operating systems. Over time you get a feel for it.

      Also, most troubleshooting guides, or tools like photorec, have the steps kind of built in and explain what the commands do.

      If the guide you’re reading has the steps, try to break them down and figure out what each command is actually doing rather than blindly copy-pasting.

    • Cris@lemmy.world · 1 day ago

      Most of what I’ve learned has been the handful of commands that stuck when I looked up how to do stuff. If you ever install a minimal distro and follow a guide, that’s a great way to learn. Or if you look up how to fix something and find commands on the internet, you can look up what those solutions do. But mostly I’m just replying to wish you well on your journey.

      Best of luck with linux, hope you have a lovely day ☺️

    • PastafARRian@lemmy.dbzer0.com · 1 day ago

      AI is great for learning Linux imo, e.g. ask “why did my command say Permission Denied?”. If you object to ChatGPT, there are local AI engines too.

    • Matriks404@lemmy.world · 8 hours ago

      Last time I used btrfs (a few months ago on OpenSUSE) it eventually fucked up the whole partition, making it unrecoverable. No, thanks.

    • cadekat@pawb.social · 1 day ago

      Snapper is great, just make sure the FS is set up correctly or it causes very mysterious hangs.

  • thingsiplay@beehaw.org · 1 day ago

    Congratz on recovering the important file. And thanks for sharing your tips and experience. Good to know in case of an accident. In general I advise you to do regular backups of files that change (or at least one backup if a file doesn’t change), especially for important and small files like Markdown.

    I would also recommend not installing anything or using the system, and trying to recover from a live boot rescue disk or USB stick instead. This minimizes the risk of losing the file: even if TRIM didn’t run and delete the data, you could accidentally overwrite parts of it while using your system (for example while installing software or when using your browser). EDIT: When I think about it, I am actually not sure if this is true for SSDs. This is just a habit of mine from old magnetic drives. I think the used data will not be overwritten until TRIM runs, right?

    • chasteinsect@programming.dev (OP) · 1 day ago

      AFAIK the blocks get marked as “free space” and can potentially be overwritten by new stuff. TRIM guarantees those blocks will be wiped at the hardware level. I thought about booting from a live USB but eventually decided to try it normally.

      It was interesting to find out that TRIM runs once a week for me; I thought it ran almost continuously rather than periodically? Is this common, does anyone know?
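
      For reference, this is how I checked the schedule (list-timers shows the last and next run):

      systemctl list-timers fstrim.timer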

      • thingsiplay@beehaw.org · 1 day ago

        It was interesting to find out that TRIM runs once a week for me; I thought it ran almost continuously rather than periodically? Is this common, does anyone know?

        Oh, this is common as far as I know. You don’t want to run TRIM too often, because excessive delete/rewrite will wear down your drive faster. There is no perfect setup, and it might be different for specialized use cases, but a weekly TRIM is absolutely normal. On some occasions, after lots and lots of gigabytes written and deleted, I start the process manually myself with sudo fstrim -va (it figures out all SSDs that can be trimmed). This is something you should not need to do; just make sure you have plenty of space left (my personal limit is 25% free space).

        For me it’s weekly too:

        $ cat /etc/systemd/system/timers.target.wants/fstrim.timer
        [Unit]
        Description=Discard unused filesystem blocks once a week
        Documentation=man:fstrim
        ConditionVirtualization=!container
        ConditionPathExists=!/etc/initrd-release
        
        [Timer]
        OnCalendar=weekly
        AccuracySec=1h
        Persistent=true
        RandomizedDelaySec=100min
        
        [Install]
        WantedBy=timers.target
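
        If weekly is not what you want, a drop-in can override just the schedule (a sketch; sudo systemctl edit fstrim.timer opens one, and the empty OnCalendar= clears the old value before setting the new one):

        [Timer]
        OnCalendar=
        OnCalendar=monthly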