Today I accidentally deleted an important markdown file on my SSD and had to recover it.
All I did was this:
- run `sudo systemctl status fstrim.timer` to check how often TRIM runs on my system (apparently it runs weekly and the next scheduled run was in 3 days)
- run `sudo pacman -S testdisk`
- run `sudo photorec`
- choose the correct partition where the files were deleted
- choose the filesystem type (ext4)
- choose a destination folder to save recovered files to
- start recovery
- 10-15 minutes and it’s done
- open nvim in the parent folder and grep for content I remember adding to the file today
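That last grep step can be sketched like this. PhotoRec dumps everything into numbered `recup_dir.*` folders (that directory naming is PhotoRec's default; the filenames and contents below are made up for illustration), so a recursive grep for a phrase you remember is the fastest way to find your file:

```shell
# Simulate PhotoRec output: real runs create recup_dir.1, recup_dir.2, ...
# with opaque filenames (these files and contents are made up)
mkdir -p recup_dir.1
printf 'shopping\n' > recup_dir.1/f0000001.txt
printf 'notes\na phrase I remember adding today\n' > recup_dir.1/f0012345.txt

# List only the recovered files containing the remembered phrase
grep -rl "phrase I remember" recup_dir.1
```

`grep -rn` instead of `-rl` also prints the matching line, which helps when hundreds of files come back.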
That’s it - the whole process was so fast. No googling through 10 different sites with their shitty flashy UIs promising “free recovery,” wondering whether this is even trustworthy to install on your machine, dealing with installers that’ll sneak in annoying software if you click too fast, only to have them ask for payment later. No navigating complex GUIs either.
I was so thankful for this I actually donated to the maintainers of the software. Software done right.
Last time I deleted a plaintext file I just grepped for it.
```
cat /dev/nvme0n1 | strings | grep -n "text I remember"
```

Had to hone in on the location with `head -c` and `tail -c` after I found it, then simply did a

```
cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec
```

and trimmed the last filesystem garbage from the ends manually.

That’s the way.
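The carving trick is easy to rehearse on a regular file before pointing it at a device; a toy version with made-up content and small offsets (the real command just swaps in the device path and the offsets found via grep):

```shell
# Build a toy "image": 10 bytes of padding, the target text, more padding
printf '%010d' 0 > img
printf 'hello recovered text' >> img
printf '%010d' 0 >> img

# grep -bo prints the byte offset of each match
grep -bo "hello" img          # -> 10:hello

# Carve 20 bytes starting at offset 10 (tail -c +N is 1-based)
tail -c +11 img | head -c 20 > filerec
cat filerec                   # -> hello recovered text
```

The post's `tail -c -N` form counts from the end of the device instead, which is handy when the match sits near the end of a huge drive.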
Edit: no need for cat here.
How far have we fallen from God’s grace?
Edit: come to think of it, Useless Use of Cat is basically the blinking twelve problem for the Unix-inclined.
It makes the command easier to edit here. Put the various forms I used next to each other and it becomes apparent:
```
cat /dev/nvme0n1 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec
```

compare that to

```
strings /dev/nvme0n1 | grep -n "text I remember"
tail /dev/nvme0n1 -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
tail /dev/nvme0n1 -c -123456789012 | head -c 3000 > filerec
```

where I have to weave the long and visually distracting partition name between the active parts of the command.
The cat here is a result of experiencing what happens when not using it.

Worse, some commands take input file arguments in weird ways or only allow them after the options, so taking that into account, the generic style people use becomes
```
strings /dev/nvme0n1 | grep -n "text I remember"
tail -c -100000000000 /dev/nvme0n1 | head -c 50000000000 | strings | grep -n "text I remember"
tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec
```

This is what I’d expect to run across in the wild, and also, for example, what AI spits out when asked how to do this. You’ll take my stylistic cats over my dead body.
In that case I would prefer using variables for the filename:
```
file="/dev/nvme0n1"
text="text I remember"
strings "${file}" | grep -n "${text}"
tail -c -100000000000 "${file}" | head -c 50000000000 | strings | grep -n "${text}"
tail -c -123456789012 "${file}" | head -c 3000 > filerec
```

Even if it’s in the terminal, a temporary variable helps a lot. And for a series of commands I would probably end up writing a simple script or Bash function to share.
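Such a function could look something like this (the name `carve` and the demo file are made up; a sketch, not the commenter's actual script):

```shell
# Hypothetical helper: print LEN bytes, starting OFFSET bytes from the end of FILE
carve() {
  file=$1 offset=$2 len=$3
  tail -c "-${offset}" "$file" | head -c "$len"
}

# Demo on a regular file instead of a block device
printf 'junk-important-junk' > demo.bin
carve demo.bin 14 9           # last 14 bytes, keep the first 9 -> "important"
```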
I could do a script if I knew what I was gonna do ahead of time, or would write one later if I was gonna do it more often.
A variable in the shell is fine, but I still have to skip over it to change the first command; it still breaks up the flow a bit more than not having that `"$file"` in there at all.
Also if I interrupt the work (or in this case have to let it run for a while), or if I wanna share this with others for whatever reason, I don’t have to hunt for the variable definition, and don’t run any risk of fetching the wrong one if I changed it. Getting by without variables makes the command self-contained.

And it still maintains the flow of left to right: it’s simply easier to take the tiny well-known packet of `cat file` and from that point pipe the information ever rightwards, than to see a tail, then read the options, and only then see the far more important start of where the information comes from, before continuing on with the next processing step.
Any procedural language is always as left to right as possible.

If you really want to avoid the cat, I have yet another option for you:
```
< /dev/nvme0n1 strings | grep -n "text I remember"
< /dev/nvme0n1 tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
< /dev/nvme0n1 tail -c -123456789012 | head -c 3000 > filerec
```

This ofc you can again extend with `${infile}` and `${recfile}` if the context makes it appropriate.

I understand the reason why you do it this way. Hardcoding and being explicit has its advantages (but also disadvantages). I was just saying that I personally prefer using variables in a case like this. Especially when sharing, because the user only needs to edit a single place. Variables also have the advantage of being a bit more flexible and easier to read and change across many commands. But that’s from someone who loves writing aliases, scripts and little programs. It’s just a different mindset and neither way is wrong or right. But it’s probably not worth complicating stuff for one-off commands.
And for the cat thing, I am not that concerned about it and just took the last example in your post (because it seemed the most troublesome). I personally avoid cat when I think about it, but won’t go out of my way to hunt it down. I only do so if performance is in any way a critical issue (like in a loop).
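For what it's worth, the three spellings debated in this thread really are interchangeable for a simple filter; a quick check on a throwaway file:

```shell
printf 'one\ntwo\nthree\n' > sample.txt

# Count lines containing "o" - all three forms feed grep the same bytes
cat sample.txt | grep -c o    # -> 2
grep -c o sample.txt          # -> 2
< sample.txt grep -c o        # -> 2
```

The only practical difference is the extra `cat` process and pipe; for interactive one-offs that cost is invisible.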
I don’t like to use `<` in combination with pipes; I find it harder to read. One is left to right, the other right to left, and `<` is also just plain weird in its specifics. `cat` is a stylistic choice avoiding needless notational complexity.

```
strings /dev/nvme0n1 | grep -n "text I remember"
tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec
```

No need for `<` complexity either.

Oh right, I misunderstood.
I didn’t do that because I was planning to switch out `strings` in that line. First inserting the tail and head before it to hone in on the position, then removing it entirely so as not to delete “non-string” parts of my file like empty newlines.

```
cat /dev/nvme0n1 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec
```

This would be the loose chain of commands I went through, editing one into the next. It’s nice keeping the “constants” like the drive device that are hard to type static. That way, mentally, the command starts only after the first pipe for me.
Oh fascinating you can just do it manually.
Wow, that’s very cool :0
Just wondering, is there an inverse of this? Find files on disk flagged as OK to overwrite and have all bits set to 0?
Copy disk as image, `<hex-tool> image | grep 0000`? But what for?

Making sure deleted data is actually deleted, I guess.
That’s basically it, just for paranoid information security. I figure if it’s so easy to bring a deleted file back, it should also be easy to ensure deleted data is actually destroyed.
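A sketch of such a check: `cmp -n` compares only the first N bytes, so a region (here a stand-in file rather than a real device, to keep the example harmless) can be compared against `/dev/zero` to verify it reads back as zeros:

```shell
# Stand-in for a wiped region: 4 KiB of zero bytes
head -c 4096 /dev/zero > wiped.bin

# cmp -s is silent and only sets the exit status; -n limits the comparison length
if cmp -s -n 4096 wiped.bin /dev/zero; then
  echo "reads back as all zeros"
else
  echo "non-zero bytes present"
fi
```

One SSD caveat: this only shows what the controller returns for those blocks, not what the flash cells physically hold, so it's a sanity check rather than forensic proof of erasure.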
I don’t even want to know what can be important in a markdown file.
All of my notes are stored in markdown files (Obsidian), don’t use any other apps. Syncthing to sync between phone and PC.
lol
The notes…?
PhotoRec and TestDisk are available for Windows as well.
Is that so? I discovered them through the Arch Wiki so I had no idea!
One extra thing I forgot to mention was just how easy it was to find this recovery software thanks to the Arch Wiki.
Yeah I’ve done almost this exact same process on Windows too. I just took the system offline to ensure no writes, and so that permissions weren’t an issue.
Question: where do you learn commands? There’s no way you thought “let’s try sudo” out of nowhere.
Sincerely,
New Linux User

Edit: Great responses everyone, thank you!
It takes some time to understand the manpage format but it’s worth the time because they’re always available.
When I first started, I used the website explainshell to help me get an idea of how the most common commands worked.
This has the benefit of referencing the man pages directly so you can better learn how to interpret them.
```
apropos foo
compgen -back
```

In another comment, OP said they checked the Arch Wiki, specifically https://wiki.archlinux.org/title/File_recovery. The Arch Wiki is a great resource; most of the information is not Arch-specific and is useful for Linux in general.
Regarding “let’s try sudo”: you should get familiar with sudo because it’s one of the most important Linux commands. It runs a command with elevated privileges (it originally stood for “super-user do”). That means sudo isn’t actually the important part of those commands; it just means that the following commands (pacman and photorec) need elevated privileges. pacman deals with systemwide package management, and photorec needs access to the raw storage devices in order to recover files.
Acktually, `sudo` is there to run a command as another user; it does not need to be “superuser” (also known as root). “superuser” is just the default. To be honest, I never used the option to run as another user, because my computers are single user only. There are so many more options. One should look into `man sudo` to see what’s possible, it’s incredible!

However, there are alternatives to `sudo`, such as `doas` ported over from OpenBSD, and `run0` from the evil systemd. They find `sudo` to be complicated and bloated.

Also a quick tip: use `sudoedit` (same as `sudo -e` or `sudo --edit`) instead of `sudo vim` to edit your files with elevated privileges while using your personal configuration. If you want to do that, that’s up to you. I want to use my Vim configuration while editing files with sudo rights.

Yeh, I think it’s actually “switch user do”. Like “su” is “switch user”.

The default being root is handy.
Yes and no. The original design of `sudo` stands for “super user do”, and it could only run with super user privileges. The run-as-other-users feature was added later, and then it was renamed to “substitute user do”. I even looked it up to get that fact right, and I always forget it’s “substitute” and not “switch”, but I also think of sudo as “switch user do”. ^^

Massive Linux lore dumps going on, and I love it.
Thanks for correcting me.
In the old days, we would `ls /usr/bin/` (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a `--help` option to figure out what they did. At best we maybe had an O’Reilly book (the ones with animals on the covers) or friends to ask. You can still do that today instead of reading blog posts or websites; just look, be curious and be willing to break something by accident. :)

Part of the Linux journey is to be inquisitive and break some stuff so you can learn to fix it. Unlike, say, Windows, on a Unix-style system the filesystem is laid out in a very specific way (there’s a specification [1]), so one always knows where “things” are - docs go here, icons go there, programs go here, configs go there… - lending itself to just poking around and seeing what something does when you run it.

After a while your brain adjusts and starts to see all the beautiful patterns in the design of the typical Linux OS/distro, because it’s all laid out in a logical manner and documented how it’s supposed to work if you play the game correctly.
> In the old days, we would `ls /usr/bin/` (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a `--help` option to figure out what they did

I confirm, that’s exactly what I did in the 90s.
In the old days, your OS would come with a paper manual describing all the commands in great detail. Nowadays the OS is so complex that you can’t be expected (and don’t really need to) know all the commands that are there. But getting one of those old UNIX/early Linux manuals and reading through it would be a great start.
Two kinds of nerds: the read the manuals, follow the instructions folks… and then my people, the plug it in turn it on push the button first crew. :)
It would come with a fucking ~~walk~~ wall full of manuals. Only toy systems (like MS-DOS) came with a single manual.
A very nice terminal-based cheatsheet is named `tldr` (or tealdeer). It gives you a very short explanation of what a program does, and then lists common uses of that program and explains them.

I’ll be real with you. What you need to do is, whenever faced with a task that sounds like it needs the CLI, go search Stack Overflow for that task. It probably has something slightly relevant to what you need; you take the commands from that answer and read their manuals.
At least do not ask the AI for `sudo` stuff. While Stack Overflow nowadays also includes AI answers, at least those are monitored online and checked by humans.
There are lots of cheatsheets out there, but the best way to learn commands is practice. Different people will use different commands, so you may not need to spend time learning ffmpeg syntax, whereas others find it invaluable. Google is your friend while learning: if you have a Linux question, chances are someone else has had the same question and posted about it online. As far as basics go, spend some time learning about grep and find; they are probably the two most valuable basic commands imo, outside of the common ls/mkdir/etc.
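A couple of toy invocations of those two, for flavor (file and directory names made up):

```shell
mkdir -p demo/sub
printf 'alpha\nbeta\n' > demo/a.txt
printf 'gamma\n' > demo/sub/b.log

# find: locate files by name pattern
find demo -name '*.txt'       # -> demo/a.txt

# grep: search file contents recursively, printing file:line:match
grep -rn 'beta' demo          # -> demo/a.txt:2:beta
```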
As for sudo, it’s just “superuser do”, so it’s essentially the same as hitting “run as admin” in Windows. Lots of times, if you try to run a command without sudo that needs it, you’ll get a permission error which reminds you to run as superuser. It eventually becomes second nature to say “ah, this command needs direct access to a hardware device or system files”, which means it’ll need to be run with sudo.
Becoming familiar with the essentials - `grep`, `awk`, `sed` - is a good start. Manual pages can of course be accessed using `man` followed by the command/program you want to know more about. Understanding operators like `|`, `>`, etc. can help as well. And reading the entirety of `man bash` is a good way to dive in further.

I am also a novice, so take that into account. But it seems to me it’s something you learn over time: what commands do what, and when to use them. I think it’s kind of like knowing what folders or settings to navigate to in other operating systems. Over time you get a feel for it.
Also, most troubleshooting guides, or tools like PhotoRec, have the steps kind of built in, with explanations of what the commands do.
If the guide you’re reading has the steps try to break them down and figure out what the command is actually doing rather than blindly copy pasting.
Most of what I’ve learned is the handful of commands that have stuck when I’ve looked up how to do stuff. If you ever install a minimal distro and follow a guide, that’s a great way to learn. Or if you look up how to fix something and find commands on the internet, you can look up what those solutions do. But mostly I’m just replying to wish you well on your journey.
Best of luck with linux, hope you have a lovely day ☺️
AI is great for learning Linux imo, e.g. ask “why did my command say Permission Denied?”. If you object to ChatGPT there are local AI engines too
Even easier: use btrfs or ZFS and tools that let you timeshift.
Last time I used btrfs (a few months ago on openSUSE) it eventually fucked up the whole partition, making it unrecoverable. No, thanks.
I’ve used it for 5 or so years
It isn’t perfect but in general it is solid
Snapper is great, just make sure the FS is set up correctly or it causes very mysterious hangs.
Congratz on recovering the important file. And thanks for sharing your tips and experience - good to know in case of an accident. In general I advise you to do regular backups of changing files (or at least one backup if a file doesn’t change), especially for important and small files like markdown notes.
I would also recommend not installing anything or using the system, and trying to recover from a live boot rescue disk or USB stick instead. This will minimize the risk of losing the file. Even if TRIM didn’t run and delete the data, you could accidentally overwrite parts of it while using your system (for example while installing software or when using your browser). EDIT: When I think about it, I am actually not sure if this is true for SSDs. This is just a habit of mine from old magnetic drives. I think the used data will not be overwritten until TRIM runs, right?
AFAIK the blocks get marked as “free space” and can be potentially overwritten by new stuff. TRIM guarantees those blocks will be wiped at hardware level. I thought about booting from a live USB but eventually decided to try it out normally.
It was interesting to find out that TRIM runs once a week for me; I thought it ran almost continuously, not periodically? Does anyone know if this is common?
> It was interesting to find out that TRIM runs once a week for me, I thought it runs almost continuously and not periodically? Is this common perhaps someone knows?
Oh, this is common as far as I know. You don’t want to run TRIM too often, because excessive delete/rewrite will wear down your drive faster. There is no perfect setup, and it might be different for specialized use cases. A weekly TRIM is absolutely normal. On some occasions, after lots and lots of gigabytes written and deleted, I start the process manually myself too with `sudo fstrim -va` (it figures out all SSDs that can be trimmed). This is something you should not need to do; just make sure you have plenty of space left (the personal limit in my mind is 25% free space).

For me it’s weekly too:

```
$ cat /etc/systemd/system/timers.target.wants/fstrim.timer
[Unit]
Description=Discard unused filesystem blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container
ConditionPathExists=!/etc/initrd-release

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
RandomizedDelaySec=100min

[Install]
WantedBy=timers.target
```

Ah I see, thanks for the info. I was not even aware you could run it manually, but I suppose it makes sense.













