Hi, I am planning to purchase a 2.5-inch HDD. If I connect it to my computer using a SATA to USB adapter instead of directly to the computer’s SATA, can it somehow affect the result of this scan?
I apologize for my ignorance but I couldn’t find an answer to this question anywhere
Well, as I’m coming in here, I see two “no’s,” a “maybe” and I came to say “absolutely fucking yes” because I’ve lost hours to a couple cheap shitty usb-sata cables that did all kinds of weird stupid shit that immediately disappeared after I replaced the cables. So, “maybe” but “absolutely fucking yes.”
Did you get bad sectors? Weird things can absolutely happen but having sectors marked as bad is on the exceptional side of weird.
That won’t cause bad sectors though, that just means the data you were writing was bad.
I own a repair shop and use USB to SATA adapters all the time. Sector scans, imaging/cloning, and booting live environments.
It has less to do with the medium and more to do with the quality of your chosen adapter.
I have one of the adapters you pictured; ordered it to test it out because it was comparatively low cost. Did not order more.
I have about a dozen of the Sabrent adapters and they see daily use.
Mark my words. Don’t ever use SATA-to-USB for anything other than (temporary) access to non-critical preexisting data. I swear to god, if I had a dollar for every time USB has screwed me over while trying to simplify working with customers’ (and my own) drives, I’d be rich. Whenever it comes to anything more advanced than plain data-level access, USB just doesn’t seem to offer the necessary capabilities. Whether this is rooted in software, hardware or both, I don’t know.
All I know is that you cannot realistically use USB to, for example, carbon copy one drive to another. It may end up working, it may throw errors letting you know that it failed, or it may only seem to have worked in the end. With all the individual devices I’ve gone through, it’s hard for me to imagine that this is somehow down to the parts and that somewhere out there is something better that actually makes this work. It really does feel like whoever designed the controller chips used industry-wide for USB-to-SATA conversion just didn’t implement everything in a way that makes it wholly transparent to the operating system.
TL;DR If you want to use SATA as intended you need SATA all the way to the motherboard.
tbh I often ask myself why eSATA fell by the wayside. USB just isn’t up to these tasks in my experience.
I’ve had a USB-to-SATA adapter running to a 2.5" SSD that acts as the main storage and boot drive for my Pi 4B, and it’s been in use for 4 years with zero issues so far.
I’ve now got 3 HDDs attached to my Proxmox machine for NAS storage via USB at the moment. It’s been running since Feb. It’s had its issues, but those were more my fault for not understanding the flake factor (given my experience with the SSD). I had one drive forget what I named it, so my whole Proxmox setup died.
But that was remedied by passing the USB straight through to OMV.
Just saying, I’ve not really had the same experience as you with them, they seem fine if you have an idea what may fuck up.
ASMedia is the only controller IC manufacturer that can be trusted for these IME. They also have the best Linux support compared to the other options and support pass-through commands. These are commonly found in USB DAS enclosures, and in a very small fraction of single-disk SATA enclosures.
Innostor controllers max out at SATA 2 and lock up when you issue pass-through commands (e.g. to read SMART data). They also return an incorrect serial number. These are commonly found in ultra-cheap desktop hard drive docks, and in 40-pin IDE/44-pin IDE/SATA to USB converters.
JMicron controllers (not affiliated with the reputable Micron) should be avoided unless you know what you are doing… UASP is flaky, and hacky kernel boot-time parameters are required to get these working on Raspberry Pi boards. Unfortunately these are the most popular ones on the market due to their very low cost.
USB can actually be ideal in some data recovery scenarios. HDDSuperClone / OpenSuperClone support a relay mode that turns a disk off and back on to regain access after they drop out, and that is reliant on a USB connection.
Will definitely check to see if I can work OpenSuperClone into my workflows. I haven’t had failing drives drop out like that before, so I can’t speak to that scenario. But if a drive does drop out, why would that software have a harder time recovering it over SATA?
You should, it’s quite powerful and can work in tandem with both DMDE and UFS Explorer!
Power cycling the drive reboots and reinitializes it. I’ve mostly seen it with SSDs - you get a few dozen MB worth of reads before it drops out, and unplugging and reconnecting a SATA power connector that many times would be really tedious, so you automate it with a relay.
eSATA fell out of fashion when USB got faster AND eSATA wasn’t hot plug and play.
I get that. SATA can be hot plug these days. I’m not saying it should rival the number of USB ports we get on motherboards, but I remember there were also these USB eSATA hybrid ports. Which would probably only work with USB 2.0 but still, would be nice to have.
Probably not.
However, not all USB-to-SATA adapters support SMART, so even if a bad sector gets remapped by the HDD on the fly (and thus does not show up in the software scan), you may not find out about it easily.
smartmontools has good support for interfacing with SMART via USB bridges that do not expose it natively.
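For example, something along these lines (just a sketch - it assumes smartmontools is installed and the bridge honors SAT pass-through, and /dev/sdb is a placeholder for whatever the adapter shows up as):

```python
# Sketch: read SMART data through a USB-SATA bridge with smartctl.
# Assumes smartmontools is installed and the bridge supports SAT pass-through;
# /dev/sdb is a placeholder for whatever device node the adapter enumerates as.
import subprocess

def read_smart(device="/dev/sdb"):
    # "-d sat" wraps the ATA commands in SCSI/ATA Translation, which is what
    # most working USB bridges expect; "-a" prints all SMART information.
    result = subprocess.run(
        ["smartctl", "-d", "sat", "-a", device],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # smartctl's exit status is a bitmask; the low bits being set usually
    # means the command failed outright (often a bridge refusing pass-through).
    return result.returncode

if __name__ == "__main__":
    read_smart()
```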
I have 2 of these. One gives perfect results, one seems to drop off after lots of data transfer. They look alnoat identical, but one is name brand and ine ia probaboy a cheap chinese copy. Wiring is probably sub-par
The spelling breakdown when talking about the failing adapter is a wonderful accidental joke.
Lol. It is. Before my first coffee.
Maybe? Bad cables are a thing, so it’s something to be aware of. USB latency, in rare cases, can cause problems but not so much in this application.
I haven’t looked into the exact ways that bad sectors are detected, but it probably hasn’t changed too much over the years. Needless to say, info here is just approximate.
However, marking a sector as bad generally happens at the firmware/controller level. I am guessing that a write is quickly followed by a verification, and if the controller sees an error, it will just remap that particular sector. If HDDs use any kind of parity checks per sector, a write test may not be needed.
Tools like CHKDSK likely step through each sector manually and perform read tests, or just tell the controller to perform whatever test it does on each sector (rough sketch of a manual read pass below).
OS level interference or bad cables are unlikely to cause the controller to mark a sector as bad, is my point. Now, if bad data gets written to disk because of a bad cable, the controller shouldn’t care. It just sees data and writes data. (That would be rare as well, but possible.)
What you will see is latency. USB can be considerably slower than SATA; buffers and wait states introduce delays because of the speed difference between the two links. This latency isn’t going to cause physical problems though.
My overall point is that there are several independent software and firmware layers that need to be completely broken for a SATA drive to erroneously mark a sector as bad due to a slow conversion cable. Sure, it could happen and that is why we have software that can attempt to repair bad sectors.
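If it helps to picture it, here’s a rough sketch of that manual read pass - not how CHKDSK actually works internally, just stepping through a device chunk by chunk and noting anything unreadable (Linux only, needs root, /dev/sdb is a placeholder, and it will happily hammer a dying drive):

```python
# Rough illustration of a surface read scan: read a block device in fixed-size
# chunks and record the offset of any chunk the kernel fails to read.
import os

CHUNK = 1024 * 1024  # 1 MiB per read

def read_scan(device="/dev/sdb"):
    bad_offsets = []
    fd = os.open(device, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)  # block device size in bytes
        offset = 0
        while offset < size:
            os.lseek(fd, offset, os.SEEK_SET)
            try:
                os.read(fd, min(CHUNK, size - offset))
            except OSError:
                # An I/O error here means the drive could not return the data:
                # a candidate bad region worth retesting over native SATA.
                bad_offsets.append(offset)
            offset += CHUNK
    finally:
        os.close(fd)
    return bad_offsets

if __name__ == "__main__":
    print(read_scan())
```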
If the USB port doesn’t consistently provide enough power, that can have an influence. If you are on a desktop-type computer, use the ports on the back that are connected directly to the mainboard.
And if you’re not using a desktop, get an adapter that has its own power supply and plugs into mains power.
If you’re buying used and want to check the health of the drive, you should run a SMART test and check the current SMART data. Most USB controllers do not support that.
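If the bridge does pass SMART through, the check is roughly this (sketch only - it assumes smartmontools, SAT pass-through, and /dev/sdb as a placeholder device):

```python
# Sketch of a used-drive health check: start a short SMART self-test, wait,
# then dump the self-test log and the attribute table.
# Assumes smartmontools is installed and the USB bridge passes SMART through
# (many do not); /dev/sdb is a placeholder.
import subprocess
import time

DEV = "/dev/sdb"

# Kick off the short self-test (usually takes a couple of minutes).
subprocess.run(["smartctl", "-d", "sat", "-t", "short", DEV])
time.sleep(180)  # crude wait; smartctl prints the actual estimated duration

# Self-test results, then the raw attributes (reallocated/pending sectors etc.).
subprocess.run(["smartctl", "-d", "sat", "-l", "selftest", DEV])
subprocess.run(["smartctl", "-d", "sat", "-A", DEV])
```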
Should be fine. Think of all the USB storage devices from the likes of Seagate and Western Digital. They all operate with a very similar bridge. The firmware on the drive marks bad sectors, not the interface that connects it.
Not under normal circumstances. I had some issues recovering damaged hard disks that had lots of errors and retries, and sometimes either the USB adapter or the mainboard SATA would crap out, or one would handle it better than the other. But for normal copying of HDDs, both should copy the exact same data.
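One way to sanity-check that a copy really is identical is to hash both sides and compare - just a sketch, the paths are placeholders and reading raw devices needs root:

```python
# Hash a source device and its image/clone in chunks and compare the digests.
import hashlib

def chunked_sha256(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            h.update(data)
    return h.hexdigest()

# Placeholders: the source drive and the image/clone you wrote from it.
# For the digests to match, both must cover exactly the same number of bytes.
src = chunked_sha256("/dev/sdb")
dst = chunked_sha256("/path/to/clone.img")
print("match" if src == dst else "MISMATCH - the copy is not identical")
```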
I second this. When a drive shits the bed, a SATA controller handles it better; sometimes with a USB adapter you mess up the whole bus and need to reboot the machine (speaking from experience using them on Windows).
What you’re describing is data TRANSFER. Bad sector detection and management is done by the drive controller firmware.
If I connect it to my computer using a SATA to USB adapter instead of directly to the computer’s SATA, can it somehow affect the result of this scan?
It depends on how much power the disk requires and how much power the USB port can deliver. Also note that USB-A is the worst connector out there when it comes to mechanical reliability - it only takes a finger brushing the plug to screw up whatever data transfer is going on.
For external disks (both 2.5" and 3.5") I have a bunch of these powered USB disk enclosures. They have a good chip, are made of metal, and have a USB-B 3 port. You can connect them to any USB-A device and you’ll know that only one side might fail… if you have USB-C, a cable like this tends to be more reliable.
Another good option, if you have USB-C and want something more portable, is a USB-C disk enclosure, as those can deliver more power and tend to be more reliable.
PS: avoid whatever garbage Orico is selling, Inateck is much better.
Any poor-quality connection can affect a sector scan and drive performance. It doesn’t matter whether it’s a corroded USB port or a bent internal SATA connector; at the end of the day, if you’re getting disk errors, it’s best to measure using two methodologies/data pathways.
Should be fine, just don’t cheap out on the external drive / cable you will be using. And when you’re using something like smartctl you’ll know right away if SMART info is passing through your USB for proper testing.
I’ve done a lot of these types of scans via USB drives. Honestly, the more annoying part is that some USB enclosures do wonky things like going into sleep mode within 1-5 minutes, which will disrupt any scanning you had going. So with USB drive scanning I usually implement something to keep the drive alive and awake, e.g. a simple infinite loop script that writes a file every x seconds, or if you’re on Windows you can also use KeepAliveHD.
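The keep-alive loop is nothing fancy - something like this (the mount point and interval are placeholders, adjust to taste):

```python
# Touch a small file on the mounted drive every couple of minutes so the USB
# enclosure never idles into sleep mid-scan.
import time

MOUNT_POINT = "/mnt/usbdrive"   # wherever the drive is mounted (placeholder)
INTERVAL = 120                  # seconds between writes

while True:
    with open(f"{MOUNT_POINT}/.keepalive", "w") as f:
        f.write(str(time.time()))  # any tiny write keeps the drive busy
    time.sleep(INTERVAL)
```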
First off, if you plan to scan the storage for bad “sectors”, that’s gonna take eons if the disk is of any considerable size. What’s more likely is that you’ll run the SMART self-test, and that will work over any medium.
The cables absolutely can and do cause corruption, whether it’s plain SATA-to-SATA cables or USB-to-SATA adapters with their own controller on board; however, if you don’t have reason to suspect this particular cable/adapter is faulty, it’s not a worry vector per se.