I have one and it works as a drive but all the seagate software doesn’t run in Linux.
It literally doesn’t matter.
There are differences, but for what you’re talking about they don’t matter.
The only thing that should sway you is whether you have a person who’s willing to help you in person, over the phone, or by text (basically, not yelling into the internet void for help). If so, use what they know.
The Apple support window is pretty predictable. You get about seven years from device release to no os updates.
It used to be that they didn’t talk about it and it was kind of a “he who has eyes, let him see” situation.
Of course, we’re talking hardware here so that’s sort of neither here nor there.
The enterprise dell experience is indeed very good all around. I’d even include hp in the pile if I had any experience with them. Their scopes used to be decent.
If it’s like the older vtech educational toys it uses some z80 processor, so you won’t be able to run Linux, but you can run a few different hobbyist microcomputer operating systems like zeal8bit and fuzix.
This is sometimes true, but big brands have the market penetration to make hardware support very easy through second hand and third party parts and to be enticing fruit for third party support (see opencore and the dosdude patchers for apple stuff).
Generally if you want long support windows you go for big boring brands’ simplest business class laptops. Or Apple.
Small companies can make a commitment to support, but they often lack the money, customer base, or manpower to follow through when the going gets tough.
I have found that popularity is a better predictor of spare part availability than any commitment from a company of any size. When they stop selling parts, there’s always the second hand market. When that dries up there’s always third party parts.
Firmware updates are one of the places that dell, Lenovo and Apple shine. Because of their customers’ expectations they tend to release new updates and drivers as functionality expectations or security conditions change.
I haven’t worked on this particular model, but gaming laptop build quality is generally very bad.
Msi build quality is also generally pretty bad.
With that said, most people don’t need build quality because they don’t actually take their computers anywhere.
I think the defaults on your tunnel apply themselves to all interfaces (or whatever the active one(s) are).
If you wanna troubleshoot this from the ground up you’d start with looking at your routing table.
If you run into problems using the process enumerated in the link you posted a couple of replies down, you can start to troubleshoot by looking at the routing table with ip route show (iptables -L shows firewall rules, which are a different layer than routes).
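A read-only sketch of poking at that, assuming a Linux box with iproute2 (the destination address is just an example):

```shell
# The main routing table; look for a default route pointing at the tunnel:
ip route show
# Policy-routing rules, which a lot of VPN/tunnel software installs:
ip rule show
# Ask the kernel which route an actual packet to some address would take:
ip route get 1.1.1.1
# Firewall rules are a separate layer from routes (needs root):
iptables -L -n
```

None of these change anything, so they’re safe to run while you poke around.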
Theoretically, no.
In reality, possibly/yes!
What do you have?
I’m glad you got it sorted with dd.
One thing people don’t often realize about dd is that it copies all the data from one drive to the other, including the uuids that were written when the old drive’s filesystems were created.
For that reason it excels at cloning one’s boot disk, because when the old drive gets removed from the pc, the clone drive’s os says at some point during the boot process “okay, let’s mount the filesystem with uuid ABC123 at /” and it works.
dd is also not the best tool for cloning disks that you intend to leave hooked up, because if you do, it’ll put the poor host os in a “I’m seein’ double here, four spidermen!” type situation.
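For anyone curious, a sketch of what a dd clone looks like (device names are hypothetical, and dd will happily overwrite the wrong disk, so check lsblk first). The same invocation works on plain files, which is a safe way to try it out:

```shell
# Clone one whole disk onto another (DESTROYS everything on the target):
#   dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
# Safe dry run of the same idea on regular files:
dd if=/dev/urandom of=src.img bs=1M count=4 status=none
dd if=src.img of=dst.img bs=1M status=none
cmp src.img dst.img && echo "byte-for-byte identical"
```

Because the copy is byte-for-byte, the uuids come along for the ride, which is the whole point of the comment above.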
Yeah, there’s a baseline of network stack understanding that you gotta have in order to use some of the tools, even the ones that are supposed to make it easier.
What don’t you get? Maybe I can point you in the right direction.
For raid: that’s how raid and any kind of real time parity schemes like zfs work. You make the arrangement of devices first and then put the filesystem on them.
For stuff like snapraid where parity isn’t distributed across all devices you can just add it to a jbod like you want.
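The “array first, filesystem second” ordering looks roughly like this (device names are hypothetical, this wipes them, and it needs root, so treat it as a sketch rather than something to paste in):

```shell
# Assemble two hypothetical disks into a RAID1 array first...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# ...and only then put a filesystem on the array device itself:
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
```

Snapraid inverts this: the disks keep their own independent filesystems and the parity lives on top.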
Welcome to Linux, everything is a permissions error. su <username>, touch, the getfacl/setfacl tools and namei are your first line of defense!
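A quick sketch of that first line of defense, on a throwaway file:

```shell
# Make a file with restrictive-ish permissions to inspect:
touch /tmp/permdemo && chmod 640 /tmp/permdemo
# namei walks every component of the path and prints each one's mode, so
# you can see exactly which directory or file is the one saying no:
namei -l /tmp/permdemo
# getfacl (from the acl package, may not be installed) shows any ACLs
# layered on top of the plain mode bits:
getfacl /tmp/permdemo 2>/dev/null || true
```

Nine times out of ten the culprit is some parent directory the user can’t traverse, and namei makes that jump right out.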
Almost all fans are a standard size and connector type. Sizes are in mm on a side (of the square housing), connectors are in number of pins. If you can’t look up the fan size (80, 140, etc) and connector type (3-pin, 4-pin), next time you take the unit apart measure the fan with a ruler and count the pins on its connector.
Then you know what quieter one to buy.
E: there’s a Reddit thread where ppl say the fan is 92mm and the motherboard supports 4-pin but only has two installed. That means that despite the device supporting 4-pin fan speed control they only installed enough pins to run the fan at full blast, so even the quietest fan ever would be only as quiet as if it were running wide open. You can pop in a 4-pin header if you’re handy with a soldering iron or you can use a usb to 4-pin dongle to attach a 4-pin fan.
If you want even more you can populate an additional header on the motherboard and use this 3d printed 120mm fan holder with it.
Good luck!
Not really.
Even with lvm/subvolumes the benefit is that you could ostensibly keep one home directory between two different distributions you switch between. The better solution there would be to have an rsync backup and sync it after booting, or shutting down, or periodically, because then you at least have a backup.
For distro hopping it’s not that great, because who’s to say you’re getting the “good” experience with some random new distro when its defaults, meant to be nicey-nice, get overridden by leftover stuff from ~/.gnome/gtk2/gtk3/desktop/widgets/clock/fonts/ttf/arial?
Just back your stuff up, rsync selectively from that backup and use the same filesystem for home as you do for /.
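A minimal sketch of that back-up-then-restore-selectively flow (the paths are made up, adjust to your setup):

```shell
# Mirror your home directory onto a backup drive; -a preserves permissions
# and timestamps, --delete makes the copy an exact mirror of the source:
rsync -a --delete ~/ /mnt/backup/home/
# After a reinstall or a hop, pull back only what you actually want:
rsync -a /mnt/backup/home/Documents/ ~/Documents/
```

The selective restore is the part that saves you from the stale-dotfile problem: you bring back your documents, not the last distro’s config soup.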
It’s the same thing as asking if you should put a lift in your home’s attached garage. If you have to ask whether it’s a good idea and not just cool, then the answer is no.
I always forget which subset of bins comes on the livecd. Does it have lsblk?
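For what it’s worth, lsblk ships in util-linux, so it’s on nearly every live image I’ve seen; if it’s somehow missing, the kernel’s own table is a crude fallback:

```shell
# List block devices, falling back to the kernel's raw view if lsblk
# isn't on the image:
lsblk || cat /proc/partitions
```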
Maybe industry specific stuff like photoshop or something.
Web browsers and normal stuff will keep on trucking as long as the os has a valid root certificate.
Oh this is entirely different than soldering the ram to the motherboard (which is really common on pc laptops now too, it’s harder to find one with sockets now than it’s ever been!).
The ram is inside the cpu. The processor isn’t “just” a cpu (honestly, nothing since the old pentiums has been “just” a cpu, they do so much nowadays!), it’s got the video card, bus controllers, ram and all kinds of other stuff built into that one IC!
It’s a SoC, System on a Chip, just like the processors that run phones and tablets and stuff.
If you go the cheap m1 route, get the most ram you can find in it. The m series have ram built into the chip, so you can’t upgrade it later.
Also, if the previous owner says it’s getting slow, then nuke the ssd with the dd command after you have confirmed ownership is transferred. You’ll have a longer process to reinstall the os from first principles, but it’ll fix slowness from the ssd’s old blocks having never been rewritten.
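If you go the dd route, the shape of it is below (the device name is hypothetical; triple-check with lsblk or diskutil list first, because this destroys everything on the target). The same pattern on a throwaway file is a safe way to see what it does:

```shell
# Zero every block on the drive (IRREVERSIBLE):
#   dd if=/dev/zero of=/dev/sdX bs=4M status=progress
# Safe demo on a regular file instead of a device:
dd if=/dev/urandom of=demo.img bs=1M count=2 status=none
dd if=/dev/zero of=demo.img bs=1M count=2 conv=notrunc status=none
tr -d '\0' < demo.img | wc -c   # prints 0: nothing but zero bytes left
```

Rewriting every block is what forces the drive to remap and refresh those long-stale cells.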
Maybe not as expensive as you think. The classic getting-into-the-mac-game choice is the 2012 mbp 13”, which can run a supported macos with opencore legacy patcher and costs <$200 with 16gb ram and an ssd.
The next best starter option is probably to make the big long leap to a first gen m1 air which can be had for ~$400 if you keep your eyes open.
Those are both expensive to me lol, but not the multiple thousands for a new computer.
The alternative route I took is maintaining a mac computer for when I need to “be normal”.
When was the last 4yr window on a computer? I think the 2011 15” mbp (the one with the ati gpu) got dropped fast af, but that’s the last real short one I remember. I haven’t dealt extensively with the touchbar models though.
The m1 air looks to be another 2012 mbp 13”. It would surprise me if they cut it off at 7 years, although that decision seems to have been driven by the enterprise install base, and who knows if that’s still what it once was.
I think the reason mobile-style os support windows are apple’s thing on computers is that they don’t have a separate business line. Iirc xps used to be dell’s enthusiast brand and now it’s part of the business line.
Thinking more about it, the core line of processors was a real stumble for intel because they were really good and lasted forever and manufacturers had to start pushing updates to fix realtek and qualcomm chip problems or get blamed for shit not working or being supported.
Also, this is kinda tangential because the op is asking about firmware support and hardware availability and firmware support is not as important on macs and they have incredible second hand hardware markets.