

“cloud” still mostly means services like AWS and Google Cloud. People don’t refer to Hetzner dedicated servers as “cloud” for example.
I’m sad that Opera Unite failed. It was the closest thing to self-hosting for regular non-technical people.
They should say at least one thing that’s unique to the cloud, though.
Dual-booting works fine. You can even have more than two OSes - for a while I was running Windows 10, Fedora, and Debian. Ended up sticking with Fedora.
> base RAM usage down super low (50MB to 100MB range)
A base Debian system (minimal netinstall with nothing selected in the tasksel step) doesn’t use much more than this, or at least it didn’t in the last stable release. For https://dnstools.ws/ I have a few VPSes with 256MB RAM that run Debian and the DNSTools worker. They run fine.
The article is very confusingly written. Maybe AI? It’s conflating “cloud” hosting (AWS, etc) with renting hosting infrastructure (which includes the cloud, but also things we don’t refer to as “cloud”, like dedicated servers, VPS services, and shared hosting).
This paragraph makes it sound like Amazon were the first company to allow renting their servers:
> As companies such as Amazon matured in their own ability to offer what’s known as “software as a service” over the web, they started to offer others the ability to rent their virtual servers for a cost as well.
but Linux-based virtual servers have been a thing for 20+ years or so, first with Linux-VServer then with OpenVZ. Shared servers in general date back to the mainframes of the 60s and 70s.
Similarly, this paragraph makes it sound like the only two choices are either to use “the cloud” or to run your own data center:
> Cloud computing enables a pay-as-you-go model similar to a utility bill, rather than the huge upfront investment required to purchase, operate and manage your own data centre.
Pika is a GUI for Borg.
Rsync is doable, but it’s not great since you essentially only have one backup set. If a file gets corrupted and you don’t notice before the next backup is done, you won’t be able to restore it. Borg’s deduping is good enough to keep lots of history - I do daily backups and keep every day for the past two weeks, every week for the past three months, and every month indefinitely (until I run out of space and need to prune it). Borgmatic handles pruning the backups that are out of retention.
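For reference, that retention policy maps almost directly onto Borg’s prune flags. A rough sketch (the repo URL is a placeholder, and Borg has no true “keep forever” option, so a large --keep-monthly stands in for “indefinitely”):

```
# Keep 14 dailies, ~3 months of weeklies, and effectively all monthlies.
borg prune \
    --keep-daily 14 \
    --keep-weekly 12 \
    --keep-monthly 9999 \
    ssh://backup-host/./borg-repo
```

Borgmatic exposes the same knobs as keep_daily, keep_weekly, and keep_monthly in its config file, so you declare the policy once and it prunes after each run.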
I’m using Fedora KDE and haven’t set up backups on my desktop PC yet, but on Linux servers (both at home and “in the cloud”) I usually use Borgbackup with Borgmatic. All my systems have two backup destinations: My home server and a storage VPS, both via SSH.
Looks like Pika Backup is a GUI for Borgbackup, so it should be a good choice. Vorta is also popular. GNOME apps tend to focus on simple, easy-to-use GUIs with minimal customization, so it’s possible Vorta is more configurable. I haven’t tried either.
Don’t forget the 3-2-1 policy: you should have at least three copies of your data, in at least two different mediums (hard drives, “cloud”, Blu-rays, tape, etc), one of which is off-site (cloud, a NAS at a friend’s or family member’s house, etc). If you’re looking for cloud storage, Hetzner storage boxes are great value. Some VPS providers have good sales (less than $3/TB/month) during Black Friday.
Definitely going to fill this out once I get some free time. What will the data be used for?
This is exactly what a Yubikey is for. They’re phishing-resistant too, as opposed to TOTP codes.
I’d say 9/10 aren’t doing proper backups given most people don’t actually do disaster recovery (DR) runs and verify whether they can fully recover from their backups. If you don’t test your backups, you don’t have backups!
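With Borg, for example, a restore drill can be as simple as this (a sketch; the repo URL and archive name are placeholders):

```
# Verify repository integrity, including decrypting and checking the actual data:
borg check --verify-data ssh://backup-host/./borg-repo

# Trial-restore one archive into an empty scratch directory (borg extracts into the cwd):
mkdir /tmp/restore-test && cd /tmp/restore-test
borg extract ssh://backup-host/./borg-repo::myserver-2024-01-01
```

Then actually open a few of the restored files; “the files exist” isn’t the same as “the files are usable”.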
Which containers do automatic DB backups? Normally the database is a separate container, unless the app is using SQLite. Is there a MySQL or PostgreSQL container that does automated backups?
Where’s the MySQL option? Some of my servers are running MySQL instead of MariaDB because it allowed binding to multiple IP addresses (although I think Maria has implemented this now), and some query plan optimizations were implemented in MySQL but not MariaDB.
You still need to know what database system is being used in order to make backups of the database. You can’t just snapshot or backup the data directory while a database is running, because you might end up with an inconsistent state that won’t restore properly. You need to either stop the DB before doing the backup, or use the relevant DB-specific tools to perform a backup.
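For example (a sketch, assuming a Postgres container named db and MySQL running on the host; all the names are placeholders):

```
# PostgreSQL: pg_dump takes a consistent logical snapshot of a running database.
docker exec db pg_dump -U postgres mydb > mydb.sql

# MySQL/MariaDB: --single-transaction gives a consistent dump of InnoDB tables
# without locking the whole server.
mysqldump --single-transaction --all-databases > all-databases.sql
```

The resulting dump files are plain files, so they back up cleanly with Borg or anything else.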
I’m a C# developer and run .NET apps on Linux all the time. I usually work on CLI and server apps, but recently released my first Linux desktop app written in C#: https://flathub.org/apps/com.daniel15.wcc
Even before .NET Core, I was using Mono to run C# apps on Linux. There used to be quite a few GNOME apps written in C#.
> There’s .NET and then there’s .NET Core which is a mere subset of .NET.
Nope. The old .NET Framework has been deprecated for a long time. The latest version, 4.8.1, is not very different to 4.6, which was released 10 years ago.
The modern versions are just called .NET; they’re the continuation of .NET Core, with much more of the framework implemented cross-platform. Something like 95% of the old Windows-only .NET Framework API surface has been reimplemented in a cross-platform way.
> The list of .NET stuff that will actually run on .NET Core (alone) is a barren wasteland.
All modern .NET code is built on the cross-platform framework. Only legacy apps used the old Windows-only .NET Framework.
If you get the free community version of Visual Studio and create a new C# project, it’ll be using the latest cross-platform framework. You can even cross-compile for Linux on a Windows system.
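As a sketch (HelloApp is just a placeholder name):

```
dotnet new console -n HelloApp
cd HelloApp
# Build on Windows (or anywhere), target 64-bit Linux:
dotnet publish -c Release -r linux-x64 --self-contained true
```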
Thanks, this is a good insight.
That’s a very old way of thinking of things. C# has been cross platform for a long time.
> Almost everything ever written in C# uses Windows-specific APIs
Not really. Most C# apps use .NET (since the framework and standard library are quite feature-rich) rather than direct Win32 calls, and .NET is cross-platform. A lot of web services are written in C# and deployed to Linux servers.
> basically no one installs the C# runtime on Linux anymore
You can compile a C# app to a single executable that doesn’t require the framework to be installed.
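Roughly like this with the dotnet CLI (a sketch):

```
# One self-contained executable with the runtime bundled in;
# nothing needs to be installed on the target machine.
dotnet publish -c Release -r linux-x64 --self-contained true \
    -p:PublishSingleFile=true -p:PublishTrimmed=true
```

PublishTrimmed shrinks the output considerably but can break reflection-heavy code, so test the published build.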
Are you running Jellyfin, the *arr suite, slskd, or Technitium DNS? They’re all written in C#.
Yeah it’s definitely not possible to reach 50MB with a Node.js Docker image, but <150MB should be doable with a distroless base image + compiling the app into one JS file (for example, using Parcel or esbuild).
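Something like this, as a rough sketch (the paths and image tag are assumptions):

```
# Bundle the app and all its node_modules dependencies into one file:
npx esbuild src/index.js --bundle --platform=node --outfile=dist/app.js

# Distroless Node images run node as the entrypoint, so CMD is just the script:
cat > Dockerfile <<'EOF'
FROM gcr.io/distroless/nodejs20-debian12
COPY dist/app.js /app/app.js
CMD ["/app/app.js"]
EOF
docker build -t my-node-app .
```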
It’s possible to reach a ~50-60MB Docker image with a C# app. Rust and Go definitely produce more compact binaries though.
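For the C# side, a trimmed self-contained publish on top of a chiseled runtime-deps base image gets into that ballpark (a sketch; MyApp is a placeholder):

```
dotnet publish -c Release -r linux-x64 --self-contained true \
    -p:PublishTrimmed=true -o out

cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/dotnet/runtime-deps:8.0-jammy-chiseled
COPY out/ /app/
ENTRYPOINT ["/app/MyApp"]
EOF
docker build -t my-cs-app .
```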
It was built into the web browser, providing a website, file sharing, a music player, a photo-sharing tool, chat, a whiteboard, a guestbook, and some other features.
All you needed to do was open the browser and forward a port, or let UPnP do it (since everyone still had UPnP enabled back then), and you’d get a .operaunite.com subdomain that anyone could access, which would hit the web server built into the browser.

This was back in 2008ish, when Opera was still good (before it was converted to be Chromium-powered). A lot of people still used independent blogs back then, rather than everything being on social media, so maybe it was ahead of its time a bit.