• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: December 27th, 2023


  • And how you describe the email addresses for individual purpose is excellent. Spam? Want me to unsubscribe? How about I delete the email address, and you waste your time emailing? I love it!

    it actually has yet another upside. when i receive spam or phishing on such an alias, i go to the portal or shop, change my email address to a newly created alias, and then i also write an email to the service describing that i got a spam or phishing mail on an email alias only they and i know about. i also cite how many other spam mails i got on other aliases (usually zero) and suggest that the data was lost on their side rather than mine. in the past, companies usually just "assumed" that their customers used the email elsewhere, so a leak on their side was a hypothesis they could easily deny; but only two parties knowing that address, while only that particular address was leaked, seems somewhat more convincing to them. of course it could be anything: their webserver, their cloud provider, some third party their cloud provider uses, some fourth party their cloud provider's provider uses, their email provider, newsletter provider, proxies like cloudflare and so on. but as i host my emails myself, there is no other party involved on my side (besides the VM provider), or at least not without leaking "all" of my other aliases at the very same time. that has happened a few times over the years now, and it really feels great being on the "capable to prevent and react" side of it =) that is, you really know who failed, you can offer the little help of letting them know it too, and you can prevent their one-time leak from annoying you more than once.

    also interesting: so far it has always been the "good"-looking companies that failed this way, not the slightly dodgy-looking webpages where i only subscribed to the newsletter because i could turn off the spam anyway.

    i also had the idea of parsing logs for all deleted aliases, to get statistics on how long spammers keep trying after they get 'unknown user' the first time. but i haven't implemented that yet.
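    a minimal sketch of that idea, assuming postfix logging to /var/log/mail.log and reject lines containing the phrase "User unknown" (both are assumptions; the exact wording depends on the setup):

    ```
    # count retries per deleted alias from the reject lines in the mail log
    grep -i 'user unknown' /var/log/mail.log \
      | grep -oE 'to=<[^>]+>' \
      | sort | uniq -c | sort -rn
    ```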


  • for the 15gb limit it would be sufficient to just get a VM with enough space (in a datacenter or at home, maybe a raspberry pi) and run an imap server, an mta, and something to fetch the mails from google so that they are archived and don't fill up the limited space. i think if i were you, i would begin with just that, because that is the annoying part, and it is always possible to change the setup as you wish once it is under your control.

    i personally would not want to use mailcow, but dovecot, postfix and fetchmail directly. fetchmail gets the mails from google and places them into dovecot's imap storage, while postfix would be used to send mails through google to the outside world using your google credentials. then you'd have google as the external service to begin with and your server to actually host the emails; configure the phones to send emails through it (or directly through google), but to get the emails from it and save sent mails there. later you could add another non-google service so that fetchmail gets those emails too, and just extend the setup.
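    a minimal sketch of the fetchmail side, assuming a gmail account with pop3 enabled; the addresses, the app password and the local user "me" are placeholders:

    ```
    # ~/.fetchmailrc — hypothetical: pull mail from google into the local delivery
    set daemon 300                      # poll every 5 minutes
    poll pop.gmail.com with proto POP3 port 995
        user "you@gmail.com" there with password "app-password-here"
        is "me" here                    # local account that owns the dovecot mailbox
        ssl
        no keep                         # delete fetched mail from google, freeing the 15gb
    ```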

    if you have that, you can send/receive emails when you are at home.

    but before downloading (moving) the first mails from the google storage to it, i would make sure that an (incremental) backup is already running well and automatically, just in case of disk failures.

    But it was insecure in that you can easily go find my IP address and my real address. I don’t want that, don’t really mind if someone knows it, but I don’t want to be spearphished.

    i have pretty good experience with giving every contact a separate email alias under my domain to communicate with me. my email aliases usually look like <contactshortname>-<randomnumber>@mydomain.tld

    that is for a newsletter from somecoolpage.com it would look like coolpage-61514@mydomain.tld

    it is nearly impossible to guess that random number, so i get almost no emails from anyone other than my real contacts, because only they know a valid address. each alias is only used for this one thing: a contact, a shop, even a friend (or group of friends). mails all go into the same inbox, but when i receive spam or phishing on one, i 1. know who leaked my data and 2. can change the alias to a new number, delete the old alias, and thus stop any future spam to that address. this way i have no extra spam filters but also next to no spam.
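    a minimal sketch of how such aliases could live in a postfix virtual alias map (domain and addresses are made up):

    ```
    # /etc/postfix/virtual — one alias per contact
    # (enabled in main.cf via: virtual_alias_maps = hash:/etc/postfix/virtual)
    coolpage-61514@mydomain.tld    me@mydomain.tld
    bank-48153@mydomain.tld        me@mydomain.tld
    # alias leaked? delete its line, run "postmap /etc/postfix/virtual", reload postfix
    ```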

    however, your ip address can be found in the Received headers of any email you send. is that what you want to prevent, or just the public ip when running an internet-facing mailserver with mx records pointing to it? with ip changes being a thing, i guess you tried to run the mailserver behind your home internet connection; non-static ips are bad for email. you could get an ipv6 tunnel from hurricane electric (still free?) and then have static ipv6 addresses, but google afair does not allow you to send them email via ipv6 (and thus i blocked them so they cannot send me email via ipv6 either), so communicating with google victims might be a problem due to google lagging behind current tech. so your idea to use a third-party service fits perfectly if you don't want to run your own public mailserver. do you have a vpn to your home network to use the homeserver remotely?


  • thanks for your opinion.

    i have had my own mailservers running for roughly two decades now, so copy-paste is not what i am looking for.

    i ordered that email book and mastering dnssec from him now, as i am a bit curious about some topics in the email book and want to dive into dnssec, because i also host dns for my domains and improvement is always good ;) last time i started with dnssec i got distracted, and that was it.





  • smb@lemmy.ml to Selfhosted@lemmy.world · Email with own domain service but local? · edited · 30 days ago

    i guess step by step was asked for on purpose, but i also don’t know on what level ;-)

    @werefreeatlast@lemmy.world :

    i’ld suggest as step by step to start small and increase to what you want:

    1. register a new account for testing on a freemail service like gmail.com, gmx.net, hotmail.com or another. as it's just the first step, it does not matter whether it's google or not; what matters is that you can send and receive emails through it via common protocols like smtp and pop3, and that it is 'not' the account you handle important mail with, as data losses could occur during experimenting.
    2. make sure your freemailer account is configured so that a mail client can send/receive via smtp and pop3 rather than only through their webpage. some freemailers also require a different password for the mail client than for logging into their portal (which is good). validate with your mail client that sending/receiving works with those credentials, and note the protocols, port numbers and login mechanisms your mail client may have discovered (see the connectivity sketch after this list).
    3. set up your mailserver (mailcow if you like) and connect it to your freemailer account, maybe first for sending via smtp (send one to your real mail account), then for receiving, maybe via pop3, testing it by sending a mail from your real mail account to the freemailer one.
    4. search for a cheap (you are still experimenting, right?) email service that lets you use your own domain, and set it up; they likely also have faqs on how to set up your domain's dns correctly to use their MX server. according to https://www.techradar.com/news/best-email-provider NeoMail (https://neo.space/) seems a good choice. i'd suggest getting a separate domain for experimenting from a different company (i use name.com), so you become more aware of how everything works together and can also change parts more easily later if needs change. domains are usually cheap, a few bucks per year, and domain services usually also provide simple ways to define records, in this case the MX and spf records you need/want for emails to be sent to that email service.
    5. once you have set up the dns records and your mail provider's account for sending/receiving, try to connect your holy email cow to it and experiment with it, also sending from/to your real mail account, and let it run for a while. look into topics like dmarc and dkim, and use spf, dkim and dmarc online check tools to see whether the setup works as you like. based on that experience you might then have ideas for how to go on with it.
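    as a hedged example for the smtp/pop3 connectivity check in step 2, openssl can talk to both ports directly; the hostnames and ports are placeholders for whatever your freemailer documents:

    ```
    # talk to the smtp submission port (starttls)
    openssl s_client -starttls smtp -connect smtp.example.com:587
    # talk to pop3 over tls
    openssl s_client -connect pop.example.com:995
    ```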

    spf, dkim and dmarc are good for preventing malicious parties from sending emails in your name to third parties. a mail server works fine without them, but they are good practice and might prevent your domain (not your ip) from being blacklisted because of spam that you never sent but that seems to originate from your domain and, lacking spf, dkim and dmarc records, cannot be distinguished from your genuine emails. spf and dmarc are dns-only settings, while dkim involves crypto keys you create for signing outgoing emails, whose public parts are published as dns records again so everyone can check that a signature really comes from your domain. i don't know if or how mailcow supports dkim, but it should at least be possible ;-)
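    a minimal sketch of what such records could look like in a zone file; the domain, mail host, dkim selector and key are all made up (your mail service's docs would give the real values):

    ```
    ; hypothetical dns records for mydomain.tld
    mydomain.tld.                  IN MX  10 mx.mailprovider.example.
    mydomain.tld.                  IN TXT "v=spf1 mx -all"
    mail._domainkey.mydomain.tld.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg...snipped..."
    _dmarc.mydomain.tld.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@mydomain.tld"
    ```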



  • hm, that sounds to me like literally any regular webhosting service that also offers email (like every such service i know of), maybe used together with imap (or pop, if you wish); and if you want to connect servers to it to send mail, then "smarthost" or "satellite system" should be the configuration you are looking for for your own MTA. to get received emails from such a service, the most common way is pop3 (still common because seemingly every service offers it for compatibility), but other protocols would be faster, like immediate receive using notify within imap; there are other options too, but those depend on what the service offers, like maybe forwarding your mails, once received by them, to your own server via smtp, or other protocols depending on what they implemented. i think there is no "twist" to it, and what i understand of what you want is a quite common thing.

    i for myself don't want 3rd parties to be able to directly read my emails, so i run my own mail servers as tiny rented VMs from providers, while my real mailserver is my homeserver, which uses these VMs as "smarthost" and also pulls emails from them immediately. my mail clients are configured to connect to those VMs, but that connection is relayed through VPN to my homeserver. so i think my setup is a bit like what you want, except i host everything myself, and i don't use mailcow, though it looks like i use the same software mailcow uses internally. i guess you are mainly bound to what mailcow offers when limiting yourself to it ;-)



  • you’re welcome.

    what i’ld suggest… a general rule that i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let's say you have your mailbox and want to try getting new mails from it using fetchmail. first, you can use uidl mechanisms to fetch every mail only once and otherwise leave them all on the server; but i like it a bit more secure: create a second email address/account at your mail provider's service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; i once had an email account with a cheap provider that deadlocked inboxes when full...). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you're done. every change (not only fetchmail things) can be tested this way before going live. filtering could be done with procmail, for example; but if the mda called by procmail somehow exits with success when the email really wasn't delivered, the email might be lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
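    a hedged sketch of the "leave everything on the server while testing" part; the account details are placeholders:

    ```
    # ~/.fetchmailrc — testing sketch against a throwaway account
    poll pop.example.com with proto POP3
        user "testaccount@example.com" there with password "test-password"
        is "me" here
        ssl
        keep          # never delete from the server while experimenting
        uidl          # remember which mails were already fetched, so each comes once
    ```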

    have fun !


  • it's possible to tell your mta (like postfix) to use another mta for all mail, or only for some domains etc., so using a third party as the internet-facing service, then getting the mails with fetchmail and storing them in a dovecot server, is easy. on the sending side you could use your standard email client (e.g. thunderbird on the pc or k9-mail on the smartphone) to send through your postfix instance, which also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could simply be: relay all outgoing email through your freemailer's mta using your account's username/password. i am doing this, except the "external" mail system consists of my own servers as well; i just don't want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.
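    a minimal postfix sketch of that relaying part; the hostname and credentials are placeholders:

    ```
    # /etc/postfix/main.cf — relay all outgoing mail through the freemailer
    relayhost = [smtp.example.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd (run "postmap /etc/postfix/sasl_passwd" afterwards):
    # [smtp.example.com]:587 you@example.com:app-password
    ```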

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i'd say a 3 or older would do too); adding a disk via usb makes storage huge and cheap, and i use two usb ssds in a raid1 for storage. that server could be reachable only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates those certificates before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configurable download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server as needed.

    maybe adjust the maximum mail size of your own mta to exactly match (or be slightly below) that of the freemailer you use, to prevent surprises with big emails that are accepted locally but then never sent.
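    with postfix that could be a single setting; the 25 MB value is just an assumption about the freemailer's limit:

    ```
    # cap message size slightly below the freemailer's assumed 25 MB limit
    postconf -e message_size_limit=25000000
    ```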

    it's possible to have a nextcloud instance on that same pi acting as a webmailer, just in case (i really don't need it, but i've set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a too bad idea either ;-)

    i suggest also setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)
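    a minimal snapshot sketch with rsync hardlink copies; the backup host, the /backups/pi path and the /var/vmail mail location are assumptions (borg or restic would work just as well):

    ```
    #!/bin/sh
    # hypothetical offsite snapshot: hardlink-copy the previous snapshot, sync over it
    today=$(date +%F)
    ssh backuphost "cp -al /backups/pi/latest /backups/pi/$today" 2>/dev/null || true
    rsync -aR --delete /var/vmail /etc backuphost:/backups/pi/$today/
    ssh backuphost "ln -sfn /backups/pi/$today /backups/pi/latest"
    ```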


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 5 months ago

    one example of a program that did multiple things is sfdisk: it used to make the kernel reload the new partition table, but that was not its main job, which is changing partition tables. that extra functionality moved to blockdev, which is closer to the right place for it, as it also triggers flushing buffers and, i think, setting read/write status. i am fully ok with that change, as it moves code from a program that doesn't need it to one that already does similar things. other partitioning programs like gdisk, fdisk or parted could go the same way, so that maintainers of the reread-partition-table functionality can concentrate on one solution in one place (in userspace) instead of opening issues at an unknown number of projects that also alter partitions. the "do one thing" paradigm is good for the developers who maintain the code, and i very much appreciate their work. if all you want is one-day flies that either die or eat huge amounts of resources just to be kept alive (picture a mayfly in an emergency room, heart-lung machine attached, surgeons rushing around trying to lengthen its life by a few more seconds), then monolithic tools are fine: hardly maintainable, and miserable all day because no one wants to fix any bugs, or can't without creating new ones thanks to the tight internal dependency hell.
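    as an illustration of that split; the device name and layout file are just examples:

    ```
    sfdisk /dev/sda < new-layout.txt   # writes the new partition table, nothing more
    blockdev --rereadpt /dev/sda       # separately asks the kernel to reload the table
    ```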

    the point is not a lack of examples of doing it wrong, but where one wants to be heading.


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 5 months ago

    Lol what???

    wouldn’t that be the definition of stable?

    the computer on voyager 2 has been running for 47 years now. they might have rebooted some parts meanwhile, but overall that's a long time, and if a program is free of bugs, how long it can run depends only on the durability of the hardware and on protection from cosmic rays (which afaik were mostly the problems the voyager probes faced, not bugs). that could be quite long if protected from hazardous environments, maybe using optoelectronics. the point is that bug-free software can run forever, limited only by hardware durability and energy supply; no humans are needed for a veery long time ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 5 months ago

    However, systemd makes the system much more secure and reliable as it is

    you meant less secure and less reliable day by day? systemd has introduced needless dependencies ever since its very beginning, as if that were its sole intention, and those dependencies have already been used for wide-scale attacks: exactly the kind of attacks that people working hard to remove unneeded dependencies meant to prevent, with principles like "do one thing only" (though security was not the number one reason for that one, i think). systemd instead went "let's add another level to that exponential dependency tree from insecurity hell", and it felt like they did this stupid thing intentionally every month for a decade or more.

    and stability... if you don't monitor what systemd does, you'll never know how bad it actually is. i've written custom scripts to monitor systemd's failures (failures at a very primitive part of its job), and there are hundreds of those per day (usually around 200 to 300, sometimes more) across all our systems, for one particular(!) measurement alone, one that was breaking service stability until i wrote a measure-fix-and-monitor workaround. other defects were not monitored, only silently fixed by workarounds, so there are uncounted systemd bugs/instabilities left in the dark that stole a lot of work capacity...

    if you run distros with systemd, unreliability is your daily experience, unless you don't really care or have never experienced stability before. stability can look like running a service (a single process) for 8 years without any interruption, and when it suddenly stops, you go "was that maybe an attack? the process died, how could that be? were there any connects from outside at that moment?". i am not talking about skipping updates for that long, but "stability" itself can mean: if you don't stop it, it'll still run in 10000+ years, maybe millions; it is more likely that humans make themselves extinct first than that the process "just dies" from a bug. systemd, in contrast, randomly stops things that were running fine, for no reason, roughly once a month (varying in what it actually stops; twice it even stopped ssh on my servers, leaving me asking whether i should create yet another workaround for systemd's bugginess so it doesn't lock me out of the network again, or rather go for the real solution to most* of all systemd problems - *see below). that happened on the few standard installs i personally have, as i didn't yet have a way to automatically replace the provider-installed distro on VMs in the datacenter. i want that replacing automated for the same reason i don't like systemd: it causes manual work for a thing that should be automated. thanks to systemd's perpetual instability i now have that automation, and every second spent getting rid of systemd has been worth it 100k times.

    this does not solve all systemd-introduced problems though, as the xz attack showed (a systemd dependency on xz made the infected xz library useful to the attacker at compile time of the sshd binary, through which the attacker could then infect the newly built sshd). one can still be attacked through systemd's dependency hell even without using systemd oneself: the build machines used for your distro could be affected/infected by systemd's needless dependencies when "also" compiling for systemd-affected distributions, so there is a risk of becoming a victim of needless systemd dependencies while not using systemd at all. and the fact that the public fix was not the removal of the needless dependency (included only as source for superfluous third-party "needs") made clear that systemd is an overall security problem that will not be solved quickly; it will stay, just like all the windows insecurities will stay as long as they wish to push them onto their "users".

    systemd reducing overall security, plus its unreliability, combined with some built-in impediments (e.g. when debugging its defects), is what drove me away from it. there are solutions that are way more stable, way more secure (and way better documented, btw) and that do not pull in needless dependencies, which reduces risks and attack vectors and increases overall debuggability, e.g. through deterministic behaviour, to name an easy example. and none of systemd's (to me) important promises have been fulfilled yet. drop-in replacement? i have heard that lie thousands of times, but in the last decade i have not experienced it a single time in a distro, and it does not seem to be on the way to being finished any more.

    for windows users or windows admins, a linux with systemd on it IS an improvement in stability, security and of course updating, yes. but none of that comes from systemd; rather the opposite is the case, systemd reduces it month by month. that's my experience, and that's the most important experience for me. i don't care what lies whitepapers tell or what broken promises are believed by anyone or the masses; i want secure and stable servers and services, and systemd does not fit any of these goals. the time when it was still "young" and early problems could be accepted in the hope they'd get fixed soon is gone, and those fixes never appeared.


  • maybe try to find a linux user group near where you live. if there is one, you usually get help there, but it's usually a different sort of help: you don't get "the solution" to make your personal wishes come true, ready-prepared in bite-sized pieces for easy consumption, but help in the form of advice or suggestions that the people there can give you or would try out directly.

    open source is about sharing knowledge, and today's mainstream OS distributions are way more complicated than long ago, so the learning curve for adjusting things in ways the distribution didn't prepare (which is often a lot) might be steep, but it's always worth a try, at least for the learning.

    for a lightweight desktop environment that is somewhat similar to the old windows98, i'd say give XFCE a try. i think on debian/ubuntu trying it out could be as easy as installing the xfce (or xfce4?) package (or maybe an xfce4-desktop-environment package; i don't remember the exact package name, but there is one metapackage that depends on all the needed stuff; i did it about 4 years ago). once installed, you can try it by logging in and choosing xfce as the desktop environment at login time (your distro should have a login manager that allows this, or you'd have to change that too); if you don't like it, log out again and log back in with the other one.
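    a quick sketch of that on debian/ubuntu (xfce4 is the metapackage i believe is meant above; debian also ships a task-xfce-desktop metapackage):

    ```
    sudo apt update
    sudo apt install xfce4    # metapackage that pulls in the xfce desktop
    # then log out and choose the xfce session in the login manager before logging in
    ```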

    i am using xfce because it is clean and lightweight, it does its job, it does not invent new unneeded features every few months (like it felt when i used kde long ago), and it is adjustable enough for me. i removed the lower task bar and put the open-windows components into the bar above, adjusted that a bit, and that's basically all i changed; i think it is quite similar to what win98 was (but that's not the reason i have it that way).

    also, it is possible to change the window manager (which handles how windows are placed) and the desktop manager (task bar, application menu, maybe widgets, logout buttons) independently, and of course one could also switch between x.org and wayland without changing the other components. the login window could come from the gnome project, yet after login one could use a completely different project's toolset.

    "can" does not mean that every distro makes it an easy task. also, mixing things will likely end in a fuller disk, with lots of "needed" components that are mostly unused. (i think i once used gnome but installed kde only for its printing dialog *lol)

    when using the big distributions, it is likely that no 3rd-party downloads are needed to try other window managers or desktop environments; maybe search for such keywords in aptitude, apt search, or the like. but new fancy stuff also often comes first from unknown 3rd-party websites (or git*.com, which is the same security risk as 3rd-party websites) before it gets into the main repositories after years (or maybe never).

    Closest thing I found was TwisterOS. […] and the fan in my case stops working. Aye-yi-yi!

    maybe "TwisterOS" tries to implement air movement in software? it might be a random unrelated incident and the fan is simply broken; it might also be that it enabled some fan control, and the fan would only start if you heated the system up enough, which might not happen with a lightweight distro and the maybe-not-cpu-consuming programs you use (?). "stress" is a program that can artificially create such cpu load for testing (though with a broken fan it might not be a good idea to actively and unnecessarily heat up the cpu; on the other hand, cpus usually have failsafe shutdown mechanisms so they don't overheat, but that can be a sudden power-down, so expect unsaved work to just vanish). another test would be to give the fan a different power source and see what happens, and to put another fan that works in its place to see if that changes anything.
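    a minimal sketch of such a heat test with stress (careful with a broken fan, as said above; the worker count and duration are arbitrary):

    ```
    sudo apt install stress
    stress --cpu 4 --timeout 120   # four busy workers for two minutes; watch whether the fan spins up
    ```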



  • i plan to get a similar setup (music on the homeserver, synced to the phone for offline use), but i don't need to sync playlists as i rarely use them; i have a streaming account with one(!) playlist holding all the songs i remembered and wanted to listen to but didn't buy on CD back then, and i use the radio-like streaming options a lot.

    but for syncing the phone with nextcloud i use FolderSync (Pro), and it works as it should. it has lots of possible sync targets and lots of options to sync one or both ways. i have folders with >8000 files that take some time to sync, but it works fine in the background with no problem; i let it sync over the mobile network too, because i value a reliable in-sync status more than bandwidth. however, i haven't really tried "immediate sync" for new/changed files yet, as i don't see the need for it, but it's one of many options.

    however, i only use nextcloud sync for one- or two-way syncs, and once used sftp for a one-way sync, so i cannot judge all the other options; but if your playlists are organized as files, their two-way sync might be as easy as with the songs. i bought the pro version on their website, so my license is not bound to a google account.


  • you should definitely know what type of authentication you use (my opinion)!! the agent can hold the key forever, so if you are simply not asked again when connecting once more, that's what the agent is for. however, it lives only in ram, so stopping the process or rebooting ends that, of course. if you didn't reboot meanwhile, maybe try unloading all keys from it (ssh-add -D, then ssh-add -L to check) and see what the next login is like.

    btw: i use ControlMaster/ControlPath (with timeouts) to reduce the number of passwordless logins even further and to speed things up when running scripts or things like ansible, monitoring via ssh, etc. then everything goes through the already-open channel, and no authentication is needed for the second connection any more; it gets really fast.
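    a minimal sketch of that connection sharing in ~/.ssh/config; the 10-minute timeout is just an example value:

    ```
    Host *
        ControlMaster auto              # reuse an existing master connection if there is one
        ControlPath ~/.ssh/cm-%r@%h:%p  # socket path, unique per user/host/port
        ControlPersist 10m              # keep the master open 10 minutes after last use
    ```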