Ask HN: How can a total beginner start with self-hosting?
185 points by kickaha on Oct 5, 2022 | 190 comments
tl;dr Please point me to a true beginner’s reference/tutorial on networking.

Gradually, patiently, persistently, over the past ten years and more, I moved from Windows and Mac to all FOSS apps and then full Linux. Doing the same with my phone. Total success. Independence and self-reliance.

In short it’s all about control, privacy, and security, in that order. And: it’s a long term process that requires a commitment.

I understand desktop Linux (Ubuntu/Pop!_OS) well enough to get myself out of trouble when I mess up or an update breaks. But I have no clue about networking, and I don’t know where to start.

Syncthing keeps a handful of my important directories of user-files synced quite reliably.

I deleted my Google account years ago. But I’m still in iCloud and iOS for all the photos. Highly recommend Fastmail incidentally.

I have a small cheap Linode VPS (doing nothing right now), a Mullvad client on all my devices, Tailscale on all my devices (doing nothing because I don’t understand what it can do), and a Synology NAS in the closet with the modem/router (none of which I understand).

I want to:

- host my own photos and get out of Apple.

- host my own bare git repos and not rely on GitHub.

- host my own BitWarden server.

- host my own Tail-/Headscale (whatever the noun is).

- follow up on ideas that pop up after I comprehend networking.

I can HERPaDERP install packages on client and server, and copypasta configs I don’t understand. Where do I go to understand?




I do not know of an opinionated beginners guide, but would recommend browsing r/selfhosted and r/homelab a bit. Lots of these and similar questions are answered on a regular basis.

Some starting points

- photos: NextCloud

- git: Gitea

- BitWarden: Vaultwarden (even if you deploy this locally you want a SSL certificate as clients will refuse to connect otherwise)

I'd suggest using official Docker images to get started, as there's plenty of documentation available for all these projects, and experimenting is a bit easier when you can simply dispose of a container without having to worry about what will happen to your host OS.
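As a concrete sketch of how small that can be, here is a minimal docker-compose.yml for Vaultwarden (the domain and data path are placeholders; the image name is the official one on Docker Hub):

```yaml
# docker-compose.yml -- minimal sketch; adjust paths and domain for your setup
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      - DOMAIN=https://vault.example.com   # must match the URL your clients use
    volumes:
      - ./vw-data:/data                    # all state lives in this directory; back it up
    ports:
      - "127.0.0.1:8080:80"                # localhost only; put a TLS reverse proxy in front
```

Binding to 127.0.0.1 keeps the container off the LAN until you deliberately put a TLS-terminating proxy in front of it, which also satisfies the client requirement for HTTPS.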

As long as you run services locally on your Synology (assuming it supports Docker) and don't expose them to the Internet, I'd encourage you to "just give it a try".

Just don't immediately start to rely on the services; run a dual strategy (Nextcloud and iCloud Photos, for example) until you've updated your container once or twice and feel comfortable troubleshooting issues with your stack. Nothing is more discouraging than having a service you need "right now" down, with no idea how to get it back up.

It’ll be a long, fun journey. Good luck!


> - photos: NextCloud

If you don't mind a horrible experience viewing photos/videos via Nextcloud, go on. In my case it was unusable. Thumbnails aren't pregenerated even after trying (no, I didn't spend a whole day on that issue) and are generated on the fly, so viewing a larger directory is... rubbish. Videos don't play nicely, and they don't even have thumbnails. It feels like a "guest book" from 2000: no features that auto-organize anything, just a directory of photos, and you're on your own with the unorganized mess.

How great was HN when it suggested https://photoprism.app/ to me: it really just works! Nice, performant, featureful, yet it feels lightweight. Finally I can view my photos.

I still use Nextcloud just for sync, and PhotoPrism just has the directory mounted read-only. Still, sync from the phone feels heavy, with "failed to sync" errors and hangs where it does nothing... I long to try out Syncthing, but then I lose web access to my documents, which... maybe someone can suggest a frontend for that?

Someone also suggested https://photostructure.com/ - it looks decent, but I haven't tried it out.


Nextcloud released a new version of their photos app a few days ago (https://nextcloud.com/blog/announcing-nextcloud-hub-3-brand-...). Haven't had the chance to try it yet though


What’s the difference between nextcloud hub and the good old nextcloud server?


No difference. Just a branding exercise.

Nextcloud will soon be at version 25, which will also be named Nextcloud Hub 3. Frank Karlitchek talks about this during the Q&A, about an hour and a half into the video linked below.

However there are major improvements coming in that new version, specifically to the Photos app[1]

[1] https://www.youtube.com/watch?v=dhJXZzqsv8A&t=1103


I'm the author of PhotoStructure: AMA, or hop into our discord! https://photostructure.com/go/discord


Plex is pretty good for viewing stored video and photos (pre-generated thumbnails and video transcoding + preview), while NextCloud is pretty good at syncing them. Install both and point plex to the synced photos and videos directory in the server.


works fine for me


It looks like r/selfhosted even has a wiki with some tips on getting up and running: https://wiki.r-selfhosted.com/

I've gone through some of it and it seems like a decent primer on where to start, but I'm not sure that it has all the required info that OP would want.


> - git: Gitea

"Why gitea and not bare repos?"

- Better repo, user, and access management, all from a browser.

- LFS support.

- Being able to browse code in... well, a browser, is really nice.

- GH-like workflows generally, if you want that.

You can have it use SQLite for the DB to make it extremely easy to manage and back up. I'd expect that to be fine up to at least 50 moderately active users, and maybe much higher.
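As a sketch of how small that configuration is, the relevant section of Gitea's app.ini for SQLite looks roughly like this (the path is illustrative):

```ini
; app.ini (fragment) -- SQLite keeps the whole DB in one file, easy to back up
[database]
DB_TYPE = sqlite3
PATH    = /data/gitea/gitea.db
```

With this setup, backing up Gitea is essentially copying the data directory plus that one database file.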


Fair enough. But I just want a place to put dotfiles that I can share between machines. If I had more complex needs than that, I would probably not have had to ask this question!


Chezmoi works for me

https://www.chezmoi.io/


Either a dedicated dotfile or just use Syncthing.


> As long as you run services locally on your Synology (assuming it supports docker) and don’t expose them to the Internet

How do I do all this while exposing it to the internet? I want to host stuff for my friends and family without putting them on a VPN to my house.


You could put your services behind a reverse proxy such as Traefik with forward-auth and expose it on port 443 (HTTPS) on your router, or (this is what I do and am happy with) use the cloudflared [1] daemon to connect your services to Cloudflare, where they can be protected behind Cloudflare Access using an SSO provider such as Okta (or GitHub or Google) for authentication. This method does not require you to expose any ports on your router, and it can all be done on the free/dev tiers of Cloudflare and Okta.

[1]: https://hub.docker.com/r/cloudflare/cloudflared
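For reference, a cloudflared tunnel config is roughly this shape (the tunnel ID, hostname, and local port are placeholders; check Cloudflare's docs for the current syntax):

```yaml
# ~/.cloudflared/config.yml -- illustrative sketch
tunnel: <your-tunnel-id>
credentials-file: /home/user/.cloudflared/<your-tunnel-id>.json

ingress:
  - hostname: photos.example.com
    service: http://localhost:2342   # local service to expose
  - service: http_status:404        # catch-all for unmatched hostnames
```

The daemon makes an outbound connection to Cloudflare, so no inbound router ports need to be opened.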


+1 on Vaultwarden. I use two RPis: one in my own house and one in a family member's house. I run regular backups using BorgBackup. I run OpenWrt on my router with WireGuard on it, and I configured my mobiles/laptops to auto-route over WireGuard to my home.


The reality is that we've let you down. Self-hosting shouldn't be any more complicated or less secure than installing an app on your phone. You shouldn't need to understand DNS, TLS, NAT, HTTP, TCP, UDP, etc, etc. Domain names shouldn't be any more difficult to buy or use than phone numbers. Apps should be sandboxed in KVM/WHPX/HVP-accelerated virtual machines that run on Windows, Mac, and Linux and are secure-by-default. Tunneling out to the public internet should be a quick OAuth flow that lets you connect a given app to a specific subdomain, with TLS certs automatically obtained from Let's Encrypt and stored locally for end-to-end encryption.

The technology exists to do all of these things, but no one has taken the time to glue it all together in a truly good UX (I'm working on it). Pretty much every solution in this space is targeted at the developer market, not self-hosters.

So for now I'd recommend using a VPS. Your main challenge is going to be learning a lot about security. There's currently no way around that. A VPS limits the scope of damage that can be done if you get hacked. Once you've learned enough you can move to your own hardware. At that point I'd recommend setting up tunneling[0] and using either Docker or QEMU/KVM.

EDIT: I see you're already using Tailscale. That can operate as a tunnel. Basically you'd want to run a reverse proxy like Caddy (recommended) or nginx on the VPS, and point it at services running on your other devices using the IP addresses from your Tailscale network.

[0]: https://github.com/anderspitman/awesome-tunneling
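To make the Tailscale-plus-reverse-proxy idea concrete, a Caddyfile on the VPS could look something like this (the hostname and the Tailscale IP/port are placeholders; Caddy obtains the TLS certificate automatically):

```
# Caddyfile -- sketch: public hostname on the VPS, backend reached over the tailnet
photos.example.com {
    reverse_proxy 100.101.102.103:2342   # Tailscale IP of the machine running the service
}
```

The backend machine never has to be exposed to the Internet directly; only the VPS and the tailnet can reach it.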


Wow. “The system fundamentally fails us AGAIN RAWR” is a hot take that I almost always go to eagerly, but I am vibing SO HARD on hitting the front page of HN (!!!?) and getting the community help glow, that for the moment I’m gonna let it go. :-)

Also, thanks for all the helpful tips.

At this point there is so much for me to go on that I am trying to think how I can put it all together and share it.


Interesting projects! I bookmarked boringproxy for later reference. Just a quick comment: the feed on your homepage is nice - like a personal timeline - but it misses an RSS feed! ;)


Thanks. Ah yes RSS. I started working on my static site generator about 2 years ago. That's when I got distracted with the sad state of selfhosting and have been going down that rabbit hole ever since. I'll know I've reached my goal when my dad can run a blog (with RSS) on his own hardware without having to know anything other than writing and selecting photos.


My thought on networking is to make sure you only expose SSH and HTTPS; firewall everything else off. SSH seems like a constant source of problems for people new to running servers; within seconds of your server coming online, people will be trying to guess usernames and passwords. Configure it so you can't log in as root, and you can only authenticate with a key. These folks will never guess your key in a billion years (though obviously, don't leak it or something). I wouldn't bother with the complexity of fail2ban. If you have some out-of-band virtual console, you might not even need SSH, but it will be nice to have for git clones if you're self-hosting repositories. Totally reasonable to have SSH exposed to the Internet, in my opinion.
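The key-only, no-root setup described above boils down to a few lines in /etc/ssh/sshd_config (restart sshd after editing, and confirm key login works before closing your existing session):

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
```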

For HTTPS, run everything through a proxy, and maintain detailed access logs. Put your other services behind that proxy, and have them listen on 127.0.0.1 and not 0.0.0.0 (not that it really matters because you firewalled off all connectivity like that). Your HTTPS proxy should handle certificate provisioning for you. There are many options out there; I use Envoy, the self-hosting types should use Caddy or similar.

Anything that you expose to the Internet on a well-known port with an easy-to-guess password will be hacked instantly; SSH, MySQL, WordPress, you name it, they'll own it. Seriously, your mind will be blown and your head will spin. Reduce the number of network ingress points to the bare minimum (through firewall rules), make sure absolutely everything not hardened is behind auth, and log aggressively so you can see what happened when something goes wrong.

I don't know what the state of the art these days is for auth. I wrote my own thing that uses WebAuthn; most people are probably using something like Keycloak. The Tailscale idea is also good; if you don't need this stuff on the Internet, then only put it on the Tailnet and let them handle auth.


Another example of just the kind of concrete answers I was hoping for.


Once you have a stable setup you could even shut off public SSH and instead have it available only over WireGuard (I saw you already mentioned Tailscale). Tor Hidden Services are also an option; they're straightforward to set up (you don't even need to think about firewall/NAT as long as you have Tor access), but there's a bit of a gap in terms of recent, accessible docs. If comments indicate interest I may get around to publishing some stuff that's lying around in my drafts folder.


Registering interest!


> Put your other services behind that proxy, and have them listen on 127.0.0.1 and not 0.0.0.0

Apologies for this newb question, but if you will be adding a proxy, aren't you required to bind it to 0.0.0.0, since if you bind it to 127.0.0.1, only a request within the service's host will be able to receive a response? I'd like to know what I'm misunderstanding here.


They're saying bind the services to localhost so they're only accessible via the proxy.

"Put your other services behind that proxy, and have [the services] listen on 127.0.0.1 and not 0.0.0.0"


Ah I see, I assumed it would be a multihost setup.


Another tip: change the port of your SSH server. This makes connecting slightly more tedious (create a .ssh/config file on your clients!), but in exchange far fewer people will try to hack your machine. This should not be your only security measure, though, since the server is still findable by anyone willing to put in the effort (or by targeted attacks).
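The client-side entry mentioned above is just a few lines; the hostname, port, and user here are placeholders:

```
# ~/.ssh/config
Host homeserver
    HostName server.example.com
    Port 2222
    User alice
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh homeserver` picks up the custom port and key automatically.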


It depends on what direction you want to take. You could purchase a $5/mo DigitalOcean server or a $5 Raspberry Pi and start by installing https://pi-hole.net, https://nextcloud.com, https://syncthing.net, https://www.plex.tv, or some other software to resolve the first (largest) thing you want. Then move on from there and install the next package you need.

Ad blocking for your phone? VPN for work? Self hosted email? Retro gaming? Figure out what you want most and jump into that instead of trying to get everything all at once as it can be overwhelming to consider every system instead of taking one step at a time.


Why is the self host advice to always buy a Raspberry Pi, use an old crappy computer you have lying around, etc?

I have a very beefy desktop for the first time in over a decade (maxed out Mac Studio) and I am wondering is there any real downside to having these services run on it?


Mostly because you want a dedicated device to serve off of. If your desktop is always on it isn't that much different.

My desktop gets shut down, rebooted, crashed, wiped and reloaded, and otherwise abused as I get thoughts.

My home server has been the same (don't worry I run updates) for a couple years, chugging along, sipping about 1/4 the power my desktop would take.

In the beginning of your self hosting journey, the opposite might be true. You might want to try 300 things, and that's hard to do if you're trying to get work done on your desktop.


I have Pis and old, beefy computers, and what I really need is an "elastic personal cloud" where I can bring the high-powered computer(s) online on demand[1]. I have WiFi-connected sockets and a python script that can turn them on or off, but no orchestration or global state.

1. My 12-year-old, former top-of-the-line laptop is much faster than my 3rd-gen Pi when periodically ingesting and indexing JSON into a PostgreSQL db; I suspect slow IO and limited memory are to blame.


I have used wake on LAN sometimes for this too, but that was close enough for me, and I lost interest.

WoW (wake on wife), worked pretty well from the office, but I seem to get a little bit of latency and packet loss. Now I work from home, so I just get up and hit the power button on the more powerful machines.

My shock with compute power was when I got the Pi 4. It was faster than the AMD e350 I was using for backups except for disk IO. The E350 was not a powerful machine new, but I didn't realize how far we had come.

A USB 3 enclosure brought it to a level that was good enough the entire mini ITX build was pointless to keep around.


You could do it. But it's not a good fit. Workstations are more about customization and personalization. Servers are more about consistency, reproducibility, and reliability.

E.g. you can't restart your desktop while your roommate is watching a Plex movie. Or your VPN stops working because your roommate's cat walked on your keyboard while you were on vacation.


The downside is you might not want to leave your computer running all day and VM networking can be a pain if you want to access the services from other devices on your network.

A pi/old computer is a physical box you can put in a corner and you can be sure the physical network works if it is connected.


You may want your services to have good availability while being able to shut off and reboot your (likely power-hungry) desktop without concern.

Especially for DNS. Like, your system reboots for an OS update and now the rest of the devices on your network have connectivity issues for an hour or so. Or you have a hardware issue and now you lack both the infra and the workstation, so getting things up and running again becomes triply annoying.


Mostly energy usage. A raspi is going to use a lot less power than most crappy old computers and a hell of a lot less than your beefy desktop.


You can certainly run these as background tasks on your existing daily driver, but you don't have to.

I've found a Raspberry Pi/Orange Pi to work just fine, and it doesn't matter if the main computer is on, I'm upgrading to a new system, or I'm traveling between work laptops. It just sits in the closet and does its thing using a tiny amount of electricity.


I had my home server on a pi and then an old laptop, and finally in a VM on my always-on Mac Mini. The hardware devices were nice for set-and-forget operation (especially the laptop, because the PSU acted as a UPS that kept it running during several power outages over the years).

I like the VM rather than running it directly on the Mac because it's trivial to copy the VM to another computer when I want to change hardware. Obviously, your services will be unavailable whenever the host machine restarts.


The only downside is, I believe, the electricity bill. The reason to use a rpi is that it has enough computational power to do many useful self-hosting tasks while consuming only a fraction of the power of a beefy computer.


Mac Studio idle is 11 W (max 115 W) compared to RPI 4 idling at 4 W (max 6 W).

It's reasonable to assume that the 115 W on a Mac Studio will never be hit on a workload that could be supported by a RPI...so you're looking at an extra $10-20/yr to operate the Mac Studio.

The biggest issue is just the upfront cost, since of course the Mac Studio is proportionally expensive to its capability.


Idle power consumption on an Apple Silicon Mac is low even though it's a workstation; it's an outlier. An average Intel desktop consumes similar or a little more power at idle, which is acceptable IMO for the flexibility and reuse. A DIY PC may consume a bit more than an average Dell desktop even with the same CPU. An average Intel workstation/server consumes much more than either, so it's wasteful as a mostly-idle server. So it depends on what the previous PC was.


Oh for sure, if you pick out a high-consumption CPU then it'll be high.


I think you should not start at home. That "last mile" of connectivity is fraught with challenges: dealing with your internet provider and all sorts of other things. It can be a headache, and it's very particular to your situation and your provider.

If you start "self hosting" on a cloud based instance, all that you learn in doing so will carry over to when you finally are able to move everything to a machine in house. It will give you results faster, you will make all the same mistakes and have similar problems, without having to fight hardware. You'll also have the network expertise of your hosting provider to fall back on. As much as you don't want someone waltzing in and walking all over your system, they don't particularly want that to happen to you either.

The hosting plans are cheap enough that you can run for several years for the price of hardware in your home. And in the end, if you find you're unhappy with it, it's a mouse click to get it out of your life, instead of having some extra hardware (even if it's just a Pi and an SSD) lingering and collecting dust in your house.

I think virtual hosting is a smoother on-ramp to this journey; you'll find faster success with it (which can keep you motivated rather than frustrated), and it'll give you a baseline to compare against if and when you decide to take the final step and bring it into the broom closet. You can transition incrementally as you go, depending on how you architect it.


> If you start "self hosting" on a cloud based instance ...

I can't anti-suggest this more. Everything I've done on anything labeled "cloud" is weird and special, in ways that would impede learning or dilute its quality. They're several wrapper layers above what's really going on.

Get a bare VPS (I'd start with one listed at LowEndBox for a couple bucks a month) or dedicated host for learning. After you've learned there, you can try learning the significant extra complexity of "cloud".


> That "last mile" of connectivity is fraught with challenges of dealing with your internet provider and all sorts of other things. It can be a headache, and it's very particular to your situation, and your provider.

> If you start "self hosting" on a cloud based instance, all that you learn in doing so will carry over to when you finally are able to move everything to a machine in house.

This has been my suspicion all along! Yes, my goal is to learn the basics before I pull the Pi out of the drawer.


> If you start "self hosting" on a cloud based instance, all that you learn in doing so will carry over to when you finally are able to move everything to a machine in house.

I would say this, but first the exact opposite of this…

Run your own entirely local copies at home first. Then, when you are comfortable that you can set things up right, and you understand their security (setting up user accounts, handing out and managing access rights if you aren't the only non-read-only user, keeping things patched up-to-date, tweaking any options that might have security repercussions, …), you can chuck a copy on a publicly accessible server for others to see.

Then, if you have the local resources for it (a decent-speed and reliable enough connection) at home, consider moving it back “in house” but publicly available.

Those first test local-only instances can be on small machines (a Pi if you have one or are lucky enough to find one to purchase that isn't from a stupidly expensive scalper) or VMs on what-ever other machines you have. You can keep them around to play with to test new things later, before adding those new settings/add-ons/other to your public instance.


> In short it’s all about control, privacy, and security, in that order.

I am going to strongly urge you to consider changing that order and move *security* to the first priority. I have long run my own servers, and it is much easier to set up a server with a strong security foundation than to clean up afterwards.

As a beginner, you should stick to a well known and documented Linux server distribution such as Ubuntu Server LTS or Fedora. Only install the programs you need. Do not install a windowing system on it. Do everything for the server from the command line.

Here are a few blog posts I have bookmarked over the years that I think are geared to beginners:

"My First 5 Minutes On A Server; Or, Essential Security for Linux Servers": A quick walk-through of how to do basic server security manually [1]. There was a good Hacker News discussion about this article; most of the responses suggest using tools to automate these types of security tasks [2]. However, the short tutorial will teach you a great deal, and automation mostly only makes sense when you are deploying a number of similar servers. I definitely take a more manual, hands-on approach to managing my personal servers compared to the ones I deploy professionally.

"How To Secure A Linux Server": An evolving how-to guide for securing a Linux server that, hopefully, also teaches you a little about security and why it matters. [3]

Both Linode[4] and Digital Ocean[5] have created good sets of Tutorials and documentation that are generally trustworthy and kept up-to-date

Good luck and have fun

[1]: https://sollove.com/2013/03/03/my-first-5-minutes-on-a-serve...

[2]: https://news.ycombinator.com/item?id=5316093

[3]: https://github.com/imthenachoman/How-To-Secure-A-Linux-Serve...

[4]: https://www.linode.com/docs/guides/

[5]: https://www.digitalocean.com/community/tutorials


Wow, thank you for all the concrete pointers!

>> In short it’s all about control, privacy, and security, in that order.

> I am going to strongly urge you to consider changing that order and move security to the first priority.

I made myself easily misunderstood. On the desktop, I moved away from proprietary platforms because of good old-fashioned paranoia ("security and privacy"), but first for control.

Needless to say, for anything exposed to the net all considerations come after security. That's why I need to "understand."


This is a very good question.

My best answer is: find a mentor.

Someone you can repeatedly ask for detailed pointers from as you get stuck. This could be a colleague, an IRC/Discord friend or even someone on Twitter that you have bonded with.

I have been mentoring people close to me on computers and Linux since I was about 13 years old and am now 39. And it has been a real blessing, since you learn a lot by being forced to explain what you already know.

As a teenager I didn’t think of this as mentoring of course. But I was very lucky to have had my brother, three years older, as a computing mentor, which gave me a great head start compared to my peers.

Not knowing exactly where you or others reading this comment are currently getting stuck, here are a few random pointers:

netstat -a -n -l -p

ls -la /proc

man mdadm

iptables -L -n

rsync -a -e ssh myfolder user@host:

And read Beej’s tutorial on TCP socket programming if you are an aspiring C programmer.


I like this idea. I live in a little college town so I think I can tap into a circle of nerds.

Thanks especially for the concrete list of topics to bork around with.


Self-Hosting has been my most passionate hobby for the last decade+ and all of the resources below are aimed at people like you. Good luck sailor!

If I may be so bold as to self promote:

  * Podcast - https://selfhosted.show
  * Website(s)
    * My blog - https://blog.ktz.me
    * Perfect Media Server - https://perfectmediaserver.com
  * Github - https://github.com/ironicbadger/infra
  * Linuxserver - https://linuxserver.io
Finally, if you'd like real-time collab with other self-hosters, the podcast has a Discord - https://selfhosted.show/discord.


Thank you! I listen to Chris and Alex religiously (and all the Jupiter Broadcasting shows), and learned a lot. But never heard mention of a beginner's reference there. Maybe I'll send this question into them.

Joe, Jim, and Allen on 2.5 Admins is also informative, but veers back and forth between English and Greek to my ears. :-)


I can’t promise any specific advice, but as someone who runs a number of self-hosted services for mostly ideological purposes (see https://compose.seedno.de/ for a subset) and works professionally in networking, I’m always happy to chat about my own experiences and suggestions!

Feel free to email me at lab (at) seedno.de to chat!


> I can HERPaDERP install packages on client and server, and copypasta configs I don’t understand. Where do I go to understand?

Other commenters have a good bunch of resources on what to do, but if you're really interested in understanding the fundamentals imo there's no better way than to RTFM. Sometimes the manual will have things you don't understand, and then you'll have to google that thing.

For networking I highly recommend the RHEL docs (https://access.redhat.com/documentation/en-us/red_hat_enterp...). There's a big chunk that's useless to you (InfiniBand, etc.), but the basics of the TCP/IP stack are really good to know. It looks like Ubuntu has some introductory material too at (https://ubuntu.com/server/docs/network-introduction), with links to more in-depth resources.

For the rest of Linux, again I highly recommend the Red Hat docs here: https://access.redhat.com/documentation/en-us/red_hat_enterp.... They're very well written and comprehensive, so feel free to skip all the stuff that you don't care about (printers, SELinux, etc.).


Yes! THANK YOU! I am so grateful for all the replies, but this may ultimately be among the most useful.

If I could go back and reframe my question, it would be “Which FM should I R?” :-)


I almost forgot, but back in my day there was a big effort in The Linux Documentation Project, or TLDP. They put out a bunch of good stuff at https://tldp.org/guides.html, but as you can see, they haven't been updated in quite a while. Most of the stuff is still relevant, though, since the basics of setting up web services really haven't changed in 20 years if you don't touch the new Kubernetes/container stuff.

For your use-case, I think you'll gain a lot of benefit from just thoroughly reading this chapter (https://tldp.org/LDP/intro-linux/html/chap_10.html) of one of their books, and https://tldp.org/LDP/nag2/index.html is a more comprehensive view of the same. They encompass almost everything a hobbyist Linux server looked like 15 years ago, and they're a good starting point for the fundamentals.


Hugely helpful. Thank you.


You sound like you are on a good trajectory. I'd start reading up on and implementing a firewall. Start with host based, then on a router/vlan firewall. Try out wireshark and view things like a simple wget, file download, and similar. If something weird is going on record with tcpdump (see wireshark docs for flags) and you can analyze afterwards with wireshark.

I'd recommend an IPv6 firewall if you can get IPv6; it exposes you to much of the complexity (I have 2^68 IPs on a normal consumer/home ISP connection). This will give you much of the complexity of a larger IPv4 LAN.

You sound pretty ambitious, just keep in mind that everything you mention is going to create state that you are responsible for. So implement backups from day 1. Last thing anyone wants is to lose all their photos, git repos, password database, etc.

Make sure your backups are in at least two places that can't be taken out by a single theft, flood, house burning down, company going out of business, etc.

Backups aren't backups until you verify them, do so regularly, maybe the 1st of the month or something. Verify files are exactly as backed up with sha256 or similar.
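A minimal sketch of that verification idea using sha256sum (the paths here are illustrative scratch files, not a real backup tool):

```shell
# Make a scratch "backup" directory with one file in it.
mkdir -p /tmp/backup-demo
printf 'important data\n' > /tmp/backup-demo/notes.txt

# Record checksums at backup time...
(cd /tmp/backup-demo && sha256sum notes.txt > MANIFEST.sha256)

# ...and verify them later; a non-zero exit or a "FAILED" line means corruption.
(cd /tmp/backup-demo && sha256sum -c MANIFEST.sha256)
```

Real backup tools have built-in verification (BorgBackup's `borg check`, for example), but the principle is the same: recompute checksums and compare against what was recorded.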

Specific recommendations: ZFS for any filesystem with two or more disks. digiKam for photo organization and tagging, in a standards-compliant way. Piwigo for self-hosted photos... which can use those standard tags for organization.


SUPER helpful. Very grateful!


Generally new admins tend towards creating physical or virtual machines, logging in as root, editing files, starting services, and backing them up.

Which is fine, and a great way to learn, but a less great way to be in production. Upgrades can be painful, it's easy to break things in non-obvious ways, and there's no easy "undo".

Generally I'd recommend either using containers and maintaining your build scripts in git, or using configuration management software like Ansible or Puppet (there are many others as well), so you can install a new OS (virtually or physically) and then have ansible/puppet/whatever add the config files, packages, services, etc. to get things working. Of course, you then keep the ansible/puppet/whatever files in git AND back them up.
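As an illustration of the Ansible approach, a play like the following (the package and file names are just stand-ins) captures "config files, packages, services" in one versionable file:

```yaml
# playbook.yml -- illustrative sketch, not a drop-in config
- hosts: homeserver
  become: true
  tasks:
    - name: Install the web server package
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Deploy the config file kept in git
      ansible.builtin.copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

Rerunning the play against a freshly installed OS reproduces the whole setup.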

That way if there's some major upgrade of OS or software you can spin up a test env, test it, and then decide to promote/replace an existing machine ... or just ditch it.


Really good advice. I still have flashes of “Ohhhh righhhhhht it’s a VIRTUAL machine---I can...” make a snapshot, clone it and test new stuff, etc. etc. etc. I’ll be happy when it fully sinks in.


Generally I try to avoid VPSs. They are long lived, it's easy to accumulate undocumented state, and they have substantial memory, disk I/O, and network I/O overhead. You may end up with a years-old VPS with many changes over the years, and ugly things accumulate. Maybe you built a package from source, but now it's got a security vulnerability and you forgot it's there. So apt upgrade doesn't help. Not to mention any security fix assumes you aren't already compromised. A rebuilt container (I suggest Proxmox or Docker) will remove any previously existing compromise. Sure you can clone an old VPS... but you still don't know exactly what changes have been made, and if it was compromised the clone is as well.

Much better to have a Docker (or similar) build script, an Ansible play, or a Puppet manifest. They will clearly list needed packages, config files, and services; at least in Puppet's case you'll have the "trifecta". The trifecta is a mapping between a service, the required packages, and the config files. So Puppet can notice a config file change, restart a service, and install a package if it is missing. So Puppet is to a degree self healing: even when an admin edits a file on the guest/container/VPS instead of in Puppet, it's fixed on the next Puppet run.
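The "trifecta" described here can be sketched as a minimal Puppet manifest (resource names are illustrative):

```puppet
# Illustrative only: install nginx, manage its config file, and keep
# the service running. The config file notifies the service, so any
# change (even a manual edit reverted by Puppet) triggers a restart.
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/web/nginx.conf',
  require => Package['nginx'],
  notify  => Service['nginx'],
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
```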

That way years later it's no big deal to upgrade the OS, or replace one piece of the stack with another, then apply ALL the changes you've made so far, and test the result.

I can tell you from experience it's a nightmare to inherit somebody else's VPS that has 3 python installs, tons of services, and requires manual intervention on boot to start/stop services. Would be MUCH better if it was just managed by config management and everything except the base OS is controlled by ansible/puppet.

Often custom VPSs are called pets or snowflakes (with a negative connotation); much better to make them easy to replace as needed, like cattle (with a positive connotation), as the saying goes.


Can't reply to your reply, it's too deeply nested.

Even better than a VPS with multiple services is (IMO) a container per service. So you get a nice clear single function container, but don't have to pay the memory and performance overhead of having a VPS per service.

Containers have near-zero memory/performance overhead (they're based on cgroups, which normal Linux processes use anyway), minimal disk overhead (no need for an OS install for each one), and save a ton of RAM (each VPS requires at least its own Linux kernel).


Wow, thanks. This is a good and useful perspective. I shall endeavor to make my VPS a source of nutrition rather than a source of affection. :-)


You could take a look at this course

https://www.reddit.com/r/linuxupskillchallenge?utm_medium=an...

It's a beginner course on Linux administration - not networking. It will give you enough knowledge to understand and manage a server. It's free, and starts on the first Monday of each month (you can also do it self paced if you like).


Wow! Thanks!


Look at serverbuilds.net to learn how to find the right hardware to buy to host your selfhosted apps.

Look into a hypervisor for VM hosting. I am using VMware because that is what all my jobs have been using, and you can use ESXi for free. You could also look into something like Proxmox.

If you buy and build right, you could have enough CPU bandwidth and memory to do all that you would want. Put all your VMs on SSDs; multiple VMs sharing a single HDD could be considered a war crime.

Learn docker and docker-compose. Use https://www.composerize.com/ to help in the transition from docker to docker-compose. Have a look at linuxserver.io for already built docker images for most of what you want to do.
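As a hedged example of where docker-compose leads, a minimal compose file might look like this (the image tag and paths are illustrative; vaultwarden is one of the services suggested elsewhere in this thread):

```yaml
# Illustrative compose file: one service, with its data mapped to a
# host directory so backing up ./data captures everything that matters.
version: "3"
services:
  vaultwarden:
    image: vaultwarden/server:1.25   # pin a tag rather than :latest (tag is illustrative)
    restart: unless-stopped
    volumes:
      - ./data:/data
    ports:
      - "127.0.0.1:8080:80"          # keep it off the public interface
```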

Keep an eye on Humble Bundle for a DevOps or networking/sysadmin book collection.

If you want to roll your own firewall and router, look into pfSense.

It takes a while to learn and understand, but it is worth it.


I'd recommend you check out the specialized self-hosting distros and systems such as FreedomBox, Sandstorm.io, YunoHost or Cloudron. I have not used them myself, so I can't recommend which system to use. However, I have gone through the pain of setting up loads of self-hosted apps. Especially if you really want to use your system, not maintain it all the time, you will search for the easiest solution. There is nothing more annoying than having to fix your system every month before you can continue working. I would also love to hear from people using these systems on a daily basis. Another possibility for lots of self-hosted apps is a NAS (Synology or QNAP). However, not everything runs on these systems, sometimes because the apps are not available, sometimes because the hardware cannot be upgraded.


I'm so sad that Sandstorm.io is a zombie. The novel capability-based security infrastructure was awesome.


Yunohost : +1


For hardware: in the past, I would say start with a raspberry pi. But those are impossible to find at list price now. So instead, if you look on ebay for SFF (small form-factor) workstations, you can find something like an HP Prodesk G1 with cpu/storage/memory for ~$50-60. Which is significantly cheaper than a raspberry pi 4B @ $100+ by resellers.

Slap on a distro, and you're off to the races. Checkout /r/homelab and /r/selfhosted on reddit. You'll probably want to read about DNS and local networking (DNS & Bind is a good book).

You'll understand by doing things. Don't blindly copy-paste configs. Spend some time to figure out what they're doing and type every line in manually.


I've got a 2012 or so Pi in a drawer...! Maybe I can play around with it.


I suspect that Pi is one of the first models that only supports ARMv6, which is a pain. Newer Pis are good, but there's a shortage now. I'd buy a random SFF PC instead: https://www.servethehome.com/introducing-project-tinyminimic...


I recommend setting up a web server on the smallest server offering of DigitalOcean -- $6 USD/month

Here are a couple links to get that started:

- Ubuntu + nginx for https traffic: https://www.digitalocean.com/community/tutorials/how-to-secu...

- Getting a small nodejs project up and running: https://www.digitalocean.com/community/tutorials/how-to-set-...


My suggestion, which many people will probably disagree with here, is to go take the following certifications:

- CompTIA Network+

- Linux Foundation Certified IT Associate

For extra credit, pass the Linux Foundation Kubernetes certs, get an AWS cert, pass the Offensive Security PEN-200 cert, or take any of the GIAC certs. These won't make you competent, but they'll provide a baseline that you can quickly attain which will get you started.

After those, maybe consider project-based learning.

- Install Arch Linux

- Install Linux from Scratch

- Learn to use QubesOS and make your own OS templates/ISO.

I'm certain others here will say certs are a waste. I do not agree. They are a way for people who don't have enough context to quickly build that context.

Good luck!


I appreciate this perspective. But I was hoping for something at, um, a smaller scale of commitment. :-)


Get a piece of hardware (old from Ebay is fine, consider power requirements, a small device with an SSD may be better) and throw the free VMWare ESXi on it, and start spinning up virtual machines at home.

Play and experiment. That's how I started, and as long as you have lots of off-line backups, you can get pretty far.

Tailscale makes two or more computers look like they're on the same network (simplification).

Later you can decide to keep things on your little virtual host on your home IP (depends on your connection and requirements) or migrate to a VPS at Linode, etc. I like having it at home with me, but that's just me.


I like this advice but would suggest proxmox instead of ESXi. (Seems to be a lot more traction in the homelab type of deployments and my own three-year experience has been really excellent.)


I thought about mentioning proxmox but in my (personal) experience it was easier to get up and running with ESXi.

Once I understood somewhat what ESXi was doing, it was easier to learn proxmox or Xen or whatever.


That could be. I came at Proxmox after other hypervisors as well, so I might be discounting the learning curve aspect; I did experience it as pretty close to zero, but that could be a measurement/framing error.


Learning to manage a self-hosted environment is a great skill, but I would point out that there are plenty of people on HN who have these skills but prefer to rely on managed providers.

Even including people who manage infrastructure professionally I would guess that the vast majority of them don't fully self host their own file storage, email, calendar etc.

Obviously you may have your own unique reasons to want to do this but just know that those who know the full extent of what is required to do this safely and resiliently don't feel it's worth the hassle or effort.


>plenty of people on HN who have these skills but prefer to rely on managed providers.

Yes! I want to learn enough to know the difference!

E.g. Fastmail has been rock solid for me for years now and I am delighted to pay them for email and calendar.


I expect you're going to get a lot of good advice. My small contribution is this: whatever you do, document it. Write down the date, what you did, what settings you tweaked. (or use Github to manage your changes - that still counts as documentation!)

Someday something's going to click for you and you'll realize you should have done something differently, but you won't remember what you did (or how to undo/change it). Keeping good documentation, especially as you're learning, is going to save you from wiping and re-installing your machine.


This is great advice! I can vouch for it---I am no longer a young man and if I don't write it down it's gone. Just for the stuff I've done so far, it has saved my bacon countless times.

Looking at you, ~/.emacs.d/init.el. :-)


This is a long but satisfying road if you're a tinkerer. Start by prioritising. In my case, I was worried about Google locking me out so I started there, other things such as VPN could wait because I wasn't locked in on my VPN service.

In your case that might be migrating your photos off iCloud. I found the awesome-selfhosted[1] list to be excellent for trying out different products that match the size of the VPS you've got or maybe you just want to put that onto your local Synology NAS if you don't need your whole photo roll on the go.

Self hosted BitWarden is also another good starting point with the very lightweight vaultwarden[2] just make sure you always know where your vault is stored on your server and make backups.

While it's a long road it doesn't need to consume your life daily but it still requires you to keep up with all the things any sysadmin needs to handle like monthly patching, monitoring the logs for sustained abuse and break in attempts.

Subreddits /r/selfhosted and /r/homelab are also great places to have a browse.

[1] https://github.com/awesome-selfhosted/awesome-selfhosted#pho...

[2] https://github.com/dani-garcia/vaultwarden


Thanks for the tips. Yes, the subjective experience I'm looking for is tinkering-and-independence-satisfaction.


Howdy! I'm the author of PhotoStructure, which is mentioned on this post a couple times already.

I've been self-hosting email and photos and playing with reverse proxies and VPNs for several decades.

There's always a ton to learn about. This is a journey, not a destination.

The biggest thing to avoid is being overwhelmed. It's super easy to just throw in the towel and give up because there are gobsmacking numbers of alternatives to everything, and everyone has _opinions_.

I've got a couple bits of general advice that should be fairly universal truths, and should help guide your journey:

1. Storage is important for all your bullet items. Know that lots of copies keeps stuff safe. Have an offline and, if possible, offsite backup of the stuff you'd be sad if you lost. Encrypt the private stuff (before it goes off to the cloud, ideally). Read more here: https://photostructure.com/faq/how-do-i-safely-store-files
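For the "encrypt before it goes to the cloud" step, here is a hedged sketch using openssl (the function names are my own, not a standard tool; dedicated tools like gpg, borg, or restic handle this more robustly):

```shell
# Hypothetical helpers: encrypt an archive before it leaves the
# machine, and decrypt it on the way back. PASS_SRC is an openssl
# passphrase source such as "stdin" (you type it) or
# "file:/path/to/keyfile". Illustrative only.
encrypt_for_upload() {  # usage: encrypt_for_upload FILE PASS_SRC
    openssl enc -aes-256-cbc -pbkdf2 -salt -pass "$2" \
        -in "$1" -out "$1.enc"
}

decrypt_from_upload() {  # usage: decrypt_from_upload FILE.enc PASS_SRC
    openssl enc -d -aes-256-cbc -pbkdf2 -pass "$2" \
        -in "$1" -out "${1%.enc}.dec"
}

# e.g.: tar -czf photos.tar.gz ./photos
#       encrypt_for_upload photos.tar.gz file:"$HOME/.backup-pass"
```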

Once you know your stuff isn't going to disappear (because you have backups), it makes updates and trying out new stuff much less stressful!

2. Reduce your externally-available footprint. Ideally, the only access to any of your servers should be through a VPN between your phone/laptop and your server. The less that is externally available, the better. (hint: turn off your Synology's cloud access stuff if possible, asap).

3. Harden your servers. I wrote up a basic guide, and there are ton of others--but only run commands you understand. https://forum.photostructure.com/t/server-hardening-for-begi...

4. The more exotic your setup, the less likely things will work out of the box, and the harder it will be for someone else to reproduce your issue.

5. Look for friendly communities that will, ideally, let you bounce ideas off of them and guide you to making fewer mistakes. There are several subreddits (like /r/selfhosted)--just remember to ignore the trolls. PhotoStructure has a discord, but it has several orders of magnitude fewer members than the popular subreddits.

6. Take any tutorial with a grain of salt. A frustrating majority are outdated. Many were written by interns or by people trying to figure things out for themselves who, in any event, aren't experts.

Good luck!


Extremely helpful. Thanks for the tips!


Been down this self-hosted route many times … from the ground up … as a local neighborhood IT dude. Don’t you hate being that guy? Not me, I get tons of free stuff from my neighbors.

For a simple default-deny-firewall IPv6 NAT gateway (zero HTTP3 support), I used Gentoo, no initramfs, all static (no kernel modules, eBPF JIT disabled, no strace/perfmon2) on 2013 Dell Optiplex 790 SFF. This is the extreme tinkerer mode a la Slackware/Linux 1.98 ramp up expert learning mode. Has Libvirt running Docker/LXD/QEMU. Virtual DNS/NTP/nextCloud/WireGuard/no-SSH. Stable, consistent, rock solid. Initial cost: $65.00 USD. Electric cost: $8.33/month.

Also a beast called Dell Precision T710 24U rack with a RAID5 having 12 hard drives at 2 TB each running Proxmox/Debian, half of the RAID is encryptedFS, and NFSv4 (CephFS upgrade planned next) for all my photos and important docs. Also an NVR (for storing video streams from doorbell and patio cameras). Also a Git repo (Gitea). And backups too. Initial cost $100 + hard drives. Electric cost: $12.81/month.

Raspberry Pi 2B is UPS-backed Devuan (systemd-free Debian; systemd has an open network socket for PID 1, my big no-no as a security analyst) for DNS Pi-hole serving, Home Assistant serving, and cron jobs. Self-hosted home alarm system with Zigbee devices. Has a cellular GPIO adapter with which to call my phone on any home event. Maximum availability, maximum reliability, maximum uptime. Electric cost: unmeasured. Cellular cost: pay-as-you-go cell time. Filled it with $100; it has been a year, and it's down to ~$45.

Workstation is Debian because maximum packages available for maximum experimentation. Has virt-manager for QEMU/containers hosting macOS, various Windows desktop/server, and Linux distros. ~$4.00/month.

I always start with the workstation, the Raspberry Pi, the gateway, then the file server.

Once gateway is up, is when I do full cut-over from ISP-supplied gateway to mine directly by configuring ISP gateway to bridge mode.

Also I run a $4/month 256MB-RAM 1-CPU hosted VPS running a custom module-less Debian/Linux kernel for my WireGuard and DNS proxy needs, for maximum privacy.

Of course my firewall blocks all DNS and any DNS proxy attempt via my custom iCAP server adding to my transparent Squid (also on the gateway).


Thanks for the advice.


The term you're looking for is Homelab. There are so many YouTube videos. And a subreddit.


I'd say more like selfhosted. Homelab is more about people buying discarded server racks and populating them with power-hungry and noisy second-hand equipment. It's their hobby, more power to them, but they seldom focus on what those monsters are running instead of how they put RGB lights on old R710s.


All of the tech that the OP is asking about running is under this banner on YouTube, aesthetics aside. If they want to get tutorials on how to do this and see other software people are running, this is a resource.


You are asking too much :-) Essentially you are looking to setup a "Private Cloud" with the required services on your own hardware in your own "Homelab" location.

So you have to approach it top-down i.e. a) What services do you want? b) What are the SW and HW involved? c) How are they put together? d) What solutions/frameworks are already available for the above?

You start by reading up on "Cloud Technology/Architecture" and understanding terms like IaaS/PaaS/SaaS and how the three fundamental cloud resources, i.e. Compute, Storage and Networking, are virtualized in the above layers. Any cloud tutorial/book will give you a good overview. I can recommend Cloud Computing for Science and Engineering by Ian Foster et al. (https://mitpress.mit.edu/9780262037242/cloud-computing-for-s...).

Now you should be able to understand products/jargon like GitHub/BitWarden/Tailscale/FreeNAS/VPS etc. and where they fit into the overall picture. The final step is to buy the hardware and go to town installing/configuring services.


Start by reading about docker and docker compose. You will have almost everything in docker containers.

Pay attention to the basic networking part. Especially where it says that you can refer to a container by name in your configurations

Then have a look at Caddy, a web server, to use as a reverse proxy. You will end up with very simple configurations. Read about reverse proxies in Caddy's excellent docs.
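For a sense of how simple this gets, a reverse-proxy Caddyfile can be as short as this (hostnames are illustrative, and the upstream names assume containers named nextcloud and vaultwarden on the same Docker network; Caddy obtains the TLS certificates automatically):

```
# Illustrative Caddyfile: one site per service, TLS handled for you.
photos.example.com {
    reverse_proxy nextcloud:80
}

vault.example.com {
    reverse_proxy vaultwarden:80
}
```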

Test on a machine on your LAN first, by installing docker on it and administering it via SSH; this will be how you will interact with your VPS.

Install the containers you are interested in and configure them.

When you decide to move them to your VPS, make sure to use long passwords (and MFA if available) and expose only ports 80 and 443 (both will be handled by Caddy the right way out of the box). The containers for well-known apps are usually secure by default, and they highlight in their docs what you absolutely need to change.

...

After some time, when you finally understand everything, you will reorganize everything. No worries, containers are made for that, your data is independent.

...

Then you will realize that you actually need an OS that is almost empty except for docker and a backup program (easier to use it at the OS level than as a container). You should consider borg.

...

Then you will move to home automation with Home Assistant


> Start by reading about docker and docker compose. You will have almost everything in docker containers.

The case for Docker if you don't care about containerization or any of that per se is:

1) It's basically a very-up-to-date cross-distro package & service manager for Linux servers. You can ignore a ton of stuff about your underlying distro if you use Docker, and can use a very stable LTS Ubuntu or stable Debian or whatever and still have recent versions of the services you care about, more easily than you can without it. Anything you learn about managing Docker will transfer to any other distro.

2) It makes backups and recovery of the exact same services (including same versions) easy by default. Most packages are forced by its nature to make important data directories explicit, you can map those wherever you like on your host system's filesystem, and it's very easy to test whether you're backing up all the stuff you need for recovery from scratch (destroy and recreate the container—if you've mapped all the directories you ought to, it'll look exactly the same, nothing missing or broken, when it comes back up). Back up your mapped directories from various containers (since you can define where these live, this may be as simple as backing up a single directory), backup the docker-compose.yml or shell scripts or whatever you're using to define and start up your services, and that's just about everything.


Thanks for the update, it is very relevant to my comment.

I would add one more thing: despite being discouraged, you can use an image (say, vaultwarden/server) with the tag :latest (vaultwarden/server:latest) and automatically update images to the latest version (with for instance watchtower, which is also a docker image).

This may break things. From my personal experience (5+ years of self-hosting with docker), I had a problem twice. Once was with Home Assistant and the issue was documented on their site. 5 minutes later I was good to go. The other time was with Nextcloud which failed miserably, I had to revert back to the previous container AND use a backup.

So backup! :)

If the image allows it (it is usually documented with the image), you can limit the upgrades to major/minor/patch versions. Say your product is today at 3.2.5. If you use image:3, it will be updated within the major version (3); you will not get version 4. You can also use narrower tags (:3.2 in this example) and just get the patches.

It is a decision between security, new features and availability. My personal opinion, as someone who has worked 30 years in IT and information security, is that newer is better - you won't be alone if things break (because a lot of people use :latest and do not read the changelogs when upgrading)

Docker is a life saver for self-hosting.


Helpful. Thank you.


I have been there. The progress was rather slow until I started to use NixOS. The learning curve is a bit steep but is very rewarding. It is not specific to self-hosting stuff, but as a side effect it makes self hosting super easy (declarative, readable, etc).

For most of the services that you would like, you just write a simple configuration and deploy it. For example, to run the service shiori (https://github.com/breakds/nixos-machines/blob/main/machines...), or to host a game (terraria) server (https://github.com/breakds/nixos-machines/blob/main/machines...), or tailscale (https://github.com/breakds/nixos-machines/blob/main/base/tai...). Since Nix is also a very good package manager, you also do not have to deal with installing packages and managing their dependencies.

With my NixOS server I am running all the services you mentioned.

> But I have no clue about networking,

My router is just a bunch of services running on a NixOS box (with this you have absolute control over the firewall/gateway, and it is also a good way to learn the networking stuff with NixOS). Note that before this I knew nothing about networking, as I skipped the class in college ...


I experimented with NixOS for the desktop and ran into too many paper cuts too fast to be motivated to continue. Mentally put it in the "try this again next summer" file.

But for server stuff it might be a good way to go...? Maybe I'll try it.


I'll second the NixOS recommendation, but cautiously, as it means learning more (different) sysadmin rather than networking. The downside is that your "get myself out of trouble when I mess up or an update breaks" workflow mostly changes. The upside is that the NixOS solution is to reboot and choose the previous GRUB entry. Also you get a focused community where the documentation is nice and dense, and a natural reason to avoid curl|sh, docker bundles, and other disempowering nonsense.

If your goal is to self-host, you don't need to learn too much networking. Focus on the Linode, which has a nice public IP, and try out the various suggestions people have made for services.

To learn networking specifically, do projects with networking. Set up WireGuard directly rather than Tailscale. Multi-home a server with IPv6 tunnels (he.net, 6to4, etc). Set up your own router for your house. Host a service at home, either on your home IP or with a port forwarded from the VPS. Learn nftables and how packets flow through the kernel rather than limping along with copypasta iptables commands.
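As a taste of nftables, a minimal default-drop inbound ruleset might look like this (illustrative sketch; adapt the ports before using it anywhere real):

```
# /etc/nftables.conf (illustrative sketch)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # replies to our own traffic
        iif "lo" accept                       # loopback
        tcp dport 22 accept                   # SSH
        icmp type echo-request accept         # IPv4 ping
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept  # IPv6 needs ICMPv6
    }
}
```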


Keep at it. Keep reading and applying your knowledge.

My path was to use Linux distributions that are well-documented that you can assemble piece-wise. Examples include Slackware, Debian, and Arch. By understanding the pieces you’ll come to understand networking better, and you’ll better understand how to help yourself.

That’s just one path though, certainly there are others. Just look at how far you’ve come, and realize that with time you’ll pick up more.


https://old.reddit.com/r/selfhosted/ is a great community with lots of resources as well. Check the sidebar and wiki there.

Also, if you are into Docker, I love the images hosted by these guys. https://www.linuxserver.io/


I came here to literally say the exact same thing. These two resources are likely some of the most valuable things you'll find. I know some others have said "use the official docker images" and I'd say for selfhosting: don't. The folks who run LinuxServer.IO are awesome and the consistency of their documentation and container images is some of the best, in one place, that's out there today.

I would add two things to the mix here - the first is there are a bunch of blogs who cater to selfhosting as well as on YouTube. The ones on YouTube are much easier to find. I'd say one of the more active bloggers though is Jeremy who runs Noted [0].

The other thing I'd take into consideration is if you want to manage your selfhosting, and if not there are options there as well. One popular option as of late is Umbrel [1] and there's also some like Sandstorm [2] that have been around longer. Yunohost [3] also seems to have some traction in this type of self-hosting realm.

[0] https://noted.lol/ [1] https://umbrel.com/ [2] https://sandstorm.io/ [3] https://yunohost.org/


Thanks for the endorsement and the links.


I built my own server for the first time last year, running Unraid OS. It's been great. Super easy to set up; all apps are installed as docker containers. SpaceInvaderOne and Ibracorp are great YouTube resources, and their forum community is super active and helpful.

Definitely still an amateur in my networking knowledge but I've learned a ton over the past year.


Thanks for those pointers!


To simplify my setup I bought a Synology NAS, crammed disks into it, then setup a bunch of Docker instances on it.

I have game servers for the kids on it, Plex, Pi-hole, Home Assistant and a few others, plus I keep adding to it. I set up a static IP the other day and a Cloudflare account to proxy things through it. Later I plan on setting up a VPN service (probably Tailscale) and maybe look at the Cloudflare Zero Trust setup.

I'm contemplating putting a mail server up as well, all of this in docker instances on my NAS. It's cheap, simple and damned effective for home use. There is a good community around it and lots of online guides.

This way I don't have to worry/spend too much time on hardware and OS-level stuff and can just set up docker apps for the new needs I have. It lets me play around while not burning too much home time, while adding services and capability for my family. Seriously, look into it.


I'd recommend taking a look at OPNsense and pfSense. One of the challenges of learning networking properly is that many consumer devices go to great lengths to abstract the details away from the user. OPNsense and pfSense bring some openness into the situation, by letting you SSH into the networking equipment and generally providing a web user interface with more knobs than any sane person needs. This way, you can learn by trial and error what does what, and you'll be exposed to all kinds of abbreviations of protocols you've likely never heard of. My point is that this gives you more control over how you're connected to the Internet, which serves as a great way to also start thinking about networking more holistically, from the viewpoint of individual computers in your home. And you cannot really do that unless you first control the router properly.


Hm. I'll take a look.


If you don't mind the security trade-offs, off-the-shelf NAS boxes, such as Synology, offer all kinds of self-hosting applications that you might need: photos, videos, Plex, backups, torrents, an MS Office replacement, chat, git, etc.

But, boy, they have huge attack surface, with so much PHP code, web servers, databases, etc running on the box.


Setting up WireGuard is definitely the way to go. I learned a ton about networking doing that. It's a beautifully elegant implementation and actually very easy, but it exposes a lot of networking concepts that are important to understand. Don't use PiVPN and the like, just install it and write the configuration files manually.

I used WG to get two homes talking to each other. A Pi at each end running WG, with static routes set up in the actual routers, and both networks function like one. It was fun to configure, and I learned a fair amount about networking doing it.

Also, using WG to access your network addresses a lot of security concerns. You open up 22, and you're going to get hammered day and night. Assuming you set it up correctly, it shouldn't matter, but there's still always some risk. WG just silently fails if it doesn't receive the proper key. There's literally zero difference (from the client's perspective) between using an incorrect WG key and a machine's simply not existing at that IP address.
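If you do write the WireGuard configs by hand as suggested, each end is only a few lines (the keys and addresses below are placeholders, not working values):

```
# /etc/wireguard/wg0.conf on the home server (placeholders throughout)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the laptop / phone
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0`; the client config mirrors this, with an `Endpoint =` line pointing at the server's public address.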

And then, once you get WG set up, you can expand stuff like pihole to cover all of your devices wherever you are; just run a split tunnel on the client and route all DNS lookups back to your home.

You likely won't have a static IP residentially, but you have a few options there. In some cases, a business-class connection isn't much more and is better anyway (this is especially true in cities and other areas that actually have competitive markets for ISPs). There are plenty of free and paid dynamic DNS services, and setting one of those up on a router or Pi or something is pretty straightforward. Finally, if your IP is _mostly_ unchanging, you can just do the lazy/cheap move (which is what I ended up doing): have a simple script run every hour that checks the IP address and sends me an email and a Slack message if it changes. Happens less than once a year, and updating all of the devices that need updating takes maybe a half-hour. If it were a weekly or even monthly thing, I'd probably go the DDNS route.


This is definitely a direction I will explore.

> in cities and other areas that actually have competitive markets for ISP

Ho, ho. Out my window I just LITERALLY saw a bald eagle swoop down on a tumbleweed.


Sounds beautiful, and I hope your Starlink terminal is working well. :)


Y'know I'll probably make one.

> But I have no clue about networking, and I don’t know where to start.

Start here, because it's literally the foundation, but you don't need to be an expert. You probably understand more than you think if you have a VPS and can access it, though. As a start, you need to have working knowledge of the following:

- The OSI model, so you understand the layered model of networking, even though nothing strictly follows it.

- what a subnet is,

- what NAT/port forwarding is,

- what TCP is,

- basics of routing (packet not on my subnet? send to default gateway which is a router, rinse, repeat)

- that IP addresses are associated with interfaces, not computers or people,

- why HTTP is called an application-layer protocol,

- what SSL certificates are and how they work.

- you also need to study up on Docker and containers as a lot of web apps are released as containers now.
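For the SSL-certificates item in particular, you can demystify a lot by generating and inspecting a self-signed certificate locally (sketch using openssl; the CN is just an example name):

```shell
# Generate a throwaway self-signed certificate and look inside it.
# (CN "selfhost.local" is just an example name.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=selfhost.local" \
  -keyout key.pem -out cert.pem

# Print the fields clients actually check: who it's for, who signed
# it, and when it expires.
openssl x509 -in cert.pem -noout -subject -issuer -dates
```

For a self-signed cert, subject and issuer come out identical, which is exactly why clients refuse to trust it without being told to.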


Just the kind of concrete suggestions, of where to start, that I was hoping to get. Thanks!


A bit of self-promotion regarding your:"host my own bare git repos and not rely on GitHub." objective: Here is a work-in-progress code for 1-command deployment of a self-hosted GitLab server over tor, with free, limitless self-hosted CI (that can still report build status results to GitHub if you want): https://github.com/TruCol/Self-host-GitLab-CI-for-GitHub

Tor because that way your self-hosting works from wherever you are, even if you're in a flat behind a gateway/router you do not have access to. 1-click because I like the user experience to be as simple as possible.


> even if you're in a flat behind a gateway

Perfect example: to my ears, this could be (1) networking jargon, (2) American football jargon, (3) puerile sex euphemisms


Without meaning that in a rude way: Learn to find and read documentation.

Oh and very much avoid random tutorials on the internet. As in, go for official source and use these tutorial only to connect the dots. The reason is that there is huge amounts of really bad advice on the internet and a lot of the tutorials only work in very specific situations (versions, OSs, etc.). Official documentation tends to be a lot better, and it's a good idea to choose software that provides good documentation.

Also make sure you do it one step at a time. You want to give things time to know what failure cases it might have. This prevents you from situations where everything "crashes and burns", because there is an update.


No offense taken. Sounds like good advice.


Exception: any how-to guides in Arch or Gentoo's documentation & community wikis or what have you are likely to be excellent and sometimes a far better use of time than reading the official docs. Most of the steps, tips, and advice aside from parts related to package management will be useful just about anywhere.


I have a rather powerful dev box, on which I run yggdrasil[1]. With my mobile unit laptop, that self-hosted dream is a reality. PowerDNS[2] with admin[3] and smallstep[4] are the cherry on top.

[1] https://yggdrasil-network.github.io/

[2] https://www.powerdns.com/

[3] https://github.com/PowerDNS-Admin/PowerDNS-Admin

[4] https://smallstep.com/


I am on the same path as you so my 2 cents are:

- I am using my raspberry Pi for hosting my services

- I had to configure my router and talk to my ISP to remove NAT restrictions (this was the hardest part! really hard.)

- The other router part was setting up port forwarding and firewall which were pretty easy to do

- For git I am doing it from scratch, as I just want to create a web interface, basically I run git-http-backend and a go server [1]

- If you do not want to do git from scratch I recommend using cgit.

[1] https://saucecode.bar/posts/09-hosting-your-git-server.html


You don't need to do any port forwarding and/or router config if you don't mind using Cloudflare Tunnels. Seriously, these tunnels are a game changer when it comes to self hosting. I have one running on my main computer and another on a Jetson Nano being used as a webserver.


It sounds like you have a good working base of knowledge to start from and might benefit from some high level concepts. Once you understand the basics you can likely cover what you're looking for with a small open source home router or some other similar hardware.

If you prefer books check out

https://www.amazon.com/Computer-Networking-Top-Down-Approach...

Or for video lectures:

https://www.youtube.com/playlist?list=PLoCMsyE1cvdWKsLVyf6cP...


That's two votes for that book, so I'll take a look. Thank you!


Plex is a great product if you want to host your own photos, music, and media such as movies and tv shows. If you aren't a fan of Plex you can try jellyfin.

As others have mentioned, Proxmox, Unraid, and/or TrueNAS are great if you have unused/extra hardware sitting around. Personally I have a box for Proxmox VMs, and an Unraid server for storage and several docker containers I use regularly. I'm still very cloud-dependent for the convenience factor, but this should help give you some direction.

There are also communities on reddit like /r/selfhosted and /r/DataHoarder/ that you might want to check out.


I'll just recommend starting SMALL. Pick one, and start there. I'm not even sure which one on your list to recommend starting with, just pick one and go with it. All of those things are widely used and just asking your favorite non-Google search engine for docs/guides/how-tos will give you more good results than you need. Get to know that thing, how to set it up, configure it, keep it running, keep it secure, and how to recover from disasters. Make sure you know how to back it up and start over in the event of failure. So much of what you learn from setting up ONE will carry over to the others.


I have a Synology DS720+ and I use it to store all my photos. Two things have happened incidentally:

- I became an expert in networking. Opening ports, configuring VPNs, DNS... all of this has required some time but ultimately I am happy.

- I became aware of security. Synology defaults are good enough, but you really fear the idea of being attacked, so I had to dig up how to secure my Synology even more.

In the end, the DS720+ together with Synology Photos (Gphotos replacement), Drive (Dropbox replacement) and two drives of 6TB in RAID, cost me around 700€.


One thing that has kept me from starting to self-host is that I'm terrified of the thought of opening ports on my network to the open internet. For people who have self-hosted, how do you secure or set up your network?


I self host a lot; the main things are to keep your software up to date and to make sure you don't accidentally open any ports you don't mean to. The two main mistakes people make are 1) never updating software and then getting exploited, and 2) not having a firewall or misconfiguring it, like leaving an "internal" service exposed to the internet.
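On Debian/Ubuntu, the keep-it-updated half can be largely automated with the unattended-upgrades package; assuming it's installed, this stock config fragment switches it on:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Security updates then apply themselves daily; you still have to handle major-version upgrades of the apps you host.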


nginx or other reverse proxy running SSL - don't open ports directly to any of the apps.

You should also run fail2ban on everything.
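To sketch what that looks like in practice (hostname, certificate paths, and the backend port 8080 are all placeholders), a reverse-proxy server block along these lines, with the app itself bound only to localhost:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the app listens only on localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Only 443 gets forwarded at the router; the app's own port never faces the internet.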


jzymbaluk's fear is mine too. I should have made it explicit!

I'll follow these pointers for sure.


Cloudron.io might be helpful. Makes self-hosting a lot easier for beginners.


I would get a Raspberry Pi and at first put it behind your provider's firewall to make sure only specific ports are open to the internet (ideally none at first). Then I'd link all my home devices to that RPi and configure it to manage routing, at first with DHCP and then manual static IP addresses. Maybe add a bit of bandwidth throttling per device just for fun, plus bandwidth monitoring and reporting. For hosting git and such I'd install GitLab, and for photos maybe I'd add an external USB device for file storage and perhaps a shared mount.


I can recommend openmediavault. It's something you can set up on any Linux machine or on a Raspberry Pi. It comes with many storage servers built in, and can also run any container service as well.


Sorry to be contrarian, but I had a pretty negative experience with OMV. The install is nonstandard and non-paradigmatic (install uses some 3rd party script, can't do an apt install), it sets up a bunch of nonstandard scripts and configs, and uninstalling is a nightmare (can't apt remove). When something breaks (like networking), you have to fight with OMV wanting to have its own way. There's also a certain lack of polish and issues that you simply don't see in more commercial products.

I'm currently using Cockpit Project (and quite happy with it).


I guess most of your "wanted" list is easily achievable, except the first one: There is just no self-hostable product for photo management that comes close to Apple/Google in terms of reliability, feature set and ease of use.

Take all recommendations with a grain of salt. They're not even close to the capabilities of Google/Apple (unfortunately) – no matter what people are trying to tell you. This is my experience from trying out most of these systems at some point in time (Nextcloud, Photoprism, Synology Photos, and more).


What is difficult about image hosting? Database containing photo endpoints and flags, simple HTML viewer based on tag selection. Database backup is solved and blob backup is solved.
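As a sketch of how small that core really is (schema and file names invented for illustration), an in-memory SQLite tag index:

```python
import sqlite3

def make_db():
    # In-memory for the sketch; a real setup would point at a file on the NAS.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE photos (path TEXT, tag TEXT)")
    return db

def add_photo(db, path, tags):
    db.executemany("INSERT INTO photos VALUES (?, ?)",
                   [(path, t) for t in tags])

def by_tag(db, tag):
    # The "simple HTML viewer" would render these paths as <img> tags.
    return [row[0] for row in
            db.execute("SELECT path FROM photos WHERE tag = ?", (tag,))]

if __name__ == "__main__":
    db = make_db()
    add_photo(db, "2022/beach.jpg", ["vacation", "family"])
    add_photo(db, "2022/cake.jpg", ["family"])
    print(by_tag(db, "family"))
```

(The hard parts the parent comments are pointing at are everything around this: phone sync, dedup, search, sharing.)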


I started self hosting recently. I bought a cheap desktop server and installed Ubuntu server on it. Every service I run is docker-compose because it’s easier than wrangling deps and such for multiple services. I use Caddy to proxy traffic and generate ssl certs.

I expose https and ssh through my router but use a non standard ssh port (keeps scripts and bots from knocking) and no root access over ssh and no password auth.
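For reference, those three SSH hardening choices map to a few lines of sshd_config (the port number is just an example):

```
# /etc/ssh/sshd_config (excerpt)
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

After restarting sshd, verify you can still log in from a second terminal before closing the first one.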

Everything runs very nicely and is simple to maintain and setup.

Data is backed up via Borg and rclone to B2.

So far so good as they say.


https://landchad.net may be what you're looking for.

I believe it's made by Luke Smith, with content uploaded by other people.


If your first requirement is to build a solid foundation in networking I'd suggest finding study materials for the N+ Certification from CompTIA (https://www.comptia.org/certifications/network). It is vendor neutral and is intended as an entry point into a career in IT Infrastructure/networking.


What you shouldn't do is go bare metal (as in: install Linux on some machine and start hosting). Begin with some virtualised solution like Proxmox or some Docker-based platform. This way you can quickly bring up new machines with any OS, and quickly recover from mistakes and failures. Yes, it's a steeper learning curve, but anything else will make you miserable, as problems would cause long recovery phases.


Well, with Tailscale you have your own private network, so anything you run anywhere you can access on any device. You could DIY it a bit more using something like WireGuard directly (Tailscale uses WireGuard under the hood), but Tailscale makes it much easier, particularly with managing adding new devices, using DNS, etc.

I don't see much downside to sticking with Tailscale indefinitely, what are your reasons?


Exactly why I asked for pointers to tutorials: I have only the most superficial, decontextualized comprehension of words like WireGuard, tunnel, (virtual) private network, etc. Thanks to everyone who made suggestions.

> I don't see much downside to sticking with Tailscale indefinitely, what are your reasons?

Tailscale and Bitwarden both fall into the same category: they seem like good actors, they have generous free tiers, and if I come to rely on them at scales that aren't free I would be happy to pay. However, I want to understand them enough to be able to know that I could host them myself if god forbid they go out of business, are subject to attack, or whatever.

As I've succeeded at on the desktop, the hobbyist gratification is both the fun of tinkering mixed with the confidence that if it need it I have packed my own parachute.


hello,

imho. self-hosting != networking

but basic networking know-how is necessary for self-hosting, and knowing networking helps in a lot of situations :)

idk what the "best" way to learn these things in the 21st century would be today, several decades ago "the linux documentation projects" (networking) howto was a really good start.

ad hardware: use any (cheap) machine you can get or already have ... idk, an old pc, a raspberry pi or some small virtual machine at a cheap provider.

it doesn't matter, and if your hardware is "too small" at a certain point in time, you will learn a lot by moving your setup from one system to another -> sooner or later you will get into configuration-management a la ansible :)

at first you need (open)SSH to be able to remotely connect to your machine.

then get a domain and start with "the" fundamental service for all internet-connected services: DNS

the most common software for this is the ISC bind.

then add something "easy" like a webserver with static pages, add TLS ... and later PHP support etc. - necessary for a lot of webapplications.

idk, use apache2 or nginx, at this stage it doesn't really matter.
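(if you want to see the moving parts of "a webserver with static pages" before committing to apache2/nginx, python's stdlib can serve a directory in a few lines - a throwaway sketch, not a production server:)

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

def serve_dir(directory):
    """Serve `directory` on an OS-chosen localhost port; return (server, port)."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv, srv.server_address[1]

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        pathlib.Path(d, "index.html").write_text("hello world")
        srv, port = serve_dir(d)
        print(urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode())
        srv.shutdown()
```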

the very last service to setup will be e-mail - SMTP/IMAP/POP3 & contentscanners -, which is far more complex than it seems at first sight.

as the operating system i would recommend debian, it may not be the "slickest" linux-distribution, but it contains a lot of ready-to-use packages in its repositories and has good documentation and - last but not least - a social contract.

additionally debian is the basis of a lot of well-known linux-distros, which are often heavily tailored to a certain use-case - like ubuntu/linuxmint etc.

cheers


* Buy a domain as Domain

* Buy an Asustor AS5304T as NAS

* Setup CloudFlare on Domain

* Setup services as docker containers on NAS

* Setup CloudFlare tunnel into NAS docker services with auth in CloudFlare

* Enjoy services.

Note, if any of that sounds hard I must kindly point out that you're playing a risky game trying to do all this yourself securely and should evaluate whether it's easier to just pay a service for all this.


> Note, if any of that sounds hard I must kindly point out that you're playing a risky game trying to do all this yourself securely and should evaluate whether it's easier to just pay a service for all this.

You articulated something I couldn't: an important goal for me is to know enough to be able to evaluate such risks and decide which things to hobbyist tinker and which things to pay to outsource.


My Dad once showed me how the moment he opened a port on our domestic router four different attempts to hack us were made. Good lesson for living on the net, here be dragons.


Can you explain your second last star?


I plan to write a blog post about this. But the gist is that Cloudflare lets you host a service locally which creates a port tunnel between a domain on their platform and your box. This handles a lot of things, such as 2-factor auth, floating IPs and TLS. So you visit "home-git.example.com" and Cloudflare routes that to your homebox.
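Until the blog post exists: the wiring is done by a cloudflared config file of roughly this shape (tunnel ID, paths, hostname, and the local port are all placeholders):

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel-uuid>
credentials-file: /home/you/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: home-git.example.com
    service: http://localhost:3000
  - service: http_status:404   # catch-all for anything unmatched
```

You point the hostname at the tunnel with `cloudflared tunnel route dns` and keep `cloudflared tunnel run` going as a service; no inbound port is ever opened on your router.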


I think he means that you don't point your domain's public DNS records straight to your home IP (where your NAS sits), but rather via Cloudflare.

If so, doesn't Cloudflare do HTTPS termination? Meaning that all your photos would be visible to them during transit.


I suggest installing Debian or Ubuntu on the NAS so that you will be familiar. Then install all the services on that NAS box. Next step is to learn iptables and make your own router. Could make the NAS into a router if it has two or more network ports. A router is just some iptables commands.


Of course, iptables is an older syntax, since replaced by nftables. Learning both isn't too hard. The easiest fw is the Uncomplicated Firewall (UFW), which is also probably worth learning for completeness.


Thank god my wife will never know I read those words. :-)


Interested to know what others share.

I have an old computer that I connected to my router, and I’m able to ssh into it and do stuff.

It’s an old intel quad core with 16gb ram, and has a 1TB SSD. More than capable enough to handle a bit of workload. It runs Ubuntu and I’m using it to run backends for apps as I develop them.


For photos, Synology has a “moments” app which you can download on your iPhone and will back up photos to your NAS.

The only catch is that you have to manually open it to back up the photos, but I have found it to be the easiest way to get photos from my phone to the synology.


Hm. In my experience, the Synology apps (client and server both) are ghastly. And I'm snobby about non-free software.

Better not to cut off my nose to spite my face, though, right? I suppose a non-Apple but closed/proprietary solution is a step in the right direction...?


I feel ya, it just worked well enough for my use case of "back up phone photos directly to NAS" in a very frictionless way.

That is to say it’s not the best app, but it works enough for me.


Looking forward to following answers to this!

I can vouch for git repositories being easy to host on a VPS. I use a private git repo as my daily backup tool for my documents. A public one should be easy too. Access management for particular users, I'm not so sure about.


Maybe not as beginner-friendly as you would want, but you can read my tutorial about my personal server

https://github.com/erebe/personal-server


"Linode VPS": this isn't self hosting 8-( You're still on someone else's computer.

However, self hosting is easy!!! Follow these 3 steps:

1. Get a computer

2. Install freebsd or linux

3. Install apache or nginx

Enjoy self hosting!!

Maybe that's 4 steps?


It's a fair point, like jaequery's SPOF sibling remark. I think ultimately my goal is to have local/remote redundancy, and get to the point where I am sufficiently informed to trust myself to administer everything.


I have all of this on my Synology and use VPN access via a Raspberry Pi to lock it all down, plus getting off Google Docs by using the Synology Doc system with LibreOffice.


install ubuntu server either locally or on a cheap VM

read the ubuntu server docs about setting up a firewall and get started setting up a firewall that only allows inbound ssh and https ( be careful and don't firewall yourself out of your system! :) )

https://ubuntu.com/server/docs/security-firewall

from there, read the docs on setting up a webserver and create an index.html with just the text "hello world" https://ubuntu.com/server/docs/web-servers-apache

from there, read and learn how to setup letsencrypt as manual as possible https://letsencrypt.org/how-it-works/

buy a domain and learn how to setup A records to point your domain to your external IP. Your domain registrar will allow you to make your own dns records

https://www.cloudflare.com/learning/dns/dns-records/dns-a-re...
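Once the A record is in place, you can sanity-check it from Python before fighting with browsers and certificates (the domain and IP below are placeholders; `localhost` works for trying the function out):

```python
import socket

def resolves_to(domain, expected_ip):
    """Check whether `domain` currently resolves to `expected_ip` (IPv4)."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:          # domain doesn't resolve at all
        return False

if __name__ == "__main__":
    # Placeholder values: substitute your domain and your external IP.
    print(resolves_to("mycustomdomain.com", "203.0.113.7"))
```

Keep in mind DNS changes can take a while to propagate, so a False here right after editing the record isn't necessarily a mistake.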

if you can get https://mycustomdomain.com/index.html to output hello world in your browser with no certificate warnings then you've learned enough to start tackling self-hosting some packaged service out there. You'll at least know enough to know what to search for when looking for answers.

if you're installing all this on a computer in your home network then you'll need to login to your home router and port forward 443 to your ubuntu server. This would be a good learning experience too https://en.wikipedia.org/wiki/Port_forwarding

edit: if you're looking for hardware to buy i've heard mac minis work really well for home servers. you wouldn't be installing ubuntu server in that case, you'd be following mac docs for firewalling and webserver setup.


I do basically this, plus I add WAF rules in Cloudflare to block access from any other country than my own (I live in a 1X mio country so no USA, Russia, China). Do I need to set up e.g. Tailscale etc to make it more secure?


well security is a very broad and deep problem. Keeping these things in mind will get you pretty far.

1. For firewalling, only allow what you need. For example, if your server is in your home then you don't need to forward port 22 in your router. You'll always be logging in to your server from your local network.

2. For services, see #1. If you're using a Linux distro that starts up a ton of server services you don't need, turn them off.

3. To secure specific services like sshd read their setup docs. SSHD, Nginx, Apache all have very good documentation on operating a secure server.

having said that, I don't think a VPN makes you more secure. Are you saying you'd run a VPN endpoint on your server so that you can connect to it from outside your local network, like at a cafe or something? If you're connecting over SSH then I don't think so, because SSH is already encrypted (that's the "secure" in secure shell), and if you follow the SSH docs on secure connection best practices, like disabling password auth and using keys, then you're in pretty good shape.


Super helpful list. Thanks!


The trick is to just do it. You will make mistakes and you will learn from them. I'd start with learning how to be confident you have good, usable, backups.


> be confident you have good, usable, backups.

Oh dear god yes.


The first requirement is to have good upload speed on the network where your self-hosted device lives, which is quite a challenge if you only have a DSL connection.


Take a look here : https://yunohost.org/#/


I would recommend Proxmox as a hypervisor for whatever VMs you need. Its free, they only charge if you want a support subscription.


To understand networking: read a book. Much of my early networking knowledge came from https://dl.acm.org/doi/book/10.5555/1593414 but you can probably find more modern variants of such a book. Getting a good grasp of networking requires reading and experimentation; luckily, you've got the tools for experimentation already.

Most of application deployment is little more than reading the docs and tuning the configuration to your needs. From what I read, I think you've got enough knowledge to get that stuff running on your servers. You can probably get a lot more out of learning about the underlying concepts.

For your own photos and cloud: I use Seafile, have used Nextcloud, and alternatives exist. Quite easy to set up, but with the ability to go deep into Modern (TM) Cloud (C) backends if you want.

For your Bitwarden setup: Vaultwarden is a lot easier on resources and has pretty much all the features you need. Also quite easy to set up.

For your tailscale setup: there's a guide for the server (https://github.com/juanfont/headscale/blob/main/docs/running...) and you can find more guides for the clients.

For your Git setup: Git works over simple SSH. If you can SSH into your server, you can host a git repository. If you want more (a nice web GUI) then Gitea or Gitlab can also be run on your server.

Things I recommend reading into if your knowledge about them is spotty (find guides or book recommendations):

- Networking (ARP, IPv4, IPv6, TCP, UDP, DNS, mDNS, maybe PPPoE, and other such abbreviations). This is a lot of reading. You can also try to get started with this stuff without reading into it (it's how I learned!) and have a terribly frustrated time by overlooking obvious mistakes and easy solutions, but I don't recommend that.

- SystemD services. People use Docker to solve a lot of daemon problems but good ol' systemd can do a huge part of that! I run most of my services in systemd rather than some kind of container setup because I don't want to have to deal with Docker and its many friends and dependencies whenever I'm trying to resolve a problem and so far it works great.

- Reverse proxies, if you're running multiple services on a single server with subdomains or subpaths; learn about nginx/caddy/apache2/whatever server you prefer and how to set up proxying. Along the way you will break stuff and learn new things with every error message or unexpected routing error you encounter!

- Firewalls; firewalld and ufw are nice ways to get started, nftables/iptables for the underlying stuff. It's not hard, per se, but it can get complicated fast. Maybe mess with the Windows firewall as well just for fun.

- Set up IPv6 if you don't have it already. This would allow you to do some more networking stuff and prepare you better for the future, because corporate networking people seem to be grumpy and annoyed at the thought of one day needing to enable a protocol from the 90s. If your ISP only does IPv4, https://ipv6.he.net/ will get you an IPv6 subnet for free and if you do all of their quizzes they'll even send you a free shirt!

- Along the way, you will (or should, at least) learn to use Wireshark and friends. Incredibly overwhelming at first but with some knowledge about networks you'll get the hang of it by setting up the right filters.
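To make the systemd point above concrete, a minimal unit of the kind being described (service name, user, and ExecStart path are all illustrative):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My self-hosted app
After=network-online.target
Wants=network-online.target

[Service]
User=myapp
ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.toml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `systemctl enable --now myapp`, and follow its logs with `journalctl -u myapp -f`; that covers most of what people reach for Docker to get.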


> To understand networking: read a book. Much of my early networking knowledge came from https://dl.acm.org/doi/book/10.5555/1593414 > [Computer networking: a top-down approach, Kurose and Ross 2009]

This is the kind of reference I was looking for. Anybody else have a favorite? Maybe something more accessible? "For Dummies"? :-)

Thanks for all these concrete suggestions.

EDIT: "Free t-shirt"?! AWESOME :-)


I think this book is its own "for dummies" version: it starts at the very basics ("what is HTTP?") and then slowly works its way down to "how does collision avoidance in WiFi work?".

Most of those chapters are probably not relevant to you. All the wireless stuff is nice if you want to dig into setting up your own radio network but you can stick to the internet stuff. Not even everything in there may be relevant to your interests, I've forgotten most of the network/routing graph discovery stuff myself because it's not really relevant unless you're planning on running a corporate network or an ISP. I recommend just giving it a good go, and skipping the parts you probably won't ever need. You can always read them later if a later chapter refers to them and you need to understand after all.

As for Linux stuff, last month's Humble Bundle had a bunch of Linux related books: https://web.archive.org/web/20220913000217/https://www.humbl...

You can probably find people talking about this bundle online and find out what books/alternatives they recommend. There were a few Reddit threads about this at the very least, maybe a HN thread or two as well.


Great to know! Thank you.


Let's start with hardware. What hardware and internet connection should one have to reasonably self host things ?


Just keep at it. Read some source. Play with vms or docker containers. You will get it with practice and exposure.


TechnoTim on YouTube is a good starting point. The selfhosted subreddit also has lots of resources.


Give cloudron.io a try. Simple to get started, technical yet secure and has an active community.


Check Cloudron and CapRover


There are several projects designed to help you self-host your own services.

Proxmox[0] is mentioned by a few folks here. It's mostly a hypervisor. It's good if you have a "big" server and want to split it up into VMs for various needs. It doesn't have any concept of an AppStore or service catalog. I think this is too low level for what you're asking.

Unraid [1] is probably the easiest way to turn an arbitrary computer into a useful server. You install the OS on a thumb drive and it runs from there. It provides network storage services out of the box, can host VMs, and has a solid catalog of packaged services in their Community Applications plugin [2]. These are packaged in a weird, obscure way that I tried and failed to figure out. I've run this on an old T410 for a couple of years and it's been pretty good. Not as flexible as some other options, but quick to get going on the basics. You can see this in their storage system: you can easily add arbitrary disks to your pool, but parity options are limited. My biggest complaint is that it's hard to spin up your own docker images, especially if you don't want to mess with Docker Hub.

TrueNAS SCALE [3] is my next platform. It's an iteration on the very solid FreeNAS/TrueNAS and ZFS. It handles containers and containerized services as first-class citizens using kubernetes, but also includes KVM so you can do virtual machines. Like Unraid, it has a healthy app library over at TrueCharts [4]. Unlike Unraid's weird XML manifest, SCALE uses Helm. Nice.

coolLabs [5] is sort of a self-hosted Heroku alternative. I just discovered it on HN the other day [5a] in that context. It looks pretty neat. It has some pre-packaged services already [6] but seems to lack any concept of a community-curated service package repo. It seems to be mostly focused on helping you deploy applications you develop yourself. I don't think it gives you network shares, for example. Still, it could be a great choice to throw onto the VPS you're wondering what to do with. [7]

Kubesail [8] is a k3s-based self-hosting operating system. It's designed to help you run basic web services as easily as possible. Where Unraid assumes you have an old computer laying around, Kubesail will sell you a PiBox [9] to get you up and running. (You can also bring your own hardware.) They have a nice AppStore and have put particular attention into the photo use case you mentioned: they emphasize support for PhotoStructure [10].

Cloudron [11] was mentioned by a few other comments. I haven't dug into it, but it does seem to have an appstore as well.

[0] https://www.proxmox.com/en/

[1] https://www.unraid.net/

[2] https://unraid.net/community/apps

[3] https://www.truenas.com/truenas-scale/

[4] https://truecharts.org/

[5] https://coollabs.io/

[5a] https://news.ycombinator.com/item?id=33077118

[6] https://docs.coollabs.io/coolify/services/

[7] https://docs.coollabs.io/coolify/installation

[8] https://kubesail.com/homepage

[9] https://pibox.io/

[10] https://kubesail.com/template/erulabs/photostructure

[11] https://www.cloudron.io/


Thank you for compiling this list.


sure, it's good for learning, but this has SPOF written all over it.


r/selfhosted


This might be controversial, but I recommend picking up copies of:

- K&R

- Stroustrup's 'Tour of C++'

- Stroustrup's 'Programming: Principles and Practice'

You don't have to do the exercises because your main goal isn't to become a programmer. But reading through these and getting an idea how data, types, memory, and files work at a low level will help add a LOT of context to using Unix-based operating systems. The first two books you can probably get through in a weekend or two, the latter is quite a bit more, but you'll go a long way with the first 8-10 chapters + the ones about I/O.

This has little to directly do with networking, but many of the resources on networking assume the knowledge that is contained in those books. Networking is a whole intimidating ocean and using high-level resources is like starting at the surface, looking down into an abyss. With these books, there's still a whole ocean to explore, but now you've got scuba gear and you're standing on the ocean floor looking up.


I know enough to know who Kernighan, Ritchie, and Stroustrup are.

I know enough to know why this (sincere, and appreciated!) answer is being downvoted. :-)

(I encountered these gentlemen in the 1980s. Before I changed my major.)


Haha, well you did say you were a total beginner. Perhaps you are more intermediate.



