Sounds fun. What kind of mini PC are you looking at? I am a big fan of used Dell Wyse 5070 devices, which you can find from about $70: native Linux support, a rather big community (easy to Google). I'm even running one as a server and one as a mobile PC.
One thing I would consider early on is virtual machine hosting, which makes it easy to set up, test, build and destroy a system without consequences. Virtual machines (and Docker containers, and more) are easy to set up and manage through Cockpit, and you can learn the command-line tools later when you need them (rough sketch below).
IMO there is no point going directly into industry tools (Kubernetes, Docker, Ansible); it may be better to first learn why they even exist by building systems the traditional way.
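If it helps, a rough sketch of the Cockpit route, assuming a Debian/Ubuntu-style host; package names vary a bit by distro, and cockpit-podman is the usual container plugin these days:

    # Assumes Debian/Ubuntu; similar packages exist on Fedora/RHEL via dnf
    sudo apt install cockpit cockpit-machines cockpit-podman
    sudo systemctl enable --now cockpit.socket

    # The web UI for VMs and containers is then at https://<host-ip>:9090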
https://exortstore.com/product/gmktec-nucbox-9-mini-pc-cpu-a...
This is the one I am looking into.
Availability in Nepal is a tough call. I can import from India as long as they're available on amazon.in and aren't too heavy, but that's about it. Yeah, I am going to do VM hosting, everything on bare metal, easy to make and break.
This is a great way to learn!
Look into Proxmox (https://www.proxmox.com/) for setting up your own EC2-like VM platform. It makes it pretty easy to experiment with setting up and running different services on the same box in an isolated way. Practice things like setting up a VM with GPU passthrough and then running GPU-enabled Docker containers on the VM.
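As a hedged sketch of that last step, once a GPU has been passed through to a VM and the NVIDIA driver plus the NVIDIA Container Toolkit are installed inside that VM (the CUDA image tag below is just an example):

    # Inside the VM: confirm the passed-through GPU is visible
    nvidia-smi

    # Run a CUDA base image with access to all GPUs and repeat the same check
    # from inside the container
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi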
Also look into getting a Hetzner server and setting up a site-to-site WireGuard tunnel.
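For reference, a minimal sketch of the home end of such a tunnel; all keys, addresses and the Hetzner IP are placeholders you'd fill in yourself:

    # Generate a keypair on each end
    wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

    # Home-side config; 10.8.0.0/24 is a made-up tunnel subnet
    cat <<'EOF' > /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <contents of /etc/wireguard/privatekey>
    Address = 10.8.0.2/24

    [Peer]
    # The Hetzner server
    PublicKey = <Hetzner server's public key>
    Endpoint = <hetzner-public-ip>:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25
    EOF

    # Bring it up now and on every boot
    wg-quick up wg0
    systemctl enable wg-quick@wg0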
FWIW you don't really need to buy a mini PC. You will learn more by doing a build. The great thing about doing something like this is that you can get old data center parts for cheap. Ex: pick up a server motherboard (maybe a Supermicro X9DRI-F), CPUs, and RAM on eBay for < $300. Then put everything into a used server case with a new PSU and you can do the whole thing for $500 or $600. This setup will have more resources than any off-the-shelf thing you can buy.
While I absolutely support the idea of Proxmox (especially with ZFS), I think spending $600 is probably overkill to start a learning project. I would maybe start with a Dell T30 or Fujitsu Celsius W550 for < $100 used, or with a Gigabyte MC12-LE0 mainboard and a Ryzen Pro 5600. Stacking one of these with a modern NVMe drive and some ECC RAM should run an additional $200. The Fujitsu machine is old, but it has Intel AMT remote management and draws < 10 W at idle.
I can get a mini PC in a similar price range. I don't think that'd be true in Nepal (the "more resources than..." part).
This is what I used when I was first learning: https://serversforhackers.com/
For a really good practical project: build a website, then host it on your mini-PC and find a way to expose it to the internet. This will teach you a lot about DNS, proxying, building websites, managing the server (via some config management or container), monitoring, etc.
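As a hedged sketch of the "expose it" part, here's a reverse proxy in front of whatever the site runs on; the domain, upstream port and file name are placeholders, and it assumes a Debian/Ubuntu-style nginx layout with DNS and port forwarding already pointing at the box:

    sudo apt install nginx

    # example.com and port 3000 are placeholders for your domain and app
    sudo tee /etc/nginx/sites-available/mysite <<'EOF'
    server {
        listen 80;
        server_name example.com;

        location / {
            # Hand requests to the app running locally
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    EOF

    sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/
    sudo nginx -t && sudo systemctl reload nginx

From there, certbot for HTTPS and something like Prometheus plus Grafana cover the monitoring part.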
Thanks for sharing this. It is incredibly useful!
Then remember that we live in 2024 and that you need to learn how to do DevOps, not sysadmin
General advice not necessarily optimized for hirability:
First thing you do: Get another one of what you're already getting.
That way you can actually run "production" and host stuff, while still being able to experiment and play around fearlessly without breaking prod. You'll also have a spare on hand in case of hardware failure, etc.
Second, do at least basic network separation right away. Get a $10 switch and a separate subnet instead of piggybacking on your home LAN.
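A rough sketch of the subnet piece, assuming an Ubuntu host with netplan, a second NIC called eth1 facing the lab switch, and a made-up 192.168.50.0/24 lab subnet:

    # Interface name and subnet are placeholders; adjust to your hardware
    sudo tee /etc/netplan/99-lab.yaml <<'EOF'
    network:
      version: 2
      ethernets:
        eth1:
          addresses: [192.168.50.1/24]
    EOF

    sudo netplan apply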
Then, get comfortable with virtualization (qemu+kvm) and figure out a way to automate your image builds and deployments (rough sketch below).
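One hedged way to start on that, assuming virt-install and cloud-image-utils are installed; the image URL, VM name and cloud-init contents are just examples:

    # Grab a cloud image (URL is an example; any distro cloud image works)
    wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

    # Minimal cloud-init seed so the VM boots with a user and your SSH key
    cat <<'EOF' > user-data
    #cloud-config
    users:
      - name: lab
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...   # placeholder: your public key
    EOF
    cloud-localds seed.iso user-data

    # Boot a throwaway VM from the image plus the seed ISO
    virt-install --name lab-test --memory 2048 --vcpus 2 \
      --disk path=jammy-server-cloudimg-amd64.img \
      --disk path=seed.iso,device=cdrom \
      --import --os-variant ubuntu22.04 --noautoconsole

Once that's scripted, rebuilding or throwing away a VM is one command, which is the whole point.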
Automate backups early. Start simple and iterate over time.
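For the backup part, a minimal sketch with restic; the repo path and backed-up directories are placeholders, and restic wants a repo password (set RESTIC_PASSWORD or a password file for unattended runs):

    # One-time: create a repository (local path here, but sftp/S3/B2 also work)
    restic init --repo /srv/backups/homelab

    # Back up what you care about; run from cron or a systemd timer
    restic --repo /srv/backups/homelab backup /etc /home /var/lib/libvirt/images

    # Keep the repo from growing forever
    restic --repo /srv/backups/homelab forget --keep-daily 7 --keep-weekly 4 --prune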
The way I learned was to install these technologies but also write my own apps that take advantage of them. This way you aren't just playing with the technology but can also make insightful recommendations to others about that technology's limitations.
https://www.reddit.com/r/homelab/
I find you learn the basic Linux stuff by doing: reading docs, using search, and asking around.
Docker has docs and also searchable examples and blogs.
But k8s is, IMHO, an artificially fenced-off area. No matter where I ask, I either receive no response or am met with arrogance and elitism.
I can't even get the simplest thing answered, which is "what is the minimum required setup for k8s, if not using a hosted solution? I plan to use 3cp 3w nodes and utilize all resources available. Do I NEED external storage, do I NEED an external load balancer or can I use DNS-LB?"
Official k8s forum: zero response. Reddit k8s: zero response and downvotes. Home Operations Discord: zero response.
It's like this little well-kept secret that only a select few have access to.
The documentation is also not clear on that subject, because I believe the big companies like Google and Amazon want to sell you their k8s offerings, which are super expensive: over $2k per month for a 5-node cluster, are you kidding me?
I can't even get the simplest thing answered, which is "what is the minimum required setup for k8s, if not using a hosted solution?
The most minimal setup is to run a cluster inside a Docker container with KIND. 2 cores and 4 GB of memory should allow you to run some small workloads.
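A hedged sketch of that, plus an optional config if you want to mimic the multi-node layout inside Docker (the names are arbitrary):

    # Single-node cluster in one Docker container
    kind create cluster --name lab

    # Or approximate a 3 control-plane / 3 worker layout
    cat <<'EOF' > kind-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: control-plane
      - role: control-plane
      - role: worker
      - role: worker
      - role: worker
    EOF
    kind create cluster --name lab-ha --config kind-config.yaml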
I plan to use 3cp 3w nodes and utilize all resources available. Do I NEED external storage,
No. Storage is optional if you stick to stateless workloads.
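For instance, a purely stateless deployment like this needs no storage class or volumes at all (name and image are arbitrary):

    # Throwaway stateless workload: no volumes, no PVCs
    kubectl create deployment hello --image=nginx --replicas=3
    kubectl expose deployment hello --port=80 --type=NodePort
    kubectl get pods -o wide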
do I NEED an external load balancer or can I use DNS-LB?"
I'm assuming you want to bootstrap an upstream k8s cluster with kubeadm. In that case, yes, you need a load balancer to sit in front of your control-plane nodes. You can use a project like kube-vip to act as the load balancer without introducing an extra machine. If you want to use an extra machine, you'd use something like HAProxy.
You can use DNS, but it's not ideal for production without health checks; for a lab, go for it.
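If you do go the extra-machine route, here's a minimal sketch of the HAProxy side; the control-plane IPs are made up, and kubeadm would then be pointed at this box via --control-plane-endpoint:

    # Append to /etc/haproxy/haproxy.cfg on the load balancer box
    cat <<'EOF' >> /etc/haproxy/haproxy.cfg
    frontend kube-apiserver
        bind *:6443
        mode tcp
        default_backend kube-apiserver-nodes

    backend kube-apiserver-nodes
        mode tcp
        balance roundrobin
        server cp1 192.168.50.11:6443 check
        server cp2 192.168.50.12:6443 check
        server cp3 192.168.50.13:6443 check
    EOF

    systemctl restart haproxy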
That's what I've been seeing in the industry as well, and it's honestly pretty disgusting.
We were evaluating this one particular I*M product for infrastructure management, which needed to be installed on top of this other I*M platform, which specifically had to run on a particular version of RHEL and a particular version of OpenShift (no other k8s flavor would do), and they wanted it as a dedicated instance on a dedicated physical server (so we couldn't use our existing OCP instance).
The overall system requirements to run the damn thing (which was ultimately just a Ruby app) were enormous, and the quoted bill was astronomical. Customising, upgrading and maintaining it would've been a massive PITA too.
Thankfully our C-suite had a rare lightbulb moment and the whole project was canned. But man, what a massive waste of time and effort it was; we spent hundreds of hours doing the discovery, design and coding for it. Heck, I even ended up having to learn Ruby just so we could customise the app, because I*M were incapable of delivering a decent product.
We then decided to go for another white-labeled product from a competing vendor, which was admittedly a lot better than that I*M crap, but during our consultation with them it was decided that we'd need to port all our existing code and tooling to their proprietary platform: a lengthy process that would take several hundred hours of effort, with a migration plan spread over the course of a year.
Personally, none of this makes any sense to me. I can understand why we wanted to go for a commercial solution, but with the amount of time, money and effort we'd be putting in, we could've just continued developing our in-house product, which leveraged existing open-source solutions like Ansible. The whole insurance against the "getting hit by a bus" scenario and "vendor support" argument is so blown out of proportion these days.
I just want to go back to the good ol' days where being a sysadmin meant you were the one in control.