Move setup of foundational services from k8s to k3s

It simply doesn't make sense to split the installation of the
kubernetes-cluster from the provisioning of foundational services.
Therefore I drop the idea of organising these services in a separate
terraform-setup and instead ensure their presence with ansible, as it's
already responsible for setting up the cluster and keeping it up-to-date.
This commit is contained in:
2025-09-21 19:03:46 +02:00
parent adec38e1cd
commit fef383fed4
15 changed files with 121 additions and 177 deletions

.gitignore vendored

@@ -2,6 +2,7 @@ inventory.*
!inventory.ini.tftpl
README.*
!README.adoc
password.txt
# Created by https://www.toptal.com/developers/gitignore/api/vim,ansible,jetbrains,terraform
# Edit at https://www.toptal.com/developers/gitignore?templates=vim,ansible,jetbrains,terraform


@@ -18,9 +18,6 @@ sleep 300 # Wait 5 minutes since the machines start _slow_ sometimes
cd ../k3s
ansible-galaxy install -r requirements.yml
ansible-playbook site.yml
cd ../k8s
terraform init
terraform apply
----
== Preparation
@@ -51,18 +48,12 @@ So, make sure to apply the infra at least once, before running these playbooks.
include::./k3s/README.adoc[tag=setup]
=== k8s
Run this setup in the `k8s/` directory.
include::./k8s/README.adoc[tag=setup]
== Enlarge / Reduce size of cluster
Increase::
--
. Simply adjust the number of agents/servers in your `infra/config.auto.tfvars`.
. Run steps 3 & 4 of the setup again
. Then run the ansible-playbook of k3s again
--
Decrease::
--
@@ -85,16 +76,17 @@ Instead proceed as the following:
* Firewall rules to block everything from the servers except for:
** ping (protocol: `icmp`)
** kubernetes api (Usually port `6443`)
** ssh (I prefer to use a non-standard port since I want to provide a git-server on port `22`)
** public services, e.g. http and https (port `80` and `443`)
* Creating the kubernetes-servers in the public subnet
* Creating the kubernetes-agents in the private subnet
* Setting up routing on all servers
* Setup SSH-connections
** ssh (I prefer a non-standard port, usually port `1022`)
** public services, e.g. http and https (port `80` and `443`) but also git-ssh (port `22`)
* Creating the machines for kubernetes-servers in the public subnet
* Creating the machines for kubernetes-agents in the private subnet
* Creating DNS-records in Hetzner Cloud
`k3s/`::
* Setting up SSH-connections
* Setting up routing on all servers
* Installing k3s
* Keeping the software up-to-date
* Adding foundational services to the cluster


@@ -1,7 +1,32 @@
= k3s
:icons: font
This project is responsible for setting up a k3s installation.
This project is responsible for setting up a k3s installation and providing a set of foundational services in the cluster.
The provided services are:
cert-manager::
This allows issuing TLS certificates.
The certificates are issued via https://letsencrypt.org[let's encrypt], against either its staging or production environment.
minio::
Allows me to store data in object storage.
+
TODO: Not set up yet!
concourse-ci::
A powerful CI service which I like to use to automate all kinds of workloads.
+
TODO: Not set up yet!
gitea::
My personal favourite git-server.
+
TODO: Not set up yet!
snappass::
A secure and reliable tool to share passwords.
+
TODO: Not set up yet!
== Setup
@@ -19,4 +44,13 @@ ansible-playbook site.yml # <2>
[IMPORTANT]
The second step will overwrite any existing kube config, which might destroy existing settings!
[NOTE]
--
To apply the playbook you may need to install additional packages:
* https://helm.sh/docs/intro/install/[helm]
* https://github.com/databus23/helm-diff?tab=readme-ov-file#install[helm-diff]
* python3-kubernetes (Debian/Ubuntu)
--
// end::setup[]
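Before running the playbook, the tools listed in the note above can be sanity-checked; a minimal sketch (tool names taken from the note, `helm-diff` is a helm plugin and therefore not a binary on `PATH`):

```shell
# Preflight check for the playbook prerequisites: report every tool
# instead of aborting at the first missing one. helm-diff has to be
# verified separately via `helm plugin list`.
checked=""
for tool in helm ansible-playbook; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
  checked="$checked $tool"
done
echo "checked tools:$checked"
```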


@@ -1,5 +1,6 @@
[defaults]
nocows = True
inventory = inventory.ini,config.ini
inventory = inventory.ini,config.yml,vault.yml
display_skipped_hosts = False
error_on_undefined_vars = True
vault_password_file = password.txt


@@ -1,15 +0,0 @@
[all:vars]
k8s_api_endpoint = "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"
[k3s_cluster:vars]
ansible_user = root
# note the space between the IPs!
dns_servers = 8.8.8.8 8.8.4.4
[agent:vars]
ansible_ssh_common_args = -o StrictHostKeyChecking=accept-new -o ProxyCommand="ssh -p 1022 -W %h:%p -q root@{{ k8s_api_endpoint }}"
k3s_version = v1.31.6+k3s1
[server:vars]
ansible_ssh_common_args = '-o StrictHostKeyChecking=accept-new'
k3s_version = v1.31.6+k3s1

k3s/config.yml Normal file

@@ -0,0 +1,28 @@
all:
vars:
k8s_api_endpoint: "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"
cert_manager_state: present
cert_manager_version: v1.18.2
letsencrypt_clusterissuers:
staging:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: "{{ vault_letsencrypt_issuer_email }}"
prod:
server: https://acme-v02.api.letsencrypt.org/directory
email: "{{ vault_letsencrypt_issuer_email }}"
k3s_cluster:
vars:
ansible_user: root
# note the space between the IPs!
dns_servers: 8.8.8.8 8.8.4.4
agent:
vars:
ansible_ssh_common_args: -o StrictHostKeyChecking=accept-new -o ProxyCommand="ssh -p 1022 -W %h:%p -q root@{{ k8s_api_endpoint }}"
k3s_version: v1.31.6+k3s1
server:
vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'
k3s_version: v1.31.6+k3s1


@@ -0,0 +1,3 @@
cert_manager_state: present
cert_manager_version: v1.18.2
letsencrypt_clusterissuers: {}


@@ -0,0 +1,29 @@
- name: Deploy cert manager {{ cert_manager_version }}
kubernetes.core.helm:
name: cert-manager
chart_ref: "oci://quay.io/jetstack/charts/cert-manager"
chart_version: "{{ cert_manager_version }}"
release_namespace: "cert-manager"
create_namespace: True
release_state: "{{ cert_manager_state }}"
set_values:
- value: crds.enabled=true
- name: Provide let's encrypt clusterissuers
kubernetes.core.k8s:
definition:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: "letsencrypt-{{ item.key }}"
spec:
acme:
email: "{{ item.value.email }}"
privateKeySecretRef:
name: "letsencrypt-{{ item.key }}"
server: "{{ item.value.server }}"
solvers:
- http01:
ingress:
class: "traefik"
loop: "{{ letsencrypt_clusterissuers | dict2items }}"
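The loop feeds the mapping through Jinja's `dict2items` filter; for the staging/prod mapping defined in `config.yml`, the resulting loop items look roughly like this data fragment:

```yaml
# Result of `letsencrypt_clusterissuers | dict2items` for the mapping
# in config.yml: one {key, value} item per issuer.
- key: staging
  value:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: "{{ vault_letsencrypt_issuer_email }}"
- key: prod
  value:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "{{ vault_letsencrypt_issuer_email }}"
```

`item.key` thus becomes the `staging`/`prod` suffix of the ClusterIssuer name, and `item.value` carries the per-issuer server and e-mail.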


@@ -26,3 +26,9 @@
- init
- config
- update
- hosts: localhost
roles:
- role: k8s-setup
tags:
- init
- k8s

k3s/vault.yml Normal file

@@ -0,0 +1,9 @@
$ANSIBLE_VAULT;1.1;AES256
39663830333033356463613461373238356334303063343634343463643961313266636163326638
6161313335653163656230333566343465353535663630620a353664363735656264333766303136
61333138366230336339316638633834393738663032303732623832326233323635363230626430
3564653635323334320a636531633061376135666333303961643633356361306635666639396534
34363933623239316439396636663164396633336639346539356664663136386262326666656665
64623764316530363163353033656536343034383039613961623563373836383238366131666362
38323739353461303135373239646334616134616539313133636634306265343261623038613030
33643830666331326164
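Decrypted, this vault presumably holds the issuer address that `config.yml` references; a hypothetical plaintext sketch (the variable name is taken from `config.yml`, the address is a placeholder):

```yaml
# Hypothetical decrypted content of k3s/vault.yml; encrypted in-place
# via `ansible-vault encrypt --vault-password-file password.txt vault.yml`.
vault_letsencrypt_issuer_email: admin@example.org
```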


@@ -1,42 +0,0 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/helm" {
version = "3.0.2"
constraints = "3.0.2"
hashes = [
"h1:+tHGl509bhyUrvvj9GQTBsdK+ImHJnRuo6ppDZPavqY=",
"zh:2778de76c7dfb2e85c75fe6de3c11172a25551ed499bfb9e9f940a5be81167b0",
"zh:3b4c436a41e4fbae5f152852a9bd5c97db4460af384e26977477a40adf036690",
"zh:617a372f5bb2288f3faf5fd4c878a68bf08541cf418a3dbb8a19bc41ad4a0bf2",
"zh:84de431479548c96cb61c495278e320f361e80ab4f8835a5425ece24a9b6d310",
"zh:8b4cf5f81d10214e5e1857d96cff60a382a22b9caded7f5d7a92e5537fc166c1",
"zh:baeb26a00ffbcf3d507cdd940b2a2887eee723af5d3319a53eec69048d5e341e",
"zh:ca05a8814e9bf5fbffcd642df3a8d9fae9549776c7057ceae6d6f56471bae80f",
"zh:ca4bf3f94dedb5c5b1a73568f2dad7daf0ef3f85e688bc8bc2d0e915ec148366",
"zh:d331f2129fd3165c4bda875c84a65555b22eb007801522b9e017d065ac69b67e",
"zh:e583b2b478dde67da28e605ab4ef6521c2e390299b471d7d8ef05a0b608dcdad",
"zh:f238b86611647c108c073d265f8891a2738d3158c247468ae0ff5b1a3ac4122a",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
]
}
provider "registry.terraform.io/hashicorp/kubernetes" {
version = "2.38.0"
constraints = "2.38.0"
hashes = [
"h1:5CkveFo5ynsLdzKk+Kv+r7+U9rMrNjfZPT3a0N/fhgE=",
"zh:0af928d776eb269b192dc0ea0f8a3f0f5ec117224cd644bdacdc682300f84ba0",
"zh:1be998e67206f7cfc4ffe77c01a09ac91ce725de0abaec9030b22c0a832af44f",
"zh:326803fe5946023687d603f6f1bab24de7af3d426b01d20e51d4e6fbe4e7ec1b",
"zh:4a99ec8d91193af961de1abb1f824be73df07489301d62e6141a656b3ebfff12",
"zh:5136e51765d6a0b9e4dbcc3b38821e9736bd2136cf15e9aac11668f22db117d2",
"zh:63fab47349852d7802fb032e4f2b6a101ee1ce34b62557a9ad0f0f0f5b6ecfdc",
"zh:924fb0257e2d03e03e2bfe9c7b99aa73c195b1f19412ca09960001bee3c50d15",
"zh:b63a0be5e233f8f6727c56bed3b61eb9456ca7a8bb29539fba0837f1badf1396",
"zh:d39861aa21077f1bc899bc53e7233262e530ba8a3a2d737449b100daeb303e4d",
"zh:de0805e10ebe4c83ce3b728a67f6b0f9d18be32b25146aa89116634df5145ad4",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
"zh:faf23e45f0090eef8ba28a8aac7ec5d4fdf11a36c40a8d286304567d71c1e7db",
]
}


@@ -1,21 +0,0 @@
= k8s
:icons: font
This project is responsible for providing general services in the kubernetes-cluster.
== Setup
// tag::setup[]
[WARNING]
Make sure `config.auto.tfvars` with all the needed configuration-secrets is present otherwise the module cannot be applied!
The file is savely stored in the password-manager.
[source,bash]
----
terraform init # <1>
terraform apply # <2>
----
<1> Init the terraform modules if necessary
<2> Create services in the cluster
// end::setup[]


@@ -1,56 +0,0 @@
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "oci://quay.io/jetstack/charts"
chart = "cert-manager"
version = "v1.18.2"
namespace = "cert-manager"
create_namespace = true
set = [{
name = "crds.enabled"
value = "true"
}]
}
locals {
letsencrypt = {
staging = {
server = "https://acme-staging-v02.api.letsencrypt.org/directory"
email = var.letsencrypt_issuer_email
}
prod = {
server = "https://acme-v02.api.letsencrypt.org/directory"
email = var.letsencrypt_issuer_email
}
}
}
resource "kubernetes_manifest" "letsencrypt_clusterissuer" {
depends_on = [ helm_release.cert_manager ]
for_each = local.letsencrypt
manifest = {
apiVersion = "cert-manager.io/v1"
kind = "ClusterIssuer"
metadata = {
name = "letsencrypt-${each.key}"
}
spec = {
acme = {
email = lookup(each.value, "email")
privateKeySecretRef = {
name = "letsencrypt-${each.key}"
}
server = lookup(each.value, "server")
solvers = [{
http01 = {
ingress = {
class = "traefik"
}
}
}]
}
}
}
}


@@ -1,3 +0,0 @@
variable "letsencrypt_issuer_email" {
type = string
}


@@ -1,22 +0,0 @@
terraform {
required_providers {
helm = {
source = "hashicorp/helm"
version = "3.0.2"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.38.0"
}
}
}
provider "helm" {
kubernetes = {
config_path = "~/.kube/config"
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
}