# Components - Salt States
- [chrony](#chrony)
- [dnsmasq](#dnsmasq)
- [docker](#docker)
- [firewalld](#firewalld)
- [hostapd](#hostapd)
- [hosts](#hosts)
- [mariadb](#mariadb)
- [nfs-server](#nfs-server)
- [nginx](#nginx)
- [packages](#packages)
- [podman](#podman)
- [pxe](#pxe)
- [rancher](#rancher)
- [remote-desktop](#remote-desktop)
- [rmt](#rmt)
- [ssh](#ssh)
- [tlu-harvester](#tlu-harvester)
- [vlan](#vlan)
- [wol](#wol)
## chrony
Chrony is an implementation of the Network Time Protocol (NTP).
This step will install chrony, configure the upstream pools and start serving NTP on the internal networks.
```
# salt/pillars/chrony.sls
chrony:
  pool:
    - 0.se.pool.ntp.org
    - 1.se.pool.ntp.org
    - 2.se.pool.ntp.org
    - 3.se.pool.ntp.org
```
## dnsmasq
Serves DHCP and DNS for the internal networks.
This step will install dnsmasq and configure DHCP for all internal networks defined in `salt/pillars/network.sls`.
It will also configure the PXE boot settings (next server, etc.) used for [pxe](#pxe).
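A minimal sketch of what this state roughly amounts to, assuming a Jinja template for `/etc/dnsmasq.conf` somewhere in the `salt/` tree (the `salt://dnsmasq/files/...` path and the state IDs below are illustrative, not the repo's actual layout):
```
# Illustrative sketch only; paths and state IDs are assumptions
dnsmasq:
  pkg.installed: []

/etc/dnsmasq.conf:
  file.managed:
    - source: salt://dnsmasq/files/dnsmasq.conf.j2   # hypothetical template path
    - template: jinja

dnsmasq-service:
  service.running:
    - name: dnsmasq
    - enable: True
    - watch:
      - file: /etc/dnsmasq.conf
```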
## docker
This step will create a podman container running a local Docker registry as a pull-through cache; a systemd service called `registry-container.service` is also created.
```
# salt/pillars/docker.sls
docker:
  username:
  access_token:
  url: docker.io/registry
  tag: 2.7.1
```
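The pull-through behaviour comes from the upstream registry image's proxy mode. Conceptually the container ends up with a `config.yml` along these lines (a sketch, not the exact file the state writes; the listen port and storage path are just the registry defaults), with the pillar's `username` and `access_token` filling in the credentials:
```
# Sketch of the registry's pull-through (proxy) configuration
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
  username: <docker:username from the pillar>
  password: <docker:access_token from the pillar>
```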
## firewalld
Configures firewalld services and networks
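A minimal sketch of what such a configuration can look like with Salt's `firewalld.present` state; the zone name, service list and interface are example values, not necessarily what the TLU uses:
```
# Illustrative only: one zone with the services the internal network needs
internal-zone:
  firewalld.present:
    - name: internal
    - services:
      - dns
      - dhcp
      - ntp
      - ssh
    - interfaces:
      - eth1          # example internal interface
```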
## hostapd
Installs and configures hostapd to use the wireless interface as an access point
```
# salt/pillars/hostapd.sls
hostapd:
  country_code: SE
  ssid: Transportable Lab Unit
  channel: 6
  wpa_passphrase: linux2linux
```
## hosts
Configures the hostname and `/etc/hosts` file so [dnsmasq](#dnsmasq) has correct information
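Roughly, this comes down to setting the hostname and pinning name/address pairs in `/etc/hosts`; the host name and address below are examples only:
```
# Illustrative sketch; real names and addresses come from the pillars
set-hostname:
  cmd.run:
    - name: hostnamectl set-hostname admin.suse.lan
    - unless: test "$(hostname -f)" = "admin.suse.lan"

admin.suse.lan:
  host.present:
    - ip: 192.168.100.1   # example internal address
```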
## mariadb
Installs and configures mariadb, which [rmt](#rmt) needs for its database
```
# salt/pillars/mysql.sls
mysql:
  root_password: linux
```
## nfs-server
Installs nfs-server and creates a backup export `/srv/exports/backups <internal network>/24(rw,no_root_squash,sync,no_subtree_check)`
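A sketch of the equivalent Salt state, assuming the usual openSUSE package and service names; the subnet below stands in for the `<internal network>` placeholder above:
```
# Illustrative sketch; 192.168.100.0/24 is an example internal subnet
nfs-kernel-server:
  pkg.installed: []

/etc/exports:
  file.managed:
    - contents: |
        /srv/exports/backups 192.168.100.0/24(rw,no_root_squash,sync,no_subtree_check)

nfs-server:
  service.running:
    - enable: True
    - watch:
      - file: /etc/exports
```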
## nginx
Installs nginx and configures the www.suse.lan web site
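As a rough sketch (the vhost path follows openSUSE's `vhosts.d` convention, and the document root is assumed to be `/srv/www/htdocs`, the same tree the [tlu-harvester](#tlu-harvester) images land in):
```
# Illustrative sketch; vhost filename and root are assumptions
nginx:
  pkg.installed: []

/etc/nginx/vhosts.d/www.suse.lan.conf:
  file.managed:
    - contents: |
        server {
            listen 80;
            server_name www.suse.lan;
            root /srv/www/htdocs;
        }

nginx-service:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: /etc/nginx/vhosts.d/www.suse.lan.conf
```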
## packages
Installs the additional packages listed in the pillar
```
# salt/pillars/packages.sls
packages:
  - vim
  - jq
```
## podman
Installs podman and configures it to use the [docker registry proxy](#docker)
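In practice that means pointing `registries.conf` at the local pull-through registry. A sketch of what such a mirror entry looks like; the `registry.suse.lan:5000` address is an assumption about where the [docker](#docker) container listens:
```
# Illustrative sketch; mirror address is an assumption
/etc/containers/registries.conf:
  file.managed:
    - contents: |
        unqualified-search-registries = ["docker.io"]

        [[registry]]
        prefix = "docker.io"
        location = "docker.io"

        [[registry.mirror]]
        location = "registry.suse.lan:5000"
        insecure = true
```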
## rancher
Installs the Rancher server in a podman container and creates a systemd unit called `rancher`.
Host ports `6080` and `6443` are forwarded to container ports `80` and `443`.
It also adds a [nginx](#nginx) reverse proxy configuration to `rancher.suse.lan`
```
# salt/pillars/rancher.sls
rancher:
  ca_passphrase: rancher
  url: docker.io/rancher/rancher
  tag: v2.6.1
  bootstrapPassword: rancher
```
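The port mapping described above corresponds to a container created roughly like this (a sketch of the idea, not the exact command the `rancher` systemd unit runs):
```
# Illustrative sketch; flags follow Rancher's single-node container setup
rancher-container:
  cmd.run:
    - name: >
        podman run -d --name rancher --privileged
        -p 6080:80 -p 6443:443
        docker.io/rancher/rancher:v2.6.1
    - unless: podman container exists rancher
```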
## remote-desktop
Installs `xorg-x11-Xvnc`, disables Wayland logins
and creates a VNC login session for the user running `update.sh`
```
# salt/pillars/remote-desktop.sls
remote-desktop:
  password: linux0
```
## rmt
Installs and configures rmt (SUSE's Repository Mirroring Tool)
```
# salt/pillars/rmt.sls
rmt:
  ca_passphrase: linux
  db_password: linux
  scc:
    username:
    password:
  stopped_services:
    - rmt-server-mirror.timer
    - rmt-server-sync.timer
    - rmt-server-systems-scc-sync.timer
  products:
    -
      name: SUSE Linux Enterprise Server 15 SP3 x86_64
      id: 2140
    -
      name: SUSE Linux Enterprise High Performance Computing 15 SP3 x86_64
      id: 2133
    -
      name: Containers Module 15 SP3 x86_64
      id: 2157
    -
      name: SUSE Linux Enterprise Micro 5.0 x86_64
      id: 2202
    -
      name: SUSE Linux Enterprise Micro 5.1 x86_64
      id: 2283
    -
      name: SUSE Linux Enterprise High Availability Extension 15 SP3 x86_64
      id: 2195
    -
      name: openSUSE Leap 15.3 aarch64
      id: 2233
    -
      name: openSUSE Leap 15.3 x86_64
      id: 2236
    -
      name: Public Cloud Module 15 SP3 x86_64
      id: 2175
```
## ssh
Installs the OpenSSH server and starts the daemon.
It also configures the authorized keys for remote sessions to the admin server:
add SSH public keys to `user-pub-keys` in your `local.sls` and they will be installed.
```
# salt/pillars/ssh.sls
ssh:
  user-pub-keys: []
```
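Each entry in `user-pub-keys` ends up as an `authorized_keys` line for the admin user. A sketch of how that can be expressed with Salt's `ssh_auth.present`; the `tux` user name is only a placeholder:
```
# Illustrative sketch; loops over the pillar list and installs each key
{% for key in pillar.get('ssh', {}).get('user-pub-keys', []) %}
ssh-key-{{ loop.index }}:
  ssh_auth.present:
    - name: {{ key }}
    - user: tux        # placeholder user
{% endfor %}
```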
## tlu-harvester
This state creates all files necessary to install a harvester cluster on node1, node2 and node3.
It will create the [pxe](#pxe) configurations and also manifests that can be applied to your Harvester cluster once it is up and running. The manifests land in the `$HOME/tlu-harvester` directory; just apply them with kubectl.
It will also download some images and place them in the corresponding folder on the [www.suse.lan](#nginx) web server.
If you download SUSE images, place them in `/srv/www/htdocs/images/suse/` and run this state, manifests for them will be created and added to the `$HOME/tlu-harvester` directory.
```
# salt/pillars/tlu-harvester.sls
tlu-harvester:
  version: 0.3.0
  dns_host: harvester
  password: rancher
  token: ThisShouldBeConfiguredInYour_local.sls
  os:
    ssh_authorized_keys: []
    password: rancher
  install:
    mgmt-interface: enp2s0f0
    device: /dev/nvme0n1
  images:
    opensuse:
      - name: openSUSE Leap 15.3
        url: https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-OpenStack-Cloud.qcow2
        checksum: 7207cce5b77d9d040610c39cd3d09437489797882b1b834acfb8b0f9d82be26c
        ns: default
      - name: openSUSE MicroOS
        url: https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2
    ubuntu:
      - name: ubuntu 20.04 (Focal Fossa)
        url: https://cloud-images.ubuntu.com/focal/20211015/focal-server-cloudimg-amd64.img
        checksum: c7adca2038a5fdda38328ecd461462bf4ab2cbaec2cc1bfd9340d9ee6bc543a8
        ns: default
      - name: ubuntu 21.04 (Hirsute Hippo)
        url: https://cloud-images.ubuntu.com/hirsute/20211017/hirsute-server-cloudimg-amd64.img
        checksum: 2d8c7f872aab587f70268a34f031c6380197f6940b29eb5f241050bb98ba420e
```
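The generated image manifests are plain Harvester `VirtualMachineImage` resources. A sketch of what one of them roughly looks like; the object name and the `www.suse.lan` URL are assumptions about how the state names and serves the images:
```
# Illustrative sketch of a generated manifest in $HOME/tlu-harvester
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: opensuse-leap-15-3        # assumed object name
  namespace: default
spec:
  displayName: openSUSE Leap 15.3
  url: http://www.suse.lan/images/opensuse/openSUSE-Leap-15.3-JeOS.x86_64-OpenStack-Cloud.qcow2
```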
## vlan
Configures the VLAN interfaces; the settings are in `salt/pillars/network.sls`
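Assuming the host uses wicked-style `ifcfg` files (as on openSUSE), a sketch of a single VLAN interface; the interface name, VLAN id and address are example values:
```
# Illustrative sketch; the real values come from salt/pillars/network.sls
/etc/sysconfig/network/ifcfg-vlan10:
  file.managed:
    - contents: |
        STARTMODE='auto'
        BOOTPROTO='static'
        IPADDR='192.168.110.1/24'
        ETHERDEVICE='eth0'
        VLAN_ID='10'
```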
## wol
Creates a `$HOME/wol` bash script for sending Wake on LAN packets to node1, node2 and node3.
The MAC addresses need to be set in your `local.sls`.
```
network:
  wol:
    1: xx:xx:xx:xx:xx:9b
    2: xx:xx:xx:xx:xx:0a
    3: xx:xx:xx:xx:xx:58
```