Chrony is an implementation of the Network Time Protocol (NTP).
This step installs chrony, configures the upstream pools, and starts serving NTP on the internal networks.
```
# salt/pillars/chrony.sls
chrony:
  pool:
    - 0.se.pool.ntp.org
    - 1.se.pool.ntp.org
    - 2.se.pool.ntp.org
    - 3.se.pool.ntp.org
```
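Assuming the state renders one `pool` directive per pillar entry plus an `allow` rule for the internal networks, the resulting `/etc/chrony.conf` would contain lines along these lines (the subnet shown is a placeholder):

```
# /etc/chrony.conf (illustrative excerpt)
pool 0.se.pool.ntp.org iburst
pool 1.se.pool.ntp.org iburst
pool 2.se.pool.ntp.org iburst
pool 3.se.pool.ntp.org iburst
# permit NTP queries from the internal networks (example subnet)
allow 192.168.0.0/16
```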
## dnsmasq
Serves DHCP and DNS for the internal networks.
This step installs dnsmasq and configures DHCP for all internal networks defined in `salt/pillars/network.sls`.
It also configures the PXE boot options used by later steps, among other things.
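The generated configuration would contain entries roughly like the following (illustrative only; the interface, address range, and boot file are assumptions, as the real values come from `salt/pillars/network.sls`):

```
# /etc/dnsmasq.conf (illustrative excerpt)
interface=eth1
dhcp-range=192.168.100.50,192.168.100.150,12h
# hand out a PXE boot file to netbooting clients
dhcp-boot=pxelinux.0
```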
## docker
This step creates a podman container running a local Docker registry pull-through cache; a systemd service called `registry-container.service` is also created to manage it.
```
# salt/pillars/docker.sls
docker:
  username:
  access_token:
  url: docker.io/registry
  tag: 2.7.1
```
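On the client side, pointing podman at a pull-through cache is done with a mirror entry in `registries.conf`; a sketch of such an entry is below (the mirror host and port are assumptions, not values taken from this setup):

```
# /etc/containers/registries.conf (illustrative mirror entry)
[[registry]]
prefix = "docker.io"
location = "docker.io"

# redirect docker.io pulls through the local pull-through cache
[[registry.mirror]]
location = "registry.suse.lan:5000"
insecure = true
```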
## firewalld
Configures firewalld services and networks
## hostapd
Installs and configures hostapd to use the wireless interface as an access point
## podman
Installs podman and configures it to use the [docker registry proxy](#docker)
## rancher
Installs the Rancher server in a podman container and creates a systemd unit called `rancher`.
The container redirects host ports `6080->80` and `6443->443` to the container.
It also adds an [nginx](#nginx) reverse proxy configuration for `rancher.suse.lan`.
```
# salt/pillars/rancher.sls
rancher:
  ca_passphrase: rancher
  url: docker.io/rancher/rancher
  tag: v2.6.1
  bootstrapPassword: rancher
```
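The reverse proxy entry the state adds would look roughly like this (illustrative; the file path and exact directives depend on the [nginx](#nginx) state):

```
# /etc/nginx/vhosts.d/rancher.conf (illustrative)
server {
    listen 443 ssl;
    server_name rancher.suse.lan;

    location / {
        # forward to the rancher container's mapped host port
        proxy_pass https://localhost:6443;
        proxy_set_header Host $host;
    }
}
```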
## remote-desktop
Installs `xorg-x11-Xvnc`, disables Wayland logins,
and creates a VNC login session for the user running `update.sh`.
```
# salt/pillars/remote-desktop.sls
remote-desktop:
  password: linux0
```
## rmt
Installs and configures RMT (the SUSE Repository Mirroring Tool).
```
# salt/pillars/rmt.sls
rmt:
  ca_passphrase: linux
  db_password: linux
  scc:
    username:
    password:
  stopped_services:
    - rmt-server-mirror.timer
    - rmt-server-sync.timer
    - rmt-server-systems-scc-sync.timer
  products:
    - name: SUSE Linux Enterprise Server 15 SP3 x86_64
      id: 2140
    - name: SUSE Linux Enterprise High Performance Computing 15 SP3 x86_64
      id: 2133
    - name: Containers Module 15 SP3 x86_64
      id: 2157
    - name: SUSE Linux Enterprise Micro 5.0 x86_64
      id: 2202
    - name: SUSE Linux Enterprise Micro 5.1 x86_64
      id: 2283
    - name: SUSE Linux Enterprise High Availability Extension 15 SP3 x86_64
      id: 2195
    - name: openSUSE Leap 15.3 aarch64
      id: 2233
    - name: openSUSE Leap 15.3 x86_64
      id: 2236
    - name: Public Cloud Module 15 SP3 x86_64
      id: 2175
```
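The `scc` and `db_password` values above would end up in RMT's own configuration; a sketch of the relevant part of that file is below (the key layout is an assumption based on RMT's YAML config format, and the placeholder credentials are not real values):

```
# /etc/rmt.conf (illustrative excerpt)
database:
  username: rmt
  password: linux
scc:
  username: <your SCC mirroring username>
  password: <your SCC mirroring password>
```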
## ssh
Installs the OpenSSH server and starts the daemon.
It also configures the authorized keys for remote sessions to the admin server:
add SSH public keys to `user-pub-keys` in your `local.sls` and they will be added.
```
# salt/pillars/ssh.sls
ssh:
  user-pub-keys: []
```
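For example, an override in `local.sls` could look like this (the key below is a placeholder, not a real public key):

```
# salt/pillars/local.sls (illustrative)
ssh:
  user-pub-keys:
    - ssh-ed25519 AAAAC3Nza... user@laptop
```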
## tlu-harvester
This state creates all files necessary to install a Harvester cluster on node1, node2, and node3.
It will create the [pxe](#pxe) configurations and also manifests that can be applied to your Harvester cluster once it is up and running. The manifests will land in the `$HOME/tlu-harvester` directory; just apply them with kubectl.
It will also download some images and place them in the corresponding folder on [www.suse.lan](#nginx).
If you download SUSE images, place them in `/srv/www/htdocs/images/suse/`, and run this state, manifests for them will be created and added to the `$HOME/tlu-harvester` directory.