# Installing the Admin server
## Prerequisites
The installation script assumes an installed and configured openSUSE Leap 15.3 x86_64.
My setup has two NICs and one Wi-Fi interface:
- `eth0` is connected to the lab switch (the internal network)
- `eth1` is the external interface
- `wlan0` acts as a wireless access point for the internal network
## Installing
Clone this repository and run the `update.sh` script as a normal user with **sudo** permissions; the admin server needs access to the Internet.
The update script runs through several steps (sketched after this list):
- check prerequisites: if any packages are needed to install/configure the admin server, it will ask to install them
- salt-call: the main installation/configuration is done with a masterless salt-call; this step applies the highstate
- rmt sync: a sync with the SUSE Customer Center (SCC) is performed
- rmt enable products: enables the preconfigured products to sync with SCC
- rmt mirror: mirrors all enabled products
- install tools: installs the latest versions of some additional tools into `$HOME/bin`, such as helm, kubectl, stern, virtctl, etc.
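For orientation, the main steps correspond roughly to the commands below. This is a hedged sketch of what the script automates (with `<product>` as a placeholder), not a replacement for running `update.sh`:
```
sudo salt-call --local state.highstate   # masterless Salt run applying the highstate
sudo rmt-cli sync                        # sync product data with SUSE Customer Center
sudo rmt-cli products enable <product>   # enable a preconfigured product for mirroring
sudo rmt-cli mirror                      # mirror all enabled products
```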
### Customizing the setup
The preconfigured defaults are located in the `salt/pillars/*.sls` files. You can override them by creating a `salt/pillars/local.sls` and specifying your settings.
:warning: You need to at least specify your SCC organization mirror credentials and your Docker username and access token:
```yaml
rmt:
  scc:
    username: <SCC mirror credential username>
    password: <SCC mirror credential password>
docker:
  username: <Docker Hub username>
  access_token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX
```
There are also a number of default passwords you might want to change (a sample override follows this list), for example:
- WPA passphrase for **hostapd**
- root password for **mariadb**
- CA certificate passphrase for rancher
- bootstrap password for rancher
- CA certificate passphrase for rmt
- DB password for rmt
- TLU Harvester OS password
- TLU Harvester admin password
- TLU Harvester token
- remote-desktop password
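As a minimal sketch, a `local.sls` that changes two of those defaults could look like the following; it uses the `mysql` and `hostapd` pillar keys shown in the component sections below, and the other passwords live under the keys in their respective pillar files:
```
cat > salt/pillars/local.sls <<'EOF'
# local overrides of the defaults in salt/pillars/*.sls
mysql:
  root_password: <new mariadb root password>
hostapd:
  wpa_passphrase: <new WPA passphrase>
EOF
```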
When you have made your changes, just run
```
./update.sh
```
If you make changes in your `salt/pillars/local.sls`, you can run the following to apply them:
```
./update.sh --salt
```
## Components - Salt States
- [chrony](#chrony)
- [dnsmasq](#dnsmasq)
- [docker](#docker)
- [firewalld](#firewalld)
- [hostapd](#hostapd)
- [hosts](#hosts)
- [mariadb](#mariadb)
- [nfs-server](#nfs-server)
- [nginx](#nginx)
- [packages](#packages)
- [podman](#podman)
- [pxe](#pxe)
- [rancher](#rancher)
- [remote-desktop](#remote-desktop)
- [rmt](#rmt)
- [ssh](#ssh)
- [tlu-harvester](#tlu-harvester)
- [vlan](#vlan)
- [wol](#wol)
<a name="chrony"/>
### chrony
Chrony is an implementation of the Network Time Protocol (NTP).
This step will install chrony, configure upstream pools, and start serving NTP on the internal networks.
```yaml
# salt/pillars/chrony.sls
chrony:
  pool:
    - 0.se.pool.ntp.org
    - 1.se.pool.ntp.org
    - 2.se.pool.ntp.org
    - 3.se.pool.ntp.org
```
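Once the state has been applied, you can sanity-check time synchronization with chrony's standard client:
```
chronyc sources -v   # list the configured upstream pools and their reachability
chronyc tracking     # show the current synchronization status
```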
<a name="dnsmasq"/>
### dnsmasq
Serves DHCP and DNS for the internal network.
This step will install dnsmasq and configure DHCP for all internal networks defined in `salt/pillars/network.sls`.
It will also configure the PXE boot settings used in the [pxe](#pxe) step, etc.
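Assuming `dig` (from bind-utils) is available, a quick way to check that the service answers; the hostname is a placeholder:
```
systemctl status dnsmasq             # confirm the service is running
dig @localhost <internal hostname>   # resolve a lab host through dnsmasq
```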
<a name="docker"/>
### docker
This step will create a podman container running a local Docker registry pull-through instance; a systemd service called `registry-container.service` is also created.
```yaml
# salt/pillars/docker.sls
docker:
  username:
  access_token:
  url: docker.io/registry
  tag: 2.7.1
```
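To check the registry container, query the standard registry v2 API; the port is an assumption (5000 is the registry default), so check the actual unit or pillar for the real value:
```
systemctl status registry-container.service   # the systemd unit created by this state
curl http://localhost:5000/v2/_catalog        # registry v2 API; port 5000 is an assumption
```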
<a name="firewalld"/>
### firewalld
Configures firewalld services and networks.
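The resulting configuration can be inspected with the standard firewalld tooling:
```
firewall-cmd --get-active-zones   # which zones the interfaces were assigned to
firewall-cmd --list-all           # services and ports allowed in the default zone
```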
<a name="hostapd"/>
Installs and configures hostapd to use the wireless interface as a access point
```yaml
# salt/pillars/hostapd.sls
hostapd:
  country_code: SE
  ssid: Transportable Lab Unit
  channel: 6
  wpa_passphrase: linux2linux
```
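After the state has run, you can verify that the wireless interface is in access-point mode (`wlan0` is the interface from the prerequisites):
```
systemctl status hostapd   # confirm the access point service is running
iw dev wlan0 info          # the interface should report "type AP"
```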
<a name="hosts"/>
### hosts
Configures the hostname and the `/etc/hosts` file so [dnsmasq](#dnsmasq) has correct information.
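A quick check that the hostname and the hosts entries line up:
```
hostnamectl status         # show the configured hostname
getent hosts $(hostname)   # look up the host via /etc/hosts
```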
<a name="mariadb"/>
### mariadb
Installs and configures mariadb; [rmt](#rmt) needs a database.
```yaml
# salt/pillars/mysql.sls
mysql:
  root_password: linux
```
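To confirm the database is up and the pillar's root password works:
```
systemctl status mariadb                  # confirm the database service is running
mysql -u root -p -e 'SELECT VERSION();'   # log in with root_password from the pillar
```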
<a name="nfs-server"/>
### nfs-server