Fix trailing space.

Michael Lipp 2025-01-30 22:17:35 +01:00
parent 150b9f2908
commit ecd7ba7baf
10 changed files with 97 additions and 97 deletions


@@ -6,7 +6,7 @@
# Run Qemu in Kubernetes Pods
The goal of this project is to provide easy-to-use and flexible components
for running Qemu based VMs in Kubernetes pods.
See the [project's home page](https://jdrupes.org/vm-operator/)
for details.


@@ -1,7 +1,7 @@
# Example setup for development
The CRD must be deployed independently. Apart from that, the
`kustomize.yaml`
* creates a small cdrom image repository and


@@ -3,7 +3,7 @@ A Kubernetes operator for running VMs as pods.
VM-Operator
===========
The VM-operator enables you to easily run Qemu based VMs as pods
in Kubernetes. It is built on the
[JGrapes](https://mnlipp.github.io/jgrapes/) event driven framework.


@@ -5,12 +5,12 @@ layout: vm-operator
# The Controller
The controller component (which is part of the manager) monitors
custom resources of kind `VirtualMachine`. It creates or modifies
other resources in the cluster as required to get the VM defined
by the CR up and running.
Here is the sample definition of a VM from the
["local-path" example](https://github.com/mnlipp/VM-Operator/tree/main/example/local-path):
```yaml
@@ -28,10 +28,10 @@ spec:
    currentCpus: 2
    maximumRam: 8Gi
    currentRam: 4Gi

    networks:
    - user: {}

    disks:
    - volumeClaimTemplate:
        metadata:
@@ -58,9 +58,9 @@ spec:
# generateSecret: false
```
## Pod management
The central resource created by the controller is a
[`Pod`](https://kubernetes.io/docs/concepts/workloads/pods/)
with the same name as the VM (`metadata.name`). The pod is created only
if `spec.vm.state` is "Running" (default is "Stopped" which deletes the
@@ -72,7 +72,7 @@ and thus the VM is automatically restarted. If set to `true`, the
VM's state is set to "Stopped" when the VM terminates and the pod is
deleted.
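For example, a VM is started by setting the state in its custom
resource (fragment; field names as described above):

```yaml
spec:
  vm:
    state: Running   # "Stopped" (the default) deletes the pod
```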
[^oldSts]: Before version 3.4, the operator created a
[stateful set](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
that in turn created the pod and the PVCs (see below).
@@ -113,7 +113,7 @@ as shown in this example:
```
The disk will be available as "/dev/*name*-disk" in the VM,
using the string from `.volumeClaimTemplate.metadata.name` as *name*.
If no name is defined in the metadata, then "/dev/disk-*n*"
is used instead, with *n* being the index of the volume claim
template in the list of disks.
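As a sketch (the name is a placeholder), a disk defined as

```yaml
disks:
- volumeClaimTemplate:
    metadata:
      name: system
```

shows up as "/dev/system-disk" in the VM.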
@@ -140,28 +140,28 @@ the PVCs by label in a delete command.
## Choosing an image for the runner
The image used for the runner can be configured with
[`spec.image`](https://github.com/mnlipp/VM-Operator/blob/7e094e720b7b59a5e50f4a9a4ad29a6000ec76e6/deploy/crds/vms-crd.yaml#L19).
This is a mapping with either a single key `source` or a detailed
configuration using the keys `repository`, `path` etc.
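A sketch of both forms (how `repository` and `path` split the image
reference is an assumption; see the CRD for the authoritative keys):

```yaml
spec:
  image:
    source: ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-arch
```

or, using the detailed form:

```yaml
spec:
  image:
    # how the reference splits into these keys is an assumption
    repository: ghcr.io
    path: mnlipp/org.jdrupes.vmoperator.runner.qemu-arch
```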
Currently, two runner images are maintained: one based on
Arch Linux (`ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-arch`) and a
second based on Alpine (`ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-alpine`).
Starting with release 1.0, all versions of runner images and managers
that have the same major release number are guaranteed to be compatible.
## Generating cloud-init data
*Since: 2.2.0*
The optional object `.spec.cloudInit` with sub-objects `.cloudInit.metaData`,
`.cloudInit.userData` and `.cloudInit.networkConfig` can be used to provide
data for
[cloud-init](https://cloudinit.readthedocs.io/en/latest/index.html).
The data from the CRD will be made available to the VM by the runner
as a vfat formatted disk (see the description of
[NoCloud](https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html)).
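A sketch (the keys under `userData` are ordinary cloud-config
content and serve only as an example):

```yaml
spec:
  cloudInit:
    metaData: {}              # instance-id is generated if omitted
    userData:
      hostname: test-vm       # any cloud-config content
      timezone: Europe/Berlin
```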
If `.metaData.instance-id` is not defined, the controller automatically
@@ -180,9 +180,9 @@ generated automatically by the runner.)
*Since: 2.3.0*
You can define a display password using a Kubernetes secret.
When you start a VM, the controller checks if there is a secret
with labels "app.kubernetes.io/name: vm-runner,
app.kubernetes.io/component: display-secret,
app.kubernetes.io/instance: *vmname*" in the namespace of the
VM definition. The name of the secret can be chosen freely.
@@ -204,13 +204,13 @@ data:
```
If such a secret for the VM is found, the VM is configured to use
the display password specified. The display password in the secret
can be updated while the VM runs[^delay]. Activating/deactivating
the display password while a VM runs is not supported by Qemu and
therefore requires stopping the VM, adding/removing the secret and
restarting the VM.
[^delay]: Be aware of the possible delay, see e.g.
[here](https://web.archive.org/web/20240223073838/https://ahmet.im/blog/kubernetes-secret-volumes-delay/).
*Since: 3.0.0*
@@ -221,7 +221,7 @@ values are those defined by qemu (`+n` seconds from now, `n` Unix
timestamp, `never` and `now`).
Unless `spec.vm.display.spice.generateSecret` is set to `false` in the VM
definition (CRD), the controller creates a secret for the display
password automatically if none is found. The secret is created
with a random password that expires immediately, which makes the
display effectively inaccessible until the secret is modified.


@@ -15,9 +15,9 @@ The image used for the VM pods combines Qemu and a control program
for starting and managing the Qemu process. This application is called
"[the runner](runner.html)".
While you can deploy a runner manually (or with the help of some
helm templates), the preferred way is to deploy "[the manager](manager.html)"
application which acts as a Kubernetes operator for runners
and thus the VMs.
If you just want to try out things, you can skip the remainder of this
@@ -25,11 +25,11 @@ page and proceed to "[the manager](manager.html)".
## Motivation
The project was triggered by a remark in the discussion about RedHat
[dropping SPICE support](https://bugzilla.redhat.com/show_bug.cgi?id=2030592)
from the RHEL packages. This means that you have to run Qemu in a
container on RHEL and derivatives if you want to continue using SPICE.
So KubeVirt comes to mind. But
[one comment](https://bugzilla.redhat.com/show_bug.cgi?id=2030592#c4)
mentioned that the [KubeVirt](https://kubevirt.io/) project isn't
interested in supporting SPICE either.
@@ -44,7 +44,7 @@ much as possible.
## VMs and Pods
VMs are not the typical workload managed by Kubernetes. You can neither
have replicas nor can the containers simply be restarted without a major
impact on the "application". So there are many features for managing
pods that we cannot make use of. Qemu in its container can only be
deployed as a pod or using a stateful set with replica 1, which is rather
@@ -57,6 +57,6 @@ A second look, however, reveals that Kubernetes has more to offer.
* Its managing features *are* useful for running the component that
manages the pods with the VMs.
And if you use Kubernetes anyway, well then the VMs within Kubernetes
provide you with a unified view of all (or most of) your workloads,
which simplifies the maintenance of your platform.


@@ -7,13 +7,13 @@ layout: vm-operator
The Manager is the program that provides the controller from the
[operator pattern](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md#operator-components-in-kubernetes)
together with a web user interface. It should be run in a container in the cluster.
## Installation
A manager instance manages the VMs in its own namespace. The only
common (and therefore cluster scoped) resource used by all instances
is the CRD. It is available
[here](https://github.com/mnlipp/VM-Operator/raw/main/deploy/crds/vms-crd.yaml)
and must be created first.
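A sketch of the corresponding command:

```sh
kubectl apply -f https://github.com/mnlipp/VM-Operator/raw/main/deploy/crds/vms-crd.yaml
```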
@@ -25,24 +25,24 @@ The example above uses the CRD from the main branch. This is okay if
you apply it once. If you want to preserve the link for automatic
upgrades, you should use a link that points to one of the release branches.
The next step is to create a namespace for the manager and the VMs, e.g.
`vmop-demo`.
```sh
kubectl create namespace vmop-demo
```
Finally, you have to create an account, the role, the binding etc. The
default files for creating these resources using the default namespace
can be found in the
[deploy](https://github.com/mnlipp/VM-Operator/tree/main/deploy)
directory. I recommend using
[kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/) to create your own configuration.
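A sketch of a typical invocation (note that `kubectl -k` expects the
file to be named `kustomization.yaml`, so you may have to rename your
copy):

```sh
# apply the customized resources from the current directory
kubectl apply -k .
```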
## Initial Configuration
Use one of the `kustomize.yaml` files from the
[example](https://github.com/mnlipp/VM-Operator/tree/main/example) directory
as a starting point. The directory contains two examples. Here's the file
from subdirectory `local-path`:
@@ -91,9 +91,9 @@ patches:
storageClassName: local-path
```
The sample file adds a namespace (`vmop-demo`) to all resource
definitions and patches the PVC `vmop-image-repository`. This is a volume
that is mounted into all pods that run a VM. The volume is intended
to be used as a common repository for CDROM images. The PVC must exist
and it must be bound before any pods can run.
@@ -101,13 +101,13 @@ The second patch affects the small volume that is created for each
runner and contains the VM's configuration data such as the EFI vars.
The manager's default configuration causes the PVC for this volume
to be created with no storage class (which causes the default storage
class to be used). The patch provides a new configuration file for
the manager that makes the reconciler use local-path as storage
class for this PVC. Details about the manager configuration can be
found in the next section.
Note that you need none of the patches if you are fine with using your
cluster's default storage class and this class supports ReadOnlyMany as
access mode.
Check that the pod with the manager is running:
@@ -121,30 +121,30 @@ for creating your first VM.
## Configuration Details
The [config map](https://github.com/mnlipp/VM-Operator/blob/main/deploy/vmop-config-map.yaml)
for the manager may provide a configuration file (`config.yaml`) and
a file with logging properties (`logging.properties`). Both files are mounted
into the container that runs the manager and are evaluated by the manager
on startup. If no files are provided, the manager uses built-in defaults.
The configuration file for the Manager follows the conventions of
the [JGrapes](https://jgrapes.org/) component framework.
The keys that start with a slash select the component within the
application's component hierarchy. The mapping associated with the
selected component configures this component's properties.
The available configuration options for the components can be found
in their respective JavaDocs (e.g.
[here](latest-release/javadoc/org/jdrupes/vmoperator/manager/Reconciler.html)
for the Reconciler).
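To illustrate the key convention only (the component names and the
property below are placeholders, not necessarily the manager's real
hierarchy; take the real names from the JavaDocs):

```yaml
# config.yaml sketch: slash-prefixed keys select components,
# the nested mapping sets the selected component's properties
"/Manager":
  "/Controller":
    "/Reconciler":
      someProperty: someValue   # placeholder property
```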
## Development Configuration
The [dev-example](https://github.com/mnlipp/VM-Operator/tree/main/dev-example)
directory contains a `kustomize.yaml` that uses the development namespace
`vmop-dev` and creates a deployment for the manager with 0 replicas.
This environment can be used for running the manager in the IDE. As the
namespace to manage cannot be detected from the environment, you must use
`-c ../dev-example/config.yaml` as argument when starting the manager. This
configures it to use the namespace `vmop-dev`.
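A sketch of such an invocation (the jar name is a placeholder for your
actual build artifact; in an IDE you add the option to the launch
configuration instead):

```sh
# placeholder jar name; run from your build output or IDE launch config
java -jar vmoperator-manager.jar -c ../dev-example/config.yaml
```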


@@ -5,9 +5,9 @@ layout: vm-operator
# The Runner
For most use cases, Qemu needs to be started and controlled by another
program that manages the Qemu process. This program is called the
runner in this context.
The most prominent reason for this second program is that it allows
a VM to be shut down cleanly in response to a TERM signal. Qemu handles
@@ -26,38 +26,38 @@ CPUs and the memory.
The runner takes care of all these issues. Although it is intended to
run in a container (which runs in a Kubernetes pod) it does not require
a container. You can start and use it as an ordinary program on any
system, provided that you have the required commands (qemu, swtpm)
installed.
## Stand-alone Configuration
Upon startup, the runner reads its main configuration file
which defaults to `/etc/opt/vmrunner/config.yaml` and may be changed
using the `-c` (or `--config`) command line option.
A sample configuration file with annotated options can be found
[here](https://github.com/mnlipp/VM-Operator/blob/main/org.jdrupes.vmoperator.runner.qemu/config-sample.yaml).
As the runner implementation uses the
[JGrapes](https://jgrapes.org/) framework, the file
follows the framework's
[conventions](https://jgrapes.org/latest-release/javadoc/org/jgrapes/util/YamlConfigurationStore.html). The top level "`/Runner`" selects
the component to be configured. Nested within is the information
to be applied to the component.
The main entries in the configuration file are the "template" and
the "vm" information. The runner processes the
the "vm" information. The runner processes the
[freemarker template](https://freemarker.apache.org/), using the
"vm" information to derive the qemu command. The idea is that
"vm" information to derive the qemu command. The idea is that
the "vm" section provides high level information such as the boot
mode, the number of CPUs, the RAM size and the disks. The template
defines a particular VM type, i.e. it contains the "nasty details"
that do not need to be modified for some given set of VM instances.
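A sketch of the resulting file's shape (the `template` value and the
key names under `vm` are assumptions; see the annotated sample
configuration linked above for the real ones):

```yaml
"/Runner":
  # selects the VM type, i.e. the freemarker template to process
  template: Standard-VM-latest.ftl.yaml
  # high-level VM description used when processing the template
  vm:
    name: test-vm
    currentCpus: 2
    currentRam: 4Gi
```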
The templates provided with the runner can be found
[here](https://github.com/mnlipp/VM-Operator/tree/main/org.jdrupes.vmoperator.runner.qemu/templates). When details
of the VM configuration need modification, a new VM type
(i.e. a new template) has to be defined. Authoring a new
template requires some knowledge about the
[qemu invocation](https://www.qemu.org/docs/master/system/invocation.html).
Despite many "warnings" that you find on the web, configuring the
invocation arguments of qemu is only a bit (but not much) more
@@ -72,13 +72,13 @@ provided by a
If additional templates are required, a ReadOnlyMany PV should
be mounted in `/opt/vmrunner/templates`. The PV should contain copies
of the standard templates as well as the additional templates. Of course,
a ConfigMap can be used for this purpose again.
Networking options are rather limited. The assumption is that in general
the VM wants full network connectivity. To achieve this, the pod must
run with host networking and the host's networking must provide a
bridge that the VM can attach to. The only currently supported
alternative is the less performant
"[user networking](https://wiki.qemu.org/Documentation/Networking#User_Networking_(SLIRP))",
which may be used in a stand-alone development configuration.
@@ -87,7 +87,7 @@ which may be used in a stand-alone development configuration.
The runner supports adaptation to changes of the RAM size (using the
balloon device) and to changes of the number of CPUs. Note that
in order to get new CPUs online on Linux guests, you need a
[udev rule](https://docs.kernel.org/core-api/cpu_hotplug.html#user-space-notification) which is not installed by default[^simplest].
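A commonly used rule for this (a sketch; the linked kernel
documentation has the authoritative version), placed in e.g.
`/etc/udev/rules.d/40-cpu-online.rules`:

```
# bring hot-plugged CPUs online as soon as they appear
SUBSYSTEM=="cpu", ACTION=="add", ATTR{online}!="1", ATTR{online}="1"
```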
The runner also changes the images loaded in CDROM drives. If the
@@ -103,6 +103,6 @@ Finally, `powerdownTimeout` can be changed while the qemu process runs.
## Testing with Helm
There is a
[Helm Chart](https://github.com/mnlipp/VM-Operator/tree/main/org.jdrupes.vmoperator.runner.qemu/helm-test)
for testing the runner.


@@ -13,7 +13,7 @@ The VmViewer conlet has been renamed to VmAccess. This affects the
is still accepted for backward compatibility, but should be updated.
The change of name also causes conlets added to the overview page by
users to "disappear" from the GUI. They have to be re-added.
users to "disappear" from the GUI. They have to be re-added.
The latter behavior also applies to the VmConlet conlet which has been
renamed to VmMgmt.
@@ -25,14 +25,14 @@ with replica set to 1 to (indirectly) start the pod with the VM. Rather
it creates the pod directly. This implies that the PVCs must also be created
by the VM-Operator, which needs additional permissions to do so (update of
`deploy/vmop-role.yaml`). As it would be ridiculous to keep the naming scheme
used by the stateful set when generating PVCs, the VM-Operator uses a
[different pattern](controller.html#defining-disks) for creating new PVCs.
The change is backward compatible:
* Running pods created by a stateful set are left alone until stopped.
Only then will the stateful set be removed.
* The VM-Operator looks for existing PVCs generated by a stateful
set in the pre 3.4 versions (naming pattern "*name*-disk-*vmName*-0")
and reuses them. Only new PVCs are generated using the new pattern.
@@ -40,22 +40,22 @@ The change is backward compatible:
## To version 3.0.0
All configuration files are backward compatible to version 2.3.0.
Note that in order to make use of the new viewer component,
[permissions](https://mnlipp.github.io/VM-Operator/user-gui.html#control-access-to-vms)
must be configured in the CR definition. Also note that
[display secrets](https://mnlipp.github.io/VM-Operator/user-gui.html#securing-access)
are automatically created unless explicitly disabled.
## To version 2.3.0
Starting with version 2.3.0, the web GUI uses a login conlet that
supports OIDC providers. This affects the configuration of the
web GUI components.
## To version 2.2.0
Version 2.2.0 sets the stateful set's `.spec.updateStrategy.type` to
"OnDelete". This fails for no apparent reason if a definition of
"OnDelete". This fails for no apparent reason if a definition of
the stateful set with the default value "RollingUpdate" already exists.
In order to fix this, either the stateful set or the complete VM definition
must be deleted and the manager must be restarted.


@@ -19,20 +19,20 @@ requirement are unexpectedly complex.
## Control access to VMs
First of all, we have to define which VMs a user can access. This
is done using the optional property `spec.permissions` of the
VM definition (CRD).
```yaml
spec:
  permissions:
  - role: admin
    may:
    - "*"
  - user: test
    may:
    - start
    - stop
    - accessConsole
```
Permissions can be granted to individual users or to roles. There
@@ -104,7 +104,7 @@ spec:
```
The value of `server` is used as value for key "host" in the
connection file, thus overriding the default value. The
value of `proxyUrl` is used as value for key "proxy".
## Securing access
@@ -123,8 +123,8 @@ in the future or with value "never" or doesn't define a
`password-expiry` at all.
The automatically generated password is the base64 encoded value
of 16 (strong) random bytes (128 random bits). It is valid for
10 seconds only. This may be challenging on a slower computer
or if users cannot enable automatic opening of connection files
in the browser. The validity can therefore be adjusted in the
configuration.


@@ -11,7 +11,7 @@ implemented using components from the
project. Configuration of the GUI therefore follows the conventions
of that framework.
The structure of the configuration information should be easy to
understand from the examples provided. In general, configuration values
are applied to the individual components that make up an application.
The hierarchy of the components is reflected in the configuration
@@ -22,9 +22,9 @@ for information about the complete component structure.)
## Network access
By default, the service is made available at port 8080 of the manager
pod. Of course, a Kubernetes service and an ingress configuration must
be added as required by the environment. (See the
[definition](https://github.com/mnlipp/VM-Operator/blob/main/deploy/vmop-service.yaml)
from the
[sample deployment](https://github.com/mnlipp/VM-Operator/tree/main/deploy)).
@@ -49,7 +49,7 @@ and role management.
# configure an OIDC provider for user management and
# authorization. See the text for details.
oidcProviders: {}
# Support for "local" users is provided as a fallback mechanism.
# Note that up to Version 2.2.x "users" was an object with user names
# as its properties. Starting with 2.3.0 it is a list as shown.
@@ -60,11 +60,11 @@ and role management.
- name: test
  fullName: Test Account
  password: "Generate hash with bcrypt"
# Required for using OIDC, see the text for details.
"/OidcClient":
  redirectUri: "https://my.server.here/oauth/callback"
# May be used for assigning roles to both local users and users from
# the OIDC provider. Not needed if roles are managed by the OIDC provider.
"/RoleConfigurator":
@@ -79,7 +79,7 @@ and role management.
"*":
- other
replace: false
# Manages the permissions for the roles.
"/RoleConletFilter":
  conletTypesByRole:
@@ -98,8 +98,8 @@ and role management.
```
How local users can be configured should be obvious from the example.
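One way to generate such a hash is Apache's `htpasswd` tool (assuming
it is installed; everything after the `test:` prefix in the output
goes into the `password` property):

```sh
# -B selects bcrypt, -n prints to stdout, -b takes the password as argument
htpasswd -nbB test "my-secret"
```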
The configuration of OIDC providers for user authentication (and
optionally for role assignment) is explained in the documentation of the
[login conlet](https://jgrapes.org/javadoc-webconsole/org/jgrapes/webconlet/oidclogin/LoginConlet.html).
Details about the `RoleConfigurator` and `RoleConletFilter` can also be found
in the documentation of the
@@ -113,5 +113,5 @@ all users to use the login conlet to log out.
## Views
The configuration of the components that provide the manager and
user views is explained in the respective sections.