The FDO Operator deploys FIDO Device Onboard (FDO) servers on Red Hat OpenShift.
The FDO Operator makes it easier to deploy and run any of the FDO servers (manufacturing, rendezvous, or owner-onboarding) on Red Hat OpenShift, catering to both device manufacturers and device owners. It is based on the Fedora IoT implementation of FDO.
Keep in mind that the operator is a work in progress; it is highly opinionated and currently has many limitations:
- The owner-onboarding and service-info API servers are deployed as a single unit called the Onboarding server. All communication between the owner-onboarding and service-info components stays within the pod.
- The servers are exposed as OpenShift routes with default generated host names, and support only HTTP on port 80. We intend to allow custom host names, and will consider enabling other protocols if needed.
- The number of replicas is always one; it is currently not possible to scale the deployment.
- The API validation is limited and needs to be improved (e.g. optional/required markers, default values), as does the API documentation. Admission webhooks should be added for complex cross-field validations.
- The log level inside the FDO containers is TRACE by default and currently cannot be changed.
- The container images the operator uses by default are stable but not maintained. It would be better to use either the development FDO images, or Red Hat certified images for FDO once available.
- It is not possible to explicitly specify container resources (requests/limits). This should change.
- There are currently no liveness or readiness probes.
- There is no place for additional service-info configuration in the Onboarding Server CRD. In general, only a limited set of FDO configuration parameters is exposed via the CRDs.
- The names of the required secrets (for keys and certificates) and persistent volume claims (for ownership vouchers) are hard-coded. We should allow customizing those, and/or include the CR instance name in them to avoid collisions within the same namespace.
- Device-specific service-info configuration is not supported. Enabling this functionality would require a persistent volume, exposing the admin API via an endpoint, and managing a secret for the admin authentication token.
- Currently, service-info files are automatically added to the onboarding configuration by creating and annotating `ConfigMaps`. Those have size limitations, and we may consider other mechanisms as sources of service-info files. In addition, `Secrets` should be supported as a source of sensitive files.
- There is also room for many optimizations and code improvements:
    - Modify the watchers (`Owns()`) to be more selective and watch only relevant resources.
    - Generate a new `ConfigMap` with a random suffix every time the configuration changes, to automatically trigger deployment updates.
    - Write a lot more unit tests.
    - Refactor the code for DRY.
    - Remove the use of `github.com/redhat-cop/operator-utils`, which is outdated.
    - Implement smarter re-queues on success and on errors in the reconcile logic.
    - Update an object only when its relevant part changes, instead of attempting an update on every reconciliation.
    - Populate the `Status` of a custom resource to better reflect its state.
    - Store public certificates in `ConfigMaps` instead of `Secrets` (as is usually done in OpenShift).
- Finally, there are a few open questions:
    - How can we make it easier for a user to work with (create, attach) the required persistent volumes?
    - How can we enforce the mandatory secrets (keys, certificates), and respond to any changes in them?
You will need an OpenShift cluster to run against. You can use Red Hat OpenShift Local to get a local cluster for testing, or run against a remote cluster.
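If you opt for OpenShift Local, bringing up a local cluster typically looks like this (a minimal sketch assuming `crc` is already installed; adjust resources to your machine):

```sh
crc setup           # one-time host configuration
crc start           # start the local OpenShift cluster
eval $(crc oc-env)  # put the bundled oc client on the PATH
```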
Before the servers managed by the operator can start, they require the following pre-configured Kubernetes resources:
- Keys and certificates, as dictated by the FDO implementation. Sample keys and certificates for testing can be generated by running `make keys-gen` and deployed to the cluster with `make keys-push`.
- Persistent volume claims for ownership vouchers. A manufacturing server and an onboarding server both expect a PVC named `fdo-ownership-vouchers-pvc`. The volume can be shared if the servers are deployed into the same namespace, making the synchronization of ownership vouchers automatic (no manual copying is required in this case). Note: If you are trying the sample manifests (below) on Red Hat OpenShift Local (CRC), a sample PVC definition is already included and you do not need to create a PVC separately.
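For reference, a minimal PVC manifest might look like the sketch below. The claim name is the one the operator expects, but the access mode and size are assumptions to adapt to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fdo-ownership-vouchers-pvc
spec:
  accessModes:
    - ReadWriteOnce # consider ReadWriteMany if the servers may land on different nodes
  resources:
    requests:
      storage: 1Gi
```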
To make it easier for a user to manage service-info files that will be copied to an onboarded device by FDO, they are stored in `ConfigMaps`. The service-info configuration file is updated accordingly and requires no user action.
In order to add a file to the service-info, create a `ConfigMap` labeled and annotated as follows, either before or after creating an instance of `FDOOnboardingServer`. In the latter case, the server will be updated to pick up the new file:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: <configmap-name>
  labels:
    fdo.serviceinfo.file/owner: <onboarding-server-instance>
  annotations:
    fdo.serviceinfo.file/name: <filename>
    fdo.serviceinfo.file/path: /<destination-path>/<destination-filename>
    fdo.serviceinfo.file/permissions: <permissions> # optional, e.g. '755'
immutable: false/true
binaryData:
  <filename>: <file-contents>
```
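For illustration, a filled-in manifest could look like this. The names (`device-motd`, `onboarding-server`) and the file itself are made up; note that `binaryData` values are base64-encoded:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: device-motd
  labels:
    fdo.serviceinfo.file/owner: onboarding-server
  annotations:
    fdo.serviceinfo.file/name: motd
    fdo.serviceinfo.file/path: /etc/motd
    fdo.serviceinfo.file/permissions: '644'
immutable: false
binaryData:
  motd: SGVsbG8gRkRPIQo= # base64 for "Hello FDO!\n"
```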
Note: This guide assumes that you are running on Red Hat OpenShift Local (CRC) and your current namespace for testing is named `fdo`.
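If the namespace does not exist yet, you can create it and make it current with:

```sh
oc new-project fdo
```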
- Install the operator in any standard way for operators, or from the catalog at `ghcr.io/empovit/fdo-operator-catalog:v99.0.0` (see the `CatalogSource` sketch after this list).
- Create the required secrets as described above.
- Create sample instances and configuration:

  ```sh
  oc apply -f hack/samples/
  ```
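For the first step, one standard way to consume the catalog image is to register it as an OLM `CatalogSource`, after which the operator can be installed from OperatorHub. A sketch, where the resource name and display name are arbitrary choices:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: fdo-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: ghcr.io/empovit/fdo-operator-catalog:v99.0.0
  displayName: FDO Operator Catalog
```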
Once the samples are applied, the manufacturing server is available at `http://manufacturing-server-fdo.apps-crc.testing:80`.
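If the host name differs on your cluster, you can read it off the route (assuming the default generated route name `manufacturing-server`):

```sh
oc get route manufacturing-server -o jsonpath='{.spec.host}'
```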
You can list the generated ownership vouchers by running `exec` in a manufacturing server pod, e.g.

```sh
oc exec -ti manufacturing-server-<pod-id> -- ls -1 /etc/fdo/ownership_vouchers
```

and copy an ownership voucher from the pod by running

```sh
oc cp manufacturing-server-<pod-id>:/etc/fdo/ownership_vouchers/<device-guid> <device-guid>
```
When testing FDO onboarding with OpenShift Local, you may need to enable traffic between a device and the OpenShift cluster. For instance, if you are simulating a device with a VM, you can allow the VM to access the OpenShift Local (CRC) cluster as explained in Libvirt routing between two NAT networks:
```sh
sudo iptables -t nat -I POSTROUTING 1 -s 192.168.130.0/24 -d 192.168.122.0/24 -j ACCEPT
sudo iptables -t nat -I POSTROUTING 1 -s 192.168.122.0/24 -d 192.168.130.0/24 -j ACCEPT
sudo iptables -I FORWARD 1 -s 192.168.122.0/24 -d 192.168.130.0/24 -j ACCEPT
sudo iptables -I FORWARD 1 -s 192.168.130.0/24 -d 192.168.122.0/24 -j ACCEPT
```

where `192.168.130.0/24` and `192.168.122.0/24` are the two libvirt networks: one for CRC (usually `crc`) and the other for VMs (e.g. `default`).
Copyright 2023.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.