Once you are comfortable with Clear Linux concepts, your next step as a system administrator is to understand how to deploy Clear Linux* at scale in your environment.
In this document, the term endpoint refers to a system targeted for Clear Linux OS installation, whether that is a datacenter system or a unit deployed in the field.
This is not a replacement or blueprint for designing your own IT operating environment.
Your Clear Linux OS deployment should complement the existing environment and available tools. It is assumed that the foundational IT dependencies of your environment, such as your network, are healthy and scaled to suit the deployment.
- Pick a Clear Linux OS usage and update strategy
- Pick an image distribution strategy
- Considerations for stateless systems
Different business scenarios call for different deployment methodologies. Clear Linux* OS offers the flexibility to continue consuming the upstream Clear Linux OS distribution, or the option to fork away from the Clear Linux OS distribution and act as your own operating system vendor (OSV).
Below are overviews of both approaches and some considerations.
This approach is easier to adopt: you rely on the Clear Linux OS upstream to package updates for you to deploy.
Custom software or packages that are not available in a preformed bundle can be added using the mixin process to form a custom bundle. If custom bundles are needed, you will be responsible for maintaining the custom bundle(s) and testing between Clear Linux OS releases in your environment, while the rest of the operating system and preformed bundles come from the Clear Linux OS upstream.
Ensure Clear Linux OS systems can be inventoried, managed, and orchestrated to coordinate software updates.
With autoupdate enabled, Clear Linux OS is updated daily; however, you may wish to act as an intermediary buffer between OS releases. If you do decide to act as a gate on Clear Linux OS versions, define a release cadence for yourself that is realistic given the operational expectations of your environment.
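For example, an administrator gating releases might disable the daily update timer and then apply validated versions explicitly during maintenance windows. A minimal sketch, assuming recent swupd flag names (the version number is a placeholder):

```shell
# Disable the automatic daily update check on this endpoint (run as root).
swupd autoupdate --disable

# Later, during a maintenance window, update to the specific release
# you have validated. "39000" is a placeholder version number.
swupd update --version 39000

# Confirm the installed version.
swupd info
```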
Use a web caching proxy for Clear Linux OS updates for devices connected to a local area network (LAN), like a datacenter, to increase the speed and resiliency of updates from the Clear Linux OS update servers.
Your caching proxy server is just like any other web application. There are many well-known ways to achieve a scalable and resilient web server for this purpose, however implementation details are not in the scope of this document. In general, they should be close to your endpoints, highly available, and easy to scale with a load balancer when necessary.
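Once a caching proxy is in place, each endpoint can be pointed at it. A sketch using swupd's mirror setting; the proxy hostname and path are placeholders:

```shell
# Point this endpoint's version and content URLs at the local
# caching proxy (run as root; the hostname is a placeholder).
swupd mirror --set https://updates-proxy.example.com/update/

# Show the installed version and the URLs swupd will now use.
swupd info
```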
This approach forks away from the Clear Linux OS upstream and has you act as your own OSV by leveraging the mixer process to create customized images based on Clear Linux OS. This level of responsibility requires adopting more infrastructure and processes. In return, this approach offers you a high degree of control and customization.
Development systems that generate bundles and updates should be sufficiently performant for the task and kept separate from the swupd update web servers that serve update content to production machines.
swupd update web servers serving update content to production machines (see the mixer process for more information) should be appropriately scaled.
Your swupd update server is just like any other web application. There are many well-known ways to achieve a scalable and resilient web server for this purpose, however implementation details are not in the scope of this document. In general, they should be close to your endpoints, highly available, and easy to scale with a load balancer when necessary.
The cloud, and other scaled deployments, are all about flexibility and speed. It only makes sense that any Clear Linux OS deployment strategy should follow suit.
Manually rebuilding your own bundles or mix for every release is not sustainable at a large scale. A Clear Linux OS deployment pipeline should be agile enough to validate and produce new versions with speed. Whether or not those updates actually make their way to production can be a separate business decision. However, the ability to frequently roll new versions of software to your endpoints is an important prerequisite.
You own the validation and lifecycle of the OS and should treat it like any other software development lifecycle. Below are some pointers:
Thoroughly understand the custom software packages that you will need to integrate with Clear Linux OS and maintain along with their dependencies.
Set up a path to production for building Clear Linux OS based images. At a minimum, this should include:
- A development clr-on-clr environment to test building packages and bundles for Clear Linux OS systems.
- A pre-production environment to deploy Clear Linux OS versions to before production.
Employ a continuous integration and continuous deployment (CI/CD) philosophy in order to:
- Automatically pull custom packages as they are updated from their upstream projects or vendors.
- Generate Clear Linux OS bundles and potentially bootable images with your customizations, if any.
- Measure against metrics and indicators relevant to your business (e.g., performance, power) from release to release.
- Integrate with your organization’s governance processes, such as change control.
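The build stage of such a pipeline can be sketched with the mixer tool. This assumes an already-initialized mixer workspace at a placeholder path; exact subcommand names may vary between mixer releases:

```shell
set -e
cd /srv/mixer-workspace   # placeholder: an already-initialized mix

# Advance the mix to track the newest upstream Clear Linux OS release.
mixer versions update --upstream-version latest

# Rebuild bundles and generate swupd update content for the new version.
mixer build all

# The generated content under update/www/ can now be published to a
# pre-production swupd update server for validation before promotion.
```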
Clear Linux OS version numbers are very important as they apply to the whole infrastructure stack from OS components to libraries and applications.
Good record keeping is important, so you should keep a detailed registry and history of previously deployed versions and their contents.
With a glance at the Clear Linux OS version numbers deployed, you should be able to tell if your Clear systems are patched against a particular security vulnerability or incorporate a critical new feature.
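For example, the deployed release on any endpoint can be read directly, making it easy to reconcile fleet inventory against your version registry:

```shell
# Report the installed version along with the update URLs in use.
swupd info

# The version is also recorded in the os-release file.
grep VERSION_ID /usr/lib/os-release
```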
Once you have decided on a usage and update strategy, you should understand how Clear Linux OS will be deployed to your endpoints. In a large scale deployment, interactive installers should be avoided in favor of automated installations or prebuilt images.
There are many well-known ways to install an operating system at scale. Each has its own benefits, and one may be a better fit for your environment depending on the resources available to you.
Below are some common ways to install Clear Linux OS to systems at scale:
A Preboot Execution Environment (PXE) or other out-of-band booting options are one way to distribute Clear Linux OS to physical bare metal systems on a LAN.
This option works well if your customizations are fairly small in size and infrastructure can be stateless.
The Clear Linux OS downloads page offers a Live Image that can be deployed as a PXE boot server if one doesn't already exist in your environment. Also see the documentation on installing Clear Linux OS on bare metal systems.
Image templates in the form of cloneable disks are an effective way to distribute Clear Linux OS for virtual machine environments, whether on-premises or hosted by a Cloud Solution Provider (CSP).
When used in concert with cloud VM migration features, this can be a good option for allowing your applications a degree of high availability and workload mobility; VMs can be restarted on a cluster of hypervisor hosts or moved between datacenters transparently.
Containerization platforms allow images to be pulled from a repository and deployed repeatedly as isolated containers.
Containers with a Clear Linux OS image can be a good option to blueprint and ship your application, including all its dependencies, as an artifact while allowing you or your customers to dynamically orchestrate and scale applications.
Clear Linux OS can act as a Docker host, is available as a container image that can be pulled from DockerHub, and can be built into a customized container. For more information, visit the containers page.
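For instance, the base image published on DockerHub can be pulled and run directly (the `clearlinux` image name is the one published upstream):

```shell
# Pull the Clear Linux OS base container image from DockerHub.
docker pull clearlinux:latest

# Start an interactive shell in a throwaway container.
docker run --rm -it clearlinux:latest /bin/bash
```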
An important Clear Linux OS concept is statelessness and the partitioning of system data from user data. This concept can change the way you think about an at-scale deployment.
A Clear Linux OS system and its infrastructure should be considered a commodity and be easily reproducible. Avoid focusing on backing up the operating system itself or default values.
Instead, focus on backing up what’s important and unique - the application and data. In other words, only focus on backing up critical areas like /home, /etc, and /var.
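As an illustration, a backup job only needs to capture those stateful paths. The sketch below uses a mock root directory so it can run unprivileged; on a real endpoint you would archive /home, /etc, and /var directly:

```shell
# Build a mock root containing the stateful areas (demo only).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc" "$ROOT/home/user" "$ROOT/var/lib"
echo "endpoint-01" > "$ROOT/etc/hostname"

# Archive only the stateful areas; OS content under /usr is
# reproducible via swupd and does not need to be backed up.
tar -C "$ROOT" -czf "$ROOT/state-backup.tar.gz" etc home var

# List the archive contents to confirm what was captured.
tar -tzf "$ROOT/state-backup.tar.gz"
```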
Offload logging and telemetry from endpoints to external servers so that the data persists and can be accessed from another system when an issue occurs.
Remote syslogging in Clear Linux OS is available through the systemd journal-remote service.
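A sketch of forwarding the journal from an endpoint to a central collector, assuming the systemd journal upload components are installed; the collector hostname is a placeholder, and a drop-in file is used so no OS-owned configuration is edited:

```shell
# Point journal uploads at the central log collector (run as root).
mkdir -p /etc/systemd/journal-upload.conf.d
cat > /etc/systemd/journal-upload.conf.d/50-collector.conf <<'EOF'
[Upload]
URL=http://logs.example.com:19532
EOF

# Start the uploader now and enable it on every boot.
systemctl enable --now systemd-journal-upload.service
```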
Clear Linux OS offers a native telemetry solution which can be a powerful tool for a large deployment to quickly crowdsource issues of interest. Take advantage of this feature with careful consideration of the target audience and the kind of data that would be valuable, and expose events appropriately.
Your telemetry server is just like any other web application. There are many well-known ways to achieve a scalable and resilient web server for this purpose, however implementation details are not in the scope of this document. In general, they should be close to your endpoints, highly available, and easy to scale with a load balancer when necessary.
In cloud environments, where systems can be ephemeral, being able to configure and maintain generic instances is valuable.
Clear Linux OS offers an efficient cloud-init style solution, micro-config-drive, through the os-cloudguest bundle, which allows you to configure many Day 1 tasks, such as setting the hostname, creating users, or placing SSH keys, in an automated way at boot. For more information on automating configuration during deployment of Clear Linux OS endpoints, see the documentation on bulk provisioning.
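A minimal cloud-config user-data file of the kind micro-config-drive consumes at first boot might look like the following; all field values (hostname, user name, key) are placeholders, and the supported keys should be checked against the bundle's documentation:

```shell
# Write a sample cloud-config user-data file (placeholder values only;
# the SSH key is truncated for illustration).
cat > user-data <<'EOF'
#cloud-config
hostname: clr-node-01
users:
  - name: deploy
    groups: wheel
    ssh-authorized-keys:
      - ssh-rsa AAAA... deploy@example.com
EOF

cat user-data
```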
A configuration management tool is useful for maintaining consistent system and application-level configuration. Ansible* is offered through the sysadmin-hostmgmt bundle as a configuration management and automation tool.
An infrastructure OS can be designed for good behavior, but it is ultimately up to applications to make agile design choices. Applications deployed on Clear Linux OS should aim to be host-aware but not depend on any specific host to run. References should be relative and dynamic when possible.
The application architecture should incorporate an appropriate tolerance for infrastructure outages. Do not treat stateless design as merely a noted feature: continuously exercise it by automatically redeploying Clear Linux OS and your applications on new hosts. This naturally minimizes configuration drift and continuously tests your monitoring systems and business continuity plans.