Note: Upgrading from our previous release line (v2.1 onwards, running OpenStack Yoga and Ceph Quincy on CentOS 8 Stream) will be supported in the next minor release. We will test the upgrade process and tooling, then work with customers to complete upgrades of their clusters in the coming months. That minor release is focused on fine-tuning the upgrade and migration process for OpenStack, Ceph, and CentOS.
New Base Operating System: CentOS 9 Stream
- Linux Kernel v5.14+
- Opens the door to newer virtualization features, such as Intel AMX support
- Should allow in-place upgrades for existing deployments running CentOS 8 Stream
- Network configuration fully managed via NetworkManager (support for configuration via network-scripts has been removed)
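For deployments that still carry ifcfg-style settings, the equivalent configuration is now expressed through NetworkManager. A minimal sketch using nmcli follows; the connection name and addresses are placeholders, not values from this release:

```bash
# Sketch: setting a static IPv4 address with nmcli instead of an ifcfg file.
# "eno1" and the addresses below are placeholders for your environment.
nmcli connection modify eno1 \
  ipv4.method manual \
  ipv4.addresses 192.0.2.10/24 \
  ipv4.gateway 192.0.2.1 \
  ipv4.dns 192.0.2.53
nmcli connection up eno1
```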
New OpenStack Release: 2023.2 “Bobcat”
This OpenStack release is our first SLURP (Skip Level Upgrade Release Process) release, OpenStack’s (somewhat) equivalent to an LTS release. Each SLURP release can be upgraded directly to the next SLURP release, so users and organizations no longer need to perform upgrades every six months to keep their clusters up-to-date; the intermediate releases can be “skipped” during an upgrade.
2023.2 (Bobcat): https://releases.openstack.org/bobcat/highlights.html
2023.1 (Antelope): https://releases.openstack.org/antelope/highlights.html
2022.10 (Zed): https://releases.openstack.org/zed/highlights.html
Kolla-Ansible
- Kolla-Ansible has been updated to track upstream version `v17.3`
Keystone
- Keystone admin interface was removed (previously used port `35357`). Use the `internal` interface on port `5000` instead. This usage was deprecated in Zed and removed in 2023.1
- Default Keystone user role has been changed from the deprecated `_member_` role to the `member` role
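As a hedged example of what this means for clients and scripts: anything still targeting the old admin endpoint should move to the internal interface, and legacy role assignments should use `member`. The URL and names below are placeholders:

```bash
# Point clients at the internal interface on port 5000; the admin
# endpoint on 35357 no longer exists. URL is a placeholder.
export OS_AUTH_URL=https://keystone.internal.example:5000/v3

# Grant the new default role; "_member_" assignments should be
# replaced with "member". Project and user names are placeholders.
openstack role add --project demo --user alice member
```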
Nova
Deployment changes:
- The SPICE console can now be optionally enabled instead of VNC/noVNC
- The libvirt CPU model(s) will now be set dynamically based on the underlying hardware, or manually overridden with `nova_libvirt_cpu_models`
- Memory reservation (`om_reserved_host_memory_mb`) will now be dynamically calculated based on node type (`control` = 32 GiB, `compute`-only = 8 GiB) and number of disks (`disk_count * 4 GiB`)
  - Example: Small and Standard control nodes will have 36 GiB reserved (`32 + (1 * 4)`)
- Overcommit ratios and reservations can now be overridden via Extra vars or Inventory vars (see the sketch below):
  - `om_reserved_host_cpus` / `om_cpu_allocation_ratio`
  - `om_reserved_host_memory_mb` / `om_ram_allocation_ratio`
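A minimal sketch of overriding these defaults via inventory group vars; the variable names come from the list above, while the file path and values are assumptions about a typical Ansible layout:

```bash
# Sketch: overriding reservation/overcommit defaults in inventory group vars.
# Variable names are from the release notes; path and values are examples only.
cat >> inventory/group_vars/compute.yml <<'EOF'
om_reserved_host_cpus: 2
om_cpu_allocation_ratio: 4.0
om_reserved_host_memory_mb: 16384
om_ram_allocation_ratio: 1.0
EOF
```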
Upstream changes:
- Power management improvements
- SPICE console improvements: ability to enable and control image compression settings
- PCI device scheduling support via Placement enabled
- FQDNs can now be used for instance hostnames with compute API microversion 2.94+
- Volume-backed instances can now be rebuilt with compute API microversion 2.93+ (see the sketch after this list)
- Virtual IOMMU device support
- Fixed CPU feature flag comparisons
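For example, rebuilding a volume-backed instance works once the client requests a new enough microversion. A sketch assuming a recent python-openstackclient; the server and image names are placeholders:

```bash
# Sketch: rebuild a volume-backed server, which requires compute API
# microversion 2.93+. Flag support depends on your openstackclient version.
openstack --os-compute-api-version 2.93 \
  server rebuild --reimage-boot-volume --image <new-image> <server>
```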
Neutron
- OVN driver can now be optionally enabled (only when deploying a brand-new cluster from scratch; it cannot be used on pre-warmed machines or for updating existing clusters)
- Distributed Floating IP support – allows traffic to enter from the node where a VM is located rather than traversing the tunnels between nodes
- Neutron Metering plugin enabled – allows tracking egress/ingress traffic per instance using Ceilometer/Gnocchi
- Neutron Trunking plugin enabled – allows the use and management of 802.1Q trunked VLANs inside of private networks and assigning VLANs on a per-port basis
- Neutron Packet Logging plugin enabled – allows custom meters to be set up to track the triggering of Security Group rules
- Neutron Port Forwarding plugin enabled – allows creating port forward (NAT) rules directly on router ports (see the sketch after this list)
- Neutron QoS plugin enabled – allows specifying target RX/TX byte and packet bandwidth rates per port and per network
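As one hedged example of the new plugins in action, a port-forwarding (NAT) rule can be created directly on a floating IP. All IDs and addresses below are placeholders:

```bash
# Sketch: forward external port 2222 on a floating IP to port 22 of an
# internal port/instance. IDs and addresses are placeholders.
openstack floating ip port forwarding create \
  --internal-ip-address 10.0.0.12 \
  --port <internal-port-id> \
  --internal-protocol-port 22 \
  --external-protocol-port 2222 \
  --protocol tcp \
  <floating-ip-id>
```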
Magnum
- Kubernetes releases up to v1.26.x are now supported with Fedora CoreOS 38+ (see the sketch after this list)
- Docker Swarm driver deprecated
- Mesos driver removed
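A sketch of a cluster template targeting one of the supported Kubernetes releases on Fedora CoreOS 38; the image name, network, flavors, and `kube_tag` label value are assumptions for illustration:

```bash
# Sketch: Magnum cluster template for Kubernetes on Fedora CoreOS 38.
# Image, network, flavors, and kube_tag below are examples only.
openstack coe cluster template create \
  --coe kubernetes \
  --image fedora-coreos-38 \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.medium \
  --network-driver calico \
  --labels kube_tag=v1.26.8 \
  k8s-v1-26
```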
Mistral
Mistral service enabled: the OpenStack Workflow service.
- Ability to create scheduled tasks, similar to cron jobs (see the sketch after this list)
- Ability to create “workbooks” to handle repetitive tasks, such as performing backups, upgrades, maintenance, etc.
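For instance, a cron trigger can run an existing workflow on a schedule. A sketch where the trigger name, workflow name, and schedule pattern are placeholders:

```bash
# Sketch: run an existing workflow nightly at 02:00 via a cron trigger.
# Trigger name, workflow name, and pattern are placeholders.
openstack cron trigger create nightly-backup backup_workflow '{}' \
  --pattern "0 2 * * *"
```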
New Ceph Release & Deployment Changes
Ceph version updated to v18.2.2 (Reef).
ceph-ansible has been deprecated in favor of cephadm. All new v3.0.0+ deployments will use cephadm, which allows easier cluster lifecycle management for customers and support: for example, making configuration changes, adding and replacing disks or OSDs, and adding or removing services (such as MDSes for CephFS). A few representative commands are sketched below.
All Ceph services are now running in Docker containers, just like Kolla services. cephadm also configures systemd services and targets to ensure the containers start up correctly (e.g. `ceph.target`).
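A few representative orchestrator commands for the day-2 tasks mentioned above; host and device names are placeholders:

```bash
# Sketch: common cephadm/orchestrator operations. Host and device
# names are placeholders.
ceph orch device ls                       # list available disks cluster-wide
ceph orch daemon add osd node1:/dev/sdb   # create an OSD on a specific device
ceph orch apply mds cephfs --placement=3  # run three MDS daemons for CephFS
ceph orch ps                              # show all managed daemons
```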
Ceph v18.2.2 “Reef”
- Significant performance improvements to RocksDB
- A new read balancer was introduced, which allows balancing primary PGs per pool across the cluster. For now, the balancing must be done offline via the osdmap, with automated balancing to be added in the future (see the sketch after this list). See: Operating the Read (Primary) Balancer
- Numerous multi-site replication improvements
- Extensive dashboard updates, changes, and improvements
- RBD: `compare-and-write` operations now support operating on entire stripe units (4 MB by default) instead of being limited to 512-byte sectors
- RBD: layered client-side encryption support
- Compression now supported for objects with server-side encryption
- FileStore support was removed (BlueStore has been the default object store since Luminous, released in 2017)
- Cache Tiering deprecated
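For the read balancer mentioned above, the offline workflow described in the Reef documentation looks roughly like the following; the pool name is a placeholder and the exact flags should be verified against your Ceph version:

```bash
# Sketch: offline primary-PG (read) balancing per the Reef docs.
# Pool name is a placeholder; verify flags against your Ceph version.
ceph osd getmap -o om                              # export the current osdmap
osdmaptool om --read out.txt --read-pool <pool>    # compute pg-upmap-primary changes
source out.txt                                     # apply the generated commands
```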