In this article
- Ceph RGW Installation and Basic Configuration
- OpenStack Keystone Integration
- Storage Rules and Limits (Quotas)
- OpenStack Service Integration
- RGW Performance Management
- RGW Advanced Features
- Wrapping Up: Guide to Ceph RADOS Gateway (RGW) in OpenStack
- Get Started on an OpenStack- and Ceph-Powered Hosted Private Cloud
Need straightforward and scalable object storage in your OpenStack cloud? Ceph’s RADOS Gateway (RGW) is a popular choice, giving you S3 and Swift-compatible storage built on the dependable Ceph storage system.
What It Does
Ceph RGW, also called the Ceph Object Gateway, is an object storage interface that provides access to Ceph’s storage. It offers RESTful APIs (the kind S3 and Swift use) so your apps and users can easily store and retrieve objects. It handles things like access control and separating storage for different tenants (projects) within OpenStack. RGW connects nicely with OpenStack’s own Keystone for authentication and can act as the storage backend for Glance images and Cinder volume backups, helping create a more streamlined cloud setup.
Key Features
- Scales Up: Uses Ceph’s natural ability to grow as your storage needs increase.
- Speaks S3 and Swift: Works out-of-the-box with the common S3 and Swift APIs, meaning lots of tools and apps are compatible.
- Dependable: Because it’s built on Ceph, it benefits from Ceph’s data replication and self-healing features to keep data safe.
- Unified Storage: Manages object storage within your main Ceph cluster, which can simplify your setup and potentially save costs compared to running a separate system.
- Keystone Integration: Handles logins and permissions through OpenStack’s standard identity service.
Setup Overview
- Install RGW: Get the RGW service running on its server(s).
- Configure RGW: Set up the basic RGW network settings.
- Connect to Keystone: Tell RGW how to talk to Keystone for user authentication and add the RGW service to OpenStack’s catalog.
- Define Storage Policies: Set up quotas and access rules.
- (Optional) Connect Services: Point Glance, Cinder backups, or other services to use RGW.
Advanced Capabilities
- Multi-Site Replication: Set up replication between RGW gateways in different data centers for disaster recovery or getting content closer to users.
- Object Lifecycle Management: Automatically delete or move data based on rules you define (like deleting old logs).
- Encryption and Security: Use server-side encryption and detailed access controls.
- Bucket/Object Tagging: Add metadata tags to your objects to help organize them.
This guide walks you through the main steps for getting RGW installed, connected within OpenStack, and using its key features.
Ceph RGW Installation and Basic Configuration
1. Install RGW Packages
On the server(s) where you’ll run RGW, install the needed packages. (Package names might differ slightly depending on your Linux version and Ceph release. This example is for Debian/Ubuntu.)
# Make sure ceph-common is installed (usually needed first)
sudo apt update
sudo apt install ceph-common radosgw
2. Create RGW Keyring
RGW acts like a Ceph client, so it needs its own credentials. Create a keyring for each RGW instance (server process). Replace ${RGW_INSTANCE_NAME} with a name you choose for this gateway (like rgw.gateway1).
sudo ceph auth get-or-create client.${RGW_INSTANCE_NAME} osd 'allow rwx' mon 'allow rw' -o /etc/ceph/ceph.client.${RGW_INSTANCE_NAME}.keyring
# Set the right permissions
sudo chown ceph:ceph /etc/ceph/ceph.client.${RGW_INSTANCE_NAME}.keyring
sudo chmod 640 /etc/ceph/ceph.client.${RGW_INSTANCE_NAME}.keyring
3. Configure RGW Service (ceph.conf)
Add a section for your RGW instance in your /etc/ceph/ceph.conf file. Make sure this updated config file gets distributed to your Ceph monitors, OSDs, and RGW nodes.
[client.${RGW_INSTANCE_NAME}]
host = ${HOSTNAME} # Or the specific hostname where this instance runs
keyring = /etc/ceph/ceph.client.${RGW_INSTANCE_NAME}.keyring
rgw_frontends = "beast endpoint=0.0.0.0:7480" # Beast frontend listening on port 7480 (HTTP)
# For production, consider using HTTPS: "beast endpoint=0.0.0.0:7443 ssl_certificate=/path/to/cert.pem ssl_private_key=/path/to/key.pem"
rgw_data = /var/lib/ceph/radosgw/ceph-${RGW_INSTANCE_NAME} # Optional: Where RGW keeps its data
log_file = /var/log/ceph/ceph-rgw-${RGW_INSTANCE_NAME}.log
# We'll add Keystone settings later
- rgw_frontends: This sets up the web server part of RGW. beast is the recommended, fast option. 0.0.0.0 means it listens on all network interfaces, so make sure your firewall is set up correctly. Port 7480 is typical for HTTP. You should definitely use HTTPS (port 7443 or similar) for real-world setups.
- ${HOSTNAME}: Make sure this is the actual hostname of the server running this RGW instance, matching the ${RGW_INSTANCE_NAME} you used for the keyring.
4. Turn on and Start the RGW Service
Use systemd to get the RGW service running and make sure it starts on boot. The name after @ should be the same ${RGW_INSTANCE_NAME} you used in the [client.${RGW_INSTANCE_NAME}] section of ceph.conf and for the keyring.
sudo systemctl enable ceph-radosgw@${RGW_INSTANCE_NAME}
sudo systemctl start ceph-radosgw@${RGW_INSTANCE_NAME}
# Check if it's running
sudo systemctl status ceph-radosgw@${RGW_INSTANCE_NAME}
sudo ceph -s # Check overall Ceph cluster health, look for RGW status
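To confirm the gateway is answering, you can hit the frontend port directly. Assuming the default HTTP setup above, an anonymous request should come back with an empty bucket listing:
# Quick sanity check against the Beast frontend (HTTP on port 7480 in this example)
curl -i http://localhost:7480
# Expect an HTTP 200 response with a <ListAllMyBucketsResult> XML body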
OpenStack Keystone Integration
Let’s connect RGW to Keystone so OpenStack users can log in and use it.
1. Create Keystone Service and Endpoint
Add the RGW service and its address (endpoint) to Keystone’s service catalog. This tells OpenStack how to find RGW. Run these commands as an OpenStack admin. Replace ${RGW_PUBLIC_URL} with the actual address users will use to reach RGW (e.g., http://rgw.example.com:7480).
# We'll use the 'swift' service type so it works well with Swift tools
openstack service create --name swift --description "OpenStack Object Storage (Ceph RGW)" object-store
openstack endpoint create --region RegionOne \
object-store public ${RGW_PUBLIC_URL}/swift/v1
# You might also want internal/admin URLs depending on your network
openstack endpoint create --region RegionOne \
object-store internal ${RGW_INTERNAL_URL}/swift/v1
openstack endpoint create --region RegionOne \
object-store admin ${RGW_ADMIN_URL}/swift/v1
# You could also register it with the 's3' type if you prefer
# openstack service create --name s3 --description "S3 Object Storage (Ceph RGW)" s3
# openstack endpoint create --region RegionOne s3 public ${RGW_PUBLIC_URL}
# ... etc ...
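To double-check the catalog entries, list them as the admin user:
# Confirm the object-store service and its endpoints are registered
openstack service list
openstack endpoint list --service swift
openstack catalog show object-store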
2. Configure RGW for Keystone Authentication
Edit your RGW settings in /etc/ceph/ceph.conf (in the [client.${RGW_INSTANCE_NAME}] section) to tell it where Keystone is.
[client.${RGW_INSTANCE_NAME}]
# ... other settings from above ...
# Keystone Settings
rgw_keystone_url = http://keystone.example.com:5000 # Address of your Keystone identity endpoint
rgw_keystone_api_version = 3
rgw_keystone_accepted_roles = member, _member_, admin # Which OpenStack roles can use RGW?
rgw_keystone_token_cache_size = 500 # How many tokens RGW should remember (cache)
rgw_keystone_revocation_interval = 600 # How often RGW checks if tokens were cancelled (seconds)
rgw_keystone_implicit_tenants = true # Automatically maps Keystone projects to RGW tenants (Recommended)
# RGW needs to talk to Keystone to check user tokens. It needs credentials for this.
# !! Security Warning: Don't put passwords directly in ceph.conf if you can avoid it !!
# Option 1: Dedicated OpenStack User (Better Security)
# Create a user in OpenStack (like 'rgw-checker') in a project (like 'service')
# Give it a role that can validate tokens (like 'service' or 'admin').
rgw_keystone_admin_user = rgw-checker
rgw_keystone_admin_project = service
rgw_keystone_admin_password = YOUR_RGW_KEYSTONE_USER_PASSWORD # Keep this password safe! Use Ceph secrets or a protected file if possible.
# Option 2 (Not Ideal): Use the main admin user
# rgw_keystone_admin_user = admin
# rgw_keystone_admin_project = admin
# rgw_keystone_admin_password = YOUR_ADMIN_PASSWORD
- Security: Putting passwords in plain text is risky. Look into storing the password in a separate file (rgw_keystone_admin_password_file = /path/to/secure/file) or using Ceph’s built-in secrets management if your version supports it.
- rgw_keystone_accepted_roles: Adjust this list based on who should have access according to your security rules.
- rgw_keystone_implicit_tenants: This makes life easier by automatically creating RGW tenants that match OpenStack projects when users log in.
3. Restart RGW Service
Make the Keystone changes take effect by restarting RGW:
sudo systemctl restart ceph-radosgw@${RGW_INSTANCE_NAME}
Now, OpenStack users with the right roles should be able to use S3 or Swift tools to connect to RGW using their regular OpenStack login.
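A quick end-to-end test (assuming the user has python-openstackclient installed and their OpenStack credentials sourced) is to create a container and upload an object through the endpoint you registered:
# As a regular OpenStack user, exercise the Swift-compatible API
echo "hello from rgw" > hello.txt
openstack container create smoke-test
openstack object create smoke-test hello.txt
openstack object list smoke-test
openstack container delete --recursive smoke-test # Clean up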
We also recommend Daniel Persson’s tutorial on how to install the Ceph RADOS Gateway to enable the S3 API on your Ceph cluster, if you’d like to follow along with his video!
Storage Rules and Limits (Quotas)
Manage how much storage people can use by setting quotas.
1. Set Default Quota Settings (Optional, in ceph.conf)
You can put some default limits in ceph.conf under the [client.${RGW_INSTANCE_NAME}] section:
[client.${RGW_INSTANCE_NAME}]
# ... other settings ...
rgw_user_quota_bucket_sync_interval = 180 # How often RGW updates user totals from bucket stats (seconds)
rgw_user_quota_sync_interval = 600 # How often RGW updates global user totals (seconds)
rgw_bucket_quota_ttl = 600 # How long RGW caches bucket quota info (seconds)
# You can set global defaults too (optional):
# rgw_max_put_size = 5368709120 # Example: 5 GiB max object size
# rgw_bucket_default_quota_max_objects = 1000000 # Example default object limit per bucket
# rgw_bucket_default_quota_max_size = 1099511627776 # Example default size limit per bucket (1TiB)
Restart RGW if you add or change these.
2. Set Quotas for Users/Buckets (Using radosgw-admin)
Use the radosgw-admin tool to set quotas for specific RGW users. If you’re using Keystone with rgw_keystone_implicit_tenants = true, the RGW User ID (--uid) is usually the OpenStack Project ID.
# Set quota for a specific user (using their Keystone Project ID)
# Limit their total storage to 1 TB and 2 million objects across all their buckets
radosgw-admin quota set --quota-scope=user --uid='{KEYSTONE_PROJECT_ID}' --max-size=1T --max-objects=2000000
radosgw-admin quota enable --quota-scope=user --uid='{KEYSTONE_PROJECT_ID}' # Turn the quota on
# Set quota for just one specific bucket owned by that user
# Limit this bucket to 100,000 objects
radosgw-admin quota set --quota-scope=bucket --bucket='{BUCKET_NAME}' --max-objects=100000 --uid='{KEYSTONE_PROJECT_ID}'
radosgw-admin quota enable --quota-scope=bucket --bucket='{BUCKET_NAME}' --uid='{KEYSTONE_PROJECT_ID}'
# See the current quotas
radosgw-admin quota get --quota-scope=user --uid='{KEYSTONE_PROJECT_ID}'
radosgw-admin quota get --quota-scope=bucket --bucket='{BUCKET_NAME}' --uid='{KEYSTONE_PROJECT_ID}'
3. Check Usage
See how much storage a specific user (Project ID) is using:
# Get a usage summary
radosgw-admin usage show --uid='{KEYSTONE_PROJECT_ID}'
# Get usage for a specific time period
radosgw-admin usage show --uid='{KEYSTONE_PROJECT_ID}' --start-date='2025-04-01' --end-date='2025-04-30'
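Note that usage show only returns data if usage logging is enabled; if the output comes back empty, turn it on in the RGW section of ceph.conf and restart the gateway:
[client.${RGW_INSTANCE_NAME}]
# ... other settings ...
rgw_enable_usage_log = true # Required for radosgw-admin usage show to have data to report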
OpenStack Service Integration
Set up OpenStack services to use RGW for storage.
1. Configuring Glance with RGW
Keep OpenStack images right in RGW.
Create Glance Bucket:
# Run this as an OpenStack admin or a user with RGW admin rights
# The RGW user ID (--uid) should be the Project ID of the Glance service user in Keystone
radosgw-admin bucket create --bucket=glance --uid='{GLANCE_SERVICE_PROJECT_ID}'
# You might need to set permissions (ACLs) depending on your setup
Edit glance-api.conf:
[glance_store]
stores = swift, http, file # Add 'swift' to the list of storage options
default_store = swift # Make RGW (via Swift API) the default place to store images
swift_store_auth_version = 3
swift_store_auth_address = http://keystone.example.com:5000/v3 # Your Keystone v3 address
swift_store_user = service:glance # Format: {project_name}:{user_name} OR {project_id}:{user_id}
swift_store_key = YOUR_GLANCE_SERVICE_PASSWORD
swift_store_container = glance # The bucket name we just created
swift_store_create_container_on_put = false # Good idea to create the bucket beforehand
# swift_store_endpoint_type = publicURL # Or internalURL, adminURL - depends on your network
# Glance Image Cache Settings (Optional but helpful)
# Check your Glance version docs for where these go ([DEFAULT] or [glance_store])
image_cache_dir = /var/lib/glance/image-cache/
image_cache_stall_time = 86400 # 1 day
image_cache_max_size = 10737418240 # 10 GiB
Make sure you have a glance user in the service project (or whichever project you use for services) in Keystone. It needs a role (like admin or service) allowing it to talk to Keystone and RGW. Put its password in place of YOUR_GLANCE_SERVICE_PASSWORD.
Restart Glance Services:
sudo systemctl restart openstack-glance-api # Service name might vary
sudo systemctl restart openstack-glance-registry # If you run this separately
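To confirm images are really landing in RGW, upload a small test image and watch the glance bucket’s stats grow (the image file name here is just a placeholder):
# Upload a test image through Glance
openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 cirros-rgw-test
# The glance bucket's object count and size should increase
radosgw-admin bucket stats --bucket=glance --uid='{GLANCE_SERVICE_PROJECT_ID}'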
2. Configuring Cinder Backups with RGW
Use RGW to store backups of Cinder volumes.
Create Cinder Backups Bucket:
radosgw-admin bucket create --bucket=cinder-backups --uid='{CINDER_SERVICE_PROJECT_ID}'
Edit cinder.conf:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = ${RGW_PUBLIC_URL}/swift/v1 # Your RGW Swift endpoint address
backup_swift_auth_url = http://keystone.example.com:5000/v3
backup_swift_user = service:cinder # Format: {project_name}:{user_name}
backup_swift_key = YOUR_CINDER_SERVICE_PASSWORD
backup_swift_container = cinder-backups
backup_swift_auth_version = 3
# backup_swift_endpoint_type = publicURL # Or internalURL
backup_swift_create_container_on_put = false
Again, make sure you have a cinder user in your service project in Keystone with the right permissions and password.
Restart Cinder Services:
sudo systemctl restart openstack-cinder-backup # Service name might vary
sudo systemctl restart openstack-cinder-volume
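For an end-to-end check, back up an existing volume and confirm objects appear in the cinder-backups bucket (replace the volume name with one of yours):
# Back up an existing volume
openstack volume backup create --name rgw-test-backup my-volume
openstack volume backup list
# Confirm the backup chunks landed in RGW
radosgw-admin bucket stats --bucket=cinder-backups --uid='{CINDER_SERVICE_PROJECT_ID}'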
3. Configuring Nova Ephemeral Storage (Using Ceph RBD – Not RGW)
People often set this up when using Ceph with OpenStack, but it uses Ceph’s block storage (RBD), not the object storage gateway (RGW). It’s a different part of Ceph.
Make Sure Ceph Pool Exists: You need a Ceph pool for VM disks (like vms).
sudo ceph osd pool create vms 64 64 # Adjust PG count for your cluster size
sudo rbd pool init vms
Configure nova.conf (on compute nodes):
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# If using Ceph authentication (recommended), point Nova at the Ceph user and libvirt secret:
rbd_user = cinder # Or a dedicated 'nova' user
rbd_secret_uuid = ${UUID_of_secret_in_libvirt} # Needs setup in libvirt
This needs extra setup: creating Ceph credentials for Nova/Cinder (like client.cinder or client.nova) and telling libvirt how to use them via secrets. That’s a separate task.
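If you want a rough picture of that flow, here is a minimal, hedged sketch; the client.nova name, the capability profile, and the UUID handling are placeholders to adapt to your own cluster:
# Create a Ceph identity for Nova (capability profile shown is a common minimal example)
sudo ceph auth get-or-create client.nova mon 'profile rbd' osd 'profile rbd pool=vms' \
  -o /etc/ceph/ceph.client.nova.keyring
# Register the key with libvirt on each compute node (reuse this UUID as rbd_secret_uuid)
UUID=$(uuidgen)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'><name>client.nova secret</name></usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret ${UUID} --base64 $(sudo ceph auth get-key client.nova)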
Restart Nova Compute:
sudo systemctl restart openstack-nova-compute # Service name might vary
4. Heat Template Storage Example
Users can work with RGW directly in Heat templates using the Swift resource types.
Heat Template Snippet:
resources:
  my_object_container:
    type: OS::Swift::Container
    properties:
      name: heat-deployment-artifacts
      X-Container-Read: ".r:*" # Example: Make container readable by anyone
  config_file_object:
    type: OS::Swift::Object
    properties:
      container: { get_resource: my_object_container }
      name: config/app_v1.conf
      content: |
        # Your config file content goes here
        setting1 = value1
        feature_enabled = true
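To try a template like this, save it to a file (the name below is just an example) and launch a stack:
# Launch the stack and check that the container appears in the object store
openstack stack create -t object-store-demo.yaml artifacts-stack
openstack stack resource list artifacts-stack
openstack container list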
Applying Lifecycle Policy via radosgw-admin:
You can automatically manage objects created by Heat (or anything else) with lifecycle policies. Create a JSON file (say, lifecycle.json):
{
"Rules": [
{
"ID": "DeleteOldLogs",
"Status": "Enabled",
"Filter": {
"Prefix": "logs/"
},
"Expiration": {
"Days": 30
}
},
{
"ID": "DeleteOldArtifactsAfter90d",
"Status": "Enabled",
"Filter": {
"Prefix": "artifacts/"
},
"Expiration": {
"Days": 90
}
}
]
}
Apply It:
# User needs permissions (like RGW admin or bucket owner)
radosgw-admin bucket lifecycle set --bucket=heat-deployment-artifacts --lifecycle=lifecycle.json --uid='{KEYSTONE_PROJECT_ID}'
Check Bucket Stats:
radosgw-admin bucket stats --bucket=heat-deployment-artifacts --uid='{KEYSTONE_PROJECT_ID}'
Connecting these services lets OpenStack use Ceph RGW effectively for different storage needs, making your storage backend more unified and scalable.
RGW Performance Management
Keep an eye on RGW performance and tweak it for the best results.
Key Metrics to Monitor
- RGW Request Speed (Latency): How long does RGW take to handle requests (GETs, PUTs, etc.)? Track averages and high percentiles (like 95th or 99th).
- RGW Throughput: How much data is moving (MB/s or GB/s) and how many requests per second are happening (Ops/s).
- Ceph Cluster Health: How are the underlying Ceph OSDs doing? Check their latency, recovery activity, and the network between them and RGW. RGW’s speed really depends on how the Ceph cluster is doing.
- RGW Server Resources: Check CPU, memory, and network use on the servers running the RGW service.
- Keystone Response Time: If using Keystone auth, how fast is Keystone responding to RGW’s checks?
Monitoring Tools
- Ceph Dashboard: Can show basic RGW performance graphs if you have it set up.
- ceph -s / ceph health detail: Gives you the overall cluster status.
- radosgw-admin perf dump: Gets detailed performance numbers from the running RGW processes.
- radosgw-admin ops log: Shows current and recent RGW operations, helpful for finding problems.
- ceph tell osd.* perf dump: Gets performance numbers directly from the OSDs.
- Prometheus Node Exporter and Ceph Exporter: Good choices for collecting time-series data to view in Grafana dashboards.
- System Tools: Standard Linux tools like top, htop, iostat, netstat on the RGW servers.
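If you want raw numbers quickly, one option is the RGW daemon’s admin socket on the gateway host; the socket path and counter names vary by Ceph release, so treat this as a sketch:
# Dump live RGW counters via the daemon admin socket (adjust the socket path to your instance)
sudo ceph --admin-daemon /var/run/ceph/ceph-client.${RGW_INSTANCE_NAME}.*.asok perf dump | jq '.rgw'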
Fixing Common Speed Problems
- High Latency (Slow Requests):
  - Check the main Ceph cluster first (ceph health detail). Slow OSDs or network problems are often the cause.
  - Use radosgw-admin ops log show to see if specific operations are stuck or slow.
  - If the RGW server has spare CPU but requests are slow, you might try increasing RGW worker threads (rgw_num_rados_handles, rgw_thread_pool_size in ceph.conf). Test carefully!
  - Improve the network between clients, RGW servers, and the Ceph OSDs.
- Keystone Authentication Bottlenecks:
  - If RGW is checking tokens constantly, try increasing rgw_keystone_token_cache_size.
  - Make sure the network path to your Keystone server (rgw_keystone_url) is fast.
  - Check if Keystone itself is slow.
- Slow Bucket Listing:
  - Listing buckets with millions (or billions!) of objects gets slow. Look into enabling bucket index sharding. This changes how the bucket info is stored, so plan carefully. See Ceph docs for rgw_override_bucket_index_max_shards.
  - It’s better if applications can avoid listing huge buckets. Using prefixes and delimiters in requests is much faster.
- High CPU/Memory on RGW Server:
  - Add more RGW servers and put a load balancer in front of them.
  - Figure out which operations are using the most resources (e.g., huge file transfers, complex queries).
  - Tune frontend settings (like beast connection limits) if needed.
Tips for Better Performance
- Tune Your Ceph Cluster: A healthy Ceph cluster is the most important factor. Make sure you have the right number of PGs, good network setup, and decent hardware (especially SSDs for Ceph journals/metadata).
- RGW Server Hardware: Give RGW servers enough CPU, RAM, and network speed. Running them on separate machines keeps their resources apart from OSDs.
- Load Balancing: Use multiple RGW instances behind a load balancer (like HAProxy or Nginx) to handle more traffic and provide failover (see the HAProxy sketch after this list).
- Caching: Adjust Keystone token caching (rgw_keystone_token_cache_size). RGW also has some internal caches you might tune (rgw_cache_enabled, etc.), but be careful.
- Frontend Tuning: Experiment with rgw_thread_pool_size and possibly Beast settings based on your workload. Make changes one step at a time and watch the results.
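As a rough illustration of the load-balancing tip, a minimal HAProxy snippet might look like this (hostnames and ports are placeholders; TLS termination, timeouts, and health-check details are left out):
frontend rgw_http
    bind *:7480
    mode http
    default_backend rgw_pool

backend rgw_pool
    mode http
    balance roundrobin
    option httpchk GET /
    server rgw1 rgw1.example.com:7480 check
    server rgw2 rgw2.example.com:7480 check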
RGW Advanced Features
Check out some features beyond the basics.
1. Multi-Site Replication
Set up RGW to copy data between different Ceph clusters (called zones), maybe in different cities or data centers. This involves setting up Realms, Zonegroups, and Zones.
Basic Idea:
- Create a Realm: A top-level name for your setup.
- Create a Zonegroup: A group of zones, often for a region. One zonegroup is the “master”.
- Create Zones: Define each location (cluster) as a zone within a zonegroup. One zone is the “master” zone for the group. You’ll need RGW addresses and sync credentials.
- Create a Sync User: A special RGW user dedicated to handling replication traffic.
- Update Configuration: Commit the changes to start replication.
- Check Status: Use radosgw-admin sync status to see how replication is going.
(Heads up: Setting up multi-site is tricky and needs careful planning. Read the official Ceph RGW docs thoroughly.)
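To give a feel for the commands involved, here is a heavily abbreviated, hedged sketch of the master-zone side only; the realm, zonegroup, and zone names and endpoints are placeholders, and the secondary-zone and period-pull steps are omitted:
# On the master cluster: create realm, master zonegroup, and master zone
radosgw-admin realm create --rgw-realm=prod --default
radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1.example.com:7480 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
  --endpoints=http://rgw1.example.com:7480 --master --default
# Create a system user whose keys the secondary zone will use to sync
radosgw-admin user create --uid=sync-user --display-name="Sync User" --system
# Commit the configuration, then check replication once both sides are up
radosgw-admin period update --commit
radosgw-admin sync status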
2. Data Lifecycle Management
Automatically delete or move objects after a certain time or based on other rules.
Create a Policy File (JSON, e.g., policy.json):
{
"Rules": [
{
"ID": "ExpireOldLogs",
"Status": "Enabled",
"Filter": { "Prefix": "logs/" }, // Only apply to objects starting with "logs/"
"Expiration": { "Days": 30 } // Delete after 30 days
},
{
"ID": "AbortIncompleteMultipartUploads", // Clean up unfinished uploads
"Status": "Enabled",
"Filter": {}, // Apply to all objects
"AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
}
// You can add rules to move data to cheaper storage (Transitions), delete old versions, etc.
]
}
Apply the Policy:
# Needs admin or bucket owner permissions
radosgw-admin bucket lifecycle set --bucket=your-bucket --lifecycle=policy.json --uid='{PROJECT_ID}'
# See the current policy
radosgw-admin bucket lifecycle get --bucket=your-bucket --uid='{PROJECT_ID}'
# Remove the policy
radosgw-admin bucket lifecycle delete --bucket=your-bucket --uid='{PROJECT_ID}'
3. Encryption and Data Tagging
Server-Side Encryption (SSE-S3): RGW handles the encryption keys. You can set buckets to automatically encrypt new objects.
# Check if a bucket has encryption enabled (needs newer Ceph versions)
# radosgw-admin bucket encryption get --bucket=secure-bucket --uid='{PROJECT_ID}'
# Usually, you set default encryption using S3 tools, not radosgw-admin
# Example using AWS CLI to make AES256 default for new objects:
# aws s3api put-bucket-encryption --endpoint-url ${RGW_S3_URL} --bucket secure-bucket \
# --server-side-encryption-configuration '{ "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" } } ] }'
RGW also supports encryption using external KMS (like Barbican or Vault) or keys provided by the client (SSE-C).
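For client-provided keys (SSE-C), the client supplies the key with every request. A hedged sketch with the AWS CLI follows; the bucket, file, and key file names are placeholders, and note that RGW normally requires an HTTPS endpoint for SSE-C unless rgw_crypt_require_ssl is disabled:
# Generate a 256-bit key and upload an object encrypted with it (SSE-C)
openssl rand -out sse-c.key 32
aws s3 cp secret-report.pdf s3://secure-bucket/secret-report.pdf \
  --endpoint-url ${RGW_S3_URL} --sse-c AES256 --sse-c-key fileb://sse-c.key
# The same key must be provided again to download the object
aws s3 cp s3://secure-bucket/secret-report.pdf ./secret-report.pdf \
  --endpoint-url ${RGW_S3_URL} --sse-c AES256 --sse-c-key fileb://sse-c.key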
Data Tagging: Add key-value tags to objects to help organize them or control access.
# Add tags using radosgw-admin (uses metadata format)
radosgw-admin object set-attrs --bucket=your-bucket --object=report.pdf --uid='{PROJECT_ID}' --attr 'x-amz-meta-project=phoenix' --attr 'x-amz-meta-status=approved'
# Add tags using S3 tools (standard way)
aws s3api put-object-tagging --endpoint-url ${RGW_S3_URL} --bucket your-bucket --key report.pdf \
--tagging '{"TagSet": [{ "Key": "project", "Value": "phoenix" }, { "Key": "status", "Value": "approved" }]}'
# See an object's tags
aws s3api get-object-tagging --endpoint-url ${RGW_S3_URL} --bucket your-bucket --key report.pdf
Tag-Based Access Control (S3 Bucket Policy): Write bucket policies that grant access based on tags.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": "*", // Or a specific user/role
"Action": ["s3:GetObject"],
"Resource": "arn:aws:s3:::your-bucket/*", // Your bucket name here
"Condition": {
"StringEquals": {
// Only allow GET if the object has a 'project' tag with value 'phoenix'
"s3:ExistingObjectTag/project": "phoenix"
}
}
}]
}
Apply this policy using aws s3api put-bucket-policy or radosgw-admin bucket policy set.
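For example, with the policy saved to a local file (the file name is just an example), applying it via the S3 API looks like this:
# Apply the tag-based bucket policy from a local JSON file
aws s3api put-bucket-policy --endpoint-url ${RGW_S3_URL} --bucket your-bucket --policy file://tag-policy.json
# Confirm it took effect
aws s3api get-bucket-policy --endpoint-url ${RGW_S3_URL} --bucket your-bucket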
These advanced features give you tools for handling data location, cleanup, security, and organization in your OpenStack cloud’s object storage.
Wrapping Up: Guide to Ceph RADOS Gateway (RGW) in OpenStack
Ceph RADOS Gateway, paired with OpenStack, gives you scalable, dependable object storage that works with common S3 and Swift APIs. By using Keystone for logins, acting as storage for services like Glance and Cinder backups, and providing features like multi-site replication, lifecycle rules, and detailed security options, RGW gives you a solid base for cloud storage.
Good setup, monitoring, and tuning are important for keeping things running smoothly. By following the steps here, admins can successfully set up and manage Ceph RGW in their OpenStack clouds, handling different application needs and growing as needed.
Read More on the OpenMetal Blog