<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Topics tagged with ceph]]></title><description><![CDATA[A list of topics that have been tagged with ceph]]></description><link>https://board.circlewithadot.net/tags/ceph</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 06:11:18 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/tags/ceph.rss" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The #Ceph at work drives me crazy...]]></title><description><![CDATA[The #Ceph at work drives me crazy... We want to organize the data in subvolumes and subvolume groups, but it seems I'm unable to mount different subvolumes from the same default volume at once. So, ok, then mounting the directories directly from CephFS. Easy going, works on my private Ceph on Proxmox like a charm. Only thing: it is not working at work. Only one mount possible. Oh my... One difference between the two installations: the private cluster runs Proxmox with Debian packages, while the work cluster runs the official container images via podman. Does anyone know if there are issues with containerized Ceph? EDIT: Found the guilty option: "fsc" was causing the problems. Removing that mount option did the trick. Yay! Tomorrow I'll test subvolumes again... #followerpower]]></description><link>https://board.circlewithadot.net/topic/962e2ee5-dee3-44f1-99a9-7c065ab66481/the-ceph-at-work-drives-me-crazy...</link><guid isPermaLink="true">https://board.circlewithadot.net/topic/962e2ee5-dee3-44f1-99a9-7c065ab66481/the-ceph-at-work-drives-me-crazy...</guid><dc:creator><![CDATA[ij@nerdculture.de]]></dc:creator></item></channel></rss>