# Comity
A distributed FUSE filesystem using FoundationDB for metadata and S3-compatible storage (Garage, MinIO, SeaweedFS) for data.
## Features
- Distributed: Mount from multiple machines simultaneously
- POSIX-compliant: Full filesystem semantics with distributed locking
- Hybrid storage: Small files in FDB for low latency, large files in S3
- Adaptive chunking: Optimized chunk sizes based on file size
- Write buffering: Batches small writes for better throughput
- Multiple backends: S3-compatible (Garage, MinIO) or SeaweedFS direct API
- TLS support: Secure connections to FoundationDB clusters
- Background GC: Automatic cleanup of orphaned chunks
- Config file: INI-style configuration with sensible defaults
## Performance
Tested on a distributed setup with FoundationDB and Garage:
| Operation | Throughput |
|---|---|
| Sequential Write | 145 MiB/s |
| Sequential Read | 278 MiB/s |
| Random Read (cached) | 271 MiB/s |
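Throughput depends on chunk size, caching, and the backing cluster, so treat these numbers as a reference point. A rough way to sanity-check sequential throughput on your own mount (the mount point and file size below are placeholders):

```bash
# Sequential write: stream 1 GiB and flush to the filesystem at the end
dd if=/dev/zero of=/mnt/comity/bench.bin bs=1M count=1024 conv=fsync status=progress

# Sequential read back
dd if=/mnt/comity/bench.bin of=/dev/null bs=1M status=progress

# Clean up
rm /mnt/comity/bench.bin
```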
## Requirements

### FoundationDB Client Library
Install the FoundationDB client package for your distribution:
**Debian/Ubuntu:**

```bash
wget https://github.com/apple/foundationdb/releases/download/7.1.61/foundationdb-clients_7.1.61-1_amd64.deb
sudo dpkg -i foundationdb-clients_7.1.61-1_amd64.deb
```

**Fedora/RHEL:**

```bash
wget https://github.com/apple/foundationdb/releases/download/7.1.61/foundationdb-clients-7.1.61-1.el9.x86_64.rpm
sudo dnf install ./foundationdb-clients-7.1.61-1.el9.x86_64.rpm
```
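To confirm the client library is visible to the dynamic linker before installing Comity (a quick sanity check, not a required step):

```bash
ldconfig -p | grep libfdb_c
```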
### Python 3.11+

```bash
pip install comity
```

Or from source:

```bash
git clone https://github.com/your-repo/comity.git
cd comity
pip install -e .
```
## Usage

### Basic Mount

```bash
comity /mnt/comity \
  --s3-endpoint http://localhost:3900 \
  --s3-access-key YOUR_ACCESS_KEY \
  --s3-secret-key YOUR_SECRET_KEY \
  --s3-bucket comity
```

### With Config File

```bash
comity /mnt/comity --config /etc/comity.conf
```

### Foreground Mode (for debugging)

```bash
comity /mnt/comity --foreground --log-level DEBUG
```

### Unmount

```bash
fusermount -u /mnt/comity
# or
umount /mnt/comity
```
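If the mount point reports as busy, `fusermount` can detach it lazily so the unmount completes once remaining file handles are closed:

```bash
fusermount -u -z /mnt/comity
```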
## Configuration

Comity supports INI-style config files. Default locations (checked in order):

- Path specified via `--config`
- `/etc/comity.conf`
- `/etc/comity/config.conf`
- `~/.config/comity/config.conf`

Example config:

```ini
[s3]
endpoints = http://node1:3900,http://node2:3900,http://node3:3900
bucket = comity
access_key = GKxxxxxxxx
secret_key = xxxxxxxx
region = garage
connections = 100
[fdb]
cluster = /etc/foundationdb/fdb.cluster
threads = 16
# TLS settings (optional)
tls_cert_path = /etc/comity/client.crt
tls_key_path = /etc/comity/client.key
tls_ca_path = /etc/comity/ca.crt
[performance]
cache_size = 256
write_buffer_size = 131072
hybrid_storage = true
hybrid_threshold = 64
noatime = true
adaptive_chunks = true
[gc]
interval = 300
dry_run = false
[fuse]
allow_other = true
```

### Key Options

| Option | Default | Description |
|---|---|---|
| `--s3-endpoints` | - | Comma-separated S3 endpoints for round-robin |
| `--hybrid-storage` | off | Store small chunks in FDB, large in S3 |
| `--hybrid-threshold` | 64 | Threshold in KB for hybrid storage |
| `--write-buffer-size` | 8192 | Write buffer per file (KB) |
| `--cache-size` | 256 | Chunk cache size (MB) |
| `--noatime` | off | Don't update access times (improves perf) |
| `--gc-interval` | 0 | Background GC interval in seconds |
| `--fdb-threads` | 16 | FDB thread pool size |
| `--s3-connections` | 100 | Max parallel S3 connections |
See `comity --help` for all options.
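As an illustration, several of these options can be combined with a config file on the command line. The values below are examples only, and `--hybrid-storage`/`--noatime` are shown as plain switches to match their "off" defaults above:

```bash
comity /mnt/comity \
  --config /etc/comity.conf \
  --hybrid-storage \
  --hybrid-threshold 64 \
  --cache-size 256 \
  --noatime \
  --gc-interval 300 \
  --s3-connections 100
```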
## Architecture

```
┌─────────────┐     ┌─────────────┐
│   Client    │     │   Client    │
│  (comity)   │     │  (comity)   │
└──────┬──────┘     └──────┬──────┘
       │                   │
       └─────────┬─────────┘
                 │
     ┌───────────┴───────────┐
     │                       │
     ▼                       ▼
┌─────────┐           ┌─────────────┐
│   FDB   │           │  S3/Garage  │
│ (meta)  │           │   (data)    │
└─────────┘           └─────────────┘
```
- FoundationDB: Stores all metadata (inodes, directory entries, chunk maps, locks)
- S3/Garage: Stores file content as content-addressed chunks
- Hybrid mode: Stores small chunks (≤64KB) directly in FDB for lower latency
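To make "content-addressed" concrete: a chunk's object key is derived from the chunk's own bytes, so identical data maps to the same key, duplicates are stored once, and unreferenced keys can be garbage-collected. The sketch below illustrates the idea with the AWS CLI against a Garage endpoint; the hash choice, key prefix, and bucket layout are assumptions for the example, not Comity's actual scheme:

```bash
# Hypothetical illustration of content addressing (not Comity's real key layout)
key=$(sha256sum chunk.bin | awk '{print $1}')
aws s3 cp chunk.bin "s3://comity/chunks/${key}" --endpoint-url http://localhost:3900
```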
## Building Debian Packages

```bash
# Install build dependencies
sudo apt install debhelper dh-python pybuild-plugin-pyproject \
    python3-all python3-setuptools python3-poetry-core

# Build all packages (comity + Python dependencies)
cd packages
./build-all.sh

# Install
cd build
sudo apt install ./*.deb
```
## Development

```bash
# Start test infrastructure
docker-compose up -d

# Run tests
python tests/test_filesystem.py

# Run with debug logging
comity /mnt/comity --foreground --log-level DEBUG
```
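For a quick manual smoke test against a mounted instance (the mount point is a placeholder):

```bash
mkdir -p /mnt/comity/smoke
echo "hello" > /mnt/comity/smoke/test.txt
cat /mnt/comity/smoke/test.txt   # should print: hello
rm -r /mnt/comity/smoke
```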
## Storage Backends

### S3-Compatible (Garage, MinIO)

Default backend. Works with any S3-compatible storage:

```bash
comity /mnt/comity \
  --storage-backend s3 \
  --s3-endpoints http://node1:3900,http://node2:3900 \
  --s3-bucket comity
```

### SeaweedFS Direct API

Higher throughput by using the SeaweedFS volume API directly:

```bash
comity /mnt/comity \
  --storage-backend seaweedfs \
  --seaweed-master http://localhost:9333
```
## License
GPL-3.0-only