git clone https://github.com/bitswalk/ldf.git
cd ldf
task build
This produces build/bin/ldfd and build/bin/ldfctl.
sudo cp build/bin/ldfd /usr/local/bin/
sudo cp build/bin/ldfctl /usr/local/bin/
sudo useradd -r -s /usr/sbin/nologin -d /var/lib/ldfd ldfd
sudo mkdir -p /var/lib/ldfd
sudo chown ldfd:ldfd /var/lib/ldfd
sudo mkdir -p /etc/ldfd
sudo cp docs/samples/ldfd.yml /etc/ldfd/ldfd.yml
sudo chown ldfd:ldfd /etc/ldfd/ldfd.yml
Edit /etc/ldfd/ldfd.yml to set your desired configuration. See Configuration for all options.
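As a starting point, a minimal configuration using only the local storage keys documented in the Storage section below might look like this (all other options are left at their defaults; see the sample file for the full set):

```yaml
storage:
  type: local
  local:
    path: /var/lib/ldfd/artifacts
```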
sudo -u ldfd ldfd --config /etc/ldfd/ldfd.yml
A sample systemd unit file is provided at docs/samples/ldfd.service.
After building and installing the binary (see Bare Metal steps 1-4 above):
sudo cp docs/samples/ldfd.service /etc/systemd/system/ldfd.service
sudo systemctl daemon-reload
For S3 storage credentials, create an environment file:
sudo touch /etc/ldfd/ldfd.env
sudo chmod 600 /etc/ldfd/ldfd.env
sudo chown ldfd:ldfd /etc/ldfd/ldfd.env
Add credentials to /etc/ldfd/ldfd.env:
LDFD_STORAGE_S3_ACCESS_KEY=your-access-key
LDFD_STORAGE_S3_SECRET_KEY=your-secret-key
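Since a missing key only surfaces at runtime, a small sanity check, sketched here, can confirm the file defines both variables before the service is started (check_env is a hypothetical helper, not part of ldf):

```shell
# check_env FILE: report any S3 credential keys missing from FILE.
check_env() {
  for key in LDFD_STORAGE_S3_ACCESS_KEY LDFD_STORAGE_S3_SECRET_KEY; do
    grep -q "^${key}=" "$1" || echo "missing: $key"
  done
}

# Example (requires read access to the file):
# check_env /etc/ldfd/ldfd.env
```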
Then uncomment the EnvironmentFile line in the service file:
EnvironmentFile=-/etc/ldfd/ldfd.env
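Alternatively, rather than editing the installed unit, the same line can be supplied through a standard systemd drop-in (run sudo systemctl edit ldfd, or create the file below by hand and then run sudo systemctl daemon-reload):

```ini
# /etc/systemd/system/ldfd.service.d/override.conf
[Service]
EnvironmentFile=-/etc/ldfd/ldfd.env
```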
sudo systemctl enable ldfd
sudo systemctl start ldfd
sudo systemctl status ldfd
sudo journalctl -u ldfd -f
The provided unit file includes security hardening:
- Runs as the ldfd user (non-root)
- ProtectSystem=strict – Read-only filesystem except allowed paths
- ProtectHome=yes – No access to home directories
- PrivateTmp=yes – Isolated /tmp
- NoNewPrivileges=yes – Cannot gain additional privileges
- ReadWritePaths=/var/lib/ldfd – Only writable path
- LimitNOFILE=65536 – File descriptor limit

A multi-stage Dockerfile is provided at tools/docker/Dockerfile.
docker build -f tools/docker/Dockerfile -t ldf:latest .
docker run -d \
--name ldfd \
-p 8443:8443 \
-v ldfd-data:/var/lib/ldfd \
ldf:latest
Mount a config file:
docker run -d \
--name ldfd \
-p 8443:8443 \
-v ldfd-data:/var/lib/ldfd \
-v /path/to/ldfd.yml:/opt/ldf/config/ldfd.yml:ro \
ldf:latest
Pass credentials via environment variables:
docker run -d \
--name ldfd \
-p 8443:8443 \
-v ldfd-data:/var/lib/ldfd \
-e LDFD_STORAGE_S3_ENDPOINT=s3.example.com \
-e LDFD_STORAGE_S3_PROVIDER=garage \
-e LDFD_STORAGE_S3_BUCKET=ldf-distributions \
-e LDFD_STORAGE_S3_ACCESS_KEY=your-key \
-e LDFD_STORAGE_S3_SECRET_KEY=your-secret \
ldf:latest
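For longer-lived deployments, the same run can be captured in a Compose file; this sketch simply mirrors the flags above (the environment values are placeholders):

```yaml
services:
  ldfd:
    image: ldf:latest
    container_name: ldfd
    ports:
      - "8443:8443"
    volumes:
      - ldfd-data:/var/lib/ldfd
    environment:
      LDFD_STORAGE_S3_ENDPOINT: s3.example.com
      LDFD_STORAGE_S3_PROVIDER: garage
      LDFD_STORAGE_S3_BUCKET: ldf-distributions
      LDFD_STORAGE_S3_ACCESS_KEY: your-key
      LDFD_STORAGE_S3_SECRET_KEY: your-secret

volumes:
  ldfd-data:
```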
The Docker image:
- Runs as the ldf user (uid 1000)
- Contains the ldfd and ldfctl binaries and the WebUI assets
- Expects its configuration at /opt/ldf/config/ldfd.yml

Local storage is the default. Artifacts are stored in a directory on the filesystem:
storage:
type: local
local:
path: /var/lib/ldfd/artifacts
Ensure the ldfd user has write access to this directory.
ldfd supports four S3 provider types, each with different URL construction:
storage:
type: s3
s3:
provider: garage
endpoint: s3.example.com
region: garage
bucket: ldf-distributions
Garage uses api.{endpoint} for the API and {bucket}.{endpoint} for web access.
storage:
type: s3
s3:
provider: minio
endpoint: minio.example.com:9000
region: us-east-1
bucket: ldf-distributions
MinIO uses the endpoint directly with path-style addressing.
storage:
type: s3
s3:
provider: aws
region: us-east-1
bucket: ldf-distributions
AWS uses s3.{region}.amazonaws.com automatically.
storage:
type: s3
s3:
provider: other
endpoint: s3.example.com
region: us-east-1
bucket: ldf-distributions
Generic provider uses path-style addressing with the endpoint directly.
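The endpoint rules above can be summarized in a small shell function (a simplification for illustration only: scheme, ports, and request signing are ignored, and "path-style" means the bucket is appended to the request path rather than the hostname):

```shell
# s3_host PROVIDER ENDPOINT REGION: print the API hostname implied
# by each provider type, per the rules described above.
s3_host() {
  provider=$1 endpoint=$2 region=$3
  case $provider in
    garage) echo "api.${endpoint}" ;;            # web access uses {bucket}.{endpoint}
    minio)  echo "${endpoint}" ;;                # path-style: bucket goes in the path
    aws)    echo "s3.${region}.amazonaws.com" ;; # endpoint derived from region
    other)  echo "${endpoint}" ;;                # generic path-style
  esac
}

# Example:
# s3_host garage s3.example.com garage   -> api.s3.example.com
```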
For all S3 providers, pass credentials via environment variables rather than config files:
export LDFD_STORAGE_S3_ACCESS_KEY="your-access-key"
export LDFD_STORAGE_S3_SECRET_KEY="your-secret-key"