This guide covers deploying Oxy on Kubernetes using raw YAML manifests. This approach gives you complete control over the deployment configuration and is ideal for advanced users who need customizations that the Helm chart does not expose.
**Consider Helm First:** For most deployments, our Helm chart is the recommended approach, as it handles configuration complexity and provides better maintainability. Use raw manifests only when you need full control or have specific customization requirements.

**Prerequisites:** You’ll need a Kubernetes cluster (>= 1.19) and kubectl configured to access your cluster.
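A quick check that both prerequisites are in place:

```bash
# Confirm kubectl can reach the cluster and the server version is >= 1.19
kubectl cluster-info
kubectl version
```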
## Architecture Overview
Raw Kubernetes deployment consists of these core resources:
| Resource Type | Name | Purpose |
|---|---|---|
| Namespace | `oxy` | Isolated environment for all Oxy resources |
| StatefulSet | `oxy-app` | Main application deployment with persistent identity |
| Service | `oxy-app-service` | Internal service discovery and load balancing |
| PersistentVolumeClaim | `oxy-app-pvc` | Persistent storage for application data |
| ConfigMap | `oxy-app-config` | Optional non-sensitive configuration data |
| Secret | `oxy-app-secrets` | Optional sensitive configuration and credentials |
| Ingress | `oxy-app-ingress` | Optional external access |
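The sections below define each resource as a numbered manifest file (`01-namespace.yaml` through `07-ingress.yaml`). Once written, apply them in numbered order so that dependencies (namespace, configuration, storage) exist before the workload:

```bash
kubectl apply -f 01-namespace.yaml
kubectl apply -f 02-configmap.yaml
kubectl apply -f 03-secret.yaml
kubectl apply -f 04-pvc.yaml
kubectl apply -f 05-statefulset.yaml
kubectl apply -f 06-services.yaml
kubectl apply -f 07-ingress.yaml
```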
## Basic Deployment
### 1. Namespace

Create a dedicated namespace for Oxy:

```yaml
# 01-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
```
### 2. ConfigMap (Optional)

Configure non-sensitive application settings:

```yaml
# 02-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: oxy-app-config
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
data:
  # Application configuration
  OXY_STATE_DIR: "/workspace/oxy_data"
  OXY_PORT: "3000"
  OXY_HOST: "0.0.0.0"
  RUST_LOG: "info"
```
### 3. Secret (Optional)

Store sensitive configuration data:

```yaml
# 03-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: oxy-app-secrets
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
type: Opaque
data:
  # Base64-encoded values - use: echo -n "your-secret" | base64
  OPENAI_API_KEY: "" # Your base64-encoded API key
  OXY_DATABASE_URL: "" # Base64-encoded connection string (if using an external DB)
```
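Rather than hand-encoding values, you can let kubectl handle the base64 step; the resulting Secret is equivalent, and the key names must match what the StatefulSet references:

```bash
# kubectl base64-encodes the literals for you
kubectl create secret generic oxy-app-secrets \
  --namespace oxy \
  --from-literal=OPENAI_API_KEY='your-api-key' \
  --from-literal=OXY_DATABASE_URL='postgresql://...'
```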
### 4. PersistentVolumeClaim

Define persistent storage for application data:

```yaml
# 04-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oxy-app-pvc
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # Adjust storageClassName for your cluster
  storageClassName: gp3 # AWS EKS example
```
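The `gp3` class is specific to AWS EKS. List the classes available in your cluster and pick one, or omit `storageClassName` entirely to use the cluster default:

```bash
# The default class is marked "(default)" in the output
kubectl get storageclass
```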
### 5. StatefulSet

Deploy the main Oxy application:

```yaml
# 05-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: oxy-app
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
spec:
  serviceName: oxy-app-headless
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: oxy
      app.kubernetes.io/instance: oxy-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: oxy
        app.kubernetes.io/instance: oxy-app
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: oxy
          image: ghcr.io/oxy-hq/oxy:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          # Environment variables - can be set directly or from a ConfigMap/Secret
          env:
            - name: OXY_STATE_DIR
              value: "/workspace/oxy_data"
            - name: OXY_DATABASE_URL
              value: "postgresql://oxy:password@postgres.svc.cluster.local:5432/oxydb"
            # Enable readonly mode for in-app git/secret management
            # - name: OXY_READONLY
            #   value: "true"
            # Optional: load from the ConfigMap
            # - name: OXY_STATE_DIR
            #   valueFrom:
            #     configMapKeyRef:
            #       name: oxy-app-config
            #       key: OXY_STATE_DIR
            # Optional: load from the Secret
            # - name: OPENAI_API_KEY
            #   valueFrom:
            #     secretKeyRef:
            #       name: oxy-app-secrets
            #       key: OPENAI_API_KEY
            #       optional: true
          # Health checks
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          # Resource requests and limits
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 2Gi
          # Volume mounts
          volumeMounts:
            - name: workspace
              mountPath: /workspace
      # Node scheduling (optional)
      nodeSelector:
        "workload-type": "oxy-app"
        "app": "oxy-app"
      tolerations:
        - key: "oxy-app"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: oxy-app-pvc
```
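After applying the manifest, confirm the pod starts and becomes Ready:

```bash
kubectl rollout status statefulset/oxy-app -n oxy
kubectl get pods -n oxy
# Inspect logs if the pod fails its probes
kubectl logs oxy-app-0 -n oxy
```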
### 6. Services

Create services for internal and external access:

```yaml
# 06-services.yaml
---
# Headless service for the StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: oxy-app-headless
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
spec:
  clusterIP: None
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
---
# Regular service for external access
apiVersion: v1
kind: Service
metadata:
  name: oxy-app-service
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
```
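Before wiring up external access, you can smoke-test the service with a port-forward:

```bash
# Forward local port 8080 to the service's port 80, then open http://localhost:8080
kubectl port-forward svc/oxy-app-service 8080:80 -n oxy
```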
### 7. Ingress (Optional)

Expose Oxy externally. This example targets the AWS Load Balancer Controller:

```yaml
# 07-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oxy-app-ingress
  namespace: oxy
  labels:
    app.kubernetes.io/name: oxy
    app.kubernetes.io/instance: oxy-app
  annotations:
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/group.name: "oxy-shared"
    # Health checks
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/healthcheck-port: "traffic-port"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "30"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "10"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
    # Session stickiness
    alb.ingress.kubernetes.io/target-group-attributes: |
      stickiness.enabled=true,
      stickiness.lb_cookie.duration_seconds=86400,
      stickiness.type=lb_cookie,
      deregistration_delay.timeout_seconds=30,
      load_balancing.algorithm.type=round_robin
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  ingressClassName: alb
  rules:
    - host: oxy.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: oxy-app-service
                port:
                  number: 80
```
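Once the controller provisions the load balancer, its DNS name appears in the ingress status; point your `oxy.your-domain.com` record at it:

```bash
# The ADDRESS column shows the ALB hostname when provisioning completes
kubectl get ingress oxy-app-ingress -n oxy
```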
## Readonly Mode

For users who prefer to manage git repositories and secrets through Oxy’s web interface rather than Kubernetes configuration, enable readonly mode:

```yaml
# Add to the StatefulSet environment variables:
env:
  - name: OXY_READONLY
    value: "true"
```
With readonly mode enabled:
- Git repositories can be configured and managed through the Oxy web UI
- API keys and secrets can be stored and managed in-app
- No need to configure Kubernetes Secrets or git-sync init containers
- Simplifies the Kubernetes deployment while maintaining full functionality
This is ideal for teams who want the benefits of Kubernetes deployment without the complexity of managing external integrations at the infrastructure level.
## Other Configurations
### Service Account with IRSA (AWS)

For production deployments on AWS EKS:

```yaml
# service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oxy-app-sa
  namespace: oxy
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/oxy-app-irsa"
```
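To attach the IAM role to the application pod, reference the service account from the StatefulSet:

```yaml
# Add to StatefulSet spec.template.spec
serviceAccountName: oxy-app-sa
```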
### Git Sync Init Container

Add git synchronization with an init container:

```yaml
# Add to StatefulSet spec.template.spec
initContainers:
  - name: git-sync
    image: k8s.gcr.io/git-sync/git-sync:v3.6.2
    env:
      - name: GIT_SYNC_REPO
        value: "git@github.com:your-org/your-oxy-workspace.git"
      - name: GIT_SYNC_BRANCH
        value: "main"
      - name: GIT_SYNC_ROOT
        value: "/workspace/current"
      - name: GIT_SYNC_ONE_TIME
        value: "true"
      - name: GIT_SYNC_SSH
        value: "true"
    volumeMounts:
      - name: workspace
        mountPath: /workspace
      - name: ssh-key
        mountPath: /etc/git-secret
        readOnly: true

# Add the SSH key volume to spec.template.spec.volumes
volumes:
  - name: ssh-key
    secret:
      secretName: oxy-git-ssh
      defaultMode: 0400
```
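The `oxy-git-ssh` Secret referenced above must exist before the pod starts. git-sync v3 reads the private key from `/etc/git-secret/ssh` and, by default, host keys from `/etc/git-secret/known_hosts`, so the Secret keys should use those names. A sketch, assuming a read-only deploy key at `~/.ssh/oxy-deploy-key` (an illustrative path):

```bash
# Key names "ssh" and "known_hosts" match the paths git-sync v3 expects under /etc/git-secret
ssh-keyscan github.com > /tmp/known_hosts
kubectl create secret generic oxy-git-ssh \
  --namespace oxy \
  --from-file=ssh=$HOME/.ssh/oxy-deploy-key \
  --from-file=known_hosts=/tmp/known_hosts
```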
### External Secrets

For external secret management with the External Secrets Operator:

```yaml
# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: oxy-external-secrets
  namespace: oxy
spec:
  refreshInterval: 5m
  secretStoreRef:
    name: aws-parameter-store
    kind: SecretStore
  target:
    name: oxy-app-secrets
    creationPolicy: Owner
  data:
    - secretKey: OPENAI_API_KEY
      remoteRef:
        key: /oxy/openai-api-key
    - secretKey: OXY_DATABASE_URL
      remoteRef:
        key: /oxy/database-url
```
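This assumes a `SecretStore` named `aws-parameter-store` already exists in the namespace. A minimal sketch of one backed by AWS SSM Parameter Store, authenticating through the IRSA service account from earlier (names and region are illustrative):

```yaml
# secret-store.yaml (illustrative; adjust provider, region, and auth to your setup)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-parameter-store
  namespace: oxy
spec:
  provider:
    aws:
      service: ParameterStore
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: oxy-app-sa
```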
### Backup to S3

For production backups, we recommend Velero for comprehensive backup and restore capabilities. For simple file backups to S3, a CronJob works; keep in mind that because the PVC is ReadWriteOnce, the backup pod can only attach the volume when it is scheduled on the same node as the running application pod:

```yaml
# backup-s3-job.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: oxy-backup-s3
  namespace: oxy
spec:
  schedule: "0 2 * * *" # Daily at 2 AM UTC
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oxy-app-sa # With IRSA for S3 access
          containers:
            - name: backup
              image: amazon/aws-cli:2.15.17
              command:
                - /bin/sh
                - -c
                - |
                  TIMESTAMP=$(date +%Y%m%d-%H%M%S)
                  BACKUP_FILE="/tmp/oxy-backup-${TIMESTAMP}.tar.gz"
                  echo "Creating backup archive..."
                  tar -czf "$BACKUP_FILE" -C /workspace .
                  echo "Uploading to S3..."
                  aws s3 cp "$BACKUP_FILE" "s3://your-oxy-backups/backups/oxy-backup-${TIMESTAMP}.tar.gz"
                  echo "Backup completed: oxy-backup-${TIMESTAMP}.tar.gz"
                  # Optional: clean up old backups (keep last 30 days)
                  aws s3api list-objects-v2 --bucket your-oxy-backups --prefix "backups/" \
                    --query "Contents[?LastModified<='$(date -d '30 days ago' -u +%Y-%m-%dT%H:%M:%SZ)'].Key" \
                    --output text | xargs -I {} aws s3 rm "s3://your-oxy-backups/{}"
              env:
                - name: AWS_DEFAULT_REGION
                  value: "us-east-1"
              volumeMounts:
                - name: workspace
                  mountPath: /workspace
                  readOnly: true
          restartPolicy: OnFailure
          volumes:
            - name: workspace
              persistentVolumeClaim:
                claimName: oxy-app-pvc
```
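To exercise the backup without waiting for the schedule, trigger a one-off job from the CronJob:

```bash
kubectl create job --from=cronjob/oxy-backup-s3 oxy-backup-manual -n oxy
kubectl logs -f job/oxy-backup-manual -n oxy
```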
### Restore from S3

To restore from an S3 backup:

```bash
# 1. Scale down the application
kubectl scale statefulset oxy-app --replicas=0 -n oxy

# 2. Create an interactive restore pod (the --serviceaccount flag was removed
#    from kubectl run, so the service account is set via --overrides)
kubectl run oxy-restore --rm -i --tty --restart=Never \
  --namespace oxy \
  --image=amazon/aws-cli:2.15.17 \
  --env="AWS_DEFAULT_REGION=us-east-1" \
  --overrides='
{
  "spec": {
    "serviceAccountName": "oxy-app-sa",
    "containers": [
      {
        "name": "oxy-restore",
        "image": "amazon/aws-cli:2.15.17",
        "command": ["/bin/sh"],
        "tty": true,
        "stdin": true,
        "volumeMounts": [
          {
            "name": "workspace",
            "mountPath": "/workspace"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "workspace",
        "persistentVolumeClaim": {
          "claimName": "oxy-app-pvc"
        }
      }
    ]
  }
}' -- /bin/sh

# 3. Inside the restore pod, download and extract the backup
aws s3 cp s3://your-oxy-backups/backups/oxy-backup-20240913-020000.tar.gz /tmp/
cd /workspace
rm -rf * .[^.]* 2>/dev/null || true  # Clear existing data
tar -xzf /tmp/oxy-backup-20240913-020000.tar.gz
exit

# 4. Scale the application back up
kubectl scale statefulset oxy-app --replicas=1 -n oxy
```
For production environments, consider using Velero, which provides:
- Automated backup scheduling
- Cross-cluster disaster recovery
- Volume snapshot integration
- Backup verification and retention policies
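As a sketch (assuming Velero is already installed with a backup storage location and a volume snapshot provider configured), a daily backup of the `oxy` namespace could be scheduled with:

```bash
# Daily at 2 AM UTC, retained for 30 days (720h)
velero schedule create oxy-daily \
  --schedule="0 2 * * *" \
  --include-namespaces oxy \
  --ttl 720h
```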