# Cowork Mode Linux Implementation - Handover Document

## Summary

This work enables Claude Desktop's Cowork mode on Linux by patching the Electron app to use the Windows-style TypeScript VM client (instead of the macOS `@ant/claude-swift` native addon) and routing it through a Unix domain socket to a custom Node.js service daemon.

The service daemon uses a pluggable backend architecture with three isolation levels: KvmBackend (full QEMU/KVM VM with vsock + virtiofs), BwrapBackend (bubblewrap namespace sandbox), and HostBackend (no isolation, direct execution). The backend is auto-detected based on available system capabilities, or can be forced via the `COWORK_VM_BACKEND` environment variable.

## Target Architecture
```
Claude Desktop (Electron)
  ↕ Unix domain socket (length-prefixed JSON, same protocol as Windows pipe)
cowork-vm-service (Node.js daemon)
  └── VMManager (thin dispatcher)
      → delegates to this.backend (auto-detected or COWORK_VM_BACKEND override)
```

Backend selection (priority order):

1. BwrapBackend — bubblewrap namespace sandbox (default)
2. KvmBackend — QEMU/KVM + vsock + virtiofs (opt-in, full VM isolation)
3. HostBackend — direct on host, no isolation (fallback)

KvmBackend path:

```
QEMU/KVM (qemu-system-x86_64 -enable-kvm ...)
  ↕ virtio-vsock (socat bridge: Unix socket ↔ vsock CID:port)
  ↕ virtiofsd (host directory sharing via vhost-user-fs-pci)
Linux VM (rootfs.qcow2 overlay)
  └── sdk-daemon → Claude Code CLI
```

BwrapBackend path:

```
bwrap --ro-bind / / --dev /dev --proc /proc --tmpfs /tmp --tmpfs /run \
      --bind $workDir $workDir --unshare-pid --die-with-parent --new-session
  └── Claude Code CLI (sandboxed)
```

HostBackend path:

```
spawn(claude-code-cli, args)   ← direct execution, no isolation
```
**Current state (Phases 1-3 implemented):** All three backends are implemented. The active backend is auto-detected at daemon startup based on system capabilities. The KVM backend requires a rootfs image and has not yet been tested with the actual Anthropic rootfs; the bwrap and host backends are functional and tested.

**ISOLATION NOTE:** The default backend depends on what is installed. With no additional packages, the HostBackend runs Claude Code directly on the host with full user permissions. Install `bubblewrap` for namespace-level sandbox isolation, or set up QEMU/KVM with a rootfs image for full VM isolation. The `--doctor` flag shows which backend will be active.
## Dependencies

**Build-time (all backends):**

- Node.js 20+ (already required)
- All existing build.sh dependencies
**Runtime dependencies by backend:**

| Dependency | HostBackend | BwrapBackend | KvmBackend | Notes |
|---|---|---|---|---|
| Claude Code CLI | Required | Required | — | Resolved via `installSdk` or `which` |
| `bubblewrap` (`bwrap`) | — | Required | — | Namespace sandbox |
| `/dev/kvm` (read+write) | — | — | Required | KVM acceleration |
| `qemu-system-x86_64` | — | — | Required | VM hypervisor |
| `qemu-img` | — | — | Required | Overlay disk creation |
| `/dev/vhost-vsock` | — | — | Required | Host↔guest communication |
| `socat` | — | — | Required | vsock bridge (Unix socket ↔ vsock) |
| `virtiofsd` | — | — | Recommended | Host directory sharing via virtiofs |
| `rootfs.qcow2` | — | — | Required | VM disk image in `~/.local/share/claude-desktop/vm/` |
| `zstd` | — | — | Optional | Rootfs decompression (build-time) |

The `--doctor` flag checks all of these and shows distro-specific install commands.
## What Was Done

### Files Modified

- `build.sh` — Added `patch_cowork_linux()` function (6 patches), removed `@ant/claude-swift` stub references, added service daemon to build output. Patch 4 updated to extract real file entries from the win32 manifest (rootfs.vhdx, vmlinuz, initrd with checksums) instead of empty arrays.
- `scripts/cowork-vm-service.js` — Refactored from monolithic VMManager into pluggable backend architecture:
  - BackendBase — Abstract base class defining the interface (`init`, `startVM`, `stopVM`, `spawn`, `kill`, `writeStdin`, etc.)
  - HostBackend — Original Phase 1 logic moved here verbatim (direct execution, no isolation)
  - BwrapBackend — Wraps commands in `bwrap` with namespace isolation (`--unshare-pid`, `--die-with-parent`, `--new-session`); see the sketch after this list
  - KvmBackend — Full QEMU/KVM with overlay disks, virtiofsd, socat vsock bridge, QMP monitor, graceful shutdown (ACPI -> QMP quit -> SIGKILL)
  - VMManager — Now a thin dispatcher that delegates all methods to `this.backend`
  - Shared helpers extracted — `filterEnv()`, `buildSpawnEnv()`, `cleanSpawnArgs()`, `resolveWorkDir()`, `resolveCommand()` used by all backends
  - `detectBackend()` — Auto-detection function with `COWORK_VM_BACKEND` env override
- `scripts/launcher-common.sh` — Added `--doctor` checks for Cowork mode dependencies: KVM accessibility, vsock module, QEMU, qemu-img, socat, virtiofsd, bubblewrap, VM image presence. Includes distro-specific package install hints (Debian/Ubuntu, Fedora, Arch). Shows summary of active backend.
- `scripts/claude-swift-stub.js` — Deleted (replaced by TypeScript VM client approach)
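Below is a minimal sketch of how BwrapBackend's spawn wrapping could look, assembling the same flags shown in the architecture diagram above; the helper name `bwrapArgs` and the exact argument order are illustrative, not the verbatim implementation.

```js
// Hypothetical sketch: prepend bwrap and its isolation flags to the real
// command before spawning. Mirrors the flags in the BwrapBackend path diagram.
const { spawn } = require("node:child_process");

function bwrapArgs(workDir, extraBinds, command, args) {
  return [
    "--ro-bind", "/", "/",       // read-only view of the host filesystem
    "--dev", "/dev",             // minimal /dev
    "--proc", "/proc",           // fresh procfs for the new PID namespace
    "--tmpfs", "/tmp",
    "--tmpfs", "/run",
    "--bind", workDir, workDir,  // writable working directory
    ...extraBinds.flatMap((p) => ["--bind", p, p]), // e.g. stored mountPath requests
    "--unshare-pid",
    "--die-with-parent",
    "--new-session",
    command,
    ...args,
  ];
}

// Usage: run Claude Code CLI inside the sandbox.
const child = spawn(
  "bwrap",
  bwrapArgs("/home/user/project", [], "claude", ["--version"]),
  { stdio: ["pipe", "pipe", "pipe"] }
);
```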
### Patches Applied to index.js (via `patch_cowork_linux()`)
All patches use unique string anchors and dynamic variable extraction to be version-agnostic (minified variable names change between releases).
| # | Patch | Anchor String | Status |
|---|---|---|---|
| 1 | Platform check in `fz()`: add `&&t!=="linux"` | `"Unsupported platform"` | WORKS |
| 2a | Module loading log: add `\|\|process.platform==="linux"` | `"vmClient (TypeScript)"` | WORKS |
| 2b | Module assignment: same OR condition | `{vm:` near `@ant/claude-swift` | WORKS (fixed: optional parens for minified code) |
| 3 | Socket path: Unix domain socket on Linux | `"cowork-vm-service"` | WORKS |
| 4 | Bundle manifest: add `linux:{x64:[...],arm64:[...]}` | SHA hash near `files:` | WORKS (extracts win32 file entries — rootfs.vhdx, vmlinuz, initrd with checksums — and reuses them as linux entries; falls back to empty arrays if extraction fails) |
| 5 | Auto-launch service daemon in `Ma()` retry | `"VM service not running. The service failed to start."` | PARTIALLY WORKS (see issues) |
### Service Daemon (cowork-vm-service.js)

Implements the Windows named pipe protocol over a Unix domain socket:

- Transport: Unix socket at `$XDG_RUNTIME_DIR/cowork-vm-service.sock`
- Framing: 4-byte big-endian length prefix + JSON payload (sketched below)
- Architecture: VMManager (thin dispatcher) -> BackendBase subclass
- Methods: configure, createVM, startVM, stopVM, isRunning, isGuestConnected, spawn, kill, writeStdin, isProcessRunning, mountPath, readFile, installSdk, addApprovedOauthToken, subscribeEvents
- Events: Persistent connection via `subscribeEvents`, broadcasts stdout/stderr/exit/error/networkStatus/apiReachability
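The framing is simple enough to show inline. This is a minimal sketch of the length-prefixed encoding and an incremental decoder, matching the protocol description above; the function names `encodeFrame` and `createFrameDecoder` are illustrative (later sketches reuse them).

```js
// 4-byte big-endian length prefix + JSON payload.
function encodeFrame(message) {
  const payload = Buffer.from(JSON.stringify(message), "utf8");
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Incremental decoder: feed it socket chunks, it emits complete JSON messages.
function createFrameDecoder(onMessage) {
  let buf = Buffer.alloc(0);
  return (chunk) => {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 4) {
      const len = buf.readUInt32BE(0);
      if (buf.length < 4 + len) break; // wait for the rest of the frame
      onMessage(JSON.parse(buf.subarray(4, 4 + len).toString("utf8")));
      buf = buf.subarray(4 + len);
    }
  };
}
```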
## What Works

- Platform gate passes — `fz()` returns `{status: "supported"}` for Linux
- TypeScript VM client loads — Log shows `[VM] Loading vmClient (TypeScript) module...` + `Module loaded successfully`
- Full VM startup sequence completes — download_and_sdk_prepare → load_swift_api → callbacks → network connected → sdk_install → startup complete (541ms on warm start)
- Service daemon launches — Socket created, responds to all protocol methods
- Spawn succeeds — Claude Code CLI is spawned, stdin chunks are flushed
- Event field names fixed — Events use `id` (not `processId`) matching client expectations
- Clean environment — Strips `CLAUDECODE` (session detection trigger) and `ELECTRON_*` from daemon's inherited env. Preserves app-provided `CLAUDE_CODE_*` vars (OAuth tokens, API keys, entrypoint config) that Claude Code needs to function (sketched after this list)
- Error events use correct field name — Events use `message` field matching client expectations (was `error`, fixed)
- SDK binary path tracked — `installSdk` resolves and stores the downloaded binary path for use in `spawn`
- VM guest paths handled — `CLAUDE_CONFIG_DIR` and `cwd` pointing to `/sessions/...` are detected and corrected to host paths. Args `--plugin-dir` and `--add-dir` with VM guest paths are stripped.
- Stale socket cleanup is synchronous — No race condition on restart; socket is always cleaned up before `listen`
- Messages work end-to-end — Cowork mode sends messages and receives responses
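The clean-environment behavior is worth pinning down. Here is a sketch of the filtering rules described above, using the doc's `filterEnv`/`buildSpawnEnv` helper names; the exact rule set in cowork-vm-service.js may differ.

```js
// Drop vars that break Claude Code when inherited from the daemon.
function filterEnv(env) {
  const out = {};
  for (const [key, value] of Object.entries(env)) {
    if (key === "CLAUDECODE") continue;         // nested-session detection trigger
    if (key.startsWith("ELECTRON_")) continue;  // Electron runtime leftovers
    out[key] = value;
  }
  return out;
}

// App-provided vars are layered on last, so CLAUDE_CODE_* (OAuth tokens,
// API keys, entrypoint config) always survive.
function buildSpawnEnv(daemonEnv, appEnv = {}) {
  return { ...filterEnv(daemonEnv), ...appEnv };
}

// Usage: const env = buildSpawnEnv(process.env, request.env);
```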
## What's Broken / Needs Investigation

### 1. Service Daemon Process Lifecycle

The service daemon runs as a detached forked process. When the app quits, the `stopVM` method is called, which sets `running=false`, but the service daemon process continues running. On next app launch, the dedup check should detect it's alive and reuse it, but this path hasn't been validated.
### 2. Message Flow — RESOLVED

All issues preventing message flow have been fixed:

- Error event field mismatch (`error` → `message`) — FIXED
- VM guest paths in env vars (`CLAUDE_CONFIG_DIR`, `cwd`) — FIXED
- SDK binary path lost from `installSdk` no-op — FIXED
- Stale socket race condition on restart — FIXED
- `CLAUDECODE=1` env var causing "cannot be launched inside another session" — FIXED
- Over-stripping app-provided env vars (OAuth tokens, API keys stripped) — FIXED
- VM guest paths in args (`--plugin-dir`, `--add-dir`) — FIXED
## Architecture Notes

### How the TypeScript VM Client Works (from beautified reference)

```
App calls method (e.g., spawn)
  → bYe.spawn() calls Ma("spawn", params)
  → Ma() retries up to 5 times with 1s delay
  → yYe() creates one-shot connection to socket
  → Sends length-prefixed JSON request
  → Receives length-prefixed JSON response
  → Connection closes
```

Events flow on a separate persistent connection:

```
  → nAe() creates persistent connection
  → Sends { method: "subscribeEvents" }
  → Keeps connection open
  → Receives pushed events (stdout, stderr, exit, etc.)
  → Auto-reconnects after 1s if connection drops
```
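A sketch of that persistent subscription, reusing the `encodeFrame`/`createFrameDecoder` helpers from the framing sketch above; this mirrors the described `nAe()` behavior (names and details are illustrative).

```js
const net = require("node:net");

function subscribeEvents(socketPath, onEvent) {
  const sock = net.connect(socketPath, () => {
    sock.write(encodeFrame({ method: "subscribeEvents" }));
  });
  sock.on("data", createFrameDecoder(onEvent));
  sock.on("error", () => {}); // "close" always follows "error"
  sock.on("close", () => {
    // Auto-reconnect after 1s, matching the client behavior above.
    setTimeout(() => subscribeEvents(socketPath, onEvent), 1000);
  });
}

// Usage:
// subscribeEvents(`${process.env.XDG_RUNTIME_DIR}/cowork-vm-service.sock`,
//                 (evt) => console.log(evt.type, evt));
```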
### Key Internal Codenames

- `yukonSilver` — VM/Cowork feature gate
- `Ci` — `process.platform === "win32"` (minified, changes per version)
- `bYe` — TypeScript VM client object
- `Ma()` — Retry wrapper for socket IPC calls
- `fz()` — Platform support check
- `ov()` — VM startup entry point
- `nAe()` — Persistent event subscription connection
- `Ji` — Event callback registry
### Electron/asar Gotchas Discovered

- `process.execPath` in Electron = Electron binary, NOT Node.js. Using `spawn(process.execPath, [script])` triggers Electron's "open file" handler instead of executing the script
- Solution: Use `child_process.fork()` with `ELECTRON_RUN_AS_NODE: "1"` env var (sketched below)
- Files inside `.asar` cannot be executed by `child_process`. Service daemon must be in `app.asar.unpacked/`
- `process.resourcesPath` gives path to the resources directory containing both `app.asar` and `app.asar.unpacked`
## Backend Detection

The `detectBackend()` function selects the active backend at daemon startup. The `COWORK_VM_BACKEND` environment variable can override auto-detection.

Auto-detection order (sketched below):

1. **Bwrap (default)** — Requires:
   - `bwrap` in PATH
   - `bwrap --ro-bind / / true` succeeds (functional test)
2. **KVM (opt-in via `COWORK_VM_BACKEND=kvm`)** — Requires ALL of:
   - `/dev/kvm` readable and writable
   - `qemu-system-x86_64` in PATH
   - `/dev/vhost-vsock` readable
   - Rootfs image checked at `startVM()` time, not during detection
3. **Host** — Always available (fallback)
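A minimal sketch of `detectBackend()` following the order above; the real checks in cowork-vm-service.js may differ in detail.

```js
const fs = require("node:fs");
const { spawnSync } = require("node:child_process");

function canAccess(dev, mode) {
  try { fs.accessSync(dev, mode); return true; } catch { return false; }
}

function hasBwrap() {
  // Functional test: a trivial sandboxed command must actually succeed.
  const r = spawnSync("bwrap", ["--ro-bind", "/", "/", "true"], { stdio: "ignore" });
  return r.status === 0;
}

function hasKvm() {
  return (
    canAccess("/dev/kvm", fs.constants.R_OK | fs.constants.W_OK) &&
    spawnSync("qemu-system-x86_64", ["--version"], { stdio: "ignore" }).status === 0 &&
    canAccess("/dev/vhost-vsock", fs.constants.R_OK)
    // rootfs image is validated later, at startVM() time
  );
}

function detectBackend() {
  const forced = process.env.COWORK_VM_BACKEND;
  if (forced) return forced;       // explicit override wins
  if (hasBwrap()) return "bwrap";  // default
  if (hasKvm()) return "kvm";      // full VM isolation
  return "host";                   // no isolation (fallback)
}
```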
Override:

```bash
# Force a specific backend
COWORK_VM_BACKEND=host ./claude-desktop.AppImage
COWORK_VM_BACKEND=bwrap ./claude-desktop.AppImage
COWORK_VM_BACKEND=kvm ./claude-desktop.AppImage

# Check which backend is active
./claude-desktop.AppImage --doctor
# Output: "Cowork isolation: KVM (full VM isolation)" or
#         "Cowork isolation: bubblewrap (namespace sandbox)" or
#         "Cowork isolation: none (host-direct, no isolation)"
```
The selected backend is logged to `~/.config/Claude/logs/cowork_vm_daemon.log` at startup.
## Service Daemon Method Reference

| Method | Params | Returns | Status |
|---|---|---|---|
| `configure` | `{memoryMB?, cpuCount?}` | `{}` | Stores config, delegates to backend `init()` |
| `createVM` | `{bundlePath, diskSizeGB?}` | `{}` | No-op (KVM creates overlay on `startVM`) |
| `startVM` | `{bundlePath, memoryGB?}` | `{}` | Host/Bwrap: sets `running=true`. KVM: starts QEMU, virtiofsd, socat bridge, waits for guest |
| `stopVM` | — | `{}` | Host/Bwrap: kills spawned procs. KVM: ACPI shutdown -> QMP quit -> SIGKILL, cleans session dir |
| `isRunning` | — | `{running: bool}` | Works (all backends) |
| `isGuestConnected` | — | `{connected: bool}` | Host/Bwrap: true after `startVM`. KVM: true after guest responds to ping |
| `spawn` | `{id, name, command, args, cwd?, env?, additionalMounts?, isResume?, allowedDomains?, sharedCwdPath?, oneShot?}` | `{}` | Host: direct spawn. Bwrap: wrapped in bwrap sandbox. KVM: forwarded to guest sdk-daemon via vsock |
| `kill` | `{id, signal?}` | `{}` | Works (all backends) |
| `writeStdin` | `{id, data}` | `{}` | Works (all backends) |
| `isProcessRunning` | `{id}` | `{running: bool}` | Works (all backends) |
| `mountPath` | `{processId, subpath, mountName, mode}` | `{guestPath}` | Host: returns host path. Bwrap: stores for `--bind` on next spawn. KVM: returns `/mnt/host/...` if virtiofs active |
| `readFile` | `{processName, filePath}` | `{content}` | Host/Bwrap: reads from host. KVM: forwards to guest, falls back to host |
| `installSdk` | `{sdkSubpath, version}` | `{}` | Tracks binary path for `spawn` (all backends) |
| `addApprovedOauthToken` | `{token}` | `{}` | Host/Bwrap: no-op. KVM: forwards to guest |
| `subscribeEvents` | — | `{}` + persistent event stream | Works (all backends) |
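For reference, here is how a one-shot call (the `Ma()`/`yYe()` pattern) can be exercised against the daemon, reusing the framing helpers sketched earlier; the `{method, params}` request shape is an assumption beyond the documented `{ method: "subscribeEvents" }` example.

```js
const net = require("node:net");

function callMethod(socketPath, method, params = {}) {
  return new Promise((resolve, reject) => {
    const sock = net.connect(socketPath, () => {
      sock.write(encodeFrame({ method, params })); // assumed request shape
    });
    sock.on("data", createFrameDecoder((response) => {
      sock.end(); // one-shot: the connection closes after a single response
      resolve(response);
    }));
    sock.on("error", reject);
  });
}

// Usage:
// const { running } = await callMethod(
//   `${process.env.XDG_RUNTIME_DIR}/cowork-vm-service.sock`, "isRunning");
```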
Event types pushed on the `subscribeEvents` connection:

| Event | Fields | Notes |
|---|---|---|
| `stdout` | `{type, id, data}` | Process stdout output |
| `stderr` | `{type, id, data}` | Process stderr output |
| `exit` | `{type, id, exitCode, signal}` | Process exited |
| `error` | `{type, id, message}` | Process error |
| `networkStatus` | `{type, status}` | "connected" or "disconnected" |
| `apiReachability` | `{type, status}` | API reachability status |
## QEMU Configuration

The KvmBackend builds QEMU arguments dynamically. The actual command is:

```bash
qemu-system-x86_64 \
  -enable-kvm \
  -m ${memoryGB}G \
  -cpu host \
  -smp ${cpuCount} \
  -nographic \
  -kernel ${VM_BASE_DIR}/vmlinuz \                               # if present
  -initrd ${VM_BASE_DIR}/initrd \                                # if present
  -append "root=/dev/vda1 console=ttyS0 quiet" \
  -drive file=${sessionDir}/overlay.qcow2,format=qcow2,if=virtio \
  -device vhost-vsock-pci,guest-cid=${cid} \
  -qmp unix:${sessionDir}/qmp.sock,server,nowait \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0 \
  -chardev socket,id=virtiofs,path=${sessionDir}/virtiofs.sock \ # if virtiofsd
  -device vhost-user-fs-pci,chardev=virtiofs,tag=hostshare       # if virtiofsd
```
Key details:

- Overlay disks: Each session creates a qcow2 overlay backed by `rootfs.qcow2`, so the base image is never modified. Session dir: `~/.local/share/claude-desktop/vm/sessions/<uuid>/` (overlay creation sketched after this list)
- Guest CID: Allocated incrementally starting at 3 (0-2 are reserved), tracked via `~/.local/share/claude-desktop/vm/.next_cid`
- VHDX conversion: If `rootfs.vhdx` exists but `rootfs.qcow2` does not, `qemu-img convert` runs automatically on first init
- virtiofsd: Started separately, shares user's home directory to guest via `vhost-user-fs-pci` with tag `hostshare`
- socat bridge: `socat UNIX-LISTEN:${bridgeSock},fork VSOCK-CONNECT:${cid}:2222` bridges Unix socket to vsock for host->guest communication
- Graceful shutdown: ACPI power-down via QMP -> wait 10s -> QMP quit -> wait 3s -> SIGKILL
- Kernel+initrd: Optional; if `vmlinuz` exists in `VM_BASE_DIR`, direct kernel boot is used. Otherwise falls back to full disk boot.
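A sketch of the per-session overlay creation and first-init VHDX conversion, using the paths listed above; function names are illustrative and error handling is omitted.

```js
const fs = require("node:fs");
const path = require("node:path");
const crypto = require("node:crypto");
const { execFileSync } = require("node:child_process");

const VM_BASE_DIR = path.join(
  process.env.HOME, ".local", "share", "claude-desktop", "vm");

function ensureQcow2() {
  const qcow2 = path.join(VM_BASE_DIR, "rootfs.qcow2");
  const vhdx = path.join(VM_BASE_DIR, "rootfs.vhdx");
  if (!fs.existsSync(qcow2) && fs.existsSync(vhdx)) {
    // One-time conversion on first init.
    execFileSync("qemu-img", ["convert", "-O", "qcow2", vhdx, qcow2]);
  }
  return qcow2;
}

function createSessionOverlay() {
  const base = ensureQcow2();
  const sessionDir = path.join(VM_BASE_DIR, "sessions", crypto.randomUUID());
  fs.mkdirSync(sessionDir, { recursive: true });
  const overlay = path.join(sessionDir, "overlay.qcow2");
  // Copy-on-write overlay: writes land here, the base image stays pristine.
  execFileSync("qemu-img",
    ["create", "-f", "qcow2", "-b", base, "-F", "qcow2", overlay]);
  return { sessionDir, overlay };
}
```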
Remaining rootfs questions:

- The actual Anthropic rootfs format (VHDX from Windows downloads) needs testing with the conversion path
- Guest sdk-daemon vsock port and protocol need verification
- virtiofs mount point inside the guest needs confirmation
## Verification Checklist

### Phase 1 (current)

- Build: `./build.sh --build appimage --clean no` completes without errors
- Patches: All 6 cowork patches applied (check build output)
- Module: Logs show `[VM] Loading vmClient (TypeScript) module...` (not `@ant/claude-swift`)
- Startup: `[VM:start] Startup complete` appears in cowork_vm_node.log
- Socket: `$XDG_RUNTIME_DIR/cowork-vm-service.sock` exists after startup
- Service: `pgrep -af cowork-vm-service` shows running process
- Messages: Send a message in Cowork, verify response appears
- Restart: Kill app, relaunch, verify Cowork reconnects without ECONNREFUSED
- Clean exit: Close app normally, verify service daemon stops
### Phase 2 — Backend Architecture (implemented)

- Refactor VMManager into thin dispatcher with pluggable backends
- Extract shared helpers: `filterEnv`, `buildSpawnEnv`, `cleanSpawnArgs`, `resolveWorkDir`, `resolveCommand`
- HostBackend: move existing logic verbatim
- BwrapBackend: `bwrap` namespace sandbox with `--unshare-pid`, `--die-with-parent`, `--new-session`
- KvmBackend: QEMU/KVM with overlay disks, virtiofsd, socat vsock bridge, QMP
- `detectBackend()` auto-detection with `COWORK_VM_BACKEND` env override
- `--doctor` checks for all dependencies (KVM, vsock, QEMU, socat, virtiofsd, bwrap)
- Distro-specific install hints in `--doctor` (Debian/Ubuntu, Fedora, Arch)
- Patch 4 updated to extract real win32 file entries for linux manifest
### Phase 3 — VM Integration (implemented, needs testing with real rootfs)

- QEMU boots with overlay disk, vsock, QMP monitor, and virtiofs
- socat bridge created for host->guest communication
- Guest readiness polling via ping over bridge socket (see the sketch after this list)
- Service daemon forwards spawn/kill/writeStdin/readFile to guest sdk-daemon
- Event forwarding: subscribes to guest events and relays to Electron app
- Host directory sharing via virtiofsd + vhost-user-fs-pci
- Graceful VM shutdown: ACPI -> QMP quit -> SIGKILL
- Per-session overlay disks (base image never modified)
- Session directory cleanup on stopVM
- Download rootfs from `https://downloads.claude.ai/vms/linux/x64/{sha}/rootfs.img.zst`
- Decompress and convert rootfs (zstd -> VHDX -> qcow2)
- Boot actual Anthropic rootfs and verify guest sdk-daemon starts
- End-to-end test: Cowork session with KVM backend
- Verify virtiofs mounts are accessible inside guest
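A sketch of the readiness polling step. The guest sdk-daemon's ping protocol is one of the open questions above, so the `{method: "ping"}` frame is explicitly an assumption; the framing helpers come from the earlier sketch.

```js
const net = require("node:net");

function waitForGuest(bridgeSock, { timeoutMs = 60000, intervalMs = 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const attempt = () => {
      let ready = false;
      const sock = net.connect(bridgeSock, () => {
        sock.write(encodeFrame({ method: "ping" })); // assumed ping shape
      });
      sock.on("data", createFrameDecoder(() => {
        ready = true;   // any framed reply counts as "guest is up"
        sock.destroy();
        resolve();
      }));
      sock.on("error", () => sock.destroy());
      sock.on("close", () => {
        if (ready) return;
        if (Date.now() > deadline) return reject(new Error("guest never became ready"));
        setTimeout(attempt, intervalMs); // retry until the deadline
      });
    };
    attempt();
  });
}
```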
## Next Steps

### Phase 1 — ALL DONE

- Fix stale socket handling — Synchronous `unlink` before `listen`
- Fix error event field name — `error` → `message` in broadcastEvent
- Fix VM guest paths — Strip `/sessions/...` from `CLAUDE_CONFIG_DIR`, `cwd`, `--plugin-dir`, `--add-dir`
- Track SDK binary path — `installSdk` stores path, `spawn` uses it
- Fix `CLAUDECODE` session detection — Strip from daemon env, keep app-provided `CLAUDE_CODE_*`
- Verify end-to-end message flow — Messages sent and responses received
### Phase 2: Backend Architecture — DONE

- Refactor VMManager into dispatcher + backend pattern
- Extract shared helpers
- Implement BwrapBackend with namespace isolation
- Implement KvmBackend with QEMU/KVM, vsock, virtiofs
- Add `detectBackend()` auto-detection
- Add `--doctor` checks for all cowork dependencies
- Update Patch 4 with real bundle manifest entries
### Phase 3: VM Integration — DONE (code complete, needs rootfs testing)

- QEMU startup with overlay disks, QMP monitor
- virtiofsd for host directory sharing
- socat vsock bridge for host↔guest communication
- Guest readiness polling
- Request forwarding to guest sdk-daemon
- Event forwarding from guest to Electron app
- Graceful VM shutdown (ACPI -> QMP quit -> SIGKILL; sketched below)
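The shutdown ladder is easy to get wrong, so here is a sketch of the sequence over the QMP socket. `system_powerdown`, `quit`, and the `qmp_capabilities` handshake are standard QMP; the function names and timing plumbing around them are illustrative.

```js
const net = require("node:net");

// Send one QMP command after the mandatory capabilities negotiation.
function qmpExecute(qmpSockPath, command) {
  return new Promise((resolve, reject) => {
    const sock = net.connect(qmpSockPath);
    let buf = "";
    let acks = 0;
    sock.on("data", (chunk) => {
      buf += chunk.toString("utf8");
      let idx;
      while ((idx = buf.indexOf("\n")) !== -1) {
        const line = buf.slice(0, idx).trim();
        buf = buf.slice(idx + 1);
        if (!line) continue;
        const msg = JSON.parse(line);
        if (msg.QMP) {
          // Greeting received: negotiate, then send the real command.
          sock.write(JSON.stringify({ execute: "qmp_capabilities" }) + "\n");
          sock.write(JSON.stringify({ execute: command }) + "\n");
        } else if ("return" in msg && ++acks === 2) {
          sock.end(); // second "return" acknowledges our command
          resolve();
        }
      }
    });
    sock.on("error", reject);
  });
}

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function stopVm(qmpSockPath, qemuProcess) {
  await qmpExecute(qmpSockPath, "system_powerdown"); // ACPI power button
  await sleep(10000);                                // give the guest 10s
  if (qemuProcess.exitCode === null) {
    await qmpExecute(qmpSockPath, "quit").catch(() => {});
    await sleep(3000);
  }
  if (qemuProcess.exitCode === null) qemuProcess.kill("SIGKILL"); // last resort
}
```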
## Remaining Work

- Rootfs analysis — Download actual Anthropic rootfs, decompress (zstd), convert (VHDX->qcow2), mount and inspect for sdk-daemon, vsock port, systemd services
- End-to-end KVM testing — Boot rootfs in QEMU, verify guest connects via vsock, test full Cowork session
- Service daemon lifecycle — Validate restart behavior (kill app, relaunch, verify reconnect)
- Clean exit — Verify service daemon stops on normal app close
- BwrapBackend testing — Verify sandbox isolation works for real Cowork sessions
- ARM64 support — KvmBackend currently uses `qemu-system-x86_64`; ARM64 would need `qemu-system-aarch64`
## Build & Test Commands

```bash
# Build
./build.sh --build appimage --clean no

# Launch with debug logging
COWORK_VM_DEBUG=1 ./claude-desktop-*.AppImage

# Force a specific backend
COWORK_VM_BACKEND=bwrap COWORK_VM_DEBUG=1 ./claude-desktop-*.AppImage

# Check doctor output for cowork dependencies
./claude-desktop-*.AppImage --doctor

# Check logs
tail -f ~/.config/Claude/logs/cowork_vm_node.log    # Electron VM client logs
tail -f ~/.config/Claude/logs/cowork_vm_daemon.log  # Service daemon logs

# Check service daemon
ls -la $XDG_RUNTIME_DIR/cowork-vm-service.sock
pgrep -af cowork-vm-service

# Kill everything for fresh start
pkill -9 -f "mount_claude"
pkill -9 -f "cowork-vm-service"
rm -f $XDG_RUNTIME_DIR/cowork-vm-service.sock
```
## Reference Files

- `build-reference/app-extracted/.vite/build/index.js` — Beautified v1.1.3189 source (224K lines)
- Blog posts with architecture analysis:
  - aaddrick.com/blog/reverse-engineering-claude-desktops-cowork-mode-a-deep-dive-into-vm-isolation-and-linux-possibilities.md
  - aaddrick.com/blog/claude-desktop-cowork-mode-vm-architecture-analysis.md