Compare commits


41 Commits

Author SHA1 Message Date
Mathias Wagner
08cf515d2a make sync client clone root directory 2025-09-10 13:47:56 +02:00
Mathias Wagner
7ffd64049a Implement file browser & web ui components 2025-09-10 13:47:19 +02:00
Mathias Wagner
0a16e46372 Remove UI for machine details 2025-09-10 11:11:53 +02:00
e595fcbdac Add working test 2025-09-09 22:42:16 +02:00
fa00747e80 Add working test 2025-09-09 22:23:01 +02:00
4e38b13faa Add sync test using AI 2025-09-09 21:02:37 +02:00
8b1a9be8c2 Implement UI for machines & system settings 2025-09-09 19:08:59 +02:00
7b3ae6bb6e Add provisioning system to server 2025-09-09 19:06:16 +02:00
88e5f3d694 Add config controller in server 2025-09-09 19:06:03 +02:00
7a7a909440 Update profile menu to have duotone icon 2025-09-09 18:00:31 +02:00
2d2b1b9c00 Update profile menu to reflect actual name & role 2025-09-09 17:57:36 +02:00
Mathias Wagner
87efa1cf0e Bootstrap tauri app in client 2025-09-09 14:25:09 +02:00
Mathias Wagner
3bb2dcabaf Fix Select styling 2025-09-09 14:23:12 +02:00
Mathias Wagner
8fe30668e0 Fix UserManagement.jsx page 2025-09-09 13:46:17 +02:00
Mathias Wagner
804b3e577d Update App to integrate ToastProvider & UserManagement 2025-09-09 13:44:14 +02:00
Mathias Wagner
42a036a84c Create PageHeader component 2025-09-09 13:43:59 +02:00
Mathias Wagner
17bc9d3f0c Create DetailItem component 2025-09-09 13:43:16 +02:00
Mathias Wagner
a5f3ed1634 Create Avatar component 2025-09-09 13:43:07 +02:00
Mathias Wagner
29b32ec317 Craete UserManagement page 2025-09-09 13:42:49 +02:00
Mathias Wagner
5908ee0f99 Add ModalActions to Modal component 2025-09-09 13:42:09 +02:00
Mathias Wagner
54be320dc1 Add muted to main.sass 2025-09-09 13:33:50 +02:00
Mathias Wagner
7ef4d8b8b2 Update page title in Root.jsx 2025-09-09 13:33:05 +02:00
Mathias Wagner
0ddfc36eb8 Create ToastContext.jsx 2025-09-09 13:32:48 +02:00
Mathias Wagner
0e82a40d66 Fix bug in UserContext.jsx 2025-09-09 13:32:24 +02:00
Mathias Wagner
19e0407dbd Create Toast component 2025-09-09 13:32:16 +02:00
Mathias Wagner
676a2ac869 Create Select component 2025-09-09 13:31:55 +02:00
Mathias Wagner
4d0722d282 Create Modal component 2025-09-09 13:31:45 +02:00
Mathias Wagner
8d97de06fd Add isIconOnly to Button.jsx 2025-09-09 13:31:19 +02:00
Mathias Wagner
da6fe42d30 Create LoadingSpinner component 2025-09-09 13:31:08 +02:00
Mathias Wagner
16f5162541 Create Grid component 2025-09-09 13:30:59 +02:00
Mathias Wagner
2f8b301a61 Create EmptyState component 2025-09-09 13:30:49 +02:00
Mathias Wagner
61418fb072 Create Card component 2025-09-09 13:30:38 +02:00
Mathias Wagner
d3d7a10351 Create Badge component 2025-09-09 13:30:27 +02:00
Mathias Wagner
e39a583e95 Add UI components 2025-09-09 13:07:35 +02:00
Mathias Wagner
12f9eebfad Create me route 2025-09-09 12:39:16 +02:00
Mathias Wagner
0ce3751d08 Create test design 2025-09-09 12:39:03 +02:00
Mathias Wagner
0eb7e9d4ca Create base webui 2025-09-09 12:00:12 +02:00
Mathias Wagner
439578434e Create Dockerfile 2025-09-09 10:39:24 +02:00
Mathias Wagner
5a9e1e2e2b Implement shutdown signal in main.rs 2025-09-09 10:38:24 +02:00
Mathias Wagner
efe4549f82 Implement support for serving webui 2025-09-09 09:53:30 +02:00
Mathias Wagner
1936304e56 Add dist to .gitignore 2025-09-09 09:42:07 +02:00
142 changed files with 18111 additions and 99 deletions

35
.dockerignore Normal file
View File

@@ -0,0 +1,35 @@
# Git
.git
.gitignore
README.md
LICENSE
# Rust
server/target/
**/*.rs.bk
# Node.js
webui/node_modules/
webui/dist/
webui/.vite/
# IDE
.vscode/
.idea/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
# Logs
*.log
server/data/logs/
# Database (for development)
server/data/db/*.db
server/data/backups/
# Cache
.cache/

1
.gitignore vendored
View File

@@ -3,6 +3,7 @@
# will have compiled files and executables
debug/
target/
dist/
# These are backup files generated by rustfmt
**/*.rs.bk

53
Dockerfile Normal file
View File

@@ -0,0 +1,53 @@
FROM node:20-alpine AS webui-builder
WORKDIR /app/webui
COPY webui/package.json webui/pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install --frozen-lockfile
COPY webui/ .
RUN pnpm build
FROM rust:1.89-alpine AS rust-builder
RUN apk add --no-cache musl-dev sqlite-dev pkgconfig openssl-dev
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
COPY server/src ./src
RUN cargo build --release
FROM alpine:latest
RUN apk add --no-cache sqlite
RUN addgroup -g 1001 -S appgroup && \
adduser -u 1001 -S appuser -G appgroup
WORKDIR /app
RUN mkdir -p data/db data/logs data/backups && \
chown -R appuser:appgroup /app
COPY --from=rust-builder /app/server/target/release/server ./server
COPY --from=webui-builder /app/webui/dist ./dist
RUN chown -R appuser:appgroup /app
USER appuser
EXPOSE 8379
ENV RUST_LOG=info
VOLUME ["/app/data/db"]
VOLUME ["/app/data/logs"]
VOLUME ["/app/data/backups"]
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8379/ || exit 1
CMD ["./server"]

396
PROTOCOL.md Normal file
View File

@@ -0,0 +1,396 @@
Good insight — you're right: **disk hashes alone** are not enough; the sync client must produce the **entire object-graph structure** (Disk → Partition → RootDir → Dirs → Files → Chunks), and the server must store and verify these meta objects.
I'm updating the protocol so that it is complete, binary, and implementable by an AI — with clear message formats, binary layouts for the meta objects, and pseudocode for the client **(what it must generate/compute)** and for the server **(how it must verify & persist)**.
I'm keeping it pragmatic: no compression extensions, no encryption — everything simple, extensible later.
---
# Quick overview (what happens)
1. The client scans the disk/partition, chunks files (4 MiB), and computes chunk hashes (BLAKE3-256, 32 B); a minimal sketch follows this list.
2. The client builds binary meta objects: FileObj, DirObj, PartitionObj, DiskObj, SnapshotObj (each body → `meta_hash = BLAKE3(body)`).
3. The client talks to the server over TCP (HELLO → AUTH), asks in batches which chunks/metas are missing, and sends only those.
4. Finally it sends the snapshot commit; the server validates it and writes the snapshot reference (the server owns the pointers).
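A minimal Rust sketch of step 1's chunking loop, using the `blake3` crate that the server's Cargo diff below pulls in; the function name and buffering strategy are illustrative assumptions:
```rust
use std::fs::File;
use std::io::{BufReader, Read};

const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB, per the protocol

/// Split a file into fixed-size chunks and return the BLAKE3-256 hash of each.
fn chunk_hashes(path: &str) -> std::io::Result<Vec<[u8; 32]>> {
    let mut reader = BufReader::new(File::open(path)?);
    let mut buf = vec![0u8; CHUNK_SIZE];
    let mut hashes = Vec::new();
    loop {
        // Fill the buffer as far as possible; only the final chunk may be short.
        let mut filled = 0;
        while filled < CHUNK_SIZE {
            let n = reader.read(&mut buf[filled..])?;
            if n == 0 { break; }
            filled += n;
        }
        if filled == 0 { break; }
        hashes.push(*blake3::hash(&buf[..filled]).as_bytes());
    }
    Ok(hashes)
}
```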
---
# General message structure (envelopes)
Every message: fixed 24-byte header + payload:
```
struct MsgHeader {
    u8  cmd;            // command code (see table)
    u8  flags;          // reserved
    u8  reserved[2];
    u8  session_id[16]; // all zeros before AUTH_OK
    u32 payload_len;    // LE
}
```
Response messages use the same envelope.
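A sketch of how this envelope could be packed and unpacked in Rust; offsets follow the struct above, and the type/method names are illustrative:
```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct MsgHeader {
    cmd: u8,
    flags: u8,            // reserved, send as 0
    session_id: [u8; 16], // all zeros before AUTH_OK
    payload_len: u32,     // little-endian on the wire
}

impl MsgHeader {
    const SIZE: usize = 24; // 1 + 1 + 2 (reserved) + 16 + 4

    fn encode(&self) -> [u8; Self::SIZE] {
        let mut b = [0u8; Self::SIZE];
        b[0] = self.cmd;
        b[1] = self.flags;
        // b[2..4] stay zero (reserved)
        b[4..20].copy_from_slice(&self.session_id);
        b[20..24].copy_from_slice(&self.payload_len.to_le_bytes());
        b
    }

    fn decode(b: &[u8; Self::SIZE]) -> Self {
        let mut session_id = [0u8; 16];
        session_id.copy_from_slice(&b[4..20]);
        MsgHeader {
            cmd: b[0],
            flags: b[1],
            session_id,
            payload_len: u32::from_le_bytes(b[20..24].try_into().unwrap()),
        }
    }
}
```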
---
# Command-Codes (u8)
* 0x01 HELLO
* 0x02 HELLO_OK
* 0x10 AUTH_USERPASS
* 0x11 AUTH_CODE
* 0x12 AUTH_OK
* 0x13 AUTH_FAIL
* 0x20 BATCH_CHECK_CHUNK
* 0x21 CHECK_CHUNK_RESP
* 0x22 SEND_CHUNK
* 0x23 CHUNK_OK
* 0x24 CHUNK_FAIL
* 0x30 BATCH_CHECK_META
* 0x31 CHECK_META_RESP
* 0x32 SEND_META
* 0x33 META_OK
* 0x34 META_FAIL
* 0x40 SEND_SNAPSHOT (snapshot commit)
* 0x41 SNAPSHOT_OK
* 0x42 SNAPSHOT_FAIL
* 0xFF CLOSE
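In Rust these codes map naturally onto a `repr(u8)` enum with a fallible conversion for incoming bytes; a sketch (the names are illustrative, not from the repo):
```rust
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Cmd {
    Hello = 0x01, HelloOk = 0x02,
    AuthUserpass = 0x10, AuthCode = 0x11, AuthOk = 0x12, AuthFail = 0x13,
    BatchCheckChunk = 0x20, CheckChunkResp = 0x21, SendChunk = 0x22,
    ChunkOk = 0x23, ChunkFail = 0x24,
    BatchCheckMeta = 0x30, CheckMetaResp = 0x31, SendMeta = 0x32,
    MetaOk = 0x33, MetaFail = 0x34,
    SendSnapshot = 0x40, SnapshotOk = 0x41, SnapshotFail = 0x42,
    Close = 0xFF,
}

impl TryFrom<u8> for Cmd {
    type Error = u8; // return the unknown byte so the caller can log it
    fn try_from(v: u8) -> Result<Self, u8> {
        use Cmd::*;
        Ok(match v {
            0x01 => Hello, 0x02 => HelloOk,
            0x10 => AuthUserpass, 0x11 => AuthCode, 0x12 => AuthOk, 0x13 => AuthFail,
            0x20 => BatchCheckChunk, 0x21 => CheckChunkResp, 0x22 => SendChunk,
            0x23 => ChunkOk, 0x24 => ChunkFail,
            0x30 => BatchCheckMeta, 0x31 => CheckMetaResp, 0x32 => SendMeta,
            0x33 => MetaOk, 0x34 => MetaFail,
            0x40 => SendSnapshot, 0x41 => SnapshotOk, 0x42 => SnapshotFail,
            0xFF => Close,
            other => return Err(other),
        })
    }
}
```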
---
# Key design decisions (brief)
* **Hashes**: BLAKE3-256 (32 bytes). The client computes all hashes (chunks + meta bodies).
* **Chunks on the wire**: uncompressed (simple & reliable). Compression would be a later extension.
* **Meta object bodies**: compact binary structures (see below). `meta_hash = BLAKE3(body)`.
* **Batch checks**: the client asks in batches which chunks/metas are missing (the server returns only the missing hashes). Minimizes RTT.
* **Server persistence**: `chunks/<ab>/<cd>/<hash>.chk`, `meta/<type>/<ab>/<cd>/<hash>.meta`. The server manages snapshot pointers (e.g. `machines/<client>/snapshots/<id>.ref`).
* **Snapshot commit**: the server validates the object graph before finalizing; if anything is missing it sends the list back (SNAPSHOT_FAIL with missing list).
---
# Binary payload formats
All multi-byte counters/lengths are little-endian (`LE`).
## BATCH_CHECK_CHUNK (Client → Server)
```
payload:
u32 count
for i in 0..count:
u8[32] chunk_hash
```
## CHECK_CHUNK_RESP (Server → Client)
```
payload:
u32 missing_count
for i in 0..missing_count:
u8[32] missing_chunk_hash
```
## SEND_CHUNK (Client → Server)
```
payload:
u8[32] chunk_hash
u32 size
u8[size] data // raw chunk bytes
```
The server computes BLAKE3(data) and compares it to `chunk_hash`; if they match, it stores the chunk.
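A synchronous server-side sketch of that check (the real server is async/tokio; the `chunks/` layout follows the storage section below, and `hex` is a crate the server's Cargo diff adds):
```rust
/// Handle SEND_CHUNK: verify the hash, dedupe, and store atomically.
/// Returns true for CHUNK_OK, false for CHUNK_FAIL.
fn handle_send_chunk(chunk_hash: [u8; 32], data: &[u8]) -> std::io::Result<bool> {
    // Recompute BLAKE3 over the raw bytes; a mismatch means corruption or a lying client.
    if blake3::hash(data).as_bytes() != &chunk_hash {
        return Ok(false); // reply CHUNK_FAIL and drop the data
    }
    let hex = hex::encode(chunk_hash);
    let dir = format!("chunks/{}/{}", &hex[0..2], &hex[2..4]);
    let path = format!("{}/{}.chk", dir, hex);
    if std::path::Path::new(&path).exists() {
        return Ok(true); // already stored — content-addressed dedup
    }
    std::fs::create_dir_all(&dir)?;
    // Atomic write: temp file first, then rename (see "Storage / paths" below).
    let tmp = format!("{}/{}.tmp", dir, hex);
    std::fs::write(&tmp, data)?;
    std::fs::rename(&tmp, &path)?;
    Ok(true)
}
```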
## BATCH_CHECK_META (Client → Server)
```
payload:
u32 count
for i in 0..count:
u8 meta_type // 1=file,2=dir,3=partition,4=disk,5=snapshot
u8[32] meta_hash
```
## CHECK_META_RESP (Server → Client)
```
payload:
u32 missing_count
for i in 0..missing_count:
u8 meta_type
u8[32] meta_hash
```
## SEND_META (Client → Server)
```
payload:
u8 meta_type // 1..5
u8[32] meta_hash
u32 body_len
u8[body_len] body_bytes // the canonical body; server will BLAKE3(body_bytes) and compare to meta_hash
```
## SEND_SNAPSHOT (Commit)
```
payload:
u8[32] snapshot_hash
u32 body_len
u8[body_len] snapshot_body // Snapshot body same encoding as meta (server validates body hash == snapshot_hash)
```
Server validates that snapshot_body references only existing meta objects (recursive / direct check). If OK → creates persistent snapshot pointer and replies SNAPSHOT_OK; if not, reply SNAPSHOT_FAIL with missing list (same format as CHECK_META_RESP).
---
# Meta-Objekt-Binärformate (Bodies)
> The client produces `body_bytes` for each meta object; `meta_hash = BLAKE3(body_bytes)`.
### FileObj (meta_type = 1)
```
FileObjBody:
u8 version (1)
u32 fs_type_code // e.g. 1=ext*, 2=ntfs, 3=fat32 (enum)
u64 size
u32 mode // POSIX mode for linux; 0 for FS without
u32 uid
u32 gid
u64 mtime_unixsec
u32 chunk_count
for i in 0..chunk_count:
u8[32] chunk_hash
// optional: xattrs/ACLs TLV (not in v1)
```
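A sketch of the matching encoder: serialize the fields in exactly this order (all integers LE), then hash the body. The function name is illustrative; xattrs/ACLs are omitted, as in v1:
```rust
/// Serialize a FileObjBody per the layout above; returns (meta_hash, body_bytes).
fn encode_file_obj(
    fs_type_code: u32, size: u64, mode: u32, uid: u32, gid: u32,
    mtime_unixsec: u64, chunk_hashes: &[[u8; 32]],
) -> ([u8; 32], Vec<u8>) {
    let mut body =
        Vec::with_capacity(1 + 4 + 8 + 4 + 4 + 4 + 8 + 4 + 32 * chunk_hashes.len());
    body.push(1u8); // version
    body.extend_from_slice(&fs_type_code.to_le_bytes());
    body.extend_from_slice(&size.to_le_bytes());
    body.extend_from_slice(&mode.to_le_bytes());
    body.extend_from_slice(&uid.to_le_bytes());
    body.extend_from_slice(&gid.to_le_bytes());
    body.extend_from_slice(&mtime_unixsec.to_le_bytes());
    body.extend_from_slice(&(chunk_hashes.len() as u32).to_le_bytes());
    for h in chunk_hashes {
        body.extend_from_slice(h);
    }
    let meta_hash = *blake3::hash(&body).as_bytes();
    (meta_hash, body)
}
```
The other meta bodies (DirObj, PartitionObj, DiskObj, SnapshotObj below) encode the same way: fixed fields in table order, LE integers, then `BLAKE3` over the whole body.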
### DirObj (meta_type = 2)
```
DirObjBody:
u8 version (1)
u32 entry_count
for each entry:
u8 entry_type // 0 = file, 1 = dir, 2 = symlink
u16 name_len
u8[name_len] name (UTF-8)
u8[32] target_meta_hash
```
### PartitionObj (meta_type = 3)
```
PartitionObjBody:
u8 version (1)
u32 fs_type_code
u8[32] root_dir_hash // DirObj hash for root of this partition
u64 start_lba
u64 end_lba
u8[16] type_guid // zeroed if unused
```
### DiskObj (meta_type = 4)
```
DiskObjBody:
u8 version (1)
u32 partition_count
for i in 0..partition_count:
u8[32] partition_hash
u64 disk_size_bytes
u16 serial_len
u8[serial_len] serial_bytes
```
### SnapshotObj (meta_type = 5)
```
SnapshotObjBody:
u8 version (1)
u64 created_at_unixsec
u32 disk_count
for i in 0..disk_count:
u8[32] disk_hash
// optional: snapshot metadata (user, note) as TLV extension later
```
---
# Flow (pseudocode) — **client side (sync client)**
(Computes all hashes; sends only what is missing, in batches.)
```text
FUNCTION client_backup(tcp_conn, computer_id, disks):
send_msg(HELLO{client_type=0, auth_type=0})
await HELLO_OK
send_msg(AUTH_USERPASS{username,password})
resp = await
if resp != AUTH_OK: abort
session_id = resp.session_id
// traverse per-partition to limit memory
snapshot_disk_hashes = []
FOR disk IN disks:
partition_hashes = []
FOR part IN disk.partitions:
root_dir_hash = process_dir(part.root_path, tcp_conn)
part_body = build_partition_body(part.fs_type, root_dir_hash, part.start, part.end, part.guid)
part_hash = blake3(part_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=3, [(part_hash,part_body)])
partition_hashes.append(part_hash)
disk_body = build_disk_body(partition_hashes, disk.size, disk.serial)
disk_hash = blake3(disk_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=4, [(disk_hash,disk_body)])
snapshot_disk_hashes.append(disk_hash)
snapshot_body = build_snapshot_body(now(), snapshot_disk_hashes)
snapshot_hash = blake3(snapshot_body)
// final TRY: ask server if snapshot can be committed (server will verify)
send_msg(SEND_SNAPSHOT(snapshot_hash, snapshot_body))
resp = await
if resp == SNAPSHOT_OK: success
else if resp == SNAPSHOT_FAIL: // server returns missing meta list
// receive missing metas; client should send the remaining missing meta/chunks (loop)
handle_missing_and_retry()
```
Helper functions:
```text
FUNCTION process_dir(path, tcp_conn):
entries_meta = [] // list of (name, entry_type, target_hash)
// optionally collect this directory's metas and batch-check them together
FOR entry IN readdir(path):
IF entry.is_file:
file_hash = process_file(entry.path, tcp_conn) // below
entries_meta.append((entry.name, 0, file_hash))
ELSE IF entry.is_dir:
subdir_hash = process_dir(entry.path, tcp_conn)
entries_meta.append((entry.name, 1, subdir_hash))
ELSE IF symlink:
symlink_body = build_symlink_body(target)
symlink_hash = blake3(symlink_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(symlink_hash, symlink_body)])
entries_meta.append((entry.name, 2, symlink_hash))
dir_body = build_dir_body(entries_meta)
dir_hash = blake3(dir_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=2, [(dir_hash,dir_body)])
RETURN dir_hash
```
```text
FUNCTION process_file(path, tcp_conn):
chunk_hashes = []
FOR each chunk IN read_in_chunks(path, 4*1024*1024):
chunk_hash = blake3(chunk)
chunk_hashes.append(chunk_hash)
// Batch-check chunks for this file
missing = batch_check_chunks(tcp_conn, chunk_hashes)
FOR each missing_hash IN missing:
chunk_bytes = read_chunk_by_hash_from_disk(path, missing_hash) // or buffer earlier
send_msg(SEND_CHUNK {hash,size,data})
await CHUNK_OK
file_body = build_file_body(fs_type, size, mode, uid, gid, mtime, chunk_hashes)
file_hash = blake3(file_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(file_hash,file_body)])
RETURN file_hash
```
`batch_check_and_send_meta_if_missing`:
* Send BATCH_CHECK_META for all items
* Server returns list of missing metas
* For each missing, send SEND_META(meta_type, meta_hash, body)
* Await META_OK
Note: batching per directory/file group reduces round trips (RTT).
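Payload builders for the two messages this helper sends, following the layouts above; a sketch (transport, header framing, and reply handling omitted; names are illustrative):
```rust
/// Build a BATCH_CHECK_META payload: u32 count (LE), then per item
/// one meta_type byte and a 32-byte meta hash.
fn batch_check_meta_payload(items: &[(u8, [u8; 32])]) -> Vec<u8> {
    let mut p = Vec::with_capacity(4 + items.len() * 33);
    p.extend_from_slice(&(items.len() as u32).to_le_bytes());
    for (meta_type, hash) in items {
        p.push(*meta_type);
        p.extend_from_slice(hash);
    }
    p
}

/// Build a SEND_META payload for one object the server reported missing.
fn send_meta_payload(meta_type: u8, meta_hash: &[u8; 32], body: &[u8]) -> Vec<u8> {
    let mut p = Vec::with_capacity(1 + 32 + 4 + body.len());
    p.push(meta_type);
    p.extend_from_slice(meta_hash);
    p.extend_from_slice(&(body.len() as u32).to_le_bytes());
    p.extend_from_slice(body);
    p
}
```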
---
# Flow (pseudocode) — **server side (sync server)**
```text
ON connection:
read HELLO -> verify allowed client type
send HELLO_OK OR HELLO_FAIL
ON AUTH_USERPASS:
validate credentials
if ok: generate session_id (16B), send AUTH_OK{session_id}
else send AUTH_FAIL
ON BATCH_CHECK_CHUNK:
read list of hashes
missing_list = []
for hash in hashes:
if not exists chunks/shard(hash): missing_list.append(hash)
send CHECK_CHUNK_RESP {missing_list}
ON SEND_CHUNK:
read chunk_hash, size, data
computed = blake3(data)
if computed != chunk_hash: send CHUNK_FAIL{reason} and drop
else if exists chunk already: send CHUNK_OK
else: write atomic to chunks/<ab>/<cd>/<hash>.chk and send CHUNK_OK
ON BATCH_CHECK_META:
similar: check meta/<type>/<hash>.meta exists — return missing list
ON SEND_META:
verify blake3(body) == meta_hash; if ok write meta/<type>/<ab>/<cd>/<hash>.meta atomically; respond META_OK
ON SEND_SNAPSHOT:
verify blake3(snapshot_body) == snapshot_hash
// Validate the object graph:
missing = validate_graph(snapshot_body) // DFS: disks -> partitions -> dirs -> files -> chunks
if missing not empty:
send SNAPSHOT_FAIL {missing (as meta list and/or chunk list)}
else:
store snapshot file and create pointer machines/<client_id>/snapshots/<id>.ref
send SNAPSHOT_OK {snapshot_id}
```
`validate_graph`:
* parse snapshot_body → disk_hashes
* for each disk_hash, check the meta exists; load the disk meta → for each partition_hash, check the meta exists … recurse through dir entries → file metas → check chunk existence for each chunk_hash. Collect the missing set and return it.
---
# Behavior on `SNAPSHOT_FAIL`
* The server returns the missing meta/chunk hashes.
* The client sends exactly those (batched) and retries `SEND_SNAPSHOT`.
* Alternatively, the client can upload all required metas/chunks incrementally on the first pass (that is the order this pseudocode follows — then nothing is missing at commit time).
---
# Storage / paths (server-internal)
* `chunks/<ab>/<cd>/<hash>.chk` (ab = first 2 hex chars; cd = next 2)
* `meta/files/<ab>/<cd>/<hash>.meta`
* `meta/dirs/<...>`
* `meta/parts/...`
* `meta/disks/...`
* `meta/snapshots/<snapshot_hash>.meta`
* `machines/<client_id>/snapshots/<snapshot_id>.ref` (Pointer -> snapshot_hash + timestamp)
Atomic writes: `tmp -> rename`.
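A sketch of that `tmp -> rename` pattern applied to the snapshot pointer; the `.ref` body here (32-byte snapshot hash followed by a u64 LE timestamp) is an assumed layout, not specified above:
```rust
use std::io::Write;

/// Persist machines/<client_id>/snapshots/<snapshot_id>.ref atomically.
fn write_snapshot_ref(
    root: &str, client_id: i64, snapshot_id: &str,
    snapshot_hash: &[u8; 32], created_at_unixsec: u64,
) -> std::io::Result<()> {
    let dir = format!("{}/machines/{}/snapshots", root, client_id);
    std::fs::create_dir_all(&dir)?;
    let tmp = format!("{}/{}.ref.tmp", dir, snapshot_id);
    let fin = format!("{}/{}.ref", dir, snapshot_id);
    let mut f = std::fs::File::create(&tmp)?;
    f.write_all(snapshot_hash)?;
    f.write_all(&created_at_unixsec.to_le_bytes())?;
    f.sync_all()?;                // flush to disk before the rename makes it visible
    std::fs::rename(&tmp, &fin)?; // tmp -> rename, as described above
    Ok(())
}
```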
---
# Important implementation notes for the AI/server implementation
* **Batching is mandatory**: implement `BATCH_CHECK_CHUNK` & `BATCH_CHECK_META` efficiently (bitset, HashSet lookups).
* **Limits**: cap `count` per batch (e.g. 1000) — the client must split its chunk lists accordingly.
* **Validation**: the server must validate the graph on `SEND_SNAPSHOT` (otherwise consistency is lost).
* **Atomic snapshot commit**: persist only once the graph is fully present.
* **Session ID**: must be used in the header of all subsequent messages.
* **Perf**: parallelize chunk uploads (multiple TCP tasks) and let the server accept multiple parallel handshakes.
* **Security**: in production, TLS over TCP or a VPN; rate limiting / brute-force protection; provisioning codes with a TTL.

24
client/.gitignore vendored Normal file
View File

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

13
client/index.html Normal file
View File

@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Arkendro Client</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

23
client/package.json Normal file
View File

@@ -0,0 +1,23 @@
{
"name": "client",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
"tauri": "tauri"
},
"dependencies": {
"react": "^19.1.0",
"react-dom": "^19.1.0",
"@tauri-apps/api": "^2",
"@tauri-apps/plugin-opener": "^2"
},
"devDependencies": {
"@vitejs/plugin-react": "^4.6.0",
"vite": "^7.0.4",
"@tauri-apps/cli": "^2"
}
}

1179
client/pnpm-lock.yaml generated Normal file

File diff suppressed because it is too large

2
client/src-tauri/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
/target/
/gen/schemas

5219
client/src-tauri/Cargo.lock generated Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,25 @@
[package]
name = "client"
version = "0.1.0"
description = "A Tauri App"
authors = ["you"]
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
# The `_lib` suffix may seem redundant, but it is necessary
# to make the lib name unique so it doesn't conflict with the bin name.
# This seems to be only an issue on Windows, see https://github.com/rust-lang/cargo/issues/8519
name = "client_lib"
crate-type = ["staticlib", "cdylib", "rlib"]
[build-dependencies]
tauri-build = { version = "2", features = [] }
[dependencies]
tauri = { version = "2.0.0", features = [ "tray-icon" ] }
tauri-plugin-opener = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"

View File

@@ -0,0 +1,3 @@
fn main() {
tauri_build::build()
}

View File

@@ -0,0 +1,10 @@
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "default",
"description": "Capability for the main window",
"windows": ["main"],
"permissions": [
"core:default",
"opener:default"
]
}

16 binary image files not shown (Tauri icon and image assets; sizes range from 903 B to 85 KiB).

View File

@@ -0,0 +1,29 @@
use tauri::{
menu::{Menu, MenuItem},
tray::TrayIconBuilder
};
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
tauri::Builder::default()
.setup(|app| {
let quit_i = MenuItem::with_id(app, "quit", "Quit", true, None::<&str>)?;
let menu = Menu::with_items(app, &[&quit_i])?;
TrayIconBuilder::new()
.menu(&menu)
.icon(app.default_window_icon().unwrap().clone())
.on_menu_event(|app, event| match event.id.as_ref() {
"quit" => {
app.exit(0);
}
_ => {}
})
.build(app)?;
Ok(())
})
.plugin(tauri_plugin_opener::init())
.run(tauri::generate_context!())
.expect("error while running tauri application");
}

View File

@@ -0,0 +1,6 @@
// Prevents additional console window on Windows in release, DO NOT REMOVE!!
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
fn main() {
client_lib::run()
}

View File

@@ -0,0 +1,35 @@
{
"$schema": "https://schema.tauri.app/config/2",
"productName": "client",
"version": "0.1.0",
"identifier": "dev.gnm.arkendro-client",
"build": {
"beforeDevCommand": "pnpm dev",
"devUrl": "http://localhost:1420",
"beforeBuildCommand": "pnpm build",
"frontendDist": "../dist"
},
"app": {
"windows": [
{
"title": "client",
"width": 800,
"height": 600
}
],
"security": {
"csp": null
}
},
"bundle": {
"active": true,
"targets": "all",
"icon": [
"icons/32x32.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.icns",
"icons/icon.ico"
]
}
}

10
client/src/App.jsx Normal file
View File

@@ -0,0 +1,10 @@
const App = () => {
return (
<main className="container">
<h1>Arkendro client</h1>
</main>
);
}
export default App;

9
client/src/main.jsx Normal file
View File

@@ -0,0 +1,9 @@
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";
ReactDOM.createRoot(document.getElementById("root")).render(
<React.StrictMode>
<App />
</React.StrictMode>,
);

25
client/vite.config.js Normal file
View File

@@ -0,0 +1,25 @@
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
const host = process.env.TAURI_DEV_HOST;
export default defineConfig(async () => ({
plugins: [react()],
clearScreen: false,
server: {
port: 1420,
strictPort: true,
host: host || false,
hmr: host
? {
protocol: "ws",
host,
port: 1421,
}
: undefined,
watch: {
ignored: ["**/src-tauri/**"],
},
},
}));

View File

@@ -0,0 +1,56 @@
{
"db_name": "SQLite",
"query": "\n SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username\n FROM provisioning_codes pc\n JOIN machines m ON pc.machine_id = m.id\n JOIN users u ON m.user_id = u.id\n WHERE pc.code = ? AND pc.used = 0\n ",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "code",
"ordinal": 1,
"type_info": "Text"
},
{
"name": "expires_at",
"ordinal": 2,
"type_info": "Datetime"
},
{
"name": "used",
"ordinal": 3,
"type_info": "Bool"
},
{
"name": "machine_id",
"ordinal": 4,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 5,
"type_info": "Integer"
},
{
"name": "username",
"ordinal": 6,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
true,
true,
false,
false
]
},
"hash": "2d6e5810f76e780a4a9b54c5ea39d707be614eb304dc6b4f32d8b6d28464c4b5"
}

View File

@@ -0,0 +1,26 @@
{
"db_name": "SQLite",
"query": "SELECT id, user_id FROM machines WHERE id = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 1,
"type_info": "Integer"
}
],
"parameters": {
"Right": 1
},
"nullable": [
false,
false
]
},
"hash": "43af0c22d05eca56b2a7b1f6eed873102d8e006330fd7d8063657d2df936b3fb"
}

View File

@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "UPDATE provisioning_codes SET used = 1 WHERE id = ?",
"describe": {
"columns": [],
"parameters": {
"Right": 1
},
"nullable": []
},
"hash": "508e673540beae31730d323bbb52d91747bb405ef3d6f4a7f20776fdeb618688"
}

View File

@@ -0,0 +1,32 @@
{
"db_name": "SQLite",
"query": "SELECT id, username, password_hash FROM users WHERE username = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "username",
"ordinal": 1,
"type_info": "Text"
},
{
"name": "password_hash",
"ordinal": 2,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false
]
},
"hash": "9f9215a05f729db6f707c84967f4f11033d39d17ded98f4fe9fb48f3d1598596"
}

View File

@@ -0,0 +1,26 @@
{
"db_name": "SQLite",
"query": "SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 1,
"type_info": "Integer"
}
],
"parameters": {
"Right": 2
},
"nullable": [
false,
false
]
},
"hash": "cc5f2e47cc53dd29682506ff84f07f7d0914e3141e62b470e84b3886b50764a1"
}

141
server/Cargo.lock generated
View File

@@ -38,6 +38,18 @@ version = "1.0.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0674a1ddeecb70197781e945de4b3b8ffb61fa939a5597bcf48503737663100"
[[package]]
name = "arrayref"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "atoi"
version = "2.0.0"
@@ -153,6 +165,15 @@ dependencies = [
"zeroize",
]
[[package]]
name = "bincode"
version = "1.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1f45e9417d87227c7a56d22e471c6206462cba514c7590c09aff4cf6d1ddcad"
dependencies = [
"serde",
]
[[package]]
name = "bitflags"
version = "2.9.4"
@@ -162,6 +183,19 @@ dependencies = [
"serde",
]
[[package]]
name = "blake3"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3888aaa89e4b2a40fca9848e400f6a658a5a3978de7be858e209cafa8be9a4a0"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
@@ -254,6 +288,12 @@ version = "0.9.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2459377285ad874054d797f3ccebf984978aa39129f6eafde5cdc8315b612f8"
[[package]]
name = "constant_time_eq"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
@@ -364,6 +404,16 @@ version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "errno"
version = "0.3.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.59.0",
]
[[package]]
name = "etcetera"
version = "0.8.0"
@@ -386,6 +436,12 @@ dependencies = [
"pin-project-lite",
]
[[package]]
name = "fastrand"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "find-msvc-tools"
version = "0.1.1"
@@ -628,6 +684,12 @@ dependencies = [
"pin-project-lite",
]
[[package]]
name = "http-range-header"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9171a2ea8a68358193d15dd5d70c1c10a2afc3e7e4c5bc92bc9f025cebd7359c"
[[package]]
name = "httparse"
version = "1.10.1"
@@ -897,6 +959,12 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "linux-raw-sys"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039"
[[package]]
name = "litemap"
version = "0.8.0"
@@ -947,6 +1015,16 @@ version = "0.3.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
[[package]]
name = "mime_guess"
version = "2.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f7c44f8e672c00fe5308fa235f821cb4198414e1c77935c1ab6948d3fd78550e"
dependencies = [
"mime",
"unicase",
]
[[package]]
name = "miniz_oxide"
version = "0.8.9"
@@ -1233,6 +1311,19 @@ version = "0.1.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace"
[[package]]
name = "rustix"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd15f8a2c5551a84d56efdc1cd049089e409ac19a3072d5037a17fd70719ff3e"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.59.0",
]
[[package]]
name = "rustls"
version = "0.23.31"
@@ -1346,10 +1437,16 @@ dependencies = [
"anyhow",
"axum",
"bcrypt",
"bincode",
"blake3",
"bytes",
"chrono",
"hex",
"rand",
"serde",
"serde_json",
"sqlx",
"tempfile",
"tokio",
"tower-http",
"uuid",
@@ -1695,6 +1792,19 @@ dependencies = [
"syn",
]
[[package]]
name = "tempfile"
version = "3.22.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "84fa4d11fadde498443cca10fd3ac23c951f0dc59e080e9f4b93d4df4e4eea53"
dependencies = [
"fastrand",
"getrandom 0.3.3",
"once_cell",
"rustix",
"windows-sys 0.59.0",
]
[[package]]
name = "thiserror"
version = "2.0.16"
@@ -1782,6 +1892,19 @@ dependencies = [
"tokio",
]
[[package]]
name = "tokio-util"
version = "0.7.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14307c986784f72ef81c89db7d9e28d6ac26d16213b109ea501696195e6e3ce5"
dependencies = [
"bytes",
"futures-core",
"futures-sink",
"pin-project-lite",
"tokio",
]
[[package]]
name = "tower"
version = "0.5.2"
@@ -1806,10 +1929,22 @@ checksum = "adc82fd73de2a9722ac5da747f12383d2bfdb93591ee6c58486e0097890f05f2"
dependencies = [
"bitflags",
"bytes",
"futures-core",
"futures-util",
"http",
"http-body",
"http-body-util",
"http-range-header",
"httpdate",
"mime",
"mime_guess",
"percent-encoding",
"pin-project-lite",
"tokio",
"tokio-util",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
@@ -1862,6 +1997,12 @@ version = "1.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1dccffe3ce07af9386bfd29e80c0ab1a8205a2fc34e4bcd40364df902cfa8f3f"
[[package]]
name = "unicase"
version = "2.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75b844d17643ee918803943289730bec8aac480150456169e647ed0b576ba539"
[[package]]
name = "unicode-bidi"
version = "0.3.18"

View File

@@ -5,12 +5,20 @@ edition = "2021"
[dependencies]
axum = "0.8.4"
tokio = { version = "1.47.1", features = ["full"] }
tokio = { version = "1.47.1", features = ["full", "signal"] }
sqlx = { version = "0.8.6", features = ["runtime-tokio-rustls", "sqlite", "chrono", "uuid"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
bcrypt = "0.17.1"
uuid = { version = "1.0", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
tower-http = { version = "0.6.6", features = ["cors"] }
anyhow = "1.0"
tower-http = { version = "0.6.6", features = ["cors", "fs"] }
anyhow = "1.0"
rand = "0.8"
blake3 = "1.5"
bytes = "1.0"
bincode = "1.3"
hex = "0.4"
[dev-dependencies]
tempfile = "3.0"

View File

@@ -0,0 +1,365 @@
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, EntryType};
use crate::sync::protocol::MetaType;
use crate::utils::{error::*, models::*, DbPool};
use serde::Serialize;
use axum::response::Response;
use axum::body::Body;
use axum::http::{HeaderMap, HeaderValue};
#[derive(Debug, Serialize)]
pub struct FileSystemEntry {
pub name: String,
pub entry_type: String, // "file", "dir", "symlink"
pub size_bytes: Option<u64>,
pub meta_hash: String,
}
#[derive(Debug, Serialize)]
pub struct DirectoryListing {
pub path: String,
pub entries: Vec<FileSystemEntry>,
pub parent_hash: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct FileMetadata {
pub name: String,
pub size_bytes: u64,
pub mime_type: String,
pub meta_hash: String,
}
pub struct FilesController;
impl FilesController {
/// List directory contents for a partition
pub async fn list_partition_root(
pool: &DbPool,
machine_id: i64,
snapshot_id: String,
partition_index: usize,
user: &User,
) -> AppResult<DirectoryListing> {
// Verify machine access
Self::verify_machine_access(pool, machine_id, user).await?;
let storage = Storage::new("./data");
// Get partition hash from snapshot
let partition_hash = Self::get_partition_hash(&storage, machine_id, &snapshot_id, partition_index).await?;
// Load partition metadata to get root directory hash
let partition_meta = storage.load_meta(MetaType::Partition, &partition_hash).await
.map_err(|_| AppError::NotFoundError("Partition metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Partition metadata not found".to_string()))?;
if let MetaObj::Partition(partition_obj) = partition_meta {
Self::list_directory_by_hash(&storage, &partition_obj.root_dir_hash, "/".to_string()).await
} else {
Err(AppError::ValidationError("Invalid partition metadata".to_string()))
}
}
/// List directory contents by directory hash
pub async fn list_directory(
pool: &DbPool,
machine_id: i64,
snapshot_id: String,
partition_index: usize,
dir_hash: String,
user: &User,
) -> AppResult<DirectoryListing> {
// Verify machine access
Self::verify_machine_access(pool, machine_id, user).await?;
let storage = Storage::new("./data");
// Decode directory hash
let hash_bytes = hex::decode(&dir_hash)
.map_err(|_| AppError::ValidationError("Invalid directory hash format".to_string()))?;
if hash_bytes.len() != 32 {
return Err(AppError::ValidationError("Directory hash must be 32 bytes".to_string()));
}
let mut hash = [0u8; 32];
hash.copy_from_slice(&hash_bytes);
Self::list_directory_by_hash(&storage, &hash, dir_hash).await
}
/// Download a file by file hash with filename
pub async fn download_file(
pool: &DbPool,
machine_id: i64,
_snapshot_id: String,
_partition_index: usize,
file_hash: String,
filename: Option<String>,
user: &User,
) -> AppResult<Response<Body>> {
// Verify machine access
Self::verify_machine_access(pool, machine_id, user).await?;
let storage = Storage::new("./data");
// Decode file hash
let hash_bytes = hex::decode(&file_hash)
.map_err(|_| AppError::ValidationError("Invalid file hash format".to_string()))?;
if hash_bytes.len() != 32 {
return Err(AppError::ValidationError("File hash must be 32 bytes".to_string()));
}
let mut hash = [0u8; 32];
hash.copy_from_slice(&hash_bytes);
// Load file metadata
let file_meta = storage.load_meta(MetaType::File, &hash).await
.map_err(|_| AppError::NotFoundError("File metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("File metadata not found".to_string()))?;
if let MetaObj::File(file_obj) = file_meta {
// Reconstruct file content from chunks
let mut file_content = Vec::new();
for chunk_hash in &file_obj.chunk_hashes {
let chunk_data = storage.load_chunk(chunk_hash).await
.map_err(|_| AppError::NotFoundError(format!("Chunk {} not found", hex::encode(chunk_hash))))?
.ok_or_else(|| AppError::NotFoundError(format!("Chunk {} not found", hex::encode(chunk_hash))))?;
file_content.extend_from_slice(&chunk_data);
}
// Use provided filename or generate a generic one
let filename = filename.unwrap_or_else(|| format!("file_{}.bin", &file_hash[..8]));
// Determine MIME type from file content
let mime_type = Self::detect_mime_type(&filename, &file_content);
// Create response headers
let mut headers = HeaderMap::new();
headers.insert(
"content-type",
HeaderValue::from_str(&mime_type).unwrap_or_else(|_| HeaderValue::from_static("application/octet-stream"))
);
headers.insert(
"content-disposition",
HeaderValue::from_str(&format!("attachment; filename=\"{}\"", filename))
.unwrap_or_else(|_| HeaderValue::from_static("attachment"))
);
headers.insert(
"content-length",
HeaderValue::from_str(&file_content.len().to_string()).unwrap()
);
let mut response = Response::new(Body::from(file_content));
*response.headers_mut() = headers;
Ok(response)
} else {
Err(AppError::ValidationError("Invalid file metadata".to_string()))
}
}
/// Get file metadata without downloading content
pub async fn get_file_metadata(
pool: &DbPool,
machine_id: i64,
snapshot_id: String,
partition_index: usize,
file_hash: String,
user: &User,
) -> AppResult<FileMetadata> {
// Verify machine access
Self::verify_machine_access(pool, machine_id, user).await?;
let storage = Storage::new("./data");
// Decode file hash
let hash_bytes = hex::decode(&file_hash)
.map_err(|_| AppError::ValidationError("Invalid file hash format".to_string()))?;
if hash_bytes.len() != 32 {
return Err(AppError::ValidationError("File hash must be 32 bytes".to_string()));
}
let mut hash = [0u8; 32];
hash.copy_from_slice(&hash_bytes);
// Load file metadata
let file_meta = storage.load_meta(MetaType::File, &hash).await
.map_err(|_| AppError::NotFoundError("File metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("File metadata not found".to_string()))?;
if let MetaObj::File(file_obj) = file_meta {
let filename = format!("file_{}.bin", &file_hash[..8]);
let mime_type = Self::detect_mime_type(&filename, &[]);
Ok(FileMetadata {
name: filename,
size_bytes: file_obj.size,
mime_type,
meta_hash: file_hash,
})
} else {
Err(AppError::ValidationError("Invalid file metadata".to_string()))
}
}
// Helper methods
async fn verify_machine_access(pool: &DbPool, machine_id: i64, user: &User) -> AppResult<()> {
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
machine_id,
user.id
)
.fetch_optional(pool)
.await
.map_err(|e| AppError::DatabaseError(e.to_string()))?;
if machine.is_none() {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
Ok(())
}
async fn get_partition_hash(
storage: &Storage,
machine_id: i64,
snapshot_id: &str,
partition_index: usize,
) -> AppResult<[u8; 32]> {
// Load snapshot reference to get hash
let (snapshot_hash, _) = storage.load_snapshot_ref(machine_id, snapshot_id).await
.map_err(|_| AppError::NotFoundError("Snapshot not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot not found".to_string()))?;
// Load snapshot metadata
let snapshot_meta = storage.load_meta(MetaType::Snapshot, &snapshot_hash).await
.map_err(|_| AppError::NotFoundError("Snapshot metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot metadata not found".to_string()))?;
if let MetaObj::Snapshot(snapshot_obj) = snapshot_meta {
// Get first disk (assuming single disk for now)
if snapshot_obj.disk_hashes.is_empty() {
return Err(AppError::NotFoundError("No disks in snapshot".to_string()));
}
let disk_hash = snapshot_obj.disk_hashes[0];
// Load disk metadata
let disk_meta = storage.load_meta(MetaType::Disk, &disk_hash).await
.map_err(|_| AppError::NotFoundError("Disk metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Disk metadata not found".to_string()))?;
if let MetaObj::Disk(disk_obj) = disk_meta {
if partition_index >= disk_obj.partition_hashes.len() {
return Err(AppError::NotFoundError("Partition index out of range".to_string()));
}
Ok(disk_obj.partition_hashes[partition_index])
} else {
Err(AppError::ValidationError("Invalid disk metadata".to_string()))
}
} else {
Err(AppError::ValidationError("Invalid snapshot metadata".to_string()))
}
}
async fn list_directory_by_hash(
storage: &Storage,
dir_hash: &[u8; 32],
path: String,
) -> AppResult<DirectoryListing> {
// Load directory metadata
let dir_meta = storage.load_meta(MetaType::Dir, dir_hash).await
.map_err(|_| AppError::NotFoundError("Directory metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Directory metadata not found".to_string()))?;
if let MetaObj::Dir(dir_obj) = dir_meta {
let mut entries = Vec::new();
for entry in dir_obj.entries {
let entry_type_str = match entry.entry_type {
EntryType::File => "file",
EntryType::Dir => "dir",
EntryType::Symlink => "symlink",
};
let size_bytes = if entry.entry_type == EntryType::File {
// Load file metadata to get size
if let Ok(Some(MetaObj::File(file_obj))) = storage.load_meta(MetaType::File, &entry.target_meta_hash).await {
Some(file_obj.size)
} else {
None
}
} else {
None
};
entries.push(FileSystemEntry {
name: entry.name,
entry_type: entry_type_str.to_string(),
size_bytes,
meta_hash: hex::encode(entry.target_meta_hash),
});
}
// Sort entries: directories first, then files, both alphabetically
entries.sort_by(|a, b| {
match (a.entry_type.as_str(), b.entry_type.as_str()) {
("dir", "file") => std::cmp::Ordering::Less,
("file", "dir") => std::cmp::Ordering::Greater,
_ => a.name.cmp(&b.name),
}
});
Ok(DirectoryListing {
path,
entries,
parent_hash: None, // TODO: Implement parent tracking if needed
})
} else {
Err(AppError::ValidationError("Invalid directory metadata".to_string()))
}
}
fn detect_mime_type(filename: &str, _content: &[u8]) -> String {
// Simple MIME type detection based on file extension
let extension = std::path::Path::new(filename)
.extension()
.and_then(|ext| ext.to_str())
.unwrap_or("")
.to_lowercase();
match extension.as_str() {
"txt" | "md" | "readme" => "text/plain",
"html" | "htm" => "text/html",
"css" => "text/css",
"js" => "application/javascript",
"json" => "application/json",
"xml" => "application/xml",
"pdf" => "application/pdf",
"zip" => "application/zip",
"tar" => "application/x-tar",
"gz" => "application/gzip",
"jpg" | "jpeg" => "image/jpeg",
"png" => "image/png",
"gif" => "image/gif",
"svg" => "image/svg+xml",
"mp4" => "video/mp4",
"mp3" => "audio/mpeg",
"wav" => "audio/wav",
"exe" => "application/x-msdownload",
"dll" => "application/x-msdownload",
"so" => "application/x-sharedlib",
"deb" => "application/vnd.debian.binary-package",
"rpm" => "application/x-rpm",
_ => "application/octet-stream",
}.to_string()
}
}

View File

@@ -1,48 +1,65 @@
use crate::utils::{error::*, models::*, DbPool};
use chrono::Utc;
use crate::utils::{base62::Base62, config::ConfigManager, error::*, models::*, DbPool};
use chrono::{Duration, Utc};
use rand::{distributions::Alphanumeric, Rng};
use sqlx::Row;
use uuid::Uuid;
pub struct MachinesController;
impl MachinesController {
pub async fn register_machine(
pool: &DbPool,
code: &str,
uuid: &Uuid,
name: &str,
) -> AppResult<Machine> {
pub async fn register_machine(pool: &DbPool, user: &User, name: &str) -> AppResult<Machine> {
Self::validate_machine_input(name)?;
let provisioning_code = Self::get_provisioning_code(pool, code)
.await?
.ok_or_else(|| validation_error("Invalid provisioning code"))?;
let machine_uuid = Uuid::new_v4();
if provisioning_code.used {
return Err(validation_error("Provisioning code already used"));
}
if provisioning_code.expires_at < Utc::now() {
return Err(validation_error("Provisioning code expired"));
}
if Self::machine_exists_by_uuid(pool, uuid).await? {
return Err(conflict_error("Machine with this UUID already exists"));
}
let machine = Self::create_machine(pool, provisioning_code.user_id, uuid, name).await?;
Self::mark_provisioning_code_used(pool, code).await?;
let machine = Self::create_machine(pool, user.id, &machine_uuid, name).await?;
Ok(machine)
}
pub async fn get_machines_for_user(pool: &DbPool, user: &User) -> AppResult<Vec<Machine>> {
if user.role == UserRole::Admin {
Self::get_all_machines(pool).await
} else {
Self::get_machines_by_user_id(pool, user.id).await
pub async fn create_provisioning_code(
pool: &DbPool,
machine_id: i64,
user: &User,
) -> AppResult<ProvisioningCodeResponse> {
let machine = Self::get_machine_by_id(pool, machine_id).await?;
if user.role != UserRole::Admin && machine.user_id != user.id {
return Err(forbidden_error("Access denied"));
}
let code: String = rand::thread_rng()
.sample_iter(&Alphanumeric)
.take(5)
.map(char::from)
.collect();
let external_url = ConfigManager::get_external_url(pool).await?;
let provisioning_string = format!("52?#{}/{}", external_url, code);
let encoded_code = Base62::encode(&provisioning_string);
let expires_at = Utc::now() + Duration::hours(1);
sqlx::query(
r#"
INSERT INTO provisioning_codes (machine_id, code, expires_at)
VALUES (?, ?, ?)
"#,
)
.bind(machine_id)
.bind(&code)
.bind(expires_at)
.execute(pool)
.await?;
Ok(ProvisioningCodeResponse {
code: encoded_code,
raw_code: code,
expires_at,
})
}
pub async fn get_machines_for_user(pool: &DbPool, user: &User) -> AppResult<Vec<Machine>> {
Self::get_machines_by_user_id(pool, user.id).await
}
pub async fn delete_machine(pool: &DbPool, machine_id: i64, user: &User) -> AppResult<()> {
@@ -70,35 +87,12 @@ impl MachinesController {
id: row.get("id"),
user_id: row.get("user_id"),
uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
machine_id: row.get::<String, _>("uuid"),
name: row.get("name"),
created_at: row.get("created_at"),
})
}
async fn get_all_machines(pool: &DbPool) -> AppResult<Vec<Machine>> {
let rows = sqlx::query(
r#"
SELECT id, user_id, uuid, name, created_at
FROM machines ORDER BY created_at DESC
"#,
)
.fetch_all(pool)
.await?;
let mut machines = Vec::new();
for row in rows {
machines.push(Machine {
id: row.get("id"),
user_id: row.get("user_id"),
uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
name: row.get("name"),
created_at: row.get("created_at"),
});
}
Ok(machines)
}
async fn get_machines_by_user_id(pool: &DbPool, user_id: i64) -> AppResult<Vec<Machine>> {
let rows = sqlx::query(
r#"
@@ -116,6 +110,7 @@ impl MachinesController {
id: row.get("id"),
user_id: row.get("user_id"),
uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
machine_id: row.get::<String, _>("uuid"),
name: row.get("name"),
created_at: row.get("created_at"),
});
@@ -169,7 +164,7 @@ impl MachinesController {
) -> AppResult<Option<ProvisioningCode>> {
let row = sqlx::query(
r#"
SELECT id, user_id, code, created_at, expires_at, used
SELECT id, machine_id, code, created_at, expires_at, used
FROM provisioning_codes WHERE code = ?
"#,
)
@@ -180,7 +175,7 @@ impl MachinesController {
if let Some(row) = row {
Ok(Some(ProvisioningCode {
id: row.get("id"),
user_id: row.get("user_id"),
machine_id: row.get("machine_id"),
code: row.get("code"),
created_at: row.get("created_at"),
expires_at: row.get("expires_at"),

View File

@@ -1,3 +1,5 @@
pub mod auth;
pub mod machines;
pub mod snapshots;
pub mod users;
pub mod files;

View File

@@ -0,0 +1,184 @@
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, FsType};
use crate::sync::protocol::MetaType;
use crate::utils::{error::*, models::*, DbPool};
use serde::Serialize;
use chrono::{DateTime, Utc};
// Basic snapshot info for listing
#[derive(Debug, Serialize)]
pub struct SnapshotSummary {
pub id: String,
pub snapshot_hash: String,
pub created_at: String,
}
// Detailed snapshot info with disk/partition data
#[derive(Debug, Serialize)]
pub struct SnapshotDetails {
pub id: String,
pub snapshot_hash: String,
pub created_at: String,
pub disks: Vec<DiskInfo>,
}
#[derive(Debug, Serialize)]
pub struct DiskInfo {
pub serial: String,
pub size_bytes: u64,
pub partitions: Vec<PartitionInfo>,
}
#[derive(Debug, Serialize)]
pub struct PartitionInfo {
pub fs_type: String,
pub start_lba: u64,
pub end_lba: u64,
pub size_bytes: u64,
}
pub struct SnapshotsController;
impl SnapshotsController {
pub async fn get_machine_snapshots(
pool: &DbPool,
machine_id: i64,
user: &User,
) -> AppResult<Vec<SnapshotSummary>> {
// Verify machine access
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
machine_id,
user.id
)
.fetch_optional(pool)
.await
.map_err(|e| AppError::DatabaseError(e.to_string()))?;
if machine.is_none() {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
let _machine = machine.unwrap();
let storage = Storage::new("./data");
let mut snapshot_summaries = Vec::new();
// List all snapshots for this machine from storage
match storage.list_snapshots(machine_id).await {
Ok(snapshot_ids) => {
for snapshot_id in snapshot_ids {
// Load snapshot reference to get hash and timestamp
if let Ok(Some((snapshot_hash, created_at_timestamp))) = storage.load_snapshot_ref(machine_id, &snapshot_id).await {
let created_at = DateTime::from_timestamp(created_at_timestamp as i64, 0)
.unwrap_or_else(|| Utc::now())
.format("%Y-%m-%d %H:%M:%S UTC")
.to_string();
snapshot_summaries.push(SnapshotSummary {
id: snapshot_id,
snapshot_hash: hex::encode(snapshot_hash),
created_at,
});
}
}
},
Err(_) => {
// If no snapshots directory exists, return empty list
return Ok(Vec::new());
}
}
// Sort by creation time (newest first)
snapshot_summaries.sort_by(|a, b| b.created_at.cmp(&a.created_at));
Ok(snapshot_summaries)
}
pub async fn get_snapshot_details(
pool: &DbPool,
machine_id: i64,
snapshot_id: String,
user: &User,
) -> AppResult<SnapshotDetails> {
// Verify machine access
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
machine_id,
user.id
)
.fetch_optional(pool)
.await
.map_err(|e| AppError::DatabaseError(e.to_string()))?;
if machine.is_none() {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
let _machine = machine.unwrap();
let storage = Storage::new("./data");
// Load snapshot reference to get hash and timestamp
let (snapshot_hash, created_at_timestamp) = storage.load_snapshot_ref(machine_id, &snapshot_id).await
.map_err(|_| AppError::NotFoundError("Snapshot not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot not found".to_string()))?;
// Load snapshot metadata
let snapshot_meta = storage.load_meta(MetaType::Snapshot, &snapshot_hash).await
.map_err(|_| AppError::NotFoundError("Snapshot metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot metadata not found".to_string()))?;
if let MetaObj::Snapshot(snapshot_obj) = snapshot_meta {
let mut disks = Vec::new();
for disk_hash in snapshot_obj.disk_hashes {
if let Ok(Some(disk_meta)) = storage.load_meta(MetaType::Disk, &disk_hash).await {
if let MetaObj::Disk(disk_obj) = disk_meta {
let mut partitions = Vec::new();
for partition_hash in disk_obj.partition_hashes {
if let Ok(Some(partition_meta)) = storage.load_meta(MetaType::Partition, &partition_hash).await {
if let MetaObj::Partition(partition_obj) = partition_meta {
let fs_type_str = match partition_obj.fs_type_code {
FsType::Ext => "ext",
FsType::Ntfs => "ntfs",
FsType::Fat32 => "fat32",
FsType::Unknown => "unknown",
};
partitions.push(PartitionInfo {
fs_type: fs_type_str.to_string(),
start_lba: partition_obj.start_lba,
end_lba: partition_obj.end_lba,
size_bytes: (partition_obj.end_lba - partition_obj.start_lba) * 512,
});
}
}
}
disks.push(DiskInfo {
serial: disk_obj.serial,
size_bytes: disk_obj.disk_size_bytes,
partitions,
});
}
}
}
// Convert timestamp to readable format
let created_at_str = DateTime::<Utc>::from_timestamp(created_at_timestamp as i64, 0)
.map(|dt| dt.format("%Y-%m-%d %H:%M:%S").to_string())
.unwrap_or_else(|| "Unknown".to_string());
Ok(SnapshotDetails {
id: snapshot_id,
snapshot_hash: hex::encode(snapshot_hash),
created_at: created_at_str,
disks,
})
} else {
Err(AppError::ValidationError("Invalid snapshot metadata".to_string()))
}
}
}

View File

@@ -1,44 +1,111 @@
mod controllers;
mod routes;
mod utils;
mod sync;
use utils::init_database;
use anyhow::Result;
use axum::{
routing::{delete, get, post, put},
Router,
};
use routes::{admin, auth as auth_routes, machines, setup};
use tower_http::cors::CorsLayer;
use routes::{accounts, admin, auth, config, machines, setup, snapshots, files};
use std::path::Path;
use tokio::signal;
use tower_http::{
cors::CorsLayer,
services::{ServeDir, ServeFile},
};
use utils::init_database;
use sync::{SyncServer, server::SyncServerConfig};
#[tokio::main]
async fn main() -> Result<()> {
let pool = init_database().await?;
let app = Router::new()
let sync_pool = pool.clone();
let api_routes = Router::new()
.route("/setup/status", get(setup::get_setup_status))
.route("/setup/init", post(setup::init_setup))
.route("/auth/login", post(auth_routes::login))
.route("/auth/logout", post(auth_routes::logout))
.route("/auth/login", post(auth::login))
.route("/auth/logout", post(auth::logout))
.route("/accounts/me", get(accounts::me))
.route("/admin/users", get(admin::get_users))
.route("/admin/users", post(admin::create_user_handler))
.route("/admin/users/{id}", put(admin::update_user_handler))
.route("/admin/users/{id}", delete(admin::delete_user_handler))
.route("/admin/config", get(config::get_all_configs))
.route("/admin/config", post(config::set_config))
.route("/admin/config/{key}", get(config::get_config))
.route("/machines/register", post(machines::register_machine))
.route("/machines/provisioning-code", post(machines::create_provisioning_code))
.route("/machines", get(machines::get_machines))
.route("/machines/{id}", get(machines::get_machine))
.route("/machines/{id}", delete(machines::delete_machine))
.route("/machines/{id}/snapshots", get(snapshots::get_machine_snapshots))
.route("/machines/{machine_id}/snapshots/{snapshot_id}", get(snapshots::get_snapshot_details))
.route("/machines/{machine_id}/snapshots/{snapshot_id}/partitions/{partition_index}/files", get(files::list_partition_root))
.route("/machines/{machine_id}/snapshots/{snapshot_id}/partitions/{partition_index}/files/{dir_hash}", get(files::list_directory))
.route("/machines/{machine_id}/snapshots/{snapshot_id}/partitions/{partition_index}/download/{file_hash}", get(files::download_file))
.route("/machines/{machine_id}/snapshots/{snapshot_id}/partitions/{partition_index}/metadata/{file_hash}", get(files::get_file_metadata))
.layer(CorsLayer::permissive())
.with_state(pool);
let dist_path = "./dist";
let app = Router::new()
.nest("/api", api_routes)
.nest_service("/assets", ServeDir::new(format!("{}/assets", dist_path)))
.route_service("/", ServeFile::new(format!("{}/index.html", dist_path)))
.fallback_service(ServeFile::new(format!("{}/index.html", dist_path)))
.layer(CorsLayer::permissive());
if !Path::new(dist_path).exists() {
println!("Warning: dist directory not found at {}", dist_path);
}
let sync_config = SyncServerConfig::default();
let sync_server = SyncServer::new(sync_config.clone(), sync_pool);
tokio::spawn(async move {
if let Err(e) = sync_server.start().await {
eprintln!("Sync server error: {}", e);
}
});
let listener = tokio::net::TcpListener::bind("0.0.0.0:8379").await?;
println!("Server running on http://0.0.0.0:8379");
axum::serve(listener, app).await?;
println!("HTTP server running on http://0.0.0.0:8379");
println!("Sync server running on {}:{}", sync_config.bind_address, sync_config.port);
axum::serve(listener, app)
.with_graceful_shutdown(shutdown_signal())
.await?;
Ok(())
}
async fn shutdown_signal() {
let ctrl_c = async {
signal::ctrl_c()
.await
.expect("failed to install Ctrl+C handler");
};
#[cfg(unix)]
let terminate = async {
signal::unix::signal(signal::unix::SignalKind::terminate())
.expect("failed to install signal handler")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => {
println!("\nShutting down due to Ctrl+C...");
},
_ = terminate => {
println!("\nShutting down due to terminate signal...");
},
}
}

View File

@@ -0,0 +1,6 @@
use crate::utils::{auth::AuthUser, error::*, models::User};
use axum::response::Json;
pub async fn me(auth_user: AuthUser) -> Result<Json<User>, AppError> {
Ok(success_response(auth_user.user))
}

113
server/src/routes/config.rs Normal file
View File

@@ -0,0 +1,113 @@
use crate::utils::{auth::*, config::ConfigManager, error::*, DbPool};
use axum::{extract::State, response::Json};
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
pub struct ConfigRequest {
pub key: String,
pub value: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ConfigResponse {
pub key: String,
pub value: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ConfigDefinition {
pub key: String,
pub description: String,
pub value: Option<String>,
pub default_value: Option<String>,
pub required: bool,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ConfigListResponse {
pub configs: Vec<ConfigDefinition>,
}
pub async fn get_all_configs(
auth_user: AuthUser,
State(pool): State<DbPool>,
) -> Result<Json<ConfigListResponse>, AppError> {
if auth_user.user.role != crate::utils::models::UserRole::Admin {
return Err(forbidden_error("Admin access required"));
}
let allowed_configs = vec![
ConfigDefinition {
key: "EXTERNAL_URL".to_string(),
description: "The external URL used for provisioning codes. This should be the public URL where this server can be reached.".to_string(),
value: ConfigManager::get_config(&pool, "EXTERNAL_URL").await?,
default_value: Some("https://your-domain.com".to_string()),
required: true,
},
ConfigDefinition {
key: "SESSION_TIMEOUT_HOURS".to_string(),
description: "Number of hours before user sessions expire and require re-authentication.".to_string(),
value: ConfigManager::get_config(&pool, "SESSION_TIMEOUT_HOURS").await?,
default_value: Some("24".to_string()),
required: false,
},
];
Ok(success_response(ConfigListResponse {
configs: allowed_configs,
}))
}
pub async fn set_config(
auth_user: AuthUser,
State(pool): State<DbPool>,
Json(request): Json<ConfigRequest>,
) -> Result<Json<serde_json::Value>, AppError> {
if auth_user.user.role != crate::utils::models::UserRole::Admin {
return Err(forbidden_error("Admin access required"));
}
let allowed_keys = vec!["EXTERNAL_URL", "SESSION_TIMEOUT_HOURS"];
if !allowed_keys.contains(&request.key.as_str()) {
return Err(validation_error("Invalid configuration key"));
}
match request.key.as_str() {
"EXTERNAL_URL" => {
if request.value.trim().is_empty() {
return Err(validation_error("External URL cannot be empty"));
}
if !request.value.starts_with("http://") && !request.value.starts_with("https://") {
return Err(validation_error(
"External URL must start with http:// or https://",
));
}
}
"SESSION_TIMEOUT_HOURS" => {
if request.value.parse::<i32>().is_err() || request.value.parse::<i32>().unwrap() <= 0 {
return Err(validation_error("Value must be a positive number"));
}
}
_ => {}
}
ConfigManager::set_config(&pool, &request.key, &request.value).await?;
Ok(success_message("Configuration updated successfully"))
}
pub async fn get_config(
auth_user: AuthUser,
State(pool): State<DbPool>,
axum::extract::Path(key): axum::extract::Path<String>,
) -> Result<Json<ConfigResponse>, AppError> {
if auth_user.user.role != crate::utils::models::UserRole::Admin {
return Err(forbidden_error("Admin access required"));
}
let value = ConfigManager::get_config(&pool, &key)
.await?
.ok_or_else(|| not_found_error("Configuration key not found"))?;
Ok(success_response(ConfigResponse { key, value }))
}
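
ConfigManager itself is not part of this diff. Inferred from the call sites above, a minimal sketch over a single key/value table could look like the following; the table name and schema are assumptions, and the real server/src/utils/config.rs may differ.

    // Assumed sketch of ConfigManager.
    // Presumes a table: CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT NOT NULL).
    use sqlx::SqlitePool;

    pub struct ConfigManager;

    impl ConfigManager {
        pub async fn get_config(pool: &SqlitePool, key: &str) -> sqlx::Result<Option<String>> {
            sqlx::query_scalar("SELECT value FROM config WHERE key = ?")
                .bind(key)
                .fetch_optional(pool)
                .await
        }
        pub async fn set_config(pool: &SqlitePool, key: &str, value: &str) -> sqlx::Result<()> {
            sqlx::query("INSERT INTO config (key, value) VALUES (?, ?) ON CONFLICT(key) DO UPDATE SET value = excluded.value")
                .bind(key)
                .bind(value)
                .execute(pool)
                .await?;
            Ok(())
        }
    }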

View File

@@ -0,0 +1,77 @@
use axum::{extract::{Path, Query, State}, Json, response::Response};
use axum::body::Body;
use serde::Deserialize;
use crate::controllers::files::{FilesController, DirectoryListing, FileMetadata};
use crate::utils::{auth::AuthUser, error::AppResult, DbPool};
#[derive(Deserialize)]
pub struct DownloadQuery {
filename: Option<String>,
}
pub async fn list_partition_root(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id, partition_index)): Path<(i64, String, usize)>,
auth_user: AuthUser,
) -> AppResult<Json<DirectoryListing>> {
let listing = FilesController::list_partition_root(
&pool,
machine_id,
snapshot_id,
partition_index,
&auth_user.user,
).await?;
Ok(Json(listing))
}
pub async fn list_directory(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id, partition_index, dir_hash)): Path<(i64, String, usize, String)>,
auth_user: AuthUser,
) -> AppResult<Json<DirectoryListing>> {
let listing = FilesController::list_directory(
&pool,
machine_id,
snapshot_id,
partition_index,
dir_hash,
&auth_user.user,
).await?;
Ok(Json(listing))
}
pub async fn download_file(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id, partition_index, file_hash)): Path<(i64, String, usize, String)>,
Query(query): Query<DownloadQuery>,
auth_user: AuthUser,
) -> AppResult<Response<Body>> {
FilesController::download_file(
&pool,
machine_id,
snapshot_id,
partition_index,
file_hash,
query.filename,
&auth_user.user,
).await
}
pub async fn get_file_metadata(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id, partition_index, file_hash)): Path<(i64, String, usize, String)>,
auth_user: AuthUser,
) -> AppResult<Json<FileMetadata>> {
let metadata = FilesController::get_file_metadata(
&pool,
machine_id,
snapshot_id,
partition_index,
file_hash,
&auth_user.user,
).await?;
Ok(Json(metadata))
}

View File

@@ -6,13 +6,13 @@ use axum::{
};
pub async fn register_machine(
auth_user: AuthUser,
State(pool): State<DbPool>,
Json(request): Json<RegisterMachineRequest>,
) -> Result<Json<Machine>, AppError> {
let machine = MachinesController::register_machine(
&pool,
&request.code,
&request.uuid,
&auth_user.user,
&request.name,
)
.await?;
@@ -20,6 +20,21 @@ pub async fn register_machine(
Ok(success_response(machine))
}
pub async fn create_provisioning_code(
auth_user: AuthUser,
State(pool): State<DbPool>,
Json(request): Json<CreateProvisioningCodeRequest>,
) -> Result<Json<ProvisioningCodeResponse>, AppError> {
let response = MachinesController::create_provisioning_code(
&pool,
request.machine_id,
&auth_user.user,
)
.await?;
Ok(success_response(response))
}
pub async fn get_machines(
auth_user: AuthUser,
State(pool): State<DbPool>,
@@ -28,6 +43,21 @@ pub async fn get_machines(
Ok(success_response(machines))
}
pub async fn get_machine(
auth_user: AuthUser,
State(pool): State<DbPool>,
Path(machine_id): Path<i64>,
) -> Result<Json<Machine>, AppError> {
let machine = MachinesController::get_machine_by_id(&pool, machine_id).await?;
// Check if user has access to this machine
if auth_user.user.role != UserRole::Admin && machine.user_id != auth_user.user.id {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
Ok(success_response(machine))
}
pub async fn delete_machine(
auth_user: AuthUser,
State(pool): State<DbPool>,

View File

@@ -1,4 +1,8 @@
pub mod accounts;
pub mod admin;
pub mod auth;
pub mod config;
pub mod machines;
pub mod setup;
pub mod snapshots;
pub mod files;

View File

@@ -0,0 +1,32 @@
use axum::{extract::{Path, State}, Json};
use crate::controllers::snapshots::{SnapshotsController, SnapshotSummary, SnapshotDetails};
use crate::utils::{auth::AuthUser, error::AppResult, DbPool};
pub async fn get_machine_snapshots(
State(pool): State<DbPool>,
Path(machine_id): Path<i64>,
auth_user: AuthUser,
) -> AppResult<Json<Vec<SnapshotSummary>>> {
let snapshots = SnapshotsController::get_machine_snapshots(
&pool,
machine_id,
&auth_user.user,
).await?;
Ok(Json(snapshots))
}
pub async fn get_snapshot_details(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id)): Path<(i64, String)>,
auth_user: AuthUser,
) -> AppResult<Json<SnapshotDetails>> {
let snapshot = SnapshotsController::get_snapshot_details(
&pool,
machine_id,
snapshot_id,
&auth_user.user,
).await?;
Ok(Json(snapshot))
}

605
server/src/sync/meta.rs Normal file
View File

@@ -0,0 +1,605 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io::{Error, ErrorKind, Result};
use crate::sync::protocol::{Hash, MetaType};
/// Filesystem type codes
#[repr(u32)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum FsType {
Ext = 1,
Ntfs = 2,
Fat32 = 3,
Unknown = 0,
}
impl From<u32> for FsType {
fn from(value: u32) -> Self {
match value {
1 => FsType::Ext,
2 => FsType::Ntfs,
3 => FsType::Fat32,
_ => FsType::Unknown,
}
}
}
/// Directory entry types
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum EntryType {
File = 0,
Dir = 1,
Symlink = 2,
}
impl TryFrom<u8> for EntryType {
type Error = Error;
fn try_from(value: u8) -> Result<Self> {
match value {
0 => Ok(EntryType::File),
1 => Ok(EntryType::Dir),
2 => Ok(EntryType::Symlink),
_ => Err(Error::new(ErrorKind::InvalidData, "Unknown entry type")),
}
}
}
/// File metadata object
#[derive(Debug, Clone)]
pub struct FileObj {
pub version: u8,
pub fs_type_code: FsType,
pub size: u64,
pub mode: u32,
pub uid: u32,
pub gid: u32,
pub mtime_unixsec: u64,
pub chunk_hashes: Vec<Hash>,
}
impl FileObj {
pub fn new(
fs_type_code: FsType,
size: u64,
mode: u32,
uid: u32,
gid: u32,
mtime_unixsec: u64,
chunk_hashes: Vec<Hash>,
) -> Self {
Self {
version: 1,
fs_type_code,
size,
mode,
uid,
gid,
mtime_unixsec,
chunk_hashes,
}
}
pub fn serialize(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
buf.put_u8(self.version);
buf.put_u32_le(self.fs_type_code as u32);
buf.put_u64_le(self.size);
buf.put_u32_le(self.mode);
buf.put_u32_le(self.uid);
buf.put_u32_le(self.gid);
buf.put_u64_le(self.mtime_unixsec);
buf.put_u32_le(self.chunk_hashes.len() as u32);
for hash in &self.chunk_hashes {
buf.put_slice(hash);
}
Ok(buf.freeze())
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
// Fixed prefix is 37 bytes: version(1) + fs_type(4) + size(8) + mode(4) + uid(4) + gid(4) + mtime(8) + chunk_count(4).
if data.remaining() < 37 {
return Err(Error::new(ErrorKind::UnexpectedEof, "FileObj data too short"));
}
let version = data.get_u8();
if version != 1 {
return Err(Error::new(ErrorKind::InvalidData, "Unsupported FileObj version"));
}
let fs_type_code = FsType::from(data.get_u32_le());
let size = data.get_u64_le();
let mode = data.get_u32_le();
let uid = data.get_u32_le();
let gid = data.get_u32_le();
let mtime_unixsec = data.get_u64_le();
let chunk_count = data.get_u32_le() as usize;
if data.remaining() < chunk_count * 32 {
return Err(Error::new(ErrorKind::UnexpectedEof, "FileObj chunk hashes too short"));
}
let mut chunk_hashes = Vec::with_capacity(chunk_count);
for _ in 0..chunk_count {
let mut hash = [0u8; 32];
data.copy_to_slice(&mut hash);
chunk_hashes.push(hash);
}
Ok(Self {
version,
fs_type_code,
size,
mode,
uid,
gid,
mtime_unixsec,
chunk_hashes,
})
}
pub fn compute_hash(&self) -> Result<Hash> {
let serialized = self.serialize()?;
Ok(blake3::hash(&serialized).into())
}
}
/// Directory entry
#[derive(Debug, Clone)]
pub struct DirEntry {
pub entry_type: EntryType,
pub name: String,
pub target_meta_hash: Hash,
}
/// Directory metadata object
#[derive(Debug, Clone)]
pub struct DirObj {
pub version: u8,
pub entries: Vec<DirEntry>,
}
impl DirObj {
pub fn new(entries: Vec<DirEntry>) -> Self {
Self {
version: 1,
entries,
}
}
pub fn serialize(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
buf.put_u8(self.version);
buf.put_u32_le(self.entries.len() as u32);
for entry in &self.entries {
buf.put_u8(entry.entry_type as u8);
let name_bytes = entry.name.as_bytes();
buf.put_u16_le(name_bytes.len() as u16);
buf.put_slice(name_bytes);
buf.put_slice(&entry.target_meta_hash);
}
Ok(buf.freeze())
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
if data.remaining() < 5 {
return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj data too short"));
}
let version = data.get_u8();
if version != 1 {
return Err(Error::new(ErrorKind::InvalidData, "Unsupported DirObj version"));
}
let entry_count = data.get_u32_le() as usize;
let mut entries = Vec::with_capacity(entry_count);
for _ in 0..entry_count {
if data.remaining() < 35 {
return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj entry too short"));
}
let entry_type = EntryType::try_from(data.get_u8())?;
let name_len = data.get_u16_le() as usize;
if data.remaining() < name_len + 32 {
return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj entry name/hash too short"));
}
let name = String::from_utf8(data.copy_to_bytes(name_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in entry name"))?;
let mut target_meta_hash = [0u8; 32];
data.copy_to_slice(&mut target_meta_hash);
entries.push(DirEntry {
entry_type,
name,
target_meta_hash,
});
}
Ok(Self {
version,
entries,
})
}
pub fn compute_hash(&self) -> Result<Hash> {
let serialized = self.serialize()?;
Ok(blake3::hash(&serialized).into())
}
}
/// Partition metadata object
#[derive(Debug, Clone)]
pub struct PartitionObj {
pub version: u8,
pub fs_type_code: FsType,
pub root_dir_hash: Hash,
pub start_lba: u64,
pub end_lba: u64,
pub type_guid: [u8; 16],
}
impl PartitionObj {
pub fn new(
fs_type_code: FsType,
root_dir_hash: Hash,
start_lba: u64,
end_lba: u64,
type_guid: [u8; 16],
) -> Self {
Self {
version: 1,
fs_type_code,
root_dir_hash,
start_lba,
end_lba,
type_guid,
}
}
pub fn serialize(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
buf.put_u8(self.version);
buf.put_u32_le(self.fs_type_code as u32);
buf.put_slice(&self.root_dir_hash);
buf.put_u64_le(self.start_lba);
buf.put_u64_le(self.end_lba);
buf.put_slice(&self.type_guid);
Ok(buf.freeze())
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
if data.remaining() < 69 {
return Err(Error::new(ErrorKind::UnexpectedEof, "PartitionObj data too short"));
}
let version = data.get_u8();
if version != 1 {
return Err(Error::new(ErrorKind::InvalidData, "Unsupported PartitionObj version"));
}
let fs_type_code = FsType::from(data.get_u32_le());
let mut root_dir_hash = [0u8; 32];
data.copy_to_slice(&mut root_dir_hash);
let start_lba = data.get_u64_le();
let end_lba = data.get_u64_le();
let mut type_guid = [0u8; 16];
data.copy_to_slice(&mut type_guid);
Ok(Self {
version,
fs_type_code,
root_dir_hash,
start_lba,
end_lba,
type_guid,
})
}
pub fn compute_hash(&self) -> Result<Hash> {
let serialized = self.serialize()?;
Ok(blake3::hash(&serialized).into())
}
}
/// Disk metadata object
#[derive(Debug, Clone)]
pub struct DiskObj {
pub version: u8,
pub partition_hashes: Vec<Hash>,
pub disk_size_bytes: u64,
pub serial: String,
}
impl DiskObj {
pub fn new(partition_hashes: Vec<Hash>, disk_size_bytes: u64, serial: String) -> Self {
Self {
version: 1,
partition_hashes,
disk_size_bytes,
serial,
}
}
pub fn serialize(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
buf.put_u8(self.version);
buf.put_u32_le(self.partition_hashes.len() as u32);
for hash in &self.partition_hashes {
buf.put_slice(hash);
}
buf.put_u64_le(self.disk_size_bytes);
let serial_bytes = self.serial.as_bytes();
buf.put_u16_le(serial_bytes.len() as u16);
buf.put_slice(serial_bytes);
Ok(buf.freeze())
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
// Fixed prefix is 15 bytes: version(1) + partition_count(4) + disk_size(8) + serial_len(2).
if data.remaining() < 15 {
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj data too short"));
}
let version = data.get_u8();
if version != 1 {
return Err(Error::new(ErrorKind::InvalidData, "Unsupported DiskObj version"));
}
let partition_count = data.get_u32_le() as usize;
if data.remaining() < partition_count * 32 + 10 {
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj partitions too short"));
}
let mut partition_hashes = Vec::with_capacity(partition_count);
for _ in 0..partition_count {
let mut hash = [0u8; 32];
data.copy_to_slice(&mut hash);
partition_hashes.push(hash);
}
let disk_size_bytes = data.get_u64_le();
let serial_len = data.get_u16_le() as usize;
if data.remaining() < serial_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj serial too short"));
}
let serial = String::from_utf8(data.copy_to_bytes(serial_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in serial"))?;
Ok(Self {
version,
partition_hashes,
disk_size_bytes,
serial,
})
}
pub fn compute_hash(&self) -> Result<Hash> {
let serialized = self.serialize()?;
Ok(blake3::hash(&serialized).into())
}
}
/// Snapshot metadata object
#[derive(Debug, Clone)]
pub struct SnapshotObj {
pub version: u8,
pub created_at_unixsec: u64,
pub disk_hashes: Vec<Hash>,
}
impl SnapshotObj {
pub fn new(created_at_unixsec: u64, disk_hashes: Vec<Hash>) -> Self {
Self {
version: 1,
created_at_unixsec,
disk_hashes,
}
}
pub fn serialize(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
buf.put_u8(self.version);
buf.put_u64_le(self.created_at_unixsec);
buf.put_u32_le(self.disk_hashes.len() as u32);
for hash in &self.disk_hashes {
buf.put_slice(hash);
}
Ok(buf.freeze())
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
if data.remaining() < 13 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotObj data too short"));
}
let version = data.get_u8();
if version != 1 {
return Err(Error::new(ErrorKind::InvalidData, "Unsupported SnapshotObj version"));
}
let created_at_unixsec = data.get_u64_le();
let disk_count = data.get_u32_le() as usize;
if data.remaining() < disk_count * 32 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotObj disk hashes too short"));
}
let mut disk_hashes = Vec::with_capacity(disk_count);
for _ in 0..disk_count {
let mut hash = [0u8; 32];
data.copy_to_slice(&mut hash);
disk_hashes.push(hash);
}
Ok(Self {
version,
created_at_unixsec,
disk_hashes,
})
}
pub fn compute_hash(&self) -> Result<Hash> {
let serialized = self.serialize()?;
Ok(blake3::hash(&serialized).into())
}
}
/// Meta object wrapper
#[derive(Debug, Clone)]
pub enum MetaObj {
File(FileObj),
Dir(DirObj),
Partition(PartitionObj),
Disk(DiskObj),
Snapshot(SnapshotObj),
}
impl MetaObj {
pub fn meta_type(&self) -> MetaType {
match self {
MetaObj::File(_) => MetaType::File,
MetaObj::Dir(_) => MetaType::Dir,
MetaObj::Partition(_) => MetaType::Partition,
MetaObj::Disk(_) => MetaType::Disk,
MetaObj::Snapshot(_) => MetaType::Snapshot,
}
}
pub fn serialize(&self) -> Result<Bytes> {
match self {
MetaObj::File(obj) => obj.serialize(),
MetaObj::Dir(obj) => obj.serialize(),
MetaObj::Partition(obj) => obj.serialize(),
MetaObj::Disk(obj) => obj.serialize(),
MetaObj::Snapshot(obj) => obj.serialize(),
}
}
pub fn deserialize(meta_type: MetaType, data: Bytes) -> Result<Self> {
match meta_type {
MetaType::File => Ok(MetaObj::File(FileObj::deserialize(data)?)),
MetaType::Dir => Ok(MetaObj::Dir(DirObj::deserialize(data)?)),
MetaType::Partition => Ok(MetaObj::Partition(PartitionObj::deserialize(data)?)),
MetaType::Disk => Ok(MetaObj::Disk(DiskObj::deserialize(data)?)),
MetaType::Snapshot => Ok(MetaObj::Snapshot(SnapshotObj::deserialize(data)?)),
}
}
pub fn compute_hash(&self) -> Result<Hash> {
match self {
MetaObj::File(obj) => obj.compute_hash(),
MetaObj::Dir(obj) => obj.compute_hash(),
MetaObj::Partition(obj) => obj.compute_hash(),
MetaObj::Disk(obj) => obj.compute_hash(),
MetaObj::Snapshot(obj) => obj.compute_hash(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_file_obj_serialization() {
let obj = FileObj::new(
FsType::Ext,
1024,
0o644,
1000,
1000,
1234567890,
vec![[1; 32], [2; 32]],
);
let serialized = obj.serialize().unwrap();
let deserialized = FileObj::deserialize(serialized).unwrap();
assert_eq!(obj.fs_type_code, deserialized.fs_type_code);
assert_eq!(obj.size, deserialized.size);
assert_eq!(obj.chunk_hashes, deserialized.chunk_hashes);
}
#[test]
fn test_dir_obj_serialization() {
let entries = vec![
DirEntry {
entry_type: EntryType::File,
name: "test.txt".to_string(),
target_meta_hash: [1; 32],
},
DirEntry {
entry_type: EntryType::Dir,
name: "subdir".to_string(),
target_meta_hash: [2; 32],
},
];
let obj = DirObj::new(entries);
let serialized = obj.serialize().unwrap();
let deserialized = DirObj::deserialize(serialized).unwrap();
assert_eq!(obj.entries.len(), deserialized.entries.len());
assert_eq!(obj.entries[0].name, deserialized.entries[0].name);
assert_eq!(obj.entries[1].entry_type, deserialized.entries[1].entry_type);
}
#[test]
fn test_hash_computation() {
let obj = FileObj::new(FsType::Ext, 1024, 0o644, 1000, 1000, 1234567890, vec![]);
let hash1 = obj.compute_hash().unwrap();
let hash2 = obj.compute_hash().unwrap();
assert_eq!(hash1, hash2);
let obj2 = FileObj::new(FsType::Ext, 1025, 0o644, 1000, 1000, 1234567890, vec![]);
let hash3 = obj2.compute_hash().unwrap();
assert_ne!(hash1, hash3);
}
}
}
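
Taken together, these objects form a content-addressed tree: a snapshot references disks, a disk references partitions, a partition references its root directory, and directories reference files, which reference chunk hashes. A sketch of assembling a one-file tree bottom-up, using only the constructors above (all values illustrative):

    // Sketch: a minimal snapshot tree built from the types above.
    fn build_tiny_snapshot() -> std::io::Result<SnapshotObj> {
        let chunk_hash: Hash = blake3::hash(b"hello world").into();
        let file = FileObj::new(FsType::Ext, 11, 0o644, 1000, 1000, 1_700_000_000, vec![chunk_hash]);
        let root = DirObj::new(vec![DirEntry {
            entry_type: EntryType::File,
            name: "hello.txt".to_string(),
            target_meta_hash: file.compute_hash()?,
        }]);
        let partition = PartitionObj::new(FsType::Ext, root.compute_hash()?, 2048, 1_048_576, [0u8; 16]);
        let disk = DiskObj::new(vec![partition.compute_hash()?], 512 * 1024 * 1024, "SER123".to_string());
        Ok(SnapshotObj::new(1_700_000_000, vec![disk.compute_hash()?]))
    }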

8
server/src/sync/mod.rs Normal file
View File

@@ -0,0 +1,8 @@
pub mod protocol;
pub mod server;
pub mod storage;
pub mod session;
pub mod meta;
pub mod validation;
pub use server::SyncServer;

620
server/src/sync/protocol.rs Normal file
View File

@@ -0,0 +1,620 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io::{Error, ErrorKind, Result};
/// Command codes for the sync protocol
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Command {
Hello = 0x01,
HelloOk = 0x02,
AuthUserPass = 0x10,
AuthCode = 0x11,
AuthOk = 0x12,
AuthFail = 0x13,
BatchCheckChunk = 0x20,
CheckChunkResp = 0x21,
SendChunk = 0x22,
ChunkOk = 0x23,
ChunkFail = 0x24,
BatchCheckMeta = 0x30,
CheckMetaResp = 0x31,
SendMeta = 0x32,
MetaOk = 0x33,
MetaFail = 0x34,
SendSnapshot = 0x40,
SnapshotOk = 0x41,
SnapshotFail = 0x42,
Close = 0xFF,
}
impl TryFrom<u8> for Command {
type Error = Error;
fn try_from(value: u8) -> Result<Self> {
match value {
0x01 => Ok(Command::Hello),
0x02 => Ok(Command::HelloOk),
0x10 => Ok(Command::AuthUserPass),
0x11 => Ok(Command::AuthCode),
0x12 => Ok(Command::AuthOk),
0x13 => Ok(Command::AuthFail),
0x20 => Ok(Command::BatchCheckChunk),
0x21 => Ok(Command::CheckChunkResp),
0x22 => Ok(Command::SendChunk),
0x23 => Ok(Command::ChunkOk),
0x24 => Ok(Command::ChunkFail),
0x30 => Ok(Command::BatchCheckMeta),
0x31 => Ok(Command::CheckMetaResp),
0x32 => Ok(Command::SendMeta),
0x33 => Ok(Command::MetaOk),
0x34 => Ok(Command::MetaFail),
0x40 => Ok(Command::SendSnapshot),
0x41 => Ok(Command::SnapshotOk),
0x42 => Ok(Command::SnapshotFail),
0xFF => Ok(Command::Close),
_ => Err(Error::new(ErrorKind::InvalidData, "Unknown command code")),
}
}
}
/// Message header structure (24 bytes fixed)
#[derive(Debug, Clone)]
pub struct MessageHeader {
pub cmd: Command,
pub flags: u8,
pub reserved: [u8; 2],
pub session_id: [u8; 16],
pub payload_len: u32,
}
impl MessageHeader {
pub const SIZE: usize = 24;
pub fn new(cmd: Command, session_id: [u8; 16], payload_len: u32) -> Self {
Self {
cmd,
flags: 0,
reserved: [0; 2],
session_id,
payload_len,
}
}
pub fn serialize(&self) -> [u8; Self::SIZE] {
let mut buf = [0u8; Self::SIZE];
buf[0] = self.cmd as u8;
buf[1] = self.flags;
buf[2..4].copy_from_slice(&self.reserved);
buf[4..20].copy_from_slice(&self.session_id);
buf[20..24].copy_from_slice(&self.payload_len.to_le_bytes());
buf
}
pub fn deserialize(buf: &[u8]) -> Result<Self> {
if buf.len() < Self::SIZE {
return Err(Error::new(ErrorKind::UnexpectedEof, "Header too short"));
}
let cmd = Command::try_from(buf[0])?;
let flags = buf[1];
let reserved = [buf[2], buf[3]];
let mut session_id = [0u8; 16];
session_id.copy_from_slice(&buf[4..20]);
let payload_len = u32::from_le_bytes([buf[20], buf[21], buf[22], buf[23]]);
Ok(Self {
cmd,
flags,
reserved,
session_id,
payload_len,
})
}
}
/// A 32-byte BLAKE3 hash
pub type Hash = [u8; 32];
/// Meta object types
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum MetaType {
File = 1,
Dir = 2,
Partition = 3,
Disk = 4,
Snapshot = 5,
}
impl TryFrom<u8> for MetaType {
type Error = Error;
fn try_from(value: u8) -> Result<Self> {
match value {
1 => Ok(MetaType::File),
2 => Ok(MetaType::Dir),
3 => Ok(MetaType::Partition),
4 => Ok(MetaType::Disk),
5 => Ok(MetaType::Snapshot),
_ => Err(Error::new(ErrorKind::InvalidData, "Unknown meta type")),
}
}
}
/// Protocol message types
#[derive(Debug, Clone)]
pub enum Message {
Hello {
client_type: u8,
auth_type: u8,
},
HelloOk,
AuthUserPass {
username: String,
password: String,
machine_id: i64,
},
AuthCode {
code: String,
},
AuthOk {
session_id: [u8; 16],
},
AuthFail {
reason: String,
},
BatchCheckChunk {
hashes: Vec<Hash>,
},
CheckChunkResp {
missing_hashes: Vec<Hash>,
},
SendChunk {
hash: Hash,
data: Bytes,
},
ChunkOk,
ChunkFail {
reason: String,
},
BatchCheckMeta {
items: Vec<(MetaType, Hash)>,
},
CheckMetaResp {
missing_items: Vec<(MetaType, Hash)>,
},
SendMeta {
meta_type: MetaType,
meta_hash: Hash,
body: Bytes,
},
MetaOk,
MetaFail {
reason: String,
},
SendSnapshot {
snapshot_hash: Hash,
body: Bytes,
},
SnapshotOk {
snapshot_id: String,
},
SnapshotFail {
missing_chunks: Vec<Hash>,
missing_metas: Vec<(MetaType, Hash)>,
},
Close,
}
impl Message {
/// Serialize message payload to bytes
pub fn serialize_payload(&self) -> Result<Bytes> {
let mut buf = BytesMut::new();
match self {
Message::Hello { client_type, auth_type } => {
buf.put_u8(*client_type);
buf.put_u8(*auth_type);
}
Message::HelloOk => {
// No payload
}
Message::AuthUserPass { username, password, machine_id } => {
let username_bytes = username.as_bytes();
let password_bytes = password.as_bytes();
buf.put_u16_le(username_bytes.len() as u16);
buf.put_slice(username_bytes);
buf.put_u16_le(password_bytes.len() as u16);
buf.put_slice(password_bytes);
buf.put_i64_le(*machine_id);
}
Message::AuthCode { code } => {
let code_bytes = code.as_bytes();
buf.put_u16_le(code_bytes.len() as u16);
buf.put_slice(code_bytes);
}
Message::AuthOk { session_id } => {
buf.put_slice(session_id);
}
Message::AuthFail { reason } => {
let reason_bytes = reason.as_bytes();
buf.put_u16_le(reason_bytes.len() as u16);
buf.put_slice(reason_bytes);
}
Message::BatchCheckChunk { hashes } => {
buf.put_u32_le(hashes.len() as u32);
for hash in hashes {
buf.put_slice(hash);
}
}
Message::CheckChunkResp { missing_hashes } => {
buf.put_u32_le(missing_hashes.len() as u32);
for hash in missing_hashes {
buf.put_slice(hash);
}
}
Message::SendChunk { hash, data } => {
buf.put_slice(hash);
buf.put_u32_le(data.len() as u32);
buf.put_slice(data);
}
Message::ChunkOk => {
// No payload
}
Message::ChunkFail { reason } => {
let reason_bytes = reason.as_bytes();
buf.put_u16_le(reason_bytes.len() as u16);
buf.put_slice(reason_bytes);
}
Message::BatchCheckMeta { items } => {
buf.put_u32_le(items.len() as u32);
for (meta_type, hash) in items {
buf.put_u8(*meta_type as u8);
buf.put_slice(hash);
}
}
Message::CheckMetaResp { missing_items } => {
buf.put_u32_le(missing_items.len() as u32);
for (meta_type, hash) in missing_items {
buf.put_u8(*meta_type as u8);
buf.put_slice(hash);
}
}
Message::SendMeta { meta_type, meta_hash, body } => {
buf.put_u8(*meta_type as u8);
buf.put_slice(meta_hash);
buf.put_u32_le(body.len() as u32);
buf.put_slice(body);
}
Message::MetaOk => {
// No payload
}
Message::MetaFail { reason } => {
let reason_bytes = reason.as_bytes();
buf.put_u16_le(reason_bytes.len() as u16);
buf.put_slice(reason_bytes);
}
Message::SendSnapshot { snapshot_hash, body } => {
buf.put_slice(snapshot_hash);
buf.put_u32_le(body.len() as u32);
buf.put_slice(body);
}
Message::SnapshotOk { snapshot_id } => {
let id_bytes = snapshot_id.as_bytes();
buf.put_u16_le(id_bytes.len() as u16);
buf.put_slice(id_bytes);
}
Message::SnapshotFail { missing_chunks, missing_metas } => {
buf.put_u32_le(missing_chunks.len() as u32);
for hash in missing_chunks {
buf.put_slice(hash);
}
buf.put_u32_le(missing_metas.len() as u32);
for (meta_type, hash) in missing_metas {
buf.put_u8(*meta_type as u8);
buf.put_slice(hash);
}
}
Message::Close => {
// No payload
}
}
Ok(buf.freeze())
}
/// Deserialize message payload from bytes
pub fn deserialize_payload(cmd: Command, mut payload: Bytes) -> Result<Self> {
match cmd {
Command::Hello => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "Hello payload too short"));
}
let client_type = payload.get_u8();
let auth_type = payload.get_u8();
Ok(Message::Hello { client_type, auth_type })
}
Command::HelloOk => Ok(Message::HelloOk),
Command::AuthUserPass => {
if payload.remaining() < 12 { // 4 bytes for lengths + at least 8 bytes for machine_id
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass payload too short"));
}
let username_len = payload.get_u16_le() as usize;
if payload.remaining() < username_len + 10 { // 2 bytes for password len + 8 bytes for machine_id
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass username too short"));
}
let username = String::from_utf8(payload.copy_to_bytes(username_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in username"))?;
let password_len = payload.get_u16_le() as usize;
if payload.remaining() < password_len + 8 { // 8 bytes for machine_id
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass password too short"));
}
let password = String::from_utf8(payload.copy_to_bytes(password_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in password"))?;
let machine_id = payload.get_i64_le();
Ok(Message::AuthUserPass { username, password, machine_id })
}
Command::AuthCode => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthCode payload too short"));
}
let code_len = payload.get_u16_le() as usize;
if payload.remaining() < code_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthCode code too short"));
}
let code = String::from_utf8(payload.copy_to_bytes(code_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in code"))?;
Ok(Message::AuthCode { code })
}
Command::AuthOk => {
if payload.remaining() < 16 {
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthOk payload too short"));
}
let mut session_id = [0u8; 16];
payload.copy_to_slice(&mut session_id);
Ok(Message::AuthOk { session_id })
}
Command::AuthFail => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthFail payload too short"));
}
let reason_len = payload.get_u16_le() as usize;
if payload.remaining() < reason_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "AuthFail reason too short"));
}
let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
Ok(Message::AuthFail { reason })
}
Command::BatchCheckChunk => {
if payload.remaining() < 4 {
return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckChunk payload too short"));
}
let count = payload.get_u32_le() as usize;
if payload.remaining() < count * 32 {
return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckChunk hashes too short"));
}
let mut hashes = Vec::with_capacity(count);
for _ in 0..count {
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
hashes.push(hash);
}
Ok(Message::BatchCheckChunk { hashes })
}
Command::CheckChunkResp => {
if payload.remaining() < 4 {
return Err(Error::new(ErrorKind::UnexpectedEof, "CheckChunkResp payload too short"));
}
let count = payload.get_u32_le() as usize;
if payload.remaining() < count * 32 {
return Err(Error::new(ErrorKind::UnexpectedEof, "CheckChunkResp hashes too short"));
}
let mut missing_hashes = Vec::with_capacity(count);
for _ in 0..count {
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
missing_hashes.push(hash);
}
Ok(Message::CheckChunkResp { missing_hashes })
}
Command::SendChunk => {
if payload.remaining() < 36 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendChunk payload too short"));
}
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
let size = payload.get_u32_le() as usize;
if payload.remaining() < size {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendChunk data too short"));
}
let data = payload.copy_to_bytes(size);
Ok(Message::SendChunk { hash, data })
}
Command::ChunkOk => Ok(Message::ChunkOk),
Command::ChunkFail => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "ChunkFail payload too short"));
}
let reason_len = payload.get_u16_le() as usize;
if payload.remaining() < reason_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "ChunkFail reason too short"));
}
let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
Ok(Message::ChunkFail { reason })
}
Command::BatchCheckMeta => {
if payload.remaining() < 4 {
return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckMeta payload too short"));
}
let count = payload.get_u32_le() as usize;
if payload.remaining() < count * 33 {
return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckMeta items too short"));
}
let mut items = Vec::with_capacity(count);
for _ in 0..count {
let meta_type = MetaType::try_from(payload.get_u8())?;
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
items.push((meta_type, hash));
}
Ok(Message::BatchCheckMeta { items })
}
Command::CheckMetaResp => {
if payload.remaining() < 4 {
return Err(Error::new(ErrorKind::UnexpectedEof, "CheckMetaResp payload too short"));
}
let count = payload.get_u32_le() as usize;
if payload.remaining() < count * 33 {
return Err(Error::new(ErrorKind::UnexpectedEof, "CheckMetaResp items too short"));
}
let mut missing_items = Vec::with_capacity(count);
for _ in 0..count {
let meta_type = MetaType::try_from(payload.get_u8())?;
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
missing_items.push((meta_type, hash));
}
Ok(Message::CheckMetaResp { missing_items })
}
Command::SendMeta => {
if payload.remaining() < 37 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendMeta payload too short"));
}
let meta_type = MetaType::try_from(payload.get_u8())?;
let mut meta_hash = [0u8; 32];
payload.copy_to_slice(&mut meta_hash);
let body_len = payload.get_u32_le() as usize;
if payload.remaining() < body_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendMeta body too short"));
}
let body = payload.copy_to_bytes(body_len);
Ok(Message::SendMeta { meta_type, meta_hash, body })
}
Command::MetaOk => Ok(Message::MetaOk),
Command::MetaFail => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "MetaFail payload too short"));
}
let reason_len = payload.get_u16_le() as usize;
if payload.remaining() < reason_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "MetaFail reason too short"));
}
let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
Ok(Message::MetaFail { reason })
}
Command::SendSnapshot => {
if payload.remaining() < 36 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendSnapshot payload too short"));
}
let mut snapshot_hash = [0u8; 32];
payload.copy_to_slice(&mut snapshot_hash);
let body_len = payload.get_u32_le() as usize;
if payload.remaining() < body_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "SendSnapshot body too short"));
}
let body = payload.copy_to_bytes(body_len);
Ok(Message::SendSnapshot { snapshot_hash, body })
}
Command::SnapshotOk => {
if payload.remaining() < 2 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotOk payload too short"));
}
let id_len = payload.get_u16_le() as usize;
if payload.remaining() < id_len {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotOk id too short"));
}
let snapshot_id = String::from_utf8(payload.copy_to_bytes(id_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in snapshot_id"))?;
Ok(Message::SnapshotOk { snapshot_id })
}
Command::SnapshotFail => {
if payload.remaining() < 8 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail payload too short"));
}
let chunk_count = payload.get_u32_le() as usize;
if payload.remaining() < chunk_count * 32 + 4 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail chunks too short"));
}
let mut missing_chunks = Vec::with_capacity(chunk_count);
for _ in 0..chunk_count {
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
missing_chunks.push(hash);
}
let meta_count = payload.get_u32_le() as usize;
if payload.remaining() < meta_count * 33 {
return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail metas too short"));
}
let mut missing_metas = Vec::with_capacity(meta_count);
for _ in 0..meta_count {
let meta_type = MetaType::try_from(payload.get_u8())?;
let mut hash = [0u8; 32];
payload.copy_to_slice(&mut hash);
missing_metas.push((meta_type, hash));
}
Ok(Message::SnapshotFail { missing_chunks, missing_metas })
}
Command::Close => Ok(Message::Close),
}
}
/// Get the command for this message
pub fn command(&self) -> Command {
match self {
Message::Hello { .. } => Command::Hello,
Message::HelloOk => Command::HelloOk,
Message::AuthUserPass { .. } => Command::AuthUserPass,
Message::AuthCode { .. } => Command::AuthCode,
Message::AuthOk { .. } => Command::AuthOk,
Message::AuthFail { .. } => Command::AuthFail,
Message::BatchCheckChunk { .. } => Command::BatchCheckChunk,
Message::CheckChunkResp { .. } => Command::CheckChunkResp,
Message::SendChunk { .. } => Command::SendChunk,
Message::ChunkOk => Command::ChunkOk,
Message::ChunkFail { .. } => Command::ChunkFail,
Message::BatchCheckMeta { .. } => Command::BatchCheckMeta,
Message::CheckMetaResp { .. } => Command::CheckMetaResp,
Message::SendMeta { .. } => Command::SendMeta,
Message::MetaOk => Command::MetaOk,
Message::MetaFail { .. } => Command::MetaFail,
Message::SendSnapshot { .. } => Command::SendSnapshot,
Message::SnapshotOk { .. } => Command::SnapshotOk,
Message::SnapshotFail { .. } => Command::SnapshotFail,
Message::Close => Command::Close,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_header_serialization() {
let header = MessageHeader::new(Command::Hello, [1; 16], 42);
let serialized = header.serialize();
let deserialized = MessageHeader::deserialize(&serialized).unwrap();
assert_eq!(deserialized.cmd, Command::Hello);
assert_eq!(deserialized.session_id, [1; 16]);
assert_eq!(deserialized.payload_len, 42);
}
#[test]
fn test_hello_message() {
let msg = Message::Hello { client_type: 1, auth_type: 2 };
let payload = msg.serialize_payload().unwrap();
let deserialized = Message::deserialize_payload(Command::Hello, payload).unwrap();
match deserialized {
Message::Hello { client_type, auth_type } => {
assert_eq!(client_type, 1);
assert_eq!(auth_type, 2);
}
_ => panic!("Wrong message type"),
}
}
}
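
As a usage note, a sender builds a SendChunk by hashing the raw bytes first; the server re-hashes on receipt (see storage.rs below) and rejects a mismatch. A sketch, reusing the types and imports above:

    // Sketch: constructing a SendChunk message for one block of data.
    fn make_send_chunk(data: Bytes) -> Message {
        let hash: Hash = blake3::hash(&data).into();
        Message::SendChunk { hash, data }
    }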

468
server/src/sync/server.rs Normal file
View File

@@ -0,0 +1,468 @@
use anyhow::{Context, Result};
use bytes::Bytes;
use sqlx::SqlitePool;
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{TcpListener, TcpStream};
use uuid::Uuid;
use crate::sync::protocol::{Command, Message, MessageHeader, MetaType};
use crate::sync::session::{SessionManager, session_cleanup_task};
use crate::sync::storage::Storage;
use crate::sync::validation::SnapshotValidator;
/// Configuration for the sync server
#[derive(Debug, Clone)]
pub struct SyncServerConfig {
pub bind_address: String,
pub port: u16,
pub data_dir: String,
pub max_connections: usize,
pub chunk_size_limit: usize,
pub meta_size_limit: usize,
pub batch_limit: usize,
}
impl Default for SyncServerConfig {
fn default() -> Self {
Self {
bind_address: "0.0.0.0".to_string(),
port: 8380,
data_dir: "./data".to_string(),
max_connections: 100,
chunk_size_limit: 4 * 1024 * 1024, // 4 MiB
meta_size_limit: 1024 * 1024, // 1 MiB
batch_limit: 1000,
}
}
}
/// Main sync server
pub struct SyncServer {
config: SyncServerConfig,
storage: Storage,
session_manager: Arc<SessionManager>,
validator: SnapshotValidator,
}
impl SyncServer {
pub fn new(config: SyncServerConfig, db_pool: SqlitePool) -> Self {
let storage = Storage::new(&config.data_dir);
let session_manager = Arc::new(SessionManager::new(db_pool));
let validator = SnapshotValidator::new(storage.clone());
Self {
config,
storage,
session_manager,
validator,
}
}
/// Start the sync server
pub async fn start(&self) -> Result<()> {
// Initialize storage
self.storage.init().await
.context("Failed to initialize storage")?;
let bind_addr = format!("{}:{}", self.config.bind_address, self.config.port);
let listener = TcpListener::bind(&bind_addr).await
.with_context(|| format!("Failed to bind to {}", bind_addr))?;
println!("Sync server listening on {}", bind_addr);
// Start session cleanup task
let session_manager_clone = Arc::clone(&self.session_manager);
tokio::spawn(async move {
session_cleanup_task(session_manager_clone).await;
});
// Accept connections
loop {
match listener.accept().await {
Ok((stream, addr)) => {
println!("New sync connection from {}", addr);
let handler = ConnectionHandler::new(
stream,
self.storage.clone(),
Arc::clone(&self.session_manager),
self.validator.clone(),
self.config.clone(),
);
tokio::spawn(async move {
if let Err(e) = handler.handle().await {
eprintln!("Connection error from {}: {}", addr, e);
}
});
}
Err(e) => {
eprintln!("Failed to accept connection: {}", e);
}
}
}
}
}
/// Connection handler for individual sync clients
struct ConnectionHandler {
stream: TcpStream,
storage: Storage,
session_manager: Arc<SessionManager>,
validator: SnapshotValidator,
config: SyncServerConfig,
session_id: Option<[u8; 16]>,
machine_id: Option<i64>,
}
impl ConnectionHandler {
fn new(
stream: TcpStream,
storage: Storage,
session_manager: Arc<SessionManager>,
validator: SnapshotValidator,
config: SyncServerConfig,
) -> Self {
Self {
stream,
storage,
session_manager,
validator,
config,
session_id: None,
machine_id: None,
}
}
/// Handle the connection
async fn handle(mut self) -> Result<()> {
loop {
// Read message header
let header = self.read_header().await?;
// Read payload with appropriate size limit based on command type
let payload = if header.payload_len > 0 {
self.read_payload(header.cmd, header.payload_len).await?
} else {
Bytes::new()
};
// Parse message
let message = Message::deserialize_payload(header.cmd, payload)
.context("Failed to deserialize message")?;
// Handle message
let response = self.handle_message(message).await?;
// Send response
if let Some(response_msg) = response {
self.send_message(response_msg).await?;
}
// Close connection if requested
if header.cmd == Command::Close {
break;
}
}
// Clean up session
if let Some(session_id) = self.session_id {
self.session_manager.remove_session(&session_id).await;
}
Ok(())
}
/// Read message header
async fn read_header(&mut self) -> Result<MessageHeader> {
let mut header_buf = [0u8; MessageHeader::SIZE];
self.stream.read_exact(&mut header_buf).await
.context("Failed to read message header")?;
MessageHeader::deserialize(&header_buf)
.context("Failed to parse message header")
}
/// Read message payload with appropriate size limit based on command type
async fn read_payload(&mut self, cmd: Command, len: u32) -> Result<Bytes> {
// Use different size limits based on command type
let size_limit = match cmd {
Command::SendChunk => self.config.chunk_size_limit,
_ => self.config.meta_size_limit,
};
if len as usize > size_limit {
return Err(anyhow::anyhow!("Payload too large: {} bytes", len));
}
let mut payload_buf = vec![0u8; len as usize];
self.stream.read_exact(&mut payload_buf).await
.context("Failed to read message payload")?;
Ok(Bytes::from(payload_buf))
}
/// Send a message
async fn send_message(&mut self, message: Message) -> Result<()> {
let session_id = self.session_id.unwrap_or([0u8; 16]);
let payload = message.serialize_payload()?;
let header = MessageHeader::new(message.command(), session_id, payload.len() as u32);
let header_bytes = header.serialize();
self.stream.write_all(&header_bytes).await
.context("Failed to write message header")?;
if !payload.is_empty() {
self.stream.write_all(&payload).await
.context("Failed to write message payload")?;
}
self.stream.flush().await
.context("Failed to flush stream")?;
Ok(())
}
/// Handle a received message
async fn handle_message(&mut self, message: Message) -> Result<Option<Message>> {
match message {
Message::Hello { client_type: _, auth_type: _ } => {
Ok(Some(Message::HelloOk))
}
Message::AuthUserPass { username, password, machine_id } => {
match self.session_manager.authenticate_userpass(&username, &password, machine_id).await {
Ok(session) => {
self.session_id = Some(session.session_id);
self.machine_id = Some(session.machine_id);
Ok(Some(Message::AuthOk { session_id: session.session_id }))
}
Err(e) => {
Ok(Some(Message::AuthFail { reason: e.to_string() }))
}
}
}
Message::AuthCode { code } => {
match self.session_manager.authenticate_code(&code).await {
Ok(session) => {
self.session_id = Some(session.session_id);
self.machine_id = Some(session.machine_id);
Ok(Some(Message::AuthOk { session_id: session.session_id }))
}
Err(e) => {
Ok(Some(Message::AuthFail { reason: e.to_string() }))
}
}
}
Message::BatchCheckChunk { hashes } => {
self.require_auth()?;
if hashes.len() > self.config.batch_limit {
return Err(anyhow::anyhow!("Batch size exceeds limit: {}", hashes.len()));
}
let missing_hashes = self.validator.validate_chunk_batch(&hashes).await?;
Ok(Some(Message::CheckChunkResp { missing_hashes }))
}
Message::SendChunk { hash, data } => {
self.require_auth()?;
if data.len() > self.config.chunk_size_limit {
return Ok(Some(Message::ChunkFail {
reason: format!("Chunk too large: {} bytes", data.len())
}));
}
match self.storage.store_chunk(&hash, &data).await {
Ok(()) => Ok(Some(Message::ChunkOk)),
Err(e) => Ok(Some(Message::ChunkFail { reason: e.to_string() })),
}
}
Message::BatchCheckMeta { items } => {
self.require_auth()?;
if items.len() > self.config.batch_limit {
return Err(anyhow::anyhow!("Batch size exceeds limit: {}", items.len()));
}
let missing_items = self.validator.validate_meta_batch(&items).await?;
Ok(Some(Message::CheckMetaResp { missing_items }))
}
Message::SendMeta { meta_type, meta_hash, body } => {
self.require_auth()?;
if body.len() > self.config.meta_size_limit {
return Ok(Some(Message::MetaFail {
reason: format!("Meta object too large: {} bytes", body.len())
}));
}
match self.storage.store_meta(meta_type, &meta_hash, &body).await {
Ok(()) => Ok(Some(Message::MetaOk)),
Err(e) => Ok(Some(Message::MetaFail { reason: e.to_string() })),
}
}
Message::SendSnapshot { snapshot_hash, body } => {
self.require_auth()?;
if body.len() > self.config.meta_size_limit {
println!("Snapshot rejected: size limit exceeded ({} > {})", body.len(), self.config.meta_size_limit);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
println!("Validating snapshot hash: {}", hex::encode(&snapshot_hash));
// Validate snapshot
match self.validator.validate_snapshot(&snapshot_hash, &body).await {
Ok(validation_result) => {
println!("Validation result - is_valid: {}, missing_chunks: {}, missing_metas: {}",
validation_result.is_valid,
validation_result.missing_chunks.len(),
validation_result.missing_metas.len());
if validation_result.is_valid {
// Store snapshot meta
if let Err(e) = self.storage.store_meta(MetaType::Snapshot, &snapshot_hash, &body).await {
println!("Failed to store snapshot meta: {}", e);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
// Create snapshot reference
let snapshot_id = Uuid::new_v4().to_string();
let machine_id = self.machine_id.expect("machine_id is set during authentication");
let created_at = chrono::Utc::now().timestamp() as u64;
println!("Creating snapshot reference: machine_id={}, snapshot_id={}", machine_id, snapshot_id);
if let Err(e) = self.storage.store_snapshot_ref(
machine_id,
&snapshot_id,
&snapshot_hash,
created_at
).await {
println!("Failed to store snapshot reference: {}", e);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
println!("Snapshot successfully stored with ID: {}", snapshot_id);
Ok(Some(Message::SnapshotOk { snapshot_id }))
} else {
println!("Snapshot validation failed - returning missing items");
Ok(Some(Message::SnapshotFail {
missing_chunks: validation_result.missing_chunks,
missing_metas: validation_result.missing_metas,
}))
}
}
Err(e) => {
println!("Snapshot validation error: {}", e);
Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}))
}
}
}
Message::Close => {
Ok(None) // No response needed
}
// These are response messages that shouldn't be received by the server
Message::HelloOk | Message::AuthOk { .. } | Message::AuthFail { .. } |
Message::CheckChunkResp { .. } | Message::ChunkOk | Message::ChunkFail { .. } |
Message::CheckMetaResp { .. } | Message::MetaOk | Message::MetaFail { .. } |
Message::SnapshotOk { .. } | Message::SnapshotFail { .. } => {
Err(anyhow::anyhow!("Unexpected response message from client"))
}
}
}
/// Require authentication for protected operations
fn require_auth(&self) -> Result<()> {
if self.session_id.is_none() {
return Err(anyhow::anyhow!("Authentication required"));
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
use sqlx::sqlite::SqlitePoolOptions;
async fn setup_test_server() -> (SyncServer, TempDir) {
let temp_dir = TempDir::new().unwrap();
let pool = SqlitePoolOptions::new()
.connect(":memory:")
.await
.unwrap();
// Create required tables
sqlx::query!(
r#"
CREATE TABLE users (
id INTEGER PRIMARY KEY,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
active INTEGER DEFAULT 1
)
"#
)
.execute(&pool)
.await
.unwrap();
sqlx::query!(
r#"
CREATE TABLE provisioning_codes (
id INTEGER PRIMARY KEY,
code TEXT UNIQUE NOT NULL,
created_by INTEGER NOT NULL,
expires_at TEXT NOT NULL,
used INTEGER DEFAULT 0,
used_at TEXT,
FOREIGN KEY (created_by) REFERENCES users (id)
)
"#
)
.execute(&pool)
.await
.unwrap();
let config = SyncServerConfig {
data_dir: temp_dir.path().to_string_lossy().to_string(),
..Default::default()
};
(SyncServer::new(config, pool), temp_dir)
}
#[tokio::test]
async fn test_server_creation() {
let (server, _temp_dir) = setup_test_server().await;
// Initialize storage to verify everything works
server.storage.init().await.unwrap();
}
}
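
For completeness, a client-side handshake against this server might look roughly like the following sketch. It reuses the types from sync::protocol, authenticates with a provisioning code, and glosses over timeouts and error detail; the client_type/auth_type values are illustrative.

    // Sketch: hello + provisioning-code auth from a client (tokio).
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    async fn handshake(addr: &str, code: &str) -> anyhow::Result<[u8; 16]> {
        let mut stream = TcpStream::connect(addr).await?;
        let hello = Message::Hello { client_type: 1, auth_type: 1 };
        let auth = Message::AuthCode { code: code.to_string() };
        for msg in [hello, auth] {
            let payload = msg.serialize_payload()?;
            let header = MessageHeader::new(msg.command(), [0u8; 16], payload.len() as u32);
            stream.write_all(&header.serialize()).await?;
            stream.write_all(&payload).await?;
            // Read the response frame: fixed header, then payload.
            let mut buf = [0u8; MessageHeader::SIZE];
            stream.read_exact(&mut buf).await?;
            let resp = MessageHeader::deserialize(&buf)?;
            let mut body = vec![0u8; resp.payload_len as usize];
            stream.read_exact(&mut body).await?;
            if let Message::AuthOk { session_id } = Message::deserialize_payload(resp.cmd, body.into())? {
                return Ok(session_id);
            }
        }
        anyhow::bail!("authentication failed")
    }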

343
server/src/sync/session.rs Normal file
View File

@@ -0,0 +1,343 @@
use anyhow::{Context, Result};
use rand::RngCore;
use sqlx::SqlitePool;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
/// Session information
#[derive(Debug, Clone)]
pub struct Session {
pub session_id: [u8; 16],
pub machine_id: i64,
pub user_id: i64,
pub created_at: chrono::DateTime<chrono::Utc>,
}
/// Session manager for sync connections
#[derive(Debug)]
pub struct SessionManager {
sessions: Arc<RwLock<HashMap<[u8; 16], Session>>>,
db_pool: SqlitePool,
}
impl SessionManager {
pub fn new(db_pool: SqlitePool) -> Self {
Self {
sessions: Arc::new(RwLock::new(HashMap::new())),
db_pool,
}
}
/// Get database pool reference
pub fn get_db_pool(&self) -> &SqlitePool {
&self.db_pool
}
/// Generate a new session ID
fn generate_session_id() -> [u8; 16] {
let mut session_id = [0u8; 16];
rand::thread_rng().fill_bytes(&mut session_id);
session_id
}
/// Authenticate with username/password and validate machine ownership
pub async fn authenticate_userpass(&self, username: &str, password: &str, machine_id: i64) -> Result<Session> {
// Query user from database
let user = sqlx::query!(
"SELECT id, username, password_hash FROM users WHERE username = ?",
username
)
.fetch_optional(&self.db_pool)
.await
.context("Failed to query user")?;
let user = user.ok_or_else(|| anyhow::anyhow!("Invalid credentials"))?;
// Verify password
if !bcrypt::verify(password, &user.password_hash)
.context("Failed to verify password")? {
return Err(anyhow::anyhow!("Invalid credentials"));
}
let user_id: i64 = user.id.ok_or_else(|| anyhow::anyhow!("User row is missing an id"))?;
// Validate machine ownership
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ?",
machine_id
)
.fetch_optional(&self.db_pool)
.await
.context("Failed to query machine")?;
let machine = machine.ok_or_else(|| anyhow::anyhow!("Machine not found"))?;
let machine_user_id = machine.user_id;
if machine_user_id != user_id {
return Err(anyhow::anyhow!("Machine does not belong to user"));
}
// Create session with machine ID
let session_id = Self::generate_session_id();
let machine_id = machine.id; // Use database ID
let session = Session {
session_id,
machine_id,
user_id,
created_at: chrono::Utc::now(),
};
// Store session
let mut sessions = self.sessions.write().await;
sessions.insert(session_id, session.clone());
Ok(session)
}
/// Authenticate with provisioning code
pub async fn authenticate_code(&self, code: &str) -> Result<Session> {
// Query provisioning code from database
let provisioning_code = sqlx::query!(
r#"
SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username
FROM provisioning_codes pc
JOIN machines m ON pc.machine_id = m.id
JOIN users u ON m.user_id = u.id
WHERE pc.code = ? AND pc.used = 0
"#,
code
)
.fetch_optional(&self.db_pool)
.await
.context("Failed to query provisioning code")?;
let provisioning_code = provisioning_code
.ok_or_else(|| anyhow::anyhow!("Invalid or used provisioning code"))?;
// Check if code is expired
let expires_at: chrono::DateTime<chrono::Utc> = chrono::DateTime::from_naive_utc_and_offset(
provisioning_code.expires_at,
chrono::Utc
);
if chrono::Utc::now() > expires_at {
return Err(anyhow::anyhow!("Provisioning code expired"));
}
// Mark code as used
sqlx::query!(
"UPDATE provisioning_codes SET used = 1 WHERE id = ?",
provisioning_code.id
)
.execute(&self.db_pool)
.await
.context("Failed to mark provisioning code as used")?;
// Create session
let session_id = Self::generate_session_id();
let machine_id = provisioning_code.machine_id.expect("Machine ID should not be null"); // Use machine ID from database
let session = Session {
session_id,
machine_id,
user_id: provisioning_code.user_id as i64,
created_at: chrono::Utc::now(),
};
// Store session
let mut sessions = self.sessions.write().await;
sessions.insert(session_id, session.clone());
Ok(session)
}
/// Get session by session ID
pub async fn get_session(&self, session_id: &[u8; 16]) -> Option<Session> {
let sessions = self.sessions.read().await;
sessions.get(session_id).cloned()
}
/// Validate session and return associated machine ID
pub async fn validate_session(&self, session_id: &[u8; 16]) -> Result<i64> {
let session = self.get_session(session_id).await
.ok_or_else(|| anyhow::anyhow!("Invalid session"))?;
// Check if session is too old (24 hours)
let session_age = chrono::Utc::now() - session.created_at;
if session_age > chrono::Duration::hours(24) {
// Remove expired session
let mut sessions = self.sessions.write().await;
sessions.remove(session_id);
return Err(anyhow::anyhow!("Session expired"));
}
Ok(session.machine_id)
}
/// Remove session
pub async fn remove_session(&self, session_id: &[u8; 16]) {
let mut sessions = self.sessions.write().await;
sessions.remove(session_id);
}
/// Clean up expired sessions
pub async fn cleanup_expired_sessions(&self) {
let mut sessions = self.sessions.write().await;
let now = chrono::Utc::now();
sessions.retain(|_, session| {
let age = now - session.created_at;
age <= chrono::Duration::hours(24)
});
}
/// Get active session count
pub async fn active_session_count(&self) -> usize {
let sessions = self.sessions.read().await;
sessions.len()
}
/// List active sessions
pub async fn list_active_sessions(&self) -> Vec<Session> {
let sessions = self.sessions.read().await;
sessions.values().cloned().collect()
}
}
/// Periodic cleanup task for expired sessions
pub async fn session_cleanup_task(session_manager: Arc<SessionManager>) {
let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(3600)); // Every hour
loop {
interval.tick().await;
session_manager.cleanup_expired_sessions().await;
println!("Cleaned up expired sync sessions. Active sessions: {}",
session_manager.active_session_count().await);
}
}
#[cfg(test)]
mod tests {
use super::*;
use sqlx::sqlite::SqlitePoolOptions;
async fn setup_test_db() -> SqlitePool {
let pool = SqlitePoolOptions::new()
.connect(":memory:")
.await
.unwrap();
// Create tables
sqlx::query!(
r#"
CREATE TABLE users (
id INTEGER PRIMARY KEY,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
active INTEGER DEFAULT 1
)
"#
)
.execute(&pool)
.await
.unwrap();
sqlx::query!(
r#"
CREATE TABLE provisioning_codes (
id INTEGER PRIMARY KEY,
code TEXT UNIQUE NOT NULL,
created_by INTEGER NOT NULL,
expires_at TEXT NOT NULL,
used INTEGER DEFAULT 0,
used_at TEXT,
FOREIGN KEY (created_by) REFERENCES users (id)
)
"#
)
.execute(&pool)
.await
.unwrap();
// Insert test user
let password_hash = bcrypt::hash("password123", bcrypt::DEFAULT_COST).unwrap();
sqlx::query!(
"INSERT INTO users (username, password_hash) VALUES (?, ?)",
"testuser",
password_hash
)
.execute(&pool)
.await
.unwrap();
// authenticate_userpass validates machine ownership, so the tests also need a
// machines table (minimal assumed schema) and one machine owned by the test user.
sqlx::query("CREATE TABLE machines (id INTEGER PRIMARY KEY, user_id INTEGER NOT NULL)")
.execute(&pool)
.await
.unwrap();
sqlx::query("INSERT INTO machines (id, user_id) VALUES (1, 1)")
.execute(&pool)
.await
.unwrap();
pool
}
#[tokio::test]
async fn test_authenticate_userpass() {
let pool = setup_test_db().await;
let session_manager = SessionManager::new(pool);
let session = session_manager
.authenticate_userpass("testuser", "password123", 1)
.await
.unwrap();
assert_eq!(session.user_id, 1);
assert_eq!(session.machine_id, 1);
}
#[tokio::test]
async fn test_authenticate_userpass_invalid() {
let pool = setup_test_db().await;
let session_manager = SessionManager::new(pool);
let result = session_manager
.authenticate_userpass("testuser", "wrongpassword", 1)
.await;
assert!(result.is_err());
}
#[tokio::test]
async fn test_session_validation() {
let pool = setup_test_db().await;
let session_manager = SessionManager::new(pool);
let session = session_manager
.authenticate_userpass("testuser", "password123", 1)
.await
.unwrap();
let machine_id = session_manager
.validate_session(&session.session_id)
.await
.unwrap();
assert_eq!(machine_id, session.machine_id);
}
#[tokio::test]
async fn test_session_cleanup() {
let pool = setup_test_db().await;
let session_manager = SessionManager::new(pool);
let session = session_manager
.authenticate_userpass("testuser", "password123")
.await
.unwrap();
assert_eq!(session_manager.active_session_count().await, 1);
// Manually expire the session
{
let mut sessions = session_manager.sessions.write().await;
if let Some(mut session) = sessions.get_mut(&session.session_id) {
session.created_at = chrono::Utc::now() - chrono::Duration::hours(25);
}
}
session_manager.cleanup_expired_sessions().await;
assert_eq!(session_manager.active_session_count().await, 0);
}
}
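A minimal wiring sketch for the cleanup task (illustrative; assumes a Tokio runtime and that `pool` is the server's SQLite pool):
let session_manager = Arc::new(SessionManager::new(pool.clone()));
// The task loops forever; keep the JoinHandle only if you want to abort it on shutdown.
tokio::spawn(session_cleanup_task(session_manager.clone()));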

406
server/src/sync/storage.rs Normal file
View File

@@ -0,0 +1,406 @@
use anyhow::{Context, Result};
use bytes::Bytes;
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use tokio::fs;
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::meta::MetaObj;
/// Storage backend for chunks and metadata objects
#[derive(Debug, Clone)]
pub struct Storage {
data_dir: PathBuf,
}
impl Storage {
pub fn new<P: AsRef<Path>>(data_dir: P) -> Self {
Self {
data_dir: data_dir.as_ref().to_path_buf(),
}
}
/// Initialize storage directories
pub async fn init(&self) -> Result<()> {
let chunks_dir = self.data_dir.join("sync").join("chunks");
let meta_dir = self.data_dir.join("sync").join("meta");
let machines_dir = self.data_dir.join("sync").join("machines");
fs::create_dir_all(&chunks_dir).await
.context("Failed to create chunks directory")?;
fs::create_dir_all(&meta_dir).await
.context("Failed to create meta directory")?;
fs::create_dir_all(&machines_dir).await
.context("Failed to create machines directory")?;
// Create subdirectories for each meta type
for meta_type in &["files", "dirs", "partitions", "disks", "snapshots"] {
fs::create_dir_all(meta_dir.join(meta_type)).await
.with_context(|| format!("Failed to create meta/{} directory", meta_type))?;
}
Ok(())
}
/// Get chunk storage path for a hash
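/// Chunks are sharded two levels deep by hex prefix: a hash starting "deadbeef…" lands at
/// sync/chunks/de/ad/<full hex>.chk, which keeps per-directory entry counts bounded.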
fn chunk_path(&self, hash: &Hash) -> PathBuf {
let hex = hex::encode(hash);
let ab = &hex[0..2];
let cd = &hex[2..4];
let filename = format!("{}.chk", hex);
self.data_dir
.join("sync")
.join("chunks")
.join(ab)
.join(cd)
.join(filename)
}
/// Get meta storage path for a hash and type
fn meta_path(&self, meta_type: MetaType, hash: &Hash) -> PathBuf {
let hex = hex::encode(hash);
let ab = &hex[0..2];
let cd = &hex[2..4];
let filename = format!("{}.meta", hex);
let type_dir = match meta_type {
MetaType::File => "files",
MetaType::Dir => "dirs",
MetaType::Partition => "partitions",
MetaType::Disk => "disks",
MetaType::Snapshot => "snapshots",
};
self.data_dir
.join("sync")
.join("meta")
.join(type_dir)
.join(ab)
.join(cd)
.join(filename)
}
/// Check if a chunk exists
pub async fn chunk_exists(&self, hash: &Hash) -> bool {
let path = self.chunk_path(hash);
path.exists()
}
/// Check if multiple chunks exist
pub async fn chunks_exist(&self, hashes: &[Hash]) -> Result<HashSet<Hash>> {
let mut existing = HashSet::new();
for hash in hashes {
if self.chunk_exists(hash).await {
existing.insert(*hash);
}
}
Ok(existing)
}
/// Store a chunk
pub async fn store_chunk(&self, hash: &Hash, data: &[u8]) -> Result<()> {
// Verify hash
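// (the store is content-addressed: the key is the BLAKE3 hash of the payload, so a
// mismatch here means the client sent corrupt or mislabeled data)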
let computed_hash = blake3::hash(data);
if computed_hash.as_bytes() != hash {
return Err(anyhow::anyhow!("Chunk hash mismatch"));
}
let path = self.chunk_path(hash);
// Create parent directories
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await
.context("Failed to create chunk directory")?;
}
// Write to temporary file first, then rename (atomic write)
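// (rename is atomic within a filesystem, so concurrent readers never observe a partial file)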
let temp_path = path.with_extension("tmp");
fs::write(&temp_path, data).await
.context("Failed to write chunk to temporary file")?;
fs::rename(&temp_path, &path).await
.context("Failed to rename chunk file")?;
Ok(())
}
/// Load a chunk
pub async fn load_chunk(&self, hash: &Hash) -> Result<Option<Bytes>> {
let path = self.chunk_path(hash);
if !path.exists() {
return Ok(None);
}
let data = fs::read(&path).await
.context("Failed to read chunk file")?;
// Verify hash
let computed_hash = blake3::hash(&data);
if computed_hash.as_bytes() != hash {
return Err(anyhow::anyhow!("Stored chunk hash mismatch"));
}
Ok(Some(Bytes::from(data)))
}
/// Check if a meta object exists
pub async fn meta_exists(&self, meta_type: MetaType, hash: &Hash) -> bool {
let path = self.meta_path(meta_type, hash);
path.exists()
}
/// Check if multiple meta objects exist
pub async fn metas_exist(&self, items: &[(MetaType, Hash)]) -> Result<HashSet<(MetaType, Hash)>> {
let mut existing = HashSet::new();
for &(meta_type, hash) in items {
if self.meta_exists(meta_type, &hash).await {
existing.insert((meta_type, hash));
}
}
Ok(existing)
}
/// Store a meta object
pub async fn store_meta(&self, meta_type: MetaType, hash: &Hash, body: &[u8]) -> Result<()> {
// Verify hash
let computed_hash = blake3::hash(body);
if computed_hash.as_bytes() != hash {
return Err(anyhow::anyhow!("Meta object hash mismatch"));
}
let path = self.meta_path(meta_type, hash);
// Create parent directories
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await
.context("Failed to create meta directory")?;
}
// Write to temporary file first, then rename (atomic write)
let temp_path = path.with_extension("tmp");
fs::write(&temp_path, body).await
.context("Failed to write meta to temporary file")?;
fs::rename(&temp_path, &path).await
.context("Failed to rename meta file")?;
Ok(())
}
/// Load a meta object
pub async fn load_meta(&self, meta_type: MetaType, hash: &Hash) -> Result<Option<MetaObj>> {
let path = self.meta_path(meta_type, hash);
if !path.exists() {
println!("Meta file does not exist: {:?}", path);
return Ok(None);
}
println!("Reading meta file: {:?}", path);
let data = fs::read(&path).await
.context("Failed to read meta file")?;
println!("Read {} bytes from meta file", data.len());
// Verify hash
let computed_hash = blake3::hash(&data);
if computed_hash.as_bytes() != hash {
println!("Hash mismatch: expected {}, got {}", hex::encode(hash), hex::encode(computed_hash.as_bytes()));
return Err(anyhow::anyhow!("Stored meta object hash mismatch"));
}
println!("Hash verified, deserializing {:?} object", meta_type);
let meta_obj = MetaObj::deserialize(meta_type, Bytes::from(data))
.context("Failed to deserialize meta object")?;
println!("Successfully deserialized meta object");
Ok(Some(meta_obj))
}
/// Get snapshot storage path for a machine
fn snapshot_ref_path(&self, machine_id: i64, snapshot_id: &str) -> PathBuf {
self.data_dir
.join("sync")
.join("machines")
.join(machine_id.to_string())
.join("snapshots")
.join(format!("{}.ref", snapshot_id))
}
/// Store a snapshot reference
pub async fn store_snapshot_ref(
&self,
machine_id: i64,
snapshot_id: &str,
snapshot_hash: &Hash,
created_at: u64
) -> Result<()> {
let path = self.snapshot_ref_path(machine_id, snapshot_id);
// Create parent directories
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await
.context("Failed to create snapshot reference directory")?;
}
// Create snapshot reference content
let content = format!("{}:{}", hex::encode(snapshot_hash), created_at);
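// e.g. "<64 hex chars>:<unix seconds>"; load_snapshot_ref below splits on the colon.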
// Write to temporary file first, then rename (atomic write)
let temp_path = path.with_extension("tmp");
fs::write(&temp_path, content).await
.context("Failed to write snapshot reference to temporary file")?;
fs::rename(&temp_path, &path).await
.context("Failed to rename snapshot reference file")?;
Ok(())
}
/// Load a snapshot reference
pub async fn load_snapshot_ref(&self, machine_id: i64, snapshot_id: &str) -> Result<Option<(Hash, u64)>> {
let path = self.snapshot_ref_path(machine_id, snapshot_id);
if !path.exists() {
return Ok(None);
}
let content = fs::read_to_string(&path).await
.context("Failed to read snapshot reference file")?;
let parts: Vec<&str> = content.trim().split(':').collect();
if parts.len() != 2 {
return Err(anyhow::anyhow!("Invalid snapshot reference format"));
}
let snapshot_hash: Hash = hex::decode(parts[0])
.context("Failed to decode snapshot hash")?
.try_into()
.map_err(|_| anyhow::anyhow!("Invalid snapshot hash length"))?;
let created_at: u64 = parts[1].parse()
.context("Failed to parse snapshot timestamp")?;
Ok(Some((snapshot_hash, created_at)))
}
/// List snapshots for a machine
pub async fn list_snapshots(&self, machine_id: i64) -> Result<Vec<String>> {
let snapshots_dir = self.data_dir
.join("sync")
.join("machines")
.join(machine_id.to_string())
.join("snapshots");
if !snapshots_dir.exists() {
return Ok(Vec::new());
}
let mut entries = fs::read_dir(&snapshots_dir).await
.context("Failed to read snapshots directory")?;
let mut snapshots = Vec::new();
while let Some(entry) = entries.next_entry().await
.context("Failed to read snapshot entry")? {
if let Some(file_name) = entry.file_name().to_str() {
if file_name.ends_with(".ref") {
let snapshot_id = file_name.trim_end_matches(".ref");
snapshots.push(snapshot_id.to_string());
}
}
}
snapshots.sort();
Ok(snapshots)
}
/// Delete old snapshots, keeping only the latest N
pub async fn cleanup_snapshots(&self, machine_id: i64, keep_count: usize) -> Result<()> {
let mut snapshots = self.list_snapshots(machine_id).await?;
if snapshots.len() <= keep_count {
return Ok(());
}
snapshots.sort();
snapshots.reverse(); // Most recent first
// Delete older snapshots
for snapshot_id in snapshots.iter().skip(keep_count) {
let path = self.snapshot_ref_path(machine_id, snapshot_id);
if path.exists() {
fs::remove_file(&path).await
.with_context(|| format!("Failed to delete snapshot {}", snapshot_id))?;
}
}
Ok(())
}
}
// `hex::encode` above is called via fully qualified paths; the `hex` crate only needs to be declared in Cargo.toml.
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[tokio::test]
async fn test_storage_init() {
let temp_dir = TempDir::new().unwrap();
let storage = Storage::new(temp_dir.path());
storage.init().await.unwrap();
assert!(temp_dir.path().join("sync/chunks").exists());
assert!(temp_dir.path().join("sync/meta/files").exists());
assert!(temp_dir.path().join("sync/machines").exists());
}
#[tokio::test]
async fn test_chunk_storage() {
let temp_dir = TempDir::new().unwrap();
let storage = Storage::new(temp_dir.path());
storage.init().await.unwrap();
let data = b"test chunk data";
let hash = blake3::hash(data).into();
// Store chunk
storage.store_chunk(&hash, data).await.unwrap();
assert!(storage.chunk_exists(&hash).await);
// Load chunk
let loaded = storage.load_chunk(&hash).await.unwrap().unwrap();
assert_eq!(loaded.as_ref(), data);
}
#[tokio::test]
async fn test_snapshot_ref_storage() {
let temp_dir = TempDir::new().unwrap();
let storage = Storage::new(temp_dir.path());
storage.init().await.unwrap();
let machine_id = 123i64;
let snapshot_id = "snapshot-001";
let snapshot_hash = [1u8; 32];
let created_at = 1234567890;
storage.store_snapshot_ref(machine_id, snapshot_id, &snapshot_hash, created_at)
.await.unwrap();
let loaded = storage.load_snapshot_ref(machine_id, snapshot_id)
.await.unwrap().unwrap();
assert_eq!(loaded.0, snapshot_hash);
assert_eq!(loaded.1, created_at);
}
}
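A sketch of the retention rule in cleanup_snapshots, assuming snapshot ids sort lexicographically in chronological order (which the implementation relies on):
storage.store_snapshot_ref(1, "snapshot-001", &[0u8; 32], 1).await?;
storage.store_snapshot_ref(1, "snapshot-002", &[0u8; 32], 2).await?;
storage.store_snapshot_ref(1, "snapshot-003", &[0u8; 32], 3).await?;
storage.cleanup_snapshots(1, 2).await?; // deletes snapshot-001, keeps the two newest
assert_eq!(storage.list_snapshots(1).await?, vec!["snapshot-002", "snapshot-003"]);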

View File

@@ -0,0 +1,235 @@
use anyhow::{Context, Result};
use std::collections::{HashSet, VecDeque};
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, SnapshotObj, EntryType};
/// Validation result for snapshot commits
#[derive(Debug, Clone)]
pub struct ValidationResult {
pub is_valid: bool,
pub missing_chunks: Vec<Hash>,
pub missing_metas: Vec<(MetaType, Hash)>,
}
impl ValidationResult {
pub fn valid() -> Self {
Self {
is_valid: true,
missing_chunks: Vec::new(),
missing_metas: Vec::new(),
}
}
pub fn invalid(missing_chunks: Vec<Hash>, missing_metas: Vec<(MetaType, Hash)>) -> Self {
Self {
is_valid: false,
missing_chunks,
missing_metas,
}
}
pub fn has_missing(&self) -> bool {
!self.missing_chunks.is_empty() || !self.missing_metas.is_empty()
}
}
/// Validator for snapshot object graphs
#[derive(Clone)]
pub struct SnapshotValidator {
storage: Storage,
}
impl SnapshotValidator {
pub fn new(storage: Storage) -> Self {
Self { storage }
}
/// Validate a complete snapshot object graph using BFS only
pub async fn validate_snapshot(&self, snapshot_hash: &Hash, snapshot_body: &[u8]) -> Result<ValidationResult> {
// Use the BFS implementation
self.validate_snapshot_bfs(snapshot_hash, snapshot_body).await
}
/// Validate a batch of meta objects (for incremental validation)
pub async fn validate_meta_batch(&self, metas: &[(MetaType, Hash)]) -> Result<Vec<(MetaType, Hash)>> {
let mut missing = Vec::new();
for &(meta_type, hash) in metas {
if !self.storage.meta_exists(meta_type, &hash).await {
missing.push((meta_type, hash));
}
}
Ok(missing)
}
/// Validate a batch of chunks (for incremental validation)
pub async fn validate_chunk_batch(&self, chunks: &[Hash]) -> Result<Vec<Hash>> {
let mut missing = Vec::new();
for &hash in chunks {
if !self.storage.chunk_exists(&hash).await {
missing.push(hash);
}
}
Ok(missing)
}
/// Perform a breadth-first validation (useful for large snapshots)
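/// The explicit queue bounds stack depth, so arbitrarily deep directory trees cannot overflow it,
/// and the visited set deduplicates shared subtrees so each object is checked only once.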
pub async fn validate_snapshot_bfs(&self, snapshot_hash: &Hash, snapshot_body: &[u8]) -> Result<ValidationResult> {
// Verify snapshot hash
let computed_hash = blake3::hash(snapshot_body);
if computed_hash.as_bytes() != snapshot_hash {
return Err(anyhow::anyhow!("Snapshot hash mismatch"));
}
// Parse snapshot object
let snapshot_obj = SnapshotObj::deserialize(bytes::Bytes::from(snapshot_body.to_vec()))
.context("Failed to deserialize snapshot object")?;
let mut missing_chunks = Vec::new();
let mut missing_metas = Vec::new();
let mut visited_metas = HashSet::new();
let mut queue = VecDeque::new();
// Initialize queue with disk hashes
for disk_hash in &snapshot_obj.disk_hashes {
queue.push_back((MetaType::Disk, *disk_hash));
}
// BFS traversal
while let Some((meta_type, hash)) = queue.pop_front() {
let meta_key = (meta_type, hash);
if visited_metas.contains(&meta_key) {
continue;
}
visited_metas.insert(meta_key);
// Check if meta exists
if !self.storage.meta_exists(meta_type, &hash).await {
println!("Missing metadata: {:?} hash {}", meta_type, hex::encode(&hash));
missing_metas.push((meta_type, hash));
continue; // Skip loading if missing
}
// Load and process meta object
println!("Loading metadata: {:?} hash {}", meta_type, hex::encode(&hash));
if let Some(meta_obj) = self.storage.load_meta(meta_type, &hash).await
.context("Failed to load meta object")? {
match meta_obj {
MetaObj::Disk(disk) => {
for partition_hash in &disk.partition_hashes {
queue.push_back((MetaType::Partition, *partition_hash));
}
}
MetaObj::Partition(partition) => {
queue.push_back((MetaType::Dir, partition.root_dir_hash));
}
MetaObj::Dir(dir) => {
for entry in &dir.entries {
match entry.entry_type {
EntryType::File | EntryType::Symlink => {
queue.push_back((MetaType::File, entry.target_meta_hash));
}
EntryType::Dir => {
queue.push_back((MetaType::Dir, entry.target_meta_hash));
}
}
}
}
MetaObj::File(file) => {
// Check chunk dependencies
for chunk_hash in &file.chunk_hashes {
if !self.storage.chunk_exists(chunk_hash).await {
missing_chunks.push(*chunk_hash);
}
}
}
MetaObj::Snapshot(_) => {
// Snapshots shouldn't be nested
return Err(anyhow::anyhow!("Unexpected nested snapshot object"));
}
}
}
}
if missing_chunks.is_empty() && missing_metas.is_empty() {
Ok(ValidationResult::valid())
} else {
Ok(ValidationResult::invalid(missing_chunks, missing_metas))
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
use crate::sync::meta::*;
async fn setup_test_storage() -> Storage {
let temp_dir = TempDir::new().unwrap();
let storage = Storage::new(temp_dir.path());
storage.init().await.unwrap();
storage
}
#[tokio::test]
async fn test_validate_empty_snapshot() {
let storage = setup_test_storage().await;
let validator = SnapshotValidator::new(storage);
let snapshot = SnapshotObj::new(1234567890, vec![]);
let snapshot_body = snapshot.serialize().unwrap();
let snapshot_hash = snapshot.compute_hash().unwrap();
let result = validator.validate_snapshot(&snapshot_hash, &snapshot_body)
.await.unwrap();
assert!(result.is_valid);
assert!(result.missing_chunks.is_empty());
assert!(result.missing_metas.is_empty());
}
#[tokio::test]
async fn test_validate_missing_disk() {
let storage = setup_test_storage().await;
let validator = SnapshotValidator::new(storage);
let missing_disk_hash = [1u8; 32];
let snapshot = SnapshotObj::new(1234567890, vec![missing_disk_hash]);
let snapshot_body = snapshot.serialize().unwrap();
let snapshot_hash = snapshot.compute_hash().unwrap();
let result = validator.validate_snapshot(&snapshot_hash, &snapshot_body)
.await.unwrap();
assert!(!result.is_valid);
assert!(result.missing_chunks.is_empty());
assert_eq!(result.missing_metas.len(), 1);
assert_eq!(result.missing_metas[0], (MetaType::Disk, missing_disk_hash));
}
#[tokio::test]
async fn test_validate_chunk_batch() {
let storage = setup_test_storage().await;
let validator = SnapshotValidator::new(storage);
let chunk_data = b"test chunk";
let chunk_hash = blake3::hash(chunk_data).into();
let missing_hash = [1u8; 32];
// Store one chunk
storage.store_chunk(&chunk_hash, chunk_data).await.unwrap();
let chunks = vec![chunk_hash, missing_hash];
let missing = validator.validate_chunk_batch(&chunks).await.unwrap();
assert_eq!(missing.len(), 1);
assert_eq!(missing[0], missing_hash);
}
}

108
server/src/utils/base62.rs Normal file
View File

@@ -0,0 +1,108 @@
const CHARS: &str = "rYTSJ96O2ntiEBkuwQq0vdslyfI8Ph51bpae3LgHoFZAxj7WmzUNCGXcR4MDKV";
pub struct Base62;
impl Base62 {
pub fn encode(input: &str) -> String {
if input.is_empty() {
return String::new();
}
let bytes = input.as_bytes();
let alphabet_chars: Vec<char> = CHARS.chars().collect();
let mut number = bytes.iter().fold(String::from("0"), |acc, &byte| {
Self::multiply_and_add(&acc, 256, byte as u32)
});
if number == "0" {
return "0".to_string();
}
let mut result = String::new();
while number != "0" {
let (new_number, remainder) = Self::divide_by(&number, 62);
result.push(alphabet_chars[remainder as usize]);
number = new_number;
}
result.chars().rev().collect()
}
pub fn decode(encoded: &str) -> Option<String> {
if encoded.is_empty() {
return Some(String::new());
}
let char_to_value: std::collections::HashMap<char, u32> = CHARS
.chars()
.enumerate()
.map(|(i, c)| (c, i as u32))
.collect();
let mut number = String::from("0");
for c in encoded.chars() {
let value = *char_to_value.get(&c)?;
number = Self::multiply_and_add(&number, 62, value);
}
if number == "0" {
return Some(String::new());
}
let mut bytes = Vec::new();
while number != "0" {
let (new_number, remainder) = Self::divide_by(&number, 256);
bytes.push(remainder as u8);
number = new_number;
}
bytes.reverse();
String::from_utf8(bytes).ok()
}
fn multiply_and_add(num_str: &str, base: u32, add: u32) -> String {
let mut result = Vec::new();
let mut carry = add;
for c in num_str.chars().rev() {
let digit = c.to_digit(10).unwrap_or(0);
let product = digit * base + carry;
result.push((product % 10).to_string());
carry = product / 10;
}
while carry > 0 {
result.push((carry % 10).to_string());
carry /= 10;
}
if result.is_empty() {
"0".to_string()
} else {
result.into_iter().rev().collect()
}
}
fn divide_by(num_str: &str, base: u32) -> (String, u32) {
let mut quotient = String::new();
let mut remainder = 0u32;
for c in num_str.chars() {
let digit = c.to_digit(10).unwrap_or(0);
let current = remainder * 10 + digit;
let q = current / base;
remainder = current % base;
if !quotient.is_empty() || q > 0 {
quotient.push_str(&q.to_string());
}
}
if quotient.is_empty() {
quotient = "0".to_string();
}
(quotient, remainder)
}
}
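A round-trip sketch; the shuffled CHARS table means the output is not interoperable with standard Base62 encoders, and decode returns None for any character outside the table:
let token = Base62::encode("provisioning-code");
assert_eq!(Base62::decode(&token), Some("provisioning-code".to_string()));
assert_eq!(Base62::decode("!!"), None);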

View File

@@ -0,0 +1,44 @@
use crate::utils::{error::*, DbPool};
use sqlx::Row;
pub struct ConfigManager;
impl ConfigManager {
pub async fn get_config(pool: &DbPool, key: &str) -> AppResult<Option<String>> {
let row = sqlx::query("SELECT value FROM config WHERE key = ?")
.bind(key)
.fetch_optional(pool)
.await?;
if let Some(row) = row {
Ok(Some(row.get("value")))
} else {
Ok(None)
}
}
pub async fn set_config(pool: &DbPool, key: &str, value: &str) -> AppResult<()> {
sqlx::query(
r#"
INSERT INTO config (key, value, updated_at)
VALUES (?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(key) DO UPDATE SET
value = excluded.value,
updated_at = CURRENT_TIMESTAMP
"#,
)
.bind(key)
.bind(value)
.execute(pool)
.await?;
Ok(())
}
pub async fn get_external_url(pool: &DbPool) -> AppResult<String> {
match Self::get_config(pool, "EXTERNAL_URL").await? {
Some(url) => Ok(url),
None => Err(internal_error("EXTERNAL_URL not configured")),
}
}
}
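A short usage sketch (URL value illustrative; get_external_url surfaces an internal error when the key has never been set):
ConfigManager::set_config(&pool, "EXTERNAL_URL", "https://example.invalid").await?;
assert_eq!(ConfigManager::get_external_url(&pool).await?, "https://example.invalid");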

View File

@@ -72,12 +72,12 @@ async fn run_migrations(pool: &DbPool) -> AppResult<()> {
r#"
CREATE TABLE IF NOT EXISTS provisioning_codes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
machine_id INTEGER NOT NULL,
code TEXT UNIQUE NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
expires_at DATETIME NOT NULL,
used BOOLEAN DEFAULT 0,
FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE CASCADE
FOREIGN KEY(machine_id) REFERENCES machines(id) ON DELETE CASCADE
)
"#,
)
@@ -98,6 +98,19 @@ async fn run_migrations(pool: &DbPool) -> AppResult<()> {
.execute(pool)
.await?;
sqlx::query(
r#"
CREATE TABLE IF NOT EXISTS config (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
)
"#,
)
.execute(pool)
.await?;
Ok(())
}

View File

@@ -1,4 +1,6 @@
pub mod auth;
pub mod base62;
pub mod config;
pub mod database;
pub mod db_path;
pub mod error;

View File

@@ -83,21 +83,40 @@ pub struct Machine {
pub id: i64,
pub user_id: i64,
pub uuid: Uuid,
#[serde(rename = "machine_id")]
pub machine_id: String,
pub name: String,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct RegisterMachineRequest {
pub name: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct UseProvisioningCodeRequest {
pub code: String,
pub uuid: Uuid,
pub name: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateProvisioningCodeRequest {
pub machine_id: i64,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ProvisioningCodeResponse {
pub code: String,
pub raw_code: String,
pub expires_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ProvisioningCode {
pub id: i64,
pub user_id: i64,
pub machine_id: i64,
pub code: String,
pub created_at: DateTime<Utc>,
pub expires_at: DateTime<Utc>,

76
sync_client_test/Cargo.lock generated Normal file
View File

@@ -0,0 +1,76 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "arrayref"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "blake3"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3888aaa89e4b2a40fca9848e400f6a658a5a3978de7be858e209cafa8be9a4a0"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
]
[[package]]
name = "cc"
version = "1.2.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5252b3d2648e5eedbc1a6f501e3c795e07025c1e93bbf8bbdd6eef7f447a6d54"
dependencies = [
"find-msvc-tools",
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2fd1289c04a9ea8cb22300a459a72a385d7c73d3259e2ed7dcb2af674838cfa9"
[[package]]
name = "constant_time_eq"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"
[[package]]
name = "find-msvc-tools"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fd99930f64d146689264c637b5af2f0233a933bef0d8570e2526bf9e083192d"
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "sync_client_test"
version = "0.1.0"
dependencies = [
"blake3",
"hex",
]

View File

@@ -0,0 +1,8 @@
[package]
name = "sync_client_test"
version = "0.1.0"
edition = "2021"
[dependencies]
blake3 = "1.5"
hex = "0.4"

1051
sync_client_test/src/main.rs Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -3,7 +3,7 @@
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Vite + React</title>
<title>Arkendro</title>
</head>
<body>
<div id="root"></div>

8
webui/jsconfig.json Normal file
View File

@@ -0,0 +1,8 @@
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"]
}
}
}

View File

@@ -10,8 +10,13 @@
"preview": "vite preview"
},
"dependencies": {
"@fontsource/plus-jakarta-sans": "^5.2.6",
"@phosphor-icons/react": "^2.1.10",
"classnames": "^2.5.1",
"react": "^19.1.1",
"react-dom": "^19.1.1"
"react-dom": "^19.1.1",
"react-router-dom": "^7.8.2",
"sass-embedded": "^1.92.1"
},
"devDependencies": {
"@eslint/js": "^9.33.0",

585
webui/pnpm-lock.yaml generated
View File

@@ -8,12 +8,27 @@ importers:
.:
dependencies:
'@fontsource/plus-jakarta-sans':
specifier: ^5.2.6
version: 5.2.6
'@phosphor-icons/react':
specifier: ^2.1.10
version: 2.1.10(react-dom@19.1.1(react@19.1.1))(react@19.1.1)
classnames:
specifier: ^2.5.1
version: 2.5.1
react:
specifier: ^19.1.1
version: 19.1.1
react-dom:
specifier: ^19.1.1
version: 19.1.1(react@19.1.1)
react-router-dom:
specifier: ^7.8.2
version: 7.8.2(react-dom@19.1.1(react@19.1.1))(react@19.1.1)
sass-embedded:
specifier: ^1.92.1
version: 1.92.1
devDependencies:
'@eslint/js':
specifier: ^9.33.0
@@ -26,7 +41,7 @@ importers:
version: 19.1.9(@types/react@19.1.12)
'@vitejs/plugin-react':
specifier: ^5.0.0
version: 5.0.2(vite@7.1.5)
version: 5.0.2(vite@7.1.5(sass-embedded@1.92.1)(sass@1.92.1))
eslint:
specifier: ^9.33.0
version: 9.35.0
@@ -41,7 +56,7 @@ importers:
version: 16.3.0
vite:
specifier: ^7.1.2
version: 7.1.5
version: 7.1.5(sass-embedded@1.92.1)(sass@1.92.1)
packages:
@@ -128,6 +143,9 @@ packages:
resolution: {integrity: sha512-bkFqkLhh3pMBUQQkpVgWDWq/lqzc2678eUyDlTBhRqhCHFguYYGM0Efga7tYk4TogG/3x0EEl66/OQ+WGbWB/Q==}
engines: {node: '>=6.9.0'}
'@bufbuild/protobuf@2.7.0':
resolution: {integrity: sha512-qn6tAIZEw5i/wiESBF4nQxZkl86aY4KoO0IkUa2Lh+rya64oTOdJQFlZuMwI1Qz9VBJQrQC4QlSA2DNek5gCOA==}
'@esbuild/aix-ppc64@0.25.9':
resolution: {integrity: sha512-OaGtL73Jck6pBKjNIe24BnFE6agGl+6KxDtTfHhy1HmhthfKouEcOhqpSL64K4/0WCtbKFLOdzD/44cJ4k9opA==}
engines: {node: '>=18'}
@@ -322,6 +340,9 @@ packages:
resolution: {integrity: sha512-Z5kJ+wU3oA7MMIqVR9tyZRtjYPr4OC004Q4Rw7pgOKUOKkJfZ3O24nz3WYfGRpMDNmcOi3TwQOmgm7B7Tpii0w==}
engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0}
'@fontsource/plus-jakarta-sans@5.2.6':
resolution: {integrity: sha512-mvUiz1ta3bCVhP/DPmAOmuzhHQi6ddOo1GgaW58rpojr510Rx9BkqXqcnMhGEOMZZB3+84frvfFmw/jKCctHLw==}
'@humanfs/core@0.19.1':
resolution: {integrity: sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==}
engines: {node: '>=18.18.0'}
@@ -354,6 +375,95 @@ packages:
'@jridgewell/trace-mapping@0.3.30':
resolution: {integrity: sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q==}
'@parcel/watcher-android-arm64@2.5.1':
resolution: {integrity: sha512-KF8+j9nNbUN8vzOFDpRMsaKBHZ/mcjEjMToVMJOhTozkDonQFFrRcfdLWn6yWKCmJKmdVxSgHiYvTCef4/qcBA==}
engines: {node: '>= 10.0.0'}
cpu: [arm64]
os: [android]
'@parcel/watcher-darwin-arm64@2.5.1':
resolution: {integrity: sha512-eAzPv5osDmZyBhou8PoF4i6RQXAfeKL9tjb3QzYuccXFMQU0ruIc/POh30ePnaOyD1UXdlKguHBmsTs53tVoPw==}
engines: {node: '>= 10.0.0'}
cpu: [arm64]
os: [darwin]
'@parcel/watcher-darwin-x64@2.5.1':
resolution: {integrity: sha512-1ZXDthrnNmwv10A0/3AJNZ9JGlzrF82i3gNQcWOzd7nJ8aj+ILyW1MTxVk35Db0u91oD5Nlk9MBiujMlwmeXZg==}
engines: {node: '>= 10.0.0'}
cpu: [x64]
os: [darwin]
'@parcel/watcher-freebsd-x64@2.5.1':
resolution: {integrity: sha512-SI4eljM7Flp9yPuKi8W0ird8TI/JK6CSxju3NojVI6BjHsTyK7zxA9urjVjEKJ5MBYC+bLmMcbAWlZ+rFkLpJQ==}
engines: {node: '>= 10.0.0'}
cpu: [x64]
os: [freebsd]
'@parcel/watcher-linux-arm-glibc@2.5.1':
resolution: {integrity: sha512-RCdZlEyTs8geyBkkcnPWvtXLY44BCeZKmGYRtSgtwwnHR4dxfHRG3gR99XdMEdQ7KeiDdasJwwvNSF5jKtDwdA==}
engines: {node: '>= 10.0.0'}
cpu: [arm]
os: [linux]
'@parcel/watcher-linux-arm-musl@2.5.1':
resolution: {integrity: sha512-6E+m/Mm1t1yhB8X412stiKFG3XykmgdIOqhjWj+VL8oHkKABfu/gjFj8DvLrYVHSBNC+/u5PeNrujiSQ1zwd1Q==}
engines: {node: '>= 10.0.0'}
cpu: [arm]
os: [linux]
'@parcel/watcher-linux-arm64-glibc@2.5.1':
resolution: {integrity: sha512-LrGp+f02yU3BN9A+DGuY3v3bmnFUggAITBGriZHUREfNEzZh/GO06FF5u2kx8x+GBEUYfyTGamol4j3m9ANe8w==}
engines: {node: '>= 10.0.0'}
cpu: [arm64]
os: [linux]
'@parcel/watcher-linux-arm64-musl@2.5.1':
resolution: {integrity: sha512-cFOjABi92pMYRXS7AcQv9/M1YuKRw8SZniCDw0ssQb/noPkRzA+HBDkwmyOJYp5wXcsTrhxO0zq1U11cK9jsFg==}
engines: {node: '>= 10.0.0'}
cpu: [arm64]
os: [linux]
'@parcel/watcher-linux-x64-glibc@2.5.1':
resolution: {integrity: sha512-GcESn8NZySmfwlTsIur+49yDqSny2IhPeZfXunQi48DMugKeZ7uy1FX83pO0X22sHntJ4Ub+9k34XQCX+oHt2A==}
engines: {node: '>= 10.0.0'}
cpu: [x64]
os: [linux]
'@parcel/watcher-linux-x64-musl@2.5.1':
resolution: {integrity: sha512-n0E2EQbatQ3bXhcH2D1XIAANAcTZkQICBPVaxMeaCVBtOpBZpWJuf7LwyWPSBDITb7In8mqQgJ7gH8CILCURXg==}
engines: {node: '>= 10.0.0'}
cpu: [x64]
os: [linux]
'@parcel/watcher-win32-arm64@2.5.1':
resolution: {integrity: sha512-RFzklRvmc3PkjKjry3hLF9wD7ppR4AKcWNzH7kXR7GUe0Igb3Nz8fyPwtZCSquGrhU5HhUNDr/mKBqj7tqA2Vw==}
engines: {node: '>= 10.0.0'}
cpu: [arm64]
os: [win32]
'@parcel/watcher-win32-ia32@2.5.1':
resolution: {integrity: sha512-c2KkcVN+NJmuA7CGlaGD1qJh1cLfDnQsHjE89E60vUEMlqduHGCdCLJCID5geFVM0dOtA3ZiIO8BoEQmzQVfpQ==}
engines: {node: '>= 10.0.0'}
cpu: [ia32]
os: [win32]
'@parcel/watcher-win32-x64@2.5.1':
resolution: {integrity: sha512-9lHBdJITeNR++EvSQVUcaZoWupyHfXe1jZvGZ06O/5MflPcuPLtEphScIBL+AiCWBO46tDSHzWyD0uDmmZqsgA==}
engines: {node: '>= 10.0.0'}
cpu: [x64]
os: [win32]
'@parcel/watcher@2.5.1':
resolution: {integrity: sha512-dfUnCxiN9H4ap84DvD2ubjw+3vUNpstxa0TneY/Paat8a3R4uQZDLSvWjmznAY/DoahqTHl9V46HF/Zs3F29pg==}
engines: {node: '>= 10.0.0'}
'@phosphor-icons/react@2.1.10':
resolution: {integrity: sha512-vt8Tvq8GLjheAZZYa+YG/pW7HDbov8El/MANW8pOAz4eGxrwhnbfrQZq0Cp4q8zBEu8NIhHdnr+r8thnfRSNYA==}
engines: {node: '>=10'}
peerDependencies:
react: '>= 16.8'
react-dom: '>= 16.8'
'@rolldown/pluginutils@1.0.0-beta.34':
resolution: {integrity: sha512-LyAREkZHP5pMom7c24meKmJCdhf2hEyvam2q0unr3or9ydwDL+DJ8chTF6Av/RFPb3rH8UFBdMzO5MxTZW97oA==}
@@ -520,11 +630,18 @@ packages:
brace-expansion@1.1.12:
resolution: {integrity: sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==}
braces@3.0.3:
resolution: {integrity: sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==}
engines: {node: '>=8'}
browserslist@4.25.4:
resolution: {integrity: sha512-4jYpcjabC606xJ3kw2QwGEZKX0Aw7sgQdZCvIK9dhVSPh76BKo+C+btT1RRofH7B+8iNpEbgGNVWiLki5q93yg==}
engines: {node: ^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7}
hasBin: true
buffer-builder@0.2.0:
resolution: {integrity: sha512-7VPMEPuYznPSoR21NE1zvd2Xna6c/CloiZCfcMXR1Jny6PjX0N4Nsa38zcBFo/FMK+BlA+FLKbJCQ0i2yxp+Xg==}
callsites@3.1.0:
resolution: {integrity: sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==}
engines: {node: '>=6'}
@@ -536,6 +653,13 @@ packages:
resolution: {integrity: sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==}
engines: {node: '>=10'}
chokidar@4.0.3:
resolution: {integrity: sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==}
engines: {node: '>= 14.16.0'}
classnames@2.5.1:
resolution: {integrity: sha512-saHYOzhIQs6wy2sVxTM6bUDsQO4F50V9RQ22qBpEdCW+I+/Wmke2HOl6lS6dTpdxVhb88/I6+Hs+438c3lfUow==}
color-convert@2.0.1:
resolution: {integrity: sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==}
engines: {node: '>=7.0.0'}
@@ -543,12 +667,19 @@ packages:
color-name@1.1.4:
resolution: {integrity: sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==}
colorjs.io@0.5.2:
resolution: {integrity: sha512-twmVoizEW7ylZSN32OgKdXRmo1qg+wT5/6C3xu5b9QsWzSFAhHLn2xd8ro0diCsKfCj1RdaTP/nrcW+vAoQPIw==}
concat-map@0.0.1:
resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==}
convert-source-map@2.0.0:
resolution: {integrity: sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==}
cookie@1.0.2:
resolution: {integrity: sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==}
engines: {node: '>=18'}
cross-spawn@7.0.6:
resolution: {integrity: sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==}
engines: {node: '>= 8'}
@@ -568,6 +699,11 @@ packages:
deep-is@0.1.4:
resolution: {integrity: sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==}
detect-libc@1.0.3:
resolution: {integrity: sha512-pGjwhsmsp4kL2RTz08wcOlGN83otlqHeD/Z5T8GXZB+/YcpQ/dgo+lbU8ZsGxV0HIvqqxo9l7mqYwyYMD9bKDg==}
engines: {node: '>=0.10'}
hasBin: true
electron-to-chromium@1.5.215:
resolution: {integrity: sha512-TIvGp57UpeNetj/wV/xpFNpWGb0b/ROw372lHPx5Aafx02gjTBtWnEEcaSX3W2dLM3OSdGGyHX/cHl01JQsLaQ==}
@@ -659,6 +795,10 @@ packages:
resolution: {integrity: sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==}
engines: {node: '>=16.0.0'}
fill-range@7.1.1:
resolution: {integrity: sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==}
engines: {node: '>=8'}
find-up@5.0.0:
resolution: {integrity: sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==}
engines: {node: '>=10'}
@@ -699,6 +839,9 @@ packages:
resolution: {integrity: sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==}
engines: {node: '>= 4'}
immutable@5.1.3:
resolution: {integrity: sha512-+chQdDfvscSF1SJqv2gn4SRO2ZyS3xL3r7IW/wWEEzrzLisnOlKiQu5ytC/BVNcS15C39WT2Hg/bjKjDMcu+zg==}
import-fresh@3.3.1:
resolution: {integrity: sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==}
engines: {node: '>=6'}
@@ -715,6 +858,10 @@ packages:
resolution: {integrity: sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==}
engines: {node: '>=0.10.0'}
is-number@7.0.0:
resolution: {integrity: sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==}
engines: {node: '>=0.12.0'}
isexe@2.0.0:
resolution: {integrity: sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==}
@@ -761,6 +908,10 @@ packages:
lru-cache@5.1.1:
resolution: {integrity: sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==}
micromatch@4.0.8:
resolution: {integrity: sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==}
engines: {node: '>=8.6'}
minimatch@3.1.2:
resolution: {integrity: sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==}
@@ -775,6 +926,9 @@ packages:
natural-compare@1.4.0:
resolution: {integrity: sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==}
node-addon-api@7.1.1:
resolution: {integrity: sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ==}
node-releases@2.0.20:
resolution: {integrity: sha512-7gK6zSXEH6neM212JgfYFXe+GmZQM+fia5SsusuBIUgnPheLFBmIPhtFoAQRj8/7wASYQnbDlHPVwY0BefoFgA==}
@@ -805,6 +959,10 @@ packages:
picocolors@1.1.1:
resolution: {integrity: sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==}
picomatch@2.3.1:
resolution: {integrity: sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==}
engines: {node: '>=8.6'}
picomatch@4.0.3:
resolution: {integrity: sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==}
engines: {node: '>=12'}
@@ -830,10 +988,31 @@ packages:
resolution: {integrity: sha512-z6F7K9bV85EfseRCp2bzrpyQ0Gkw1uLoCel9XBVWPg/TjRj94SkJzUTGfOa4bs7iJvBWtQG0Wq7wnI0syw3EBQ==}
engines: {node: '>=0.10.0'}
react-router-dom@7.8.2:
resolution: {integrity: sha512-Z4VM5mKDipal2jQ385H6UBhiiEDlnJPx6jyWsTYoZQdl5TrjxEV2a9yl3Fi60NBJxYzOTGTTHXPi0pdizvTwow==}
engines: {node: '>=20.0.0'}
peerDependencies:
react: '>=18'
react-dom: '>=18'
react-router@7.8.2:
resolution: {integrity: sha512-7M2fR1JbIZ/jFWqelpvSZx+7vd7UlBTfdZqf6OSdF9g6+sfdqJDAWcak6ervbHph200ePlu+7G8LdoiC3ReyAQ==}
engines: {node: '>=20.0.0'}
peerDependencies:
react: '>=18'
react-dom: '>=18'
peerDependenciesMeta:
react-dom:
optional: true
react@19.1.1:
resolution: {integrity: sha512-w8nqGImo45dmMIfljjMwOGtbmC/mk4CMYhWIicdSflH91J9TyCyczcPFXJzrZ/ZXcgGRFeP6BU0BEJTw6tZdfQ==}
engines: {node: '>=0.10.0'}
readdirp@4.1.2:
resolution: {integrity: sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==}
engines: {node: '>= 14.18.0'}
resolve-from@4.0.0:
resolution: {integrity: sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==}
engines: {node: '>=4'}
@@ -843,6 +1022,123 @@ packages:
engines: {node: '>=18.0.0', npm: '>=8.0.0'}
hasBin: true
rxjs@7.8.2:
resolution: {integrity: sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==}
sass-embedded-all-unknown@1.92.1:
resolution: {integrity: sha512-5t6/YZf+vhO3OY/49h8RCL6Cwo78luva0M+TnTM9gu9ASffRXAuOVLNKciSXa3loptyemDDS6IU5/dVH5w0KmA==}
cpu: ['!arm', '!arm64', '!riscv64', '!x64']
sass-embedded-android-arm64@1.92.1:
resolution: {integrity: sha512-Q+UruGb7yKawHagVmVDRRKsnc4mJZvWMBnuRCu2coJo2FofyqBmXohVGXbxko97sYceA9TJTrUEx3WVKQUNCbQ==}
engines: {node: '>=14.0.0'}
cpu: [arm64]
os: [android]
sass-embedded-android-arm@1.92.1:
resolution: {integrity: sha512-4EjpVVzuksERdgAd4BqeSXFnWtWN3DSRyEIUPJ7BhcS9sfDh2Gf6miI2kNTvIQLJ2XIJynDDcEQ8a1U9KwKUTQ==}
engines: {node: '>=14.0.0'}
cpu: [arm]
os: [android]
sass-embedded-android-riscv64@1.92.1:
resolution: {integrity: sha512-nCY5btLlX7W7Jc6cCL6D2Yklpiu540EJ2G08YVGu12DrAMCBzqM347CSRf2ojp1H8jyhvmLkaFwnrJWzh+6S+w==}
engines: {node: '>=14.0.0'}
cpu: [riscv64]
os: [android]
sass-embedded-android-x64@1.92.1:
resolution: {integrity: sha512-qYWR3bftJ77aLYwYDFuzDI4dcwVVixxqQxlIQWNGkHRCexj614qGSSHemr18C2eVj3mjXAQxTQxU68U7pkGPAA==}
engines: {node: '>=14.0.0'}
cpu: [x64]
os: [android]
sass-embedded-darwin-arm64@1.92.1:
resolution: {integrity: sha512-g2yQ3txjMYLKMjL2cW1xRO9nnV3ijf95NbX/QShtV6tiVUETZNWDsRMDEwBNGYY6PTE/UZerjJL1R/2xpQg6WA==}
engines: {node: '>=14.0.0'}
cpu: [arm64]
os: [darwin]
sass-embedded-darwin-x64@1.92.1:
resolution: {integrity: sha512-eH+fgxLQhTEPjZPCgPAVuX5e514Qp/4DMAUMtlNShv4cr4TD5qOp1XlsPYR/b7uE7p2cKFkUpUn/bHNqJ2ay4A==}
engines: {node: '>=14.0.0'}
cpu: [x64]
os: [darwin]
sass-embedded-linux-arm64@1.92.1:
resolution: {integrity: sha512-dNmlpGeZkry1BofhAdGFBXrpM69y9LlYuNnncf+HfsOOUtj8j0q1RwS+zb5asknhKFUOAG8GCGRY1df7Rwu35g==}
engines: {node: '>=14.0.0'}
cpu: [arm64]
os: [linux]
sass-embedded-linux-arm@1.92.1:
resolution: {integrity: sha512-cT3w8yoQTqrtZvWLJeutEGmawITDTY4J6oSVQjeDcPnnoPt0gOFxem8YMznraACXvahw/2+KJDH33BTNgiPo0A==}
engines: {node: '>=14.0.0'}
cpu: [arm]
os: [linux]
sass-embedded-linux-musl-arm64@1.92.1:
resolution: {integrity: sha512-TfiEBkCyNzVoOhjHXUT+vZ6+p0ueDbvRw6f4jHdkvljZzXdXMby4wh7BU1odl69rgRTkSvYKhgbErRLDR/F7pQ==}
engines: {node: '>=14.0.0'}
cpu: [arm64]
os: [linux]
sass-embedded-linux-musl-arm@1.92.1:
resolution: {integrity: sha512-nPBos6lI31ef2zQhqTZhFOU7ar4impJbLIax0XsqS269YsiCwjhk11VmUloJTpFlJuKMiVXNo7dPx+katxhD/Q==}
engines: {node: '>=14.0.0'}
cpu: [arm]
os: [linux]
sass-embedded-linux-musl-riscv64@1.92.1:
resolution: {integrity: sha512-R+RcJA4EYpJDE9JM1GgPYgZo7x94FlxZ6jPodOQkEaZ1S9kvXVCuP5X/0PXRPhu08KJOfeMsAElzfdAjUf7KJg==}
engines: {node: '>=14.0.0'}
cpu: [riscv64]
os: [linux]
sass-embedded-linux-musl-x64@1.92.1:
resolution: {integrity: sha512-/HolYRGXJjx8nLw6oj5ZrkR7PFM7X/5kE4MYZaFMpDIPIcw3bqB2fUXLo/MYlRLsw7gBAT6hJAMBrNdKuTphfw==}
engines: {node: '>=14.0.0'}
cpu: [x64]
os: [linux]
sass-embedded-linux-riscv64@1.92.1:
resolution: {integrity: sha512-b9bxe0CMsbSsLx3nrR0cq8xpIkoAC6X36o4DGMITF3m2v3KsojC7ru9X0Gz+zUFr6rwpq/0lTNzFLNu6sPNo3w==}
engines: {node: '>=14.0.0'}
cpu: [riscv64]
os: [linux]
sass-embedded-linux-x64@1.92.1:
resolution: {integrity: sha512-xuiK5Jp5NldW4bvlC7AuX1Wf7o0gLZ3md/hNg+bkTvxtCDgnUHtfdo8Q+xWP11bD9QX31xXFWpmUB8UDLi6XQQ==}
engines: {node: '>=14.0.0'}
cpu: [x64]
os: [linux]
sass-embedded-unknown-all@1.92.1:
resolution: {integrity: sha512-AT9oXvtNY4N+Nd0wvoWqq9A5HjdH/X3aUH4boQUtXyaJ/9DUwnQmBpP5Gtn028ZS8exOGBdobmmWAuigv0k/OA==}
os: ['!android', '!darwin', '!linux', '!win32']
sass-embedded-win32-arm64@1.92.1:
resolution: {integrity: sha512-KvmpQjY9yTBMtTYz4WBqetlv9bGaDW1aStcu7MSTbH7YiSybX/9fnxlCAEQv1WlIidQhcJAiyk0Eae+LGK7cIQ==}
engines: {node: '>=14.0.0'}
cpu: [arm64]
os: [win32]
sass-embedded-win32-x64@1.92.1:
resolution: {integrity: sha512-B6Nz/GbH7Vkpb2TkQHsGcczWM5t+70VWopWF1x5V5yxLpA8ZzVQ7NTKKi+jDoVY2Efu6ZyzgT9n5KgG2kWliXA==}
engines: {node: '>=14.0.0'}
cpu: [x64]
os: [win32]
sass-embedded@1.92.1:
resolution: {integrity: sha512-28YwLnF5atAhogt3E4hXzz/NB9dwKffyw08a7DEasLh94P7+aELkG3ENSHYCWB9QFN14hYNLfwr9ozUsPDhcDQ==}
engines: {node: '>=16.0.0'}
hasBin: true
sass@1.92.1:
resolution: {integrity: sha512-ffmsdbwqb3XeyR8jJR6KelIXARM9bFQe8A6Q3W4Klmwy5Ckd5gz7jgUNHo4UOqutU5Sk1DtKLbpDP0nLCg1xqQ==}
engines: {node: '>=14.0.0'}
hasBin: true
scheduler@0.26.0:
resolution: {integrity: sha512-NlHwttCI/l5gCPR3D1nNXtWABUmBwvZpEQiD4IXSbIDq8BzLIK/7Ir5gTFSGZDUu37K5cMNp0hFtzO38sC7gWA==}
@@ -850,6 +1146,9 @@ packages:
resolution: {integrity: sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==}
hasBin: true
set-cookie-parser@2.7.1:
resolution: {integrity: sha512-IOc8uWeOZgnb3ptbCURJWNjWUPcO3ZnTTdzsurqERrP6nPyv+paC55vJM0LpOlT2ne+Ix+9+CRG1MNLlyZ4GjQ==}
shebang-command@2.0.0:
resolution: {integrity: sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==}
engines: {node: '>=8'}
@@ -870,10 +1169,29 @@ packages:
resolution: {integrity: sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==}
engines: {node: '>=8'}
supports-color@8.1.1:
resolution: {integrity: sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==}
engines: {node: '>=10'}
sync-child-process@1.0.2:
resolution: {integrity: sha512-8lD+t2KrrScJ/7KXCSyfhT3/hRq78rC0wBFqNJXv3mZyn6hW2ypM05JmlSvtqRbeq6jqA94oHbxAr2vYsJ8vDA==}
engines: {node: '>=16.0.0'}
sync-message-port@1.1.3:
resolution: {integrity: sha512-GTt8rSKje5FilG+wEdfCkOcLL7LWqpMlr2c3LRuKt/YXxcJ52aGSbGBAdI4L3aaqfrBt6y711El53ItyH1NWzg==}
engines: {node: '>=16.0.0'}
tinyglobby@0.2.15:
resolution: {integrity: sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==}
engines: {node: '>=12.0.0'}
to-regex-range@5.0.1:
resolution: {integrity: sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==}
engines: {node: '>=8.0'}
tslib@2.8.1:
resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==}
type-check@0.4.0:
resolution: {integrity: sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==}
engines: {node: '>= 0.8.0'}
@@ -887,6 +1205,9 @@ packages:
uri-js@4.4.1:
resolution: {integrity: sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==}
varint@6.0.0:
resolution: {integrity: sha512-cXEIW6cfr15lFv563k4GuVuW/fiwjknytD37jIOLSdSWuOI6WnO/oKwmP2FQTU2l01LP8/M5TSAJpzUaGe3uWg==}
vite@7.1.5:
resolution: {integrity: sha512-4cKBO9wR75r0BeIWWWId9XK9Lj6La5X846Zw9dFfzMRw38IlTk2iCcUt6hsyiDRcPidc55ZParFYDXi0nXOeLQ==}
engines: {node: ^20.19.0 || >=22.12.0}
@@ -1057,6 +1378,8 @@ snapshots:
'@babel/helper-string-parser': 7.27.1
'@babel/helper-validator-identifier': 7.27.1
'@bufbuild/protobuf@2.7.0': {}
'@esbuild/aix-ppc64@0.25.9':
optional: true
@@ -1179,6 +1502,8 @@ snapshots:
'@eslint/core': 0.15.2
levn: 0.4.1
'@fontsource/plus-jakarta-sans@5.2.6': {}
'@humanfs/core@0.19.1': {}
'@humanfs/node@0.16.7':
@@ -1209,6 +1534,72 @@ snapshots:
'@jridgewell/resolve-uri': 3.1.2
'@jridgewell/sourcemap-codec': 1.5.5
'@parcel/watcher-android-arm64@2.5.1':
optional: true
'@parcel/watcher-darwin-arm64@2.5.1':
optional: true
'@parcel/watcher-darwin-x64@2.5.1':
optional: true
'@parcel/watcher-freebsd-x64@2.5.1':
optional: true
'@parcel/watcher-linux-arm-glibc@2.5.1':
optional: true
'@parcel/watcher-linux-arm-musl@2.5.1':
optional: true
'@parcel/watcher-linux-arm64-glibc@2.5.1':
optional: true
'@parcel/watcher-linux-arm64-musl@2.5.1':
optional: true
'@parcel/watcher-linux-x64-glibc@2.5.1':
optional: true
'@parcel/watcher-linux-x64-musl@2.5.1':
optional: true
'@parcel/watcher-win32-arm64@2.5.1':
optional: true
'@parcel/watcher-win32-ia32@2.5.1':
optional: true
'@parcel/watcher-win32-x64@2.5.1':
optional: true
'@parcel/watcher@2.5.1':
dependencies:
detect-libc: 1.0.3
is-glob: 4.0.3
micromatch: 4.0.8
node-addon-api: 7.1.1
optionalDependencies:
'@parcel/watcher-android-arm64': 2.5.1
'@parcel/watcher-darwin-arm64': 2.5.1
'@parcel/watcher-darwin-x64': 2.5.1
'@parcel/watcher-freebsd-x64': 2.5.1
'@parcel/watcher-linux-arm-glibc': 2.5.1
'@parcel/watcher-linux-arm-musl': 2.5.1
'@parcel/watcher-linux-arm64-glibc': 2.5.1
'@parcel/watcher-linux-arm64-musl': 2.5.1
'@parcel/watcher-linux-x64-glibc': 2.5.1
'@parcel/watcher-linux-x64-musl': 2.5.1
'@parcel/watcher-win32-arm64': 2.5.1
'@parcel/watcher-win32-ia32': 2.5.1
'@parcel/watcher-win32-x64': 2.5.1
optional: true
'@phosphor-icons/react@2.1.10(react-dom@19.1.1(react@19.1.1))(react@19.1.1)':
dependencies:
react: 19.1.1
react-dom: 19.1.1(react@19.1.1)
'@rolldown/pluginutils@1.0.0-beta.34': {}
'@rollup/rollup-android-arm-eabi@4.50.1':
@@ -1307,7 +1698,7 @@ snapshots:
dependencies:
csstype: 3.1.3
'@vitejs/plugin-react@5.0.2(vite@7.1.5)':
'@vitejs/plugin-react@5.0.2(vite@7.1.5(sass-embedded@1.92.1)(sass@1.92.1))':
dependencies:
'@babel/core': 7.28.4
'@babel/plugin-transform-react-jsx-self': 7.27.1(@babel/core@7.28.4)
@@ -1315,7 +1706,7 @@ snapshots:
'@rolldown/pluginutils': 1.0.0-beta.34
'@types/babel__core': 7.20.5
react-refresh: 0.17.0
vite: 7.1.5
vite: 7.1.5(sass-embedded@1.92.1)(sass@1.92.1)
transitivePeerDependencies:
- supports-color
@@ -1345,6 +1736,11 @@ snapshots:
balanced-match: 1.0.2
concat-map: 0.0.1
braces@3.0.3:
dependencies:
fill-range: 7.1.1
optional: true
browserslist@4.25.4:
dependencies:
caniuse-lite: 1.0.30001741
@@ -1352,6 +1748,8 @@ snapshots:
node-releases: 2.0.20
update-browserslist-db: 1.1.3(browserslist@4.25.4)
buffer-builder@0.2.0: {}
callsites@3.1.0: {}
caniuse-lite@1.0.30001741: {}
@@ -1361,16 +1759,27 @@ snapshots:
ansi-styles: 4.3.0
supports-color: 7.2.0
chokidar@4.0.3:
dependencies:
readdirp: 4.1.2
optional: true
classnames@2.5.1: {}
color-convert@2.0.1:
dependencies:
color-name: 1.1.4
color-name@1.1.4: {}
colorjs.io@0.5.2: {}
concat-map@0.0.1: {}
convert-source-map@2.0.0: {}
cookie@1.0.2: {}
cross-spawn@7.0.6:
dependencies:
path-key: 3.1.1
@@ -1385,6 +1794,9 @@ snapshots:
deep-is@0.1.4: {}
detect-libc@1.0.3:
optional: true
electron-to-chromium@1.5.215: {}
esbuild@0.25.9:
@@ -1509,6 +1921,11 @@ snapshots:
dependencies:
flat-cache: 4.0.1
fill-range@7.1.1:
dependencies:
to-regex-range: 5.0.1
optional: true
find-up@5.0.0:
dependencies:
locate-path: 6.0.0
@@ -1538,6 +1955,8 @@ snapshots:
ignore@5.3.2: {}
immutable@5.1.3: {}
import-fresh@3.3.1:
dependencies:
parent-module: 1.0.1
@@ -1551,6 +1970,9 @@ snapshots:
dependencies:
is-extglob: 2.1.1
is-number@7.0.0:
optional: true
isexe@2.0.0: {}
js-tokens@4.0.0: {}
@@ -1588,6 +2010,12 @@ snapshots:
dependencies:
yallist: 3.1.1
micromatch@4.0.8:
dependencies:
braces: 3.0.3
picomatch: 2.3.1
optional: true
minimatch@3.1.2:
dependencies:
brace-expansion: 1.1.12
@@ -1598,6 +2026,9 @@ snapshots:
natural-compare@1.4.0: {}
node-addon-api@7.1.1:
optional: true
node-releases@2.0.20: {}
optionator@0.9.4:
@@ -1627,6 +2058,9 @@ snapshots:
picocolors@1.1.1: {}
picomatch@2.3.1:
optional: true
picomatch@4.0.3: {}
postcss@8.5.6:
@@ -1646,8 +2080,25 @@ snapshots:
react-refresh@0.17.0: {}
react-router-dom@7.8.2(react-dom@19.1.1(react@19.1.1))(react@19.1.1):
dependencies:
react: 19.1.1
react-dom: 19.1.1(react@19.1.1)
react-router: 7.8.2(react-dom@19.1.1(react@19.1.1))(react@19.1.1)
react-router@7.8.2(react-dom@19.1.1(react@19.1.1))(react@19.1.1):
dependencies:
cookie: 1.0.2
react: 19.1.1
set-cookie-parser: 2.7.1
optionalDependencies:
react-dom: 19.1.1(react@19.1.1)
react@19.1.1: {}
readdirp@4.1.2:
optional: true
resolve-from@4.0.0: {}
rollup@4.50.1:
@@ -1677,10 +2128,113 @@ snapshots:
'@rollup/rollup-win32-x64-msvc': 4.50.1
fsevents: 2.3.3
rxjs@7.8.2:
dependencies:
tslib: 2.8.1
sass-embedded-all-unknown@1.92.1:
dependencies:
sass: 1.92.1
optional: true
sass-embedded-android-arm64@1.92.1:
optional: true
sass-embedded-android-arm@1.92.1:
optional: true
sass-embedded-android-riscv64@1.92.1:
optional: true
sass-embedded-android-x64@1.92.1:
optional: true
sass-embedded-darwin-arm64@1.92.1:
optional: true
sass-embedded-darwin-x64@1.92.1:
optional: true
sass-embedded-linux-arm64@1.92.1:
optional: true
sass-embedded-linux-arm@1.92.1:
optional: true
sass-embedded-linux-musl-arm64@1.92.1:
optional: true
sass-embedded-linux-musl-arm@1.92.1:
optional: true
sass-embedded-linux-musl-riscv64@1.92.1:
optional: true
sass-embedded-linux-musl-x64@1.92.1:
optional: true
sass-embedded-linux-riscv64@1.92.1:
optional: true
sass-embedded-linux-x64@1.92.1:
optional: true
sass-embedded-unknown-all@1.92.1:
dependencies:
sass: 1.92.1
optional: true
sass-embedded-win32-arm64@1.92.1:
optional: true
sass-embedded-win32-x64@1.92.1:
optional: true
sass-embedded@1.92.1:
dependencies:
'@bufbuild/protobuf': 2.7.0
buffer-builder: 0.2.0
colorjs.io: 0.5.2
immutable: 5.1.3
rxjs: 7.8.2
supports-color: 8.1.1
sync-child-process: 1.0.2
varint: 6.0.0
optionalDependencies:
sass-embedded-all-unknown: 1.92.1
sass-embedded-android-arm: 1.92.1
sass-embedded-android-arm64: 1.92.1
sass-embedded-android-riscv64: 1.92.1
sass-embedded-android-x64: 1.92.1
sass-embedded-darwin-arm64: 1.92.1
sass-embedded-darwin-x64: 1.92.1
sass-embedded-linux-arm: 1.92.1
sass-embedded-linux-arm64: 1.92.1
sass-embedded-linux-musl-arm: 1.92.1
sass-embedded-linux-musl-arm64: 1.92.1
sass-embedded-linux-musl-riscv64: 1.92.1
sass-embedded-linux-musl-x64: 1.92.1
sass-embedded-linux-riscv64: 1.92.1
sass-embedded-linux-x64: 1.92.1
sass-embedded-unknown-all: 1.92.1
sass-embedded-win32-arm64: 1.92.1
sass-embedded-win32-x64: 1.92.1
sass@1.92.1:
dependencies:
chokidar: 4.0.3
immutable: 5.1.3
source-map-js: 1.2.1
optionalDependencies:
'@parcel/watcher': 2.5.1
optional: true
scheduler@0.26.0: {}
semver@6.3.1: {}
set-cookie-parser@2.7.1: {}
shebang-command@2.0.0:
dependencies:
shebang-regex: 3.0.0
@@ -1695,11 +2249,28 @@ snapshots:
dependencies:
has-flag: 4.0.0
supports-color@8.1.1:
dependencies:
has-flag: 4.0.0
sync-child-process@1.0.2:
dependencies:
sync-message-port: 1.1.3
sync-message-port@1.1.3: {}
tinyglobby@0.2.15:
dependencies:
fdir: 6.5.0(picomatch@4.0.3)
picomatch: 4.0.3
to-regex-range@5.0.1:
dependencies:
is-number: 7.0.0
optional: true
tslib@2.8.1: {}
type-check@0.4.0:
dependencies:
prelude-ls: 1.2.1
@@ -1714,7 +2285,9 @@ snapshots:
dependencies:
punycode: 2.3.1
vite@7.1.5:
varint@6.0.0: {}
vite@7.1.5(sass-embedded@1.92.1)(sass@1.92.1):
dependencies:
esbuild: 0.25.9
fdir: 6.5.0(picomatch@4.0.3)
@@ -1724,6 +2297,8 @@ snapshots:
tinyglobby: 0.2.15
optionalDependencies:
fsevents: 2.3.3
sass: 1.92.1
sass-embedded: 1.92.1
which@2.0.2:
dependencies:

View File

@@ -1,10 +1,39 @@
const App = () => {
import {createBrowserRouter, Navigate, RouterProvider} from "react-router-dom";
import {UserProvider} from '@/common/contexts/UserContext.jsx';
import {ToastProvider} from '@/common/contexts/ToastContext.jsx';
import "@/common/styles/main.sass";
import Root from "@/common/layouts/Root.jsx";
import UserManagement from "@/pages/UserManagement";
import SystemSettings from "@/pages/SystemSettings";
import Machines, {MachineDetails} from "@/pages/Machines";
import "@fontsource/plus-jakarta-sans/300.css";
import "@fontsource/plus-jakarta-sans/400.css";
import "@fontsource/plus-jakarta-sans/600.css";
import "@fontsource/plus-jakarta-sans/700.css";
import "@fontsource/plus-jakarta-sans/800.css";
return (
<>
<h1>vite init</h1>
</>
)
}
const Placeholder = ({title}) => <div className="content"><h2 style={{fontSize: '1rem'}}>{title}</h2><p
className="muted">Content coming soon.</p></div>;
const App = () => {
const router = createBrowserRouter([
{
path: "/",
element: <Root/>,
children: [
{path: "/", element: <Navigate to="/dashboard"/>},
{path: "/dashboard", element: <Placeholder title="Dashboard"/>},
{path: "/machines", element: <Machines/>},
{path: "/machines/:id", element: <MachineDetails/>},
{path: "/servers", element: <Placeholder title="Servers"/>},
{path: "/settings", element: <Placeholder title="Settings"/>},
{path: "/admin/users", element: <UserManagement/>},
{path: "/admin/settings", element: <SystemSettings/>},
],
},
]);
return <UserProvider><ToastProvider><RouterProvider router={router}/></ToastProvider></UserProvider>;
};
export default App;

View File

@@ -0,0 +1,23 @@
import React from 'react';
import './styles.sass';
export const Avatar = ({
children,
size = 'md',
variant = 'default',
className = '',
...rest
}) => {
const avatarClasses = [
'avatar',
`avatar--${size}`,
`avatar--${variant}`,
className
].filter(Boolean).join(' ');
return (
<div className={avatarClasses} {...rest}>
{children}
</div>
);
};

View File

@@ -0,0 +1 @@
export { Avatar as default } from './Avatar.jsx';

View File

@@ -0,0 +1,35 @@
.avatar
background: var(--bg-elev)
border: 1px solid var(--border)
border-radius: 50%
display: flex
align-items: center
justify-content: center
color: var(--text-dim)
flex-shrink: 0
&--sm
width: 32px
height: 32px
&--md
width: 48px
height: 48px
&--lg
width: 64px
height: 64px
&--xl
width: 80px
height: 80px
&--primary
background: var(--accent)
color: white
border-color: var(--accent)
&--success
background: #16a34a
color: white
border-color: #16a34a

View File

@@ -0,0 +1,23 @@
import React from 'react';
import './styles.sass';
export const Badge = ({
children,
variant = 'default',
size = 'md',
className = '',
...rest
}) => {
const badgeClasses = [
'badge',
`badge--${variant}`,
`badge--${size}`,
className
].filter(Boolean).join(' ');
return (
<span className={badgeClasses} {...rest}>
{children}
</span>
);
};

View File

@@ -0,0 +1 @@
export { Badge as default } from './Badge.jsx';

View File

@@ -0,0 +1,53 @@
.badge
display: inline-flex
align-items: center
justify-content: center
border-radius: 12px
font-weight: 600
text-transform: uppercase
letter-spacing: 0.5px
white-space: nowrap
&--sm
padding: 0.125rem 0.5rem
font-size: 0.65rem
&--md
padding: 0.25rem 0.75rem
font-size: 0.75rem
&--lg
padding: 0.375rem 1rem
font-size: 0.85rem
&--default
background: var(--bg-elev)
color: var(--text-dim)
&--primary
background: rgba(15, 98, 254, 0.1)
color: #0f62fe
&--success
background: rgba(22, 163, 74, 0.1)
color: #16a34a
&--warning
background: rgba(245, 158, 11, 0.1)
color: #f59e0b
&--danger
background: rgba(217, 48, 37, 0.1)
color: #d93025
&--admin
background: #e3f2fd
color: #1976d2
&--user
background: var(--bg-elev)
color: var(--text-dim)
&--subtle
background: var(--bg-elev)
color: var(--text-dim)

View File

@@ -0,0 +1,41 @@
import React from "react";
import cn from "classnames";
import "./styles.sass";
export const Button = ({
as: Component = "button",
variant = "primary",
size = "md",
full = false,
icon,
iconRight,
loading = false,
disabled,
className,
children,
...rest
}) => {
const isDisabled = disabled || loading;
const isIconOnly = (icon || iconRight) && !children;
return (
<Component
className={cn(
"btn",
`btn--${variant}`,
`btn--${size}`,
full && "btn--full",
loading && "is-loading",
isIconOnly && "btn--icon-only",
className
)}
disabled={isDisabled}
{...rest}
>
{loading && <span className="btn-spinner" aria-hidden />}
{icon && <span className="btn-icon btn-icon--left">{icon}</span>}
{children && <span className="btn-label">{children}</span>}
{iconRight && <span className="btn-icon btn-icon--right">{iconRight}</span>}
</Component>
);
};
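Because isIconOnly is derived from the props (an icon with no children) rather than passed explicitly, callers get the square btn--icon-only styling automatically. A usage sketch; PlusIcon and the labels are illustrative, not taken from this diff:

import React from 'react';
import {PlusIcon} from '@phosphor-icons/react';
import Button from '@/common/components/Button';

export const Toolbar = () => (
    <>
        {/* no children, so btn--icon-only applies; aria-label keeps it accessible */}
        <Button variant="subtle" size="sm" icon={<PlusIcon size={16}/>} aria-label="Add"/>
        {/* with children, the same icon renders in the btn-icon--left slot */}
        <Button icon={<PlusIcon size={16}/>}>Add machine</Button>
    </>
);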

View File

@@ -0,0 +1 @@
export { Button as default } from "./Button.jsx";

View File

@@ -0,0 +1,105 @@
.btn
--c-bg: #ffffff
--c-bg-hover: #f2f5f8
--c-bg-active: #e6ebf0
--c-border: #dfe3e8
--c-border-hover: #c7ced6
--c-text: #1f2429
--c-accent: #0f62fe
--c-danger: #d93025
position: relative
display: inline-flex
align-items: center
justify-content: center
gap: .6rem
font-family: inherit
font-weight: 600
line-height: 1.2
cursor: pointer
border: 1px solid var(--c-border)
background: var(--c-bg)
color: var(--c-text)
border-radius: 12px
transition: all .2s ease
user-select: none
text-decoration: none
&:hover:not(:disabled)
background: var(--c-bg-hover)
border-color: var(--c-border-hover)
&:active:not(:disabled)
background: var(--c-bg-active)
transform: translateY(1px)
&:focus-visible
outline: 2px solid var(--c-accent)
outline-offset: 2px
&:disabled
opacity: .55
cursor: not-allowed
&.btn--full
width: 100%
&.btn--sm
font-size: .85rem
padding: .7rem 1rem
&.btn--md
font-size: .95rem
padding: .85rem 1.25rem
&.btn--lg
font-size: 1.05rem
padding: 1rem 1.5rem
&.btn--primary
--c-bg: #1f2429
--c-bg-hover: #374048
--c-bg-active: #2a3038
--c-border: #1f2429
--c-text: #ffffff
background: var(--c-bg)
border-color: var(--c-border)
&:hover:not(:disabled)
background: var(--c-bg-hover)
&.btn--subtle
--c-bg: #f0f3f6
--c-bg-hover: #e6ebf0
--c-bg-active: #dfe3e8
--c-border: #dfe3e8
&.btn--danger
--c-bg: #d93025
--c-bg-hover: #c22b21
--c-bg-active: #a9241b
--c-border: #d93025
--c-text: #ffffff
background: var(--c-bg)
border-color: var(--c-border)
&.btn--icon-only
padding: 0.75rem
aspect-ratio: 1
justify-content: center
&.btn--sm
padding: 0.6rem
&.btn--lg
padding: 0.9rem
.btn-icon
margin: 0
.btn-icon
display: inline-flex
align-items: center
&--left
margin-right: .25rem
&--right
margin-left: .25rem
.btn-spinner
width: 14px
height: 14px
border: 2px solid rgba(0,0,0,.15)
border-top-color: var(--c-text)
border-radius: 50%
animation: spin .7s linear infinite
@keyframes spin
to
transform: rotate(360deg)

View File

@@ -0,0 +1,43 @@
import React from 'react';
import './styles.sass';
export const Card = ({
children,
className = '',
hover = false,
padding = 'md',
variant = 'default',
...rest
}) => {
const cardClasses = [
'card',
`card--${variant}`,
`card--padding-${padding}`,
hover && 'card--hover',
className
].filter(Boolean).join(' ');
return (
<div className={cardClasses} {...rest}>
{children}
</div>
);
};
export const CardHeader = ({ children, className = '' }) => (
<div className={`card-header ${className}`}>
{children}
</div>
);
export const CardBody = ({ children, className = '' }) => (
<div className={`card-body ${className}`}>
{children}
</div>
);
export const CardFooter = ({ children, className = '' }) => (
<div className={`card-footer ${className}`}>
{children}
</div>
);
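A composition sketch for the four exports above; the machine fields are hypothetical. variant="elevated" resolves to the card--elevated class defined in the stylesheet below:

import React from 'react';
import Card, {CardHeader, CardBody, CardFooter} from '@/common/components/Card';

export const MachineCard = ({machine}) => (
    <Card hover padding="md" variant="elevated">
        <CardHeader>{machine.name}</CardHeader>
        <CardBody>{machine.description}</CardBody>
        <CardFooter>Last seen {machine.lastSeen}</CardFooter>
    </Card>
);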

View File

@@ -0,0 +1 @@
export { Card as default, CardHeader, CardBody, CardFooter } from './Card.jsx';

View File

@@ -0,0 +1,43 @@
.card
background: var(--bg-alt)
border: 1px solid var(--border)
border-radius: var(--radius-lg)
transition: all 0.2s ease
&--hover:hover
border-color: var(--border-strong)
transform: translateY(-2px)
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1)
&--padding-none
padding: 0
&--padding-sm
padding: 1rem
&--padding-md
padding: 1.5rem
&--padding-lg
padding: 2rem
// Card.jsx emits card--<variant>, so these selectors must not repeat the variant- prefix
&--elevated
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1)
&--outlined
border-width: 2px
.card-header
margin-bottom: 1rem
&:last-child
margin-bottom: 0
.card-body
flex: 1
.card-footer
margin-top: 1rem
&:first-child
margin-top: 0

View File

@@ -0,0 +1,28 @@
import React from 'react';
import './styles.sass';
export const DetailItem = ({
icon,
children,
className = '',
...rest
}) => {
return (
<div className={`detail-item ${className}`} {...rest}>
{icon && <span className="detail-item-icon">{icon}</span>}
<span className="detail-item-content">{children}</span>
</div>
);
};
export const DetailList = ({
children,
className = '',
...rest
}) => {
return (
<div className={`detail-list ${className}`} {...rest}>
{children}
</div>
);
};

View File

@@ -0,0 +1 @@
export { DetailItem as default, DetailList } from './DetailItem.jsx';

View File

@@ -0,0 +1,23 @@
.detail-list
display: flex
flex-direction: column
gap: 0.75rem
.detail-item
display: flex
align-items: center
gap: 0.75rem
color: var(--text-dim)
font-size: 0.9rem
.detail-item-icon
color: var(--text-dim)
display: inline-flex
flex-shrink: 0
svg
color: inherit
.detail-item-content
flex: 1
min-width: 0

View File

@@ -0,0 +1,28 @@
import React from 'react';
import './styles.sass';
export const EmptyState = ({
icon,
title,
description,
action,
size = 'md',
variant = 'default',
className = ''
}) => {
const emptyStateClasses = [
'empty-state',
`empty-state--${size}`,
`empty-state--${variant}`,
className
].filter(Boolean).join(' ');
return (
<div className={emptyStateClasses}>
{icon && <div className="empty-state-icon">{icon}</div>}
{title && <h3 className="empty-state-title">{title}</h3>}
{description && <p className="empty-state-description">{description}</p>}
{action && <div className="empty-state-action">{action}</div>}
</div>
);
};
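A usage sketch wiring all four optional slots; DesktopIcon and the copy are illustrative, not taken from this diff:

import React from 'react';
import {DesktopIcon} from '@phosphor-icons/react';
import Button from '@/common/components/Button';
import EmptyState from '@/common/components/EmptyState';

export const NoMachines = ({onAdd}) => (
    <EmptyState
        icon={<DesktopIcon size={48} weight="duotone"/>}
        title="No machines yet"
        description="Machines show up here once a sync client registers with the server."
        action={<Button onClick={onAdd}>Add machine</Button>}
    />
);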

View File

@@ -0,0 +1 @@
export { EmptyState as default } from './EmptyState.jsx';

View File

@@ -0,0 +1,83 @@
.empty-state
text-align: center
padding: 3rem 2rem
color: var(--text-dim)
display: flex
flex-direction: column
align-items: center
gap: 1rem
&--sm
padding: 1.5rem 1rem
gap: 0.5rem
.empty-state-icon svg
width: 32px
height: 32px
.empty-state-title
font-size: 1rem
.empty-state-description
font-size: 0.875rem
&--md
padding: 3rem 2rem
gap: 1rem
.empty-state-icon svg
width: 48px
height: 48px
.empty-state-title
font-size: 1.2rem
.empty-state-description
font-size: 0.95rem
&--lg
padding: 4rem 3rem
gap: 1.5rem
.empty-state-icon svg
width: 64px
height: 64px
.empty-state-title
font-size: 1.5rem
.empty-state-description
font-size: 1rem
&--subtle
opacity: 0.8
.empty-state-icon
color: var(--text-dim)
.empty-state-title
color: var(--text-dim)
.empty-state-icon
color: var(--text-dim)
display: flex
justify-content: center
svg
width: 48px
height: 48px
.empty-state-title
font-size: 1.2rem
font-weight: 600
color: var(--text)
margin: 0
.empty-state-description
font-size: 0.95rem
margin: 0
max-width: 400px
line-height: 1.5
.empty-state-action
margin-top: 0.5rem

View File

@@ -0,0 +1,514 @@
import React, {useState, useEffect} from 'react';
import {useToast} from '@/common/contexts/ToastContext.jsx';
import {getRequest} from '@/common/utils/RequestUtil.js';
import Button from '@/common/components/Button';
import Modal, {ModalActions} from '@/common/components/Modal';
import Card, {CardHeader, CardBody} from '@/common/components/Card';
import LoadingSpinner from '@/common/components/LoadingSpinner';
import EmptyState from '@/common/components/EmptyState';
import {
FolderIcon,
FileIcon,
ArrowLeftIcon,
DownloadIcon,
LinkIcon,
XIcon,
FolderOpenIcon,
HouseIcon,
FileTextIcon,
FilePdfIcon,
FileZipIcon,
FileImageIcon,
FileVideoIcon,
FileAudioIcon,
FileCodeIcon,
GearIcon,
DatabaseIcon
} from '@phosphor-icons/react';
import './file-browser.sass';
export const FileBrowser = ({
isOpen,
onClose,
machineId,
snapshotId,
partitionIndex,
partitionInfo
}) => {
const toast = useToast();
const [currentPath, setCurrentPath] = useState([]);
const [currentDirHash, setCurrentDirHash] = useState(null);
const [entries, setEntries] = useState([]);
const [loading, setLoading] = useState(false);
const [breadcrumbs, setBreadcrumbs] = useState([{name: 'Root', hash: null}]);
useEffect(() => {
if (isOpen && machineId && snapshotId && partitionIndex !== undefined) {
loadPartitionRoot();
}
}, [isOpen, machineId, snapshotId, partitionIndex]);
const loadPartitionRoot = async () => {
try {
setLoading(true);
const response = await getRequest(
`machines/${machineId}/snapshots/${snapshotId}/partitions/${partitionIndex}/files`
);
setEntries(response.entries || []);
setCurrentPath([]);
setCurrentDirHash(null);
setBreadcrumbs([{name: 'Root', hash: null}]);
} catch (error) {
console.error('Failed to load partition root:', error);
toast.error('Failed to load files. Please try again.');
} finally {
setLoading(false);
}
};
const loadDirectory = async (dirHash, dirName) => {
try {
setLoading(true);
const response = await getRequest(
`machines/${machineId}/snapshots/${snapshotId}/partitions/${partitionIndex}/files/${dirHash}`
);
setEntries(response.entries || []);
setCurrentDirHash(dirHash);
// Update path and breadcrumbs
const newPath = [...currentPath, dirName];
setCurrentPath(newPath);
const newBreadcrumbs = [
{name: 'Root', hash: null},
...newPath.map((name, index) => ({
name,
hash: index === newPath.length - 1 ? dirHash : null // Only store hash for current dir
}))
];
setBreadcrumbs(newBreadcrumbs);
} catch (error) {
console.error('Failed to load directory:', error);
toast.error('Failed to load directory. Please try again.');
} finally {
setLoading(false);
}
};
const navigateToBreadcrumb = async (index) => {
if (index === 0) {
// Navigate to root
await loadPartitionRoot();
} else {
// For now, we can only navigate back to root since we don't store intermediate hashes
// In a full implementation, you'd need to track the full path with hashes
toast.info('Navigation to intermediate directories is not implemented yet. Use the back button or go to root.');
}
};
const goBack = async () => {
if (currentPath.length === 0) {
return; // Already at root
}
if (currentPath.length === 1) {
// Go back to root
await loadPartitionRoot();
} else {
// For now, just go to root. Full implementation would require tracking parent hashes
await loadPartitionRoot();
toast.info('Navigated back to root. Full directory navigation will be enhanced in future updates.');
}
};
const handleEntryClick = async (entry) => {
if (entry.entry_type === 'dir') {
await loadDirectory(entry.meta_hash, entry.name);
} else if (entry.entry_type === 'file') {
await downloadFile(entry.meta_hash, entry.name);
}
};
const downloadFile = async (fileHash, fileName) => {
try {
toast.info(`Starting download of ${fileName}...`);
// Get auth token
const token = localStorage.getItem('sessionToken');
if (!token) {
toast.error('Authentication required. Please log in again.');
return;
}
// Make authenticated request to download file
const downloadUrl = `/api/machines/${machineId}/snapshots/${snapshotId}/partitions/${partitionIndex}/download/${fileHash}?filename=${encodeURIComponent(fileName)}`;
const response = await fetch(downloadUrl, {
method: 'GET',
headers: {
'Authorization': `Bearer ${token}`
}
});
if (!response.ok) {
if (response.status === 401) {
toast.error('Authentication failed. Please log in again.');
return;
}
throw new Error(`Download failed: ${response.status} ${response.statusText}`);
}
// Get the file as a blob
const blob = await response.blob();
// Create a temporary URL for the blob
const blobUrl = window.URL.createObjectURL(blob);
// Create a temporary anchor element
const link = document.createElement('a');
link.href = blobUrl;
link.download = fileName;
link.style.display = 'none';
// Add to DOM, click, and remove
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
// Clean up the blob URL
window.URL.revokeObjectURL(blobUrl);
toast.success(`Downloaded ${fileName}`);
} catch (error) {
console.error('Failed to download file:', error);
toast.error(`Failed to download file: ${error.message}`);
}
};
const formatFileSize = (bytes) => {
if (bytes === 0) return '0 B';
if (!bytes) return 'Unknown size';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
// Clamp so values beyond TB map to the last unit instead of indexing past the array
const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
};
const getFileIcon = (entry) => {
if (entry.entry_type === 'dir') {
return <FolderIcon size={20} weight="duotone"/>;
} else if (entry.entry_type === 'symlink') {
return <LinkIcon size={20} weight="duotone"/>;
} else {
// Get file extension
const extension = entry.name.split('.').pop()?.toLowerCase() || '';
// Return appropriate icon based on extension
switch (extension) {
case 'txt':
case 'md':
case 'readme':
case 'log':
return <FileTextIcon size={20} weight="duotone"/>;
case 'pdf':
return <FilePdfIcon size={20} weight="duotone"/>;
case 'zip':
case 'rar':
case 'tar':
case 'gz':
case '7z':
return <FileZipIcon size={20} weight="duotone"/>;
case 'jpg':
case 'jpeg':
case 'png':
case 'gif':
case 'bmp':
case 'svg':
case 'webp':
return <FileImageIcon size={20} weight="duotone"/>;
case 'mp4':
case 'avi':
case 'mov':
case 'wmv':
case 'flv':
case 'webm':
return <FileVideoIcon size={20} weight="duotone"/>;
case 'mp3':
case 'wav':
case 'flac':
case 'aac':
case 'ogg':
return <FileAudioIcon size={20} weight="duotone"/>;
case 'js':
case 'ts':
case 'jsx':
case 'tsx':
case 'html':
case 'css':
case 'scss':
case 'sass':
case 'json':
case 'xml':
case 'py':
case 'java':
case 'cpp':
case 'c':
case 'h':
case 'rs':
case 'go':
case 'php':
case 'rb':
case 'sh':
case 'sql':
return <FileCodeIcon size={20} weight="duotone"/>;
case 'exe':
case 'msi':
case 'deb':
case 'rpm':
case 'dmg':
case 'app':
return <GearIcon size={20} weight="duotone"/>;
case 'db':
case 'sqlite':
case 'mysql':
return <DatabaseIcon size={20} weight="duotone"/>;
default:
return <FileIcon size={20} weight="duotone"/>;
}
}
};
const getEntryTypeColor = (entry) => {
if (entry.entry_type === 'dir') {
return '#4589ff'; // Blue for directories
} else if (entry.entry_type === 'symlink') {
return '#a855f7'; // Purple for symlinks
} else {
// Color code by file extension
const extension = entry.name.split('.').pop()?.toLowerCase() || '';
switch (extension) {
case 'txt':
case 'md':
case 'readme':
case 'log':
return '#16a34a'; // Green for text files
case 'pdf':
return '#dc2626'; // Red for PDFs
case 'zip':
case 'rar':
case 'tar':
case 'gz':
case '7z':
return '#f59e0b'; // Orange for archives
case 'jpg':
case 'jpeg':
case 'png':
case 'gif':
case 'bmp':
case 'svg':
case 'webp':
return '#ec4899'; // Pink for images
case 'mp4':
case 'avi':
case 'mov':
case 'wmv':
case 'flv':
case 'webm':
return '#8b5cf6'; // Purple for videos
case 'mp3':
case 'wav':
case 'flac':
case 'aac':
case 'ogg':
return '#06b6d4'; // Cyan for audio
case 'js':
case 'ts':
case 'jsx':
case 'tsx':
case 'html':
case 'css':
case 'scss':
case 'sass':
case 'json':
case 'xml':
case 'py':
case 'java':
case 'cpp':
case 'c':
case 'h':
case 'rs':
case 'go':
case 'php':
case 'rb':
case 'sh':
case 'sql':
return '#10b981'; // Emerald for code files
case 'exe':
case 'msi':
case 'deb':
case 'rpm':
case 'dmg':
case 'app':
return '#6b7280'; // Gray for executables
case 'db':
case 'sqlite':
case 'mysql':
return '#0ea5e9'; // Blue for databases
default:
return '#6b7280'; // Default gray
}
}
};
if (!isOpen) return null;
return (
<Modal
isOpen={isOpen}
onClose={onClose}
title={`File Browser - ${partitionInfo?.fs_type?.toUpperCase() || 'Partition'} Partition`}
size="xl"
className="file-browser-modal"
>
<div className="file-browser">
{/* Navigation Bar */}
<div className="file-browser-nav">
<div className="nav-controls">
<Button
variant="subtle"
size="sm"
icon={<ArrowLeftIcon size={16}/>}
onClick={goBack}
disabled={currentPath.length === 0 || loading}
>
Back
</Button>
<Button
variant="subtle"
size="sm"
icon={<HouseIcon size={16}/>}
onClick={loadPartitionRoot}
disabled={currentPath.length === 0 || loading}
>
Root
</Button>
</div>
{/* Breadcrumbs */}
<div className="breadcrumbs">
{breadcrumbs.map((crumb, index) => (
<React.Fragment key={index}>
{index > 0 && <span className="breadcrumb-separator">/</span>}
<button
className={`breadcrumb ${index === breadcrumbs.length - 1 ? 'breadcrumb--current' : ''}`}
onClick={() => navigateToBreadcrumb(index)}
disabled={index === breadcrumbs.length - 1 || loading}
>
{crumb.name}
</button>
</React.Fragment>
))}
</div>
</div>
{/* File List */}
<div className="file-browser-content">
{loading ? (
<div className="file-browser-loading">
<LoadingSpinner text="Loading directory contents..."/>
</div>
) : entries.length === 0 ? (
<EmptyState
icon={<FolderOpenIcon size={48} weight="duotone"/>}
title="Empty directory"
description="This directory doesn't contain any files or subdirectories"
variant="subtle"
size="sm"
/>
) : (
<>
{/* File List Header */}
<div className="file-list-header">
<div className="file-list-header-item">
<span>Name</span>
</div>
<div className="file-list-header-item">
<span>Type</span>
</div>
<div className="file-list-header-item">
<span>Size</span>
</div>
</div>
<div className="file-list">
{entries.map((entry, index) => (
<div
key={index}
className={`file-entry file-entry--${entry.entry_type}`}
onClick={() => handleEntryClick(entry)}
>
<div className="file-entry-main">
<div className="file-entry-icon" style={{color: getEntryTypeColor(entry)}}>
{getFileIcon(entry)}
</div>
<div className="file-entry-info">
<div className="file-entry-name">
{entry.name}
</div>
</div>
</div>
<div className="file-entry-type-cell">
<span className="file-entry-type">
{entry.entry_type === 'dir' ? 'Folder' :
entry.entry_type === 'symlink' ? 'Link' :
entry.name.includes('.') ? entry.name.split('.').pop()?.toUpperCase() + ' File' : 'File'}
</span>
</div>
<div className="file-entry-size-cell">
{entry.size_bytes ? (
<span className="file-entry-size">
{formatFileSize(entry.size_bytes)}
</span>
) : (
<span className="file-entry-size-empty"></span>
)}
</div>
{entry.entry_type === 'file' && (
<div className="file-entry-actions">
<Button
variant="subtle"
size="sm"
icon={<DownloadIcon size={14}/>}
onClick={(e) => {
e.stopPropagation();
downloadFile(entry.meta_hash, entry.name);
}}
title="Download file"
/>
</div>
)}
</div>
))}
</div>
</>
)}
</div>
</div>
<ModalActions>
<Button
variant="subtle"
onClick={onClose}
icon={<XIcon size={16}/>}
>
Close
</Button>
</ModalActions>
</Modal>
);
};
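Both fallback branches above (navigateToBreadcrumb and goBack) reload the root because only the current directory's hash is retained. A sketch of one way to lift that restriction, assuming each breadcrumb keeps the hash it was loaded from; loadEntries is a hypothetical stand-in for the fetch that loadDirectory already performs:

// Inside FileBrowser: every crumb records its own hash as it is pushed.
const [crumbs, setCrumbs] = useState([{name: 'Root', hash: null}]);

const enterDirectory = async (dirHash, dirName) => {
    await loadEntries(dirHash); // same request loadDirectory issues today
    setCrumbs(prev => [...prev, {name: dirName, hash: dirHash}]);
};

const jumpToCrumb = async (index) => {
    const target = crumbs[index];
    if (target.hash === null) {
        await loadPartitionRoot(); // null marks the partition root
    } else {
        await loadEntries(target.hash); // any ancestor reloads directly by hash
    }
    setCrumbs(prev => prev.slice(0, index + 1)); // discard crumbs below the target
};

goBack then reduces to jumpToCrumb(crumbs.length - 2), and both "not implemented yet" toasts can be removed.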

View File

@@ -0,0 +1,356 @@
// File Browser Styles - Modern File Manager Design
.file-browser-modal
.modal-dialog
max-width: 95vw
width: 1200px
height: 85vh
display: flex
flex-direction: column
.modal-content
display: flex
flex-direction: column
height: 100%
overflow: hidden
.modal-body
flex: 1
padding: 0
display: flex
flex-direction: column
overflow: hidden
.file-browser
display: flex
flex-direction: column
height: 100%
background: var(--bg-alt)
border-radius: var(--radius)
overflow: hidden
// Navigation Bar - File Explorer Style
.file-browser-nav
display: flex
align-items: center
justify-content: space-between
padding: 0.75rem 1rem
border-bottom: 1px solid var(--border)
background: linear-gradient(to bottom, var(--bg-alt), var(--bg-elev))
gap: 1rem
min-height: 60px
.nav-controls
display: flex
gap: 0.5rem
flex-shrink: 0
.breadcrumbs
display: flex
align-items: center
gap: 0.25rem
flex: 1
min-width: 0
overflow-x: auto
padding: 0.5rem
background: var(--bg)
border: 1px solid var(--border)
border-radius: var(--radius-sm)
font-family: 'SF Mono', 'Monaco', 'Consolas', monospace
font-size: 0.875rem
.breadcrumb
background: none
border: none
color: var(--accent)
cursor: pointer
font-size: 0.875rem
padding: 0.25rem 0.5rem
border-radius: var(--radius-sm)
transition: all 0.2s ease
white-space: nowrap
font-weight: 500
&:hover:not(:disabled)
background: rgba(15, 98, 254, 0.1)
color: var(--accent)
&:disabled
color: var(--text-dim)
cursor: default
&--current
color: var(--text)
font-weight: 600
cursor: default
background: rgba(15, 98, 254, 0.05)
&:hover
background: rgba(15, 98, 254, 0.05)
.breadcrumb-separator
color: var(--text-dim)
font-size: 0.875rem
margin: 0 0.25rem
font-weight: 400
// Content Area
.file-browser-content
flex: 1
display: flex
flex-direction: column
overflow: hidden
background: var(--bg)
.file-browser-loading
flex: 1
display: flex
align-items: center
justify-content: center
padding: 3rem
// File List - Table-like layout with header
.file-list-header
display: flex
background: var(--bg-elev)
border-bottom: 1px solid var(--border)
padding: 0.75rem 1rem
font-size: 0.75rem
font-weight: 600
color: var(--text-dim)
text-transform: uppercase
letter-spacing: 0.025em
position: sticky
top: 0
z-index: 10
&-item
display: flex
align-items: center
&:nth-child(1)
flex: 1
min-width: 0
padding-right: 1rem
&:nth-child(2)
width: 120px
padding-right: 1rem
&:nth-child(3)
width: 100px
text-align: right
.file-list
flex: 1
overflow-y: auto
padding: 0
background: var(--bg)
.file-entry
display: flex
align-items: center
padding: 0.75rem 1rem
border-bottom: 1px solid rgba(223, 227, 232, 0.3)
cursor: pointer
transition: all 0.15s ease
user-select: none
position: relative
&:hover
background: linear-gradient(90deg, rgba(15, 98, 254, 0.03), rgba(15, 98, 254, 0.01))
border-left: 3px solid rgba(15, 98, 254, 0.3)
&:active
background: rgba(15, 98, 254, 0.08)
&--dir
&:hover
background: linear-gradient(90deg, rgba(15, 98, 254, 0.05), rgba(15, 98, 254, 0.02))
border-left: 3px solid var(--accent)
.file-entry-icon
filter: drop-shadow(0 1px 2px rgba(0, 0, 0, 0.1))
// Main content area (icon + name)
&-main
display: flex
align-items: center
flex: 1
min-width: 0
padding-right: 1rem
&-icon
display: flex
align-items: center
justify-content: center
flex-shrink: 0
width: 24px
height: 24px
margin-right: 0.75rem
transition: transform 0.2s ease
.file-entry:hover &
transform: scale(1.05)
&-info
flex: 1
min-width: 0
&-name
font-weight: 500
color: var(--text)
word-break: break-word
line-height: 1.3
font-size: 0.875rem
.file-entry--dir &
font-weight: 600
// Type column
&-type-cell
width: 120px
padding-right: 1rem
flex-shrink: 0
&-type
text-transform: uppercase
font-weight: 600
letter-spacing: 0.025em
font-size: 0.7rem
color: var(--text-dim)
// Size column
&-size-cell
width: 100px
text-align: right
flex-shrink: 0
&-size
font-family: 'SF Mono', 'Monaco', 'Consolas', monospace
font-size: 0.7rem
color: var(--text-dim)
background: var(--bg-elev)
padding: 0.125rem 0.375rem
border-radius: var(--radius-sm)
&-size-empty
font-size: 0.7rem
color: var(--text-dim)
opacity: 0.5
&-actions
display: flex
gap: 0.25rem
flex-shrink: 0
opacity: 0
transition: opacity 0.2s ease
transform: translateX(8px)
margin-left: 0.5rem
&:hover &-actions
opacity: 1
transform: translateX(0)
// Empty state styling
.file-browser-content .empty-state
margin: 3rem auto
max-width: 400px
// Header improvements
.file-browser-nav .nav-controls button
border: 1px solid var(--border)
background: var(--bg)
transition: all 0.2s ease
&:hover:not(:disabled)
background: var(--bg-elev)
border-color: var(--accent)
transform: translateY(-1px)
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1)
&:disabled
opacity: 0.5
cursor: not-allowed
// Scrollbar styling
.file-list
scrollbar-width: thin
scrollbar-color: var(--border) transparent
&::-webkit-scrollbar
width: 8px
&::-webkit-scrollbar-track
background: transparent
&::-webkit-scrollbar-thumb
background: var(--border)
border-radius: 4px
&:hover
background: var(--border-strong)
// File type specific styling
.file-entry--dir
.file-entry-name
color: var(--text)
.file-entry-type
color: #4589ff
.file-entry--file
.file-entry-name
color: var(--text)
.file-entry--symlink
.file-entry-name
color: var(--text-dim)
font-style: italic
.file-entry-type
color: #a855f7
// Loading state
.file-browser-loading .loading-spinner
color: var(--accent)
// Modal actions
.file-browser-modal .modal-actions
padding: 1rem
border-top: 1px solid var(--border)
background: var(--bg-elev)
// Responsive design
@media (max-width: 768px)
.file-browser-modal .modal-dialog
max-width: 98vw
width: 98vw
height: 90vh
.file-browser-nav
flex-direction: column
align-items: stretch
gap: 0.75rem
padding: 1rem
.breadcrumbs
order: -1
font-size: 0.8rem
.file-entry
display: grid
grid-template-columns: auto 1fr
gap: 0.75rem
padding: 1rem 0.75rem
.file-entry-actions
grid-column: 1 / -1
justify-self: end
margin-top: 0.5rem
opacity: 1
transform: none
.file-entry-name
font-size: 0.875rem
.file-entry-type
font-size: 0.75rem

View File

@@ -0,0 +1 @@
export {FileBrowser} from './FileBrowser.jsx';

View File

@@ -0,0 +1,26 @@
import React from 'react';
import './styles.sass';
export const Grid = ({
children,
columns = 'auto-fill',
minWidth = '300px',
gap = '1.5rem',
className = '',
...rest
}) => {
const gridStyle = {
'--grid-columns': columns === 'auto-fill' ? `repeat(auto-fill, minmax(${minWidth}, 1fr))` : `repeat(${columns}, 1fr)`,
'--grid-gap': gap
};
return (
<div
className={`grid ${className}`}
style={gridStyle}
{...rest}
>
{children}
</div>
);
};
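Both layout modes pass through the two CSS custom properties consumed by styles.sass below. A usage sketch with hypothetical data:

import React from 'react';
import Grid from '@/common/components/Grid';
import Card from '@/common/components/Card';

export const MachineGrid = ({machines}) => (
    // auto-fill: as many 280px-minimum columns as fit the container
    <Grid minWidth="280px" gap="1rem">
        {machines.map((m) => <Card key={m.id}>{m.name}</Card>)}
    </Grid>
);

// A fixed three-column layout instead: <Grid columns={3}>...</Grid>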

View File

@@ -0,0 +1 @@
export { Grid as default } from './Grid.jsx';

View File

@@ -0,0 +1,8 @@
.grid
display: grid
grid-template-columns: var(--grid-columns)
gap: var(--grid-gap)
@media (max-width: 768px)
.grid
grid-template-columns: 1fr

View File

@@ -0,0 +1,23 @@
import React from "react";
import cn from "classnames";
import "./styles.sass";
export const Input = ({
label,
error,
icon,
className,
containerClassName,
type = "text",
...rest
}) => {
return (
<div className={cn("field", containerClassName)}>
{label && <label className="field-label">{label}</label>}
<div className={cn("field-control", error && "has-error", icon && "has-icon", className)}>
{icon && <span className="field-icon">{icon}</span>}
<input type={type} className="field-input" {...rest} />
</div>
{error && <div className="field-error">{error}</div>}
</div>
);
};
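A controlled-usage sketch exercising the label, icon, and error slots; EnvelopeIcon and the validation copy are illustrative, not taken from this diff:

import React, {useState} from 'react';
import {EnvelopeIcon} from '@phosphor-icons/react';
import Input from '@/common/components/Input';

export const EmailField = () => {
    const [email, setEmail] = useState('');
    const error = email && !email.includes('@') ? 'Enter a valid email address' : null;
    return (
        <Input
            label="Email"
            type="email"
            icon={<EnvelopeIcon size={16}/>}
            placeholder="you@example.com"
            value={email}
            onChange={(e) => setEmail(e.target.value)}
            error={error} // renders .field-error and the has-error border
        />
    );
};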

View File

@@ -0,0 +1 @@
export { Input as default } from "./Input.jsx";

View File

@@ -0,0 +1,63 @@
.field
display: flex
flex-direction: column
gap: .5rem
font-size: .9rem
font-weight: 600
color: #374048
.field-label
letter-spacing: .3px
margin-bottom: .2rem
.field-control
position: relative
display: flex
align-items: center
background: #ffffff
border: 2px solid #e1e8f0
border-radius: 16px
padding: 1rem 1.2rem
transition: all .2s ease
&.has-icon .field-input
padding-left: 2.2rem
&.has-error
border-color: #d93025
box-shadow: 0 0 0 4px rgba(217, 48, 37, 0.1)
&:focus-within
border-color: #0f62fe
box-shadow: 0 0 0 4px rgba(15, 98, 254, 0.1)
transform: translateY(-1px)
.field-icon
position: absolute
left: 1rem
top: 50%
transform: translateY(-50%)
display: inline-flex
font-size: 1.1rem
color: #6b7781
pointer-events: none
.field-input
appearance: none
outline: none
background: transparent
border: 0
color: #1f2429
font: inherit
font-size: 1rem
font-weight: 500
width: 100%
line-height: 1.3
&::placeholder
color: #a0abb4
font-weight: 400
&:focus
outline: none
.field-error
font-size: .65rem
font-weight: 600
color: #d93025
letter-spacing: .5px

Some files were not shown because too many files have changed in this diff