Compare commits: 8b1a9be8c2 ... e595fcbdac (3 commits: e595fcbdac, fa00747e80, 4e38b13faa)

PROTOCOL.md (new file)
@@ -0,0 +1,396 @@
Good insight, and you're right: **disk hashes alone** are not enough. The sync client must build the **entire object graph** (Disk → Partition → RootDir → Dirs → Files → Chunks), and the server must store and verify these meta objects.

I'm updating the protocol so that it is complete, binary, and implementable by an AI, with clear message formats, binary layouts for the meta objects, and pseudocode for the client **(what it must generate and compute)** and for the server **(how it must verify and persist)**.

I'm keeping it pragmatic: no compression extensions, no encryption. Everything stays simple and can be extended later.

---

# Quick overview (what happens)

1. The client scans each disk/partition, chunks files (4 MiB), and computes the chunk hashes (BLAKE3-256, 32 bytes).
2. The client builds binary meta objects: FileObj, DirObj, PartitionObj, DiskObj, SnapshotObj (for each: body → `meta_hash = BLAKE3(body)`).
3. The client contacts the server over TCP (HELLO → AUTH), asks in batches which chunks/metas are missing, and sends only those.
4. Finally it sends the snapshot commit; the server validates it and writes the snapshot reference (the server owns the pointers).

---

# General message structure (envelopes)

Every message is a fixed 24-byte header plus a payload:

```
struct MsgHeader {
    u8  cmd;             // command code (see table)
    u8  flags;           // reserved
    u8  reserved[2];
    u8  session_id[16];  // all zeros before AUTH_OK
    u32 payload_len;     // LE
}
```

Response messages use the same envelope.
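
As a minimal framing sketch (assuming tokio; the helper names `read_msg`/`write_msg` are illustrative, not part of the spec):

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

/// The fixed 24-byte envelope described above.
pub struct MsgHeader {
    pub cmd: u8,
    pub flags: u8,
    pub session_id: [u8; 16],
    pub payload_len: u32,
}

pub async fn write_msg(s: &mut TcpStream, h: &MsgHeader, payload: &[u8]) -> std::io::Result<()> {
    let mut buf = [0u8; 24];
    buf[0] = h.cmd;
    buf[1] = h.flags;
    // buf[2..4] stays zero (reserved)
    buf[4..20].copy_from_slice(&h.session_id);
    buf[20..24].copy_from_slice(&(payload.len() as u32).to_le_bytes());
    s.write_all(&buf).await?;
    s.write_all(payload).await
}

pub async fn read_msg(s: &mut TcpStream) -> std::io::Result<(MsgHeader, Vec<u8>)> {
    let mut buf = [0u8; 24];
    s.read_exact(&mut buf).await?;
    let h = MsgHeader {
        cmd: buf[0],
        flags: buf[1],
        session_id: buf[4..20].try_into().unwrap(),
        payload_len: u32::from_le_bytes(buf[20..24].try_into().unwrap()),
    };
    // A real server should cap payload_len before allocating.
    let mut payload = vec![0u8; h.payload_len as usize];
    s.read_exact(&mut payload).await?;
    Ok((h, payload))
}
```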

---

# Command codes (u8)

* 0x01 HELLO
* 0x02 HELLO_OK
* 0x10 AUTH_USERPASS
* 0x11 AUTH_CODE
* 0x12 AUTH_OK
* 0x13 AUTH_FAIL
* 0x20 BATCH_CHECK_CHUNK
* 0x21 CHECK_CHUNK_RESP
* 0x22 SEND_CHUNK
* 0x23 CHUNK_OK
* 0x24 CHUNK_FAIL
* 0x30 BATCH_CHECK_META
* 0x31 CHECK_META_RESP
* 0x32 SEND_META
* 0x33 META_OK
* 0x34 META_FAIL
* 0x40 SEND_SNAPSHOT (snapshot commit)
* 0x41 SNAPSHOT_OK
* 0x42 SNAPSHOT_FAIL
* 0xFF CLOSE
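
A compact way to mirror this table in code; a sketch, with the enum name `Cmd` assumed (not prescribed by the spec):

```rust
/// Wire command codes, one-to-one with the table above.
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Cmd {
    Hello = 0x01,
    HelloOk = 0x02,
    AuthUserpass = 0x10,
    AuthCode = 0x11,
    AuthOk = 0x12,
    AuthFail = 0x13,
    BatchCheckChunk = 0x20,
    CheckChunkResp = 0x21,
    SendChunk = 0x22,
    ChunkOk = 0x23,
    ChunkFail = 0x24,
    BatchCheckMeta = 0x30,
    CheckMetaResp = 0x31,
    SendMeta = 0x32,
    MetaOk = 0x33,
    MetaFail = 0x34,
    SendSnapshot = 0x40,
    SnapshotOk = 0x41,
    SnapshotFail = 0x42,
    Close = 0xFF,
}
```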

---

# Key design decisions (brief)

* **Hashes**: BLAKE3-256 (32 bytes). The client computes all hashes (chunks + meta bodies).
* **Chunks on the wire**: uncompressed (simple & reliable). Compression would be a later extension.
* **Meta object bodies**: compact binary structures (see below). `meta_hash = BLAKE3(body)`.
* **Batch checks**: the client asks in batches which chunks/metas are missing (the server returns only the missing hashes). This minimizes round trips.
* **Server persistence**: `chunks/<ab>/<cd>/<hash>.chk`, `meta/<type>/<ab>/<cd>/<hash>.meta`. The server manages the snapshot pointers (e.g. `machines/<client>/snapshots/<id>.ref`).
* **Snapshot commit**: the server validates the object graph before completion; if anything is missing, it sends the list back (SNAPSHOT_FAIL with a missing list).

---

# Binary payload formats

All multi-byte counters / lengths are little-endian (`LE`).

## BATCH_CHECK_CHUNK (Client → Server)

```
payload:
  u32 count
  for i in 0..count:
    u8[32] chunk_hash
```

## CHECK_CHUNK_RESP (Server → Client)

```
payload:
  u32 missing_count
  for i in 0..missing_count:
    u8[32] missing_chunk_hash
```
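
A sketch of the client side of this exchange (building the request payload and parsing the response); `encode`/`decode` helper names are illustrative:

```rust
/// Build a BATCH_CHECK_CHUNK payload: u32 count (LE) + count * 32-byte hashes.
fn encode_batch_check_chunk(hashes: &[[u8; 32]]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(4 + hashes.len() * 32);
    payload.extend_from_slice(&(hashes.len() as u32).to_le_bytes());
    for h in hashes {
        payload.extend_from_slice(h);
    }
    payload
}

/// Parse a CHECK_CHUNK_RESP payload into the list of missing hashes.
fn decode_check_chunk_resp(payload: &[u8]) -> Option<Vec<[u8; 32]>> {
    let count = u32::from_le_bytes(payload.get(..4)?.try_into().ok()?) as usize;
    let body = payload.get(4..)?;
    if body.len() != count * 32 {
        return None; // malformed response
    }
    Some(body.chunks_exact(32).map(|c| c.try_into().unwrap()).collect())
}
```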

## SEND_CHUNK (Client → Server)

```
payload:
  u8[32]   chunk_hash
  u32      size
  u8[size] data   // raw chunk bytes
```

The server computes BLAKE3(data) and compares it to chunk_hash; if they match, it stores the chunk.
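
On the server, verification plus an atomic write might look like this sketch (the shard layout follows the storage section below; `data_dir` is an assumed config value):

```rust
use std::path::{Path, PathBuf};

/// chunks/<ab>/<cd>/<hash>.chk, sharded on the first two hash bytes.
fn chunk_path(data_dir: &Path, hash: &[u8; 32]) -> PathBuf {
    let hex = hex::encode(hash);
    data_dir.join("chunks").join(&hex[0..2]).join(&hex[2..4]).join(format!("{hex}.chk"))
}

/// Handle SEND_CHUNK: verify the hash, then write atomically (tmp -> rename).
fn store_chunk(data_dir: &Path, claimed: &[u8; 32], data: &[u8]) -> std::io::Result<bool> {
    if blake3::hash(data).as_bytes() != claimed {
        return Ok(false); // caller replies CHUNK_FAIL
    }
    let path = chunk_path(data_dir, claimed);
    if path.exists() {
        return Ok(true); // deduplicated: already stored
    }
    std::fs::create_dir_all(path.parent().unwrap())?;
    let tmp = path.with_extension("tmp");
    std::fs::write(&tmp, data)?;
    std::fs::rename(&tmp, &path)?; // atomic on the same filesystem
    Ok(true)
}
```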

## BATCH_CHECK_META (Client → Server)

```
payload:
  u32 count
  for i in 0..count:
    u8     meta_type   // 1=file, 2=dir, 3=partition, 4=disk, 5=snapshot
    u8[32] meta_hash
```

## CHECK_META_RESP (Server → Client)

```
payload:
  u32 missing_count
  for i in 0..missing_count:
    u8     meta_type
    u8[32] meta_hash
```

## SEND_META (Client → Server)

```
payload:
  u8     meta_type   // 1..5
  u8[32] meta_hash
  u32    body_len
  u8[body_len] body_bytes   // the canonical body; the server computes BLAKE3(body_bytes) and compares it to meta_hash
```

## SEND_SNAPSHOT (Commit)

```
payload:
  u8[32] snapshot_hash
  u32    body_len
  u8[body_len] snapshot_body   // snapshot body, same encoding as a meta body (the server validates body hash == snapshot_hash)
```

The server validates that snapshot_body references only existing meta objects (a recursive / direct check). If everything is present, it creates a persistent snapshot pointer and replies SNAPSHOT_OK; otherwise it replies SNAPSHOT_FAIL with a missing list (same format as CHECK_META_RESP).

---

# Meta object binary formats (bodies)

> The client produces `body_bytes` for each meta object; `meta_hash = BLAKE3(body_bytes)`.

### FileObj (meta_type = 1)

```
FileObjBody:
  u8  version (1)
  u32 fs_type_code   // e.g. 1=ext*, 2=ntfs, 3=fat32 (enum)
  u64 size
  u32 mode           // POSIX mode on Linux; 0 for filesystems without one
  u32 uid
  u32 gid
  u64 mtime_unixsec
  u32 chunk_count
  for i in 0..chunk_count:
    u8[32] chunk_hash
  // optional: xattrs/ACLs as TLV (not in v1)
```

### DirObj (meta_type = 2)

```
DirObjBody:
  u8  version (1)
  u32 entry_count
  for each entry:
    u8     entry_type   // 0 = file, 1 = dir, 2 = symlink
    u16    name_len
    u8[name_len] name (UTF-8)
    u8[32] target_meta_hash
```

### PartitionObj (meta_type = 3)

```
PartitionObjBody:
  u8     version (1)
  u32    fs_type_code
  u8[32] root_dir_hash   // DirObj hash for the root of this partition
  u64    start_lba
  u64    end_lba
  u8[16] type_guid       // zeroed if unused
```

### DiskObj (meta_type = 4)

```
DiskObjBody:
  u8  version (1)
  u32 partition_count
  for i in 0..partition_count:
    u8[32] partition_hash
  u64 disk_size_bytes
  u16 serial_len
  u8[serial_len] serial_bytes
```

### SnapshotObj (meta_type = 5)

```
SnapshotObjBody:
  u8  version (1)
  u64 created_at_unixsec
  u32 disk_count
  for i in 0..disk_count:
    u8[32] disk_hash
  // optional: snapshot metadata (user, note) as a TLV extension later
```
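
As a concrete round trip, this sketch hand-encodes a one-entry DirObjBody and hashes it (field order exactly as above; the placeholder file hash is purely illustrative):

```rust
/// Hand-encode a DirObjBody with a single file entry and compute its meta_hash.
fn dir_meta_hash_example() -> [u8; 32] {
    let name = b"hello.txt";
    let target_meta_hash = [0u8; 32]; // placeholder FileObj hash, illustration only

    let mut body: Vec<u8> = Vec::new();
    body.push(1);                                               // u8  version
    body.extend_from_slice(&1u32.to_le_bytes());                // u32 entry_count
    body.push(0);                                               // u8  entry_type (0 = file)
    body.extend_from_slice(&(name.len() as u16).to_le_bytes()); // u16 name_len
    body.extend_from_slice(name);                               // u8[name_len] name
    body.extend_from_slice(&target_meta_hash);                  // u8[32] target_meta_hash

    *blake3::hash(&body).as_bytes()                             // meta_hash = BLAKE3(body)
}
```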

---

# Flow (pseudocode) - **client side (sync client)**

(Computes all hashes; sends only what is missing, in batches.)

```text
FUNCTION client_backup(tcp_conn, computer_id, disks):
    send_msg(HELLO{client_type=0, auth_type=0})
    await HELLO_OK

    send_msg(AUTH_USERPASS{username, password})
    resp = await
    if resp != AUTH_OK: abort
    session_id = resp.session_id

    // traverse per partition to limit memory
    snapshot_disk_hashes = []
    FOR disk IN disks:
        partition_hashes = []
        FOR part IN disk.partitions:
            root_dir_hash = process_dir(part.root_path, tcp_conn)
            part_body = build_partition_body(part.fs_type, root_dir_hash, part.start, part.end, part.guid)
            part_hash = blake3(part_body)
            batch_check_and_send_meta_if_missing(tcp_conn, meta_type=3, [(part_hash, part_body)])
            partition_hashes.append(part_hash)

        disk_body = build_disk_body(partition_hashes, disk.size, disk.serial)
        disk_hash = blake3(disk_body)
        batch_check_and_send_meta_if_missing(tcp_conn, meta_type=4, [(disk_hash, disk_body)])
        snapshot_disk_hashes.append(disk_hash)

    snapshot_body = build_snapshot_body(now(), snapshot_disk_hashes)
    snapshot_hash = blake3(snapshot_body)
    // final try: ask the server whether the snapshot can be committed (the server verifies)
    send_msg(SEND_SNAPSHOT(snapshot_hash, snapshot_body))
    resp = await
    if resp == SNAPSHOT_OK: success
    else if resp == SNAPSHOT_FAIL:   // server returns the missing meta list
        // receive the missing metas; the client sends the remaining missing metas/chunks (loop)
        handle_missing_and_retry()
```

Helper functions:

```text
FUNCTION process_dir(path, tcp_conn):
    entries_meta = []   // list of (name, entry_type, target_hash)
    FOR entry IN readdir(path):
        IF entry.is_file:
            file_hash = process_file(entry.path, tcp_conn)   // below
            entries_meta.append((entry.name, 0, file_hash))
        ELSE IF entry.is_dir:
            subdir_hash = process_dir(entry.path, tcp_conn)
            entries_meta.append((entry.name, 1, subdir_hash))
        ELSE IF symlink:
            symlink_body = build_symlink_body(target)
            symlink_hash = blake3(symlink_body)
            batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(symlink_hash, symlink_body)])
            entries_meta.append((entry.name, 2, symlink_hash))

    dir_body = build_dir_body(entries_meta)
    dir_hash = blake3(dir_body)
    batch_check_and_send_meta_if_missing(tcp_conn, meta_type=2, [(dir_hash, dir_body)])
    RETURN dir_hash
```

```text
FUNCTION process_file(path, tcp_conn):
    chunk_hashes = []
    FOR each chunk IN read_in_chunks(path, 4*1024*1024):
        chunk_hash = blake3(chunk)
        chunk_hashes.append(chunk_hash)
    // batch-check the chunks of this file
    missing = batch_check_chunks(tcp_conn, chunk_hashes)
    FOR each missing_hash IN missing:
        chunk_bytes = read_chunk_by_hash_from_disk(path, missing_hash)   // or buffered earlier
        send_msg(SEND_CHUNK{hash, size, data})
        await CHUNK_OK

    file_body = build_file_body(fs_type, size, mode, uid, gid, mtime, chunk_hashes)
    file_hash = blake3(file_body)
    batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(file_hash, file_body)])
    RETURN file_hash
```
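
The chunking step of `process_file` in Rust could look like this; a minimal sketch assuming fixed 4 MiB chunks and the `blake3` crate:

```rust
use std::fs::File;
use std::io::{Read, Result};

const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB

/// Read a file in fixed-size chunks and return (chunk_hash, chunk_bytes) pairs.
fn chunk_file(path: &str) -> Result<Vec<([u8; 32], Vec<u8>)>> {
    let mut file = File::open(path)?;
    let mut chunks = Vec::new();
    loop {
        let mut buf = vec![0u8; CHUNK_SIZE];
        let mut filled = 0;
        // read() may return short counts, so fill the buffer until full or EOF
        while filled < CHUNK_SIZE {
            let n = file.read(&mut buf[filled..])?;
            if n == 0 {
                break;
            }
            filled += n;
        }
        if filled == 0 {
            break; // EOF on a chunk boundary
        }
        buf.truncate(filled);
        let hash = *blake3::hash(&buf).as_bytes();
        chunks.push((hash, buf));
        if filled < CHUNK_SIZE {
            break; // last (short) chunk
        }
    }
    Ok(chunks)
}
```

Holding every chunk in memory is only reasonable for demonstration; a real client would hash and upload per chunk, as the per-file batching in the pseudocode implies.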

`batch_check_and_send_meta_if_missing`:

* Send BATCH_CHECK_META for all items
* The server returns the list of missing metas
* For each missing one, send SEND_META(meta_type, meta_hash, body)
* Await META_OK

Note: batching per directory / file group reduces round trips.

---

# Flow (pseudocode) - **server side (sync server)**

```text
ON connection:
    read HELLO -> verify allowed client type
    send HELLO_OK OR HELLO_FAIL

ON AUTH_USERPASS:
    validate credentials
    if ok: generate session_id (16 B), send AUTH_OK{session_id}
    else: send AUTH_FAIL

ON BATCH_CHECK_CHUNK:
    read list of hashes
    missing_list = []
    for hash in hashes:
        if not exists chunks/shard(hash): missing_list.append(hash)
    send CHECK_CHUNK_RESP{missing_list}

ON SEND_CHUNK:
    read chunk_hash, size, data
    computed = blake3(data)
    if computed != chunk_hash: send CHUNK_FAIL{reason} and drop
    else if chunk already exists: send CHUNK_OK
    else: write atomically to chunks/<ab>/<cd>/<hash>.chk and send CHUNK_OK

ON BATCH_CHECK_META:
    similar: check whether meta/<type>/<hash>.meta exists -- return the missing list

ON SEND_META:
    verify blake3(body) == meta_hash; if ok, write meta/<type>/<ab>/<cd>/<hash>.meta atomically; respond META_OK

ON SEND_SNAPSHOT:
    verify blake3(snapshot_body) == snapshot_hash
    // validate the object graph:
    missing = validate_graph(snapshot_body)   // DFS: disks -> partitions -> dirs -> files -> chunks
    if missing not empty:
        send SNAPSHOT_FAIL{missing (as meta list and/or chunk list)}
    else:
        store the snapshot file and create the pointer machines/<client_id>/snapshots/<id>.ref
        send SNAPSHOT_OK{snapshot_id}
```

`validate_graph`:

* Parse snapshot_body → disk_hashes.
* For each disk_hash, check that the meta exists; load the disk meta → for each partition_hash check that the meta exists … recursively through dir entries → file metas → check chunk existence for each chunk_hash. Collect the missing set and return it, as sketched below.
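
A sketch of that walk using an explicit work stack (avoiding async recursion). The `Storage::load_meta`, `MetaObj`, and related names mirror this repository's `sync` module; `has_chunk` and the exact signatures are assumptions:

```rust
use crate::sync::meta::{EntryType, MetaObj, SnapshotObj};
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::storage::Storage;

/// Walk the object graph from a snapshot and collect missing references.
async fn validate_graph(storage: &Storage, snapshot: &SnapshotObj) -> Vec<Hash> {
    let mut missing = Vec::new();
    // (meta_type, hash) pairs still to visit
    let mut todo: Vec<(MetaType, Hash)> = snapshot
        .disk_hashes.iter().map(|h| (MetaType::Disk, *h)).collect();

    while let Some((ty, hash)) = todo.pop() {
        match storage.load_meta(ty, &hash).await.ok().flatten() {
            Some(MetaObj::Disk(d)) => {
                todo.extend(d.partition_hashes.into_iter().map(|h| (MetaType::Partition, h)));
            }
            Some(MetaObj::Partition(p)) => {
                todo.push((MetaType::Dir, p.root_dir_hash));
            }
            Some(MetaObj::Dir(dir)) => {
                for e in dir.entries {
                    let ty = match e.entry_type {
                        EntryType::Dir => MetaType::Dir,
                        _ => MetaType::File, // files and symlinks are File metas
                    };
                    todo.push((ty, e.target_meta_hash));
                }
            }
            Some(MetaObj::File(f)) => {
                for c in &f.chunk_hashes {
                    if !storage.has_chunk(c).await.unwrap_or(false) {
                        missing.push(*c);
                    }
                }
            }
            Some(MetaObj::Snapshot(_)) => {} // not reachable from a snapshot body
            None => missing.push(hash),
        }
    }
    missing
}
```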

---

# Behavior on `SNAPSHOT_FAIL`

* The server returns the missing meta/chunk hashes.
* The client sends exactly those (batched) and retries `SEND_SNAPSHOT`.
* Alternatively, the client can upload all required metas/chunks incrementally on the first pass (that is the usual order in the pseudocode above, so nothing is missing at commit time).

---

# Storage / paths (server internal)

* `chunks/<ab>/<cd>/<hash>.chk` (ab = first 2 hex chars; cd = next 2)
* `meta/files/<ab>/<cd>/<hash>.meta`
* `meta/dirs/<...>`
* `meta/parts/...`
* `meta/disks/...`
* `meta/snapshots/<snapshot_hash>.meta`
* `machines/<client_id>/snapshots/<snapshot_id>.ref` (pointer -> snapshot_hash + timestamp)

Atomic writes: `tmp -> rename`.

---

# Key implementation notes for the AI / server implementation

* **Batching is mandatory**: implement `BATCH_CHECK_CHUNK` & `BATCH_CHECK_META` efficiently (bitset / HashSet lookups).
* **Limits**: cap `count` per batch (e.g. 1000); the client must split its chunk lists accordingly, as in the sketch below.
* **Validation**: the server must validate the graph on `SEND_SNAPSHOT` (otherwise consistency is lost).
* **Atomic snapshot commit**: persist only once the graph is fully present.
* **Session ID**: must be carried in the header of every message after AUTH_OK.
* **Performance**: parallelize chunk uploads (multiple TCP tasks) and let the server accept several concurrent handshakes.
* **Security**: in production use TLS over TCP or a VPN; rate limiting / brute-force protection; provisioning codes with a TTL.
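
Splitting a large hash list into bounded batches is a one-liner with slices; a sketch, with `MAX_BATCH` as an assumed constant:

```rust
const MAX_BATCH: usize = 1000; // assumed server-side cap per BATCH_CHECK_* message

/// Yield the hash list in wire-sized batches.
fn batches<'a>(hashes: &'a [[u8; 32]]) -> impl Iterator<Item = &'a [[u8; 32]]> {
    hashes.chunks(MAX_BATCH)
}
```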

server/.sqlx/query-2d6e5810f76e780a4a9b54c5ea39d707be614eb304dc6b4f32d8b6d28464c4b5.json (generated, new file)
@@ -0,0 +1,56 @@
{
  "db_name": "SQLite",
  "query": "\n SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username\n FROM provisioning_codes pc\n JOIN machines m ON pc.machine_id = m.id\n JOIN users u ON m.user_id = u.id\n WHERE pc.code = ? AND pc.used = 0\n ",
  "describe": {
    "columns": [
      { "name": "id", "ordinal": 0, "type_info": "Integer" },
      { "name": "code", "ordinal": 1, "type_info": "Text" },
      { "name": "expires_at", "ordinal": 2, "type_info": "Datetime" },
      { "name": "used", "ordinal": 3, "type_info": "Bool" },
      { "name": "machine_id", "ordinal": 4, "type_info": "Integer" },
      { "name": "user_id", "ordinal": 5, "type_info": "Integer" },
      { "name": "username", "ordinal": 6, "type_info": "Text" }
    ],
    "parameters": { "Right": 1 },
    "nullable": [true, false, false, true, true, false, false]
  },
  "hash": "2d6e5810f76e780a4a9b54c5ea39d707be614eb304dc6b4f32d8b6d28464c4b5"
}

server/.sqlx/query-43af0c22d05eca56b2a7b1f6eed873102d8e006330fd7d8063657d2df936b3fb.json (generated, new file)
@@ -0,0 +1,26 @@
{
  "db_name": "SQLite",
  "query": "SELECT id, user_id FROM machines WHERE id = ?",
  "describe": {
    "columns": [
      { "name": "id", "ordinal": 0, "type_info": "Integer" },
      { "name": "user_id", "ordinal": 1, "type_info": "Integer" }
    ],
    "parameters": { "Right": 1 },
    "nullable": [false, false]
  },
  "hash": "43af0c22d05eca56b2a7b1f6eed873102d8e006330fd7d8063657d2df936b3fb"
}

server/.sqlx/query-508e673540beae31730d323bbb52d91747bb405ef3d6f4a7f20776fdeb618688.json (generated, new file)
@@ -0,0 +1,12 @@
{
  "db_name": "SQLite",
  "query": "UPDATE provisioning_codes SET used = 1 WHERE id = ?",
  "describe": {
    "columns": [],
    "parameters": { "Right": 1 },
    "nullable": []
  },
  "hash": "508e673540beae31730d323bbb52d91747bb405ef3d6f4a7f20776fdeb618688"
}

server/.sqlx/query-9f9215a05f729db6f707c84967f4f11033d39d17ded98f4fe9fb48f3d1598596.json (generated, new file)
@@ -0,0 +1,32 @@
{
  "db_name": "SQLite",
  "query": "SELECT id, username, password_hash FROM users WHERE username = ?",
  "describe": {
    "columns": [
      { "name": "id", "ordinal": 0, "type_info": "Integer" },
      { "name": "username", "ordinal": 1, "type_info": "Text" },
      { "name": "password_hash", "ordinal": 2, "type_info": "Text" }
    ],
    "parameters": { "Right": 1 },
    "nullable": [true, false, false]
  },
  "hash": "9f9215a05f729db6f707c84967f4f11033d39d17ded98f4fe9fb48f3d1598596"
}

server/.sqlx/query-cc5f2e47cc53dd29682506ff84f07f7d0914e3141e62b470e84b3886b50764a1.json (generated, new file)
@@ -0,0 +1,26 @@
{
  "db_name": "SQLite",
  "query": "SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
  "describe": {
    "columns": [
      { "name": "id", "ordinal": 0, "type_info": "Integer" },
      { "name": "user_id", "ordinal": 1, "type_info": "Integer" }
    ],
    "parameters": { "Right": 2 },
    "nullable": [false, false]
  },
  "hash": "cc5f2e47cc53dd29682506ff84f07f7d0914e3141e62b470e84b3886b50764a1"
}

server/Cargo.lock (generated)
@@ -38,6 +38,18 @@ version = "1.0.99"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b0674a1ddeecb70197781e945de4b3b8ffb61fa939a5597bcf48503737663100"

+[[package]]
+name = "arrayref"
+version = "0.3.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"
+
+[[package]]
+name = "arrayvec"
+version = "0.7.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
+
 [[package]]
 name = "atoi"
 version = "2.0.0"
@@ -153,6 +165,15 @@ dependencies = [
  "zeroize",
 ]

+[[package]]
+name = "bincode"
+version = "1.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b1f45e9417d87227c7a56d22e471c6206462cba514c7590c09aff4cf6d1ddcad"
+dependencies = [
+ "serde",
+]
+
 [[package]]
 name = "bitflags"
 version = "2.9.4"
@@ -162,6 +183,19 @@ dependencies = [
  "serde",
 ]

+[[package]]
+name = "blake3"
+version = "1.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3888aaa89e4b2a40fca9848e400f6a658a5a3978de7be858e209cafa8be9a4a0"
+dependencies = [
+ "arrayref",
+ "arrayvec",
+ "cc",
+ "cfg-if",
+ "constant_time_eq",
+]
+
 [[package]]
 name = "block-buffer"
 version = "0.10.4"
@@ -254,6 +288,12 @@ version = "0.9.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "c2459377285ad874054d797f3ccebf984978aa39129f6eafde5cdc8315b612f8"

+[[package]]
+name = "constant_time_eq"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"
+
 [[package]]
 name = "core-foundation-sys"
 version = "0.8.7"
@@ -364,6 +404,16 @@ version = "1.0.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"

+[[package]]
+name = "errno"
+version = "0.3.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
+dependencies = [
+ "libc",
+ "windows-sys 0.59.0",
+]
+
 [[package]]
 name = "etcetera"
 version = "0.8.0"
@@ -386,6 +436,12 @@ dependencies = [
  "pin-project-lite",
 ]

+[[package]]
+name = "fastrand"
+version = "2.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
+
 [[package]]
 name = "find-msvc-tools"
 version = "0.1.1"
@@ -903,6 +959,12 @@ dependencies = [
  "vcpkg",
 ]

+[[package]]
+name = "linux-raw-sys"
+version = "0.11.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039"
+
 [[package]]
 name = "litemap"
 version = "0.8.0"
@@ -1249,6 +1311,19 @@ version = "0.1.26"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace"

+[[package]]
+name = "rustix"
+version = "1.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cd15f8a2c5551a84d56efdc1cd049089e409ac19a3072d5037a17fd70719ff3e"
+dependencies = [
+ "bitflags",
+ "errno",
+ "libc",
+ "linux-raw-sys",
+ "windows-sys 0.59.0",
+]
+
 [[package]]
 name = "rustls"
 version = "0.23.31"
@@ -1362,11 +1437,16 @@ dependencies = [
  "anyhow",
  "axum",
  "bcrypt",
+ "bincode",
+ "blake3",
+ "bytes",
  "chrono",
+ "hex",
  "rand",
  "serde",
  "serde_json",
  "sqlx",
+ "tempfile",
  "tokio",
  "tower-http",
  "uuid",
@@ -1712,6 +1792,19 @@ dependencies = [
  "syn",
 ]

+[[package]]
+name = "tempfile"
+version = "3.22.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "84fa4d11fadde498443cca10fd3ac23c951f0dc59e080e9f4b93d4df4e4eea53"
+dependencies = [
+ "fastrand",
+ "getrandom 0.3.3",
+ "once_cell",
+ "rustix",
+ "windows-sys 0.59.0",
+]
+
 [[package]]
 name = "thiserror"
 version = "2.0.16"

server/Cargo.toml
@@ -14,4 +14,11 @@ uuid = { version = "1.0", features = ["v4", "serde"] }
 chrono = { version = "0.4", features = ["serde"] }
 tower-http = { version = "0.6.6", features = ["cors", "fs"] }
 anyhow = "1.0"
 rand = "0.8"
+blake3 = "1.5"
+bytes = "1.0"
+bincode = "1.3"
+hex = "0.4"
+
+[dev-dependencies]
+tempfile = "3.0"

server/src/controllers/machines.rs
@@ -87,6 +87,7 @@ impl MachinesController {
             id: row.get("id"),
             user_id: row.get("user_id"),
             uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
+            machine_id: row.get::<String, _>("uuid"),
             name: row.get("name"),
             created_at: row.get("created_at"),
         })
@@ -109,6 +110,7 @@ impl MachinesController {
             id: row.get("id"),
             user_id: row.get("user_id"),
             uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
+            machine_id: row.get::<String, _>("uuid"),
             name: row.get("name"),
             created_at: row.get("created_at"),
         });

server/src/controllers/mod.rs
@@ -1,3 +1,4 @@
 pub mod auth;
 pub mod machines;
+pub mod snapshots;
 pub mod users;

server/src/controllers/snapshots.rs (new file)
@@ -0,0 +1,184 @@
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, FsType};
use crate::sync::protocol::MetaType;
use crate::utils::{error::*, models::*, DbPool};
use serde::Serialize;
use chrono::{DateTime, Utc};

// Basic snapshot info for listing
#[derive(Debug, Serialize)]
pub struct SnapshotSummary {
    pub id: String,
    pub snapshot_hash: String,
    pub created_at: String,
}

// Detailed snapshot info with disk/partition data
#[derive(Debug, Serialize)]
pub struct SnapshotDetails {
    pub id: String,
    pub snapshot_hash: String,
    pub created_at: String,
    pub disks: Vec<DiskInfo>,
}

#[derive(Debug, Serialize)]
pub struct DiskInfo {
    pub serial: String,
    pub size_bytes: u64,
    pub partitions: Vec<PartitionInfo>,
}

#[derive(Debug, Serialize)]
pub struct PartitionInfo {
    pub fs_type: String,
    pub start_lba: u64,
    pub end_lba: u64,
    pub size_bytes: u64,
}

pub struct SnapshotsController;

impl SnapshotsController {
    pub async fn get_machine_snapshots(
        pool: &DbPool,
        machine_id: i64,
        user: &User,
    ) -> AppResult<Vec<SnapshotSummary>> {
        // Verify machine access
        let machine = sqlx::query!(
            "SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
            machine_id,
            user.id
        )
        .fetch_optional(pool)
        .await
        .map_err(|e| AppError::DatabaseError(e.to_string()))?;

        if machine.is_none() {
            return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
        }

        let _machine = machine.unwrap();

        let storage = Storage::new("./data");
        let mut snapshot_summaries = Vec::new();

        // List all snapshots for this machine from storage
        match storage.list_snapshots(machine_id).await {
            Ok(snapshot_ids) => {
                for snapshot_id in snapshot_ids {
                    // Load snapshot reference to get hash and timestamp
                    if let Ok(Some((snapshot_hash, created_at_timestamp))) = storage.load_snapshot_ref(machine_id, &snapshot_id).await {
                        let created_at = DateTime::from_timestamp(created_at_timestamp as i64, 0)
                            .unwrap_or_else(|| Utc::now())
                            .format("%Y-%m-%d %H:%M:%S UTC")
                            .to_string();

                        snapshot_summaries.push(SnapshotSummary {
                            id: snapshot_id,
                            snapshot_hash: hex::encode(snapshot_hash),
                            created_at,
                        });
                    }
                }
            },
            Err(_) => {
                // If no snapshots directory exists, return empty list
                return Ok(Vec::new());
            }
        }

        // Sort by creation time (newest first)
        snapshot_summaries.sort_by(|a, b| b.created_at.cmp(&a.created_at));

        Ok(snapshot_summaries)
    }

    pub async fn get_snapshot_details(
        pool: &DbPool,
        machine_id: i64,
        snapshot_id: String,
        user: &User,
    ) -> AppResult<SnapshotDetails> {
        // Verify machine access
        let machine = sqlx::query!(
            "SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
            machine_id,
            user.id
        )
        .fetch_optional(pool)
        .await
        .map_err(|e| AppError::DatabaseError(e.to_string()))?;

        if machine.is_none() {
            return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
        }

        let _machine = machine.unwrap();

        let storage = Storage::new("./data");

        // Load snapshot reference to get hash and timestamp
        let (snapshot_hash, created_at_timestamp) = storage.load_snapshot_ref(machine_id, &snapshot_id).await
            .map_err(|_| AppError::NotFoundError("Snapshot not found".to_string()))?
            .ok_or_else(|| AppError::NotFoundError("Snapshot not found".to_string()))?;

        // Load snapshot metadata
        let snapshot_meta = storage.load_meta(MetaType::Snapshot, &snapshot_hash).await
            .map_err(|_| AppError::NotFoundError("Snapshot metadata not found".to_string()))?
            .ok_or_else(|| AppError::NotFoundError("Snapshot metadata not found".to_string()))?;

        if let MetaObj::Snapshot(snapshot_obj) = snapshot_meta {
            let mut disks = Vec::new();

            for disk_hash in snapshot_obj.disk_hashes {
                if let Ok(Some(disk_meta)) = storage.load_meta(MetaType::Disk, &disk_hash).await {
                    if let MetaObj::Disk(disk_obj) = disk_meta {
                        let mut partitions = Vec::new();

                        for partition_hash in disk_obj.partition_hashes {
                            if let Ok(Some(partition_meta)) = storage.load_meta(MetaType::Partition, &partition_hash).await {
                                if let MetaObj::Partition(partition_obj) = partition_meta {
                                    let fs_type_str = match partition_obj.fs_type_code {
                                        FsType::Ext => "ext",
                                        FsType::Ntfs => "ntfs",
                                        FsType::Fat32 => "fat32",
                                        FsType::Unknown => "unknown",
                                    };

                                    partitions.push(PartitionInfo {
                                        fs_type: fs_type_str.to_string(),
                                        start_lba: partition_obj.start_lba,
                                        end_lba: partition_obj.end_lba,
                                        size_bytes: (partition_obj.end_lba - partition_obj.start_lba) * 512,
                                    });
                                }
                            }
                        }

                        disks.push(DiskInfo {
                            serial: disk_obj.serial,
                            size_bytes: disk_obj.disk_size_bytes,
                            partitions,
                        });
                    }
                }
            }

            // Convert timestamp to readable format
            let created_at_str = DateTime::<Utc>::from_timestamp(created_at_timestamp as i64, 0)
                .map(|dt| dt.format("%Y-%m-%d %H:%M:%S").to_string())
                .unwrap_or_else(|| "Unknown".to_string());

            Ok(SnapshotDetails {
                id: snapshot_id,
                snapshot_hash: hex::encode(snapshot_hash),
                created_at: created_at_str,
                disks,
            })
        } else {
            Err(AppError::ValidationError("Invalid snapshot metadata".to_string()))
        }
    }
}

server/src/main.rs
@@ -1,13 +1,14 @@
 mod controllers;
 mod routes;
 mod utils;
+mod sync;

 use anyhow::Result;
 use axum::{
     routing::{delete, get, post, put},
     Router,
 };
-use routes::{accounts, admin, auth as auth_routes, config, machines, setup};
+use routes::{accounts, admin, auth, config, machines, setup, snapshots};
 use std::path::Path;
 use tokio::signal;
 use tower_http::{
@@ -15,15 +16,19 @@ use tower_http::{
     services::{ServeDir, ServeFile},
 };
 use utils::init_database;
+use sync::{SyncServer, server::SyncServerConfig};

 #[tokio::main]
 async fn main() -> Result<()> {
     let pool = init_database().await?;

+    let sync_pool = pool.clone();
+
     let api_routes = Router::new()
         .route("/setup/status", get(setup::get_setup_status))
         .route("/setup/init", post(setup::init_setup))
-        .route("/auth/login", post(auth_routes::login))
-        .route("/auth/logout", post(auth_routes::logout))
+        .route("/auth/login", post(auth::login))
+        .route("/auth/logout", post(auth::logout))
         .route("/accounts/me", get(accounts::me))
         .route("/admin/users", get(admin::get_users))
         .route("/admin/users", post(admin::create_user_handler))
@@ -35,7 +40,10 @@ async fn main() -> Result<()> {
         .route("/machines/register", post(machines::register_machine))
         .route("/machines/provisioning-code", post(machines::create_provisioning_code))
         .route("/machines", get(machines::get_machines))
+        .route("/machines/{id}", get(machines::get_machine))
         .route("/machines/{id}", delete(machines::delete_machine))
+        .route("/machines/{id}/snapshots", get(snapshots::get_machine_snapshots))
+        .route("/machines/{machine_id}/snapshots/{snapshot_id}", get(snapshots::get_snapshot_details))
         .layer(CorsLayer::permissive())
         .with_state(pool);
@@ -51,8 +59,18 @@ async fn main() -> Result<()> {
         println!("Warning: dist directory not found at {}", dist_path);
     }

+    let sync_config = SyncServerConfig::default();
+    let sync_server = SyncServer::new(sync_config.clone(), sync_pool);
+
+    tokio::spawn(async move {
+        if let Err(e) = sync_server.start().await {
+            eprintln!("Sync server error: {}", e);
+        }
+    });
+
     let listener = tokio::net::TcpListener::bind("0.0.0.0:8379").await?;
-    println!("Server running on http://0.0.0.0:8379");
+    println!("HTTP server running on http://0.0.0.0:8379");
+    println!("Sync server running on {}:{}", sync_config.bind_address, sync_config.port);

     axum::serve(listener, app)
         .with_graceful_shutdown(shutdown_signal())

server/src/routes/machines.rs
@@ -43,6 +43,21 @@ pub async fn get_machines(
     Ok(success_response(machines))
 }

+pub async fn get_machine(
+    auth_user: AuthUser,
+    State(pool): State<DbPool>,
+    Path(machine_id): Path<i64>,
+) -> Result<Json<Machine>, AppError> {
+    let machine = MachinesController::get_machine_by_id(&pool, machine_id).await?;
+
+    // Check if user has access to this machine
+    if auth_user.user.role != UserRole::Admin && machine.user_id != auth_user.user.id {
+        return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
+    }
+
+    Ok(success_response(machine))
+}
+
 pub async fn delete_machine(
     auth_user: AuthUser,
     State(pool): State<DbPool>,

server/src/routes/mod.rs
@@ -4,3 +4,4 @@ pub mod config;
 pub mod machines;
 pub mod setup;
 pub mod accounts;
+pub mod snapshots;

server/src/routes/snapshots.rs (new file)
@@ -0,0 +1,32 @@
use axum::{extract::{Path, State}, Json};
use crate::controllers::snapshots::{SnapshotsController, SnapshotSummary, SnapshotDetails};
use crate::utils::{auth::AuthUser, error::AppResult, DbPool};

pub async fn get_machine_snapshots(
    State(pool): State<DbPool>,
    Path(machine_id): Path<i64>,
    auth_user: AuthUser,
) -> AppResult<Json<Vec<SnapshotSummary>>> {
    let snapshots = SnapshotsController::get_machine_snapshots(
        &pool,
        machine_id,
        &auth_user.user,
    ).await?;

    Ok(Json(snapshots))
}

pub async fn get_snapshot_details(
    State(pool): State<DbPool>,
    Path((machine_id, snapshot_id)): Path<(i64, String)>,
    auth_user: AuthUser,
) -> AppResult<Json<SnapshotDetails>> {
    let snapshot = SnapshotsController::get_snapshot_details(
        &pool,
        machine_id,
        snapshot_id,
        &auth_user.user,
    ).await?;

    Ok(Json(snapshot))
}

server/src/sync/meta.rs (new file)
@@ -0,0 +1,605 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io::{Error, ErrorKind, Result};
use crate::sync::protocol::{Hash, MetaType};

/// Filesystem type codes
#[repr(u32)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum FsType {
    Ext = 1,
    Ntfs = 2,
    Fat32 = 3,
    Unknown = 0,
}

impl From<u32> for FsType {
    fn from(value: u32) -> Self {
        match value {
            1 => FsType::Ext,
            2 => FsType::Ntfs,
            3 => FsType::Fat32,
            _ => FsType::Unknown,
        }
    }
}

/// Directory entry types
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum EntryType {
    File = 0,
    Dir = 1,
    Symlink = 2,
}

impl TryFrom<u8> for EntryType {
    type Error = Error;

    fn try_from(value: u8) -> Result<Self> {
        match value {
            0 => Ok(EntryType::File),
            1 => Ok(EntryType::Dir),
            2 => Ok(EntryType::Symlink),
            _ => Err(Error::new(ErrorKind::InvalidData, "Unknown entry type")),
        }
    }
}

/// File metadata object
#[derive(Debug, Clone)]
pub struct FileObj {
    pub version: u8,
    pub fs_type_code: FsType,
    pub size: u64,
    pub mode: u32,
    pub uid: u32,
    pub gid: u32,
    pub mtime_unixsec: u64,
    pub chunk_hashes: Vec<Hash>,
}

impl FileObj {
    pub fn new(
        fs_type_code: FsType,
        size: u64,
        mode: u32,
        uid: u32,
        gid: u32,
        mtime_unixsec: u64,
        chunk_hashes: Vec<Hash>,
    ) -> Self {
        Self {
            version: 1,
            fs_type_code,
            size,
            mode,
            uid,
            gid,
            mtime_unixsec,
            chunk_hashes,
        }
    }

    pub fn serialize(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        buf.put_u8(self.version);
        buf.put_u32_le(self.fs_type_code as u32);
        buf.put_u64_le(self.size);
        buf.put_u32_le(self.mode);
        buf.put_u32_le(self.uid);
        buf.put_u32_le(self.gid);
        buf.put_u64_le(self.mtime_unixsec);
        buf.put_u32_le(self.chunk_hashes.len() as u32);

        for hash in &self.chunk_hashes {
            buf.put_slice(hash);
        }

        Ok(buf.freeze())
    }

    pub fn deserialize(mut data: Bytes) -> Result<Self> {
        if data.remaining() < 41 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "FileObj data too short"));
        }

        let version = data.get_u8();
        if version != 1 {
            return Err(Error::new(ErrorKind::InvalidData, "Unsupported FileObj version"));
        }

        let fs_type_code = FsType::from(data.get_u32_le());
        let size = data.get_u64_le();
        let mode = data.get_u32_le();
        let uid = data.get_u32_le();
        let gid = data.get_u32_le();
        let mtime_unixsec = data.get_u64_le();
        let chunk_count = data.get_u32_le() as usize;

        if data.remaining() < chunk_count * 32 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "FileObj chunk hashes too short"));
        }

        let mut chunk_hashes = Vec::with_capacity(chunk_count);
        for _ in 0..chunk_count {
            let mut hash = [0u8; 32];
            data.copy_to_slice(&mut hash);
            chunk_hashes.push(hash);
        }

        Ok(Self {
            version,
            fs_type_code,
            size,
            mode,
            uid,
            gid,
            mtime_unixsec,
            chunk_hashes,
        })
    }

    pub fn compute_hash(&self) -> Result<Hash> {
        let serialized = self.serialize()?;
        Ok(blake3::hash(&serialized).into())
    }
}

/// Directory entry
#[derive(Debug, Clone)]
pub struct DirEntry {
    pub entry_type: EntryType,
    pub name: String,
    pub target_meta_hash: Hash,
}

/// Directory metadata object
#[derive(Debug, Clone)]
pub struct DirObj {
    pub version: u8,
    pub entries: Vec<DirEntry>,
}

impl DirObj {
    pub fn new(entries: Vec<DirEntry>) -> Self {
        Self {
            version: 1,
            entries,
        }
    }

    pub fn serialize(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        buf.put_u8(self.version);
        buf.put_u32_le(self.entries.len() as u32);

        for entry in &self.entries {
            buf.put_u8(entry.entry_type as u8);
            let name_bytes = entry.name.as_bytes();
            buf.put_u16_le(name_bytes.len() as u16);
            buf.put_slice(name_bytes);
            buf.put_slice(&entry.target_meta_hash);
        }

        Ok(buf.freeze())
    }

    pub fn deserialize(mut data: Bytes) -> Result<Self> {
        if data.remaining() < 5 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj data too short"));
        }

        let version = data.get_u8();
        if version != 1 {
            return Err(Error::new(ErrorKind::InvalidData, "Unsupported DirObj version"));
        }

        let entry_count = data.get_u32_le() as usize;
        let mut entries = Vec::with_capacity(entry_count);

        for _ in 0..entry_count {
            if data.remaining() < 35 {
                return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj entry too short"));
            }

            let entry_type = EntryType::try_from(data.get_u8())?;
            let name_len = data.get_u16_le() as usize;

            if data.remaining() < name_len + 32 {
                return Err(Error::new(ErrorKind::UnexpectedEof, "DirObj entry name/hash too short"));
            }

            let name = String::from_utf8(data.copy_to_bytes(name_len).to_vec())
                .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in entry name"))?;

            let mut target_meta_hash = [0u8; 32];
            data.copy_to_slice(&mut target_meta_hash);

            entries.push(DirEntry {
                entry_type,
                name,
                target_meta_hash,
            });
        }

        Ok(Self {
            version,
            entries,
        })
    }

    pub fn compute_hash(&self) -> Result<Hash> {
        let serialized = self.serialize()?;
        Ok(blake3::hash(&serialized).into())
    }
}

/// Partition metadata object
#[derive(Debug, Clone)]
pub struct PartitionObj {
    pub version: u8,
    pub fs_type_code: FsType,
    pub root_dir_hash: Hash,
    pub start_lba: u64,
    pub end_lba: u64,
    pub type_guid: [u8; 16],
}

impl PartitionObj {
    pub fn new(
        fs_type_code: FsType,
        root_dir_hash: Hash,
        start_lba: u64,
        end_lba: u64,
        type_guid: [u8; 16],
    ) -> Self {
        Self {
            version: 1,
            fs_type_code,
            root_dir_hash,
            start_lba,
            end_lba,
            type_guid,
        }
    }

    pub fn serialize(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        buf.put_u8(self.version);
        buf.put_u32_le(self.fs_type_code as u32);
        buf.put_slice(&self.root_dir_hash);
        buf.put_u64_le(self.start_lba);
        buf.put_u64_le(self.end_lba);
        buf.put_slice(&self.type_guid);

        Ok(buf.freeze())
    }

    pub fn deserialize(mut data: Bytes) -> Result<Self> {
        if data.remaining() < 69 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "PartitionObj data too short"));
        }

        let version = data.get_u8();
        if version != 1 {
            return Err(Error::new(ErrorKind::InvalidData, "Unsupported PartitionObj version"));
        }

        let fs_type_code = FsType::from(data.get_u32_le());

        let mut root_dir_hash = [0u8; 32];
        data.copy_to_slice(&mut root_dir_hash);

        let start_lba = data.get_u64_le();
        let end_lba = data.get_u64_le();

        let mut type_guid = [0u8; 16];
        data.copy_to_slice(&mut type_guid);

        Ok(Self {
            version,
            fs_type_code,
            root_dir_hash,
            start_lba,
            end_lba,
            type_guid,
        })
    }

    pub fn compute_hash(&self) -> Result<Hash> {
        let serialized = self.serialize()?;
        Ok(blake3::hash(&serialized).into())
    }
}

/// Disk metadata object
#[derive(Debug, Clone)]
pub struct DiskObj {
    pub version: u8,
    pub partition_hashes: Vec<Hash>,
    pub disk_size_bytes: u64,
    pub serial: String,
}

impl DiskObj {
    pub fn new(partition_hashes: Vec<Hash>, disk_size_bytes: u64, serial: String) -> Self {
        Self {
            version: 1,
            partition_hashes,
            disk_size_bytes,
            serial,
        }
    }

    pub fn serialize(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        buf.put_u8(self.version);
        buf.put_u32_le(self.partition_hashes.len() as u32);

        for hash in &self.partition_hashes {
            buf.put_slice(hash);
        }

        buf.put_u64_le(self.disk_size_bytes);

        let serial_bytes = self.serial.as_bytes();
        buf.put_u16_le(serial_bytes.len() as u16);
        buf.put_slice(serial_bytes);

        Ok(buf.freeze())
    }

    pub fn deserialize(mut data: Bytes) -> Result<Self> {
        println!("DiskObj::deserialize: input data length = {}", data.len());

        if data.remaining() < 15 {
            println!("DiskObj::deserialize: data too short, remaining = {}", data.remaining());
            return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj data too short"));
        }

        let version = data.get_u8();
        println!("DiskObj::deserialize: version = {}", version);
        if version != 1 {
            println!("DiskObj::deserialize: unsupported version {}", version);
            return Err(Error::new(ErrorKind::InvalidData, "Unsupported DiskObj version"));
        }

        let partition_count = data.get_u32_le() as usize;
        println!("DiskObj::deserialize: partition_count = {}", partition_count);

        if data.remaining() < partition_count * 32 + 10 {
            println!("DiskObj::deserialize: not enough data for partitions, remaining = {}, needed = {}",
                data.remaining(), partition_count * 32 + 10);
            return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj partitions too short"));
        }

        let mut partition_hashes = Vec::with_capacity(partition_count);
        for i in 0..partition_count {
            let mut hash = [0u8; 32];
            data.copy_to_slice(&mut hash);
            println!("DiskObj::deserialize: partition {} hash = {}", i, hex::encode(&hash));
            partition_hashes.push(hash);
        }

        let disk_size_bytes = data.get_u64_le();
        println!("DiskObj::deserialize: disk_size_bytes = {}", disk_size_bytes);

        let serial_len = data.get_u16_le() as usize;
        println!("DiskObj::deserialize: serial_len = {}", serial_len);

        if data.remaining() < serial_len {
            println!("DiskObj::deserialize: not enough data for serial, remaining = {}, needed = {}",
                data.remaining(), serial_len);
            return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj serial too short"));
        }

        let serial_bytes = data.copy_to_bytes(serial_len).to_vec();
        println!("DiskObj::deserialize: serial_bytes = {:?}", serial_bytes);

        let serial = String::from_utf8(serial_bytes)
            .map_err(|e| {
                println!("DiskObj::deserialize: UTF-8 error: {}", e);
                Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in serial")
            })?;

        println!("DiskObj::deserialize: serial = '{}'", serial);
        println!("DiskObj::deserialize: successfully deserialized");

        Ok(Self {
            version,
            partition_hashes,
            disk_size_bytes,
            serial,
        })
    }

    pub fn compute_hash(&self) -> Result<Hash> {
        let serialized = self.serialize()?;
        Ok(blake3::hash(&serialized).into())
    }
}

/// Snapshot metadata object
#[derive(Debug, Clone)]
pub struct SnapshotObj {
    pub version: u8,
    pub created_at_unixsec: u64,
    pub disk_hashes: Vec<Hash>,
}

impl SnapshotObj {
    pub fn new(created_at_unixsec: u64, disk_hashes: Vec<Hash>) -> Self {
        Self {
            version: 1,
            created_at_unixsec,
            disk_hashes,
        }
    }

    pub fn serialize(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        buf.put_u8(self.version);
        buf.put_u64_le(self.created_at_unixsec);
        buf.put_u32_le(self.disk_hashes.len() as u32);

        for hash in &self.disk_hashes {
            buf.put_slice(hash);
        }

        Ok(buf.freeze())
    }

    pub fn deserialize(mut data: Bytes) -> Result<Self> {
        if data.remaining() < 13 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotObj data too short"));
        }

        let version = data.get_u8();
        if version != 1 {
            return Err(Error::new(ErrorKind::InvalidData, "Unsupported SnapshotObj version"));
        }

        let created_at_unixsec = data.get_u64_le();
        let disk_count = data.get_u32_le() as usize;

        if data.remaining() < disk_count * 32 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotObj disk hashes too short"));
        }

        let mut disk_hashes = Vec::with_capacity(disk_count);
        for _ in 0..disk_count {
            let mut hash = [0u8; 32];
            data.copy_to_slice(&mut hash);
            disk_hashes.push(hash);
        }

        Ok(Self {
            version,
            created_at_unixsec,
            disk_hashes,
        })
    }

    pub fn compute_hash(&self) -> Result<Hash> {
        let serialized = self.serialize()?;
        Ok(blake3::hash(&serialized).into())
    }
}

/// Meta object wrapper
#[derive(Debug, Clone)]
pub enum MetaObj {
    File(FileObj),
    Dir(DirObj),
    Partition(PartitionObj),
    Disk(DiskObj),
    Snapshot(SnapshotObj),
}

impl MetaObj {
    pub fn meta_type(&self) -> MetaType {
        match self {
            MetaObj::File(_) => MetaType::File,
            MetaObj::Dir(_) => MetaType::Dir,
            MetaObj::Partition(_) => MetaType::Partition,
            MetaObj::Disk(_) => MetaType::Disk,
            MetaObj::Snapshot(_) => MetaType::Snapshot,
        }
|
}
|
||||||
|
|
||||||
|
pub fn serialize(&self) -> Result<Bytes> {
|
||||||
|
match self {
|
||||||
|
MetaObj::File(obj) => obj.serialize(),
|
||||||
|
MetaObj::Dir(obj) => obj.serialize(),
|
||||||
|
MetaObj::Partition(obj) => obj.serialize(),
|
||||||
|
MetaObj::Disk(obj) => obj.serialize(),
|
||||||
|
MetaObj::Snapshot(obj) => obj.serialize(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn deserialize(meta_type: MetaType, data: Bytes) -> Result<Self> {
|
||||||
|
match meta_type {
|
||||||
|
MetaType::File => Ok(MetaObj::File(FileObj::deserialize(data)?)),
|
||||||
|
MetaType::Dir => Ok(MetaObj::Dir(DirObj::deserialize(data)?)),
|
||||||
|
MetaType::Partition => Ok(MetaObj::Partition(PartitionObj::deserialize(data)?)),
|
||||||
|
MetaType::Disk => Ok(MetaObj::Disk(DiskObj::deserialize(data)?)),
|
||||||
|
MetaType::Snapshot => Ok(MetaObj::Snapshot(SnapshotObj::deserialize(data)?)),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn compute_hash(&self) -> Result<Hash> {
|
||||||
|
match self {
|
||||||
|
MetaObj::File(obj) => obj.compute_hash(),
|
||||||
|
MetaObj::Dir(obj) => obj.compute_hash(),
|
||||||
|
MetaObj::Partition(obj) => obj.compute_hash(),
|
||||||
|
MetaObj::Disk(obj) => obj.compute_hash(),
|
||||||
|
MetaObj::Snapshot(obj) => obj.compute_hash(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_file_obj_serialization() {
|
||||||
|
let obj = FileObj::new(
|
||||||
|
FsType::Ext,
|
||||||
|
1024,
|
||||||
|
0o644,
|
||||||
|
1000,
|
||||||
|
1000,
|
||||||
|
1234567890,
|
||||||
|
vec![[1; 32], [2; 32]],
|
||||||
|
);
|
||||||
|
|
||||||
|
let serialized = obj.serialize().unwrap();
|
||||||
|
let deserialized = FileObj::deserialize(serialized).unwrap();
|
||||||
|
|
||||||
|
assert_eq!(obj.fs_type_code, deserialized.fs_type_code);
|
||||||
|
assert_eq!(obj.size, deserialized.size);
|
||||||
|
assert_eq!(obj.chunk_hashes, deserialized.chunk_hashes);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_dir_obj_serialization() {
|
||||||
|
let entries = vec![
|
||||||
|
DirEntry {
|
||||||
|
entry_type: EntryType::File,
|
||||||
|
name: "test.txt".to_string(),
|
||||||
|
target_meta_hash: [1; 32],
|
||||||
|
},
|
||||||
|
DirEntry {
|
||||||
|
entry_type: EntryType::Dir,
|
||||||
|
name: "subdir".to_string(),
|
||||||
|
target_meta_hash: [2; 32],
|
||||||
|
},
|
||||||
|
];
|
||||||
|
|
||||||
|
let obj = DirObj::new(entries);
|
||||||
|
let serialized = obj.serialize().unwrap();
|
||||||
|
let deserialized = DirObj::deserialize(serialized).unwrap();
|
||||||
|
|
||||||
|
assert_eq!(obj.entries.len(), deserialized.entries.len());
|
||||||
|
assert_eq!(obj.entries[0].name, deserialized.entries[0].name);
|
||||||
|
assert_eq!(obj.entries[1].entry_type, deserialized.entries[1].entry_type);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_hash_computation() {
|
||||||
|
let obj = FileObj::new(FsType::Ext, 1024, 0o644, 1000, 1000, 1234567890, vec![]);
|
||||||
|
let hash1 = obj.compute_hash().unwrap();
|
||||||
|
let hash2 = obj.compute_hash().unwrap();
|
||||||
|
assert_eq!(hash1, hash2);
|
||||||
|
|
||||||
|
let obj2 = FileObj::new(FsType::Ext, 1025, 0o644, 1000, 1000, 1234567890, vec![]);
|
||||||
|
let hash3 = obj2.compute_hash().unwrap();
|
||||||
|
assert_ne!(hash1, hash3);
|
||||||
|
}
|
||||||
|
}
|
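The meta objects above form the content-addressed graph: each object serializes to a compact binary body, `meta_hash = BLAKE3(body)` is its identity, and a parent references its children only through these hashes. A minimal sketch of how a client would link a disk into a snapshot (the partition hash, size, serial, and timestamp are made-up example values; the `hex` crate is assumed for printing):

```
// Sketch only: assumes DiskObj and SnapshotObj from the file above are in scope.
fn build_snapshot_chain() -> std::io::Result<()> {
    // One disk with a single (placeholder) partition meta hash.
    let disk = DiskObj::new(vec![[0u8; 32]], 500_107_862_016, "WD-EXAMPLE-1".to_string());
    let disk_hash = disk.compute_hash()?; // BLAKE3 over the serialized body

    // The snapshot references the disk only by its meta hash.
    let snapshot = SnapshotObj::new(1_700_000_000, vec![disk_hash]);
    let snapshot_hash = snapshot.compute_hash()?;

    println!("disk meta:     {}", hex::encode(disk_hash));
    println!("snapshot meta: {}", hex::encode(snapshot_hash));
    Ok(())
}
```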
server/src/sync/mod.rs (new file, 8 lines)
@@ -0,0 +1,8 @@
pub mod protocol;
pub mod server;
pub mod storage;
pub mod session;
pub mod meta;
pub mod validation;

pub use server::SyncServer;
server/src/sync/protocol.rs (new file, 620 lines)
@@ -0,0 +1,620 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io::{Error, ErrorKind, Result};

/// Command codes for the sync protocol
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Command {
    Hello = 0x01,
    HelloOk = 0x02,
    AuthUserPass = 0x10,
    AuthCode = 0x11,
    AuthOk = 0x12,
    AuthFail = 0x13,
    BatchCheckChunk = 0x20,
    CheckChunkResp = 0x21,
    SendChunk = 0x22,
    ChunkOk = 0x23,
    ChunkFail = 0x24,
    BatchCheckMeta = 0x30,
    CheckMetaResp = 0x31,
    SendMeta = 0x32,
    MetaOk = 0x33,
    MetaFail = 0x34,
    SendSnapshot = 0x40,
    SnapshotOk = 0x41,
    SnapshotFail = 0x42,
    Close = 0xFF,
}

impl TryFrom<u8> for Command {
    type Error = Error;

    fn try_from(value: u8) -> Result<Self> {
        match value {
            0x01 => Ok(Command::Hello),
            0x02 => Ok(Command::HelloOk),
            0x10 => Ok(Command::AuthUserPass),
            0x11 => Ok(Command::AuthCode),
            0x12 => Ok(Command::AuthOk),
            0x13 => Ok(Command::AuthFail),
            0x20 => Ok(Command::BatchCheckChunk),
            0x21 => Ok(Command::CheckChunkResp),
            0x22 => Ok(Command::SendChunk),
            0x23 => Ok(Command::ChunkOk),
            0x24 => Ok(Command::ChunkFail),
            0x30 => Ok(Command::BatchCheckMeta),
            0x31 => Ok(Command::CheckMetaResp),
            0x32 => Ok(Command::SendMeta),
            0x33 => Ok(Command::MetaOk),
            0x34 => Ok(Command::MetaFail),
            0x40 => Ok(Command::SendSnapshot),
            0x41 => Ok(Command::SnapshotOk),
            0x42 => Ok(Command::SnapshotFail),
            0xFF => Ok(Command::Close),
            _ => Err(Error::new(ErrorKind::InvalidData, "Unknown command code")),
        }
    }
}

/// Message header structure (24 bytes fixed)
#[derive(Debug, Clone)]
pub struct MessageHeader {
    pub cmd: Command,
    pub flags: u8,
    pub reserved: [u8; 2],
    pub session_id: [u8; 16],
    pub payload_len: u32,
}

impl MessageHeader {
    pub const SIZE: usize = 24;

    pub fn new(cmd: Command, session_id: [u8; 16], payload_len: u32) -> Self {
        Self {
            cmd,
            flags: 0,
            reserved: [0; 2],
            session_id,
            payload_len,
        }
    }

    pub fn serialize(&self) -> [u8; Self::SIZE] {
        let mut buf = [0u8; Self::SIZE];
        buf[0] = self.cmd as u8;
        buf[1] = self.flags;
        buf[2..4].copy_from_slice(&self.reserved);
        buf[4..20].copy_from_slice(&self.session_id);
        buf[20..24].copy_from_slice(&self.payload_len.to_le_bytes());
        buf
    }

    pub fn deserialize(buf: &[u8]) -> Result<Self> {
        if buf.len() < Self::SIZE {
            return Err(Error::new(ErrorKind::UnexpectedEof, "Header too short"));
        }

        let cmd = Command::try_from(buf[0])?;
        let flags = buf[1];
        let reserved = [buf[2], buf[3]];
        let mut session_id = [0u8; 16];
        session_id.copy_from_slice(&buf[4..20]);
        let payload_len = u32::from_le_bytes([buf[20], buf[21], buf[22], buf[23]]);

        Ok(Self {
            cmd,
            flags,
            reserved,
            session_id,
            payload_len,
        })
    }
}

/// A 32-byte BLAKE3 hash
pub type Hash = [u8; 32];

/// Meta object types
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum MetaType {
    File = 1,
    Dir = 2,
    Partition = 3,
    Disk = 4,
    Snapshot = 5,
}

impl TryFrom<u8> for MetaType {
    type Error = Error;

    fn try_from(value: u8) -> Result<Self> {
        match value {
            1 => Ok(MetaType::File),
            2 => Ok(MetaType::Dir),
            3 => Ok(MetaType::Partition),
            4 => Ok(MetaType::Disk),
            5 => Ok(MetaType::Snapshot),
            _ => Err(Error::new(ErrorKind::InvalidData, "Unknown meta type")),
        }
    }
}

/// Protocol message types
#[derive(Debug, Clone)]
pub enum Message {
    Hello {
        client_type: u8,
        auth_type: u8,
    },
    HelloOk,
    AuthUserPass {
        username: String,
        password: String,
        machine_id: i64,
    },
    AuthCode {
        code: String,
    },
    AuthOk {
        session_id: [u8; 16],
    },
    AuthFail {
        reason: String,
    },
    BatchCheckChunk {
        hashes: Vec<Hash>,
    },
    CheckChunkResp {
        missing_hashes: Vec<Hash>,
    },
    SendChunk {
        hash: Hash,
        data: Bytes,
    },
    ChunkOk,
    ChunkFail {
        reason: String,
    },
    BatchCheckMeta {
        items: Vec<(MetaType, Hash)>,
    },
    CheckMetaResp {
        missing_items: Vec<(MetaType, Hash)>,
    },
    SendMeta {
        meta_type: MetaType,
        meta_hash: Hash,
        body: Bytes,
    },
    MetaOk,
    MetaFail {
        reason: String,
    },
    SendSnapshot {
        snapshot_hash: Hash,
        body: Bytes,
    },
    SnapshotOk {
        snapshot_id: String,
    },
    SnapshotFail {
        missing_chunks: Vec<Hash>,
        missing_metas: Vec<(MetaType, Hash)>,
    },
    Close,
}

impl Message {
    /// Serialize message payload to bytes
    pub fn serialize_payload(&self) -> Result<Bytes> {
        let mut buf = BytesMut::new();

        match self {
            Message::Hello { client_type, auth_type } => {
                buf.put_u8(*client_type);
                buf.put_u8(*auth_type);
            }
            Message::HelloOk => {
                // No payload
            }
            Message::AuthUserPass { username, password, machine_id } => {
                let username_bytes = username.as_bytes();
                let password_bytes = password.as_bytes();
                buf.put_u16_le(username_bytes.len() as u16);
                buf.put_slice(username_bytes);
                buf.put_u16_le(password_bytes.len() as u16);
                buf.put_slice(password_bytes);
                buf.put_i64_le(*machine_id);
            }
            Message::AuthCode { code } => {
                let code_bytes = code.as_bytes();
                buf.put_u16_le(code_bytes.len() as u16);
                buf.put_slice(code_bytes);
            }
            Message::AuthOk { session_id } => {
                buf.put_slice(session_id);
            }
            Message::AuthFail { reason } => {
                let reason_bytes = reason.as_bytes();
                buf.put_u16_le(reason_bytes.len() as u16);
                buf.put_slice(reason_bytes);
            }
            Message::BatchCheckChunk { hashes } => {
                buf.put_u32_le(hashes.len() as u32);
                for hash in hashes {
                    buf.put_slice(hash);
                }
            }
            Message::CheckChunkResp { missing_hashes } => {
                buf.put_u32_le(missing_hashes.len() as u32);
                for hash in missing_hashes {
                    buf.put_slice(hash);
                }
            }
            Message::SendChunk { hash, data } => {
                buf.put_slice(hash);
                buf.put_u32_le(data.len() as u32);
                buf.put_slice(data);
            }
            Message::ChunkOk => {
                // No payload
            }
            Message::ChunkFail { reason } => {
                let reason_bytes = reason.as_bytes();
                buf.put_u16_le(reason_bytes.len() as u16);
                buf.put_slice(reason_bytes);
            }
            Message::BatchCheckMeta { items } => {
                buf.put_u32_le(items.len() as u32);
                for (meta_type, hash) in items {
                    buf.put_u8(*meta_type as u8);
                    buf.put_slice(hash);
                }
            }
            Message::CheckMetaResp { missing_items } => {
                buf.put_u32_le(missing_items.len() as u32);
                for (meta_type, hash) in missing_items {
                    buf.put_u8(*meta_type as u8);
                    buf.put_slice(hash);
                }
            }
            Message::SendMeta { meta_type, meta_hash, body } => {
                buf.put_u8(*meta_type as u8);
                buf.put_slice(meta_hash);
                buf.put_u32_le(body.len() as u32);
                buf.put_slice(body);
            }
            Message::MetaOk => {
                // No payload
            }
            Message::MetaFail { reason } => {
                let reason_bytes = reason.as_bytes();
                buf.put_u16_le(reason_bytes.len() as u16);
                buf.put_slice(reason_bytes);
            }
            Message::SendSnapshot { snapshot_hash, body } => {
                buf.put_slice(snapshot_hash);
                buf.put_u32_le(body.len() as u32);
                buf.put_slice(body);
            }
            Message::SnapshotOk { snapshot_id } => {
                let id_bytes = snapshot_id.as_bytes();
                buf.put_u16_le(id_bytes.len() as u16);
                buf.put_slice(id_bytes);
            }
            Message::SnapshotFail { missing_chunks, missing_metas } => {
                buf.put_u32_le(missing_chunks.len() as u32);
                for hash in missing_chunks {
                    buf.put_slice(hash);
                }
                buf.put_u32_le(missing_metas.len() as u32);
                for (meta_type, hash) in missing_metas {
                    buf.put_u8(*meta_type as u8);
                    buf.put_slice(hash);
                }
            }
            Message::Close => {
                // No payload
            }
        }

        Ok(buf.freeze())
    }

    /// Deserialize message payload from bytes
    pub fn deserialize_payload(cmd: Command, mut payload: Bytes) -> Result<Self> {
        match cmd {
            Command::Hello => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "Hello payload too short"));
                }
                let client_type = payload.get_u8();
                let auth_type = payload.get_u8();
                Ok(Message::Hello { client_type, auth_type })
            }
            Command::HelloOk => Ok(Message::HelloOk),
            Command::AuthUserPass => {
                if payload.remaining() < 12 { // 4 bytes for lengths + at least 8 bytes for machine_id
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass payload too short"));
                }
                let username_len = payload.get_u16_le() as usize;
                if payload.remaining() < username_len + 10 { // 2 bytes for password len + 8 bytes for machine_id
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass username too short"));
                }
                let username = String::from_utf8(payload.copy_to_bytes(username_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in username"))?;
                let password_len = payload.get_u16_le() as usize;
                if payload.remaining() < password_len + 8 { // 8 bytes for machine_id
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthUserPass password too short"));
                }
                let password = String::from_utf8(payload.copy_to_bytes(password_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in password"))?;
                let machine_id = payload.get_i64_le();
                Ok(Message::AuthUserPass { username, password, machine_id })
            }
            Command::AuthCode => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthCode payload too short"));
                }
                let code_len = payload.get_u16_le() as usize;
                if payload.remaining() < code_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthCode code too short"));
                }
                let code = String::from_utf8(payload.copy_to_bytes(code_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in code"))?;
                Ok(Message::AuthCode { code })
            }
            Command::AuthOk => {
                if payload.remaining() < 16 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthOk payload too short"));
                }
                let mut session_id = [0u8; 16];
                payload.copy_to_slice(&mut session_id);
                Ok(Message::AuthOk { session_id })
            }
            Command::AuthFail => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthFail payload too short"));
                }
                let reason_len = payload.get_u16_le() as usize;
                if payload.remaining() < reason_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "AuthFail reason too short"));
                }
                let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
                Ok(Message::AuthFail { reason })
            }
            Command::BatchCheckChunk => {
                if payload.remaining() < 4 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckChunk payload too short"));
                }
                let count = payload.get_u32_le() as usize;
                if payload.remaining() < count * 32 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckChunk hashes too short"));
                }
                let mut hashes = Vec::with_capacity(count);
                for _ in 0..count {
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    hashes.push(hash);
                }
                Ok(Message::BatchCheckChunk { hashes })
            }
            Command::CheckChunkResp => {
                if payload.remaining() < 4 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "CheckChunkResp payload too short"));
                }
                let count = payload.get_u32_le() as usize;
                if payload.remaining() < count * 32 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "CheckChunkResp hashes too short"));
                }
                let mut missing_hashes = Vec::with_capacity(count);
                for _ in 0..count {
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    missing_hashes.push(hash);
                }
                Ok(Message::CheckChunkResp { missing_hashes })
            }
            Command::SendChunk => {
                if payload.remaining() < 36 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendChunk payload too short"));
                }
                let mut hash = [0u8; 32];
                payload.copy_to_slice(&mut hash);
                let size = payload.get_u32_le() as usize;
                if payload.remaining() < size {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendChunk data too short"));
                }
                let data = payload.copy_to_bytes(size);
                Ok(Message::SendChunk { hash, data })
            }
            Command::ChunkOk => Ok(Message::ChunkOk),
            Command::ChunkFail => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "ChunkFail payload too short"));
                }
                let reason_len = payload.get_u16_le() as usize;
                if payload.remaining() < reason_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "ChunkFail reason too short"));
                }
                let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
                Ok(Message::ChunkFail { reason })
            }
            Command::BatchCheckMeta => {
                if payload.remaining() < 4 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckMeta payload too short"));
                }
                let count = payload.get_u32_le() as usize;
                if payload.remaining() < count * 33 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "BatchCheckMeta items too short"));
                }
                let mut items = Vec::with_capacity(count);
                for _ in 0..count {
                    let meta_type = MetaType::try_from(payload.get_u8())?;
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    items.push((meta_type, hash));
                }
                Ok(Message::BatchCheckMeta { items })
            }
            Command::CheckMetaResp => {
                if payload.remaining() < 4 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "CheckMetaResp payload too short"));
                }
                let count = payload.get_u32_le() as usize;
                if payload.remaining() < count * 33 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "CheckMetaResp items too short"));
                }
                let mut missing_items = Vec::with_capacity(count);
                for _ in 0..count {
                    let meta_type = MetaType::try_from(payload.get_u8())?;
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    missing_items.push((meta_type, hash));
                }
                Ok(Message::CheckMetaResp { missing_items })
            }
            Command::SendMeta => {
                if payload.remaining() < 37 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendMeta payload too short"));
                }
                let meta_type = MetaType::try_from(payload.get_u8())?;
                let mut meta_hash = [0u8; 32];
                payload.copy_to_slice(&mut meta_hash);
                let body_len = payload.get_u32_le() as usize;
                if payload.remaining() < body_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendMeta body too short"));
                }
                let body = payload.copy_to_bytes(body_len);
                Ok(Message::SendMeta { meta_type, meta_hash, body })
            }
            Command::MetaOk => Ok(Message::MetaOk),
            Command::MetaFail => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "MetaFail payload too short"));
                }
                let reason_len = payload.get_u16_le() as usize;
                if payload.remaining() < reason_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "MetaFail reason too short"));
                }
                let reason = String::from_utf8(payload.copy_to_bytes(reason_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in reason"))?;
                Ok(Message::MetaFail { reason })
            }
            Command::SendSnapshot => {
                if payload.remaining() < 36 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendSnapshot payload too short"));
                }
                let mut snapshot_hash = [0u8; 32];
                payload.copy_to_slice(&mut snapshot_hash);
                let body_len = payload.get_u32_le() as usize;
                if payload.remaining() < body_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SendSnapshot body too short"));
                }
                let body = payload.copy_to_bytes(body_len);
                Ok(Message::SendSnapshot { snapshot_hash, body })
            }
            Command::SnapshotOk => {
                if payload.remaining() < 2 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotOk payload too short"));
                }
                let id_len = payload.get_u16_le() as usize;
                if payload.remaining() < id_len {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotOk id too short"));
                }
                let snapshot_id = String::from_utf8(payload.copy_to_bytes(id_len).to_vec())
                    .map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in snapshot_id"))?;
                Ok(Message::SnapshotOk { snapshot_id })
            }
            Command::SnapshotFail => {
                if payload.remaining() < 8 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail payload too short"));
                }
                let chunk_count = payload.get_u32_le() as usize;
                if payload.remaining() < chunk_count * 32 + 4 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail chunks too short"));
                }
                let mut missing_chunks = Vec::with_capacity(chunk_count);
                for _ in 0..chunk_count {
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    missing_chunks.push(hash);
                }
                let meta_count = payload.get_u32_le() as usize;
                if payload.remaining() < meta_count * 33 {
                    return Err(Error::new(ErrorKind::UnexpectedEof, "SnapshotFail metas too short"));
                }
                let mut missing_metas = Vec::with_capacity(meta_count);
                for _ in 0..meta_count {
                    let meta_type = MetaType::try_from(payload.get_u8())?;
                    let mut hash = [0u8; 32];
                    payload.copy_to_slice(&mut hash);
                    missing_metas.push((meta_type, hash));
                }
                Ok(Message::SnapshotFail { missing_chunks, missing_metas })
            }
            Command::Close => Ok(Message::Close),
        }
    }

    /// Get the command for this message
    pub fn command(&self) -> Command {
        match self {
            Message::Hello { .. } => Command::Hello,
            Message::HelloOk => Command::HelloOk,
            Message::AuthUserPass { .. } => Command::AuthUserPass,
            Message::AuthCode { .. } => Command::AuthCode,
            Message::AuthOk { .. } => Command::AuthOk,
            Message::AuthFail { .. } => Command::AuthFail,
            Message::BatchCheckChunk { .. } => Command::BatchCheckChunk,
            Message::CheckChunkResp { .. } => Command::CheckChunkResp,
            Message::SendChunk { .. } => Command::SendChunk,
            Message::ChunkOk => Command::ChunkOk,
            Message::ChunkFail { .. } => Command::ChunkFail,
            Message::BatchCheckMeta { .. } => Command::BatchCheckMeta,
            Message::CheckMetaResp { .. } => Command::CheckMetaResp,
            Message::SendMeta { .. } => Command::SendMeta,
            Message::MetaOk => Command::MetaOk,
            Message::MetaFail { .. } => Command::MetaFail,
            Message::SendSnapshot { .. } => Command::SendSnapshot,
            Message::SnapshotOk { .. } => Command::SnapshotOk,
            Message::SnapshotFail { .. } => Command::SnapshotFail,
            Message::Close => Command::Close,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_header_serialization() {
        let header = MessageHeader::new(Command::Hello, [1; 16], 42);
        let serialized = header.serialize();
        let deserialized = MessageHeader::deserialize(&serialized).unwrap();

        assert_eq!(deserialized.cmd, Command::Hello);
        assert_eq!(deserialized.session_id, [1; 16]);
        assert_eq!(deserialized.payload_len, 42);
    }

    #[test]
    fn test_hello_message() {
        let msg = Message::Hello { client_type: 1, auth_type: 2 };
        let payload = msg.serialize_payload().unwrap();
        let deserialized = Message::deserialize_payload(Command::Hello, payload).unwrap();

        match deserialized {
            Message::Hello { client_type, auth_type } => {
                assert_eq!(client_type, 1);
                assert_eq!(auth_type, 2);
            }
            _ => panic!("Wrong message type"),
        }
    }
}
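For clarity, this is what a complete frame looks like from the sending side: serialize the payload first, then build the fixed header with the resulting length. A minimal sketch using only the types above (the `encode_frame` helper is illustrative, not part of the file):

```
// Sketch only: frames a Message for the wire.
use bytes::{BufMut, BytesMut};

fn encode_frame(msg: &Message, session_id: [u8; 16]) -> std::io::Result<BytesMut> {
    let payload = msg.serialize_payload()?;
    let header = MessageHeader::new(msg.command(), session_id, payload.len() as u32);

    let mut frame = BytesMut::with_capacity(MessageHeader::SIZE + payload.len());
    frame.put_slice(&header.serialize()); // fixed 24-byte envelope
    frame.put_slice(&payload);            // variable-length payload
    Ok(frame)
}
```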
server/src/sync/server.rs (new file, 462 lines)
@@ -0,0 +1,462 @@
use anyhow::{Context, Result};
use bytes::Bytes;
use sqlx::SqlitePool;
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{TcpListener, TcpStream};
use uuid::Uuid;

use crate::sync::protocol::{Command, Message, MessageHeader, MetaType};
use crate::sync::session::{SessionManager, session_cleanup_task};
use crate::sync::storage::Storage;
use crate::sync::validation::SnapshotValidator;

/// Configuration for the sync server
#[derive(Debug, Clone)]
pub struct SyncServerConfig {
    pub bind_address: String,
    pub port: u16,
    pub data_dir: String,
    pub max_connections: usize,
    pub chunk_size_limit: usize,
    pub meta_size_limit: usize,
    pub batch_limit: usize,
}

impl Default for SyncServerConfig {
    fn default() -> Self {
        Self {
            bind_address: "0.0.0.0".to_string(),
            port: 8380,
            data_dir: "./data".to_string(),
            max_connections: 100,
            chunk_size_limit: 4 * 1024 * 1024, // 4 MiB
            meta_size_limit: 1024 * 1024,      // 1 MiB
            batch_limit: 1000,
        }
    }
}

/// Main sync server
pub struct SyncServer {
    config: SyncServerConfig,
    storage: Storage,
    session_manager: Arc<SessionManager>,
    validator: SnapshotValidator,
}

impl SyncServer {
    pub fn new(config: SyncServerConfig, db_pool: SqlitePool) -> Self {
        let storage = Storage::new(&config.data_dir);
        let session_manager = Arc::new(SessionManager::new(db_pool));
        let validator = SnapshotValidator::new(storage.clone());

        Self {
            config,
            storage,
            session_manager,
            validator,
        }
    }

    /// Start the sync server
    pub async fn start(&self) -> Result<()> {
        // Initialize storage
        self.storage.init().await
            .context("Failed to initialize storage")?;

        let bind_addr = format!("{}:{}", self.config.bind_address, self.config.port);
        let listener = TcpListener::bind(&bind_addr).await
            .with_context(|| format!("Failed to bind to {}", bind_addr))?;

        println!("Sync server listening on {}", bind_addr);

        // Start session cleanup task
        let session_manager_clone = Arc::clone(&self.session_manager);
        tokio::spawn(async move {
            session_cleanup_task(session_manager_clone).await;
        });

        // Accept connections
        loop {
            match listener.accept().await {
                Ok((stream, addr)) => {
                    println!("New sync connection from {}", addr);

                    let handler = ConnectionHandler::new(
                        stream,
                        self.storage.clone(),
                        Arc::clone(&self.session_manager),
                        self.validator.clone(),
                        self.config.clone(),
                    );

                    tokio::spawn(async move {
                        if let Err(e) = handler.handle().await {
                            eprintln!("Connection error from {}: {}", addr, e);
                        }
                    });
                }
                Err(e) => {
                    eprintln!("Failed to accept connection: {}", e);
                }
            }
        }
    }
}

/// Connection handler for individual sync clients
struct ConnectionHandler {
    stream: TcpStream,
    storage: Storage,
    session_manager: Arc<SessionManager>,
    validator: SnapshotValidator,
    config: SyncServerConfig,
    session_id: Option<[u8; 16]>,
    machine_id: Option<i64>,
}

impl ConnectionHandler {
    fn new(
        stream: TcpStream,
        storage: Storage,
        session_manager: Arc<SessionManager>,
        validator: SnapshotValidator,
        config: SyncServerConfig,
    ) -> Self {
        Self {
            stream,
            storage,
            session_manager,
            validator,
            config,
            session_id: None,
            machine_id: None,
        }
    }

    /// Handle the connection
    async fn handle(mut self) -> Result<()> {
        loop {
            // Read message header
            let header = self.read_header().await?;

            // Read payload
            let payload = if header.payload_len > 0 {
                self.read_payload(header.payload_len).await?
            } else {
                Bytes::new()
            };

            // Parse message
            let message = Message::deserialize_payload(header.cmd, payload)
                .context("Failed to deserialize message")?;

            // Handle message
            let response = self.handle_message(message).await?;

            // Send response
            if let Some(response_msg) = response {
                self.send_message(response_msg).await?;
            }

            // Close connection if requested
            if header.cmd == Command::Close {
                break;
            }
        }

        // Clean up session
        if let Some(session_id) = self.session_id {
            self.session_manager.remove_session(&session_id).await;
        }

        Ok(())
    }

    /// Read message header
    async fn read_header(&mut self) -> Result<MessageHeader> {
        let mut header_buf = [0u8; MessageHeader::SIZE];
        self.stream.read_exact(&mut header_buf).await
            .context("Failed to read message header")?;

        MessageHeader::deserialize(&header_buf)
            .context("Failed to parse message header")
    }

    /// Read message payload
    async fn read_payload(&mut self, len: u32) -> Result<Bytes> {
        // The largest legal payload is a chunk frame: 32-byte hash +
        // 4-byte length prefix + up to chunk_size_limit bytes of data.
        // Checking against meta_size_limit here would reject every
        // full-size SendChunk message.
        let max_payload = self.config.chunk_size_limit + 36;
        if len as usize > max_payload {
            return Err(anyhow::anyhow!("Payload too large: {} bytes", len));
        }

        let mut payload_buf = vec![0u8; len as usize];
        self.stream.read_exact(&mut payload_buf).await
            .context("Failed to read message payload")?;

        Ok(Bytes::from(payload_buf))
    }

    /// Send a message
    async fn send_message(&mut self, message: Message) -> Result<()> {
        let session_id = self.session_id.unwrap_or([0u8; 16]);
        let payload = message.serialize_payload()?;

        let header = MessageHeader::new(message.command(), session_id, payload.len() as u32);
        let header_bytes = header.serialize();

        self.stream.write_all(&header_bytes).await
            .context("Failed to write message header")?;

        if !payload.is_empty() {
            self.stream.write_all(&payload).await
                .context("Failed to write message payload")?;
        }

        self.stream.flush().await
            .context("Failed to flush stream")?;

        Ok(())
    }

    /// Handle a received message
    async fn handle_message(&mut self, message: Message) -> Result<Option<Message>> {
        match message {
            Message::Hello { client_type: _, auth_type: _ } => {
                Ok(Some(Message::HelloOk))
            }

            Message::AuthUserPass { username, password, machine_id } => {
                match self.session_manager.authenticate_userpass(&username, &password, machine_id).await {
                    Ok(session) => {
                        self.session_id = Some(session.session_id);
                        self.machine_id = Some(session.machine_id);
                        Ok(Some(Message::AuthOk { session_id: session.session_id }))
                    }
                    Err(e) => {
                        Ok(Some(Message::AuthFail { reason: e.to_string() }))
                    }
                }
            }

            Message::AuthCode { code } => {
                match self.session_manager.authenticate_code(&code).await {
                    Ok(session) => {
                        self.session_id = Some(session.session_id);
                        self.machine_id = Some(session.machine_id);
                        Ok(Some(Message::AuthOk { session_id: session.session_id }))
                    }
                    Err(e) => {
                        Ok(Some(Message::AuthFail { reason: e.to_string() }))
                    }
                }
            }

            Message::BatchCheckChunk { hashes } => {
                self.require_auth()?;

                if hashes.len() > self.config.batch_limit {
                    return Err(anyhow::anyhow!("Batch size exceeds limit: {}", hashes.len()));
                }

                let missing_hashes = self.validator.validate_chunk_batch(&hashes).await?;
                Ok(Some(Message::CheckChunkResp { missing_hashes }))
            }

            Message::SendChunk { hash, data } => {
                self.require_auth()?;

                if data.len() > self.config.chunk_size_limit {
                    return Ok(Some(Message::ChunkFail {
                        reason: format!("Chunk too large: {} bytes", data.len())
                    }));
                }

                match self.storage.store_chunk(&hash, &data).await {
                    Ok(()) => Ok(Some(Message::ChunkOk)),
                    Err(e) => Ok(Some(Message::ChunkFail { reason: e.to_string() })),
                }
            }

            Message::BatchCheckMeta { items } => {
                self.require_auth()?;

                if items.len() > self.config.batch_limit {
                    return Err(anyhow::anyhow!("Batch size exceeds limit: {}", items.len()));
                }

                let missing_items = self.validator.validate_meta_batch(&items).await?;
                Ok(Some(Message::CheckMetaResp { missing_items }))
            }

            Message::SendMeta { meta_type, meta_hash, body } => {
                self.require_auth()?;

                if body.len() > self.config.meta_size_limit {
                    return Ok(Some(Message::MetaFail {
                        reason: format!("Meta object too large: {} bytes", body.len())
                    }));
                }

                match self.storage.store_meta(meta_type, &meta_hash, &body).await {
                    Ok(()) => Ok(Some(Message::MetaOk)),
                    Err(e) => Ok(Some(Message::MetaFail { reason: e.to_string() })),
                }
            }

            Message::SendSnapshot { snapshot_hash, body } => {
                self.require_auth()?;

                if body.len() > self.config.meta_size_limit {
                    println!("Snapshot rejected: size limit exceeded ({} > {})", body.len(), self.config.meta_size_limit);
                    return Ok(Some(Message::SnapshotFail {
                        missing_chunks: vec![],
                        missing_metas: vec![],
                    }));
                }

                println!("Validating snapshot hash: {}", hex::encode(&snapshot_hash));

                // Validate snapshot
                match self.validator.validate_snapshot(&snapshot_hash, &body).await {
                    Ok(validation_result) => {
                        println!("Validation result - is_valid: {}, missing_chunks: {}, missing_metas: {}",
                            validation_result.is_valid,
                            validation_result.missing_chunks.len(),
                            validation_result.missing_metas.len());

                        if validation_result.is_valid {
                            // Store snapshot meta
                            if let Err(e) = self.storage.store_meta(MetaType::Snapshot, &snapshot_hash, &body).await {
                                println!("Failed to store snapshot meta: {}", e);
                                return Ok(Some(Message::SnapshotFail {
                                    missing_chunks: vec![],
                                    missing_metas: vec![],
                                }));
                            }

                            // Create snapshot reference
                            let snapshot_id = Uuid::new_v4().to_string();
                            let machine_id = self.machine_id.unwrap();
                            let created_at = chrono::Utc::now().timestamp() as u64;

                            println!("Creating snapshot reference: machine_id={}, snapshot_id={}", machine_id, snapshot_id);

                            if let Err(e) = self.storage.store_snapshot_ref(
                                machine_id,
                                &snapshot_id,
                                &snapshot_hash,
                                created_at
                            ).await {
                                println!("Failed to store snapshot reference: {}", e);
                                return Ok(Some(Message::SnapshotFail {
                                    missing_chunks: vec![],
                                    missing_metas: vec![],
                                }));
                            }

                            println!("Snapshot successfully stored with ID: {}", snapshot_id);
                            Ok(Some(Message::SnapshotOk { snapshot_id }))
                        } else {
                            println!("Snapshot validation failed - returning missing items");
                            Ok(Some(Message::SnapshotFail {
                                missing_chunks: validation_result.missing_chunks,
                                missing_metas: validation_result.missing_metas,
                            }))
                        }
                    }
                    Err(e) => {
                        println!("Snapshot validation error: {}", e);
                        Ok(Some(Message::SnapshotFail {
                            missing_chunks: vec![],
                            missing_metas: vec![],
                        }))
                    }
                }
            }

            Message::Close => {
                Ok(None) // No response needed
            }

            // These are response messages that shouldn't be received by the server
            Message::HelloOk | Message::AuthOk { .. } | Message::AuthFail { .. } |
            Message::CheckChunkResp { .. } | Message::ChunkOk | Message::ChunkFail { .. } |
            Message::CheckMetaResp { .. } | Message::MetaOk | Message::MetaFail { .. } |
            Message::SnapshotOk { .. } | Message::SnapshotFail { .. } => {
                Err(anyhow::anyhow!("Unexpected response message from client"))
            }
        }
    }

    /// Require authentication for protected operations
    fn require_auth(&self) -> Result<()> {
        if self.session_id.is_none() {
            return Err(anyhow::anyhow!("Authentication required"));
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;
    use sqlx::sqlite::SqlitePoolOptions;

    async fn setup_test_server() -> (SyncServer, TempDir) {
        let temp_dir = TempDir::new().unwrap();

        let pool = SqlitePoolOptions::new()
            .connect(":memory:")
            .await
            .unwrap();

        // Create required tables
        sqlx::query!(
            r#"
            CREATE TABLE users (
                id INTEGER PRIMARY KEY,
                username TEXT UNIQUE NOT NULL,
                password_hash TEXT NOT NULL,
                active INTEGER DEFAULT 1
            )
            "#
        )
        .execute(&pool)
        .await
        .unwrap();

        sqlx::query!(
            r#"
            CREATE TABLE provisioning_codes (
                id INTEGER PRIMARY KEY,
                code TEXT UNIQUE NOT NULL,
                created_by INTEGER NOT NULL,
                expires_at TEXT NOT NULL,
                used INTEGER DEFAULT 0,
                used_at TEXT,
                FOREIGN KEY (created_by) REFERENCES users (id)
            )
            "#
        )
        .execute(&pool)
        .await
        .unwrap();

        let config = SyncServerConfig {
            data_dir: temp_dir.path().to_string_lossy().to_string(),
            ..Default::default()
        };

        (SyncServer::new(config, pool), temp_dir)
    }

    #[tokio::test]
    async fn test_server_creation() {
        let (server, _temp_dir) = setup_test_server().await;

        // Initialize storage to verify everything works
        server.storage.init().await.unwrap();
    }
}
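Embedding the server in a binary is a thin wrapper around `SyncServer::new` plus `start`. A sketch of a tokio entry point (the database URL is a placeholder, not part of the file above):

```
// Sketch only: wiring SyncServer into a tokio entry point.
use sqlx::sqlite::SqlitePoolOptions;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let pool = SqlitePoolOptions::new()
        .connect("sqlite://server.db") // placeholder URL
        .await?;

    let config = SyncServerConfig::default(); // 0.0.0.0:8380, ./data
    let server = SyncServer::new(config, pool);

    // Runs the accept loop until the process is stopped.
    server.start().await
}
```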
server/src/sync/session.rs (new file, 343 lines)
@@ -0,0 +1,343 @@
use anyhow::{Context, Result};
use rand::RngCore;
use sqlx::SqlitePool;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

/// Session information
#[derive(Debug, Clone)]
pub struct Session {
    pub session_id: [u8; 16],
    pub machine_id: i64,
    pub user_id: i64,
    pub created_at: chrono::DateTime<chrono::Utc>,
}

/// Session manager for sync connections
#[derive(Debug)]
pub struct SessionManager {
    sessions: Arc<RwLock<HashMap<[u8; 16], Session>>>,
    db_pool: SqlitePool,
}

impl SessionManager {
    pub fn new(db_pool: SqlitePool) -> Self {
        Self {
            sessions: Arc::new(RwLock::new(HashMap::new())),
            db_pool,
        }
    }

    /// Get database pool reference
    pub fn get_db_pool(&self) -> &SqlitePool {
        &self.db_pool
    }

    /// Generate a new session ID
    fn generate_session_id() -> [u8; 16] {
        let mut session_id = [0u8; 16];
        rand::thread_rng().fill_bytes(&mut session_id);
        session_id
    }

    /// Authenticate with username/password and validate machine ownership
    pub async fn authenticate_userpass(&self, username: &str, password: &str, machine_id: i64) -> Result<Session> {
        // Query user from database
        let user = sqlx::query!(
            "SELECT id, username, password_hash FROM users WHERE username = ?",
            username
        )
        .fetch_optional(&self.db_pool)
        .await
        .context("Failed to query user")?;

        let user = user.ok_or_else(|| anyhow::anyhow!("Invalid credentials"))?;

        // Verify password
        if !bcrypt::verify(password, &user.password_hash)
            .context("Failed to verify password")? {
            return Err(anyhow::anyhow!("Invalid credentials"));
        }

        let user_id = user.id.unwrap_or(0) as i64;

        // Validate machine ownership
        let machine = sqlx::query!(
            "SELECT id, user_id FROM machines WHERE id = ?",
            machine_id
        )
        .fetch_optional(&self.db_pool)
        .await
        .context("Failed to query machine")?;

        let machine = machine.ok_or_else(|| anyhow::anyhow!("Machine not found"))?;

        let machine_user_id = machine.user_id;
        if machine_user_id != user_id {
            return Err(anyhow::anyhow!("Machine does not belong to user"));
        }

        // Create session with machine ID
        let session_id = Self::generate_session_id();
        let machine_id = machine.id; // Use database ID
        let session = Session {
            session_id,
            machine_id,
            user_id,
            created_at: chrono::Utc::now(),
        };

        // Store session
        let mut sessions = self.sessions.write().await;
        sessions.insert(session_id, session.clone());

        Ok(session)
    }

    /// Authenticate with provisioning code
    pub async fn authenticate_code(&self, code: &str) -> Result<Session> {
        // Query provisioning code from database
        let provisioning_code = sqlx::query!(
            r#"
            SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username
            FROM provisioning_codes pc
            JOIN machines m ON pc.machine_id = m.id
            JOIN users u ON m.user_id = u.id
            WHERE pc.code = ? AND pc.used = 0
            "#,
            code
        )
        .fetch_optional(&self.db_pool)
        .await
        .context("Failed to query provisioning code")?;

        let provisioning_code = provisioning_code
            .ok_or_else(|| anyhow::anyhow!("Invalid or used provisioning code"))?;

        // Check if code is expired
        let expires_at: chrono::DateTime<chrono::Utc> = chrono::DateTime::from_naive_utc_and_offset(
            provisioning_code.expires_at,
            chrono::Utc
        );

        if chrono::Utc::now() > expires_at {
            return Err(anyhow::anyhow!("Provisioning code expired"));
        }

        // Mark code as used
        sqlx::query!(
            "UPDATE provisioning_codes SET used = 1 WHERE id = ?",
            provisioning_code.id
        )
        .execute(&self.db_pool)
        .await
        .context("Failed to mark provisioning code as used")?;

        // Create session
        let session_id = Self::generate_session_id();
        let machine_id = provisioning_code.machine_id.expect("Machine ID should not be null"); // Use machine ID from database
        let session = Session {
            session_id,
            machine_id,
            user_id: provisioning_code.user_id as i64,
            created_at: chrono::Utc::now(),
        };

        // Store session
        let mut sessions = self.sessions.write().await;
        sessions.insert(session_id, session.clone());

        Ok(session)
    }

    /// Get session by session ID
    pub async fn get_session(&self, session_id: &[u8; 16]) -> Option<Session> {
        let sessions = self.sessions.read().await;
        sessions.get(session_id).cloned()
    }

    /// Validate session and return associated machine ID
    pub async fn validate_session(&self, session_id: &[u8; 16]) -> Result<i64> {
        let session = self.get_session(session_id).await
            .ok_or_else(|| anyhow::anyhow!("Invalid session"))?;

        // Check if session is too old (24 hours)
        let session_age = chrono::Utc::now() - session.created_at;
        if session_age > chrono::Duration::hours(24) {
            // Remove expired session
            let mut sessions = self.sessions.write().await;
            sessions.remove(session_id);
            return Err(anyhow::anyhow!("Session expired"));
        }

        Ok(session.machine_id)
    }

    /// Remove session
    pub async fn remove_session(&self, session_id: &[u8; 16]) {
        let mut sessions = self.sessions.write().await;
        sessions.remove(session_id);
    }

    /// Clean up expired sessions
    pub async fn cleanup_expired_sessions(&self) {
        let mut sessions = self.sessions.write().await;
        let now = chrono::Utc::now();

        sessions.retain(|_, session| {
            let age = now - session.created_at;
            age <= chrono::Duration::hours(24)
        });
    }

    /// Get active session count
    pub async fn active_session_count(&self) -> usize {
        let sessions = self.sessions.read().await;
        sessions.len()
    }

    /// List active sessions
    pub async fn list_active_sessions(&self) -> Vec<Session> {
        let sessions = self.sessions.read().await;
        sessions.values().cloned().collect()
    }
}

/// Periodic cleanup task for expired sessions
pub async fn session_cleanup_task(session_manager: Arc<SessionManager>) {
    let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(3600)); // Every hour

    loop {
        interval.tick().await;
        session_manager.cleanup_expired_sessions().await;
        println!("Cleaned up expired sync sessions. Active sessions: {}",
            session_manager.active_session_count().await);
    }
}

#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
use sqlx::sqlite::SqlitePoolOptions;
|
||||||
|
|
||||||
|
async fn setup_test_db() -> SqlitePool {
|
||||||
|
let pool = SqlitePoolOptions::new()
|
||||||
|
.connect(":memory:")
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Create tables
|
||||||
|
sqlx::query!(
|
||||||
|
r#"
|
||||||
|
CREATE TABLE users (
|
||||||
|
id INTEGER PRIMARY KEY,
|
||||||
|
username TEXT UNIQUE NOT NULL,
|
||||||
|
password_hash TEXT NOT NULL,
|
||||||
|
active INTEGER DEFAULT 1
|
||||||
|
)
|
||||||
|
"#
|
||||||
|
)
|
||||||
|
.execute(&pool)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
sqlx::query!(
|
||||||
|
r#"
|
||||||
|
CREATE TABLE provisioning_codes (
|
||||||
|
id INTEGER PRIMARY KEY,
|
||||||
|
code TEXT UNIQUE NOT NULL,
|
||||||
|
created_by INTEGER NOT NULL,
|
||||||
|
expires_at TEXT NOT NULL,
|
||||||
|
used INTEGER DEFAULT 0,
|
||||||
|
used_at TEXT,
|
||||||
|
FOREIGN KEY (created_by) REFERENCES users (id)
|
||||||
|
)
|
||||||
|
"#
|
||||||
|
)
|
||||||
|
.execute(&pool)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Insert test user
|
||||||
|
let password_hash = bcrypt::hash("password123", bcrypt::DEFAULT_COST).unwrap();
|
||||||
|
sqlx::query!(
|
||||||
|
"INSERT INTO users (username, password_hash) VALUES (?, ?)",
|
||||||
|
"testuser",
|
||||||
|
password_hash
|
||||||
|
)
|
||||||
|
.execute(&pool)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
pool
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_authenticate_userpass() {
|
||||||
|
let pool = setup_test_db().await;
|
||||||
|
let session_manager = SessionManager::new(pool);
|
||||||
|
|
||||||
|
let session = session_manager
|
||||||
|
.authenticate_userpass("testuser", "password123")
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(session.user_id, 1);
|
||||||
|
assert!(!session.machine_id.is_empty());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_authenticate_userpass_invalid() {
|
||||||
|
let pool = setup_test_db().await;
|
||||||
|
let session_manager = SessionManager::new(pool);
|
||||||
|
|
||||||
|
let result = session_manager
|
||||||
|
.authenticate_userpass("testuser", "wrongpassword")
|
||||||
|
.await;
|
||||||
|
|
||||||
|
assert!(result.is_err());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_session_validation() {
|
||||||
|
let pool = setup_test_db().await;
|
||||||
|
let session_manager = SessionManager::new(pool);
|
||||||
|
|
||||||
|
let session = session_manager
|
||||||
|
.authenticate_userpass("testuser", "password123")
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let machine_id = session_manager
|
||||||
|
.validate_session(&session.session_id)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(machine_id, session.machine_id);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_session_cleanup() {
|
||||||
|
let pool = setup_test_db().await;
|
||||||
|
let session_manager = SessionManager::new(pool);
|
||||||
|
|
||||||
|
let session = session_manager
|
||||||
|
.authenticate_userpass("testuser", "password123")
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(session_manager.active_session_count().await, 1);
|
||||||
|
|
||||||
|
// Manually expire the session
|
||||||
|
{
|
||||||
|
let mut sessions = session_manager.sessions.write().await;
|
||||||
|
if let Some(mut session) = sessions.get_mut(&session.session_id) {
|
||||||
|
session.created_at = chrono::Utc::now() - chrono::Duration::hours(25);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
session_manager.cleanup_expired_sessions().await;
|
||||||
|
assert_eq!(session_manager.active_session_count().await, 0);
|
||||||
|
}
|
||||||
|
}
|
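A minimal sketch of how `session_cleanup_task` could be wired up at server startup; the `start_session_maintenance` helper is hypothetical, while `SessionManager::new(pool)` is the constructor the tests above use:

```rust
use std::sync::Arc;

// Sketch: run the hourly session sweep in the background and hand the
// manager to the connection handlers.
async fn start_session_maintenance(pool: sqlx::SqlitePool) -> Arc<SessionManager> {
    let session_manager = Arc::new(SessionManager::new(pool));
    tokio::spawn(session_cleanup_task(session_manager.clone()));
    session_manager
}
```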
406
server/src/sync/storage.rs
Normal file
@@ -0,0 +1,406 @@
use anyhow::{Context, Result};
use bytes::Bytes;
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use tokio::fs;
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::meta::MetaObj;

/// Storage backend for chunks and metadata objects
#[derive(Debug, Clone)]
pub struct Storage {
    data_dir: PathBuf,
}

impl Storage {
    pub fn new<P: AsRef<Path>>(data_dir: P) -> Self {
        Self {
            data_dir: data_dir.as_ref().to_path_buf(),
        }
    }

    /// Initialize storage directories
    pub async fn init(&self) -> Result<()> {
        let chunks_dir = self.data_dir.join("sync").join("chunks");
        let meta_dir = self.data_dir.join("sync").join("meta");
        let machines_dir = self.data_dir.join("sync").join("machines");

        fs::create_dir_all(&chunks_dir).await
            .context("Failed to create chunks directory")?;

        fs::create_dir_all(&meta_dir).await
            .context("Failed to create meta directory")?;

        fs::create_dir_all(&machines_dir).await
            .context("Failed to create machines directory")?;

        // Create subdirectories for each meta type
        for meta_type in &["files", "dirs", "partitions", "disks", "snapshots"] {
            fs::create_dir_all(meta_dir.join(meta_type)).await
                .with_context(|| format!("Failed to create meta/{} directory", meta_type))?;
        }

        Ok(())
    }

    /// Get chunk storage path for a hash
    fn chunk_path(&self, hash: &Hash) -> PathBuf {
        let hex = hex::encode(hash);
        let ab = &hex[0..2];
        let cd = &hex[2..4];
        let filename = format!("{}.chk", hex);

        self.data_dir
            .join("sync")
            .join("chunks")
            .join(ab)
            .join(cd)
            .join(filename)
    }

    /// Get meta storage path for a hash and type
    fn meta_path(&self, meta_type: MetaType, hash: &Hash) -> PathBuf {
        let hex = hex::encode(hash);
        let ab = &hex[0..2];
        let cd = &hex[2..4];
        let filename = format!("{}.meta", hex);

        let type_dir = match meta_type {
            MetaType::File => "files",
            MetaType::Dir => "dirs",
            MetaType::Partition => "partitions",
            MetaType::Disk => "disks",
            MetaType::Snapshot => "snapshots",
        };

        self.data_dir
            .join("sync")
            .join("meta")
            .join(type_dir)
            .join(ab)
            .join(cd)
            .join(filename)
    }

    /// Check if a chunk exists
    pub async fn chunk_exists(&self, hash: &Hash) -> bool {
        let path = self.chunk_path(hash);
        path.exists()
    }

    /// Check if multiple chunks exist
    pub async fn chunks_exist(&self, hashes: &[Hash]) -> Result<HashSet<Hash>> {
        let mut existing = HashSet::new();

        for hash in hashes {
            if self.chunk_exists(hash).await {
                existing.insert(*hash);
            }
        }

        Ok(existing)
    }

    /// Store a chunk
    pub async fn store_chunk(&self, hash: &Hash, data: &[u8]) -> Result<()> {
        // Verify hash
        let computed_hash = blake3::hash(data);
        if computed_hash.as_bytes() != hash {
            return Err(anyhow::anyhow!("Chunk hash mismatch"));
        }

        let path = self.chunk_path(hash);

        // Create parent directories
        if let Some(parent) = path.parent() {
            fs::create_dir_all(parent).await
                .context("Failed to create chunk directory")?;
        }

        // Write to temporary file first, then rename (atomic write)
        let temp_path = path.with_extension("tmp");
        fs::write(&temp_path, data).await
            .context("Failed to write chunk to temporary file")?;

        fs::rename(&temp_path, &path).await
            .context("Failed to rename chunk file")?;

        Ok(())
    }

    /// Load a chunk
    pub async fn load_chunk(&self, hash: &Hash) -> Result<Option<Bytes>> {
        let path = self.chunk_path(hash);

        if !path.exists() {
            return Ok(None);
        }

        let data = fs::read(&path).await
            .context("Failed to read chunk file")?;

        // Verify hash
        let computed_hash = blake3::hash(&data);
        if computed_hash.as_bytes() != hash {
            return Err(anyhow::anyhow!("Stored chunk hash mismatch"));
        }

        Ok(Some(Bytes::from(data)))
    }

    /// Check if a meta object exists
    pub async fn meta_exists(&self, meta_type: MetaType, hash: &Hash) -> bool {
        let path = self.meta_path(meta_type, hash);
        path.exists()
    }

    /// Check if multiple meta objects exist
    pub async fn metas_exist(&self, items: &[(MetaType, Hash)]) -> Result<HashSet<(MetaType, Hash)>> {
        let mut existing = HashSet::new();

        for &(meta_type, hash) in items {
            if self.meta_exists(meta_type, &hash).await {
                existing.insert((meta_type, hash));
            }
        }

        Ok(existing)
    }

    /// Store a meta object
    pub async fn store_meta(&self, meta_type: MetaType, hash: &Hash, body: &[u8]) -> Result<()> {
        // Verify hash
        let computed_hash = blake3::hash(body);
        if computed_hash.as_bytes() != hash {
            return Err(anyhow::anyhow!("Meta object hash mismatch"));
        }

        let path = self.meta_path(meta_type, hash);

        // Create parent directories
        if let Some(parent) = path.parent() {
            fs::create_dir_all(parent).await
                .context("Failed to create meta directory")?;
        }

        // Write to temporary file first, then rename (atomic write)
        let temp_path = path.with_extension("tmp");
        fs::write(&temp_path, body).await
            .context("Failed to write meta to temporary file")?;

        fs::rename(&temp_path, &path).await
            .context("Failed to rename meta file")?;

        Ok(())
    }

    /// Load a meta object
    pub async fn load_meta(&self, meta_type: MetaType, hash: &Hash) -> Result<Option<MetaObj>> {
        let path = self.meta_path(meta_type, hash);

        if !path.exists() {
            println!("Meta file does not exist: {:?}", path);
            return Ok(None);
        }

        println!("Reading meta file: {:?}", path);
        let data = fs::read(&path).await
            .context("Failed to read meta file")?;

        println!("Read {} bytes from meta file", data.len());

        // Verify hash
        let computed_hash = blake3::hash(&data);
        if computed_hash.as_bytes() != hash {
            println!("Hash mismatch: expected {}, got {}", hex::encode(hash), hex::encode(computed_hash.as_bytes()));
            return Err(anyhow::anyhow!("Stored meta object hash mismatch"));
        }

        println!("Hash verified, deserializing {:?} object", meta_type);
        let meta_obj = MetaObj::deserialize(meta_type, Bytes::from(data))
            .context("Failed to deserialize meta object")?;

        println!("Successfully deserialized meta object");
        Ok(Some(meta_obj))
    }

    /// Get snapshot storage path for a machine
    fn snapshot_ref_path(&self, machine_id: i64, snapshot_id: &str) -> PathBuf {
        self.data_dir
            .join("sync")
            .join("machines")
            .join(machine_id.to_string())
            .join("snapshots")
            .join(format!("{}.ref", snapshot_id))
    }

    /// Store a snapshot reference
    pub async fn store_snapshot_ref(
        &self,
        machine_id: i64,
        snapshot_id: &str,
        snapshot_hash: &Hash,
        created_at: u64
    ) -> Result<()> {
        let path = self.snapshot_ref_path(machine_id, snapshot_id);

        // Create parent directories
        if let Some(parent) = path.parent() {
            fs::create_dir_all(parent).await
                .context("Failed to create snapshot reference directory")?;
        }

        // Create snapshot reference content
        let content = format!("{}:{}", hex::encode(snapshot_hash), created_at);

        // Write to temporary file first, then rename (atomic write)
        let temp_path = path.with_extension("tmp");
        fs::write(&temp_path, content).await
            .context("Failed to write snapshot reference to temporary file")?;

        fs::rename(&temp_path, &path).await
            .context("Failed to rename snapshot reference file")?;

        Ok(())
    }

    /// Load a snapshot reference
    pub async fn load_snapshot_ref(&self, machine_id: i64, snapshot_id: &str) -> Result<Option<(Hash, u64)>> {
        let path = self.snapshot_ref_path(machine_id, snapshot_id);

        if !path.exists() {
            return Ok(None);
        }

        let content = fs::read_to_string(&path).await
            .context("Failed to read snapshot reference file")?;

        let parts: Vec<&str> = content.trim().split(':').collect();
        if parts.len() != 2 {
            return Err(anyhow::anyhow!("Invalid snapshot reference format"));
        }

        let snapshot_hash: Hash = hex::decode(parts[0])
            .context("Failed to decode snapshot hash")?
            .try_into()
            .map_err(|_| anyhow::anyhow!("Invalid snapshot hash length"))?;

        let created_at: u64 = parts[1].parse()
            .context("Failed to parse snapshot timestamp")?;

        Ok(Some((snapshot_hash, created_at)))
    }

    /// List snapshots for a machine
    pub async fn list_snapshots(&self, machine_id: i64) -> Result<Vec<String>> {
        let snapshots_dir = self.data_dir
            .join("sync")
            .join("machines")
            .join(machine_id.to_string())
            .join("snapshots");

        if !snapshots_dir.exists() {
            return Ok(Vec::new());
        }

        let mut entries = fs::read_dir(&snapshots_dir).await
            .context("Failed to read snapshots directory")?;

        let mut snapshots = Vec::new();
        while let Some(entry) = entries.next_entry().await
            .context("Failed to read snapshot entry")? {

            if let Some(file_name) = entry.file_name().to_str() {
                if file_name.ends_with(".ref") {
                    let snapshot_id = file_name.trim_end_matches(".ref");
                    snapshots.push(snapshot_id.to_string());
                }
            }
        }

        snapshots.sort();
        Ok(snapshots)
    }

    /// Delete old snapshots, keeping only the latest N
    pub async fn cleanup_snapshots(&self, machine_id: i64, keep_count: usize) -> Result<()> {
        let mut snapshots = self.list_snapshots(machine_id).await?;

        if snapshots.len() <= keep_count {
            return Ok(());
        }

        snapshots.sort();
        snapshots.reverse(); // Most recent first

        // Delete older snapshots
        for snapshot_id in snapshots.iter().skip(keep_count) {
            let path = self.snapshot_ref_path(machine_id, snapshot_id);
            if path.exists() {
                fs::remove_file(&path).await
                    .with_context(|| format!("Failed to delete snapshot {}", snapshot_id))?;
            }
        }

        Ok(())
    }
}

// Note: `hex` is referenced via fully-qualified paths above; in the 2018+
// editions no `use hex;` item is needed, only the Cargo dependency.

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[tokio::test]
    async fn test_storage_init() {
        let temp_dir = TempDir::new().unwrap();
        let storage = Storage::new(temp_dir.path());
        storage.init().await.unwrap();

        assert!(temp_dir.path().join("sync/chunks").exists());
        assert!(temp_dir.path().join("sync/meta/files").exists());
        assert!(temp_dir.path().join("sync/machines").exists());
    }

    #[tokio::test]
    async fn test_chunk_storage() {
        let temp_dir = TempDir::new().unwrap();
        let storage = Storage::new(temp_dir.path());
        storage.init().await.unwrap();

        let data = b"test chunk data";
        let hash = blake3::hash(data).into();

        // Store chunk
        storage.store_chunk(&hash, data).await.unwrap();
        assert!(storage.chunk_exists(&hash).await);

        // Load chunk
        let loaded = storage.load_chunk(&hash).await.unwrap().unwrap();
        assert_eq!(loaded.as_ref(), data);
    }

    #[tokio::test]
    async fn test_snapshot_ref_storage() {
        let temp_dir = TempDir::new().unwrap();
        let storage = Storage::new(temp_dir.path());
        storage.init().await.unwrap();

        let machine_id = 123i64;
        let snapshot_id = "snapshot-001";
        let snapshot_hash = [1u8; 32];
        let created_at = 1234567890;

        storage.store_snapshot_ref(machine_id, snapshot_id, &snapshot_hash, created_at)
            .await.unwrap();

        let loaded = storage.load_snapshot_ref(machine_id, snapshot_id)
            .await.unwrap().unwrap();

        assert_eq!(loaded.0, snapshot_hash);
        assert_eq!(loaded.1, created_at);
    }
}
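For orientation, a sketch of the on-disk layout the storage code above produces, assuming a hypothetical `data_dir` of `/var/lib/arkendro` (hashes abbreviated; the `ab/cd` fan-out comes from the first four hex characters of each hash):

```
/var/lib/arkendro/sync/
├── chunks/3f/1b/3f1b....chk
├── meta/
│   ├── files/ab/12/ab12....meta
│   └── dirs/ partitions/ disks/ snapshots/   (same ab/cd fan-out)
└── machines/123/snapshots/snapshot-001.ref   # one line: "hex(snapshot_hash):created_at"
```

The write-to-`.tmp`-then-rename pattern makes each store atomic on a single filesystem: a crashed upload leaves at most a stray `.tmp` file, never a truncated chunk or meta object.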
235
server/src/sync/validation.rs
Normal file
@@ -0,0 +1,235 @@
use anyhow::{Context, Result};
use std::collections::{HashSet, VecDeque};
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, SnapshotObj, EntryType};

/// Validation result for snapshot commits
#[derive(Debug, Clone)]
pub struct ValidationResult {
    pub is_valid: bool,
    pub missing_chunks: Vec<Hash>,
    pub missing_metas: Vec<(MetaType, Hash)>,
}

impl ValidationResult {
    pub fn valid() -> Self {
        Self {
            is_valid: true,
            missing_chunks: Vec::new(),
            missing_metas: Vec::new(),
        }
    }

    pub fn invalid(missing_chunks: Vec<Hash>, missing_metas: Vec<(MetaType, Hash)>) -> Self {
        Self {
            is_valid: false,
            missing_chunks,
            missing_metas,
        }
    }

    pub fn has_missing(&self) -> bool {
        !self.missing_chunks.is_empty() || !self.missing_metas.is_empty()
    }
}

/// Validator for snapshot object graphs
#[derive(Clone)]
pub struct SnapshotValidator {
    storage: Storage,
}

impl SnapshotValidator {
    pub fn new(storage: Storage) -> Self {
        Self { storage }
    }

    /// Validate a complete snapshot object graph using BFS only
    pub async fn validate_snapshot(&self, snapshot_hash: &Hash, snapshot_body: &[u8]) -> Result<ValidationResult> {
        // Use the BFS implementation
        self.validate_snapshot_bfs(snapshot_hash, snapshot_body).await
    }

    /// Validate a batch of meta objects (for incremental validation)
    pub async fn validate_meta_batch(&self, metas: &[(MetaType, Hash)]) -> Result<Vec<(MetaType, Hash)>> {
        let mut missing = Vec::new();

        for &(meta_type, hash) in metas {
            if !self.storage.meta_exists(meta_type, &hash).await {
                missing.push((meta_type, hash));
            }
        }

        Ok(missing)
    }

    /// Validate a batch of chunks (for incremental validation)
    pub async fn validate_chunk_batch(&self, chunks: &[Hash]) -> Result<Vec<Hash>> {
        let mut missing = Vec::new();

        for &hash in chunks {
            if !self.storage.chunk_exists(&hash).await {
                missing.push(hash);
            }
        }

        Ok(missing)
    }

    /// Perform a breadth-first validation (useful for large snapshots)
    pub async fn validate_snapshot_bfs(&self, snapshot_hash: &Hash, snapshot_body: &[u8]) -> Result<ValidationResult> {
        // Verify snapshot hash
        let computed_hash = blake3::hash(snapshot_body);
        if computed_hash.as_bytes() != snapshot_hash {
            return Err(anyhow::anyhow!("Snapshot hash mismatch"));
        }

        // Parse snapshot object
        let snapshot_obj = SnapshotObj::deserialize(bytes::Bytes::from(snapshot_body.to_vec()))
            .context("Failed to deserialize snapshot object")?;

        let mut missing_chunks = Vec::new();
        let mut missing_metas = Vec::new();
        let mut visited_metas = HashSet::new();
        let mut queue = VecDeque::new();

        // Initialize queue with disk hashes
        for disk_hash in &snapshot_obj.disk_hashes {
            queue.push_back((MetaType::Disk, *disk_hash));
        }

        // BFS traversal
        while let Some((meta_type, hash)) = queue.pop_front() {
            let meta_key = (meta_type, hash);

            if visited_metas.contains(&meta_key) {
                continue;
            }
            visited_metas.insert(meta_key);

            // Check if meta exists
            if !self.storage.meta_exists(meta_type, &hash).await {
                println!("Missing metadata: {:?} hash {}", meta_type, hex::encode(&hash));
                missing_metas.push((meta_type, hash));
                continue; // Skip loading if missing
            }

            // Load and process meta object
            println!("Loading metadata: {:?} hash {}", meta_type, hex::encode(&hash));
            if let Some(meta_obj) = self.storage.load_meta(meta_type, &hash).await
                .context("Failed to load meta object")? {

                match meta_obj {
                    MetaObj::Disk(disk) => {
                        for partition_hash in &disk.partition_hashes {
                            queue.push_back((MetaType::Partition, *partition_hash));
                        }
                    }
                    MetaObj::Partition(partition) => {
                        queue.push_back((MetaType::Dir, partition.root_dir_hash));
                    }
                    MetaObj::Dir(dir) => {
                        for entry in &dir.entries {
                            match entry.entry_type {
                                EntryType::File | EntryType::Symlink => {
                                    queue.push_back((MetaType::File, entry.target_meta_hash));
                                }
                                EntryType::Dir => {
                                    queue.push_back((MetaType::Dir, entry.target_meta_hash));
                                }
                            }
                        }
                    }
                    MetaObj::File(file) => {
                        // Check chunk dependencies
                        for chunk_hash in &file.chunk_hashes {
                            if !self.storage.chunk_exists(chunk_hash).await {
                                missing_chunks.push(*chunk_hash);
                            }
                        }
                    }
                    MetaObj::Snapshot(_) => {
                        // Snapshots shouldn't be nested
                        return Err(anyhow::anyhow!("Unexpected nested snapshot object"));
                    }
                }
            }
        }

        if missing_chunks.is_empty() && missing_metas.is_empty() {
            Ok(ValidationResult::valid())
        } else {
            Ok(ValidationResult::invalid(missing_chunks, missing_metas))
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;
    use crate::sync::meta::*;

    // Return the TempDir alongside the Storage; dropping it early would
    // delete the directories that `init` just created.
    async fn setup_test_storage() -> (Storage, TempDir) {
        let temp_dir = TempDir::new().unwrap();
        let storage = Storage::new(temp_dir.path());
        storage.init().await.unwrap();
        (storage, temp_dir)
    }

    #[tokio::test]
    async fn test_validate_empty_snapshot() {
        let (storage, _temp_dir) = setup_test_storage().await;
        let validator = SnapshotValidator::new(storage);

        let snapshot = SnapshotObj::new(1234567890, vec![]);
        let snapshot_body = snapshot.serialize().unwrap();
        let snapshot_hash = snapshot.compute_hash().unwrap();

        let result = validator.validate_snapshot(&snapshot_hash, &snapshot_body)
            .await.unwrap();

        assert!(result.is_valid);
        assert!(result.missing_chunks.is_empty());
        assert!(result.missing_metas.is_empty());
    }

    #[tokio::test]
    async fn test_validate_missing_disk() {
        let (storage, _temp_dir) = setup_test_storage().await;
        let validator = SnapshotValidator::new(storage);

        let missing_disk_hash = [1u8; 32];
        let snapshot = SnapshotObj::new(1234567890, vec![missing_disk_hash]);
        let snapshot_body = snapshot.serialize().unwrap();
        let snapshot_hash = snapshot.compute_hash().unwrap();

        let result = validator.validate_snapshot(&snapshot_hash, &snapshot_body)
            .await.unwrap();

        assert!(!result.is_valid);
        assert!(result.missing_chunks.is_empty());
        assert_eq!(result.missing_metas.len(), 1);
        assert_eq!(result.missing_metas[0], (MetaType::Disk, missing_disk_hash));
    }

    #[tokio::test]
    async fn test_validate_chunk_batch() {
        let (storage, _temp_dir) = setup_test_storage().await;
        // Storage is Clone; keep a handle so the test can seed data after
        // the validator takes ownership of its copy.
        let validator = SnapshotValidator::new(storage.clone());

        let chunk_data = b"test chunk";
        let chunk_hash = blake3::hash(chunk_data).into();
        let missing_hash = [1u8; 32];

        // Store one chunk
        storage.store_chunk(&chunk_hash, chunk_data).await.unwrap();

        let chunks = vec![chunk_hash, missing_hash];
        let missing = validator.validate_chunk_batch(&chunks).await.unwrap();

        assert_eq!(missing.len(), 1);
        assert_eq!(missing[0], missing_hash);
    }
}
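A sketch of how the SEND_SNAPSHOT commit path could tie the validator and storage together. The handler name and the choice of the hex-encoded snapshot hash as `snapshot_id` are assumptions; the types come from the files above:

```rust
// Hypothetical SEND_SNAPSHOT handler: only a complete graph becomes a snapshot.
async fn handle_send_snapshot(
    validator: &SnapshotValidator,
    storage: &Storage,
    machine_id: i64,
    snapshot_hash: &Hash,
    body: &[u8],
) -> Result<ValidationResult> {
    let result = validator.validate_snapshot(snapshot_hash, body).await?;
    if result.is_valid {
        storage.store_meta(MetaType::Snapshot, snapshot_hash, body).await?;
        let created_at = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_secs();
        storage
            .store_snapshot_ref(machine_id, &hex::encode(snapshot_hash), snapshot_hash, created_at)
            .await?;
    }
    // The caller maps this to SNAPSHOT_OK, or to SNAPSHOT_FAIL carrying the missing lists.
    Ok(result)
}
```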
@@ -83,6 +83,8 @@ pub struct Machine {
     pub id: i64,
     pub user_id: i64,
     pub uuid: Uuid,
+    #[serde(rename = "machine_id")]
+    pub machine_id: String,
     pub name: String,
     pub created_at: DateTime<Utc>,
 }
76
sync_client_test/Cargo.lock
generated
Normal file
@@ -0,0 +1,76 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4

[[package]]
name = "arrayref"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"

[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"

[[package]]
name = "blake3"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3888aaa89e4b2a40fca9848e400f6a658a5a3978de7be858e209cafa8be9a4a0"
dependencies = [
 "arrayref",
 "arrayvec",
 "cc",
 "cfg-if",
 "constant_time_eq",
]

[[package]]
name = "cc"
version = "1.2.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5252b3d2648e5eedbc1a6f501e3c795e07025c1e93bbf8bbdd6eef7f447a6d54"
dependencies = [
 "find-msvc-tools",
 "shlex",
]

[[package]]
name = "cfg-if"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2fd1289c04a9ea8cb22300a459a72a385d7c73d3259e2ed7dcb2af674838cfa9"

[[package]]
name = "constant_time_eq"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"

[[package]]
name = "find-msvc-tools"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fd99930f64d146689264c637b5af2f0233a933bef0d8570e2526bf9e083192d"

[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"

[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"

[[package]]
name = "sync_client_test"
version = "0.1.0"
dependencies = [
 "blake3",
 "hex",
]
8
sync_client_test/Cargo.toml
Normal file
@@ -0,0 +1,8 @@
[package]
name = "sync_client_test"
version = "0.1.0"
edition = "2021"

[dependencies]
blake3 = "1.5"
hex = "0.4"
856
sync_client_test/src/main.rs
Normal file
@@ -0,0 +1,856 @@
// Mock sync client for testing the Arkendro sync server
// This implements the binary protocol specified in PROTOCOL.md

use std::io::{Read, Write, Result, Error, ErrorKind};
use std::net::TcpStream;

/// Command codes from the protocol
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum Command {
    Hello = 0x01,
    HelloOk = 0x02,
    AuthUserPass = 0x10,
    AuthCode = 0x11,
    AuthOk = 0x12,
    AuthFail = 0x13,
    BatchCheckChunk = 0x20,
    CheckChunkResp = 0x21,
    SendChunk = 0x22,
    ChunkOk = 0x23,
    ChunkFail = 0x24,
    BatchCheckMeta = 0x30,
    CheckMetaResp = 0x31,
    SendMeta = 0x32,
    MetaOk = 0x33,
    MetaFail = 0x34,
    SendSnapshot = 0x40,
    SnapshotOk = 0x41,
    SnapshotFail = 0x42,
    Close = 0xFF,
}

impl Command {
    fn from_u8(value: u8) -> Result<Self> {
        match value {
            0x01 => Ok(Command::Hello),
            0x02 => Ok(Command::HelloOk),
            0x10 => Ok(Command::AuthUserPass),
            0x11 => Ok(Command::AuthCode),
            0x12 => Ok(Command::AuthOk),
            0x13 => Ok(Command::AuthFail),
            0x20 => Ok(Command::BatchCheckChunk),
            0x21 => Ok(Command::CheckChunkResp),
            0x22 => Ok(Command::SendChunk),
            0x23 => Ok(Command::ChunkOk),
            0x24 => Ok(Command::ChunkFail),
            0x30 => Ok(Command::BatchCheckMeta),
            0x31 => Ok(Command::CheckMetaResp),
            0x32 => Ok(Command::SendMeta),
            0x33 => Ok(Command::MetaOk),
            0x34 => Ok(Command::MetaFail),
            0x40 => Ok(Command::SendSnapshot),
            0x41 => Ok(Command::SnapshotOk),
            0x42 => Ok(Command::SnapshotFail),
            0xFF => Ok(Command::Close),
            _ => Err(Error::new(ErrorKind::InvalidData, "Unknown command")),
        }
    }
}

/// Message header (24 bytes)
#[derive(Debug)]
struct MessageHeader {
    cmd: Command,
    flags: u8,
    reserved: [u8; 2],
    session_id: [u8; 16],
    payload_len: u32,
}

impl MessageHeader {
    fn new(cmd: Command, session_id: [u8; 16], payload_len: u32) -> Self {
        Self {
            cmd,
            flags: 0,
            reserved: [0; 2],
            session_id,
            payload_len,
        }
    }

    fn to_bytes(&self) -> [u8; 24] {
        let mut buf = [0u8; 24];
        buf[0] = self.cmd as u8;
        buf[1] = self.flags;
        buf[2..4].copy_from_slice(&self.reserved);
        buf[4..20].copy_from_slice(&self.session_id);
        buf[20..24].copy_from_slice(&self.payload_len.to_le_bytes());
        buf
    }

    fn from_bytes(buf: &[u8]) -> Result<Self> {
        if buf.len() < 24 {
            return Err(Error::new(ErrorKind::UnexpectedEof, "Header too short"));
        }

        let cmd = Command::from_u8(buf[0])?;
        let flags = buf[1];
        let reserved = [buf[2], buf[3]];
        let mut session_id = [0u8; 16];
        session_id.copy_from_slice(&buf[4..20]);
        let payload_len = u32::from_le_bytes([buf[20], buf[21], buf[22], buf[23]]);

        Ok(Self {
            cmd,
            flags,
            reserved,
            session_id,
            payload_len,
        })
    }
}
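
// Sketch (assumed test, not part of the mock client's flow): round-trip check
// of the 24-byte envelope. `Command` does not derive PartialEq, so the command
// byte is compared directly.
#[cfg(test)]
mod header_tests {
    use super::*;

    #[test]
    fn header_round_trip() {
        let header = MessageHeader::new(Command::Hello, [7u8; 16], 42);
        let parsed = MessageHeader::from_bytes(&header.to_bytes()).unwrap();
        assert_eq!(parsed.cmd as u8, Command::Hello as u8);
        assert_eq!(parsed.session_id, [7u8; 16]);
        assert_eq!(parsed.payload_len, 42);
    }
}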

/// Metadata types
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum MetaType {
    File = 1,
    Dir = 2,
    Partition = 3,
    Disk = 4,
    Snapshot = 5,
}

/// Filesystem types
#[derive(Debug, Clone, Copy)]
#[repr(u32)]
enum FsType {
    Unknown = 0,
    Ext = 1,
    Ntfs = 2,
    Fat32 = 3,
}

/// Directory entry types
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum EntryType {
    File = 0,
    Dir = 1,
    Symlink = 2,
}

/// Directory entry
#[derive(Debug, Clone)]
struct DirEntry {
    entry_type: EntryType,
    name: String,
    target_meta_hash: [u8; 32],
}

/// File metadata object
#[derive(Debug, Clone)]
struct FileObj {
    version: u8,
    fs_type_code: FsType,
    size: u64,
    mode: u32,
    uid: u32,
    gid: u32,
    mtime_unixsec: u64,
    chunk_hashes: Vec<[u8; 32]>,
}

impl FileObj {
    fn new(size: u64, chunk_hashes: Vec<[u8; 32]>) -> Self {
        Self {
            version: 1,
            fs_type_code: FsType::Ext,
            size,
            mode: 0o644,
            uid: 1000,
            gid: 1000,
            mtime_unixsec: std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_secs(),
            chunk_hashes,
        }
    }

    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.version);
        buf.extend_from_slice(&(self.fs_type_code as u32).to_le_bytes());
        buf.extend_from_slice(&self.size.to_le_bytes());
        buf.extend_from_slice(&self.mode.to_le_bytes());
        buf.extend_from_slice(&self.uid.to_le_bytes());
        buf.extend_from_slice(&self.gid.to_le_bytes());
        buf.extend_from_slice(&self.mtime_unixsec.to_le_bytes());
        buf.extend_from_slice(&(self.chunk_hashes.len() as u32).to_le_bytes());
        for hash in &self.chunk_hashes {
            buf.extend_from_slice(hash);
        }
        buf
    }
}
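
// Sketch (assumed test): the FileObj body laid out above is
// 1 (version) + 4 (fs_type) + 8 (size) + 4 (mode) + 4 (uid) + 4 (gid)
// + 8 (mtime) + 4 (chunk count) + 32 per chunk hash.
#[cfg(test)]
mod file_obj_tests {
    use super::*;

    #[test]
    fn file_obj_serialized_size() {
        let body = FileObj::new(64, vec![[0u8; 32]]).serialize();
        assert_eq!(body.len(), 1 + 4 + 8 + 4 + 4 + 4 + 8 + 4 + 32); // 69 bytes
    }
}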

/// Directory metadata object
#[derive(Debug, Clone)]
struct DirObj {
    version: u8,
    entries: Vec<DirEntry>,
}

impl DirObj {
    fn new(entries: Vec<DirEntry>) -> Self {
        Self {
            version: 1,
            entries,
        }
    }

    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.version);
        buf.extend_from_slice(&(self.entries.len() as u32).to_le_bytes());

        for entry in &self.entries {
            buf.push(entry.entry_type as u8);
            let name_bytes = entry.name.as_bytes();
            buf.extend_from_slice(&(name_bytes.len() as u16).to_le_bytes());
            buf.extend_from_slice(name_bytes);
            buf.extend_from_slice(&entry.target_meta_hash);
        }
        buf
    }
}

/// Partition metadata object
#[derive(Debug, Clone)]
struct PartitionObj {
    version: u8,
    fs_type: FsType,
    root_dir_hash: [u8; 32],
    start_lba: u64,
    end_lba: u64,
    type_guid: [u8; 16],
}

impl PartitionObj {
    fn new(label: String, root_dir_hash: [u8; 32]) -> Self {
        // Generate a deterministic GUID from the label for testing
        let mut type_guid = [0u8; 16];
        let label_bytes = label.as_bytes();
        for (i, &byte) in label_bytes.iter().take(16).enumerate() {
            type_guid[i] = byte;
        }

        Self {
            version: 1,
            fs_type: FsType::Ext,
            root_dir_hash,
            start_lba: 2048, // Common starting LBA
            end_lba: 2097152, // ~1GB partition
            type_guid,
        }
    }

    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.version);
        buf.extend_from_slice(&(self.fs_type as u32).to_le_bytes());
        buf.extend_from_slice(&self.root_dir_hash);
        buf.extend_from_slice(&self.start_lba.to_le_bytes());
        buf.extend_from_slice(&self.end_lba.to_le_bytes());
        buf.extend_from_slice(&self.type_guid);
        buf
    }
}

/// Disk metadata object
#[derive(Debug, Clone)]
struct DiskObj {
    version: u8,
    partition_hashes: Vec<[u8; 32]>,
    disk_size_bytes: u64,
    serial: String,
}

impl DiskObj {
    fn new(serial: String, partition_hashes: Vec<[u8; 32]>) -> Self {
        Self {
            version: 1,
            partition_hashes,
            disk_size_bytes: 1024 * 1024 * 1024, // 1GB default
            serial,
        }
    }

    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.version);
        buf.extend_from_slice(&(self.partition_hashes.len() as u32).to_le_bytes());
        for hash in &self.partition_hashes {
            buf.extend_from_slice(hash);
        }
        buf.extend_from_slice(&self.disk_size_bytes.to_le_bytes());
        let serial_bytes = self.serial.as_bytes();
        buf.extend_from_slice(&(serial_bytes.len() as u16).to_le_bytes());
        buf.extend_from_slice(serial_bytes);
        buf
    }
}

/// Snapshot metadata object
#[derive(Debug, Clone)]
struct SnapshotObj {
    version: u8,
    created_at_unixsec: u64,
    disk_hashes: Vec<[u8; 32]>,
}

impl SnapshotObj {
    fn new(disk_hashes: Vec<[u8; 32]>) -> Self {
        Self {
            version: 1,
            created_at_unixsec: std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_secs(),
            disk_hashes,
        }
    }

    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.version);
        buf.extend_from_slice(&self.created_at_unixsec.to_le_bytes());
        buf.extend_from_slice(&(self.disk_hashes.len() as u32).to_le_bytes());
        for hash in &self.disk_hashes {
            buf.extend_from_slice(hash);
        }
        buf
    }
}

/// Simple sync client for testing
struct SyncClient {
    stream: TcpStream,
    session_id: [u8; 16],
}

impl SyncClient {
    fn connect(addr: &str) -> Result<Self> {
        let stream = TcpStream::connect(addr)?;
        Ok(Self {
            stream,
            session_id: [0; 16],
        })
    }

    fn send_message(&mut self, cmd: Command, payload: &[u8]) -> Result<()> {
        let header = MessageHeader::new(cmd, self.session_id, payload.len() as u32);

        self.stream.write_all(&header.to_bytes())?;
        if !payload.is_empty() {
            self.stream.write_all(payload)?;
        }
        self.stream.flush()?;
        Ok(())
    }

    fn receive_message(&mut self) -> Result<(Command, Vec<u8>)> {
        // Read header
        let mut header_buf = [0u8; 24];
        self.stream.read_exact(&mut header_buf)?;
        let header = MessageHeader::from_bytes(&header_buf)?;

        // Read payload
        let mut payload = vec![0u8; header.payload_len as usize];
        if header.payload_len > 0 {
            self.stream.read_exact(&mut payload)?;
        }

        Ok((header.cmd, payload))
    }

    fn hello(&mut self) -> Result<()> {
        println!("Sending HELLO...");
        // Hello message needs client_type (1 byte) and auth_type (1 byte)
        let payload = vec![0x01, 0x01]; // client_type=1, auth_type=1
        self.send_message(Command::Hello, &payload)?;

        let (cmd, _payload) = self.receive_message()?;
        match cmd {
            Command::HelloOk => {
                println!("✓ Received HELLO_OK");
                Ok(())
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected HELLO_OK")),
        }
    }

    fn authenticate(&mut self, username: &str, password: &str, machine_id: i64) -> Result<()> {
        println!("Authenticating as {} with machine ID {}...", username, machine_id);

        // Build auth payload: username_len (u16_le) + username + password_len (u16_le) + password + machine_id (i64_le)
        let mut payload = Vec::new();
        payload.extend_from_slice(&(username.len() as u16).to_le_bytes());
        payload.extend_from_slice(username.as_bytes());
        payload.extend_from_slice(&(password.len() as u16).to_le_bytes());
        payload.extend_from_slice(password.as_bytes());
        payload.extend_from_slice(&machine_id.to_le_bytes());

        self.send_message(Command::AuthUserPass, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::AuthOk => {
                // Extract session ID from payload
                if payload.len() >= 16 {
                    self.session_id.copy_from_slice(&payload[0..16]);
                    println!("✓ Authentication successful! Session ID: {:?}", self.session_id);
                    Ok(())
                } else {
                    Err(Error::new(ErrorKind::InvalidData, "Invalid session ID"))
                }
            }
            Command::AuthFail => Err(Error::new(ErrorKind::PermissionDenied, "Authentication failed")),
            _ => Err(Error::new(ErrorKind::InvalidData, "Unexpected response")),
        }
    }

    fn check_chunks(&mut self, hashes: &[[u8; 32]]) -> Result<Vec<[u8; 32]>> {
        println!("Checking {} chunks...", hashes.len());

        let mut payload = Vec::new();
        payload.extend_from_slice(&(hashes.len() as u32).to_le_bytes());
        for hash in hashes {
            payload.extend_from_slice(hash);
        }

        self.send_message(Command::BatchCheckChunk, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::CheckChunkResp => {
                if payload.len() < 4 {
                    return Err(Error::new(ErrorKind::InvalidData, "Invalid response"));
                }

                let count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
                let mut missing = Vec::new();

                for i in 0..count {
                    let start = 4 + i * 32;
                    if payload.len() < start + 32 {
                        return Err(Error::new(ErrorKind::InvalidData, "Invalid hash in response"));
                    }
                    let mut hash = [0u8; 32];
                    hash.copy_from_slice(&payload[start..start + 32]);
                    missing.push(hash);
                }

                println!("✓ {} chunks missing out of {}", missing.len(), hashes.len());
                Ok(missing)
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected CheckChunkResp")),
        }
    }

    fn send_chunk(&mut self, hash: &[u8; 32], data: &[u8]) -> Result<()> {
        println!("Sending chunk {} bytes...", data.len());
        println!("Chunk hash: {}", hex::encode(hash));

        // Verify hash matches data
        let computed_hash = blake3_hash(data);
        if computed_hash != *hash {
            return Err(Error::new(ErrorKind::InvalidData, "Hash mismatch"));
        }

        let mut payload = Vec::new();
        payload.extend_from_slice(hash);
        payload.extend_from_slice(&(data.len() as u32).to_le_bytes());
        payload.extend_from_slice(data);

        self.send_message(Command::SendChunk, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::ChunkOk => {
                println!("✓ Chunk uploaded successfully");
                Ok(())
            }
            Command::ChunkFail => {
                let reason = if !payload.is_empty() {
                    String::from_utf8_lossy(&payload).to_string()
                } else {
                    "Unknown error".to_string()
                };
                Err(Error::new(ErrorKind::Other, format!("Server rejected chunk: {}", reason)))
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected ChunkOk or ChunkFail")),
        }
    }

    fn check_metadata(&mut self, items: &[(MetaType, [u8; 32])]) -> Result<Vec<(MetaType, [u8; 32])>> {
        println!("Checking {} metadata items...", items.len());

        let mut payload = Vec::new();
        payload.extend_from_slice(&(items.len() as u32).to_le_bytes());
        for (meta_type, hash) in items {
            payload.push(*meta_type as u8);
            payload.extend_from_slice(hash);
        }

        self.send_message(Command::BatchCheckMeta, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::CheckMetaResp => {
                if payload.len() < 4 {
                    return Err(Error::new(ErrorKind::InvalidData, "Invalid response"));
                }

                let count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
                let mut missing = Vec::new();

                for i in 0..count {
                    let start = 4 + i * 33; // 1 byte type + 32 bytes hash
                    if payload.len() < start + 33 {
                        return Err(Error::new(ErrorKind::InvalidData, "Invalid metadata in response"));
                    }
                    let meta_type = match payload[start] {
                        1 => MetaType::File,
                        2 => MetaType::Dir,
                        3 => MetaType::Partition,
                        4 => MetaType::Disk,
                        5 => MetaType::Snapshot,
                        _ => return Err(Error::new(ErrorKind::InvalidData, "Invalid metadata type")),
                    };
                    let mut hash = [0u8; 32];
                    hash.copy_from_slice(&payload[start + 1..start + 33]);
                    missing.push((meta_type, hash));
                }

                println!("✓ {} metadata items missing out of {}", missing.len(), items.len());
                Ok(missing)
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected CheckMetaResp")),
        }
    }

    fn send_metadata(&mut self, meta_type: MetaType, meta_hash: &[u8; 32], body: &[u8]) -> Result<()> {
        println!("Sending {:?} metadata {} bytes...", meta_type, body.len());
        println!("Metadata hash: {}", hex::encode(meta_hash));

        // Verify hash matches body
        let computed_hash = blake3_hash(body);
        if computed_hash != *meta_hash {
            return Err(Error::new(ErrorKind::InvalidData, "Metadata hash mismatch"));
        }

        let mut payload = Vec::new();
        payload.push(meta_type as u8);
        payload.extend_from_slice(meta_hash);
        payload.extend_from_slice(&(body.len() as u32).to_le_bytes());
        payload.extend_from_slice(body);

        self.send_message(Command::SendMeta, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::MetaOk => {
                println!("✓ Metadata uploaded successfully");
                Ok(())
            }
            Command::MetaFail => {
                let reason = if !payload.is_empty() {
                    String::from_utf8_lossy(&payload).to_string()
                } else {
                    "Unknown error".to_string()
                };
                Err(Error::new(ErrorKind::Other, format!("Server rejected metadata: {}", reason)))
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected MetaOk or MetaFail")),
        }
    }

    fn send_snapshot(&mut self, snapshot_hash: &[u8; 32], snapshot_data: &[u8]) -> Result<()> {
        println!("Sending snapshot {} bytes...", snapshot_data.len());
        println!("Snapshot hash: {}", hex::encode(snapshot_hash));

        // Verify hash matches data
        let computed_hash = blake3_hash(snapshot_data);
        if computed_hash != *snapshot_hash {
            return Err(Error::new(ErrorKind::InvalidData, "Snapshot hash mismatch"));
        }

        let mut payload = Vec::new();
        payload.extend_from_slice(snapshot_hash);
        payload.extend_from_slice(&(snapshot_data.len() as u32).to_le_bytes());
        payload.extend_from_slice(snapshot_data);

        self.send_message(Command::SendSnapshot, &payload)?;

        let (cmd, payload) = self.receive_message()?;
        match cmd {
            Command::SnapshotOk => {
                println!("✓ Snapshot uploaded successfully");
                Ok(())
            }
            Command::SnapshotFail => {
                // Parse SnapshotFail payload: missing_chunks_count + chunks + missing_metas_count + metas
                if payload.len() < 8 {
                    return Err(Error::new(ErrorKind::Other, "Server rejected snapshot: Invalid response format"));
                }

                let missing_chunks_count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
                let missing_metas_count = u32::from_le_bytes([payload[4], payload[5], payload[6], payload[7]]) as usize;

                let mut error_msg = format!("Server rejected snapshot: {} missing chunks, {} missing metadata items",
                    missing_chunks_count, missing_metas_count);

                // Optionally parse the actual missing items for more detailed error
                if missing_chunks_count > 0 || missing_metas_count > 0 {
                    error_msg.push_str(" (run with chunk/metadata verification to see details)");
                }

                Err(Error::new(ErrorKind::Other, error_msg))
            }
            _ => Err(Error::new(ErrorKind::InvalidData, "Expected SnapshotOk or SnapshotFail")),
        }
    }

    fn close(&mut self) -> Result<()> {
        self.send_message(Command::Close, &[])?;
        Ok(())
    }
}
|
||||||
|
|
||||||
|
/// Hash function using blake3
|
||||||
|
fn blake3_hash(data: &[u8]) -> [u8; 32] {
|
||||||
|
blake3::hash(data).into()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Generate some mock data for testing
|
||||||
|
fn generate_mock_data() -> Vec<(Vec<u8>, [u8; 32])> {
|
||||||
|
let mut data_chunks = Vec::new();
|
||||||
|
|
||||||
|
// Some test data chunks
|
||||||
|
let chunks = [
|
||||||
|
b"Hello, Arkendro sync server! This is test chunk data.".to_vec(),
|
||||||
|
b"Another test chunk with different content for variety.".to_vec(),
|
||||||
|
b"Binary data test: \x00\x01\x02\x03\xFF\xFE\xFD\xFC".to_vec(),
|
||||||
|
];
|
||||||
|
|
||||||
|
for chunk in chunks {
|
||||||
|
let hash = blake3_hash(&chunk);
|
||||||
|
data_chunks.push((chunk, hash));
|
||||||
|
}
|
||||||
|
|
||||||
|
data_chunks
|
||||||
|
}
|
||||||
|
|
||||||
|
fn main() -> Result<()> {
|
||||||
|
println!("🚀 Arkendro Sync Client Extended Test");
|
||||||
|
println!("====================================\n");
|
||||||
|
|
||||||
|
// Connect to server
|
||||||
|
let mut client = SyncClient::connect("127.0.0.1:8380")?;
|
||||||
|
println!("Connected to sync server\n");
|
||||||
|
|
||||||
|
// Test protocol flow
|
||||||
|
client.hello()?;
|
||||||
|
|
||||||
|
// Try to authenticate with hardcoded machine ID (you'll need to create a machine first via the web interface)
|
||||||
|
let machine_id = 1; // Hardcoded machine ID for testing
|
||||||
|
match client.authenticate("admin", "password123", machine_id) {
|
||||||
|
Ok(()) => println!("Authentication successful!\n"),
|
||||||
|
Err(e) => {
|
||||||
|
println!("Authentication failed: {}", e);
|
||||||
|
println!("Make sure you have:");
|
||||||
|
println!("1. Created a user 'admin' with password 'password123' via the web interface");
|
||||||
|
println!("2. Created a machine with ID {} that belongs to user 'admin'", machine_id);
|
||||||
|
client.close()?;
|
||||||
|
return Ok(());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
println!("📁 Creating test filesystem hierarchy...\n");
|
||||||
|
|
||||||
|
// Step 1: Create test file data chunks
|
||||||
|
let file1_data = b"Hello, this is the content of file1.txt in our test filesystem!";
|
||||||
|
let file2_data = b"This is file2.log with some different content for testing purposes.";
|
||||||
|
let file3_data = b"Binary data file: \x00\x01\x02\x03\xFF\xFE\xFD\xFC and some text after.";
|
||||||
|
|
||||||
|
let file1_hash = blake3_hash(file1_data);
|
||||||
|
let file2_hash = blake3_hash(file2_data);
|
||||||
|
let file3_hash = blake3_hash(file3_data);
|
||||||
|
|
||||||
|
// Upload chunks if needed
|
||||||
|
println!("🔗 Uploading file chunks...");
|
||||||
|
let chunk_hashes = vec![file1_hash, file2_hash, file3_hash];
|
||||||
|
let missing_chunks = client.check_chunks(&chunk_hashes)?;
|
||||||
|
|
||||||
|
if !missing_chunks.is_empty() {
|
||||||
|
for &missing_hash in &missing_chunks {
|
||||||
|
if missing_hash == file1_hash {
|
||||||
|
client.send_chunk(&file1_hash, file1_data)?;
|
||||||
|
} else if missing_hash == file2_hash {
|
||||||
|
client.send_chunk(&file2_hash, file2_data)?;
|
||||||
|
} else if missing_hash == file3_hash {
|
||||||
|
client.send_chunk(&file3_hash, file3_data)?;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
println!("✓ All chunks already exist on server");
|
||||||
|
}
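    // Alternative upload loop (a minimal sketch, not part of this commit):
    // pairing each hash with its bytes up front replaces the hash-comparison
    // chain above and scales to any number of chunks, using the same
    // send_chunk signature as this file.
    //
    //     let chunk_data: std::collections::HashMap<[u8; 32], &[u8]> = [
    //         (file1_hash, file1_data.as_slice()),
    //         (file2_hash, file2_data.as_slice()),
    //         (file3_hash, file3_data.as_slice()),
    //     ].into_iter().collect();
    //     for hash in &missing_chunks {
    //         client.send_chunk(hash, chunk_data[hash])?;
    //     }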
    // Step 2: Create file metadata objects
    println!("\n📄 Creating file metadata objects...");
    let file1_obj = FileObj::new(file1_data.len() as u64, vec![file1_hash]);
    let file2_obj = FileObj::new(file2_data.len() as u64, vec![file2_hash]);
    let file3_obj = FileObj::new(file3_data.len() as u64, vec![file3_hash]);

    let file1_meta_data = file1_obj.serialize();
    let file2_meta_data = file2_obj.serialize();
    let file3_meta_data = file3_obj.serialize();

    let file1_meta_hash = blake3_hash(&file1_meta_data);
    let file2_meta_hash = blake3_hash(&file2_meta_data);
    let file3_meta_hash = blake3_hash(&file3_meta_data);

    // Upload file metadata
    client.send_metadata(MetaType::File, &file1_meta_hash, &file1_meta_data)?;
    client.send_metadata(MetaType::File, &file2_meta_hash, &file2_meta_data)?;
    client.send_metadata(MetaType::File, &file3_meta_hash, &file3_meta_data)?;

    // Step 3: Create directory structures
    println!("\n📁 Creating directory structures...");

    // Create /logs subdirectory with file2
    let logs_dir_entries = vec![
        DirEntry {
            entry_type: EntryType::File,
            name: "app.log".to_string(),
            target_meta_hash: file2_meta_hash,
        },
    ];
    let logs_dir_obj = DirObj::new(logs_dir_entries);
    let logs_dir_data = logs_dir_obj.serialize();
    let logs_dir_hash = blake3_hash(&logs_dir_data);
    client.send_metadata(MetaType::Dir, &logs_dir_hash, &logs_dir_data)?;

    // Create /data subdirectory with file3
    let data_dir_entries = vec![
        DirEntry {
            entry_type: EntryType::File,
            name: "binary.dat".to_string(),
            target_meta_hash: file3_meta_hash,
        },
    ];
    let data_dir_obj = DirObj::new(data_dir_entries);
    let data_dir_data = data_dir_obj.serialize();
    let data_dir_hash = blake3_hash(&data_dir_data);
    client.send_metadata(MetaType::Dir, &data_dir_hash, &data_dir_data)?;

    // Create root directory with file1 and subdirectories
    let root_dir_entries = vec![
        DirEntry {
            entry_type: EntryType::File,
            name: "readme.txt".to_string(),
            target_meta_hash: file1_meta_hash,
        },
        DirEntry {
            entry_type: EntryType::Dir,
            name: "logs".to_string(),
            target_meta_hash: logs_dir_hash,
        },
        DirEntry {
            entry_type: EntryType::Dir,
            name: "data".to_string(),
            target_meta_hash: data_dir_hash,
        },
    ];
    let root_dir_obj = DirObj::new(root_dir_entries);
    let root_dir_data = root_dir_obj.serialize();
    let root_dir_hash = blake3_hash(&root_dir_data);
    client.send_metadata(MetaType::Dir, &root_dir_hash, &root_dir_data)?;

    // Step 4: Create partition
    println!("\n💽 Creating partition metadata...");
    let partition_obj = PartitionObj::new("test-partition".to_string(), root_dir_hash);
    let partition_data = partition_obj.serialize();
    let partition_hash = blake3_hash(&partition_data);
    client.send_metadata(MetaType::Partition, &partition_hash, &partition_data)?;

    // Step 5: Create disk
    println!("\n🖥️ Creating disk metadata...");
    let disk_obj = DiskObj::new("test-disk-001".to_string(), vec![partition_hash]);
    let disk_data = disk_obj.serialize();
    let disk_hash = blake3_hash(&disk_data);
    client.send_metadata(MetaType::Disk, &disk_hash, &disk_data)?;

    // Step 6: Create snapshot
    println!("\n📸 Creating snapshot...");
    let snapshot_obj = SnapshotObj::new(vec![disk_hash]);
    let snapshot_data = snapshot_obj.serialize();
    let snapshot_hash = blake3_hash(&snapshot_data);

    // Upload snapshot using SendSnapshot command (not SendMeta)
    client.send_snapshot(&snapshot_hash, &snapshot_data)?;

    // Step 7: Verify everything is stored
    println!("\n🔍 Verifying stored objects...");

    // Check all metadata objects
    let all_metadata = vec![
        (MetaType::File, file1_meta_hash),
        (MetaType::File, file2_meta_hash),
        (MetaType::File, file3_meta_hash),
        (MetaType::Dir, logs_dir_hash),
        (MetaType::Dir, data_dir_hash),
        (MetaType::Dir, root_dir_hash),
        (MetaType::Partition, partition_hash),
        (MetaType::Disk, disk_hash),
        (MetaType::Snapshot, snapshot_hash),
    ];

    let missing_metadata = client.check_metadata(&all_metadata)?;
    if missing_metadata.is_empty() {
        println!("✓ All metadata objects verified as stored");
    } else {
        println!("⚠ Warning: {} metadata objects still missing", missing_metadata.len());
        for (meta_type, hash) in missing_metadata {
            println!(" - Missing {:?}: {}", meta_type, hex::encode(hash));
        }
    }

    // Check all chunks
    let all_chunks = vec![file1_hash, file2_hash, file3_hash];
    let missing_chunks_final = client.check_chunks(&all_chunks)?;
    if missing_chunks_final.is_empty() {
        println!("✓ All data chunks verified as stored");
    } else {
        println!("⚠ Warning: {} chunks still missing", missing_chunks_final.len());
    }

    println!("\n🎉 Complete filesystem hierarchy created!");
    println!("📊 Summary:");
    println!(" • 3 files (readme.txt, logs/app.log, data/binary.dat)");
    println!(" • 3 directories (/, /logs, /data)");
    println!(" • 1 partition (test-partition)");
    println!(" • 1 disk (test-disk-001)");
    println!(" • 1 snapshot");
    println!(" • Snapshot hash: {}", hex::encode(snapshot_hash));

    println!("\n✅ All tests completed successfully!");

    // Close connection
    client.close()?;

    Ok(())
}
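Note: the extended test above relies on the `blake3` and `hex` crates (via `blake3::hash` and `hex::encode`). A minimal sketch of the matching `Cargo.toml` entries follows; the version numbers are assumptions, not values taken from this commit:

```
[dependencies]
blake3 = "1"
hex = "0.4"
```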
@@ -6,6 +6,7 @@ import Root from "@/common/layouts/Root.jsx";
 import UserManagement from "@/pages/UserManagement";
 import SystemSettings from "@/pages/SystemSettings";
 import Machines from "@/pages/Machines";
+import MachineDetails from "@/pages/MachineDetails";
 import "@fontsource/plus-jakarta-sans/300.css";
 import "@fontsource/plus-jakarta-sans/400.css";
 import "@fontsource/plus-jakarta-sans/600.css";
@@ -24,6 +25,7 @@ const App = () => {
         {path: "/", element: <Navigate to="/dashboard"/>},
         {path: "/dashboard", element: <Placeholder title="Dashboard"/>},
         {path: "/machines", element: <Machines/>},
+        {path: "/machines/:id", element: <MachineDetails/>},
         {path: "/servers", element: <Placeholder title="Servers"/>},
         {path: "/settings", element: <Placeholder title="Settings"/>},
         {path: "/admin/users", element: <UserManagement/>},
416
webui/src/pages/MachineDetails/MachineDetails.jsx
Normal file
@@ -0,0 +1,416 @@
import React, { useState, useEffect } from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import { getRequest } from '@/common/utils/RequestUtil.js';
import { useToast } from '@/common/contexts/ToastContext.jsx';
import Card, { CardHeader, CardBody } from '@/common/components/Card';
import Grid from '@/common/components/Grid';
import LoadingSpinner from '@/common/components/LoadingSpinner';
import EmptyState from '@/common/components/EmptyState';
import PageHeader from '@/common/components/PageHeader';
import DetailItem, { DetailList } from '@/common/components/DetailItem';
import Badge from '@/common/components/Badge';
import Button from '@/common/components/Button';
import {
    ArrowLeft,
    Camera,
    HardDrive,
    Folder,
    Calendar,
    Hash,
    Database,
    Devices,
    Eye,
    ArrowCircleLeft
} from '@phosphor-icons/react';
import './styles.sass';

export const MachineDetails = () => {
    const { id } = useParams();
    const navigate = useNavigate();
    const toast = useToast();
    const [machine, setMachine] = useState(null);
    const [snapshots, setSnapshots] = useState([]);
    const [loading, setLoading] = useState(true);
    const [selectedSnapshot, setSelectedSnapshot] = useState(null);
    const [snapshotDetails, setSnapshotDetails] = useState(null);
    const [loadingDetails, setLoadingDetails] = useState(false);

    useEffect(() => {
        if (id) {
            fetchMachineData();
        }
    }, [id]);

    const fetchMachineData = async () => {
        try {
            setLoading(true);

            // Fetch machine info and snapshots in parallel
            const [machineResponse, snapshotsResponse] = await Promise.all([
                getRequest(`machines/${id}`),
                getRequest(`machines/${id}/snapshots`)
            ]);

            setMachine(machineResponse);
            setSnapshots(snapshotsResponse);
        } catch (error) {
            console.error('Failed to fetch machine data:', error);
            toast.error('Failed to load machine details');
        } finally {
            setLoading(false);
        }
    };

    const fetchSnapshotDetails = async (snapshotId) => {
        try {
            setLoadingDetails(true);
            const details = await getRequest(`machines/${id}/snapshots/${snapshotId}`);
            setSnapshotDetails(details);
            setSelectedSnapshot(snapshotId);
        } catch (error) {
            console.error('Failed to fetch snapshot details:', error);
            toast.error('Failed to load snapshot details');
        } finally {
            setLoadingDetails(false);
        }
    };

    const backToSnapshots = () => {
        setSelectedSnapshot(null);
        setSnapshotDetails(null);
    };

    const formatBytes = (bytes) => {
        if (!bytes) return '0 B';
        const k = 1024;
        const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
        const i = Math.floor(Math.log(bytes) / Math.log(k));
        return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
    };
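    // e.g. formatBytes(1536) === '1.5 KB'; any falsy input (0, null, undefined) yields '0 B'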
    const formatDate = (dateString) => {
        if (!dateString || dateString === 'Unknown') return 'Unknown';
        try {
            // Handle both "2025-09-09 20:19:48" and "2025-09-09 20:19:48 UTC" formats
            const cleanDate = dateString.replace(' UTC', '');
            const date = new Date(cleanDate);
            if (isNaN(date.getTime())) {
                return dateString; // Return original if parsing fails
            }
            return date.toLocaleString('en-US', {
                year: 'numeric',
                month: 'short',
                day: 'numeric',
                hour: '2-digit',
                minute: '2-digit',
                second: '2-digit'
            });
        } catch {
            return dateString;
        }
    };

    const formatLBA = (lba) => {
        if (!lba && lba !== 0) return '0';
        return lba.toLocaleString();
    };

    const getFsTypeColor = (fsType) => {
        switch (fsType?.toLowerCase()) {
            case 'ext':
            case 'ext4':
            case 'ext3':
            case 'ext2':
                return 'success';
            case 'ntfs':
                return 'info';
            case 'fat32':
            case 'fat':
                return 'warning';
            case 'xfs':
                return 'info';
            case 'btrfs':
                return 'success';
            default:
                return 'secondary';
        }
    };

    const truncateHash = (hash, length = 16) => {
        if (!hash) return 'Unknown';
        return hash.length > length ? `${hash.substring(0, length)}...` : hash;
    };

    if (loading) {
        return (
            <div className="machine-details">
                <PageHeader
                    title="Loading..."
                    subtitle="Fetching machine details"
                    actions={
                        <Button variant="secondary" onClick={() => navigate('/machines')}>
                            <ArrowLeft size={16} />
                            Back to Machines
                        </Button>
                    }
                />
                <LoadingSpinner />
            </div>
        );
    }

    if (!machine) {
        return (
            <div className="machine-details">
                <PageHeader
                    title="Machine Not Found"
                    subtitle="The requested machine could not be found"
                    actions={
                        <Button variant="secondary" onClick={() => navigate('/machines')}>
                            <ArrowLeft size={16} />
                            Back to Machines
                        </Button>
                    }
                />
                <EmptyState
                    icon={<Devices size={48} weight="duotone" />}
                    title="Machine Not Found"
                    subtitle="This machine may have been deleted or you don't have access to it."
                />
            </div>
        );
    }

    return (
        <div className="machine-details">
            <PageHeader
                title={machine.name}
                subtitle={
                    selectedSnapshot
                        ? `Snapshot Details`
                        : `Machine ID: ${machine.machine_id}`
                }
                actions={
                    selectedSnapshot ? (
                        <Button variant="secondary" onClick={backToSnapshots}>
                            <ArrowCircleLeft size={16} />
                            Back to Snapshots
                        </Button>
                    ) : (
                        <Button variant="secondary" onClick={() => navigate('/machines')}>
                            <ArrowLeft size={16} />
                            Back to Machines
                        </Button>
                    )
                }
            />

            <Grid columns={1} gap="large">
                {/* Machine Information - Only show when not viewing snapshot details */}
                {!selectedSnapshot && (
                    <Card>
                        <CardHeader>
                            <h3><Devices size={20} /> Machine Information</h3>
                        </CardHeader>
                        <CardBody>
                            <DetailList>
                                <DetailItem label="Name" value={machine.name} />
                                <DetailItem label="Machine ID" value={machine.machine_id} />
                                <DetailItem label="Created" value={formatDate(machine.created_at)} />
                                <DetailItem label="Status" value={
                                    <Badge variant="success">Active</Badge>
                                } />
                            </DetailList>
                        </CardBody>
                    </Card>
                )}

                {/* Snapshots List or Details */}
                {!selectedSnapshot ? (
                    /* Snapshots List */
                    <Card>
                        <CardHeader>
                            <h3><Camera size={20} /> Snapshots ({snapshots.length})</h3>
                        </CardHeader>
                        <CardBody>
                            {snapshots.length === 0 ? (
                                <EmptyState
                                    icon={<Camera size={48} weight="duotone" />}
                                    title="No Snapshots"
                                    subtitle="This machine hasn't created any snapshots yet."
                                />
                            ) : (
                                <Grid columns={1} gap="medium">
                                    {snapshots.map((snapshot) => (
                                        <Card key={snapshot.id} className="snapshot-summary-card">
                                            <CardBody>
                                                <div className="snapshot-summary">
                                                    <div className="snapshot-info">
                                                        <div className="snapshot-title">
                                                            <Camera size={18} />
                                                            <h4>Snapshot</h4>
                                                        </div>
                                                        <DetailList>
                                                            <DetailItem
                                                                label="Created"
                                                                value={
                                                                    <div className="snapshot-date">
                                                                        <Calendar size={14} />
                                                                        {formatDate(snapshot.created_at)}
                                                                    </div>
                                                                }
                                                            />
                                                            <DetailItem
                                                                label="Snapshot ID"
                                                                value={
                                                                    <div className="snapshot-hash">
                                                                        <Hash size={14} />
                                                                        <code>{truncateHash(snapshot.id, 24)}</code>
                                                                    </div>
                                                                }
                                                            />
                                                            <DetailItem
                                                                label="Hash"
                                                                value={
                                                                    <div className="snapshot-hash">
                                                                        <Hash size={14} />
                                                                        <code>{truncateHash(snapshot.snapshot_hash, 24)}</code>
                                                                    </div>
                                                                }
                                                            />
                                                        </DetailList>
                                                    </div>
                                                    <div className="snapshot-actions">
                                                        <Button
                                                            variant="primary"
                                                            size="small"
                                                            onClick={() => fetchSnapshotDetails(snapshot.id)}
                                                        >
                                                            <Eye size={16} />
                                                            View Details
                                                        </Button>
                                                    </div>
                                                </div>
                                            </CardBody>
                                        </Card>
                                    ))}
                                </Grid>
                            )}
                        </CardBody>
                    </Card>
                ) : (
                    /* Snapshot Details */
                    <Card>
                        <CardHeader>
                            <h3><Camera size={20} /> Snapshot {selectedSnapshot} Details</h3>
                        </CardHeader>
                        <CardBody>
                            {loadingDetails ? (
                                <LoadingSpinner />
                            ) : snapshotDetails ? (
                                <div className="snapshot-details">
                                    {/* Snapshot Metadata */}
                                    <Card className="snapshot-metadata">
                                        <CardHeader>
                                            <h4><Database size={18} /> Metadata</h4>
                                        </CardHeader>
                                        <CardBody>
                                            <DetailList>
                                                <DetailItem
                                                    label="Created"
                                                    value={
                                                        <div className="snapshot-date">
                                                            <Calendar size={14} />
                                                            {formatDate(snapshotDetails.created_at)}
                                                        </div>
                                                    }
                                                />
                                                <DetailItem
                                                    label="Hash"
                                                    value={
                                                        <div className="snapshot-hash">
                                                            <Hash size={14} />
                                                            <code>{snapshotDetails.snapshot_hash}</code>
                                                        </div>
                                                    }
                                                />
                                                <DetailItem
                                                    label="Disks"
                                                    value={`${snapshotDetails.disks.length} disk${snapshotDetails.disks.length !== 1 ? 's' : ''}`}
                                                />
                                            </DetailList>
                                        </CardBody>
                                    </Card>

                                    {/* Disks */}
                                    <div className="disks-section">
                                        <h4><HardDrive size={18} /> Disks ({snapshotDetails.disks.length})</h4>
                                        <Grid columns={1} gap="medium">
                                            {snapshotDetails.disks.map((disk, diskIndex) => (
                                                <Card key={diskIndex} className="disk-card">
                                                    <CardHeader>
                                                        <h5><HardDrive size={16} /> Disk {diskIndex + 1}</h5>
                                                    </CardHeader>
                                                    <CardBody>
                                                        <DetailList>
                                                            <DetailItem label="Serial" value={disk.serial || 'Unknown'} />
                                                            <DetailItem label="Size" value={formatBytes(disk.size_bytes)} />
                                                            <DetailItem
                                                                label="Partitions"
                                                                value={`${disk.partitions.length} partition${disk.partitions.length !== 1 ? 's' : ''}`}
                                                            />
                                                        </DetailList>

                                                        {/* Partitions */}
                                                        {disk.partitions.length > 0 && (
                                                            <div className="partitions-section">
                                                                <h6><Folder size={14} /> Partitions</h6>
                                                                <Grid columns="auto-fit" gap="1rem" minWidth="280px">
                                                                    {disk.partitions.map((partition, partIndex) => (
                                                                        <Card key={partIndex} className="partition-card">
                                                                            <CardHeader>
                                                                                <div className="partition-header">
                                                                                    <span>Partition {partIndex + 1}</span>
                                                                                    <Badge variant={getFsTypeColor(partition.fs_type)}>
                                                                                        {partition.fs_type.toUpperCase()}
                                                                                    </Badge>
                                                                                </div>
                                                                            </CardHeader>
                                                                            <CardBody>
                                                                                <DetailList>
                                                                                    <DetailItem label="Size" value={formatBytes(partition.size_bytes)} />
                                                                                    <DetailItem label="Start LBA" value={formatLBA(partition.start_lba)} />
                                                                                    <DetailItem label="End LBA" value={formatLBA(partition.end_lba)} />
                                                                                    <DetailItem
                                                                                        label="Sectors"
                                                                                        value={formatLBA(partition.end_lba - partition.start_lba)}
                                                                                    />
                                                                                </DetailList>
                                                                            </CardBody>
                                                                        </Card>
                                                                    ))}
                                                                </Grid>
                                                            </div>
                                                        )}
                                                    </CardBody>
                                                </Card>
                                            ))}
                                        </Grid>
                                    </div>
                                </div>
                            ) : (
                                <EmptyState
                                    icon={<Camera size={48} weight="duotone" />}
                                    title="No Details Available"
                                    subtitle="Unable to load snapshot details."
                                />
                            )}
                        </CardBody>
                    </Card>
                )}
            </Grid>
        </div>
    );
};

export default MachineDetails;
2
webui/src/pages/MachineDetails/index.js
Normal file
@@ -0,0 +1,2 @@
export { default } from './MachineDetails.jsx';
export { MachineDetails } from './MachineDetails.jsx';
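This two-line barrel file is what lets App.jsx import the page as `import MachineDetails from "@/pages/MachineDetails"` (default export), or via the named export, without referencing `MachineDetails.jsx` directly.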
232
webui/src/pages/MachineDetails/styles.sass
Normal file
@@ -0,0 +1,232 @@
// Machine Details Page Styles
.machine-details
    // Snapshot Summary Cards (list view)
    .snapshot-summary-card
        transition: all 0.2s ease
        cursor: pointer

        &:hover
            border-color: var(--border-strong)
            box-shadow: 0 4px 12px rgba(31, 36, 41, 0.1)
            transform: translateY(-1px)

    .snapshot-summary
        display: flex
        justify-content: space-between
        align-items: flex-start
        gap: 1.5rem

        .snapshot-info
            flex: 1

            .snapshot-title
                display: flex
                align-items: center
                gap: 0.75rem
                margin-bottom: 1rem

                h4
                    font-size: 1.125rem
                    font-weight: 600
                    color: var(--text)
                    margin: 0

            .snapshot-date
                display: flex
                align-items: center
                gap: 0.5rem
                font-size: 0.875rem
                color: var(--text-dim)

            .snapshot-hash
                display: flex
                align-items: center
                gap: 0.5rem
                font-size: 0.875rem

                code
                    background: var(--bg-elev)
                    padding: 0.25rem 0.5rem
                    border-radius: var(--radius-sm)
                    font-family: 'SF Mono', 'Monaco', 'Cascadia Code', 'Roboto Mono', monospace
                    color: var(--text-dim)
                    font-size: 0.8rem

        .snapshot-actions
            display: flex
            flex-direction: column
            gap: 0.5rem

    // Snapshot Detail View
    .snapshot-details
        .snapshot-metadata
            margin-bottom: 2rem
            background: linear-gradient(135deg, var(--bg-alt) 0%, var(--bg-elev) 100%)
            border: 1px solid var(--border)

        .disks-section
            h4
                font-size: 1.25rem
                font-weight: 600
                color: var(--text)
                margin-bottom: 1.5rem
                display: flex
                align-items: center
                gap: 0.75rem
                padding-bottom: 0.5rem
                border-bottom: 2px solid var(--border)

            .disk-card
                border: 1px solid var(--border)
                background: linear-gradient(135deg, var(--bg-alt) 0%, var(--bg-elev) 100%)
                transition: all 0.2s ease
                position: relative
                overflow: hidden

                &::before
                    content: ''
                    position: absolute
                    top: 0
                    left: 0
                    right: 0
                    height: 3px
                    background: linear-gradient(90deg, var(--accent) 0%, var(--success) 100%)
                    opacity: 0
                    transition: opacity 0.2s ease

                &:hover
                    border-color: var(--border-strong)
                    box-shadow: 0 6px 20px rgba(31, 36, 41, 0.15)
                    transform: translateY(-2px)

                    &::before
                        opacity: 1

                .partitions-section
                    margin-top: 2rem

                    h6
                        font-size: 1rem
                        font-weight: 600
                        color: var(--text)
                        margin-bottom: 1rem
                        display: flex
                        align-items: center
                        gap: 0.5rem
                        padding: 0.5rem 0
                        border-bottom: 1px solid var(--border)

                    .partition-card
                        border: 1px solid var(--border)
                        background: var(--bg-elev)
                        transition: all 0.2s ease
                        position: relative

                        &:hover
                            border-color: var(--border-strong)
                            box-shadow: 0 3px 10px rgba(31, 36, 41, 0.1)
                            transform: translateY(-1px)

                        .partition-header
                            display: flex
                            justify-content: space-between
                            align-items: center

                            span
                                font-size: 0.875rem
                                font-weight: 600
                                color: var(--text)

    // Enhanced visual feedback
    .snapshot-date, .snapshot-hash
        transition: color 0.2s ease

        &:hover
            color: var(--text)

    // Better spacing for detail items
    .detail-list
        .detail-item
            padding: 0.75rem 0
            border-bottom: 1px solid var(--border)

            &:last-child
                border-bottom: none

            .detail-label
                font-weight: 500
                color: var(--text-dim)
                font-size: 0.875rem
                text-transform: uppercase
                letter-spacing: 0.05em

            .detail-value
                font-weight: 500
                color: var(--text)

                code
                    background: var(--bg-elev)
                    padding: 0.25rem 0.5rem
                    border-radius: var(--radius-sm)
                    font-family: 'SF Mono', 'Monaco', 'Cascadia Code', 'Roboto Mono', monospace
                    font-size: 0.8rem
                    border: 1px solid var(--border)

    // Loading and error states
    .loading-section
        text-align: center
        padding: 3rem

        .spinner
            border: 3px solid var(--border)
            border-top: 3px solid var(--accent)
            border-radius: 50%
            width: 40px
            height: 40px
            animation: spin 1s linear infinite
            margin: 0 auto 1rem

@keyframes spin
    0%
        transform: rotate(0deg)
    100%
        transform: rotate(360deg)

// Responsive design
@media (max-width: 768px)
    .machine-details
        .snapshot-summary
            flex-direction: column
            gap: 1rem

        .snapshot-actions
            flex-direction: row
            align-self: stretch

        .disk-card .partitions-section h6
            font-size: 0.875rem

        .disks-section h4
            font-size: 1.125rem

// Visual hierarchy improvements
.machine-details
    .card
        box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1)

        &:hover
            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15)

    .badge
        font-weight: 600
        letter-spacing: 0.025em

        &.variant-success
            background: linear-gradient(135deg, var(--success) 0%, #22c55e 100%)

        &.variant-info
            background: linear-gradient(135deg, var(--info) 0%, #3b82f6 100%)

        &.variant-warning
            background: linear-gradient(135deg, var(--warning) 0%, #f59e0b 100%)

        &.variant-secondary
            background: linear-gradient(135deg, var(--text-dim) 0%, #6b7280 100%)
@@ -1,4 +1,5 @@
 import React, {useState, useEffect, useContext} from 'react';
+import {useNavigate} from 'react-router-dom';
 import {UserContext} from '@/common/contexts/UserContext.jsx';
 import {useToast} from '@/common/contexts/ToastContext.jsx';
 import {getRequest, postRequest, deleteRequest} from '@/common/utils/RequestUtil.js';
@@ -28,6 +29,7 @@ import './styles.sass';
 export const Machines = () => {
     const {user: currentUser} = useContext(UserContext);
     const toast = useToast();
+    const navigate = useNavigate();
     const [machines, setMachines] = useState([]);
     const [loading, setLoading] = useState(true);
     const [showCreateModal, setShowCreateModal] = useState(false);
@@ -179,6 +181,14 @@ export const Machines = () => {
         }
     };
 
+    const handleMachineClick = (machineId) => {
+        navigate(`/machines/${machineId}`);
+    };
+
+    const handleActionClick = (e) => {
+        e.stopPropagation(); // Prevent navigation when clicking action buttons
+    };
+
     const handleInputChange = (e) => {
         const {name, value} = e.target;
         setFormData(prev => ({
@@ -220,7 +230,13 @@ export const Machines = () => {
 
             <Grid minWidth="400px">
                 {machines.map(machine => (
-                    <Card key={machine.id} hover className="machine-card">
+                    <Card
+                        key={machine.id}
+                        hover
+                        className="machine-card"
+                        onClick={() => handleMachineClick(machine.id)}
+                        style={{ cursor: 'pointer' }}
+                    >
                         <CardHeader>
                             <div className="machine-card-header">
                                 <div className="machine-icon">
@@ -233,7 +249,7 @@ export const Machines = () => {
                                     <span className="uuid-text">{formatUuid(machine.uuid)}</span>
                                 </div>
                             </div>
-                            <div className="machine-actions">
+                            <div className="machine-actions" onClick={handleActionClick}>
                                 <Button
                                     variant="subtle"
                                     size="sm"