Add working test

commit fa00747e80
parent 4e38b13faa
2025-09-09 22:23:01 +02:00
27 changed files with 2373 additions and 46 deletions

PROTOCOL.md (new file, 396 lines)

@@ -0,0 +1,396 @@
Good insight, and you are right: **disk hashes alone** are not enough; the sync client has to build the **entire object-graph structure** (Disk → Partition → RootDir → Dirs → Files → Chunks), and the server has to store and verify these meta objects.
I am updating the protocol so that it is complete, binary, and implementable by an AI, with clear message formats, binary layouts for the meta objects, and pseudocode for the client **(what it has to generate/compute)** and for the server **(how it has to verify and persist)**.
I am keeping it pragmatic: no compression extensions, no encryption; everything stays simple and can be extended later.
---
# Quick overview (what happens)
1. The client scans the disk/partition, chunks files (4 MiB), and computes the chunk hashes (BLAKE3-256, 32 B); see the sketch after this list.
2. The client builds binary meta objects: FileObj, DirObj, PartitionObj, DiskObj, SnapshotObj (for each, the body is hashed: `meta_hash = BLAKE3(body)`).
3. The client talks to the server over TCP (HELLO → AUTH), asks in batches which chunks/metas are missing, and sends only those.
4. Finally it sends the snapshot commit; the server validates it and writes the snapshot reference (the server keeps the pointer).
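For the chunking in step 1, here is a minimal sketch, assuming a Rust client using the `blake3` crate (the same crate the test client in this commit depends on); the function name and buffering strategy are illustrative, not part of the protocol:
```rust
use std::fs::File;
use std::io::{BufReader, Read, Result};

const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB, as specified above

/// Read a file in 4 MiB chunks and return the BLAKE3-256 hash of each chunk.
fn chunk_hashes(path: &str) -> Result<Vec<[u8; 32]>> {
    let mut reader = BufReader::new(File::open(path)?);
    let mut buf = vec![0u8; CHUNK_SIZE];
    let mut hashes = Vec::new();
    loop {
        // Fill the buffer as far as possible; the final chunk may be shorter.
        let mut filled = 0;
        while filled < CHUNK_SIZE {
            let n = reader.read(&mut buf[filled..])?;
            if n == 0 { break; }
            filled += n;
        }
        if filled == 0 { break; }
        hashes.push(blake3::hash(&buf[..filled]).into());
    }
    Ok(hashes)
}
```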
---
# General message structure (envelopes)
Every message: a fixed 24-byte header plus payload:
```
struct MsgHeader {
u8 cmd; // command code (see table below)
u8 flags; // reserved
u8 reserved[2];
u8 session_id[16]; // all zeros before AUTH_OK
u32 payload_len; // LE
}
```
Response messages use the same envelope.
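A minimal sketch of packing and parsing this header in Rust (field order and the little-endian length exactly as above; the function names are mine, not part of the spec):
```rust
/// Pack the 24-byte header: cmd, flags, reserved, session_id, payload_len (LE).
fn pack_header(cmd: u8, session_id: &[u8; 16], payload_len: u32) -> [u8; 24] {
    let mut buf = [0u8; 24];
    buf[0] = cmd;
    // buf[1] (flags) and buf[2..4] (reserved) stay zero
    buf[4..20].copy_from_slice(session_id);
    buf[20..24].copy_from_slice(&payload_len.to_le_bytes());
    buf
}

/// Parse a header back into (cmd, session_id, payload_len).
fn parse_header(buf: &[u8; 24]) -> (u8, [u8; 16], u32) {
    let mut session_id = [0u8; 16];
    session_id.copy_from_slice(&buf[4..20]);
    let payload_len = u32::from_le_bytes(buf[20..24].try_into().unwrap());
    (buf[0], session_id, payload_len)
}
```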
---
# Command-Codes (u8)
* 0x01 HELLO
* 0x02 HELLO_OK
* 0x10 AUTH_USERPASS
* 0x11 AUTH_CODE
* 0x12 AUTH_OK
* 0x13 AUTH_FAIL
* 0x20 BATCH_CHECK_CHUNK
* 0x21 CHECK_CHUNK_RESP
* 0x22 SEND_CHUNK
* 0x23 CHUNK_OK
* 0x24 CHUNK_FAIL
* 0x30 BATCH_CHECK_META
* 0x31 CHECK_META_RESP
* 0x32 SEND_META
* 0x33 META_OK
* 0x34 META_FAIL
* 0x40 SEND_SNAPSHOT (Snapshot-Commit)
* 0x41 SNAPSHOT_OK
* 0x42 SNAPSHOT_FAIL
* 0xFF CLOSE
---
# Key design decisions (brief)
* **Hashes**: BLAKE3-256 (32 bytes). The client computes all hashes (chunks + meta bodies).
* **Chunks on the wire**: uncompressed (simple & reliable). Compression would be a later extension.
* **Meta object body**: compact binary structures (see below). `meta_hash = BLAKE3(body)`.
* **Batch checks**: the client asks for missing chunks/metas in batches (the server returns only the hashes that are actually missing). This minimizes round trips.
* **Server persistence**: `chunks/<ab>/<cd>/<hash>.chk`, `meta/<type>/<ab>/<cd>/<hash>.meta`. The server manages the snapshot pointers (e.g. `machines/<client>/snapshots/<id>.ref`).
* **Snapshot commit**: the server validates the object graph before completing; if anything is missing, it sends the list back (SNAPSHOT_FAIL with a missing list).
---
# Binary Payload-Formate
All multi-byte counters/lengths are little-endian (`LE`).
## A) BATCH_CHECK_CHUNK (Client → Server)
```
payload:
u32 count
for i in 0..count:
u8[32] chunk_hash
```
## B) CHECK_CHUNK_RESP (Server → Client)
```
payload:
u32 missing_count
for i in 0..missing_count:
u8[32] missing_chunk_hash
```
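To make the wire format concrete, a sketch of building a BATCH_CHECK_CHUNK payload and parsing the matching CHECK_CHUNK_RESP (std-only Rust; the helper names are illustrative):
```rust
/// Build the BATCH_CHECK_CHUNK payload: u32 count (LE) + count * 32-byte hashes.
fn build_batch_check_chunk(hashes: &[[u8; 32]]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(4 + hashes.len() * 32);
    payload.extend_from_slice(&(hashes.len() as u32).to_le_bytes());
    for h in hashes {
        payload.extend_from_slice(h);
    }
    payload
}

/// Parse CHECK_CHUNK_RESP: u32 missing_count (LE) + missing_count * 32-byte hashes.
/// Returns None on a truncated payload.
fn parse_check_chunk_resp(payload: &[u8]) -> Option<Vec<[u8; 32]>> {
    let count = u32::from_le_bytes(payload.get(..4)?.try_into().ok()?) as usize;
    let mut missing = Vec::with_capacity(count);
    for i in 0..count {
        let start = 4 + i * 32;
        let mut h = [0u8; 32];
        h.copy_from_slice(payload.get(start..start + 32)?);
        missing.push(h);
    }
    Some(missing)
}
```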
## C) SEND_CHUNK (Client → Server)
```
payload:
u8[32] chunk_hash
u32 size
u8[size] data // raw chunk bytes
```
The server computes BLAKE3(data) and compares it to `chunk_hash`; if they match, it stores the chunk.
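The server-side acceptance test is just a hash comparison; a one-function sketch in Rust:
```rust
/// SEND_CHUNK acceptance: the data must hash back to the claimed chunk_hash.
/// true -> store the chunk (or no-op if it already exists) and reply CHUNK_OK;
/// false -> reply CHUNK_FAIL and drop the data.
fn chunk_is_valid(chunk_hash: &[u8; 32], data: &[u8]) -> bool {
    blake3::hash(data).as_bytes() == chunk_hash
}
```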
## D) BATCH_CHECK_META
```
payload:
u32 count
for i in 0..count:
u8 meta_type // 1=file,2=dir,3=partition,4=disk,5=snapshot
u8[32] meta_hash
```
## E) CHECK_META_RESP
```
payload:
u32 missing_count
for i in 0..missing_count:
u8 meta_type
u8[32] meta_hash
```
## F) SEND_META
```
payload:
u8 meta_type // 1..5
u8[32] meta_hash
u32 body_len
u8[body_len] body_bytes // the canonical body; server will BLAKE3(body_bytes) and compare to meta_hash
```
## G) SEND_SNAPSHOT (Commit)
```
payload:
u8[32] snapshot_hash
u32 body_len
u8[body_len] snapshot_body // Snapshot body same encoding as meta (server validates body hash == snapshot_hash)
```
The server validates that snapshot_body references only existing meta objects (recursive/direct check). If everything is present, it creates a persistent snapshot pointer and replies SNAPSHOT_OK; otherwise it replies SNAPSHOT_FAIL with the missing list (same format as CHECK_META_RESP).
---
# Meta-Objekt-Binärformate (Bodies)
> The client produces `body_bytes` for every meta object; `meta_hash = BLAKE3(body_bytes)`.
### FileObj (meta_type = 1)
```
FileObjBody:
u8 version (1)
u32 fs_type_code // e.g. 1=ext*, 2=ntfs, 3=fat32 (enum)
u64 size
u32 mode // POSIX mode for linux; 0 for FS without
u32 uid
u32 gid
u64 mtime_unixsec
u32 chunk_count
for i in 0..chunk_count:
u8[32] chunk_hash
// optional: xattrs/ACLs TLV (not in v1)
```
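A sketch of producing a FileObj body exactly per this layout and deriving its `meta_hash` (illustrative Rust; the parameter list simply mirrors the fields above):
```rust
/// Serialize a FileObjBody (v1) and compute meta_hash = BLAKE3(body).
fn file_obj_body(
    fs_type_code: u32, size: u64, mode: u32, uid: u32, gid: u32,
    mtime_unixsec: u64, chunk_hashes: &[[u8; 32]],
) -> (Vec<u8>, [u8; 32]) {
    let mut body = Vec::new();
    body.push(1u8); // version
    body.extend_from_slice(&fs_type_code.to_le_bytes());
    body.extend_from_slice(&size.to_le_bytes());
    body.extend_from_slice(&mode.to_le_bytes());
    body.extend_from_slice(&uid.to_le_bytes());
    body.extend_from_slice(&gid.to_le_bytes());
    body.extend_from_slice(&mtime_unixsec.to_le_bytes());
    body.extend_from_slice(&(chunk_hashes.len() as u32).to_le_bytes());
    for h in chunk_hashes {
        body.extend_from_slice(h);
    }
    let meta_hash: [u8; 32] = blake3::hash(&body).into();
    (body, meta_hash)
}
```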
### DirObj (meta_type = 2)
```
DirObjBody:
u8 version (1)
u32 entry_count
for each entry:
u8 entry_type // 0 = file, 1 = dir, 2 = symlink
u16 name_len
u8[name_len] name (UTF-8)
u8[32] target_meta_hash
```
### PartitionObj (meta_type = 3)
```
PartitionObjBody:
u8 version (1)
u32 fs_type_code
u8[32] root_dir_hash // DirObj hash for root of this partition
u64 start_lba
u64 end_lba
u8[16] type_guid // zeroed if unused
```
### DiskObj (meta_type = 4)
```
DiskObjBody:
u8 version (1)
u32 partition_count
for i in 0..partition_count:
u8[32] partition_hash
u64 disk_size_bytes
u16 serial_len
u8[serial_len] serial_bytes
```
### SnapshotObj (meta_type = 5)
```
SnapshotObjBody:
u8 version (1)
u64 created_at_unixsec
u32 disk_count
for i in 0..disk_count:
u8[32] disk_hash
// optional: snapshot metadata (user, note) as TLV extension later
```
---
# Flow (pseudocode): **client side (sync client)**
(The client generates all hashes and sends only what is missing, in batches.)
```text
FUNCTION client_backup(tcp_conn, computer_id, disks):
send_msg(HELLO{client_type=0, auth_type=0})
await HELLO_OK
send_msg(AUTH_USERPASS{username,password})
resp = await
if resp != AUTH_OK: abort
session_id = resp.session_id
// traverse per-partition to limit memory
snapshot_disk_hashes = []
FOR disk IN disks:
partition_hashes = []
FOR part IN disk.partitions:
root_dir_hash = process_dir(part.root_path, tcp_conn)
part_body = build_partition_body(part.fs_type, root_dir_hash, part.start, part.end, part.guid)
part_hash = blake3(part_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=3, [(part_hash,part_body)])
partition_hashes.append(part_hash)
disk_body = build_disk_body(partition_hashes, disk.size, disk.serial)
disk_hash = blake3(disk_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=4, [(disk_hash,disk_body)])
snapshot_disk_hashes.append(disk_hash)
snapshot_body = build_snapshot_body(now(), snapshot_disk_hashes)
snapshot_hash = blake3(snapshot_body)
// final TRY: ask server if snapshot can be committed (server will verify)
send_msg(SEND_SNAPSHOT(snapshot_hash, snapshot_body))
resp = await
if resp == SNAPSHOT_OK: success
else if resp == SNAPSHOT_FAIL: // server returns missing meta list
// receive missing metas; client should send the remaining missing meta/chunks (loop)
handle_missing_and_retry()
```
Helper functions:
```text
FUNCTION process_dir(path, tcp_conn):
entries_meta = [] // list of (name, entry_type, target_hash)
collect a list meta_to_check_for_this_dir = []
FOR entry IN readdir(path):
IF entry.is_file:
file_hash = process_file(entry.path, tcp_conn) // below
entries_meta.append((entry.name, 0, file_hash))
ELSE IF entry.is_dir:
subdir_hash = process_dir(entry.path, tcp_conn)
entries_meta.append((entry.name, 1, subdir_hash))
ELSE IF symlink:
symlink_body = build_symlink_body(target)
symlink_hash = blake3(symlink_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(symlink_hash, symlink_body)])
entries_meta.append((entry.name, 2, symlink_hash))
dir_body = build_dir_body(entries_meta)
dir_hash = blake3(dir_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=2, [(dir_hash,dir_body)])
RETURN dir_hash
```
```text
FUNCTION process_file(path, tcp_conn):
chunk_hashes = []
FOR each chunk IN read_in_chunks(path, 4*1024*1024):
chunk_hash = blake3(chunk)
chunk_hashes.append(chunk_hash)
// Batch-check chunks for this file
missing = batch_check_chunks(tcp_conn, chunk_hashes)
FOR each missing_hash IN missing:
chunk_bytes = read_chunk_by_hash_from_disk(path, missing_hash) // or buffer earlier
send_msg(SEND_CHUNK {hash,size,data})
await CHUNK_OK
file_body = build_file_body(fs_type, size, mode, uid, gid, mtime, chunk_hashes)
file_hash = blake3(file_body)
batch_check_and_send_meta_if_missing(tcp_conn, meta_type=1, [(file_hash,file_body)])
RETURN file_hash
```
`batch_check_and_send_meta_if_missing`:
* Send BATCH_CHECK_META for all items
* Server returns list of missing metas
* For each missing, send SEND_META(meta_type, meta_hash, body)
* Await META_OK
Note: batching per directory/file group reduces round trips.
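In Rust terms the helper could look like this; `Conn` is an assumed abstraction over the envelope from the start of this document, not an existing type in this repo:
```rust
use std::io;

/// Assumed connection wrapper that frames the 24-byte envelope.
trait Conn {
    fn send_msg(&mut self, cmd: u8, payload: &[u8]) -> io::Result<()>;
    fn recv_msg(&mut self) -> io::Result<(u8, Vec<u8>)>; // (cmd, payload)
}

/// BATCH_CHECK_META all items, then SEND_META only the ones reported missing.
fn batch_check_and_send_meta_if_missing<C: Conn>(
    conn: &mut C,
    items: &[(u8, [u8; 32], Vec<u8>)], // (meta_type, meta_hash, body)
) -> io::Result<()> {
    // 1) BATCH_CHECK_META (0x30)
    let mut payload = Vec::new();
    payload.extend_from_slice(&(items.len() as u32).to_le_bytes());
    for (ty, hash, _) in items {
        payload.push(*ty);
        payload.extend_from_slice(hash);
    }
    conn.send_msg(0x30, &payload)?;

    // 2) Parse CHECK_META_RESP (0x31): u32 count + count * (type, hash)
    let (cmd, resp) = conn.recv_msg()?;
    assert_eq!(cmd, 0x31);
    let count = u32::from_le_bytes(resp[0..4].try_into().unwrap()) as usize;
    let missing: Vec<(u8, [u8; 32])> = (0..count)
        .map(|i| {
            let o = 4 + i * 33;
            (resp[o], resp[o + 1..o + 33].try_into().unwrap())
        })
        .collect();

    // 3) SEND_META (0x32) for each missing item, awaiting META_OK (0x33)
    for (ty, hash, body) in items {
        if !missing.contains(&(*ty, *hash)) {
            continue;
        }
        let mut p = vec![*ty];
        p.extend_from_slice(hash);
        p.extend_from_slice(&(body.len() as u32).to_le_bytes());
        p.extend_from_slice(body);
        conn.send_msg(0x32, &p)?;
        let (ok, _) = conn.recv_msg()?;
        assert_eq!(ok, 0x33, "expected META_OK");
    }
    Ok(())
}
```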
---
# Flow (pseudocode): **server side (sync server)**
```text
ON connection:
read HELLO -> verify allowed client type
send HELLO_OK OR HELLO_FAIL
ON AUTH_USERPASS:
validate credentials
if ok: generate session_id (16B), send AUTH_OK{session_id}
else send AUTH_FAIL
ON BATCH_CHECK_CHUNK:
read list of hashes
missing_list = []
for hash in hashes:
if not exists chunks/shard(hash): missing_list.append(hash)
send CHECK_CHUNK_RESP {missing_list}
ON SEND_CHUNK:
read chunk_hash, size, data
computed = blake3(data)
if computed != chunk_hash: send CHUNK_FAIL{reason} and drop
else if exists chunk already: send CHUNK_OK
else: write atomic to chunks/<ab>/<cd>/<hash>.chk and send CHUNK_OK
ON BATCH_CHECK_META:
similar: check meta/<type>/<hash>.meta exists — return missing list
ON SEND_META:
verify blake3(body) == meta_hash; if ok write meta/<type>/<ab>/<cd>/<hash>.meta atomically; respond META_OK
ON SEND_SNAPSHOT:
verify blake3(snapshot_body) == snapshot_hash
// Validate the object graph:
missing = validate_graph(snapshot_body) // DFS: disks -> partitions -> dirs -> files -> chunks
if missing not empty:
send SNAPSHOT_FAIL {missing (as meta list and/or chunk list)}
else:
store snapshot file and create pointer machines/<client_id>/snapshots/<id>.ref
send SNAPSHOT_OK {snapshot_id}
```
`validate_graph`:
* parse snapshot_body → disk_hashes
* for each disk_hash, check that its meta exists; load the disk meta → for each partition_hash check that its meta exists, and so on recursively through dir entries → file metas → chunk existence for each chunk_hash; collect the missing set and return it (see the sketch below).
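A self-contained sketch of this walk, done iteratively with a worklist; `Store` is an assumed abstraction over the chunk/meta directories, and the offsets follow the body layouts defined above:
```rust
/// Assumed storage abstraction; in the real server these map to
/// meta/<type>/<ab>/<cd>/<hash>.meta and chunks/<ab>/<cd>/<hash>.chk lookups.
trait Store {
    fn meta_body(&self, meta_type: u8, hash: &[u8; 32]) -> Option<Vec<u8>>;
    fn chunk_exists(&self, hash: &[u8; 32]) -> bool;
}

fn rd_u32(b: &[u8], o: usize) -> u32 { u32::from_le_bytes(b[o..o + 4].try_into().unwrap()) }
fn rd_hash(b: &[u8], o: usize) -> [u8; 32] { b[o..o + 32].try_into().unwrap() }

/// Walk the snapshot graph; returns (missing metas as (type, hash), missing chunks).
fn validate_graph<S: Store>(store: &S, snapshot_body: &[u8]) -> (Vec<(u8, [u8; 32])>, Vec<[u8; 32]>) {
    let (mut metas, mut chunks) = (Vec::new(), Vec::new());
    // Seed with the snapshot's disk hashes: skip version(1) + created_at(8).
    let n = rd_u32(snapshot_body, 9) as usize;
    let mut stack: Vec<(u8, [u8; 32])> =
        (0..n).map(|i| (4, rd_hash(snapshot_body, 13 + i * 32))).collect();
    while let Some((ty, hash)) = stack.pop() {
        let Some(body) = store.meta_body(ty, &hash) else {
            metas.push((ty, hash)); // missing: we cannot descend any further
            continue;
        };
        match ty {
            4 => { // disk -> partition hashes (after version + u32 count)
                for i in 0..rd_u32(&body, 1) as usize {
                    stack.push((3, rd_hash(&body, 5 + i * 32)));
                }
            }
            3 => stack.push((2, rd_hash(&body, 5))), // partition -> root DirObj
            2 => { // dir -> entry targets (entry_type 1 = dir, else file meta)
                let mut o = 5;
                for _ in 0..rd_u32(&body, 1) as usize {
                    let (et, name_len) =
                        (body[o], u16::from_le_bytes(body[o + 1..o + 3].try_into().unwrap()) as usize);
                    o += 3 + name_len;
                    stack.push((if et == 1 { 2 } else { 1 }, rd_hash(&body, o)));
                    o += 32;
                }
            }
            1 => { // file -> chunk existence; chunk_count sits at offset 33
                for i in 0..rd_u32(&body, 33) as usize {
                    let h = rd_hash(&body, 37 + i * 32);
                    if !store.chunk_exists(&h) { chunks.push(h); }
                }
            }
            _ => {} // unknown meta type: nothing to descend into
        }
    }
    (metas, chunks)
}
```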
---
# Behavior on `SNAPSHOT_FAIL`
* The server returns the missing meta/chunk hashes.
* The client sends exactly those (batched) and retries `SEND_SNAPSHOT`.
* Alternatively, the client can upload all required metas/chunks incrementally on the first pass (that is the order this pseudocode already follows, so nothing is missing at commit time).
---
# Storage / paths (server-internal)
* `chunks/<ab>/<cd>/<hash>.chk` (ab = first 2 hex chars; cd = next 2)
* `meta/files/<ab>/<cd>/<hash>.meta`
* `meta/dirs/<...>`
* `meta/parts/...`
* `meta/disks/...`
* `meta/snapshots/<snapshot_hash>.meta`
* `machines/<client_id>/snapshots/<snapshot_id>.ref` (Pointer -> snapshot_hash + timestamp)
Atomic writes: `tmp -> rename`.
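A sketch of the sharding and the `tmp -> rename` write (std plus the `hex` crate already used in this commit; the temp-file naming is illustrative, a real implementation should use a unique name):
```rust
use std::fs;
use std::io::Result;
use std::path::{Path, PathBuf};

/// chunks/<ab>/<cd>/<hash>.chk, sharded on the first two bytes of the hex hash.
fn chunk_path(data_dir: &Path, hash: &[u8; 32]) -> PathBuf {
    let hex = hex::encode(hash);
    data_dir
        .join("chunks")
        .join(&hex[0..2])
        .join(&hex[2..4])
        .join(format!("{hex}.chk"))
}

/// Atomic write: write to a temp file in the target directory, then rename.
fn write_atomic(path: &Path, data: &[u8]) -> Result<()> {
    let dir = path.parent().expect("storage paths always have a parent");
    fs::create_dir_all(dir)?;
    let tmp = dir.join(".tmp-write"); // illustrative; use a unique name in practice
    fs::write(&tmp, data)?;
    fs::rename(&tmp, path) // rename within one filesystem is atomic on POSIX
}
```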
---
# Important implementation notes for the AI/server implementation
* **Batching is mandatory**: implement `BATCH_CHECK_CHUNK` & `BATCH_CHECK_META` efficiently (bitset, HashSet lookups).
* **Limits**: cap `count` per batch (e.g. 1000); the client must split its hash lists accordingly (see the sketch after this list).
* **Validation**: the server must validate the graph on `SEND_SNAPSHOT` (otherwise consistency is lost).
* **Atomic snapshot commit**: persist only once the graph is fully present.
* **Session ID**: must be carried in the header of all subsequent messages.
* **Performance**: parallelize chunk uploads (multiple TCP tasks) and let the server handle several handshakes concurrently.
* **Security**: in production use TLS/TCP or a VPN; add rate limiting / brute-force protection; give provisioning codes a TTL.
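For the batch limit, the client-side splitting is just `chunks()` over the hash list; a sketch assuming a limit of 1000 (the example value above):
```rust
const BATCH_LIMIT: usize = 1000; // example cap; the server enforces its own value

fn main() {
    let all_hashes = vec![[0u8; 32]; 2500];
    // 2500 hashes -> batches of 1000, 1000, 500; one BATCH_CHECK_CHUNK each.
    for batch in all_hashes.chunks(BATCH_LIMIT) {
        println!("would send a batch of {} hashes", batch.len());
    }
}
```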


@@ -0,0 +1,56 @@
{
"db_name": "SQLite",
"query": "\n SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username\n FROM provisioning_codes pc\n JOIN machines m ON pc.machine_id = m.id\n JOIN users u ON m.user_id = u.id\n WHERE pc.code = ? AND pc.used = 0\n ",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "code",
"ordinal": 1,
"type_info": "Text"
},
{
"name": "expires_at",
"ordinal": 2,
"type_info": "Datetime"
},
{
"name": "used",
"ordinal": 3,
"type_info": "Bool"
},
{
"name": "machine_id",
"ordinal": 4,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 5,
"type_info": "Integer"
},
{
"name": "username",
"ordinal": 6,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
true,
true,
false,
false
]
},
"hash": "2d6e5810f76e780a4a9b54c5ea39d707be614eb304dc6b4f32d8b6d28464c4b5"
}


@@ -0,0 +1,26 @@
{
"db_name": "SQLite",
"query": "SELECT id, user_id FROM machines WHERE id = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 1,
"type_info": "Integer"
}
],
"parameters": {
"Right": 1
},
"nullable": [
false,
false
]
},
"hash": "43af0c22d05eca56b2a7b1f6eed873102d8e006330fd7d8063657d2df936b3fb"
}


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "UPDATE provisioning_codes SET used = 1 WHERE id = ?",
"describe": {
"columns": [],
"parameters": {
"Right": 1
},
"nullable": []
},
"hash": "508e673540beae31730d323bbb52d91747bb405ef3d6f4a7f20776fdeb618688"
}


@@ -0,0 +1,32 @@
{
"db_name": "SQLite",
"query": "SELECT id, username, password_hash FROM users WHERE username = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "username",
"ordinal": 1,
"type_info": "Text"
},
{
"name": "password_hash",
"ordinal": 2,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false
]
},
"hash": "9f9215a05f729db6f707c84967f4f11033d39d17ded98f4fe9fb48f3d1598596"
}


@@ -0,0 +1,26 @@
{
"db_name": "SQLite",
"query": "SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
"describe": {
"columns": [
{
"name": "id",
"ordinal": 0,
"type_info": "Integer"
},
{
"name": "user_id",
"ordinal": 1,
"type_info": "Integer"
}
],
"parameters": {
"Right": 2
},
"nullable": [
false,
false
]
},
"hash": "cc5f2e47cc53dd29682506ff84f07f7d0914e3141e62b470e84b3886b50764a1"
}


@@ -87,6 +87,7 @@ impl MachinesController {
id: row.get("id"),
user_id: row.get("user_id"),
uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
machine_id: row.get::<String, _>("uuid"),
name: row.get("name"),
created_at: row.get("created_at"),
})
@@ -109,6 +110,7 @@ impl MachinesController {
id: row.get("id"),
user_id: row.get("user_id"),
uuid: Uuid::parse_str(&row.get::<String, _>("uuid")).unwrap(),
machine_id: row.get::<String, _>("uuid"),
name: row.get("name"),
created_at: row.get("created_at"),
});


@@ -1,3 +1,4 @@
pub mod auth;
pub mod machines;
pub mod snapshots;
pub mod users;


@@ -0,0 +1,217 @@
use crate::sync::storage::Storage;
use crate::sync::meta::{MetaObj, FsType};
use crate::sync::protocol::MetaType;
use crate::utils::{error::*, models::*, DbPool};
use serde::Serialize;
use chrono::{DateTime, Utc};
#[derive(Debug, Serialize)]
pub struct SnapshotInfo {
pub id: String, // Use UUID string instead of integer
pub snapshot_hash: String,
pub created_at: String,
pub disks: Vec<DiskInfo>,
}
#[derive(Debug, Serialize)]
pub struct DiskInfo {
pub serial: String,
pub size_bytes: u64,
pub partitions: Vec<PartitionInfo>,
}
#[derive(Debug, Serialize)]
pub struct PartitionInfo {
pub fs_type: String,
pub start_lba: u64,
pub end_lba: u64,
pub size_bytes: u64,
}
pub struct SnapshotsController;
impl SnapshotsController {
pub async fn get_machine_snapshots(
pool: &DbPool,
machine_id: i64,
user: &User,
) -> AppResult<Vec<SnapshotInfo>> {
// Verify machine access
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
machine_id,
user.id
)
.fetch_optional(pool)
.await
.map_err(|e| AppError::DatabaseError(e.to_string()))?;
if machine.is_none() {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
let _machine = machine.unwrap();
let storage = Storage::new("./data");
let mut snapshot_infos = Vec::new();
// List all snapshots for this machine from storage
match storage.list_snapshots(machine_id).await {
Ok(snapshot_ids) => {
for snapshot_id in snapshot_ids {
// Load snapshot reference to get hash and timestamp
if let Ok(Some((snapshot_hash, created_at_timestamp))) = storage.load_snapshot_ref(machine_id, &snapshot_id).await {
// Load snapshot metadata
if let Ok(Some(snapshot_meta)) = storage.load_meta(MetaType::Snapshot, &snapshot_hash).await {
if let MetaObj::Snapshot(snapshot_obj) = snapshot_meta {
let mut disks = Vec::new();
for disk_hash in snapshot_obj.disk_hashes {
if let Ok(Some(disk_meta)) = storage.load_meta(MetaType::Disk, &disk_hash).await {
if let MetaObj::Disk(disk_obj) = disk_meta {
let mut partitions = Vec::new();
for partition_hash in disk_obj.partition_hashes {
if let Ok(Some(partition_meta)) = storage.load_meta(MetaType::Partition, &partition_hash).await {
if let MetaObj::Partition(partition_obj) = partition_meta {
let fs_type_str = match partition_obj.fs_type_code {
FsType::Ext => "ext",
FsType::Ntfs => "ntfs",
FsType::Fat32 => "fat32",
FsType::Unknown => "unknown",
};
partitions.push(PartitionInfo {
fs_type: fs_type_str.to_string(),
start_lba: partition_obj.start_lba,
end_lba: partition_obj.end_lba,
size_bytes: (partition_obj.end_lba - partition_obj.start_lba) * 512, // assumes 512-byte LBA sectors
});
}
}
}
disks.push(DiskInfo {
serial: disk_obj.serial,
size_bytes: disk_obj.disk_size_bytes,
partitions,
});
}
}
}
// Convert timestamp to readable format
let created_at_str = DateTime::<Utc>::from_timestamp(created_at_timestamp as i64, 0)
.map(|dt| dt.format("%Y-%m-%d %H:%M:%S").to_string())
.unwrap_or_else(|| "Unknown".to_string());
snapshot_infos.push(SnapshotInfo {
id: snapshot_id,
snapshot_hash: hex::encode(snapshot_hash),
created_at: created_at_str,
disks,
});
}
}
}
}
}
Err(_) => {
// If no snapshots directory exists, return empty list
return Ok(Vec::new());
}
}
// Sort snapshots by creation time (newest first)
snapshot_infos.sort_by(|a, b| b.created_at.cmp(&a.created_at));
Ok(snapshot_infos)
}
pub async fn get_snapshot_details(
pool: &DbPool,
machine_id: i64,
snapshot_id: String,
user: &User,
) -> AppResult<SnapshotInfo> {
// Verify machine access
let machine = sqlx::query!(
"SELECT id, user_id FROM machines WHERE id = ? AND user_id = ?",
machine_id,
user.id
)
.fetch_optional(pool)
.await
.map_err(|e| AppError::DatabaseError(e.to_string()))?;
if machine.is_none() {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
let _machine = machine.unwrap();
let storage = Storage::new("./data");
// Load snapshot reference to get hash and timestamp
let (snapshot_hash, created_at_timestamp) = storage.load_snapshot_ref(machine_id, &snapshot_id).await
.map_err(|_| AppError::NotFoundError("Snapshot not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot not found".to_string()))?;
// Load snapshot metadata
let snapshot_meta = storage.load_meta(MetaType::Snapshot, &snapshot_hash).await
.map_err(|_| AppError::NotFoundError("Snapshot metadata not found".to_string()))?
.ok_or_else(|| AppError::NotFoundError("Snapshot metadata not found".to_string()))?;
if let MetaObj::Snapshot(snapshot_obj) = snapshot_meta {
let mut disks = Vec::new();
for disk_hash in snapshot_obj.disk_hashes {
if let Ok(Some(disk_meta)) = storage.load_meta(MetaType::Disk, &disk_hash).await {
if let MetaObj::Disk(disk_obj) = disk_meta {
let mut partitions = Vec::new();
for partition_hash in disk_obj.partition_hashes {
if let Ok(Some(partition_meta)) = storage.load_meta(MetaType::Partition, &partition_hash).await {
if let MetaObj::Partition(partition_obj) = partition_meta {
let fs_type_str = match partition_obj.fs_type_code {
FsType::Ext => "ext",
FsType::Ntfs => "ntfs",
FsType::Fat32 => "fat32",
FsType::Unknown => "unknown",
};
partitions.push(PartitionInfo {
fs_type: fs_type_str.to_string(),
start_lba: partition_obj.start_lba,
end_lba: partition_obj.end_lba,
size_bytes: (partition_obj.end_lba - partition_obj.start_lba) * 512, // assumes 512-byte LBA sectors
});
}
}
}
disks.push(DiskInfo {
serial: disk_obj.serial,
size_bytes: disk_obj.disk_size_bytes,
partitions,
});
}
}
}
// Convert timestamp to readable format
let created_at_str = DateTime::<Utc>::from_timestamp(created_at_timestamp as i64, 0)
.map(|dt| dt.format("%Y-%m-%d %H:%M:%S").to_string())
.unwrap_or_else(|| "Unknown".to_string());
Ok(SnapshotInfo {
id: snapshot_id,
snapshot_hash: hex::encode(snapshot_hash),
created_at: created_at_str,
disks,
})
} else {
Err(AppError::ValidationError("Invalid snapshot metadata".to_string()))
}
}
}


@@ -8,7 +8,7 @@ use axum::{
routing::{delete, get, post, put},
Router,
};
use routes::{accounts, admin, auth as auth_routes, config, machines, setup};
use routes::{accounts, admin, auth, config, machines, setup, snapshots};
use std::path::Path;
use tokio::signal;
use tower_http::{
@@ -27,8 +27,8 @@ async fn main() -> Result<()> {
let api_routes = Router::new()
.route("/setup/status", get(setup::get_setup_status))
.route("/setup/init", post(setup::init_setup))
.route("/auth/login", post(auth_routes::login))
.route("/auth/logout", post(auth_routes::logout))
.route("/auth/login", post(auth::login))
.route("/auth/logout", post(auth::logout))
.route("/accounts/me", get(accounts::me))
.route("/admin/users", get(admin::get_users))
.route("/admin/users", post(admin::create_user_handler))
@@ -40,7 +40,10 @@ async fn main() -> Result<()> {
.route("/machines/register", post(machines::register_machine))
.route("/machines/provisioning-code", post(machines::create_provisioning_code))
.route("/machines", get(machines::get_machines))
.route("/machines/{id}", get(machines::get_machine))
.route("/machines/{id}", delete(machines::delete_machine))
.route("/machines/{id}/snapshots", get(snapshots::get_machine_snapshots))
.route("/machines/{machine_id}/snapshots/{snapshot_id}", get(snapshots::get_snapshot_details))
.layer(CorsLayer::permissive())
.with_state(pool);


@@ -43,6 +43,21 @@ pub async fn get_machines(
Ok(success_response(machines))
}
pub async fn get_machine(
auth_user: AuthUser,
State(pool): State<DbPool>,
Path(machine_id): Path<i64>,
) -> Result<Json<Machine>, AppError> {
let machine = MachinesController::get_machine_by_id(&pool, machine_id).await?;
// Check if user has access to this machine
if auth_user.user.role != UserRole::Admin && machine.user_id != auth_user.user.id {
return Err(AppError::NotFoundError("Machine not found or access denied".to_string()));
}
Ok(success_response(machine))
}
pub async fn delete_machine(
auth_user: AuthUser,
State(pool): State<DbPool>,


@@ -4,3 +4,4 @@ pub mod config;
pub mod machines;
pub mod setup;
pub mod accounts;
pub mod snapshots;


@@ -0,0 +1,32 @@
use axum::{extract::{Path, State}, Json};
use crate::controllers::snapshots::{SnapshotsController, SnapshotInfo};
use crate::utils::{auth::AuthUser, error::AppResult, DbPool};
pub async fn get_machine_snapshots(
State(pool): State<DbPool>,
Path(machine_id): Path<i64>,
auth_user: AuthUser,
) -> AppResult<Json<Vec<SnapshotInfo>>> {
let snapshots = SnapshotsController::get_machine_snapshots(
&pool,
machine_id,
&auth_user.user,
).await?;
Ok(Json(snapshots))
}
pub async fn get_snapshot_details(
State(pool): State<DbPool>,
Path((machine_id, snapshot_id)): Path<(i64, String)>,
auth_user: AuthUser,
) -> AppResult<Json<SnapshotInfo>> {
let snapshot = SnapshotsController::get_snapshot_details(
&pool,
machine_id,
snapshot_id,
&auth_user.user,
).await?;
Ok(Json(snapshot))
}


@@ -354,37 +354,60 @@ impl DiskObj {
}
pub fn deserialize(mut data: Bytes) -> Result<Self> {
println!("DiskObj::deserialize: input data length = {}", data.len());
if data.remaining() < 15 {
println!("DiskObj::deserialize: data too short, remaining = {}", data.remaining());
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj data too short"));
}
let version = data.get_u8();
println!("DiskObj::deserialize: version = {}", version);
if version != 1 {
println!("DiskObj::deserialize: unsupported version {}", version);
return Err(Error::new(ErrorKind::InvalidData, "Unsupported DiskObj version"));
}
let partition_count = data.get_u32_le() as usize;
println!("DiskObj::deserialize: partition_count = {}", partition_count);
if data.remaining() < partition_count * 32 + 10 {
println!("DiskObj::deserialize: not enough data for partitions, remaining = {}, needed = {}",
data.remaining(), partition_count * 32 + 10);
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj partitions too short"));
}
let mut partition_hashes = Vec::with_capacity(partition_count);
for _ in 0..partition_count {
for i in 0..partition_count {
let mut hash = [0u8; 32];
data.copy_to_slice(&mut hash);
println!("DiskObj::deserialize: partition {} hash = {}", i, hex::encode(&hash));
partition_hashes.push(hash);
}
let disk_size_bytes = data.get_u64_le();
println!("DiskObj::deserialize: disk_size_bytes = {}", disk_size_bytes);
let serial_len = data.get_u16_le() as usize;
println!("DiskObj::deserialize: serial_len = {}", serial_len);
if data.remaining() < serial_len {
println!("DiskObj::deserialize: not enough data for serial, remaining = {}, needed = {}",
data.remaining(), serial_len);
return Err(Error::new(ErrorKind::UnexpectedEof, "DiskObj serial too short"));
}
let serial = String::from_utf8(data.copy_to_bytes(serial_len).to_vec())
.map_err(|_| Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in serial"))?;
let serial_bytes = data.copy_to_bytes(serial_len).to_vec();
println!("DiskObj::deserialize: serial_bytes = {:?}", serial_bytes);
let serial = String::from_utf8(serial_bytes)
.map_err(|e| {
println!("DiskObj::deserialize: UTF-8 error: {}", e);
Error::new(ErrorKind::InvalidData, "Invalid UTF-8 in serial")
})?;
println!("DiskObj::deserialize: serial = '{}'", serial);
println!("DiskObj::deserialize: successfully deserialized");
Ok(Self {
version,


@@ -113,7 +113,7 @@ struct ConnectionHandler {
validator: SnapshotValidator,
config: SyncServerConfig,
session_id: Option<[u8; 16]>,
machine_id: Option<String>,
machine_id: Option<i64>,
}
impl ConnectionHandler {
@@ -308,18 +308,27 @@ impl ConnectionHandler {
self.require_auth()?;
if body.len() > self.config.meta_size_limit {
println!("Snapshot rejected: size limit exceeded ({} > {})", body.len(), self.config.meta_size_limit);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
println!("Validating snapshot hash: {}", hex::encode(&snapshot_hash));
// Validate snapshot
match self.validator.validate_snapshot(&snapshot_hash, &body).await {
Ok(validation_result) => {
println!("Validation result - is_valid: {}, missing_chunks: {}, missing_metas: {}",
validation_result.is_valid,
validation_result.missing_chunks.len(),
validation_result.missing_metas.len());
if validation_result.is_valid {
// Store snapshot meta
if let Err(_e) = self.storage.store_meta(MetaType::Snapshot, &snapshot_hash, &body).await {
if let Err(e) = self.storage.store_meta(MetaType::Snapshot, &snapshot_hash, &body).await {
println!("Failed to store snapshot meta: {}", e);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
@@ -328,46 +337,36 @@ impl ConnectionHandler {
// Create snapshot reference
let snapshot_id = Uuid::new_v4().to_string();
let machine_id = self.machine_id.as_ref().unwrap();
let machine_id = *self.machine_id.as_ref().unwrap();
let created_at = chrono::Utc::now().timestamp() as u64;
if let Err(_e) = self.storage.store_snapshot_ref(
println!("Creating snapshot reference: machine_id={}, snapshot_id={}", machine_id, snapshot_id);
if let Err(e) = self.storage.store_snapshot_ref(
machine_id,
&snapshot_id,
&snapshot_hash,
created_at
).await {
println!("Failed to store snapshot reference: {}", e);
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
// Store snapshot in database
let machine_id_num: i64 = machine_id.parse().unwrap_or(0);
let snapshot_hash_hex = hex::encode(snapshot_hash);
if let Err(_e) = sqlx::query!(
"INSERT INTO snapshots (machine_id, snapshot_hash) VALUES (?, ?)",
machine_id_num,
snapshot_hash_hex
)
.execute(self.session_manager.get_db_pool())
.await {
return Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],
}));
}
println!("Snapshot successfully stored with ID: {}", snapshot_id);
Ok(Some(Message::SnapshotOk { snapshot_id }))
} else {
println!("Snapshot validation failed - returning missing items");
Ok(Some(Message::SnapshotFail {
missing_chunks: validation_result.missing_chunks,
missing_metas: validation_result.missing_metas,
}))
}
}
Err(_e) => {
Err(e) => {
println!("Snapshot validation error: {}", e);
Ok(Some(Message::SnapshotFail {
missing_chunks: vec![],
missing_metas: vec![],


@@ -4,13 +4,12 @@ use sqlx::SqlitePool;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use uuid::Uuid;
/// Session information
#[derive(Debug, Clone)]
pub struct Session {
pub session_id: [u8; 16],
pub machine_id: String,
pub machine_id: i64,
pub user_id: i64,
pub created_at: chrono::DateTime<chrono::Utc>,
}
@@ -79,12 +78,12 @@ impl SessionManager {
return Err(anyhow::anyhow!("Machine does not belong to user"));
}
// Create session
// Create session with machine ID
let session_id = Self::generate_session_id();
let machine_id_str = machine_id.to_string();
let machine_id = machine.id; // Use database ID
let session = Session {
session_id,
machine_id: machine_id_str,
machine_id,
user_id,
created_at: chrono::Utc::now(),
};
@@ -101,7 +100,7 @@ impl SessionManager {
// Query provisioning code from database
let provisioning_code = sqlx::query!(
r#"
SELECT pc.id, pc.code, pc.expires_at, pc.used, m.user_id, u.username
SELECT pc.id, pc.code, pc.expires_at, pc.used, m.id as machine_id, m.user_id, u.username
FROM provisioning_codes pc
JOIN machines m ON pc.machine_id = m.id
JOIN users u ON m.user_id = u.id
@@ -137,7 +136,7 @@ impl SessionManager {
// Create session
let session_id = Self::generate_session_id();
let machine_id = format!("machine-{}", Uuid::new_v4());
let machine_id = provisioning_code.machine_id.expect("Machine ID should not be null"); // Use machine ID from database
let session = Session {
session_id,
machine_id,
@@ -159,7 +158,7 @@ impl SessionManager {
}
/// Validate session and return associated machine ID
pub async fn validate_session(&self, session_id: &[u8; 16]) -> Result<String> {
pub async fn validate_session(&self, session_id: &[u8; 16]) -> Result<i64> {
let session = self.get_session(session_id).await
.ok_or_else(|| anyhow::anyhow!("Invalid session"))?;


@@ -6,7 +6,7 @@ use tokio::fs;
use crate::sync::protocol::{Hash, MetaType};
use crate::sync::meta::MetaObj;
/// Storage backend for chunks and meta objects
/// Storage backend for chunks and metadata objects
#[derive(Debug, Clone)]
pub struct Storage {
data_dir: PathBuf,
@@ -199,30 +199,37 @@ impl Storage {
let path = self.meta_path(meta_type, hash);
if !path.exists() {
println!("Meta file does not exist: {:?}", path);
return Ok(None);
}
println!("Reading meta file: {:?}", path);
let data = fs::read(&path).await
.context("Failed to read meta file")?;
println!("Read {} bytes from meta file", data.len());
// Verify hash
let computed_hash = blake3::hash(&data);
if computed_hash.as_bytes() != hash {
println!("Hash mismatch: expected {}, got {}", hex::encode(hash), hex::encode(computed_hash.as_bytes()));
return Err(anyhow::anyhow!("Stored meta object hash mismatch"));
}
println!("Hash verified, deserializing {:?} object", meta_type);
let meta_obj = MetaObj::deserialize(meta_type, Bytes::from(data))
.context("Failed to deserialize meta object")?;
println!("Successfully deserialized meta object");
Ok(Some(meta_obj))
}
/// Get snapshot storage path for a machine
fn snapshot_ref_path(&self, machine_id: &str, snapshot_id: &str) -> PathBuf {
fn snapshot_ref_path(&self, machine_id: i64, snapshot_id: &str) -> PathBuf {
self.data_dir
.join("sync")
.join("machines")
.join(machine_id)
.join(machine_id.to_string())
.join("snapshots")
.join(format!("{}.ref", snapshot_id))
}
@@ -230,7 +237,7 @@ impl Storage {
/// Store a snapshot reference
pub async fn store_snapshot_ref(
&self,
machine_id: &str,
machine_id: i64,
snapshot_id: &str,
snapshot_hash: &Hash,
created_at: u64
@@ -258,7 +265,7 @@ impl Storage {
}
/// Load a snapshot reference
pub async fn load_snapshot_ref(&self, machine_id: &str, snapshot_id: &str) -> Result<Option<(Hash, u64)>> {
pub async fn load_snapshot_ref(&self, machine_id: i64, snapshot_id: &str) -> Result<Option<(Hash, u64)>> {
let path = self.snapshot_ref_path(machine_id, snapshot_id);
if !path.exists() {
@@ -285,11 +292,11 @@ impl Storage {
}
/// List snapshots for a machine
pub async fn list_snapshots(&self, machine_id: &str) -> Result<Vec<String>> {
pub async fn list_snapshots(&self, machine_id: i64) -> Result<Vec<String>> {
let snapshots_dir = self.data_dir
.join("sync")
.join("machines")
.join(machine_id)
.join(machine_id.to_string())
.join("snapshots");
if !snapshots_dir.exists() {
@@ -316,7 +323,7 @@ impl Storage {
}
/// Delete old snapshots, keeping only the latest N
pub async fn cleanup_snapshots(&self, machine_id: &str, keep_count: usize) -> Result<()> {
pub async fn cleanup_snapshots(&self, machine_id: i64, keep_count: usize) -> Result<()> {
let mut snapshots = self.list_snapshots(machine_id).await?;
if snapshots.len() <= keep_count {
@@ -382,7 +389,7 @@ mod tests {
let storage = Storage::new(temp_dir.path());
storage.init().await.unwrap();
let machine_id = "test-machine";
let machine_id = 123i64;
let snapshot_id = "snapshot-001";
let snapshot_hash = [1u8; 32];
let created_at = 1234567890;


@@ -110,11 +110,13 @@ impl SnapshotValidator {
// Check if meta exists
if !self.storage.meta_exists(meta_type, &hash).await {
println!("Missing metadata: {:?} hash {}", meta_type, hex::encode(&hash));
missing_metas.push((meta_type, hash));
continue; // Skip loading if missing
}
// Load and process meta object
println!("Loading metadata: {:?} hash {}", meta_type, hex::encode(&hash));
if let Some(meta_obj) = self.storage.load_meta(meta_type, &hash).await
.context("Failed to load meta object")? {


@@ -83,6 +83,8 @@ pub struct Machine {
pub id: i64,
pub user_id: i64,
pub uuid: Uuid,
#[serde(rename = "machine_id")]
pub machine_id: String,
pub name: String,
pub created_at: DateTime<Utc>,
}

sync_client_test/Cargo.lock (generated, new file, 76 lines)

@@ -0,0 +1,76 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "arrayref"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "blake3"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3888aaa89e4b2a40fca9848e400f6a658a5a3978de7be858e209cafa8be9a4a0"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
]
[[package]]
name = "cc"
version = "1.2.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5252b3d2648e5eedbc1a6f501e3c795e07025c1e93bbf8bbdd6eef7f447a6d54"
dependencies = [
"find-msvc-tools",
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2fd1289c04a9ea8cb22300a459a72a385d7c73d3259e2ed7dcb2af674838cfa9"
[[package]]
name = "constant_time_eq"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"
[[package]]
name = "find-msvc-tools"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fd99930f64d146689264c637b5af2f0233a933bef0d8570e2526bf9e083192d"
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "sync_client_test"
version = "0.1.0"
dependencies = [
"blake3",
"hex",
]


@@ -0,0 +1,8 @@
[package]
name = "sync_client_test"
version = "0.1.0"
edition = "2021"
[dependencies]
blake3 = "1.5"
hex = "0.4"


@@ -0,0 +1,856 @@
// Mock sync client for testing the Arkendro sync server
// This implements the binary protocol specified in PROTOCOL.md
use std::io::{Read, Write, Result, Error, ErrorKind};
use std::net::TcpStream;
/// Command codes from the protocol
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum Command {
Hello = 0x01,
HelloOk = 0x02,
AuthUserPass = 0x10,
AuthCode = 0x11,
AuthOk = 0x12,
AuthFail = 0x13,
BatchCheckChunk = 0x20,
CheckChunkResp = 0x21,
SendChunk = 0x22,
ChunkOk = 0x23,
ChunkFail = 0x24,
BatchCheckMeta = 0x30,
CheckMetaResp = 0x31,
SendMeta = 0x32,
MetaOk = 0x33,
MetaFail = 0x34,
SendSnapshot = 0x40,
SnapshotOk = 0x41,
SnapshotFail = 0x42,
Close = 0xFF,
}
impl Command {
fn from_u8(value: u8) -> Result<Self> {
match value {
0x01 => Ok(Command::Hello),
0x02 => Ok(Command::HelloOk),
0x10 => Ok(Command::AuthUserPass),
0x11 => Ok(Command::AuthCode),
0x12 => Ok(Command::AuthOk),
0x13 => Ok(Command::AuthFail),
0x20 => Ok(Command::BatchCheckChunk),
0x21 => Ok(Command::CheckChunkResp),
0x22 => Ok(Command::SendChunk),
0x23 => Ok(Command::ChunkOk),
0x24 => Ok(Command::ChunkFail),
0x30 => Ok(Command::BatchCheckMeta),
0x31 => Ok(Command::CheckMetaResp),
0x32 => Ok(Command::SendMeta),
0x33 => Ok(Command::MetaOk),
0x34 => Ok(Command::MetaFail),
0x40 => Ok(Command::SendSnapshot),
0x41 => Ok(Command::SnapshotOk),
0x42 => Ok(Command::SnapshotFail),
0xFF => Ok(Command::Close),
_ => Err(Error::new(ErrorKind::InvalidData, "Unknown command")),
}
}
}
/// Message header (24 bytes)
#[derive(Debug)]
struct MessageHeader {
cmd: Command,
flags: u8,
reserved: [u8; 2],
session_id: [u8; 16],
payload_len: u32,
}
impl MessageHeader {
fn new(cmd: Command, session_id: [u8; 16], payload_len: u32) -> Self {
Self {
cmd,
flags: 0,
reserved: [0; 2],
session_id,
payload_len,
}
}
fn to_bytes(&self) -> [u8; 24] {
let mut buf = [0u8; 24];
buf[0] = self.cmd as u8;
buf[1] = self.flags;
buf[2..4].copy_from_slice(&self.reserved);
buf[4..20].copy_from_slice(&self.session_id);
buf[20..24].copy_from_slice(&self.payload_len.to_le_bytes());
buf
}
fn from_bytes(buf: &[u8]) -> Result<Self> {
if buf.len() < 24 {
return Err(Error::new(ErrorKind::UnexpectedEof, "Header too short"));
}
let cmd = Command::from_u8(buf[0])?;
let flags = buf[1];
let reserved = [buf[2], buf[3]];
let mut session_id = [0u8; 16];
session_id.copy_from_slice(&buf[4..20]);
let payload_len = u32::from_le_bytes([buf[20], buf[21], buf[22], buf[23]]);
Ok(Self {
cmd,
flags,
reserved,
session_id,
payload_len,
})
}
}
/// Metadata types
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum MetaType {
File = 1,
Dir = 2,
Partition = 3,
Disk = 4,
Snapshot = 5,
}
/// Filesystem types
#[derive(Debug, Clone, Copy)]
#[repr(u32)]
enum FsType {
Unknown = 0,
Ext = 1,
Ntfs = 2,
Fat32 = 3,
}
/// Directory entry types
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
enum EntryType {
File = 0,
Dir = 1,
Symlink = 2,
}
/// Directory entry
#[derive(Debug, Clone)]
struct DirEntry {
entry_type: EntryType,
name: String,
target_meta_hash: [u8; 32],
}
/// File metadata object
#[derive(Debug, Clone)]
struct FileObj {
version: u8,
fs_type_code: FsType,
size: u64,
mode: u32,
uid: u32,
gid: u32,
mtime_unixsec: u64,
chunk_hashes: Vec<[u8; 32]>,
}
impl FileObj {
fn new(size: u64, chunk_hashes: Vec<[u8; 32]>) -> Self {
Self {
version: 1,
fs_type_code: FsType::Ext,
size,
mode: 0o644,
uid: 1000,
gid: 1000,
mtime_unixsec: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs(),
chunk_hashes,
}
}
fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::new();
buf.push(self.version);
buf.extend_from_slice(&(self.fs_type_code as u32).to_le_bytes());
buf.extend_from_slice(&self.size.to_le_bytes());
buf.extend_from_slice(&self.mode.to_le_bytes());
buf.extend_from_slice(&self.uid.to_le_bytes());
buf.extend_from_slice(&self.gid.to_le_bytes());
buf.extend_from_slice(&self.mtime_unixsec.to_le_bytes());
buf.extend_from_slice(&(self.chunk_hashes.len() as u32).to_le_bytes());
for hash in &self.chunk_hashes {
buf.extend_from_slice(hash);
}
buf
}
}
/// Directory metadata object
#[derive(Debug, Clone)]
struct DirObj {
version: u8,
entries: Vec<DirEntry>,
}
impl DirObj {
fn new(entries: Vec<DirEntry>) -> Self {
Self {
version: 1,
entries,
}
}
fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::new();
buf.push(self.version);
buf.extend_from_slice(&(self.entries.len() as u32).to_le_bytes());
for entry in &self.entries {
buf.push(entry.entry_type as u8);
let name_bytes = entry.name.as_bytes();
buf.extend_from_slice(&(name_bytes.len() as u16).to_le_bytes());
buf.extend_from_slice(name_bytes);
buf.extend_from_slice(&entry.target_meta_hash);
}
buf
}
}
/// Partition metadata object
#[derive(Debug, Clone)]
struct PartitionObj {
version: u8,
fs_type: FsType,
root_dir_hash: [u8; 32],
start_lba: u64,
end_lba: u64,
type_guid: [u8; 16],
}
impl PartitionObj {
fn new(label: String, root_dir_hash: [u8; 32]) -> Self {
// Generate a deterministic GUID from the label for testing
let mut type_guid = [0u8; 16];
let label_bytes = label.as_bytes();
for (i, &byte) in label_bytes.iter().take(16).enumerate() {
type_guid[i] = byte;
}
Self {
version: 1,
fs_type: FsType::Ext,
root_dir_hash,
start_lba: 2048, // Common starting LBA
end_lba: 2097152, // ~1GB partition
type_guid,
}
}
fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::new();
buf.push(self.version);
buf.extend_from_slice(&(self.fs_type as u32).to_le_bytes());
buf.extend_from_slice(&self.root_dir_hash);
buf.extend_from_slice(&self.start_lba.to_le_bytes());
buf.extend_from_slice(&self.end_lba.to_le_bytes());
buf.extend_from_slice(&self.type_guid);
buf
}
}
/// Disk metadata object
#[derive(Debug, Clone)]
struct DiskObj {
version: u8,
partition_hashes: Vec<[u8; 32]>,
disk_size_bytes: u64,
serial: String,
}
impl DiskObj {
fn new(serial: String, partition_hashes: Vec<[u8; 32]>) -> Self {
Self {
version: 1,
partition_hashes,
disk_size_bytes: 1024 * 1024 * 1024, // 1GB default
serial,
}
}
fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::new();
buf.push(self.version);
buf.extend_from_slice(&(self.partition_hashes.len() as u32).to_le_bytes());
for hash in &self.partition_hashes {
buf.extend_from_slice(hash);
}
buf.extend_from_slice(&self.disk_size_bytes.to_le_bytes());
let serial_bytes = self.serial.as_bytes();
buf.extend_from_slice(&(serial_bytes.len() as u16).to_le_bytes());
buf.extend_from_slice(serial_bytes);
buf
}
}
/// Snapshot metadata object
#[derive(Debug, Clone)]
struct SnapshotObj {
version: u8,
created_at_unixsec: u64,
disk_hashes: Vec<[u8; 32]>,
}
impl SnapshotObj {
fn new(disk_hashes: Vec<[u8; 32]>) -> Self {
Self {
version: 1,
created_at_unixsec: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs(),
disk_hashes,
}
}
fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::new();
buf.push(self.version);
buf.extend_from_slice(&self.created_at_unixsec.to_le_bytes());
buf.extend_from_slice(&(self.disk_hashes.len() as u32).to_le_bytes());
for hash in &self.disk_hashes {
buf.extend_from_slice(hash);
}
buf
}
}
/// Simple sync client for testing
struct SyncClient {
stream: TcpStream,
session_id: [u8; 16],
}
impl SyncClient {
fn connect(addr: &str) -> Result<Self> {
let stream = TcpStream::connect(addr)?;
Ok(Self {
stream,
session_id: [0; 16],
})
}
fn send_message(&mut self, cmd: Command, payload: &[u8]) -> Result<()> {
let header = MessageHeader::new(cmd, self.session_id, payload.len() as u32);
self.stream.write_all(&header.to_bytes())?;
if !payload.is_empty() {
self.stream.write_all(payload)?;
}
self.stream.flush()?;
Ok(())
}
fn receive_message(&mut self) -> Result<(Command, Vec<u8>)> {
// Read header
let mut header_buf = [0u8; 24];
self.stream.read_exact(&mut header_buf)?;
let header = MessageHeader::from_bytes(&header_buf)?;
// Read payload
let mut payload = vec![0u8; header.payload_len as usize];
if header.payload_len > 0 {
self.stream.read_exact(&mut payload)?;
}
Ok((header.cmd, payload))
}
fn hello(&mut self) -> Result<()> {
println!("Sending HELLO...");
// Hello message needs client_type (1 byte) and auth_type (1 byte)
let payload = vec![0x01, 0x01]; // client_type=1, auth_type=1
self.send_message(Command::Hello, &payload)?;
let (cmd, _payload) = self.receive_message()?;
match cmd {
Command::HelloOk => {
println!("✓ Received HELLO_OK");
Ok(())
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected HELLO_OK")),
}
}
fn authenticate(&mut self, username: &str, password: &str, machine_id: i64) -> Result<()> {
println!("Authenticating as {} with machine ID {}...", username, machine_id);
// Build auth payload: username_len (u16_le) + username + password_len (u16_le) + password + machine_id (i64_le)
let mut payload = Vec::new();
payload.extend_from_slice(&(username.len() as u16).to_le_bytes());
payload.extend_from_slice(username.as_bytes());
payload.extend_from_slice(&(password.len() as u16).to_le_bytes());
payload.extend_from_slice(password.as_bytes());
payload.extend_from_slice(&machine_id.to_le_bytes());
self.send_message(Command::AuthUserPass, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::AuthOk => {
// Extract session ID from payload
if payload.len() >= 16 {
self.session_id.copy_from_slice(&payload[0..16]);
println!("✓ Authentication successful! Session ID: {:?}", self.session_id);
Ok(())
} else {
Err(Error::new(ErrorKind::InvalidData, "Invalid session ID"))
}
}
Command::AuthFail => Err(Error::new(ErrorKind::PermissionDenied, "Authentication failed")),
_ => Err(Error::new(ErrorKind::InvalidData, "Unexpected response")),
}
}
fn check_chunks(&mut self, hashes: &[[u8; 32]]) -> Result<Vec<[u8; 32]>> {
println!("Checking {} chunks...", hashes.len());
let mut payload = Vec::new();
payload.extend_from_slice(&(hashes.len() as u32).to_le_bytes());
for hash in hashes {
payload.extend_from_slice(hash);
}
self.send_message(Command::BatchCheckChunk, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::CheckChunkResp => {
if payload.len() < 4 {
return Err(Error::new(ErrorKind::InvalidData, "Invalid response"));
}
let count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
let mut missing = Vec::new();
for i in 0..count {
let start = 4 + i * 32;
if payload.len() < start + 32 {
return Err(Error::new(ErrorKind::InvalidData, "Invalid hash in response"));
}
let mut hash = [0u8; 32];
hash.copy_from_slice(&payload[start..start + 32]);
missing.push(hash);
}
println!("{} chunks missing out of {}", missing.len(), hashes.len());
Ok(missing)
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected CheckChunkResp")),
}
}
fn send_chunk(&mut self, hash: &[u8; 32], data: &[u8]) -> Result<()> {
println!("Sending chunk {} bytes...", data.len());
println!("Chunk hash: {}", hex::encode(hash));
// Verify hash matches data
let computed_hash = blake3_hash(data);
if computed_hash != *hash {
return Err(Error::new(ErrorKind::InvalidData, "Hash mismatch"));
}
let mut payload = Vec::new();
payload.extend_from_slice(hash);
payload.extend_from_slice(&(data.len() as u32).to_le_bytes());
payload.extend_from_slice(data);
self.send_message(Command::SendChunk, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::ChunkOk => {
println!("✓ Chunk uploaded successfully");
Ok(())
}
Command::ChunkFail => {
let reason = if !payload.is_empty() {
String::from_utf8_lossy(&payload).to_string()
} else {
"Unknown error".to_string()
};
Err(Error::new(ErrorKind::Other, format!("Server rejected chunk: {}", reason)))
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected ChunkOk or ChunkFail")),
}
}
fn check_metadata(&mut self, items: &[(MetaType, [u8; 32])]) -> Result<Vec<(MetaType, [u8; 32])>> {
println!("Checking {} metadata items...", items.len());
let mut payload = Vec::new();
payload.extend_from_slice(&(items.len() as u32).to_le_bytes());
for (meta_type, hash) in items {
payload.push(*meta_type as u8);
payload.extend_from_slice(hash);
}
self.send_message(Command::BatchCheckMeta, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::CheckMetaResp => {
if payload.len() < 4 {
return Err(Error::new(ErrorKind::InvalidData, "Invalid response"));
}
let count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
let mut missing = Vec::new();
for i in 0..count {
let start = 4 + i * 33; // 1 byte type + 32 bytes hash
if payload.len() < start + 33 {
return Err(Error::new(ErrorKind::InvalidData, "Invalid metadata in response"));
}
let meta_type = match payload[start] {
1 => MetaType::File,
2 => MetaType::Dir,
3 => MetaType::Partition,
4 => MetaType::Disk,
5 => MetaType::Snapshot,
_ => return Err(Error::new(ErrorKind::InvalidData, "Invalid metadata type")),
};
let mut hash = [0u8; 32];
hash.copy_from_slice(&payload[start + 1..start + 33]);
missing.push((meta_type, hash));
}
println!("{} metadata items missing out of {}", missing.len(), items.len());
Ok(missing)
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected CheckMetaResp")),
}
}
fn send_metadata(&mut self, meta_type: MetaType, meta_hash: &[u8; 32], body: &[u8]) -> Result<()> {
println!("Sending {:?} metadata {} bytes...", meta_type, body.len());
println!("Metadata hash: {}", hex::encode(meta_hash));
// Verify hash matches body
let computed_hash = blake3_hash(body);
if computed_hash != *meta_hash {
return Err(Error::new(ErrorKind::InvalidData, "Metadata hash mismatch"));
}
let mut payload = Vec::new();
payload.push(meta_type as u8);
payload.extend_from_slice(meta_hash);
payload.extend_from_slice(&(body.len() as u32).to_le_bytes());
payload.extend_from_slice(body);
self.send_message(Command::SendMeta, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::MetaOk => {
println!("✓ Metadata uploaded successfully");
Ok(())
}
Command::MetaFail => {
let reason = if !payload.is_empty() {
String::from_utf8_lossy(&payload).to_string()
} else {
"Unknown error".to_string()
};
Err(Error::new(ErrorKind::Other, format!("Server rejected metadata: {}", reason)))
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected MetaOk or MetaFail")),
}
}
fn send_snapshot(&mut self, snapshot_hash: &[u8; 32], snapshot_data: &[u8]) -> Result<()> {
println!("Sending snapshot {} bytes...", snapshot_data.len());
println!("Snapshot hash: {}", hex::encode(snapshot_hash));
// Verify hash matches data
let computed_hash = blake3_hash(snapshot_data);
if computed_hash != *snapshot_hash {
return Err(Error::new(ErrorKind::InvalidData, "Snapshot hash mismatch"));
}
let mut payload = Vec::new();
payload.extend_from_slice(snapshot_hash);
payload.extend_from_slice(&(snapshot_data.len() as u32).to_le_bytes());
payload.extend_from_slice(snapshot_data);
self.send_message(Command::SendSnapshot, &payload)?;
let (cmd, payload) = self.receive_message()?;
match cmd {
Command::SnapshotOk => {
println!("✓ Snapshot uploaded successfully");
Ok(())
}
Command::SnapshotFail => {
// Parse SnapshotFail payload: missing_chunks_count + chunks + missing_metas_count + metas
if payload.len() < 8 {
return Err(Error::new(ErrorKind::Other, "Server rejected snapshot: Invalid response format"));
}
let missing_chunks_count = u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
let missing_metas_count = u32::from_le_bytes([payload[4], payload[5], payload[6], payload[7]]) as usize;
let mut error_msg = format!("Server rejected snapshot: {} missing chunks, {} missing metadata items",
missing_chunks_count, missing_metas_count);
// Optionally parse the actual missing items for more detailed error
if missing_chunks_count > 0 || missing_metas_count > 0 {
error_msg.push_str(" (run with chunk/metadata verification to see details)");
}
Err(Error::new(ErrorKind::Other, error_msg))
}
_ => Err(Error::new(ErrorKind::InvalidData, "Expected SnapshotOk or SnapshotFail")),
}
}
fn close(&mut self) -> Result<()> {
self.send_message(Command::Close, &[])?;
Ok(())
}
}
/// Hash function using blake3
fn blake3_hash(data: &[u8]) -> [u8; 32] {
blake3::hash(data).into()
}
/// Generate some mock data for testing
fn generate_mock_data() -> Vec<(Vec<u8>, [u8; 32])> {
let mut data_chunks = Vec::new();
// Some test data chunks
let chunks = [
b"Hello, Arkendro sync server! This is test chunk data.".to_vec(),
b"Another test chunk with different content for variety.".to_vec(),
b"Binary data test: \x00\x01\x02\x03\xFF\xFE\xFD\xFC".to_vec(),
];
for chunk in chunks {
let hash = blake3_hash(&chunk);
data_chunks.push((chunk, hash));
}
data_chunks
}
fn main() -> Result<()> {
println!("🚀 Arkendro Sync Client Extended Test");
println!("====================================\n");
// Connect to server
let mut client = SyncClient::connect("127.0.0.1:8380")?;
println!("Connected to sync server\n");
// Test protocol flow
client.hello()?;
// Try to authenticate with hardcoded machine ID (you'll need to create a machine first via the web interface)
let machine_id = 1; // Hardcoded machine ID for testing
match client.authenticate("admin", "password123", machine_id) {
Ok(()) => println!("Authentication successful!\n"),
Err(e) => {
println!("Authentication failed: {}", e);
println!("Make sure you have:");
println!("1. Created a user 'admin' with password 'password123' via the web interface");
println!("2. Created a machine with ID {} that belongs to user 'admin'", machine_id);
client.close()?;
return Ok(());
}
}
println!("📁 Creating test filesystem hierarchy...\n");
// Step 1: Create test file data chunks
let file1_data = b"Hello, this is the content of file1.txt in our test filesystem!";
let file2_data = b"This is file2.log with some different content for testing purposes.";
let file3_data = b"Binary data file: \x00\x01\x02\x03\xFF\xFE\xFD\xFC and some text after.";
let file1_hash = blake3_hash(file1_data);
let file2_hash = blake3_hash(file2_data);
let file3_hash = blake3_hash(file3_data);
// Upload chunks if needed
println!("🔗 Uploading file chunks...");
let chunk_hashes = vec![file1_hash, file2_hash, file3_hash];
let missing_chunks = client.check_chunks(&chunk_hashes)?;
if !missing_chunks.is_empty() {
for &missing_hash in &missing_chunks {
if missing_hash == file1_hash {
client.send_chunk(&file1_hash, file1_data)?;
} else if missing_hash == file2_hash {
client.send_chunk(&file2_hash, file2_data)?;
} else if missing_hash == file3_hash {
client.send_chunk(&file3_hash, file3_data)?;
}
}
} else {
println!("✓ All chunks already exist on server");
}
// Step 2: Create file metadata objects
println!("\n📄 Creating file metadata objects...");
let file1_obj = FileObj::new(file1_data.len() as u64, vec![file1_hash]);
let file2_obj = FileObj::new(file2_data.len() as u64, vec![file2_hash]);
let file3_obj = FileObj::new(file3_data.len() as u64, vec![file3_hash]);
let file1_meta_data = file1_obj.serialize();
let file2_meta_data = file2_obj.serialize();
let file3_meta_data = file3_obj.serialize();
let file1_meta_hash = blake3_hash(&file1_meta_data);
let file2_meta_hash = blake3_hash(&file2_meta_data);
let file3_meta_hash = blake3_hash(&file3_meta_data);
// Upload file metadata
client.send_metadata(MetaType::File, &file1_meta_hash, &file1_meta_data)?;
client.send_metadata(MetaType::File, &file2_meta_hash, &file2_meta_data)?;
client.send_metadata(MetaType::File, &file3_meta_hash, &file3_meta_data)?;
// Step 3: Create directory structures
println!("\n📁 Creating directory structures...");
// Create /logs subdirectory with file2
let logs_dir_entries = vec![
DirEntry {
entry_type: EntryType::File,
name: "app.log".to_string(),
target_meta_hash: file2_meta_hash,
},
];
let logs_dir_obj = DirObj::new(logs_dir_entries);
let logs_dir_data = logs_dir_obj.serialize();
let logs_dir_hash = blake3_hash(&logs_dir_data);
client.send_metadata(MetaType::Dir, &logs_dir_hash, &logs_dir_data)?;
// Create /data subdirectory with file3
let data_dir_entries = vec![
DirEntry {
entry_type: EntryType::File,
name: "binary.dat".to_string(),
target_meta_hash: file3_meta_hash,
},
];
let data_dir_obj = DirObj::new(data_dir_entries);
let data_dir_data = data_dir_obj.serialize();
let data_dir_hash = blake3_hash(&data_dir_data);
client.send_metadata(MetaType::Dir, &data_dir_hash, &data_dir_data)?;
// Create root directory with file1 and subdirectories
let root_dir_entries = vec![
DirEntry {
entry_type: EntryType::File,
name: "readme.txt".to_string(),
target_meta_hash: file1_meta_hash,
},
DirEntry {
entry_type: EntryType::Dir,
name: "logs".to_string(),
target_meta_hash: logs_dir_hash,
},
DirEntry {
entry_type: EntryType::Dir,
name: "data".to_string(),
target_meta_hash: data_dir_hash,
},
];
let root_dir_obj = DirObj::new(root_dir_entries);
let root_dir_data = root_dir_obj.serialize();
let root_dir_hash = blake3_hash(&root_dir_data);
client.send_metadata(MetaType::Dir, &root_dir_hash, &root_dir_data)?;
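    // Note: the object graph is uploaded bottom-up (chunks → file metas → dirs →
    // partition → disk), so every hash a parent references already exists server-side.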
// Step 4: Create partition
println!("\n💽 Creating partition metadata...");
let partition_obj = PartitionObj::new("test-partition".to_string(), root_dir_hash);
let partition_data = partition_obj.serialize();
let partition_hash = blake3_hash(&partition_data);
client.send_metadata(MetaType::Partition, &partition_hash, &partition_data)?;
// Step 5: Create disk
println!("\n🖥️ Creating disk metadata...");
let disk_obj = DiskObj::new("test-disk-001".to_string(), vec![partition_hash]);
let disk_data = disk_obj.serialize();
let disk_hash = blake3_hash(&disk_data);
client.send_metadata(MetaType::Disk, &disk_hash, &disk_data)?;
// Step 6: Create snapshot
println!("\n📸 Creating snapshot...");
let snapshot_obj = SnapshotObj::new(vec![disk_hash]);
let snapshot_data = snapshot_obj.serialize();
let snapshot_hash = blake3_hash(&snapshot_data);
// Upload snapshot using SendSnapshot command (not SendMeta)
client.send_snapshot(&snapshot_hash, &snapshot_data)?;
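    // The server validates the whole object graph on the snapshot commit and would
    // answer SNAPSHOT_FAIL with the list of missing hashes if anything were absent.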
// Step 7: Verify everything is stored
println!("\n🔍 Verifying stored objects...");
// Check all metadata objects
let all_metadata = vec![
(MetaType::File, file1_meta_hash),
(MetaType::File, file2_meta_hash),
(MetaType::File, file3_meta_hash),
(MetaType::Dir, logs_dir_hash),
(MetaType::Dir, data_dir_hash),
(MetaType::Dir, root_dir_hash),
(MetaType::Partition, partition_hash),
(MetaType::Disk, disk_hash),
(MetaType::Snapshot, snapshot_hash),
];
let missing_metadata = client.check_metadata(&all_metadata)?;
if missing_metadata.is_empty() {
println!("✓ All metadata objects verified as stored");
} else {
println!("⚠ Warning: {} metadata objects still missing", missing_metadata.len());
for (meta_type, hash) in missing_metadata {
println!(" - Missing {:?}: {}", meta_type, hex::encode(hash));
}
}
// Check all chunks
let all_chunks = vec![file1_hash, file2_hash, file3_hash];
let missing_chunks_final = client.check_chunks(&all_chunks)?;
if missing_chunks_final.is_empty() {
println!("✓ All data chunks verified as stored");
} else {
println!("⚠ Warning: {} chunks still missing", missing_chunks_final.len());
}
println!("\n🎉 Complete filesystem hierarchy created!");
println!("📊 Summary:");
println!(" • 3 files (readme.txt, logs/app.log, data/binary.dat)");
println!(" • 3 directories (/, /logs, /data)");
println!(" • 1 partition (test-partition)");
println!(" • 1 disk (test-disk-001)");
println!(" • 1 snapshot");
println!(" • Snapshot hash: {}", hex::encode(snapshot_hash));
println!("\n✅ All tests completed successfully!");
// Close connection
client.close()?;
Ok(())
}
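// For reference: `blake3_hash` (defined elsewhere in this file) is assumed to be
// a thin wrapper over the `blake3` crate returning the raw 32-byte BLAKE3-256
// digest, i.e. something like:
//
// fn blake3_hash(data: &[u8]) -> [u8; 32] {
//     *blake3::hash(data).as_bytes()
// }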

View File

@@ -6,6 +6,7 @@ import Root from "@/common/layouts/Root.jsx";
import UserManagement from "@/pages/UserManagement";
import SystemSettings from "@/pages/SystemSettings";
import Machines from "@/pages/Machines";
+import MachineDetails from "@/pages/MachineDetails";
import "@fontsource/plus-jakarta-sans/300.css";
import "@fontsource/plus-jakarta-sans/400.css";
import "@fontsource/plus-jakarta-sans/600.css";
@@ -24,6 +25,7 @@ const App = () => {
{path: "/", element: <Navigate to="/dashboard"/>},
{path: "/dashboard", element: <Placeholder title="Dashboard"/>},
{path: "/machines", element: <Machines/>},
{path: "/machines/:id", element: <MachineDetails/>},
{path: "/servers", element: <Placeholder title="Servers"/>},
{path: "/settings", element: <Placeholder title="Settings"/>},
{path: "/admin/users", element: <UserManagement/>},

View File

@@ -0,0 +1,266 @@
import React, { useState, useEffect } from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import { getRequest } from '@/common/utils/RequestUtil.js';
import { useToast } from '@/common/contexts/ToastContext.jsx';
import Card, { CardHeader, CardBody } from '@/common/components/Card';
import Grid from '@/common/components/Grid';
import LoadingSpinner from '@/common/components/LoadingSpinner';
import EmptyState from '@/common/components/EmptyState';
import PageHeader from '@/common/components/PageHeader';
import DetailItem, { DetailList } from '@/common/components/DetailItem';
import Badge from '@/common/components/Badge';
import Button from '@/common/components/Button';
import {
ArrowLeft,
Camera,
HardDrive,
Folder,
Calendar,
Hash,
Database,
Devices
} from '@phosphor-icons/react';
import './styles.sass';
export const MachineDetails = () => {
const { id } = useParams();
const navigate = useNavigate();
const toast = useToast();
const [machine, setMachine] = useState(null);
const [snapshots, setSnapshots] = useState([]);
const [loading, setLoading] = useState(true);
useEffect(() => {
if (id) {
fetchMachineData();
}
}, [id]);
const fetchMachineData = async () => {
try {
setLoading(true);
// Fetch machine info and snapshots in parallel
const [machineResponse, snapshotsResponse] = await Promise.all([
getRequest(`machines/${id}`),
getRequest(`machines/${id}/snapshots`)
]);
setMachine(machineResponse);
setSnapshots(snapshotsResponse);
} catch (error) {
console.error('Failed to fetch machine data:', error);
toast.error('Failed to load machine details');
} finally {
setLoading(false);
}
};
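    // Human-readable size formatting (base-1024 units), e.g. formatBytes(4194304) → "4 MB"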
const formatBytes = (bytes) => {
if (!bytes) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
        // Clamp so values beyond TB still map to the largest known unit
        const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
};
const formatDate = (dateString) => {
if (!dateString || dateString === 'Unknown') return 'Unknown';
try {
return new Date(dateString).toLocaleString();
} catch {
return dateString;
}
};
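    // Maps fs_type values to badge colors; unknown or missing types fall back to 'secondary'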
const getFsTypeColor = (fsType) => {
switch (fsType?.toLowerCase()) {
case 'ext': return 'success';
case 'ntfs': return 'info';
case 'fat32': return 'warning';
default: return 'secondary';
}
};
if (loading) {
return (
<div className="machine-details">
<PageHeader
title="Loading..."
subtitle="Fetching machine details"
actions={
<Button variant="secondary" onClick={() => navigate('/machines')}>
<ArrowLeft size={16} />
Back to Machines
</Button>
}
/>
<LoadingSpinner />
</div>
);
}
if (!machine) {
return (
<div className="machine-details">
<PageHeader
title="Machine Not Found"
subtitle="The requested machine could not be found"
actions={
<Button variant="secondary" onClick={() => navigate('/machines')}>
<ArrowLeft size={16} />
Back to Machines
</Button>
}
/>
<EmptyState
icon={<Devices size={48} weight="duotone" />}
title="Machine Not Found"
subtitle="This machine may have been deleted or you don't have access to it."
/>
</div>
);
}
return (
<div className="machine-details">
<PageHeader
title={machine.name}
subtitle={`Machine ID: ${machine.machine_id}`}
actions={
<Button variant="secondary" onClick={() => navigate('/machines')}>
<ArrowLeft size={16} />
Back to Machines
</Button>
}
/>
<Grid columns={1} gap="large">
{/* Machine Information */}
<Card>
<CardHeader>
<h3><Devices size={20} /> Machine Information</h3>
</CardHeader>
<CardBody>
<DetailList>
<DetailItem label="Name" value={machine.name} />
<DetailItem label="Machine ID" value={machine.machine_id} />
<DetailItem label="Created" value={formatDate(machine.created_at)} />
<DetailItem label="Status" value={
<Badge variant="success">Active</Badge>
} />
</DetailList>
</CardBody>
</Card>
{/* Snapshots */}
<Card>
<CardHeader>
<h3><Camera size={20} /> Snapshots ({snapshots.length})</h3>
</CardHeader>
<CardBody>
{snapshots.length === 0 ? (
<EmptyState
icon={<Camera size={48} weight="duotone" />}
title="No Snapshots"
subtitle="This machine hasn't created any snapshots yet."
/>
) : (
<Grid columns={1} gap="medium">
{snapshots.map((snapshot) => (
<Card key={snapshot.id} className="snapshot-card">
<CardHeader>
<div className="snapshot-header">
<h4>
<Camera size={16} />
Snapshot #{snapshot.id}
</h4>
<Badge variant="secondary">
{snapshot.disks.length} disk{snapshot.disks.length !== 1 ? 's' : ''}
</Badge>
</div>
</CardHeader>
<CardBody>
<DetailList>
<DetailItem
label="Created"
value={
<div className="snapshot-date">
<Calendar size={14} />
{formatDate(snapshot.created_at)}
</div>
}
/>
<DetailItem
label="Hash"
value={
<div className="snapshot-hash">
<Hash size={14} />
<code>{snapshot.snapshot_hash.substring(0, 16)}...</code>
</div>
}
/>
</DetailList>
{/* Disks */}
<div className="disks-section">
<h5><HardDrive size={16} /> Disks</h5>
<Grid columns={1} gap="small">
{snapshot.disks.map((disk, diskIndex) => (
<Card key={diskIndex} className="disk-card">
<CardBody>
<DetailList>
<DetailItem label="Serial" value={disk.serial || 'Unknown'} />
<DetailItem label="Size" value={formatBytes(disk.size_bytes)} />
<DetailItem
label="Partitions"
value={`${disk.partitions.length} partition${disk.partitions.length !== 1 ? 's' : ''}`}
/>
</DetailList>
{/* Partitions */}
{disk.partitions.length > 0 && (
<div className="partitions-section">
<h6><Folder size={14} /> Partitions</h6>
<Grid columns="auto-fit" gap="small" minColumnWidth="250px">
{disk.partitions.map((partition, partIndex) => (
<Card key={partIndex} className="partition-card">
<CardBody>
<DetailList>
<DetailItem
label="Filesystem"
value={
<Badge variant={getFsTypeColor(partition.fs_type)}>
{(partition.fs_type || 'unknown').toUpperCase()}
</Badge>
}
/>
<DetailItem label="Size" value={formatBytes(partition.size_bytes)} />
<DetailItem label="Start LBA" value={partition.start_lba.toLocaleString()} />
<DetailItem label="End LBA" value={partition.end_lba.toLocaleString()} />
</DetailList>
</CardBody>
</Card>
))}
</Grid>
</div>
)}
</CardBody>
</Card>
))}
</Grid>
</div>
</CardBody>
</Card>
))}
</Grid>
)}
</CardBody>
</Card>
</Grid>
</div>
);
};
export default MachineDetails;

View File

@@ -0,0 +1,2 @@
export { default } from './MachineDetails.jsx';
export { MachineDetails } from './MachineDetails.jsx';

View File

@@ -0,0 +1,250 @@
// Variables are defined in main.sass root scope
.machine-details
.machine-header
display: flex
align-items: center
gap: 1rem
margin-bottom: 2rem
.back-button
padding: 0.5rem
border-radius: var(--radius)
border: 1px solid var(--border)
background: var(--bg-alt)
color: var(--text)
cursor: pointer
transition: all 0.2s ease
&:hover
background: var(--bg-elev)
border-color: var(--border-strong)
.machine-title
flex: 1
h1
font-size: 1.5rem
font-weight: 600
color: var(--text)
margin-bottom: 0.25rem
.machine-uuid
font-family: 'Courier New', monospace
font-size: 0.875rem
color: var(--text-dim)
background: var(--bg-elev)
padding: 0.25rem 0.5rem
border-radius: var(--radius-sm)
display: inline-block
.snapshots-section
h2
font-size: 1.25rem
font-weight: 600
color: var(--text)
margin-bottom: 1rem
.snapshots-grid
display: grid
grid-template-columns: repeat(auto-fill, minmax(350px, 1fr))
gap: 1.5rem
.snapshot-card
border: 1px solid var(--border)
border-radius: var(--radius-lg)
background: var(--bg-alt)
padding: 1.5rem
transition: all 0.2s ease
&:hover
border-color: var(--border-strong)
box-shadow: 0 2px 8px rgba(31, 36, 41, 0.1)
.snapshot-header
display: flex
justify-content: space-between
align-items: flex-start
margin-bottom: 1rem
.snapshot-info
h3
font-size: 1rem
font-weight: 600
color: var(--text)
margin-bottom: 0.25rem
.snapshot-hash
font-family: 'Courier New', monospace
font-size: 0.75rem
color: var(--text-dim)
background: var(--bg-elev)
padding: 0.125rem 0.375rem
border-radius: var(--radius-sm)
.snapshot-date
font-size: 0.875rem
color: var(--text-dim)
margin-top: 0.5rem
.disks-section
h4
font-size: 0.875rem
font-weight: 600
color: var(--text)
margin-bottom: 0.75rem
display: flex
align-items: center
gap: 0.5rem
&::before
content: "💾"
font-size: 1rem
.disk-list
display: flex
flex-direction: column
gap: 1rem
.disk-item
background: var(--bg-elev)
border: 1px solid var(--border)
border-radius: var(--radius)
padding: 1rem
.disk-header
display: flex
justify-content: space-between
align-items: center
margin-bottom: 0.75rem
.disk-serial
font-family: 'Courier New', monospace
font-size: 0.875rem
font-weight: 600
color: var(--text)
.disk-size
font-size: 0.875rem
color: var(--text-dim)
font-weight: 500
.partitions-section
h5
font-size: 0.75rem
font-weight: 600
color: var(--text-dim)
text-transform: uppercase
letter-spacing: 0.05em
margin-bottom: 0.5rem
.partition-list
display: flex
flex-direction: column
gap: 0.5rem
.partition-item
background: var(--bg-alt)
border: 1px solid var(--border)
border-radius: var(--radius-sm)
padding: 0.75rem
.partition-header
display: flex
justify-content: space-between
align-items: center
margin-bottom: 0.5rem
.partition-fs
background: var(--accent)
color: white
font-size: 0.75rem
font-weight: 600
padding: 0.125rem 0.5rem
border-radius: var(--radius-sm)
text-transform: uppercase
.partition-size
font-size: 0.75rem
color: var(--text-dim)
font-weight: 500
.partition-details
display: grid
grid-template-columns: 1fr 1fr
gap: 0.5rem
font-size: 0.75rem
color: var(--text-dim)
.detail-item
display: flex
justify-content: space-between
.label
font-weight: 500
.value
font-family: 'Courier New', monospace
.empty-snapshots
text-align: center
padding: 3rem 1rem
background: var(--bg-alt)
border: 2px dashed var(--border)
border-radius: var(--radius-lg)
.empty-icon
font-size: 3rem
margin-bottom: 1rem
opacity: 0.5
h3
font-size: 1.125rem
font-weight: 600
color: var(--text)
margin-bottom: 0.5rem
p
color: var(--text-dim)
line-height: 1.5
.loading-section
text-align: center
padding: 2rem
.spinner
border: 3px solid var(--border)
border-top: 3px solid var(--accent)
border-radius: 50%
width: 40px
height: 40px
animation: spin 1s linear infinite
margin: 0 auto 1rem
@keyframes spin
0%
transform: rotate(0deg)
100%
transform: rotate(360deg)
.error-section
text-align: center
padding: 2rem
background: rgba(217, 48, 37, 0.1)
border: 1px solid rgba(217, 48, 37, 0.2)
border-radius: var(--radius-lg)
.error-icon
font-size: 2rem
color: var(--danger)
margin-bottom: 1rem
h3
color: var(--danger)
font-size: 1.125rem
font-weight: 600
margin-bottom: 0.5rem
p
color: var(--text-dim)
line-height: 1.5

View File

@@ -1,4 +1,5 @@
import React, {useState, useEffect, useContext} from 'react';
+import {useNavigate} from 'react-router-dom';
import {UserContext} from '@/common/contexts/UserContext.jsx';
import {useToast} from '@/common/contexts/ToastContext.jsx';
import {getRequest, postRequest, deleteRequest} from '@/common/utils/RequestUtil.js';
@@ -28,6 +29,7 @@ import './styles.sass';
export const Machines = () => {
const {user: currentUser} = useContext(UserContext);
const toast = useToast();
+const navigate = useNavigate();
const [machines, setMachines] = useState([]);
const [loading, setLoading] = useState(true);
const [showCreateModal, setShowCreateModal] = useState(false);
@@ -179,6 +181,14 @@ export const Machines = () => {
}
};
+
+const handleMachineClick = (machineId) => {
+    navigate(`/machines/${machineId}`);
+};
+
+const handleActionClick = (e) => {
+    e.stopPropagation(); // Prevent navigation when clicking action buttons
+};
const handleInputChange = (e) => {
const {name, value} = e.target;
setFormData(prev => ({
@@ -220,7 +230,13 @@ export const Machines = () => {
<Grid minWidth="400px">
{machines.map(machine => (
-<Card key={machine.id} hover className="machine-card">
+<Card
+    key={machine.id}
+    hover
+    className="machine-card"
+    onClick={() => handleMachineClick(machine.id)}
+    style={{ cursor: 'pointer' }}
+>
<CardHeader>
<div className="machine-card-header">
<div className="machine-icon">
@@ -233,7 +249,7 @@ export const Machines = () => {
<span className="uuid-text">{formatUuid(machine.uuid)}</span>
</div>
</div>
<div className="machine-actions">
<div className="machine-actions" onClick={handleActionClick}>
<Button
variant="subtle"
size="sm"