feat!: add file-format header, configurable chunks, integration tests

Introduce a self-describing on-disk format and use it to address several
shortcomings of the 0.9 file layout, where the file simply began with a
raw 19-byte STREAM nonce prefix and used a hardcoded 64 KiB chunk size.

What changed for users
----------------------
* fcry files now start with a 16-byte header: magic ("fcry"), version,
  algorithm id, flags, reserved byte, plaintext chunk_size (u32 LE),
  KDF id + params, then the 19-byte nonce prefix. The full encoded
  header is bound as AAD to every chunk, so tampering with chunk_size,
  algorithm id, nonce prefix, or any future KDF parameter causes
  authentication failure on every chunk -- not just the first.
* New `--chunk-size` CLI flag (encryption only). The decryptor reads
  the chunk size from the header, so files encrypted with a non-default
  size decrypt without the user having to remember it.
* Default plaintext chunk size raised from 64 KiB to 1 MiB.
* Bad input is now reported as an error instead of panicking: empty
  ciphertext, truncated final chunk, wrong magic, bad version, zero
  chunk_size, unknown algorithm id, and short --raw-key all return a
  non-zero exit status with a diagnostic on stderr.
* Empty plaintext now produces a valid (authenticated) empty
  ciphertext instead of panicking; the decryptor verifies it.
* `main` exits with status 1 on error (previously it printed and
  returned 0).
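For orientation, the header layout sketched in the first bullet can be written out roughly as below. The field offsets, the `Header` shape, and the reserved-byte placement are illustrative assumptions, not the shipped encoding (which lives in `header.rs`):

```rust
// Sketch of the on-disk header described above. Field order and offsets are
// assumptions for illustration; the real layout is defined in src/header.rs.
const MAGIC: [u8; 4] = *b"fcry";
const NONCE_PREFIX_LEN: usize = 19;

#[derive(Debug, PartialEq)]
struct Header {
    version: u8,
    alg_id: u8,
    flags: u8,
    chunk_size: u32, // plaintext bytes per chunk, stored u32 little-endian
    kdf_id: u8,      // 0 = Raw (key used as-is, no derivation)
    nonce_prefix: [u8; NONCE_PREFIX_LEN],
}

impl Header {
    fn encode(&self) -> Vec<u8> {
        let mut out = Vec::new();
        out.extend_from_slice(&MAGIC);
        out.push(self.version);
        out.push(self.alg_id);
        out.push(self.flags);
        out.push(0); // reserved byte
        out.extend_from_slice(&self.chunk_size.to_le_bytes());
        out.push(self.kdf_id);
        out.extend_from_slice(&self.nonce_prefix);
        out // this encoded buffer doubles as the AAD for every chunk
    }

    fn decode(buf: &[u8]) -> Result<Header, String> {
        if buf.len() < 13 + NONCE_PREFIX_LEN {
            return Err("truncated header".into());
        }
        if buf[0..4] != MAGIC {
            return Err("bad magic".into());
        }
        let chunk_size = u32::from_le_bytes(buf[8..12].try_into().unwrap());
        if chunk_size == 0 {
            return Err("zero chunk_size".into());
        }
        let mut nonce_prefix = [0u8; NONCE_PREFIX_LEN];
        nonce_prefix.copy_from_slice(&buf[13..13 + NONCE_PREFIX_LEN]);
        Ok(Header {
            version: buf[4],
            alg_id: buf[5],
            flags: buf[6],
            chunk_size,
            kdf_id: buf[12],
            nonce_prefix,
        })
    }
}
```

Because `decode` rejects bad magic, truncation, and a zero `chunk_size`, the error cases listed above fall out of header parsing rather than needing separate checks downstream.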

This is a breaking change to the file format: 0.9.x files have no magic
or header and cannot be read by 0.10.x. Version bumped to 0.10.0.

Why this approach
-----------------
The header-as-AAD pattern is the standard way to make file-format
metadata tamper-evident without a separate signature: any bit-flip in
the header propagates into every chunk's authentication tag check, so
an attacker cannot, for example, change chunk_size to mis-frame the
stream or downgrade the algorithm id.
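As a toy illustration of that propagation property (using a plain hasher as a stand-in tag; this is NOT a MAC and nothing like the Poly1305 tags fcry actually computes), any change to the header bytes changes every chunk's tag:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a per-chunk authentication tag. Insecure by design;
// it only demonstrates that the header bytes feed into every tag, so a
// header bit-flip invalidates every chunk, not just the first.
fn toy_tag(key: &[u8], header_aad: &[u8], chunk_index: u64, chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    header_aad.hash(&mut h); // header-as-AAD: header enters every tag
    chunk_index.hash(&mut h);
    chunk.hash(&mut h);
    h.finish()
}
```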

Storing chunk_size in the header (rather than fixing it at compile
time) lets us experiment with chunk sizes without breaking decrypt
compatibility, and is preparation for the parallel-pipeline work in
Roadmap 1.0 where worker count and chunk size interact.

The KDF section is a tagged variant (currently only `Raw`) so that
adding Argon2id later only adds a new variant + its salt/cost fields;
existing files keep decrypting because they carry `kdf_id = 0`.
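A sketch of what that tagged encoding might look like; the variant ids, field names, and the Argon2id parameter set here are hypothetical, not the shipped format:

```rust
// Hypothetical tagged KDF section: a one-byte id selects the variant,
// followed by that variant's parameters. Ids and fields are illustrative.
#[derive(Debug, PartialEq)]
enum KdfParams {
    Raw,                                                     // kdf_id = 0
    Argon2id { salt: [u8; 16], m_kib: u32, t: u32, p: u32 }, // kdf_id = 1
}

impl KdfParams {
    fn encode(&self) -> Vec<u8> {
        match self {
            KdfParams::Raw => vec![0],
            KdfParams::Argon2id { salt, m_kib, t, p } => {
                let mut out = vec![1];
                out.extend_from_slice(salt);
                out.extend_from_slice(&m_kib.to_le_bytes());
                out.extend_from_slice(&t.to_le_bytes());
                out.extend_from_slice(&p.to_le_bytes());
                out
            }
        }
    }

    fn decode(buf: &[u8]) -> Result<KdfParams, String> {
        match buf.first() {
            Some(0) => Ok(KdfParams::Raw), // old files keep decoding
            Some(1) if buf.len() >= 1 + 16 + 12 => {
                let salt = buf[1..17].try_into().unwrap();
                let le = |i: usize| u32::from_le_bytes(buf[i..i + 4].try_into().unwrap());
                Ok(KdfParams::Argon2id { salt, m_kib: le(17), t: le(21), p: le(25) })
            }
            _ => Err("unknown or truncated kdf section".into()),
        }
    }
}
```

The point of the tag is in `decode`: id 0 stays valid forever, and unknown ids fail loudly instead of being misparsed.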

Other changes bundled in
------------------------
* Switch RNG from `rand` (0.10) to `getrandom` (0.3). We only need
  OS-provided random bytes for the nonce prefix; pulling in the full
  `rand` crate for one `OsRng.fill_bytes` call was overkill, and
  `rand` 0.10's `OsRng` API churn makes `getrandom` the cleaner fit.
* `FcryError` gains a `Format(String)` variant for header / framing
  errors and a `From<getrandom::Error>` impl (replacing the
  `rand::Error` impl).
* Drop the noisy `[reader]` / `[encrypt]` / `[decrypt]` stderr
  tracing prints and the `dbg!(&cli.raw_key)` (which leaked the key
  to stderr).
* Replace `unwrap()` on file open / create with `?` so I/O errors
  surface as structured `FcryError::Io` instead of aborting.
* Remove the unused `AheadReader::read_exact` wrapper -- the
  decryptor now reads the header through the underlying `BufRead`
  directly before wrapping it in `AheadReader`.
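The pattern in the last bullet -- consume a fixed-size header off the front of the reader, then hand the same reader to the chunking wrapper -- can be sketched with std types (the 32-byte header length here is an assumption for illustration, not fcry's actual constant):

```rust
use std::io::BufRead;

// Illustrative header length; fcry's real constant lives in header.rs.
const HEADER_LEN: usize = 32;

// Read a fixed-size header off the front of any BufRead, leaving the
// reader positioned at the first ciphertext chunk. A short read surfaces
// as an io::Error instead of a panic.
fn read_header<R: BufRead>(r: &mut R) -> std::io::Result<[u8; HEADER_LEN]> {
    let mut header = [0u8; HEADER_LEN];
    r.read_exact(&mut header)?;
    Ok(header)
}
```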

Tests
-----
Add `tests/roundtrip.rs` (assert_cmd + tempfile) covering: empty
input, single byte, sub-chunk, exact chunk, chunk+1, multi-chunk,
custom small chunk size (4096), pathological 1-byte chunk size,
stdin/stdout pipe mode, wrong key rejection, tampered header,
tampered ciphertext, truncated ciphertext, bad magic, short raw key,
and the header-is-authoritative property (encrypt with a weird chunk
size, decrypt without specifying one). Also adds a unit test in
`header.rs` for header encode/decode roundtrip and bad-magic rejection.
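The tamper tests all reduce to flipping one bit of the ciphertext file and asserting that decryption exits non-zero; a helper in that spirit might look like this (the function name is illustrative, not the actual test code):

```rust
use std::fs;
use std::path::Path;

// Tamper helper in the spirit of the integration tests: read a ciphertext
// file and flip one bit at `offset`. Feeding the result to the decryptor
// must fail the authentication check on some chunk (offset < header size
// tampers the header; larger offsets tamper the ciphertext body).
fn tampered_copy(path: &Path, offset: usize) -> std::io::Result<Vec<u8>> {
    let mut bytes = fs::read(path)?;
    bytes[offset] ^= 0x01; // a single bit-flip is enough to break the tag
    Ok(bytes)
}
```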

TODO.md trimmed to the concrete follow-up sequence (manual STREAM
nonces, secrets/rlimit, atomic output, argon2id KDF + prompt,
multi-threaded pipeline, length-committed mode).

Test plan
---------
* `cargo clippy && cargo clippy --tests` -- clean.
* `cargo +nightly fmt` -- no diff.
* `cargo test` -- 16 integration + 2 header unit tests pass.
* Manual: `echo hi | fcry --raw-key 0123456789abcdef0123456789abcdef
  | fcry -d --raw-key 0123456789abcdef0123456789abcdef` prints `hi`.

Trailers
--------
Refs: TODO.md (Roadmap 1.0 follow-up sequence)
Breaking-Change: file format; 0.9.x files cannot be decrypted by 0.10.x
Commit: 4eee8e7a95 (parent 5e51b4bfe1)
Date: 2026-05-02 17:22:47 +02:00
Diffstat: 10 changed files, 761 additions, 392 deletions
@@ -1,57 +1,60 @@
 // SPDX-License-Identifier: GPL-3.0-only
 use chacha20poly1305::{KeyInit, XChaCha20Poly1305, aead::stream};
-use rand::{RngCore, rngs::OsRng};
 use crate::error::*;
-use crate::reader::ReadInfoChunk;
-use crate::utils::BUFSIZE;
+use crate::header::{AlgId, Header, KdfParams, NONCE_PREFIX_LEN, TAG_LEN};
+use crate::reader::{AheadReader, ReadInfoChunk};
+use crate::utils::*;
 pub fn encrypt<S: AsRef<str>>(
     input_file: Option<S>,
     output_file: Option<S>,
     key: [u8; 32],
+    chunk_size: u32,
 ) -> Result<(), FcryError> {
-    let mut f_plain = read_from_file_or_stdin(input_file, BUFSIZE);
-    let mut f_encrypted = write_to_file_or_stdout(output_file);
+    let chunk_sz = chunk_size as usize;
+    let mut f_plain = AheadReader::from(open_input(input_file)?, chunk_sz);
+    let mut f_encrypted = open_output(output_file)?;
-    let mut nonce = [0u8; 19];
-    OsRng.fill_bytes(&mut nonce);
+    let mut nonce_prefix = [0u8; NONCE_PREFIX_LEN];
+    getrandom::fill(&mut nonce_prefix)?;
-    // let key = XChaCha20Poly1305::generate_key(&mut OsRng);
-    f_encrypted.write_all(&nonce)?;
+    let header = Header {
+        alg: AlgId::XChaCha20Poly1305,
+        flags: 0,
+        chunk_size,
+        kdf: KdfParams::Raw,
+        nonce_prefix,
+    };
+    let aad = header.encode();
+    f_encrypted.write_all(&aad)?;
     let aead = XChaCha20Poly1305::new(&key.into());
-    let mut stream_encryptor = stream::EncryptorBE32::from_aead(aead, &nonce.into());
+    let mut stream_encryptor = stream::EncryptorBE32::from_aead(aead, &nonce_prefix.into());
-    let mut buf = vec![0; BUFSIZE];
+    let mut buf = vec![0u8; chunk_sz];
     loop {
-        let read_result = f_plain.read_ahead(&mut buf)?;
-        match read_result {
-            ReadInfoChunk::Normal(n) => {
-                assert_eq!(n, BUFSIZE);
-                assert_eq!(buf.len(), BUFSIZE);
-                eprintln!("[encrypt]: read normal chunk");
-                stream_encryptor.encrypt_next_in_place(&[], &mut buf)?;
+        match f_plain.read_ahead(&mut buf)? {
+            ReadInfoChunk::Normal(_) => {
+                stream_encryptor.encrypt_next_in_place(&aad, &mut buf)?;
                 f_encrypted.write_all(&buf)?;
                 // buf grows after encrypt_next_in_place because of tag that is added
                 // we shrink it to the BUFSIZE in order to read the correct size
-                buf.truncate(BUFSIZE);
+                buf.truncate(chunk_sz);
             }
             ReadInfoChunk::Last(n) => {
-                eprintln!("[encrypt]: read last chunk");
                 buf.truncate(n);
-                stream_encryptor.encrypt_last_in_place(&[], &mut buf)?;
+                stream_encryptor.encrypt_last_in_place(&aad, &mut buf)?;
                 f_encrypted.write_all(&buf)?;
                 break;
             }
             ReadInfoChunk::Empty => {
-                eprintln!("[encrypt]: read empty chunk");
-                panic!("[ERROR] Empty Chunk while reading");
+                // Empty plaintext: still emit a final "last" tag so the decryptor
+                // authenticates the (empty) stream rather than silently producing nothing.
+                buf.clear();
+                stream_encryptor.encrypt_last_in_place(&aad, &mut buf)?;
+                f_encrypted.write_all(&buf)?;
+                break;
             }
         }
     }
@@ -64,38 +67,38 @@ pub fn decrypt<S: AsRef<str>>(
     output_file: Option<S>,
     key: [u8; 32],
 ) -> Result<(), FcryError> {
-    let mut f_encrypted = read_from_file_or_stdin(input_file, BUFSIZE + 16);
-    let mut f_plain = write_to_file_or_stdout(output_file);
+    let mut reader = open_input(input_file)?;
+    let header = Header::read(&mut reader)?;
+    let aad = header.encode();
-    let mut nonce = [0u8; 19];
-    f_encrypted.read_exact(&mut nonce)?;
+    let chunk_sz = header.chunk_size as usize;
+    let cipher_chunk = chunk_sz + TAG_LEN;
+    let mut f_encrypted = AheadReader::from(reader, cipher_chunk);
+    let mut f_plain = open_output(output_file)?;
     let aead = XChaCha20Poly1305::new(&key.into());
-    let mut stream_decryptor = stream::DecryptorBE32::from_aead(aead, &nonce.into());
+    let mut stream_decryptor = stream::DecryptorBE32::from_aead(aead, &header.nonce_prefix.into());
-    let mut buf = vec![0; BUFSIZE + 16];
+    let mut buf = vec![0u8; cipher_chunk];
     loop {
-        let read_result = f_encrypted.read_ahead(&mut buf)?;
-        match read_result {
-            ReadInfoChunk::Normal(n) => {
-                assert_eq!(n, BUFSIZE + 16);
-                eprintln!("[decrypt]: read normal chunk");
-                stream_decryptor.decrypt_next_in_place(&[], &mut buf)?;
+        match f_encrypted.read_ahead(&mut buf)? {
+            ReadInfoChunk::Normal(_) => {
+                stream_decryptor.decrypt_next_in_place(&aad, &mut buf)?;
                 f_plain.write_all(&buf)?;
-                buf.resize(BUFSIZE + 16, 0);
+                buf.resize(cipher_chunk, 0);
             }
             ReadInfoChunk::Last(n) => {
-                eprintln!("[decrypt]: read last chunk");
                 buf.truncate(n);
-                stream_decryptor.decrypt_last_in_place(&[], &mut buf)?;
+                stream_decryptor.decrypt_last_in_place(&aad, &mut buf)?;
                 f_plain.write_all(&buf)?;
                 break;
             }
             ReadInfoChunk::Empty => {
-                eprintln!("[decrypt]: read empty chunk");
-                panic!("Empty Chunk while reading");
+                return Err(FcryError::Format(
+                    "truncated ciphertext: missing final chunk".into(),
+                ));
            }
        }
    }