feat: initial implementation of radicle-reticulum bridge
Python adapter bridging Radicle (decentralized Git) over Reticulum mesh
networking (LoRa, packet radio, serial, I2P). Enables offline-first code
collaboration without internet infrastructure or public seed nodes.

- Identity mapping: Radicle Ed25519 DIDs ↔ RNS destinations, with persistence
- TCP↔RNS bridge: tunnels radicle-node traffic over mesh, auto-discovers peers
- LXMF sync: store-and-forward bundle delivery for offline peers, auto-push
- Adaptive strategy: selects FULL/INCREMENTAL/MINIMAL/QR by RTT + throughput
- Git bundles: full and incremental, delay-tolerant transfer
- QR air-gap: encode/decode bundles as QR codes (≤2953 bytes)
- CLI: radicle-rns bridge/node/sync/bundle/ping/peers/identity commands
- 158 tests
commit c418cfaccf
@ -0,0 +1,42 @@
# Python
__pycache__/
*.py[cod]
*.egg-info/
dist/
build/
.venv/
*.egg

# uv
.uv/
uv.lock

# Tests
.pytest_cache/
.coverage
htmlcov/

# Type checking
.mypy_cache/

# Identity / secrets
~/.radicle-rns/
*.identity

# Reticulum runtime
~/.reticulum/

# Editor
.vscode/
.idea/
*.swp
*~

# OS
.DS_Store
Thumbs.db

# Generated bundles / QR
*.bundle
*.radicle-bundle
*.refs.json
@ -0,0 +1,130 @@
# radicle-reticulum

Bridges [Radicle](https://radicle.xyz) (decentralized Git) over [Reticulum](https://reticulum.network) mesh networking — LoRa, packet radio, serial, I2P, and more. Enables offline-first code collaboration without internet infrastructure.

**Why:** Radicle requires publicly reachable seed nodes; Reticulum routes over any physical medium. Both use Ed25519 keys — a natural fit.

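The shared-key observation can be made concrete: Radicle node IDs are `did:key` DIDs over Ed25519, and Reticulum identities also carry an Ed25519 signing key, which is what makes a one-to-one mapping possible. A minimal sketch of the DID side — the helper names below are illustrative, not the project's `identity.py` API:

```python
# Sketch: derive a did:key identifier from a raw Ed25519 public key.
# Function names here are illustrative, not radicle-reticulum's actual API.

_B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58btc_encode(data: bytes) -> str:
    """Plain base58btc encoding (what the multibase 'z' prefix denotes)."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = _B58_ALPHABET[rem] + out
    # Leading zero bytes are preserved as '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def ed25519_did(public_key: bytes) -> str:
    """did:key for a 32-byte Ed25519 public key (multicodec 0xed, varint 0xed01)."""
    assert len(public_key) == 32
    return "did:key:z" + base58btc_encode(b"\xed\x01" + public_key)

# Ed25519 did:key identifiers always begin with "z6Mk".
print(ed25519_did(bytes(range(32)))[:12])  # prints: did:key:z6Mk
```

The fixed `z6Mk` prefix is a property of the multicodec header, so any Ed25519-based Radicle NID is recognizable at a glance.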
## Install

```sh
pip install uv              # once
uv sync                     # install deps into .venv
```

QR encoding (optional):
```sh
uv sync --extra qr          # ASCII QR output
pip install pillow pyzbar   # image encode/decode
```

## Quickstart: bridge two machines over mesh

**Machine A** (or any node on the mesh):
```sh
uv run radicle-rns bridge
# prints: RNS address: <HASH>
```

**Machine B**:
```sh
uv run radicle-rns bridge --connect <HASH-FROM-A>
```

**Both machines** — configure radicle-node to use the bridge as a seed:
```toml
# ~/.radicle/node/config.toml
[[seeds]]
address = "127.0.0.1:8777"
```

Then start radicle-node and use it normally:
```sh
rad node start
rad clone rad:z3xyz...
rad push / rad pull
```

The bridge auto-detects your local NID via `rad self` and auto-registers discovered remote NIDs with radicle-node (`--no-auto-seed` to disable).

## Commands

```
radicle-rns bridge                         # TCP↔RNS bridge (main command)
radicle-rns node                           # lightweight peer-announce node
radicle-rns peers                          # discover peers on the mesh
radicle-rns ping <hash>                    # RTT probe to a peer
radicle-rns identity generate              # create/show identity
radicle-rns sync <repo>                    # LXMF store-and-forward sync
radicle-rns bundle create <repo>           # pack a repo into a bundle
radicle-rns bundle apply <bundle> <repo>   # unpack a bundle
radicle-rns bundle info <bundle>           # inspect a bundle
radicle-rns bundle qr-encode <bundle>      # print ASCII QR (≤2953 bytes)
radicle-rns bundle qr-decode <image.png>   # decode QR back to bundle
```

Global flags: `-v` verbose, `--identity PATH` (default `~/.radicle-rns/identity`).

## Air-gapped / QR transfer

For truly offline transfers (tiny incremental bundles ≤ 2953 bytes):

```sh
# Sender
radicle-rns bundle create myrepo --incremental --basis prev.refs.json
radicle-rns bundle qr-encode myrepo-*.radicle-bundle

# Receiver (photograph the QR, then:)
radicle-rns bundle qr-decode qr-photo.png -o received.radicle-bundle
radicle-rns bundle apply received.radicle-bundle ./myrepo
```

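The 2953-byte ceiling is the binary-mode capacity of a version 40 QR code at the lowest error-correction level. A sketch of the size gate this workflow depends on — `QR_MAX_BYTES` and `BundleTooLargeForQR` mirror the package's exported names, but `check_qr_capacity` is an illustrative helper, not necessarily the real `qr.py` API:

```python
# Sketch: gate a bundle against single-QR capacity before encoding.
# check_qr_capacity is an illustrative name, not the project's actual API.

QR_MAX_BYTES = 2953  # QR version 40, binary mode, error correction level L

class BundleTooLargeForQR(ValueError):
    pass

def check_qr_capacity(bundle: bytes) -> bytes:
    """Return the bundle unchanged if it fits in a single QR code."""
    if len(bundle) > QR_MAX_BYTES:
        raise BundleTooLargeForQR(
            f"bundle is {len(bundle)} bytes; binary-mode QR holds at most "
            f"{QR_MAX_BYTES} bytes (create an incremental bundle instead)"
        )
    return bundle

# With the optional `qrcode` package installed (uv sync --extra qr), the
# checked payload can then be rendered as ASCII, e.g.:
#   import qrcode
#   qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_L)
#   qr.add_data(check_qr_capacity(bundle))
#   qr.print_ascii()
```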
## Bridge flags

| Flag | Default | Description |
|------|---------|-------------|
| `-l, --listen-port` | 8777 | TCP port radicle-node connects to |
| `--radicle-port` | 8776 | Port radicle-node listens on |
| `-c, --connect <hash>` | — | Manually connect to a remote bridge |
| `--nid <NID>` | auto | Override local radicle NID |
| `--no-auto-connect` | — | Disable auto-connect on discovery |
| `--no-auto-seed` | — | Disable auto-registering remote NIDs |

## Architecture

```
radicle-node ──TCP:8777── RadicleBridge ──RNS Link── RadicleBridge ──TCP:8776── radicle-node
                               │                          │
                          RNS announce               RNS announce
                        (auto-discovery)           (auto-discovery)
```

- **Identity** (`identity.py`) — Ed25519 DID ↔ RNS destination mapping; persisted to `~/.radicle-rns/identity`
- **Adapter** (`adapter.py`) — peer discovery via RNS announces
- **Link** (`link.py`) — buffered RNS Link with state machine
- **SyncManager** (`sync.py`) — LXMF store-and-forward bundles; auto-push on refs announce
- **AdaptiveSyncManager** (`adaptive.py`) — picks FULL/INCREMENTAL/MINIMAL/QR by RTT + throughput
- **GitBundle** (`git_bundle.py`) — full and incremental Git bundles
- **QR** (`qr.py`) — visual air-gap transfer for tiny bundles

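Conceptually, each bridge endpoint is a bidirectional byte pump between radicle-node's TCP socket and the RNS link. A minimal stand-in sketch with plain TCP on both sides — `serve_bridge` and `_pump` are our illustrative names, and the real bridge replaces the upstream TCP connection with an RNS `Link`:

```python
import asyncio

async def _pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Copy bytes one direction until EOF, then half-close the far side.
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
        if writer.can_write_eof():
            writer.write_eof()
    except ConnectionError:
        pass

async def serve_bridge(listen_port: int, upstream_host: str, upstream_port: int):
    """Accept local TCP connections (radicle-node dials this, default 8777)
    and relay each one to the upstream endpoint (stand-in for the RNS link)."""
    async def handle(client_r, client_w):
        up_r, up_w = await asyncio.open_connection(upstream_host, upstream_port)
        try:
            # Pump both directions concurrently until each side hits EOF.
            await asyncio.gather(_pump(client_r, up_w), _pump(up_r, client_w))
        finally:
            up_w.close()
            client_w.close()

    return await asyncio.start_server(handle, "127.0.0.1", listen_port)
```

Half-closing with `write_eof()` rather than closing outright lets one direction finish draining while the other is still in flight, which matters on high-latency mesh paths.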
## Development

```sh
uv run pytest        # 158 tests
uv run pytest -x -q  # stop on first failure
```

## Reticulum interfaces

Reticulum auto-discovers local peers via UDP multicast. For LoRa / serial / I2P, configure `~/.reticulum/config`:

```ini
[[lora_interface]]
type = RNodeInterface
port = /dev/ttyUSB0
frequency = 868000000
bandwidth = 125000
spreadingfactor = 7
codingrate = 5
```

See [Reticulum docs](https://reticulum.network/manual/) for the full interface list.
@ -0,0 +1,56 @@
[project]
name = "radicle-reticulum"
version = "0.1.0"
description = "Radicle transport adapter for Reticulum mesh networking"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [
    {name = "Radicle-Reticulum Contributors"}
]
keywords = ["radicle", "reticulum", "mesh", "p2p", "git", "decentralized"]
dependencies = [
    "rns>=0.7.0",
    "lxmf>=0.4.0",
    "cryptography>=41.0.0",
]

[project.optional-dependencies]
qr = [
    "qrcode>=7.0",
]
dev = [
    "pytest>=7.0.0",
    "pytest-asyncio>=0.21.0",
    "mypy>=1.0.0",
    "qrcode>=7.0",
]

[project.scripts]
radicle-rns = "radicle_reticulum.cli:main"

[tool.uv]
package = true

[dependency-groups]
dev = [
    "pytest>=7.0.0",
    "pytest-asyncio>=0.21.0",
    "mypy>=1.0.0",
    "qrcode>=8.2",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/radicle_reticulum"]

[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
@ -0,0 +1,86 @@
"""Radicle transport adapter for Reticulum mesh networking."""

from radicle_reticulum.identity import RadicleIdentity
from radicle_reticulum.adapter import RNSTransportAdapter
from radicle_reticulum.link import RadicleLink
from radicle_reticulum.messages import (
    MessageType,
    NodeAnnouncement,
    InventoryAnnouncement,
    RefAnnouncement,
    Ping,
    Pong,
    decode_message,
)
from radicle_reticulum.git_bundle import (
    GitBundle,
    GitBundleGenerator,
    GitBundleApplicator,
    BundleType,
    BundleMetadata,
)
from radicle_reticulum.sync import (
    SyncManager,
    SyncMode,
    RefsAnnouncement,
    create_dead_drop_bundle,
    apply_dead_drop_bundle,
)
from radicle_reticulum.adaptive import (
    SyncStrategy,
    LinkQuality,
    BandwidthProbe,
    AdaptiveSyncManager,
    select_strategy,
)
from radicle_reticulum.bridge import RadicleBridge
from radicle_reticulum.qr import (
    encode_bundle_to_qr,
    decode_bundle_from_qr_data,
    decode_bundle_from_qr_image,
    BundleTooLargeForQR,
    QR_MAX_BYTES as QR_BUNDLE_MAX_BYTES,
)

__version__ = "0.1.0"
__all__ = [
    # Identity
    "RadicleIdentity",
    # Transport
    "RNSTransportAdapter",
    "RadicleLink",
    # Messages
    "MessageType",
    "NodeAnnouncement",
    "InventoryAnnouncement",
    "RefAnnouncement",
    "Ping",
    "Pong",
    "decode_message",
    # Git bundles
    "GitBundle",
    "GitBundleGenerator",
    "GitBundleApplicator",
    "BundleType",
    "BundleMetadata",
    # Sync
    "SyncManager",
    "SyncMode",
    "RefsAnnouncement",
    "create_dead_drop_bundle",
    "apply_dead_drop_bundle",
    # Adaptive sync
    "SyncStrategy",
    "LinkQuality",
    "BandwidthProbe",
    "AdaptiveSyncManager",
    "select_strategy",
    # Bridge
    "RadicleBridge",
    # QR
    "encode_bundle_to_qr",
    "decode_bundle_from_qr_data",
    "decode_bundle_from_qr_image",
    "BundleTooLargeForQR",
    "QR_BUNDLE_MAX_BYTES",
]
@ -0,0 +1,331 @@
"""RNS Transport Adapter for Radicle.

This is the core adapter that allows Radicle to use Reticulum as a transport.
It provides:
- Destination registration and announcement
- Incoming connection handling
- Outbound connection establishment
- Peer discovery via RNS announcements
"""

import threading
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Set

import RNS

from radicle_reticulum.identity import RadicleIdentity
from radicle_reticulum.link import RadicleLink, LinkState


# Radicle app name for RNS destination
APP_NAME = "radicle"

# Aspect for node connections
ASPECT_NODE = "node"

# Aspect for repository-specific endpoints
ASPECT_REPO = "repo"

# App data identifiers for filtering announces
NODE_APP_DATA_MAGIC = b"RADICLE_NODE_V1"
REPO_APP_DATA_MAGIC = b"RADICLE_REPO_V1"


@dataclass
class PeerInfo:
    """Information about a discovered peer."""
    identity: RadicleIdentity
    destination_hash: bytes
    last_seen: float
    announced_repos: Set[str] = field(default_factory=set)

    @property
    def age(self) -> float:
        """Seconds since last seen."""
        return time.time() - self.last_seen


class RNSTransportAdapter:
    """Reticulum transport adapter for Radicle.

    Provides the bridge between Radicle's network layer and Reticulum.
    Manages identity, destinations, links, and peer discovery.
    """

    def __init__(
        self,
        identity: Optional[RadicleIdentity] = None,
        config_path: Optional[str] = None,
    ):
        """Initialize the adapter.

        Args:
            identity: Radicle identity to use. Generated if not provided.
            config_path: Path to Reticulum config. Uses default if not provided.
        """
        # Initialize Reticulum
        self.reticulum = RNS.Reticulum(config_path)

        # Set up identity
        if identity is None:
            identity = RadicleIdentity.generate()
        self.identity = identity

        # Create main node destination
        self.node_destination = RNS.Destination(
            self.identity.rns_identity,
            RNS.Destination.IN,
            RNS.Destination.SINGLE,
            APP_NAME,
            ASPECT_NODE,
        )

        # Set up link handling
        self.node_destination.set_link_established_callback(self._on_incoming_link)

        # Peer tracking
        self._peers: Dict[bytes, PeerInfo] = {}
        self._peers_lock = threading.Lock()

        # Active links
        self._links: Dict[bytes, RadicleLink] = {}
        self._links_lock = threading.Lock()

        # Callbacks
        self._on_peer_discovered: Optional[Callable[[PeerInfo], None]] = None
        self._on_incoming_connection: Optional[Callable[[RadicleLink], None]] = None

        # Repository destinations (hash -> destination)
        self._repo_destinations: Dict[str, RNS.Destination] = {}

        RNS.log("RNS Transport Adapter initialized", RNS.LOG_INFO)
        RNS.log(f"  Node ID: {self.identity.did}", RNS.LOG_INFO)
        RNS.log(f"  RNS Hash: {self.identity.rns_hash_hex}", RNS.LOG_INFO)

    def start(self):
        """Start the adapter and begin announcing presence."""
        # Register announce handler to discover peers
        RNS.Transport.register_announce_handler(self._handle_announce)

        # Announce our node destination
        self.announce()

        RNS.log("RNS Transport Adapter started", RNS.LOG_INFO)

    def stop(self):
        """Stop the adapter and clean up."""
        # Close all active links
        with self._links_lock:
            for link in list(self._links.values()):
                link.close()
            self._links.clear()

        RNS.log("RNS Transport Adapter stopped", RNS.LOG_INFO)

    def announce(self, app_data: Optional[bytes] = None):
        """Announce this node's presence on the network.

        Args:
            app_data: Optional additional application data to include.
                Could include repository list, capabilities, etc.
        """
        # Prepend node magic for filtering, then append any extra app_data
        full_app_data = NODE_APP_DATA_MAGIC
        if app_data:
            full_app_data += app_data

        self.node_destination.announce(app_data=full_app_data)
        RNS.log(f"Announced node: {self.identity.rns_hash_hex}", RNS.LOG_DEBUG)

    def announce_repository(self, repo_id: str, repo_data: Optional[bytes] = None):
        """Announce availability of a specific repository.

        Args:
            repo_id: Repository identifier (typically a hash or DID).
            repo_data: Optional metadata about the repository.
        """
        if repo_id not in self._repo_destinations:
            # Create destination for this repository
            dest = RNS.Destination(
                self.identity.rns_identity,
                RNS.Destination.IN,
                RNS.Destination.SINGLE,
                APP_NAME,
                ASPECT_REPO,
                repo_id,
            )
            dest.set_link_established_callback(
                lambda link: self._on_incoming_link(link, repo_id)
            )
            self._repo_destinations[repo_id] = dest

        # Prepend repo magic for filtering
        full_app_data = REPO_APP_DATA_MAGIC
        if repo_data:
            full_app_data += repo_data

        self._repo_destinations[repo_id].announce(app_data=full_app_data)
        RNS.log(f"Announced repository: {repo_id}", RNS.LOG_DEBUG)

    def connect(
        self,
        destination_hash: bytes,
        timeout: float = 30.0,
    ) -> Optional[RadicleLink]:
        """Connect to a peer by destination hash.

        Args:
            destination_hash: 16-byte RNS destination hash.
            timeout: Connection timeout in seconds.

        Returns:
            RadicleLink if connection successful, None otherwise.
        """
        # Request a path if we do not already have one
        if not RNS.Transport.has_path(destination_hash):
            RNS.Transport.request_path(destination_hash)

        # Wait for path to be established
        deadline = time.time() + timeout
        while not RNS.Transport.has_path(destination_hash):
            if time.time() > deadline:
                RNS.log(f"Path request timeout: {destination_hash.hex()}", RNS.LOG_WARNING)
                return None
            time.sleep(0.1)

        # Get the destination
        identity = RNS.Identity.recall(destination_hash)
        if identity is None:
            RNS.log(f"Could not recall identity for: {destination_hash.hex()}", RNS.LOG_WARNING)
            return None

        destination = RNS.Destination(
            identity,
            RNS.Destination.OUT,
            RNS.Destination.SINGLE,
            APP_NAME,
            ASPECT_NODE,
        )

        # Create link
        link = RadicleLink.create_outbound(destination)

        # Wait for link to establish
        deadline = time.time() + timeout
        while link.state == LinkState.PENDING:
            if time.time() > deadline:
                RNS.log(f"Link timeout: {destination_hash.hex()}", RNS.LOG_WARNING)
                return None
            time.sleep(0.1)

        if link.state != LinkState.ACTIVE:
            return None

        # Track the link
        with self._links_lock:
            self._links[destination_hash] = link

        return link

    def connect_to_peer(
        self,
        peer: PeerInfo,
        timeout: float = 30.0,
    ) -> Optional[RadicleLink]:
        """Connect to a discovered peer.

        Args:
            peer: PeerInfo from peer discovery.
            timeout: Connection timeout in seconds.
        """
        return self.connect(peer.destination_hash, timeout)

    def get_peers(self) -> List[PeerInfo]:
        """Get list of discovered peers."""
        with self._peers_lock:
            return list(self._peers.values())

    def get_peer_by_did(self, did: str) -> Optional[PeerInfo]:
        """Look up a peer by their Radicle DID."""
        with self._peers_lock:
            for peer in self._peers.values():
                if peer.identity.did == did:
                    return peer
        return None

    def set_on_peer_discovered(self, callback: Callable[[PeerInfo], None]):
        """Set callback for peer discovery events."""
        self._on_peer_discovered = callback

    def set_on_incoming_connection(self, callback: Callable[[RadicleLink], None]):
        """Set callback for incoming connections."""
        self._on_incoming_connection = callback

    def _handle_announce(
        self,
        destination_hash: bytes,
        announced_identity: RNS.Identity,
        app_data: Optional[bytes],
    ) -> None:
        """Handle incoming announce from another node."""
        # Ignore our own announcements
        if destination_hash == self.node_destination.hash:
            return

        # Filter: only accept announces with node magic in app_data
        if app_data is None or not app_data.startswith(NODE_APP_DATA_MAGIC):
            # Not a radicle node announce, ignore
            return

        # Create RadicleIdentity from announced identity
        try:
            radicle_id = RadicleIdentity.from_rns_identity(announced_identity)
        except Exception as e:
            RNS.log(f"Failed to create RadicleIdentity from announce: {e}", RNS.LOG_WARNING)
            return

        # Update or create peer info
        with self._peers_lock:
            if destination_hash in self._peers:
                peer = self._peers[destination_hash]
                peer.last_seen = time.time()
            else:
                peer = PeerInfo(
                    identity=radicle_id,
                    destination_hash=destination_hash,
                    last_seen=time.time(),
                )
                self._peers[destination_hash] = peer
                RNS.log(f"Discovered peer: {radicle_id.did}", RNS.LOG_INFO)

        # Notify callback
        if self._on_peer_discovered:
            self._on_peer_discovered(peer)

    def _on_incoming_link(self, link: RNS.Link, repo_id: Optional[str] = None):
        """Handle incoming link establishment."""
        radicle_link = RadicleLink.from_incoming(link)

        # Track the link
        remote_identity = link.get_remote_identity()
        remote_hash = remote_identity.hash if remote_identity else None
        if remote_hash:
            with self._links_lock:
                self._links[remote_hash] = radicle_link

        RNS.log("Incoming connection established", RNS.LOG_INFO)

        # Notify callback
        if self._on_incoming_connection:
            self._on_incoming_connection(radicle_link)

    @property
    def node_hash(self) -> bytes:
        """Get this node's destination hash."""
        return self.node_destination.hash

    @property
    def node_hash_hex(self) -> str:
        """Get this node's destination hash as hex."""
        return self.node_destination.hexhash
@ -0,0 +1,347 @@
|
|||
"""Adaptive sync with automatic bandwidth detection.
|
||||
|
||||
Automatically selects sync strategy based on:
|
||||
- Link RTT (round-trip time)
|
||||
- Measured throughput
|
||||
- Bundle size estimation
|
||||
|
||||
Strategies:
|
||||
- FULL: Fast links (>100 Kbps) - send complete bundles
|
||||
- INCREMENTAL: Medium links (1-100 Kbps) - send only changes
|
||||
- MINIMAL: Slow links (<1 Kbps, LoRa) - send only refs + request
|
||||
- QR: Tiny payloads (<3KB) - encode as QR for visual transfer
|
||||
"""
|
||||
|
||||
import time
|
||||
import struct
|
||||
from dataclasses import dataclass
|
||||
from enum import Enum
|
||||
from typing import Optional, Tuple
|
||||
|
||||
import RNS
|
||||
|
||||
from radicle_reticulum.link import RadicleLink
|
||||
|
||||
|
||||
class SyncStrategy(Enum):
|
||||
"""Sync strategy based on link quality."""
|
||||
FULL = "full" # >100 Kbps - send everything
|
||||
INCREMENTAL = "incremental" # 1-100 Kbps - only changes
|
||||
MINIMAL = "minimal" # <1 Kbps - refs only, request pulls
|
||||
QR = "qr" # <3 KB payload - QR code capable
|
||||
|
||||
|
||||
@dataclass
|
||||
class LinkQuality:
|
||||
"""Measured link quality metrics."""
|
||||
rtt_ms: float # Round-trip time in milliseconds
|
||||
throughput_bps: float # Estimated throughput in bits per second
|
||||
packet_loss: float # Packet loss ratio (0-1)
|
||||
is_lora: bool # Detected as LoRa link
|
||||
strategy: SyncStrategy # Recommended strategy
|
||||
|
||||
@property
|
||||
def throughput_kbps(self) -> float:
|
||||
return self.throughput_bps / 1000
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return (
|
||||
f"LinkQuality(rtt={self.rtt_ms:.0f}ms, "
|
||||
f"throughput={self.throughput_kbps:.1f}Kbps, "
|
||||
f"strategy={self.strategy.value})"
|
||||
)
|
||||
|
||||
|
||||
# Bandwidth thresholds (bits per second)
|
||||
THRESHOLD_FULL = 100_000 # 100 Kbps - use full sync
|
||||
THRESHOLD_INCREMENTAL = 1_000 # 1 Kbps - use incremental
|
||||
# Below 1 Kbps - use minimal/QR
|
||||
|
||||
# RTT thresholds (milliseconds) - used as heuristic
|
||||
RTT_FAST = 100 # <100ms likely ethernet/wifi
|
||||
RTT_MEDIUM = 1000 # <1s likely internet
|
||||
RTT_SLOW = 10000 # <10s likely LoRa/satellite
|
||||
|
||||
# QR code capacity
|
||||
QR_MAX_BYTES = 2953 # QR version 40, binary mode
|
||||
|
||||
|
||||
class BandwidthProbe:
|
||||
"""Probes link quality to determine optimal sync strategy."""
|
||||
|
||||
# Probe packet sizes
|
||||
PROBE_SMALL = 64 # Minimum probe
|
||||
PROBE_MEDIUM = 512 # Medium probe
|
||||
PROBE_LARGE = 2048 # Large probe (if link allows)
|
||||
|
||||
def __init__(self, link: RadicleLink):
|
||||
self.link = link
|
||||
self._measurements: list = []
|
||||
|
||||
def measure_rtt(self, samples: int = 3) -> float:
|
||||
"""Measure round-trip time with multiple samples."""
|
||||
rtts = []
|
||||
|
||||
for _ in range(samples):
|
||||
# Send probe packet
|
||||
probe_data = struct.pack("!BQ", 0x01, int(time.time() * 1000000))
|
||||
start = time.time()
|
||||
|
||||
if not self.link.send(probe_data):
|
||||
continue
|
||||
|
||||
# Wait for response
|
||||
response = self.link.recv(timeout=30.0)
|
||||
if response:
|
||||
rtt = (time.time() - start) * 1000 # ms
|
||||
rtts.append(rtt)
|
||||
|
||||
time.sleep(0.1) # Brief pause between probes
|
||||
|
||||
if not rtts:
|
||||
return float('inf')
|
||||
|
||||
# Use median RTT
|
||||
rtts.sort()
|
||||
return rtts[len(rtts) // 2]
|
||||
|
||||
def measure_throughput(self, duration: float = 2.0) -> float:
|
||||
"""Measure throughput by sending test data."""
|
||||
# Start with small packets, increase if successful
|
||||
packet_size = self.PROBE_SMALL
|
||||
bytes_sent = 0
|
||||
start = time.time()
|
||||
|
||||
while time.time() - start < duration:
|
||||
# Create probe packet
|
||||
probe_data = struct.pack("!BI", 0x02, packet_size) + b"\x00" * (packet_size - 5)
|
||||
|
||||
if self.link.send(probe_data):
|
||||
bytes_sent += len(probe_data)
|
||||
|
||||
# Try larger packets if successful
|
||||
if packet_size < self.PROBE_LARGE:
|
||||
packet_size = min(packet_size * 2, self.PROBE_LARGE)
|
||||
else:
|
||||
# Reduce packet size on failure
|
||||
packet_size = max(packet_size // 2, self.PROBE_SMALL)
|
||||
|
||||
time.sleep(0.01) # Small delay
|
||||
|
||||
elapsed = time.time() - start
|
||||
if elapsed > 0:
|
||||
return (bytes_sent * 8) / elapsed # bits per second
|
||||
return 0
|
||||
|
||||
def detect_link_type(self) -> bool:
|
||||
"""Detect if link is LoRa based on characteristics."""
|
||||
# LoRa typically has:
|
||||
# - High RTT (>1s)
|
||||
# - Low throughput (<10 Kbps)
|
||||
# - Link properties may indicate
|
||||
|
||||
if hasattr(self.link.rns_link, 'get_establishment_rate'):
|
||||
rate = self.link.rns_link.get_establishment_rate()
|
||||
if rate and rate < 10000: # <10 Kbps
|
||||
return True
|
||||
|
||||
# Check RTT heuristic
|
||||
if self.link.rtt and self.link.rtt > 1.0: # >1 second RTT
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
def probe(self, quick: bool = False) -> LinkQuality:
|
||||
"""Probe link and return quality assessment.
|
||||
|
||||
Args:
|
||||
quick: If True, use faster but less accurate probing
|
||||
"""
|
||||
# Quick mode uses existing RTT if available
|
||||
if quick and self.link.rtt:
|
||||
rtt_ms = self.link.rtt * 1000
|
||||
# Estimate throughput from RTT
|
||||
if rtt_ms < RTT_FAST:
|
||||
throughput = THRESHOLD_FULL * 10 # Assume fast
|
||||
elif rtt_ms < RTT_MEDIUM:
|
||||
throughput = THRESHOLD_FULL
|
||||
elif rtt_ms < RTT_SLOW:
|
||||
throughput = THRESHOLD_INCREMENTAL * 10
|
||||
else:
|
||||
throughput = THRESHOLD_INCREMENTAL / 10
|
||||
else:
|
||||
# Full measurement
|
||||
rtt_ms = self.measure_rtt(samples=2 if quick else 3)
|
||||
throughput = self.measure_throughput(duration=1.0 if quick else 2.0)
|
||||
|
||||
is_lora = self.detect_link_type()
|
||||
|
||||
# Determine strategy
|
||||
if throughput >= THRESHOLD_FULL and rtt_ms < RTT_MEDIUM:
|
||||
strategy = SyncStrategy.FULL
|
||||
elif throughput >= THRESHOLD_INCREMENTAL:
|
||||
strategy = SyncStrategy.INCREMENTAL
|
||||
else:
|
||||
strategy = SyncStrategy.MINIMAL
|
||||
|
||||
return LinkQuality(
|
||||
rtt_ms=rtt_ms,
|
||||
throughput_bps=throughput,
|
||||
packet_loss=0.0, # TODO: measure
|
||||
is_lora=is_lora,
|
||||
strategy=strategy,
|
||||
)
|
||||
|
||||
|
||||
def estimate_transfer_time(size_bytes: int, quality: LinkQuality) -> float:
|
||||
"""Estimate transfer time in seconds for given size and quality."""
|
||||
if quality.throughput_bps <= 0:
|
||||
return float('inf')
|
||||
|
||||
# Account for protocol overhead (~20%)
|
||||
effective_throughput = quality.throughput_bps * 0.8
|
||||
bits = size_bytes * 8
|
||||
return bits / effective_throughput
|
||||
|
||||
|
||||
def select_strategy(
|
||||
bundle_size: int,
|
||||
incremental_size: Optional[int],
|
||||
quality: LinkQuality,
|
||||
max_transfer_time: float = 3600, # 1 hour default max
|
||||
) -> Tuple[SyncStrategy, str]:
|
||||
"""Select optimal sync strategy based on sizes and link quality.
|
||||
|
||||
Returns (strategy, reason).
|
||||
"""
|
||||
# Check if QR is viable for tiny payloads
|
||||
if incremental_size and incremental_size <= QR_MAX_BYTES:
|
||||
return SyncStrategy.QR, f"Incremental fits in QR ({incremental_size} bytes)"
|
||||
|
||||
# Calculate transfer times
|
||||
full_time = estimate_transfer_time(bundle_size, quality)
|
||||
incr_time = estimate_transfer_time(incremental_size or bundle_size, quality)
|
||||
|
||||
# If LoRa detected, prefer minimal/incremental
|
||||
if quality.is_lora:
|
||||
if incremental_size and incr_time < max_transfer_time:
|
||||
return SyncStrategy.INCREMENTAL, f"LoRa link, incremental viable ({incr_time:.0f}s)"
|
||||
return SyncStrategy.MINIMAL, "LoRa link, request-based sync recommended"
|
||||
|
||||
# For fast links, full sync if reasonable
|
||||
if quality.strategy == SyncStrategy.FULL:
|
||||
if full_time < 60: # Under 1 minute
|
||||
return SyncStrategy.FULL, f"Fast link, full sync in {full_time:.0f}s"
|
||||
elif incremental_size and incr_time < full_time / 2:
|
||||
return SyncStrategy.INCREMENTAL, f"Large repo, incremental faster ({incr_time:.0f}s vs {full_time:.0f}s)"
|
||||
return SyncStrategy.FULL, f"Fast link, full sync in {full_time:.0f}s"
|
||||
|
||||
# For medium links, prefer incremental
|
||||
if incremental_size and incr_time < max_transfer_time:
|
||||
return SyncStrategy.INCREMENTAL, f"Medium link, incremental in {incr_time:.0f}s"
|
||||
|
||||
if full_time < max_transfer_time:
|
||||
return SyncStrategy.FULL, f"No incremental available, full in {full_time:.0f}s"
|
||||
|
||||
return SyncStrategy.MINIMAL, f"Transfer too slow ({full_time:.0f}s), use request-based"
class AdaptiveSyncManager:
    """Sync manager with automatic bandwidth adaptation."""

    def __init__(self, sync_manager):
        """Wrap a SyncManager with adaptive capabilities."""
        self.sync_manager = sync_manager
        self._link_qualities: dict = {}  # peer_hash -> LinkQuality

    def probe_peer(self, link: RadicleLink, quick: bool = True) -> LinkQuality:
        """Probe a peer's link quality."""
        prober = BandwidthProbe(link)
        quality = prober.probe(quick=quick)

        # Cache result
        if link.remote_identity:
            self._link_qualities[link.remote_identity.hash] = quality

        return quality

    def get_cached_quality(self, peer_hash: bytes) -> Optional[LinkQuality]:
        """Get cached link quality for a peer."""
        return self._link_qualities.get(peer_hash)

    def adaptive_sync(
        self,
        repository_id: str,
        link: RadicleLink,
        peer_refs: Optional[dict] = None,
    ) -> Tuple[bool, str]:
        """Perform adaptive sync based on link quality.

        Returns (success, description).
        """
        from radicle_reticulum.git_bundle import GitBundleGenerator, estimate_bundle_size
        from radicle_reticulum.sync import SyncMode

        # Get repository info
        state = self.sync_manager._repos.get(repository_id)
        if not state:
            return False, "Repository not registered"

        # Probe link quality
        quality = self.probe_peer(link, quick=True)
        RNS.log(f"Link quality: {quality}", RNS.LOG_INFO)

        # Estimate bundle sizes
        generator = GitBundleGenerator(state.local_path)
        full_size = estimate_bundle_size(state.local_path)

        # Estimate incremental size (rough: compare ref counts)
        current_refs = generator.get_refs()
        if peer_refs:
            changed = sum(
                1 for r, s in current_refs.items()
                if r not in peer_refs or peer_refs[r] != s
            )
            # Rough estimate: assume ~10 KB per changed ref on average
            incr_size = changed * 10240
        else:
            incr_size = None

        # Select strategy
        strategy, reason = select_strategy(full_size, incr_size, quality)
        RNS.log(f"Selected strategy: {strategy.value} - {reason}", RNS.LOG_INFO)

        # Execute strategy
        if strategy == SyncStrategy.QR:
            bundle = self.sync_manager.create_sync_bundle(
                repository_id, peer_refs=peer_refs, mode=SyncMode.INCREMENTAL
            )
            if bundle:
                # Generate QR (handled by caller)
                return True, f"QR: {len(bundle.encode())} bytes"
            return False, "No changes for QR"

        elif strategy == SyncStrategy.FULL:
            bundle = self.sync_manager.create_sync_bundle(
                repository_id, mode=SyncMode.FULL
            )
        elif strategy == SyncStrategy.INCREMENTAL:
            bundle = self.sync_manager.create_sync_bundle(
                repository_id, peer_refs=peer_refs, mode=SyncMode.INCREMENTAL
            )
        else:  # MINIMAL
            # Just announce refs, let peer request what they need
            self.sync_manager.announce_refs(repository_id)
            return True, "Announced refs for request-based sync"

        if bundle is None:
            return True, "No changes to sync"

        # Send bundle
        if link.remote_identity:
            success = self.sync_manager.send_bundle(
                bundle, link.remote_identity.hash
            )
            if success:
                return True, (
                    f"Sent {bundle.metadata.bundle_type.value} bundle "
                    f"({bundle.metadata.size_bytes} bytes)"
                )
            return False, "Failed to send bundle"

        return False, "No remote identity on link"
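The strategy names used above (FULL/INCREMENTAL/MINIMAL/QR) come from `select_strategy`, which is defined elsewhere in the adaptive module and is not shown in this diff. As a rough illustration of the decision it makes from estimated bundle sizes and measured link quality, here is a simplified, standalone sketch; the threshold values and the `LinkQuality` fields are assumptions, not the module's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

QR_MAX_BYTES = 2953  # QR version 40, byte mode, error-correction level L


class SyncStrategy(Enum):
    FULL = "full"
    INCREMENTAL = "incremental"
    MINIMAL = "minimal"
    QR = "qr"


@dataclass
class LinkQuality:
    rtt: float         # round-trip time in seconds (assumed field)
    throughput: float  # bytes per second (assumed field)


def select_strategy(
    full_size: int,
    incr_size: Optional[int],
    quality: LinkQuality,
) -> Tuple[SyncStrategy, str]:
    """Pick a sync strategy from estimated sizes and link quality (sketch)."""
    # Very slow links (e.g. LoRa): avoid bulk transfer entirely.
    if quality.throughput < 100:
        if incr_size is not None and incr_size <= QR_MAX_BYTES:
            return SyncStrategy.QR, "link too slow; bundle fits in a QR code"
        return SyncStrategy.MINIMAL, "link too slow for bundle transfer"
    # Prefer incremental when we know the peer's refs and it saves bytes.
    if incr_size is not None and incr_size < full_size:
        return SyncStrategy.INCREMENTAL, "incremental smaller than full"
    return SyncStrategy.FULL, "no basis refs, or full bundle is cheaper"
```

The real selector may weight RTT as well; the sketch only shows the shape of the (strategy, reason) contract that `adaptive_sync` consumes.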
@@ -0,0 +1,582 @@
"""TCP-to-Reticulum bridge for Radicle.

Bridges radicle-node's TCP connections over the Reticulum mesh network,
allowing Radicle to be used normally while syncing over LoRa, packet radio, etc.

Architecture:

    ┌─────────────┐     TCP      ┌─────────────┐   Reticulum   ┌─────────────┐
    │ radicle-node│ ◄──────────► │   Bridge    │ ◄───────────► │Remote Bridge│
    └─────────────┘  localhost   └─────────────┘   LoRa/mesh   └─────────────┘

The bridge:
1. Listens on localhost TCP (default: 8777; radicle-node itself uses 8776)
2. Accepts connections from the local radicle-node
3. Tunnels traffic over Reticulum to remote bridges
4. Remote bridges forward to their local radicle-node
"""

import socket
import select
import struct
import subprocess
import threading
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Set, Tuple

import RNS

from radicle_reticulum.identity import RadicleIdentity


# Default ports
RADICLE_DEFAULT_PORT = 8776
BRIDGE_DEFAULT_PORT = 8777  # Local listen port for bridge

# App name for RNS destinations
APP_NAME = "radicle"
ASPECT_BRIDGE = "bridge"

# App data identifier for bridge announces (used for filtering)
BRIDGE_APP_DATA_MAGIC = b"RADICLE_BRIDGE_V1"

# Buffer sizes
TCP_BUFFER_SIZE = 65536
RNS_BUFFER_SIZE = 32768  # Smaller for RNS to avoid fragmentation


@dataclass
class TunnelConnection:
    """Represents a tunneled TCP connection."""
    tunnel_id: int
    tcp_socket: Optional[socket.socket]
    rns_link: Optional[RNS.Link]
    remote_destination: Optional[bytes]
    created_at: float = field(default_factory=time.time)
    bytes_sent: int = 0
    bytes_received: int = 0
    active: bool = True

    def close(self):
        """Close the tunnel."""
        self.active = False
        if self.tcp_socket:
            try:
                self.tcp_socket.close()
            except Exception:
                pass
        if self.rns_link:
            try:
                self.rns_link.teardown()
            except Exception:
                pass
class RadicleBridge:
    """Bridges Radicle TCP connections over Reticulum.

    Two roles, both active at once:
    - Client side: listens for TCP from the local radicle-node and tunnels
      it to remote bridges.
    - Server side: accepts incoming tunnels from remote bridges and forwards
      them to the local radicle-node.
    """

    def __init__(
        self,
        identity: Optional[RadicleIdentity] = None,
        listen_port: int = BRIDGE_DEFAULT_PORT,
        radicle_host: str = "127.0.0.1",
        radicle_port: int = RADICLE_DEFAULT_PORT,
        config_path: Optional[str] = None,
        auto_connect: bool = True,
        auto_seed: bool = True,
    ):
        """Initialize the bridge.

        Args:
            identity: RNS identity for this bridge
            listen_port: TCP port to listen on for local radicle-node
            radicle_host: Host where radicle-node listens (for incoming tunnels)
            radicle_port: Port where radicle-node listens
            config_path: Path to Reticulum config
            auto_connect: Automatically connect to discovered bridges
            auto_seed: Automatically register discovered NIDs with radicle-node
        """
        self.listen_port = listen_port
        self.radicle_host = radicle_host
        self.radicle_port = radicle_port
        self.auto_connect = auto_connect
        self.auto_seed = auto_seed

        # Initialize Reticulum
        self.reticulum = RNS.Reticulum(config_path)

        # Identity
        if identity is None:
            identity = RadicleIdentity.generate()
        self.identity = identity

        # Create RNS destination for incoming tunnels
        self.destination = RNS.Destination(
            self.identity.rns_identity,
            RNS.Destination.IN,
            RNS.Destination.SINGLE,
            APP_NAME,
            ASPECT_BRIDGE,
        )
        self.destination.set_link_established_callback(self._on_incoming_link)

        # Known remote bridges: hash -> last_seen
        self._remote_bridges: Dict[bytes, float] = {}
        self._remote_bridges_lock = threading.Lock()

        # Active tunnels
        self._tunnels: Dict[int, TunnelConnection] = {}
        self._tunnel_counter = 0
        self._tunnels_lock = threading.Lock()

        # TCP server
        self._tcp_server: Optional[socket.socket] = None
        self._running = False

        # Callbacks
        self._on_tunnel_opened: Optional[Callable[[TunnelConnection], None]] = None
        self._on_tunnel_closed: Optional[Callable[[TunnelConnection], None]] = None
        self._on_bridge_discovered: Optional[Callable[[bytes, Optional[str]], None]] = None

        # Local radicle node NID (for announcing to remote bridges)
        self._local_radicle_nid: Optional[str] = None

        # Remote bridge NIDs: bridge_hash -> radicle_nid
        self._bridge_nids: Dict[bytes, str] = {}

    def start(self):
        """Start the bridge."""
        self._running = True

        # Register announce handler to discover other bridges
        RNS.Transport.register_announce_handler(self._handle_announce)
        RNS.log("Registered announce handler for bridge discovery", RNS.LOG_INFO)

        # Start TCP server
        self._tcp_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._tcp_server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._tcp_server.bind(("127.0.0.1", self.listen_port))
        self._tcp_server.listen(5)
        self._tcp_server.setblocking(False)

        # Start accept thread
        self._accept_thread = threading.Thread(target=self._accept_loop, daemon=True)
        self._accept_thread.start()

        # Announce presence
        self.announce()

        RNS.log("Radicle bridge started", RNS.LOG_INFO)
        RNS.log(f"  TCP listen: 127.0.0.1:{self.listen_port}", RNS.LOG_INFO)
        RNS.log(f"  RNS hash: {self.destination.hexhash}", RNS.LOG_INFO)
        RNS.log(f"  Radicle target: {self.radicle_host}:{self.radicle_port}", RNS.LOG_INFO)

    def stop(self):
        """Stop the bridge."""
        self._running = False

        # Close all tunnels
        with self._tunnels_lock:
            for tunnel in list(self._tunnels.values()):
                tunnel.close()
            self._tunnels.clear()

        # Close TCP server
        if self._tcp_server:
            self._tcp_server.close()

        RNS.log("Radicle bridge stopped", RNS.LOG_INFO)

    def set_local_radicle_nid(self, nid: str):
        """Set the local radicle node's NID for announcement.

        This lets remote bridges know which radicle NID is reachable
        through this bridge.
        """
        self._local_radicle_nid = nid
        RNS.log(f"Local radicle NID set: {nid[:32]}...", RNS.LOG_INFO)

    def get_remote_bridge_nid(self, bridge_hash: bytes) -> Optional[str]:
        """Get the radicle NID served by a remote bridge."""
        return self._bridge_nids.get(bridge_hash)

    def announce(self):
        """Announce this bridge on the network."""
        # Build app_data: magic + optional length-prefixed NID
        app_data = BRIDGE_APP_DATA_MAGIC
        if self._local_radicle_nid:
            nid_bytes = self._local_radicle_nid.encode("utf-8")
            app_data += struct.pack("!H", len(nid_bytes)) + nid_bytes

        self.destination.announce(app_data=app_data)
        RNS.log(f"Announced bridge: {self.destination.hexhash} (app_data={len(app_data)} bytes)", RNS.LOG_INFO)

    def connect_to_bridge(self, destination_hash: bytes, timeout: float = 30.0) -> bool:
        """Resolve a path to a remote bridge and register it.

        No RNS link is created here; tunnels are established on demand when
        the local radicle-node connects to our TCP port.
        """
        # Check if we have a path
        if not RNS.Transport.has_path(destination_hash):
            RNS.Transport.request_path(destination_hash)
            deadline = time.time() + timeout
            while not RNS.Transport.has_path(destination_hash):
                if time.time() > deadline:
                    RNS.log(f"Path timeout: {destination_hash.hex()}", RNS.LOG_WARNING)
                    return False
                time.sleep(0.1)

        # Remember this bridge
        with self._remote_bridges_lock:
            self._remote_bridges[destination_hash] = time.time()

        RNS.log(f"Registered remote bridge: {destination_hash.hex()}", RNS.LOG_INFO)
        return True

    def get_remote_bridges(self) -> List[bytes]:
        """Get list of known remote bridge hashes."""
        with self._remote_bridges_lock:
            return list(self._remote_bridges.keys())

    def _accept_loop(self):
        """Accept incoming TCP connections from local radicle-node."""
        while self._running:
            try:
                readable, _, _ = select.select([self._tcp_server], [], [], 1.0)
                if readable:
                    client_socket, addr = self._tcp_server.accept()
                    RNS.log(f"TCP connection from {addr}", RNS.LOG_DEBUG)

                    # Handle in new thread
                    thread = threading.Thread(
                        target=self._handle_local_connection,
                        args=(client_socket,),
                        daemon=True,
                    )
                    thread.start()
            except Exception as e:
                if self._running:
                    RNS.log(f"Accept error: {e}", RNS.LOG_ERROR)
    def _handle_local_connection(self, tcp_socket: socket.socket):
        """Handle a connection from local radicle-node.

        Creates a tunnel to a remote bridge and forwards traffic.
        """
        # Get a remote bridge to tunnel to
        remote_bridges = self.get_remote_bridges()
        if not remote_bridges:
            RNS.log("No remote bridges available", RNS.LOG_WARNING)
            tcp_socket.close()
            return

        # Use first available bridge (could add load balancing later)
        remote_hash = remote_bridges[0]

        # Create RNS link to remote bridge
        remote_identity = RNS.Identity.recall(remote_hash)
        if not remote_identity:
            RNS.log(f"Cannot recall identity for {remote_hash.hex()}", RNS.LOG_WARNING)
            tcp_socket.close()
            return

        remote_dest = RNS.Destination(
            remote_identity,
            RNS.Destination.OUT,
            RNS.Destination.SINGLE,
            APP_NAME,
            ASPECT_BRIDGE,
        )

        rns_link = RNS.Link(remote_dest)

        # Wait for link establishment
        deadline = time.time() + 30.0
        while rns_link.status != RNS.Link.ACTIVE:
            if time.time() > deadline or rns_link.status == RNS.Link.CLOSED:
                RNS.log("Link establishment timeout", RNS.LOG_WARNING)
                tcp_socket.close()
                return
            time.sleep(0.1)

        # Create tunnel
        with self._tunnels_lock:
            self._tunnel_counter += 1
            tunnel_id = self._tunnel_counter

        tunnel = TunnelConnection(
            tunnel_id=tunnel_id,
            tcp_socket=tcp_socket,
            rns_link=rns_link,
            remote_destination=remote_hash,
        )

        with self._tunnels_lock:
            self._tunnels[tunnel_id] = tunnel

        RNS.log(f"Tunnel {tunnel_id} opened to {remote_hash.hex()[:16]}", RNS.LOG_INFO)

        if self._on_tunnel_opened:
            self._on_tunnel_opened(tunnel)

        # Set up bidirectional forwarding
        rns_link.set_packet_callback(
            lambda data, pkt: self._on_rns_data(tunnel_id, data)
        )
        rns_link.set_link_closed_callback(
            lambda link: self._close_tunnel(tunnel_id)
        )

        # Forward TCP to RNS
        self._forward_tcp_to_rns(tunnel)

    def _forward_tcp_to_rns(self, tunnel: TunnelConnection):
        """Forward data from TCP socket to RNS link."""
        tcp_socket = tunnel.tcp_socket
        rns_link = tunnel.rns_link
        tcp_socket.setblocking(False)

        while tunnel.active and self._running:
            try:
                readable, _, errored = select.select([tcp_socket], [], [tcp_socket], 1.0)

                if errored:
                    break

                if readable:
                    data = tcp_socket.recv(RNS_BUFFER_SIZE)
                    if not data:
                        break  # Connection closed

                    # Send over RNS
                    if rns_link.status == RNS.Link.ACTIVE:
                        packet = RNS.Packet(rns_link, data)
                        packet.send()
                        tunnel.bytes_sent += len(data)
                    else:
                        break

            except socket.error:
                break
            except Exception as e:
                RNS.log(f"Forward error: {e}", RNS.LOG_DEBUG)
                break

        self._close_tunnel(tunnel.tunnel_id)

    def _on_rns_data(self, tunnel_id: int, data: bytes):
        """Handle data received from RNS link."""
        with self._tunnels_lock:
            tunnel = self._tunnels.get(tunnel_id)

        if tunnel and tunnel.active and tunnel.tcp_socket:
            try:
                tunnel.tcp_socket.sendall(data)
                tunnel.bytes_received += len(data)
            except Exception as e:
                RNS.log(f"TCP send error: {e}", RNS.LOG_DEBUG)
                self._close_tunnel(tunnel_id)

    def _close_tunnel(self, tunnel_id: int):
        """Handle tunnel closure.

        Named _close_tunnel rather than _on_tunnel_closed so it is not
        shadowed by the _on_tunnel_closed callback attribute set in __init__
        (an instance attribute would take precedence over the method).
        """
        with self._tunnels_lock:
            tunnel = self._tunnels.pop(tunnel_id, None)

        if tunnel:
            tunnel.close()
            RNS.log(
                f"Tunnel {tunnel_id} closed (sent: {tunnel.bytes_sent}, recv: {tunnel.bytes_received})",
                RNS.LOG_INFO,
            )
            if self._on_tunnel_closed:
                self._on_tunnel_closed(tunnel)

    def _on_incoming_link(self, link: RNS.Link):
        """Handle incoming RNS link from remote bridge."""
        RNS.log("Incoming link from remote bridge", RNS.LOG_DEBUG)

        # Connect to local radicle-node
        try:
            tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            tcp_socket.connect((self.radicle_host, self.radicle_port))
        except Exception as e:
            RNS.log(f"Cannot connect to local radicle-node: {e}", RNS.LOG_ERROR)
            link.teardown()
            return

        # Create tunnel
        with self._tunnels_lock:
            self._tunnel_counter += 1
            tunnel_id = self._tunnel_counter

        tunnel = TunnelConnection(
            tunnel_id=tunnel_id,
            tcp_socket=tcp_socket,
            rns_link=link,
            remote_destination=link.destination.hash if link.destination else None,
        )

        with self._tunnels_lock:
            self._tunnels[tunnel_id] = tunnel

        RNS.log(f"Incoming tunnel {tunnel_id} opened", RNS.LOG_INFO)

        # Set up bidirectional forwarding
        link.set_packet_callback(
            lambda data, pkt: self._on_rns_data(tunnel_id, data)
        )
        link.set_link_closed_callback(
            lambda lnk: self._close_tunnel(tunnel_id)
        )

        # Forward TCP to RNS in background
        thread = threading.Thread(
            target=self._forward_tcp_to_rns,
            args=(tunnel,),
            daemon=True,
        )
        thread.start()
    def set_on_bridge_discovered(self, callback: Optional[Callable[[bytes, Optional[str]], None]]):
        """Set callback for when a new bridge is discovered.

        Callback receives (destination_hash, radicle_nid) where radicle_nid
        may be None if the remote bridge hasn't set one.
        """
        self._on_bridge_discovered = callback

    def _handle_announce(
        self,
        destination_hash: bytes,
        announced_identity: RNS.Identity,
        app_data: Optional[bytes],
    ):
        """Handle an announce - only process if it's a bridge announce."""
        RNS.log(f"Received announce: {destination_hash.hex()[:16]}... app_data={app_data[:20] if app_data else None}", RNS.LOG_VERBOSE)

        # Ignore our own announcements
        if destination_hash == self.destination.hash:
            RNS.log("Ignoring own announcement", RNS.LOG_VERBOSE)
            return

        # Filter: only accept announces with bridge magic in app_data
        if app_data is None or not app_data.startswith(BRIDGE_APP_DATA_MAGIC):
            # Not a bridge announce, ignore
            RNS.log("Ignoring non-bridge announce (no magic)", RNS.LOG_VERBOSE)
            return

        # Extract radicle NID if present
        radicle_nid = None
        if len(app_data) > len(BRIDGE_APP_DATA_MAGIC):
            try:
                offset = len(BRIDGE_APP_DATA_MAGIC)
                nid_len = struct.unpack("!H", app_data[offset:offset + 2])[0]
                offset += 2
                nid_str = app_data[offset:offset + nid_len].decode("utf-8")
                radicle_nid = nid_str if nid_str else None
            except Exception as e:
                RNS.log(f"Failed to parse bridge app_data: {e}", RNS.LOG_DEBUG)

        with self._remote_bridges_lock:
            is_new = destination_hash not in self._remote_bridges
            self._remote_bridges[destination_hash] = time.time()

        # Store NID mapping
        if radicle_nid:
            self._bridge_nids[destination_hash] = radicle_nid

        if is_new:
            nid_info = f" (NID: {radicle_nid[:32]}...)" if radicle_nid else ""
            RNS.log(f"Discovered bridge: {destination_hash.hex()}{nid_info}", RNS.LOG_INFO)

            # Notify callback
            if self._on_bridge_discovered:
                self._on_bridge_discovered(destination_hash, radicle_nid)

            # Auto-connect if enabled
            if self.auto_connect:
                threading.Thread(
                    target=self._auto_connect_to_bridge,
                    args=(destination_hash,),
                    daemon=True,
                ).start()

            # Auto-register seed if enabled and NID is known
            if self.auto_seed and radicle_nid:
                threading.Thread(
                    target=self._auto_register_seed,
                    args=(radicle_nid,),
                    daemon=True,
                ).start()

    def _auto_connect_to_bridge(self, destination_hash: bytes):
        """Auto-connect to a discovered bridge in background."""
        RNS.log(f"Auto-connecting to bridge: {destination_hash.hex()}", RNS.LOG_INFO)
        if self.connect_to_bridge(destination_hash, timeout=30.0):
            RNS.log(f"Auto-connected to bridge: {destination_hash.hex()}", RNS.LOG_INFO)
        else:
            RNS.log(f"Auto-connect failed for bridge: {destination_hash.hex()}", RNS.LOG_WARNING)

    def register_seed(self, radicle_nid: str) -> bool:
        """Register a remote radicle NID as a seed through this bridge.

        Calls 'rad node connect <NID>@127.0.0.1:<listen_port>' to tell
        radicle-node that the given NID is reachable through our bridge.

        Returns True if successful.
        """
        addr = f"{radicle_nid}@127.0.0.1:{self.listen_port}"
        RNS.log(f"Registering seed: {addr}", RNS.LOG_INFO)

        try:
            result = subprocess.run(
                ["rad", "node", "connect", addr],
                capture_output=True,
                text=True,
                timeout=30,
            )
            if result.returncode == 0:
                RNS.log(f"Seed registered: {radicle_nid[:16]}...", RNS.LOG_INFO)
                return True
            else:
                RNS.log(f"Failed to register seed: {result.stderr}", RNS.LOG_WARNING)
                return False
        except FileNotFoundError:
            RNS.log("'rad' command not found - cannot auto-register seed", RNS.LOG_WARNING)
            return False
        except subprocess.TimeoutExpired:
            RNS.log("Timeout registering seed", RNS.LOG_WARNING)
            return False
        except Exception as e:
            RNS.log(f"Error registering seed: {e}", RNS.LOG_WARNING)
            return False

    def _auto_register_seed(self, radicle_nid: str):
        """Auto-register a seed in background after discovery."""
        # Small delay to allow bridge connection to establish first
        time.sleep(2.0)
        self.register_seed(radicle_nid)

    def get_stats(self) -> dict:
        """Get bridge statistics."""
        with self._tunnels_lock:
            active_tunnels = len(self._tunnels)
            total_sent = sum(t.bytes_sent for t in self._tunnels.values())
            total_recv = sum(t.bytes_received for t in self._tunnels.values())

        with self._remote_bridges_lock:
            known_bridges = len(self._remote_bridges)

        return {
            "active_tunnels": active_tunnels,
            "known_bridges": known_bridges,
            "bytes_sent": total_sent,
            "bytes_received": total_recv,
            "rns_hash": self.destination.hexhash,
        }
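For reference, the announce payload produced by `announce()` and consumed by `_handle_announce` is a small self-describing frame: the `RADICLE_BRIDGE_V1` magic, optionally followed by a big-endian u16 length and a UTF-8 radicle NID. A standalone sketch of that framing (the helper names `encode_bridge_app_data` and `parse_bridge_app_data` are illustrative, not part of the module):

```python
import struct
from typing import Optional

BRIDGE_APP_DATA_MAGIC = b"RADICLE_BRIDGE_V1"


def encode_bridge_app_data(nid: Optional[str] = None) -> bytes:
    """Build announce app_data: magic, then an optional !H-length-prefixed NID."""
    data = BRIDGE_APP_DATA_MAGIC
    if nid:
        nid_bytes = nid.encode("utf-8")
        data += struct.pack("!H", len(nid_bytes)) + nid_bytes
    return data


def parse_bridge_app_data(data: bytes) -> Optional[str]:
    """Return the announced radicle NID, or None if the announce carries none."""
    if not data.startswith(BRIDGE_APP_DATA_MAGIC):
        raise ValueError("not a bridge announce")
    offset = len(BRIDGE_APP_DATA_MAGIC)
    if len(data) <= offset:
        return None  # magic only: bridge has no local NID set
    (nid_len,) = struct.unpack("!H", data[offset:offset + 2])
    offset += 2
    nid = data[offset:offset + nid_len].decode("utf-8")
    return nid or None
```

Filtering on the magic prefix is what lets `_handle_announce` ignore unrelated Reticulum announces cheaply, before any parsing.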
@@ -0,0 +1,864 @@
"""Command-line interface for Radicle-Reticulum adapter."""

import argparse
import json
import os
import signal
import subprocess
import sys
import time
from pathlib import Path
from typing import Optional

import RNS

from radicle_reticulum.adapter import RNSTransportAdapter, PeerInfo
from radicle_reticulum.identity import RadicleIdentity
from radicle_reticulum.git_bundle import (
    GitBundleGenerator,
    GitBundleApplicator,
    BundleType,
    estimate_bundle_size,
)
from radicle_reticulum.sync import (
    SyncManager,
    SyncMode,
    create_dead_drop_bundle,
    apply_dead_drop_bundle,
)
from radicle_reticulum.adaptive import (
    SyncStrategy,
    LinkQuality,
    select_strategy,
)
from radicle_reticulum.bridge import RadicleBridge

DEFAULT_IDENTITY_PATH = Path.home() / ".radicle-rns" / "identity"


def detect_radicle_nid() -> Optional[str]:
    """Try to detect the local radicle NID by running 'rad self'.

    Returns the NID string (e.g. 'z6Mk...') or None if rad is not available
    or the NID cannot be parsed.
    """
    try:
        result = subprocess.run(
            ["rad", "self"],
            capture_output=True,
            text=True,
            timeout=5,
        )
        if result.returncode != 0:
            return None
        for line in result.stdout.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "NID":
                return parts[1]
            # Older versions may print DID format
            if len(parts) >= 2 and parts[0] == "DID":
                did = parts[1]
                if did.startswith("did:key:"):
                    return did[len("did:key:"):]
        return None
    except Exception:
        # Covers FileNotFoundError, subprocess.TimeoutExpired, parse errors
        return None


def on_peer_discovered(peer: PeerInfo):
    """Callback when a new peer is discovered."""
    print(f"[+] Discovered peer: {peer.identity.did}")
    print(f"    RNS hash: {peer.destination_hash.hex()}")
def cmd_node(args):
    """Run a Radicle-RNS node."""
    print("Starting Radicle-RNS node...")

    # Report before load_or_generate, so "loaded" vs "generated" is accurate
    _print_identity_info(args.identity)
    identity = RadicleIdentity.load_or_generate(args.identity)

    # Create adapter
    adapter = RNSTransportAdapter(identity=identity)
    adapter.set_on_peer_discovered(on_peer_discovered)

    print(f"Node ID: {identity.did}")
    print(f"RNS Hash: {adapter.node_hash_hex}")
    print()

    # Start adapter
    adapter.start()

    print("Node running. Press Ctrl+C to stop.")
    print("Announcing every 60 seconds...")
    print()

    # Handle graceful shutdown
    running = True

    def signal_handler(sig, frame):
        nonlocal running
        print("\nShutting down...")
        running = False

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    # Main loop
    last_announce = 0
    try:
        while running:
            now = time.time()

            # Periodic announce
            if now - last_announce > 60:
                adapter.announce()
                last_announce = now

            time.sleep(0.5)
    finally:
        adapter.stop()
        print("Node stopped.")


def _print_identity_info(identity_path: Path):
    """Print the identity file location; call before the identity is created."""
    path = Path(identity_path)
    status = "loaded" if path.exists() else "generated"
    print(f"Identity {status}: {path}")


def cmd_identity(args):
    """Generate or display identity information."""
    if args.action == "generate":
        path = Path(args.identity)
        if path.exists() and not args.force:
            print(f"Identity already exists: {path}")
            print("Use --force to overwrite, or --identity <path> for a different path.")
            sys.exit(1)
        identity = RadicleIdentity.generate()
        identity.save(path)
        print(f"Generated new identity: {path}")
        print(f"  DID: {identity.did}")
        print(f"  RNS Hash: {identity.rns_identity_hash_hex}")
    elif args.action == "info":
        path = Path(args.identity)
        if path.exists():
            try:
                identity = RadicleIdentity.load(path)
                print(f"Identity: {path}")
                print(f"  DID: {identity.did}")
                print(f"  RNS Hash: {identity.rns_identity_hash_hex}")
                print(f"  Public key (hex): {identity.public_key_bytes.hex()}")
            except Exception as e:
                print(f"Error loading identity: {e}", file=sys.stderr)
                sys.exit(1)
        elif args.did:
            try:
                identity = RadicleIdentity.from_did(args.did)
                print(f"DID: {identity.did}")
                print(f"Public key (hex): {identity.public_key_bytes.hex()}")
            except Exception as e:
                print(f"Error parsing DID: {e}", file=sys.stderr)
                sys.exit(1)
        else:
            print(f"No identity file found at {path} and no --did provided.", file=sys.stderr)
            sys.exit(1)
def cmd_ping(args):
    """Ping a peer by RNS hash."""
    print(f"Connecting to {args.destination}...")

    identity = RadicleIdentity.load_or_generate(args.identity)
    adapter = RNSTransportAdapter(identity=identity)
    adapter.start()

    try:
        dest_hash = bytes.fromhex(args.destination)
    except ValueError:
        print("Error: Invalid destination hash (must be hex)", file=sys.stderr)
        sys.exit(1)

    link = adapter.connect(dest_hash, timeout=args.timeout)
    if link is None:
        print("Failed to connect")
        sys.exit(1)

    print(f"Connected! RTT: {link.rtt:.3f}s" if link.rtt else "Connected!")

    # Send ping
    from radicle_reticulum.messages import Ping, decode_message, MessageType

    ping = Ping()
    ping_time = time.time()
    link.send(ping.to_message())
    print("Ping sent, waiting for pong...")

    response = link.recv(timeout=10.0)
    if response:
        header, msg = decode_message(response)
        if header.msg_type == MessageType.PONG:
            rtt = (time.time() - ping_time) * 1000
            print(f"Pong received! RTT: {rtt:.1f}ms")
        else:
            print(f"Unexpected response: {header.msg_type}")
    else:
        print("No response (timeout)")

    link.close()
    adapter.stop()


def cmd_peers(args):
    """List discovered peers."""
    identity = RadicleIdentity.load_or_generate(args.identity)
    adapter = RNSTransportAdapter(identity=identity)
    adapter.set_on_peer_discovered(on_peer_discovered)
    adapter.start()

    print(f"Listening for peers for {args.timeout} seconds...")
    print()

    time.sleep(args.timeout)

    peers = adapter.get_peers()
    if peers:
        print(f"\nDiscovered {len(peers)} peer(s):")
        for peer in peers:
            print(f"  {peer.identity.did}")
            print(f"    Hash: {peer.destination_hash.hex()}")
            print(f"    Age: {peer.age:.1f}s")
    else:
        print("\nNo peers discovered.")

    adapter.stop()
def cmd_bundle_create(args):
    """Create a bundle from a repository."""
    repo_path = Path(args.repo).resolve()
    if not repo_path.exists():
        print(f"Error: Repository not found: {repo_path}", file=sys.stderr)
        sys.exit(1)

    # Generate identity for signing
    identity = RadicleIdentity.generate()
    source_node = identity.did

    # Repository ID
    repo_id = args.repo_id or f"rad:local:{repo_path.name}"

    try:
        generator = GitBundleGenerator(repo_path)

        # Get current refs
        refs = generator.get_refs()
        print(f"Repository: {repo_path}")
        print(f"Refs found: {len(refs)}")
        for ref, sha in refs.items():
            print(f"  {ref}: {sha[:12]}")
        print()

        # Determine output path
        if args.output:
            output_path = Path(args.output)
        else:
            timestamp = int(time.time())
            output_path = Path(f"{repo_path.name}-{timestamp}.bundle")

        # Create bundle
        if args.incremental and args.basis:
            # Load basis refs from file
            basis_refs = json.loads(Path(args.basis).read_text())
            bundle = generator.create_incremental_bundle(
                repository_id=repo_id,
                source_node=source_node,
                basis_refs=basis_refs,
                output_path=output_path,
            )
            if bundle is None:
                print("No changes to bundle (repository is up to date)")
                sys.exit(0)
        else:
            bundle = generator.create_full_bundle(
                repository_id=repo_id,
                source_node=source_node,
                output_path=output_path,
            )

        # Also save transport format
        transport_path = output_path.with_suffix(".radicle-bundle")
        transport_path.write_bytes(bundle.encode())

        print(f"Bundle created: {output_path}")
        print(f"Transport file: {transport_path}")
        print(f"Type: {bundle.metadata.bundle_type.value}")
        print(f"Size: {bundle.metadata.size_bytes:,} bytes")
        print(f"Refs: {len(bundle.metadata.refs_included)}")

        # Save current refs for future incremental bundles
        refs_path = output_path.with_suffix(".refs.json")
        refs_path.write_text(json.dumps(refs, indent=2))
        print(f"Refs saved: {refs_path}")

    except Exception as e:
        print(f"Error creating bundle: {e}", file=sys.stderr)
        sys.exit(1)
def cmd_bundle_apply(args):
    """Apply a bundle to a repository."""
    bundle_path = Path(args.bundle).resolve()
    repo_path = Path(args.repo).resolve()

    if not bundle_path.exists():
        print(f"Error: Bundle not found: {bundle_path}", file=sys.stderr)
        sys.exit(1)

    if not repo_path.exists():
        print(f"Error: Repository not found: {repo_path}", file=sys.stderr)
        sys.exit(1)

    try:
        # Determine bundle format
        if bundle_path.suffix == ".radicle-bundle":
            # Full transport format with metadata
            applied = apply_dead_drop_bundle(bundle_path, repo_path)
        else:
            # Raw git bundle
            applicator = GitBundleApplicator(repo_path)

            # Create a minimal GitBundle wrapper
            from radicle_reticulum.git_bundle import GitBundle, BundleMetadata
            import hashlib

            bundle_data = bundle_path.read_bytes()
            metadata = BundleMetadata(
                bundle_type=BundleType.FULL,
                repository_id="unknown",
                source_node="unknown",
                timestamp=int(time.time() * 1000),
                refs_included=[],
                prerequisites=[],
                size_bytes=len(bundle_data),
                checksum=hashlib.sha256(bundle_data).digest(),
            )
            bundle = GitBundle(metadata=metadata, data=bundle_data)
            applied = applicator.apply_bundle(bundle)

        print("Bundle applied successfully!")
        print(f"Applied refs: {len(applied)}")
        for ref, sha in applied.items():
            print(f"  {ref}: {sha[:12]}")

    except Exception as e:
        print(f"Error applying bundle: {e}", file=sys.stderr)
        sys.exit(1)
def cmd_bundle_info(args):
    """Show information about a bundle."""
    bundle_path = Path(args.bundle).resolve()

    if not bundle_path.exists():
        print(f"Error: Bundle not found: {bundle_path}", file=sys.stderr)
        sys.exit(1)

    try:
        if bundle_path.suffix == ".radicle-bundle":
            # Full transport format
            from radicle_reticulum.git_bundle import GitBundle
            bundle = GitBundle.decode(bundle_path.read_bytes())

            print(f"Bundle: {bundle_path}")
            print("Format: Radicle transport bundle")
            print(f"Type: {bundle.metadata.bundle_type.value}")
            print(f"Repository: {bundle.metadata.repository_id}")
            print(f"Source: {bundle.metadata.source_node}")
            print(f"Timestamp: {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(bundle.metadata.timestamp / 1000))}")
            print(f"Size: {bundle.metadata.size_bytes:,} bytes")
            print(f"Checksum: {bundle.metadata.checksum.hex()}")
            print(f"Refs ({len(bundle.metadata.refs_included)}):")
            for ref in bundle.metadata.refs_included:
                print(f"  {ref}")
            if bundle.metadata.prerequisites:
                print(f"Prerequisites ({len(bundle.metadata.prerequisites)}):")
                for prereq in bundle.metadata.prerequisites:
                    print(f"  {prereq}")
        else:
            # Raw git bundle - use git to inspect
            import subprocess
            result = subprocess.run(
                ["git", "bundle", "list-heads", str(bundle_path)],
                capture_output=True,
                text=True,
            )
            print(f"Bundle: {bundle_path}")
            print("Format: Raw Git bundle")
            print(f"Size: {bundle_path.stat().st_size:,} bytes")
            print("Refs:")
            for line in result.stdout.strip().split("\n"):
                if line:
                    print(f"  {line}")

    except Exception as e:
        print(f"Error reading bundle: {e}", file=sys.stderr)
        sys.exit(1)


def cmd_bundle_qr_encode(args):
    """Encode a bundle as a QR code for visual / air-gapped transfer."""
    from radicle_reticulum.qr import encode_bundle_to_qr, BundleTooLargeForQR, QR_MAX_BYTES
    from radicle_reticulum.git_bundle import GitBundle

    bundle_path = Path(args.bundle).resolve()
    if not bundle_path.exists():
        print(f"Error: Bundle not found: {bundle_path}", file=sys.stderr)
        sys.exit(1)

    try:
        bundle = GitBundle.decode(bundle_path.read_bytes())
    except Exception as e:
        print(f"Error reading bundle: {e}", file=sys.stderr)
        sys.exit(1)

    output_path = Path(args.output) if args.output else None

    try:
        ascii_art = encode_bundle_to_qr(
            bundle,
            output_path=output_path,
            error_correction=args.error_correction,
        )
    except BundleTooLargeForQR as e:
        print(f"Error: {e}", file=sys.stderr)
        print("Tip: Use 'bundle create --incremental' to reduce bundle size.", file=sys.stderr)
        sys.exit(1)
    except ImportError as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

    print(ascii_art)
    if output_path:
        print(f"QR image saved: {output_path}")
    print(f"Bundle size: {len(bundle.encode())} / {QR_MAX_BYTES} bytes")
    print(f"Repository: {bundle.metadata.repository_id}")
    print(f"Refs: {len(bundle.metadata.refs_included)}")


def cmd_bundle_qr_decode(args):
    """Decode a bundle from a QR code image file."""
    from radicle_reticulum.qr import decode_bundle_from_qr_image

    image_path = Path(args.image).resolve()
    if not image_path.exists():
        print(f"Error: Image not found: {image_path}", file=sys.stderr)
        sys.exit(1)

    try:
        bundle = decode_bundle_from_qr_image(image_path)
    except (ImportError, ValueError) as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

    output_path = Path(args.output) if args.output else image_path.with_suffix(".radicle-bundle")
    output_path.write_bytes(bundle.encode())

    print("Bundle decoded successfully!")
    print(f"  Repository: {bundle.metadata.repository_id}")
    print(f"  Refs: {len(bundle.metadata.refs_included)}")
    print(f"  Size: {bundle.metadata.size_bytes:,} bytes")
    print(f"  Saved to: {output_path}")
    print()
    print(f"Apply with: radicle-rns bundle apply {output_path} <repo-path>")


def cmd_sync(args):
    """Sync a repository with a peer."""
    repo_path = Path(args.repo).resolve()

    if not repo_path.exists():
        print(f"Error: Repository not found: {repo_path}", file=sys.stderr)
        sys.exit(1)

    identity = RadicleIdentity.load_or_generate(args.identity)
    repo_id = args.repo_id or f"rad:local:{repo_path.name}"

    print("Starting sync manager...")
    print(f"Repository: {repo_path}")
    print(f"Node ID: {identity.did}")

    sync_manager = SyncManager(identity=identity)
    sync_manager.start()
    sync_manager.register_repository(repo_id, repo_path)

    if args.peer:
        # Connect to a specific peer
        try:
            dest_hash = bytes.fromhex(args.peer)
        except ValueError:
            print("Error: Invalid peer hash", file=sys.stderr)
            sys.exit(1)

        print(f"Creating bundle for peer {args.peer}...")

        # Load the peer's known refs if available
        peer_refs = None
        if args.peer_refs:
            peer_refs = json.loads(Path(args.peer_refs).read_text())

        mode = SyncMode.INCREMENTAL if peer_refs else SyncMode.FULL
        bundle = sync_manager.create_sync_bundle(repo_id, peer_refs=peer_refs, mode=mode)

        if bundle:
            print(f"Bundle created: {bundle.metadata.size_bytes:,} bytes")
            print(f"Type: {bundle.metadata.bundle_type.value}")

            if args.send:
                print("Sending to peer...")
                if sync_manager.send_bundle(bundle, dest_hash):
                    print("Bundle sent successfully!")
                else:
                    print("Failed to send bundle")
        else:
            print("No changes to sync")
    else:
        # Listen for incoming syncs
        print("Listening for sync requests... Press Ctrl+C to stop.")

        def on_bundle(bundle):
            print(f"Received bundle: {bundle.metadata.repository_id}")
            print(f"  From: {bundle.metadata.source_node}")
            print(f"  Size: {bundle.metadata.size_bytes:,} bytes")

        sync_manager.set_on_bundle_received(on_bundle)

        running = True

        def signal_handler(sig, frame):
            nonlocal running
            running = False

        signal.signal(signal.SIGINT, signal_handler)

        while running:
            time.sleep(0.5)

    sync_manager.stop()


def cmd_bridge(args):
    """Run the Radicle-Reticulum bridge."""
    print("Starting Radicle-Reticulum bridge...")

    identity = RadicleIdentity.load_or_generate(args.identity)
    _print_identity_info(args.identity)

    # Determine auto modes
    auto_connect = not args.no_auto_connect and not args.connect
    auto_seed = not args.no_auto_seed

    bridge = RadicleBridge(
        identity=identity,
        listen_port=args.listen_port,
        radicle_host=args.radicle_host,
        radicle_port=args.radicle_port,
        auto_connect=auto_connect,
        auto_seed=auto_seed,
    )

    # Resolve local radicle NID: explicit flag > auto-detect from 'rad self'
    nid = args.nid
    if not nid:
        nid = detect_radicle_nid()
        if nid:
            print(f"Auto-detected radicle NID: {nid}")
        else:
            print("Could not auto-detect NID (is rad installed and initialized?)")
            print("Run with --nid <YOUR_NID> to set it manually.")
    if nid:
        bridge.set_local_radicle_nid(nid)

    # Set up discovery callback
    def on_bridge_discovered(dest_hash: bytes, radicle_nid: str = None):
        nid_info = ""
        if radicle_nid:
            nid_info = f"\n  Radicle NID: {radicle_nid}"
        print(f"[+] Discovered bridge: {dest_hash.hex()}{nid_info}")
        if auto_connect:
            print("  Auto-connecting...")
        if auto_seed and radicle_nid:
            print("  Auto-registering seed with radicle-node...")

    bridge.set_on_bridge_discovered(on_bridge_discovered)

    bridge.start()

    print()
    print("Bridge running:")
    print(f"  RNS address: {bridge.destination.hexhash}")
    print(f"  TCP listen: 127.0.0.1:{args.listen_port}")
    print(f"  Radicle: {args.radicle_host}:{args.radicle_port}")
    print(f"  Auto-connect: {'enabled' if auto_connect else 'disabled'}")
    print(f"  Auto-seed: {'enabled' if auto_seed else 'disabled'}")
    if args.nid:
        print(f"  Local NID: {args.nid}")
    print()

    # Connect to a remote bridge if specified
    if args.connect:
        try:
            remote_hash = bytes.fromhex(args.connect)
            print(f"Connecting to remote bridge: {args.connect}")
            if bridge.connect_to_bridge(remote_hash):
                print("Connected!")
            else:
                print("Failed to connect to remote bridge")
        except ValueError:
            print(f"Invalid remote hash: {args.connect}", file=sys.stderr)
    elif auto_connect:
        print("Waiting for bridge announcements (auto-discovery enabled)...")

    if nid and auto_seed:
        print()
        print("When remote bridges are discovered, their NIDs will be")
        print("automatically registered with radicle-node. You can then")
        print("use radicle normally (clone, push, pull, etc.)")

    print()
    print("Press Ctrl+C to stop.")
    print()

    # Handle shutdown
    running = True

    def signal_handler(sig, frame):
        nonlocal running
        print("\nShutting down...")
        running = False

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    # Status loop
    last_announce = 0
    last_stats = None
    try:
        while running:
            now = time.time()

            # Periodic announce
            if now - last_announce > 120:
                bridge.announce()
                last_announce = now

            # Show status changes
            stats = bridge.get_stats()
            if stats != last_stats:
                bridges = bridge.get_remote_bridges()
                print(f"[Status] Tunnels: {stats['active_tunnels']}, "
                      f"Remote bridges: {stats['known_bridges']}, "
                      f"TX: {stats['bytes_sent']}, RX: {stats['bytes_received']}")
                last_stats = stats.copy()

            time.sleep(1.0)
    finally:
        bridge.stop()
        print("Bridge stopped.")


def main():
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Radicle-Reticulum transport adapter"
    )
    parser.add_argument(
        "-v", "--verbose",
        action="store_true",
        help="Enable verbose logging"
    )

    subparsers = parser.add_subparsers(dest="command", help="Commands")

    def add_identity_arg(p):
        p.add_argument(
            "--identity",
            type=Path,
            default=DEFAULT_IDENTITY_PATH,
            metavar="PATH",
            help=f"Identity file (default: {DEFAULT_IDENTITY_PATH})",
        )

    # node command
    node_parser = subparsers.add_parser("node", help="Run a Radicle-RNS node")
    add_identity_arg(node_parser)

    # identity command
    id_parser = subparsers.add_parser("identity", help="Identity operations")
    id_parser.add_argument(
        "action",
        choices=["generate", "info"],
        help="Action to perform"
    )
    add_identity_arg(id_parser)
    id_parser.add_argument("--did", help="DID string for info action (if no identity file)")
    id_parser.add_argument(
        "--force",
        action="store_true",
        help="Overwrite existing identity file (for generate)"
    )

    # ping command
    ping_parser = subparsers.add_parser("ping", help="Ping a peer")
    ping_parser.add_argument("destination", help="RNS destination hash (hex)")
    ping_parser.add_argument(
        "-t", "--timeout",
        type=float,
        default=30.0,
        help="Connection timeout (seconds)"
    )
    add_identity_arg(ping_parser)

    # peers command
    peers_parser = subparsers.add_parser("peers", help="Discover peers")
    peers_parser.add_argument(
        "-t", "--timeout",
        type=float,
        default=10.0,
        help="Discovery timeout (seconds)"
    )
    add_identity_arg(peers_parser)

    # bundle command group
    bundle_parser = subparsers.add_parser("bundle", help="Git bundle operations")
    bundle_subparsers = bundle_parser.add_subparsers(dest="bundle_command")

    # bundle create
    bundle_create_parser = bundle_subparsers.add_parser("create", help="Create a bundle")
    bundle_create_parser.add_argument("repo", help="Path to Git repository")
    bundle_create_parser.add_argument("-o", "--output", help="Output bundle path")
    bundle_create_parser.add_argument("--repo-id", help="Radicle repository ID")
    bundle_create_parser.add_argument("--incremental", action="store_true",
                                      help="Create incremental bundle")
    bundle_create_parser.add_argument("--basis", help="Basis refs JSON file for incremental")

    # bundle apply
    bundle_apply_parser = bundle_subparsers.add_parser("apply", help="Apply a bundle")
    bundle_apply_parser.add_argument("bundle", help="Path to bundle file")
    bundle_apply_parser.add_argument("repo", help="Path to Git repository")

    # bundle info
    bundle_info_parser = bundle_subparsers.add_parser("info", help="Show bundle info")
    bundle_info_parser.add_argument("bundle", help="Path to bundle file")

    # bundle qr-encode
    bundle_qr_enc_parser = bundle_subparsers.add_parser(
        "qr-encode", help="Encode bundle as QR code (max 2953 bytes)"
    )
    bundle_qr_enc_parser.add_argument("bundle", help="Path to .radicle-bundle file")
    bundle_qr_enc_parser.add_argument("-o", "--output", help="Output PNG path (requires Pillow)")
    bundle_qr_enc_parser.add_argument(
        "--ec", dest="error_correction", default="L",
        choices=["L", "M", "Q", "H"],
        help="QR error correction level (default: L = max capacity)",
    )

    # bundle qr-decode
    bundle_qr_dec_parser = bundle_subparsers.add_parser(
        "qr-decode", help="Decode bundle from QR code image (requires pyzbar + Pillow)"
    )
    bundle_qr_dec_parser.add_argument("image", help="Path to QR code image")
    bundle_qr_dec_parser.add_argument("-o", "--output", help="Output .radicle-bundle path")

    # sync command
    sync_parser = subparsers.add_parser("sync", help="Sync repository with peers")
    sync_parser.add_argument("repo", help="Path to Git repository")
    sync_parser.add_argument("--repo-id", help="Radicle repository ID")
    sync_parser.add_argument("--peer", help="Peer RNS hash to sync with")
    sync_parser.add_argument("--peer-refs", help="JSON file with peer's known refs")
    sync_parser.add_argument("--send", action="store_true", help="Send bundle to peer")
    add_identity_arg(sync_parser)

    # bridge command
    bridge_parser = subparsers.add_parser("bridge", help="Run Radicle-Reticulum bridge")
    bridge_parser.add_argument(
        "-l", "--listen-port",
        type=int,
        default=8777,
        help="TCP port to listen on (default: 8777)"
    )
    bridge_parser.add_argument(
        "--radicle-host",
        default="127.0.0.1",
        help="Radicle node host (default: 127.0.0.1)"
    )
    bridge_parser.add_argument(
        "--radicle-port",
        type=int,
        default=8776,
        help="Radicle node port (default: 8776)"
    )
    bridge_parser.add_argument(
        "-c", "--connect",
        help="Connect to remote bridge (RNS hash)"
    )
    bridge_parser.add_argument(
        "--no-auto-connect",
        action="store_true",
        help="Disable auto-discovery and auto-connect to bridges"
    )
    bridge_parser.add_argument(
        "--no-auto-seed",
        action="store_true",
        help="Disable auto-registering discovered NIDs with radicle-node"
    )
    bridge_parser.add_argument(
        "--nid",
        help="Local radicle node NID to announce (from 'rad self')"
    )
    add_identity_arg(bridge_parser)

    args = parser.parse_args()

    # Configure logging
    if args.verbose:
        RNS.loglevel = RNS.LOG_VERBOSE
    else:
        RNS.loglevel = RNS.LOG_INFO

    # Dispatch command
    if args.command == "node":
        cmd_node(args)
    elif args.command == "identity":
        cmd_identity(args)
    elif args.command == "ping":
        cmd_ping(args)
    elif args.command == "peers":
        cmd_peers(args)
    elif args.command == "bundle":
        if args.bundle_command == "create":
            cmd_bundle_create(args)
        elif args.bundle_command == "apply":
            cmd_bundle_apply(args)
        elif args.bundle_command == "info":
            cmd_bundle_info(args)
        elif args.bundle_command == "qr-encode":
            cmd_bundle_qr_encode(args)
        elif args.bundle_command == "qr-decode":
            cmd_bundle_qr_decode(args)
        else:
            bundle_parser.print_help()
            sys.exit(1)
    elif args.command == "sync":
        cmd_sync(args)
    elif args.command == "bridge":
        cmd_bridge(args)
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == "__main__":
    main()
@@ -0,0 +1,467 @@
"""Git bundle generation and application for Radicle repos.
|
||||
|
||||
Supports both full repository bundles and incremental bundles
|
||||
containing only new commits since a known state.
|
||||
|
||||
Radicle stores data under these ref namespaces:
|
||||
- refs/heads/* - Git branches
|
||||
- refs/tags/* - Git tags
|
||||
- refs/rad/id - Repository identity
|
||||
- refs/rad/sigrefs - Signed refs
|
||||
- refs/rad/cob/* - Collaborative Objects (issues, patches, etc.)
|
||||
- refs/rad/cob/xyz.issue/*
|
||||
- refs/rad/cob/xyz.patch/*
|
||||
"""
|
||||
|
||||
import os
|
||||
import subprocess
|
||||
import tempfile
|
||||
import hashlib
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Optional, Set, Tuple
|
||||
import struct
|
||||
import time
|
||||
|
||||
|
||||
class BundleType(Enum):
    """Type of Git bundle."""
    FULL = "full"                # Complete repository
    INCREMENTAL = "incremental"  # Only new commits


# Radicle ref patterns to include in sync
RADICLE_REF_PATTERNS = [
    "refs/heads/*",
    "refs/tags/*",
    "refs/rad/id",
    "refs/rad/sigrefs",
    "refs/rad/cob/*",
]


@dataclass
class BundleMetadata:
    """Metadata about a Git bundle for transport."""
    bundle_type: BundleType
    repository_id: str        # Radicle repo ID (rad:z...)
    source_node: str          # DID of source node
    timestamp: int            # Unix timestamp, milliseconds
    refs_included: List[str]  # Refs contained in the bundle
    prerequisites: List[str]  # Commits required at the destination (incremental)
    size_bytes: int
    checksum: bytes           # SHA-256 of bundle data

    def encode(self) -> bytes:
        """Encode metadata to bytes."""
        repo_bytes = self.repository_id.encode("utf-8")
        node_bytes = self.source_node.encode("utf-8")
        # Length-prefix each string with its UTF-8 byte length
        # (not its character count, which differs for non-ASCII refs)
        ref_bytes = [r.encode("utf-8") for r in self.refs_included]
        refs_data = b"".join(
            struct.pack(f"!H{len(rb)}s", len(rb), rb) for rb in ref_bytes
        )
        prereq_bytes = [p.encode("utf-8") for p in self.prerequisites]
        prereq_data = b"".join(
            struct.pack(f"!H{len(pb)}s", len(pb), pb) for pb in prereq_bytes
        )

        return struct.pack(
            f"!BH{len(repo_bytes)}sH{len(node_bytes)}sQIH{len(refs_data)}sH{len(prereq_data)}s32s",
            1 if self.bundle_type == BundleType.FULL else 2,
            len(repo_bytes), repo_bytes,
            len(node_bytes), node_bytes,
            self.timestamp,
            self.size_bytes,
            len(self.refs_included), refs_data,
            len(self.prerequisites), prereq_data,
            self.checksum,
        )

    @classmethod
    def decode(cls, data: bytes) -> Tuple["BundleMetadata", int]:
        """Decode metadata from bytes. Returns (metadata, bytes_consumed)."""
        offset = 0

        # Bundle type
        bundle_type_raw = struct.unpack("!B", data[offset:offset+1])[0]
        bundle_type = BundleType.FULL if bundle_type_raw == 1 else BundleType.INCREMENTAL
        offset += 1

        # Repository ID
        repo_len = struct.unpack("!H", data[offset:offset+2])[0]
        offset += 2
        repository_id = data[offset:offset+repo_len].decode("utf-8")
        offset += repo_len

        # Source node
        node_len = struct.unpack("!H", data[offset:offset+2])[0]
        offset += 2
        source_node = data[offset:offset+node_len].decode("utf-8")
        offset += node_len

        # Timestamp and size
        timestamp, size_bytes = struct.unpack("!QI", data[offset:offset+12])
        offset += 12

        # Refs
        refs_count = struct.unpack("!H", data[offset:offset+2])[0]
        offset += 2
        refs_included = []
        for _ in range(refs_count):
            ref_len = struct.unpack("!H", data[offset:offset+2])[0]
            offset += 2
            refs_included.append(data[offset:offset+ref_len].decode("utf-8"))
            offset += ref_len

        # Prerequisites
        prereq_count = struct.unpack("!H", data[offset:offset+2])[0]
        offset += 2
        prerequisites = []
        for _ in range(prereq_count):
            prereq_len = struct.unpack("!H", data[offset:offset+2])[0]
            offset += 2
            prerequisites.append(data[offset:offset+prereq_len].decode("utf-8"))
            offset += prereq_len

        # Checksum
        checksum = data[offset:offset+32]
        offset += 32

        return cls(
            bundle_type=bundle_type,
            repository_id=repository_id,
            source_node=source_node,
            timestamp=timestamp,
            refs_included=refs_included,
            prerequisites=prerequisites,
            size_bytes=size_bytes,
            checksum=checksum,
        ), offset
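The ref and prerequisite lists above share one length-prefixed sub-encoding: a big-endian u16 count, then each string prefixed with its u16 UTF-8 byte length. A minimal, self-contained sketch of just that sub-encoding (`pack_strings` and `unpack_strings` are illustrative helpers, not part of the module):

```python
import struct

def pack_strings(items):
    # u16 count, then each string length-prefixed with its u16 byte length.
    out = struct.pack("!H", len(items))
    for s in items:
        raw = s.encode("utf-8")
        out += struct.pack(f"!H{len(raw)}s", len(raw), raw)
    return out

def unpack_strings(data):
    # Inverse: read the count, then each length-prefixed string.
    # Returns (strings, bytes_consumed) so a caller can keep parsing.
    (count,) = struct.unpack("!H", data[:2])
    offset, items = 2, []
    for _ in range(count):
        (n,) = struct.unpack("!H", data[offset:offset + 2])
        offset += 2
        items.append(data[offset:offset + n].decode("utf-8"))
        offset += n
    return items, offset

refs = ["refs/heads/main", "refs/rad/sigrefs"]
encoded = pack_strings(refs)
decoded, consumed = unpack_strings(encoded)
assert decoded == refs and consumed == len(encoded)
```

Returning `bytes_consumed` mirrors `BundleMetadata.decode`, which lets the transport layer parse the metadata header without knowing its length in advance.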


@dataclass
class GitBundle:
    """A Git bundle with metadata for transport."""
    metadata: BundleMetadata
    data: bytes

    def encode(self) -> bytes:
        """Encode bundle with metadata for transport."""
        meta_bytes = self.metadata.encode()
        return struct.pack("!I", len(meta_bytes)) + meta_bytes + self.data

    @classmethod
    def decode(cls, data: bytes) -> "GitBundle":
        """Decode bundle from transport format."""
        meta_len = struct.unpack("!I", data[:4])[0]
        metadata, _ = BundleMetadata.decode(data[4:4+meta_len])
        bundle_data = data[4+meta_len:]

        # Verify checksum
        actual_checksum = hashlib.sha256(bundle_data).digest()
        if actual_checksum != metadata.checksum:
            raise ValueError("Bundle checksum mismatch")

        return cls(metadata=metadata, data=bundle_data)

    def save(self, path: Path) -> None:
        """Save raw bundle data to a file."""
        path.write_bytes(self.data)

    @property
    def size(self) -> int:
        """Total encoded size including metadata."""
        return len(self.encode())

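The GitBundle transport format reduces to a simple frame: a 4-byte big-endian metadata length, the encoded metadata, then the raw bundle bytes, with the payload integrity-checked against the SHA-256 stored in the metadata. A standalone sketch of that framing (`frame`/`unframe` are illustrative names, not the module's API):

```python
import hashlib
import struct

def frame(meta: bytes, payload: bytes) -> bytes:
    # u32 big-endian metadata length, then metadata, then payload.
    return struct.pack("!I", len(meta)) + meta + payload

def unframe(blob: bytes, expected_checksum: bytes):
    # Split the frame and reject the payload if its SHA-256 does not match.
    (meta_len,) = struct.unpack("!I", blob[:4])
    meta, payload = blob[4:4 + meta_len], blob[4 + meta_len:]
    if hashlib.sha256(payload).digest() != expected_checksum:
        raise ValueError("Bundle checksum mismatch")
    return meta, payload

payload = b"fake git bundle bytes"
blob = frame(b"metadata-placeholder", payload)
meta, out = unframe(blob, hashlib.sha256(payload).digest())
assert meta == b"metadata-placeholder" and out == payload
```

Checksumming only the payload (not the metadata) matches the class above: corruption in transit is detected at decode time, before `git bundle verify` ever runs.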
class GitBundleGenerator:
    """Generates Git bundles from repositories."""

    def __init__(self, repo_path: Path):
        """Initialize with path to a Git repository (regular or bare)."""
        self.repo_path = Path(repo_path)
        if not (self.repo_path / ".git").exists() and not (self.repo_path / "HEAD").exists():
            raise ValueError(f"Not a Git repository: {repo_path}")

    def _run_git(self, *args: str, check: bool = True) -> subprocess.CompletedProcess:
        """Run a git command in the repository."""
        return subprocess.run(
            ["git", *args],
            cwd=self.repo_path,
            capture_output=True,
            text=True,
            check=check,
        )

    def _run_git_binary(self, *args: str) -> bytes:
        """Run a git command and return binary output."""
        result = subprocess.run(
            ["git", *args],
            cwd=self.repo_path,
            capture_output=True,
            check=True,
        )
        return result.stdout

    def get_refs(self, patterns: Optional[List[str]] = None) -> Dict[str, str]:
        """Get refs matching patterns. Returns {ref_name: commit_sha}."""
        if patterns is None:
            patterns = RADICLE_REF_PATTERNS

        refs = {}
        for pattern in patterns:
            result = self._run_git(
                "for-each-ref", "--format=%(refname) %(objectname)", pattern, check=False
            )
            if result.returncode == 0:
                for line in result.stdout.strip().split("\n"):
                    if line:
                        parts = line.split()
                        if len(parts) == 2:
                            refs[parts[0]] = parts[1]
        return refs

    def get_radicle_repo_id(self) -> Optional[str]:
        """Get the Radicle repository ID if this is a Radicle repo."""
        # Radicle stores the repo ID in .git/rad or in git config
        rad_dir = self.repo_path / ".git" / "rad"
        if rad_dir.exists():
            # Try to read it from git config
            try:
                result = self._run_git("config", "--get", "rad.id", check=False)
                if result.returncode == 0:
                    return result.stdout.strip()
            except Exception:
                pass
        return None

    def create_full_bundle(
        self,
        repository_id: str,
        source_node: str,
        output_path: Optional[Path] = None,
        ref_patterns: Optional[List[str]] = None,
    ) -> GitBundle:
        """Create a full bundle containing all refs.

        Args:
            repository_id: Radicle repo ID (rad:z...)
            source_node: DID of the source node
            output_path: Optional path to write bundle file
            ref_patterns: Ref patterns to include (default: Radicle patterns)
        """
        refs = self.get_refs(ref_patterns)
        if not refs:
            raise ValueError("No refs to bundle")

        # Create a bundle containing all matched refs
        with tempfile.NamedTemporaryFile(suffix=".bundle", delete=False) as f:
            bundle_path = f.name

        try:
            ref_args = list(refs.keys())
            self._run_git("bundle", "create", bundle_path, *ref_args)

            bundle_data = Path(bundle_path).read_bytes()
        finally:
            os.unlink(bundle_path)

        metadata = BundleMetadata(
            bundle_type=BundleType.FULL,
            repository_id=repository_id,
            source_node=source_node,
            timestamp=int(time.time() * 1000),
            refs_included=list(refs.keys()),
            prerequisites=[],
            size_bytes=len(bundle_data),
            checksum=hashlib.sha256(bundle_data).digest(),
        )

        bundle = GitBundle(metadata=metadata, data=bundle_data)

        if output_path:
            bundle.save(output_path)

        return bundle

    def create_incremental_bundle(
        self,
        repository_id: str,
        source_node: str,
        basis_refs: Dict[str, str],
        output_path: Optional[Path] = None,
        ref_patterns: Optional[List[str]] = None,
    ) -> Optional[GitBundle]:
        """Create an incremental bundle with only new commits.

        Args:
            repository_id: Radicle repo ID
            source_node: DID of the source node
            basis_refs: Known refs at destination {ref_name: commit_sha}
            output_path: Optional path to write bundle file
            ref_patterns: Ref patterns to include

        Returns:
            GitBundle if there are changes, None otherwise.
        """
        current_refs = self.get_refs(ref_patterns)
        if not current_refs:
            return None

        # Find refs that are new or have moved since the basis
        changed_refs = {}
        for ref, sha in current_refs.items():
            if ref not in basis_refs or basis_refs[ref] != sha:
                changed_refs[ref] = sha

        if not changed_refs:
            return None  # No changes

        # Build the exclusion list (commits the destination already has)
        exclusions = [f"^{sha}" for sha in basis_refs.values() if sha]

        with tempfile.NamedTemporaryFile(suffix=".bundle", delete=False) as f:
            bundle_path = f.name

        try:
            # Create a bundle with the changed refs, excluding known commits
            bundle_args = list(changed_refs.keys()) + exclusions
            result = self._run_git("bundle", "create", bundle_path, *bundle_args, check=False)

            if result.returncode != 0:
                # git refuses to create a bundle when everything is excluded
                if "empty bundle" in result.stderr.lower():
                    return None
                raise subprocess.CalledProcessError(
                    result.returncode, "git bundle create", stderr=result.stderr
                )

            bundle_data = Path(bundle_path).read_bytes()
        finally:
            if os.path.exists(bundle_path):
                os.unlink(bundle_path)

        metadata = BundleMetadata(
            bundle_type=BundleType.INCREMENTAL,
            repository_id=repository_id,
            source_node=source_node,
            timestamp=int(time.time() * 1000),
            refs_included=list(changed_refs.keys()),
            prerequisites=list(basis_refs.values()),
            size_bytes=len(bundle_data),
            checksum=hashlib.sha256(bundle_data).digest(),
        )

        bundle = GitBundle(metadata=metadata, data=bundle_data)

        if output_path:
            bundle.save(output_path)

        return bundle
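The ref-selection logic in `create_incremental_bundle` can be isolated as a small sketch (`plan_incremental` is an illustrative name): given the current refs and the peer's last known refs, it picks the refs that are new or have moved, and builds the `^`-prefixed exclusions passed to `git bundle create` so history the peer already has is omitted.

```python
def plan_incremental(current, basis):
    # Refs that are new, or whose SHA differs from the peer's basis.
    changed = {ref: sha for ref, sha in current.items() if basis.get(ref) != sha}
    # ^-prefixed revs tell `git bundle create` to exclude reachable history.
    exclusions = [f"^{sha}" for sha in basis.values() if sha]
    return changed, exclusions

current = {"refs/heads/main": "b2c3", "refs/tags/v1.0": "c3d4"}
basis = {"refs/heads/main": "a1b2"}
changed, exclusions = plan_incremental(current, basis)
assert changed == current          # main moved, v1.0 is new
assert exclusions == ["^a1b2"]
```

If every ref matches the basis, `changed` is empty and no bundle is created at all, which is what lets offline peers exchange only the delta on constrained links.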


class GitBundleApplicator:
    """Applies Git bundles to repositories."""

    def __init__(self, repo_path: Path):
        """Initialize with path to Git repository."""
        self.repo_path = Path(repo_path)

    def _run_git(self, *args: str, check: bool = True) -> subprocess.CompletedProcess:
        """Run a git command in the repository."""
        return subprocess.run(
            ["git", *args],
            cwd=self.repo_path,
            capture_output=True,
            text=True,
            check=check,
        )

    def verify_bundle(self, bundle: GitBundle) -> Tuple[bool, str]:
        """Verify that a bundle can be applied.

        Returns (success, message).
        """
        with tempfile.NamedTemporaryFile(suffix=".bundle", delete=False) as f:
            f.write(bundle.data)
            bundle_path = f.name

        try:
            result = self._run_git("bundle", "verify", bundle_path, check=False)
            if result.returncode == 0:
                return True, "Bundle verified successfully"
            else:
                return False, result.stderr.strip()
        finally:
            os.unlink(bundle_path)

    def apply_bundle(self, bundle: GitBundle, fetch_all: bool = True) -> Dict[str, str]:
        """Apply a bundle to the repository.

        Args:
            bundle: The GitBundle to apply
            fetch_all: If True, fetch all refs from the bundle

        Returns:
            Dict of applied refs {ref_name: commit_sha}
        """
        with tempfile.NamedTemporaryFile(suffix=".bundle", delete=False) as f:
            f.write(bundle.data)
            bundle_path = f.name

        try:
            # Verify first
            ok, msg = self.verify_bundle(bundle)
            if not ok:
                raise ValueError(f"Bundle verification failed: {msg}")

            # List refs in the bundle ("<sha> <ref>" per line)
            result = self._run_git("bundle", "list-heads", bundle_path)
            bundle_refs = {}
            for line in result.stdout.strip().split("\n"):
                if line:
                    parts = line.split()
                    if len(parts) >= 2:
                        bundle_refs[parts[1]] = parts[0]

            # Fetch from the bundle
            if fetch_all:
                # Fetch each ref, preserving its name
                for ref in bundle_refs:
                    self._run_git("fetch", bundle_path, f"{ref}:{ref}", check=False)
            else:
                self._run_git("fetch", bundle_path)

            return bundle_refs
        finally:
            os.unlink(bundle_path)

    def get_current_refs(self, patterns: Optional[List[str]] = None) -> Dict[str, str]:
|
||||
"""Get current refs for computing incremental basis."""
|
||||
if patterns is None:
|
||||
patterns = RADICLE_REF_PATTERNS
|
||||
|
||||
refs = {}
|
||||
for pattern in patterns:
|
||||
result = self._run_git("for-each-ref", "--format=%(refname) %(objectname)", pattern, check=False)
|
||||
if result.returncode == 0:
|
||||
for line in result.stdout.strip().split("\n"):
|
||||
if line:
|
||||
parts = line.split()
|
||||
if len(parts) == 2:
|
||||
refs[parts[0]] = parts[1]
|
||||
return refs
|
||||
|
||||
|
||||
def estimate_bundle_size(repo_path: Path, ref_patterns: Optional[List[str]] = None) -> int:
|
||||
"""Estimate the size of a full bundle without creating it."""
|
||||
result = subprocess.run(
|
||||
["git", "count-objects", "-v"],
|
||||
cwd=repo_path,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
# Parse size-pack from output
|
||||
for line in result.stdout.split("\n"):
|
||||
if line.startswith("size-pack:"):
|
||||
# size-pack is in KB
|
||||
return int(line.split(":")[1].strip()) * 1024
|
||||
return 0
|
||||
|
|
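The change detection above reduces to a small pure function over two ref maps: keep the refs whose SHA differs from the basis, and exclude every commit the destination already has. A standalone sketch, runnable without git (`plan_incremental` is an illustrative name, not part of this module):

```python
def plan_incremental(current_refs: dict, basis_refs: dict):
    """Return (changed_refs, exclusions) for an incremental bundle."""
    changed = {
        ref: sha
        for ref, sha in current_refs.items()
        if basis_refs.get(ref) != sha
    }
    # One "^<sha>" exclusion per commit the destination already has
    exclusions = [f"^{sha}" for sha in basis_refs.values() if sha]
    return changed, exclusions

changed, exclusions = plan_incremental(
    {"refs/heads/main": "b" * 40, "refs/heads/dev": "c" * 40},
    {"refs/heads/main": "a" * 40, "refs/heads/dev": "c" * 40},
)
```

Only `refs/heads/main` lands in the bundle; both basis commits become `^`-exclusions on the `git bundle create` command line.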
@@ -0,0 +1,284 @@
"""Identity mapping between Radicle Node IDs and RNS Destinations.
|
||||
|
||||
Radicle uses Ed25519 public keys encoded as W3C DIDs (did:key:z6Mk...).
|
||||
Reticulum uses Ed25519 for signatures and X25519 for key exchange,
|
||||
with destinations being 128-bit truncated SHA-256 hashes.
|
||||
|
||||
Important distinction:
|
||||
- RNS Identity hash: hash(signing_pubkey) - identifies the keypair
|
||||
- RNS Destination hash: hash(identity_pubkey + app_name + aspects) - identifies an endpoint
|
||||
|
||||
This module provides bidirectional mapping between these identity systems.
|
||||
"""
|
||||
|
||||
import hashlib
|
||||
import os
|
||||
from dataclasses import dataclass
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
import RNS
|
||||
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|
||||
Ed25519PrivateKey,
|
||||
Ed25519PublicKey,
|
||||
)
|
||||
from cryptography.hazmat.primitives import serialization
|
||||
|
||||
|
||||
# Multicodec prefix for Ed25519 public key (0xed01)
|
||||
ED25519_MULTICODEC_PREFIX = b"\xed\x01"
|
||||
|
||||
# Base58btc multibase prefix
|
||||
MULTIBASE_BASE58BTC = "z"
|
||||
|
||||
|
||||
def _base58btc_encode(data: bytes) -> str:
|
||||
"""Encode bytes to base58btc."""
|
||||
alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
|
||||
n = int.from_bytes(data, "big")
|
||||
if n == 0:
|
||||
return alphabet[0]
|
||||
result = ""
|
||||
while n > 0:
|
||||
n, remainder = divmod(n, 58)
|
||||
result = alphabet[remainder] + result
|
||||
# Preserve leading zeros
|
||||
for byte in data:
|
||||
if byte == 0:
|
||||
result = alphabet[0] + result
|
||||
else:
|
||||
break
|
||||
return result
|
||||
|
||||
|
||||
def _base58btc_decode(s: str) -> bytes:
|
||||
"""Decode base58btc string to bytes."""
|
||||
alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
|
||||
n = 0
|
||||
for char in s:
|
||||
n = n * 58 + alphabet.index(char)
|
||||
# Calculate byte length
|
||||
byte_length = (n.bit_length() + 7) // 8
|
||||
result = n.to_bytes(byte_length, "big") if byte_length > 0 else b""
|
||||
# Restore leading zeros
|
||||
leading_zeros = 0
|
||||
for char in s:
|
||||
if char == alphabet[0]:
|
||||
leading_zeros += 1
|
||||
else:
|
||||
break
|
||||
return b"\x00" * leading_zeros + result
|
||||
|
||||
|
||||
@dataclass
|
||||
class RadicleIdentity:
|
||||
"""Represents a Radicle identity with RNS mapping.
|
||||
|
||||
Provides conversion between:
|
||||
- Radicle DID (did:key:z6Mk...)
|
||||
- Raw Ed25519 public key bytes
|
||||
- RNS Identity/Destination
|
||||
|
||||
Note: rns_identity may be None for public-key-only identities created
|
||||
from DIDs, since a full RNS identity requires both Ed25519 (signing)
|
||||
and X25519 (encryption) keys, but DIDs only contain the Ed25519 key.
|
||||
"""
|
||||
|
||||
private_key: Optional[Ed25519PrivateKey]
|
||||
public_key: Ed25519PublicKey
|
||||
rns_identity: Optional[RNS.Identity] = None
|
||||
|
||||
@classmethod
|
||||
def generate(cls) -> "RadicleIdentity":
|
||||
"""Generate a new random identity."""
|
||||
# Generate RNS identity (uses Ed25519 internally)
|
||||
rns_identity = RNS.Identity()
|
||||
|
||||
# Extract the Ed25519 private key from RNS identity
|
||||
# RNS stores the signing key which is Ed25519
|
||||
private_key = Ed25519PrivateKey.from_private_bytes(
|
||||
rns_identity.sig_prv_bytes
|
||||
)
|
||||
public_key = private_key.public_key()
|
||||
|
||||
return cls(
|
||||
private_key=private_key,
|
||||
public_key=public_key,
|
||||
rns_identity=rns_identity,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_rns_identity(cls, rns_identity: RNS.Identity) -> "RadicleIdentity":
|
||||
"""Create from an existing RNS Identity."""
|
||||
if rns_identity.prv:
|
||||
private_key = Ed25519PrivateKey.from_private_bytes(
|
||||
rns_identity.sig_prv_bytes
|
||||
)
|
||||
public_key = private_key.public_key()
|
||||
else:
|
||||
private_key = None
|
||||
public_key = Ed25519PublicKey.from_public_bytes(
|
||||
rns_identity.sig_pub_bytes
|
||||
)
|
||||
|
||||
return cls(
|
||||
private_key=private_key,
|
||||
public_key=public_key,
|
||||
rns_identity=rns_identity,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_did(cls, did: str) -> "RadicleIdentity":
|
||||
"""Create from a Radicle DID (did:key:z6Mk...).
|
||||
|
||||
Note: This creates a public-key-only identity (no private key).
|
||||
The rns_identity will be None since DIDs only contain the Ed25519
|
||||
signing key, but RNS requires both Ed25519 and X25519 keys.
|
||||
"""
|
||||
if not did.startswith("did:key:"):
|
||||
raise ValueError(f"Invalid DID format: {did}")
|
||||
|
||||
multibase_value = did[8:] # Remove "did:key:"
|
||||
|
||||
if not multibase_value.startswith(MULTIBASE_BASE58BTC):
|
||||
raise ValueError(f"Unsupported multibase encoding: {multibase_value[0]}")
|
||||
|
||||
decoded = _base58btc_decode(multibase_value[1:]) # Remove 'z' prefix
|
||||
|
||||
if not decoded.startswith(ED25519_MULTICODEC_PREFIX):
|
||||
raise ValueError("DID does not contain an Ed25519 key")
|
||||
|
||||
pubkey_bytes = decoded[2:] # Remove multicodec prefix
|
||||
|
||||
if len(pubkey_bytes) != 32:
|
||||
raise ValueError(f"Invalid Ed25519 public key length: {len(pubkey_bytes)}")
|
||||
|
||||
public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
|
||||
|
||||
return cls(
|
||||
private_key=None,
|
||||
public_key=public_key,
|
||||
rns_identity=None,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_private_key_bytes(cls, private_key_bytes: bytes) -> "RadicleIdentity":
|
||||
"""Create from raw Ed25519 private key bytes (32 bytes).
|
||||
|
||||
Note: This creates an identity with the Ed25519 key for signing/DID
|
||||
purposes, but without RNS network capabilities. For full RNS support,
|
||||
use generate() or from_rns_identity() instead.
|
||||
"""
|
||||
private_key = Ed25519PrivateKey.from_private_bytes(private_key_bytes)
|
||||
public_key = private_key.public_key()
|
||||
|
||||
# Cannot create RNS identity from arbitrary Ed25519 bytes because
|
||||
# RNS requires both Ed25519 (signing) and X25519 (encryption) keys
|
||||
# with specific derivation. Return identity without RNS support.
|
||||
return cls(
|
||||
private_key=private_key,
|
||||
public_key=public_key,
|
||||
rns_identity=None, # No RNS support for imported keys
|
||||
)
|
||||
|
||||
@property
|
||||
def public_key_bytes(self) -> bytes:
|
||||
"""Get raw Ed25519 public key bytes (32 bytes)."""
|
||||
return self.public_key.public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw,
|
||||
)
|
||||
|
||||
@property
|
||||
def did(self) -> str:
|
||||
"""Get the Radicle DID representation (did:key:z6Mk...)."""
|
||||
multicodec_key = ED25519_MULTICODEC_PREFIX + self.public_key_bytes
|
||||
multibase_encoded = MULTIBASE_BASE58BTC + _base58btc_encode(multicodec_key)
|
||||
return f"did:key:{multibase_encoded}"
|
||||
|
||||
@property
|
||||
def node_id(self) -> str:
|
||||
"""Alias for did - returns the Radicle Node ID."""
|
||||
return self.did
|
||||
|
||||
@property
|
||||
def rns_identity_hash(self) -> Optional[bytes]:
|
||||
"""Get the RNS identity hash (16 bytes).
|
||||
|
||||
This is the hash of the identity's public key, NOT a destination hash.
|
||||
Destination hashes also include app_name and aspects.
|
||||
|
||||
Returns None if this is a public-key-only identity without RNS support.
|
||||
"""
|
||||
if self.rns_identity is None:
|
||||
return None
|
||||
return self.rns_identity.hash
|
||||
|
||||
@property
|
||||
def rns_identity_hash_hex(self) -> Optional[str]:
|
||||
"""Get the RNS identity hash as hex string.
|
||||
|
||||
Returns None if this is a public-key-only identity without RNS support.
|
||||
"""
|
||||
if self.rns_identity is None:
|
||||
return None
|
||||
return self.rns_identity.hexhash
|
||||
|
||||
# Aliases for backwards compatibility
|
||||
@property
|
||||
def rns_hash(self) -> Optional[bytes]:
|
||||
"""Alias for rns_identity_hash (deprecated, use rns_identity_hash)."""
|
||||
return self.rns_identity_hash
|
||||
|
||||
@property
|
||||
def rns_hash_hex(self) -> Optional[str]:
|
||||
"""Alias for rns_identity_hash_hex (deprecated, use rns_identity_hash_hex)."""
|
||||
return self.rns_identity_hash_hex
|
||||
|
||||
def save(self, path: "str | Path") -> None:
|
||||
"""Save identity to file. Overwrites if exists."""
|
||||
path = Path(path)
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
if self.rns_identity is None:
|
||||
raise ValueError("Cannot save a public-key-only identity (no RNS keypair)")
|
||||
if not self.rns_identity.to_file(str(path)):
|
||||
raise OSError(f"Failed to write identity to {path}")
|
||||
|
||||
@classmethod
|
||||
def load(cls, path: "str | Path") -> "RadicleIdentity":
|
||||
"""Load identity from file previously saved with save()."""
|
||||
path = Path(path)
|
||||
if not path.exists():
|
||||
raise FileNotFoundError(f"Identity file not found: {path}")
|
||||
rns_identity = RNS.Identity.from_file(str(path))
|
||||
if rns_identity is None:
|
||||
raise ValueError(f"Failed to load identity from {path}")
|
||||
return cls.from_rns_identity(rns_identity)
|
||||
|
||||
@classmethod
|
||||
def load_or_generate(cls, path: "str | Path") -> "RadicleIdentity":
|
||||
"""Load identity from path, or generate and save a new one if absent."""
|
||||
path = Path(path)
|
||||
if path.exists():
|
||||
return cls.load(path)
|
||||
identity = cls.generate()
|
||||
identity.save(path)
|
||||
return identity
|
||||
|
||||
def sign(self, data: bytes) -> bytes:
|
||||
"""Sign data with the private key."""
|
||||
if self.private_key is None:
|
||||
raise ValueError("Cannot sign without private key")
|
||||
return self.private_key.sign(data)
|
||||
|
||||
def verify(self, signature: bytes, data: bytes) -> bool:
|
||||
"""Verify a signature against data."""
|
||||
try:
|
||||
self.public_key.verify(signature, data)
|
||||
return True
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
def __repr__(self) -> str:
|
||||
has_private = "with private key" if self.private_key else "public only"
|
||||
return f"RadicleIdentity({self.did[:32]}..., {has_private})"
|
||||
|
|
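The did:key encoding implemented above (0xed01 multicodec prefix, base58btc, 'z' multibase marker) round-trips as follows. The base58 helpers are restated so this sketch runs standalone; `b58encode`/`b58decode` are illustrative names, not the module's private functions:

```python
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # One leading '1' per leading zero byte
    pad = len(data) - len(data.lstrip(b"\x00"))
    return ALPHABET[0] * pad + out

def b58decode(s: str) -> bytes:
    n = 0
    for c in s:
        n = n * 58 + ALPHABET.index(c)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip(ALPHABET[0]))
    return b"\x00" * pad + body

pubkey = bytes(range(32))  # stand-in for a real Ed25519 public key
did = "did:key:z" + b58encode(b"\xed\x01" + pubkey)
decoded = b58decode(did[len("did:key:z"):])
```

Because the fixed 0xed01 prefix dominates the most significant bytes, real Ed25519 did:key strings all share the familiar `did:key:z6Mk...` shape.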
@@ -0,0 +1,177 @@
"""RNS Link wrapper providing Radicle-compatible connection interface.
|
||||
|
||||
Maps Radicle's Noise XK sessions to RNS Links, which provide:
|
||||
- Encrypted bidirectional channels
|
||||
- Forward secrecy via ephemeral ECDH
|
||||
- Reliable ordered message delivery
|
||||
"""
|
||||
|
||||
import threading
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from typing import Callable, Optional
|
||||
from collections import deque
|
||||
|
||||
import RNS
|
||||
|
||||
|
||||
class LinkState(Enum):
|
||||
"""Connection state for RadicleLink."""
|
||||
PENDING = "pending"
|
||||
ACTIVE = "active"
|
||||
CLOSED = "closed"
|
||||
FAILED = "failed"
|
||||
|
||||
|
||||
@dataclass
|
||||
class RadicleLink:
|
||||
"""Wrapper around RNS.Link providing Radicle-compatible interface.
|
||||
|
||||
RNS Links provide:
|
||||
- ECDH key exchange (X25519)
|
||||
- Symmetric encryption with forward secrecy
|
||||
- Reliable delivery with automatic retransmission
|
||||
|
||||
This maps to Radicle's Noise XK sessions conceptually.
|
||||
"""
|
||||
|
||||
rns_link: RNS.Link
|
||||
state: LinkState = LinkState.PENDING
|
||||
on_data: Optional[Callable[[bytes], None]] = None
|
||||
on_close: Optional[Callable[[], None]] = None
|
||||
_receive_buffer: deque = field(default_factory=deque)
|
||||
_buffer_lock: threading.Lock = field(default_factory=threading.Lock)
|
||||
_data_available: threading.Event = field(default_factory=threading.Event)
|
||||
|
||||
def __post_init__(self):
|
||||
"""Set up RNS link callbacks."""
|
||||
# Only set established callback for pending links (outbound)
|
||||
# For already-established links (incoming), this callback already fired
|
||||
if self.state == LinkState.PENDING:
|
||||
self.rns_link.set_link_established_callback(self._on_established)
|
||||
|
||||
self.rns_link.set_link_closed_callback(self._on_closed)
|
||||
self.rns_link.set_packet_callback(self._on_packet)
|
||||
|
||||
@classmethod
|
||||
def create_outbound(
|
||||
cls,
|
||||
destination: RNS.Destination,
|
||||
on_data: Optional[Callable[[bytes], None]] = None,
|
||||
on_close: Optional[Callable[[], None]] = None,
|
||||
) -> "RadicleLink":
|
||||
"""Create an outbound link to a destination."""
|
||||
rns_link = RNS.Link(destination)
|
||||
return cls(
|
||||
rns_link=rns_link,
|
||||
on_data=on_data,
|
||||
on_close=on_close,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_incoming(
|
||||
cls,
|
||||
rns_link: RNS.Link,
|
||||
on_data: Optional[Callable[[bytes], None]] = None,
|
||||
on_close: Optional[Callable[[], None]] = None,
|
||||
) -> "RadicleLink":
|
||||
"""Wrap an incoming RNS link."""
|
||||
link = cls(
|
||||
rns_link=rns_link,
|
||||
state=LinkState.ACTIVE, # Incoming links are already established
|
||||
on_data=on_data,
|
||||
on_close=on_close,
|
||||
)
|
||||
return link
|
||||
|
||||
def _on_established(self, link: RNS.Link):
|
||||
"""Called when link is established."""
|
||||
self.state = LinkState.ACTIVE
|
||||
RNS.log(f"Link established: {link}", RNS.LOG_DEBUG)
|
||||
|
||||
def _on_closed(self, link: RNS.Link):
|
||||
"""Called when link is closed."""
|
||||
self.state = LinkState.CLOSED
|
||||
self._data_available.set() # Wake up any waiting readers
|
||||
if self.on_close:
|
||||
self.on_close()
|
||||
RNS.log(f"Link closed: {link}", RNS.LOG_DEBUG)
|
||||
|
||||
def _on_packet(self, message: bytes, packet: RNS.Packet):
|
||||
"""Called when a packet is received."""
|
||||
with self._buffer_lock:
|
||||
self._receive_buffer.append(message)
|
||||
self._data_available.set()
|
||||
|
||||
if self.on_data:
|
||||
self.on_data(message)
|
||||
|
||||
def send(self, data: bytes) -> bool:
|
||||
"""Send data over the link.
|
||||
|
||||
Returns True if packet was sent successfully.
|
||||
"""
|
||||
if self.state != LinkState.ACTIVE:
|
||||
return False
|
||||
|
||||
try:
|
||||
packet = RNS.Packet(self.rns_link, data)
|
||||
packet.send()
|
||||
return True
|
||||
except Exception as e:
|
||||
RNS.log(f"Send failed: {e}", RNS.LOG_ERROR)
|
||||
return False
|
||||
|
||||
def recv(self, timeout: Optional[float] = None) -> Optional[bytes]:
|
||||
"""Receive data from the link.
|
||||
|
||||
Blocks until data is available or timeout expires.
|
||||
Returns None on timeout or if link is closed.
|
||||
"""
|
||||
deadline = time.time() + timeout if timeout else None
|
||||
|
||||
while True:
|
||||
with self._buffer_lock:
|
||||
if self._receive_buffer:
|
||||
return self._receive_buffer.popleft()
|
||||
|
||||
if self.state == LinkState.CLOSED:
|
||||
return None
|
||||
|
||||
self._data_available.clear()
|
||||
|
||||
remaining = None
|
||||
if deadline:
|
||||
remaining = deadline - time.time()
|
||||
if remaining <= 0:
|
||||
return None
|
||||
|
||||
if not self._data_available.wait(timeout=remaining):
|
||||
return None
|
||||
|
||||
def close(self):
|
||||
"""Close the link."""
|
||||
if self.state == LinkState.ACTIVE:
|
||||
self.rns_link.teardown()
|
||||
self.state = LinkState.CLOSED
|
||||
|
||||
@property
|
||||
def is_active(self) -> bool:
|
||||
"""Check if link is active."""
|
||||
return self.state == LinkState.ACTIVE
|
||||
|
||||
@property
|
||||
def remote_identity(self) -> Optional[RNS.Identity]:
|
||||
"""Get the remote peer's identity if known."""
|
||||
return self.rns_link.get_remote_identity()
|
||||
|
||||
@property
|
||||
def rtt(self) -> Optional[float]:
|
||||
"""Get current round-trip time estimate in seconds."""
|
||||
if hasattr(self.rns_link, 'rtt') and self.rns_link.rtt:
|
||||
return self.rns_link.rtt
|
||||
return None
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return f"RadicleLink(state={self.state.value}, rtt={self.rtt})"
|
||||
|
|
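The deque-plus-Event receive pattern used by RadicleLink can be exercised without RNS at all. A standalone sketch with hypothetical names (`deliver`, `recv`), showing why the event must be cleared while the lock is held:

```python
import threading
from collections import deque

buf, lock, avail = deque(), threading.Lock(), threading.Event()

def deliver(data: bytes):
    # Producer: append under the lock, then signal readers
    with lock:
        buf.append(data)
        avail.set()

def recv(timeout=None):
    # Consumer: clear the event while holding the lock, so a packet
    # arriving between the emptiness check and the wait is never lost
    while True:
        with lock:
            if buf:
                return buf.popleft()
            avail.clear()
        if not avail.wait(timeout=timeout):
            return None

threading.Timer(0.05, deliver, args=(b"hello",)).start()
first = recv(timeout=2.0)   # blocks until the timer delivers
empty = recv(timeout=0.05)  # nothing else arrives: times out
```

If the clear happened outside the lock, a packet delivered in between would set the event, the consumer would immediately clear it, and the subsequent wait could block despite data sitting in the buffer.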
@@ -0,0 +1,313 @@
"""Message framing layer for Radicle protocol over RNS.
|
||||
|
||||
Implements the gossip message types:
|
||||
- Node Announcements: Broadcast Node IDs and network addresses
|
||||
- Inventory Announcements: Share repository inventories for routing
|
||||
- Reference Announcements: Push repository updates to subscribers
|
||||
|
||||
Messages are serialized to a compact binary format suitable for
|
||||
low-bandwidth Reticulum transports.
|
||||
"""
|
||||
|
||||
import struct
|
||||
import hashlib
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from enum import IntEnum
|
||||
from typing import List, Optional, Set
|
||||
|
||||
|
||||
class MessageType(IntEnum):
|
||||
"""Radicle gossip message types."""
|
||||
NODE_ANNOUNCEMENT = 0x01
|
||||
INVENTORY_ANNOUNCEMENT = 0x02
|
||||
REF_ANNOUNCEMENT = 0x03
|
||||
PING = 0x10
|
||||
PONG = 0x11
|
||||
|
||||
|
||||
# Message header format: type (1 byte) + timestamp (8 bytes) + payload length (2 bytes)
|
||||
HEADER_FORMAT = "!BQH"
|
||||
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)
|
||||
|
||||
# Maximum payload size (64KB - header)
|
||||
MAX_PAYLOAD_SIZE = 65535 - HEADER_SIZE
|
||||
|
||||
|
||||
@dataclass
|
||||
class MessageHeader:
|
||||
"""Common header for all Radicle messages."""
|
||||
msg_type: MessageType
|
||||
timestamp: int # Unix timestamp in milliseconds
|
||||
payload_length: int
|
||||
|
||||
def encode(self) -> bytes:
|
||||
"""Encode header to bytes."""
|
||||
return struct.pack(
|
||||
HEADER_FORMAT,
|
||||
self.msg_type,
|
||||
self.timestamp,
|
||||
self.payload_length,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "MessageHeader":
|
||||
"""Decode header from bytes."""
|
||||
if len(data) < HEADER_SIZE:
|
||||
raise ValueError(f"Header too short: {len(data)} < {HEADER_SIZE}")
|
||||
|
||||
msg_type_raw, timestamp, payload_length = struct.unpack(
|
||||
HEADER_FORMAT, data[:HEADER_SIZE]
|
||||
)
|
||||
try:
|
||||
msg_type = MessageType(msg_type_raw)
|
||||
except ValueError:
|
||||
raise ValueError(f"Unknown message type: {msg_type_raw}")
|
||||
|
||||
return cls(
|
||||
msg_type=msg_type,
|
||||
timestamp=timestamp,
|
||||
payload_length=payload_length,
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class NodeAnnouncement:
|
||||
"""Announces a node's presence and capabilities.
|
||||
|
||||
Broadcast periodically to enable peer discovery.
|
||||
"""
|
||||
node_id: str # DID (did:key:z6Mk...)
|
||||
features: int = 0 # Bitmask of supported features
|
||||
version: int = 1 # Protocol version
|
||||
|
||||
def encode(self) -> bytes:
|
||||
"""Encode to bytes."""
|
||||
node_id_bytes = self.node_id.encode("utf-8")
|
||||
return struct.pack(
|
||||
f"!HH{len(node_id_bytes)}s",
|
||||
self.features,
|
||||
self.version,
|
||||
node_id_bytes,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "NodeAnnouncement":
|
||||
"""Decode from bytes."""
|
||||
features, version = struct.unpack("!HH", data[:4])
|
||||
node_id = data[4:].decode("utf-8")
|
||||
return cls(node_id=node_id, features=features, version=version)
|
||||
|
||||
def to_message(self) -> bytes:
|
||||
"""Wrap in a full message with header."""
|
||||
payload = self.encode()
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.NODE_ANNOUNCEMENT,
|
||||
timestamp=int(time.time() * 1000),
|
||||
payload_length=len(payload),
|
||||
)
|
||||
return header.encode() + payload
|
||||
|
||||
|
||||
@dataclass
|
||||
class InventoryAnnouncement:
|
||||
"""Announces repositories hosted by a node.
|
||||
|
||||
Used to build routing tables for repository discovery.
|
||||
"""
|
||||
node_id: str # DID of the announcing node
|
||||
repositories: List[str] # List of repository IDs (hashes)
|
||||
|
||||
def encode(self) -> bytes:
|
||||
"""Encode to bytes."""
|
||||
node_id_bytes = self.node_id.encode("utf-8")
|
||||
repo_data = b""
|
||||
for repo_id in self.repositories:
|
||||
repo_bytes = repo_id.encode("utf-8")
|
||||
repo_data += struct.pack(f"!H{len(repo_bytes)}s", len(repo_bytes), repo_bytes)
|
||||
|
||||
return struct.pack(
|
||||
f"!H{len(node_id_bytes)}sH",
|
||||
len(node_id_bytes),
|
||||
node_id_bytes,
|
||||
len(self.repositories),
|
||||
) + repo_data
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "InventoryAnnouncement":
|
||||
"""Decode from bytes."""
|
||||
offset = 0
|
||||
|
||||
# Node ID
|
||||
node_id_len = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
node_id = data[offset:offset+node_id_len].decode("utf-8")
|
||||
offset += node_id_len
|
||||
|
||||
# Repository count
|
||||
repo_count = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
|
||||
# Repositories
|
||||
repositories = []
|
||||
for _ in range(repo_count):
|
||||
repo_len = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
repo_id = data[offset:offset+repo_len].decode("utf-8")
|
||||
offset += repo_len
|
||||
repositories.append(repo_id)
|
||||
|
||||
return cls(node_id=node_id, repositories=repositories)
|
||||
|
||||
def to_message(self) -> bytes:
|
||||
"""Wrap in a full message with header."""
|
||||
payload = self.encode()
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.INVENTORY_ANNOUNCEMENT,
|
||||
timestamp=int(time.time() * 1000),
|
||||
payload_length=len(payload),
|
||||
)
|
||||
return header.encode() + payload
|
||||
|
||||
|
||||
@dataclass
|
||||
class RefAnnouncement:
|
||||
"""Announces a reference update in a repository.
|
||||
|
||||
Pushed to subscribers when refs change (commits, branches, etc).
|
||||
"""
|
||||
repository_id: str # Repository ID
|
||||
ref_name: str # Reference name (e.g., "refs/heads/main")
|
||||
old_oid: bytes # Previous object ID (20 bytes, all zeros if new)
|
||||
new_oid: bytes # New object ID (20 bytes)
|
||||
signature: bytes = b"" # Ed25519 signature of the update
|
||||
|
||||
def encode(self) -> bytes:
|
||||
"""Encode to bytes."""
|
||||
repo_bytes = self.repository_id.encode("utf-8")
|
||||
ref_bytes = self.ref_name.encode("utf-8")
|
||||
|
||||
return struct.pack(
|
||||
f"!H{len(repo_bytes)}sH{len(ref_bytes)}s20s20sH{len(self.signature)}s",
|
||||
len(repo_bytes),
|
||||
repo_bytes,
|
||||
len(ref_bytes),
|
||||
ref_bytes,
|
||||
self.old_oid,
|
||||
self.new_oid,
|
||||
len(self.signature),
|
||||
self.signature,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "RefAnnouncement":
|
||||
"""Decode from bytes."""
|
||||
offset = 0
|
||||
|
||||
# Repository ID
|
||||
repo_len = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
repository_id = data[offset:offset+repo_len].decode("utf-8")
|
||||
offset += repo_len
|
||||
|
||||
# Ref name
|
||||
ref_len = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
ref_name = data[offset:offset+ref_len].decode("utf-8")
|
||||
offset += ref_len
|
||||
|
||||
# OIDs
|
||||
old_oid = data[offset:offset+20]
|
||||
offset += 20
|
||||
new_oid = data[offset:offset+20]
|
||||
offset += 20
|
||||
|
||||
# Signature
|
||||
sig_len = struct.unpack("!H", data[offset:offset+2])[0]
|
||||
offset += 2
|
||||
signature = data[offset:offset+sig_len]
|
||||
|
||||
return cls(
|
||||
repository_id=repository_id,
|
||||
ref_name=ref_name,
|
||||
old_oid=old_oid,
|
||||
new_oid=new_oid,
|
||||
signature=signature,
|
||||
)
|
||||
|
||||
def to_message(self) -> bytes:
|
||||
"""Wrap in a full message with header."""
|
||||
payload = self.encode()
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.REF_ANNOUNCEMENT,
|
||||
timestamp=int(time.time() * 1000),
|
||||
payload_length=len(payload),
|
||||
)
|
||||
return header.encode() + payload
|
||||
|
||||
|
||||
@dataclass
|
||||
class Ping:
|
||||
"""Simple ping message for keepalive/latency measurement."""
|
||||
nonce: bytes = field(default_factory=lambda: struct.pack("!Q", int(time.time() * 1000)))
|
||||
|
||||
def encode(self) -> bytes:
|
||||
return self.nonce
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "Ping":
|
||||
return cls(nonce=data[:8])
|
||||
|
||||
def to_message(self) -> bytes:
|
||||
payload = self.encode()
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.PING,
|
||||
timestamp=int(time.time() * 1000),
|
||||
payload_length=len(payload),
|
||||
)
|
||||
return header.encode() + payload
|
||||
|
||||
|
||||
@dataclass
|
||||
class Pong:
|
||||
"""Response to ping."""
|
||||
nonce: bytes # Echo back the ping nonce
|
||||
|
||||
def encode(self) -> bytes:
|
||||
return self.nonce
|
||||
|
||||
@classmethod
|
||||
def decode(cls, data: bytes) -> "Pong":
|
||||
return cls(nonce=data[:8])
|
||||
|
||||
def to_message(self) -> bytes:
|
||||
payload = self.encode()
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.PONG,
|
||||
timestamp=int(time.time() * 1000),
|
||||
payload_length=len(payload),
|
||||
)
|
||||
return header.encode() + payload
|
||||
|
||||
|
||||
def decode_message(data: bytes):
|
||||
"""Decode a message from bytes.
|
||||
|
||||
Returns tuple of (header, message_object).
|
||||
"""
|
||||
header = MessageHeader.decode(data)
|
||||
payload = data[HEADER_SIZE:HEADER_SIZE + header.payload_length]
|
||||
|
||||
decoders = {
|
||||
MessageType.NODE_ANNOUNCEMENT: NodeAnnouncement.decode,
|
||||
MessageType.INVENTORY_ANNOUNCEMENT: InventoryAnnouncement.decode,
|
||||
MessageType.REF_ANNOUNCEMENT: RefAnnouncement.decode,
|
||||
MessageType.PING: Ping.decode,
|
||||
MessageType.PONG: Pong.decode,
|
||||
}
|
||||
|
||||
decoder = decoders.get(header.msg_type)
|
||||
if decoder is None:
|
||||
raise ValueError(f"Unknown message type: {header.msg_type}")
|
||||
|
||||
return header, decoder(payload)
|
||||
|
|
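A quick round-trip of the `"!BQH"` header layout described above (one type byte, 8-byte millisecond timestamp, 2-byte payload length), using only `struct`; the `0x10` value mirrors `MessageType.PING`:

```python
import struct
import time

HEADER_FORMAT = "!BQH"             # type, ms timestamp, payload length
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)
PING = 0x10                        # MessageType.PING

payload = struct.pack("!Q", 1234)  # 8-byte ping nonce
message = struct.pack(HEADER_FORMAT, PING, int(time.time() * 1000), len(payload)) + payload

# Receiver side: peel the header, then slice the payload
msg_type, timestamp, plen = struct.unpack(HEADER_FORMAT, message[:HEADER_SIZE])
```

With network byte order there is no padding, so the header is exactly 11 bytes on every platform.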
@@ -0,0 +1,191 @@
"""QR code encoding/decoding for Radicle bundles.
|
||||
|
||||
Enables visual transfer of tiny bundles (≤2953 bytes) without any network
|
||||
connection — useful for air-gapped machines, paper backups, or "dead drop"
|
||||
workflows where two people briefly see each other's screens.
|
||||
|
||||
Requires the optional 'qrcode' package:
|
||||
pip install radicle-reticulum[qr] # or: pip install qrcode
|
||||
"""
|
||||
|
||||
import base64
|
||||
import hashlib
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
from radicle_reticulum.git_bundle import GitBundle
|
||||
|
||||
# QR code capacity: version 40, binary mode, error correction L
|
||||
QR_MAX_BYTES = 2953
|
||||
|
||||
# Magic header to identify Radicle QR payloads (8 bytes)
|
||||
QR_MAGIC = b"RADQR\x01\x00\x00"
|
||||
|
||||
|
||||
class BundleTooLargeForQR(ValueError):
|
||||
"""Raised when a bundle exceeds QR capacity."""
|
||||
pass
|
||||
|
||||
|
||||
def encode_bundle_to_qr(
|
||||
bundle: GitBundle,
|
    output_path: Optional[Path] = None,
    error_correction: str = "L",
) -> str:
    """Encode a GitBundle as a QR code.

    The bundle bytes are framed with a magic prefix, a 4-byte length, and a
    SHA-256 checksum so the receiver can verify integrity after scanning.

    Args:
        bundle: The GitBundle to encode. Must be ≤ QR_MAX_BYTES when serialised.
        output_path: If given, write a PNG image to this path (requires qrcode[pil]).
            If None, only terminal-printable ASCII art is produced.
        error_correction: QR error correction level: L, M, Q, or H (default L
            gives maximum data capacity).

    Returns:
        ASCII art string for terminal display (always); also writes a PNG if
        output_path is specified.

    Raises:
        BundleTooLargeForQR: If the serialised bundle exceeds QR_MAX_BYTES (2953 bytes).
        ImportError: If the 'qrcode' package is not installed.
    """
    try:
        import qrcode
        from qrcode.constants import (
            ERROR_CORRECT_L,
            ERROR_CORRECT_M,
            ERROR_CORRECT_Q,
            ERROR_CORRECT_H,
        )
    except ImportError as e:
        raise ImportError(
            "The 'qrcode' package is required for QR encoding. "
            "Install it with: pip install 'radicle-reticulum[qr]' or pip install qrcode"
        ) from e

    ec_map = {
        "L": ERROR_CORRECT_L,
        "M": ERROR_CORRECT_M,
        "Q": ERROR_CORRECT_Q,
        "H": ERROR_CORRECT_H,
    }
    ec = ec_map.get(error_correction.upper(), ERROR_CORRECT_L)

    bundle_bytes = bundle.encode()
    if len(bundle_bytes) > QR_MAX_BYTES:
        raise BundleTooLargeForQR(
            f"Bundle is {len(bundle_bytes)} bytes, exceeds QR capacity of {QR_MAX_BYTES} bytes. "
            "Use a different sync strategy for larger bundles."
        )

    # Payload: magic + 4-byte length + checksum (32 bytes) + bundle data
    checksum = hashlib.sha256(bundle_bytes).digest()
    length_prefix = len(bundle_bytes).to_bytes(4, "big")
    payload = QR_MAGIC + length_prefix + checksum + bundle_bytes

    qr = qrcode.QRCode(
        version=None,  # auto-select the smallest version that fits
        error_correction=ec,
        box_size=10,
        border=4,
    )
    qr.add_data(payload)
    qr.make(fit=True)

    if output_path is not None:
        try:
            from PIL import Image  # noqa: F401 - presence check for qrcode[pil]
            img = qr.make_image(fill_color="black", back_color="white")
            img.save(str(output_path))
        except ImportError as e:
            raise ImportError(
                "Saving QR images requires Pillow. "
                "Install it with: pip install 'qrcode[pil]'"
            ) from e

    # Always return ASCII art for terminal display
    import io
    buf = io.StringIO()
    qr.print_ascii(out=buf, invert=True)
    return buf.getvalue()


def decode_bundle_from_qr_data(data: bytes) -> GitBundle:
    """Decode a GitBundle from raw QR payload bytes (as scanned from a QR code).

    This is the inverse of encode_bundle_to_qr: it validates the magic prefix,
    length, and SHA-256 checksum before decoding the bundle.

    Args:
        data: Raw bytes decoded from a QR code scan.

    Returns:
        The decoded GitBundle.

    Raises:
        ValueError: If the data is not a valid Radicle QR payload.
    """
    if not data.startswith(QR_MAGIC):
        raise ValueError(
            f"Not a Radicle QR payload (expected magic {QR_MAGIC!r}, "
            f"got {data[:len(QR_MAGIC)]!r})"
        )

    offset = len(QR_MAGIC)
    bundle_len = int.from_bytes(data[offset:offset + 4], "big")
    offset += 4

    stored_checksum = data[offset:offset + 32]
    offset += 32

    bundle_bytes = data[offset:offset + bundle_len]
    if len(bundle_bytes) != bundle_len:
        raise ValueError(
            f"Truncated QR payload: expected {bundle_len} bytes, got {len(bundle_bytes)}"
        )

    actual_checksum = hashlib.sha256(bundle_bytes).digest()
    if actual_checksum != stored_checksum:
        raise ValueError("QR payload checksum mismatch: data may be corrupted")

    return GitBundle.decode(bundle_bytes)


def decode_bundle_from_qr_image(image_path: Path) -> GitBundle:
    """Decode a GitBundle from a QR code image file.

    Requires pyzbar and Pillow:
        pip install pyzbar Pillow

    Args:
        image_path: Path to the QR code image (PNG, JPG, etc.)

    Returns:
        The decoded GitBundle.

    Raises:
        ImportError: If pyzbar or Pillow are not installed.
        ValueError: If no valid Radicle QR code is found in the image.
    """
    try:
        from pyzbar.pyzbar import decode as pyzbar_decode
        from PIL import Image
    except ImportError as e:
        raise ImportError(
            "Decoding QR images requires pyzbar and Pillow. "
            "Install them with: pip install pyzbar Pillow"
        ) from e

    image = Image.open(str(image_path))
    decoded_objects = pyzbar_decode(image)

    for obj in decoded_objects:
        try:
            return decode_bundle_from_qr_data(obj.data)
        except ValueError:
            continue

    raise ValueError(
        f"No valid Radicle QR code found in {image_path}. "
        "Make sure the image contains a QR code created by 'radicle-rns bundle qr-encode'."
    )
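The QR payload framing used above (magic prefix, 4-byte big-endian length, SHA-256 checksum, then the bundle bytes) can be exercised standalone. This is an illustrative sketch, not the module's API; `MAGIC` is a placeholder since the actual `QR_MAGIC` value is not shown in this diff:

```python
import hashlib

MAGIC = b"RADQR"  # placeholder; the real QR_MAGIC constant is defined elsewhere

def frame(payload: bytes) -> bytes:
    # magic + 4-byte big-endian length + SHA-256 digest + payload
    return (
        MAGIC
        + len(payload).to_bytes(4, "big")
        + hashlib.sha256(payload).digest()
        + payload
    )

def unframe(data: bytes) -> bytes:
    # Inverse of frame(): validate magic, length, and checksum
    if not data.startswith(MAGIC):
        raise ValueError("bad magic")
    off = len(MAGIC)
    length = int.from_bytes(data[off:off + 4], "big")
    off += 4
    checksum = data[off:off + 32]
    off += 32
    payload = data[off:off + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    if hashlib.sha256(payload).digest() != checksum:
        raise ValueError("checksum mismatch")
    return payload
```

A corrupted scan fails the checksum comparison rather than yielding a broken bundle, which is the point of the 32-byte digest in the frame.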
@@ -0,0 +1,770 @@
"""Sync manager for Radicle repositories over Reticulum.
|
||||
|
||||
Handles repository synchronization using:
|
||||
- Full bundles for initial clone or fast networks
|
||||
- Incremental bundles for updates over low-bandwidth links (LoRa)
|
||||
- LXMF store-and-forward for offline peers
|
||||
"""
|
||||
|
||||
import os
|
||||
import threading
|
||||
import time
|
||||
import hashlib
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
from typing import Callable, Dict, List, Optional, Set, Tuple
|
||||
|
||||
import RNS
|
||||
import LXMF
|
||||
|
||||
from radicle_reticulum.identity import RadicleIdentity
|
||||
from radicle_reticulum.git_bundle import (
|
||||
GitBundle,
|
||||
GitBundleGenerator,
|
||||
GitBundleApplicator,
|
||||
BundleMetadata,
|
||||
BundleType,
|
||||
RADICLE_REF_PATTERNS,
|
||||
estimate_bundle_size,
|
||||
)
|
||||
|
||||
|
||||
# LXMF content type for Radicle bundles
|
||||
CONTENT_TYPE_BUNDLE = 0x52 # 'R' for Radicle — complete bundle
|
||||
CONTENT_TYPE_BUNDLE_CHUNK = 0x53 # chunked fragment of a large bundle
|
||||
CONTENT_TYPE_BUNDLE_REQUEST = 0x55
|
||||
CONTENT_TYPE_REFS_ANNOUNCE = 0x54
|
||||
|
||||
# Chunk header layout: bundle_id(16) + chunk_num(2) + total_chunks(2) = 20 bytes
|
||||
CHUNK_HEADER_SIZE = 20
|
||||
|
||||
# Maximum LXMF message size (configurable, default ~500KB for LoRa-friendly chunks)
|
||||
DEFAULT_MAX_LXMF_SIZE = 500 * 1024
|
||||
|
||||
# For LoRa, much smaller chunks
|
||||
LORA_MAX_LXMF_SIZE = 32 * 1024
|
||||
|
||||
|
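The chunk header layout above can be sketched as a standalone splitter. This is an illustrative reimplementation of the framing for clarity, not the module's own API:

```python
import hashlib
import struct

CHUNK_HEADER_SIZE = 20  # bundle_id(16) + chunk_num(2) + total_chunks(2)

def split_bundle(bundle_data: bytes, max_msg_size: int) -> list:
    """Split serialized bundle bytes into header-prefixed chunks."""
    chunk_size = max_msg_size - CHUNK_HEADER_SIZE
    # Ceiling division: the last chunk may be shorter
    total = (len(bundle_data) + chunk_size - 1) // chunk_size
    # A truncated digest of the whole bundle identifies the reassembly group
    bundle_id = hashlib.sha256(bundle_data).digest()[:16]
    return [
        struct.pack("!16sHH", bundle_id, i, total)
        + bundle_data[i * chunk_size:(i + 1) * chunk_size]
        for i in range(total)
    ]
```

With the LoRa limit of 32 KiB, a 70,000-byte bundle splits into three messages, each no larger than the limit.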


class SyncMode(Enum):
    """Sync mode based on transport capability."""
    FULL = "full"                # Fast network - send full bundles
    INCREMENTAL = "incremental"  # Slow network - send only changes
    AUTO = "auto"                # Detect based on link quality


@dataclass
class RepoSyncState:
    """Tracks sync state for a repository."""
    repository_id: str
    local_path: Path
    known_peers: Dict[str, Dict[str, str]] = field(default_factory=dict)  # peer_did -> {ref: sha}
    last_sync: Dict[str, float] = field(default_factory=dict)  # peer_did -> timestamp
    pending_bundles: List[bytes] = field(default_factory=list)


@dataclass
class RefsAnnouncement:
    """Announces current ref state for a repository."""
    repository_id: str
    node_id: str
    refs: Dict[str, str]  # {ref_name: commit_sha}
    timestamp: int

    def encode(self) -> bytes:
        """Encode to bytes for LXMF transport."""
        import struct

        repo_bytes = self.repository_id.encode("utf-8")
        node_bytes = self.node_id.encode("utf-8")

        refs_data = b""
        for ref, sha in self.refs.items():
            ref_bytes = ref.encode("utf-8")
            sha_bytes = sha.encode("utf-8")
            refs_data += struct.pack(
                f"!H{len(ref_bytes)}sH{len(sha_bytes)}s",
                len(ref_bytes), ref_bytes,
                len(sha_bytes), sha_bytes,
            )

        return struct.pack(
            f"!H{len(repo_bytes)}sH{len(node_bytes)}sQH",
            len(repo_bytes), repo_bytes,
            len(node_bytes), node_bytes,
            self.timestamp,
            len(self.refs),
        ) + refs_data

    @classmethod
    def decode(cls, data: bytes) -> "RefsAnnouncement":
        """Decode from bytes."""
        import struct
        offset = 0

        repo_len = struct.unpack("!H", data[offset:offset + 2])[0]
        offset += 2
        repository_id = data[offset:offset + repo_len].decode("utf-8")
        offset += repo_len

        node_len = struct.unpack("!H", data[offset:offset + 2])[0]
        offset += 2
        node_id = data[offset:offset + node_len].decode("utf-8")
        offset += node_len

        timestamp, refs_count = struct.unpack("!QH", data[offset:offset + 10])
        offset += 10

        refs = {}
        for _ in range(refs_count):
            ref_len = struct.unpack("!H", data[offset:offset + 2])[0]
            offset += 2
            ref = data[offset:offset + ref_len].decode("utf-8")
            offset += ref_len

            sha_len = struct.unpack("!H", data[offset:offset + 2])[0]
            offset += 2
            sha = data[offset:offset + sha_len].decode("utf-8")
            offset += sha_len

            refs[ref] = sha

        return cls(
            repository_id=repository_id,
            node_id=node_id,
            refs=refs,
            timestamp=timestamp,
        )


class SyncManager:
    """Manages repository synchronization over Reticulum.

    Features:
    - Full bundle sync for initial clone / fast networks
    - Incremental sync for LoRa / low-bandwidth links
    - LXMF store-and-forward for offline peers
    - Automatic chunking for large bundles
    """

    def __init__(
        self,
        identity: RadicleIdentity,
        storage_path: Optional[Path] = None,
        max_bundle_size: int = DEFAULT_MAX_LXMF_SIZE,
        auto_push: bool = False,
    ):
        """Initialize the sync manager.

        Args:
            identity: Local node identity
            storage_path: Path for storing sync state and pending bundles
            max_bundle_size: Maximum bundle size before chunking
            auto_push: If True, speculatively push incremental bundles to peers
                when we receive their RefsAnnouncement and we have newer data.
                This eliminates the request round-trip for common sync patterns.
        """
        self.identity = identity
        self.max_bundle_size = max_bundle_size
        self.auto_push = auto_push

        # Storage for sync state
        if storage_path is None:
            storage_path = Path.home() / ".radicle-rns"
        self.storage_path = Path(storage_path)
        self.storage_path.mkdir(parents=True, exist_ok=True)

        # Repository tracking
        self._repos: Dict[str, RepoSyncState] = {}
        self._repos_lock = threading.Lock()

        # LXMF setup (destination is created in start())
        self._lxmf_router: Optional[LXMF.LXMRouter] = None
        self._lxmf_destination = None

        # Known peers: lxmf_source_hash (bytes) -> last_seen (float)
        self._known_peers: Dict[bytes, float] = {}
        self._peers_lock = threading.Lock()

        # Chunk reassembly buffers: bundle_id (16 bytes) -> {chunk_num: data}
        self._chunk_buffers: Dict[bytes, Dict[int, bytes]] = {}
        self._chunk_totals: Dict[bytes, int] = {}  # bundle_id -> total_chunks
        self._chunk_lock = threading.Lock()

        # Callbacks
        self._on_bundle_received: Optional[Callable[[GitBundle], None]] = None
        self._on_refs_announced: Optional[Callable[[RefsAnnouncement], None]] = None

    def start(self, reticulum: Optional[RNS.Reticulum] = None):
        """Start the sync manager and LXMF router."""
        if reticulum is None:
            reticulum = RNS.Reticulum()

        # Create LXMF router for store-and-forward
        self._lxmf_router = LXMF.LXMRouter(
            identity=self.identity.rns_identity,
            storagepath=str(self.storage_path / "lxmf"),
        )

        # Create our LXMF delivery destination
        self._lxmf_destination = self._lxmf_router.register_delivery_identity(
            self.identity.rns_identity,
            display_name=f"Radicle Node {self.identity.rns_hash_hex[:8]}",
        )
        self._lxmf_destination.set_delivery_callback(self._on_lxmf_delivery)

        RNS.log("Sync manager started", RNS.LOG_INFO)
        RNS.log(f"  LXMF address: {self._lxmf_destination.hash.hex()}", RNS.LOG_INFO)

    def stop(self):
        """Stop the sync manager."""
        if self._lxmf_router:
            # TODO: LXMF router cleanup
            pass
        RNS.log("Sync manager stopped", RNS.LOG_INFO)

    def register_peer(self, destination_hash: bytes):
        """Register a known peer by their LXMF source hash.

        Peers are also learned automatically from incoming messages.
        """
        with self._peers_lock:
            self._known_peers[destination_hash] = time.time()
        RNS.log(f"Registered peer: {destination_hash.hex()}", RNS.LOG_DEBUG)

    def get_known_peers(self) -> List[bytes]:
        """Return the list of known peer hashes."""
        with self._peers_lock:
            return list(self._known_peers.keys())

    def _send_lxmf_message(
        self,
        destination_hash: bytes,
        content_type: int,
        content: bytes,
        propagate: bool = True,
    ) -> bool:
        """Send an LXMF message to a destination hash.

        Args:
            destination_hash: RNS identity hash of the recipient.
            content_type: Radicle content type constant.
            content: Message payload bytes.
            propagate: Use LXMF propagation (store-and-forward) if True.

        Returns:
            True if the message was queued successfully.
        """
        if self._lxmf_router is None or self._lxmf_destination is None:
            RNS.log("Sync manager not started, cannot send LXMF message", RNS.LOG_WARNING)
            return False

        try:
            destination_identity = RNS.Identity.recall(destination_hash)
            if destination_identity is None:
                RNS.log(f"Unknown peer identity: {destination_hash.hex()}", RNS.LOG_WARNING)
                return False

            lxmf_dest = RNS.Destination(
                destination_identity,
                RNS.Destination.OUT,
                RNS.Destination.SINGLE,
                "lxmf",
                "delivery",
            )

            message = LXMF.LXMessage(
                lxmf_dest,
                self._lxmf_destination,
                content,
                desired_method=(
                    LXMF.LXMessage.PROPAGATED if propagate else LXMF.LXMessage.DIRECT
                ),
            )
            message.fields[LXMF.FIELD_CUSTOM_TYPE] = content_type
            self._lxmf_router.handle_outbound(message)
            return True

        except Exception as e:
            RNS.log(f"Failed to send LXMF message: {e}", RNS.LOG_ERROR)
            return False

    def register_repository(self, repository_id: str, local_path: Path) -> RepoSyncState:
        """Register a repository for syncing.

        Args:
            repository_id: Radicle repo ID (rad:z...)
            local_path: Path to the local Git repository
        """
        with self._repos_lock:
            if repository_id in self._repos:
                return self._repos[repository_id]

            state = RepoSyncState(
                repository_id=repository_id,
                local_path=Path(local_path),
            )
            self._repos[repository_id] = state

        RNS.log(f"Registered repository: {repository_id}", RNS.LOG_INFO)
        return state

    def announce_refs(self, repository_id: str) -> bool:
        """Announce current refs for a repository.

        This lets peers determine whether they need updates.
        """
        with self._repos_lock:
            if repository_id not in self._repos:
                return False
            state = self._repos[repository_id]

        try:
            generator = GitBundleGenerator(state.local_path)
            refs = generator.get_refs()

            announcement = RefsAnnouncement(
                repository_id=repository_id,
                node_id=self.identity.did,
                refs=refs,
                timestamp=int(time.time() * 1000),
            )
            ann_bytes = announcement.encode()

            # Broadcast to all known peers via LXMF (store-and-forward)
            peers = self.get_known_peers()
            if not peers:
                RNS.log(
                    f"Refs announcement: {repository_id} ({len(refs)} refs) - no peers known",
                    RNS.LOG_DEBUG,
                )
                return True

            sent = sum(
                1 for peer_hash in peers
                if self._send_lxmf_message(
                    peer_hash, CONTENT_TYPE_REFS_ANNOUNCE, ann_bytes, propagate=True
                )
            )
            RNS.log(
                f"Refs announcement: {repository_id} ({len(refs)} refs) "
                f"sent to {sent}/{len(peers)} peers",
                RNS.LOG_INFO,
            )
            return True
        except Exception as e:
            RNS.log(f"Failed to announce refs: {e}", RNS.LOG_ERROR)
            return False

    def create_sync_bundle(
        self,
        repository_id: str,
        peer_refs: Optional[Dict[str, str]] = None,
        mode: SyncMode = SyncMode.AUTO,
    ) -> Optional[GitBundle]:
        """Create a bundle for syncing to a peer.

        Args:
            repository_id: Repository to sync
            peer_refs: Known refs at the peer (for incremental sync)
            mode: Sync mode (full, incremental, or auto)

        Returns:
            GitBundle ready for transport, or None if there are no changes
        """
        with self._repos_lock:
            if repository_id not in self._repos:
                raise ValueError(f"Repository not registered: {repository_id}")
            state = self._repos[repository_id]

        generator = GitBundleGenerator(state.local_path)

        # Decide sync mode: with known peer state we can go incremental
        if mode == SyncMode.AUTO:
            mode = SyncMode.INCREMENTAL if peer_refs else SyncMode.FULL

        if mode == SyncMode.FULL:
            return generator.create_full_bundle(
                repository_id=repository_id,
                source_node=self.identity.did,
            )

        return generator.create_incremental_bundle(
            repository_id=repository_id,
            source_node=self.identity.did,
            basis_refs=peer_refs or {},
        )

    def send_bundle(
        self,
        bundle: GitBundle,
        destination_hash: bytes,
        propagate: bool = True,
    ) -> bool:
        """Send a bundle to a peer via LXMF.

        Args:
            bundle: The bundle to send
            destination_hash: RNS destination hash of the recipient
            propagate: If True, use LXMF propagation for offline delivery
        """
        if self._lxmf_router is None:
            raise RuntimeError("Sync manager not started")

        try:
            # Encode bundle for transport
            bundle_data = bundle.encode()

            # Chunk if the bundle exceeds the single-message limit
            if len(bundle_data) > self.max_bundle_size:
                return self._send_chunked_bundle(bundle_data, destination_hash, propagate)

            if not self._send_lxmf_message(
                destination_hash, CONTENT_TYPE_BUNDLE, bundle_data, propagate
            ):
                return False

            RNS.log(
                f"Sent bundle ({len(bundle_data)} bytes) to {destination_hash.hex()}",
                RNS.LOG_INFO,
            )
            return True

        except Exception as e:
            RNS.log(f"Failed to send bundle: {e}", RNS.LOG_ERROR)
            return False

    def _send_chunked_bundle(
        self,
        bundle_data: bytes,
        destination_hash: bytes,
        propagate: bool,
    ) -> bool:
        """Send a large bundle as sequential LXMF chunks.

        Each chunk has a 20-byte header: bundle_id(16) + chunk_num(2) + total(2).
        The receiver reassembles chunks ordered by chunk_num.
        """
        import struct

        chunk_size = self.max_bundle_size - CHUNK_HEADER_SIZE
        total_chunks = (len(bundle_data) + chunk_size - 1) // chunk_size
        bundle_id = hashlib.sha256(bundle_data).digest()[:16]

        RNS.log(
            f"Sending bundle in {total_chunks} chunks ({len(bundle_data)} bytes)",
            RNS.LOG_INFO,
        )

        for i in range(total_chunks):
            start = i * chunk_size
            end = min(start + chunk_size, len(bundle_data))
            chunk_data = bundle_data[start:end]

            chunk_msg = struct.pack("!16sHH", bundle_id, i, total_chunks) + chunk_data

            if not self._send_lxmf_message(
                destination_hash, CONTENT_TYPE_BUNDLE_CHUNK, chunk_msg, propagate
            ):
                RNS.log(f"Failed to send chunk {i + 1}/{total_chunks}", RNS.LOG_WARNING)
                return False

            RNS.log(f"  Chunk {i + 1}/{total_chunks} ({len(chunk_data)} bytes)", RNS.LOG_DEBUG)

        return True

    def apply_bundle(self, bundle: GitBundle) -> Dict[str, str]:
        """Apply a received bundle to the local repository.

        Returns a dict of applied refs.
        """
        with self._repos_lock:
            repo_id = bundle.metadata.repository_id
            if repo_id not in self._repos:
                raise ValueError(f"Repository not registered: {repo_id}")
            state = self._repos[repo_id]

        applicator = GitBundleApplicator(state.local_path)

        # Verify before applying
        ok, msg = applicator.verify_bundle(bundle)
        if not ok:
            raise ValueError(f"Bundle verification failed: {msg}")

        applied_refs = applicator.apply_bundle(bundle)

        # Update known peer state
        source_node = bundle.metadata.source_node
        with self._repos_lock:
            state.known_peers[source_node] = applied_refs
            state.last_sync[source_node] = time.time()

        RNS.log(f"Applied bundle from {source_node}: {len(applied_refs)} refs", RNS.LOG_INFO)
        return applied_refs

    def _on_lxmf_delivery(self, message: LXMF.LXMessage):
        """Handle an incoming LXMF message."""
        # Learn the peer's address for future announcements
        if getattr(message, "source_hash", None):
            with self._peers_lock:
                self._known_peers[message.source_hash] = time.time()

        content_type = message.fields.get(LXMF.FIELD_CUSTOM_TYPE, 0)

        if content_type == CONTENT_TYPE_BUNDLE:
            self._handle_bundle_message(message.content)

        elif content_type == CONTENT_TYPE_BUNDLE_CHUNK:
            self._handle_chunk_message(message.content)

        elif content_type == CONTENT_TYPE_REFS_ANNOUNCE:
            try:
                announcement = RefsAnnouncement.decode(message.content)
                RNS.log(
                    f"Received refs announcement for {announcement.repository_id}",
                    RNS.LOG_DEBUG,
                )
                if self._on_refs_announced:
                    self._on_refs_announced(announcement)
                if self.auto_push and message.source_hash:
                    self._maybe_push_to_peer(message.source_hash, announcement)
            except Exception as e:
                RNS.log(f"Failed to process refs announcement: {e}", RNS.LOG_ERROR)

    def _handle_bundle_message(self, content: bytes):
        """Process a complete bundle payload."""
        try:
            bundle = GitBundle.decode(content)
            RNS.log(f"Received bundle for {bundle.metadata.repository_id}", RNS.LOG_INFO)

            if self._on_bundle_received:
                self._on_bundle_received(bundle)
            elif bundle.metadata.repository_id in self._repos:
                self.apply_bundle(bundle)
        except Exception as e:
            RNS.log(f"Failed to process bundle: {e}", RNS.LOG_ERROR)

    def _handle_chunk_message(self, content: bytes):
        """Reassemble a chunked bundle fragment."""
        import struct

        if len(content) < CHUNK_HEADER_SIZE:
            RNS.log("Received malformed chunk (too short)", RNS.LOG_WARNING)
            return

        bundle_id, chunk_num, total_chunks = struct.unpack("!16sHH", content[:CHUNK_HEADER_SIZE])
        chunk_data = content[CHUNK_HEADER_SIZE:]

        with self._chunk_lock:
            if bundle_id not in self._chunk_buffers:
                self._chunk_buffers[bundle_id] = {}
                self._chunk_totals[bundle_id] = total_chunks

            self._chunk_buffers[bundle_id][chunk_num] = chunk_data
            RNS.log(
                f"Chunk {chunk_num + 1}/{total_chunks} received for bundle {bundle_id.hex()[:8]}",
                RNS.LOG_DEBUG,
            )

            if len(self._chunk_buffers[bundle_id]) != total_chunks:
                return

            # All chunks received - reassemble in order
            ordered = [self._chunk_buffers[bundle_id][i] for i in range(total_chunks)]
            bundle_data = b"".join(ordered)
            del self._chunk_buffers[bundle_id]
            del self._chunk_totals[bundle_id]

        RNS.log(f"All {total_chunks} chunks received, reassembling bundle", RNS.LOG_INFO)
        self._handle_bundle_message(bundle_data)
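The reassembly step performed by the chunk handler can likewise be sketched standalone; this illustrative helper mirrors the header parsing and ordered join, and is not the class's own method:

```python
import struct

CHUNK_HEADER_SIZE = 20  # bundle_id(16) + chunk_num(2) + total_chunks(2)

def reassemble_chunks(messages: list) -> bytes:
    """Reorder header-prefixed chunks by chunk_num and join their payloads."""
    buffers = {}
    total = None
    for msg in messages:
        if len(msg) < CHUNK_HEADER_SIZE:
            raise ValueError("malformed chunk")
        _bundle_id, num, tot = struct.unpack("!16sHH", msg[:CHUNK_HEADER_SIZE])
        buffers[num] = msg[CHUNK_HEADER_SIZE:]
        total = tot
    if total is None or len(buffers) != total:
        raise ValueError("missing chunks")
    # Chunks may arrive in any order; the header's chunk_num restores it
    return b"".join(buffers[i] for i in range(total))
```

Because LXMF store-and-forward gives no ordering guarantees, the chunk_num field in the header, not arrival order, determines the reassembled byte sequence.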

    # ------------------------------------------------------------------
    # Phase 4: speculative push
    # ------------------------------------------------------------------

    def _should_push_to_peer(
        self,
        our_refs: Dict[str, str],
        peer_refs: Dict[str, str],
    ) -> bool:
        """Return True if we have commits the peer doesn't.

        True when any ref we hold differs from what the peer announced,
        or when we have refs entirely absent from their announcement.
        """
        for ref, sha in our_refs.items():
            if peer_refs.get(ref) != sha:
                return True
        return False

    def _maybe_push_to_peer(
        self,
        source_hash: bytes,
        announcement: RefsAnnouncement,
    ):
        """Speculatively push an incremental bundle to a peer if we have newer data.

        Called when auto_push=True and we receive a RefsAnnouncement. We
        compute an incremental bundle relative to the peer's announced refs.
        If the repo produces a non-empty bundle we send it immediately,
        eliminating the request round-trip that would otherwise be needed.
        """
        with self._repos_lock:
            state = self._repos.get(announcement.repository_id)
        if state is None:
            return

        try:
            generator = GitBundleGenerator(state.local_path)
            our_refs = generator.get_refs()

            if not self._should_push_to_peer(our_refs, announcement.refs):
                RNS.log(
                    f"Speculative push skipped: peer {source_hash.hex()[:8]} "
                    f"is up-to-date for {announcement.repository_id}",
                    RNS.LOG_DEBUG,
                )
                return

            bundle = generator.create_incremental_bundle(
                repository_id=announcement.repository_id,
                source_node=self.identity.did,
                basis_refs=announcement.refs,
            )
            if bundle is None:
                return  # git reported no changes despite differing refs

            bundle_data = bundle.encode()
            if len(bundle_data) > self.max_bundle_size:
                success = self._send_chunked_bundle(bundle_data, source_hash, propagate=True)
            else:
                success = self._send_lxmf_message(
                    source_hash, CONTENT_TYPE_BUNDLE, bundle_data, propagate=True
                )

            if success:
                RNS.log(
                    f"Speculative push: sent {len(bundle_data)}-byte incremental bundle "
                    f"to {source_hash.hex()[:8]} for {announcement.repository_id}",
                    RNS.LOG_INFO,
                )

        except Exception as e:
            RNS.log(f"Speculative push failed: {e}", RNS.LOG_WARNING)

    def set_on_bundle_received(self, callback: Callable[[GitBundle], None]):
        """Set the callback for received bundles."""
        self._on_bundle_received = callback

    def set_on_refs_announced(self, callback: Callable[[RefsAnnouncement], None]):
        """Set the callback for refs announcements."""
        self._on_refs_announced = callback

    def get_sync_status(self, repository_id: str) -> Optional[Dict]:
        """Get sync status for a repository."""
        with self._repos_lock:
            if repository_id not in self._repos:
                return None
            state = self._repos[repository_id]

        return {
            "repository_id": repository_id,
            "local_path": str(state.local_path),
            "known_peers": len(state.known_peers),
            "last_syncs": {
                peer: time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts))
                for peer, ts in state.last_sync.items()
            },
        }


def create_dead_drop_bundle(
    repo_path: Path,
    repository_id: str,
    source_node: str,
    output_path: Path,
    incremental_basis: Optional[Dict[str, str]] = None,
) -> GitBundle:
    """Create a bundle file for dead-drop transfer (USB stick, etc.).

    Args:
        repo_path: Path to the Git repository
        repository_id: Radicle repo ID
        source_node: Source node DID
        output_path: Where to write the bundle file
        incremental_basis: Known refs for an incremental bundle

    Returns:
        The created GitBundle

    Raises:
        ValueError: If there are no changes to bundle.
    """
    generator = GitBundleGenerator(repo_path)

    if incremental_basis:
        bundle = generator.create_incremental_bundle(
            repository_id=repository_id,
            source_node=source_node,
            basis_refs=incremental_basis,
            output_path=output_path,
        )
    else:
        bundle = generator.create_full_bundle(
            repository_id=repository_id,
            source_node=source_node,
            output_path=output_path,
        )

    if bundle is None:
        raise ValueError("No changes to bundle")

    # Also save the full transport format alongside the raw bundle
    transport_path = output_path.with_suffix(".radicle-bundle")
    transport_path.write_bytes(bundle.encode())

    RNS.log(f"Created dead-drop bundle: {output_path}", RNS.LOG_INFO)
    RNS.log(f"  Type: {bundle.metadata.bundle_type.value}", RNS.LOG_INFO)
    RNS.log(f"  Size: {bundle.metadata.size_bytes} bytes", RNS.LOG_INFO)
    RNS.log(f"  Refs: {len(bundle.metadata.refs_included)}", RNS.LOG_INFO)

    return bundle


def apply_dead_drop_bundle(
    bundle_path: Path,
    repo_path: Path,
) -> Dict[str, str]:
    """Apply a dead-drop bundle to a repository.

    Args:
        bundle_path: Path to the .radicle-bundle file
        repo_path: Path to the target Git repository

    Returns:
        Dict of applied refs
    """
    # Load and decode the transport-format bundle
    bundle_data = bundle_path.read_bytes()
    bundle = GitBundle.decode(bundle_data)

    # Apply
    applicator = GitBundleApplicator(repo_path)
    applied = applicator.apply_bundle(bundle)

    RNS.log(f"Applied dead-drop bundle: {len(applied)} refs", RNS.LOG_INFO)
    return applied
@@ -0,0 +1 @@
"""Tests for radicle-reticulum."""
@@ -0,0 +1,206 @@
"""Tests for RNSTransportAdapter peer discovery logic (RNS networking mocked)."""

import time
from unittest.mock import MagicMock, patch, call

import pytest
import RNS

from radicle_reticulum.adapter import (
    PeerInfo,
    RNSTransportAdapter,
    NODE_APP_DATA_MAGIC,
    REPO_APP_DATA_MAGIC,
)
from radicle_reticulum.identity import RadicleIdentity


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _make_dest_mock(hash_bytes: bytes = b"\xaa" * 16) -> MagicMock:
    dest = MagicMock()
    dest.hash = hash_bytes
    dest.hexhash = hash_bytes.hex()
    return dest


def _make_adapter() -> RNSTransportAdapter:
    """Instantiate the adapter with all RNS I/O patched out."""
    identity = RadicleIdentity.generate()
    dest_mock = _make_dest_mock()

    with patch("radicle_reticulum.adapter.RNS.Reticulum"), \
         patch("radicle_reticulum.adapter.RNS.Destination", return_value=dest_mock), \
         patch("radicle_reticulum.adapter.RNS.log"):
        adapter = RNSTransportAdapter(identity=identity)

    return adapter


# ---------------------------------------------------------------------------
# PeerInfo
# ---------------------------------------------------------------------------

class TestPeerInfo:
    def test_age_increases_over_time(self):
        identity = RadicleIdentity.generate()
        peer = PeerInfo(
            identity=identity,
            destination_hash=b"\x01" * 16,
            last_seen=time.time() - 5.0,
        )
        assert peer.age >= 5.0

    def test_age_near_zero_for_fresh_peer(self):
        identity = RadicleIdentity.generate()
        peer = PeerInfo(
            identity=identity,
            destination_hash=b"\x02" * 16,
            last_seen=time.time(),
        )
        assert peer.age < 1.0

    def test_announced_repos_defaults_empty(self):
        identity = RadicleIdentity.generate()
        peer = PeerInfo(identity=identity, destination_hash=b"\x03" * 16, last_seen=0)
        assert peer.announced_repos == set()


# ---------------------------------------------------------------------------
# Adapter construction & peer list
# ---------------------------------------------------------------------------

class TestAdapterConstruction:
    def test_identity_is_stored(self):
        adapter = _make_adapter()
        assert adapter.identity is not None
        assert adapter.identity.did.startswith("did:key:")

    def test_initial_peer_list_is_empty(self):
        adapter = _make_adapter()
        assert adapter.get_peers() == []

    def test_get_peer_by_did_returns_none_when_absent(self):
        adapter = _make_adapter()
        assert adapter.get_peer_by_did("did:key:z6Mkunknown") is None


# ---------------------------------------------------------------------------
# _handle_announce
# ---------------------------------------------------------------------------

class TestHandleAnnounce:
    def _make_rns_identity(self) -> RNS.Identity:
        """Create a real RNS.Identity (keypair only, no networking)."""
        return RNS.Identity()

    def test_ignores_own_announce(self):
        adapter = _make_adapter()
        own_hash = adapter.node_destination.hash  # the mock's hash

        discovered = []
        adapter.set_on_peer_discovered(discovered.append)

        rns_id = self._make_rns_identity()
        with patch("radicle_reticulum.adapter.RNS.log"):
            adapter._handle_announce(own_hash, rns_id, NODE_APP_DATA_MAGIC)

        assert discovered == []
        assert adapter.get_peers() == []

    def test_ignores_non_radicle_announce(self):
        adapter = _make_adapter()
        rns_id = self._make_rns_identity()
        foreign_hash = b"\xbb" * 16

        discovered = []
        adapter.set_on_peer_discovered(discovered.append)
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(foreign_hash, rns_id, b"SOME_OTHER_APP")
|
||||
|
||||
assert discovered == []
|
||||
|
||||
def test_ignores_announce_with_no_app_data(self):
|
||||
adapter = _make_adapter()
|
||||
rns_id = self._make_rns_identity()
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(b"\xcc" * 16, rns_id, None)
|
||||
|
||||
assert adapter.get_peers() == []
|
||||
|
||||
def test_discovers_valid_peer(self):
|
||||
adapter = _make_adapter()
|
||||
rns_id = self._make_rns_identity()
|
||||
peer_hash = b"\xdd" * 16
|
||||
|
||||
discovered = []
|
||||
adapter.set_on_peer_discovered(discovered.append)
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(peer_hash, rns_id, NODE_APP_DATA_MAGIC)
|
||||
|
||||
assert len(discovered) == 1
|
||||
peers = adapter.get_peers()
|
||||
assert len(peers) == 1
|
||||
assert peers[0].destination_hash == peer_hash
|
||||
|
||||
def test_second_announce_updates_last_seen_not_duplicates(self):
|
||||
adapter = _make_adapter()
|
||||
rns_id = self._make_rns_identity()
|
||||
peer_hash = b"\xee" * 16
|
||||
|
||||
discovered = []
|
||||
adapter.set_on_peer_discovered(discovered.append)
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(peer_hash, rns_id, NODE_APP_DATA_MAGIC)
|
||||
old_seen = adapter.get_peers()[0].last_seen
|
||||
time.sleep(0.01)
|
||||
adapter._handle_announce(peer_hash, rns_id, NODE_APP_DATA_MAGIC)
|
||||
|
||||
# Callback only fires once (first discovery)
|
||||
assert len(discovered) == 1
|
||||
# But last_seen was refreshed
|
||||
assert adapter.get_peers()[0].last_seen >= old_seen
|
||||
|
||||
def test_multiple_distinct_peers_are_all_tracked(self):
|
||||
adapter = _make_adapter()
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
for i in range(3):
|
||||
rns_id = self._make_rns_identity()
|
||||
peer_hash = bytes([i]) * 16
|
||||
adapter._handle_announce(peer_hash, rns_id, NODE_APP_DATA_MAGIC)
|
||||
|
||||
assert len(adapter.get_peers()) == 3
|
||||
|
||||
def test_get_peer_by_did_returns_correct_peer(self):
|
||||
adapter = _make_adapter()
|
||||
rns_id = self._make_rns_identity()
|
||||
peer_hash = b"\xff" * 16
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(peer_hash, rns_id, NODE_APP_DATA_MAGIC)
|
||||
|
||||
peers = adapter.get_peers()
|
||||
did = peers[0].identity.did
|
||||
found = adapter.get_peer_by_did(did)
|
||||
assert found is not None
|
||||
assert found.destination_hash == peer_hash
|
||||
|
||||
def test_extra_app_data_after_magic_still_accepted(self):
|
||||
"""Announces with extra bytes after the magic prefix are valid."""
|
||||
adapter = _make_adapter()
|
||||
rns_id = self._make_rns_identity()
|
||||
peer_hash = b"\x11" * 16
|
||||
|
||||
with patch("radicle_reticulum.adapter.RNS.log"):
|
||||
adapter._handle_announce(
|
||||
peer_hash, rns_id, NODE_APP_DATA_MAGIC + b"\x00\x01extra"
|
||||
)
|
||||
|
||||
assert len(adapter.get_peers()) == 1
|
||||
|
|
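The announce-handling tests above pin down a small filtering contract: ignore our own announces, ignore announces with no app data, require the node magic prefix, and tolerate trailing bytes after it. That contract can be sketched as a standalone predicate — a hypothetical illustration only, where `NODE_MAGIC` is a placeholder value rather than the package's actual `NODE_APP_DATA_MAGIC`:

```python
# Hypothetical sketch of the announce filtering exercised by the tests;
# the real RNSTransportAdapter._handle_announce also records the peer.
from typing import Optional

NODE_MAGIC = b"radicle/node"  # placeholder, not the real NODE_APP_DATA_MAGIC


def filter_announce(own_hash: bytes, dest_hash: bytes,
                    app_data: Optional[bytes]) -> bool:
    """Return True if an announce should register (or refresh) a peer."""
    if dest_hash == own_hash:
        return False  # our own announce echoed back
    if app_data is None:
        return False  # announce carries no application data
    if not app_data.startswith(NODE_MAGIC):
        return False  # some other Reticulum application
    return True       # trailing bytes after the magic are allowed
```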
@ -0,0 +1,153 @@
"""Tests for adaptive sync strategy selection."""

import pytest
from radicle_reticulum.adaptive import (
    SyncStrategy,
    LinkQuality,
    estimate_transfer_time,
    select_strategy,
    THRESHOLD_FULL,
    THRESHOLD_INCREMENTAL,
    RTT_FAST,
    RTT_MEDIUM,
    RTT_SLOW,
    QR_MAX_BYTES,
)


def make_quality(
    rtt_ms: float = 50,
    throughput_bps: float = 1_000_000,
    is_lora: bool = False,
    strategy: SyncStrategy = SyncStrategy.FULL,
) -> LinkQuality:
    return LinkQuality(
        rtt_ms=rtt_ms,
        throughput_bps=throughput_bps,
        packet_loss=0.0,
        is_lora=is_lora,
        strategy=strategy,
    )


class TestLinkQuality:
    def test_throughput_kbps(self):
        q = make_quality(throughput_bps=50_000)
        assert q.throughput_kbps == pytest.approx(50.0)

    def test_repr(self):
        q = make_quality(rtt_ms=100, throughput_bps=1000)
        r = repr(q)
        assert "100ms" in r
        assert "1.0Kbps" in r


class TestEstimateTransferTime:
    def test_zero_throughput_is_infinite(self):
        q = make_quality(throughput_bps=0)
        assert estimate_transfer_time(1024, q) == float("inf")

    def test_known_value(self):
        # 1 MB at 1 Mbps effective = 8s, but 80% efficiency → 10s
        q = make_quality(throughput_bps=1_000_000)
        t = estimate_transfer_time(1_000_000, q)
        assert t == pytest.approx(10.0, rel=0.01)

    def test_larger_file_takes_longer(self):
        q = make_quality(throughput_bps=10_000)
        t_small = estimate_transfer_time(1_000, q)
        t_large = estimate_transfer_time(100_000, q)
        assert t_large > t_small


class TestSelectStrategy:
    def test_fast_link_small_repo_uses_full(self):
        q = make_quality(rtt_ms=10, throughput_bps=10_000_000, strategy=SyncStrategy.FULL)
        strategy, reason = select_strategy(
            bundle_size=100_000,
            incremental_size=None,
            quality=q,
        )
        assert strategy == SyncStrategy.FULL
        assert "full" in reason.lower() or "fast" in reason.lower()

    def test_lora_link_prefers_incremental_when_viable(self):
        q = make_quality(rtt_ms=5000, throughput_bps=2400, is_lora=True, strategy=SyncStrategy.MINIMAL)
        strategy, reason = select_strategy(
            bundle_size=500_000,
            incremental_size=5_000,
            quality=q,
            max_transfer_time=3600,
        )
        assert strategy == SyncStrategy.INCREMENTAL

    def test_lora_link_falls_back_to_minimal_when_too_large(self):
        q = make_quality(rtt_ms=5000, throughput_bps=100, is_lora=True, strategy=SyncStrategy.MINIMAL)
        strategy, reason = select_strategy(
            bundle_size=10_000_000,
            incremental_size=5_000_000,
            quality=q,
            max_transfer_time=3600,
        )
        assert strategy == SyncStrategy.MINIMAL

    def test_tiny_incremental_uses_qr(self):
        q = make_quality(rtt_ms=10, throughput_bps=10_000_000, strategy=SyncStrategy.FULL)
        strategy, reason = select_strategy(
            bundle_size=50_000,
            incremental_size=QR_MAX_BYTES - 1,
            quality=q,
        )
        assert strategy == SyncStrategy.QR
        assert str(QR_MAX_BYTES - 1) in reason

    def test_qr_not_selected_when_incremental_too_large(self):
        q = make_quality(rtt_ms=10, throughput_bps=10_000_000, strategy=SyncStrategy.FULL)
        strategy, _ = select_strategy(
            bundle_size=50_000,
            incremental_size=QR_MAX_BYTES + 1,
            quality=q,
        )
        assert strategy != SyncStrategy.QR

    def test_medium_link_prefers_incremental_over_full(self):
        q = make_quality(
            rtt_ms=500,
            throughput_bps=50_000,
            strategy=SyncStrategy.INCREMENTAL,
        )
        strategy, _ = select_strategy(
            bundle_size=5_000_000,
            incremental_size=50_000,
            quality=q,
            max_transfer_time=3600,
        )
        assert strategy == SyncStrategy.INCREMENTAL

    def test_no_incremental_available_falls_back_to_full_or_minimal(self):
        q = make_quality(rtt_ms=50, throughput_bps=10_000_000, strategy=SyncStrategy.FULL)
        strategy, _ = select_strategy(
            bundle_size=100_000,
            incremental_size=None,
            quality=q,
        )
        assert strategy in (SyncStrategy.FULL, SyncStrategy.INCREMENTAL)

    def test_unreachably_slow_link_uses_minimal(self):
        q = make_quality(rtt_ms=30000, throughput_bps=10, strategy=SyncStrategy.MINIMAL)
        strategy, _ = select_strategy(
            bundle_size=10_000_000,
            incremental_size=None,
            quality=q,
            max_transfer_time=3600,
        )
        assert strategy == SyncStrategy.MINIMAL

    def test_fast_link_large_repo_prefers_incremental_if_faster(self):
        q = make_quality(rtt_ms=10, throughput_bps=500_000, strategy=SyncStrategy.FULL)
        strategy, _ = select_strategy(
            bundle_size=100_000_000,
            incremental_size=100_000,
            quality=q,
        )
        assert strategy == SyncStrategy.INCREMENTAL
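Taken together, the strategy tests above imply a decision order: a QR bundle when the delta fits in a single code, then an incremental bundle if it fits the time budget, then a full bundle, and refs-only MINIMAL as the last resort. A minimal sketch of that order, assuming the 80%-efficiency transfer-time model fixed by `test_known_value` — hypothetical, not the actual `select_strategy` implementation:

```python
# Hypothetical decision order implied by the tests; the real
# radicle_reticulum.adaptive.select_strategy may weigh more factors.
QR_MAX_BYTES = 2953  # largest single binary QR payload (version 40)


def select_strategy_sketch(bundle_size: int, incremental_size,
                           throughput_bps: float,
                           max_transfer_time: float = 3600) -> str:
    def transfer_time(size: int) -> float:
        if throughput_bps <= 0:
            return float("inf")
        # bits-per-second link, assumed ~80% effective efficiency
        return size / (throughput_bps / 8 * 0.8)

    if incremental_size is not None and incremental_size <= QR_MAX_BYTES:
        return "QR"            # fits in a single air-gappable QR code
    if incremental_size is not None and \
            transfer_time(incremental_size) <= max_transfer_time:
        return "INCREMENTAL"   # delta fits the time budget
    if transfer_time(bundle_size) <= max_transfer_time:
        return "FULL"          # whole history is still affordable
    return "MINIMAL"           # link too slow: refs/metadata only
```

Checking it against the scenarios above: a 500 KB repo with a 5 KB delta over a 2400 bps LoRa link yields INCREMENTAL, while a 10 MB repo over a 100 bps link falls through to MINIMAL.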
@ -0,0 +1,299 @@
"""Tests for RadicleBridge announce filtering and state logic (RNS mocked)."""

import struct
import time
from unittest.mock import MagicMock, patch, call

import pytest
import RNS

from radicle_reticulum.bridge import (
    RadicleBridge,
    BRIDGE_APP_DATA_MAGIC,
    TunnelConnection,
)
from radicle_reticulum.identity import RadicleIdentity


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _make_bridge(auto_connect: bool = False, auto_seed: bool = False) -> RadicleBridge:
    """Instantiate bridge with all RNS I/O patched out."""
    identity = RadicleIdentity.generate()
    dest_mock = MagicMock()
    dest_mock.hash = b"\xaa" * 16
    dest_mock.hexhash = "aa" * 16

    with patch("radicle_reticulum.bridge.RNS.Reticulum"), \
         patch("radicle_reticulum.bridge.RNS.Destination", return_value=dest_mock), \
         patch("radicle_reticulum.bridge.RNS.Transport"), \
         patch("radicle_reticulum.bridge.RNS.log"):
        bridge = RadicleBridge(
            identity=identity,
            auto_connect=auto_connect,
            auto_seed=auto_seed,
        )
    return bridge


def _nid_app_data(nid: str) -> bytes:
    """Build bridge app_data bytes that include a radicle NID."""
    nid_bytes = nid.encode("utf-8")
    return BRIDGE_APP_DATA_MAGIC + struct.pack("!H", len(nid_bytes)) + nid_bytes


# ---------------------------------------------------------------------------
# TunnelConnection
# ---------------------------------------------------------------------------

class TestTunnelConnection:
    def test_close_closes_tcp_socket(self):
        mock_socket = MagicMock()
        mock_link = MagicMock()
        tunnel = TunnelConnection(
            tunnel_id=1,
            tcp_socket=mock_socket,
            rns_link=mock_link,
            remote_destination=b"\x00" * 16,
        )
        tunnel.close()
        mock_socket.close.assert_called_once()
        mock_link.teardown.assert_called_once()
        assert not tunnel.active

    def test_close_tolerates_already_closed_socket(self):
        mock_socket = MagicMock()
        mock_socket.close.side_effect = OSError("already closed")
        tunnel = TunnelConnection(
            tunnel_id=2,
            tcp_socket=mock_socket,
            rns_link=None,
            remote_destination=None,
        )
        tunnel.close()  # should not raise


# ---------------------------------------------------------------------------
# Bridge construction
# ---------------------------------------------------------------------------

class TestBridgeConstruction:
    def test_initial_state_is_empty(self):
        bridge = _make_bridge()
        assert bridge.get_remote_bridges() == []
        stats = bridge.get_stats()
        assert stats["active_tunnels"] == 0
        assert stats["known_bridges"] == 0

    def test_local_nid_is_none_initially(self):
        bridge = _make_bridge()
        assert bridge._local_radicle_nid is None

    def test_set_local_radicle_nid(self):
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge.set_local_radicle_nid("z6Mktest")
        assert bridge._local_radicle_nid == "z6Mktest"

    def test_get_remote_bridge_nid_returns_none_for_unknown(self):
        bridge = _make_bridge()
        assert bridge.get_remote_bridge_nid(b"\x01" * 16) is None


# ---------------------------------------------------------------------------
# _handle_announce
# ---------------------------------------------------------------------------

class TestBridgeHandleAnnounce:
    def _rns_id(self):
        return RNS.Identity()

    def test_ignores_own_announce(self):
        bridge = _make_bridge()
        own_hash = bridge.destination.hash  # mock's b"\xaa" * 16

        discovered = []
        bridge.set_on_bridge_discovered(lambda h, n: discovered.append(h))

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(own_hash, self._rns_id(), BRIDGE_APP_DATA_MAGIC)

        assert discovered == []
        assert bridge.get_remote_bridges() == []

    def test_ignores_non_bridge_announce(self):
        bridge = _make_bridge()

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(b"\xbb" * 16, self._rns_id(), b"SOME_OTHER_APP")

        assert bridge.get_remote_bridges() == []

    def test_ignores_announce_with_no_app_data(self):
        bridge = _make_bridge()

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(b"\xcc" * 16, self._rns_id(), None)

        assert bridge.get_remote_bridges() == []

    def test_discovers_bridge_with_magic_only(self):
        bridge = _make_bridge()
        peer_hash = b"\xdd" * 16
        discovered = []
        bridge.set_on_bridge_discovered(lambda h, n: discovered.append((h, n)))

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(peer_hash, self._rns_id(), BRIDGE_APP_DATA_MAGIC)

        assert len(discovered) == 1
        assert discovered[0][0] == peer_hash
        assert discovered[0][1] is None
        assert peer_hash in bridge.get_remote_bridges()

    def test_extracts_nid_from_app_data(self):
        bridge = _make_bridge()
        peer_hash = b"\xee" * 16
        nid = "z6MktestNID123"
        discovered = []
        bridge.set_on_bridge_discovered(lambda h, n: discovered.append((h, n)))

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(peer_hash, self._rns_id(), _nid_app_data(nid))

        assert discovered[0][1] == nid
        assert bridge.get_remote_bridge_nid(peer_hash) == nid

    def test_second_announce_does_not_fire_callback_again(self):
        bridge = _make_bridge()
        peer_hash = b"\xff" * 16
        discovered = []
        bridge.set_on_bridge_discovered(lambda h, n: discovered.append(h))

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(peer_hash, self._rns_id(), BRIDGE_APP_DATA_MAGIC)
            bridge._handle_announce(peer_hash, self._rns_id(), BRIDGE_APP_DATA_MAGIC)

        assert len(discovered) == 1
        assert len(bridge.get_remote_bridges()) == 1

    def test_multiple_distinct_bridges_all_tracked(self):
        bridge = _make_bridge()

        with patch("radicle_reticulum.bridge.RNS.log"):
            for i in range(4):
                bridge._handle_announce(
                    bytes([i]) * 16, self._rns_id(), BRIDGE_APP_DATA_MAGIC
                )

        assert len(bridge.get_remote_bridges()) == 4

    def test_auto_connect_spawns_thread(self):
        bridge = _make_bridge(auto_connect=True)
        peer_hash = b"\x11" * 16

        with patch("radicle_reticulum.bridge.RNS.log"), \
             patch("radicle_reticulum.bridge.threading.Thread") as mock_thread:
            mock_t = MagicMock()
            mock_thread.return_value = mock_t
            bridge._handle_announce(peer_hash, self._rns_id(), BRIDGE_APP_DATA_MAGIC)

        mock_t.start.assert_called()

    def test_auto_seed_spawns_thread_when_nid_present(self):
        bridge = _make_bridge(auto_seed=True)
        peer_hash = b"\x22" * 16
        nid = "z6MkAutoSeed"

        with patch("radicle_reticulum.bridge.RNS.log"), \
             patch("radicle_reticulum.bridge.threading.Thread") as mock_thread:
            mock_t = MagicMock()
            mock_thread.return_value = mock_t
            bridge._handle_announce(peer_hash, self._rns_id(), _nid_app_data(nid))

        mock_t.start.assert_called()

    def test_malformed_nid_in_app_data_is_ignored_gracefully(self):
        bridge = _make_bridge()
        peer_hash = b"\x33" * 16
        bad_app_data = BRIDGE_APP_DATA_MAGIC + b"\xff\xff"  # nid_len=65535 but truncated

        discovered = []
        bridge.set_on_bridge_discovered(lambda h, n: discovered.append((h, n)))

        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(peer_hash, self._rns_id(), bad_app_data)

        # Bridge is still discovered, NID just not parsed
        assert len(discovered) == 1
        assert discovered[0][1] is None


# ---------------------------------------------------------------------------
# register_seed
# ---------------------------------------------------------------------------

class TestRegisterSeed:
    def test_register_seed_success(self):
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.subprocess.run") as mock_run, \
             patch("radicle_reticulum.bridge.RNS.log"):
            mock_run.return_value = MagicMock(returncode=0)
            result = bridge.register_seed("z6MktestNID")
        assert result is True
        cmd = mock_run.call_args[0][0]
        assert "rad" in cmd
        assert "z6MktestNID" in " ".join(cmd)

    def test_register_seed_rad_not_found(self):
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.subprocess.run",
                   side_effect=FileNotFoundError), \
             patch("radicle_reticulum.bridge.RNS.log"):
            result = bridge.register_seed("z6MkNID")
        assert result is False

    def test_register_seed_nonzero_exit(self):
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.subprocess.run") as mock_run, \
             patch("radicle_reticulum.bridge.RNS.log"):
            mock_run.return_value = MagicMock(returncode=1, stderr="error")
            result = bridge.register_seed("z6MkNID")
        assert result is False

    def test_register_seed_timeout(self):
        import subprocess
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.subprocess.run",
                   side_effect=subprocess.TimeoutExpired(cmd="rad", timeout=30)), \
             patch("radicle_reticulum.bridge.RNS.log"):
            result = bridge.register_seed("z6MkNID")
        assert result is False


# ---------------------------------------------------------------------------
# get_stats
# ---------------------------------------------------------------------------

class TestBridgeStats:
    def test_stats_with_no_tunnels(self):
        bridge = _make_bridge()
        stats = bridge.get_stats()
        assert stats == {
            "active_tunnels": 0,
            "known_bridges": 0,
            "bytes_sent": 0,
            "bytes_received": 0,
            "rns_hash": bridge.destination.hexhash,
        }

    def test_stats_counts_known_bridges(self):
        bridge = _make_bridge()
        with patch("radicle_reticulum.bridge.RNS.log"):
            bridge._handle_announce(b"\x01" * 16, RNS.Identity(), BRIDGE_APP_DATA_MAGIC)
            bridge._handle_announce(b"\x02" * 16, RNS.Identity(), BRIDGE_APP_DATA_MAGIC)

        stats = bridge.get_stats()
        assert stats["known_bridges"] == 2
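The `_nid_app_data` helper in the bridge tests documents the announce payload framing: magic prefix, big-endian u16 length, then the UTF-8 NID. A hedged sketch of the matching parser, including the graceful handling of a truncated payload as in `test_malformed_nid_in_app_data_is_ignored_gracefully` — here `MAGIC` is a placeholder standing in for the package's `BRIDGE_APP_DATA_MAGIC`:

```python
# Hypothetical parse side of the MAGIC + u16-length + utf-8 NID framing
# built by _nid_app_data; the real bridge code may differ in detail.
import struct
from typing import Optional

MAGIC = b"radicle/bridge"  # placeholder, not the real BRIDGE_APP_DATA_MAGIC


def parse_bridge_nid(app_data: bytes) -> Optional[str]:
    """Extract the radicle NID, or None if absent or malformed."""
    if not app_data.startswith(MAGIC):
        return None
    rest = app_data[len(MAGIC):]
    if len(rest) < 2:
        return None  # no length field: magic-only announce
    (nid_len,) = struct.unpack("!H", rest[:2])
    nid_bytes = rest[2:2 + nid_len]
    if len(nid_bytes) != nid_len:
        return None  # truncated payload (e.g. length says 65535)
    try:
        return nid_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return None
```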
@ -0,0 +1,267 @@
"""Tests for Git bundle generation and application."""

import os
import subprocess
import tempfile
from pathlib import Path

import pytest

from radicle_reticulum.git_bundle import (
    BundleMetadata,
    BundleType,
    GitBundle,
    GitBundleGenerator,
    GitBundleApplicator,
    RADICLE_REF_PATTERNS,
)


@pytest.fixture
def temp_git_repo():
    """Create a temporary Git repository for testing."""
    with tempfile.TemporaryDirectory() as tmpdir:
        repo_path = Path(tmpdir) / "test_repo"
        repo_path.mkdir()

        # Initialize repo
        subprocess.run(["git", "init"], cwd=repo_path, check=True, capture_output=True)
        subprocess.run(
            ["git", "config", "user.email", "test@test.com"],
            cwd=repo_path, check=True, capture_output=True
        )
        subprocess.run(
            ["git", "config", "user.name", "Test User"],
            cwd=repo_path, check=True, capture_output=True
        )

        # Create initial commit
        (repo_path / "README.md").write_text("# Test Repo\n")
        subprocess.run(["git", "add", "README.md"], cwd=repo_path, check=True, capture_output=True)
        subprocess.run(
            ["git", "commit", "-m", "Initial commit"],
            cwd=repo_path, check=True, capture_output=True
        )

        yield repo_path


@pytest.fixture
def temp_bare_repo():
    """Create a temporary bare Git repository for testing."""
    with tempfile.TemporaryDirectory() as tmpdir:
        repo_path = Path(tmpdir) / "bare_repo.git"
        subprocess.run(["git", "init", "--bare", str(repo_path)], check=True, capture_output=True)
        yield repo_path


class TestBundleMetadata:
    """Test BundleMetadata encoding/decoding."""

    def test_encode_decode_full_bundle(self):
        """Test encode/decode of full bundle metadata."""
        metadata = BundleMetadata(
            bundle_type=BundleType.FULL,
            repository_id="rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5",
            source_node="did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
            timestamp=1234567890123,
            refs_included=["refs/heads/main", "refs/rad/id"],
            prerequisites=[],
            size_bytes=1024,
            checksum=b"\x00" * 32,
        )

        encoded = metadata.encode()
        decoded, consumed = BundleMetadata.decode(encoded)

        assert decoded.bundle_type == metadata.bundle_type
        assert decoded.repository_id == metadata.repository_id
        assert decoded.source_node == metadata.source_node
        assert decoded.timestamp == metadata.timestamp
        assert decoded.refs_included == metadata.refs_included
        assert decoded.prerequisites == metadata.prerequisites
        assert decoded.size_bytes == metadata.size_bytes
        assert decoded.checksum == metadata.checksum

    def test_encode_decode_incremental_bundle(self):
        """Test encode/decode of incremental bundle metadata."""
        metadata = BundleMetadata(
            bundle_type=BundleType.INCREMENTAL,
            repository_id="rad:test",
            source_node="did:key:test",
            timestamp=1000,
            refs_included=["refs/heads/feature"],
            prerequisites=["abc123", "def456"],
            size_bytes=512,
            checksum=b"\xff" * 32,
        )

        encoded = metadata.encode()
        decoded, _ = BundleMetadata.decode(encoded)

        assert decoded.bundle_type == BundleType.INCREMENTAL
        assert decoded.prerequisites == ["abc123", "def456"]


class TestGitBundle:
    """Test GitBundle encoding/decoding."""

    def test_encode_decode_roundtrip(self):
        """Test bundle encode/decode roundtrip."""
        import hashlib

        bundle_data = b"fake git bundle data"
        metadata = BundleMetadata(
            bundle_type=BundleType.FULL,
            repository_id="rad:test",
            source_node="did:key:test",
            timestamp=1000,
            refs_included=["refs/heads/main"],
            prerequisites=[],
            size_bytes=len(bundle_data),
            checksum=hashlib.sha256(bundle_data).digest(),
        )

        bundle = GitBundle(metadata=metadata, data=bundle_data)
        encoded = bundle.encode()
        decoded = GitBundle.decode(encoded)

        assert decoded.data == bundle_data
        assert decoded.metadata.repository_id == metadata.repository_id

    def test_checksum_verification_fails_on_corruption(self):
        """Test that checksum verification catches corruption."""
        import hashlib

        bundle_data = b"original data"
        metadata = BundleMetadata(
            bundle_type=BundleType.FULL,
            repository_id="rad:test",
            source_node="did:key:test",
            timestamp=1000,
            refs_included=[],
            prerequisites=[],
            size_bytes=len(bundle_data),
            checksum=hashlib.sha256(bundle_data).digest(),
        )

        bundle = GitBundle(metadata=metadata, data=bundle_data)
        encoded = bytearray(bundle.encode())

        # Corrupt the data portion
        encoded[-1] ^= 0xFF

        with pytest.raises(ValueError, match="checksum mismatch"):
            GitBundle.decode(bytes(encoded))


class TestGitBundleGenerator:
    """Test GitBundleGenerator."""

    def test_get_refs(self, temp_git_repo):
        """Test getting refs from repository."""
        generator = GitBundleGenerator(temp_git_repo)
        refs = generator.get_refs(["refs/heads/*"])

        assert "refs/heads/main" in refs or "refs/heads/master" in refs

    def test_create_full_bundle(self, temp_git_repo):
        """Test creating a full bundle."""
        generator = GitBundleGenerator(temp_git_repo)

        bundle = generator.create_full_bundle(
            repository_id="rad:test",
            source_node="did:key:test",
        )

        assert bundle is not None
        assert bundle.metadata.bundle_type == BundleType.FULL
        assert bundle.metadata.size_bytes > 0
        assert len(bundle.data) > 0

    def test_create_incremental_bundle_no_changes(self, temp_git_repo):
        """Test that incremental bundle returns None when no changes."""
        generator = GitBundleGenerator(temp_git_repo)

        # Get current refs as basis
        current_refs = generator.get_refs()

        # Create incremental with same refs - should be None
        bundle = generator.create_incremental_bundle(
            repository_id="rad:test",
            source_node="did:key:test",
            basis_refs=current_refs,
        )

        assert bundle is None

    def test_create_incremental_bundle_with_changes(self, temp_git_repo):
        """Test creating incremental bundle with new commits."""
        generator = GitBundleGenerator(temp_git_repo)

        # Get current refs as basis
        basis_refs = generator.get_refs()

        # Add new commit
        (temp_git_repo / "new_file.txt").write_text("new content")
        subprocess.run(["git", "add", "new_file.txt"], cwd=temp_git_repo, check=True, capture_output=True)
        subprocess.run(
            ["git", "commit", "-m", "Add new file"],
            cwd=temp_git_repo, check=True, capture_output=True
        )

        # Create incremental
        bundle = generator.create_incremental_bundle(
            repository_id="rad:test",
            source_node="did:key:test",
            basis_refs=basis_refs,
        )

        assert bundle is not None
        assert bundle.metadata.bundle_type == BundleType.INCREMENTAL

    def test_invalid_repo_path(self):
        """Test that invalid repo path raises error."""
        with pytest.raises(ValueError, match="Not a Git repository"):
            GitBundleGenerator(Path("/nonexistent/path"))


class TestGitBundleApplicator:
    """Test GitBundleApplicator."""

    def test_apply_bundle(self, temp_git_repo, temp_bare_repo):
        """Test applying a bundle to a repository."""
        # Create bundle from source repo
        generator = GitBundleGenerator(temp_git_repo)
        bundle = generator.create_full_bundle(
            repository_id="rad:test",
            source_node="did:key:test",
        )

        # Apply to bare repo
        applicator = GitBundleApplicator(temp_bare_repo)
        applied_refs = applicator.apply_bundle(bundle)

        assert len(applied_refs) > 0
        assert any("main" in ref or "master" in ref for ref in applied_refs)

    def test_verify_bundle(self, temp_git_repo, temp_bare_repo):
        """Test bundle verification."""
        generator = GitBundleGenerator(temp_git_repo)
        bundle = generator.create_full_bundle(
            repository_id="rad:test",
            source_node="did:key:test",
        )

        applicator = GitBundleApplicator(temp_bare_repo)
        ok, msg = applicator.verify_bundle(bundle)

        assert ok
        assert "verified" in msg.lower() or msg == ""

    def test_get_current_refs(self, temp_git_repo):
        """Test getting current refs."""
        applicator = GitBundleApplicator(temp_git_repo)
        refs = applicator.get_current_refs(["refs/heads/*"])

        assert len(refs) > 0
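The corruption test in the Git-bundle suite fixes two properties of the wire format: the checksum is a SHA-256 digest of the raw bundle data, and a mismatch at decode time raises `ValueError("checksum mismatch")`. A minimal sketch of just that checksum behavior, using a hypothetical digest-prefix framing that stands in for the real `GitBundle` encoding (which also carries metadata):

```python
# Hypothetical framing: 32-byte sha256 digest followed by the payload.
# Only the checksum behavior tested above is mirrored here; the real
# GitBundle.encode/decode wire format is defined in the package.
import hashlib


def encode_payload(data: bytes) -> bytes:
    return hashlib.sha256(data).digest() + data


def decode_payload(blob: bytes) -> bytes:
    checksum, data = blob[:32], blob[32:]
    if hashlib.sha256(data).digest() != checksum:
        raise ValueError("checksum mismatch")
    return data
```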
@ -0,0 +1,171 @@
"""Tests for identity mapping."""

import tempfile
from pathlib import Path

import pytest
from radicle_reticulum.identity import (
    RadicleIdentity,
    _base58btc_encode,
    _base58btc_decode,
)


class TestBase58:
    """Test base58btc encoding/decoding."""

    def test_encode_decode_roundtrip(self):
        """Test that encode/decode are inverses."""
        test_data = b"\x00\x01\x02\x03\x04\x05"
        encoded = _base58btc_encode(test_data)
        decoded = _base58btc_decode(encoded)
        assert decoded == test_data

    def test_encode_known_value(self):
        """Test encoding against known value."""
        # "Hello" in base58btc
        data = b"Hello"
        encoded = _base58btc_encode(data)
        assert encoded == "9Ajdvzr"

    def test_preserves_leading_zeros(self):
        """Test that leading zeros are preserved."""
        data = b"\x00\x00\x00test"
        encoded = _base58btc_encode(data)
        decoded = _base58btc_decode(encoded)
        assert decoded == data


class TestRadicleIdentity:
    """Test RadicleIdentity class."""

    def test_generate_creates_valid_identity(self):
        """Test that generate() creates a valid identity."""
        identity = RadicleIdentity.generate()

        assert identity.private_key is not None
        assert identity.public_key is not None
        assert identity.rns_identity is not None
        assert len(identity.public_key_bytes) == 32
        assert len(identity.rns_hash) == 16

    def test_did_format(self):
        """Test that DID has correct format."""
        identity = RadicleIdentity.generate()
        did = identity.did

        assert did.startswith("did:key:z6Mk")
        assert identity.node_id == did

    def test_did_roundtrip(self):
        """Test DID encoding/decoding roundtrip."""
        identity = RadicleIdentity.generate()
        original_did = identity.did

        # Create identity from DID (public key only)
        restored = RadicleIdentity.from_did(original_did)

        assert restored.did == original_did
        assert restored.public_key_bytes == identity.public_key_bytes
        assert restored.private_key is None  # DID import is public-only
        assert restored.rns_identity is None  # DID doesn't have X25519 key

    def test_sign_and_verify(self):
        """Test signing and verification."""
        identity = RadicleIdentity.generate()
        message = b"test message"

        signature = identity.sign(message)
        assert len(signature) == 64  # Ed25519 signature size

        assert identity.verify(signature, message)
        assert not identity.verify(signature, b"wrong message")

    def test_cannot_sign_without_private_key(self):
        """Test that signing fails without private key."""
        identity = RadicleIdentity.generate()
        public_only = RadicleIdentity.from_did(identity.did)
||||
|
||||
with pytest.raises(ValueError, match="Cannot sign without private key"):
|
||||
public_only.sign(b"test")
|
||||
|
||||
def test_invalid_did_format(self):
|
||||
"""Test that invalid DIDs are rejected."""
|
||||
with pytest.raises(ValueError, match="Invalid DID format"):
|
||||
RadicleIdentity.from_did("not:a:valid:did")
|
||||
|
||||
def test_rns_hash_is_stable(self):
|
||||
"""Test that RNS hash is deterministic for same key."""
|
||||
identity = RadicleIdentity.generate()
|
||||
hash1 = identity.rns_hash_hex
|
||||
|
||||
# The hash should be consistent
|
||||
assert identity.rns_hash_hex == hash1
|
||||
assert len(hash1) == 32 # 16 bytes = 32 hex chars
|
||||
|
||||
def test_repr(self):
|
||||
"""Test string representation."""
|
||||
identity = RadicleIdentity.generate()
|
||||
repr_str = repr(identity)
|
||||
|
||||
assert "RadicleIdentity" in repr_str
|
||||
assert "with private key" in repr_str
|
||||
|
||||
public_only = RadicleIdentity.from_did(identity.did)
|
||||
assert "public only" in repr(public_only)
|
||||
|
||||
|
||||
class TestIdentityPersistence:
|
||||
"""Test identity save/load."""
|
||||
|
||||
def test_save_and_load_roundtrip(self):
|
||||
"""Saved identity reloads with same DID and RNS hash."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
path = Path(tmpdir) / "identity"
|
||||
original = RadicleIdentity.generate()
|
||||
original.save(path)
|
||||
|
||||
loaded = RadicleIdentity.load(path)
|
||||
assert loaded.did == original.did
|
||||
assert loaded.rns_identity_hash_hex == original.rns_identity_hash_hex
|
||||
assert loaded.private_key is not None
|
||||
|
||||
def test_save_creates_parent_dirs(self):
|
||||
"""save() creates intermediate directories."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
path = Path(tmpdir) / "subdir" / "nested" / "identity"
|
||||
identity = RadicleIdentity.generate()
|
||||
identity.save(path)
|
||||
assert path.exists()
|
||||
|
||||
def test_load_missing_file_raises(self):
|
||||
"""load() raises FileNotFoundError for missing path."""
|
||||
with pytest.raises(FileNotFoundError):
|
||||
RadicleIdentity.load("/nonexistent/path/identity")
|
||||
|
||||
def test_load_or_generate_creates_on_first_run(self):
|
||||
"""load_or_generate() creates and saves a new identity when absent."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
path = Path(tmpdir) / "identity"
|
||||
assert not path.exists()
|
||||
identity = RadicleIdentity.load_or_generate(path)
|
||||
assert path.exists()
|
||||
assert identity.did.startswith("did:key:z6Mk")
|
||||
|
||||
def test_load_or_generate_stable_across_calls(self):
|
||||
"""load_or_generate() returns the same identity on subsequent calls."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
path = Path(tmpdir) / "identity"
|
||||
first = RadicleIdentity.load_or_generate(path)
|
||||
second = RadicleIdentity.load_or_generate(path)
|
||||
assert first.did == second.did
|
||||
assert first.rns_identity_hash_hex == second.rns_identity_hash_hex
|
||||
|
||||
def test_save_public_only_raises(self):
|
||||
"""save() raises ValueError for public-key-only identities."""
|
||||
identity = RadicleIdentity.from_did(
|
||||
"did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK"
|
||||
)
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
with pytest.raises(ValueError, match="public-key-only"):
|
||||
identity.save(Path(tmpdir) / "identity")
|
||||
|
|
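The base58btc tests above pin down two behaviors: leading zero bytes must be preserved (as leading `"1"` characters), and `b"Hello"` must encode to `"9Ajdvzr"`. A minimal reference implementation consistent with those tests is sketched below; it is illustrative only and not the module's actual `_base58btc_encode`/`_base58btc_decode`.

```python
# Illustrative base58btc codec consistent with the tests above.
# Leading zero bytes map to leading "1" characters (index 0 in the alphabet).
_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def base58btc_encode(data: bytes) -> str:
    zeros = len(data) - len(data.lstrip(b"\x00"))
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = _ALPHABET[rem] + out
    return "1" * zeros + out


def base58btc_decode(text: str) -> bytes:
    zeros = len(text) - len(text.lstrip("1"))
    num = 0
    for ch in text[zeros:]:
        num = num * 58 + _ALPHABET.index(ch)
    body = num.to_bytes((num.bit_length() + 7) // 8, "big") if num else b""
    return b"\x00" * zeros + body
```

With this sketch, `base58btc_encode(b"Hello")` reproduces the known value `"9Ajdvzr"` asserted in the tests.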
@@ -0,0 +1,196 @@
"""Tests for RadicleLink (pure logic — no RNS networking required)."""
|
||||
|
||||
import threading
|
||||
import time
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from radicle_reticulum.link import RadicleLink, LinkState
|
||||
|
||||
|
||||
def make_link(state: LinkState = LinkState.ACTIVE) -> tuple[RadicleLink, MagicMock]:
|
||||
"""Create a RadicleLink with a mock RNS.Link."""
|
||||
mock_rns_link = MagicMock()
|
||||
mock_rns_link.rtt = None
|
||||
link = RadicleLink(rns_link=mock_rns_link, state=state)
|
||||
return link, mock_rns_link
|
||||
|
||||
|
||||
class TestRadicleLinkState:
|
||||
def test_active_link_is_active(self):
|
||||
link, _ = make_link(LinkState.ACTIVE)
|
||||
assert link.is_active
|
||||
assert link.state == LinkState.ACTIVE
|
||||
|
||||
def test_pending_link_is_not_active(self):
|
||||
link, _ = make_link(LinkState.PENDING)
|
||||
assert not link.is_active
|
||||
|
||||
def test_on_established_sets_active(self):
|
||||
link, _ = make_link(LinkState.PENDING)
|
||||
link._on_established(MagicMock())
|
||||
assert link.state == LinkState.ACTIVE
|
||||
assert link.is_active
|
||||
|
||||
def test_on_closed_sets_closed(self):
|
||||
link, _ = make_link(LinkState.ACTIVE)
|
||||
link._on_closed(MagicMock())
|
||||
assert link.state == LinkState.CLOSED
|
||||
assert not link.is_active
|
||||
|
||||
def test_close_calls_teardown(self):
|
||||
link, mock_rns = make_link(LinkState.ACTIVE)
|
||||
link.close()
|
||||
mock_rns.teardown.assert_called_once()
|
||||
assert link.state == LinkState.CLOSED
|
||||
|
||||
def test_close_on_inactive_link_is_noop(self):
|
||||
link, mock_rns = make_link(LinkState.CLOSED)
|
||||
link.close()
|
||||
mock_rns.teardown.assert_not_called()
|
||||
|
||||
def test_repr(self):
|
||||
link, _ = make_link(LinkState.ACTIVE)
|
||||
r = repr(link)
|
||||
assert "active" in r
|
||||
assert "RadicleLink" in r
|
||||
|
||||
|
||||
class TestRadicleLinkSend:
|
||||
def test_send_on_active_link_succeeds(self):
|
||||
link, _ = make_link(LinkState.ACTIVE)
|
||||
with patch("radicle_reticulum.link.RNS.Packet") as mock_packet_cls:
|
||||
mock_packet = MagicMock()
|
||||
mock_packet_cls.return_value = mock_packet
|
||||
result = link.send(b"hello world")
|
||||
assert result is True
|
||||
mock_packet.send.assert_called_once()
|
||||
|
||||
def test_send_on_inactive_link_returns_false(self):
|
||||
link, _ = make_link(LinkState.CLOSED)
|
||||
result = link.send(b"data")
|
||||
assert result is False
|
||||
|
||||
def test_send_on_pending_link_returns_false(self):
|
||||
link, _ = make_link(LinkState.PENDING)
|
||||
result = link.send(b"data")
|
||||
assert result is False
|
||||
|
||||
def test_send_exception_returns_false(self):
|
||||
link, _ = make_link(LinkState.ACTIVE)
|
||||
with patch("radicle_reticulum.link.RNS.Packet") as mock_packet_cls:
|
||||
mock_packet_cls.side_effect = RuntimeError("send failed")
|
||||
result = link.send(b"data")
|
||||
assert result is False
|
||||
|
||||
|
||||
class TestRadicleLinkRecv:
|
||||
def test_recv_returns_buffered_data(self):
|
||||
link, _ = make_link()
|
||||
link._on_packet(b"buffered", MagicMock())
|
||||
result = link.recv(timeout=0.1)
|
||||
assert result == b"buffered"
|
||||
|
||||
def test_recv_returns_in_order(self):
|
||||
link, _ = make_link()
|
||||
for i in range(3):
|
||||
link._on_packet(f"msg{i}".encode(), MagicMock())
|
||||
assert link.recv(timeout=0.1) == b"msg0"
|
||||
assert link.recv(timeout=0.1) == b"msg1"
|
||||
assert link.recv(timeout=0.1) == b"msg2"
|
||||
|
||||
def test_recv_timeout_returns_none(self):
|
||||
link, _ = make_link()
|
||||
start = time.time()
|
||||
result = link.recv(timeout=0.05)
|
||||
elapsed = time.time() - start
|
||||
assert result is None
|
||||
assert elapsed >= 0.04
|
||||
|
||||
def test_recv_woken_by_data(self):
|
||||
link, _ = make_link()
|
||||
results = []
|
||||
|
||||
def delayed_send():
|
||||
time.sleep(0.05)
|
||||
link._on_packet(b"delayed", MagicMock())
|
||||
|
||||
t = threading.Thread(target=delayed_send, daemon=True)
|
||||
t.start()
|
||||
result = link.recv(timeout=1.0)
|
||||
t.join()
|
||||
assert result == b"delayed"
|
||||
|
||||
def test_recv_returns_none_when_closed(self):
|
||||
link, _ = make_link()
|
||||
link._on_closed(MagicMock())
|
||||
result = link.recv(timeout=0.1)
|
||||
assert result is None
|
||||
|
||||
def test_recv_woken_by_close(self):
|
||||
"""recv() unblocks when link closes while waiting."""
|
||||
link, _ = make_link()
|
||||
result_holder = []
|
||||
|
||||
def close_after_delay():
|
||||
time.sleep(0.05)
|
||||
link._on_closed(MagicMock())
|
||||
|
||||
t = threading.Thread(target=close_after_delay, daemon=True)
|
||||
t.start()
|
||||
result = link.recv(timeout=2.0)
|
||||
t.join()
|
||||
assert result is None
|
||||
|
||||
def test_on_packet_calls_on_data_callback(self):
|
||||
link, _ = make_link()
|
||||
received = []
|
||||
link.on_data = received.append
|
||||
link._on_packet(b"callback data", MagicMock())
|
||||
assert received == [b"callback data"]
|
||||
|
||||
def test_on_closed_calls_on_close_callback(self):
|
||||
link, _ = make_link()
|
||||
called = []
|
||||
link.on_close = lambda: called.append(True)
|
||||
link._on_closed(MagicMock())
|
||||
assert called == [True]
|
||||
|
||||
|
||||
class TestRadicleLinkProperties:
|
||||
def test_rtt_returns_none_when_unavailable(self):
|
||||
link, mock_rns = make_link()
|
||||
mock_rns.rtt = None
|
||||
assert link.rtt is None
|
||||
|
||||
def test_rtt_returns_value_when_available(self):
|
||||
link, mock_rns = make_link()
|
||||
mock_rns.rtt = 0.15
|
||||
assert link.rtt == pytest.approx(0.15)
|
||||
|
||||
def test_remote_identity_delegates_to_rns(self):
|
||||
link, mock_rns = make_link()
|
||||
fake_id = MagicMock()
|
||||
mock_rns.get_remote_identity.return_value = fake_id
|
||||
assert link.remote_identity is fake_id
|
||||
|
||||
|
||||
class TestRadicleLinkFactories:
|
||||
def test_from_incoming_is_active(self):
|
||||
mock_rns = MagicMock()
|
||||
link = RadicleLink.from_incoming(mock_rns)
|
||||
assert link.state == LinkState.ACTIVE
|
||||
|
||||
def test_from_incoming_does_not_set_established_callback(self):
|
||||
mock_rns = MagicMock()
|
||||
RadicleLink.from_incoming(mock_rns)
|
||||
mock_rns.set_link_established_callback.assert_not_called()
|
||||
|
||||
def test_create_outbound_is_pending(self):
|
||||
mock_dest = MagicMock()
|
||||
with patch("radicle_reticulum.link.RNS.Link") as mock_link_cls:
|
||||
mock_rns = MagicMock()
|
||||
mock_link_cls.return_value = mock_rns
|
||||
link = RadicleLink.create_outbound(mock_dest)
|
||||
assert link.state == LinkState.PENDING
|
||||
|
|
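The recv tests above (buffered data, FIFO order, timeout, wake-on-data, wake-on-close) all describe one classic pattern: a queue guarded by a condition variable. A standalone sketch of that pattern, with illustrative names rather than the actual `RadicleLink` internals:

```python
import threading
import time
from collections import deque


class RecvQueue:
    """Condition-variable-guarded FIFO: recv() blocks until data arrives,
    the timeout elapses, or the link closes (illustrative sketch)."""

    def __init__(self) -> None:
        self._buf = deque()
        self._cond = threading.Condition()
        self._closed = False

    def push(self, data: bytes) -> None:
        # Corresponds to the _on_packet callback: buffer and wake waiters.
        with self._cond:
            self._buf.append(data)
            self._cond.notify_all()

    def close(self) -> None:
        # Corresponds to _on_closed: mark closed and wake blocked recv().
        with self._cond:
            self._closed = True
            self._cond.notify_all()

    def recv(self, timeout: float):
        deadline = time.monotonic() + timeout
        with self._cond:
            while not self._buf and not self._closed:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    return None  # timed out with no data
                self._cond.wait(remaining)
            # Drain buffered data even after close; None once empty.
            return self._buf.popleft() if self._buf else None
```

Using `notify_all` from both `push` and `close` is what makes the wake-on-data and wake-on-close tests pass without busy-waiting.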
@@ -0,0 +1,192 @@
"""Tests for message framing."""
|
||||
|
||||
import pytest
|
||||
import time
|
||||
from radicle_reticulum.messages import (
|
||||
MessageType,
|
||||
MessageHeader,
|
||||
NodeAnnouncement,
|
||||
InventoryAnnouncement,
|
||||
RefAnnouncement,
|
||||
Ping,
|
||||
Pong,
|
||||
decode_message,
|
||||
HEADER_SIZE,
|
||||
)
|
||||
|
||||
|
||||
class TestMessageHeader:
|
||||
"""Test message header encoding/decoding."""
|
||||
|
||||
def test_encode_decode_roundtrip(self):
|
||||
"""Test header encode/decode roundtrip."""
|
||||
header = MessageHeader(
|
||||
msg_type=MessageType.NODE_ANNOUNCEMENT,
|
||||
timestamp=1234567890123,
|
||||
payload_length=42,
|
||||
)
|
||||
|
||||
encoded = header.encode()
|
||||
assert len(encoded) == HEADER_SIZE
|
||||
|
||||
decoded = MessageHeader.decode(encoded)
|
||||
assert decoded.msg_type == header.msg_type
|
||||
assert decoded.timestamp == header.timestamp
|
||||
assert decoded.payload_length == header.payload_length
|
||||
|
||||
def test_header_too_short(self):
|
||||
"""Test that short data raises error."""
|
||||
with pytest.raises(ValueError, match="Header too short"):
|
||||
MessageHeader.decode(b"\x00\x01")
|
||||
|
||||
|
||||
class TestNodeAnnouncement:
|
||||
"""Test NodeAnnouncement message."""
|
||||
|
||||
def test_encode_decode_roundtrip(self):
|
||||
"""Test encode/decode roundtrip."""
|
||||
msg = NodeAnnouncement(
|
||||
node_id="did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
|
||||
features=0x0003,
|
||||
version=1,
|
||||
)
|
||||
|
||||
encoded = msg.encode()
|
||||
decoded = NodeAnnouncement.decode(encoded)
|
||||
|
||||
assert decoded.node_id == msg.node_id
|
||||
assert decoded.features == msg.features
|
||||
assert decoded.version == msg.version
|
||||
|
||||
def test_to_message_includes_header(self):
|
||||
"""Test that to_message includes proper header."""
|
||||
msg = NodeAnnouncement(node_id="did:key:z6Mk...")
|
||||
|
||||
full_message = msg.to_message()
|
||||
header = MessageHeader.decode(full_message)
|
||||
|
||||
assert header.msg_type == MessageType.NODE_ANNOUNCEMENT
|
||||
assert header.timestamp > 0
|
||||
assert header.payload_length == len(msg.encode())
|
||||
|
||||
|
||||
class TestInventoryAnnouncement:
|
||||
"""Test InventoryAnnouncement message."""
|
||||
|
||||
def test_encode_decode_roundtrip(self):
|
||||
"""Test encode/decode roundtrip."""
|
||||
msg = InventoryAnnouncement(
|
||||
node_id="did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
|
||||
repositories=[
|
||||
"rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5",
|
||||
"rad:z4gqcJUoA1n9HaHKufZs5FCSGazv6",
|
||||
],
|
||||
)
|
||||
|
||||
encoded = msg.encode()
|
||||
decoded = InventoryAnnouncement.decode(encoded)
|
||||
|
||||
assert decoded.node_id == msg.node_id
|
||||
assert decoded.repositories == msg.repositories
|
||||
|
||||
def test_empty_repositories(self):
|
||||
"""Test with empty repository list."""
|
||||
msg = InventoryAnnouncement(
|
||||
node_id="did:key:z6Mk...",
|
||||
repositories=[],
|
||||
)
|
||||
|
||||
encoded = msg.encode()
|
||||
decoded = InventoryAnnouncement.decode(encoded)
|
||||
|
||||
assert decoded.repositories == []
|
||||
|
||||
|
||||
class TestRefAnnouncement:
|
||||
"""Test RefAnnouncement message."""
|
||||
|
||||
def test_encode_decode_roundtrip(self):
|
||||
"""Test encode/decode roundtrip."""
|
||||
msg = RefAnnouncement(
|
||||
repository_id="rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5",
|
||||
ref_name="refs/heads/main",
|
||||
old_oid=b"\x00" * 20,
|
||||
new_oid=bytes.fromhex("abc123def456789012345678901234567890abcd"),
|
||||
signature=b"fake_signature_bytes",
|
||||
)
|
||||
|
||||
encoded = msg.encode()
|
||||
decoded = RefAnnouncement.decode(encoded)
|
||||
|
||||
assert decoded.repository_id == msg.repository_id
|
||||
assert decoded.ref_name == msg.ref_name
|
||||
assert decoded.old_oid == msg.old_oid
|
||||
assert decoded.new_oid == msg.new_oid
|
||||
assert decoded.signature == msg.signature
|
||||
|
||||
|
||||
class TestPingPong:
|
||||
"""Test Ping/Pong messages."""
|
||||
|
||||
def test_ping_encode_decode(self):
|
||||
"""Test Ping encode/decode."""
|
||||
ping = Ping()
|
||||
encoded = ping.encode()
|
||||
decoded = Ping.decode(encoded)
|
||||
|
||||
assert decoded.nonce == ping.nonce
|
||||
|
||||
def test_pong_echoes_nonce(self):
|
||||
"""Test Pong echoes ping nonce."""
|
||||
ping = Ping()
|
||||
pong = Pong(nonce=ping.nonce)
|
||||
|
||||
assert pong.nonce == ping.nonce
|
||||
|
||||
|
||||
class TestDecodeMessage:
|
||||
"""Test the decode_message function."""
|
||||
|
||||
def test_decode_node_announcement(self):
|
||||
"""Test decoding a NodeAnnouncement message."""
|
||||
msg = NodeAnnouncement(node_id="did:key:z6Mk...")
|
||||
full_message = msg.to_message()
|
||||
|
||||
header, decoded = decode_message(full_message)
|
||||
|
||||
assert header.msg_type == MessageType.NODE_ANNOUNCEMENT
|
||||
assert isinstance(decoded, NodeAnnouncement)
|
||||
assert decoded.node_id == msg.node_id
|
||||
|
||||
def test_decode_inventory_announcement(self):
|
||||
"""Test decoding an InventoryAnnouncement message."""
|
||||
msg = InventoryAnnouncement(
|
||||
node_id="did:key:z6Mk...",
|
||||
repositories=["repo1", "repo2"],
|
||||
)
|
||||
full_message = msg.to_message()
|
||||
|
||||
header, decoded = decode_message(full_message)
|
||||
|
||||
assert header.msg_type == MessageType.INVENTORY_ANNOUNCEMENT
|
||||
assert isinstance(decoded, InventoryAnnouncement)
|
||||
assert decoded.repositories == msg.repositories
|
||||
|
||||
def test_decode_ping(self):
|
||||
"""Test decoding a Ping message."""
|
||||
ping = Ping()
|
||||
full_message = ping.to_message()
|
||||
|
||||
header, decoded = decode_message(full_message)
|
||||
|
||||
assert header.msg_type == MessageType.PING
|
||||
assert isinstance(decoded, Ping)
|
||||
|
||||
def test_unknown_message_type_raises(self):
|
||||
"""Test that unknown message types raise error."""
|
||||
# Create a message with invalid type
|
||||
import struct
|
||||
bad_message = struct.pack("!BQH", 0xFF, 0, 0)
|
||||
|
||||
with pytest.raises(ValueError, match="Unknown message type"):
|
||||
decode_message(bad_message)
|
||||
|
|
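The unknown-type test above builds a raw header with `struct.pack("!BQH", ...)`, which pins down the 11-byte wire layout: message type (u8), timestamp (u64, milliseconds), payload length (u16), all big-endian. A standalone sketch of that framing follows; function names are illustrative, and the real implementation is `MessageHeader` in `radicle_reticulum.messages`.

```python
import struct

# "!BQH" = network byte order: u8 type, u64 timestamp (ms), u16 payload length.
HEADER_FMT = "!BQH"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 11 bytes


def encode_header(msg_type: int, timestamp_ms: int, payload_len: int) -> bytes:
    return struct.pack(HEADER_FMT, msg_type, timestamp_ms, payload_len)


def decode_header(data: bytes) -> tuple:
    if len(data) < HEADER_SIZE:
        raise ValueError("Header too short")
    # Ignore any trailing payload bytes; only the first 11 are the header.
    return struct.unpack(HEADER_FMT, data[:HEADER_SIZE])
```

A u16 payload length caps each framed message at 65,535 bytes, which is why larger transfers go through the bundle/chunking path rather than this framing.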
@@ -0,0 +1,108 @@
"""Tests for QR bundle encoding/decoding."""
|
||||
|
||||
import hashlib
|
||||
import pytest
|
||||
|
||||
from radicle_reticulum.git_bundle import GitBundle, BundleMetadata, BundleType
|
||||
from radicle_reticulum.qr import (
|
||||
encode_bundle_to_qr,
|
||||
decode_bundle_from_qr_data,
|
||||
BundleTooLargeForQR,
|
||||
QR_MAX_BYTES,
|
||||
QR_MAGIC,
|
||||
)
|
||||
|
||||
|
||||
def make_small_bundle(data: bytes = b"tiny git bundle data") -> GitBundle:
|
||||
"""Create a minimal GitBundle for testing."""
|
||||
metadata = BundleMetadata(
|
||||
bundle_type=BundleType.INCREMENTAL,
|
||||
repository_id="rad:z3test",
|
||||
source_node="did:key:z6Mktest",
|
||||
timestamp=1000,
|
||||
refs_included=["refs/heads/main"],
|
||||
prerequisites=["abc123"],
|
||||
size_bytes=len(data),
|
||||
checksum=hashlib.sha256(data).digest(),
|
||||
)
|
||||
return GitBundle(metadata=metadata, data=data)
|
||||
|
||||
|
||||
class TestQRPayloadRoundtrip:
|
||||
"""Test the binary payload encode/decode without QR rendering."""
|
||||
|
||||
def _encode_payload(self, bundle: GitBundle) -> bytes:
|
||||
"""Extract the raw bytes that would be put into a QR code."""
|
||||
import struct
|
||||
bundle_bytes = bundle.encode()
|
||||
checksum = hashlib.sha256(bundle_bytes).digest()
|
||||
length_prefix = len(bundle_bytes).to_bytes(4, "big")
|
||||
return QR_MAGIC + length_prefix + checksum + bundle_bytes
|
||||
|
||||
def test_decode_roundtrip(self):
|
||||
bundle = make_small_bundle()
|
||||
payload = self._encode_payload(bundle)
|
||||
decoded = decode_bundle_from_qr_data(payload)
|
||||
assert decoded.metadata.repository_id == bundle.metadata.repository_id
|
||||
assert decoded.data == bundle.data
|
||||
|
||||
def test_decode_rejects_wrong_magic(self):
|
||||
bundle = make_small_bundle()
|
||||
payload = b"WRONGMAGIC" + self._encode_payload(bundle)[len(QR_MAGIC):]
|
||||
with pytest.raises(ValueError, match="Not a Radicle QR payload"):
|
||||
decode_bundle_from_qr_data(payload)
|
||||
|
||||
def test_decode_detects_corruption(self):
|
||||
bundle = make_small_bundle()
|
||||
payload = bytearray(self._encode_payload(bundle))
|
||||
payload[-1] ^= 0xFF # flip last bit of bundle data
|
||||
with pytest.raises(ValueError, match="checksum mismatch"):
|
||||
decode_bundle_from_qr_data(bytes(payload))
|
||||
|
||||
def test_decode_rejects_truncated_payload(self):
|
||||
bundle = make_small_bundle()
|
||||
payload = self._encode_payload(bundle)
|
||||
# Truncate the bundle data portion
|
||||
truncated = payload[:-10]
|
||||
with pytest.raises(ValueError, match="Truncated"):
|
||||
decode_bundle_from_qr_data(truncated)
|
||||
|
||||
|
||||
class TestBundleTooLarge:
|
||||
def test_oversized_bundle_raises(self):
|
||||
large_data = b"x" * (QR_MAX_BYTES + 1)
|
||||
bundle = make_small_bundle(data=large_data)
|
||||
with pytest.raises(BundleTooLargeForQR, match="QR capacity"):
|
||||
encode_bundle_to_qr(bundle)
|
||||
|
||||
def test_exact_limit_would_include_overhead(self):
|
||||
# QR_MAX_BYTES is the limit for the serialised bundle, including metadata.
|
||||
# A bundle with data just under the limit should still fail due to metadata overhead.
|
||||
# This test verifies the check catches realistic oversized bundles.
|
||||
oversized_data = b"a" * QR_MAX_BYTES
|
||||
bundle = make_small_bundle(data=oversized_data)
|
||||
with pytest.raises(BundleTooLargeForQR):
|
||||
encode_bundle_to_qr(bundle)
|
||||
|
||||
|
||||
class TestQREncodeOutput:
|
||||
"""Test QR rendering (requires qrcode package)."""
|
||||
|
||||
def test_encode_returns_ascii_art(self):
|
||||
bundle = make_small_bundle()
|
||||
result = encode_bundle_to_qr(bundle)
|
||||
assert isinstance(result, str)
|
||||
assert len(result) > 0
|
||||
# ASCII art QR codes use block characters
|
||||
assert any(c in result for c in ("█", "░", " ", "\n"))
|
||||
|
||||
def test_encode_small_bundle_succeeds(self):
|
||||
bundle = make_small_bundle(b"small")
|
||||
result = encode_bundle_to_qr(bundle)
|
||||
assert result # non-empty
|
||||
|
||||
def test_error_correction_levels(self):
|
||||
bundle = make_small_bundle()
|
||||
for level in ("L", "M", "Q", "H"):
|
||||
result = encode_bundle_to_qr(bundle, error_correction=level)
|
||||
assert isinstance(result, str)
|
||||
|
|
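The `_encode_payload` helper above fixes the air-gap payload layout: `MAGIC | length (u32 BE) | sha256(bundle) | bundle bytes`. A self-contained sketch of framing and defensive unframing consistent with those tests; the `MAGIC` constant here is a placeholder, not the library's actual `QR_MAGIC`.

```python
import hashlib

MAGIC = b"RADQR1"  # placeholder value, not the real QR_MAGIC


def frame(bundle_bytes: bytes) -> bytes:
    """MAGIC | u32 BE length | sha256 digest | payload."""
    return (
        MAGIC
        + len(bundle_bytes).to_bytes(4, "big")
        + hashlib.sha256(bundle_bytes).digest()
        + bundle_bytes
    )


def unframe(payload: bytes) -> bytes:
    """Validate magic, length, and checksum; return the payload bytes."""
    if not payload.startswith(MAGIC):
        raise ValueError("Not a Radicle QR payload")
    body = payload[len(MAGIC):]
    length = int.from_bytes(body[:4], "big")
    checksum, data = body[4:36], body[36:]
    if len(data) < length:
        raise ValueError("Truncated payload")
    data = data[:length]
    if hashlib.sha256(data).digest() != checksum:
        raise ValueError("checksum mismatch")
    return data
```

The explicit length prefix lets a decoder distinguish truncation (missing bytes) from corruption (checksum failure), which is exactly the distinction the two negative tests above assert on.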
@@ -0,0 +1,543 @@
"""Tests for sync data structures and SyncManager logic (no RNS networking)."""
|
||||
|
||||
import struct
|
||||
import time
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
import LXMF
|
||||
|
||||
from radicle_reticulum.sync import (
|
||||
RefsAnnouncement,
|
||||
SyncManager,
|
||||
SyncMode,
|
||||
CONTENT_TYPE_BUNDLE,
|
||||
CONTENT_TYPE_BUNDLE_CHUNK,
|
||||
CONTENT_TYPE_REFS_ANNOUNCE,
|
||||
CHUNK_HEADER_SIZE,
|
||||
)
|
||||
from radicle_reticulum.identity import RadicleIdentity
|
||||
from radicle_reticulum.git_bundle import GitBundle, BundleMetadata, BundleType
|
||||
import hashlib
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def _make_manager(tmp_path: Path) -> SyncManager:
|
||||
"""Instantiate SyncManager with LXMF/RNS components patched out."""
|
||||
identity = RadicleIdentity.generate()
|
||||
|
||||
mock_router = MagicMock()
|
||||
mock_dest = MagicMock()
|
||||
mock_dest.hash = b"\x00" * 32
|
||||
mock_dest.hash_hex = "00" * 32
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.Reticulum"), \
|
||||
patch("radicle_reticulum.sync.LXMF.LXMRouter", return_value=mock_router), \
|
||||
patch("radicle_reticulum.sync.RNS.log"):
|
||||
mock_router.register_delivery_identity.return_value = mock_dest
|
||||
manager = SyncManager(identity=identity, storage_path=tmp_path)
|
||||
manager._lxmf_router = mock_router
|
||||
manager._lxmf_destination = mock_dest
|
||||
|
||||
return manager
|
||||
|
||||
|
||||
def _make_bundle(data: bytes = b"git bundle") -> GitBundle:
|
||||
checksum = hashlib.sha256(data).digest()
|
||||
metadata = BundleMetadata(
|
||||
bundle_type=BundleType.FULL,
|
||||
repository_id="rad:z3test",
|
||||
source_node="did:key:z6Mktest",
|
||||
timestamp=1000,
|
||||
refs_included=["refs/heads/main"],
|
||||
prerequisites=[],
|
||||
size_bytes=len(data),
|
||||
checksum=checksum,
|
||||
)
|
||||
return GitBundle(metadata=metadata, data=data)
|
||||
|
||||
|
||||
class TestRefsAnnouncement:
|
||||
def test_encode_decode_roundtrip(self):
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5",
|
||||
node_id="did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
|
||||
refs={
|
||||
"refs/heads/main": "abc123def456789012345678901234567890abcd",
|
||||
"refs/rad/id": "0000000000000000000000000000000000000001",
|
||||
},
|
||||
timestamp=1234567890123,
|
||||
)
|
||||
|
||||
encoded = ann.encode()
|
||||
decoded = RefsAnnouncement.decode(encoded)
|
||||
|
||||
assert decoded.repository_id == ann.repository_id
|
||||
assert decoded.node_id == ann.node_id
|
||||
assert decoded.refs == ann.refs
|
||||
assert decoded.timestamp == ann.timestamp
|
||||
|
||||
def test_empty_refs(self):
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:test",
|
||||
node_id="did:key:z6Mk...",
|
||||
refs={},
|
||||
timestamp=0,
|
||||
)
|
||||
decoded = RefsAnnouncement.decode(ann.encode())
|
||||
assert decoded.refs == {}
|
||||
|
||||
def test_many_refs(self):
|
||||
refs = {f"refs/heads/branch-{i}": "a" * 40 for i in range(50)}
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:z3test",
|
||||
node_id="did:key:z6Mktest",
|
||||
refs=refs,
|
||||
timestamp=9999,
|
||||
)
|
||||
decoded = RefsAnnouncement.decode(ann.encode())
|
||||
assert decoded.refs == refs
|
||||
|
||||
def test_special_characters_in_ref_names(self):
|
||||
refs = {"refs/heads/feature/my-branch_v2.0": "b" * 40}
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:z3test",
|
||||
node_id="did:key:z6Mktest",
|
||||
refs=refs,
|
||||
timestamp=1,
|
||||
)
|
||||
decoded = RefsAnnouncement.decode(ann.encode())
|
||||
assert decoded.refs == refs
|
||||
|
||||
def test_encode_produces_bytes(self):
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:test",
|
||||
node_id="did:key:z6Mk",
|
||||
refs={"refs/heads/main": "a" * 40},
|
||||
timestamp=1000,
|
||||
)
|
||||
assert isinstance(ann.encode(), bytes)
|
||||
assert len(ann.encode()) > 0
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# SyncManager state management
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class TestSyncManagerPeers:
|
||||
def test_initial_peer_list_empty(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
assert manager.get_known_peers() == []
|
||||
|
||||
def test_register_peer_adds_to_list(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager.register_peer(b"\x01" * 16)
|
||||
assert b"\x01" * 16 in manager.get_known_peers()
|
||||
|
||||
def test_register_multiple_peers(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
hashes = [bytes([i]) * 16 for i in range(3)]
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
for h in hashes:
|
||||
manager.register_peer(h)
|
||||
assert set(manager.get_known_peers()) == set(hashes)
|
||||
|
||||
def test_peer_learned_from_incoming_message(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
source_hash = b"\xab" * 16
|
||||
|
||||
bundle = _make_bundle()
|
||||
msg = MagicMock()
|
||||
msg.source_hash = source_hash
|
||||
msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE}
|
||||
msg.content = bundle.encode()
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager._on_lxmf_delivery(msg)
|
||||
|
||||
assert source_hash in manager.get_known_peers()
|
||||
|
||||
def test_message_without_source_hash_still_processed(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
bundle = _make_bundle()
|
||||
|
||||
msg = MagicMock()
|
||||
msg.source_hash = None
|
||||
msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE}
|
||||
msg.content = bundle.encode()
|
||||
|
||||
received = []
|
||||
manager.set_on_bundle_received(received.append)
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager._on_lxmf_delivery(msg)
|
||||
|
||||
assert len(received) == 1
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# SyncManager delivery callbacks
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class TestSyncManagerDelivery:
|
||||
def test_bundle_callback_fires_on_bundle_message(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
bundle = _make_bundle()
|
||||
|
||||
received = []
|
||||
manager.set_on_bundle_received(received.append)
|
||||
|
||||
msg = MagicMock()
|
||||
msg.source_hash = b"\x01" * 16
|
||||
msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE}
|
||||
msg.content = bundle.encode()
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager._on_lxmf_delivery(msg)
|
||||
|
||||
assert len(received) == 1
|
||||
assert received[0].metadata.repository_id == "rad:z3test"
|
||||
|
||||
def test_refs_callback_fires_on_refs_announce_message(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
ann = RefsAnnouncement(
|
||||
repository_id="rad:z3test",
|
||||
node_id="did:key:z6Mktest",
|
||||
refs={"refs/heads/main": "a" * 40},
|
||||
timestamp=999,
|
||||
)
|
||||
|
||||
received = []
|
||||
manager.set_on_refs_announced(received.append)
|
||||
|
||||
msg = MagicMock()
|
||||
msg.source_hash = b"\x02" * 16
|
||||
msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_REFS_ANNOUNCE}
|
||||
msg.content = ann.encode()
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager._on_lxmf_delivery(msg)
|
||||
|
||||
assert len(received) == 1
|
||||
assert received[0].repository_id == "rad:z3test"
|
||||
|
||||
def test_unknown_content_type_silently_ignored(self, tmp_path):
|
||||
manager = _make_manager(tmp_path)
|
||||
|
||||
msg = MagicMock()
|
||||
msg.source_hash = None
|
||||
msg.fields = {LXMF.FIELD_CUSTOM_TYPE: 0xFF}
|
||||
msg.content = b"garbage"
|
||||
|
||||
with patch("radicle_reticulum.sync.RNS.log"):
|
||||
manager._on_lxmf_delivery(msg) # should not raise
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Chunk reassembly
|
||||
# ---------------------------------------------------------------------------
|
||||
|


class TestChunkReassembly:
    def _make_chunks(self, data: bytes, chunk_size: int = 10):
        """Split data into chunk messages as _send_chunked_bundle would."""
        bundle_id = hashlib.sha256(data).digest()[:16]
        total = (len(data) + chunk_size - 1) // chunk_size
        msgs = []
        for i in range(total):
            chunk = data[i * chunk_size:(i + 1) * chunk_size]
            header = struct.pack("!16sHH", bundle_id, i, total)
            msgs.append(header + chunk)
        return msgs

    def test_reassemble_two_chunks(self, tmp_path):
        manager = _make_manager(tmp_path)
        bundle = _make_bundle(b"first half second half ")
        bundle_bytes = bundle.encode()

        received = []
        manager.set_on_bundle_received(received.append)

        chunks = self._make_chunks(bundle_bytes, chunk_size=len(bundle_bytes) // 2 + 1)
        assert len(chunks) == 2

        for chunk_data in chunks:
            msg = MagicMock()
            msg.source_hash = b"\x01" * 16
            msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE_CHUNK}
            msg.content = chunk_data
            with patch("radicle_reticulum.sync.RNS.log"):
                manager._on_lxmf_delivery(msg)

        assert len(received) == 1
        assert received[0].data == bundle.data

    def test_out_of_order_chunks_reassemble_correctly(self, tmp_path):
        manager = _make_manager(tmp_path)
        bundle = _make_bundle(b"abcdefghijklmnopqrstuvwxyz0123456789")
        bundle_bytes = bundle.encode()

        received = []
        manager.set_on_bundle_received(received.append)

        chunks = self._make_chunks(bundle_bytes, chunk_size=8)
        assert len(chunks) >= 3

        for chunk_data in reversed(chunks):  # deliver in reverse order
            msg = MagicMock()
            msg.source_hash = None
            msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE_CHUNK}
            msg.content = chunk_data
            with patch("radicle_reticulum.sync.RNS.log"):
                manager._on_lxmf_delivery(msg)

        assert len(received) == 1
        assert received[0].data == bundle.data

    def test_malformed_chunk_too_short_is_ignored(self, tmp_path):
        manager = _make_manager(tmp_path)

        msg = MagicMock()
        msg.source_hash = None
        msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE_CHUNK}
        msg.content = b"\x00" * (CHUNK_HEADER_SIZE - 1)  # too short

        with patch("radicle_reticulum.sync.RNS.log"):
            manager._on_lxmf_delivery(msg)  # should not raise

        assert manager._chunk_buffers == {}

    def test_partial_chunks_not_delivered_until_complete(self, tmp_path):
        manager = _make_manager(tmp_path)
        bundle = _make_bundle(b"partial test data here")
        bundle_bytes = bundle.encode()

        received = []
        manager.set_on_bundle_received(received.append)

        chunks = self._make_chunks(bundle_bytes, chunk_size=5)
        assert len(chunks) >= 2

        # Send only first chunk
        msg = MagicMock()
        msg.source_hash = None
        msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_BUNDLE_CHUNK}
        msg.content = chunks[0]
        with patch("radicle_reticulum.sync.RNS.log"):
            manager._on_lxmf_delivery(msg)

        assert received == []  # not yet complete
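The `!16sHH` header layout that `_make_chunks` assumes (a 16-byte bundle id followed by two big-endian unsigned shorts for chunk index and total count) can be sanity-checked on its own. This added test is a sketch of that layout check, not part of the original suite, and relies only on the standard library:

```python
import hashlib
import struct


def test_chunk_header_layout_roundtrip():
    # "16s" packs 16 raw bytes; "H" is an unsigned short, big-endian under
    # "!" (network order), so the header is a fixed 20 bytes per chunk.
    bundle_id = hashlib.sha256(b"payload").digest()[:16]
    header = struct.pack("!16sHH", bundle_id, 3, 7)
    assert len(header) == struct.calcsize("!16sHH") == 20

    unpacked_id, index, total = struct.unpack("!16sHH", header)
    assert unpacked_id == bundle_id
    assert (index, total) == (3, 7)
```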


# ---------------------------------------------------------------------------
# SyncManager repository management
# ---------------------------------------------------------------------------


class TestSyncManagerRepositories:
    def test_register_repository(self, tmp_path):
        manager = _make_manager(tmp_path)
        repo_path = tmp_path / "repo"
        repo_path.mkdir()

        with patch("radicle_reticulum.sync.RNS.log"):
            state = manager.register_repository("rad:z3test", repo_path)

        assert state.repository_id == "rad:z3test"
        assert state.local_path == repo_path

    def test_register_same_repo_twice_returns_same_state(self, tmp_path):
        manager = _make_manager(tmp_path)
        repo_path = tmp_path / "repo"
        repo_path.mkdir()

        with patch("radicle_reticulum.sync.RNS.log"):
            s1 = manager.register_repository("rad:z3test", repo_path)
            s2 = manager.register_repository("rad:z3test", repo_path)

        assert s1 is s2

    def test_get_sync_status_returns_none_for_unknown(self, tmp_path):
        manager = _make_manager(tmp_path)
        assert manager.get_sync_status("rad:unknown") is None

    def test_get_sync_status_after_register(self, tmp_path):
        manager = _make_manager(tmp_path)
        repo_path = tmp_path / "repo"
        repo_path.mkdir()

        with patch("radicle_reticulum.sync.RNS.log"):
            manager.register_repository("rad:z3test", repo_path)

        status = manager.get_sync_status("rad:z3test")
        assert status is not None
        assert status["repository_id"] == "rad:z3test"
        assert status["known_peers"] == 0


# ---------------------------------------------------------------------------
# Phase 4: speculative push
# ---------------------------------------------------------------------------


def _make_manager_auto_push(tmp_path: Path) -> SyncManager:
    identity = RadicleIdentity.generate()
    mock_router = MagicMock()
    mock_dest = MagicMock()
    mock_dest.hash = b"\x00" * 32
    mock_dest.hash_hex = "00" * 32

    with patch("radicle_reticulum.sync.RNS.Reticulum"), \
            patch("radicle_reticulum.sync.LXMF.LXMRouter", return_value=mock_router), \
            patch("radicle_reticulum.sync.RNS.log"):
        mock_router.register_delivery_identity.return_value = mock_dest
        manager = SyncManager(identity=identity, storage_path=tmp_path, auto_push=True)
        manager._lxmf_router = mock_router
        manager._lxmf_destination = mock_dest

    return manager


class TestShouldPushToPeer:
    def test_no_push_when_peer_up_to_date(self, tmp_path):
        manager = _make_manager(tmp_path)
        our_refs = {"refs/heads/main": "a" * 40}
        peer_refs = {"refs/heads/main": "a" * 40}
        assert manager._should_push_to_peer(our_refs, peer_refs) is False

    def test_push_when_peer_behind(self, tmp_path):
        manager = _make_manager(tmp_path)
        our_refs = {"refs/heads/main": "b" * 40}
        peer_refs = {"refs/heads/main": "a" * 40}
        assert manager._should_push_to_peer(our_refs, peer_refs) is True

    def test_push_when_peer_missing_ref(self, tmp_path):
        manager = _make_manager(tmp_path)
        our_refs = {"refs/heads/main": "a" * 40, "refs/heads/dev": "b" * 40}
        peer_refs = {"refs/heads/main": "a" * 40}
        assert manager._should_push_to_peer(our_refs, peer_refs) is True

    def test_no_push_when_our_refs_empty(self, tmp_path):
        manager = _make_manager(tmp_path)
        assert manager._should_push_to_peer({}, {"refs/heads/main": "a" * 40}) is False

    def test_no_push_when_both_empty(self, tmp_path):
        manager = _make_manager(tmp_path)
        assert manager._should_push_to_peer({}, {}) is False
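The push-decision behavior these tests pin down can be sketched as a pure function. This is an illustrative reimplementation for reference, not the project's actual `_should_push_to_peer`:

```python
def should_push_to_peer(our_refs: dict[str, str], peer_refs: dict[str, str]) -> bool:
    # Never push with nothing to offer; otherwise push when any of our refs
    # is missing from the peer or points at a different commit.
    if not our_refs:
        return False
    return any(peer_refs.get(ref) != oid for ref, oid in our_refs.items())
```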


class TestSpeculativePush:
    def _make_announcement(self, repo_id="rad:z3test", refs=None):
        return RefsAnnouncement(
            repository_id=repo_id,
            node_id="did:key:z6Mkpeer",
            refs=refs or {"refs/heads/main": "a" * 40},
            timestamp=1000,
        )

    def test_auto_push_false_does_not_call_maybe_push(self, tmp_path):
        manager = _make_manager(tmp_path)  # auto_push=False by default
        ann = self._make_announcement()

        msg = MagicMock()
        msg.source_hash = b"\x01" * 16
        msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_REFS_ANNOUNCE}
        msg.content = ann.encode()

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch.object(manager, "_maybe_push_to_peer") as mock_push:
            manager._on_lxmf_delivery(msg)

        mock_push.assert_not_called()

    def test_auto_push_true_calls_maybe_push_on_refs_announce(self, tmp_path):
        manager = _make_manager_auto_push(tmp_path)
        ann = self._make_announcement()

        msg = MagicMock()
        msg.source_hash = b"\x02" * 16
        msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_REFS_ANNOUNCE}
        msg.content = ann.encode()

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch.object(manager, "_maybe_push_to_peer") as mock_push:
            manager._on_lxmf_delivery(msg)

        mock_push.assert_called_once()
        assert mock_push.call_args[0][0] == b"\x02" * 16

    def test_auto_push_skipped_when_source_hash_none(self, tmp_path):
        manager = _make_manager_auto_push(tmp_path)
        ann = self._make_announcement()

        msg = MagicMock()
        msg.source_hash = None
        msg.fields = {LXMF.FIELD_CUSTOM_TYPE: CONTENT_TYPE_REFS_ANNOUNCE}
        msg.content = ann.encode()

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch.object(manager, "_maybe_push_to_peer") as mock_push:
            manager._on_lxmf_delivery(msg)

        mock_push.assert_not_called()

    def test_maybe_push_skips_unknown_repository(self, tmp_path):
        manager = _make_manager_auto_push(tmp_path)
        ann = self._make_announcement(repo_id="rad:unknown")

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch("radicle_reticulum.sync.GitBundleGenerator") as mock_gen:
            manager._maybe_push_to_peer(b"\x03" * 16, ann)

        mock_gen.assert_not_called()

    def test_maybe_push_skips_when_peer_up_to_date(self, tmp_path):
        manager = _make_manager_auto_push(tmp_path)
        repo_path = tmp_path / "repo"
        repo_path.mkdir()

        with patch("radicle_reticulum.sync.RNS.log"):
            manager.register_repository("rad:z3test", repo_path)

        our_refs = {"refs/heads/main": "a" * 40}
        peer_refs = {"refs/heads/main": "a" * 40}
        ann = self._make_announcement(refs=peer_refs)

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch("radicle_reticulum.sync.GitBundleGenerator") as mock_gen_cls:
            mock_gen = MagicMock()
            mock_gen_cls.return_value = mock_gen
            mock_gen.get_refs.return_value = our_refs
            manager._maybe_push_to_peer(b"\x04" * 16, ann)

        mock_gen.create_incremental_bundle.assert_not_called()

    def test_maybe_push_sends_bundle_when_ahead(self, tmp_path):
        manager = _make_manager_auto_push(tmp_path)
        repo_path = tmp_path / "repo"
        repo_path.mkdir()

        with patch("radicle_reticulum.sync.RNS.log"):
            manager.register_repository("rad:z3test", repo_path)

        our_refs = {"refs/heads/main": "b" * 40}
        peer_refs = {"refs/heads/main": "a" * 40}
        ann = self._make_announcement(refs=peer_refs)
        bundle = _make_bundle(b"incremental data")

        with patch("radicle_reticulum.sync.RNS.log"), \
                patch("radicle_reticulum.sync.GitBundleGenerator") as mock_gen_cls, \
                patch.object(manager, "_send_lxmf_message", return_value=True) as mock_send:
            mock_gen = MagicMock()
            mock_gen_cls.return_value = mock_gen
            mock_gen.get_refs.return_value = our_refs
            mock_gen.create_incremental_bundle.return_value = bundle
            manager._maybe_push_to_peer(b"\x05" * 16, ann)

        mock_send.assert_called_once()
        call_args = mock_send.call_args[0]
        assert call_args[0] == b"\x05" * 16
        assert call_args[1] == CONTENT_TYPE_BUNDLE