
Releases: cubert-hyperspectral/cuvis-ai

Release v0.7.2

11 May 22:02
b9f180c


0.7.2 - 2026-05-11

  • CI: run the gh-pages deploy-docs job inside cubertgmbh/cuvis_pyil:3.5.0-ubuntu24.04 with libgl1 / libglib2.0-0 / ffmpeg installed, matching the working doc-build job in ci.yml. The 0.7.1 deploy failed because the auto-generated Nodes-catalog generator imports cuvis_ai.node, which transitively initializes the cuvis package and aborts on a vanilla runner. No cuvis_ai code changes — this release exists solely to re-trigger the release pipeline so gh-pages actually updates.

Release v0.7.1

11 May 21:44
eefffd1


0.7.1 - 2026-05-11

  • Docs IA restructure (ALL-5655). Nine top-level sections — Home, Get Started, Concepts, Tutorials, Catalogs, Workflows, Agentic Integration, Deployment, Reference — ordered as a learning path. Major moves: user-guide/{installation,quickstart} → get-started/; how-to/* → workflows/; node-catalog/* → catalogs/nodes/*; config/* → reference/configuration/; api/* → reference/python-api/; plugin-system/* → reference/plugin-development/; development/* → reference/contributing/; grpc/* + use_cases/grpc-workflow.md → deployment/. New: agentic-integration section, datasets catalog mirrored from HuggingFace, notebook tutorial gallery, get-started/first-pipeline.md, workflows/{statistical,gradient}-training.md. Removed docs/use_cases/, user-guide/configuration.md stub, duplicate plugin-system/overview.md. All URLs change — no redirects. mkdocs build --strict clean.
  • Auto-generated Nodes catalog. mkdocs-gen-files + scripts/generate_node_catalog.py build a category-grouped index page from NodeRegistry with per-category SVG icons (docs/images/node-categories/) and a client-side filter (docs/javascripts/node_catalog_filter.js). Replaces nine hand-maintained docs/catalogs/nodes/*.md pages. Added scripts/math_directive_hook.py MkDocs hook (RST .. math:: → MathJax; auto-hide TOC on catalog pages).
  • Lentils Dinomaly use-case notebook (notebooks/use_cases/lentils_dinomaly.ipynb) — HF dataset integration and H.264 video export.
  • Helper-scripts package renamed tools/ → scripts/. Updated [project.scripts] (create-stubs = "scripts.generate_node_port_stubs:main"), MkDocs macros, codecov ignore, .gitignore, git hooks, copilot-instructions.
  • Removed configs/plugins/registry.yaml. Use per-plugin manifests (configs/plugins/<plugin>.yaml).
  • Bundled ffmpeg via imageio-ffmpeg. ToVideoNode resolves the binary from the wheel by default — no system install needed. Override with CUVIS_AI_FFMPEG_BIN for h264_nvenc / vaapi / amf. Blood-perfusion notebook MP4 export gains +faststart.
  • Site rebrand to Cubert CI. palette: custom lets docs/stylesheets/extra.css drive both Material schemes; Rajdhani headings via Google Fonts @import, Roboto / Roboto Mono for body and code. Mermaid theme variables updated to match. Mermaid diagrams in docs/concepts/*.md switched from inline style X fill: to classDef so node colors stay legible in dark mode.
  • mkdocs plugin swap. Dropped mkdocs-literate-nav; added mkdocs-macros-plugin (drives scripts/docs_macros.py) and mkdocs-llmstxt (emits llms-full.txt). mkdocs-gen-files re-added for the Nodes-catalog generator. API reference consolidated into the Nodes catalog. Install guide gains a Cuvis SDK section.
  • Renamed local blood-perfusion dataset folder data/XMR_Blood_Perfusion/ → data/XMR_Demo_Blood_Perfusion/ following the HuggingFace rename to cubert-gmbh/XMR_Demo_Blood_Perfusion. Users with the old folder can rename it in place; otherwise uv run dataset download blood_perfusion re-fetches ~7 GB.
  • Fixed broken cross-doc links surfaced by mkdocs build --strict.
  • Dependency floors: cuvis-ai-core>=0.5.3 (Blood_Perfusion registry repoint), cuvis-ai-schemas[full]>=0.4.1. Locked dev deps bumped to clear pip-audit CVEs.
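As a sketch of the ffmpeg-resolution behavior described in the ToVideoNode bullet above — `resolve_ffmpeg_bin` and its exact fallback chain are illustrative assumptions, not the repo's actual implementation; only the `CUVIS_AI_FFMPEG_BIN` variable and the imageio-ffmpeg bundling are from the notes:

```python
import os


def resolve_ffmpeg_bin() -> str:
    """Pick the ffmpeg binary to spawn (hypothetical helper).

    Order assumed here: the CUVIS_AI_FFMPEG_BIN override wins (e.g. a build
    with h264_nvenc support), then the binary bundled in the imageio-ffmpeg
    wheel, then plain "ffmpeg" from PATH as a last resort.
    """
    override = os.environ.get("CUVIS_AI_FFMPEG_BIN")
    if override:
        return override
    try:
        import imageio_ffmpeg  # ships a static ffmpeg, no system install needed
        return imageio_ffmpeg.get_ffmpeg_exe()
    except ImportError:
        return "ffmpeg"
```

An env-var override like this keeps the default zero-setup while still allowing hardware encoders.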

Release v0.7.0

04 May 09:49
c5e734b


0.7.0 - 2026-05-04

  • Extracted the examples/ tree (70 files) into a new sister repo, cuvis-ai-cookbook. Removed docs/grpc/example-clients.md (now redundant) and rerouted all in-doc examples/... links to cookbook GitHub URLs.
  • Renamed CIETristimulusFalseRGBSelector → CIETristimulusRGBSelector. The CIE 1931 tristimulus integration produces a faithful RGB rendering, not a false-color rendering — the previous name was misleading. Updated the plugin registry (configs/plugins/cuvis_ai_builtin.yaml), the four SAM3 pipeline configs (configs/pipeline/sam3/sam3_*.yaml, including name: false_rgb → name: true_rgb, edge references, and false-rgb metadata tags/description), and the object-tracking notebooks under notebooks/use_cases/. No deprecation shim — direct rename.
  • Added _category (NodeCategory) and _tags (frozenset[NodeTag]) ClassVars on every auto-registered Node subclass across cuvis_ai/node, cuvis_ai/anomaly, and cuvis_ai/deciders (105 classes, including private/base classes such as _ScoreNormalizerBase and _BaseJsonWriterNode). New tests/test_node_categories.py enforces per-class declarations (via __dict__), requires at least one modality or lifecycle tag per node, and rejects any single category covering >70% of the catalog.
  • Added assets/node_icons/*.svg to package-data and bumped cuvis-ai-schemas[full]>=0.4.0.
  • Added Windows FFmpeg DLL bootstrap (cuvis_ai/__init__.py): walks PATH at import time and registers every directory containing an avcodec-*.dll via os.add_dll_directory, so torchcodec can load libtorchcodec_core*.dll on Python 3.8+ where PATH is no longer consulted for DLL dependencies. No-op on non-Windows.
  • Refactored cuvis_ai.anomaly and cuvis_ai.deciders modules into cuvis_ai.node.anomaly and cuvis_ai.node.deciders; the legacy locations now emit DeprecationWarning and re-export. cuvis_ai.node namespace exposes BinaryDecider, DeepSVDDProjection, LADGlobal, QuantileBinaryDecider, RXGlobal, RXPerBatch, TwoStageBinaryDecider, and ZScoreNormalizerGlobal directly. Plugin manifest and 15 pipeline YAMLs updated to the new class_name paths.
  • Added InsetComposer (cuvis_ai/node/compositing.py): pastes a fixed-size inset frame into a corner of a larger base frame for picture-in-picture video output. Pairs with ROIZoomNode (the inset is expected at final pixel size). Configurable corner (top-left / top-right / bottom-left / bottom-right), margin_px, border_px, and border_color; per-frame valid port leaves the base untouched when the ROI is stale.
  • Changed ToVideoNode to drop its unused video_path output port — it is a SINK and now matches the canonical contract used by NumpyFeatureWriterNode and the JSON writers (empty OUTPUT_SPECS, forward() returns {}).
  • Bumped pinned plugin tags to latest published patches: adaclip v0.1.2 → v0.1.3, ultralytics v0.1.0 → v0.1.1, deepeiou v0.1.0 → v0.1.1, trackeval v0.1.0 → v0.1.1, sam3 v0.1.3 → v0.1.5. Picks up _category / _tags palette annotations and the cuvis-ai-schemas>=0.4.0 floor across all upstream plugins.
  • Removed imantics from runtime deps (no longer used).
  • Added notebook to the dev extra so uv sync --extra dev is sufficient to run the use-case notebooks locally.
  • Synchronized the plugin-manifest test-tag expectations to v0.1.1 (tests/test_plugin_manifest.py) to match the latest published plugin tags.
  • Removed three orphaned test files that imported helpers from the extracted examples/object_tracking/ and examples/export_cu3s_false_rgb_video.py (now in cuvis-ai-cookbook); coverage now belongs in the cookbook repo.
  • Renamed docs/tutorials/ → docs/usecases/ → docs/use_cases/ (mkdocs nav heading and 9 doc pages); rewrote docs/use_cases/blood-perfusion.md to mirror the notebook section-for-section (NDVI flow + custom-node SpO2 example), dropping the unused PCA+HSV and band-limited PCA sections.
  • Moved notebooks/blood_perfusion/nd_blood_perfusion.ipynb to notebooks/use_cases/blood_perfusion.ipynb and split the object-tracking walkthrough into two notebooks under notebooks/use_cases/: object_tracking_passive.ipynb (text-prompt + SAM3 mask propagation on RGB and CIR video, sharing a cached rgb_video.mp4 + COCO tracking_results.json) and object_tracking_active.ipynb (invisible-spectral-ink active tracking via SPAM). Both follow the prepare → build → run → watch rhythm.
  • Added an Open-in-Colab badge and a bootstrap install cell to every use-case notebook so they run end-to-end on colab.research.google.com without a pre-cloned checkout. Badges link the notebook on main so released revisions stay reproducible.
  • Standardized "Cuvis.AI" casing across the docs (was a mix of "CUVIS.AI" / "cuvis.ai").
  • Added docs/javascripts/os-tab-sync.js to keep OS-tabbed install snippets (Linux / macOS / Windows) in sync across the install guide.
  • Documented the Graphviz system-binary requirement in docs/installation.md so dot-based pipeline visualization works out of the box on a fresh install (Python graphviz is just bindings).
  • Refreshed the README.md status badges to flat-square style and added a link to the cuvis-ai-agentic-skills sister repo.
  • Regenerated the docstring-coverage badge (assets/interrogate_badge.svg) at 96.0%.
  • Fixed auto_register_package registry-size floor regression after the anomaly/deciders move: pointed the auto-register walk at cuvis_ai.node.{anomaly,deciders} since the legacy shims re-export classes whose __module__ now points at the new locations.
  • Bumped cuvis-ai-core floor >=0.3.4 → >=0.5.0 and cuvis-ai-schemas[full] >=0.3.0 → >=0.4.0 to pick up the gRPC list_available_nodes metadata populator, the new MissingNodeMetadataWarning runtime check, and the NodeCategory / NodeTag / NodeInfo.{category,tags,icon_svg} schema additions. Widened requires-python from >=3.11,<3.12 to >=3.11,<3.14. Removed opencv-python-headless from the docs extra (not required by the current mkdocs config).
  • Updated the gRPC workflow helper docstring to describe the concrete session lifecycle (create / build / train / predict) instead of internal release vocabulary.
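The Windows FFmpeg DLL bootstrap above could look roughly like the following — `register_ffmpeg_dll_dirs` is a hypothetical name and the return value is illustrative; only the PATH walk, the avcodec-*.dll probe, and os.add_dll_directory are from the notes:

```python
import glob
import os
import sys


def register_ffmpeg_dll_dirs() -> list[str]:
    """Register every PATH directory holding FFmpeg DLLs (sketch).

    Since Python 3.8, Windows no longer consults PATH when resolving DLL
    dependencies, so torchcodec cannot find libtorchcodec_core*.dll unless
    its FFmpeg directories are registered explicitly. No-op off Windows.
    """
    if sys.platform != "win32":
        return []
    registered = []
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        if entry and glob.glob(os.path.join(entry, "avcodec-*.dll")):
            os.add_dll_directory(entry)  # make the dir visible to the DLL loader
            registered.append(entry)
    return registered
```

Running this once at package import time (as the bullet describes) keeps the fix invisible to callers.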

Release v0.6.0

27 Apr 15:50
3eb59da


0.6.0 - 2026-04-27

  • Removed examples/hugging_face/ example scripts (huggingface_api_demo.py, huggingface_local_demo.py, huggingface_gradient_training.py, test_huggingface_local_minimal.py) and the in-tree cuvis_ai/node/adaclip.py (AdaCLIPLocalNode). The released AdaCLIP plugin (cuvis_ai_adaclip via configs/plugins/adaclip.yaml) is unaffected.
  • Removed the ### AdaCLIP Nodes autodoc section from docs/api/nodes.md; it pointed at the deleted in-tree module.
  • Renamed PipelineComparisonVisualizer input port adaclip_scores → anomaly_scores (and the corresponding TensorBoard heatmap artifact adaclip_scores_heatmap_sample_* → anomaly_scores_heatmap_sample_*). The port is plugin-agnostic; updated tests/node/test_pipeline_visualization.py, cuvis_ai/node/losses.py docstring example, AdaCLIP pipeline/trainrun YAMLs, examples/adaclip/*_training.py, docs/tutorials/adaclip-workflow.md, and docs/how-to/monitoring-and-viz.md.
  • Registered the previously-omitted built-in nodes ROIZoomNode, MaskRobustifier, MaskToBBoxKalman, MaskedMeanSpectrum, and SpectrumPlotNode in configs/plugins/cuvis_ai_builtin.yaml so they are discoverable when the gRPC server runs in a separate venv.
  • Changed ToVideoNode encoder backend from OpenCV cv2.VideoWriter (FOURCC mp4v, uncontrollable bitrate, ~1.6 Mbps MPEG-4 Part 2 output) to a lazily-spawned ffmpeg subprocess that pipes raw rgb24 frames over stdin. Produces H.264 (libx264) at a configurable target bitrate (default 12M). Requires the ffmpeg binary on PATH.
  • Added video_codec (default "libx264") and bitrate (default "12M") parameters to ToVideoNode. Hardcoded -pix_fmt yuv420p plus -vf pad=ceil(iw/2)*2:ceil(ih/2)*2 to guarantee valid output dimensions for 4:2:0 chroma subsampling.
  • Removed ToVideoNode(codec=...) (FourCC) parameter — renamed to video_codec (ffmpeg codec name) since the value namespace changed. Pipeline YAML configs do not set codec= explicitly, so no existing config files need updates.
  • Added robust subprocess lifecycle handling to ToVideoNode: close() sends EOF, waits for mux completion, and raises RuntimeError with drained stderr on non-zero ffmpeg exit. Per-frame stdin.write catches BrokenPipeError and surfaces the encoder error rather than silently truncating the video.
  • Relocated cu3s false-RGB video exporter from examples/object_tracking/export_cu3s_false_rgb_video.py to examples/export_cu3s_false_rgb_video.py; updated tests/node/test_export_cu3s_false_rgb_video.py and tests/node/test_range_average_false_rgb_selector.py imports accordingly.
  • Added ffmpeg to CI apt-install steps (ci.yml, plugin-runtime-smoke.yml) so future integration tests can exercise the encoder end-to-end.
  • Consolidated NpyReader and NumpyFeatureWriterNode into a single cuvis_ai.node.numpy_file module, mirroring the existing json_file pattern. Updated imports, plugin manifest, tests, docs, and examples.
  • Registered TrackingPointerOverlayNode, BBoxPrompt, MaskPrompt, and TextPrompt in configs/plugins/cuvis_ai_builtin.yaml so they are discoverable via the plugin manifest.
  • Added ROIZoomNode (cuvis_ai/node/compositing.py): crops a bbox region from an RGB frame and resizes to fixed dimensions for zoom-inset video streams.
  • Added MaskRobustifier and MaskToBBoxKalman (cuvis_ai/node/mask_ops.py): morphological cleanup of binary masks and Kalman-smoothed bbox tracking from mask outputs.
  • Added SpectrumPlotNode (cuvis_ai/node/spectrum_plot.py): renders per-frame matplotlib line plots (reference vs tracked spectrum) to RGB frames for secondary spectrum video export.
  • Added MaskedMeanSpectrum (in cuvis_ai/node/spectral_extractor.py): computes per-frame mean spectrum of a hyperspectral cube over a binary mask; clarified batch semantics on BBoxSpectralExtractor / SpectralSignatureExtractor docstrings.
  • Updated examples/spectral_angle_mapper/spam_invisible_ink.py to emit synchronized side videos (ROI zoom via ROIZoomNode, spectrum plot via SpectrumPlotNode) alongside the main overlay, with conditional profiling summary and refactored output-directory handling.
  • Removed --rgb-xml-path argument from spam_invisible_ink_every_where.py and simplified downstream bootstrap dispatch.
  • Added --overlay-frame-id flag to examples/object_tracking/render_tracking_overlay.py, which renders the frame index in the top-left corner of each output frame.
  • Rewrote .github/copilot-instructions.md to clarify that this repo is the plugin node catalog (with cuvis-ai-core and cuvis-ai-schemas as sibling repos), and to document the Python 3.11 / uv / node-registration conventions.
  • Updated README.md to use a locally-hosted banner (docs/images/banner.png) instead of an external CDN URL, and revised the project description to emphasize extensibility and video pipelines. Minor capitalization fixes in CONTRIBUTING.md.
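The encoder swap described in the ToVideoNode bullets boils down to spawning ffmpeg with a rawvideo stdin input. A hedged sketch of the argv such a node might assemble — `build_ffmpeg_cmd` is illustrative; the flags mirror the ones named in the notes (rgb24 pipe, pad filter, yuv420p, configurable codec/bitrate):

```python
def build_ffmpeg_cmd(width: int, height: int, fps: float, out_path: str,
                     video_codec: str = "libx264", bitrate: str = "12M") -> list[str]:
    """Build the ffmpeg command for piping raw RGB frames to H.264 (sketch)."""
    return [
        "ffmpeg", "-y",
        # describe the raw frames that will arrive on stdin
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",
        # pad to even dimensions so 4:2:0 chroma subsampling is valid
        "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
        "-c:v", video_codec, "-b:v", bitrate,
        "-pix_fmt", "yuv420p",
        out_path,
    ]
```

The node would launch this with subprocess.Popen(stdin=PIPE) and write one width*height*3-byte buffer per frame.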

Release v0.5.0

10 Apr 14:41
fde417f


0.5.0 - 2026-04-10

  • Added TextPrompt, scheduled --prompt <text@frame_id> parsing, and updated local/gRPC SAM3 text-propagation examples to drive SAM3TextPropagation through a runtime text_prompt port instead of constructor hparams.
  • Added SAM3 prompt-free segment-everything tooling: SAM3SegmentEverything, local CLI wiring, and CU3S/video pipeline YAMLs for per-frame automatic mask generation with overlay/video/JSON outputs.
  • Added runtime SAM3 bbox propagation tooling: BBoxPrompt, local/gRPC bbox-propagation examples, and CU3S/video bbox-propagation pipeline YAMLs using scheduled --prompt <object_id:detection_id@frame_id> bbox updates from detection JSON.
  • Added runtime SAM3 mask propagation tooling: MaskPrompt, local/gRPC mask-propagation examples, and CU3S/video mask-propagation pipeline YAMLs using scheduled --prompt <object_id:detection_id@frame_id> mask updates from detection JSON.
  • Added SAM3 text-propagation pipeline configs and a new gRPC client (examples/grpc/sam3/sam3_text_propagation_client.py) supporting CU3S/video inputs plus plugin-manifest bootstrap.
  • Added SAM3 tracking workflow updates across propagation scripts and examples, including batch processing for full-folder video runs, per-node profiling, threshold/name-suffix options, and frame-lookup support in TrackingResultsReader.
  • Added NDVISelector for normalized-difference vegetation index band selection, ScalarHSVColormapNode for scalar-to-HSV colormap rendering, and DetectionCocoJsonNode for streaming COCO detection JSON output.
  • Added per-frame PCA dimensionality reduction node alongside the existing trainable variant.
  • Added Spectral Angle Mapper (SPAM) pipeline nodes and tooling for spectral-angle-based workflows.
  • Added BBoxSpectralExtractor, sparkline visualization helpers, and richer BBoxesOverlayNode annotations (draw_labels, frame_id).
  • Added occlusion and Poisson inpainting utilities with tests and object-tracking example integrations.
  • Added ByteTrack and tracker workflow expansion: spectral-aware association, COCO JSON sinks, threshold/JSON sweep tooling, spectral re-ID validation, RT-DETR/YOLO integration points, and overlay/transcoding helpers for rendered tracking outputs.
  • Added DeepEIOU plugin integration plus related preprocessing, NumPy writer, and tracking overlay renderer updates.
  • Added TrackEval preparation/evaluation tooling updates for aligned HOTA benchmarking workflows, including prediction frame-id passthrough in evaluator pipelines when supported by the metric plugin.
  • Added released tracking plugin manifests for ByteTrack, DeepEIOU, TrackEval, Ultralytics, RT-DETR, and a cuvis_ai_builtin manifest.
  • Added blood perfusion tutorial (docs/tutorials/blood-perfusion.md) and four example scripts under examples/blood_perfusion/ covering NDVI, PCA, and PCA-HSV visualizations.
  • Added plugin node catalog documentation page listing all available plugin nodes.
  • Added ~41 new test files covering PCA, NDVI, colormap, text prompt, manifest sync, spectral extractor, occlusion, video, tracking overlay, and more.
  • Changed tracking JSON export so CocoTrackMaskWriter can consume optional category_ids and category_semantics inputs, preserving the old single-category behavior when they are absent and writing multi-category categories headers when they are present.
  • Changed local SAM3 bbox propagation from the archived --detection single-seed flow to the same scheduled prompt contract used by mask propagation, including optional bbox prompt debug overlays.
  • Changed local SAM3 mask propagation from archived PNG prompts to detection-JSON-driven label-map prompting, and clarified that gRPC mask propagation sends masks directly through InputBatch.mask.
  • Renamed CocoTrackMaskWriter(category_name=...) to CocoTrackMaskWriter(default_category_name=...), changed the default fallback label to "object", and clarified that this constructor value is only the fallback label when category_semantics is absent.
  • Refactored and consolidated video/tracking utilities (including cuvis_ai/node/video.py), moved SAM3 examples into a dedicated subdirectory, and adopted shorthand port syntax across updated examples.
  • Refactored shared XML plugin helpers into cuvis_ai/utils/xml_plugin_parser.py.
  • Refactored prompt specs, parsers, and frame-hw resolution to deduplicate shared logic across text/bbox/mask propagation modes.
  • Reorganized AdaCLIP gRPC examples under examples/grpc/adaclip/ and updated gRPC workflow/docs utilities around explicit config resolution and session search paths.
  • Refined tracking output tooling with JSON IO/overlay updates, new CLI output-dir helpers, and expanded tracking regression tests.
  • Updated SAM3 text-propagation pipeline YAMLs and example docs to match runtime text prompting plus category-aware tracking JSON output.
  • Updated ByteTrack and tracking documentation, including multi-pipeline usage and FFmpeg/torchcodec setup guidance.
  • Updated plugin/trainrun configs to match current SAM3 and channel-selector runtime paths.
  • Updated SAM3 plugin to v0.1.3 and switched AdaCLIP plugin to released repository.
  • Updated docs: removed 7 redundant pages and cleaned up stale references across the documentation site.
  • Bumped cuvis-ai-schemas to >=0.3.0 and cuvis-ai-core to >=0.3.4.
  • Consolidated json_reader and json_writer modules into a single json_file module; updated all node registrations, pipeline configs, imports, and documentation.
  • Switched from opencv-python to opencv-python-headless.
  • Fixed SAM3 batch-runner control flow and mask-overlay color handling.
  • Fixed JSON reader robustness and pre-push regressions in manifest sync, CLI commands, and statistical-contract tests.
  • Fixed video/tracking fallback and output handling: VideoIterator now falls back to OpenCV when torchcodec is unavailable, output_video_path naming is normalized, and ByteTrack JSON output path heuristics were hardened.
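The scheduled --prompt <text@frame_id> syntax above could be parsed along these lines — `parse_scheduled_prompt` is a hypothetical helper, not the repo's actual parser:

```python
def parse_scheduled_prompt(spec: str) -> tuple[str, int]:
    """Split one --prompt value of the form '<text>@<frame_id>' (sketch).

    The prompt text itself may legitimately contain '@', so split on the
    *last* one and require the trailing part to be a frame number.
    """
    text, sep, frame = spec.rpartition("@")
    if not sep or not frame.isdigit():
        raise ValueError(f"expected '<text>@<frame_id>', got {spec!r}")
    return text, int(frame)
```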
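The Spectral Angle Mapper (SPAM) workflows above rest on the classic spectral-angle measure; a minimal version for reference, independent of the repo's actual node code:

```python
import numpy as np


def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle (radians) between two spectra: arccos of the normalized dot product.

    Small angles mean similar spectral shape regardless of illumination
    scale, which is what makes the measure useful for matching/tracking.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards float error
```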

Release v0.4.0

28 Feb 15:56
73cdd3f


0.4.0 - 2026-02-27

  • Added reusable WelfordAccumulator utility (cuvis_ai.utils.welford) for streaming mean/variance/covariance
  • Added resolve_reduce_dims() as shared module-level utility in binary_decider
  • Added TRAINABLE_BUFFERS class attribute — 5 nodes declare trainable buffers, base class handles buffer↔parameter conversion in freeze/unfreeze automatically
  • Added freeze() for LearnableChannelMixer matching existing unfreeze() override
  • Added ConcreteChannelMixer and LearnableChannelMixer exported from cuvis_ai.node
  • Added all 6 visualization nodes exported from cuvis_ai.node: AnomalyMask, RGBAnomalyMask, ScoreHeatmapVisualizer, CubeRGBVisualizer, PCAVisualization, PipelineComparisonVisualizer
  • Added insufficient-samples guard to RXGlobal and ScoreToLogit — raises early when training data has too few samples
  • Added plugin runtime smoke CI workflow (plugin-runtime-smoke.yml) with slow plugin tests
  • Added AdaCLIP standalone plugin manifest (configs/plugins/adaclip.yaml) and 6 example scripts
  • Added plugin contract, manifest sync, and runtime smoke test files
  • Added 8 new test files: test_welford, test_freeze_unfreeze, test_channel_selector_coverage, test_concrete_channel_mixer, test_pipeline_visualization, test_binary_decider, test_data_node, test_rx_per_batch
  • Added pytest markers (unit/integration/slow) on all 30 test files; session-scoped fixtures for expensive operations; pytest config consolidated in pytest.ini
  • Changed RXGlobal, ScoreToLogit, LADGlobal to use WelfordAccumulator instead of inline Welford implementations
  • Changed _compute_band_correlation_matrix to single-pass streaming with WelfordAccumulator
  • Changed TrainablePCA and LearnableChannelMixer to use streaming covariance + eigh instead of concat + SVD
  • Changed SoftChannelSelector variance init to use streaming WelfordAccumulator
  • Changed ZScoreNormalizerGlobal to use streaming WelfordAccumulator instead of concat + subsample
  • Changed supervised band selectors to use template method pattern, pulling shared forward() and statistical_initialization() into SupervisedSelectorBase
  • Changed YAML configs and docs to use new schema field names (hparams, class_name)
  • Changed EXECUTION_STAGE_VALIDATE references to VAL across gRPC docs
  • Changed .freezed references to .frozen in tests and docs (matches cuvis-ai-core rename)
  • Breaking: Reorganized channel selector and mixer nodes into separate files: band_selection.py + selector.py → channel_selector.py, concrete_selector.py + channel_mixer.py → channel_mixer.py, pca.py → dimensionality_reduction.py, visualizations.py + drcnn_tensorboard_viz.py → anomaly_visualization.py + pipeline_visualization.py
  • Breaking: Renamed 10 classes to reflect selector/mixer distinction: BandSelectorBase → ChannelSelectorBase, BaselineFalseRGBSelector → FixedWavelengthSelector, HighContrastBandSelector → HighContrastSelector, CIRFalseColorSelector → CIRSelector, SupervisedBandSelectorBase → SupervisedSelectorBase, SupervisedCIRBandSelector → SupervisedCIRSelector, SupervisedWindowedFalseRGBSelector → SupervisedWindowedSelector, SupervisedFullSpectrumBandSelector → SupervisedFullSpectrumSelector, ConcreteBandSelector → ConcreteChannelMixer, DRCNNTensorBoardViz → PipelineComparisonVisualizer
  • Breaking: Deleted old files — no deprecation stubs or re-exports
  • Removed redundant .to(device) calls from adaclip.py, anomaly_visualization.py, channel_selector.py — pipeline handles device placement
  • Changed pipeline configs reorganized into anomaly/ subdirectories (adaclip/, deep_svdd/, rx/)
  • Changed AdaCLIP pipeline node names and synced tuning values across 8 pipeline configs
  • Changed Deep SVDD configs, examples, and docs cleaned up for consistency
  • Changed CI workflows to install libgl1/libglib2.0-0 system dependencies for plugin imports
  • Updated 13 pipeline + 17 trainrun YAML configs with new class_name paths
  • Updated 11 example scripts with new import paths
  • Updated 19 documentation files with new class names, import paths, and new content for WelfordAccumulator and TRAINABLE_BUFFERS
  • Fixed pyproject.toml uv source field (develop to editable)
  • Fixed wavelength batching in supervised band selector _collect_training_data (flatten [B, C] to [C])
  • Fixed trainrun callback field name and channel_selector weights config
  • Fixed Werkzeug CVE-2026-27199 by bumping 3.1.5 → 3.1.6
  • Removed dead _quantile_threshold() and duplicate _resolve_reduce_dims() from TwoStageBinaryDecider
  • Removed frozen_nodes from pipeline configs and docs
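The WelfordAccumulator items above refer to the standard single-pass Welford recurrence. A minimal scalar sketch — the real cuvis_ai.utils.welford utility also streams covariance and presumably operates on tensors:

```python
class WelfordAccumulator:
    """Streaming mean/variance via Welford's algorithm (illustrative sketch)."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # uses the *updated* mean

    @property
    def variance(self) -> float:
        """Unbiased sample variance; 0.0 until at least two samples arrive."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Unlike concat-then-reduce, this never materializes the full sample buffer, which is why the 0.4.0 bullets swap it in across RXGlobal, LADGlobal, ZScoreNormalizerGlobal, and the PCA/mixer nodes.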

Release v0.3.0

11 Feb 10:04
ab860e1


0.3.0 - 2026-02-11

Added

  • Comprehensive documentation site (70+ pages): 6 tutorials, API reference, node catalog (50+ nodes across 11 categories), gRPC guides, config reference, how-to guides, plugin system docs, development guides, 20+ Mermaid/Graphviz diagrams
  • MkDocs Material theme with dark mode, versioned deployment via mike, custom branding (deep orange, Lato/Source Code Pro fonts, logo/favicon), numpy-style mkdocstrings
  • AnomalyPixelStatisticsMetric node in cuvis_ai.node.metrics (replaces duplicate SampleCustomMetrics in examples)
  • deep_svdd_factory.py utility module with ChannelConfig dataclass in cuvis_ai/utils/
  • Central plugin registry at configs/plugins/registry.yaml
  • configs/trainrun/default_statistical.yaml for statistical-only training workflows
  • CI/CD pipeline (ci.yml): test + coverage (Codecov), lint (ruff, interrogate), security (pip-audit, bandit, detect-secrets), typecheck (mypy) — replaces run_tests.yml
  • PyPI release workflow (pypi-release.yml): build validation, TestPyPI publish with smoke tests, production PyPI publish, versioned docs deploy to gh-pages
  • Dependabot configuration for GitHub Actions and pip dependencies
  • Automated test data downloader (scripts/download_data.py with download-data CLI entry point)
  • Documentation test suite (tests/docs/): link checker, CLI command tests, runnable code example validation
  • Git hooks for automated code quality checks (ruff format, module case checking)
  • LICENSE file (Apache-2.0 full text)
  • pytest.ini, codecov.yml, .secrets.baseline, baseline_coverage.txt

Changed

  • Breaking: TrainablePCA.__init__() now requires num_channels parameter; buffers initialized with correct shapes
  • Migrated type imports from cuvis_ai_core to new cuvis-ai-schemas package across all source files (PortSpec, Context, InputStream, Metric, ExecutionStage)
  • Renamed RXLogitHead → ScoreToLogit and moved from cuvis_ai.anomaly.rx_logit_head to cuvis_ai.node.conversion; updated all pipeline configs and examples
  • Renamed BaseDecider import to BinaryDecider in deciders module
  • Split configs/trainrun/default.yaml into default_statistical.yaml and default_gradient.yaml
  • Enhanced docstrings to 95%+ coverage across all public APIs (NumPy-style)
  • pyproject.toml updates for PyPI compliance:
    • Package name: cuvis_ai → cuvis-ai; license: SPDX Apache-2.0; author email updated
    • Python classifiers aligned to 3.11 only; ruff target py310 → py311
    • Added tool configs: [tool.interrogate] (95% threshold), [tool.mypy], [tool.bandit]
  • Dependencies: added cuvis-ai-schemas[full]>=0.1.0; loosened cuvis>=3.5.0 (was ==3.5.0); pinned cuvis-ai-core>=0.1.2; removed graphviz>=0.21
  • Dev deps: added twine, pip-audit, bandit, detect-secrets, pip-licenses, cyclonedx-bom, interrogate
  • Docs deps: added mike, pytest-check-links, pytest-md-report
  • restore-pipeline/restore-trainrun CLI entry points now point to cuvis_ai_core
  • cuvis-ai-core dependency handling: local editable path for dev, PyPI for release
  • README refactored; CONTRIBUTING.md enhanced with 7-step plugin contribution workflow
  • Examples updated: removed inline SampleCustomMetrics, updated all imports for schema migration and ScoreToLogit rename

Fixed

  • LAD detector reset(): buffers now initialized with proper shapes instead of torch.empty(0)
  • LAD detector unfreeze(): preserves device when converting buffers to parameters
  • TrainablePCA: 17 failing tests fixed by adding required num_channels parameter and proper buffer shapes; centralized fixture in tests/fixtures/mock_nodes.py
  • Node import paths updated for cuvis-ai-schemas migration
  • Config references: trainrun/default → default_statistical/default_gradient; RXLogitHead → ScoreToLogit in pipeline YAMLs
  • Documentation: broken internal links, outdated module references, empty placeholder content, incorrect script/path references
  • MkDocs build warnings and docstring formatting issues
  • Package metadata alignment for PyPI submission

Removed

  • restore_pipeline.md from repo root (replaced by docs site)
  • changelog.md (replaced by CHANGELOG.md with Keep a Changelog format)
  • .github/workflows/run_tests.yml (replaced by ci.yml)
  • docs/api/grpc_api.md and docs/reference/architecture.md (replaced by expanded docs sections)

v0.2.4

28 Jan 12:30
6ef2671


v0.2.4 Pre-release

What's Changed

  • fix(plugins): resolve plugin installation and restore pipeline issues by @nghorbani in #3

Full Changelog: v0.2.3...v0.2.4

v0.2.3

23 Jan 20:09
46ae974


  • Repository split into cuvis-ai-core (framework) and cuvis-ai (catalog at https://github.com/cubert-hyperspectral/cuvis-ai) with clear API boundaries and independent versioning
  • Framework extraction: base Node class, port system, Pipeline, training infrastructure, gRPC services, NodeRegistry, data infrastructure moved to cuvis-ai-core
  • Plugin system with Git repository and local filesystem support via extended NodeRegistry
  • Pydantic plugin configuration models: GitPluginConfig, LocalPluginConfig, PluginManifest with strict validation
  • Plugin caching in ~/.cuvis_plugins/ with intelligent cache reuse and version verification
  • Session-scoped plugin isolation: each gRPC session has independent plugin namespaces
  • New gRPC RPCs: LoadPlugins, ListLoadedPlugins, GetPluginInfo, ListAvailableNodes, ClearPluginCache
  • JSON transport pattern for plugin manifests via config_bytes field matching existing conventions
  • Test migration: 426 tests moved to cuvis-ai-core with reusable fixtures in tests/fixtures/
  • Bug fixes: DataLoader access violation resolved with num_workers=0, single-threaded gRPC servers for cuvis SDK compatibility
  • 421 tests passing in cuvis-ai-core, independent CI/CD capability established
  • Import pattern change: from cuvis_ai_core.* import ... for framework components

Full Changelog: v0.2.2...v0.2.3