Releases: cubert-hyperspectral/cuvis-ai
Release v0.7.2
0.7.2 - 2026-05-11
- CI: run the gh-pages `deploy-docs` job inside `cubertgmbh/cuvis_pyil:3.5.0-ubuntu24.04` with `libgl1`/`libglib2.0-0`/`ffmpeg` installed, matching the working `doc-build` job in `ci.yml`. The 0.7.1 deploy failed because the auto-generated Nodes-catalog generator imports `cuvis_ai.node`, which transitively initializes the `cuvis` package and aborts on a vanilla runner. No `cuvis_ai` code changes — this release exists solely to re-trigger the release pipeline so gh-pages actually updates.
Release v0.7.1
0.7.1 - 2026-05-11
- Docs IA restructure (ALL-5655). Nine top-level sections — Home, Get Started, Concepts, Tutorials, Catalogs, Workflows, Agentic Integration, Deployment, Reference — ordered as a learning path. Major moves: `user-guide/{installation,quickstart}` → `get-started/`; `how-to/*` → `workflows/`; `node-catalog/*` → `catalogs/nodes/*`; `config/*` → `reference/configuration/`; `api/*` → `reference/python-api/`; `plugin-system/*` → `reference/plugin-development/`; `development/*` → `reference/contributing/`; `grpc/*` + `use_cases/grpc-workflow.md` → `deployment/`. New: agentic-integration section, datasets catalog mirrored from HuggingFace, notebook tutorial gallery, `get-started/first-pipeline.md`, `workflows/{statistical,gradient}-training.md`. Removed `docs/use_cases/`, the `user-guide/configuration.md` stub, and the duplicate `plugin-system/overview.md`. All URLs change — no redirects. `mkdocs build --strict` is clean.
- Auto-generated Nodes catalog. `mkdocs-gen-files` + `scripts/generate_node_catalog.py` build a category-grouped index page from `NodeRegistry` with per-category SVG icons (`docs/images/node-categories/`) and a client-side filter (`docs/javascripts/node_catalog_filter.js`). Replaces nine hand-maintained `docs/catalogs/nodes/*.md` pages. Added the `scripts/math_directive_hook.py` MkDocs hook (RST `.. math::` → MathJax; auto-hide TOC on catalog pages).
- Lentils Dinomaly use-case notebook (`notebooks/use_cases/lentils_dinomaly.ipynb`) — HF dataset integration and H.264 video export.
- Helper-scripts package renamed `tools/` → `scripts/`. Updated `[project.scripts]` (`create-stubs = "scripts.generate_node_port_stubs:main"`), MkDocs macros, codecov ignore, `.gitignore`, git hooks, and copilot-instructions.
- Removed `configs/plugins/registry.yaml`. Use per-plugin manifests (`configs/plugins/<plugin>.yaml`).
- Bundled ffmpeg via `imageio-ffmpeg`. `ToVideoNode` resolves the binary from the wheel by default — no system install needed. Override with `CUVIS_AI_FFMPEG_BIN` for `h264_nvenc`/`vaapi`/`amf`. Blood-perfusion notebook MP4 export gains `+faststart`.
- Site rebrand to Cubert CI. `palette: custom` lets `docs/stylesheets/extra.css` drive both Material schemes; Rajdhani headings via Google Fonts `@import`, Roboto / Roboto Mono for body and code. Mermaid theme variables updated to match. Mermaid diagrams in `docs/concepts/*.md` switched from inline `style X fill:` to `classDef` so node colors stay legible in dark mode.
- mkdocs plugin swap. Dropped `mkdocs-literate-nav`; added `mkdocs-macros-plugin` (drives `scripts/docs_macros.py`) and `mkdocs-llmstxt` (emits `llms-full.txt`). `mkdocs-gen-files` re-added for the Nodes-catalog generator. API reference consolidated into the Nodes catalog. Install guide gains a Cuvis SDK section.
- Renamed the local blood-perfusion dataset folder `data/XMR_Blood_Perfusion/` → `data/XMR_Demo_Blood_Perfusion/` following the HuggingFace rename to `cubert-gmbh/XMR_Demo_Blood_Perfusion`. Users with the old folder can rename it in place; otherwise `uv run dataset download blood_perfusion` re-fetches ~7 GB.
- Fixed broken cross-doc links surfaced by `mkdocs build --strict`.
- Dependency floors: `cuvis-ai-core>=0.5.3` (Blood_Perfusion registry repoint), `cuvis-ai-schemas[full]>=0.4.1`. Locked dev deps bumped to clear pip-audit CVEs.
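The bundled-ffmpeg resolution described in this release can be sketched as below. The helper name and the lazy-import structure are illustrative, not `ToVideoNode`'s actual internals; `imageio_ffmpeg.get_ffmpeg_exe()` is the wheel-bundled fallback.

```python
import os


def resolve_ffmpeg_binary() -> str:
    """Return the ffmpeg executable path for video encoding.

    Sketch of the resolution order from the release notes: an explicit
    CUVIS_AI_FFMPEG_BIN override wins (e.g. to use h264_nvenc builds);
    otherwise fall back to the binary bundled in the imageio-ffmpeg wheel.
    """
    override = os.environ.get("CUVIS_AI_FFMPEG_BIN")
    if override:
        return override
    # Imported lazily so the override path works without the package.
    import imageio_ffmpeg

    return imageio_ffmpeg.get_ffmpeg_exe()
```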
Release v0.7.0
0.7.0 - 2026-05-04
- Extracted the `examples/` tree (70 files) into a new sister repo, `cuvis-ai-cookbook`. Removed `docs/grpc/example-clients.md` (now redundant) and rerouted all in-doc `examples/...` links to cookbook GitHub URLs.
- Renamed `CIETristimulusFalseRGBSelector` → `CIETristimulusRGBSelector`. The CIE 1931 tristimulus integration produces a faithful RGB rendering, not a false-color rendering — the previous name was misleading. Updated the plugin registry (`configs/plugins/cuvis_ai_builtin.yaml`), the four SAM3 pipeline configs (`configs/pipeline/sam3/sam3_*.yaml`, including `name: false_rgb` → `name: true_rgb`, edge references, and `false-rgb` metadata tags/description), and the object-tracking notebooks under `notebooks/use_cases/`. No deprecation shim — direct rename.
- Added `_category` (`NodeCategory`) and `_tags` (`frozenset[NodeTag]`) ClassVars on every auto-registered `Node` subclass across `cuvis_ai/node`, `cuvis_ai/anomaly`, and `cuvis_ai/deciders` (105 classes, including private/base classes such as `_ScoreNormalizerBase` and `_BaseJsonWriterNode`). New `tests/test_node_categories.py` enforces per-class declarations (via `__dict__`), requires at least one modality or lifecycle tag per node, and rejects any single category covering >70% of the catalog.
- Added `assets/node_icons/*.svg` to package-data and bumped `cuvis-ai-schemas[full]>=0.4.0`.
- Added a Windows FFmpeg DLL bootstrap (`cuvis_ai/__init__.py`): walks `PATH` at import time and registers every directory containing an `avcodec-*.dll` via `os.add_dll_directory`, so torchcodec can load `libtorchcodec_core*.dll` on Python 3.8+, where `PATH` is no longer consulted for DLL dependencies. No-op on non-Windows.
- Refactored the `cuvis_ai.anomaly` and `cuvis_ai.deciders` modules into `cuvis_ai.node.anomaly` and `cuvis_ai.node.deciders`; the legacy locations now emit `DeprecationWarning` and re-export. The `cuvis_ai.node` namespace exposes `BinaryDecider`, `DeepSVDDProjection`, `LADGlobal`, `QuantileBinaryDecider`, `RXGlobal`, `RXPerBatch`, `TwoStageBinaryDecider`, and `ZScoreNormalizerGlobal` directly. Plugin manifest and 15 pipeline YAMLs updated to the new `class_name` paths.
- Added `InsetComposer` (`cuvis_ai/node/compositing.py`): pastes a fixed-size inset frame into a corner of a larger base frame for picture-in-picture video output. Pairs with `ROIZoomNode` (the inset is expected at final pixel size). Configurable corner (top-left/top-right/bottom-left/bottom-right), `margin_px`, `border_px`, and `border_color`; a per-frame `valid` port leaves the base untouched when the ROI is stale.
- Changed `ToVideoNode` to drop its unused `video_path` output port — it is a SINK and now matches the canonical contract used by `NumpyFeatureWriterNode` and the JSON writers (empty `OUTPUT_SPECS`, `forward()` returns `{}`).
- Bumped pinned plugin tags to the latest published patches: adaclip v0.1.2 → v0.1.3, ultralytics v0.1.0 → v0.1.1, deepeiou v0.1.0 → v0.1.1, trackeval v0.1.0 → v0.1.1, sam3 v0.1.3 → v0.1.5. Picks up `_category`/`_tags` palette annotations and the `cuvis-ai-schemas>=0.4.0` floor across all upstream plugins.
- Removed `imantics` from runtime deps (no longer used).
- Added `notebook` to the `dev` extra so `uv sync --extra dev` is sufficient to run the use-case notebooks locally.
- Synchronized the plugin-manifest test-tag expectations to v0.1.1 (`tests/test_plugin_manifest.py`) to match the latest published plugin tags.
- Removed three orphaned test files that imported helpers from the extracted `examples/object_tracking/` and `examples/export_cu3s_false_rgb_video.py` (now in `cuvis-ai-cookbook`); coverage now belongs in the cookbook repo.
- Renamed `docs/tutorials/` → `docs/usecases/` → `docs/use_cases/` (mkdocs nav heading and 9 doc pages); rewrote `docs/use_cases/blood-perfusion.md` to mirror the notebook section-for-section (NDVI flow + custom-node SpO2 example), dropping the unused PCA+HSV and band-limited PCA sections.
- Moved `notebooks/blood_perfusion/nd_blood_perfusion.ipynb` to `notebooks/use_cases/blood_perfusion.ipynb` and split the object-tracking walkthrough into two notebooks under `notebooks/use_cases/`: `object_tracking_passive.ipynb` (text-prompt + SAM3 mask propagation on RGB and CIR video, sharing a cached `rgb_video.mp4` + COCO `tracking_results.json`) and `object_tracking_active.ipynb` (invisible-spectral-ink active tracking via SPAM). Both follow the prepare → build → run → watch rhythm.
- Added an Open-in-Colab badge and a bootstrap install cell to every use-case notebook so they run end-to-end on colab.research.google.com without a pre-cloned checkout. Badges link the notebook on `main` so released revisions stay reproducible.
- Standardized "Cuvis.AI" casing across the docs (was a mix of "CUVIS.AI" / "cuvis.ai").
- Added `docs/javascripts/os-tab-sync.js` to keep OS-tabbed install snippets (Linux / macOS / Windows) in sync across the install guide.
- Documented the Graphviz system-binary requirement in `docs/installation.md` so `dot`-based pipeline visualization works out of the box on a fresh install (the Python `graphviz` package is just bindings).
- Refreshed the `README.md` status badges to flat-square style and added a link to the `cuvis-ai-agentic-skills` sister repo.
- Regenerated the docstring-coverage badge (`assets/interrogate_badge.svg`) at 96.0%.
- Fixed an `auto_register_package` registry-size floor regression after the anomaly/deciders move: pointed the auto-register walk at `cuvis_ai.node.{anomaly,deciders}`, since the legacy shims re-export classes whose `__module__` now points at the new locations.
- Bumped the `cuvis-ai-core` floor `>=0.3.4` → `>=0.5.0` and `cuvis-ai-schemas[full]` `>=0.3.0` → `>=0.4.0` to pick up the gRPC `list_available_nodes` metadata populator, the new `MissingNodeMetadataWarning` runtime check, and the `NodeCategory`/`NodeTag`/`NodeInfo.{category,tags,icon_svg}` schema additions. Widened `requires-python` from `>=3.11,<3.12` to `>=3.11,<3.14`. Removed `opencv-python-headless` from the docs extra (not required by the current mkdocs config).
- Updated the gRPC workflow helper docstring to describe the concrete session lifecycle (create / build / train / predict) instead of internal release vocabulary.
Release v0.6.0
0.6.0 - 2026-04-27
- Removed the `examples/hugging_face/` example scripts (`huggingface_api_demo.py`, `huggingface_local_demo.py`, `huggingface_gradient_training.py`, `test_huggingface_local_minimal.py`) and the in-tree `cuvis_ai/node/adaclip.py` (`AdaCLIPLocalNode`). The released AdaCLIP plugin (`cuvis_ai_adaclip` via `configs/plugins/adaclip.yaml`) is unaffected.
- Removed the `### AdaCLIP Nodes` autodoc section from `docs/api/nodes.md`; it pointed at the deleted in-tree module.
- Renamed the `PipelineComparisonVisualizer` input port `adaclip_scores` → `anomaly_scores` (and the corresponding TensorBoard heatmap artifact `adaclip_scores_heatmap_sample_*` → `anomaly_scores_heatmap_sample_*`). The port is plugin-agnostic; updated `tests/node/test_pipeline_visualization.py`, the `cuvis_ai/node/losses.py` docstring example, AdaCLIP pipeline/trainrun YAMLs, `examples/adaclip/*_training.py`, `docs/tutorials/adaclip-workflow.md`, and `docs/how-to/monitoring-and-viz.md`.
- Registered the previously omitted built-in nodes `ROIZoomNode`, `MaskRobustifier`, `MaskToBBoxKalman`, `MaskedMeanSpectrum`, and `SpectrumPlotNode` in `configs/plugins/cuvis_ai_builtin.yaml` so they are discoverable when the gRPC server runs in a separate venv.
- Changed the `ToVideoNode` encoder backend from OpenCV `cv2.VideoWriter` (FOURCC `mp4v`, uncontrollable bitrate, ~1.6 Mbps MPEG-4 Part 2 output) to a lazily spawned `ffmpeg` subprocess that pipes raw `rgb24` frames over stdin. Produces H.264 (libx264) at a configurable target bitrate (default `12M`). Requires the `ffmpeg` binary on `PATH`.
- Added `video_codec` (default `"libx264"`) and `bitrate` (default `"12M"`) parameters to `ToVideoNode`. Hardcoded `-pix_fmt yuv420p` plus `-vf pad=ceil(iw/2)*2:ceil(ih/2)*2` to guarantee valid output dimensions for 4:2:0 chroma subsampling.
- Removed the `ToVideoNode(codec=...)` (FourCC) parameter — renamed to `video_codec` (ffmpeg codec name) since the value namespace changed. Pipeline YAML configs do not set `codec=` explicitly, so no existing config files need updates.
- Added robust subprocess lifecycle handling to `ToVideoNode`: `close()` sends EOF, waits for mux completion, and raises `RuntimeError` with drained stderr on non-zero ffmpeg exit. Per-frame `stdin.write` catches `BrokenPipeError` and surfaces the encoder error rather than silently truncating the video.
- Relocated the cu3s false-RGB video exporter from `examples/object_tracking/export_cu3s_false_rgb_video.py` to `examples/export_cu3s_false_rgb_video.py`; updated `tests/node/test_export_cu3s_false_rgb_video.py` and `tests/node/test_range_average_false_rgb_selector.py` imports accordingly.
- Added `ffmpeg` to CI apt-install steps (`ci.yml`, `plugin-runtime-smoke.yml`) so future integration tests can exercise the encoder end-to-end.
- Consolidated `NpyReader` and `NumpyFeatureWriterNode` into a single `cuvis_ai.node.numpy_file` module, mirroring the existing `json_file` pattern. Updated imports, plugin manifest, tests, docs, and examples.
- Registered `TrackingPointerOverlayNode`, `BBoxPrompt`, `MaskPrompt`, and `TextPrompt` in `configs/plugins/cuvis_ai_builtin.yaml` so they are discoverable via the plugin manifest.
- Added `ROIZoomNode` (`cuvis_ai/node/compositing.py`): crops a bbox region from an RGB frame and resizes it to fixed dimensions for zoom-inset video streams.
- Added `MaskRobustifier` and `MaskToBBoxKalman` (`cuvis_ai/node/mask_ops.py`): morphological cleanup of binary masks and Kalman-smoothed bbox tracking from mask outputs.
- Added `SpectrumPlotNode` (`cuvis_ai/node/spectrum_plot.py`): renders per-frame matplotlib line plots (reference vs. tracked spectrum) to RGB frames for secondary spectrum video export.
- Added `MaskedMeanSpectrum` (in `cuvis_ai/node/spectral_extractor.py`): computes the per-frame mean spectrum of a hyperspectral cube over a binary mask; clarified batch semantics in the `BBoxSpectralExtractor`/`SpectralSignatureExtractor` docstrings.
- Updated `examples/spectral_angle_mapper/spam_invisible_ink.py` to emit synchronized side videos (ROI zoom via `ROIZoomNode`, spectrum plot via `SpectrumPlotNode`) alongside the main overlay, with a conditional profiling summary and refactored output-directory handling.
- Removed the `--rgb-xml-path` argument from `spam_invisible_ink_every_where.py` and simplified downstream bootstrap dispatch.
- Added an `--overlay-frame-id` flag to `examples/object_tracking/render_tracking_overlay.py`, which renders the frame index in the top-left corner of each output frame.
- Rewrote `.github/copilot-instructions.md` to clarify that this repo is the plugin node catalog (with `cuvis-ai-core` and `cuvis-ai-schemas` as sibling repos), and to document the Python 3.11 / uv / node-registration conventions.
- Updated `README.md` to use a locally hosted banner (`docs/images/banner.png`) instead of an external CDN URL, and revised the project description to emphasize extensibility and video pipelines. Minor capitalization fixes in `CONTRIBUTING.md`.
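The rgb24-over-stdin encoder backend described above can be sketched like this. Function names are illustrative, not `ToVideoNode`'s API; the ffmpeg flags match those named in the release notes.

```python
import subprocess


def build_ffmpeg_cmd(path, width, height, fps=30,
                     video_codec="libx264", bitrate="12M"):
    """Assemble an ffmpeg command that reads raw rgb24 frames from stdin."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",      # raw input format
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                                   # frames arrive via stdin
        "-c:v", video_codec, "-b:v", bitrate,        # H.264 at target bitrate
        "-pix_fmt", "yuv420p",                       # 4:2:0 chroma output
        "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",      # force even dimensions
        str(path),
    ]


def open_ffmpeg_writer(path, width, height, **kwargs):
    """Lazily spawn the encoder; write frame.tobytes() to proc.stdin per frame."""
    return subprocess.Popen(build_ffmpeg_cmd(path, width, height, **kwargs),
                            stdin=subprocess.PIPE, stderr=subprocess.PIPE)
```

On close, the caller would send EOF with `proc.stdin.close()`, `proc.wait()`, and inspect drained stderr on a non-zero exit, mirroring the lifecycle handling the release notes describe.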
Release v0.5.0
0.5.0 - 2026-04-10
- Added `TextPrompt`, scheduled `--prompt <text@frame_id>` parsing, and updated local/gRPC SAM3 text-propagation examples to drive `SAM3TextPropagation` through a runtime `text_prompt` port instead of constructor hparams.
- Added SAM3 prompt-free segment-everything tooling: `SAM3SegmentEverything`, local CLI wiring, and CU3S/video pipeline YAMLs for per-frame automatic mask generation with overlay/video/JSON outputs.
- Added runtime SAM3 bbox propagation tooling: `BBoxPrompt`, local/gRPC bbox-propagation examples, and CU3S/video bbox-propagation pipeline YAMLs using scheduled `--prompt <object_id:detection_id@frame_id>` bbox updates from detection JSON.
- Added runtime SAM3 mask propagation tooling: `MaskPrompt`, local/gRPC mask-propagation examples, and CU3S/video mask-propagation pipeline YAMLs using scheduled `--prompt <object_id:detection_id@frame_id>` mask updates from detection JSON.
- Added SAM3 text-propagation pipeline configs and a new gRPC client (`examples/grpc/sam3/sam3_text_propagation_client.py`) supporting CU3S/video inputs plus plugin-manifest bootstrap.
- Added SAM3 tracking workflow updates across propagation scripts and examples, including batch processing for full-folder video runs, per-node profiling, threshold/name-suffix options, and frame-lookup support in `TrackingResultsReader`.
- Added `NDVISelector` for normalized-difference vegetation index band selection, `ScalarHSVColormapNode` for scalar-to-HSV colormap rendering, and `DetectionCocoJsonNode` for streaming COCO detection JSON output.
- Added a per-frame `PCA` dimensionality-reduction node alongside the existing trainable variant.
- Added Spectral Angle Mapper (SPAM) pipeline nodes and tooling for spectral-angle-based workflows.
- Added `BBoxSpectralExtractor`, sparkline visualization helpers, and richer `BBoxesOverlayNode` annotations (`draw_labels`, `frame_id`).
- Added occlusion and Poisson inpainting utilities with tests and object-tracking example integrations.
- Added ByteTrack and tracker workflow expansion: spectral-aware association, COCO JSON sinks, threshold/JSON sweep tooling, spectral re-ID validation, RT-DETR/YOLO integration points, and overlay/transcoding helpers for rendered tracking outputs.
- Added DeepEIOU plugin integration plus related preprocessing, NumPy writer, and tracking overlay renderer updates.
- Added TrackEval preparation/evaluation tooling updates for aligned HOTA benchmarking workflows, including prediction frame-id passthrough in evaluator pipelines when supported by the metric plugin.
- Added released tracking plugin manifests for ByteTrack, DeepEIOU, TrackEval, Ultralytics, RT-DETR, and a `cuvis_ai_builtin` manifest.
- Added a blood-perfusion tutorial (`docs/tutorials/blood-perfusion.md`) and four example scripts under `examples/blood_perfusion/` covering NDVI, PCA, and PCA-HSV visualizations.
- Added a plugin node catalog documentation page listing all available plugin nodes.
- Added ~41 new test files covering PCA, NDVI, colormap, text prompt, manifest sync, spectral extractor, occlusion, video, tracking overlay, and more.
- Changed tracking JSON export so `CocoTrackMaskWriter` can consume optional `category_ids` and `category_semantics` inputs, preserving the old single-category behavior when they are absent and writing multi-category `categories` headers when they are present.
- Changed local SAM3 bbox propagation from the archived `--detection` single-seed flow to the same scheduled prompt contract used by mask propagation, including optional bbox-prompt debug overlays.
- Changed local SAM3 mask propagation from archived PNG prompts to detection-JSON-driven label-map prompting, and clarified that gRPC mask propagation sends masks directly through `InputBatch.mask`.
- Renamed `CocoTrackMaskWriter(category_name=...)` to `CocoTrackMaskWriter(default_category_name=...)`, changed the default fallback label to `"object"`, and clarified that this constructor value is only the fallback label when `category_semantics` is absent.
- Refactored and consolidated video/tracking utilities (including `cuvis_ai/node/video.py`), moved SAM3 examples into a dedicated subdirectory, and adopted shorthand port syntax across updated examples.
- Refactored shared XML plugin helpers into `cuvis_ai/utils/xml_plugin_parser.py`.
- Refactored prompt specs, parsers, and frame-hw resolution to deduplicate shared logic across text/bbox/mask propagation modes.
- Reorganized AdaCLIP gRPC examples under `examples/grpc/adaclip/` and updated gRPC workflow/docs utilities around explicit config resolution and session search paths.
- Refined tracking output tooling with JSON IO/overlay updates, new CLI output-dir helpers, and expanded tracking regression tests.
- Updated SAM3 text-propagation pipeline YAMLs and example docs to match runtime text prompting plus category-aware tracking JSON output.
- Updated ByteTrack and tracking documentation, including multi-pipeline usage and FFmpeg/torchcodec setup guidance.
- Updated plugin/trainrun configs to match current SAM3 and channel-selector runtime paths.
- Updated the SAM3 plugin to v0.1.3 and switched the AdaCLIP plugin to its released repository.
- Updated docs: removed 7 redundant pages and cleaned up stale references across the documentation site.
- Bumped `cuvis-ai-schemas` to `>=0.3.0` and `cuvis-ai-core` to `>=0.3.4`.
- Consolidated the `json_reader` and `json_writer` modules into a single `json_file` module; updated all node registrations, pipeline configs, imports, and documentation.
- Switched from `opencv-python` to `opencv-python-headless`.
- Fixed SAM3 batch-runner control flow and mask-overlay color handling.
- Fixed JSON reader robustness and pre-push regressions in manifest sync, CLI commands, and statistical-contract tests.
- Fixed video/tracking fallback and output handling: `VideoIterator` now falls back to OpenCV when torchcodec is unavailable, `output_video_path` naming is normalized, and ByteTrack JSON output path heuristics were hardened.
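The band math behind NDVI selection, as added in this release, can be sketched as follows. The function signature and band choices (670 nm red, 800 nm NIR) are illustrative, not `NDVISelector`'s actual API.

```python
import numpy as np


def ndvi(cube: np.ndarray, wavelengths: np.ndarray,
         red_nm: float = 670.0, nir_nm: float = 800.0) -> np.ndarray:
    """Compute NDVI = (NIR - RED) / (NIR + RED) from an (H, W, C) cube.

    Picks the nearest available band to each requested wavelength.
    """
    red = cube[..., int(np.argmin(np.abs(wavelengths - red_nm)))]
    nir = cube[..., int(np.argmin(np.abs(wavelengths - nir_nm)))]
    denom = nir + red
    # Guard against division by zero on dark pixels.
    return np.where(denom != 0, (nir - red) / np.where(denom == 0, 1, denom), 0.0)
```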
Release v0.4.0
0.4.0 - 2026-02-27
- Added a reusable `WelfordAccumulator` utility (`cuvis_ai.utils.welford`) for streaming mean/variance/covariance
- Added `resolve_reduce_dims()` as a shared module-level utility in `binary_decider`
- Added a `TRAINABLE_BUFFERS` class attribute — 5 nodes declare trainable buffers, and the base class handles buffer↔parameter conversion in freeze/unfreeze automatically
- Added `freeze()` for `LearnableChannelMixer`, matching the existing `unfreeze()` override
- Added `ConcreteChannelMixer` and `LearnableChannelMixer` exports from `cuvis_ai.node`
- Added all 6 visualization nodes to the `cuvis_ai.node` exports: `AnomalyMask`, `RGBAnomalyMask`, `ScoreHeatmapVisualizer`, `CubeRGBVisualizer`, `PCAVisualization`, `PipelineComparisonVisualizer`
- Added an insufficient-samples guard to `RXGlobal` and `ScoreToLogit` — raises early when the training data has too few samples
- Added a plugin runtime smoke CI workflow (`plugin-runtime-smoke.yml`) with slow plugin tests
- Added an AdaCLIP standalone plugin manifest (`configs/plugins/adaclip.yaml`) and 6 example scripts
- Added plugin contract, manifest sync, and runtime smoke test files
- Added 8 new test files: `test_welford`, `test_freeze_unfreeze`, `test_channel_selector_coverage`, `test_concrete_channel_mixer`, `test_pipeline_visualization`, `test_binary_decider`, `test_data_node`, `test_rx_per_batch`
- Added pytest markers (`unit`/`integration`/`slow`) to all 30 test files; session-scoped fixtures for expensive operations; pytest config consolidated in `pytest.ini`
- Changed `RXGlobal`, `ScoreToLogit`, and `LADGlobal` to use `WelfordAccumulator` instead of inline Welford implementations
- Changed `_compute_band_correlation_matrix` to single-pass streaming with `WelfordAccumulator`
- Changed `TrainablePCA` and `LearnableChannelMixer` to use streaming covariance + `eigh` instead of concat + SVD
- Changed `SoftChannelSelector` variance init to use a streaming `WelfordAccumulator`
- Changed `ZScoreNormalizerGlobal` to use a streaming `WelfordAccumulator` instead of concat + subsample
- Changed supervised band selectors to the template-method pattern, pulling shared `forward()` and `statistical_initialization()` into `SupervisedSelectorBase`
- Changed YAML configs and docs to use the new schema field names (`hparams`, `class_name`)
- Changed `EXECUTION_STAGE_VALIDATE` references to `VAL` across gRPC docs
- Changed `.freezed` references to `.frozen` in tests and docs (matches the cuvis-ai-core rename)
- Breaking: Reorganized channel selector and mixer nodes into separate files: `band_selection.py` + `selector.py` → `channel_selector.py`, `concrete_selector.py` + `channel_mixer.py` → `channel_mixer.py`, `pca.py` → `dimensionality_reduction.py`, `visualizations.py` + `drcnn_tensorboard_viz.py` → `anomaly_visualization.py` + `pipeline_visualization.py`
- Breaking: Renamed classes to reflect the selector/mixer distinction: `BandSelectorBase` → `ChannelSelectorBase`, `BaselineFalseRGBSelector` → `FixedWavelengthSelector`, `HighContrastBandSelector` → `HighContrastSelector`, `CIRFalseColorSelector` → `CIRSelector`, `SupervisedBandSelectorBase` → `SupervisedSelectorBase`, `SupervisedCIRBandSelector` → `SupervisedCIRSelector`, `SupervisedWindowedFalseRGBSelector` → `SupervisedWindowedSelector`, `SupervisedFullSpectrumBandSelector` → `SupervisedFullSpectrumSelector`, `ConcreteBandSelector` → `ConcreteChannelMixer`, `DRCNNTensorBoardViz` → `PipelineComparisonVisualizer`
- Breaking: Deleted old files — no deprecation stubs or re-exports
- Removed redundant `.to(device)` calls from `adaclip.py`, `anomaly_visualization.py`, and `channel_selector.py` — the pipeline handles device placement
- Changed: pipeline configs reorganized into `anomaly/` subdirectories (`adaclip/`, `deep_svdd/`, `rx/`)
- Changed AdaCLIP pipeline node names and synced tuning values across 8 pipeline configs
- Changed: Deep SVDD configs, examples, and docs cleaned up for consistency
- Changed CI workflows to install the `libgl1`/`libglib2.0-0` system dependencies for plugin imports
- Updated 13 pipeline + 17 trainrun YAML configs with new `class_name` paths
- Updated 11 example scripts with new import paths
- Updated 19 documentation files with new class names, import paths, and new content for `WelfordAccumulator` and `TRAINABLE_BUFFERS`
- Fixed the `pyproject.toml` uv source field (`develop` → `editable`)
- Fixed wavelength batching in the supervised band selector `_collect_training_data` (flatten `[B, C]` to `[C]`)
- Fixed the trainrun callback field name and `channel_selector` weights config
- Fixed Werkzeug CVE-2026-27199 by bumping 3.1.5 → 3.1.6
- Removed dead `_quantile_threshold()` and duplicate `_resolve_reduce_dims()` from `TwoStageBinaryDecider`
- Removed `frozen_nodes` from pipeline configs and docs
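The streaming-statistics idea behind `WelfordAccumulator` can be sketched as below. This is a minimal scalar version of Welford's online algorithm; the real class in `cuvis_ai.utils.welford` also covers covariance and batched updates.

```python
class Welford:
    """Streaming mean/variance via Welford's online algorithm.

    Updates in O(1) per sample without storing the data, which is why the
    release replaces concat-and-subsample statistics with it.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # uses the *updated* mean

    @property
    def variance(self) -> float:
        """Sample variance (Bessel-corrected); 0.0 with fewer than 2 samples."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```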
Release v0.3.0
0.3.0 - 2026-02-11
Added
- Comprehensive documentation site (70+ pages): 6 tutorials, API reference, node catalog (50+ nodes across 11 categories), gRPC guides, config reference, how-to guides, plugin system docs, development guides, 20+ Mermaid/Graphviz diagrams
- MkDocs Material theme with dark mode, versioned deployment via mike, custom branding (deep orange, Lato/Source Code Pro fonts, logo/favicon), numpy-style mkdocstrings
- `AnomalyPixelStatisticsMetric` node in `cuvis_ai.node.metrics` (replaces the duplicate `SampleCustomMetrics` in examples)
- `deep_svdd_factory.py` utility module with a `ChannelConfig` dataclass in `cuvis_ai/utils/`
- Central plugin registry at `configs/plugins/registry.yaml`
- `configs/trainrun/default_statistical.yaml` for statistical-only training workflows
- CI/CD pipeline (`ci.yml`): test + coverage (Codecov), lint (ruff, interrogate), security (pip-audit, bandit, detect-secrets), typecheck (mypy) — replaces `run_tests.yml`
- PyPI release workflow (`pypi-release.yml`): build validation, TestPyPI publish with smoke tests, production PyPI publish, versioned docs deploy to gh-pages
- Dependabot configuration for GitHub Actions and pip dependencies
- Automated test data downloader (`scripts/download_data.py` with a `download-data` CLI entry point)
- Documentation test suite (`tests/docs/`): link checker, CLI command tests, runnable code example validation
- Git hooks for automated code quality checks (ruff format, module case checking)
- `LICENSE` file (Apache-2.0 full text)
- `pytest.ini`, `codecov.yml`, `.secrets.baseline`, `baseline_coverage.txt`
Changed
- Breaking: `TrainablePCA.__init__()` now requires a `num_channels` parameter; buffers initialized with correct shapes
- Migrated type imports from `cuvis_ai_core` to the new `cuvis-ai-schemas` package across all source files (`PortSpec`, `Context`, `InputStream`, `Metric`, `ExecutionStage`)
- Renamed `RXLogitHead` → `ScoreToLogit` and moved it from `cuvis_ai.anomaly.rx_logit_head` to `cuvis_ai.node.conversion`; updated all pipeline configs and examples
- Renamed the `BaseDecider` import to `BinaryDecider` in the deciders module
- Split `configs/trainrun/default.yaml` into `default_statistical.yaml` and `default_gradient.yaml`
- Enhanced docstrings to 95%+ coverage across all public APIs (NumPy-style)
- `pyproject.toml` updates for PyPI compliance:
  - Package name: `cuvis_ai` → `cuvis-ai`; license: SPDX `Apache-2.0`; author email updated
  - Python classifiers aligned to 3.11 only; ruff target `py310` → `py311`
  - Added tool configs: `[tool.interrogate]` (95% threshold), `[tool.mypy]`, `[tool.bandit]`
- Dependencies: added `cuvis-ai-schemas[full]>=0.1.0`; loosened `cuvis>=3.5.0` (was `==3.5.0`); pinned `cuvis-ai-core>=0.1.2`; removed `graphviz>=0.21`
- Dev deps: added twine, pip-audit, bandit, detect-secrets, pip-licenses, cyclonedx-bom, interrogate
- Docs deps: added mike, pytest-check-links, pytest-md-report
- `restore-pipeline`/`restore-trainrun` CLI entry points now point to `cuvis_ai_core`
- cuvis-ai-core dependency handling: local editable path for dev, PyPI for release
- README refactored; CONTRIBUTING.md enhanced with a 7-step plugin contribution workflow
- Examples updated: removed inline `SampleCustomMetrics`, updated all imports for the schema migration and the ScoreToLogit rename
Fixed
- LAD detector `reset()`: buffers now initialized with proper shapes instead of `torch.empty(0)`
- LAD detector `unfreeze()`: preserves device when converting buffers to parameters
- TrainablePCA: 17 failing tests fixed by adding the required `num_channels` parameter and proper buffer shapes; centralized fixture in `tests/fixtures/mock_nodes.py`
- Node import paths updated for the cuvis-ai-schemas migration
- Config references: `trainrun/default` → `default_statistical`/`default_gradient`; `RXLogitHead` → `ScoreToLogit` in pipeline YAMLs
- Documentation: broken internal links, outdated module references, empty placeholder content, incorrect script/path references
- MkDocs build warnings and docstring formatting issues
- Package metadata alignment for PyPI submission
Removed
- `restore_pipeline.md` from the repo root (replaced by the docs site)
- `changelog.md` (replaced by `CHANGELOG.md` with Keep a Changelog format)
- `.github/workflows/run_tests.yml` (replaced by `ci.yml`)
- `docs/api/grpc_api.md` and `docs/reference/architecture.md` (replaced by expanded docs sections)
v0.2.4
What's Changed
- fix(plugins): resolve plugin installation and restore pipeline issues by @nghorbani in #3
Full Changelog: v0.2.3...v0.2.4
v0.2.3
- Repository split into `cuvis-ai-core` (framework) and `cuvis-ai` (catalog at https://github.com/cubert-hyperspectral/cuvis-ai) with clear API boundaries and independent versioning
- Framework extraction: base `Node` class, port system, `Pipeline`, training infrastructure, gRPC services, `NodeRegistry`, and data infrastructure moved to cuvis-ai-core
- Plugin system with Git repository and local filesystem support via an extended `NodeRegistry`
- Pydantic plugin configuration models: `GitPluginConfig`, `LocalPluginConfig`, `PluginManifest` with strict validation
- Plugin caching in `~/.cuvis_plugins/` with intelligent cache reuse and version verification
- Session-scoped plugin isolation: each gRPC session has independent plugin namespaces
- New gRPC RPCs: `LoadPlugins`, `ListLoadedPlugins`, `GetPluginInfo`, `ListAvailableNodes`, `ClearPluginCache`
- JSON transport pattern for plugin manifests via the `config_bytes` field, matching existing conventions
- Test migration: 426 tests moved to cuvis-ai-core with reusable fixtures in `tests/fixtures/`
- Bug fixes: DataLoader access violation resolved with `num_workers=0`; single-threaded gRPC servers for cuvis SDK compatibility
- 421 tests passing in cuvis-ai-core; independent CI/CD capability established
- Import pattern change: `from cuvis_ai_core.* import ...` for framework components
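The strict-validation plugin config models mentioned above might look roughly like this. The class name comes from the release notes, but the fields shown are hypothetical; the real models in cuvis-ai-core may differ.

```python
from pydantic import BaseModel, ConfigDict


class GitPluginConfig(BaseModel):
    """Sketch of a strict-validation plugin config model.

    extra="forbid" makes unknown manifest keys a validation error, which
    is what "strict validation" implies for plugin manifests.
    """

    model_config = ConfigDict(extra="forbid")  # reject unknown keys

    repository: str  # Git URL of the plugin repo (hypothetical field)
    tag: str         # pinned plugin version tag (hypothetical field)
```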
Full Changelog: v0.2.2...v0.2.3