## Summary
- Replaces all instances of `UniFlow` (incorrect) with `Uniflow` (correct) across docs and README
- Files changed: `docs/user-guides/ml-pipelines/reference-system.md`, `docs/user-guides/ml-pipelines/type-system.md`, `python/README.md`
- Left untouched: `python/michelangelo/cli/mactl/plugins/entity/pipeline/create.py` line 452 — `UniFlowConf` is a protobuf type URL identifier, not prose
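A rename like this can be scripted rather than done by hand (how this PR was actually produced isn't stated, so the following is only an illustrative sketch using a temp directory; the shield token `__KEEP__` is an arbitrary choice). The trick is to shield `UniFlowConf` before the blanket substitution so the protobuf identifier survives:

```shell
# Illustrative rename sketch (GNU sed assumed for -i).
# Shield UniFlowConf, rename UniFlow -> Uniflow, then restore the shield.
mkdir -p /tmp/rename_demo
printf 'Uses UniFlow and UniFlowConf here.\n' > /tmp/rename_demo/doc.md
sed -i 's/UniFlowConf/__KEEP__/g; s/UniFlow/Uniflow/g; s/__KEEP__/UniFlowConf/g' /tmp/rename_demo/doc.md
cat /tmp/rename_demo/doc.md   # -> Uses Uniflow and UniFlowConf here.
```

Ordering matters here: substituting the longer identifier first is what keeps `UniFlowConf` out of the reach of the shorter `UniFlow` pattern.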
## Test plan
- [ ] Verify no remaining `UniFlow` in prose (only `UniFlowConf` in the protobuf type URL is acceptable)
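The check above amounts to a grep pass that excludes the one intentional identifier. A minimal sketch (the temp-directory fixture below is purely illustrative; against the real repo the paths would be `docs/` and `python/` from the file list above):

```shell
# Sketch of the test-plan check: find any "UniFlow" left in prose while
# ignoring the intentional protobuf identifier "UniFlowConf".
# grep exits non-zero when nothing matches, so "clean" prints on success.
mkdir -p /tmp/uniflow_check/docs
printf '# Data Passing and References in Uniflow\n' > /tmp/uniflow_check/docs/reference-system.md
printf 'type_url = "UniFlowConf"\n' > /tmp/uniflow_check/docs/create.py
grep -rn "UniFlow" /tmp/uniflow_check | grep -v "UniFlowConf" && echo "stale names found" || echo "clean"
```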
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Files changed

### `docs/user-guides/ml-pipelines/reference-system.md` (+16 −16)

```diff
@@ -1,8 +1,8 @@
-# Data Passing and References in UniFlow
+# Data Passing and References in Uniflow
 
 ## What you'll learn
 
-* How data flows between tasks in UniFlow
+* How data flows between tasks in Uniflow
 * What References are and why they're needed
 * How to work with task outputs and inputs
 * Automatic serialization and deserialization
@@ -12,7 +12,7 @@
 
 ## The Problem: Data Between Tasks
 
-When tasks run on distributed clusters, you can't just pass Python objects directly between them. UniFlow solves this with **References** - a smart system that handles data serialization, storage, and retrieval automatically.
+When tasks run on distributed clusters, you can't just pass Python objects directly between them. Uniflow solves this with **References** - a smart system that handles data serialization, storage, and retrieval automatically.
```
### `docs/user-guides/ml-pipelines/type-system.md` (+6 −6)

```diff
@@ -2,17 +2,17 @@
 
 ## What you'll learn
 
-* What types UniFlow supports natively
+* What types Uniflow supports natively
 * The 5 codec types and when to use each
 * How to serialize custom data types
 * Best practices for type safety in workflows
 * How to add custom codecs for your types
 
 ---
 
-## Overview: UniFlow's Type System
+## Overview: Uniflow's Type System
 
-When data flows between tasks, UniFlow automatically **serializes** your Python objects for storage and **deserializes** them when the next task runs. This is powered by a flexible type system supporting 5 built-in codecs.
+When data flows between tasks, Uniflow automatically **serializes** your Python objects for storage and **deserializes** them when the next task runs. This is powered by a flexible type system supporting 5 built-in codecs.
```
### `python/README.md` (+2 −2)

```diff
@@ -9,7 +9,7 @@ Michelangelo gives ML engineers and data scientists a unified Python SDK for the
 
 ## Key Features
 
-- **UniFlow Pipeline Framework** — Define ML workflows with `@task` and `@workflow` decorators. Write plain Python functions and Michelangelo handles distributed execution, data passing between tasks, and result caching.
+- **Uniflow Pipeline Framework** — Define ML workflows with `@task` and `@workflow` decorators. Write plain Python functions and Michelangelo handles distributed execution, data passing between tasks, and result caching.
 - **Distributed Execution** — Scale tasks across Ray or Spark clusters with a single config change. Specify CPU, memory, GPU, and worker resources per task — no changes to your business logic required.
@@ -137,6 +137,6 @@
 Full documentation is available at **[michelangelo-ai.github.io/michelangelo/docs](https://michelangelo-ai.github.io/michelangelo/docs)**.
 
 - [User Guides](https://michelangelo-ai.github.io/michelangelo/docs/user-guides) — Step-by-step guides for data preparation, training, and deployment
-- [ML Pipelines](https://michelangelo-ai.github.io/michelangelo/docs/user-guides/ml-pipelines) — Deep dive into the UniFlow pipeline framework
+- [ML Pipelines](https://michelangelo-ai.github.io/michelangelo/docs/user-guides/ml-pipelines) — Deep dive into the Uniflow pipeline framework
 - [Set Up Triggers](https://michelangelo-ai.github.io/michelangelo/docs/user-guides/set-up-triggers) — Automate pipeline execution with cron and backfill triggers
 - [CLI Reference](https://michelangelo-ai.github.io/michelangelo/docs/user-guides/cli) — Full command-line interface documentation
```