Commit a2d3d43

olruas authored and Manul from Pathway committed
Typos templates (#9660)
GitOrigin-RevId: 84efa04fd56c4ce2b5414861f06de192b26b99ce
1 parent cfa099c commit a2d3d43

File tree
  • docs/2.developers

4 files changed: +7 lines, -7 lines

docs/2.developers/4.user-guide/80.advanced/.declarative_vs_imperative/article.py

Lines changed: 2 additions & 2 deletions

@@ -34,7 +34,7 @@
 # ```
 # we would expect three "finished" chunks: `(0,1,2)`, `(3,4,5,6)`, `(7,8)` and one unfinished chunk `(9,...)`.
 #
-# One way to do this would be imperative style: go through rows one-by-one in order storing current chunk in a state and emiting it whenever `flag` is equal to True, while clearing the state.
+# One way to do this would be imperative style: go through rows one-by-one in order storing current chunk in a state and emitting it whenever `flag` is equal to True, while clearing the state.
 # Even though, its not recommended approach, let's see how to code it in Pathway.

 # %%
@@ -100,7 +100,7 @@ def split_by_flag(
 # %% [markdown]
 # Instead of manually managing state and control flow, Pathway allows you to define such logic using declarative constructs like `sort`, `iterate`, `groupby`. The result is a clear and concise pipeline that emits chunks of event times splitting the flag, showcasing the power and readability of declarative data processing.
 #
-# In the following, we tell Pathway to propagate the starting time of each chunk across the rows. This is done by declaring a simple local rule: take the starting time of a chunk from previous row or use current event time. This rule is then iterated until fixed-point, so that the information is spreaded until all rows know the starting time of their chunk.
+# In the following, we tell Pathway to propagate the starting time of each chunk across the rows. This is done by declaring a simple local rule: take the starting time of a chunk from previous row or use current event time. This rule is then iterated until fixed-point, so that the information is spread until all rows know the starting time of their chunk.
 #
 # Then we can just group rows by starting time of the chunk to get a table of chunks.

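The corrected paragraph above describes the imperative approach in plain terms: walk the rows in order, accumulate the current chunk in a piece of state, and emit it whenever `flag` is True. A minimal plain-Python sketch of that idea follows (an illustration only, not the article's Pathway `split_by_flag`; the input is assumed to be `(event_time, flag)` pairs already sorted by time):

```python
# Hypothetical helper illustrating the imperative chunking logic described above.
def split_chunks_by_flag(rows):
    finished = []   # chunks that ended with flag == True
    current = []    # state: the chunk currently being built
    for event_time, flag in rows:
        current.append(event_time)
        if flag:                           # a True flag closes the current chunk
            finished.append(tuple(current))
            current = []                   # clear the state
    return finished, tuple(current)        # finished chunks plus the unfinished tail


finished, unfinished = split_chunks_by_flag(
    [(t, t in (2, 6, 8)) for t in range(10)]
)
# finished == [(0, 1, 2), (3, 4, 5, 6), (7, 8)], unfinished == (9,)
```

On this toy input the helper reproduces the chunks listed in the context line above; the article's point is that the same result falls out of declarative `sort`/`iterate`/`groupby` without hand-managed state.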
docs/2.developers/7.templates/ETL/.alerting_on_significant_changes/article.py

Lines changed: 3 additions & 3 deletions

@@ -99,7 +99,7 @@ class InputSchema(pw.Schema):
 )

 # %% [markdown]
-# To track the maximum value, we could write `input.groupby().reduce(max=pw.reducers.max(input.value))`. Here we want to keep track also *when* this maximum occured, therefore we use the `argmax_rows` utility function.
+# To track the maximum value, we could write `input.groupby().reduce(max=pw.reducers.max(input.value))`. Here we want to keep track also *when* this maximum occurred, therefore we use the `argmax_rows` utility function.

 # %%
 reduced = pw.utils.filtering.argmax_rows(input, what=input.value)
@@ -128,7 +128,7 @@ def accept_larger_max(new_max: float, prev_max: float) -> bool:
 result = pw.stateful.deduplicate(reduced, col=reduced.value, acceptor=accept_larger_max)

 # %% [markdown]
-# Now we can send the alerts to e.g. Slack. We can do it similarily as in the [realtime log monitoring tutorial](/developers/templates/etl/realtime-log-monitoring#scenario-2-sending-the-alert-to-slack) by using `pw.io.subscribe`.
+# Now we can send the alerts to e.g. Slack. We can do it similarly as in the [realtime log monitoring tutorial](/developers/templates/etl/realtime-log-monitoring#scenario-2-sending-the-alert-to-slack) by using `pw.io.subscribe`.
 #
 # Here, for testing purposes, instead of sending an alert, we will store the accepted maxima in the list.

@@ -147,7 +147,7 @@ def send_alert(key, row, time, is_addition):
 pw.io.subscribe(result, send_alert)

 # %% [markdown]
-# Let's run the program. Since the stream we defined is bounded (and we set high `input_rate` in the `generate_custom_stream`), the call to `pw.run` will finish quickly. Hovever, in most usecases, you will be streaming data (e.g. from kafka) indefinitely.
+# Let's run the program. Since the stream we defined is bounded (and we set high `input_rate` in the `generate_custom_stream`), the call to `pw.run` will finish quickly. However, in most usecases, you will be streaming data (e.g. from kafka) indefinitely.

 # %%
 pw.run(monitoring_level=pw.MonitoringLevel.NONE)

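The context lines of this file already name every building block of the alerting pipeline: `argmax_rows` keeps the row of the running maximum, `pw.stateful.deduplicate` passes through only newly accepted maxima, `pw.io.subscribe` reacts to them, and `pw.run` executes the dataflow. A condensed sketch assembling exactly those calls (the toy input table and the acceptor's comparison are assumptions; the article builds its input with `generate_custom_stream` and sends alerts to Slack):

```python
import pathway as pw

# Assumed toy input; the article streams data with `generate_custom_stream`.
input = pw.debug.table_from_markdown(
    """
    value | timestamp
    1.0   | 100
    5.0   | 110
    3.0   | 120
    8.0   | 130
    """
)

# Keep the row where `value` is maximal, so we also retain *when* it occurred.
reduced = pw.utils.filtering.argmax_rows(input, what=input.value)


def accept_larger_max(new_max: float, prev_max: float) -> bool:
    # Simplified acceptor: accept only a strictly larger maximum
    # (the article's version may add a significance threshold).
    return new_max > prev_max


result = pw.stateful.deduplicate(reduced, col=reduced.value, acceptor=accept_larger_max)

accepted_maxima = []


def send_alert(key, row, time, is_addition):
    # The article would notify Slack here; for testing we only collect values.
    if is_addition:
        accepted_maxima.append(row["value"])


pw.io.subscribe(result, send_alert)
pw.run(monitoring_level=pw.MonitoringLevel.NONE)
```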
docs/2.developers/7.templates/ETL/.live_data_jupyter/article.py

Lines changed: 1 addition & 1 deletion

@@ -286,7 +286,7 @@ def stats_plotter(src):
 # %% [markdown]
 # ## Jupyter Notebooks & Streaming Data in Production
 #
-# Congratulations! You have succesfully built a live data streaming pipeline with useful data visualisations and real-time alerts, right from a Jupyter notebook 😄
+# Congratulations! You have successfully built a live data streaming pipeline with useful data visualisations and real-time alerts, right from a Jupyter notebook 😄
 #
 # This is just a taste of what is possible. If you're interested in diving deeper and building a production-grade data science pipeline all the way from data exploration to deployment, you may want to check out the full-length [From Jupyter to Deploy](/developers/user-guide/deployment/from-jupyter-to-deploy) tutorial.
 #

docs/2.developers/7.templates/rag/.private_rag_ollama_mistral/article.py

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@ class InputSchema(pw.Schema):
 # )
 # -

-# #### 4. Local LLM Deployement
+# #### 4. Local LLM Deployment
 # Due to its size and performance we decided to run the `Mistral 7B` Local Language Model. We deploy it as a service running on GPU, using `Ollama`.
 #
 # In order to run local LLM, refer to these steps:

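The corrected heading introduces running `Mistral 7B` locally as a service through `Ollama`. For orientation, here is a small sketch of what querying such a service can look like, using only the standard library (the host, port, and model tag `mistral` are assumptions about a default Ollama install; the article's own Pathway integration is not part of this diff):

```python
import json
import urllib.request

# Assumed defaults: Ollama listens on localhost:11434 and the model has been
# fetched beforehand, e.g. with `ollama pull mistral`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "mistral",
    "prompt": "In one sentence, what is retrieval-augmented generation?",
    "stream": False,  # ask for a single JSON reply instead of a token stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```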