benchmark: add live measurements for all three backends (Tornado, web…) #51

Conversation
…rockets, Rust)

- Ran benchmark/run_benchmark.py at 4 load levels (5/20/50/100 clients) against Tornado, webrockets, and ovos-rust-messagebus v1.1.2
- Updated docs/backends.md: full 3-backend comparison tables with min/median/p95/p99/max latency columns; updated observations
- Updated FAQ.md: added Rust column to benchmark table, added "Rust connection saturation at 100 clients" Q&A
- Updated MAINTENANCE_REPORT.md with two transparency entries

Key results (vs Tornado baseline):

- 5c×200m: Rust +18%, webrockets +11%
- 20c×1000m: Rust +20%, webrockets +9%
- 50c×2000m: webrockets +24%, Rust +20%
- 100c×500m: webrockets +4%, Rust ⚠ 28 conn errors

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
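For context, the min/median/p95/p99/max columns referenced above can be computed from raw latency samples with the standard library alone. This is an illustrative sketch, not the actual benchmark code:

```python
import statistics

def latency_summary(samples_ms):
    """Compute the min/median/p95/p99/max stats used in the comparison tables."""
    ordered = sorted(samples_ms)
    # quantiles(n=100) returns 99 cut points; index 94 is p95, index 98 is p99
    cuts = statistics.quantiles(ordered, n=100)
    return {
        "min": ordered[0],
        "median": statistics.median(ordered),
        "p95": cuts[94],
        "p99": cuts[98],
        "max": ordered[-1],
    }
```

With enough samples (the benchmark sends hundreds to thousands of messages per scenario) these percentile estimates stabilize; for small sample counts the interpolation method matters.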
📝 Walkthrough

Adds documentation, CI workflows, tests, a new optional Rust-backed websocket server (webrockets), and a benchmarking tool; introduces pyproject.toml and a license update. No API signature changes to existing public interfaces; new test and benchmark modules and backend implementation files were added.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant ClientA as Client A
    participant WSServer as WebSocket Server<br/>(Tornado / Webrockets / Rust)
    participant ClientB as Client B
    ClientA->>WSServer: WebSocket connect
    WSServer->>ClientA: Send {"type":"connected",...}
    ClientB->>WSServer: WebSocket connect
    WSServer->>ClientB: Send {"type":"connected",...}
    ClientA->>WSServer: Send message (JSON)
    WSServer->>ClientA: Broadcast message (include sender)
    WSServer->>ClientB: Broadcast message (include sender)
```

```mermaid
sequenceDiagram
    participant CLI
    participant Bench as Benchmark Runner
    participant Tornado as Tornado Backend
    participant Webrockets as Webrockets Backend
    participant Rust as Rust Backend
    CLI->>Bench: start --compare
    Bench->>Tornado: connect N subscribers & sender
    Bench->>Tornado: warmup + send M messages
    Tornado->>Bench: return latencies & stats
    Bench->>Webrockets: connect N subscribers & sender
    Bench->>Webrockets: warmup + send M messages
    Webrockets->>Bench: return latencies & stats
    Bench->>Rust: connect N subscribers & sender
    Bench->>Rust: warmup + send M messages
    Rust->>Bench: return latencies & stats
    Bench->>CLI: print individual results + comparison table
```

Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
New workflows added:
- unit_tests.yml — test.yml@dev, triggers on push+PR to dev
- coverage.yml — coverage.yml@dev with pr_comment
- pip_audit.yml — pip-audit.yml@dev with pr_comment
- lint.yml — lint.yml@dev (ruff) with pr_comment

Updated workflows:
- build_tests.yml: add python_versions matrix [3.10–3.13], pr_comment, trigger on dev push as well as master
- license_tests.yml: add secrets: inherit, fix empty `with:`, trigger on dev push as well as master

Removed:
- install_tests.yml (replaced by build_tests.yml matrix)

All workflows use @dev ref per workspace conventions.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
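For reference, a caller of one of these reusable workflows could look roughly like the sketch below. This is not the actual workflow file; the reusable-workflow path and the `python_versions`/`pr_comment` input names are taken from the commit description above and are not verified against gh-automations.

```yaml
# .github/workflows/build_tests.yml (sketch)
name: Build Tests
on:
  push:
    branches: [dev, master]
  pull_request:
    branches: [dev]

jobs:
  build:
    uses: OpenVoiceOS/gh-automations/.github/workflows/build_tests.yml@dev
    secrets: inherit
    with:
      python_versions: '["3.10", "3.11", "3.12", "3.13"]'
      pr_comment: true
```

The `secrets: inherit` line is what the license_tests.yml fix above adds, so the reusable workflow can post PR comments with the caller's token.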
Actionable comments posted: 8
🧹 Nitpick comments (5)
ovos_messagebus/backends/webrockets_backend.py (2)
138-153: Consider logging deserialization failures for debugging.

The silent `except: pass` at lines 148-149 could hide issues with malformed messages. While broadcasting should continue regardless, logging at DEBUG level would aid troubleshooting without affecting normal operation.

♻️ Proposed improvement

```diff
 try:
     msg = Message.deserialize(data)
     if msg.msg_type not in filter_logs:
         LOG.debug(
             msg.msg_type
             + f' source: {msg.context.get("source", [])}'
             + f' destination: {msg.context.get("destination", [])}\n'
             + f'SESSION: {SessionManager.get(msg).serialize()}'
         )
 except Exception:
-    pass  # non-OVOS or malformed message — still broadcast it
+    LOG.debug("webrockets: failed to deserialize message for filtering")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@ovos_messagebus/backends/webrockets_backend.py` around lines 138 - 153: the code currently swallows all deserialization errors silently in the try/except around Message.deserialize, making debugging difficult; update the except block (the block using Message.deserialize and SessionManager.get) to log the exception at DEBUG level (including exception info/traceback) before continuing, so malformed or non-OVOS messages still get broadcast via conn.broadcast but failures are visible in logs; ensure you keep the behavior of continuing to conn.broadcast([_GLOBAL_ROOM], data, exclude_self=False).
197-200: Fixed sleep delay for socket binding is fragile.

The 0.3-second sleep assumes the Rust runtime will bind within that window. Under high system load or slow startup conditions, this could result in `ready_hook()` being called before the server is actually accepting connections.

Consider polling for socket availability or using a callback mechanism if the webrockets library supports one.
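A readiness probe along those lines could be sketched with only the standard library; the function name, host/port handling, and timeout values here are illustrative, not the backend's actual API:

```python
import socket
import time

def wait_for_server_ready(host: str, port: int, timeout: float = 5.0) -> bool:
    """Poll until a TCP connect succeeds, instead of sleeping a fixed 0.3s."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.1):
                return True  # server is bound and accepting connections
        except OSError:
            time.sleep(0.05)  # not bound yet; retry shortly
    return False  # caller should raise a clear error on timeout
```

On success this returns as soon as the listener is up (usually well under 0.3s); on failure the caller gets a definite answer instead of proceeding against a half-started server.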
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_messagebus/backends/webrockets_backend.py` around lines 197 - 200: the fixed 0.3s sleep after launching the Rust server thread (threading.Thread -> server.start) is fragile; replace it with an active readiness check before invoking ready_hook(): poll the server socket (or attempt a TCP connect to the server's host/port) with a short retry loop and timeout, or use any library-provided readiness/callback from webrockets if available, and only call ready_hook() once the probe succeeds or the timeout is reached (failing with a clear error).

docs/backends.md (1)
202-220: Consider adding a language hint to fenced blocks for linter compliance.

The summary blocks at lines 202-209 and 213-220 use fenced code blocks without a language specifier, triggering MD040 warnings. While these are ASCII tables (not code), adding a `text` language hint after the opening fence would satisfy the linter.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/backends.md` around lines 202 - 220: the two ASCII-table fenced blocks (the throughput table under the "Scenario ..." header and the "Summary — p99 latency (lower is better)" block) lack a language hint and trigger MD040; fix by adding a `text` language specifier to each opening fence so the blocks become fenced as text and satisfy the linter.

benchmark/run_benchmark.py (1)
78-85: Silent timeout on initial greeting could mask connection issues.

If the server doesn't send the "connected" greeting within the timeout, the exception is caught and ignored (lines 83-84). This could lead to false benchmark results if the server isn't responding correctly.

♻️ Consider logging the timeout

```diff
 try:
     await asyncio.wait_for(ws.recv(), timeout=timeout)  # 'connected' msg
 except asyncio.TimeoutError:
-    pass
+    pass  # Server may not send greeting; continue anyway
```

Or for stricter validation:

```diff
 try:
     await asyncio.wait_for(ws.recv(), timeout=timeout)  # 'connected' msg
 except asyncio.TimeoutError:
-    pass
+    print(" [WARN] No 'connected' greeting received", file=sys.stderr)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@benchmark/run_benchmark.py` around lines 78 - 85: the helper _connect_subscriber currently swallows asyncio.TimeoutError from await ws.recv(), which can hide connection problems; instead, when the initial greeting times out, log a warning including the url and timeout and then treat it as a real connection failure by closing ws and raising a descriptive exception (e.g., ConnectionError) so callers can't proceed on a half-open connection; update the exception handler around await ws.recv() to perform logging, await ws.close(), and raise a clear error referencing the url/timeout (or make this behavior configurable via a strict flag if you prefer).

pyproject.toml (1)
28-29: Remove duplicate entry point from setup.py.

The console script `ovos-messagebus` is defined in both `pyproject.toml` (`[project.scripts]`) and `setup.py` (`entry_points`). Keep the definition only in `pyproject.toml` and remove it from `setup.py` to follow modern Python packaging standards.
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 28 - 29, Remove the duplicate console script entry for "ovos-messagebus" from setup.py to avoid defining the same entry point in both setup.py and pyproject.toml; locate the entry_points mapping in setup.py where "ovos-messagebus" is listed (the same target function ovos_messagebus.__main__:main) and delete that entry so the script is only defined under [project.scripts] in pyproject.toml. Ensure setup.py still functions without the removed entry_points block or adjust import logic if it relied on that mapping.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/configuration.md`:
- Around line 4-20: The docs currently show "ssl" nested under "websocket" but
load_message_bus_config() actually reads config.get("ssl"), so update the
documentation and JSON examples (including the intro, the example block and the
option text around lines referenced) to show "ssl" as a top-level config key
(not inside "websocket"); ensure the sample JSON and explanatory text reflect
this top-level placement so it matches the behavior of load_message_bus_config()
and config.get("ssl").
In `@docs/events.md`:
- Around line 99-110: The docs claim "Filter mode does not affect message
delivery" but in the Tornado backend MessageBusEventHandler.on_message() returns
early on deserialization failure, which can prevent delivery; either (A) update
the docs text near the Filter / Debug Mode section to explicitly note that
Tornado's MessageBusEventHandler.on_message() will drop messages that fail
deserialization (so filter=true only affects logging but Tornado may also drop
malformed messages), or (B) change the Tornado behavior in
MessageBusEventHandler.on_message() to match webrockets by catching
deserialization errors, logging them, and continuing to broadcast the
raw/original payload (while still applying filter_logs for log suppression) so
delivery is preserved; implement one of these fixes and reference
MessageBusEventHandler.on_message(), filter_logs, and the Filter / Debug Mode
documentation block when making the change.
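Option (B)'s "log and continue" behavior can be illustrated with a simplified, self-contained handler. The real MessageBusEventHandler is a Tornado class; the names and the list-based connections here are stand-ins for illustration only:

```python
import json
import logging

LOG = logging.getLogger(__name__)
FILTER_LOGS = {"gui.status.request"}  # illustrative: msg_types suppressed from logs

def on_message(raw: str, connections: list) -> None:
    """Deserialize for logging purposes only; always broadcast the raw payload."""
    try:
        parsed = json.loads(raw)
        if parsed.get("type") not in FILTER_LOGS:
            LOG.debug("bus message: %s", parsed.get("type"))
    except Exception:
        # Malformed or non-OVOS frame: log it, but do not drop it (no early return).
        LOG.debug("failed to deserialize message; broadcasting raw payload")
    for conn in connections:
        conn.append(raw)  # stand-in for conn.write_message(raw)
```

The key point is that the except branch falls through to the broadcast loop, so filter mode only affects logging, never delivery.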
In `@docs/server.md`:
- Around line 8-17: The docs state that MessageBusEventHandler uses a
class-level set named client_connections, but the implementation defines
client_connections as a module-level list; update the implementation in
ovos_messagebus/event_handler.py so client_connections is a class attribute
(set) on MessageBusEventHandler rather than a module-level list: change the
declaration to a class-level set on MessageBusEventHandler, migrate any code
that appends/removes connections to use set.add() / set.discard(), and ensure
any iteration (broadcast logic) still iterates over
MessageBusEventHandler.client_connections.
In `@pyproject.toml`:
- Around line 37-38: The pyproject dynamic version points to
ovos_messagebus.version.__version__ but ovos_messagebus/version.py only defines
VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD and VERSION_ALPHA; add a
module-level __version__ constant in ovos_messagebus.version that composes those
VERSION_* values (e.g. "<major>.<minor>.<build>a<alpha>") so the attribute
exists for setuptools to read; update the file defining __version__ using the
existing VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD and VERSION_ALPHA symbols.
- Around line 13-18: The project has conflicting dependency minimums between
pyproject.toml (dependencies list: "ovos_bus_client>=1.3.0,<2.0.0" and
"ovos-utils>=0.8.0,<1.0.0") and requirements.txt (used by setup.py via
required('requirements.txt')), causing inconsistent installs; fix by
consolidating to a single source of truth: either update requirements.txt to
match the exact version specifiers from pyproject.toml (ensure ovos_bus_client
and ovos-utils minimums align) or change setup.py to read/parse the
pyproject.toml dependency table instead of required('requirements.txt'), and
verify both installers produce identical dependency specs.
In `@test/unittests/test_event_handler.py`:
- Around line 96-115: Add tests exercising MessageBusEventHandler.on_message
with handler.filter = True and malformed payloads so deserialization failures
don't cause an early return: using _make_handler create a handler, set
handler.filter = True and mock handler.write_message (or broadcast equivalent),
then call handler.on_message with (a) invalid JSON string/bytes and (b) JSON
that parses but is not a dict or lacks expected fields; assert the handler still
proceeds to the broadcast/write loop by checking handler.write_message was
called (or broadcast method invoked) and no early return occurred. Reference
MessageBusEventHandler.on_message, the handler.filter attribute, _make_handler,
and handler.write_message in the new tests.
In `@test/unittests/test_load_config.py`:
- Around line 46-55: Add a regression test that ensures a false ssl override
wins over a truthy config value: create a new test (e.g.,
test_overrides_false_ssl_precedence) that patches
ovos_messagebus.load_config.Configuration to return a config where "ssl" is True
(use _make_config(VALID_WS_CONFIG) modified to have ssl True), then call
load_message_bus_config(ssl=False) and assert the returned object's ssl is
False; reference load_message_bus_config and the patched
Configuration/_make_config to locate where to inject this case so the code path
using overrides.get("ssl") or config.get("ssl") is exercised.
In `@test/unittests/test_webrockets_backend.py`:
- Around line 31-98: The fake webrockets module (_webrockets_stub) is installed
into sys.modules at import time and can leak across tests; wrap the stub
installation and the import of ovos_messagebus.backends.webrockets_backend in a
fixture or context manager that inserts sys.modules["webrockets"] before
importing and then cleans up afterwards by popping sys.modules["webrockets"]
(and any submodules) and removing the imported backend module (e.g.,
ovos_messagebus.backends.webrockets_backend) from sys.modules in teardown;
ensure tests access the backend via the fixture to avoid global leakage and
restore original sys.modules state after each test.
---
Nitpick comments:
In `@benchmark/run_benchmark.py`:
- Around line 78-85: The helper _connect_subscriber currently swallows
asyncio.TimeoutError from await ws.recv(), which can hide connection problems;
instead, when the initial greeting times out, log a warning including the url
and timeout and then treat it as a real connection failure by closing ws and
raising a descriptive exception (e.g., ConnectionError) so callers can't proceed
on a half-open connection; update the exception handler around await ws.recv()
to perform logging, await ws.close(), and raise a clear error referencing the
url/timeout (or make this behavior configurable via a strict flag if you
prefer).
In `@docs/backends.md`:
- Around line 202-220: The two ASCII-table fenced blocks (the throughput table
under the "Scenario ..." header and the "Summary — p99 latency (lower is
better)" block) lack a language hint and trigger MD040; fix by adding a language
specifier to each opening fence (use ```text) so the blocks become fenced as
text, e.g., replace the leading triple-backticks for both tables with ```text to
satisfy the linter.
In `@ovos_messagebus/backends/webrockets_backend.py`:
- Around line 138-153: The code currently swallows all deserialization errors
silently in the try/except around Message.deserialize, making debugging
difficult; update the except block in webrockets_backend.py (the block using
Message.deserialize and SessionManager.get) to log the exception at DEBUG level
(including exception info/traceback) before continuing so malformed or non-OVOS
messages still get broadcast via conn.broadcast but failures are visible in
logs; ensure you keep behavior of continuing to conn.broadcast([_GLOBAL_ROOM],
data, exclude_self=False).
- Around line 197-200: The fixed 0.3s sleep after launching the Rust server
thread (threading.Thread -> server.start) is fragile; replace it with an active
readiness check before invoking ready_hook(): poll the server socket (or attempt
a TCP connect to the server's host/port) with a short retry loop and timeout, or
use any library-provided readiness/callback from webrockets if available, and
only call ready_hook() once the probe succeeds or the timeout is reached
(failing with a clear error).
In `@pyproject.toml`:
- Around line 28-29: Remove the duplicate console script entry for
"ovos-messagebus" from setup.py to avoid defining the same entry point in both
setup.py and pyproject.toml; locate the entry_points mapping in setup.py where
"ovos-messagebus" is listed (the same target function
ovos_messagebus.__main__:main) and delete that entry so the script is only
defined under [project.scripts] in pyproject.toml. Ensure setup.py still
functions without the removed entry_points block or adjust import logic if it
relied on that mapping.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8e0f69cb-6077-4564-bf55-bc833e65acc0
📒 Files selected for processing (17)
- FAQ.md
- LICENSE
- MAINTENANCE_REPORT.md
- benchmark/run_benchmark.py
- docs/backends.md
- docs/configuration.md
- docs/events.md
- docs/index.md
- docs/server.md
- ovos_messagebus/backends/__init__.py
- ovos_messagebus/backends/webrockets_backend.py
- pyproject.toml
- test/__init__.py
- test/unittests/__init__.py
- test/unittests/test_event_handler.py
- test/unittests/test_load_config.py
- test/unittests/test_webrockets_backend.py
> Tornado `WebSocketHandler` subclass that implements the OVOS message bus. All connected clients share a single class-level connection set; every received message is broadcast to every client.
>
> ### Module Attributes
>
> | Attribute | Type | Description |
> |---|---|---|
> | `client_connections` | `list` | Module-level list of all open `MessageBusEventHandler` instances |
The connection registry isn't a class-level set.
ovos_messagebus/event_handler.py defines client_connections as a module-level list. Calling it a class-level set changes the documented semantics and conflicts with the table below.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/server.md` around lines 8 - 17, The docs state that
MessageBusEventHandler uses a class-level set named client_connections, but the
implementation defines client_connections as a module-level list; update the
implementation in ovos_messagebus/event_handler.py so client_connections is a
class attribute (set) on MessageBusEventHandler rather than a module-level list:
change the declaration to a class-level set on MessageBusEventHandler, migrate
any code that appends/removes connections to use set.add() / set.discard(), and
ensure any iteration (broadcast logic) still iterates over
MessageBusEventHandler.client_connections.
```python
_webrockets_stub = types.ModuleType("webrockets")


class _FakeConnectDecorator:
    """Mimics webrockets ConnectDecorator — callable that stores the handler."""

    def __init__(self):
        self.handler = None

    def __call__(self, func):
        self.handler = func
        return func


class _FakeRoute:
    """Mimics webrockets WebsocketRoute."""

    def __init__(self):
        self._connect_decorator = _FakeConnectDecorator()
        self._receive_handler = None
        self._disconnect_handler = None

    def connect(self, when="after"):
        return self._connect_decorator

    # Direct-decorator usage: @route.receive
    def receive(self, func):
        self._receive_handler = func
        return func

    # Direct-decorator usage: @route.disconnect
    def disconnect(self, func):
        self._disconnect_handler = func
        return func


class _FakeServer:
    """Mimics webrockets WebsocketServer."""

    def __init__(self, host="0.0.0.0", port=8181, broker=None):
        self.host = host
        self.port = port
        self._route: _FakeRoute | None = None

    def create_route(self, path: str, default_group: str | None = None,
                     authentication_classes=None) -> _FakeRoute:
        self._route = _FakeRoute()
        return self._route

    def start(self):
        pass  # no-op for tests

    def stop(self):
        pass


_webrockets_stub.WebsocketServer = _FakeServer  # type: ignore[attr-defined]
sys.modules["webrockets"] = _webrockets_stub

# Now import the backend — it will pick up the stub.
from ovos_messagebus.backends.webrockets_backend import (  # noqa: E402
    _build_server,
    _GLOBAL_ROOM,
    main,
    on_ready,
    on_error,
    on_stopping,
)
```
The fake webrockets module leaks across the test session.
Installing the stub in sys.modules at import time leaves later tests free to import the fake package or the already-imported backend module by accident. Scope this with a fixture/context and clean up the module cache so test order doesn't change behavior.
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 53-53: Unused method argument: when
(ARG002)
[error] 70-70: Possible binding to all interfaces
(S104)
[warning] 70-70: Unused method argument: broker
(ARG002)
[warning] 75-75: Unused method argument: path
(ARG002)
[warning] 75-75: Unused method argument: default_group
(ARG002)
[warning] 76-76: Unused method argument: authentication_classes
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unittests/test_webrockets_backend.py` around lines 31 - 98, The fake
webrockets module (_webrockets_stub) is installed into sys.modules at import
time and can leak across tests; wrap the stub installation and the import of
ovos_messagebus.backends.webrockets_backend in a fixture or context manager that
inserts sys.modules["webrockets"] before importing and then cleans up afterwards
by popping sys.modules["webrockets"] (and any submodules) and removing the
imported backend module (e.g., ovos_messagebus.backends.webrockets_backend) from
sys.modules in teardown; ensure tests access the backend via the fixture to
avoid global leakage and restore original sys.modules state after each test.
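One way to scope the stub is a generic context manager that restores `sys.modules` on exit; this is a sketch, and the module/backend names in the usage comment follow the review comment rather than verified repo contents:

```python
import contextlib
import importlib
import sys
import types

@contextlib.contextmanager
def stubbed_module(name: str, stub: types.ModuleType):
    """Temporarily install `stub` under `name` in sys.modules, then restore."""
    saved = sys.modules.get(name)
    sys.modules[name] = stub
    try:
        yield stub
    finally:
        # Restore the previous state so later tests see the real module (or none).
        if saved is not None:
            sys.modules[name] = saved
        else:
            sys.modules.pop(name, None)

# Usage (sketch): import the backend only while the stub is installed, then
# drop the cached backend module so test order cannot change behavior.
# with stubbed_module("webrockets", fake):
#     backend = importlib.import_module("ovos_messagebus.backends.webrockets_backend")
# sys.modules.pop("ovos_messagebus.backends.webrockets_backend", None)
```

Wrapping this in a pytest fixture gives each test a fresh import and guaranteed cleanup even when assertions fail.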
Just the facts, ma'am. Here's your report. 👮‍♂️

I've aggregated the results of the automated checks for this PR below.

🔍 Lint

Here's the latest update on this check. 🗞️

❌ ruff: issues found — see job log

📊 Coverage

Checking the integrity of our test cases. 💎

Per-file coverage (7 files). Full report: available as a download.

🔨 Build Tests

I've poured the digital concrete for this build. 🏗️

✅ All versions pass

⚖️ License Check

Ensuring our project is well-protected legally. 🛡️

✅ No license violations found (33 packages). License distribution: 9× MIT License, 6× Apache Software License, 5× MIT, 2× Apache-2.0, 2× BSD-3-Clause, 2× ISC License (ISCL), 1× Apache, 1× Apache Software License; BSD License, +5 more

Policy: Apache 2.0 (universal donor). StrongCopyleft / NetworkCopyleft / WeakCopyleft / Other / Error categories fail. MPL allowed.

🔒 Security (pip-audit)

I've checked for any insecure file permissions. 📂

✅ No known vulnerabilities found (52 packages scanned).

Beep boop, I'm just a script 🤖
Actionable comments posted: 3
🧹 Nitpick comments (3)
.github/workflows/publish_stable.yml (1)
9-10: Consider pinning the external workflow to a stable tag instead of `@dev`.

The actor guard is a good safeguard against recursive workflow triggers. However, referencing `@dev` for a production release workflow is risky: breaking changes in the upstream gh-automations repository could unexpectedly break your stable release process.

♻️ Suggested change

```diff
-    uses: OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev
+    uses: OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@v1.0.0  # or `@master`
```

Replace with a tagged version or `@master` for more predictable behavior in production workflows.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/publish_stable.yml around lines 9 - 10: the workflow references an external workflow with a floating ref `@dev` (uses: OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev), which risks breaking the stable release pipeline; replace the `@dev` ref with a pinned stable tag or branch (for example a semver tag like `@v1.2.3` or `@master`) so the `uses:` points to a fixed, reviewed commit, then update the ref whenever you intentionally adopt upstream changes and validate the actor guard `if: github.actor != 'github-actions[bot]'` still behaves as expected.

.github/workflows/license_tests.yml (1)
5-6: Add `master` to `pull_request` branches if that branch receives hotfix or release PRs.

Currently, `push` triggers on both `dev` and `master`, but `pull_request` only triggers on `dev`. This creates a gap where PRs targeting `master` (e.g., hotfixes, releases) skip the license check until after merge.

```yaml
pull_request:
  branches: [dev, master]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/license_tests.yml around lines 5 - 6: update the workflow's pull_request trigger to include master so PRs targeting master run the license tests: modify the pull_request block (the "pull_request" trigger's branches list) to contain both dev and master (i.e., branches: [dev, master]) so hotfix/release PRs to master are checked before merge.

.github/workflows/unit_tests.yml (1)
4-6: Note: Unit tests only run on the `dev` branch, not `master`.

Unlike `build_tests.yml`, which triggers on pushes to both `dev` and `master`, this workflow only triggers on `dev`. This appears intentional, but it is worth confirming whether unit tests should also run on pushes to `master`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/unit_tests.yml around lines 4 - 6, The workflow currently only lists "branches: [dev]" for both the push and pull_request triggers, so update the branches arrays under the push and pull_request keys (the two occurrences of branches: [dev]) to include "master" as well (e.g., branches: [dev, master]) if you want unit tests to run on master pushes/PRs; otherwise confirm in the PR description that the single-branch trigger is intentional and leave the branches arrays unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/license_tests.yml:
- Around line 11-12: Replace the reusable workflow reference that currently uses
the mutable ref
"OpenVoiceOS/gh-automations/.github/workflows/license-check.yml@dev" with an
immutable commit SHA (e.g., change "@dev" to "@<COMMIT_SHA>") before keeping
"secrets: inherit"; update the "uses" value in the workflow to point to that
specific SHA so the workflow is pinned to a known commit and cannot change
unexpectedly.
In @.github/workflows/release_workflow.yml:
- Around line 11-22: version.py currently defines VERSION_MAJOR, VERSION_MINOR,
VERSION_BUILD, VERSION_ALPHA inside VERSION_BLOCK but does not export
__version__, which pyproject.toml expects at
ovos_messagebus.version.__version__; update version.py to build a single string
from these symbols (VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD, VERSION_ALPHA)
and assign it to __version__ (e.g., inside the same VERSION_BLOCK or after it)
using the pattern major.minor.build plus an "aN" suffix when VERSION_ALPHA is
non-zero so the external publish workflow can read the version.
In @.github/workflows/unit_tests.yml:
- Around line 11-16: Add pytest to the project's optional test dependencies so
local runs of tests like test_event_handler.py and test_load_config.py don't
rely on the external workflow; update the [project.optional-dependencies]
section in pyproject.toml to include pytest (e.g., add "pytest>=7.0" under the
"test" group) ensuring the test runner is declared explicitly for local
development and CI.
---
Nitpick comments:
In @.github/workflows/license_tests.yml:
- Around line 5-6: Update the workflow's pull_request trigger to include master
so PRs targeting master run the license tests: modify the pull_request block
(the "pull_request" trigger's branches list) to contain both dev and master
(i.e., branches: [dev, master]) so hotfix/release PRs to master are checked
before merge.
In @.github/workflows/publish_stable.yml:
- Around line 9-10: The workflow references an external workflow with a floating
ref `@dev` (uses:
OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev), which
risks breaking the stable release pipeline; replace the `@dev` ref with a pinned
stable tag or branch (for example a semver tag like `@v1.2.3` or `@master`) so
the `uses:` points to a fixed, reviewed commit, then update the ref whenever you
intentionally adopt upstream changes and validate the actor guard `if:
github.actor != 'github-actions[bot]'` still behaves as expected.
In @.github/workflows/unit_tests.yml:
- Around line 4-6: The workflow currently only lists "branches: [dev]" for both
the push and pull_request triggers, so update the branches arrays under the push
and pull_request keys (the two occurrences of branches: [dev]) to include
"master" as well (e.g., branches: [dev, master]) if you want unit tests to run
on master pushes/PRs; otherwise confirm in the PR description that the
single-branch trigger is intentional and leave the branches arrays unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5cbeb200-7e84-41fa-8d46-d557298bcbea
📒 Files selected for processing (9)
- .github/workflows/build_tests.yml
- .github/workflows/coverage.yml
- .github/workflows/install_tests.yml
- .github/workflows/license_tests.yml
- .github/workflows/lint.yml
- .github/workflows/pip_audit.yml
- .github/workflows/publish_stable.yml
- .github/workflows/release_workflow.yml
- .github/workflows/unit_tests.yml
💤 Files with no reviewable changes (1)
- .github/workflows/install_tests.yml
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/lint.yml
Bug fixes:
- load_config.py: ssl=False override was silently ignored due to falsy `or` logic; replaced with `overrides['ssl'] if 'ssl' in overrides`
- event_handler.py: filter=True mode returned before broadcast on deserialization failure, dropping malformed frames; now logs at DEBUG and broadcasts raw payload unchanged (matches webrockets behavior)

Improvements:
- webrockets_backend.py: replace fixed 0.3s sleep with socket-based readiness probe (_wait_for_server_ready); add DEBUG log on deserialization failure instead of silent except
- requirements.txt: sync minimums to match pyproject.toml (ovos_bus_client >=1.3.0, ovos-utils >=0.8.0)

Tests (47 total, +20):
- test_event_handler.py: add TestOnMessage — filter=False/True with valid, malformed JSON, and non-OVOS dict payloads (4 new tests)
- test_load_config.py: add ssl override precedence regression tests (ssl=False wins, ssl=True wins, fallback to config) (3 new tests)
- test_webrockets_backend.py: add TestWaitForServerReady (port open, timeout graceful); update TestMain to patch _wait_for_server_ready instead of time; use sys.modules.setdefault for stub isolation (3 new tests, 2 updated)

Docs:
- docs/configuration.md: move ssl to top-level JSON example, add note that it is NOT nested under websocket
- docs/events.md: clarify filter mode delivery guarantee (malformed frames are still forwarded after fix)
- docs/server.md: fix "class-level set" → "module-level list"

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
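The falsy-`or` bug fixed above is a common pattern; a minimal reproduction of the problem and the presence-check fix (simplified, not the actual load_config.py code):

```python
def resolve_ssl_buggy(overrides: dict, config: dict):
    # BUG: ssl=False is falsy, so an explicit override is silently discarded
    # and the config value wins.
    return overrides.get("ssl") or config.get("ssl")

def resolve_ssl_fixed(overrides: dict, config: dict):
    # Presence check: an explicit False override now wins over the config.
    return overrides["ssl"] if "ssl" in overrides else config.get("ssl")
```

The same trap applies to any override that can legitimately be `False`, `0`, or `""`, which is why the regression tests above exercise both the `ssl=False` and `ssl=True` directions.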
…dd test extra
Bug fixes:
- load_config.py: ssl was read from config top-level (config.get('ssl'))
but mycroft.conf defines it under websocket; now reads from
websocket_configs.get('ssl') matching the actual default config
Docs corrected against ovos_config/mycroft.conf source of truth:
- docs/configuration.md: move ssl back under websocket in JSON example;
correct host default 0.0.0.0 → 127.0.0.1; correct max_msg_size
default 10 → 25 (per ovos_config/mycroft.conf); add citation to source
- docs/server.md: update ssl source citation
pyproject.toml:
- add [test] optional-dependency group with pytest>=7.0 and pytest-cov>=4.0
so local test runs don't need external tooling
Tests:
- test_load_config.py: update VALID_WS_CONFIG to nest ssl under websocket
(matching actual ovos-config defaults); update ssl regression tests to
use websocket-nested ssl key; fix host assertion 0.0.0.0 → 127.0.0.1
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…rockets, Rust)
Key results (vs Tornado baseline):
5c×200m: Rust +18%, webrockets +11%
20c×1000m: Rust +20%, webrockets +9%
50c×2000m: webrockets +24%, Rust +20%
100c×500m: webrockets +4%, Rust ⚠ 28 conn errors
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores