
feat: add DingTalk channel plugin with OpenClaw connector #82

Merged
xiami762 merged 3 commits into main from feat/dingtalk-channel
Apr 10, 2026

Conversation

@duguwanglong
Contributor

  • Add dingtalk channel plugin files to .flocks/plugins/channels/dingtalk/ including dingtalk.py, runner.ts and dingtalk-openclaw-connector
  • Add channels/dingtalk/ as the source copy alongside runtime plugin
  • Pin axios to 1.14.0 in connector package.json for version stability
  • Initialize root workspace package.json / package-lock.json

@duguwanglong duguwanglong force-pushed the feat/dingtalk-channel branch from 6247e9d to 25899a4 on April 10, 2026 05:13
@duguwanglong duguwanglong requested a review from xiami762 April 10, 2026 05:14
- Add dingtalk channel plugin files to .flocks/plugins/channels/dingtalk/
  including dingtalk.py, runner.ts and dingtalk-openclaw-connector
- Add channels/dingtalk/ as the source copy alongside runtime plugin
- Pin axios to 1.14.0 in connector package.json for version stability
- Initialize root workspace package.json / package-lock.json

Made-with: Cursor
@duguwanglong duguwanglong force-pushed the feat/dingtalk-channel branch from 25899a4 to e8297d0 on April 10, 2026 05:26
Contributor

@xiami762 xiami762 left a comment

Overall

The approach is clear: dingtalk.py spawns Node, runner.ts provides a minimal OpenClaw runtime shim, and plugin.ts keeps calling streamFromGateway against a local HTTP proxy that translates to Flocks’ POST /api/session, POST .../message, and GET /api/event. That design is reasonable.

Main risks: the global SSE stream is not filtered by session, the integration bypasses the normal channel inbound path, and there are dependency, documentation, and version-consistency gaps.


Must-fix (high priority)

1. /api/event is a global broadcast — filter by sessionID

Flocks’ SSE uses a global EventBroadcaster; every subscriber sees the same stream. In runner.ts, flocksToOpenAIStream consumes message.part.updated / message.updated without checking that properties.part.sessionID (or the equivalent on message.updated) matches the current request’s sessionId.

The server already includes sessionID on those events (e.g. publish_event("message.part.updated", {"part": {..., "sessionID": sessionID}})).

With multiple DingTalk conversations or other clients (Web/TUI) running sessions at once, the proxy can merge another session’s deltas or completion signals into the wrong OpenAI-style SSE response — incorrect behavior and possible cross-session leakage. Filter all handled events by sessionId before yielding or finishing.
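The check itself is small. The actual fix belongs in runner.ts's flocksToOpenAIStream, but the predicate can be sketched in Python; event shapes (properties.part.sessionID for message.part.updated, properties.info.sessionID for message.updated) are taken from this review, and the helper name event_matches_session is hypothetical:

```python
def event_matches_session(event, session_id):
    """Return True only when an SSE event belongs to the given session.

    Assumed shapes (from the review): message.part.updated carries
    properties.part.sessionID; message.updated carries properties.info.sessionID.
    """
    props = event.get("properties", {})
    etype = event.get("type")
    if etype == "message.part.updated":
        return props.get("part", {}).get("sessionID") == session_id
    if etype == "message.updated":
        return props.get("info", {}).get("sessionID") == session_id
    # Unknown event types are dropped rather than leaked across sessions.
    return False
```

The proxy would apply this predicate to every event before yielding a delta or emitting a finish signal, so concurrent sessions can no longer cross streams.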

2. Python start() never calls on_message

GatewayManager passes on_message = InboundDispatcher.dispatch. DingTalkChannel.start only starts the subprocess and monitors it; it never turns DingTalk traffic into InboundMessage and forwards it to the dispatcher.

So DingTalk does not go through dedup, debounce, channel.inbound hooks, session binding, group policies, etc., like Feishu/WeCom. Either document this explicitly (“by design: direct Session API only, not InboundDispatcher”) or bridge inbound events if parity is required.
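If parity is the goal, a thin bridge could translate raw DingTalk callbacks into the shared inbound path. A minimal sketch, assuming on_message is the InboundDispatcher.dispatch coroutine mentioned above; the InboundMessage dataclass and the payload field names (conversationId, senderStaffId, text.content) are assumptions, not Flocks' real types:

```python
from dataclasses import dataclass


@dataclass
class InboundMessage:
    """Stand-in for Flocks' real inbound message type (assumed shape)."""
    channel: str
    conversation_id: str
    sender_id: str
    text: str


async def bridge_dingtalk_event(raw, on_message):
    """Translate a raw DingTalk callback into the shared inbound path, so
    dedup, debounce, and channel.inbound hooks apply as for Feishu/WeCom."""
    msg = InboundMessage(
        channel="dingtalk",
        conversation_id=raw.get("conversationId", ""),
        sender_id=raw.get("senderStaffId", ""),
        text=raw.get("text", {}).get("content", "").strip(),
    )
    await on_message(msg)
```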

3. Documented optional config vs. actual env/cfg

dingtalk.py documents dmPolicy, allowFrom, separateSessionByConversation, etc. as passed through to plugin.ts, but the subprocess env and the cfg.channels['dingtalk-connector'] object in runner.ts only carry a small subset. Either plumb those options through (e.g. env or JSON blob) or trim the doc so users are not misled.
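One way to plumb the options through is a single JSON blob in the child's environment. A sketch under assumptions: the helper name build_runner_env and the DINGTALK_EXTRA_CONFIG variable are hypothetical (the follow-up commit reportedly forwards individual env vars instead), and the key list is taken from this review:

```python
import json
import os

# Optional plugin.ts settings documented in dingtalk.py (per the review).
OPTIONAL_KEYS = (
    "dmPolicy",
    "allowFrom",
    "separateSessionByConversation",
    "groupSessionScope",
    "sharedMemoryAcrossConversations",
)


def build_runner_env(cfg, base_env=None):
    """Copy the parent env and attach documented optional options as JSON,
    so runner.ts can rebuild cfg.channels['dingtalk-connector'] faithfully."""
    env = dict(os.environ if base_env is None else base_env)
    extras = {k: cfg[k] for k in OPTIONAL_KEYS if k in cfg}
    if extras:
        env["DINGTALK_EXTRA_CONFIG"] = json.dumps(extras)
    return env
```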


Medium priority

4. First run and lockfiles

The child runs npm run start:runner under dingtalk-openclaw-connector, so npm install must be run first. validate_config does not check node_modules or script availability — failures may be late and opaque.

The PR includes bun.lock but no package-lock.json while the launcher uses npm. For reproducible installs and CI, commit package-lock.json, or standardize on Bun and change dingtalk.py accordingly — avoid npm without a lockfile.
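A fail-fast check in validate_config could look like the following sketch; the function name validate_runner_install is hypothetical and the real dingtalk.py API may differ:

```python
from pathlib import Path


def validate_runner_install(connector_dir):
    """Return a list of actionable problems instead of a late, opaque
    npm failure when the child process is spawned."""
    root = Path(connector_dir)
    errors = []
    if not (root / "package.json").is_file():
        errors.append(f"missing {root / 'package.json'}")
    if not (root / "node_modules").is_dir():
        errors.append(
            f"dependencies not installed; run `npm install` (or `bun install`) in {root}"
        )
    return errors
```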

5. Version skew

openclaw.plugin.json says 0.7.4; package.json says 0.7.5. Align them.

6. mark_connected() timing

mark_connected() runs right after Popen, before runner/proxy/DingTalk are actually ready. If accurate status matters, consider a readiness signal (stdout line or local HTTP probe) before marking connected.
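One cheap readiness signal is polling the proxy's local TCP port after Popen and before mark_connected(). A sketch, assuming the runner listens on a known local port; wait_for_ready is a hypothetical helper, not part of the PR:

```python
import socket
import time


def wait_for_ready(host, port, timeout=15.0):
    """Poll a TCP port until it accepts connections or the deadline passes.
    Returns True once the runner's proxy is actually listening."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.25)
    return False
```

An equally valid alternative is having runner.ts print a sentinel line on stdout once its HTTP proxy is bound, and watching for it from Python.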

7. Repo layout and size

Everything lives under .flocks/plugins/channels/dingtalk/, including a very large plugin.ts. ChannelRegistry loads <cwd>/.flocks/plugins/channels, so the plugin appears when the repo root is the working directory; document behavior when cwd is elsewhere.

Long term, prefer submodule, versioned npm dependency, or a fetch script instead of vendoring a huge file for easier security review and upgrades.

8. Licensing

The subtree has its own MIT LICENSE and upstream repo metadata — ensure compatibility with the main project license and consider a short NOTICE for third-party origin.


Four bugs caused channel status to stay stuck in "connecting":

1. _kill_process() in dingtalk.py used subprocess.wait(timeout=5) which
   blocked the asyncio event loop for up to 5 seconds, causing a race
   condition where the new channel task started before the old task fully
   unwound and called mark_disconnected().
   Fix: replace with _kill_process_async() using asyncio.to_thread.

2. _monitor() swallowed subprocess non-zero exit by calling
   mark_disconnected() and returning normally, so _run_with_reconnect
   treated it as a clean exit and retried immediately with no backoff.
   Fix: raise RuntimeError on non-zero exit to trigger exponential backoff.

3. _run_with_reconnect had no delay after a clean start() return, causing
   a tight busy-loop when the connection drops immediately after connect.
   Fix: sleep RECONNECT_BASE_DELAY (1s) before retrying clean exits.

4. get_status() called registry.get(channel_id) which returned the latest
   registered plugin instance after each file-watcher rescan, while the
   running task held a reference to an older instance. The new instance
   always had connected=False, making the UI show "connecting" forever.
   Fix: add _running_plugins dict to GatewayManager that pins the plugin
   instance used by the running task; all status/stop paths use it.

Made-with: Cursor
…, node_modules check

- runner.ts: filter SSE events by sessionId in flocksToOpenAIStream to prevent
  delta/finish signals from concurrent sessions being mixed into the wrong stream
  (props.part.sessionID for message.part.updated, props.info.sessionID for
  message.updated)

- dingtalk.py + runner.ts: forward optional plugin.ts config fields
  (dmPolicy, allowFrom, separateSessionByConversation, groupSessionScope,
  sharedMemoryAcrossConversations) via env vars so flocks.json settings
  match documented behaviour

- dingtalk.py: validate_config now checks node_modules exists and emits
  a clear install hint if npm/bun install has not been run yet

- dingtalk.py: document that on_message/InboundDispatcher is intentionally
  unused; DingTalk routes messages directly through the Session API inside
  the runner.ts ↔ plugin.ts layer

Made-with: Cursor
@xiami762 xiami762 merged commit e973524 into main Apr 10, 2026
2 checks passed