IPC¶
Calling Python from the Frontend¶
Ref:
- https://tauri.app/develop/calling-rust/
pytauri implements an IPC API consistent with Rust Tauri's, so reading Tauri's documentation is effectively reading pytauri's documentation.
Commands¶
Registering Commands¶
You can register a command handler using the decorator @Commands.command.
Similar to tauri::command!, the handler signature can be arbitrary. We use inspect.signature to inspect its signature and dynamically pass the required parameters.
Info
You might have seen this pattern in FastAPI 🤓.
The currently supported signature pattern is ArgumentsType. You must ensure that the parameter names and type annotations are correct, and @Commands.command will check them.
# pyright: reportRedeclaration=none
# ruff: noqa: F811
from pytauri import AppHandle, Commands
commands = Commands()
# ⭐ OK
@commands.command()
async def command(body: bytes) -> bytes: ...
# ⭐ OK
@commands.command()
async def command(body: bytes, app_handle: AppHandle) -> bytes: ...
# 💥 ERROR: missing/wrong type annotation
@commands.command()
async def command(
    body: bytes,
    app_handle,  # pyright: ignore[reportUnknownParameterType, reportMissingParameterType] # noqa: ANN001
) -> bytes: ...
# 💥 ERROR: wrong parameter name
@commands.command()
async def command(body: bytes, foo: AppHandle) -> bytes: ...
# 💥 ERROR: not an async function
@commands.command() # pyright: ignore[reportArgumentType, reportUntypedFunctionDecorator]
def command(body: bytes) -> bytes: ...
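For instance, here is a minimal sketch of a complete command; the name `greet` and its logic are illustrative, not part of the pytauri docs:
from pytauri import AppHandle, Commands

commands = Commands()


@commands.command()
async def greet(body: bytes, app_handle: AppHandle) -> bytes:
    # `body` is the raw request body sent from the frontend;
    # `app_handle` is injected automatically by name and annotation (unused here).
    name = body.decode()
    return f"hello, {name}!".encode()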
Deserializing the Body¶
The body argument is of type bytes, allowing you to pass binary data such as files between the frontend and backend.
However, in most cases, we want strong type checking when calling. Rust Tauri achieves this through serde, while pytauri uses pydantic.
Info
pydantic is a super-fast Python validation and serialization library written in Rust/PyO3 🤓.
If you use BaseModel/RootModel as the type annotation for the body parameter/return value, pytauri will automatically serialize/deserialize it for you:
# pyright: reportRedeclaration=none
# ruff: noqa: F811
from pydantic import BaseModel, RootModel
from pytauri import AppHandle, Commands
commands = Commands()
class Input(BaseModel):
    foo: str
    bar: int
Output = RootModel[list[str]]
# ⭐ OK
@commands.command()
async def command(body: Input, app_handle: AppHandle) -> Output: ...
# ⭐ OK
@commands.command()
async def command(body: Input) -> bytes: ...
# ⭐ OK
@commands.command()
async def command(body: bytes) -> Output: ...
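For example, here is a hedged sketch of a command that actually uses the validated model; the command name `greet_person` and its payload are illustrative:
from pydantic import BaseModel, RootModel
from pytauri import Commands

commands = Commands()


class Person(BaseModel):
    name: str
    age: int


Greetings = RootModel[list[str]]


@commands.command()
async def greet_person(body: Person) -> Greetings:
    # pydantic has already validated and deserialized the request body into `Person`;
    # the returned `Greetings` model is serialized back for the frontend.
    return Greetings([f"Hello, {body.name}!", f"You are {body.age} years old."])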
Generate Invoke Handler for App¶
To execute async commands, we need an async runtime. We use anyio.from_thread.BlockingPortal as the async runtime in a child thread (the main thread is used for the Tauri app's event loop).
Refer to the anyio docs for more information.
You can obtain a BlockingPortal as follows, then generate an invoke_handler from it and pass it to the App, similar to Rust's tauri::generate_handler:
from anyio.from_thread import start_blocking_portal
from pytauri import BuilderArgs, Commands, builder_factory, context_factory
commands = Commands()
with start_blocking_portal("asyncio") as portal: # or "trio"
    builder = builder_factory()
    app = builder.build(
        BuilderArgs(
            context_factory(),
            # 👇
            invoke_handler=commands.generate_handler(portal),
        )
    )
    app.run()
The key point here is that you must not close the BlockingPortal (i.e., do not exit the context manager) while App.run is still running.
If you want to obtain this invoke_handler and keep the BlockingPortal running, you can use contextlib.ExitStack to achieve this:
from contextlib import ExitStack
from sys import exc_info
from anyio.from_thread import start_blocking_portal
from pytauri import Commands
commands = Commands()
exit_stack = ExitStack()
portal = exit_stack.enter_context(start_blocking_portal("asyncio"))
# 👉 the `invoke_handler` will remain available until the `ExitStack` is closed
invoke_handler = commands.generate_handler(portal)
"""do some stuff ..."""
# 👉 then remember to close the `ExitStack` to exit the portal
exit_stack.__exit__(*exc_info())
You can also spawn tasks in the async runtime (in the child thread) from the main thread in a thread-safe manner using the portal: https://anyio.readthedocs.io/en/stable/threads.html#spawning-tasks-from-worker-threads
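For instance, a minimal self-contained sketch of scheduling work on the portal's event loop from another thread (the coroutine `background_job` is illustrative):
import anyio
from anyio.from_thread import start_blocking_portal


async def background_job(name: str) -> None:
    await anyio.sleep(1)
    print(f"{name} finished")


with start_blocking_portal("asyncio") as portal:
    # thread-safe: schedules the coroutine on the portal's event loop
    future = portal.start_task_soon(background_job, "demo")
    future.result()  # optionally wait for the result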
Calling Commands¶
import { pyInvoke, rawPyInvoke } from "tauri-plugin-pytauri-api";
// or if tauri config `app.withGlobalTauri = true`:
//
// ```js
// const { pyInvoke, rawPyInvoke } = window.__TAURI__.pytauri;
// ```
const output = await pyInvoke<string[]>("command", { foo: "foo", bar: 42 });
The difference between rawPyInvoke and pyInvoke is that the input and output of rawPyInvoke are both ArrayBuffer, allowing you to pass binary data.
Returning Errors to the Frontend¶
Similar to FastAPI, as long as you raise an InvokeException in the command, the frontend promise will reject with the error message.
from pytauri import Commands
from pytauri.ipc import InvokeException
commands = Commands()
@commands.command()
async def command() -> bytes:
    raise InvokeException("error message")
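For example, a hedged sketch of a command that validates its input and reports failure to the frontend; the name `divide` and its payload model are illustrative:
from pydantic import BaseModel
from pytauri import Commands
from pytauri.ipc import InvokeException

commands = Commands()


class Division(BaseModel):
    numerator: float
    denominator: float


@commands.command()
async def divide(body: Division) -> bytes:
    if body.denominator == 0:
        # the frontend promise rejects with this message
        raise InvokeException("denominator must not be zero")
    return str(body.numerator / body.denominator).encode()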
Calling Frontend from Python¶
Ref:
- https://tauri.app/develop/calling-frontend/
- pytauri.ipc.JavaScriptChannelId and pytauri.ipc.Channel
- pytauri.webview.WebviewWindow.eval
Channels¶
Channels are designed to be fast and deliver ordered data. They are used internally for streaming operations such as download progress, child process output, and WebSocket messages.
To use a channel, you only need to add a JavaScriptChannelId field to the BaseModel/RootModel, and then use JavaScriptChannelId.channel_on to get a Channel instance.
Info
JavaScriptChannelId itself is a RootModel, so you can directly use it as the body parameter.
from pydantic import RootModel
from pytauri import Commands
from pytauri.ipc import Channel, JavaScriptChannelId
from pytauri.webview import WebviewWindow
commands = Commands()
Msg = RootModel[str]
@commands.command()
async def command(
    body: JavaScriptChannelId[Msg], webview_window: WebviewWindow
) -> bytes:
    channel: Channel[Msg] = body.channel_on(webview_window.as_ref_webview())
    # 👇 you should do this as a background task; here we keep it simple as an example
    channel.send(b'"message"')
    channel.send_model(Msg("message"))
    return b"null"
import { pyInvoke, Channel } from "tauri-plugin-pytauri-api";
// const { pyInvoke, Channel } = window.__TAURI__.pytauri;
const channel = new Channel<string>();
channel.addJsonListener((msg) => console.log(msg));
await pyInvoke("command", channel);
Info
The Channel in tauri-plugin-pytauri-api is just a subclass of the Channel in @tauri-apps/api/event. It adds the addJsonListener method to help deserialize data. You can use Channel.onmessage to handle raw ArrayBuffer data.
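As a more realistic (illustrative) sketch, a command could stream progress updates through the channel as a series of messages; the command name `download` and the progress loop below are assumptions, not part of the pytauri docs:
import anyio
from pydantic import RootModel
from pytauri import Commands
from pytauri.ipc import Channel, JavaScriptChannelId
from pytauri.webview import WebviewWindow

commands = Commands()

Progress = RootModel[int]


@commands.command()
async def download(
    body: JavaScriptChannelId[Progress], webview_window: WebviewWindow
) -> bytes:
    channel: Channel[Progress] = body.channel_on(webview_window.as_ref_webview())
    for percent in range(0, 101, 20):
        await anyio.sleep(0.1)  # simulate a chunk of work
        channel.send_model(Progress(percent))
    return b"null"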
Evaluating JavaScript¶
You can use WebviewWindow.eval to evaluate JavaScript code in the frontend.
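For example, a hedged sketch of a command that runs a small script in the window that invoked it (the command name `notify` is illustrative):
from pytauri import Commands
from pytauri.webview import WebviewWindow

commands = Commands()


@commands.command()
async def notify(body: bytes, webview_window: WebviewWindow) -> bytes:
    # evaluate a snippet of JavaScript in the calling window
    webview_window.eval("console.log('hello from Python')")
    return b"null"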
Event System¶
Ref:
- https://tauri.app/develop/calling-frontend/#event-system
- https://tauri.app/develop/calling-rust/#event-system
- pytauri.Listener
- pytauri.Emitter
Tauri ships a simple event system you can use to have bi-directional communication between Rust and your frontend.
The event system was designed for situations where small amounts of data need to be streamed or you need to implement a multi consumer multi producer pattern (e.g. push notification system).
The event system is not designed for low latency or high throughput situations. See the channels section for the implementation optimized for streaming data.
The major differences between a Tauri command and a Tauri event are that events have no strong type support, event payloads are always JSON strings (making them unsuitable for larger messages), and events do not support the capabilities system for fine-grained control of event data and channels.
See pytauri.Listener and pytauri.Emitter for the corresponding APIs.