services #
BaseService #
Bases: MessageQueuePublisherMixin, ABC, BaseModel
Base class for a service.
The general structure of a service is as follows:
- A service has a name.
- A service has a service definition.
- A service uses a message queue to send/receive messages.
- A service has a processing loop, for continuous processing of messages.
- A service can process a message.
- A service can publish a message to another service.
- A service can be launched in-process.
- A service can be launched as a server.
- A service can be registered to the control plane.
- A service can be registered to the message queue.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
Source code in llama_deploy/services/base.py, lines 18–137.
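The abstract members listed below are what a concrete service must implement. As a rough orientation only, here is a minimal sketch of a custom subclass; the import paths and the CallableMessageConsumer helper are assumptions based on the names used in this reference, and the built-in services documented further down are the supported implementations.

```python
import asyncio
from typing import Any

# Assumed import locations; adjust to the actual package layout.
from llama_deploy.services.base import BaseService
from llama_deploy.messages import QueueMessage
from llama_deploy.message_consumers.base import BaseMessageQueueConsumer
from llama_deploy.message_consumers.callable import CallableMessageConsumer
from llama_deploy.types import ServiceDefinition


class EchoService(BaseService):
    """Toy service that just prints whatever it receives."""

    service_name: str = "echo_service"

    @property
    def service_definition(self) -> ServiceDefinition:
        # Describes the service to the control plane.
        return ServiceDefinition(
            service_name=self.service_name,
            description="Echoes incoming queue messages.",
        )

    def as_consumer(self, remote: bool = False) -> BaseMessageQueueConsumer:
        # A consumer that forwards queue messages to process_message.
        return CallableMessageConsumer(
            message_type=self.service_name,
            handler=self.process_message,
        )

    async def processing_loop(self) -> None:
        # Nothing to poll for in this toy example.
        while True:
            await asyncio.sleep(0.1)

    async def process_message(self, message: QueueMessage) -> Any:
        print(f"{self.service_name} received: {message}")

    async def launch_local(self) -> asyncio.Task:
        # Run the processing loop in-process.
        return asyncio.create_task(self.processing_loop())

    async def launch_server(self) -> None:
        raise NotImplementedError("This toy service has no HTTP server.")
```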
service_definition abstractmethod property #
service_definition: ServiceDefinition
The service definition.
as_consumer abstractmethod #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the message queue.
Source code in llama_deploy/services/base.py, lines 50–53.
processing_loop abstractmethod async #
processing_loop() -> None
The processing loop for the service.
Source code in llama_deploy/services/base.py, lines 55–58.
process_message abstractmethod async #
process_message(message: QueueMessage) -> Any
Process a message.
Source code in llama_deploy/services/base.py, lines 60–63.
launch_local abstractmethod async #
launch_local() -> Task
Launch the service in-process.
Source code in llama_deploy/services/base.py, lines 65–68.
launch_server abstractmethod async #
launch_server() -> None
Launch the service as a server.
Source code in llama_deploy/services/base.py, lines 70–73.
register_to_control_plane async #
register_to_control_plane(control_plane_url: str) -> None
Register the service to the control plane.
Source code in llama_deploy/services/base.py, lines 75–85.
deregister_from_control_plane async #
deregister_from_control_plane() -> None
Deregister the service from the control plane.
Source code in llama_deploy/services/base.py, lines 87–98.
get_session_state async #
get_session_state(session_id: str) -> dict[str, Any] | None
Get the session state from the control plane.
Source code in llama_deploy/services/base.py, lines 100–114.
update_session_state async #
update_session_state(session_id: str, state: dict[str, Any]) -> None
Update the session state in the control plane.
Source code in llama_deploy/services/base.py, lines 116–128.
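Both helpers operate on whatever dictionary the control plane stores for a session, so the typical pattern is read-modify-write. A minimal sketch, assuming a service instance and an existing session; the state key used here is purely illustrative:

```python
from llama_deploy.services.base import BaseService  # assumed import path


async def remember_last_task(service: BaseService, session_id: str, task_id: str) -> None:
    # Read the state the control plane currently holds for this session (may be None).
    state = await service.get_session_state(session_id) or {}
    # Add an illustrative bookkeeping key of our own...
    state["last_task_id"] = task_id
    # ...and write the merged state back to the control plane.
    await service.update_session_state(session_id, state)
```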
register_to_message_queue async #
register_to_message_queue() -> StartConsumingCallable
Register the service to the message queue.
Source code in llama_deploy/services/base.py, lines 130–134.
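Putting the registration methods together, a service is typically registered with the message queue and the control plane before it is launched. The sketch below shows one plausible ordering using WorkflowService and SimpleMessageQueue (both appear elsewhere in this module); the control-plane URL and the way the returned StartConsumingCallable is driven are assumptions.

```python
import asyncio

from llama_deploy import SimpleMessageQueue, WorkflowService
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class EchoWorkflow(Workflow):
    """Trivial workflow so the sketch is self-contained."""

    @step
    async def run_step(self, ev: StartEvent) -> StopEvent:
        return StopEvent(result=str(ev.get("message", "")))


async def main() -> None:
    message_queue = SimpleMessageQueue()
    service = WorkflowService(
        workflow=EchoWorkflow(),
        message_queue=message_queue,
        service_name="echo_workflow_service",
        host="127.0.0.1",
        port=8002,
    )

    # Register the consumer with the queue; assumption: the returned
    # StartConsumingCallable yields an awaitable that runs the consumer.
    start_consuming = await service.register_to_message_queue()
    consumer_task = asyncio.create_task(start_consuming())

    # Announce the service to the control plane (URL is illustrative).
    await service.register_to_control_plane("http://127.0.0.1:8000")

    try:
        await service.launch_server()
    finally:
        await service.deregister_from_control_plane()
        consumer_task.cancel()


asyncio.run(main())
```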
AgentService #
Bases: BaseService
Agent Service.
A service that runs an agent locally, processing incoming tasks step-wise in an endless loop.
Messages are published to the message queue, and the agent processes them in a loop, finally returning a message with the completed task.
This AgentService can either be run in a local loop or as a FastAPI server.
Exposes the following endpoints:
- GET /: Home endpoint.
- POST /process_message: Process a message.
- POST /task: Create a task.
- GET /messages: Get messages.
- POST /toggle_agent_running: Toggle the agent running state.
- GET /is_worker_running: Check if the agent is running.
- POST /reset_agent: Reset the agent.
Since the agent can launch as a FastAPI server, you can visit /docs for full swagger documentation.
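When the service is running as a FastAPI server, the read-only endpoints above can be exercised with any HTTP client. A minimal sketch using httpx, assuming the host and port from the example further below:

```python
import asyncio

import httpx

BASE_URL = "http://127.0.0.1:8003"  # assumed host/port of a running AgentService


async def main() -> None:
    async with httpx.AsyncClient() as client:
        info = (await client.get(f"{BASE_URL}/")).json()  # GET /: general service info
        status = (await client.get(f"{BASE_URL}/is_worker_running")).json()  # GET /is_worker_running
        print(info, status)


asyncio.run(main())
```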
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
agent | AgentRunner | | required |
description | str | | 'Local Agent Service.' |
prompt | List[ChatMessage] \| None | | None |
running | bool | | True |
step_interval | float | | 0.1 |
host | str | | required |
port | int | | required |
raise_exceptions | bool | | False |
Attributes:
Name | Type | Description |
---|---|---|
service_name | str | The name of the service. |
agent | AgentRunner | The agent to run. |
description | str | The description of the service. |
prompt | Optional[List[ChatMessage]] | The prompt messages, meant to be appended to the start of tasks (currently TODO). |
running | bool | Whether the agent is running. |
step_interval | float | The interval in seconds to poll for task completion. Defaults to 0.1s. |
host | Optional[str] | The host to launch a FastAPI server on. |
port | Optional[int] | The port to launch a FastAPI server on. |
raise_exceptions | bool | Whether to raise exceptions in the processing loop. |
Examples:

    from llama_deploy import AgentService
    from llama_index.core.agent import ReActAgent

    agent = ReActAgent.from_tools([...], llm=llm)

    agent_service = AgentService(
        agent,
        message_queue,
        service_name="my_agent_service",
        description="My Agent Service",
        host="127.0.0.1",
        port=8003,
    )

    # launch as a server for remote access or documentation
    await agent_service.launch_server()
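For quick in-process experiments, `await agent_service.launch_local()` can be used instead; it returns an asyncio `Task` rather than serving HTTP.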
Source code in llama_deploy/services/agent.py, lines 38–448.
publish_callback property #
publish_callback: Optional[PublishCallback]
The publish callback, if any.
processing_loop async #
processing_loop() -> None
The processing loop for the agent.
Source code in llama_deploy/services/agent.py, lines 216–309.
process_message async #
process_message(message: QueueMessage) -> None
Handling for when a message is received.
Source code in llama_deploy/services/agent.py, lines 311–334.
as_consumer #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the message queue.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
remote | bool | Whether to get a remote consumer or local. If remote, calls the … | False |
Source code in llama_deploy/services/agent.py, lines 336–360.
launch_local async #
launch_local() -> Task
Launch the agent locally.
Source code in llama_deploy/services/agent.py, lines 362–365.
lifespan async #
lifespan(app: FastAPI) -> AsyncGenerator[None, None]
Starts the processing loop when the fastapi app starts.
Source code in llama_deploy/services/agent.py, lines 369–374.
home async #
home() -> Dict[str, str]
Home endpoint. Gets general information about the agent service.
Source code in llama_deploy/services/agent.py, lines 376–401.
create_task async #
create_task(task_definition: TaskDefinition) -> Dict[str, str]
Create a task.
Source code in llama_deploy/services/agent.py, lines 403–408.
get_messages async #
get_messages() -> List[_ChatMessage]
Get messages from the agent.
Source code in llama_deploy/services/agent.py, lines 410–417.
toggle_agent_running async #
toggle_agent_running(state: Literal['running', 'stopped']) -> Dict[str, bool]
Toggle the agent running state.
Source code in llama_deploy/services/agent.py, lines 419–425.
is_worker_running async #
is_worker_running() -> Dict[str, bool]
Check if the agent is running.
Source code in llama_deploy/services/agent.py, lines 427–429.
reset_agent async #
reset_agent() -> Dict[str, str]
Reset the agent.
Source code in llama_deploy/services/agent.py, lines 431–435.
launch_server async #
launch_server() -> None
Launch the agent as a FastAPI server.
Source code in llama_deploy/services/agent.py, lines 437–448.
HumanService #
Bases: BaseService
A human service for providing human-in-the-loop assistance.
When launched locally, it will prompt the user for input, which is blocking!
When launched as a server, it will provide an API for creating and handling tasks.
Exposes the following endpoints:
- GET /: Get the service information.
- POST /process_message: Process a message.
- POST /tasks: Create a task.
- GET /tasks: Get all tasks.
- GET /tasks/{task_id}: Get a task.
- POST /tasks/{task_id}/handle: Handle a task.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
description | str | | 'Local Human Service.' |
running | bool | | True |
step_interval | float | | 0.1 |
fn_input | HumanInputFn | | default_human_input_fn |
human_input_prompt | str | | 'Your assistance is needed. Please respond to the request provided below:\n===\n\n{input_str}\n\n===\n' |
host | str | | required |
port | int | | required |
Attributes:
Name | Type | Description |
---|---|---|
service_name | str | The name of the service. |
description | str | The description of the service. |
running | bool | Whether the service is running. |
step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
host | Optional[str] | The host of the service. |
port | Optional[int] | The port of the service. |
Source code in llama_deploy/services/human.py, lines 66–438.
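This reference includes no example for HumanService, so here is a construction sketch mirroring the other service examples; the service name, description, host, and port values are arbitrary.

```python
from llama_deploy import HumanService, SimpleMessageQueue

human_service = HumanService(
    message_queue=SimpleMessageQueue(),
    service_name="my_human_service",
    description="Asks a human for input when a task arrives.",
    host="127.0.0.1",
    port=8004,
)

# Blocking, prompts on stdin when run in-process:
# await human_service.launch_local()
# Or expose the task-creation/handling API instead:
# await human_service.launch_server()
```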
publish_callback property #
publish_callback: Optional[PublishCallback]
The publish callback, if any.
HumanTask #
Bases: BaseModel
Container for Tasks to be completed by HumanService.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
task_def | TaskDefinition | | required |
tool_call | ToolCall \| None | | None |
Source code in llama_deploy/services/human.py, lines 279–283.
processing_loop async #
processing_loop() -> None
The processing loop for the service.
Source code in llama_deploy/services/human.py, lines 204–277.
process_message async #
process_message(message: QueueMessage) -> None
Process a message received from the message queue.
Source code in llama_deploy/services/human.py, lines 285–308.
as_consumer #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the service.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/human.py, lines 310–334.
launch_local async #
launch_local() -> Task
Launch the service in-process.
Source code in llama_deploy/services/human.py, lines 336–339.
lifespan async #
lifespan(app: FastAPI) -> AsyncGenerator[None, None]
Starts the processing loop when the fastapi app starts.
Source code in llama_deploy/services/human.py, lines 343–348.
home async #
home() -> Dict[str, str]
Get general service information.
Source code in llama_deploy/services/human.py, lines 350–360.
create_task async #
create_task(task: TaskDefinition) -> Dict[str, str]
Create a task for the human service.
Source code in llama_deploy/services/human.py, lines 362–367.
get_tasks async #
get_tasks() -> List[TaskDefinition]
Get all outstanding tasks.
Source code in llama_deploy/services/human.py, lines 369–372.
get_task async #
get_task(task_id: str) -> Optional[TaskDefinition]
Get a specific task by ID.
Source code in llama_deploy/services/human.py, lines 374–380.
handle_task async #
handle_task(task_id: str, result: HumanResponse) -> None
Handle a task by providing a result.
Source code in llama_deploy/services/human.py, lines 382–413.
launch_server async #
launch_server() -> None
Launch the service as a FastAPI server.
Source code in llama_deploy/services/human.py, lines 415–427.
validate_human_input_prompt classmethod #
validate_human_input_prompt(v: str) -> str
Check if input_str is a prompt key.
Source code in llama_deploy/services/human.py, lines 429–438.
ToolService #
Bases: BaseService
A service that executes tools remotely for other services.
This service is responsible for executing tools remotely for other services and agents.
Exposes the following endpoints:
- GET /: Home endpoint.
- POST /tool_call: Create a tool call.
- GET /tool: Get a tool by name.
- POST /process_message: Process a message.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
tools | List[AsyncBaseTool] | | required |
description | str | | 'Local Tool Service.' |
running | bool | | True |
step_interval | float | | 0.1 |
host | str | | required |
port | int | | required |
Attributes:
Name | Type | Description |
---|---|---|
tools | List[AsyncBaseTool] | A list of tools to execute. |
description | str | The description of the tool service. |
running | bool | Whether the service is running. |
step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
host | Optional[str] | The host of the service. |
port | Optional[int] | The port of the service. |
Examples:

    from llama_deploy import ToolService, MetaServiceTool, SimpleMessageQueue
    # OpenAI lives in the llama-index OpenAI integration package
    from llama_index.llms.openai import OpenAI
    from llama_index.core.agent import FunctionCallingAgentWorker

    message_queue = SimpleMessageQueue()

    tool_service = ToolService(
        message_queue=message_queue,
        tools=[tool],
        running=True,
        step_interval=0.5,
    )

    # create a meta tool and use it in any other agent
    # this allows remote execution of that tool
    meta_tool = MetaServiceTool(
        tool_metadata=tool.metadata,
        message_queue=message_queue,
        tool_service_name=tool_service.service_name,
    )
    agent = FunctionCallingAgentWorker.from_tools(
        [meta_tool],
        llm=OpenAI(),
    ).as_agent()
Source code in llama_deploy/services/tool.py, lines 35–328.
publish_callback property #
publish_callback: Optional[PublishCallback]
The publish callback, if any.
processing_loop async #
processing_loop() -> None
The processing loop for the service.
Source code in llama_deploy/services/tool.py, lines 182–236.
process_message async #
process_message(message: QueueMessage) -> None
Process a message.
Source code in llama_deploy/services/tool.py, lines 238–249.
as_consumer #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the service.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/tool.py, lines 251–274.
launch_local async #
launch_local() -> Task
Launch the service in-process.
Source code in llama_deploy/services/tool.py, lines 276–278.
lifespan async #
lifespan(app: FastAPI) -> AsyncGenerator[None, None]
Starts the processing loop when the fastapi app starts.
Source code in llama_deploy/services/tool.py, lines 282–287.
home async #
home() -> Dict[str, str]
Home endpoint. Returns the general information about the service.
Source code in llama_deploy/services/tool.py, lines 289–302.
create_tool_call async #
create_tool_call(tool_call: ToolCall) -> Dict[str, str]
Create a tool call.
Source code in llama_deploy/services/tool.py, lines 304–308.
get_tool_by_name async #
get_tool_by_name(name: str) -> Dict[str, Any]
Get a tool by name.
Source code in llama_deploy/services/tool.py, lines 310–315.
launch_server async #
launch_server() -> None
Launch the service as a FastAPI server.
Source code in llama_deploy/services/tool.py, lines 317–328.
ComponentService #
Bases: BaseService
Component service.
Wraps a query pipeline component into a service.
Exposes the following endpoints:
- GET /: Home endpoint.
- POST /process_message: Process a message.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
component | Any | | required |
description | str | | 'Component service.' |
running | bool | | True |
step_interval | float | | 0.1 |
host | str | | required |
port | int | | required |
raise_exceptions | bool | | False |
Attributes:
Name | Type | Description |
---|---|---|
component | Any | The query pipeline component. |
description | str | The description of the service. |
running | bool | Whether the service is running. |
step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
host | Optional[str] | The host of the service. |
port | Optional[int] | The port of the service. |
raise_exceptions | bool | Whether to raise exceptions. |
Examples:

    from llama_deploy import ComponentService
    from llama_index.core.query_pipeline import QueryComponent

    component_service = ComponentService(
        component=query_component,
        message_queue=message_queue,
        description="component_service",
        service_name="my_component_service",
    )
Source code in llama_deploy/services/component.py, lines 31–264.
processing_loop async #
processing_loop() -> None
The processing loop for the service.
Source code in llama_deploy/services/component.py, lines 158–190.
process_message async #
process_message(message: QueueMessage) -> None
Process a message received from the message queue.
Source code in llama_deploy/services/component.py, lines 192–200.
as_consumer #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the message queue.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/component.py, lines 202–226.
launch_local async #
launch_local() -> Task
Launch the service in-process.
Source code in llama_deploy/services/component.py, lines 228–231.
lifespan async #
lifespan(app: FastAPI) -> AsyncGenerator[None, None]
Starts the processing loop when the fastapi app starts.
Source code in llama_deploy/services/component.py, lines 235–240.
home async #
home() -> Dict[str, str]
Home endpoint. Returns general information about the service.
Source code in llama_deploy/services/component.py, lines 242–251.
launch_server async #
launch_server() -> None
Launch the service as a FastAPI server.
Source code in llama_deploy/services/component.py, lines 253–264.
WorkflowService #
Bases: BaseService
Workflow service.
Wraps a llama-index workflow into a service.
Exposes the following endpoints:
- GET /: Home endpoint.
- POST /process_message: Process a message.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
service_name | str | | required |
workflow | Workflow | | required |
description | str | | 'Workflow service.' |
running | bool | | True |
step_interval | float | | 0.1 |
max_concurrent_tasks | int | | 8 |
host | str | | required |
port | int | | required |
internal_host | str \| None | | None |
internal_port | int \| None | | None |
raise_exceptions | bool | | False |
Attributes:
Name | Type | Description |
---|---|---|
workflow | Workflow | The workflow itself. |
description | str | The description of the service. |
running | bool | Whether the service is running. |
step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
max_concurrent_tasks | int | The number of tasks that the service can process at a given time. |
host | Optional[str] | The host of the service. |
port | Optional[int] | The port of the service. |
raise_exceptions | bool | Whether to raise exceptions. |
Examples:

    from llama_deploy import WorkflowService
    from llama_index.core.workflow import Workflow

    workflow_service = WorkflowService(
        workflow,
        message_queue=message_queue,
        description="workflow_service",
        service_name="my_workflow_service",
    )
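As with the other services, the wrapped workflow can then be run in-process with `await workflow_service.launch_local()` or served over HTTP with `await workflow_service.launch_server()`.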
Source code in llama_deploy/services/workflow.py, lines 83–525.
get_workflow_state async #
get_workflow_state(state: WorkflowState) -> Optional[Context]
Load the existing context from the workflow state.
TODO: Support managing the workflow state?
Source code in llama_deploy/services/workflow.py, lines 218–250.
set_workflow_state async #
set_workflow_state(ctx: Context, current_state: WorkflowState) -> None
Set the workflow state for this session.
Source code in llama_deploy/services/workflow.py, lines 252–276.
process_call async #
process_call(current_call: WorkflowState) -> None
Processes a given task, and writes a response to the message queue.
Handles errors with a generic try/except, and publishes the error message as the result.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
current_call | WorkflowState | The state of the current task, including run_kwargs and other session state. | required |
Source code in llama_deploy/services/workflow.py, lines 278–381.
manage_tasks async #
manage_tasks() -> None
Acts as a manager to process outstanding tasks from a queue.
Limits the number of tasks in progress to self.max_concurrent_tasks. If the number of ongoing tasks is greater than or equal to self.max_concurrent_tasks, new tasks are buffered until there is room to run them.
Source code in llama_deploy/services/workflow.py, lines 383–418.
processing_loop async #
processing_loop() -> None
The processing loop for the service with non-blocking concurrent task execution.
Source code in llama_deploy/services/workflow.py, lines 420–435.
process_message async #
process_message(message: QueueMessage) -> None
Process a message received from the message queue.
Source code in llama_deploy/services/workflow.py, lines 437–460.
as_consumer #
as_consumer(remote: bool = False) -> BaseMessageQueueConsumer
Get the consumer for the message queue.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/workflow.py, lines 462–486.
launch_local async #
launch_local() -> Task
Launch the service in-process.
Source code in llama_deploy/services/workflow.py, lines 488–491.
lifespan async #
lifespan(app: FastAPI) -> AsyncGenerator[None, None]
Starts the processing loop when the fastapi app starts.
Source code in llama_deploy/services/workflow.py, lines 495–500.
home async #
home() -> Dict[str, str]
Home endpoint. Returns general information about the service.
Source code in llama_deploy/services/workflow.py, lines 502–511.
launch_server async #
launch_server() -> None
Launch the service as a FastAPI server.
Source code in llama_deploy/services/workflow.py, lines 513–525.
WorkflowServiceConfig #
Bases: BaseSettings
Workflow service configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
host | str | | required |
port | int | | required |
internal_host | str \| None | | None |
internal_port | int \| None | | None |
service_name | str | | required |
description | str | | 'A service that wraps a llama-index workflow.' |
running | bool | | True |
step_interval | float | | 0.1 |
max_concurrent_tasks | int | | 8 |
raise_exceptions | bool | | False |
Source code in llama_deploy/services/workflow.py, lines 41–55.
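Because WorkflowServiceConfig is a pydantic BaseSettings, unset fields can also be populated from the environment; the exact environment variable names are not listed in this reference, so the sketch below passes values explicitly. The import path is assumed from the source location above.

```python
from llama_deploy.services.workflow import WorkflowServiceConfig  # assumed import path

config = WorkflowServiceConfig(
    host="127.0.0.1",
    port=8002,
    service_name="my_workflow_service",
    max_concurrent_tasks=4,  # lower than the default of 8
)
print(config.model_dump())
```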