# services

## BaseService

Bases: MessageQueuePublisherMixin, ABC, BaseModel

Base class for a service.

The general structure of a service is as follows:

- A service has a name.
- A service has a service definition.
- A service uses a message queue to send/receive messages.
- A service has a processing loop, for continuous processing of messages.
- A service can process a message.
- A service can publish a message to another service.
- A service can be launched in-process.
- A service can be launched as a server.
- A service can be registered to the control plane.
- A service can be registered to the message queue.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | | required |
Source code in llama_deploy/services/base.py
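For orientation, here is a minimal, hypothetical sketch of a concrete subclass, based only on the abstract members documented on this page; the `EchoService` name, import path, and any concrete behavior shown are illustrative assumptions, not part of llama_deploy.

```python
# Hypothetical sketch of a BaseService subclass. Method signatures mirror the
# abstract members documented below; any concrete behavior shown is assumed.
import asyncio
from typing import Any

from llama_deploy.services.base import BaseService


class EchoService(BaseService):
    """Toy service that would simply echo messages back (illustrative only)."""

    service_name: str = "echo_service"

    @property
    def service_definition(self):
        # Return this service's ServiceDefinition (construction omitted here).
        ...

    def as_consumer(self, remote: bool = False):
        # Return a BaseMessageQueueConsumer wired to process_message.
        ...

    async def processing_loop(self) -> None:
        # Continuously process queued work until the service is stopped.
        ...

    async def process_message(self, message) -> Any:
        # Handle a single QueueMessage pulled off the message queue.
        ...

    async def launch_local(self) -> asyncio.Task:
        # Run the processing loop in-process and hand back the task.
        return asyncio.create_task(self.processing_loop())

    async def launch_server(self) -> None:
        # Expose the service over HTTP (e.g. FastAPI) for remote access.
        ...
```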
### service_definition (abstractmethod, property)

`service_definition: ServiceDefinition`

The service definition.
### as_consumer (abstractmethod)

`as_consumer(remote: bool = False) -> BaseMessageQueueConsumer`

Get the consumer for the message queue.
Source code in llama_deploy/services/base.py
### processing_loop (abstractmethod, async)

`processing_loop() -> None`

The processing loop for the service.
Source code in llama_deploy/services/base.py
### process_message (abstractmethod, async)

`process_message(message: QueueMessage) -> Any`

Process a message.
Source code in llama_deploy/services/base.py
### launch_local (abstractmethod, async)

`launch_local() -> Task`

Launch the service in-process.
Source code in llama_deploy/services/base.py
### launch_server (abstractmethod, async)

`launch_server() -> None`

Launch the service as a server.
Source code in llama_deploy/services/base.py
### register_to_control_plane (async)

`register_to_control_plane(control_plane_url: str) -> None`

Register the service to the control plane.
Source code in llama_deploy/services/base.py
### deregister_from_control_plane (async)

`deregister_from_control_plane(control_plane_url: str) -> None`

Deregister the service from the control plane.
Source code in llama_deploy/services/base.py
### register_to_message_queue (async)

`register_to_message_queue() -> StartConsumingCallable`

Register the service to the message queue.
Source code in llama_deploy/services/base.py
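Taken together, the two registration hooks above are what wire a service into a running deployment. A hedged sketch, assuming a service instance already exists and a control plane is reachable at a placeholder URL:

```python
# Sketch only: the service object, URL, and task handling are placeholders.
import asyncio

control_plane_url = "http://127.0.0.1:8000"

# Announce the service to the control plane so it can receive tasks.
await service.register_to_control_plane(control_plane_url)

# Attach the service's consumer to the message queue; the returned
# StartConsumingCallable is assumed to begin consumption when invoked.
start_consuming = await service.register_to_message_queue()
consume_task = asyncio.create_task(start_consuming())
```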
## AgentService

Bases: BaseService

Agent Service.

A service that runs an agent locally, processing incoming tasks step-wise in an endless loop.

Messages are published to the message queue, and the agent processes them in a loop, finally returning a message with the completed task.

This AgentService can either be run in a local loop or as a FastAPI server.

Exposes the following endpoints:

- GET `/`: Home endpoint.
- POST `/process_message`: Process a message.
- POST `/task`: Create a task.
- GET `/messages`: Get messages.
- POST `/toggle_agent_running`: Toggle the agent running state.
- GET `/is_worker_running`: Check if the agent is running.
- POST `/reset_agent`: Reset the agent.

Since the agent can launch as a FastAPI server, you can visit `/docs` for full Swagger documentation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | | required |
| agent | AgentRunner | | required |
| description | str | | 'Local Agent Service.' |
| prompt | List[ChatMessage] \| None | | None |
| running | bool | | True |
| step_interval | float | | 0.1 |
| host | str | | required |
| port | int | | required |
| raise_exceptions | bool | | False |
Attributes:

| Name | Type | Description |
|---|---|---|
| service_name | str | The name of the service. |
| agent | AgentRunner | The agent to run. |
| description | str | The description of the service. |
| prompt | Optional[List[ChatMessage]] | The prompt messages, meant to be appended to the start of tasks (currently TODO). |
| running | bool | Whether the agent is running. |
| step_interval | float | The interval in seconds to poll for task completion. Defaults to 0.1s. |
| host | Optional[str] | The host to launch a FastAPI server on. |
| port | Optional[int] | The port to launch a FastAPI server on. |
| raise_exceptions | bool | Whether to raise exceptions in the processing loop. |
Examples:

```python
from llama_deploy import AgentService
from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools([...], llm=llm)

agent_service = AgentService(
    agent,
    message_queue,
    service_name="my_agent_service",
    description="My Agent Service",
    host="127.0.0.1",
    port=8003,
)

# launch as a server for remote access or documentation
await agent_service.launch_server()
```
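If remote access isn't needed, the same service can instead be started in-process; per the `launch_local` signature documented below, it hands back an asyncio task (sketch):

```python
# In-process alternative to launch_server(); returns the processing-loop task.
task = await agent_service.launch_local()
```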
Source code in llama_deploy/services/agent.py
### publish_callback (property)

`publish_callback: Optional[PublishCallback]`

The publish callback, if any.
### processing_loop (async)

`processing_loop() -> None`

The processing loop for the agent.
Source code in llama_deploy/services/agent.py
### process_message (async)

`process_message(message: QueueMessage) -> None`

Handling for when a message is received.
Source code in llama_deploy/services/agent.py
### as_consumer

`as_consumer(remote: bool = False) -> BaseMessageQueueConsumer`

Get the consumer for the message queue.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| remote | bool | Whether to get a remote consumer or local. If remote, calls the … | False |
Source code in llama_deploy/services/agent.py
### launch_local (async)

`launch_local() -> Task`

Launch the agent locally.
Source code in llama_deploy/services/agent.py
### lifespan (async)

`lifespan(app: FastAPI) -> AsyncGenerator[None, None]`

Starts the processing loop when the FastAPI app starts.
Source code in llama_deploy/services/agent.py
### home (async)

`home() -> Dict[str, str]`

Home endpoint. Gets general information about the agent service.
Source code in llama_deploy/services/agent.py
### create_task (async)

`create_task(task_definition: TaskDefinition) -> Dict[str, str]`

Create a task.
Source code in llama_deploy/services/agent.py
### get_messages (async)

`get_messages() -> List[_ChatMessage]`

Get messages from the agent.
Source code in llama_deploy/services/agent.py
### toggle_agent_running (async)

`toggle_agent_running(state: Literal['running', 'stopped']) -> Dict[str, bool]`

Toggle the agent running state.
Source code in llama_deploy/services/agent.py
### is_worker_running (async)

`is_worker_running() -> Dict[str, bool]`

Check if the agent is running.
Source code in llama_deploy/services/agent.py
### reset_agent (async)

`reset_agent() -> Dict[str, str]`

Reset the agent.
Source code in llama_deploy/services/agent.py
### launch_server (async)

`launch_server() -> None`

Launch the agent as a FastAPI server.
Source code in llama_deploy/services/agent.py
## HumanService

Bases: BaseService

A human service for providing human-in-the-loop assistance.

When launched locally, it will prompt the user for input, which is blocking!

When launched as a server, it will provide an API for creating and handling tasks.

Exposes the following endpoints:

- GET `/`: Get the service information.
- POST `/process_message`: Process a message.
- POST `/tasks`: Create a task.
- GET `/tasks`: Get all tasks.
- GET `/tasks/{task_id}`: Get a task.
- POST `/tasks/{task_id}/handle`: Handle a task.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | | required |
| description | str | | 'Local Human Service.' |
| running | bool | | True |
| step_interval | float | | 0.1 |
| fn_input | HumanInputFn | | default_human_input_fn |
| human_input_prompt | str | | 'Your assistance is needed. Please respond to the request provided below:\n===\n\n{input_str}\n\n===\n' |
| host | str | | required |
| port | int | | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| service_name | str | The name of the service. |
| description | str | The description of the service. |
| running | bool | Whether the service is running. |
| step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
| host | Optional[str] | The host of the service. |
| port | Optional[int] | The port of the service. |
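The docstring for this class doesn't include an example, but a usage sketch in the spirit of the AgentService example above might look like the following; the `message_queue` keyword and the specific values are assumptions drawn from the other services on this page.

```python
# Hedged sketch: wiring a HumanService the same way as the AgentService
# example above. Constructor keywords mirror the parameters table; the
# message_queue argument is assumed from the BaseService pattern.
from llama_deploy import HumanService, SimpleMessageQueue

message_queue = SimpleMessageQueue()

human_service = HumanService(
    message_queue=message_queue,
    service_name="my_human_service",
    description="Answers questions a human must decide on.",
    host="127.0.0.1",
    port=8004,
)

# serve the task-handling API, or use launch_local() for a blocking prompt
await human_service.launch_server()
```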
Source code in llama_deploy/services/human.py
### publish_callback (property)

`publish_callback: Optional[PublishCallback]`

The publish callback, if any.
### HumanTask

Bases: BaseModel

Container for Tasks to be completed by HumanService.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| task_def | TaskDefinition | | required |
| tool_call | ToolCall \| None | | None |
Source code in llama_deploy/services/human.py
### processing_loop (async)

`processing_loop() -> None`

The processing loop for the service.
Source code in llama_deploy/services/human.py
### process_message (async)

`process_message(message: QueueMessage) -> None`

Process a message received from the message queue.
Source code in llama_deploy/services/human.py
### as_consumer

`as_consumer(remote: bool = False) -> BaseMessageQueueConsumer`

Get the consumer for the service.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/human.py
### launch_local (async)

`launch_local() -> Task`

Launch the service in-process.
Source code in llama_deploy/services/human.py
### lifespan (async)

`lifespan(app: FastAPI) -> AsyncGenerator[None, None]`

Starts the processing loop when the FastAPI app starts.
Source code in llama_deploy/services/human.py
### home (async)

`home() -> Dict[str, str]`

Get general service information.
Source code in llama_deploy/services/human.py
### create_task (async)

`create_task(task: TaskDefinition) -> Dict[str, str]`

Create a task for the human service.
Source code in llama_deploy/services/human.py
### get_tasks (async)

`get_tasks() -> List[TaskDefinition]`

Get all outstanding tasks.
Source code in llama_deploy/services/human.py
### get_task (async)

`get_task(task_id: str) -> Optional[TaskDefinition]`

Get a specific task by ID.
Source code in llama_deploy/services/human.py
### handle_task (async)

`handle_task(task_id: str, result: HumanResponse) -> None`

Handle a task by providing a result.
Source code in llama_deploy/services/human.py
### launch_server (async)

`launch_server() -> None`

Launch the service as a FastAPI server.
Source code in llama_deploy/services/human.py
### validate_human_input_prompt (classmethod)

`validate_human_input_prompt(v: str) -> str`

Check if `input_str` is a prompt key.
Source code in llama_deploy/services/human.py
## ToolService

Bases: BaseService

A service that executes tools remotely for other services.

This service is responsible for executing tools remotely for other services and agents.

Exposes the following endpoints:

- GET `/`: Home endpoint.
- POST `/tool_call`: Create a tool call.
- GET `/tool`: Get a tool by name.
- POST `/process_message`: Process a message.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | | required |
| tools | List[AsyncBaseTool] | | required |
| description | str | | 'Local Tool Service.' |
| running | bool | | True |
| step_interval | float | | 0.1 |
| host | str | | required |
| port | int | | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| tools | List[AsyncBaseTool] | A list of tools to execute. |
| description | str | The description of the tool service. |
| running | bool | Whether the service is running. |
| step_interval | float | The interval in seconds to poll for tool call results. Defaults to 0.1s. |
| host | Optional[str] | The host of the service. |
| port | Optional[int] | The port of the service. |
Examples:

```python
from llama_deploy import ToolService, MetaServiceTool, SimpleMessageQueue
from llama_index.core.llms import OpenAI
from llama_index.core.agent import FunctionCallingAgentWorker

message_queue = SimpleMessageQueue()

tool_service = ToolService(
    message_queue=message_queue,
    tools=[tool],
    running=True,
    step_interval=0.5,
)

# create a meta tool and use it in any other agent
# this allows remote execution of that tool
meta_tool = MetaServiceTool(
    tool_metadata=tool.metadata,
    message_queue=message_queue,
    tool_service_name=tool_service.service_name,
)

agent = FunctionCallingAgentWorker.from_tools(
    [meta_tool],
    llm=OpenAI(),
).as_agent()
```
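The example above only constructs the service; nothing is executed until it is launched. A hedged continuation using the `launch_local` method documented below:

```python
# Continuation of the example above (sketch): run the tool service in-process
# so the meta tool's remote calls are actually picked up and executed.
tool_service_task = await tool_service.launch_local()
```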
Source code in llama_deploy/services/tool.py
### publish_callback (property)

`publish_callback: Optional[PublishCallback]`

The publish callback, if any.
### processing_loop (async)

`processing_loop() -> None`

The processing loop for the service.
Source code in llama_deploy/services/tool.py
### process_message (async)

`process_message(message: QueueMessage) -> None`

Process a message.
Source code in llama_deploy/services/tool.py
### as_consumer

`as_consumer(remote: bool = False) -> BaseMessageQueueConsumer`

Get the consumer for the service.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| remote | bool | Whether the consumer is remote. Defaults to False. If True, the consumer will be a RemoteMessageConsumer that uses the … | False |
Source code in llama_deploy/services/tool.py
### launch_local (async)

`launch_local() -> Task`

Launch the service in-process.
Source code in llama_deploy/services/tool.py
### lifespan (async)

`lifespan(app: FastAPI) -> AsyncGenerator[None, None]`

Starts the processing loop when the FastAPI app starts.
Source code in llama_deploy/services/tool.py
### home (async)

`home() -> Dict[str, str]`

Home endpoint. Returns the general information about the service.
Source code in llama_deploy/services/tool.py
### create_tool_call (async)

`create_tool_call(tool_call: ToolCall) -> Dict[str, str]`

Create a tool call.
Source code in llama_deploy/services/tool.py
### get_tool_by_name (async)
get_tool_by_name(name: str) -> Dict<