
Create an ASI-1 Compatible Agent

Introduction

ASI-1 is an LLM created by Fetch.ai. Unlike other LLMs, ASI-1 connects to Agents which act as domain experts, allowing ASI-1 to answer specialist questions, make reservations, and become an access point to an “organic” multi-Agent ecosystem.

This guide covers the preliminary step of getting your Agents onto ASI-1: bringing your Agent online, keeping it active, and using the chat protocol so that you can communicate with your Agent through chat.agentverse.ai.

Why be part of the knowledge base?

By building Agents that connect to ASI-1, we not only extend the LLM’s knowledge base but also create new opportunities for monetization. By building and integrating these Agents, you can earn revenue* based on your Agent’s usage while enhancing the capabilities of the LLM. This creates a win-win situation: the LLM becomes smarter, and developers can profit from their contributions, all while being part of an innovative ecosystem that values and rewards their expertise.

Alrighty, let’s get started!

Getting started

The Agent

Let’s start by setting up the Agent on Agentverse.

Copy the following code into the Agent Editor Build tab:

agent.py
from datetime import datetime
from uuid import uuid4

from openai import OpenAI
from uagents import Context, Protocol, Agent
from uagents_core.contrib.protocols.chat import (
    ChatAcknowledgement,
    ChatMessage,
    EndSessionContent,
    TextContent,
    chat_protocol_spec,
)

### Example Expert Assistant
##
## This chat example is a barebones example of how you can create a simple chat agent
## and connect to agentverse. In this example we will be prompting the ASI-1 model to
## answer questions on a specific subject only. This acts as a simple placeholder for
## a more complete agentic system.

# the subject that this assistant is an expert in
subject_matter = "the sun"

client = OpenAI(
    # By default, we are using the ASI-1 LLM endpoint and model
    base_url='https://api.asi1.ai/v1',
    # You can get an ASI-1 api key by creating an account at https://asi1.ai/dashboard/api-keys
    api_key='your_api_key',
)

agent = Agent()

# We create a new protocol which is compatible with the chat protocol spec. This ensures
# compatibility between agents
protocol = Protocol(spec=chat_protocol_spec)


# We define the handler for the chat messages that are sent to your agent
@protocol.on_message(ChatMessage)
async def handle_message(ctx: Context, sender: str, msg: ChatMessage):
    # send the acknowledgement for receiving the message
    await ctx.send(
        sender,
        ChatAcknowledgement(timestamp=datetime.now(), acknowledged_msg_id=msg.msg_id),
    )

    # collect up all the text chunks
    text = ''
    for item in msg.content:
        if isinstance(item, TextContent):
            text += item.text

    # query the model based on the user question
    response = 'I am afraid something went wrong and I am unable to answer your question at the moment'
    try:
        r = client.chat.completions.create(
            model="asi1-mini",
            messages=[
                {"role": "system", "content": f"""
                You are a helpful assistant who only answers questions about {subject_matter}.
                If the user asks about any other topics, you should politely say that you do
                not know about them.
                """},
                {"role": "user", "content": text},
            ],
            max_tokens=2048,
        )
        response = str(r.choices[0].message.content)
    except Exception:
        ctx.logger.exception('Error querying model')

    # send the response back to the user
    await ctx.send(sender, ChatMessage(
        timestamp=datetime.utcnow(),
        msg_id=uuid4(),
        content=[
            # we send the contents back in the chat message
            TextContent(type="text", text=response),
            # we also signal that the session is over, this also informs the user that we
            # are not recording any of the previous history of messages
            EndSessionContent(type="end-session"),
        ]
    ))


@protocol.on_message(ChatAcknowledgement)
async def handle_ack(ctx: Context, sender: str, msg: ChatAcknowledgement):
    # we are not interested in the acknowledgements for this example, but they can be
    # useful to implement read receipts, for example
    pass


# attach the protocol to the agent
agent.include(protocol, publish_manifest=True)

You should have something similar to the following:

Now, head over to the ASI-1 docs, create an API key, and add it within the dedicated field.
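Rather than hardcoding the key in agent.py, you can read it from wherever your setup exposes secrets. A minimal sketch, assuming the key is available as an environment variable (the name ASI1_API_KEY is just an example):

import os

from openai import OpenAI

client = OpenAI(
    base_url='https://api.asi1.ai/v1',
    # ASI1_API_KEY is an example name -- store the key wherever your
    # environment exposes secrets as environment variables
    api_key=os.environ["ASI1_API_KEY"],
)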

Once you do so, you will be able to start your Agent successfully. It will register in the Almanac and be accessible for queries.
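The example above targets a Hosted Agent, which is why Agent() takes no arguments. If you would rather run the same code locally, a minimal sketch (the name, seed, and port below are placeholder values, and mailbox support is assumed to be available in your uagents version):

from uagents import Agent

agent = Agent(
    name="sun-expert",                       # placeholder name
    seed="replace with your own secret seed phrase",
    port=8001,                               # placeholder local port
    mailbox=True,                            # route messages via the Agentverse mailbox
)

# ... define and include the protocol exactly as in agent.py ...
agent.include(protocol, publish_manifest=True)

if __name__ == "__main__":
    agent.run()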

Then, head over to ASI-1 Chat. You will need to get in contact with the Agent we defined above. It is important that you provide detailed information about the Agent’s area of expertise within the README file, to improve the Agent’s discoverability across the Network and ensure that queries matching your Agent’s subject of interest are redirected to it.
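For example, a README along the following lines (purely illustrative) gives the Network enough signal to route sun-related queries to your Agent:

Sun Expert Agent

This Agent answers questions about the sun: its structure, lifecycle,
solar activity, and related astronomy facts. Send it a plain-text chat
message with your question; off-topic queries are politely declined.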

Considering this example, our Agent specializes in the sun and related facts. Thus, let’s type: “Hi, can you connect me to an agent that specialises in the sun?”. Remember to click on the Agents toggle to retrieve any Agents related to your query.

You will see some reasoning happening. The LLM will then provide you with a list of the most suitable Agents capable of answering queries based on their area of expertise. You should be able to see our Agent appearing in the results:

Click the Chat with Agent button. You will be redirected to chat.agentverse.ai. Here, you can start a conversation with your newly created Hosted Agent. Provide a query related to the Agent’s subject of expertise:

On your Agent’s terminal, you will see that the Agent has correctly received the Envelope with the query, processed it, and then sent an Envelope with the related answer back to the sender. You should see something similar to the following in the Agentverse terminal window:

You can check the answer to your query on chat.agentverse.ai:

Next steps

This is a simple example of a question-and-answer chatbot and is perfect for extending into useful services. chat.agentverse.ai is the first step in getting your Agents onto ASI-1; keep an eye on our blog for the future release date. Additionally, remember to check out the dedicated ASI-1 documentation for additional information on the topic, which is available here: ASI-1 docs.

What can you build with a dynamic chat protocol and an LLM?
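As one rough illustration, you could enrich the system prompt with live data before querying the LLM, turning the chatbot into a real service. This is a hedged sketch that slots into the handle_message handler from agent.py (it reuses client, subject_matter, and text from there); the live_context helper and what it returns are placeholders for a real lookup:

from datetime import datetime

def live_context() -> str:
    # placeholder for a real lookup: a database, a REST API, a sensor feed, ...
    return f"The current UTC time is {datetime.utcnow().isoformat()}."

# inside handle_message, the query to the model could then become:
r = client.chat.completions.create(
    model="asi1-mini",
    messages=[
        {"role": "system", "content": (
            f"You are a helpful assistant who only answers questions about {subject_matter}. "
            f"Useful context for your answers: {live_context()}"
        )},
        {"role": "user", "content": text},
    ],
    max_tokens=2048,
)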

For any additional questions, the Team is waiting for you on our Discord and Telegram channels.

* payments are planned to be released Q3 2025.
