Building your first application

This guide will get you up and running with the Chariot API. We'll cover how to create an application, add a source, and start a new conversation.

Installing the SDK

You can install our Node SDK with npm:

npm install chariotai

Our SDKs for Python and .NET are coming soon.

1. Create an application

To get started using Chariot, you need to create an application. You can do this from the dashboard, or by making a request to the API:

import { Chariot } from 'chariotai';

// Initialize the client with your API key
const chariot = new Chariot(process.env.CHARIOT_API_KEY);

// Create an application backed by the gpt-3.5-turbo model
const application = await chariot.createApplication({
  name: 'Hello world',
  model: 'gpt-3.5-turbo'
});

console.log(application);

This will return the newly created application, which includes the application id:

Response

{
  "id": "app_YmM3NGQ3",
  "name": "Hello world",
  "model": "gpt-3.5-turbo",
  "system_message": "You are a helpful assistant. Please help the user with their task and answer questions."
}

You can update your application's system message and language model at any time using the applications endpoint.
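For example, updating the system message might look like the sketch below. The updateApplication method name and its fields are assumptions standing in for the applications endpoint, so check the API reference for the exact interface:

// Hypothetical sketch: the method name and fields are assumptions based on
// the applications endpoint; consult the API reference for the exact interface.
const updated = await chariot.updateApplication('app_YmM3NGQ3', {
  system_message: 'You are a friendly travel guide for Blue Bay island.'
});

console.log(updated);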

2. Add a source

Sources represent the data that the language model can access during a conversation. A source can be raw text, a file (e.g. PDF, Word, PowerPoint), or a website URL. See sources for more information.

You can add sources from the dashboard, or by making a request to the API. For this example, let's add some GPT-written information about an imaginary island that we can then ask questions about:

const source = await chariot.createSource({
  name: 'Imaginary island',
  type: 'text',
  content: `Blue Bay, an imaginary 533-acre island, is home to 2013 residents, 
    with a unique weather pattern that brings snow only on Tuesdays. Its 
    botanical richness flourishes under purple rain showers, while a 
    radiant rainbow reef teems with colorful fish. Once a year, the island 
    hosts the Firefly Festival in May.`,
  application_id: '{YOUR_APPLICATION_ID}',
});

console.log(source);
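Sources aren't limited to raw text. A file or website source follows the same pattern; for instance, a URL source might look like the sketch below (the url type and field shown here are assumptions, so see the sources reference for the exact schema):

// Illustrative sketch only: the 'url' type and field names for non-text
// sources are assumptions; check the sources reference for the exact schema.
const websiteSource = await chariot.createSource({
  name: 'Blue Bay travel blog',
  type: 'url',
  url: 'https://example.com/blue-bay',
  application_id: '{YOUR_APPLICATION_ID}',
});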

Chariot automatically generates the embeddings for each source and stores them in a vector database.

You can check the embed status of your source by polling the sources endpoint using the source id. The response will include the source's embed_status:

Response

{
  "id": "src_M3NGQ3yQ",
  "embed_status": "PENDING"
}

Once the embed_status equals SUCCESS, the source is ready to be used in a conversation. This can take a few minutes depending on the size of the source.
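If you want to block until the source is ready, a simple polling loop works. The getSource call below is an assumption standing in for the sources endpoint, and statuses other than PENDING and SUCCESS may exist, so treat this as a starting point:

// Hypothetical polling sketch: 'getSource' stands in for the sources endpoint;
// the actual SDK method name may differ.
async function waitForEmbed(sourceId) {
  let source = await chariot.getSource(sourceId);
  while (source.embed_status !== 'SUCCESS') {
    // Wait a few seconds between checks to avoid hammering the API
    await new Promise((resolve) => setTimeout(resolve, 5000));
    source = await chariot.getSource(sourceId);
  }
  return source;
}

await waitForEmbed('src_M3NGQ3yQ');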

3. Start a conversation

Now that we have an application and a source, we can start a conversation and ask questions about the imaginary island.

You can start a conversation from the dashboard, or by making a request to the API:

// Send the first message in a new conversation
const conversation = {
  application_id: '{YOUR_APPLICATION_ID}',
  message: 'How big is Blue Bay island?'
};

const response = await chariot.createOrContinueConversation(conversation);

console.log(response);

Chariot automatically retrieves relevant sources for your application and uses them to get a response from the language model:

Response

{
  "id": "conv_OTkxYzQ5",
  "title": "Blue Bay Island",
  "message": "Blue Bay island is 533 acres.",
  "sources": ["Imaginary island"]
}

As you can see in this example, GPT-3.5 Turbo is able to answer our question about the imaginary island's size. Following this same pattern, you can leverage Chariot to quickly add AI-powered features to your own applications.
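To continue the same conversation, send another message that references it. Passing a conversation_id is an assumption based on the createOrContinueConversation method name, so verify the exact field in the conversations reference:

// Follow-up message sketch: the 'conversation_id' field is an assumption
// based on the method name; verify it in the conversations reference.
const followUp = await chariot.createOrContinueConversation({
  application_id: '{YOUR_APPLICATION_ID}',
  conversation_id: 'conv_OTkxYzQ5',
  message: 'When does the Firefly Festival take place?'
});

console.log(followUp.message);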

What's next?

Now that you're up and running with the Chariot API, you can start building applications that pair language models with your own data.