Devii
Purpose and Vision:
This project's goal is to allow any person to talk to any relational database. For example, if connected to a meal planning application, a user could say 'Create a meal plan for next week with at least two meatless meals' and our engine would create the database mutations necessary to get that done for the user. Our engine has access to the user's specific schema at runtime, so it knows whether it needs to ask follow-up questions or whether it can execute the user's command directly.
To accomplish this, we use Devii's schema generation, API, and security engine so every database has a consistent structure. This means you will need to set up a Devii account to use this API.
Quick Start
Authenticate to your database via https://api.devii.io/auth
Use your access token to send a POST request to https://api.nlapi.io/nlapi
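A minimal sketch of these two steps in Python. The Devii auth payload fields shown here (login, password, tenantid) and the Bearer scheme are assumptions for illustration; use the credentials your Devii account expects.

```python
import requests

# Step 1: authenticate against Devii to obtain an access token.
# The login/password/tenantid field names are illustrative assumptions.
auth = requests.post(
    "https://api.devii.io/auth",
    json={"login": "my_user", "password": "my_password", "tenantid": 1234},
)
access_token = auth.json()["access_token"]

# Step 2: send a natural language request to the NLAPI with that token.
# Bearer scheme assumed; the access token goes in the Authorization header.
response = requests.post(
    "https://api.nlapi.io/nlapi",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"user_input": "Create a meal plan for next week with at least two meatless meals"},
)
thread = response.json()
print(thread["messages"][0]["content"])  # latest bot reply
```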
Documentation
Key Concepts
Messages
Return Messages: The API returns the latest message in the messages array of the returned thread object. The latest message will always be at messages[0], and the array is ordered by creation time in descending order (latest -> oldest).
Message Object: Each message object returned will have the following keys:
content: The natural language input from the user (human) or the natural language output from our models (bot).
speaker: Identifies the message author. Currently this will always be bot or human.
created_at: Timestamp of when the message was created.
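As a quick illustration, assuming thread is the parsed JSON body returned by a POST to /nlapi (as in the Quick Start sketch above), the latest reply can be read like this:

```python
# `thread` is assumed to be the parsed JSON body of an /nlapi response.
latest = thread["messages"][0]   # newest message first
print(latest["speaker"])         # "bot" or "human"
print(latest["content"])         # the natural language text
print(latest["created_at"])      # when the message was created
```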
Threads
Threads are simply conversations made up of messages.
Using Threads: Sometimes a user will try to interact with a model but not provide the information required for a valid database mutation. When this happens, the NLAPI will respond in natural language with a message indicating what information it needs to complete the user's request. The developer should then pass the thread_id key in the next request to follow up on the conversation. The NLAPI uses the full context of the thread to complete the request, so the user does not have to repeat information already mentioned in the thread.
New Threads: If no thread_id is provided in the request, a new thread is created and the NLAPI has no access to previous messages.
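Continuing the Quick Start sketch, a follow-up request that reuses the thread might look like this (thread and access_token come from the earlier example):

```python
# Pass the thread_id from the previous response so the NLAPI keeps the
# earlier context and the user does not have to repeat themselves.
followup = requests.post(
    "https://api.nlapi.io/nlapi",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "user_input": "Make one of those meatless meals a pasta dish",
        "thread_id": thread["thread_id"],
    },
)
print(followup.json()["messages"][0]["content"])
```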
Thread Expiration: [in development] After a thread expires, a user cannot add additional messages to the thread. Expiring threads is a security feature. If we did not expire threads, and a user had access to something early in the thread but not later, the NLAPI might still assume that access and hallucinate bad database interactions. With proper Role Based Access Control policies your user will still not be able to perform any action they are not allowed to, but longer threads could lead to a poor user experience and more hallucinations. We may change this in the future.
Thread Object: The thread object is what is returned from every /nlapi request. Each thread object returned will have the following keys:
thread_id: Used to keep track of threads so a user can follow up on a conversation.
run_id: Used for providing feedback on individual NLAPI responses.
messages: The array of message objects (see Message Object for more details).
created_at: [in development] The timestamp the thread was created.
expires_at: [in development] The timestamp when the thread expires. After this, no more messages will be accepted and the NLAPI will return the thread object with the last message's content set to: 'Error: Thread has Expired, please start a new thread.'
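Purely for illustration, a returned thread object might look roughly like the sketch below. All values are made up, the exact types are not guaranteed, and created_at and expires_at are still in development.

```python
# Illustrative shape of a thread object returned by /nlapi; values are fabricated.
example_thread = {
    "thread_id": 104,
    "run_id": "run_abc123",
    "messages": [
        {
            "speaker": "bot",
            "content": "I've added two meatless meals to next week's plan.",
            "created_at": "2025-01-15T10:30:05Z",
        },
        {
            "speaker": "human",
            "content": "Create a meal plan for next week with at least two meatless meals",
            "created_at": "2025-01-15T10:30:00Z",
        },
    ],
    "created_at": "2025-01-15T10:30:00Z",  # [in development]
    "expires_at": "2025-01-15T11:30:00Z",  # [in development]
}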
Context
Context is information the developer knows about the user or the current location of the request that the user would not think to mention and the NLAPI would not otherwise know. For example, in a project management application, suppose a user is on a project dashboard page and sends the payload {'user_input': 'add a task to this project called make documentation for feature x'}. The NLAPI would not know which project the user is referencing. But because the user is on the project page, the developer could add the context key context: ["user is viewing project with id 71"] to the payload, and the NLAPI would understand that the user likely wants to add a task with a project_id of 71.
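A sketch of that request with the context key included, reusing the access_token from the Quick Start example (the exact context format is still subject to change, as noted below):

```python
# Developer-known context is passed alongside the user's natural language input.
payload = {
    "user_input": "add a task to this project called make documentation for feature x",
    "context": ["user is viewing project with id 71"],
}
response = requests.post(
    "https://api.nlapi.io/nlapi",
    headers={"Authorization": f"Bearer {access_token}"},
    json=payload,
)
```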
How context is implemented is still being shaped and is subject to change. Please send us your feedback on how you'd like this to work. Email jase@jasekraft.com.
Streaming
To request a streamed response, you must include the "options" key, which contains an object with the key "stream" set to true. A sample payload might look like:
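```python
# Sketch of a streaming request payload: options.stream = true enables streaming.
payload = {
    "user_input": "Plan three dinners for this weekend",
    "options": {"stream": True},
}
```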
The streamed response is returned as a series of events. The event field describes the type of event being sent, and the data field contains a JSON string with information corresponding to that event type.
Streaming Events
status_message
Status messages relay information about the steps the NLAPI is taking behind the scenes, whether that be initial processing of the request, making queries, etc.
message_chunk
Message chunks contain JSON strings with "content" denoting the current message token and "thread_id" containing the ID of the current conversation.
close
The close event is the last event in the response. In addition to "content" and "thread_id", close events also contain a "run_id".
error
If the NLAPI encountered an error while processing the request, it will send the error message in an error event.
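A rough sketch of consuming the stream in Python, reusing access_token from the Quick Start sketch and assuming the events arrive as Server-Sent-Events-style event: and data: lines (this framing is an assumption; adapt it to the wire format you actually receive):

```python
import json
import requests

# Stream a response and handle each event type; SSE-style framing assumed.
with requests.post(
    "https://api.nlapi.io/nlapi",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"user_input": "Plan three dinners for this weekend", "options": {"stream": True}},
    stream=True,
) as response:
    event_type = None
    for line in response.iter_lines(decode_unicode=True):
        if not line:
            continue
        if line.startswith("event:"):
            event_type = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            data = json.loads(line.split(":", 1)[1].strip())
            if event_type == "status_message":
                print(f"[status] {data}")
            elif event_type == "message_chunk":
                print(data["content"], end="", flush=True)  # append each token
            elif event_type == "close":
                print()
                run_id = data["run_id"]                     # keep for /feedback
            elif event_type == "error":
                print("Error:", data)
```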
Feedback
To continue improving our models, we optionally allow users to give feedback on responses. We currently use this data internally for ongoing model refinement. However, if you'd prefer not to participate in this refinement, simply reach out and we can discuss options. Coming In Some Amount of Time: with enterprise installations of this software, the enterprise's exclusive model will continue to learn from the data in this feedback loop.
Developers can send POST requests to /feedback with the feedback object to provide feedback and help improve our models. Users can only provide feedback on responses from threads they are logged into. As with requests made to /nlapi, you must include your access_token in the Authorization header of your request.
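The exact shape of the feedback object isn't shown above, so treat the following as a hedged sketch: run_id comes from the thread object, while the rating and comment field names are hypothetical placeholders.

```python
# Hypothetical feedback payload: run_id is real (from the thread object),
# but the "rating" and "comment" field names are illustrative assumptions.
feedback_payload = {
    "run_id": run_id,
    "rating": 1,
    "comment": "The meal plan matched what I asked for.",
}
requests.post(
    "https://api.nlapi.io/feedback",   # base URL assumed to match /nlapi
    headers={"Authorization": f"Bearer {access_token}"},
    json=feedback_payload,
)
```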
Feedback Object