Elastic D&D - Update 12 - Veverbot - Asking Questions and Receiving Answers
Joe
Posted on November 17, 2023
In the last post we talked about Veverbot and data vectorization. If you missed it, you can check that out here!
NOTE:
The first bit of this post will be similar to the last post. If you are caught up, you can skip ahead.
Veverbot
Veverbot is my own custom AI assistant that aims to help players get quick answers about things that happened during their campaign so far. This is absolutely a work-in-progress, but even the first iteration of him is very cool.
We have already talked about the logging process, so today I will be talking about what needs to be done to ask questions and receive answers from Veverbot.
Elastic Configuration
To refresh your memory, I want to provide the Elastic templates in place for this data. Currently, I am using two templates: one for the "dnd-notes-*" indices, and another for an index named "virtual_dm-questions_answers". The second index contains the questions that players ask Veverbot, as well as the responses that Veverbot provides back to the players.
dnd-notes-* component template
{
  "name": "dnd-notes",
  "component_template": {
    "template": {
      "mappings": {
        "properties": {
          "@timestamp": {
            "format": "strict_date_optional_time",
            "type": "date"
          },
          "session": {
            "type": "long"
          },
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "finished": {
            "type": "boolean"
          },
          "message": {
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "message_vector": {
            "dims": 1536,
            "similarity": "cosine",
            "index": "true",
            "type": "dense_vector"
          }
        }
      }
    }
  }
}
virtual_dm-questions_answers component template
{
  "name": "virtual_dm-questions_answers",
  "component_template": {
    "template": {
      "mappings": {
        "properties": {
          "question_vector": {
            "dims": 1536,
            "similarity": "cosine",
            "index": "true",
            "type": "dense_vector"
          },
          "answer": {
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "question": {
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "answer_vector": {
            "dims": 1536,
            "similarity": "cosine",
            "index": "true",
            "type": "dense_vector"
          }
        }
      }
    }
  }
}
NOTE:
The mappings and templates are automatically created via the docker-compose file! This is simply educational; a user will not have to deal with creating any of this.
Asking Questions
I showed the code for this page of the Streamlit app here. Definitely go check that out.
Asking Veverbot a question is fairly straightforward with the chat window implementation -- just type a question into the chat bar!
From there, the question is stored in a variable, then vectorized via FastAPI (see this post), and the resulting vector is stored in another variable.
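Here is a rough sketch of that flow. The FastAPI endpoint name, port, and response shape below are placeholders, not necessarily the project's actual routes.

# Sketch only: /vectorize is a stand-in for the FastAPI embedding endpoint.
import requests
import streamlit as st

question = st.chat_input("Ask Veverbot a question")

if question:
    # Echo the question into the chat window
    st.chat_message("user").write(question)

    # Send the raw question to FastAPI and get back a 1536-dimension embedding
    resp = requests.post(
        "http://localhost:8000/vectorize",
        json={"text": question},
        timeout=30,
    )
    question_vector = resp.json()["vector"]  # list of 1536 floats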
Receiving Answers
To receive an answer, an Elasticsearch kNN query is run with the vectorized question.
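A minimal sketch of that lookup with the Python Elasticsearch client, using the index and field names from the templates above; the connection details, k, and num_candidates values are just illustrative.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

# kNN search against the note vectors, reusing question_vector from the previous step
knn_results = es.search(
    index="dnd-notes-*",
    knn={
        "field": "message_vector",
        "query_vector": question_vector,
        "k": 5,
        "num_candidates": 50,
    },
    source=["message", "session", "name"],
)

# Pull the matching note text out to use as context for OpenAI
context = [hit["_source"]["message"] for hit in knn_results["hits"]["hits"]]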
Both the question and the query results are then sent to OpenAI via FastAPI (see the link above) to formulate a coherent response, which is returned to the chat window below the question.
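Conceptually, the OpenAI side of that call looks something like this; the model choice and prompt wording here are mine, not necessarily what the project uses.

from openai import OpenAI
import streamlit as st

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Reuses question and context from the previous steps
notes = "\n".join(context)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer the player's question using only the provided campaign notes."},
        {"role": "user", "content": f"Notes:\n{notes}\n\nQuestion: {question}"},
    ],
)
answer = completion.choices[0].message.content

# Show Veverbot's answer below the question in the chat window
st.chat_message("assistant").write(answer)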
The question and the response from OpenAI are also stored in an Elastic index for a later use that has yet to be determined.
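Something along these lines handles that write, assuming the answer goes through the same vectorization endpoint so both vectors match the virtual_dm-questions_answers mapping above.

# Vectorize the answer the same way the question was (placeholder endpoint)
answer_vector = requests.post(
    "http://localhost:8000/vectorize",
    json={"text": answer},
    timeout=30,
).json()["vector"]

# Store the exchange for whatever that later use turns out to be
es.index(
    index="virtual_dm-questions_answers",
    document={
        "question": question,
        "question_vector": question_vector,
        "answer": answer,
        "answer_vector": answer_vector,
    },
)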
Closing Remarks
Full disclosure -- our D&D group hasn't played in a few weeks, and I haven't put as much effort into the project in that time. All that to say, I have no clue what I will talk about next week. Maybe keeping up with the blog will motivate me to dedicate a few hours each week to this; only time will tell.
Check out the GitHub repo below. You can also find my Twitch account in the socials link, where I will be actively working on this during the week while interacting with whoever is hanging out!
Happy Coding,
Joe