The ChatGPT component is great; I’d been looking to do this another way, but I’ve been able to spin up a test version really quickly on Moxly!
Am I right in thinking that the start instruction is the system prompt? If not, is there a way to change the system prompt? Also, is it possible to pull the system prompt from the database instead of entering it manually, so users could select different options that lead to the same screen but get a different chat?
Finally, is there any way that I can set the temperature?
You can customize the chat instructions as you need, including links to your documentation, website, etc.
Example instructions:
Name: Jessica
Character: AI Assisted Doctor
Introduction: Hey, my name is Jessica and I am a Doctor. How can I help you today?
Prompt: I want you to act as an AI assisted doctor. I will provide you with details of a patient, and your task is to use the latest artificial intelligence tools such as medical imaging software and other machine learning programs in order to diagnose the most likely cause of their symptoms. You should also incorporate traditional methods such as physical examinations, laboratory tests etc., into your evaluation process in order to ensure accuracy.
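For context, here is a minimal sketch of how instruction fields like these typically map onto the Chat Completions message roles, with the Prompt placed in the system role so it persists for the whole conversation. The `build_messages` helper and the exact mapping are my assumptions for illustration, not the Moxly component's actual implementation.

```python
# Hypothetical sketch: mapping "Prompt" / "Introduction" fields onto the
# Chat Completions message format. build_messages is an illustrative helper,
# not part of the Moxly component.

def build_messages(prompt: str, introduction: str, user_text: str) -> list:
    """Put the configured Prompt in the system role (so it applies to the
    whole conversation) and show the Introduction as the assistant's
    opening message."""
    return [
        {"role": "system", "content": prompt},
        {"role": "assistant", "content": introduction},
        {"role": "user", "content": user_text},
    ]

messages = build_messages(
    prompt="I want you to act as an AI assisted doctor.",
    introduction="Hey, my name is Jessica and I am a Doctor.",
    user_text="I have a persistent headache.",
)
print(messages[0]["role"])  # system
```

If instructions are instead sent as the first user message, they can drift out of effect as the conversation grows, which is why the system-role question above matters.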
This is how you teach your bot. In the next release, we will add several ready-made instruction templates for bots.
You can create as many different bots as you want, each with its own skills and profession. On one screen you can add one chat-bot component; create several screens, make a menu page, and add navigation to go to the desired screen (it all depends on your idea).
No. You can specify a limit on the number of tokens (characters) per request in the instructions; see the documentation on using the ChatGPT models.
I’ve built apps on ChatGPT before. You can send temperature settings to the API. I can share the API call I made in PHP to change the temperature setting.
I just wondered if this was possible within the Moxly component, and whether the instructions change the system prompt or are just the first user prompt. This matters if the conversation goes on for a while. I also wondered if there is any way for me to set the instructions in the database and pull them from there.
Thanks again. I think I’m not making my points clear, I do apologise. I converted my Web apps which wrote content to ChatGPT when the model came out, so I do understand the differences between Davinci and Turbo/GPT4.
I don’t want to fine-tune the model by training it; I just want to set the temperature.
I’ve shared a screenshot from the playground. I usually set the temperature around 0.8–0.9, and I usually increase the max length to around 1,000. This doesn’t get set by the system prompt; you need to set it in the API call, or it’s left at the default.
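To make the point concrete, here is a sketch of a Chat Completions request body with `temperature` and `max_tokens` set explicitly; if they are omitted, the API falls back to its defaults. The API key and the actual HTTP call are left out, and the message contents are placeholders.

```python
import json

# Sketch: a Chat Completions request body with temperature and max_tokens
# set explicitly. These are top-level request parameters, not something the
# system prompt can control.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short product blurb."},
    ],
    "temperature": 0.85,  # in the 0.8-0.9 range mentioned above
    "max_tokens": 1000,   # cap the length of the reply
}
body = json.dumps(payload)
# POST this body to https://api.openai.com/v1/chat/completions with an
# "Authorization: Bearer <key>" header (e.g. via urllib.request or requests).
print(body[:40])
```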
And I might not be explaining my use case properly. I want people to be able to choose between different chats, each with a different set of instructions. Rather than having multiple chat screens, I’d prefer one chat screen that loads its instructions from the database depending on what the user has selected.
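That use case can be sketched as follows: a single screen looks up the system prompt for whichever profile the user picked. This is an illustrative sketch using an in-memory SQLite table; the table and column names (`chat_profiles`, `system_prompt`) are hypothetical, not anything Moxly provides.

```python
import sqlite3

# Illustrative sketch: one chat screen whose system prompt is loaded from a
# database row chosen by the user. Table/column names are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chat_profiles (id TEXT PRIMARY KEY, system_prompt TEXT)")
db.executemany(
    "INSERT INTO chat_profiles VALUES (?, ?)",
    [
        ("doctor", "You are Jessica, an AI assisted doctor."),
        ("lawyer", "You are a cautious legal assistant."),
    ],
)

def load_prompt(profile_id: str) -> str:
    """Fetch the system prompt for the profile the user selected."""
    row = db.execute(
        "SELECT system_prompt FROM chat_profiles WHERE id = ?", (profile_id,)
    ).fetchone()
    return row[0]

# The same screen reuses whichever prompt the user selected:
selected = "doctor"
messages = [{"role": "system", "content": load_prompt(selected)}]
print(messages[0]["content"])
```

The point is that the screen itself never changes; only the `system_prompt` row it loads does.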
No. As of Mar 1, 2023, you can only fine-tune base GPT-3 models. See the fine-tuning guide for more details on how to use fine-tuned models.
We will add temperature settings, but I am not sure that they will work in the future
As for switching between chats, you need to make new screens anyway; you can’t do everything in one screen. For the user it will just be like choosing a dialog.
Thank you so much for doing this so quickly. This is 100% the best low/no code mobile app builder out there!
I’m sure you’re going to think that I’m being cheeky here; are you able to also add a maximum length setting in there? I’m not sure what it’s currently set at, but if I could change it, that would be amazing.
Sorry to request something else, especially as it’s not in the feature request forum. But is it possible to allow the output to be returned with the line breaks as well? It’s all coming through as one paragraph even when it’s clearly returning multiple paragraphs.
He wants the line breaks in the API response to translate into paragraph endings, so a separation occurs before the next paragraph after each line break.
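One common way to do that is to split the raw response text on blank lines and wrap each piece in a paragraph tag before rendering. This is a minimal sketch of that formatting step, assuming the response arrives as plain text with `\n\n` between paragraphs; it is not how the Moxly component currently renders output.

```python
import html

# Sketch: turn raw API text (paragraphs separated by "\n\n") into HTML
# paragraphs so the breaks survive rendering instead of collapsing into
# one block of text.
def to_paragraphs(text: str) -> str:
    parts = [p.strip() for p in text.split("\n\n") if p.strip()]
    return "".join(f"<p>{html.escape(p)}</p>" for p in parts)

raw = "First paragraph.\n\nSecond paragraph."
print(to_paragraphs(raw))  # <p>First paragraph.</p><p>Second paragraph.</p>
```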
I do have a question relating to this, and it would be the ‘awesome sauce’ if you guys do it. Currently the 3.5-turbo and 4 models are limited to roughly 4,096 tokens, which hinders the amount of data you can use in the API. Recently, companies like Pinecone.io have created ways to store large amounts of data for LLMs like ChatGPT. Will we ever see this implemented, so we can upload files or large databases to Pinecone and use its API directly in nwicode to interact with ChatGPT?
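For anyone unfamiliar with the pattern being requested: this is retrieval-augmented generation, where relevant chunks are fetched from a vector store and only those chunks are put into the prompt, so the full dataset never has to fit in the token window. The sketch below stands in a toy in-memory store with trivial bag-of-words vectors for Pinecone; a real setup would use an embedding API and Pinecone's query operations instead.

```python
import math

# Toy retrieval-augmented sketch. The in-memory "index" stands in for a
# vector database like Pinecone; embed() uses bag-of-words counts purely
# to illustrate the retrieval step, not real embeddings.

def embed(text: str) -> dict:
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy: refunds within 30 days",
    "shipping policy: ships worldwide in 5 days",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str) -> str:
    """Return the stored chunk most similar to the query."""
    qv = embed(query)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]

context = retrieve("how do refunds work")
# The retrieved chunk would then be prepended to the ChatGPT prompt, so the
# model answers from data that never has to fit in the token window at once.
print(context)
```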
We have added a simple component for ChatGPT; later we will add the ability to store history. I’m not sure it makes sense to develop it further, since you can use the low-code editor to implement all your ideas.