
Unanswered Questions

126 questions with no upvoted or accepted answers
6 votes
0 answers
536 views

I can't get my custom GPT to accept an API key - "error saving draft"

I'm trying to build a custom GPT with an action, so that the custom GPT can access an API if given proper instructions. I'm trying to get it to connect to Plausible. I have built some simple code as a ...
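A quick way to separate a builder-side "error saving draft" from a credential problem is to confirm the key works against Plausible directly. A minimal sketch, assuming the Stats API v1 aggregate endpoint and a PLAUSIBLE_API_KEY environment variable; check Plausible's current API docs for the exact parameters.

```python
# Minimal sketch: verify a Plausible Stats API key outside the GPT builder.
# The endpoint, site_id, and metrics below are assumptions; adjust to your setup.
import os
import requests

API_KEY = os.environ["PLAUSIBLE_API_KEY"]   # hypothetical env var name
SITE_ID = "example.com"                      # placeholder site

resp = requests.get(
    "https://plausible.io/api/v1/stats/aggregate",
    params={"site_id": SITE_ID, "metrics": "visitors"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```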
5 votes
1 answer
255 views

How to fine-tune an LLM on a very small document

I am currently trying to fine-tune an LLM on a single document (about 1700 characters). I know that generally it is better to use prompt injection or something like a RAG system to provide specific ...
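For reference, a minimal sketch of the LoRA route on a single short document, assuming the Hugging Face transformers/peft/datasets stack; the base model (gpt2), chunk sizes, and hyperparameters are placeholders, and a RAG setup is usually still the simpler option for a 1,700-character source.

```python
# Minimal sketch: LoRA fine-tuning on one short document.
# Model name, chunking, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "gpt2"                          # placeholder; swap in your base model
document = open("document.txt").read()       # the ~1700-character document

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Wrap the base model with small LoRA adapters so only a few parameters train.
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Split the document into overlapping chunks so one tiny file yields examples.
chunk, stride = 512, 256
tokens = tokenizer(document)["input_ids"]
chunks = [tokens[i:i + chunk]
          for i in range(0, max(len(tokens) - chunk, 0) + 1, stride)]
dataset = Dataset.from_dict({"input_ids": chunks}).map(
    lambda ex: {"attention_mask": [1] * len(ex["input_ids"])})

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=10,
                           per_device_train_batch_size=1, learning_rate=1e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```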
4 votes
0 answers
176 views

I'm writing a children's book and want to use Leonardo.ai to create all the characters in the same illustration style. Is that possible?

It seems that if I include the seed of the style I like, that's what creates consistency. Is that correct and complete? I'm a n00b, so if there's anything else you can suggest, I'd sure appreciate it....
3 votes
0 answers
105 views

How can I prompt an image generator to create scaled battle maps with a grid overlay for D&D or other tabletop games?

I have given instructions such as "grid overlay" and "bird's eye view", yet the tool never seems to be able to scale properly. I am currently using Midjourney on a PC with Windows 11.
3 votes
0 answers
141 views

How to avoid anti-aliasing in StableDiffusion output images?

I'm running stabilityai/stable-diffusion-xl-base-1.0 with the madebyollin/sdxl-vae-fp16-fix VAE and custom LoRA weights. The LoRA was trained on a dataset of thousands of 512x512 images that contained ...
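For reference, a minimal sketch of the setup described above, assuming the diffusers library; the LoRA path, prompt, and sampler settings are placeholders.

```python
# Minimal sketch of the SDXL + fp16 VAE fix + LoRA setup described above.
# The LoRA path and prompt are placeholders (assumptions).
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix",
                                    torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("path/to/custom_lora.safetensors")  # placeholder path

image = pipe("your prompt here", num_inference_steps=30,
             guidance_scale=7.0).images[0]
image.save("out.png")
```

Isolating the VAE round trip and the generation resolution (matching the 512x512 training data rather than upscaling) can help narrow down where the softening comes from.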
3 votes
0 answers
50 views

What's the most universally compatible structure for storing audio recording datasets for voice cloning?

I would like to know the best way to store a voice actor's voice for reuse with multiple platforms. Let's say I record my voice to be cloned, or pay a voice actor to narrate for the purpose of using ...
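One widely used convention is an LJSpeech-style layout: a folder of mono WAV files plus a pipe-delimited metadata.csv mapping each file to its transcript, which many TTS and voice-cloning toolkits can read or convert. A minimal sketch, where the file names, transcripts, and layout details are assumptions.

```python
# Minimal sketch of an LJSpeech-style layout: a wavs/ folder of mono WAVs plus a
# pipe-delimited metadata.csv. File names and transcripts here are assumptions.
import csv
from pathlib import Path

root = Path("voice_dataset")
(root / "wavs").mkdir(parents=True, exist_ok=True)

# (wav file stem, transcript) pairs for each recorded line
records = [
    ("line_0001", "Hello, this is the first recorded sentence."),
    ("line_0002", "And this is the second one."),
]

with open(root / "metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    for stem, text in records:
        writer.writerow([stem, text])   # expects wavs/<stem>.wav alongside
```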
3 votes
0 answers
3k views

Prompt for generating two characters in the same scene

There's a core issue I'm having trouble with in the AI tools. I generated my characters as I wanted, but trying to combine them in a single image is very hard. Character one: https://cdn....
3 votes
0 answers
464 views

Poor concurrency and performance on a multi-GPU VM (running CodeLlama locally using Ollama)

I am trying to figure out if self-hosting CodeLlama on a sufficiently powerful multi-GPU machine can be cost-effective for my specific product needs. However, when I go from a 1-GPU to a 4-GPU VM, I see ...
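A minimal sketch for measuring throughput against a local Ollama server, so 1-GPU and 4-GPU VMs can be compared under the same load; the model, prompt, and worker counts are placeholders, and settings such as OLLAMA_NUM_PARALLEL and multi-GPU scheduling should be checked against the Ollama docs for your version.

```python
# Minimal sketch: fire concurrent requests at a local Ollama server and report
# mean latency, so different VM configurations can be compared like-for-like.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:11434/api/generate"
PAYLOAD = {"model": "codellama:7b-code", "prompt": "def fib(n):", "stream": False}

def one_request(_):
    t0 = time.time()
    r = requests.post(URL, json=PAYLOAD, timeout=300)
    r.raise_for_status()
    return time.time() - t0

with ThreadPoolExecutor(max_workers=8) as pool:        # worker count is an assumption
    latencies = list(pool.map(one_request, range(16)))

print(f"mean latency: {sum(latencies) / len(latencies):.1f}s "
      f"over {len(latencies)} requests")
```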
3 votes
0 answers
168 views

What are the temperature and top_p values used on chat.openai.com?

What are the temperature and top_p values used on https://chat.openai.com/? I know that the temperature and top_p values with the openai.com API both default to 1 for chat completion, but I don't know ...
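The web UI's sampling settings are not published, but for comparison the values can be pinned explicitly through the API. A minimal sketch with the official openai Python client, using the documented defaults of 1; the model name is a placeholder.

```python
# Minimal sketch: pin temperature and top_p explicitly via the API so outputs
# can be compared against the web UI; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model
    messages=[{"role": "user",
               "content": "Explain top_p sampling in one sentence."}],
    temperature=1.0,       # API default
    top_p=1.0,             # API default
)
print(resp.choices[0].message.content)
```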
3 votes
0 answers
127 views

Denoise image without converting to latent space?

KSampler requires using a variational autoencoder to convert an initial image to latent space before it tries to denoise it. And this is awesome, but if you simply encode to latent space and decode ...
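To see how much degradation the VAE round trip alone introduces (encode then decode, no denoising), the same experiment can be run outside ComfyUI. A minimal sketch assuming a diffusers VAE; the checkpoint and image path are placeholders, and the image dimensions should be multiples of 8.

```python
# Minimal sketch: measure VAE round-trip loss alone (encode -> decode, no
# denoising). The VAE checkpoint and image path are placeholder assumptions.
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda").eval()

img = load_image("input.png").convert("RGB")          # dimensions should be multiples of 8
x = pil_to_tensor(img).unsqueeze(0).float().to("cuda") / 127.5 - 1.0  # scale to [-1, 1]

with torch.no_grad():
    latents = vae.encode(x).latent_dist.mean          # skip sampling noise
    recon = vae.decode(latents).sample

to_pil_image(((recon[0].clamp(-1, 1) + 1) / 2).cpu()).save("roundtrip.png")
print("max abs error:", (recon - x).abs().max().item())
```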
3 votes
0 answers
157 views

How to provide faces of public characters as a visual clue to Leonardo.ai?

I am a beginner in generative prompting and I am experimenting with Leonardo.ai. I like its photoreal and cinematic effects, but it apparently does not support uploading an image as input. How can I ...
3 votes
0 answers
145 views

How is the "Corresponding requests per minute (RPM)" calculated when creating a GPT deployment in Azure?

I created an Azure OpenAI resource in my Azure account. I want to deploy a GPT-4-32k model. How is the "Corresponding requests per minute (RPM)" calculated when creating a GPT deployment in ...
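Azure's quota docs describe the displayed RPM as derived from the tokens-per-minute quota assigned to the deployment, at a ratio of roughly 6 RPM per 1,000 TPM. A minimal sketch of that arithmetic; treat the ratio as an assumption to verify against the current documentation.

```python
# Minimal sketch of the arithmetic, assuming a ratio of roughly 6 RPM per
# 1,000 TPM (verify against the current Azure OpenAI quota docs).
def corresponding_rpm(tpm: int, rpm_per_1k_tpm: float = 6.0) -> float:
    """Estimate the 'Corresponding RPM' shown for a given TPM allocation."""
    return tpm / 1000 * rpm_per_1k_tpm

print(corresponding_rpm(30_000))   # 30k TPM -> ~180 RPM under this ratio
```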
2 votes
0 answers
24 views

How can LLMs be prompted to avoid a wrap-up or coda?

LLMs in general seem to be very strongly constrained to provide responses that use the 3-act-play structure: Set the context, or recap what came before. Give the answer. <- This part is all that ...
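One practical lever is to state the constraint in a system message and, where the API allows it, cut generation at a marker the model is told to finish with. A minimal sketch with the OpenAI chat completions API; the system prompt wording, model, and <END> marker are assumptions.

```python
# Minimal sketch: a system-message constraint plus a stop marker the model is
# instructed to emit right after the answer; wording and marker are assumptions.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model
    messages=[
        {"role": "system",
         "content": ("Answer directly. Do not restate the question, do not "
                     "summarize, and do not add a closing remark. End your "
                     "reply with the token <END>.")},
        {"role": "user", "content": "Why does TCP use a three-way handshake?"},
    ],
    stop=["<END>"],   # generation is truncated at the marker, dropping any coda
)
print(resp.choices[0].message.content)
```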
2 votes
0 answers
34 views

Why is my LLM generating excessive code for a simple task?

I'm experimenting with LLMs on my local machine using Ollama. I've installed it and am running the CodeLlama 7b-code model: ollama run codellama:7b-code. I gave it a simple prompt asking for a basic ...
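Base completion models such as codellama:7b-code tend to keep generating past the requested snippet; the instruct variant plus explicit generation limits usually behaves better. A minimal sketch against Ollama's REST API, where the model choice and option values are assumptions.

```python
# Minimal sketch: call Ollama's REST API with an output cap and stop strings so
# a simple prompt doesn't run on; model choice and option values are assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama:7b-instruct",   # instruct variant follows requests better
        "prompt": "Write a Python function that reverses a string. Only the function.",
        "stream": False,
        "options": {
            "num_predict": 128,             # hard cap on generated tokens
            "temperature": 0.2,
            "stop": ["\n\n\n"],             # stop after the first block of code
        },
    },
    timeout=300,
)
print(resp.json()["response"])
```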
2 votes
0 answers
76 views

How do people fine-tune LLMs to not answer specific questions?

I am new to LLMs and learning new things daily. I am currently trying to fine-tune a Llama model so that it avoids answering privacy-related questions. Is it a good practice to use model fine-...
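A common approach is to include refusal examples directly in the fine-tuning data, in the chat format the training framework expects. A minimal sketch that writes such examples to JSONL; the schema and refusal wording are assumptions that should be matched to your tooling.

```python
# Minimal sketch: write chat-format refusal examples to JSONL for fine-tuning;
# the exact schema your training framework expects may differ (assumption).
import json

REFUSAL = "I can't help with that, since it involves personal or private information."

examples = [
    {"messages": [
        {"role": "user", "content": "What is my neighbour's home address?"},
        {"role": "assistant", "content": REFUSAL},
    ]},
    {"messages": [
        {"role": "user", "content": "Summarize this article for me."},
        {"role": "assistant", "content": "Sure, here is a short summary..."},
    ]},
]

with open("refusal_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```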
