ai:private-gpt
===== NGL settings patch =====
To set the number of layers loaded into the GPU for llamacpp, apply this NGL option patch, then add the "…" setting.
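The idea behind the patch is the same as the settings patches further down: read an override from the environment with a sane default. A minimal sketch of that lookup, assuming a hypothetical variable name "PGPT_NGL" (the exact setting name is truncated in this revision of the page):

<code python>
import os

def gpu_layers_from_env(default: int = 0) -> int:
    """Number of model layers llamacpp should offload to the GPU (-1 = all).

    "PGPT_NGL" is a hypothetical variable name used for illustration;
    it is not confirmed by this page.
    """
    raw = os.environ.get("PGPT_NGL", "")
    return int(raw) if raw.strip() else default

print(gpu_layers_from_env(20))  # prints 20 when PGPT_NGL is unset
</code>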

===== Max New Tokens / Context Size / Temperature settings patch =====

To be able to set Max New Tokens, Context Size and Temperature as variables in the docker compose file, the settings.yaml file needs to be adjusted.

docker compose file additions (the values shown are the upstream defaults):
<code yaml>
environment:
  PGPT_MAX_NEW_TOKENS: 512
  PGPT_CONTEXT_WINDOW: 3900
  PGPT_TEMPERATURE: 0.1
</code>

Note the quoted heredoc delimiter: without it, the shell would try to expand the ''${PGPT_...}'' placeholders while writing the patch file.
<code bash>
cat << 'EOD' > token-ctx-temp-settings-option.patch
diff --git a/settings.yaml b/settings.yaml
index e881a55..8666b86 100644
--- a/settings.yaml
+++ b/settings.yaml
@@ -37,10 +37,10 @@ ui:
 llm:
   mode: llamacpp
   # Should be matching the selected model
-  max_new_tokens: 512
-  context_window: 3900
+  max_new_tokens: ${PGPT_MAX_NEW_TOKENS:512}
+  context_window: ${PGPT_CONTEXT_WINDOW:3900}
   tokenizer: mistralai/Mistral-7B-Instruct-v0.2
-  temperature: 0.1
+  temperature: ${PGPT_TEMPERATURE:0.1}
 
 rag:
   similarity_top_k: 2
EOD

git apply token-ctx-temp-settings-option.patch
</code>

===== CSS Customisation =====

To adjust the main input box and to fix the input box wrapping to the right on mobile/low-height browser windows, some CSS trickery is required. The last three CSS lines are added to private_gpt/ui/ui.py (the surrounding lines are reconstructed from the upstream file; the values of the three added rules are truncated in this revision):
<code python private_gpt/ui/ui.py>
    def _build_ui_blocks(self) -> gr.Blocks:
        logger.debug("Creating the UI blocks")
        with gr.Blocks(
            title=UI_TAB_TITLE,
            theme=gr.themes.Soft(primary_hue=slate),
            css=".logo { "
            "display: flex;"
            "background-color: #C7BAFF;"
            "height: 80px;"
            "border-radius: 8px;"
            "align-content: center;"
            "justify-content: center;"
            "align-items: center;"
            "}"
            ".logo img { height: 100% }"
            ".contain { display: flex !important; flex-direction: column !important; }"
            "#component-0, #component-3, #component-10, #component-8 { height: 100% !important; }"
            "#chatbot { flex-grow: 1 !important; overflow: auto !important; }"
            "#col { height: calc(100vh - 112px - 16px) !important; }"
            "#… { … }"
            "#… { … }"
            "#col { min-height: … }",
        ) as blocks:
            with gr.Row():
</code>

ai/private-gpt · Last modified: 2024/05/01 17:33 by Wulf Rajek