31220b41b9
Support multiple GPT backends in gptel config
...
Enhanced the gptel configuration to support multiple backends (Ollama and OpenAI) and added a function to select the desired backend. The default backend is OpenAI.
2024-07-16 22:42:22 -04:00
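A minimal sketch of the kind of setup this commit describes, assuming gptel's `gptel-make-ollama` constructor and its built-in OpenAI backend; the variable names, host, and model list here are illustrative, not taken from the actual config:

```elisp
;; Hypothetical sketch: register an Ollama backend alongside the
;; built-in OpenAI backend and provide an interactive selector.
(defvar my/gptel-ollama-backend
  (gptel-make-ollama "Ollama"
    :host "localhost:11434"   ; assumed default Ollama port
    :stream t
    :models '(llama3)))       ; illustrative model

(defun my/gptel-select-backend (name)
  "Switch the active gptel backend to NAME."
  (interactive (list (completing-read "Backend: " '("OpenAI" "Ollama"))))
  (setq gptel-backend (if (string= name "Ollama")
                          my/gptel-ollama-backend
                        gptel--openai)))

;; Default to OpenAI, as the commit notes.
(setq gptel-backend gptel--openai)
```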
9360580a94
Add Forge diff-for-pr function for pull request creation
...
This commit adds a new function forge-diff-for-pr to generate a Git diff for use in pull request creation. The function is called automatically when creating a pull request using Forge, providing an easy way to view changes between the source and target branches.
2024-07-16 14:24:30 -04:00
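A rough sketch of what a helper like this might look like on top of Magit, which Forge builds on; the actual implementation in the commit may differ:

```elisp
;; Hypothetical sketch: diff the current branch against the PR's
;; target branch using Magit's range diff.
(defun forge-diff-for-pr (target)
  "Show a diff between TARGET and the current branch for PR creation."
  (interactive (list (magit-read-branch "Target branch")))
  (magit-diff-range
   (format "%s...%s" target (magit-get-current-branch))))
```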
3537134c23
Add Terraform console functionality to Emacs
...
This commit adds the ability to run a Terraform console at a specified path in a comint buffer, making it easier to interactively explore and manage Terraform state.
2024-07-16 14:24:16 -04:00
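The described behavior could be sketched with Emacs' built-in `make-comint`; the command name and prompt are assumptions, not the commit's actual code:

```elisp
;; Hypothetical sketch: run `terraform console' in a comint buffer
;; rooted at a user-chosen directory.
(defun my/terraform-console (dir)
  "Run `terraform console' in DIR inside a comint buffer."
  (interactive "DTerraform directory: ")
  (let ((default-directory dir))
    (pop-to-buffer
     (make-comint "terraform-console" "terraform" nil "console"))))
```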
be07e673f9
Update gptel-commit-message prompt
2024-07-16 14:23:02 -04:00
bcb8d468a6
Add function to generate commit messages with gptel
2024-07-15 16:16:11 -04:00
0b8be7b2f7
Add function to jump to tf plan output
2024-07-15 16:16:11 -04:00
8b75209521
Remove casual-calc
2024-07-13 09:27:45 -04:00
57203210b8
Source local zshrc if exists
2024-07-11 11:30:22 -04:00
e29116f2f6
Add utility function to render ansi colors in a file
2024-07-09 13:20:24 -04:00
f394365ff9
Replace pixel-scroll with ultra-scroll-mac
2024-07-09 13:20:13 -04:00
fbc12a5434
Add casual-calc
2024-07-09 10:42:36 -04:00
6a33416fad
Add "ask AI" mu4e action
2024-07-08 11:06:39 -04:00
8ac32f0955
Add posframe
2024-07-03 16:15:18 -04:00
e0f367faad
Add gptel-quick
2024-07-03 16:09:55 -04:00
ccaf60abbd
Add dbg macro
2024-07-03 15:21:40 -04:00
893a673616
Add gptel
2024-07-03 15:21:34 -04:00
9acc16c9aa
Add some utility AI replacement functions
2024-07-02 15:07:57 -04:00
11755d6201
Add spinner indicating when the AI is responding, and inhibit input during that time
2024-07-02 14:41:14 -04:00
211904a190
Fix terraform devdocs eldoc docs
2024-06-28 13:58:29 -04:00
f8c3f76cbd
Add ollama-copilot-mode to switch to a local LLM for copilot
2024-06-28 13:21:47 -04:00
c140740a32
When prompting about a whole buffer, don't put whole buffer in chat
2024-06-27 21:49:16 -05:00
78bc060018
Open a new Llama chat if you call llama-ask* without an existing chat
2024-06-18 16:00:09 -04:00
93bc223e5b
Fix plist header
2024-06-18 15:03:14 -04:00
b13cc823f5
Add witchcraft script injector
2024-06-18 15:02:50 -04:00
f3c529781c
Fix delete-file implementation
2024-06-18 12:35:05 -04:00
68955452a5
Add dape for debugging
2024-06-13 13:49:30 -04:00
7aea6199cd
Fix org-daily capture template
2024-06-13 13:49:23 -04:00
5f63c40b97
Use quit-window instead of bury-buffer
2024-06-13 13:49:10 -04:00
33cd6207f9
Enable evil-mode in the minibuffer
2024-06-06 16:04:19 -04:00
2960fc5ad1
Set up Avy
2024-06-06 11:49:48 -04:00
78dc31a1d9
Use pixel-scroll-precision-mode to scroll by pixel instead of by line
2024-06-06 11:49:31 -04:00
5362bb63e3
Don't let switch-to-buffer mess with window layout
2024-06-06 09:50:17 -04:00
f993466d81
Run rspec in spec root if found
2024-06-06 09:50:08 -04:00
5169285be0
Add comfy-ui command
2024-06-05 13:13:57 -04:00
f1698e3247
Remove eval, add filter mechanism
2024-05-29 14:36:26 -04:00
d36967e371
Minor tweaks
2024-05-24 01:03:44 -04:00
8db9cc7e8f
Make a version of llama-replace-in-region that does not have context
2024-05-23 22:59:43 -04:00
6191654511
Give the AI the ability to evaluate elisp
2024-05-23 18:31:13 -04:00
cdd674fdcd
Many more llama improvements
2024-05-23 13:28:24 -04:00
e91b1d6480
Make llama chat nicer
2024-05-23 10:49:23 -04:00
8ae6544709
Use correct straight-use-package recipe format for a local repo
2024-05-23 00:53:32 -04:00
cdd892d4e1
Switch out llm for llama
2024-05-23 00:53:18 -04:00
5faa99158d
Make the default llm chat aware that it's inside of Emacs
2024-05-21 11:01:13 -04:00
b231ec9327
Add new autoloaded commands to use-package declaration and set default model
2024-05-21 10:27:02 -04:00
3d39b0c3df
Fix llm output buffer sentinel formatting and add autoloads
2024-05-21 10:26:45 -04:00
df79eee678
Add llm-chat command and improve prompt code structure
2024-05-20 11:33:32 -04:00
c618c4d3ce
Use llama3 as default llm
2024-05-17 13:30:27 -04:00
3c09eb3cbd
Enable llm.el to work with ollama models
2024-05-17 13:30:09 -04:00
71c86332c5
Enable copilot in yaml- and json-ts-modes
2024-05-17 13:29:52 -04:00
87584357a9
Set some customizations for llm integration
2024-05-16 10:37:26 -04:00