
Ollama - Remote hosts #8234

Open · wants to merge 25 commits into base: dev

Conversation


@Fried-Squid Fried-Squid commented Sep 30, 2024

Background

Currently, AutoGPT only supports Ollama servers running locally. This is often not the case: the Ollama server may be running on a better-suited instance, such as a Jetson board. This PR adds an "Ollama host" input to all LLM blocks, allowing users to select the Ollama host for each block.

Changes 🏗️

  • Changes contained within blocks/llm.py:
    • Added an "Ollama host" input to all LLM blocks, using ollama.Client instances pointed at the selected host
    • Fixed incorrect parsing of the prompt when passing it to Ollama in the StructuredResponse block
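The approach can be sketched roughly as follows. This is a simplified illustration, not the PR's actual code: `normalize_ollama_host` and `call_ollama` are hypothetical helper names. What makes remote hosts possible is that the `ollama` Python library's `Client` accepts a `host` argument:

```python
# Simplified sketch of the PR's approach (hypothetical helper names): every
# LLM block gains an `ollama_host` input, defaulting to the old implicit
# localhost, and the value is handed to an ollama.Client.

DEFAULT_OLLAMA_HOST = "localhost:11434"

def normalize_ollama_host(host: str) -> str:
    """Ensure the host carries a scheme, since ollama.Client expects a URL-ish host."""
    if not host:
        return "http://" + DEFAULT_OLLAMA_HOST
    if not host.startswith(("http://", "https://")):
        return "http://" + host
    return host

def call_ollama(prompt: str, model: str, ollama_host: str = DEFAULT_OLLAMA_HOST) -> str:
    # Imported lazily so the sketch stays importable without the ollama package.
    import ollama

    client = ollama.Client(host=normalize_ollama_host(ollama_host))
    response = client.generate(model=model, prompt=prompt)
    return response["response"]
```

With per-block `Client` instances, two blocks in the same graph can even talk to different Ollama servers.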

Testing 🔍

Tested all LLM blocks with Ollama remote hosts as well as with the default localhost value.

Related issues

#8225

@Fried-Squid Fried-Squid requested a review from a team as a code owner September 30, 2024 21:19
@Fried-Squid Fried-Squid requested review from Torantulino and kcze and removed request for a team September 30, 2024 21:19
@CLAassistant

CLAassistant commented Sep 30, 2024

CLA assistant check
All committers have signed the CLA.


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 Security concerns

Potential remote code execution:
The addition of the ollama_host parameter allows users to specify arbitrary hosts for Ollama connections. If not properly validated and sanitized, this could potentially be exploited to connect to unintended hosts or execute arbitrary code. It's crucial to implement strict input validation for the ollama_host parameter to ensure only authorized and safe connections are allowed.

⚡ Recommended focus areas for review

Potential Security Risk
The ollama_host parameter is set with a default value of "localhost:11434". This could potentially allow users to connect to arbitrary hosts if not properly validated.

Error Handling
The changes introduce new network calls to potentially remote Ollama hosts, but there's no visible error handling for network-related issues.
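A minimal way to surface those failures cleanly (hypothetical wrapper; the PR may handle this differently) is to convert connection errors into a `(result, error)` pair the block can report instead of letting a traceback escape. Note the real ollama client raises httpx transport errors (e.g. `httpx.ConnectError`) and `ollama.ResponseError`; the stdlib exceptions below are stand-ins so the sketch stays dependency-free:

```python
# Hypothetical wrapper (not the PR's code): run an Ollama client call and map
# network failures to a readable (result, error) pair instead of a traceback.
def call_with_network_handling(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs), None
    except ConnectionError as e:
        return None, f"Could not reach Ollama host: {e}"
    except TimeoutError as e:
        return None, f"Ollama host timed out: {e}"
```

In the blocks themselves, the error string could then be yielded on the block's error output rather than raised.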

Code Duplication
The ollama_host parameter is added to multiple Input classes, which could lead to duplication and maintenance issues.
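That duplication could be reduced with a shared mixin declared once and inherited by every LLM block input. A sketch using plain dataclasses (AutoGPT's actual block schemas are pydantic-based, so the real fix would differ in detail, and these input class names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class OllamaHostMixin:
    """Shared input field, declared once and inherited by every LLM block input."""
    ollama_host: str = field(default="localhost:11434")

@dataclass
class StructuredResponseInput(OllamaHostMixin):
    prompt: str = ""

@dataclass
class TextSummarizerInput(OllamaHostMixin):
    text: str = ""
```

A default or field-description change then lands in one place instead of in every Input class.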


netlify bot commented Sep 30, 2024

Deploy Preview for auto-gpt-docs canceled.

🔨 Latest commit ea3716a
🔍 Latest deploy log https://app.netlify.com/sites/auto-gpt-docs/deploys/67377cca4407da0008b0fd8b

@Bentlybro
Member

Bentlybro commented Sep 30, 2024

This is a super nice change and it's much needed 🙏 once CI tests pass it should be good to go!

@Bentlybro Bentlybro self-assigned this Sep 30, 2024
@Fried-Squid
Author

CI/CD all passing now 😄
Just forgot to run the formatter and change a type lol

@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Oct 10, 2024
Contributor

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@ntindle
Member

ntindle commented Oct 10, 2024

@Fried-Squid can I get that CLA signed so we can work towards merging this?

@Fried-Squid
Author

Fried-Squid commented Oct 12, 2024 via email

@ntindle
Member

ntindle commented Oct 12, 2024

Somewhere you’re developing probably has an old git config that contains those emails as a default. For example, GitHub.com uses one of them when you update from master, or your dev machine has its git config email set to an old address.

You can reopen the PR with the correct emails after fixing this locally, or maybe try force-pushing over it, but I’m not positive that force-pushing to your branch will work.

@Fried-Squid Fried-Squid requested a review from a team as a code owner October 14, 2024 10:56
@github-actions github-actions bot added platform/frontend AutoGPT Platform - Front end size/xl and removed size/m labels Oct 14, 2024
Fried-Squid and others added 11 commits October 14, 2024 12:06
…cant-Gravitas#8219)

Refactor iteration block to support iterating over dictionaries and to return individual list items.
…ebsiteContentBlock (Significant-Gravitas#8228)

Refactor search.py: Add option for raw content scraping in ExtractWebsiteContentBlock
Automatically apply the `platform/blocks` label to PRs that change files in `backend/blocks/`
…wnload agent button (Significant-Gravitas#8196)

* feat(frontend): push to cloud if needed for marketplace

* fix(market): missing envar in the example 😠

* feat(frontend): download button functions

* feat(frontend): styling and linting

* feat(frontend): move to popup

* feat(frontend): style fixes and link replacement

* feat(infra): add variables

* fix(frontend): merge

* fix(frontend): linting

* feat(frontend): pr changes

* Update NavBar.tsx

* fix(frontend): linting

---------

Co-authored-by: Zamil Majdy <[email protected]>
Co-authored-by: Aarushi <[email protected]>
@Fried-Squid
Author

@ntindle Sorted it out, for future reference you can just rebase the commits with --reset-author and force push over 😄
Just need the conflicts sorted, is that a you thing or a me thing? Kinda new to the open source stuff lol

@aarushik93 aarushik93 changed the base branch from master to dev October 14, 2024 15:27
@3eggert

3eggert commented Oct 26, 2024

Hi,
can I just use:
https://github.com/Fried-Squid/AutoGPT-Fork-01/tree/ollama_remote
until it is merged?
Do I need to know something before I try it out?
Are any special configs needed?

@Fried-Squid
Author

@3eggert

Go ahead and use this branch as usual; I'll try to keep it up to date until it's merged.

@3eggert

3eggert commented Nov 4, 2024

@Fried-Squid I get a:

"Error calling LLM: Connection error."

My Ollama host and port seem right; the hostname is resolved and the Ollama API can be reached from everywhere in my network via:

curl -v autogpt:11434/api/version

(autogpt is the hostname)

Any ideas?

@Fried-Squid
Author

@3eggert

Could be an issue with your networking - how are you running AutoGPT?

Can you also try the IP instead of the hostname? It could be DNS...
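A quick way to split DNS problems from connectivity problems is a small diagnostic like the following (a hypothetical helper, not part of the PR; `check_ollama_host` and its messages are made up for illustration):

```python
import socket

def check_ollama_host(host: str, port: int = 11434) -> str:
    """Resolve the hostname, then attempt a TCP connection to the Ollama port.

    Hypothetical diagnostic helper: separates "DNS is broken" from
    "DNS is fine but the port is unreachable".
    """
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror as e:
        return f"DNS resolution failed for {host!r}: {e}"
    try:
        with socket.create_connection((ip, port), timeout=3):
            return f"{host} resolved to {ip}; port {port} is reachable"
    except OSError as e:
        return f"{host} resolved to {ip}, but port {port} is unreachable: {e}"
```

If the host resolves but the port is unreachable, the usual suspect is Ollama binding only to 127.0.0.1 rather than 0.0.0.0, or a container network that can't see the host machine.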

Labels
conflicts · platform/backend · platform/blocks · platform/frontend · Review effort [1-5]: 2 · size/xl

Projects
Status: 🆕 Needs initial review

9 participants