
Guia Completo: Como Instalar Qualquer LLM Localmente Usando Open WebUI (Ollama) de Forma Super Fácil!


How to Install Any LLM Locally: A Reflection on Its Use in Public Service

The recent popularization of Large Language Models (LLMs) has sparked debate about how these technologies can be integrated into public-service practice. Installing an LLM locally, using tools such as Open WebUI together with Ollama, is a step that can ease the adoption of this innovation. The process, billed as "SUPER EASY", suggests that access to these tools is not restricted to specialists, which democratizes their use.
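The local setup described here usually comes down to two steps: install Ollama, then run Open WebUI in front of it. A minimal sketch, assuming Docker is already installed and using the default image name and ports from the projects' documentation (verify against the current docs for your version; `llama3` is just an example model):

```shell
# 1. Install Ollama and pull a model to run locally:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3

# 2. Run Open WebUI in Docker, pointing it at the host's Ollama:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# 3. Open http://localhost:3000 in a browser and pick the model.
```

The named volume keeps chats and settings across container restarts, and the `--add-host` flag lets the container reach the Ollama server running on the host.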

By deploying an LLM in a public-service context, we can improve communication between citizens and institutions. Imagine a platform that automatically answers frequently asked questions or helps triage service requests. This not only optimizes resources but also strengthens the transparency and efficiency of the services offered.

Moreover, any reflection on how to use LLMs must include ethical considerations. When installing and deploying these tools, public servants should weigh not only efficiency but also the impact they will have on society. Promoting education and training so that civil servants use artificial intelligence responsibly can raise the standard of the services provided, ensuring the technology serves the common good.

Therefore, when considering bringing LLMs into our routine, it is worth thinking about how these tools can help build a more agile and accessible public service, without losing sight of responsibility and ethics in their use.

Credits to the Source

Learn all about automation with n8n, Typebot, Google Workspace, AI, ChatGPT, and other tools that are indispensable today for boosting your productivity and efficiency.

Let's master the space of the new professionals of the future together!

#InstallLLM #Locally #OpenWebUI #Ollama #SuperEasy

45 Comments on this post

  1. 💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!

    📆 Book a 1-On-1 Consulting Call With Me: https://calendly.com/worldzofai/ai-consulting-call-1

    🔥 Become a Patron (Private Discord): https://patreon.com/WorldofAi

    📖 Want to Hire Me For AI Projects? Fill Out This Form: https://td730kenue7.typeform.com/to/WndMD5l7

    🚨 Subscribe to my NEW Channel! https://www.youtube.com/@worldzofcrypto

    🧠 Follow me on Twitter: https://twitter.com/intheworldofai

    Love y'all and have an amazing day, fellas. ☕ To help and support me, buy a coffee or donate to support the channel: https://ko-fi.com/worldofai – Thank you so much, guys!

  2. Is it really local? I installed it and it asks me to sign in to Open WebUI, and when I disconnect from the internet it says I cannot use the interface. Is there a way I could run it offline and without logging in?

  3. Thank you so much. I did what you said and it works! I have a few questions. Can you make a video on how to get an API key for our local LLMs, so we can put them into VR, games, apps, etc.? I see where Open WebUI has "generate secret API key". Could you also make a video on how to get Ollama to run on your GPU instead of your CPU? Thanks. I would love it if you showed us all the awesome UI options on Pinokio, like the pros and cons; all I see is where you can download different ones, and people love details and seeing them in action. Here is another video idea for you: AI Town and other AI games on Pinokio, or using the setup in this video, and the cool things this setup offers. I couldn't get Docker onto my computer, then I found this video; it was what I needed. People like me who are new to this would love to see more content on the things I mentioned. 🌟

  4. Please help. I already installed Ollama, Docker, and then the WebUI, and successfully used localhost:3000 to run Ollama via the WebUI. But the next day, when I tried to access localhost:3000, it stopped with the error "localhost refused to connect, ERR_CONNECTION_REFUSED". What should I do?

  5. I had to get about 10 GB worth of useless junk while installing a tiny Open WebUI app. Why on earth do I need Visual Studio Build Tools for it?

    Why do I need all this junk?
    Installing collected packages: win-unicode-console, red-black-tree-mod, pyxlsb, pywin32, pytz, pyreadline3,
    pypika, pydub, pyclipper, peewee, passlib, mpmath, monotonic, mmh3, flatbuffers, filetype, fake-useragent,
    ebcdic, easygui, docx2txt, compressed-rtf, zipp, XlsxWriter, xlrd, wrapt, websockets, websocket-client,
    validators, urllib3, uritemplate, ujson, tzdata, typing-extensions, tomli, threadpoolctl, tenacity,
    tabulate, sympy, soupsieve, sniffio, six, shellingham, safetensors, regex, rapidfuzz, pyyaml, pytube,
    python-multipart, python-magic, python-iso639, python-dotenv, pyproject_hooks, pyparsing, pypandoc,
    PyMySQL, PyJWT, pygments, pycparser, pyasn1, psycopg2-binary, psutil, protobuf, primp, pluggy,
    platformdirs, Pillow, pathspec, packaging, overrides, orjson, ordered-set, opentelemetry-util-http,
    olefile, oauthlib, numpy, networkx, nest-asyncio, mypy-extensions, multidict, mdurl, MarkupSafe,
    Markdown, lxml, lark, jsonpointer, jsonpath-python, joblib, jmespath, jiter, itsdangerous, iniconfig,
    importlib-resources, idna, humanfriendly, httptools, h11, grpcio, greenlet, fsspec, frozenlist,
    fonttools, filelock, exceptiongroup, et-xmlfile, dnspython, distro, defusedxml, colorclass, colorama,
    charset-normalizer, chardet, certifi, cachetools, blinker, bidict, bcrypt, backoff, av, attrs,
    async-timeout, annotated-types, aiohappyeyeballs, yarl, wsproto, Werkzeug, tzlocal, typing-inspect,
    tqdm, sqlalchemy, Shapely, scipy, rsa, requests, redis, rank-bm25, python-pptx, python-dateutil,
    pytest, pypdf, pymongo, pydantic-core, pyasn1-modules, proto-plus, opentelemetry-proto, openpyxl,
    opencv-python-headless, opencv-python, marshmallow, markdown-it-py, Mako, langdetect, jsonpatch,
    jinja2, importlib-metadata, httplib2, httpcore, googleapis-common-protos, fpdf2, emoji,
    email_validator, ecdsa, deprecated, deepdiff, ctranslate2, coloredlogs, click, chroma-hnswlib, cffi,
    build, beautifulsoup4, asgiref, anyio, aiosignal, youtube-transcript-api, watchfiles, uvicorn, torch,
    tiktoken, starlette, simple-websocket, scikit-learn, rich, requests-toolbelt, requests-oauthlib,
    python-jose, pytest-docker, pydantic, posthog, peewee-migrate, pandas,
    opentelemetry-exporter-otlp-proto-common, opentelemetry-api, onnxruntime, nltk, huggingface-hub,
    httpx, grpcio-status, google-auth, Flask, duckduckgo-search, docker, dataclasses-json, cryptography,
    botocore, black, argon2-cffi-bindings, APScheduler, alembic, aiohttp, unstructured-client, typer,
    tokenizers, s3transfer, rapidocr-onnxruntime, python-engineio, opentelemetry-semantic-conventions,
    opentelemetry-instrumentation, openai, msoffcrypto-tool, langsmith, langfuse, kubernetes,
    google-auth-httplib2, google-api-core, Flask-Cors, authlib, argon2-cffi, unstructured, transformers,
    python-socketio, opentelemetry-sdk, opentelemetry-instrumentation-asgi, langchain-core,
    google-api-python-client, faster-whisper, fastapi-cli, boto3, anthropic, sentence-transformers,
    opentelemetry-instrumentation-fastapi, opentelemetry-exporter-otlp-proto-grpc,
    langchain-text-splitters, google-ai-generativelanguage, fastapi, langchain, google

  6. How do you actually configure the "group based" workflow? As you said towards the end, you can basically create a workspace for multiple people to have access. Is this local? Is it via VPN or the internet? If my friend in a different city wants to work with me on a project, how can that be done?

  7. Maybe it's a different version now; my interface looks different and does not connect to my already-running Ollama. No longer super easy. I'll see if I can figure out what changed. I followed the Open WebUI directions, but it's not working yet.

  8. It will not run on my Apple silicon. I cleared the cache, gave it full disk access in Security, and still get this error: "ENOENT: no such file or directory, stat '/Users/keith/pinokio/api/open-webui.git/{{input.event[0]}". Any help would be great.

  9. Is there a way to just set it up on a website? Say you have your own fully running web server: can you install it that way, without any third-party apps? For example, just like running Ollama locally, but hosted on a website?

  10. Thanks for the video. Am I missing something? I can access the UI if I go to localhost:8000 on the PC it is running on, but on that same PC I can't access it if I type my actual IP with the port. I thought maybe I needed to open port 8000 on my PC, but that didn't help. I'm trying to access it from different computers in my house; I have tried http://192.168.1.45:8000 and nothing comes up. Any help would be greatly appreciated. I am sure I am missing something simple.

  11. Please show a possible approach on Android devices that have a high amount of RAM. There are one or two apps that promise this, and they have pretty complex configuration. If you can cover that platform, I will be extremely grateful.
