LLMExpress

Run any model locally with a single script. That is the whole idea behind LLMExpress. Good tools already exist for similar purposes, such as Hugging Face Transformers and Ollama, the latter being the easiest to use. llama.cpp also ships a similar interface you can use to run and chat with any model:

llama-cli -m your_model.gguf -p "You are a helpful assistant" -cnv

# Output:
# > hi, who are you?
# Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
#
# > what is 1+1?
# Easy peasy! The answer to 1+1 is... 2!

What & How

Here's what the script should do when you run it:

  • check if git is installed and install it if not.
  • ship with a default model and a default prompt.
  • when given a model link and a prompt, download the model and the tokenizer.
  • before running a model, check whether the model and tokenizer are already downloaded; download them if not.
  • present a list of models to choose from when multiple models are downloaded.
  • offer an option to expose the model through an HTTP server endpoint so other applications can use it as well.
  • maybe an option to run the same prompt against multiple models.
  • show the models currently available on Hugging Face (via an API call, if possible).
  • support basic CRUD operations on the downloaded models.
  • AND all of this should happen in a single script that is open source, customizable, and easy to use.
  • it is a shell script for now, but a PowerShell version may follow for Windows users. I am guessing you can also run it with Git Bash on Windows.
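A few of the steps above can be sketched in POSIX shell roughly like this. This is an illustrative sketch only; the function and variable names (`ensure_git`, `fetch_model`, `MODELS_DIR`, and friends) are assumptions, not the actual names used in LLMExpress:

```shell
#!/usr/bin/env sh
# Sketch of the bootstrap and model-management steps; names/paths are illustrative.

MODELS_DIR="${MODELS_DIR:-$HOME/.llmexpress/models}"

# Check if git is installed; a package-manager install would hook in here.
ensure_git() {
  if ! command -v git >/dev/null 2>&1; then
    echo "git not found; package-manager detection (apt/brew/dnf) would go here"
  fi
}

# Download a model file only if it is not already cached locally.
fetch_model() {
  url="$1"
  dest="$MODELS_DIR/$(basename "$url")"
  mkdir -p "$MODELS_DIR"
  if [ ! -f "$dest" ]; then
    curl -fL -o "$dest" "$url"
  fi
  echo "$dest"
}

# List downloaded models (the "read" half of the CRUD operations).
list_models() {
  ls "$MODELS_DIR" 2>/dev/null
}

# Delete a downloaded model by filename (the "delete" half).
remove_model() {
  rm -f "$MODELS_DIR/$1"
}
```

The cache check in `fetch_model` is what makes repeated runs cheap: a second invocation with the same URL skips the download entirely and just echoes the cached path.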

Currently it can do most of the things mentioned above; the rest is in progress.

Once finished, the script will be available on GitHub at llmexpress. Feel free to reach out to me on Twitter or LinkedIn with any suggestions or feedback.