The model is the product.
Building at the intersection of local AI, edge deployment, and developer tooling. By Deepak Banswan - LinkedIn
Projects
Labelytics
A mobile app to read and understand any product's ingredient label.
Voice Print
Transcribe audio files locally, with privacy and speed.
OneSupport
An AI-based semantic search engine for support content.
LLMExpress (Discontinued)
Run any LLM locally with a single script.
Writing
seven skills for proficiency in developing agents
It's not just prompt engineering.
please return only json
The simple art of getting only JSON back from LLM APIs.
serve huggingface models locally over http
A guide to serving Hugging Face models locally over HTTP using the Transformers library.
workflow design for creating generative ai solutions
Building generative AI solutions is popular right now, but it can quickly become overwhelming without a well-defined workflow agreed upon by all stakeholders.
zero shot vs few shot prompting
Getting prompts just right is a crucial, iterative process for achieving optimal results from an LLM. In this article we discuss the difference between zero-shot and few-shot prompting and how to use each effectively.
how to avoid prompt injection in your ai models
How to avoid prompt injection in your AI models