In this primer, we will provide an overview of the LocalAI platform and get our hands dirty using both the command-line and the Python SDK.
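As a quick taste before you read the article, here is a minimal sketch of my own (not from the article) of talking to LocalAI from Python. Since LocalAI exposes an OpenAI-compatible API, the standard openai client can be pointed at it; the port and model name below are assumptions for illustration.

```python
# Minimal sketch: query a locally running LocalAI server via its
# OpenAI-compatible endpoint. The base_url and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed default LocalAI address
    api_key="not-needed",                 # LocalAI does not require a real API key
)

response = client.chat.completions.create(
    model="llama-3.2-1b-instruct",        # placeholder for whatever model is loaded
    messages=[{"role": "user", "content": "Say hello from LocalAI"}],
)
print(response.choices[0].message.content)
```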
Here is the link to the article on LocalAI:
Enjoy !!!
In this article, we will understand the basics of model quantization, which allows one to shrink the size of any pre-trained LLM.
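As a small, hedged illustration (not from the article itself), the snippet below shows one common way to apply quantization in practice: loading a pre-trained causal LLM in 4-bit precision with Hugging Face transformers and bitsandbytes. The model id is just an example, and a CUDA GPU is assumed.

```python
# Minimal sketch: load a pre-trained LLM with 4-bit quantization to shrink its
# memory footprint (requires a CUDA GPU plus the bitsandbytes and accelerate packages).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-350m"  # example checkpoint; any causal LM id works

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit (NF4) format
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # place the quantized weights on the GPU
)
print(model.get_memory_footprint())         # rough size of the quantized model in bytes
```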
Here is the link to the article on Quantization:
Understanding Model Quantization
Enjoy !!!
In this primer, we will provide an overview of the popular LLM framework LangChain and get our hands dirty with some code samples covering its core components.
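To give a flavour of those components, here is a minimal sketch of my own (not from the article) wiring a prompt template, a chat model, and an output parser into a chain. It assumes a recent LangChain release with the pipe-style (LCEL) composition and the langchain-openai integration, plus an OPENAI_API_KEY in the environment; the package layout has shifted across versions.

```python
# Minimal sketch of three LangChain core components composed into a chain:
# prompt template -> chat model -> output parser.
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI   # needs OPENAI_API_KEY set

prompt = PromptTemplate.from_template("Explain {topic} in one sentence.")
llm = ChatOpenAI(model="gpt-4o-mini")     # example model choice
chain = prompt | llm | StrOutputParser()  # LCEL: pipe the pieces together

print(chain.invoke({"topic": "vector databases"}))
```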
Here is the link to the article on LangChain:
Enjoy !!!
In this primer, we will provide an overview of the Ollama platform and get our hands dirty using both the command-line and the Open WebUI.
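Alongside the command-line and the Open WebUI, Ollama also serves a small REST API, and the sketch below (mine, not from the article) calls it from Python. It assumes Ollama is running on its default port 11434 and that the named model has already been pulled.

```python
# Minimal sketch: ask a locally running Ollama server for a completion
# through its REST API (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                # assumed model name, pulled beforehand
        "prompt": "Why is the sky blue?",
        "stream": False,                  # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```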
Here is the link to the article on Ollama:
Enjoy !!!
In this primer, we will provide an overview of the Hugging Face platform and get our hands dirty with some code samples for the various text processing tasks.
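As a tiny preview (not from the article), the snippet below uses the transformers pipeline API for two such text processing tasks; the summarization checkpoint is just one example, and the models are downloaded on first use.

```python
# Minimal sketch: two text processing tasks via the transformers pipeline API.
from transformers import pipeline

# Sentiment analysis with the default checkpoint for the task
sentiment = pipeline("sentiment-analysis")
print(sentiment("Hugging Face makes NLP easy!"))

# Summarization with an example checkpoint
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
text = ("The Hugging Face Hub hosts thousands of pre-trained models covering text, "
        "vision, and audio tasks, and the pipeline API wraps them behind a single, "
        "task-oriented interface so you can run inference in a couple of lines.")
print(summarizer(text, max_length=30, min_length=5))
```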
Here is the link to the article on Hugging Face:
Enjoy !!!
Posted my next article on LinkedIn and here is the link:
Unpacking the Mystery behind Deep Learning !!!
Hope it is useful for others !!!
In this part of the Deep Learning series, we will dive deep into the different blocks of the Transformer model and unravel them to get a better grasp of their internals.
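As a small companion to that article (my own sketch, not its code), here is the scaled dot-product attention that sits at the heart of every Transformer block, written directly with PyTorch tensor operations.

```python
# Minimal sketch of scaled dot-product attention, the core of a Transformer block.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # query-key similarities
    weights = torch.softmax(scores, dim=-1)                    # attention weights per query
    return weights @ v                                         # weighted mix of the values

q = k = v = torch.randn(2, 5, 16)   # toy batch: 2 sequences, 5 tokens, 16-dim vectors
out = scaled_dot_product_attention(q, k, v)
print(out.shape)                    # torch.Size([2, 5, 16])
```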
Here is the link to the article on understanding Transformers:
Deep Learning – Understanding the Transformer Models
Enjoy !!!
In this part of the Deep Learning series, we will introduce the concept of the Sequence-to-Sequence (also known as the Encoder-Decoder) model in the context of a Language Translation use-case. In addition, we will get our hands dirty by demonstrating the translation of English sentences into Spanish. To accomplish this task, we will implement a GRU model using PyTorch.
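To show the shape of such a model (this is my own minimal sketch, not the article's implementation), here is a bare-bones GRU encoder-decoder in PyTorch; the vocabulary sizes, dimensions, and random token ids are placeholders.

```python
# Minimal sketch of a sequence-to-sequence (encoder-decoder) model built from GRUs.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                     # src: (batch, src_len)
        _, hidden = self.gru(self.embed(src))   # hidden: (1, batch, hid_dim)
        return hidden                           # fixed-size summary of the source sentence

class Decoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt, hidden):             # tgt: (batch, tgt_len)
        output, hidden = self.gru(self.embed(tgt), hidden)
        return self.out(output), hidden         # logits over the target vocabulary

encoder, decoder = Encoder(src_vocab=1000), Decoder(tgt_vocab=1200)
src = torch.randint(0, 1000, (4, 7))            # 4 "English" sentences of 7 token ids
tgt = torch.randint(0, 1200, (4, 9))            # 4 "Spanish" sentences of 9 token ids
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)                             # torch.Size([4, 9, 1200])
```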
Here is the link to the article on Deep Learning demonstrating the English-to-Spanish Language Translation using GRU:
Deep Learning – Sequence-to-Sequence Model
Enjoy !!!
In this part of the Deep Learning series, we will explain how a Gated Recurrent Unit (GRU) network works and use PyTorch for a practical demonstration to predict the Next Word.
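As a quick, hedged preview (not the article's code), here is a tiny GRU language model in PyTorch that scores the next word given a prefix of word indices; the vocabulary size and dimensions are placeholders.

```python
# Minimal sketch: a tiny GRU model that predicts the next word from a prefix.
import torch
import torch.nn as nn

class NextWordGRU(nn.Module):
    def __init__(self, vocab_size=500, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):               # tokens: (batch, seq_len)
        output, _ = self.gru(self.embed(tokens))
        return self.head(output[:, -1, :])   # logits for the word after the last position

model = NextWordGRU()
prefix = torch.randint(0, 500, (1, 6))       # one sequence of 6 word indices
next_word_id = model(prefix).argmax(dim=-1)
print(next_word_id)                          # index of the most likely next word
```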
Here is the link to the article on Deep Learning using GRU:
Deep Learning – Gated Recurrent Unit
Enjoy !!!
In this article, we will provide a high-level overview of the Milvus Vector Database, explain its architectural components, install and set up the required software, and finally get our hands dirty with a simple yet powerful search example.
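For a feel of what that search example looks like, here is a minimal sketch of my own (not the article's code), assuming a recent pymilvus with the MilvusClient interface and Milvus Lite for local, file-backed storage; the collection name and toy vectors are made up.

```python
# Minimal sketch: create a collection, insert toy vectors, and run a similarity
# search with pymilvus' MilvusClient against a local Milvus Lite database file.
from pymilvus import MilvusClient

client = MilvusClient("milvus_demo.db")                 # local Milvus Lite file
client.create_collection(collection_name="demo", dimension=4)

client.insert(
    collection_name="demo",
    data=[
        {"id": 1, "vector": [0.1, 0.2, 0.3, 0.4]},
        {"id": 2, "vector": [0.9, 0.8, 0.7, 0.6]},
    ],
)

hits = client.search(collection_name="demo", data=[[0.1, 0.2, 0.25, 0.4]], limit=1)
print(hits)  # nearest neighbour of the query vector
```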
Here is the link to the article on Milvus Vector Database:
Hands-On with Milvus Vector Database
Enjoy !!!