Farid S

AI

3 stories

An overview of the RAG pipeline. For document storage: input documents -> text chunks -> encoder model -> vector database. For LLM prompting: user question -> encoder model -> vector database -> top-k relevant chunks -> generator LLM. The LLM then answers the question using the retrieved context.
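
As a concrete illustration of both stages in that diagram, here is a minimal Python sketch. The sentence-transformers encoder, the "all-MiniLM-L6-v2" model name, the sample documents, and the question are assumptions for demonstration only; a plain numpy array stands in for a real vector database, and the final generator-LLM call is left as a printed prompt, since the diagram does not fix a specific LLM API.

```python
# Minimal RAG sketch (assumptions: sentence-transformers installed,
# a numpy array as a toy stand-in for a real vector database).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # encoder model

# --- Document storage: input documents -> chunks -> embeddings -> "vector DB" ---
documents = [
    "RAG retrieves relevant chunks before generation.",
    "A vector database stores embeddings for similarity search.",
    "The generator LLM answers using the retrieved context.",
]
# Each sample document is short enough to be its own chunk;
# real pipelines split long documents into overlapping chunks.
chunks = documents
index = encoder.encode(chunks, normalize_embeddings=True)  # shape (n_chunks, dim)

# --- LLM prompting: question -> embedding -> top-k chunks -> prompt ---
def retrieve(question: str, k: int = 2) -> list[str]:
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = index @ q                    # cosine similarity (vectors are normalized)
    top_k = np.argsort(scores)[::-1][:k]  # indices of the k highest-scoring chunks
    return [chunks[i] for i in top_k]

question = "How does a RAG system answer a question?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a real pipeline, this prompt is sent to the generator LLM
```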
Python

8 stories

Blockchain

1 story

DevOps

1 story

Farid S

Back End Developer, Data Engineering, Web Scraper, Data Extraction, Python and PHP Specialist. Check https://faridsanusi.com