
Leveraging Large Language Models for Recommender Tasks

This project explores the application of lightweight Large Language Models (LLMs) to recommender systems through four tasks. Each task focuses on a different aspect of implementing and optimizing LLMs for recommendation scenarios.

Project Overview

The project is divided into four main tasks:

  1. Task 1: Lightweight LLMs for Recommenders - Performance evaluation and testing of lightweight LLMs
  2. Task 2: Fine-Tuning Strategies for Lightweight LLMs in Recommender Systems - Exploration of fine-tuning methodologies
  3. Task 3: Fine-Tuning Tiny Llama on Yambda Dataset - Advanced fine-tuning with SFT and DPO techniques
  4. Task 4: Fine-Tuning Tiny Llama on Declic Events Dataset - Contextual event recommendations with multi-dimensional awareness

Model Deployment & Usage

The final trained model from Task 4 is deployed and accessible for inference through multiple channels:

Docker Deployment

Pre-built Docker Image: yasouimo14/declic_tinyllama_model:latest

docker pull yasouimo14/declic_tinyllama_model:latest
docker run -p 8000:8000 yasouimo14/declic_tinyllama_model:latest
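Once the container is running, the model can be queried over HTTP on port 8000. The sketch below shows one way a client might do this; the route (`/generate`) and the request/response field names are assumptions, so check the image's API documentation for the actual contract.

```python
# Minimal client sketch for the containerized model on port 8000.
# NOTE: the "/generate" route and the JSON field names are assumptions,
# not documented parts of the image's API.
import json
import urllib.request


def build_payload(prompt, location=None, time_of_day=None, mood=None):
    """Assemble a JSON request body. The context keys mirror the model's
    location/time/mood awareness, but their exact names are hypothetical."""
    body = {"prompt": prompt}
    context = {
        k: v
        for k, v in {"location": location, "time": time_of_day, "mood": mood}.items()
        if v is not None
    }
    if context:
        body["context"] = context
    return body


def recommend(prompt, base_url="http://localhost:8000", **context):
    """POST the prompt (plus optional context) to the served model."""
    data = json.dumps(build_payload(prompt, **context)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/generate",  # hypothetical endpoint
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The `context` dictionary is only attached when at least one of the optional fields is set, so plain prompts produce a minimal request body.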

Ollama Integration

Convert to GGUF format using llama.cpp:

# Clone llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

# Convert the Hugging Face model to GGUF
# (recent llama.cpp versions ship convert_hf_to_gguf.py in place of convert.py)
python convert_hf_to_gguf.py ./final_model --outfile ./model.gguf --outtype f16

# Add to Ollama
ollama create declic_model -f ./Modelfile
ollama run declic_model
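The `ollama create` step above expects a Modelfile next to the converted GGUF file. A minimal example is shown below; the `FROM` line points at the converted model, while the sampling parameter and system prompt are illustrative assumptions rather than the project's actual configuration:

```
FROM ./model.gguf
PARAMETER temperature 0.7
SYSTEM "You are Declic, a context-aware event recommender. Consider the user's location, time of day, and mood when suggesting events."
```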

Usage: The deployed model provides contextual event recommendations with 91.31% accuracy, supporting location-, time-, and mood-aware queries.

Declic Context-Aware Event Recommendation Interface

Technologies Used

  • Python
  • Jupyter Notebooks
  • Hugging Face Transformers
  • Fine-tuning techniques (SFT, DPO)
  • T5-enhanced dataset generation
  • Various lightweight LLM architectures

Getting Started

Each task folder contains detailed documentation and implementation files. Start with Task 1 for baseline performance evaluation, then proceed through Tasks 2 and 3 for advanced fine-tuning strategies. Task 4 demonstrates contextual recommendation systems with multi-dimensional awareness including location, time, and mood preferences.
