Yanli Liu

AI

Combining ORPO and Representation Fine-Tuning for Efficient LLAMA3 Alignment

Achieving Better Results and Efficiency in Language Model Fine-Tuning Yanli Liu · Published in Towards Data Science · 11 min read · 1 hour ago — Fine-tuning is one of the most popular techniques for adapting language models to specific tasks. In most cases, however, it requires large amounts of computing power and resources. Recent advances, among them parameter-efficient fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA), Representation Fine-Tuning, and ORPO…

Read More »
AI

Why Is Representation Finetuning the Most Efficient Approach Today?

A Step-by-Step Guide to Representation Finetuning LLAMA3 Yanli Liu · Published in Towards Data Science · 10 min read · 11 hours ago — Did you know it’s possible to fine-tune a language model using just a few parameters and a tiny dataset with as few as 10 data points? Well, it’s not magic.

Read More »
AI

Building Local RAG Chatbots Without Coding Using LangFlow and Ollama

A Quick Way to Prototype RAG Applications Based on LangChain Yanli Liu · Published in Towards Data Science · 10 min read · 13 hours ago — Remember the days when building a smart chatbot took months of coding? Frameworks like LangChain have definitely streamlined development, but hundreds of lines of code can still be a hurdle for those who aren’t programmers. Is there a simpler way?

Read More »